[solved] Seeking recommendation on how to best deal with an aspect of my multithreading

I am generating a random mesh with enemies as my “player” moves along (something like a linear sidescroller, but with 3D aspects). It is all working perfectly. I have now reached a point where I must choose how best to handle an aspect of my threads.

As of now, I submit my mesh-generation future to the executor from the update loop of my “player state”, and when it is done it attaches to my world (it does this in small chunks and works very smoothly).

I was wondering what the best way is to limit my future submissions so that my threads don’t become “backed up”. Currently the mesh request is submitted whenever the future is null and the game is running above a certain FPS. This FPS “fix” is just a lazy, temporary method I am using while testing (though it is rather effective at keeping everything in check). Does anyone have a recommendation on how best to limit submissions of my futures? Is my current “fix” fine? Should I limit my mesh generation to submitting based on player location? That makes the most sense to me, but I thought I should consult experts before writing all that code.
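For reference, a minimal sketch of the “submit only while nothing is in flight” part of this gate (the `generateMesh()` stub, the single-thread executor, and the `isBusy()`/`shutdown()` helpers are illustrative assumptions, not the actual game code):

```java
import java.util.concurrent.*;

// Sketch: allow at most one mesh-generation job in flight at a time.
public class MeshGate {
    private final ExecutorService executor = Executors.newSingleThreadExecutor();
    private Future<String> meshFuture; // null means nothing is in flight

    // Called once per update-loop tick.
    public void update() {
        if (meshFuture == null) {
            meshFuture = executor.submit(this::generateMesh); // start next job
        } else if (meshFuture.isDone()) {
            meshFuture = null; // chunk was attached elsewhere; free the slot
        }
    }

    public boolean isBusy() { return meshFuture != null; }

    public void shutdown() { executor.shutdownNow(); }

    private String generateMesh() {
        return "mesh-chunk"; // stand-in for real mesh generation
    }
}
```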

Once this stage is complete I get to move on to the rules and challenges of the game :slight_smile: This will be a first for me if I can actually make it to a “working” game. I am horrible at finishing projects (random half-finished experimental projects everywhere), but I am forcing myself through!

Edit: I should mention that the reason my “level” is generated as the player moves along is that I want the level to be infinite. It’s a game where the goal is “how far can you get”.

If I understand you correctly, you could just track the number of submissions by adding your futures to a list and checking its size…?
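Something along these lines, as a rough sketch (the `MAX_PENDING` cap of 4 is an arbitrary illustrative value):

```java
import java.util.*;
import java.util.concurrent.*;

// Sketch: keep submitted futures in a list, prune the finished ones,
// and refuse new submissions once a cap is reached.
public class PendingLimiter {
    private static final int MAX_PENDING = 4; // illustrative cap

    private final List<Future<?>> pending = new ArrayList<>();

    public boolean trySubmit(ExecutorService executor, Runnable task) {
        pending.removeIf(Future::isDone);   // drop completed work first
        if (pending.size() >= MAX_PENDING) {
            return false;                   // backed up: skip this tick
        }
        pending.add(executor.submit(task));
        return true;
    }
}
```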

Well, what I don’t understand is what you want to do when you have too many requests pending. Seems like that would leave empty chunks of the level if you don’t generate them. I would think that if you request something then you must eventually render it, no?

As it is, it seems like you maybe aren’t clearing things out when they are no longer needed? Like, you request some area to be rendered, then the player moves somewhere else and that request is no longer needed… you should remove it from the queue… cancel it.
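One possible sketch of that cancellation, assuming a linear world where each chunk has an integer index and a `KEEP_RADIUS` chosen here only for illustration:

```java
import java.util.*;
import java.util.concurrent.*;

// Sketch: when the player moves on, cancel pending generation for
// chunks outside a keep radius around the player's current chunk.
public class ChunkCanceller {
    private static final int KEEP_RADIUS = 2; // illustrative radius

    private final Map<Integer, Future<?>> pending = new HashMap<>();

    public void track(int chunkIndex, Future<?> future) {
        pending.put(chunkIndex, future);
    }

    // Drop anything too far behind/ahead of the player's current chunk.
    public void pruneAround(int playerChunk) {
        pending.entrySet().removeIf(e -> {
            if (Math.abs(e.getKey() - playerChunk) > KEEP_RADIUS) {
                e.getValue().cancel(true); // interrupt if already running
                return true;
            }
            return e.getValue().isDone();  // also drop finished entries
        });
    }

    public int pendingCount() { return pending.size(); }
}
```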

Simple and elegant, thanks. These are the things a newbie does not think of right away!

:slight_smile: I will let you know how it works!

I remove the “chunks” of mesh that are no longer needed as the player moves along, in another thread, so I think everything is being cleared properly. Also, my FPS is great and I’ve tested my game over long play sessions.

I was getting backed-up requests because a submission happened every update loop unless another condition was added. I was trying to figure out what the best condition would be, and it seems normen has a great suggestion.

Thanks for the reply :smile:

Well, that’s just more of what we don’t understand about your setup I guess.

I’d assumed you were only adding new pending chunks once they were valid to add, not repeatedly spamming the queue with the same chunks. If your world is a grid then you should only queue up pending chunks when crossing chunk boundaries. Actually, that’s probably true even if your world is not a regular grid.
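A boundary check like that can be sketched as follows (the `CHUNK_SIZE` of 16 and the one-dimensional position are illustrative assumptions):

```java
// Sketch: report a crossing only on the tick where the player
// enters a different chunk than the one they were last in.
public class BoundaryTrigger {
    private static final float CHUNK_SIZE = 16f; // illustrative size

    private int lastChunk = Integer.MIN_VALUE;   // forces first call to trigger

    public boolean crossedBoundary(float playerX) {
        int chunk = (int) Math.floor(playerX / CHUNK_SIZE);
        if (chunk != lastChunk) {
            lastChunk = chunk;
            return true;  // queue new chunk generation here
        }
        return false;     // still inside the same chunk: do nothing
    }
}
```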

If the issue is that you “don’t know” whether you already loaded a certain chunk, then make a new object type that holds the Future plus information about what it’s for, and add that to the list; then you have a record of what you already did and can check the list to see whether that chunk has been processed. Or alternatively use a (Hash)Map for quicker access.
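The HashMap variant might look like this sketch, keyed by chunk coordinates (the `ChunkKey` record and 2D grid coordinates are illustrative assumptions):

```java
import java.util.*;
import java.util.concurrent.*;

// Sketch: key each request by its chunk coordinate so duplicate
// submissions for the same chunk are skipped.
public class ChunkRequests {
    record ChunkKey(int x, int z) {} // hashable identity for a chunk

    private final Map<ChunkKey, Future<?>> requested = new HashMap<>();

    // Submits only if this chunk was never requested before.
    public boolean requestOnce(ExecutorService executor, int x, int z, Runnable task) {
        ChunkKey key = new ChunkKey(x, z);
        if (requested.containsKey(key)) {
            return false; // already generated or in flight
        }
        requested.put(key, executor.submit(task));
        return true;
    }
}
```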

I was spamming the queue initially (that was a testing phase, just to make sure the generation worked at all). It is now queued based on a condition; I just wanted to know which condition would be best. A boundary check would work perfectly in my world. I was just wondering whether there were other options, and now I have two very good ones thanks to you both: I like normen’s idea, and this boundary approach as well.