Just note that for a long time C’s version of threading and IPC was like banging rocks together to make sparks, while Java already had high-tech machinery because it built threading in from the beginning.
My reference to Guava was about the LoadingCache specifically. It’s like a Map that makes sure threaded access to a key’s value is safe even when the value has to be loaded.
So if 50 threads call cache.get(myId) and the value hasn’t been loaded yet, the load is guaranteed to be serviced by only one thread. (If it has already been loaded, then it’s returned right away.) Then they put a bunch of expiry logic in there. I actually think that DesktopAssetManager should probably already be using LoadingCache. My memory says it cobbled together its own version of thread protection, and I have about 50/50 confidence that it did so without odd edge-case bugs or, more likely, just unnecessary inefficiencies. But so far so good, eh? My guess is that users don’t hammer the asset manager’s thread safety much, but if we were to integrate a futures model, we would definitely want to bullet-proof all of that.
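For illustration, the core guarantee LoadingCache provides (one loader call per key, no matter how many threads ask) can be sketched with just the JDK’s ConcurrentHashMap.computeIfAbsent. This toy stands in for real asset loading and has none of LoadingCache’s expiry features; the names are made up for the sketch:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class SingleLoadDemo {

    static final AtomicInteger loadCount = new AtomicInteger();
    static final ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<>();

    static String get(String id) {
        // computeIfAbsent runs the loader at most once per key; other
        // threads asking for the same key block until the load is done.
        return cache.computeIfAbsent(id, key -> {
            loadCount.incrementAndGet();
            return "asset:" + key; // pretend this is an expensive load
        });
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (int i = 0; i < 50; i++) {
            pool.submit(() -> get("MyModel.j3o"));
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("loads=" + loadCount.get()); // prints loads=1
    }
}
```

LoadingCache adds the expiry/eviction policies and some other niceties on top of this basic guarantee.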
Threading architectures that don’t support continuations (something I really miss in JVM-based scripting languages but not in Java) can follow only a few different patterns. (I’m kind of making up some of these terms for simplicity and to be unambiguous.)
Simple async: fire and forget (either with a pool or not)
Polling future: run the task, constantly check it for completion, do a thing on completion
Callback: run the task, task will call us back when done. (Not really any better than chaining runnables.)
Multiphase: run the task, called once on a background thread, called again on the main thread (sort of a managed callback situation with the callback built into the task)
Lockstep: multiple threads run in parallel and all gather together on a latch/semaphore when done. JME Bullet’s parallel threading does something like this. I could probably write a small whitepaper on why this is my least favorite form of threading, but the short answer is: everyone pays for the slowest thread, and in Java the memory barriers are left a little ambiguous. Just because everybody syncs together at the end (or every frame) doesn’t necessarily mean they have a consistent view of shared memory. (Memory barriers and thread memory models are a way deeper topic.)
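As a concrete example of the “polling future” pattern above, the JDK’s ExecutorService gives you this almost for free; the sleeping task here is just a stand-in for real work:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PollingFutureDemo {

    static String runTask() throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();

        // Kick the task off on a background thread.
        Future<String> future = pool.submit(() -> {
            Thread.sleep(100); // pretend this is an expensive calculation
            return "result";
        });

        // Poll for completion, the way a game loop would once per frame.
        while (!future.isDone()) {
            // ...render a frame, update game logic, etc...
            Thread.sleep(16);
        }
        pool.shutdown();
        return future.get(); // already done, so this won't block
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runTask()); // prints result
    }
}
```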
A game, to me, has some important constraints:
The app is already polling 60 times a second… and we want to do everything in our power to never ever slow that down. = minimize update/render thread impact.
We do not want 100 threads to suddenly swamp the CPU randomly. (If you only have ‘n’ cores and ‘n’ threads are all running at 100%, then no one else can run at all.)
Also, thread creation can be memory-expensive, so randomly creating threads on the fly can do interesting things to several levels of the heap, leading to extra GC.
We would like to avoid sending 1000s of new data objects to the GPU in one frame if possible.
Sometimes you have things that should run “right now” and some things that can wait until later. For example, you want the tree that just popped into view to load “right now” while the path finding calculation could maybe wait a little bit.
To me, the above favors prioritized thread pools and a metered multiphase approach.
So some kind of Job/Task interface like:
public interface Job {
    public void runOnWorker();
    public double runOnUpdate();
}
Where runOnWorker() is called on one of the pool threads. When it’s complete, it’s added to a done queue that is drained on the update thread… which then calls runOnUpdate().
runOnUpdate() in this case can return a ‘load factor’ which is an estimate of how much impact it thinks it will have on the frame. For example, a Job that will not modify the scene graph at all can return 0 while one that is about to attach some big scene graph objects can return a higher number.
The process draining the done queue can then decide to stop early and wait for the next frame if too much ‘impact’ has gone by.
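A rough sketch of that metered drain, with the Job interface as above. The maxImpactPerFrame value and the method names are made up for this sketch; in real code this would live in an app state and the value would be a tuning knob:

```java
import java.util.concurrent.ConcurrentLinkedQueue;

public class DoneQueueDrain {

    interface Job {
        void runOnWorker();
        double runOnUpdate();
    }

    // Worker threads add finished jobs here; the update thread drains it.
    private final ConcurrentLinkedQueue<Job> doneQueue = new ConcurrentLinkedQueue<>();

    // How much 'impact' we allow per frame: a made-up tuning knob.
    private final double maxImpactPerFrame = 2.0;

    // Called by a pool thread when runOnWorker() completes.
    void finish(Job job) {
        doneQueue.add(job);
    }

    // Called once per frame on the update thread.
    void drainDoneJobs() {
        double impact = 0;
        Job job;
        while (impact < maxImpactPerFrame && (job = doneQueue.poll()) != null) {
            // Each job reports the impact it just had; once the budget
            // is spent, the rest of the queue waits for the next frame.
            impact += job.runOnUpdate();
        }
    }

    public static void main(String[] args) {
        DoneQueueDrain state = new DoneQueueDrain();
        for (int i = 0; i < 5; i++) {
            final int n = i;
            state.finish(new Job() {
                public void runOnWorker() {}
                public double runOnUpdate() {
                    System.out.println("applied job " + n);
                    return 1.0;
                }
            });
        }
        state.drainDoneJobs(); // applies only jobs 0 and 1 this "frame"
    }
}
```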
Super simple example:
class LoadModel implements Job {

    private Spatial model;

    @Override
    public void runOnWorker() {
        this.model = assetManager.loadModel("MyModel.j3o");
        // Do some other stuff if needed
    }

    @Override
    public double runOnUpdate() {
        rootNode.attachChild(model);
        return 1.0;
    }
}
getState(JobState.class).execute(new LoadModel());
And I’m not ashamed to say that I’ve arrived at this approach after failing to do this right at least 5 times… it’s also the approach that SiO2’s job stuff uses. I got tired of rewriting thread pooling wrong a bunch of times for old Mythruna and my other tech demos and finally got something I can use everywhere.
I also have some negative experiences with enterprise level software whose threading architecture is very toxic to games. (Accumulo being the worst of these.) Pitfalls abound.