Hi,
I'm not very experienced with Java or with jME. As far as I understand, the CPU waits for the graphics card to render the scene.
But isn't that a waste of CPU time? If you had two threads, one could be used for rendering and the other for calculating the next timestep.
You could gain a lot of speed this way, even on a single-CPU system.
Or am I wrong? :?
Well, I see, it's a lot more complex than I thought. XD
If you issue render commands faster than the GPU can render them, then at some point the CPU will have to wait for the GPU. Typically, the GPU can keep at least one frame's worth of data in the pipeline (and often more), so at the point where you start waiting for the GPU, you will be more than one frame ahead of what's being displayed. Trying to calculate ahead more than that would be bad, because it would introduce unnecessary delay between reacting to user input and displaying that reaction on the screen.
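To make the pipelining concrete, here's a rough standalone Java sketch (not jME or driver code — the 2-frame queue depth and the 10 ms render time are made-up numbers) that models the GPU's command buffer as a bounded queue. The CPU submits frames as fast as it can, and only starts blocking once the queue is full, i.e. once it's running ahead of the GPU:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class PipelineDemo {

    /** Pushes n frames through a 2-deep "driver queue" while a fake GPU
     *  drains it at 10 ms per frame; returns total ms the CPU spent blocked. */
    static long submitFrames(int n) {
        BlockingQueue<Integer> pipeline = new ArrayBlockingQueue<>(2);

        Thread gpu = new Thread(() -> {
            try {
                while (true) {
                    pipeline.take();   // pull the next frame's commands
                    Thread.sleep(10);  // simulate 10 ms of render time
                }
            } catch (InterruptedException e) {
                // shutdown signal
            }
        });
        gpu.setDaemon(true);
        gpu.start();

        long waitedMs = 0;
        try {
            for (int frame = 0; frame < n; frame++) {
                long t0 = System.nanoTime();
                pipeline.put(frame);   // blocks only once the queue is full
                waitedMs += (System.nanoTime() - t0) / 1_000_000;
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        gpu.interrupt();
        return waitedMs;
    }

    public static void main(String[] args) {
        System.out.println("CPU blocked for ~" + submitFrames(10)
                + " ms while submitting 10 frames");
    }
}
```

The first couple of `put()` calls return instantly (the CPU is "ahead" by the queue depth); after that, every submit waits for the fake GPU to free a slot, which is exactly the waiting the quote above describes.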
Now, if you are in a GPU limited state, and you add more work for the CPU to do, you will likely not lose much frame rate, because the CPU will just do more work until it starts waiting for the GPU, so that's not so bad. Once you start doing more work on the CPU than you have work for the GPU, the CPU will start becoming the limiting factor, and you will no longer run far ahead of the GPU. My advice is to add all the CPU work that you would want to do, and measure the frame rate. If it's OK, then don't worry. If it's not, then start looking at what needs to be optimized – CPU or GPU.
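A quick way to act on that "measure first" advice, sketched in plain Java (the 60 fps budget and the dummy workload are placeholders for your real per-frame update code; in jME you'd time your `simpleUpdate` body the same way):

```java
public class FrameBudget {
    // Target frame budget: at 60 fps the CPU has roughly 16.7 ms per frame
    // before it becomes the limiting factor (assuming the GPU keeps up).
    static final double BUDGET_MS = 1000.0 / 60.0;

    /** Times one frame's worth of game logic and reports whether it fits the budget. */
    static boolean fitsBudget(Runnable gameLogic) {
        long t0 = System.nanoTime();
        gameLogic.run();
        double cpuMs = (System.nanoTime() - t0) / 1e6;
        System.out.printf("update took %.2f ms (budget %.2f ms)%n", cpuMs, BUDGET_MS);
        return cpuMs < BUDGET_MS;
    }

    public static void main(String[] args) {
        // Stand-in workload; swap in your actual per-frame update code.
        boolean ok = fitsBudget(() -> {
            double sink = 0;
            for (int i = 0; i < 100_000; i++) sink += Math.sqrt(i);
        });
        System.out.println(ok ? "CPU work fits -> likely still GPU-limited, don't worry"
                              : "CPU-limited -> start optimizing the update step");
    }
}
```

If the measured update time stays well under the frame budget, adding that CPU work won't cost you much frame rate, matching the point above; once it approaches or exceeds the budget, the CPU has become the bottleneck and that's where to optimize.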