While working on another issue (ClassCastException in PBR when using setColor) I turned off vsync and then I saw that my offscreen rendering solution causes the FPS to drop from ~1000 to ~100.
The main culprit in the performance drop is readFrameBufferWithFormat, so I suspected the format conversion, but testing with plain readFrameBuffer brought no real performance gain…
So I thought I had done something wrong in my implementation and tried the JME test cases: TestRenderToTexture runs at ~1000 FPS, but TestRenderToMemory also drops to ~100 FPS (the JME thread/window, of course, not the Swing one).
Is it only this bad on my setup/graphics card/…? Is this kind of FPS drop to be expected? I mean, currently the little offscreen image causes far more load than the actual application, and I planned to have several of them…
I’m by no means an expert when it comes to rendering-related engine internals, but a bit of arithmetic shows:
100 1/s * 1920 px * 1080 px = 207,360,000 pixels per second (~207 megapixels/s), and with 8 bits per color channel you can multiply that by 3 or 4 bytes per pixel, giving 829,440,000 bytes per second ≈ 829.4 MB/s.
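The back-of-the-envelope numbers above can be checked with a few lines of Java (assuming 4 bytes per pixel, i.e. an RGBA8-style format):

```java
public class BandwidthEstimate {
    public static void main(String[] args) {
        long fps = 100;                  // observed frame rate with readback enabled
        long width = 1920, height = 1080;
        long bytesPerPixel = 4;          // 8 bits per channel, 4 channels (RGBA8)

        long pixelsPerSecond = fps * width * height;
        long bytesPerSecond = pixelsPerSecond * bytesPerPixel;

        System.out.println(pixelsPerSecond); // 207360000
        System.out.println(bytesPerSecond);  // 829440000, i.e. ~829.4 MB/s
    }
}
```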
Now a quick Google search suggests you could theoretically expect at least 5 GB/s, including all the driver overhead and such.
So I’m not sure whether there is a bug/slow implementation or whether you simply don’t have enough bandwidth (which doesn’t quite make sense to me, since the drawn output is transferred to the screen at 1000 FPS as well).
What is clear, though, is that you are transferring huge amounts of data, so if you don’t strictly need RenderToMemory, rather do RenderToTexture and throw some shaders at it.
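For reference, the render-to-texture route keeps everything on the GPU. A minimal sketch of that setup, following the pattern of JME's TestRenderToTexture; `renderManager`, `offCamera`, `offScene`, and `material` are assumed to already exist in your application:

```java
// Render the offscreen scene into a FrameBuffer whose color attachment is
// a Texture2D -- the result can be applied to any material without ever
// copying pixels back to the CPU.
Texture2D offTex = new Texture2D(512, 512, Image.Format.RGBA8);

FrameBuffer offBuffer = new FrameBuffer(512, 512, 1);
offBuffer.setDepthBuffer(Image.Format.Depth);
offBuffer.setColorTexture(offTex);

// Pre-view: rendered before the main viewport each frame.
ViewPort offView = renderManager.createPreView("Offscreen View", offCamera);
offView.setClearFlags(true, true, true);
offView.setOutputFrameBuffer(offBuffer);
offView.attachScene(offScene);

// offTex can now be used like any other texture, e.g. on a preview quad:
material.setTexture("ColorMap", offTex);
```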
I am only using 512x512, but your point stands: this performance drop is to be expected, so avoid using this function altogether (apart from special needs like screen capture and the like).
My ultimate goal is a character designer with sliders, radio buttons, … and a little rendered character preview in the middle. Currently I am doing this with JavaFX and an image panel in the middle that shows the image read back via readFrameBuffer.
So as I see it, I either live with the performance drop while the designer is open, or I use another UI technique for it that can directly utilize the offscreen texture (like writing my own, or using Lemur).
Yes, you are right! I just underestimated the cost of transferring it from the GPU. It would be OK for static scenes, but I also want to play animations, so that’s not an option. Anyway, I have been thinking about trying out Lemur for a long time, and now might be the trigger to do so. Which means afterwards I will have used all the JME UI frameworks available…
Yes ;) Lemur looks great, by the way, with a really nice slim architecture. And the best thing about it: it already works with an offscreen texture, and (as expected) the FPS did not drop a single frame ;)