I kind of need an idea to solve a performance problem; maybe some of you can help.
I have a large scene that I have divided into a grid. I dynamically attach and remove objects from the scene based on their distance to the camera. I get about 230 FPS when moving through the scene, but every time a new object is attached, there is a small lag. I tested around and found out that the lag is caused by the file size of the texture images that are loaded when the object is attached. Since newly attached objects are always rather far away, it would be totally fine to load them with a smaller texture image first and then, when approaching, swap in a larger one.
• Is there a smart way to deal with this?
• Is there maybe a way to load a JPEG texture progressively, so that the loading is spread over multiple frames?
• MipMapping does not seem to help, since a mip-mapped image is even larger than an image with only one level of detail.
• Any other ideas to solve this problem?
Of course I considered just using a finer grid to reduce the size of the texture atlases, but this gives me many more objects/geometries. Unfortunately, to prevent the lag caused by attaching objects, the grid size would need to be so small that the object count increases too much and I only get around 10 FPS.
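The distance-based attach/detach described above can be sketched in plain Java (class and method names are hypothetical, not jME API; in a real app you would call this from `simpleUpdate` and attach/detach one `Node` per cell):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of distance-based cell loading: the scene is split into square
// cells of size cellSize; each frame we collect the coordinates of cells
// whose centers lie within loadRadius of the camera, then attach those
// cell nodes and detach the rest.
public class GridLoader {

    private final float cellSize;
    private final float loadRadius;

    public GridLoader(float cellSize, float loadRadius) {
        this.cellSize = cellSize;
        this.loadRadius = loadRadius;
    }

    /** Cell coordinates (x, z) whose centers lie within loadRadius of the camera. */
    public List<int[]> cellsToLoad(float camX, float camZ) {
        List<int[]> result = new ArrayList<>();
        int r = (int) Math.ceil(loadRadius / cellSize);
        int cx = Math.round(camX / cellSize);
        int cz = Math.round(camZ / cellSize);
        for (int x = cx - r; x <= cx + r; x++) {
            for (int z = cz - r; z <= cz + r; z++) {
                float dx = x * cellSize - camX;
                float dz = z * cellSize - camZ;
                if (dx * dx + dz * dz <= loadRadius * loadRadius) {
                    result.add(new int[] { x, z });
                }
            }
        }
        return result;
    }
}
```

The grid-size tradeoff from the post shows up directly here: smaller `cellSize` means smaller per-cell textures (less attach lag) but more cells and thus more geometries in view.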
Yes, and I would still have to load the larger image at some point, which would probably cause the same lag, unless the reduced overall number of large textures makes loading them lag-free. But yes, I also don't think this would be the smartest solution.
Ah, interesting! But I'm not sure how to do that. Just attach all objects in the first frame and then start detaching? Wouldn't the textures be "unloaded" as well? I am already loading all the objects at the beginning, but I simply do not attach them to the scene node until they are within the specified distance.
There are basically three representations of a texture. One is in jME when you load it; the next is in OpenGL and actually references the same bytes as the one in jME. Then there is the actual copy in GPU memory. OpenGL manages the upload to the GPU; even using renderer.preLoadTextures (or whatever it is called) can't make 100% sure that OpenGL actually uploads the textures. But if you make it display the texture (e.g. unculled outside of the cam view), it basically has no choice but to upload it.
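For reference, jME3 exposes this kind of ahead-of-time upload as `RenderManager.preloadScene` (a minimal sketch; as noted above, whether the driver actually commits the data to VRAM is still up to OpenGL):

```java
// Inside a SimpleApplication, e.g. at the end of simpleInitApp() after
// building the scene: walks the subtree and uploads its mesh and texture
// data to OpenGL so the first real render doesn't stall on it.
getRenderManager().preloadScene(rootNode);
```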
Okay, I just ran a simple test: I moved through the scene once, then reset the position to the start and moved through the scene again. It actually works!!
The only drawback now is that this, in my understanding, only works as long as the GPU has enough memory to store all the images. In my test, moving a long way through the scene and then resetting unloads the first textures, and the problem re-occurs.
So, I guess, the point is: my scene is just too large in terms of memory.
So if I tried loading lower-resolution textures first: would it be more efficient to use an extra set of materials with the larger textures and apply them to the objects within a nearer range, or to use an extra set of textures and apply them to the materials of the objects within a nearer range? Or does it make no difference?
Approach 1: Swap LoD materials by distance:
• I created a HashMap containing the LoD materials
• I change the Material by distance
• I get 230 FPS with full-resolution textures nearby and a load radius for the full textures of 400 m, in a city with dense housing.
–> no lag, solved
Approach 2: Try to generate a test texture atlas of 262144×262144:
• I could not create the atlas with PS, since my drive space was too small: I have 80 GB free.
• I am sure saving the atlas as a JPG would reduce the file size dramatically, but I could not even create it, so how would I save it as a JPG?
I'll go for the first solution, since it also leaves me more flexibility to change the scene without re-baking the whole atlas.