Would it be possible, in general, to load a model asynchronously into graphics memory? Let's say we have a normal 3D scene and the user moves around; when a model comes into rendering distance, the scene freezes while the model is loaded for the first time, and I think that is not acceptable for a good user experience. Loading models only when they really have to be rendered is a good concept, but I think it is really important not to do this on the normal rendering thread if at all possible.
In many games, objects are simply blended in as soon as they are in memory, and I think this would be a very important feature as well. Or is it already possible and I just don't know about it?
OpenGL basically decides when it needs to upload the image to the GPU. You could force that to happen in the first frame by setting the CullHint of all objects to Never and attaching them to the root node once.
There has been talk about having a second context for such things, which would allow doing the upload on a separate thread, but it's quite dangerous: one could easily crash the GL thread by accessing the same GL resource from two threads (e.g. uploading an image on the "worker" GL thread while a model uses the same texture in the normal update loop). Plus, the various renderer implementations might not handle this too well.
At least Android should be capable of shared contexts. I don't know about ARB_sync, but I assume it would also be available.
But I can confirm it works flawlessly under JOGL and LWJGL3.
In any case, if there were a vote, I would say such "low-level" features should be managed by the user and not by the core. Something like AsyncLoader.preload(Mesh|Texture) and AsyncLoader.delete(Mesh|Texture) could be enough.
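A minimal sketch of what such a user-managed API could look like, assuming uploads are funneled through a single worker thread (the class and method names follow the suggestion above but are hypothetical, not jME API):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical AsyncLoader as proposed above: one dedicated worker thread
// (which would own the shared GL context) serializes all upload/delete tasks.
final class AsyncLoader {
    private final ExecutorService glWorker = Executors.newSingleThreadExecutor();

    // Returns a future that completes once the upload task has run.
    Future<?> preload(Runnable uploadTask) {
        return glWorker.submit(uploadTask);
    }

    Future<?> delete(Runnable deleteTask) {
        return glWorker.submit(deleteTask);
    }

    void shutdown() {
        glWorker.shutdown();
    }
}
```

The single-threaded executor matters: GL contexts must only ever be current on one thread, so serializing all work onto one worker sidesteps that class of crashes.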
But all in all I agree with normen: if possible, just preload stuff; that's way easier for you as a user and for the engine.
Yes, I was talking about the second step. The first step we already do in the background, but I thought there must be a way to do the second one asynchronously as well. Having two GL contexts like normen and zzuegg suggested sounds interesting. Where should I start when I want to try this?
The scenario I would test would be quite simple: load one large textured object into memory using a separate context, and when it's fully loaded, add it to the scene from the normal update loop. Is this scalable in the sense that this process could (in theory) happen hundreds of times in parallel?
In theory it is scalable; in practice you only have one PCIe bus, so having many contexts just for uploading does not bring any extra performance.
I am not sure how you would add such features in a nice way… For getting a secondary context using the JOGL renderer, something like this might work:
Pretty much untested, but logic-wise I use a similar piece of code in my own renderer.
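A rough, untested sketch using JOGL 2's offscreen-drawable API (`com.jogamp.opengl`); how you get hold of the renderer's main `GLContext` is renderer-specific and is assumed here as `mainContext`:

```java
import com.jogamp.opengl.*;

// Untested sketch: a second GL context, sharing resources with the main one.
GLProfile profile = GLProfile.getDefault();
GLCapabilities caps = new GLCapabilities(profile);
GLDrawableFactory factory = GLDrawableFactory.getFactory(profile);

// 1x1 offscreen drawable whose only job is to own the worker context
GLOffscreenAutoDrawable worker =
        factory.createOffscreenAutoDrawable(null, caps, null, 1, 1);
worker.setSharedContext(mainContext); // mainContext: the renderer's GLContext
worker.display();                     // force context creation

// Later, on the worker thread:
GLContext ctx = worker.getContext();
if (ctx.makeCurrent() != GLContext.CONTEXT_NOT_CURRENT) {
    try {
        GL gl = ctx.getGL();
        // ... glGenTextures / glTexImage2D / fence calls go here ...
    } finally {
        ctx.release();
    }
}
```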
You can then create your own MeshLoadingTask and TextureLoadingTask with the necessary GL calls (and a client-side fence at the end, followed by some sort of callback to notify your application).
I guess the jME rendering functions are not stateless, so you cannot abuse them for this.
As a warning, this really is just a very, very dirty hack in any case. It clearly falls into the category "try it when you know what you are doing, but better find another solution." There are plenty of possible issues: native crashes of the GL context, and you will have to manage all synchronisation yourself.
As additional info: basically every data-holding object is shareable, while every state-holding object is not. The list of non-shareable objects includes (even though the names suggest otherwise) vertex array objects and framebuffer objects.
Changes to context state are never propagated to shared contexts. But if you modify, for example, a shader uniform, it is modified globally.
All operations on the same context are serialized, and using the same context from multiple threads is known to cause problems. That said, uploading textures or meshes asynchronously is possible, but of limited use… If you're just trying to avoid stuttering, you can limit the number of models that you attach in a single frame, maybe using a placeholder or similar. Putting a bunch of objects into the scene during a single frame will cause stuttering regardless of whether you upload textures and meshes asynchronously or not.
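Limiting attachments per frame is plain bookkeeping, no GL involved. A minimal sketch (the class name is made up, and a generic type stands in for jME's Spatial):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Throttle: newly loaded objects queue up, and at most MAX_PER_FRAME of them
// are handed to the scene graph per update, spreading the first-frame
// upload cost over several frames.
final class AttachThrottle<T> {
    private static final int MAX_PER_FRAME = 2; // tune for your scene
    private final Queue<T> pending = new ArrayDeque<>();

    void enqueue(T spatial) {
        pending.add(spatial);
    }

    // Call once per frame from the update loop; returns what to attach now.
    List<T> drainForThisFrame() {
        List<T> batch = new ArrayList<>();
        while (batch.size() < MAX_PER_FRAME && !pending.isEmpty()) {
            batch.add(pending.poll());
        }
        return batch;
    }
}
```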
A way more interesting approach might be to change the texture loading (not currently in jME, but probably doable with a bit of work):
→ Usually the textures are the actually large part of a model.
→ Only load up to the 16x16 mipmap level (DDS is necessary for this, since you need materialized mipmaps).
→ Load the rest over time, e.g. every few frames load one more, higher mipmap level.
That way most of the work is stretched over a pretty long time.
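The scheduling part of that idea is pure bookkeeping and can be sketched without any GL (class and method names are made up; assumes a square power-of-two texture):

```java
// Progressive mipmap streaming order: start at the coarse 16x16 level and
// release one finer level every few frames. The actual glTexImage2D upload
// for the returned level would happen elsewhere.
final class MipStreamer {
    private int nextLevel;              // next mip level to upload (counts down)
    private final int framesPerLevel;   // frames to wait between levels
    private int frameCounter = 0;

    MipStreamer(int textureSize, int framesPerLevel) {
        // Mip level whose size is 16x16, for a square power-of-two texture:
        // exact integer log2 via trailing zeros, avoiding floating point.
        this.nextLevel = Integer.numberOfTrailingZeros(textureSize / 16);
        this.framesPerLevel = framesPerLevel;
    }

    // Call once per frame; returns the mip level to upload now, or -1 if
    // nothing is due this frame (level 0 = full resolution).
    int levelToUploadThisFrame() {
        if (nextLevel < 0) return -1;                      // all levels done
        if (frameCounter++ % framesPerLevel != 0) return -1; // not due yet
        return nextLevel--;
    }
}
```

For a 1024x1024 texture this yields levels 6 (16x16) down to 0 (1024x1024), one per scheduling step.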
I tried JPG and PNG files. Can't the internal jME Android texture-loading logic make use of the 100x faster loading speed? I scrolled through the renderer.android.TextureUtil class, and it already uses the android.opengl.GLUtils class, which is probably also what Android's ImageView uses, so I would expect uploading into graphics memory to be about the same speed?
That was more of an explanation of why Android is faster: it simply uses a different library for loading than the desktop JVM does. If you want to go for speed, use compressed textures: DDS on desktop and some other format (whose name I forget) on mobile. They need no processing and can be used directly, since they are already in a native, GPU-compatible format.
3.1 Android, 3.0 Android, and Desktop image loading all work differently.
In 3.0 Android, we loaded images via the Android API and then injected the data into GL. In 3.1 Android, we use a C++ library called stb_image (so it is handled by native code). On desktop, image loading is handled via AWT; the image is then copied into a native buffer, which is uploaded to GL.
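The desktop path (decode via AWT, copy into a native buffer) can be illustrated roughly like this; it is a simplification for clarity, not jME's actual conversion code:

```java
import java.awt.image.BufferedImage;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Decode happens via AWT (ImageIO would produce the BufferedImage); the
// pixels are then copied into a direct (native) buffer in RGBA order,
// which is the kind of buffer GL upload calls expect.
final class AwtToNative {
    static ByteBuffer toRgbaBuffer(BufferedImage img) {
        int w = img.getWidth(), h = img.getHeight();
        ByteBuffer buf = ByteBuffer.allocateDirect(w * h * 4)
                                   .order(ByteOrder.nativeOrder());
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int argb = img.getRGB(x, y);          // AWT packs as ARGB
                buf.put((byte) ((argb >> 16) & 0xFF)); // R
                buf.put((byte) ((argb >> 8) & 0xFF));  // G
                buf.put((byte) (argb & 0xFF));         // B
                buf.put((byte) ((argb >> 24) & 0xFF)); // A
            }
        }
        buf.flip();
        return buf;
    }
}
```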
The reason things changed for 3.1 Android is that the Android API killed the alpha channel for PNG images, so we had to resort to native code to avoid that behavior…