Proposed fix for issue #91 (GitHub) / issue #580 (Google Code): Monitor GPU Memory

Hi, I have worked on this issue (https://code.google.com/p/jmonkeyengine/issues/detail?id=580) for some time now. I have added two new methods, getFreeMemory() and getMaxMemory(), to the Renderer interface (com.jme3.renderer.Renderer):

1. getFreeMemory() returns the sum of GL_RENDERBUFFER_FREE_MEMORY_ATI, GL_TEXTURE_FREE_MEMORY_ATI and GL_VBO_FREE_MEMORY_ATI on ATI, and GPU_MEMORY_INFO_TOTAL_AVAILABLE_MEMORY_NVX on NVIDIA.
2. getMaxMemory() returns GPU_MEMORY_INFO_DEDICATED_VIDMEM_NVX on NVIDIA and -1 on ATI.
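In case the pastebins below don't survive, here is a minimal sketch of what the interface additions could look like; the Javadoc wording and the int return types are my guesses, not necessarily what the patch does:

```java
package com.jme3.renderer;

public interface Renderer {

    // ... existing Renderer methods ...

    /**
     * @return the currently free GPU memory in kB, queried via
     *         ATI_meminfo on ATI or NVX_gpu_memory_info on NVIDIA.
     * @throws UnsupportedOperationException if neither extension
     *         is available on the current platform.
     */
    public int getFreeMemory();

    /**
     * @return the dedicated video memory in kB on NVIDIA, or -1 on
     *         ATI, which exposes no equivalent query.
     */
    public int getMaxMemory();
}
```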

I have overridden these in the classes implementing Renderer as follows:

  1. LwjglRenderer/Lwjgl1Renderer (com.jme3.renderer.lwjgl): LWJGL provides classes that expose GPU memory stats for ATI (ATIMeminfo) and NVIDIA (NVXGpuMemoryInfo). Here's the code: http://pastebin.com/f8YnFXAH (see the LWJGL sketch after this list).

  2. OGLESShaderRenderer (com.jme3.renderer.android): The OpenGL registry page for the NVX_gpu_memory_info extension (https://www.opengl.org/registry/specs/NVX/gpu_memory_info.txt) states that "NVIDIA's Tegra drivers will not expose this extension." Moreover, I believe ATI GPUs are not present in Android devices yet (Imageon was acquired by AMD in 2006 and now belongs to Qualcomm), so on Android both methods simply throw UnsupportedOperationException.

  3. IGLESShaderRenderer (com.jme3.renderer.ios): This class uses native methods and static final fields from JmeIosGLES (com.jme3.renderer.ios). Those fields do not include the GPU_MEMORY_INFO_TOTAL_AVAILABLE_MEMORY_NVX and GPU_MEMORY_INFO_DEDICATED_VIDMEM_NVX constants. However, the glGetIntegerv method of JmeIosGLES is native, and the OpenGL API defines these tokens (I searched http://www.opengl.org/registry/api/GL/glext.h), so passing the int values of the tokens directly to glGetIntegerv may work. I am not sure about this and was hoping someone could verify (see the iOS sketch after this list).
    Here's the code: http://pastebin.com/hm7Uj4Vi

  4. JoglRenderer/Jogl1Renderer (com.jme3.renderer.jogl): I spent a good deal of time browsing the JOGL docs looking for classes that could provide the required constants, and later also tried Stack Overflow and the JOGL forum (thread: "JOGL and NVX_gpu_memory_info extension").
    I will request that JOGL expose the extension. Anyway, I have written the methods so that they will work as soon as the extension is available and will not cause problems if it is not (see the JOGL sketch after this list). Here's the code: http://pastebin.com/RCKSgw8F

  5. NullRenderer (com.jme3.system): the method bodies are left empty.
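Since pastebin links tend to rot, here is a minimal sketch of the LWJGL variant from item 1. ATIMeminfo, NVXGpuMemoryInfo, GLContext and BufferUtils are real LWJGL 2 classes; the helper class name and the summing of the three ATI pools follow the description above and are otherwise my assumptions:

```java
import java.nio.IntBuffer;
import org.lwjgl.BufferUtils;
import org.lwjgl.opengl.ATIMeminfo;
import org.lwjgl.opengl.ContextCapabilities;
import org.lwjgl.opengl.GL11;
import org.lwjgl.opengl.GLContext;
import org.lwjgl.opengl.NVXGpuMemoryInfo;

public class GpuMemInfo {

    /** Approximate free GPU memory in kB, using whichever extension exists. */
    public static int getFreeMemory() {
        ContextCapabilities caps = GLContext.getCapabilities();
        if (caps.GL_NVX_gpu_memory_info) {
            return GL11.glGetInteger(
                    NVXGpuMemoryInfo.GL_GPU_MEMORY_INFO_TOTAL_AVAILABLE_MEMORY_NVX);
        } else if (caps.GL_ATI_meminfo) {
            // Each ATI query writes 4 ints; the first is the free pool size in kB.
            // LWJGL 2's glGetInteger(IntBuffer) requires a buffer of at least 16 ints.
            IntBuffer buf = BufferUtils.createIntBuffer(16);
            int free = 0;
            GL11.glGetInteger(ATIMeminfo.GL_VBO_FREE_MEMORY_ATI, buf);
            free += buf.get(0);
            GL11.glGetInteger(ATIMeminfo.GL_TEXTURE_FREE_MEMORY_ATI, buf);
            free += buf.get(0);
            GL11.glGetInteger(ATIMeminfo.GL_RENDERBUFFER_FREE_MEMORY_ATI, buf);
            free += buf.get(0);
            return free;
        }
        throw new UnsupportedOperationException("No GPU memory info extension");
    }
}
```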
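For the open question in item 3, a hedged sketch of what the iOS variant might look like. The token values are fixed by the OpenGL registry (glext.h), but whether JmeIosGLES's native glGetIntegerv accepts them, and whether the iOS driver honors the query at all, is exactly what needs verifying; the Android-style signature glGetIntegerv(int, int[], int) is an assumption:

```java
// Sketch for IGLESShaderRenderer. The token values come from glext.h;
// JmeIosGLES defines no named constants for them.
private static final int GL_GPU_MEMORY_INFO_DEDICATED_VIDMEM_NVX       = 0x9047;
private static final int GL_GPU_MEMORY_INFO_TOTAL_AVAILABLE_MEMORY_NVX = 0x9048;

public int getMaxMemory() {
    // 'gles' is the JmeIosGLES instance the renderer already holds.
    // This is only legal if the driver reports GL_NVX_gpu_memory_info in its
    // extension string, which should be checked before issuing the query.
    int[] out = new int[1];
    gles.glGetIntegerv(GL_GPU_MEMORY_INFO_DEDICATED_VIDMEM_NVX, out, 0);
    return out[0];
}
```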
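And a sketch of the defensive pattern described for the JOGL variant in item 4: probe the extension string before querying, since JOGL does not (yet) expose named constants for these tokens. isExtensionAvailable() and glGetIntegerv(int, int[], int) exist in JOGL 2; the class name and the -1 fallback are mine:

```java
import javax.media.opengl.GL;

public class JoglGpuMemInfo {

    // Raw registry token (glext.h); JOGL has no named constant for it.
    private static final int GL_GPU_MEMORY_INFO_TOTAL_AVAILABLE_MEMORY_NVX = 0x9048;

    /** Returns free GPU memory in kB, or -1 if the extension is absent. */
    public static int getFreeMemory(GL gl) {
        if (!gl.isExtensionAvailable("GL_NVX_gpu_memory_info")) {
            return -1; // fail soft rather than issuing an unknown query
        }
        int[] out = new int[1];
        gl.glGetIntegerv(GL_GPU_MEMORY_INFO_TOTAL_AVAILABLE_MEMORY_NVX, out, 0);
        return out[0];
    }
}
```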


Cool. Maybe you could do a fork on GitHub so we can more easily look at / integrate the changes.

Wow, this is great :slight_smile:


Nice work. I updated the issue on GitHub with the info we've got so far. As you continue working on this, please refer to that issue in your patches, not the one on Google Code.

Hi @nehon, I've created a pull request so you can look at the changes (Issue #91 proposed fix by maany · Pull Request #125 · jMonkeyEngine/jmonkeyengine · GitHub).
@erlend_sh: I will keep that in mind :smiley:
Sorry I couldn't reply earlier; I was caught up in college exams and projects. :frowning:

So I read the comment on Stack Overflow from datenwolf about the JOGL change.

I wasn’t aware of what he states and I’m not really sure this change would be useful…
@pspeed what do you think?

I do kind of worry that for most users it might give a false impression of what is going on. On the other hand, knowing the max memory of the card might be good debugging information for why frames are dropping in some scene, or give a game a chance to tweak how it runs on low-memory cards. For example, there was a recent thread on "why does my brand new GPU run my scene like crap?" Memory could have been another data point.

I'd also kind of be interested to get @momoko_fan's take on this.

Is it a very invasive change? I haven’t really looked.

For desktop OpenGL at least, I think these metrics are tricky and probably misleading. Memory virtualization seems to be the direction the ARB is heading in.
I don't have a link handy, but someone filmed the "approaching zero driver overhead" talk in which Graham Sellers of AMD talks about ARB_sparse_texture and other tricks in modern OpenGL.

http://www.opengl.org/registry/specs/ARB/sparse_texture.txt

This is a tough one… It uses an experimental NVIDIA extension and an ATI-specific extension (not AMD?), not to mention that the information they expose is in different formats.

The app has to specifically call this method on Renderer, catch the potential UnsupportedOperationException, and then take some action based on this somewhat questionable information…
It doesn't really feel right to add this.
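To make that concrete, the calling code would have to look roughly like this (my sketch of the proposed API, with a hypothetical logger):

```java
Renderer renderer = app.getRenderer(); // 'app' is the jME3 Application
try {
    // Vendor-dependent meaning: a sum of three ATI pools vs. one NVX total.
    int freeKb = renderer.getFreeMemory();
    logger.log(Level.INFO, "Approx. free GPU memory: {0} kB", freeKb);
} catch (UnsupportedOperationException ex) {
    // Neither ATI_meminfo nor NVX_gpu_memory_info is available here.
    logger.info("GPU memory stats not supported on this renderer");
}
```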

I think it's for the best to avoid these vendor-specific extensions, as OpenGL drivers are known to be quite buggy as it is. It is recommended to stick to ARB extensions and popular EXT extensions only. I would wait for this to become official before adding support for it.

Couldn't this be implemented in an AppState and made into a plugin instead? Or does it rely on code changes in the core?
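For what it's worth, on desktop/LWJGL it could live outside the core: AppStates update on the render thread, so a plugin could issue the GL queries itself. A rough sketch, assuming LWJGL 2 (class name and logging are mine):

```java
import com.jme3.app.state.AbstractAppState;
import org.lwjgl.opengl.GL11;
import org.lwjgl.opengl.GLContext;

/** Logs approximate free VRAM once per second; desktop/LWJGL 2 only. */
public class GpuMemoryAppState extends AbstractAppState {

    // Raw registry token (glext.h); 'current available' rather than total.
    private static final int GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX = 0x9049;

    private float timer;

    @Override
    public void update(float tpf) {
        timer += tpf;
        if (timer < 1f) {
            return;
        }
        timer = 0f;
        // jME3's update loop runs on the render thread, so GL calls are legal here.
        if (GLContext.getCapabilities().GL_NVX_gpu_memory_info) {
            int freeKb = GL11.glGetInteger(
                    GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX);
            System.out.println("Free VRAM: " + freeKb + " kB");
        }
    }
}
```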