Compressed textures with per-frame animation on billboards

Hello!

I’m working on a remake of Might & Magic 6/7. It uses art from the original game archives. Monsters are presented as billboards with a set of interchanging textures. I have run into a problem: the animation is extremely memory-heavy. JME stores textures on the heap as bitmaps, and each monster frame (256x256) takes ~2 MB of memory. Each monster has ~125 frames (~5 actions, ~5 frames per action, 5 sides), so a single monster takes over 200 MB of memory!

Is there a way to store the textures in a compressed state?

Try loading and extracting the data from the texture file into the texture object when it’s needed, instead of creating Texture objects for all the single textures beforehand.

So you should only have one Texture object per NPC, then extract the data and write it to that texture each frame.

Wouldn’t that create a lot of disk I/O overhead, which in turn might lead to some slight pauses while each texture is loaded?



Currently doing something similar, but I’m working with one large sprite sheet at the moment. May lead to more though, the future is not certain! =p



~FlaH

It depends on how you load the files. You can preload the texture files from disk, create a caching system or something similar. All I am saying is you shouldn’t create all the Texture objects in memory before you actually need them.

normen said:
Try loading and extracting the data from the texture file into the texture object when it’s needed, instead of creating Texture objects for all the single textures beforehand.

I've already tried that. Loading resources from disk every frame is death for performance.
normen said:
You can preload the texture files from disk, create a caching system or something similar. All I am saying is you shouldn't create all the Texture objects in memory before you actually need them.

A caching system also requires heap space, and a compressed cache is very slow. It is not a solution.
normen said:
you shouldn't create all the Texture objects in memory before you actually need them.

I should =) All of a monster's resources are needed at every moment while it is in the scene.

I’m looking for a way to combine the low-level LWJGL API with regular JME rendering. It should allow me to keep the sprites in GPU memory and use compression or palettes.

How can I do this with JME?

You should not use LWJGL directly in jME3 as this might lead to unexpected results; rather, use a shader to modify the data.

And you should not create all the single images as Texture objects; you concluded yourself that this leads to 200 MB of RAM for each NPC. The cache would only hold the compressed, combined texture file for one NPC (like on disk) instead of expanding 150 uncompressed textures into memory directly. If you do it correctly, the overhead should be fine. Only one frame at a time should be uncompressed and copied to the texture, not all the single frames all the time.
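Something like this rough sketch of such a cache (the class name and storage layout are mine; ImageIO stands in for whatever decoder you end up using):

```java
import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.IOException;

/** Keeps only the compressed bytes of each frame on the heap; a single frame is decoded on demand. */
public class CompressedFrameCache {

    private final byte[][] compressedFrames;   // e.g. the PNG/JPEG bytes exactly as stored on disk

    public CompressedFrameCache(byte[][] compressedFrames) {
        this.compressedFrames = compressedFrames;
    }

    /** Decode only the requested frame; none of the other frames get expanded. */
    public BufferedImage decodeFrame(int index) throws IOException {
        return ImageIO.read(new ByteArrayInputStream(compressedFrames[index]));
    }
}
```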

normen said:
And you should *not* create all the single images as Texture objects; you concluded yourself that this leads to 200 MB of RAM for each NPC

200 MB is the size of the bitmap data. A large number of Texture objects might cause a performance problem, but not a memory one.
In any case, I'm already using a single Texture object per monster, as you advise. The texture's underlying image is swapped every frame.
A compressed cache is a good idea. I've tested different compression algorithms with different images and numbers of decompressions. Results:

compression | compressed size (% of original) | ms to decompress a 256x256 image (standard MM7 monster sprite size) from an in-memory byte array
jpeg | 8% | 5.95
png | 77% | 9.5
gif | 18% | 5
lz | 94% | 2.7
gzip | 68% | 1.4

JPEG and GIF have a suitable compression factor, but decompression is too slow. With JPEG that means only about 3 sprite decodes per frame at 60 FPS (16.7 ms per frame ÷ 5.95 ms per decode ≈ 2.8).

A common situation has 30-70 monsters in the scene, each with its own frame index.
Unfortunately, a compressed cache isn't a solution.

200 MB of texture data = 200 MB of OpenGL RAM plus 200 MB of jME data. As soon as you create the Texture object with data, it's in memory. When you display that Texture, it's in OpenGL RAM.

3 fps is just ridiculous; there's definitely room to improve your extraction code. M&M never needed excessive amounts of RAM or processing power.

You can reduce the size of all your textures, both in memory and on disk, by 8 times if you use DXT1 compression. Also, load your textures dynamically in another thread so it doesn’t hurt your FPS.


I don’t have any trouble with “OpenGL RAM”. My problem is related to the Java heap space.

I can’t find a Java implementation of the DXT1 algorithm. And with a compressed cache I need to load many new images every frame, so asynchronous resource loading can’t help here.

normen said:
3 fps is just ridiculous; there's definitely room to improve your extraction code.

I didn't say anything about 3 fps. 3 sprites per frame is the maximum decoding rate I can reach (at 60 fps). I use javax.imageio.ImageIO to compress and decompress images.

Unfortunately, a compressed cache isn’t a solution. One frame is 16-32 milliseconds long (60-30 FPS). There is no compression algorithm that can decompress 50 images (256x256x24) that fast on the JVM while also having a sufficient compression factor.

normen said:
M&M never needed excessive amounts of RAM or processing power.

There is no JVM in MM7, and no Java heap. MM7 uses lightweight paletted sprites; they are rendered as-is and don't need to be decompressed first. That is the right approach, and it would be great if I could do the same with JME.
I'm looking for a "backdoor" way to allocate textures in a custom format (such as 8-bit paletted) outside the Java heap, and then render the textured geometry with JME.

I explained to you why the Java heap space gets filled: you create a Texture object and RAM is used. Simple. Only when you display it is OpenGL RAM used; before that it has to be in the Java heap, where else?

If the memory required for the 8-bit image data is okay, then just do as I said: create one texture for each NPC and copy the 8-bit bitmap data into that texture each frame.
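For illustration, a minimal sketch of that pattern in jME3 (class and method names are mine, and it assumes each decoded frame arrives as a raw RGBA byte array):

```java
import com.jme3.texture.Image;
import com.jme3.texture.Texture2D;
import com.jme3.util.BufferUtils;

import java.nio.ByteBuffer;

/** One Texture2D per NPC; only the backing pixel buffer is rewritten each frame. */
public class NpcSprite {

    private static final int SIZE = 256;          // sprite width/height in pixels
    private final ByteBuffer pixels;              // direct buffer, so the pixel data lives off the Java heap
    private final Image image;
    private final Texture2D texture;

    public NpcSprite() {
        pixels = BufferUtils.createByteBuffer(SIZE * SIZE * 4);   // RGBA8
        image = new Image(Image.Format.RGBA8, SIZE, SIZE, pixels);
        texture = new Texture2D(image);
    }

    public Texture2D getTexture() {
        return texture;
    }

    /** Copy one decoded frame (raw RGBA bytes) into the shared texture. */
    public void showFrame(byte[] decodedRgba) {
        pixels.clear();
        pixels.put(decodedRgba);
        pixels.flip();
        image.setData(0, pixels);    // re-attach the buffer to the image
        image.setUpdateNeeded();     // tell the renderer to re-upload it to the GPU
    }
}
```

The texture is assigned to the billboard's material once (e.g. `mat.setTexture("ColorMap", sprite.getTexture())` with the Unshaded material) and never replaced; only its backing data changes.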

Khm… It seems you've made me understand how it works! Thank you for the explanation! :slight_smile:

DXT1 textures do not need to be decompressed. DXT is a native format supported by OpenGL, so it uses 8 times less RAM in both system RAM and VRAM.

You don’t need to use any Java algorithms to compress them either. Use a tool like ATI Compressonator or NVIDIA Texture Tools to convert your textures to DDS format. This image format can be loaded by jME3.
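For example, loading one of the converted files could look like this (the asset path is made up; jME3's DDS loader keeps the DXT1 blocks compressed and uploads them to the GPU as-is):

```java
import com.jme3.asset.AssetManager;
import com.jme3.material.Material;
import com.jme3.texture.Texture;

public final class DdsSpriteLoader {

    /** Load a pre-compressed DDS frame and assign it to a billboard material. */
    public static void applyFrame(AssetManager assetManager, Material billboardMat) {
        Texture frame = assetManager.loadTexture("Textures/monsters/goblin_walk_0.dds");
        frame.setMagFilter(Texture.MagFilter.Nearest);   // optional: keep the crisp retro look
        billboardMat.setTexture("ColorMap", frame);      // "ColorMap" is the Unshaded.j3md slot
    }
}
```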



Paletted textures are also possible via shader-based decompression, but you only save 4 times the memory instead of 8.
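The data side of that approach might look roughly like this ("PalettedSprite.j3md" and the parameter names are hypothetical; the matching fragment shader that does the palette lookup is not shown):

```java
import com.jme3.asset.AssetManager;
import com.jme3.material.Material;
import com.jme3.texture.Image;
import com.jme3.texture.Texture2D;

import java.nio.ByteBuffer;

public final class PalettedSpriteSetup {

    /** Build a material holding an 8-bit index texture plus a 256-entry palette texture. */
    public static Material build(AssetManager assetManager,
                                 ByteBuffer indices,    // 256*256 bytes, one palette index per pixel
                                 ByteBuffer palette) {  // 256*4 bytes, the RGBA palette entries
        Image indexImage   = new Image(Image.Format.Luminance8, 256, 256, indices);
        Image paletteImage = new Image(Image.Format.RGBA8, 256, 1, palette);

        Material mat = new Material(assetManager, "MatDefs/PalettedSprite.j3md"); // hypothetical matdef
        mat.setTexture("IndexMap", new Texture2D(indexImage));
        mat.setTexture("Palette", new Texture2D(paletteImage));
        return mat;
    }
}
```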


These features of JME were a dark spot for me. Thank you =)

If they are 256x256, why not use one large texture per monster containing an atlas of all (or most of) the needed frames? That way you could greatly reduce the number of textures needed and do the animation in a shader. OpenGL itself is smart enough, by the way, to unload textures from the graphics card when they are not needed. (In detail: it keeps all textures but evicts old ones when VRAM runs low; this can even happen while rendering, which is something of a performance killer if it happens often, but it means you don't have a hard limit from the hardware.) I suggest using DXT, however.
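A rough CPU-side sketch of picking one frame out of such an atlas by rewriting the quad's texture coordinates (the 8x8 grid and class name are my assumptions; the same selection could instead be done in a vertex shader with a frame-index uniform):

```java
import com.jme3.scene.Geometry;
import com.jme3.scene.VertexBuffer.Type;
import com.jme3.util.BufferUtils;

public final class AtlasFrames {

    private static final int COLS = 8, ROWS = 8;   // assumed: 2048x2048 atlas of 256x256 frames

    /** Point the billboard quad at frame (col, row) of the atlas; row 0 is the top row. */
    public static void selectFrame(Geometry quad, int col, int row) {
        float w = 1f / COLS, h = 1f / ROWS;
        float u = col * w;
        float v = 1f - (row + 1) * h;
        // Same vertex order as jME3's Quad mesh: bottom-left, bottom-right, top-right, top-left.
        float[] uv = {
            u,     v,
            u + w, v,
            u + w, v + h,
            u,     v + h
        };
        quad.getMesh().setBuffer(Type.TexCoord, 2, BufferUtils.createFloatBuffer(uv));
    }
}
```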

An idea for a compressed cache would be to only store the differences from the last frame.
