Texture compression like DXT but lossless

Hey,

Is there any texture compression method supported by jMonkey that saves memory like DXT but lossless at least on the red and green channels?

I encode crucial information in the texture’s red and green channels, so I can’t afford even small alterations to them.

Thanks,
Adam.

you passing secret codes to the CIA or something?

Hehe, no:)

I don’t believe OpenGL itself supports lossless texture compression.

Hardware normally only supports DXT and, to some extent, ETC.

Might I ask what you encode there? Maybe there are other solutions.

@Empire Phoenix said: Hardware normally only supports DXT and, to some extent, ETC.

Might I ask what you encode there? Maybe there are other solutions.


I’m developing a strategy game where the regions are encoded by color (the shader “knows” which region it should highlight, for example, based on the color of the province map I have).

This is why I need lossless compression, if such a thing exists, because even a very small loss of data turns my region-map algorithm upside down.

@jadam said: I'm developing a strategy game where the regions are encoded by color (the shader "knows" which region it should highlight, for example, based on the color of the province map I have).

This is why I need lossless compression, if such a thing exists, because even a very small loss of data turns my region-map algorithm upside down.

Are you talking about file format or texture format in the GPU memory?

If I load a PNG into a texture with no min or mag filtering then I expect to get exactly what I put into it when sampling in the shader.
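For what it’s worth, a jME3 sketch of that setup (the asset path here is hypothetical; the filter enums are jME3’s `Texture.MinFilter`/`Texture.MagFilter`):

```java
// Sketch: sample the data texture with nearest filtering and no mipmaps,
// so the shader sees the exact texel values that were in the PNG.
Texture provinceMap = assetManager.loadTexture("Textures/provinceMap.png"); // hypothetical path
provinceMap.setMinFilter(Texture.MinFilter.NearestNoMipMaps);
provinceMap.setMagFilter(Texture.MagFilter.Nearest);
```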

I talk about texture format in the GPU memory, like DXT.

I’d use two kinds of textures: display textures (lossy compression) and “data textures” (uncompressed but with a low bit depth).
E.g. if there are four kinds of areas that the shader needs to handle differently, use a data texture with a depth of two bits.
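A minimal sketch of that packing idea in plain Java (class and method names are hypothetical): with four area kinds, each pixel needs only 2 bits, so four pixels fit in one byte.

```java
/** Hypothetical sketch: pack 2-bit area codes (0-3), four pixels per byte. */
public class AreaCodePacker {

    /** Write code (0-3) for pixel index i into the packed array. */
    public static void set(byte[] packed, int i, int code) {
        int shift = (i & 3) * 2;                   // position of this pixel within its byte
        packed[i >> 2] &= ~(0b11 << shift);        // clear the old 2 bits
        packed[i >> 2] |= (code & 0b11) << shift;  // store the new code
    }

    /** Read the 2-bit code for pixel index i. */
    public static int get(byte[] packed, int i) {
        return (packed[i >> 2] >> ((i & 3) * 2)) & 0b11;
    }
}
```

At 2 bits per pixel this is a 16× saving over a 32-bit RGBA texture, at the cost of unpacking the values yourself.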

Encoding stuff directly in the colors tends to impose design choices that you’d rather make differently, as you just discovered :)

Actually, I already have the 2 types of textures: the display one, which is already in DXT format; and now I’m speaking about the data texture, which also has to have a high resolution for the provinces. I could decrease the depth of the image to 16 bits to fit my needs, but it will still be large: 8192 × 8192 × 2 bytes ≈ 128 MB…

This is why I’m looking for a way to compress it if possible.
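To put a number on that, here is the back-of-the-envelope arithmetic (assuming an 8192 × 8192 texture at 16 bits per texel, as described above):

```java
// Back-of-the-envelope VRAM cost of an uncompressed data texture.
public class TextureMemory {
    public static long bytes(int width, int height, int bytesPerTexel) {
        return (long) width * height * bytesPerTexel;
    }
}
```

`TextureMemory.bytes(8192, 8192, 2)` gives 134,217,728 bytes, i.e. exactly 128 MiB, before any mipmaps.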

Why do you need 16 bits for the data texture?
This sounds like an awful lot of data per pixel.

You might be able to put the data into the vertex shaders, then you’d avoid sending the data for each pixel.

Well, there are around ~4000 regions in the map, each having its own color generated from its ID. That’s 12 bits; I said 16 just since that’s a rounder number in computing (and because I want to avoid using the alpha channel for encoding if possible, as it is harder to modify).
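One possible scheme for spreading a 12-bit ID over the red and green channels, keeping alpha untouched (a hypothetical sketch, not the poster’s actual code): put 6 bits in each channel, which covers 4096 IDs.

```java
/** Hypothetical sketch: split a 12-bit region ID across the R and G channels. */
public class RegionIdCodec {

    /** High 6 bits go to red, low 6 bits to green (each channel holds 0-63). */
    public static int[] encode(int id) {
        return new int[] { (id >> 6) & 0x3F, id & 0x3F };
    }

    /** Rebuild the ID from the two channel values. */
    public static int decode(int r, int g) {
        return (r << 6) | g;
    }
}
```

This round-trips exactly as long as the texture is stored uncompressed; any lossy format (DXT included) would perturb the channel values and break the decode.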

How would you go about encoding this information in the vertex shader?

I don’t think there is any way to reduce the load here. If you need a big texture with high color depth, no data loss, and fast random access, that means uncompressed. There is always a tradeoff - you cannot really have any meaningful lossless compression for random-access data.

One thing you can do is to NOT generate the mipmaps - this could reduce GPU memory usage by 30% or so.
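The ~30% figure comes from the geometric series behind the mip chain: each level is a quarter the size of the previous one, so the chain adds 1/4 + 1/16 + … ≈ 1/3 on top of the base level. A quick check:

```java
// Total texel count of an N×N texture with a full mip chain.
// Each level halves the width and height, down to 1×1.
public class MipOverhead {
    public static long totalTexels(int size) {
        long total = 0;
        for (int s = size; s >= 1; s >>= 1) {
            total += (long) s * s;
        }
        return total;
    }
}
```

For an 8192² texture the base level is 67,108,864 texels and the full chain is 89,478,485, i.e. roughly a third extra.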

Thanks guys for the inputs, really appreciated!

Does anyone know how I can save an ARGB4444 format image to disk using jMonkey, as JmeSystem.writeImageFile() does not seem to work?

@abies said: I don't think there is any way to reduce the load here. If you need a big texture with high color depth, no data loss, and fast random access, that means uncompressed. There is always a tradeoff - you cannot really have any meaningful lossless compression for random-access data.

One thing you can do is to NOT generate the mipmaps - this could reduce GPU memory usage by 30% or so.


How can I prevent the mipmaps from being generated?

@jadam said: Well, there are around ~4000 regions in the map, each having its own color generated from its ID. That's 12 bits; I said 16 just since that's a rounder number in computing (and because I want to avoid using the alpha channel for encoding if possible, as it is harder to modify).

I’m still not sure why you’d need the full ID information inside the shader, but I can imagine some scenarios so I’ll just roll along with that.

@jadam said: How would you go about encoding this information in the vertex shader?

Submit the value to each vertex, pass the value to fragment shaders as “varying” variables.
These are passed to fragment shaders in interpolated form, so if all corners of a triangle have the same ID value, the fragment shader will always see the same value inside the triangle.
I can’t help with the exact calls that JME offers for this kind of stuff, I’m still learning the ins and outs of shaders myself, but that seems to be the approach one should use.
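In GLSL terms, the idea looks roughly like this (attribute and variable names are my own, not verified against jME’s shader conventions):

```glsl
// --- vertex shader (sketch) ---
attribute float inRegionId;   // per-vertex region ID
varying float regionId;

void main() {
    regionId = inRegionId;    // same value at all three corners
    // ... usual position transform ...
}

// --- fragment shader (sketch) ---
varying float regionId;       // interpolated, but constant across the triangle
                              // when all three corners carry the same ID
void main() {
    // use regionId to pick colors, highlights, etc.
}
```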

Some details:

  • Adjacent region borders can’t share vertices anymore, because the region ID needs to be different. So you have more vertices.
  • Instead of precomputing the color values and generating a bitmap, you can let the GPU do the work: generate the fragment color from a color constant or textures, apply lights (or don’t), possibly apply modifications depending on whatever flags the vertex data contains (e.g. to highlight a region).
  • You can use a 1D texture (essentially a one-dimensional array) to map from region IDs to actual flags for the fragment shader. That way, you don’t need to resend the entire vertex mesh on every state change, but just the 1D texture.
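The 1D-lookup point in the list above could be sketched like this in a fragment shader (uniform names and the 4096-entry width are assumptions, not verified jME conventions):

```glsl
// Sketch: a 4096x1 texture maps region ID -> per-region flags.
// On a state change, only this small texture needs re-uploading.
uniform sampler2D m_RegionFlags;
varying float regionId;

void main() {
    // sample the center of texel `regionId` in a 4096-wide row
    vec4 flags = texture2D(m_RegionFlags, vec2((regionId + 0.5) / 4096.0, 0.5));
    float highlighted = flags.r;  // e.g. red channel = "highlight this region"
    // ... combine with the base color ...
}
```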

Note that I haven’t done any of this myself; it’s just what I gleaned from descriptions of how other people have done things and from several days of reading the OpenGL and GLSL specs.
I.e. this is just a blueprint that should work, and it’s certainly the approach I’d be taking if I had your requirements, but I’d need someone to point me towards some sections of the JME API that I haven’t yet explored well enough.

@toolforger said: I'm still not sure why you'd need the full ID information inside the shader, but I can imagine some scenarios so I'll just roll along with that.

Submit the value to each vertex, pass the value to fragment shaders as “varying” variables.
These are passed to fragment shaders in interpolated form, so if all corners of a triangle have the same ID value, the fragment shader will always see the same value inside the triangle.
I can’t help with the exact calls that JME offers for this kind of stuff, I’m still learning the ins and outs of shaders myself, but that seems to be the approach one should use.

Some details:

  • Adjacent region borders can’t share vertices anymore, because the region ID needs to be different. So you have more vertices.
  • Instead of precomputing the color values and generating a bitmap, you can let the GPU do the work: generate the fragment color from a color constant or textures, apply lights (or don’t), possibly apply modifications depending on whatever flags the vertex data contains (e.g. to highlight a region).
  • You can use a 1D texture (essentially a one-dimensional array) to map from region IDs to actual flags for the fragment shader. That way, you don’t need to resend the entire vertex mesh on every state change, but just the 1D texture.

Note that I haven’t done any of this myself; it’s just what I gleaned from descriptions of how other people have done things and from several days of reading the OpenGL and GLSL specs.
I.e. this is just a blueprint that should work, and it’s certainly the approach I’d be taking if I had your requirements, but I’d need someone to point me towards some sections of the JME API that I haven’t yet explored well enough.

The entire map is a big quad with a texture on it, so there are no vertices around the individual regions, which is why I don’t think the approach you described would work.

Ah, now that’s an entirely different picture.
Do you really need to define regions on a per-pixel basis?

@toolforger said: Ah, now that's an entirely different picture. Do you really need to define regions on a per-pixel basis?
Well, I haven’t had a better idea so far…

An 8192x8192 texture? But my monitor at max can only show 1920x1200. Seems like more data is provided than is needed.

We may need to know more about what you are doing and why you can’t page in relevant sections as needed.