@Empire Phoenix said:
Hardware normally only supports DXT and, to some extent, ETC.
Might I ask what you encode there? Maybe there are other solutions.
I develop a strategy game where the regions are encoded with colors (the shader “knows” which region it should highlight, for example, based on the color in the province map I have).
This is why I need lossless compression, if such a thing exists, because even a very small loss of data throws my region-map algorithm upside down.
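To illustrate why even one bit of loss matters here, a minimal plain-Java sketch (colors and IDs are hypothetical): region lookup requires getting the exact color back, and a lossy codec that nudges a single channel by one step lands on a different region, or on none at all.

```java
import java.util.HashMap;
import java.util.Map;

public class RegionLookup {
    // Hypothetical province map: exact packed RGB -> region ID.
    static final Map<Integer, Integer> REGIONS = new HashMap<>();
    static {
        REGIONS.put(rgb(200, 16, 48), 1);  // region 1's color
        REGIONS.put(rgb(200, 16, 49), 2);  // a neighbouring ID differs by one LSB
    }

    static int rgb(int r, int g, int b) {
        return (r << 16) | (g << 8) | b;
    }

    // Returns the region ID, or -1 if the sampled color matches no region.
    static int regionAt(int sampledRgb) {
        return REGIONS.getOrDefault(sampledRgb, -1);
    }

    public static void main(String[] args) {
        System.out.println(regionAt(rgb(200, 16, 48)));  // exact sample: prints 1
        // A codec that shifts blue by just 1 yields a *different* region:
        System.out.println(regionAt(rgb(200, 16, 49)));  // prints 2
    }
}
```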
@jadam said:
I develop a strategy game where the regions are encoded with colors (the shader "knows" which region it should highlight, for example, based on the color in the province map I have).
This is why I need lossless compression, if such a thing exists, because even a very small loss of data throws my region-map algorithm upside down.
Are you talking about the file format, or the texture format in GPU memory?
If I load a PNG into a texture with no min or mag filtering, I expect to get back exactly what I put in when sampling it in the shader.
I’d use two kinds of textures: display textures (lossy compression) and “data textures” (uncompressed but with a low bit depth).
E.g. if there are four kinds of areas that the shader needs to handle differently, use a data texture with a depth of two bits.
Encoding stuff directly in the colors tends to impose design choices that you’d rather make differently, as you just discovered.
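The two-bit data texture idea can be sketched CPU-side before upload (plain Java; names and the four-types assumption are hypothetical): four area types fit in 2 bits, so four texels pack into one byte.

```java
public class TwoBitMap {
    // Pack 2-bit area types (0..3), four to a byte; index = texel position.
    static byte[] pack(int[] types) {
        byte[] out = new byte[(types.length + 3) / 4];
        for (int i = 0; i < types.length; i++) {
            out[i / 4] |= (types[i] & 0x3) << ((i % 4) * 2);
        }
        return out;
    }

    // Read the 2-bit type stored for texel i.
    static int unpack(byte[] data, int i) {
        return (data[i / 4] >> ((i % 4) * 2)) & 0x3;
    }

    public static void main(String[] args) {
        int[] types = {0, 3, 1, 2, 2, 1};
        byte[] packed = pack(types);          // 6 texels -> 2 bytes
        for (int i = 0; i < types.length; i++) {
            System.out.print(unpack(packed, i) + " ");  // prints 0 3 1 2 2 1
        }
    }
}
```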
Actually, I already have the two types of textures: the display one, which is already in DXT format; now I’m talking about the data texture, which also needs a high resolution for the provinces. I could decrease the image depth to 16 bits to fit my needs, but it would still be large: 8192 × 8192 × 2 = 134,217,728 bytes (128 MiB).
This is why I’m looking for a way to compress it, if possible.
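The arithmetic behind that size, for comparison at a few depths (the 8192 × 8192 resolution is taken from the post above; other depths are shown only for reference):

```java
public class TextureSize {
    // Uncompressed texture size: width * height * bytes per texel.
    static long bytes(int w, int h, int bytesPerTexel) {
        return (long) w * h * bytesPerTexel;
    }

    public static void main(String[] args) {
        System.out.println(bytes(8192, 8192, 4));  // 268435456 (256 MiB, RGBA8)
        System.out.println(bytes(8192, 8192, 2));  // 134217728 (128 MiB, 16 bpp)
        System.out.println(bytes(8192, 8192, 1));  // 67108864  (64 MiB, 8 bpp)
    }
}
```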
Well, there are around 4000 regions in the map, each having its own color generated from its ID. That’s 12 bits; I said 16 just because it’s a rounder number in computing (and because I want to avoid using the alpha channel for encoding if possible, as it is harder to modify).
How would you go about encoding this information in the vertex shader?
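For reference, a 12-bit region ID fits in just the red and green channels of an ordinary 8-bit RGB texture, leaving blue and alpha untouched. A plain-Java sketch (the channel layout here is one hypothetical choice, not the poster's actual scheme); the decode step is what a fragment shader would do after sampling with nearest filtering:

```java
public class IdColor {
    // Pack a 12-bit region ID (0..4095): r = high 8 bits, g = low 4 bits.
    static int[] encode(int id) {
        return new int[]{ id >> 4, id & 0xF };
    }

    // Reassemble the ID from the sampled channel values (lossless round trip).
    static int decode(int r, int g) {
        return (r << 4) | g;
    }

    public static void main(String[] args) {
        int[] c = encode(2931);
        System.out.println(decode(c[0], c[1]));  // prints 2931
    }
}
```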
I don’t think there is any way to reduce the load here. If you need a big texture with high color depth, no data loss, and fast random access, that means uncompressed. There is always a tradeoff: you cannot really have any meaningful lossless compression for random-access data.
One thing you can do is NOT generate the mipmaps; this could reduce GPU memory usage by 30% or so.
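That roughly-30% figure follows from the mip chain being a geometric series: each level has a quarter of the texels of the previous one, so the full chain adds about one third on top of the base level. A quick check for an 8192 × 8192 texture:

```java
public class MipMemory {
    // Total texels of a full mip chain for a square power-of-two texture.
    static long mipChainTexels(int size) {
        long total = 0;
        for (int s = size; s >= 1; s /= 2) {
            total += (long) s * s;   // each level has a quarter of the previous
        }
        return total;
    }

    public static void main(String[] args) {
        long base = (long) 8192 * 8192;
        long withMips = mipChainTexels(8192);
        System.out.printf("overhead: %.1f%%%n",
                100.0 * (withMips - base) / base);  // prints overhead: 33.3%
    }
}
```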
@abies said:
I don't think there is any way to reduce the load here. If you need a big texture with high color depth, no data loss, and fast random access, that means uncompressed. There is always a tradeoff: you cannot really have any meaningful lossless compression for random-access data.
One thing you can do is NOT generate the mipmaps; this could reduce GPU memory usage by 30% or so.
How can I prevent the mipmaps from being generated?
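In jME3, a common way to get a filter-free, mip-free data texture is via the texture's min and mag filters; a sketch assuming the jME3 `Texture` API (the asset path and material parameter name are hypothetical, and this fragment is not self-contained):

```java
import com.jme3.asset.AssetManager;
import com.jme3.material.Material;
import com.jme3.texture.Texture;

public class DataTextureSetup {
    static Texture loadProvinceMap(AssetManager assetManager, Material material) {
        Texture provinceMap = assetManager.loadTexture("Textures/provinces.png");
        // NearestNoMipMaps: no mip chain is used for minification.
        provinceMap.setMinFilter(Texture.MinFilter.NearestNoMipMaps);
        // Nearest magnification: exact texel values, no blending of neighbours.
        provinceMap.setMagFilter(Texture.MagFilter.Nearest);
        material.setTexture("ProvinceMap", provinceMap);
        return provinceMap;
    }
}
```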
@jadam said:
Well, there are around 4000 regions in the map, each having its own color generated from its ID. That's 12 bits; I said 16 just because it's a rounder number in computing (and because I want to avoid using the alpha channel for encoding if possible, as it is harder to modify).
I’m still not sure why you’d need the full ID information inside the shader, but I can imagine some scenarios, so I’ll just roll with that.
@jadam said:
How would you go about encoding this information in the vertex shader?
Submit the value with each vertex and pass it to the fragment shader as a “varying” variable.
These are passed to fragment shaders in interpolated form, so if all corners of a triangle have the same ID value, the fragment shader will always see the same value inside the triangle.
I can’t help with the exact calls that JME offers for this kind of stuff, I’m still learning the ins and outs of shaders myself, but that seems to be the approach one should use.
Some details:
Adjacent region borders can’t share vertices anymore, because the region IDs need to be different, so you have more vertices.
Instead of precomputing the color values and generating a bitmap, you can let the GPU do the work: generate the fragment color from a color constant or textures, apply lights (or don’t), possibly apply modifications depending on whatever flags the vertex data contains (e.g. to highlight a region).
You can use a 1D texture (essentially a one-dimensional array) to map from region IDs to actual flags for the fragment shader. That way, you don’t need to resend the entire vertex mesh on every state change, but just the 1D texture.
Note that I haven’t done all this myself; it’s just what I gleaned from descriptions of how other people have done things and from several days of reading the OpenGL and GLSL specs.
I.e. this is just a blueprint that should work, and it’s certainly the approach I’d be taking if I had your requirements, but I’d need someone to point me towards some sections of the JME API that I haven’t yet explored well enough.
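The 1D lookup-texture idea above can be sketched CPU-side in plain Java (the array stands in for the 4096 × 1 texture; names and the flag layout are hypothetical). On a state change, only one array entry changes and only this small texture would be re-uploaded, not the mesh:

```java
public class RegionFlags {
    static final int HIGHLIGHT = 1;          // one bit per per-region flag

    // CPU-side mirror of the 1D lookup texture: index = region ID, value = flags.
    int[] flags = new int[4096];

    // Flip one region's highlight bit; afterwards, re-upload only this array.
    void setHighlighted(int regionId, boolean on) {
        if (on) flags[regionId] |= HIGHLIGHT;
        else    flags[regionId] &= ~HIGHLIGHT;
    }

    // What the fragment shader would do: fetch the texel at index regionId.
    boolean isHighlighted(int regionId) {
        return (flags[regionId] & HIGHLIGHT) != 0;
    }

    public static void main(String[] args) {
        RegionFlags rf = new RegionFlags();
        rf.setHighlighted(1234, true);
        System.out.println(rf.isHighlighted(1234));  // prints true
        System.out.println(rf.isHighlighted(1235));  // prints false
    }
}
```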
@toolforger said:
I'm still not sure why you'd need the full ID information inside the shader, but I can imagine some scenarios, so I'll just roll with that.
Submit the value with each vertex and pass it to the fragment shader as a “varying” variable.
These are passed to fragment shaders in interpolated form, so if all corners of a triangle have the same ID value, the fragment shader will always see the same value inside the triangle.
I can’t help with the exact calls that JME offers for this kind of stuff, I’m still learning the ins and outs of shaders myself, but that seems to be the approach one should use.
Some details:
Adjacent region borders can’t share vertices anymore, because the region IDs need to be different, so you have more vertices.
Instead of precomputing the color values and generating a bitmap, you can let the GPU do the work: generate the fragment color from a color constant or textures, apply lights (or don’t), possibly apply modifications depending on whatever flags the vertex data contains (e.g. to highlight a region).
You can use a 1D texture (essentially a one-dimensional array) to map from region IDs to actual flags for the fragment shader. That way, you don’t need to resend the entire vertex mesh on every state change, but just the 1D texture.
Note that I haven’t done all this myself; it’s just what I gleaned from descriptions of how other people have done things and from several days of reading the OpenGL and GLSL specs.
I.e. this is just a blueprint that should work, and it’s certainly the approach I’d be taking if I had your requirements, but I’d need someone to point me towards some sections of the JME API that I haven’t yet explored well enough.
The entire map is one big quad with a texture on it, so there are no vertices around the individual regions; this is why I don’t think the approach you described would work.