[Solved] Compress data sent to shaders

Hello fellow monkeys,

I’m still working on my blockworld voxel game, and I recently switched to TextureArrays for the GreedyMesher so it can combine more different blocks into one mesh. Since that works quite well, I went on to implement light and the sort of fake ambient occlusion that you see in most blockworld games.

Now I have to send 5 more floats per vertex to the GPU (4 floats for ColorRGBA plus 1 float for the ambient occlusion factor). That actually worked out quite well too, except that some ambient occlusion values seem to end up in the wrong corner; I guess I can fix that on my own though. But since the ambient occlusion factor can only be 0, 1/3, 2/3 or 3/3 (meaning 4 different values), and the color values r, g, b and a (a is used as sunlight) can only have 32 possible values each, I figured I could pack them all into one int and send only those 32 bits to the shader instead of the 5 * 32 bits I currently send.
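Roughly what I have in mind on the Java side is something like this (just a sketch, names and bit layout made up for illustration):

int packVertexLight(int r, int g, int b, int sun, int ao) {
    // each color channel uses 5 bits (0..31), ambient occlusion uses 2 bits (0..3)
    int packed = sun & 0x1F;        // bits 0-4: sunlight
    packed |= (b & 0x1F) << 5;      // bits 5-9: blue
    packed |= (g & 0x1F) << 10;     // bits 10-14: green
    packed |= (r & 0x1F) << 15;     // bits 15-19: red
    packed |= (ao & 0x3) << 20;     // bits 20-21: ambient occlusion factor
    return packed;                  // 22 bits used, the rest stays 0
}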

When I thought I was done, I got an error message saying that “GL_EXT_gpu_shader4” needs to be enabled for bitwise operations.
So quick first question (and I actually hope the answer is not just “yes” :smiley: )
Does that mean GLSL 4, which is quite new if I’m not mistaken, needs to be supported just for simple bitwise operations, or is there a difference in bit-shifting syntax between Java and GLSL?
Since I’m even newer to shaders than GLSL 4 is, I guess this is one of those noob questions.

But when I enabled it, I was told that the int type is allowed neither for varying nor for attribute variables. Some research told me that there is no real int type in GLSL anyway, and that int is only guaranteed to have 16 bits.

So the final question is: Is there any way to put these 22 bits of information (5 bits for each of the 4 color channels + 2 bits for the 4 possible ambient occlusion values) into a single value that I can send to the shader per vertex? The wiki article about Polygon Meshes seems to be missing some details about the vertex buffers and gives only little information about which data types are actually sent (like 3 floats for the position), but I get the impression that everything has to be sent as a float or some collection of floats like Vector2f → vec2 or Vector3f → vec3 etc., is that right?
And if there is a way to pack it all into “one value” that I can send, which VertexBuffer.Type would I have to use?
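Just to illustrate what I mean, I imagine the buffer setup could look roughly like this if a single float per vertex is the way to go (pure guesswork on my side; Type.TexCoord2 is just an arbitrary slot I assume to be unused):

import com.jme3.scene.Mesh;
import com.jme3.scene.VertexBuffer;
import com.jme3.util.BufferUtils;
import java.nio.FloatBuffer;

public class PackedLightBuffer {
    // Sketch: upload one packed value per vertex as a single float in a spare buffer slot.
    public static void setPackedLightBuffer(Mesh mesh, int[] packedPerVertex) {
        FloatBuffer fb = BufferUtils.createFloatBuffer(packedPerVertex.length);
        for (int packed : packedPerVertex) {
            // 22 bits fit exactly into a float's 24-bit mantissa, so the cast is lossless
            fb.put((float) packed);
        }
        fb.flip();
        mesh.setBuffer(VertexBuffer.Type.TexCoord2, 1, fb);
    }
}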

So thanks a lot for your attention, thanks even more in advance if you can help me in any way and have a nice day!
Greetings, Samwise

Which renderer do you use in your app configuration?

That was a fast response lol
For my settings I use put("Renderer", AppSettings.LWJGL_OPENGL2); and to make sure it’s not using some old saved settings from when I tried out other renderers, I also printed it out at runtime: it says “LWJGL-OpenGL2”.
I hope that’s actually what you were asking for though, I never looked into the renderers too much yet.

could you try to set LWJGL_OPENGL4 or LWJGL_OPENGL45?

No, it means you need to enable the extension.
#extension GL_EXT_gpu_shader4 : enable
but it might not be implemented in every driver (IIRC Mesa does not support it).

So the best alternative is to use a higher version that natively supports bitwise operations; they have been supported since GLSL 1.3. Set your renderer to OPENGL3 or higher and change your j3md to use GLSL 1.3 (130) or higher. For example,
VertexShader GLSL120 : xyz.vert
will become
VertexShader GLSL130 : xyz.vert
and the same for the fragment shader.
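On the Java side, selecting the renderer could look roughly like this (a sketch, where app stands for your SimpleApplication instance; the exact constant name depends on your jME version):

AppSettings settings = new AppSettings(true); // start from the defaults
settings.setRenderer(AppSettings.LWJGL_OPENGL3); // or a higher OpenGL constant
app.setSettings(settings);
app.setShowSettings(false); // so the settings dialog does not override it
app.start();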

Most modern games set OpenGL 3.2 (GLSL 1.50) as their minimum requirement, so you shouldn’t have any issues unless you are targeting very old hardware.

When you pass an int between shaders, you must declare it as flat; this tells the GPU not to interpolate it.
E.g.
Vertex

flat out int X;

Frag

flat in int X;

I’m not sure about the size of ints in glsl.

That did result in

SEVERE: Failed to create display
java.lang.UnsupportedOperationException: Unsupported renderer: LWJGL-OpenGL4
	at com.jme3.system.lwjgl.LwjglContext.initContextFirstTime(LwjglContext.java:239)
	at com.jme3.system.lwjgl.LwjglContext.internalCreate(LwjglContext.java:377)
	at com.jme3.system.lwjgl.LwjglAbstractDisplay.initInThread(LwjglAbstractDisplay.java:117)
	at com.jme3.system.lwjgl.LwjglAbstractDisplay.run(LwjglAbstractDisplay.java:211)
	at java.lang.Thread.run(Thread.java:748)

Jul 27, 2018 2:19:34 PM com.jme3.system.lwjgl.LwjglAbstractDisplay run
SEVERE: Display initialization failed. Cannot continue.

@RiccardoBlb Nice, thanks a lot! That definitely fixed the bit-shift error message, even when I remove the line “#extension GL_EXT_gpu_shader4 : enable” (and I guess I now understand what an extension actually is).

Since you seem to know the dance of shaders (is there such a phrase?)
Is there a similar way to get rid of the “#extension GL_EXT_texture_array : enable” I have to use for the TextureArrays, or can it be considered a super standard extension that almost all cards support?
The error message when removing that extension also pointed me towards “#extension GL_NV_texture_array : enable”, would that be a better choice?
Also, a new error message appeared: “global variable gl_FragColor is deprecated after version 120”. What should I replace it with?
Although I feel like I’m getting closer to the solution, this made even more questions pop up in my mind, sorry for that.

It seems you use LWJGL2 instead of LWJGL3 as the renderer backend. Which dependency manager do you use? Gradle or Maven?

For best compatibility I wouldn’t recommend using any extensions if you can avoid it.
TextureArrays are supported in OpenGL 3, so it should be fine to remove the extension if you use GLSL130.

Also, a new error message appeared: “global variable gl_FragColor is deprecated after version 120”. What should I replace it with?

Things changed a little with GLSL 130; you need to declare your output color yourself now:

out vec4 outFragColor;

Also, all the textureXXXX functions have been merged into a single texture() function that behaves differently depending on its arguments.
If you want to quickly convert old code, you can import Common/ShaderLib/GLSLCompat.glsllib at the beginning of your shader; the macros in this file will convert the old syntax when the shader is compiled.


Alright, I got it working, at least in terms of “it doesn’t throw an error”, by converting everything to 130 myself and removing the texture array extension.
Also, I combined my values into one float in the following style:

int colorSSAOF = 0;
colorSSAOF += ((lightInt) & 0b11111);             // sunlight, bits 0-4
colorSSAOF += ((lightInt >> 5) & 0b11111) << 5;   // blue, bits 5-9
colorSSAOF += ((lightInt >> 10) & 0b11111) << 10; // green, bits 10-14
colorSSAOF += ((lightInt >> 15) & 0b11111) << 15; // red, bits 15-19
colorSSAOF += ((identifier >> 52) & 0b11) << 20;  // ambient occlusion, bits 20-21

with lightInt being a bitmask in the format 0b000000000000RRRRRGGGGGBBBBBSSSSS
and identifier being a more complex 64-bit value with the ambient occlusion bits at positions 53 and 54.

That should fit into a float, and although I currently get some quite strange colors, I guess I can figure that out on my own.
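For reference, the inverse of that packing, which the shader will eventually have to mirror, looks like this as a Java-side sanity check (just a sketch following the layout above):

int sun = colorSSAOF & 0b11111;          // bits 0-4
int b   = (colorSSAOF >> 5) & 0b11111;   // bits 5-9
int g   = (colorSSAOF >> 10) & 0b11111;  // bits 10-14
int r   = (colorSSAOF >> 15) & 0b11111;  // bits 15-19
int ao  = (colorSSAOF >> 20) & 0b11;     // bits 20-21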
So thanks a lot and thanks a lot again, and I consider this topic solved!

If you pass it to the fragment shader, remember to declare it as flat or it will be interpolated
https://www.khronos.org/opengl/wiki/Type_Qualifier_(GLSL)#Interpolation_qualifiers
This might be the issue you are experiencing.