Shaders and JME3 Material system

Hi all,



For a shader I'm making, I need to render the scene normals to an RGB texture.

The normals need to be in view space.

The shader works in RenderMonkey, but I can't make it work in JME3.

Here is the vertex shader working in RenderMonkey:


varying vec3 normal;

void main()
{
   normal = (gl_ModelViewMatrix * vec4(gl_Normal, 0.0)).xyz;
   gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}



The fragment shader:


varying vec3 normal;

void main()
{  
   //pack view space normal to rgb texture
   gl_FragColor = vec4(normal.xy * 0.5 + 0.5, -normal.z * 0.5 + 0.5, 0.0);
}



The problem is that in JME3 the gl_ModelViewMatrix multiplication has no effect, as if it were the identity matrix.

I used the JME3 uniforms and attributes: g_WorldViewMatrix and inNormal.


uniform mat4 g_WorldViewProjectionMatrix;
uniform mat4 g_WorldViewMatrix;

attribute vec3 inPosition;
attribute vec3 inNormal;

varying vec3 normal;

void main(void)
{
   normal=(g_WorldViewMatrix*vec4(inNormal,1.0)).xyz;
   gl_Position= g_WorldViewProjectionMatrix*vec4(inPosition,1.0);
}


But strangely, the normals seem to fade in and out as I move the camera... messing up everything else.

I also tried g_NormalMatrix, but it gives a pitch-black screen...

My questions:
- Is g_WorldViewMatrix the equivalent of OpenGL's gl_ModelViewMatrix? (The javadoc seems to say so.)
- Why does the JME3 material system "override" the OpenGL matrices and attributes (i.e. gl_Vertex -> inPosition, gl_Normal -> inNormal)?
- Does anyone have an idea how to fix the problem?  :P


nehon
Is g_WorldViewMatrix the equivalent of OpenGL's gl_ModelViewMatrix? (The javadoc seems to say so.)

Yes.

Why does the JME3 material system "override" the OpenGL matrices and attributes (i.e. gl_Vertex -> inPosition, gl_Normal -> inNormal)?

This ensures forward compatibility with OpenGL 3/4, which do not have the built-in attributes and uniforms.

Does anyone have an idea how to fix the problem?

Yeah, I think so. It happens because you're multiplying by vec4(inNormal, 1.0) and not vec4(inNormal, 0.0) like in the RenderMonkey example. You have to use zero for the W component, otherwise the matrix's translation gets applied, which you don't want (normals, being directions, should not be affected by translation).

So in your shader, replace this line:

normal=(g_WorldViewMatrix*vec4(inNormal,1.0)).xyz;


with this:

normal=(g_WorldViewMatrix*vec4(inNormal,0.0)).xyz;


and it should work fine.
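
As a side note, g_NormalMatrix should also work for this once the types match: if I remember correctly it is a mat3 (the inverse transpose of the world-view matrix), so you multiply the vec3 normal with it directly instead of building a vec4. I'm not sure why it gave you a black screen, but a type mismatch there (multiplying the mat3 by a vec4) would make the shader fail to compile. Just as a rough sketch, that variant would look something like:


uniform mat4 g_WorldViewProjectionMatrix;
uniform mat3 g_NormalMatrix; // inverse transpose of the world-view matrix

attribute vec3 inPosition;
attribute vec3 inNormal;

varying vec3 normal;

void main()
{
   // mat3 * vec3: no W component involved, and non-uniform scaling is handled correctly
   normal = g_NormalMatrix * inNormal;
   gl_Position = g_WorldViewProjectionMatrix * vec4(inPosition, 1.0);
}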

Yay!! That worked, thanks a lot.

And thank you for the heads-up about the built-in attributes.



Another question though.



I have to reconstruct the 3D view-space position from depth.

I have an accurate method using a frustum corner, but for it I need the linear depth, calculated like this:


(-ViewPosition.z - in_NearPlane)/(in_FarPlane - in_NearPlane)



To avoid another pass, I use what I assume is the "hardware" depth (I just attach a texture with the Depth32F format as the depth target of the FrameBuffer rendering the scene, and it ends up holding a depth buffer :p).
Do you know how this depth buffer is computed? Is it linear, non-linear, or (z/w)?

Another thing: I don't see the A32R32G32B32F texture format (only RGB32F). Is that intended, or is it maybe specific to later OpenGL implementations?
It would be convenient for storing normals and depth in one pass (I need a 32-bit float channel for depth).

Thanks again.
To avoid another pass, I use what I assume is the "hardware" depth (I just attach a texture with the Depth32F format as the depth target of the FrameBuffer rendering the scene, and it ends up holding a depth buffer :p).
Do you know how this depth buffer is computed? Is it linear, non-linear, or (z/w)?

You should actually just use "Depth" as the format; it is faster and better since it uses the same precision as the main framebuffer. Also, Depth32F might not be supported on all OpenGL 2 cards.
The depth is the usual one as specified by the OpenGL spec, so z/w.
I think you can just set gl_Position.z to the depth you want and gl_Position.w to 1.0, and that would result in your desired depth being stored in the buffer, though I am not sure about it.
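
If you do keep the hardware depth, getting back to view-space Z (and from there your linear depth) in the shader that reads it would look roughly like this. This is only a sketch assuming a standard perspective projection; m_DepthMap, m_NearPlane, m_FarPlane and texCoord are names you would define yourself, not built-ins:


uniform sampler2D m_DepthMap;  // the hardware depth texture from the scene pass
uniform float m_NearPlane;     // camera near plane, passed as a material parameter
uniform float m_FarPlane;      // camera far plane, passed as a material parameter

varying vec2 texCoord;         // from the fullscreen quad's vertex shader

void main()
{
   float depthSample = texture2D(m_DepthMap, texCoord).r; // hardware depth in [0, 1]
   float zNdc = depthSample * 2.0 - 1.0;                   // back to NDC [-1, 1]
   // positive view-space distance, i.e. -ViewPosition.z, for a standard perspective projection
   float viewDist = (2.0 * m_NearPlane * m_FarPlane) / (m_FarPlane + m_NearPlane - zNdc * (m_FarPlane - m_NearPlane));
   // remap to the linear depth your frustum corner method expects
   float linearDepth = (viewDist - m_NearPlane) / (m_FarPlane - m_NearPlane);
   gl_FragColor = vec4(vec3(linearDepth), 1.0);
}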

Another thing: I don't see the A32R32G32B32F texture format (only RGB32F). Is that intended, or is it maybe specific to later OpenGL implementations?
It would be convenient for storing normals and depth in one pass (I need a 32-bit float channel for depth).

If I understood correctly though, you're using a separate texture for the (hardware) depth, right? Or did you change your mind?
The format you mentioned would, by jME3 convention, be named RGBA32F. I think it's there; if not, it's quite easy to add.
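
And if RGBA32F does work for you, packing both into a single target in the normal pass could be as simple as reusing your encoding with the linear depth in the alpha channel. A sketch, assuming the vertex shader also passes the view-space position, and with m_NearPlane/m_FarPlane again as material parameters you set yourself:


uniform float m_NearPlane;   // camera near plane, passed as a material parameter
uniform float m_FarPlane;    // camera far plane, passed as a material parameter

varying vec3 normal;         // view-space normal from the vertex shader
varying vec3 viewPosition;   // view-space position from the vertex shader

void main()
{
   // linear depth as in your formula: (-ViewPosition.z - near) / (far - near)
   float linearDepth = (-viewPosition.z - m_NearPlane) / (m_FarPlane - m_NearPlane);
   // view-space normal packed into rgb, linear depth stored in alpha
   gl_FragColor = vec4(normal.xy * 0.5 + 0.5, -normal.z * 0.5 + 0.5, linearDepth);
}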