OgreXML Normal maps

What I’m trying to do is a GLSL normal map effect on an OgreXML model. So I’m starting from TestNormalmap.java, because I just want directional lights; not TestDiffNormSpecmap.java, because that uses multiple point lights, and not TestBumpMapping.java, because that does software rendering (not GLSL?).



TestNormalmap.java loads the tangent vectors from the COLLADA file. I’d like to try that approach, and it is supported by the OgreXML format, but with Blender 2.49 and the exporter I can’t seem to export the tangents. (For some reason, enabling Ogre Meshes Exporter->Preferences->Tangent doesn’t actually export them; do I need to calculate them in Blender first?) I would prefer this option.



So the other option is the TangentBinormalGenerator.java used by TestDiffNormSpecmap.java, which takes a TriMesh and gives you FloatBuffers of the tangents and binormals. Is it better or worse for performance to add the tangents as a shader attribute, as in TestDiffNormSpecmap.java, or to convert them into the TriMesh’s color buffer, as in TestNormalmap.java?



These might help as reference:

http://www.jmonkeyengine.com/forum/index.php?topic=12260.0

http://www.jmonkeyengine.com/forum/index.php?topic=12334.0



Thanks in advance! :smiley:

I believe you need to use the OgreXMLConverter, either in the Blender script or externally, to generate tangents. You specify the generate-tangents flag, then convert back to XML and the tangents are there. While you're at it, I would also suggest generating indexed levels of detail and using the MeshLodController; this reduces the number of vertices of the model dynamically when the model is far away or the framerate is low.
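From memory the invocation looks something like this (the flag names are from recollection, so double-check them against OgreXMLConverter's usage text):

    OgreXMLConverter -t model.mesh.xml model.mesh      (XML to binary, generating tangents)
    OgreXMLConverter model.mesh model.mesh.xml         (binary back to XML; tangents now present)

The first step also accepts LOD parameters (number of levels, distance, and reduction amount) if you want the indexed LOD data generated in the same pass.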



A rather big problem with using tangents in attributes is that you can't use VBO. This issue might not be a concern to you. jME3 does not have this problem, by the way (everything is an attribute and usage of VBO is forced).

If you want to store tangents in colors, doing geom.setColorBuffer(geom.getTangentBuffer()) should suffice, and colors do support VBO in jME2.
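Roughly, in code (a sketch from memory; check TestDiffNormSpecmap.java for the exact TangentBinormalGenerator call and package, since the generator may hand the buffers back instead of setting them on the mesh):

    import com.jme.scene.TriMesh;
    import com.jme.util.TangentBinormalGenerator;

    public class TangentsAsColors {
        /** Generate tangents, then alias the tangent buffer as the color
         *  buffer so the shader reads them via gl_Color (VBO-friendly). */
        public static void apply(TriMesh geom) {
            TangentBinormalGenerator.generate(geom); // assumed static helper, see the test
            geom.setColorBuffer(geom.getTangentBuffer());
        }
    }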



I checked the normalmap.vert/.frag shaders used in TestNormalmap and found a side effect. This line:

gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0 * vec4(4.0, 4.0, 1.0, 1.0);


As you can see, the texture coordinates are scaled by 4, so the texture repeats 4 times. You should remove the multiplication by the vec4() if you want to use this as a general-purpose normal map shader. If you want to scale the texture coordinates, use the texture matrix, as that line already suggests.
Also, it seems to already use colors for the tangents.
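In other words, the corrected line would be

    gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;

and any tiling you actually want can then be fed in through the texture matrix from the Java side (Texture.setScale(), if I remember the jME2 call correctly) instead of being baked into the shader.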
Momoko_Fan said:

While you're at it, I would also suggest generating indexed levels of detail and using the MeshLodController; this reduces the number of vertices of the model dynamically when the model is far away or the framerate is low.


I don't have multiple level-of-detail versions of the mesh, so will it create them dynamically, or are you saying the MeshLodController will use the normal map to reduce the mesh vertices?  The latter would be cool.  :-o

Momoko_Fan said:

A rather big problem with using tangents in attributes is that you can't use VBO. This issue might not be a concern to you. jME3 does not have this problem, by the way (everything is an attribute and usage of VBO is forced).
If you want to store tangents in colors, doing geom.setColorBuffer(geom.getTangentBuffer()) should suffice, and colors do support VBO in jME2.

Does this mean that if I use attributes at all in JME2 I can't use VBO?  I think I want to add hardware skinning to my shader as well, like what's already implemented for OgreXML, and that requires attributes.  Basically, is it just passing the FloatBuffer into an attribute that causes it not to use VBO?  I almost feel like switching to JME3, except it doesn't have good physics or networking support yet; you're making good progress though.  :wink:

Momoko_Fan said:

I checked the normalmap.vert/.frag shaders used in TestNormalmap and found a side effect. This line:

gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0 * vec4(4.0, 4.0, 1.0, 1.0);


As you can see, the texture coordinates are scaled by 4, so the texture repeats 4 times. You should remove the multiplication by the vec4() if you want to use this as a general-purpose normal map shader. If you want to scale the texture coordinates, use the texture matrix, as that line already suggests.
Also, it seems to already use colors for the tangents.

Wow!  You're on a roll, thank you so much!  I was looking for the bug that was causing my TexCoords to get messed up.  They looked fine without the shader, but even when setting the frag shader to just:

gl_FragColor = texture2D( baseMap, gl_TexCoord[0].xy );

it was still messed up; I didn't think to look in the vert shader!  Cheers!  :D
SomethingNew said:

... not TestDiffNormSpecmap.java, because that uses multiple point lights


... no it doesn't; you can specify the number of lights you want to use before compiling it. It would also be an easy fix to use directional lights instead. What Momoko suggested works, but you can buy some time in the shader by passing the tangents AND binormals in the texture coordinates. That saves you the need to compute the binormal for every vertex; it's not much, but hey, every bit you can get, huh?

So you're saying put the tangents and binormals in the second and third texture coordinates?  Hmmm, I don't think I really want to do multiple lights unless it's not expensive on performance, even if they are directional lights.

That saves you the need to compute the binormal for every vertex; it's not much, but hey, every bit you can get, huh?

Actually, the cross product is a cheap operation: in ARBvp it's XPD, which is 1 instruction, while in NVvp it's just a MUL and a MAD. In other words, it is cheaper to compute the binormal than to pass it as an attribute (which would also consume numVerts * 3 * 4 bytes of VRAM per mesh).
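In the vertex shader the reconstruction is a one-liner (assuming the tangent rides in the color channel, as TestNormalmap does):

    vec3 tangent  = gl_Color.xyz;
    vec3 binormal = cross(gl_Normal, tangent); // the single XPD / MUL+MAD mentioned above;
                                               // sign may need flipping depending on the generator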

Hmmm, I don't think I really want to do multiple lights unless it's not expensive on performance, even if they are directional lights.

If you are going to use multiple lights, don't do it the suggested way; you're going to hit the instruction limit on low-end video cards with more than one light. Instead, render multiple passes, one per light. Since this will be slow, you should avoid placing many lights in the same location. For outdoor scenes you should have only one directional light; for indoor scenes, one or two point lights. People usually don't notice the difference in a model's shading beyond two lights. Lights are there to make things look shiny and to make normal maps useful.

I don't have multiple level-of-detail versions of the mesh, so will it create them dynamically, or are you saying the MeshLodController will use the normal map to reduce the mesh vertices?  The latter would be cool.

OgreXMLConverter can automatically generate LOD levels. The MeshLodController will use these LOD levels if they are available.

Does this mean that if I use attributes at all in JME2 I can't use VBO?  I think I want to add hardware skinning to my shader as well, like what's already implemented for OgreXML, and that requires attributes.  Basically, is it just passing the FloatBuffer into an attribute that causes it not to use VBO?

Sorry, I said the wrong thing. You can use VBO for the standard mesh data like positions, texcoords, colors, etc., but not for attributes. In other words, the mesh data will use VBO while the attributes will not, so you can partially use VBO.
I forgot, since it's been a long time. Lex, who was working with jME, always complained about it, so I thought it was a bigger issue than it actually is.
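Turning VBO on for the standard buffers is still worth it, though. If I remember the jME2 class correctly, it's along these lines:

    import com.jme.scene.Geometry;
    import com.jme.scene.VBOInfo;

    public class VboSetup {
        /** Enable VBO for all standard buffers (positions, normals, colors,
         *  texcoords, indices); shader attributes still bypass the VBO path. */
        public static void enableVbo(Geometry geom) {
            geom.setVBOInfo(new VBOInfo(true)); // 'true' = enable for every buffer type
        }
    }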

I almost feel like switching to JME3, except it doesn't have good physics or networking support yet; you're making good progress though.

For physics, you can use JBullet directly and just make the body transforms match the jME3 meshes. As for networking, what's the problem with JGN? I have used it to make a small 2D MMO (without jME) and it works great.
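The glue code for JBullet is small: step the physics world, then copy each body's transform onto its spatial. A sketch (the sync method below is hypothetical glue, not an existing jME3 API):

    import javax.vecmath.Quat4f;
    import com.bulletphysics.dynamics.RigidBody;
    import com.bulletphysics.linearmath.Transform;
    import com.jme3.math.Quaternion;
    import com.jme3.scene.Spatial;

    public class PhysicsSync {
        /** Copies a JBullet body's world transform onto a jME3 spatial;
         *  call once per body after each dynamicsWorld.stepSimulation(). */
        public static void sync(RigidBody body, Spatial spatial) {
            Transform t = body.getWorldTransform(new Transform());
            Quat4f q = t.getRotation(new Quat4f());
            spatial.setLocalTranslation(t.origin.x, t.origin.y, t.origin.z);
            spatial.setLocalRotation(new Quaternion(q.x, q.y, q.z, q.w));
        }
    }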

Momoko_Fan said:

In other words, it is cheaper to compute the binormal than to pass it as an attribute


Not as attributes; in the texture coordinates. That makes it a straight variable read, which should be faster than a cross product (one operation plus two variable reads). But as I said, if it's anything, it isn't much, and the RAM is there to be used  :P
dhdd said:

Momoko_Fan said:

In other words, it is cheaper to compute the binormal than to pass it as an attribute


Not as attributes; in the texture coordinates. That makes it a straight variable read, which should be faster than a cross product (one operation plus two variable reads). But as I said, if it's anything, it isn't much, and the RAM is there to be used  :P


The thing with storing both the tangents and binormals in the texcoord buffers is that you'll have to use three texcoord buffers, because texcoords only store X and Y while the tangents and binormals are X, Y, Z vectors, right?

Momoko_Fan said:

OgreXMLConverter can automatically generate LOD levels. The MeshLodController will use these LOD levels if they are available.

I'll have to check this out for sure.  JME3 has the same ogrexml animation system as JME2, correct?
The thing with storing both the tangents and binormals in the texcoord buffers is that you'll have to use three texcoord buffers, because texcoords only store X and Y while the tangents and binormals are X, Y, Z vectors, right?

No. You can include up to 4 coordinates in a single texcoord array; just because usually only X and Y are used doesn't mean anything. For 3D textures, for example, you need 3 coordinates, so that's one case right there.
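So in jME2 terms, putting the full 3-component tangents into texture unit 1 would look roughly like this (I'm assuming the TexCoords constructor that takes a per-vertex size; check com.jme.scene.TexCoords):

    import com.jme.scene.TexCoords;
    import com.jme.scene.TriMesh;

    public class TangentsAsTexCoords {
        /** Stores the tangents as 3-component texcoords on unit 1, readable
         *  in the vertex shader as gl_MultiTexCoord1.xyz. */
        public static void apply(TriMesh geom) {
            geom.setTextureCoords(new TexCoords(geom.getTangentBuffer(), 3), 1);
        }
    }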

JME3 has the same ogrexml animation system as JME2, correct?

I assume this isn't relevant to the MeshLodController. jME3's OgreXML loader is currently under construction together with the animation system; I consider the new design for the animation system better than the one used for jME2's OgreXML right now. Also, SAX is used for loading the XML, which is significantly faster than DOM (more info)
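For the curious, the speedup comes from SAX streaming parse events instead of building the whole document tree in memory first. A minimal illustration with plain JAXP (not jME3's actual loader code):

    import javax.xml.parsers.SAXParserFactory;
    import org.xml.sax.Attributes;
    import org.xml.sax.helpers.DefaultHandler;

    public class OgreXmlSniffer extends DefaultHandler {
        @Override
        public void startElement(String uri, String local, String qName, Attributes atts) {
            if ("face".equals(qName)) {
                // each triangle can be handled the moment it is parsed;
                // nothing but the current element is held in memory
            }
        }

        public static void main(String[] args) throws Exception {
            SAXParserFactory.newInstance().newSAXParser()
                    .parse(args[0], new OgreXmlSniffer());
        }
    }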