Shader nuts and bolts

I’m new to jME, but have some background in OpenGL/LWJGLv3/GLSL. I’m programming in Java directly, not using the SDK.

I’m experimenting to learn the very nuts and bolts of things, shaders in this case.

I have a basic starter application, based on examples from the tutorials and other pages in the wiki. For this particular topic, the starting place is the basic shader demo on the JME3 and Shaders page. That worked just fine.

My experiment was to replace the box with a quad, sized to fit within the coordinate range of projection (clip) space, from (-0.8, -0.8) to (0.8, 0.8), and to drop the use of g_WorldViewProjectionMatrix. In other words, the only line in the vertex shader’s main method is:

gl_Position = vec4(inPosition, 1.0);
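
For reference, the Java side of my setup looks roughly like the following sketch (my actual code differs in details, and the material definition path is just a placeholder). The position buffer itself holds the -0.8 to 0.8 coordinates, since the shader ignores all world transforms:

// Build the quad mesh directly so its vertex positions are already in the -0.8 .. 0.8 range
Mesh quad = new Mesh();
quad.setBuffer(VertexBuffer.Type.Position, 3, new float[] {
        -0.8f, -0.8f, 0f,
         0.8f, -0.8f, 0f,
         0.8f,  0.8f, 0f,
        -0.8f,  0.8f, 0f });
quad.setBuffer(VertexBuffer.Type.Index, 3, new short[] { 0, 1, 2, 0, 2, 3 });
quad.updateBound();

Geometry backdropGeometry = new Geometry("Backdrop", quad);
// "MatDefs/MyShader.j3md" is a placeholder for my custom material definition
backdropGeometry.setMaterial(new Material(assetManager, "MatDefs/MyShader.j3md"));
rootNode.attachChild(backdropGeometry);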

I was expecting this to (a) draw the quad from almost-corner to almost-corner of the screen, and (b) always do so.

The program did (a), but if I move the mouse enough, or hold one of the movement keys for long enough, the quad disappears. It’s easy to find the threshold where jittering back and forth will cause it to appear and disappear.

I presume what is happening is that the quad is leaving the view frustum (e.g. the amount of movement needed to make the quad disappear is about what it would take to move it out of the frustum under the normal circumstance of gl_Position = g_WorldViewProjectionMatrix * vec4(inPosition, 1.0);). But I find this quite surprising when my vertex shader is simply passing through inPosition.

Said differently, since the vertex shader is always saying “use these values between -0.8 and 0.8”, why is the end result ever anything other than that? Or, since the shader isn’t doing any kind of transformation of the vertex positions, what is?

JME is a scene graph. Scene graphs manage the scene and send only to the GPU what is necessary. Your object is being frustum-culled on the CPU before it ever gets to the GPU.

Probably your object is at 0,0,0 in the root node?

If you want something full-screen and always in the same place, then you can just put it in the guiNode, which is already screen-based.

There doesn’t seem to be much reason to manage a fully 3D object if it’s only going to ignore its world transforms and draw in the same place all the time… but you can do that too if you really want to: just turn culling off for that object by setting the CullHint to never.
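
In code, the two options look roughly like this (the geometry name here is just a stand-in for whatever yours is called):

// Option 1: attach it to the guiNode, which already uses a screen-space (ortho) projection
guiNode.attachChild(fullScreenGeometry);

// Option 2: keep it in the 3D scene but tell the scene graph never to frustum-cull it
fullScreenGeometry.setCullHint(Spatial.CullHint.Never);
rootNode.attachChild(fullScreenGeometry);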


Thanks, that would explain it.

As a follow-on question, speaking generally beyond jME: my understanding was that frustum culling happens after the typical transformations from object space to world space and then to view space, because until that point it isn’t known what lies within the view frustum. Is this understanding correct in general?

If so, does that mean that jME is already performing what I guess I’m going to call “pre-rendering translations”? And if that’s the case, is that part of what you mean by saying that “jME is a scene graph”?

There doesn’t seem to be much reason to manage a fully 3D object if it’s only going to ignore its world transforms and draw in the same place all the time

My primary goal at this point is learning. I’m seeking to understand fundamentals; in particular, specific aspects of working with jME vs. lower-level approaches like LWJGL. I might end up employing this for some practical purpose, but I agree: research indicates there are better ways to do the things one might want to use this for, a guiNode among them.


just turn culling off for that object by setting the CullHint to never.

I implemented this as …

backdropGeometry.setCullHint(Spatial.CullHint.Never);

… prior to the point where backdropGeometry gets attached to the root node, but it does not seem to have had any effect. Did I misunderstand what you meant?

The scene graph is transforming things prior to displaying them. The spatial knows where it is “in world space”. The camera knows where it is in “world space”… and the camera has a view projection.

For the 3D scene (i.e. not the “2D GUI viewport that takes up the whole screen”, but the “3D scene where 3D world-space objects would go”), these transforms are accumulated and passed to the shader.

A much simpler transform is done for the GUI viewport… the 2D viewport that’s meant for 2D things, whether full screen or taking up portions of the screen… but some set of transformations is still done, and there is an orthographic view projection as well.

Once things get to the shader, they are free to do whatever they want with this transform/projection stuff… but the scene graph is still managing the scene, because that’s 100% its only real job in the whole world.

That should have made sure that the geometry was always sent to the GPU. So I don’t know what else is going on in your code.
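
Roughly, in code terms (a simplified sketch, not jME’s actual internals, using your backdropGeometry as the example):

Camera cam = viewPort.getCamera();

// CPU-side frustum check against the geometry's *world* bounding volume;
// if it's completely outside, the mesh is never handed to the GPU at all
Camera.FrustumIntersect intersect = cam.contains(backdropGeometry.getWorldBound());

// If it survives culling, the accumulated world transform combined with the
// camera's view-projection is what the shader receives as g_WorldViewProjectionMatrix
Matrix4f worldViewProjection = cam.getViewProjectionMatrix().mult(backdropGeometry.getWorldMatrix());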


Thanks for that further explanation. Your prior reply stated …

Scene graphs manage the scene and send only to the GPU what is necessary.

… which gave me some valuable context. In digging into that with reference to the stuff I’m more familiar with, I understand better. e.g., I’ve previously followed the detailed Learn OpenGL tutorials, where early on one learns that the shader pipeline uses the coordinate-system transformations to discard data that falls outside of the eventual viewing volume / viewing plane.

Later, there are chapters on constructing a scene graph and frustum culling. Indeed, therein Learn OpenGL states, very much along the lines of what you said …

we are going to see how to limit your GPU usage thanks to … frustum culling

It’s fair to say that this is precisely why I’m getting into jMonkeyEngine. I progressed far enough with Learn OpenGL and resources like it to realize, “gosh, there’s going to be a lot of foundation to lay down – maybe I should take another look at available engines”, but not far enough to be implementing a scene graph and such.

So, regarding this …

The scene graph is transforming things prior to displaying them.

It’s more that the scene graph is transforming things prior to supplying them to the shader pipeline, yes? I totally get the coordinate-transformation dance. But as you also say …

Once things get to the shader, they are free to do whatever they want with this transform/projection stuff

… including totally ignoring it, which is what my experiment was about. Essentially, “if I supply vertex coordinates that are already within the range of screen space, will the shader just display it as-is? And will it continuously display it, since the values are always the same?”

You helped me see that there is a mechanism before that for deciding what goes to the shader in the first place. Many thanks, because I learned something!



… I will say that while I understand the idea of limiting the work the GPU has to do, and I can equally well imagine that there’s a cost to transferring data to and from the GPU’s memory, I had thought that since GPUs are (a) specifically optimized for matrix math and (b) specifically designed for speed through massive parallelization, on balance it would be less costly to let the GPU do all the culling of invisible objects. I’m not debating this point; I’m just … a bit surprised.

Anyway, thanks again for your insights.

The scene graph, being a tree, has better opportunities to cull entire sections without transforming the parts at all. The GPU will only ever see the mesh data. The scene graph can know that the whole battleship of 1000 different meshes is behind you and not even bother to check further.
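
As an illustration only (not jME’s actual implementation), hierarchical culling over a node tree looks something like this:

void collectVisible(Spatial spatial, Camera cam, List<Geometry> visible) {
    // If this node's world bound is completely outside the frustum, skip the
    // entire subtree (e.g. the 1000-mesh battleship) without touching its children.
    if (cam.contains(spatial.getWorldBound()) == Camera.FrustumIntersect.Outside) {
        return;
    }
    if (spatial instanceof Geometry) {
        visible.add((Geometry) spatial);
    } else if (spatial instanceof Node) {
        for (Spatial child : ((Node) spatial).getChildren()) {
            collectVisible(child, cam, visible);
        }
    }
}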


The scene graph, being a tree, has better opportunities to cull entire sections without transforming the parts at all.

Ah, I see … Yes, that makes a great deal of sense. Thanks for illuminating that!
