How does jMonkeyEngine do Geometry Instancing?

Yes, I can also imagine that batching can cover almost every case we have.
In my case GeometryInstancing CAN be used, because I just want to randomly SHIFT the animation start per instance, seeded by its InstanceID, you know, just like in the NVIDIA example.
Say, for the thousands of football fans in a stadium: they don’t really seem to react to the action on the field, they just yell and scream all the time…
As for thousands of blades of grass, thousands of buildings, asteroids… we can still take a lot of advantage of GeometryInstancing.

My current solution for the football problem is: hardware skinning (the animation palette sent via a matrix array, or a texture) + batched geometries… It’s pretty impressive already, but I believe that if we can render a single mesh thousands of times without violating the attribute limits (uniforms, varyings…), that is the BEST solution of all, as shown here: http://www.geeks3d.com/20100629/test-opengl-geometry-instancing-geforce-gtx-480-vs-radeon-hd-5870/3/ … It’s a “real” way to do GPU computing. And the more we gain from a single render step, the more possibilities we can add to the application (more AI, more physics…).
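To make the per-instance idea concrete, here is a rough Java-side sketch (not existing jME3 API) of how a random animation start offset per instance could be handed to a custom shader, which would then index the array with gl_InstanceID. The parameter name "StartOffsets" and the custom material definition it would live in are assumptions:

[java]
import com.jme3.material.Material;
import com.jme3.shader.VarType;

import java.util.Random;

public class CrowdSetup {
    /**
     * Hypothetical sketch: uploads one animation-time offset per instance as a
     * uniform float array; a custom vertex shader would read it via gl_InstanceID.
     */
    public static void seedAnimationOffsets(Material crowdMaterial,
                                             int numInstances, float cycleLength) {
        float[] offsets = new float[numInstances];
        Random rand = new Random(1234); // fixed seed keeps the crowd stable between runs
        for (int i = 0; i < numInstances; i++) {
            offsets[i] = rand.nextFloat() * cycleLength; // random shift of the animation start
        }
        // "StartOffsets" is an assumed parameter declared in an assumed custom .j3md
        crowdMaterial.setParam("StartOffsets", VarType.FloatArray, offsets);
    }
}
[/java]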

And I’d like to know more about the tweaks needed to use this API, for example in the Renderer or the Geometry class.
Just saying, we do have a lot of higher-level APIs and solutions for problems, but they are hidden - in this case, not ENOUGH API is exposed yet for the user to maximize the GPU’s parallel computing power!


[java]
/**
 * Should be called after selectTechnique()
 * @param geom
 * @param rm
 */
public void render(Geometry geom, RenderManager rm){
    autoSelectTechnique(rm);

    Renderer r = rm.getRenderer();
    TechniqueDef techDef = technique.getDef();

    if (techDef.getLightMode() == LightMode.MultiPass
     && geom.getWorldLightList().size() == 0)
        return;

    if (techDef.getRenderState() != null){
        r.applyRenderState(techDef.getRenderState());
        if (additionalState != null)
            r.applyRenderState(additionalState);
    }else{
        if (additionalState != null)
            r.applyRenderState(additionalState);
        else
            r.applyRenderState(RenderState.DEFAULT);
    }
    if (rm.getForcedRenderState() != null)
        r.applyRenderState(rm.getForcedRenderState());

    // update camera and world matrices
    // NOTE: setWorldTransform should have been called already
    // XXX:
    if (techDef.isUsingShaders())
        rm.updateUniformBindings(technique.getWorldBindUniforms());

    // setup textures

    // Collection<MatParam> params = paramValues.values();
    // for (MatParam param : params){
    for (int i = 0; i < paramValues.size(); i++){
        MatParam param = paramValues.getValue(i);
        if (param instanceof MatParamTexture){
            MatParamTexture texParam = (MatParamTexture) param;
            r.setTexture(texParam.getUnit(), texParam.getTextureValue());
            if (techDef.isUsingShaders()){
                technique.updateUniformParam(texParam.getName(),
                                             texParam.getVarType(),
                                             texParam.getUnit(), true);
            }
        }else{
            if (!techDef.isUsingShaders())
                continue;

            technique.updateUniformParam(param.getName(),
                                         param.getVarType(),
                                         param.getValue(), true);
        }
    }

    Shader shader = technique.getShader();

    // send lighting information, if needed
    switch (techDef.getLightMode()){
        case Disable:
            r.setLighting(null);
            break;
        case SinglePass:
            updateLightListUniforms(shader, geom, 4);
            break;
        case FixedPipeline:
            r.setLighting(geom.getWorldLightList());
            break;
        case MultiPass:
            // NOTE: Special case!
            renderMultipassLighting(shader, geom, r);
            // very important, notice the return statement!
            return;
    }

    // upload and bind shader
    if (techDef.isUsingShaders())
        r.setShader(shader);
   
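    // NOTE: the last argument is the instance count, hard-coded to 1 here
    // (this is the line discussed below)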
    r.renderMesh(geom.getMesh(), geom.getLodLevel(), 1);
}

[/java]

r.renderMesh(geom.getMesh(), geom.getLodLevel(), 1);
So the only way to get the whole Renderer to appropriately call

[java]
if (useInstancing) {
    ARBDrawInstanced.glDrawElementsInstancedARB(elMode, elementLength, fmt, curOffset, count);
} else {
    glDrawRangeElements(elMode, 0, vertCount, elementLength, fmt, curOffset);
}
[/java]

is to extend Material???

I hate finding myself hacking into the core when I’m not supposed to… Does anyone know another way around this?


For instancing I guess we’d need something similar to the BatchNode: a node that would have all of the similar geometries in its subgraph instanced. So we could keep the scene graph API and have the node do the complicated stuff.

@nehon said: For instancing I guess we'd need something similar to the BatchNode: a node that would have all of the similar geometries in its subgraph instanced. So we could keep the scene graph API and have the node do the complicated stuff.

This feels like maybe a bad way to go, to me. I would think the only absolute requirement would be that the geometries share the same mesh and the same exact Material instance. Or is there something I’m missing?

I can see how the majority of common use-cases (rocks, grass, etc.) would suit a container node but I can think of some use cases where maybe this isn’t the best way.

Would it be enough to tag the geometry somehow? Or the material? We could collect them in their own geometry lists or something during the bucketing process.

Your use case is very specific and requires heavy changes to the jME3 animation system to be effective (using textures to store animation data). Given that these changes won’t be made in core, instancing cannot be used for animated models unless they are perfectly synchronized. I can only think of one other case, which is where the transforms per instance are animated but neither the mesh nor the material is, for example an asteroid field where each asteroid rotates in a unique way. I cannot think of any other case where instancing would actually be more efficient than batching. So please advise.

I guess the only interest would be to save memory, and the time to batch the scene.
Maybe culling would be more efficient too, since you could instance an entire scene (thinking of the asteroid field) and not care about what is visible or not, since the mesh is sent once anyway and GL clipping will prevent drawing objects that are out of the field of view. With batching you’d have to have some partitioning strategy for this kind of scene.

@nehon said: I guess the only interest would be to save memory, and the time to batch the scene. Maybe culling would be more efficient too, since you could instance an entire scene (thinking of the asteroid field) and not care about what is visible or not, since the mesh is sent once anyway, and that gl clipping will prevent drawing objects that are out of the field of view. With batching you'd have to have some partitioning strategy for this kind of scene.

Yeah, I see the use-case come up most commonly in terrain tutorials and stuff where you want to repeat the same rock 500 times in different positions and orientations. If you’re clever, you can make one rock look like a bunch of different types of rocks just by how it’s placed in the ground (I’ve seen this in Oblivion, a lot.)

With instancing, you can get away with having a slightly more complicated mesh and just repeating it. If you do the same with batching then the memory usage gets onerous.

Still, trying to get JME to manage all of those as separate Geometry might be problematic, and I’m starting to like your idea of a special container node. Instead of a regular geometry object, it could be an InstancedGeometry or something that has a Mesh, a Material, and a list of Transforms, one for each instance.

Just spitballing, really. I just worry about the JME “don’t use too many objects” overhead if each instance is its own separate Spatial. You kind of want to treat them like one geometry anyway and having controls per rock or whatever seems silly.
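To make the spitballing concrete, here is a rough Java sketch of the kind of container being floated: one Mesh and one Material paired with a list of per-instance Transforms. The class is hypothetical and did not exist in jME3 at the time of this thread:

[java]
import com.jme3.material.Material;
import com.jme3.math.Transform;
import com.jme3.scene.Geometry;
import com.jme3.scene.Mesh;

import java.util.ArrayList;
import java.util.List;

// Hypothetical, for illustration only: a Geometry-like object that draws the same
// Mesh/Material pair once per stored Transform.
public class InstancedGeometry extends Geometry {

    private final List<Transform> instanceTransforms = new ArrayList<Transform>();

    public InstancedGeometry(String name, Mesh mesh, Material material) {
        super(name, mesh);
        setMaterial(material);
    }

    /** Registers one instance at the given transform and returns its index. */
    public int addInstance(Transform transform) {
        instanceTransforms.add(transform.clone());
        return instanceTransforms.size() - 1;
    }

    /** The draw count a renderer would pass to an instanced draw call. */
    public int getNumInstances() {
        return instanceTransforms.size();
    }

    public Transform getInstanceTransform(int index) {
        return instanceTransforms.get(index);
    }
}
[/java]

Something like this keeps the instances out of the scene graph entirely, which sidesteps the per-Spatial overhead, but it also means controls and culling would have to be handled by the container itself.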

@pspeed said: Yeah, I see the use-case come up most commonly in terrain tutorials and stuff where you want to repeat the same rock 500 times in different positions and orientations. If you're clever, you can make one rock look like a bunch of different types of rocks just by how it's placed in the ground (I've seen this in Oblivion, a lot.)

Just saying, artists with the help of AAA engines do a lot of other stuff like that :stuck_out_tongue: . Did you ever wonder how a huge scene with terrain and masses of rocks, grass and decals can possibly get drawn and still keep at least 30 fps? The solution is… wait for it… GeometryInstancing and batching and… a magical data structure, all in a solid combination, of course… I can’t tell you much about the special data structure for this kind of terrain data (because I don’t really know), but I can tell you that GeometryInstancing is certainly involved in this kind of magic.

Now the Data structure IS the problem, right?

InstancedGeometry or something that has a Mesh, a Material, and a list of Transforms, one for each instance.
Seems good enough for me!

As far as I remember, there was a paper from Umbra somewhere talking about the way they order the scene graph for visibility testing, and it also mentioned which data structures are suitable for GeometryInstancing and whether extra draw calls are needed for that kind of GPU operation. The paper was from around 2009, that’s all I can remember :stuck_out_tongue:

P.S.: I don’t want to be an annoying kid but… letting Material decide how to render a Mesh is pretty wrong in several ways. I can pretty much say that even if I’m a real kid! :-?

@atomix said: P.S.: I don't want to be an annoying kid but... letting Material decide how to render a Mesh is pretty wrong in several ways. I can pretty much say that even if I'm a real kid! :-?
Not sure it has anything to do with the subject... but considering jME is shader-centric, it seems like a good idea to me.
@nehon said: Not sure it has anything to do with the subject... but considering jME is shader-centric, it seems like a good idea to me.

Yeah, really Material is the only thing that knows everything about how to render something. If the renderer did it itself then it would have to ask Material for lots of stuff… and then you also lose the flexibility of being able to do something more fun in Material.

@nehon said: Not sure it has anything to do with the subject... but considering jME is shader-centric, it seems like a good idea to me.
@pspeed said: Yeah, really Material is the only thing that knows everything about how to render something. If the renderer did it itself then it would have to ask Material for lots of stuff... and then you also lose the flexibility of being able to do something more fun in Material.

Yes, this is completely off the GeometryInstancing topic, but I asked because of that number “1” (not multi-instance) hard-coded in the render method of the Material class.
Still, the combination of these two answers does not clarify the design; you know, is it an OOP design or not? Somehow the relationship between Material - Mesh - Geometry is 1:1:1 and the function is misplaced just for the sake of “handy” references…

It’s not really a question of why that should change, but a question of system design. So, for a shader-centric rendering system:
What is a Renderer anyway?
How much does a Renderer know about what it is going to render?
Is it natural to expect its render method to really do what its name says? And what if I don’t read the source code and assume that I can add rendering functionality just by extending this single class?
-------- Anyway, forget it! --------
Back to GeometryInstancing: do you guys find it interesting enough to make something useful with that GL extension, or will you just leave it untouched?


Just to add: the renderer manages the traversal, buckets, sorting, technique switching, etc. No matter what the rest of the code does, it’s ultimately the shaders that are doing the rendering. That’s unavoidable. Since the shaders are in Material I think it makes sense that Material is the thing that actually matches mesh to shader for final output.

It is by design, I guess… and I don’t think it’s a bad design. I’ll admit that it was not intuitive when I first saw it… but it only took 5-6 seconds for me to wrap my head around it. By the time my brain was done asking the “Why did they do it this way?” question it was already answering itself.

And to be clear, it’s not a 1:1:1 relationship. Meshes can be shared among different Geometry. Materials can be shared across different Geometry and different Meshes.

Mesh is the raw primitive data in model space.

Material is the “how to render raw primitive data” for different techniques.

Geometry matches a Mesh with a Material and a world transform.

So, the renderer says when a Geometry should be rendered and with what technique. The geometry says which mesh to render and with what Material… and the Material knows how to render the mesh.
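To put those roles in code with the standard jME3 API (assuming this runs inside a SimpleApplication’s simpleInitApp(), where assetManager and rootNode are available), here is one Mesh and one Material shared by two Geometry objects that differ only in their transform:

[java]
// class-level imports:
// import com.jme3.material.Material;
// import com.jme3.scene.Geometry;
// import com.jme3.scene.Mesh;
// import com.jme3.scene.shape.Box;

// Mesh: raw primitive data in model space.
Mesh rockMesh = new Box(1, 1, 1);

// Material: "how to render raw primitive data" (wraps the shader and its parameters).
Material rockMat = new Material(assetManager, "Common/MatDefs/Misc/Unshaded.j3md");

// Geometry: matches a Mesh with a Material and a world transform.
Geometry rockA = new Geometry("rockA", rockMesh);
rockA.setMaterial(rockMat);
rockA.setLocalTranslation(0, 0, 0);

Geometry rockB = new Geometry("rockB", rockMesh); // same Mesh instance...
rockB.setMaterial(rockMat);                       // ...and the same Material instance
rockB.setLocalTranslation(5, 0, -3);

rootNode.attachChild(rockA);
rootNode.attachChild(rockB);
[/java]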

I think it’s important that we’re all on the same page because this sort of directly speaks to the places where instancing can fit in. It might be that Geometry needs to be involved in the actual render more than it is. (shrug)

@atomix said: Back to GeometryInstancing, do you guys find it intersting enough to make something useful with that GL extension or just leave it untouchable.
I'm tempted, really. Just not sure about the API. Paul's idea of a geometry that holds several transforms sounds easy enough to do, but I'm not sure it will be easy to manipulate the "instances". I'd like to be able to add controls to an instanced geom, for example. Controls that could only modify the transforms, though… That's why I was thinking of a node container like the BatchNode.

OK, maybe I was unclear when I said a 1:1:1 relationship.
I already know that a Mesh instance can be shared across Geometry instances, and the same for a Material across Geometry instances.

What I meant was this direction:
how many Meshes does a Geometry have? Only one.
And how many Materials per Geometry? Only one.

That makes a straight link: Mesh <---- owned by ---- Geometry ---- rendered with ----> Material.
At first it’s unclear why the render method, which you would expect to be called by a “geometry list manager” (in jME3’s case the Renderer, which knows everything about the states and attributes of the Geometries in the scene graph), turns out to be placed in Material, reached through its related Geometry.

Anyway, it’s possible to understand why a Material defines “how to render raw primitive data”, but not why it should trigger the render method with a specific hard-coded parameter like 1 or 2… In judging a design, I want to clarify whether the tangled links can cause mistakes for a newbie. In this case, I see an obvious trap that people will easily fall into.

They can even compare the relative importance of the Material and Renderer classes in the design of a “rendering system” and misjudge it.
And one of my favorite philosophies when I’m coding Java, sometimes even without looking at the design, is:
“If you cannot reach every corner of the system from the central class, there are hidden traps in the design.”
It means the central class should take the biggest responsibility for what it literally does; in this case, the Renderer should have the “biggest responsibility” and also the “highest ability” to change what it literally does, which is RENDER!
In this case the Renderer cannot reach a few corners because, you know, Material traps them in its own method and doesn’t expose any secrets (via parameters…), just a hard-coded number.

You can consider this just my long nonsense, because I consider Geometry Instancing more important and I really want to know how to make it possible.
At least I saw changes in the core’s code in SVN because we are migrating to LWJGL 2.9.0. Maybe it’s a good time to bring this feature out of the dark.

I get that you were confused by it but that doesn’t make it wrong.

Renderer does the rendering of the scene… but it does it by delegating the actual nuts-and-bolts per-mesh rendering to the Material since that’s where the shader is. And the shader is what’s really rendering the mesh.

It’s no different than the fact that a logger does not write raw bytes to a file stream, it delegates to some other thing. It is still a logger because it is managing the logging. It lets something else deal with the stream specific stuff… ie: the streams. In fact, if a logger took CharSequence instead of String it would be almost exactly like this.

Renderer lets something else deal with the material-specific stuff, ie: the Materials. The renderer manages the other 90% of rendering.

It’s actually supposed to be like that. Instancing is half-implemented in the engine because at some point I decided that the liability of supporting it outstripped the extent of its use cases.
It is actually very easy to make a subclass of Geometry that exposes a numInstances field that Material.render() would pass to Renderer.renderMesh() should it find such a geometry. So the design allows it to be supported, it just needs to be integrated into the engine. For example, right now we don’t have any shaders that use the gl_InstanceID field or take an array of transforms, so we would need to actually create such an instanced geometry class and modify the shaders to support rendering with instancing.
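A minimal sketch of what that could look like, assuming one is willing to patch Material.render(); the class name MultiInstanceGeometry and its field are made up for illustration and are not part of the engine:

[java]
import com.jme3.scene.Geometry;
import com.jme3.scene.Mesh;

// Hypothetical subclass that simply carries an instance count.
public class MultiInstanceGeometry extends Geometry {

    private int numInstances = 1;

    public MultiInstanceGeometry(String name, Mesh mesh) {
        super(name, mesh);
    }

    public void setNumInstances(int numInstances) {
        this.numInstances = numInstances;
    }

    public int getNumInstances() {
        return numInstances;
    }
}
[/java]

[java]
// ...and at the end of Material.render(), in place of the hard-coded "1":
int count = (geom instanceof MultiInstanceGeometry)
          ? ((MultiInstanceGeometry) geom).getNumInstances()
          : 1;
r.renderMesh(geom.getMesh(), geom.getLodLevel(), count);
[/java]

As noted above, the shader side (gl_InstanceID, per-instance transforms) would still have to be written for this to actually draw anything useful.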

Is anyone currently trying to implement this? I am using batching, but the time it takes to batch a lot of small rocks is not optimal :frowning:

@8Keep123 said: Is anyone currently trying to implement this? I am using batching, but the time it takes to batch a lot of small rocks is not optimal :(

How many is a lot and how often do you do it? And how small is small?

I am using a grid of terrains, and every time one loads, I have sets of objects that should be batched together in that tile, such as rocks and grass. I’m just trying to reduce the amount of time it takes to batch them. It causes lag spikes every time a new tile is loaded.

They aren’t that small and there aren’t too many of them. The player is 1 unit tall, and the rocks are about 1/8 that size. In each tile I’d estimate about 20-30 rocks. They are pretty low-poly though, and they all share the same material and mesh, just rotated and scaled for variation.

I’d like to revive this thread a little bit. :slight_smile: I’m doing some experiments with jMonkey (and with 3D in general) and in my little project I have a huge number of simple geometries (tens of thousands) that use a single mesh and material. Here is how it looks right now:

So I’m trying to simulate a growing tree and the current state is far from what I want it to look like. But for now, I’m struggling with rendering these geometries and dreaming of having a feature to draw them all with one draw call.

I’m trying to use BatchNode to batch objects into a single geometry, but this approach has some drawbacks. The batching process is very expensive with this number of vertices and takes seconds or even tens of seconds to batch a big tree. So I batch them gradually at every step. It’s visible in the first video: at the beginning the framerate is low, and little by little it increases as the number of unbatched geometries is reduced.

But another drawback with this approach is that after batching big geometries, the engine has a very long SpatialUpdate step. It can update them for seconds or even for tens of seconds. So it seems that I should avoid batching big geometries to avoid these huge spatial updates.
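For context, the BatchNode usage being described looks roughly like this (real jME3 API; "branches" stands in for whatever geometries a growth step produces, and rootNode is the application’s root node):

[java]
// BatchNode merges children that share the same Material into one big mesh when
// batch() is called; that merge is the expensive CPU step discussed above.
BatchNode tree = new BatchNode("tree");
for (Geometry branch : branches) {
    tree.attachChild(branch);
}
tree.batch();                 // rebuilds the merged meshes; costly for a large tree
rootNode.attachChild(tree);
[/java]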

All these optimizations work, but compared with an option to just send everything to the renderer in a single call, it’s night and day.

Is there any possibility that the functionality of drawing the same mesh-and-material geometries in a single call will be implemented in the engine?

I’m new to 3D stuff and maybe I’m missing something - please correct me if that’s the case.