Questions on Spatial Clone

Is there a way to send vertex/index/etc. information to OpenGL once and have that same object undergo different transformation matrices, or do you have to resend the vertex/index information for each different transformation matrix you want to use?

I was thinking of using a MakeClone class: users would pass parameters into a HashMap, and the class would query the HashMap to decide what to do. Something like

MakeClone mc = new MakeClone();

Spatial s = mc.makeCopy(milkshapeModel);
What I could do is just have s and milkshapeModel use the same index/texture/color buffers, and share the same TextureState/MaterialState. Of course, this won’t be the fastest way to do it if there is a way to somehow send the index information once and use it with two meshes.
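A minimal sketch of that buffer-sharing idea, assuming a made-up Mesh class (the names here are hypothetical stand-ins, not jME API): the clone references the same buffer object as the original, so only the transform is per-clone.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

// Hypothetical stand-in for a TriMesh: cloning shares the buffer
// references (no data copy) while each clone keeps its own transform.
public class CloneSketch {
    static class Mesh {
        FloatBuffer vertices;                    // shared between clones
        float[] translation = new float[3];      // per-clone transform

        static Mesh cloneOf(Mesh source) {
            Mesh copy = new Mesh();
            copy.vertices = source.vertices;     // share the reference
            return copy;
        }
    }

    public static void main(String[] args) {
        Mesh original = new Mesh();
        original.vertices = ByteBuffer.allocateDirect(9 * 4)
                .order(ByteOrder.nativeOrder()).asFloatBuffer();

        Mesh clone = Mesh.cloneOf(original);
        clone.translation[0] = 5f;               // clones may move independently...
        System.out.println(clone.vertices == original.vertices); // ...but share data
    }
}
```

This cuts memory as described, but by itself each mesh would still push the same buffer to the card on every draw.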

SOoo… no ideas/input?

Sorry, missed this post.

Yes, you can send the vertices only once. This is how the old CloneNode worked; however, the vertices must be the same (no animation differences between the clones).

GL11.glVertexPointer(3, 0, t.getVerticeAsFloatBuffer());

will be used until you call it again with different data.

You can change things like the rotation, orientation, and scale of each clone, but not the vertices themselves.

Because the next call to glVertexPointer overwrites the earlier one, all clones would have to be rendered together.
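A sketch of that render-all-clones-together constraint, with counters standing in for the real GL calls (the method names here are made up for illustration): the vertex data is bound once for the whole group, then each clone gets one draw with its own transform.

```java
import java.util.ArrayList;
import java.util.List;

// Models the batching approach: one vertex-pointer bind (what
// glVertexPointer would be in the real renderer) followed by one
// draw call per clone, back to back.
public class BatchedClones {
    static int pointerBinds = 0;   // stands in for glVertexPointer calls
    static int drawCalls = 0;      // stands in for glDrawElements calls

    static void bindVertexData(float[] vertices) { pointerBinds++; }

    static void drawWithTransform(float[] worldMatrix) { drawCalls++; }

    public static void renderClones(float[] vertices, List<float[]> transforms) {
        bindVertexData(vertices);          // sent once for the whole group
        for (float[] m : transforms) {
            drawWithTransform(m);          // one draw per clone
        }
    }

    public static void main(String[] args) {
        List<float[]> transforms = new ArrayList<>();
        for (int i = 0; i < 4; i++) transforms.add(new float[16]);
        renderClones(new float[9], transforms);
        System.out.println(pointerBinds + " bind, " + drawCalls + " draws");
    }
}
```

The downside discussed below follows directly from this shape: the clones form one block in the draw order, which fights the RenderQueue.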

Of course, this could be something that I just didn’t have enough advanced knowledge of and there could be other ways.

You could probably just reuse VBOs.

So I’d have to render the copies before I render everything else, but that may cause problems with the render queue…

Not necessarily. In your clone class you’d just need to override the get and set VBOXXXXXXID and set/get buffer methods to point to some shared location. Then no matter the order drawn, they’d be OK.

The only trouble is that if Clones rely on VBO, we will lock out cards without VBO support from making use of clones.

Well, not necessarily, if he implemented both of the methods I mentioned above, since it would still be able to fall back on the shared float buffers.
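A sketch of that override-with-fallback idea, assuming hypothetical accessor names loosely modeled on jME’s TriMesh (none of this is the real API): the clone redirects both the VBO-id lookup and the plain buffer lookup to a shared target, so either render path hits the same source.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

// The clone overrides both accessors: with VBO support the renderer
// sends only the shared id; without it, the renderer falls back to the
// one shared FloatBuffer. Either way there is a single data source.
public class SharedVboSketch {
    static class Mesh {
        FloatBuffer vertexBuffer;
        int vboVertexID = -1;   // -1 means "no VBO allocated"

        int getVBOVertexID() { return vboVertexID; }
        FloatBuffer getVertexBuffer() { return vertexBuffer; }
    }

    static class CloneMesh extends Mesh {
        final Mesh target;
        CloneMesh(Mesh target) { this.target = target; }

        @Override int getVBOVertexID() { return target.getVBOVertexID(); }
        @Override FloatBuffer getVertexBuffer() { return target.getVertexBuffer(); }
    }

    public static void main(String[] args) {
        Mesh original = new Mesh();
        original.vertexBuffer = ByteBuffer.allocateDirect(36)
                .order(ByteOrder.nativeOrder()).asFloatBuffer();
        original.vboVertexID = 7;   // pretend the card handed us this id

        CloneMesh clone = new CloneMesh(original);
        System.out.println(clone.getVBOVertexID());   // shared id
        System.out.println(clone.getVertexBuffer() == original.getVertexBuffer());
    }
}
```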

Ok, then maybe I’m misunderstanding. If it falls back out of VBO because it’s not supported, then it would be rendered normally and wouldn’t be a Clone.

What I understood the question to be is: Is there a way to send vertex data to the video card once and use that vertex data multiple times for multiple objects.

But that would have to be sent twice, right?

The shared nature of the overridden methods would provide the cloning behavior… The VBO support would allow the information to be sent once and referenced from then on. If there was no VBO support it would have to send the information each time but it would be sending the information from a singular shared source. I guess I was focusing more on the Clone part of the discussion, sorry if I’m off target here.

So without VBO support, I’d have to resend the buffer to LWJGL each time, but would only have to use one buffer. Or I could ignore how the RenderQueue works for Clone and just render all the clone objects at once.

With VBO support, I’d just send the VBO ID, and the render queue could work correctly.

Yeah, basically :slight_smile: If you ignore the render queue you may run into some trouble with the skybox or other things you want behind the clones, but probably not.

What would people think if I put a check in with buffers similar to what I proposed with RenderStates? When TriMesh tries to send an index, color, etc. array or VBO, it would first check whether it is equal to the last value it sent. If it is, don’t send it; if it isn’t, send it. This would go directly into TriMesh rendering.

This way, I wouldn’t need to create a new Clone class. People would just use the TriMesh class I give them, and the render code inside TriMesh would remember if it needs to resend information or not.

I’m going to need a response to the last part before I work on Clone. It’s either: save what TriMesh has sent and don’t repeat it in TriMesh, create a special class that renders in its own special way, or some other idea…

Ok, I’m going to step back for a moment and ask a basic question (I’m starting to get a little confused)… What is it exactly we want with Clone Node?

  1. Do we want to send data to the card one time, and render something using that same data multiple times?
  2. Do we want to store vertices that can be reused (in main memory) so we can load a model once but render it multiple times (send the data to the card every time we render, but it’s the same data so we save on memory)?
  3. Do we want to do something different?

The old Clone Node (the one that sucked :stuck_out_tongue: ) sent the vertex, color, texture, normal, and index data to the card. It then went through all the Clones, set the world matrix, and rendered the data. So it was rendering multiple things using the same data. But it had many cons, mostly due to its implementation.

I just want to make sure I don’t have any confusion.

I want a solution that does all the following:

  1. Works with RenderQueue
  2. Stores information once to cut down on memory
  3. Tries to send the least amount of information

The old system did not work with the render queue because all the Clones were rendered at once.

If I simply have the two TriMeshes reference the same vertex buffer, then yes, it will cut down on memory, but the buffer will get sent each time render is called on the TriMesh.

What if LWJGLRenderer.draw(TriMesh) remembered the last vertex buffer that was sent? Then the next time a draw happens, if the last buffer and the one you’re trying to send are the same, skip that step. So if the person is rendering with VBO, don’t resend VBO information if the VBO IDs are the same. If the person is rendering without VBO, don’t resend buffer information if the buffers are the same. This solution would satisfy all 3 requirements, assuming it will work.
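A sketch of that last-sent check, with the GL work reduced to a counter (sendIfNeeded and the fields are hypothetical stand-ins for logic inside LWJGLRenderer.draw(TriMesh)): the renderer remembers the last buffer (or VBO id) it actually sent and skips the upload when handed the identical one again.

```java
// Caches the last vertex source sent to the card; a reference/id
// equality check decides whether the upload can be skipped.
public class LastSentCache {
    static Object lastVertexSource = null;  // last buffer or VBO id sent
    static int sends = 0;                   // uploads actually performed

    static void sendIfNeeded(Object vertexSource) {
        if (vertexSource == lastVertexSource) {
            return;  // same data as the previous draw: skip the resend
        }
        lastVertexSource = vertexSource;
        sends++;     // real code would call glVertexPointer or bind the VBO
    }

    public static void main(String[] args) {
        Object shared = new Object();  // stands in for a shared FloatBuffer
        Object other  = new Object();  // an unrelated mesh's buffer
        sendIfNeeded(shared);  // sent
        sendIfNeeded(shared);  // skipped: clone drawn right after the original
        sendIfNeeded(other);   // sent: different mesh
        sendIfNeeded(shared);  // sent again: the cache only remembers the last
        System.out.println(sends);
    }
}
```

Note the last line of main: the cache only pays off when clones land next to each other in the draw order, which is what the queue-sorting idea below this is about.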
The old system did not work with the render queue because all the Clones were rendered at once.

Well, there were reasons that it sucked before the RenderQueue existed.

Mostly, what people have wanted is a way to store clones of animated data and allow the animations to be at different points, but still use the same model data. Apparently WildTangent does this and I know DirectX does this. How we can do this, I'm not sure.

1) Works with RenderQueue

Each Clone will need to be a distinct node in the scenegraph.

2) Stores information once to cut down on memory

We can share the buffers pretty easily, without too much modification.

3) Tries to send the least amount of information

If a clone shares a buffer with its left neighbor in the queue, it doesn't need to send that buffer (i.e. glVertexPointer, glNormalPointer, etc).

So, referencing the same buffers can be done the way it is now, perhaps with a convenience method or class to help with that.

Sending the buffers the least number of times: I could see a check before rendering, but what if the RenderQueue were able to sort them in such a way that, if z ordering doesn't matter for a group of objects (all opaque), the shared buffers are moved together, and then set this flag?

We’d like to have a clone for our NWN models. The problem is that the model is built of nodes and trimeshes. The nodes at various levels contain the rotations, translations, and so forth. Ideally the cloning process would allow the trimeshes to be clones that reference the same VBO ID while the nodes remain as-is.

Something to think about anyhow…

After talking with Renanse, the RenderQueue handling this may be a good idea, at least worth more discussion.

The RenderQueue can group by z ordering as well as group clones. So the RenderQueue will ensure that data is sent to the card as little as possible.

If the same buffers are used, comparisons by the render queue would be a simple int compare.
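That int compare could look like the following sketch; QueuedMesh and its fields are made up for illustration, standing in for whatever entry type the RenderQueue holds.

```java
import java.util.Arrays;
import java.util.Comparator;

// For opaque objects, where z ordering is a performance hint rather than
// a correctness requirement, sorting by the shared VBO id puts clones
// next to each other so the last-sent check can skip redundant uploads.
public class CloneGroupingSort {
    static class QueuedMesh {
        final String name;
        final int vboVertexID;   // clones of the same model share this id
        QueuedMesh(String name, int vboVertexID) {
            this.name = name;
            this.vboVertexID = vboVertexID;
        }
    }

    // The comparison between two shared buffers is just an int compare.
    static final Comparator<QueuedMesh> BY_VBO_ID =
            Comparator.comparingInt(m -> m.vboVertexID);

    public static void main(String[] args) {
        QueuedMesh[] opaque = {
            new QueuedMesh("tree1", 7),
            new QueuedMesh("rock", 3),
            new QueuedMesh("tree2", 7),   // clone of tree1: same VBO id
        };
        Arrays.sort(opaque, BY_VBO_ID);   // groups the two trees together
        for (QueuedMesh m : opaque) {
            System.out.println(m.name);
        }
    }
}
```

A real queue would fold this into its existing opaque-bucket sort rather than replace it, but the per-pair cost really is a single int comparison.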