Renderer modification to use single index buffer

To give a bit of backstory, here's the last post from the topic this thread originated from…

llama said:

If you have any questions with regards to changing the renderer, feel free to open up a topic in Dev General.

I've actually been revamping the rendering of TriMeshes a bit, to support VBOs for indices, but none of that is in CVS yet. You can already share a vertex buffer between several TriMeshes. But keep in mind that if you want to use VBOs, currently different VBO ids will be generated (also working on that) unless you set them manually.

However, there is currently no way to use offsets inside the index buffer, so to work with the current render structure you'd need separate index buffers for each (different) TriMesh. LWJGL actually allows you to use the position() and limit() methods of NIO Buffers to set offsets within a buffer, but because jME does rewind() before using the indexBuffer you can't use these.

Maybe if we do an indexBuffer.position(myTrimesh.getIndexBufferOffset()) instead of rewind(), where getIndexBufferOffset would return 0 by default, and you can subclass TriMesh to override it, that would be enough for you? Just trying to think ahead here…
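In plain NIO terms, the position/limit idea (and why jME's rewind() defeats it) looks like this. This is just a standalone sketch; the class and method names are mine, not jME's:

```java
import java.nio.IntBuffer;

public class IndexOffsetDemo {
    // One big index buffer holding indices for two meshes:
    // mesh A uses indices [0..5], mesh B uses indices [6..11].
    static IntBuffer sharedIndices = IntBuffer.allocate(12);

    // Select mesh B's range by setting position and limit,
    // the way LWJGL would consume it.
    static IntBuffer selectRange(IntBuffer buf, int offset, int count) {
        buf.position(offset);
        buf.limit(offset + count);
        return buf;
    }

    public static void main(String[] args) {
        selectRange(sharedIndices, 6, 6);
        // remaining() now covers only mesh B's 6 indices...
        System.out.println(sharedIndices.remaining()); // 6
        // ...but a rewind() (as jME does before drawing) throws the offset away:
        sharedIndices.rewind();
        System.out.println(sharedIndices.position()); // back to 0
    }
}
```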

The offsets aren't as much of a problem in my case, because I can use the slice() method in ByteBuffer to hand off a view of the buffer that has the correct starting position (so calls to rewind() only go back to the beginning of the slice).  The problem is in the way TriMesh calculates the number of vertices.  Right now it just grabs the capacity of the index buffer / 3.  It would be much better in my case to use the limit() method.  Or at least have a separate overridable method that calculates the number of vertices when the buffers are set.

Just some ideas to kick around.  Also, if anybody can point to or explain in general terms how the multi-pass renderer works it would be most appreciated.

If you're going to create slices of the Buffers, you could just set the limit on the new slice. The number of vertices is not based on the indexbuffer size, only the number of triangles is. Limits etc. for the vertex buffer are done with getVertexQuantity; for the indexbuffer getTriangleQuantity is used. So I think slices are a good solution :slight_smile:
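The slice-based approach can be shown with plain NIO buffers. A standalone sketch (names are mine): each slice is an independent view with its own position and limit, so jME's internal rewind() calls only go back to the slice's start:

```java
import java.nio.IntBuffer;

public class SliceDemo {
    // Carve a per-mesh view out of one shared index buffer.
    // rewind() on the slice only returns to the slice's start,
    // so rewind() calls inside the renderer are harmless.
    static IntBuffer sliceAt(IntBuffer shared, int offset, int count) {
        shared.position(offset);
        shared.limit(offset + count);
        IntBuffer view = shared.slice();
        shared.clear(); // restore the shared buffer's full range
        return view;
    }

    public static void main(String[] args) {
        IntBuffer shared = IntBuffer.allocate(30); // room for 10 triangles
        IntBuffer meshB = sliceAt(shared, 12, 9);  // triangles 4..6
        meshB.rewind();                            // goes to the slice start, not index 0
        System.out.println(meshB.capacity());      // 9 -> 3 triangles
    }
}
```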

Not that familiar with BSP, but I assume it will not create many different slices that cover the same data right? I guess it's not so much that you wanted to put everything into one big buffer, it's just that you have your data this way.

As for the passes, did you take a look at SimplePassGame yet? There's some rather simple pass implementations like OutlinePass that give a nice idea of how it works.

Yes, the real reason I have all the data in one big buffer is that all of the C++ code examples I based this on used that tactic: loading big portions of the BSP file straight into memory to keep things ordered and reduce the number of objects.

The problem code I was referring to was in TriMesh.reconstruct, where it does:

triangleQuantity = indices.capacity() / 3;

That value is used later in the render(TriMesh) method of LWJGLRenderer for drawing the triangles.  But since there is a getter and setter for that value outside of the reconstruct method, I can just override the getter to return the value I really need.
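The capacity()/limit() distinction the problem hinges on can be shown in isolation. A minimal sketch (the method names are mine; only the capacity()/3 expression comes from jME):

```java
import java.nio.IntBuffer;

public class TriangleCountDemo {
    // What TriMesh.reconstruct effectively does: derive the
    // triangle count from the buffer's full capacity.
    static int triangleCountFromCapacity(IntBuffer indices) {
        return indices.capacity() / 3;
    }

    // Using limit() instead respects a partially filled buffer.
    static int triangleCountFromLimit(IntBuffer indices) {
        return indices.limit() / 3;
    }

    public static void main(String[] args) {
        IntBuffer indices = IntBuffer.allocate(30); // room for 10 triangles
        indices.limit(9);                           // but only 3 are valid
        System.out.println(triangleCountFromCapacity(indices)); // 10 (wrong)
        System.out.println(triangleCountFromLimit(indices));    // 3  (right)
    }
}
```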

Another concern I had was the fact that some of the faces in the Quake 3 BSP are natively stored as triangle strips instead of triangles.  Right now the draw method is hard-coded to use GL_TRIANGLES.  Any chance of this being changed to be a more dynamic property?

Yes, you could probably even memory map the whole buffer to a file right? That's pretty easy with Java. Could be nice for large maps.
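Memory-mapping a file into a ByteBuffer is indeed a few lines in Java. A sketch (written in modern Java for brevity; the file contents here are a fake stand-in, not real BSP data):

```java
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;

public class MapBspDemo {
    // Map a file straight into a ByteBuffer; the OS pages it in
    // on demand, so large maps don't need an up-front copy.
    static MappedByteBuffer mapFile(Path file) throws Exception {
        try (RandomAccessFile raf = new RandomAccessFile(file.toFile(), "r");
             FileChannel ch = raf.getChannel()) {
            // The mapping stays valid after the channel is closed.
            return ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
        }
    }

    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("map", ".bsp");
        Files.write(tmp, new byte[]{'I', 'B', 'S', 'P'}); // fake magic header
        MappedByteBuffer buf = mapFile(tmp);
        System.out.println((char) buf.get(0)); // I
        Files.delete(tmp);
    }
}
```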

As for the capacity thing, you're right. And I can't think of a reason why limit() shouldn't be used there instead; after all, the net result is that a "safety net" sets the limit of the buffer to its capacity. Sounds like a bug to me.

As for GL_TRIANGLE_STRIP, maybe you can try to use CompositeMesh (extends TriMesh), which supports it. Also, you can specify an index range here, so maybe you can avoid the whole capacity/limit issue for indices too (however, the same thing is still the problem for vertices).

Thanks for pointing out CompositeMesh, it is sooo close to what I need.  The only quirky bit of difference in the Quake 3 BSP data is that color information for a vertex is stored as GL_BYTE (3 bytes instead of 3 floats).  Right now the color info is hard-coded to use GL_FLOAT for color vertex data, and that code is in the LWJGLRenderer as well.  What are the chances that could get updated to handle different color formats?  For a more general reason as to why this is a good idea: it cuts down on memory consumption. (a little :slight_smile: )

Hm yes, however right now we use a FloatBuffer, so it's not just a matter of quickly changing GL_FLOAT to GL_BYTE.

It's been mentioned before actually (since byte is more memory efficient). We'll see if we can get that in eventually, but could you work around the issue for now?
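For a sense of scale, here is the float-vs-byte difference in isolation. A plain NIO sketch, not jME code; it assumes 3 color components per vertex, as in the Quake 3 data described above:

```java
import java.nio.ByteBuffer;
import java.nio.FloatBuffer;

public class ColorFormatDemo {
    // Per-vertex RGB as floats: 3 components * 4 bytes = 12 bytes.
    static int floatColorBytes(int vertexCount) {
        return FloatBuffer.allocate(vertexCount * 3).capacity() * Float.BYTES;
    }

    // Per-vertex RGB as unsigned bytes: 3 components * 1 byte = 3 bytes.
    static int byteColorBytes(int vertexCount) {
        return ByteBuffer.allocate(vertexCount * 3).capacity();
    }

    public static void main(String[] args) {
        System.out.println(floatColorBytes(10000)); // 120000
        System.out.println(byteColorBytes(10000));  // 30000
    }
}
```

So for a 10,000-vertex map the color data shrinks by a factor of four, though it is still small next to position and texture-coordinate data.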

Since it was a pretty painful experience for me the last time I tried making a custom Renderer implementation, I am going to modify my local copy of LWJGLRenderer and CompositeMesh to handle a different type of color data.  If I get it working, I'll post it along with the code for the BSP viewer.

It shouldn't be that hard to hack in. In predrawGeometry() check if the geometry is a composite mesh:

if ((t.getType() & Spatial.COMPOSITE_MESH) != 0)

Then cast t to a CompositeMesh, and use your own alternate path for using a ByteBuffer instead of FloatBuffer (if some type of flag is set). If you don't use VBOs then this is the only part you have to change.

What gives me a headache is how to do this without messing up the nice clean API we have now, but this hack should be able to keep you going for now.

One other sticky point I came across: gl pointer functions and stride.  There really shouldn't be any reason to force the vertices to be packed.  I notice that a large portion of the LWJGLRenderer has a TON of hard-coded values in there.  Any particular reason why, other than that they are common values?
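For reference, this is what a stride buys you: with interleaved vertex data, the gl pointer functions can step over the attributes they don't care about. A standalone sketch (layout and names are mine) that does by hand what glVertexPointer's stride argument does for you:

```java
import java.nio.FloatBuffer;

public class StrideDemo {
    // Interleaved layout per vertex: x, y, z, u, v  (stride = 5 floats).
    static final int STRIDE = 5;

    // Extract just the positions, stepping by the stride --
    // the same skip-ahead that a non-zero stride tells OpenGL to do.
    static float[] positionsOf(FloatBuffer interleaved, int vertexCount) {
        float[] out = new float[vertexCount * 3];
        for (int v = 0; v < vertexCount; v++) {
            for (int c = 0; c < 3; c++) {
                out[v * 3 + c] = interleaved.get(v * STRIDE + c);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        FloatBuffer buf = FloatBuffer.wrap(new float[]{
            0, 0, 0, 0.0f, 0.0f,   // vertex 0: position + texcoord
            1, 0, 0, 1.0f, 0.0f,   // vertex 1
        });
        float[] pos = positionsOf(buf, 2);
        System.out.println(pos[3]); // 1.0 -> x of vertex 1
    }
}
```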

I also don't want to make too much of a change to the core for just one person's program.  :smiley:

Although, adding a hook in the renderer class would be nice, since I am also finding another troublesome spot: handling BSP leaf visibility.  Right now the renderer I wrote checks the BSP visibility tree every frame.  It should really only be done every time the camera moves.  Is there some kind of event or trigger that happens when the viewpoint changes?

That's all internal to the camera, I'm afraid; there's no hook. I assume there is some kind of top Node that all other elements are attached to? Maybe you could compare old and new camera values in the draw method of that Node. You can always ask the renderer for the current Camera with getCamera().
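The compare-old-and-new idea can be sketched as a simple dirty check. This is plain Java, not jME code; the fields and the recompute counter are hypothetical stand-ins for copying values from renderer.getCamera() and walking the BSP visibility tree:

```java
public class CameraDirtyCheck {
    // Last camera position we computed visibility for.
    private float lastX, lastY, lastZ;
    private boolean first = true;
    private int visibilityRecomputes = 0;

    // Call once per frame from the top Node's draw(); only redo the
    // findLeaf() + PVS walk when the viewpoint actually moved.
    void update(float camX, float camY, float camZ) {
        if (first || camX != lastX || camY != lastY || camZ != lastZ) {
            visibilityRecomputes++; // stand-in for the real visibility walk
            lastX = camX;
            lastY = camY;
            lastZ = camZ;
            first = false;
        }
    }

    int recomputes() { return visibilityRecomputes; }

    public static void main(String[] args) {
        CameraDirtyCheck check = new CameraDirtyCheck();
        check.update(0, 0, 0);
        check.update(0, 0, 0); // camera unchanged: no recompute
        check.update(1, 0, 0); // camera moved: recompute
        System.out.println(check.recomputes()); // 2
    }
}
```

A fuller version would also compare the camera's direction vectors, since turning in place can change which leaves are on screen even if PVS visibility itself only depends on position.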

2. make a wrapper class around all the Buffers we use, where we can set things like what type of Buffer it is (float, int, short, signed/unsigned byte), what type of packing it has, what offset, etc.

This is similar to a system that we have locally here. We have created a batch system that allows Geometry to hold multiple batches (allowing different states to be applied to different batches without the overhead of being a Spatial). These batches can (in the future) then be used to define how the batch should be drawn: TRI_STRIP, TRIANGLES, FAN, etc. Right now, we are using it to batch triangles that have different material properties but are part of the same mesh. Future uses of it can do what you describe.

We will be contributing this (if the community wants it), but first we have to clean up a few issues with it.

By all means, something like that would be appreciated in the engine (by me if by no one else  :slight_smile: )

I am going to go back to using the extension of Renderer that I wrote to get some working code together.  From there I would like to come up with a proposal to change DisplaySystem a bit so that it is easier to plug in a non-default renderer.

Ok, time to finally put my money where my mouth is  :-o

Here is the source:

and here are the data files:

The only thing you need to compile it are the Jme jars.

To run it, the main class is snipehunt.glutils.jme.BspTest (no args needed)

just make sure the data zip file is in your classpath.

If you could take a look at it, llama, or anybody else: the problem I am currently having is that the non-lightmap textures for the wall and floor faces are not displaying.  And if I comment out the code at line 184 (the if block after the CRAZY CODE RIGHT HERE comment), then the ceiling and floor textures load, but the walls are blank.  The walls, ceiling, and floor are supposed to be multitextured (first texture is the base, second is the lightmap), but I think I'm doing something off.

Other than that, you can see from the classes in snipehunt.glutils.jme that I had to extend DisplaySystem, Timer, Renderer, RendererType, BaseGame, AND PropertiesDialog just to have a different renderer.  I am going to be working on a more “plug-in-able” solution for this, and submitting the code changes for review.

Phew, a lot to swallow, but any feedback would be great.


Thanks a lot Bakchuda! Great to see all your code did not go to waste in the end. Of course I'll take a look at it soon… and I'll also see what changes you made to the renderer for this. Or maybe the changes mojo will release soon will be usable too…

As for your solution for a more pluggable renderer, that also sounds like a good idea to encourage people to experiment with the renderer the same way you did.

Ok, finally had some time to play around with this a bit…

First thing I did was reduce garbage creation and remove the rogue println() statement to get a decent framerate.

A lot of garbage was created by a single method due to logging, so I just disabled that logging (TexturePass.constructAndApply).

Also, you use iterators and Java 5 style for loops (which use iterators internally), so I changed those.

This only really matters in LWJGLPartitionRenderer.shaderDraw():

// draw once for each pass of the shader
ArrayList list = p.getShaderState().getTexturePasses();
TexturePass pass;
for (int k = 0, len = list.size(); k < len; k++) {
   pass = (TexturePass) list.get(k);
   // ...
}

and RootPartition.draw()

public void draw(Renderer r) {
   // Iterator i = children.iterator(); -- replaced with an indexed loop
   int currentLeaf = findLeaf(r.getCamera().getLocation());
   int currentCluster = bspLeaves[currentLeaf].cluster;
   for (int i = 0, len = children.size(); i < len; i++) {
      Partition child = (Partition) children.get(i);
      if (child != null && isPartitionVisible(currentCluster, child.getCluster())) {
         // ...
      }
   }
}

Thanks a ton for looking at this llama.

I will get to changing those iterators in my version as well, and also setting the code compliance to JDK 1.4  :smiley:

Sorry about the printlns, I probably should have done a project-wide grep for those.

Anyway, any comments about it as a whole?  I am not sure if I should bother translating the Shader processors from the "manual" way they are done now to use the real OpenGL shaders, unless there is a demand for better performance out of the animated textures.

Thanks again!

Well, look at this thread:

It was mostly the logging in that function (creating 2 native buffers each time) which is even easier to fix :slight_smile:

No real comments yet (also haven't been able to track down what causes that lightmap/texture conflict), except that this would probably speed up a lot if we can get you to use the normal rendering system again, so you can use locking. I think with the batch system we can make the renderer more flexible to support different formats (Byte, Short, Int, etc) and offsets, stride, etc.

I updated the source at this link

It has the logging removed, and all the code is now JDK 1.4 compliant (removed Java 5 for loops and generics).

Any ideas yet on the lightmap/texture issue?