Need help in understanding how JME handles geometries

I hope I’m stating this correctly.



I’m working on that weather particle system… I’m trying to understand why I can create… let’s say 10,000 particles, manage them, draw the perspective vectors into an image without a hitch. But if I create a quad and a geometry for each… attach the quad as the geometry’s mesh… and never add it to the scene, memory usage goes through the roof and TANKS my machine.



I’m never updating these… just adding them as placeholders to the instance of the Particle class when it is created.



I also have a related rendering question. Is it possible to render a geometry list that is not attached to the scene? This one isn't as important as the other question… because it will be impossible to ever get that far if I can't understand what is happening above.

It's easy for a graphics card to receive a material and a lot of vertices once and draw them. If you send lots of small ones, performance goes down. Each new geometry basically means that you have to tell the graphics card what to do with it (material, location, etc.), and that is slow because the communication happens via the PCIe bus.

Edit: that's why there are the GeometryBatchFactory and TextureAtlas classes
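For reference, a minimal sketch of that route (`assetManager` and `rootNode` are assumed to come from a typical SimpleApplication; the counts and positions are arbitrary):

```java
import com.jme3.material.Material;
import com.jme3.scene.Geometry;
import com.jme3.scene.Node;
import com.jme3.scene.shape.Quad;
import jme3tools.optimize.GeometryBatchFactory;

// Many quads sharing one material, attached to a holder node...
Node holder = new Node("quads");
Material mat = new Material(assetManager, "Common/MatDefs/Misc/Unshaded.j3md");
for (int i = 0; i < 1000; i++) {
    Geometry g = new Geometry("quad-" + i, new Quad(1, 1));
    g.setMaterial(mat);
    g.setLocalTranslation(i % 32, i / 32, 0);
    holder.attachChild(g);
}
// ...then collapsed into as few geometries as possible (one per material):
GeometryBatchFactory.optimize(holder);
rootNode.attachChild(holder);
```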

So… (this is for my own understanding so I can select the right direction to try first)



If I were to load a sprite image, perform a perspective transform, and write it to the image buffer of the overlay image (which is passed to the shader to blend with the scene… well… among other things: transparency mapping, blur, etc.), is that going to be faster (assuming the sprite image is small) than swapping quads in and out and rendering those?



I'm really sorry for the stupid questions… I'm just a little confused about which direction to proceed in. A composite image was my original idea, but I keep reading things that lead me to believe that the GPU can do it faster… but the CPU seems to choke on the quads/geometries.



Hope my understanding here is correct. If not, do let me know… and thanks for the prompt answer!

If you want to render thousands of quads then batch them.



Put another way… let’s say the GPU is some person who lives on the other side of the planet. Let’s say you want to send them some documents and they are supposed to read them and reply yes or no to each line.



Which do you think would be faster…



Unbatched way:

  1. open your email program
  2. type in email address
  3. attach document n
  4. click send
  5. close e-mail application
  6. if document index < total goto 1



Batched way:

  1. open your e-mail program
  2. type in email address
  3. attach all documents
  4. click send



The GPU can crunch through millions of triangles without blinking. The scene graph can only handle a thousand objects or so before performance falls off the edge.

@pspeed I do get what you're saying, and I looked into this a bit… until I tried to understand how to handle textures for a geometry that has, say, 1000 separately placed quads batched into a single geometry. How do you map the texture properly… say, for even two of them?



On a side note, I think I saw that you can have named sub-geometries in a batched geometry so you can handle each one separately. Is that the case?

Does each quad have its own texture, or do they all have the same texture?



If the first, then an atlas can help. If the second, then I'm not sure what the issue is. Every vertex has its own texture coordinates, therefore every quad has its own texture coordinates for its corners.
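To make that concrete, here is a sketch of pointing one quad's texture coordinates at a sub-region of an atlas (the region values are made up):

```java
import com.jme3.scene.VertexBuffer.Type;
import com.jme3.scene.shape.Quad;

Quad quad = new Quad(1, 1);
// A Quad's default texcoords cover the whole texture: (0,0)-(1,1).
// To sample only one atlas cell, overwrite them. Say the cell spans
// u in [0.25, 0.5] and v in [0.5, 0.75] (hypothetical values):
float u0 = 0.25f, u1 = 0.5f, v0 = 0.5f, v1 = 0.75f;
quad.setBuffer(Type.TexCoord, 2, new float[] {
    u0, v0,   // bottom-left corner
    u1, v0,   // bottom-right corner
    u1, v1,   // top-right corner
    u0, v1    // top-left corner
});
```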

@pspeed Right, right… so the texture needs to be applied to the new quad/geometry prior to batching, and the UV coords will transfer when it is added to the batched geometry. Oh… and to answer your question… they would all have the same texture.



Quick brain dump to ensure I understand correctly and don’t have to keep bugging people.


  1. Create a single geometry for batching.
  2. Create the new quad/geometry only when needed.
  3. Apply the texture.
  4. Batch into the initial geometry that is being displayed in the scene.
  5. Remove the named vertex groups (sub-geometries) when they are no longer needed.



Did I miss anything? Or is this the general idea?
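A minimal sketch of steps 1-4, assuming GeometryBatchFactory.mergeGeometries and one shared material (`activeCount`, `particleMat`, and `batchedGeom` are hypothetical names); step 5 is the catch, as the comment notes:

```java
import java.util.ArrayList;
import java.util.List;
import com.jme3.scene.Geometry;
import com.jme3.scene.Mesh;
import com.jme3.scene.shape.Quad;
import jme3tools.optimize.GeometryBatchFactory;

// 1-3: create quads only when needed, all sharing the one particle material.
List<Geometry> live = new ArrayList<>();
for (int i = 0; i < activeCount; i++) {      // activeCount: hypothetical
    Geometry g = new Geometry("p-" + i, new Quad(1, 1));
    g.setMaterial(particleMat);              // particleMat: hypothetical shared material
    g.setLocalTranslation(i, 0, 0);          // placed wherever the particle belongs
    live.add(g);
}
// 4: merge them into the single mesh shown by the geometry in the scene.
Mesh merged = new Mesh();
GeometryBatchFactory.mergeGeometries(live, merged);
merged.updateBound();
batchedGeom.setMesh(merged);                 // batchedGeom: the one scene Geometry
// 5: a merged mesh has no removable named groups; dropping one quad means
// rebuilding the merged mesh without it.
```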

Ok… another question. If I understand this correctly…



GeometryBatchFactory.optimize(node);



will merge all contained geometries into a single geometry. This is a good thing!



However, if you need to remove one of those geometries at a later time, how do you know which VertexBuffer is which? /boggle I see you can retrieve them by key (an int… I'm assuming this is the index), but if GeometryBatchFactory.optimize(node) doesn't return the keys, how do you know? I would hate to have to iterate through them and compare locations… yikes… that seems like a bad idea.



I guess I could get VertexBuffers.size() from the geometry and store the result minus 1 in the Particle. Maybe that's the correct way of doing it for single merges… and in my case that is always the case. But what if you're batching 17 items at once? How would you know what ended up where? If all geoms have the same vertex count, you can't use that.



Lost…

There is also BatchNode, which still manages the individual sub-geometries but batches them for rendering.
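In sketch form (assuming jME3's BatchNode; `rootNode` and `particleMat` are placeholders as before):

```java
import com.jme3.scene.BatchNode;
import com.jme3.scene.Geometry;
import com.jme3.scene.shape.Quad;

BatchNode batchNode = new BatchNode("particles");
rootNode.attachChild(batchNode);

// Sub-geometries are attached as ordinary, named children...
Geometry g = new Geometry("p-42", new Quad(1, 1));
g.setMaterial(particleMat);        // one shared material, hypothetical
batchNode.attachChild(g);
batchNode.batch();                 // (re)builds the internal batched buffers

// ...and can still be transformed afterwards; the batch updates its vertexes:
g.setLocalTranslation(1, 2, 3);

// Removing one later: detach it, then batch again.
batchNode.detachChild(g);
batchNode.batch();
```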


That’s what I’m looking for. Thanks so much!!



Oh wait… will I still see the same memory issue as before? I was never passing the geometries to the GPU on my last attempt… the CPU seemed to be choking on the number of geoms I created (though… this will still be considerably fewer).

I believe the batch node keeps all of the sub-geometries in big buffers and just remembers where they are in those buffers. So if a particular sub-geometry is moved, then it moves the vertexes in the buffer.

How many geometries are you creating anyway? Are they all just simple quads?

Just a simple question, because this thread is very interesting for beginners in OpenGL such as me.

Are geometries sent to the GPU every frame? I believed the video card kept the geometries in memory and you just had to send the translations and rotations. But apparently only textures are kept in video memory, right?

Geometry is also kept on the GPU unless you change the data. Same as for a texture.

The data stays, yeah. But communication still has to happen via the bus. When you tell OpenGL to “render that geometry with that material at that location”, that's what costs you.

@pspeed Well… it would depend on how many particles the emitter is set to handle. But let's say it is set to 10,000… I would say, on average, between distance culling and the fact that only so many particles would actually be in view of the camera to begin with, you would probably see somewhere between 500 and 1,000 quads on any given frame. I can verify this… but I'm thinking my guess is fairly close to accurate. So in the end, the single geom would have no more than 4,000 vertexes.

I'm definitely learning more than I thought I would going through this exercise… and it is teaching me a TON about optimizing the game I was working on before I got side-tracked with this. When you say “unless the data changes”, do you mean the number of vertices, or does a position/rotation/scale/etc. change count as well?

The point is that it can be used like normal Geometry objects but has the performance (in terms of the things we talked about) of a large mesh. So yeah, the location is updated as well.

If you send a static buffer of 4000 vertexes to the GPU then it stays there.



If you modify those vertexes then it gets resent.



If you send a 1024x1024 texture to the GPU then it stays there.



If you modify that 1024x1024 texture then it gets resent.
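For instance, rewriting part of a position buffer marks it dirty and triggers that resend (a sketch; `mesh` is an assumed existing Mesh and the values are arbitrary):

```java
import java.nio.FloatBuffer;
import com.jme3.scene.VertexBuffer;

// Move vertex 0 of an existing mesh.
VertexBuffer vb = mesh.getBuffer(VertexBuffer.Type.Position);
FloatBuffer fb = (FloatBuffer) vb.getData();
fb.position(0);
fb.put(0.5f).put(0.0f).put(0.0f);  // new x, y, z (arbitrary values)
vb.updateData(fb);                 // flags the buffer -> re-uploaded to the GPU
mesh.updateBound();                // keep the bounding volume in sync
```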



If you have one 4000 vertex geometry then there is one cull, one dispatch call to the GPU, one transform, one material application, etc…



If you have 1000 geometries then there are 1000 cull operations, (and if all are in view) 1000 dispatch calls to the GPU, 1000 transforms, 1000 material applications, etc… basically.



Bottom line:

batching is almost always full of win. When it isn't, it's usually just that the batch sizes need to be broken up. (For example, Mythruna would suck hard if the world were one giant mesh, because changes are generally localized… so I batch based on smaller areas.)

@pspeed or @normen I am so sorry for asking yet another question… but I am not finding any way to retrieve a specific geometry (a quad in this case) from the node once it has been batched. Am I missing something?



Guess I should mention I looked at the geometry (returned as “batch[0]”) and at the Mesh from that geometry… but still no luck in determining what is what.
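For what it's worth, a hedged sketch: with BatchNode the sub-geometries remain ordinary children of the node, so fetching one by name should still work; after a GeometryBatchFactory merge, though, the original geometries are simply gone.

```java
// With BatchNode, the quads stay in the node's child list even after batch(),
// so they can still be looked up by the name they were created with:
Geometry quad = (Geometry) batchNode.getChild("p-42");  // name from the sketch above
```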