Voxel Advice

I’m working on a voxel engine for a game of mine (think Voxatron, not Minecraft), and at the moment I’m using a separate box object for each voxel. Now, I know this is silly, so I’m looking for advice on how to do it better.



I’m going to try using a custom mesh, but I can’t figure out how to shade each triangle separately (as opposed to each vertex). They’ll all have the same material but different colors. Ideally, I’ll be able to set the color and transparency of each face I generate.



Any thoughts?

http://code.google.com/p/jme3-voxel/

http://code.google.com/p/bloxel/

GitHub - ahoehma/bloxel-engine

and many many more :slight_smile:

The same material with a different color per face will not work (I guess).

  • If you use different materials for different “voxels”, then you can change the color per voxel type.
  • Or you can use one material with an image atlas (create an image containing all your colors), then set the texture coordinates in the mesh to point at the right color.
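For the atlas route, the UV math is simple: each color occupies one cell of an N×N texture, and sampling the center of the cell avoids bleeding from neighboring colors. A minimal sketch (the `ColorAtlas` class and its layout are hypothetical, not jME API):

```java
// Sketch: mapping a color index to texture coordinates in an N x N color atlas.
// Sampling the center of each cell avoids bleeding from neighboring colors.
class ColorAtlas {
    private final int cellsPerRow; // atlas holds cellsPerRow x cellsPerRow colors

    ColorAtlas(int cellsPerRow) {
        this.cellsPerRow = cellsPerRow;
    }

    /** Returns {u, v} for the center of the atlas cell holding colorIndex. */
    float[] uvFor(int colorIndex) {
        int col = colorIndex % cellsPerRow;   // cell column in the atlas
        int row = colorIndex / cellsPerRow;   // cell row in the atlas
        float cell = 1f / cellsPerRow;        // normalized cell size
        return new float[] { (col + 0.5f) * cell, (row + 0.5f) * cell };
    }
}
```

Every vertex of a face would then get that face’s cell-center UV, so the whole face samples one flat color.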

Hmm, wouldn’t setting texture coordinates mean that I’d have to duplicate vertices in order to give separate faces different properties? Also, all of those engines seem to be Minecraft-style, not the Voxatron style I’m looking for. I’ve got a voxel buffer that renders into a series of objects and is meant to work much like a pixel buffer; it won’t actually have any physics or other interactions.



In addition, is there some way to disable occlusion culling while leaving backface culling working? Or at least disable occlusion culling when the occluding triangles are even slightly transparent?

Hmm, wouldn’t setting texture coordinates mean that I’d have to duplicate vertices in order to give separate faces different properties?


If you want to color every face separately, tripling the vertices is the only way to go, I guess.
With
[java]mat = new Material(ManagerManager.getAM(), "Common/MatDefs/Light/Lighting.j3md");
mat.setBoolean("UseVertexColor", true);[/java]
you can tell the material to use the vertex colors. That's the way I'm doing it in my game right now.

Well yes, assuming that no vertices share both position and color at the same time, each face needs separate ones.

(For a box this is not completely true: for example, the two triangles of one side can share the two vertices of their common edge.)

(Though if you don’t have a large scene, I wouldn’t start caring about this stuff too much.)
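Concretely, per-face coloring with duplicated vertices amounts to writing the same RGBA value for all three vertices of each triangle. A minimal sketch of building such a color buffer (plain Java arrays; `expandFaceColors` is an illustrative helper, not jME API):

```java
// Sketch: building a per-face color buffer by giving every triangle its own
// three vertices, so each face carries an independent RGBA color.
class FaceColorBuffers {
    /**
     * Expands one RGBA color per face into one RGBA color per vertex
     * (3 vertices per triangle, 4 floats per color).
     */
    static float[] expandFaceColors(float[][] faceColors) {
        float[] out = new float[faceColors.length * 3 * 4];
        int i = 0;
        for (float[] rgba : faceColors) {
            for (int v = 0; v < 3; v++) {   // same color for all 3 vertices
                out[i++] = rgba[0];
                out[i++] = rgba[1];
                out[i++] = rgba[2];
                out[i++] = rgba[3];         // alpha gives per-face transparency
            }
        }
        return out;
    }
}
```

The resulting array would go into the mesh’s Color buffer, alongside the tripled position buffer.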

Thank you all, I got the renderer working; it renders the voxel buffer into a mesh that can then be shipped off to the GPU. At the moment, I’m looking to optimize my renderer. It can handle some 200,000 voxels at 12 fps on my machine, but for my plans to be even remotely viable I need a speedup of at least an order of magnitude.



My render loop is as follows:



  • Loop through all voxels.
  • For each adjacent voxel, check if the face in that direction needs to be rendered.
  • If it does, add vertices, triangles, and colors to various ArrayLists.
  • Convert the ArrayLists to arrays.
  • Clear the mesh buffers.
  • Send the arrays to the mesh buffers.
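A sketch of the culling test at the heart of that loop, over a boolean occupancy grid (names are illustrative; a real mesher would append vertex data where this version just counts faces):

```java
// Sketch: a face is emitted only when the neighboring cell is empty
// or outside the grid.
class VoxelMesher {
    static final int[][] DIRS = {
        {1, 0, 0}, {-1, 0, 0}, {0, 1, 0}, {0, -1, 0}, {0, 0, 1}, {0, 0, -1}
    };

    /** Counts the faces that would be generated for the given grid. */
    static int countVisibleFaces(boolean[][][] solid) {
        int sx = solid.length, sy = solid[0].length, sz = solid[0][0].length;
        int faces = 0;
        for (int x = 0; x < sx; x++)
            for (int y = 0; y < sy; y++)
                for (int z = 0; z < sz; z++) {
                    if (!solid[x][y][z]) continue;
                    for (int[] d : DIRS) {
                        int nx = x + d[0], ny = y + d[1], nz = z + d[2];
                        boolean neighborSolid = nx >= 0 && nx < sx
                                && ny >= 0 && ny < sy
                                && nz >= 0 && nz < sz
                                && solid[nx][ny][nz];
                        if (!neighborSolid) faces++; // would emit vertices/colors here
                    }
                }
        return faces;
    }
}
```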



Are there any simple things I’m missing? Any suggestions?



Also: the whole mesh disappears if the camera gets too close to it. Is there any way to make it not do that? I don’t need to check for collisions with the mesh, just get the camera right up next to it.

One really big performance tip: use GNU Trove. Its TFloatArrayList is much faster and smaller than ArrayList&lt;Float&gt;. Just Google it; it’s easy to find.



Another option is to avoid those lists entirely. I don’t know whether it’s faster in your case to first calculate how many faces you will get and then write everything into the buffers directly, or to use a list and then create a buffer of the fitting size. It would be interesting to test.
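The “count first, then fill” idea might look like this (stdlib `FloatBuffer` for illustration; in jME you would likely allocate a direct buffer instead, e.g. via `BufferUtils.createFloatBuffer`):

```java
import java.nio.FloatBuffer;

// Sketch: a first pass counts the data, then a FloatBuffer of exactly the
// right capacity is filled directly, avoiding ArrayList<Float> boxing and
// buffer reallocation.
class TwoPassFill {
    /** Writes one x-position per solid cell, as a stand-in for real vertex data. */
    static FloatBuffer fill(boolean[] solid) {
        int count = 0;
        for (boolean s : solid) if (s) count++;       // pass 1: size the buffer

        FloatBuffer buf = FloatBuffer.allocate(count); // exact capacity
        for (int x = 0; x < solid.length; x++)
            if (solid[x]) buf.put(x);                  // pass 2: write directly
        buf.flip();                                    // ready for reading
        return buf;
    }
}
```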



If you have the voxel array (or a similarly easy-to-search structure) and know your camera’s position, you can detect when the camera is inside your structure and then reset it to its previous position.
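A minimal sketch of that check, assuming axis-aligned voxels of uniform size, a grid anchored at the origin, and no rotation on the spatial (all names here are illustrative):

```java
// Sketch: if the camera's new position falls inside a solid voxel,
// keep the previous position instead.
class CameraClamp {
    static float[] clamp(boolean[][][] solid, float voxelSize,
                         float[] prevPos, float[] newPos) {
        int x = (int) Math.floor(newPos[0] / voxelSize);
        int y = (int) Math.floor(newPos[1] / voxelSize);
        int z = (int) Math.floor(newPos[2] / voxelSize);
        boolean inside = x >= 0 && x < solid.length
                && y >= 0 && y < solid[0].length
                && z >= 0 && z < solid[0][0].length
                && solid[x][y][z];
        return inside ? prevPos : newPos; // reject the move if inside a voxel
    }
}
```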

@jelloraptor said:
thank you all, i got the renderer working, and render the voxelbuffer into a mesh that can then be shipped off to the GPU. At the moment, I'm looking to optimize my renderer. […]
Also: the whole mesh disappears if the camera gets too close to it. Is there anyway to make it not do that?


If you regenerate the mesh in every render loop... don't do that. :)

What's the final triangle and vertex count when you are done? I mean, what does the stats display say?

Once the mesh is generated, rendering it should be your only bottleneck.

Oh, and don't forget to update the mesh bounds... which may be why it disappears.

Also, you can compare your performance to some of the other JME-based voxel engines/games and see if you are a lot slower. That might indicate whether it’s just your hardware or whether your algorithm has room for improvement.



Mythruna is probably the biggest performance pig in that department. If you are slower than that then you have issues. :slight_smile:

Partition the space into chunks, and only regenerate a chunk when it changes. That should considerably lower your performance needs. (Bonus: only add faces that are not blocked by other voxels and not inside a closed space, like the interior of a hollow cube.)
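A sketch of the “regenerate only changed chunks” bookkeeping (the chunk size and the packed long key are assumptions, and coordinates are taken to be non-negative here):

```java
import java.util.HashSet;
import java.util.Set;

// Sketch: edits mark the containing chunk dirty; the render loop rebuilds
// only the dirty set instead of the whole world.
class DirtyChunks {
    static final int CHUNK = 32;              // assumed chunk edge length
    private final Set<Long> dirty = new HashSet<>();

    /** Call when the voxel at (x, y, z) changes. Assumes non-negative coords. */
    void markDirty(int x, int y, int z) {
        dirty.add(key(x / CHUNK, y / CHUNK, z / CHUNK));
    }

    /** Returns the chunks needing a rebuild and clears the set. */
    Set<Long> takeDirty() {
        Set<Long> out = new HashSet<>(dirty);
        dirty.clear();
        return out;
    }

    /** Packs three 21-bit chunk coordinates into one long key. */
    static long key(int cx, int cy, int cz) {
        return ((cx & 0x1FFFFFL) << 42) | ((cy & 0x1FFFFFL) << 21) | (cz & 0x1FFFFFL);
    }
}
```

Many edits inside one chunk then cost only a single rebuild per frame.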

Well, I got a good 10% improvement from switching to Trove and moving from an ArrayList of Vector3f to a Trove array list of floats. At the moment the biggest things I need to do are:


  1. Subdivide the voxBuffer into 8 portions so that I only have to process 3 faces of each voxel instead of 6 (to check whether they need to be rendered).


  2. Add ray casting to check whether a whole voxel needs to be rendered.



What I’m having trouble with is taking the camera’s position and view in world space and translating them into points in voxel space (first dealing with the fact that the spatial it’s attached to might itself have a transform, and that all the voxels in a voxBuffer have a particular non-1 size). The various functions that (I think) return the frustum position in world space (e.g. cam.getFrustumFar()) all return floats, and I have no bloody clue how those map to positions in world space. (The world space → voxel space transform I can do.)
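For the world space → voxel space step, assuming the spatial carries only a translation and a uniform scale (jME’s `Spatial.worldToLocal()` should handle the general case, including rotation), the mapping is: undo the spatial’s transform, divide by the voxel size, and floor. A sketch with illustrative names:

```java
// Sketch: mapping a world-space position (e.g. the camera's) to integer
// voxel indices, assuming translation + uniform scale only, no rotation.
class WorldToVoxel {
    static int[] toVoxel(float[] worldPos, float[] spatialTranslation,
                         float spatialScale, float voxelSize) {
        int[] v = new int[3];
        for (int i = 0; i < 3; i++) {
            // undo the spatial's transform to get a local-space coordinate
            float local = (worldPos[i] - spatialTranslation[i]) / spatialScale;
            // then quantize by the voxel size
            v[i] = (int) Math.floor(local / voxelSize);
        }
        return v;
    }
}
```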

Are you generating your voxels every frame? If you can find a way to avoid that it will be the single biggest performance improvement you get.



Also, generating them based on what the camera will see is going to take more time than just letting the GPU sort that out. When people talk about not generating the invisible faces they mean the faces between two solid blocks.



In this area, the single most expensive thing is sending data to the GPU. You want to avoid doing that as much as possible. The GPU can handle millions of vertexes as long as you aren’t resending the buffers every frame.



Break your world up into reasonably sized chunks (I use 32x32x32, and I think Minecraft uses 16x16x256); that keeps the object count relatively low while keeping the scene graph well balanced. Then only rebuild those chunks when the data changes.
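The chunk bookkeeping for 32x32x32 chunks splits a global voxel coordinate into a chunk index and a local offset; `Math.floorDiv`/`Math.floorMod` keep the split correct even for negative coordinates:

```java
// Sketch: converting a global voxel coordinate into a chunk index and the
// offset within that chunk, for a 32^3 chunk size (the size is an assumption).
class ChunkCoords {
    static final int SIZE = 32;

    /** Which chunk the voxel coordinate v falls into. */
    static int chunkOf(int v) { return Math.floorDiv(v, SIZE); }

    /** The voxel's offset within its chunk, always in [0, SIZE). */
    static int localOf(int v) { return Math.floorMod(v, SIZE); }
}
```

Applied per axis, this gives the (chunkX, chunkY, chunkZ) of the scene-graph node to rebuild and the (localX, localY, localZ) inside its mesh.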



If your scene graph is well balanced then frustum culling takes care of the rest.



If you want to make it look like the world is scrolling around in some finite box (I’ve only seen the pictures of voxatron, I haven’t played it) then do that with a shader trick or with some well sorted invisible boxes that fill the z-buffer. (which is what I might do in your place)

Well, the problem is that even on the best of days, my plans for how the game works involve lots of constantly changing mesh data, and while I will definitely implement chunking and caching, there needs to be a certain performance level even before those are added.



Plus, a quick check points to mesh generation being horrendously time-intensive, and being able to frustum cull and ray cast will cut that down a ton, while allowing me to scale mesh sizes up a lot.



Edit: A lot of the reasoning behind treating the voxel buffer as a screen instead of a set of meshes is not that it’ll end up in the final game, but that I can learn from and optimize a lot of facets of the whole deal in a mode that’s independent of the content rendered. None of the current voxel engines provide what I need, and building one of my own in stages lets me learn things as I go along.

Thanks a lot guys, this is good advice.

@jelloraptor: I see that I’m not the only one trying to make a Voxatron-like engine with jME3.

But the hardest part will be jBullet performance with lots of cubes.