Procedural voxel terrain and vegetation generation

Ohhhh so I have to write a .frag and .vert that do the color value division and then attach them to the terrain material. That makes a lot of sense actually… Those should be wrapped into the library too, if one is made!

@roleary said: Oh - you're absolutely right - good point!

And yep - it’s not easy to try to filter voxel faces between chunks (I called them blocks in my implementation - I don’t know why - but now it’s confusing to talk about chunks : ).

My approach is for each chunk to actually contain an overlap layer of one voxel all the way around. So while a chunk might (in theory) be meshed as a cube with 32 voxels on each side, the chunk held in memory contains 34 voxels on each side - with the outer layer actually composed of voxels belonging to neighbouring chunks (though not all voxel data is included in the overlap).
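A minimal sketch of that padded-chunk layout (the class and method names here are my own, not from the actual implementation): a 32-voxel chunk stored as a 34³ array, so local coordinates from -1 to 32 can be read without ever reaching into a neighbouring chunk.

```java
// Sketch of the padded-chunk idea described above. A 32^3 chunk is stored
// as a 34^3 array so that a one-voxel apron of neighbour data is directly
// addressable at local coordinates -1 and 32.
class PaddedChunk {
    static final int SIZE = 32;          // meshable voxels per side
    static final int PADDED = SIZE + 2;  // plus a one-voxel apron all around

    final byte[] voxels = new byte[PADDED * PADDED * PADDED];

    // Accepts local coordinates in -1..SIZE inclusive.
    int index(int x, int y, int z) {
        return ((x + 1) * PADDED + (y + 1)) * PADDED + (z + 1);
    }

    byte get(int x, int y, int z) { return voxels[index(x, y, z)]; }

    void set(int x, int y, int z, byte v) { voxels[index(x, y, z)] = v; }
}
```

During meshing, a face test like `get(x - 1, y, z)` then never needs a neighbour-chunk lookup, at the cost of keeping the apron in sync when neighbours change.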

This means that updates need to update the overlaps too - but I found that many chunks don’t see much activity after meshing - and that doing it this way avoids an awful lot of lookups - especially during terrain generation, meshing, and lighting updates.

In the long run, I found that the lookups were cheaper than the memory consumption. Overlapping borders buys more problems than it solves and costs a lot (calculate the memory costs yourself… and then the costs of retrieving up to 8 blocks to edit one conjoined corner [if you do it in 3D]). If it’s really an issue, perhaps you can cache some bits in with your cell data to show which sides are solid and avoid a lookup during mesh creation. 6 bits per cell is enough if you have them to spare.
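The "6 bits per cell" idea could look something like this hypothetical sketch - one bit per face recording whether the neighbour on that side is solid, so meshing never has to look the neighbour up:

```java
// Hypothetical sketch of caching neighbour solidity in 6 bits per cell
// (names and bit order are my own). Mesh generation emits a face only
// when the neighbour on that side is not solid - no lookup required.
class FaceMask {
    // One bit per face: -X, +X, -Y, +Y, -Z, +Z.
    static final int NEG_X = 1, POS_X = 2, NEG_Y = 4, POS_Y = 8, NEG_Z = 16, POS_Z = 32;

    static int setSolid(int mask, int face) { return mask | face; }

    static boolean isSolid(int mask, int face) { return (mask & face) != 0; }

    // A face is visible only when the neighbour on that side is not solid.
    static boolean emitFace(int mask, int face) { return !isSolid(mask, face); }
}
```

The cost is that edits must update the masks of the six neighbouring cells, but that is a handful of bit operations rather than chunk lookups.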

I ended up doing essentially that; it streamlined my mesh generation and I could use it for light propagation, A* searches, all kinds of stuff.

@admazzola said: Ohhhh so I have to write a .frag and .vert that do the color value division and then attach them to the terrain material. That makes a lot of sense actually.. Those should be wrapped into the library too, if one is made!

Just a .vert, really.

I just made a test with Unshaded to see if there was a net performance drop by doing this and the results are encouraging. I can post some code if anyone is interested (and perhaps will see somewhere I screwed up the benchmark).

Oh, and also - since the values are 0-255 and you’re dividing by 255 in the shader - the buffer type should actually be:

[java]VertexBuffer.Format.UnsignedByte[/java]

Hehe - that’s more like it… Sorry about that!

Even without code, here is 50,000 batched boxes using regular Unshaded.j3md and random float-based vertex colors:

Here is the same image with a modified Unshaded.vert and sending a byte-based color buffer:

The performance difference seems within the margin for error.

@roleary said: Oh, and also - since the values are 0-255 and you're dividing by 255 in the shader - the buffer type should actually be:

[java]VertexBuffer.Format.UnsignedByte[/java]

Hehe - that’s more like it… Sorry about that!

If you are passing arrays directly then just setBuffer(… new byte[] { my values }) seems to do the right thing, also.
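For what it’s worth, the saving is easy to napkin-math (my arithmetic, not from the thread): RGBA as floats is 16 bytes per vertex, as unsigned bytes it’s 4 - which is where the 400% figure later in the thread comes from.

```java
// Back-of-envelope check of the color buffer saving (my own arithmetic):
// RGBA floats cost 16 bytes per vertex, RGBA unsigned bytes cost 4.
class ColorBufferMath {
    static long colorBytes(long vertexCount, int bytesPerComponent) {
        return vertexCount * 4 * bytesPerComponent; // RGBA = 4 components
    }

    public static void main(String[] args) {
        long verts = 50_000L * 24;            // 50k boxes, 24 vertices each
        long asFloat = colorBytes(verts, 4);  // 4-byte floats
        long asByte = colorBytes(verts, 1);   // unsigned bytes
        System.out.println(asFloat / asByte); // prints 4
    }
}
```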

In the long run, I found that the lookups were cheaper than the memory consumption. Overlapping borders buys more problems than it solves and costs a lot (calculate the memory costs yourself… and then the costs of retrieving up to 8 blocks to edit one conjoined corner [if you do it in 3D]). If it’s really an issue, perhaps you can cache some bits in with your cell data to show which sides are solid and avoid a lookup during mesh creation. 6 bits per cell is enough if you have them to spare.

Well, in my case I don’t have problems editing conjoined corners - for how often that happens, which isn’t very often. I’m not performing mass edits of blocks except during terraforming - and there the leading edge is not yet terraformed and doesn’t need to be touched, while the trailing edge is already terraformed so doesn’t need to be touched again - so there are no lookups during terraforming at all. Voxels that preclude others being beside them are enforced, though.

Determining how voxels are rendered requires the amount of data I use in overlaps - for example, when a voxel faces a liquid water voxel it’s tinted blue - or when a voxel faces a transparent voxel (or even an air voxel of type 0) which is at a high temperature, it’s a little more yellow to indicate it’s hot. When a liquid voxel is beside another liquid voxel with a different compression, they average the height of the top side they share to produce a smooth surface.

The overlaps do contain less data than the actual block containing the voxel - but not needing lookups allows, in my case, for lots of nice interactions between neighbouring voxels during coloring, meshing, liquid simulations, etc.

Perhaps your cache is more performant though, which would change things - mind if I ask what you’re using? : )

@roleary said: Well, in my case I don't have problems editing conjoined corners - for how often that happens, which isn't very often, because I'm not performing mass edits of blocks except during terraforming, in which case the leading edge is not yet terraformed and doesn't need to be touched, and the trailing edge is already terraformed so doesn't need to be touched again - so there are no lookups during terraforming at all - but voxels that preclude others being beside them are enforced.

Determining how voxels are rendered requires the amount of data I use in overlaps - for example, when a voxel faces a liquid water voxel it’s tinted blue - or when a voxel faces a transparent voxel (or even an air voxel of type 0) which is at a high temperature, it’s a little more yellow to indicate it’s hot. When a liquid voxel is beside another liquid voxel with a different compression, they average the height of the top side they share to produce a smooth surface.

The overlaps do contain less data than the actual block containing the voxel - but not needing lookups allows, in my case, for lots of nice interactions between neighbouring voxels during coloring, meshing, liquid simulations, etc.

Perhaps your cache is more performant though, which would change things - mind if I ask what you’re using? : )

I use Guava’s LoadingCache… but back when I needed to do all neighbor lookups, I just grabbed all relevant chunks before doing the mesh generation so the neighbors were already right there. Now, even if I did still have to do neighbor lookups, each cell has a mask that tells me if I should bother or not.
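For readers without Guava handy, the load-on-first-access pattern that LoadingCache provides can be sketched with plain `ConcurrentHashMap.computeIfAbsent` - this is a dependency-free stand-in, not Guava itself, which adds eviction, expiry, and stats on top:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Dependency-free stand-in for the load-on-miss behaviour of Guava's
// LoadingCache: look a chunk up by key, generating it on first access.
class ChunkCache<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private final Function<K, V> loader;

    ChunkCache(Function<K, V> loader) { this.loader = loader; }

    V get(K key) { return cache.computeIfAbsent(key, loader); }
}
```

Usage would be something like `new ChunkCache<ChunkKey, Chunk>(this::generateChunk)`, with the real LoadingCache preferred once you need size limits or expiry.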

I had assumed that your overlap data would include the same data as the rest of your cells… which sounded like 6536 extra cells per chunk. With a few hundred chunks, it adds up fast, and in Mythruna memory is a constant worry for me. You are sharing a lot more info between chunks, though… so maybe it’s worth it.

Given how often geometry is generated versus my constant memory load, dumping the extra data made sense for me.

Also, just curious, what technique do you use for your AO? Is it SSAO or have you built the AO data into the geometry?

The performance difference seems within the margin for error.

Hmm… But we wouldn’t really expect an increase in framerate when an abundant resource becomes more abundant.

Shouldn’t the test be to monitor off-heap buffer allocation, and then increase the number of boxes until the framerate does drop, due to memory starting to become scarce?

At that point we should see that having 400% of the memory requirements for colors per vertex exhausts memory sooner and therefore places a lower limit on the number of boxes we can have in the scene while still maintaining a reasonable framerate… Which is the problem.

Also, just curious, what technique do you use for your AO? Is it SSAO or have you built the AO data into the geometry?

Ahh, yep - this is something I spent a lot of time changing and trying to find better ways to implement.

I bake it to the meshes - so each vertex has a value for exposure to sunlight and a value for artificial light, which are then used to modify the lightness at each vertex - and as the sun goes around, the exposure is multiplied by the sun level to get an actual sunlight value at each vertex.

The problem is that when each vertex just contains its own light variables the GPU performs bilinear interpolation across the quad - but not really, because it’s not really a quad, it’s two triangles, so the interpolation results in an ugly banding effect across the middle of the voxel face! So at the moment I’m sending in the values for all 4 vertices on each vertex in packed bytes - which is a terrible waste of memory, but allows me to do my own interpolation across the quad, resulting in nice smooth shadows.
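For the curious, the trick works because the GPU interpolates barycentrically per triangle, while a quad face really wants true bilinear interpolation from all four corners. The blend the fragment shader would evaluate looks like this (written in Java for clarity, names my own):

```java
// True bilinear interpolation over a quad: blend along one edge at the
// bottom, along the opposite edge at the top, then blend between those.
// Per-triangle barycentric interpolation cannot reproduce this in
// general, which is what causes the diagonal banding described above.
class QuadLight {
    // c00..c11 are light values at the four quad corners; u, v in 0..1.
    static float bilinear(float c00, float c10, float c01, float c11, float u, float v) {
        float bottom = c00 + (c10 - c00) * u;
        float top    = c01 + (c11 - c01) * u;
        return bottom + (top - bottom) * v;
    }
}
```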

Unfortunately though, where I apply the actual final interpolated values in the pixel shader, like yay:

[java]

vec3 lightBase = baseLight.xyz * interpolatedVertexLight; // 1 FPS

pixelColor.xyz *= lightBase; // 10 FPS!

[/java]

That second line, multiplying the pixel color by the light, costs 10 FPS!

Pff… I suppose it’s because I’m limiting the capability of the graphics card to usefully cache fragment outputs - but there it is - I’ve yet to find a less expensive way to do it (but suggestions are welcome!).

@roleary said: Hmm.. But we wouldn't really expect an increase in framerate when an abundant resource becomes more abundant.

Shouldn’t the test be to monitor off-heap buffer allocation, and then increase the number of boxes until the framerate does drop, due to memory starting to become scarce?

At that point we should see that having 400% of the memory requirements for colors per vertex exhausts memory sooner and therefore places a lower limit on the number of boxes we can have in the scene while still maintaining a reasonable framerate… Which is the problem.

The point was to determine if the extra work in the shader had an impact on raw performance, ie: is there a down side at all? Apparently, no. At least not on my GPU.

The memory benefits were already obvious to me.

It was an easy win… between the color buffer and my normal+tangent buffers, I was able to shave 30 meg or so off of Mythruna’s direct memory usage (even at a low 96 m clip). I could shave another 8 meg or so off (by napkin calcs) by converting positions, too. That’s trickier for me since not all of my vertexes are aligned in a way that makes that nice… and so I’d have to split my materials (again). I may have to see what the memory impact at 192 meter clip is before I really convince myself it’s worth the work.

Apparently most of the remaining 130 meg or so is textures and other misc crud (index buffers, etc.)

@roleary said: Ahh, yep - this is something I spent a lot of time changing and trying to find better ways to implement.

I bake it to the meshes - so each vertex has a value for exposure to sunlight and a value for artificial light, which are then used to modify the lightness at each vertex - and as the sun goes around, the exposure is multiplied by the sun level to get an actual sunlight value at each vertex.

The problem is that when each vertex just contains its own light variables the GPU performs bilinear interpolation across the quad - but not really, because it’s not really a quad, it’s two triangles, so the interpolation results in an ugly banding effect across the middle of the voxel face! So at the moment I’m sending in the values for all 4 vertices on each vertex in packed bytes - which is a terrible waste of memory, but allows me to do my own interpolation across the quad, resulting in nice smooth shadows.

Unfortunately though, where I apply the actual final interpolated values in the pixel shader, like yay:

[java]

vec3 lightBase = baseLight.xyz * interpolatedVertexLight; // 1 FPS

pixelColor.xyz *= lightBase; // 10 FPS!

[/java]

That second line, multiplying the pixel color by the light, costs 10 FPS!

Pff… I suppose it’s because I’m limiting the capability of the graphics card to usefully cache fragment outputs - but there it is - I’ve yet to find a less expensive way to do it (but suggestions are welcome!).

Just a note: FPS is not a measure. For example, a loss of 10 FPS out of 1000 is perfectly acceptable, whereas a loss of 10 out of 20 is not.

On my card, above 100 FPS, I’ve seen +/- 10 FPS just depending on whether the card is hot or not.
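The point is easier to see when converted to frame time, which is the linear measure (my arithmetic, not from the thread):

```java
// FPS deltas are misleading because frame time (ms per frame), not FPS,
// is the linear cost. Losing 10 FPS from 1000 costs ~0.01 ms per frame;
// losing 10 FPS from 20 costs 50 ms per frame.
class FrameTime {
    static double ms(double fps) { return 1000.0 / fps; }

    static double costMs(double fpsBefore, double fpsAfter) {
        return ms(fpsAfter) - ms(fpsBefore);
    }
}
```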

So your AO is essentially a by-product of your smooth lighting. I saw similar things with my smooth lighting (currently turned off)… in addition to light leaking through corners (which I was less concerned about). The triangle cross thing wasn’t as big a deal for me because corner to corner deltas were never very high. You could see it if you looked for it.

I pack my lighting similarly, it seems… though I have 4 bits each of sun, red, green, and blue light since I added colored lighting in this version of the engine. My world is not just quads, though… so I’m not sure I can use even your memory-intensive trick to fix the gradient issues.
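The "4 bits each of sun, red, green, and blue" packing could be sketched like this - a hypothetical layout, since the real engine's bit order isn't given:

```java
// Hypothetical packing of the lighting described above: 4 bits each of
// sun, red, green, and blue light in one 16-bit value per cell.
class PackedLight {
    static int pack(int sun, int r, int g, int b) {
        return (sun & 0xF) << 12 | (r & 0xF) << 8 | (g & 0xF) << 4 | (b & 0xF);
    }

    static int sun(int p)   { return (p >> 12) & 0xF; }
    static int red(int p)   { return (p >> 8) & 0xF; }
    static int green(int p) { return (p >> 4) & 0xF; }
    static int blue(int p)  { return p & 0xF; }
}
```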

I’ve taken some time off of forever tweaking my engine to actually write the game parts. :) These threads are very distracting. :)

The point was to determine if the extra work in the shader had an impact on raw performance, ie: is there a down side at all? Apparently, no. At least not on my GPU.

The memory benefits were already obvious to me.

It was an easy win…

Ahh, fair point - and, if it’s ok to ask - what format are you using for textures? I’ve got a couple of atlases, which contain the textures at various sizes, and then a bit of whizzy calculation to work out the scaled texture offset based on the distance from the camera… But I wonder about the actual image data - like maybe I could be using some more efficient format.

I’ve taken some time off of forever tweaking my engine to actually write the game parts. :) These threads are very distracting. :)

Hehe - yep - I keep getting back into the mechanics of the engine and putting off actually making a game! It’s endlessly fascinating.

Did you add rivers? I had a thought: maybe I could use a ridged version of the 2D noise I use to generate the surface terrain (which is then layered over and under with all sorts of other bits and pieces of noise) to create riverish structures - which, if the noise were configured similarly to the terrain noise, might follow fairly convincing routes… It’s only a theory, mind, I haven’t tried it yet - but I feel like rivers are really missing in my landscapes and would be awesome if they looked as they should.
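The ridged-noise idea, for reference (as untested here as in the post - `noise` is a stand-in for any signed 2D noise such as simplex): fold the signed field with an absolute value so its zero crossings become sharp ridges, then threshold the ridges into river cells.

```java
import java.util.function.DoubleBinaryOperator;

// Sketch of ridged noise for rivers. The zero crossings of a signed
// noise field form connected winding lines; 1 - |noise| turns those
// lines into value-1 ridges, and anything above a threshold is "river".
class RidgedRivers {
    // noise should return values in -1..1 (e.g. simplex noise).
    static double ridge(DoubleBinaryOperator noise, double x, double y) {
        return 1.0 - Math.abs(noise.applyAsDouble(x, y));
    }

    static boolean isRiver(DoubleBinaryOperator noise, double x, double y, double threshold) {
        return ridge(noise, x, y) > threshold;
    }
}
```

Because the rivers fall out of the same noise as the terrain, no cross-chunk river tracking is needed - exactly the "basically free" property mentioned later in the thread.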

@roleary said: Ahh, fair point - and, if it's ok to ask - what format are you using for textures? I've got a couple of atlases, which contain the textures at various sizes, and then a bit of whizzy calculation to work out the scaled texture offset based on the distance from the camera.. But I wonder about the actual image data - like maybe I could be using some more efficient format.

I’m still not using an atlas. I keep it in my hip pocket as a potentially easy performance win down the road but there are significant challenges for making it work in my environment… not the least of which is the bleeding issue that will either cause me pain on the shader side or on the texture composition side but probably both.

Add to that, I have some textures with bumps + normals, some with only normals, and some will be more generative. It starts to dilute any performance savings I will get trying to avoid texture switches. On the hardware where I found the texture count makes the difference between working and not working, I had to keep the texture count down to 2 or 3 to make a difference.

It would improve batching, though, so it’s always there waiting to be toyed with.

@roleary said: Hehe - yep - I keep getting back into the mechanics of the engine and putting off actually making a game! It's endlessly fascinating.

Did you add rivers? I had a thought: maybe I could use a ridged version of the 2D noise I use to generate the surface terrain (which is then layered over and under with all sorts of other bits and pieces of noise) to create riverish structures - which, if the noise were configured similarly to the terrain noise, might follow fairly convincing routes… It’s only a theory, mind, I haven’t tried it yet - but I feel like rivers are really missing in my landscapes and would be awesome if they looked as they should.

My terrain is generated in multiple steps, each of which can potentially be controlled by mods. So the raw terrain is generated with a fractal (stack of fractals actually) and then I turn it into base material types… then add caves… then vegetation… then eventually rivers+waterfalls, etc… and towns and cities.

Some of these (like the caves, trees, and eventually towns and rivers) are generated as descriptors when a 1024x1024 area is first encountered (on the horizon)… when the individual “chunks” are generated it then sees if it intersects caves, trees, etc. (they are spatially indexed so this lookup is fast).

Since this game is ultimately an RPG and I wanted to support a dungeon-master-style player who can set up more specific scenarios for his friends to play, I opted for this approach. Scripts can be added that inject specific caves or trees (rivers, buildings) or whatever into the descriptor generation and then the rest of the engine treats them normally. Because some stuff is still randomized from there, even a simple description will yield interesting results.

So the short answer regarding rivers is that I’ll be placing water sources and let the rivers flow where they will by the physics of the terrain. Since players will be able to redirect the flow of rivers during the game, it’s basically the same system. Though during the generation step I will probably do a more coarse-grained approach with larger river sizes.

My terrain is generated in multiple steps, each of which can potentially be controlled by mods. So the raw terrain is generated with a fractal (stack of fractals actually) and then I turn it into base material types… then add caves… then vegetation… then eventually rivers+waterfalls, etc… and towns and cities.

Fractal noise! Looks like we both found the same sweet spot there too - I’m using assorted configurations of 3D fractal noise with Brownian motion for caverns, etc - and then octaves of simplex noise for humidity, temperature, etc.

So the short answer regarding rivers is that I’ll be placing water sources and let the rivers flow where they will by the physics of the terrain. Since players will be able to redirect the flow of rivers during the game, it’s basically the same system. Though during the generation step I will probably do a more coarse-grained approach with larger river sizes.

That sounds awesome!

I just tried my noise-based approach to rivers - and it actually gives pretty convincing drainage patterns - as seen in my patent-pending WIDE-O-SCOPE-O-RAMA camera right here:

Not entirely convincing, mind - note the occasional circular river at the moment - but it yields waterfalls, and streams tend to converge, which is nice - plus, there’s no difficult tracking of rivers across blocks, etc - it’s basically free, just another interpretation of the terrain noise.

In theory (untested - therefore, lies!) it should be possible to widen rivers as they approach sea level… Hmm… And maybe try to add some noise in there to produce deltas… Though that would surely be more tricky to get right.

Yep… Generating rivers this way is nice - since they come from the noise, they’re just a normal part of terraforming - so plantlife can be adapted to the presence of a river with minimal work.

Here, for example, the higher humidity and slightly lower temperature near a river make trees grow in an area that’s otherwise mostly just grass and flowers - also, a lot more flowers growing on the river banks.

Now that I think about it, population density might be a function of terrain noise too - I suppose populations grow where resources are abundant - at river mouths, or at least along rivers, in regions of reasonable temperature and altitude, etc - and then, the height of buildings would be a function of density, and types of buildings functions of environmental variables… Hmm.

For anyone who’s interested, I put together a video showing the day/night cycle in this project:

Thanks!
R
