Greedy meshing for complex voxel data

@pspeed said: It's not that I'm an nVidia fanboy... it's that I'm an ATI anti-fanboy. :)

…and a little of an nvidia fanboy. All of my cards are nvidia.

Good, then it’s at least not an AMD-only problem. (I’m not an AMD fan, but the only nVidia card I ever had literally started burning! By the way, since we’re off-topic: have you seen the benchmarks for Intel’s Iris Pro GPUs? Somewhat surprising.)

With greedy meshing like this, is it possible to have two differently-textured blocks in the same mesh? Or would the mesh then have to be split up? I would imagine it would have to be split up unless the atlasing is magical. If that’s the case, greedy meshing wouldn’t help much on a patch of terrain that has a lot of variety to it.

Yep… It’s possible to do such a thing - though you have to get enough data into the shader to calculate the texture mapping, which is likely to be counter-productive. Greedy meshing won’t help much where the majority of neighbouring voxels are heterogeneous - but then again, it can turn a flat water surface that would normally be composed of 256 quads (with a chunk size of 16x16) into a single quad… So it’s a benefit averaged over entire terrains.
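If it helps to picture the merge step, here’s a rough Java sketch of collapsing a single 16x16 face slice into larger quads - the data layout and names are just for illustration, not my actual implementation:

    import java.util.ArrayList;
    import java.util.List;

    // Rough sketch: collapse one face slice of a chunk into larger quads.
    // faceType[x][y] holds a face/block id for this slice, 0 meaning "no face here".
    static List<int[]> mergeSlice(int[][] faceType) {
        int sizeX = faceType.length;
        int sizeY = faceType[0].length;
        boolean[][] used = new boolean[sizeX][sizeY];
        List<int[]> quads = new ArrayList<>(); // each entry: {x, y, width, height, type}

        for (int y = 0; y < sizeY; y++) {
            for (int x = 0; x < sizeX; x++) {
                if (used[x][y] || faceType[x][y] == 0) {
                    continue;
                }
                int type = faceType[x][y];

                // Grow along x while the neighbouring faces match.
                int w = 1;
                while (x + w < sizeX && !used[x + w][y] && faceType[x + w][y] == type) {
                    w++;
                }

                // Grow along y while every face in the next row still matches.
                int h = 1;
                grow:
                while (y + h < sizeY) {
                    for (int i = 0; i < w; i++) {
                        if (used[x + i][y + h] || faceType[x + i][y + h] != type) {
                            break grow;
                        }
                    }
                    h++;
                }

                // Mark the merged rectangle as used and emit a single quad for it.
                for (int j = 0; j < h; j++) {
                    for (int i = 0; i < w; i++) {
                        used[x + i][y + j] = true;
                    }
                }
                quads.add(new int[]{x, y, w, h, type});
            }
        }
        return quads;
    }

A uniform slice (the flat water case above) falls out of that loop as a single quad; a checkerboard of different types falls out as 256 of them.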

So I just realized that this is AMAZING for generating a physics collision mesh from voxels, since all of the blocks act the same in that realm. I’m going to try to add this in, at least for the physics mesh. My game is lagging in spikes even though the update loop only takes 3 ms, and detaching the voxels doesn’t help :[

Ahh, that’s interesting!

I use the cached voxel data for collision detection - but that’s just for mobs tooling around the place - not for physics… It might require some voodoo to wire collisions based on raw voxel data into the physics engine, but it might be worth considering.
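For what it’s worth, the voxel-data path doesn’t need much machinery - testing a mob’s bounding box against the solid cells it overlaps is roughly the following sketch, where isSolid() stands in for whatever lookup your cached chunk data provides:

    // Sketch: test an axis-aligned bounding box against cached voxel data.
    // Voxels are assumed to be unit cubes aligned to integer coordinates.
    static boolean collidesWithVoxels(float minX, float minY, float minZ,
                                      float maxX, float maxY, float maxZ) {
        for (int x = (int) Math.floor(minX); x <= (int) Math.floor(maxX); x++) {
            for (int y = (int) Math.floor(minY); y <= (int) Math.floor(maxY); y++) {
                for (int z = (int) Math.floor(minZ); z <= (int) Math.floor(maxZ); z++) {
                    if (isSolid(x, y, z)) {
                        return true;
                    }
                }
            }
        }
        return false;
    }

    // Placeholder for the real lookup into the cached voxel/chunk data.
    static boolean isSolid(int x, int y, int z) {
        return false;
    }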

Yeah, I used to use the cached data, but I wanted to do a bit more with the engine without reinventing the wheel. There’s no crazy voodoo really - I just assign a RigidBodyControl to the geometry. I’m scared that’s what’s causing my lag, though…
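Roughly like this, in case anyone wants the shape of it - a simplified sketch using jME3’s bullet integration, where chunkGeometry and bulletAppState stand in for whatever you already have in your own code:

    import com.jme3.bullet.BulletAppState;
    import com.jme3.bullet.collision.shapes.CollisionShape;
    import com.jme3.bullet.control.RigidBodyControl;
    import com.jme3.bullet.util.CollisionShapeFactory;
    import com.jme3.scene.Geometry;

    static void addChunkPhysics(Geometry chunkGeometry, BulletAppState bulletAppState) {
        // Build a mesh-accurate collision shape from the (greedy-meshed) chunk geometry.
        CollisionShape chunkShape = CollisionShapeFactory.createMeshShape(chunkGeometry);

        // Mass 0 = static rigid body, which is what terrain should be.
        RigidBodyControl chunkBody = new RigidBodyControl(chunkShape, 0f);
        chunkGeometry.addControl(chunkBody);
        bulletAppState.getPhysicsSpace().add(chunkBody);
    }

If the spikes line up with chunk updates, I guess rebuilding that mesh collision shape on the render thread each time would explain it.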

Okay, so apparently you made a crazy large texture atlas, with tiles the size of a chunk face, in order to keep the textures from stretching on the large faces… right? Well, is that the only way to do it? I really hope not :confused: I know I can turn on texture wrapping and make the texture coordinates larger than 1 to get tiling, as long as the texture is a standalone 32x32 texture and not part of an atlas.

However, this won’t work with an atlas of multiple 32x32 textures, because instead of wrapping around, it will just show the other cube types’ textures! Is there a way to fix this? Is there a setting, or another buffer, to tell the quad how many times to repeat the texture? Texture.setWrapMode() will not do this.

Since the TexCoord buffer tells the graphics card where to map on the texture, this probably involves passing another Vec2 for each quad, to tell it how exactly to map those texCoords onto the quad instead of always going [0,1]×[0,1]. At least, that would be my ideal way to render graphics if I had invented the pipeline. Is there such a Vec2?

Am I making sense?

Thanks,

Andy

Sure, you’re making perfect sense - though I can only tell you what I put together for greedy quads that assume a single tile in a texture atlas should be repeated to cover the quad. In my current implementation voxel faces are only merged if they match in a number of ways - type, for sure, and therefore texture - but also lighting values, if they’re underwater, and so on. I have a standard texture atlas mapping on the x and y axes - each tile in the atlas can be any size, though in most of the videos I’ve posted I’ve used larger textures - just because I like how they look - so 128x128 or 256x256.
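As a sketch of that match test - the fields are just to illustrate the criteria, not my actual data layout:

    // Illustration of the per-face match test applied before two faces are merged.
    static final class FaceInfo {
        int type;          // voxel/block type, which also decides the texture tile
        int lightLevel;    // baked lighting value for the face
        boolean underwater;

        boolean canMergeWith(FaceInfo other) {
            return other != null
                    && type == other.type
                    && lightLevel == other.lightLevel
                    && underwater == other.underwater;
        }
    }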

I pass the offset of the tile in the atlas on each vertex, and the full dimensions of the quad expressed as (quad width * TILE_SIZE) and (quad height * TILE_SIZE). Quad width and height are the number of merged voxel faces on each axis. TILE_SIZE is simply 1.0 divided by the number of tiles in the atlas on an axis… So if I have an atlas of 16x16 tiles, then the TILE_SIZE is 1.0/16 = 0.0625. I pass these coordinates in a single vec4, with the tile offset in the xy slots and the quad dimensions in zw. Then, in the shader I can repeat the texture across the quad on both axes using something like:

texture2D(diffuseMap, texCoord0.xy + mod(texCoord0.zw, TILE_SIZE));

TILE_SIZE can be set as a const in the shader, so this is a fast operation. I then do more black magic for lighting values and mipmapping - but this is the core of the operation for tiling the textures.
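To make the buffer side of that concrete, filling the per-vertex vec4 from Java might look something like the sketch below. It’s my description paraphrased into code rather than the exact implementation, and it assumes the zw values run from 0 at one corner of the quad to the full tiled extent at the opposite corner, so the interpolated value sweeps across the quad and the mod() wraps it back into the tile:

    import com.jme3.scene.Mesh;
    import com.jme3.scene.VertexBuffer;
    import com.jme3.util.BufferUtils;

    // tileSize must match the TILE_SIZE const in the shader: 1.0f / tilesPerAxis
    // (0.0625f for a 16x16 atlas).
    // tileX/tileY: the tile's offset into the atlas, already in [0..1] texture space.
    // quadWidth/quadHeight: number of merged voxel faces along each axis of the quad.
    static void writeQuadTexCoords(Mesh quadMesh, float tileX, float tileY,
                                   int quadWidth, int quadHeight, float tileSize) {
        float extentU = quadWidth * tileSize;   // tiled extent along the quad's U axis
        float extentV = quadHeight * tileSize;  // tiled extent along the quad's V axis

        // xy: tile offset, identical on every vertex.
        // zw: 0 at the first corner, full extent at the opposite corner, so the
        //     interpolated value feeds mod(texCoord0.zw, TILE_SIZE) in the shader.
        float[] texCoords = {
                tileX, tileY, 0f,      0f,       // corner 0
                tileX, tileY, extentU, 0f,       // corner 1
                tileX, tileY, extentU, extentV,  // corner 2
                tileX, tileY, 0f,      extentV   // corner 3
        };
        quadMesh.setBuffer(VertexBuffer.Type.TexCoord, 4,
                BufferUtils.createFloatBuffer(texCoords));
    }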

Of course, if you needed to change which textures are being painted onto each part of the quad you’d want more data passed to the shader to work that out - though, at least in my case, I’d wonder if such an approach would be worth it.

Best of luck!
/R
