Tiled texture from spritesheet

I, like many others, am writing a Minecraft-like voxel engine, and I have come across a problem. I just rewrote my meshing algorithm as described here.

It works great, but with an optimized mesh, it becomes more complex to generate texture coordinates. Is it possible in OpenGL to tile a subimage from a texture? In other words, all terrain textures are together in a single atlas, and I’m wondering if I can tile a small rectangle within that atlas across a quad. If it is not possible by default, does anyone know if this is possible with a fragment shader? I’m not looking for an implementation, but if you know of one I’d love to check it out. Thanks for any help, and sorry if I’m overlooking something trivial.

With a fragment shader, yes, it’s definitely possible. You would need to pass the shader the right parameters to tell it what to do, though.
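Something along these lines could serve as a starting point. It’s only a sketch: every name in it (m_ColorMap, m_TilesPerSide, the vec3 texCoord) is made up for illustration and just follows jME’s usual `m_` naming for material parameters, so adjust to whatever your material actually defines.

```glsl
// Hypothetical atlas-tiling fragment shader sketch (GLSL 1.x style).
// Assumptions: the atlas is a square grid of m_TilesPerSide x m_TilesPerSide
// sub-images, texCoord.xy runs 0..N across the merged face, and texCoord.z
// carries the index of the sub-image to sample.

uniform sampler2D m_ColorMap;   // the whole terrain atlas (assumed param name)
uniform float m_TilesPerSide;   // e.g. 16.0 for a 16x16 atlas (assumed param name)

varying vec3 texCoord;          // interpolated from the vertex shader

void main() {
    float tileSize = 1.0 / m_TilesPerSide;

    // Origin (corner) of the chosen sub-image inside the atlas.
    vec2 tileOrigin = vec2(mod(texCoord.z, m_TilesPerSide),
                           floor(texCoord.z / m_TilesPerSide)) * tileSize;

    // Wrap the merged-face coordinates so the sub-image repeats per block.
    vec2 wrapped = fract(texCoord.xy);

    gl_FragColor = texture2D(m_ColorMap, tileOrigin + wrapped * tileSize);
}
```

One thing to watch for with this approach: fract() introduces a discontinuity at tile edges, which can cause seams or bleeding from neighbouring atlas cells once mipmapping is enabled, so atlases used this way are usually built with padded or duplicated tile borders.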

I don’t see how being able to do that actually helps you though?

If he’s following the linked article then he’s merging adjoining faces… so he needs to tile the texture.

I’ll be interested to see what the stats are on a working version of that. My back of the napkin calculations ruled it out when I was originally writing mine. You save a polygon here and there in memory at the expense of longer generation times and potential rendering glitches… in addition to having to write your own tiling in the fragment shader.

@zarch using the algorithm I posted, my terrain mesh contains ~72% fewer vertices on average. I’d really like to keep that optimization if possible. The alternative to using a fragment shader is creating a separate mesh for each block texture and giving each its own material, which will tile. That would result in a lot of texture state changes in OpenGL, which I’ve been told is something to avoid, and would also lead to a lot of additional geometries.

@pspeed my current implementation reduced a scene from ~450k triangles to ~125k. It’s a small scene, as my laptop already struggles to render it, but so far it seems very effective.

@pspeed ok, I just did a much bigger test, ~4.66 million became ~1.32 million. Still a large improvement.

That’s reduced from the already optimized “only visible faces” version? For hand calculations, I was never able to get better than 50% for typical terrain. And I couldn’t come up with a comfortable way of solving the rounding gaps without adding complexity to the generator and/or losing some of the benefits.

That sounds like good savings, though.

Yes, that is an improvement over what the article calls ‘culling,’ so no internal quads are rendered. The scene is basically large rolling hills with very few flat surfaces, so it seems a good test of the algorithm. Below is a section of the scene.

Nice. I may look at this again if I’m trying to reduce memory. So far the biggest memory hogs are on the data side and not the geometry… and my most expensive geometry is the trees, and this wouldn’t help with them.

Yeah, honestly it surprised me how big of an improvement it was. Oh well, thanks everyone, I’ll try writing a shader for this then.

The huge cracks in the world undoubtedly help a little bit… but still. :slight_smile:

…ah, I guess you are also using real shadows instead of baked shadows. That makes a big difference also. Each of my vertices potentially has a different light value, so I can’t just merge them away.

Yeah, that’s true. I just removed the cracks to check, and it’s now a 63% improvement, down from 72%. So yes, the cracks and the lighting do help a lot, but both of those are deliberate, so for me it works out really well.

Yeah, I wanted player placed lights and indoor settings so I didn’t have much choice. :slight_smile:

Ok, so this is the first time I’ve implemented a shader in jME. I’m just going to base it off a predefined one for learning purposes, namely Unshaded.j3md.

I need to get one byte per vertex (really one per quad) into the shader in order to generate the texture coordinates. Looking at the mesh, right now I have VertexBuffers for position, color, normals, and texture coords. Where can I place a custom data buffer? I see that VertexBuffer.Type.MiscAttrib is deprecated.

Your texture coordinates can also be vec3s; you just have to declare and set them that way.
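For example, a rough sketch of building a merged face that way, with the atlas tile index packed into the third texture-coordinate component instead of a custom attribute (all the values here are made up for illustration):

```java
import com.jme3.scene.Mesh;
import com.jme3.scene.VertexBuffer.Type;
import com.jme3.util.BufferUtils;

public class TexCoord3Example {

    /** Builds a single merged quad whose texcoords carry a tile index in z. */
    public static Mesh quadWithTileIndex() {
        Mesh mesh = new Mesh();

        // A 4x2 merged face lying in the XY plane.
        mesh.setBuffer(Type.Position, 3, BufferUtils.createFloatBuffer(
                0, 0, 0,   4, 0, 0,   4, 2, 0,   0, 2, 0));
        mesh.setBuffer(Type.Index, 3, BufferUtils.createIntBuffer(
                0, 1, 2,   0, 2, 3));

        // u,v run 0..4 and 0..2 so the sub-image repeats across the face;
        // z carries the atlas tile index (tile 3 here, purely an example).
        mesh.setBuffer(Type.TexCoord, 3, BufferUtils.createFloatBuffer(
                0, 0, 3,   4, 0, 3,   4, 2, 3,   0, 2, 3));

        mesh.updateBound();
        return mesh;
    }
}
```

On the GLSL side the attribute is then declared as a vec3 (in jME’s naming, `attribute vec3 inTexCoord;`) in the vertex shader and handed to the fragment shader through a varying.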

Ah, I see now. I was overthinking it. I should be good from here on. Thanks again for all the help.