I’m working on a solution for blending adjacent voxel textures in a voxel game. If a dirt block is surrounded by 8 different types of blocks, the dirt block gets texture blending from 9 different textures. In the worst-case scenario, for every fragment of every face of a cube-shaped block, I need to sample the following from a texture array:
9 diffuse textures (1 diffuse for the face, 8 for the neighbours’ faces)
9 normal textures (1 normal for the face, 8 for the neighbours’ faces)
9 specular textures (1 specular map for the face, 8 for the neighbours’ faces)
That is 27 samples per fragment. There will probably be ~60K vertices and ~40K triangles visible at all times, constantly doing the sampling above.
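For scale, here is a rough CPU-side C++ sketch of what the blend for a single map looks like per fragment; `sampleLayer`, the layer list, and the weights are hypothetical stand-ins for the shader-side texture-array fetches:

```cpp
#include <array>

// Worst-case blend for ONE map (diffuse); the same loop runs again for
// normal and specular, giving 9 x 3 = 27 fetches per fragment.
struct Rgb { float r = 0, g = 0, b = 0; };

// Hypothetical stand-in for a texture-array fetch (1 hardware sample each).
Rgb sampleLayer(int layer, float u, float v);

Rgb blendOneMap(const std::array<int, 9>& layers,     // face + 8 neighbours
                const std::array<float, 9>& weights,  // assumed to sum to 1
                float u, float v) {
    Rgb acc;
    for (int i = 0; i < 9; ++i) {
        Rgb s = sampleLayer(layers[i], u, v);
        acc.r += weights[i] * s.r;
        acc.g += weights[i] * s.g;
        acc.b += weights[i] * s.b;
    }
    return acc;
}
```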
My question: Is this too much texture sampling from a texture array?
What if I reduced the sampling with if conditions? I don’t need to blend the whole face, only a border of a certain width. Inside the border there would be multiple samples, but outside the border there would be just 3 samples: diffuse, normal, and specular.
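Something like this sketch, assuming face UVs in [0, 1] and a hypothetical `borderWidth` parameter:

```cpp
#include <algorithm>

// Returns true when the fragment is close enough to a face edge that the
// full neighbour blend should run; elsewhere only the face's own 3 maps
// get sampled. borderWidth is a tunable, hypothetical parameter.
bool inBlendBorder(float u, float v, float borderWidth) {
    float edgeDist = std::min(std::min(u, 1.0f - u),
                              std::min(v, 1.0f - v));
    return edgeDist < borderWidth;
}
```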
I’m not an expert in voxel worlds, but the first thing that comes to mind is: why not catch the transition edges at mesh-generation time and split the block faces where required?
Texture fetches can be expensive, especially when they are random. Fetches that land near each other can sometimes be optimized thanks to the texture cache.
If statements will not necessarily help you, because on some hardware the GPU will execute all of the paths anyway and then select the final value at the end.
Edit: what impact that will have on you, I can’t say.
I’ve often wanted to experiment with multipass versus one pass for cases exactly like this. It comes up often in terrain and stuff.
Somewhere deep down I hoped you would comment and say go ahead, today’s hardware supports this.
I came up with a new idea that uses only 1 diffuse, 1 specular, and 1 normal texture: I would quickly generate a blended texture on the fly, add it to the texture array, and set the array index on the material.
When generating a voxel, I would look at all of its neighbours, calculate a value for the ‘constellation’ they form, and quickly start an ‘image-editor’ task on another thread that blends the neighbours’ textures onto the voxel’s texture and caches the result in the texture array. Any time I meet this ‘constellation of neighbours’ again, I would just use the cached version; otherwise I would generate a new one and cache it.
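A minimal sketch of how that cache could look, assuming a fixed neighbour ordering and fewer than 128 block types; `constellationKey`, `TextureSlot`, and `blendCache` are all names I’m making up:

```cpp
#include <cstdint>
#include <unordered_map>

// The 8 neighbour block-type ids (in a fixed order) plus the centre type
// get packed into a 64-bit key, which maps to the slot of the pre-blended
// texture in the texture array.
using TextureSlot = int;

std::uint64_t constellationKey(std::uint8_t centreType,
                               const std::uint8_t neighbours[8]) {
    std::uint64_t key = centreType;
    for (int i = 0; i < 8; ++i)
        key = (key << 7) | (neighbours[i] & 0x7F);  // assumes < 128 block types
    return key;
}

std::unordered_map<std::uint64_t, TextureSlot> blendCache;

// On voxel generation: if blendCache lacks the key, kick off the blend job
// on another thread and store the resulting slot; otherwise reuse it.
```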
I don’t actually know how good of an idea it is because I’m too late for bed.
I just couldn’t sleep because I was so excited about this idea, so I wrote a proof-of-concept app that merges two textures into a new third one, updates the texture array, and then updates the material. It works perfectly.
One thing I’ve always wanted to try is an old trick repurposed into modern form.
Back in the days of the fixed-function pipeline, we couldn’t use fancy shaders and stuff. If we wanted to mix two textures together, we drew one mesh first and then another mesh on top with blending on. For the second shape, we’d set the z function to “equals” to make sure we never wasted time drawing stuff that wasn’t actually on screen.
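In modern OpenGL terms the setup would look roughly like this; `bindTextureSet` and `drawMesh` are hypothetical helpers standing in for whatever the engine uses:

```cpp
// Hypothetical two-pass draw; assumes a working GL context, and that
// bindTextureSet()/drawMesh() are the engine's own helpers.
void drawBlendedTerrain() {
    // Pass 1: base texture set, normal depth test, writes depth.
    glDepthFunc(GL_LESS);
    glDisable(GL_BLEND);
    bindTextureSet(baseTextures);
    drawMesh(terrainMesh);

    // Pass 2: second texture set, drawn only where pass 1 already wrote
    // the exact same depth, alpha-blended on top.
    glDepthFunc(GL_EQUAL);
    glDepthMask(GL_FALSE);   // depth is already correct; don't rewrite it
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    bindTextureSet(blendTextures);
    drawMesh(terrainMesh);
}
```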
With triplanar mapping of even one terrain type, for every fragment we look up three colors, three normals, three bumps… then blend them together. If we then wanted to blend two different terrain types (sand and grass) it would be twice that. To blend sand, rock, grass… 3x that. And so on.
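Roughly what that per-map triplanar blend looks like; `sampleProjected` is a hypothetical stand-in for one fetch along each axis:

```cpp
#include <cmath>

// Triplanar blend for one map: sample the texture projected along X, Y
// and Z, then weight by the absolute world-space normal. 3 fetches per
// map, so color + normal + bump = 9 fetches for a single terrain type.
struct Rgb { float r, g, b; };
Rgb sampleProjected(int axis, float a, float b);  // hypothetical, 1 fetch

Rgb triplanar(float nx, float ny, float nz,       // world-space normal
              float px, float py, float pz) {     // world-space position
    float wx = std::fabs(nx), wy = std::fabs(ny), wz = std::fabs(nz);
    float sum = wx + wy + wz;
    wx /= sum; wy /= sum; wz /= sum;              // normalise blend weights
    Rgb sx = sampleProjected(0, py, pz);          // projection along X
    Rgb sy = sampleProjected(1, px, pz);          // projection along Y
    Rgb sz = sampleProjected(2, px, py);          // projection along Z
    return { wx * sx.r + wy * sy.r + wz * sz.r,
             wx * sx.g + wy * sy.g + wz * sz.g,
             wx * sx.b + wy * sy.b + wz * sz.b };
}
```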
I’ve often wondered at what point the fragment shader is doing so much blending work that it would be better just to draw the mesh more than once, with a different texture set each time, and with subsequent draws using “equals” for the z-test.
Do I understand correctly that with this old trick, if I were to blend 6 materials I would have 6 overlapping meshes? For me that would shoot up the vertex count, I think. But it is a good trick; I’ll do a little brainstorming on whether it would fit my case.
Waking up in the morning, I realized that if I start making and adding new textures to the texture array, I’m going to hit the limits of the array, because the number of combinations explodes. If I blend two types of voxels, they could be in any configuration: dirt in the middle, surrounded by one grass, or two, or three, or four, or five… and what if there is not only grass? With T blendable types and 8 neighbours, that is on the order of T^8 possible constellations, so I would basically end up combining every blendable texture with almost every other if someone plays the game long enough.

I think I should forget this texture-blending thing; I’ve wasted months on it and didn’t get any further, and it didn’t bring any additional benefit. I don’t want to do a basic, boring voxel world though, I want to spice up the look a little. I’ve started to read about 2D bitwise autotiling. I’m thinking about bringing this solution to my game, so when I place a voxel there is a palette of other voxels it can interact with, and it should be good enough visually.
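If it helps anyone, the core of 2D bitwise autotiling is tiny; this is the classic 4-bit variant, where each cardinal neighbour that blends with the tile sets one bit of the index into a 16-entry tile table (the names are mine):

```cpp
// Classic 4-bit autotile: each matching cardinal neighbour contributes
// one bit, yielding 16 tile variants (an 8-bit variant adds diagonals).
int autotileIndex(bool north, bool east, bool south, bool west) {
    return (north ? 1 : 0)
         | (east  ? 2 : 0)
         | (south ? 4 : 0)
         | (west  ? 8 : 0);   // index into a 16-entry tile table
}
```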
Yes, but only if right on that spot you were blending 6 materials.
The point would be to take a guaranteed n * 6 lookups per fragment and turn it into geometry that only exists where the materials exist. So, for example, in pure desert there is only one mesh. If desert overlaps dirt, then there are two meshes…and so on.
Probably super rare that all 6 materials would be represented at any particular vertex.