Handful of GPU and Shader Questions

If your corner normals all face in the same direction then the “per-fragment normals” would be constant across the surface.

…unless you really mean something else here.

Might be I misunderstand you, but what I mean is that the tangent, bitangent and normal vectors are the same for all 4 vertices of a face.

In the vertex shader I do what I mentioned in the original question to extract that TBN information from 1 byte into 3 vectors, from which I create a mat3 that I send to the fragment shader.
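As a sketch (not my actual code; it assumes the byte simply indexes one of the six axis-aligned block-face orientations, and the attribute name is made up), the idea is something along these lines:

#version 330 core

// sketch only: inFaceInfo and the orientation tables are assumptions
in vec3  inPosition;
in float inFaceInfo;                  // the packed byte, uploaded as a float

uniform mat4 g_WorldViewProjectionMatrix;

out mat3 TBN;

const vec3 NORMALS[6] = vec3[6](
    vec3( 1.0, 0.0, 0.0), vec3(-1.0, 0.0, 0.0),
    vec3( 0.0, 1.0, 0.0), vec3( 0.0,-1.0, 0.0),
    vec3( 0.0, 0.0, 1.0), vec3( 0.0, 0.0,-1.0));

const vec3 TANGENTS[6] = vec3[6](
    vec3( 0.0, 0.0,-1.0), vec3( 0.0, 0.0, 1.0),
    vec3( 1.0, 0.0, 0.0), vec3( 1.0, 0.0, 0.0),
    vec3( 1.0, 0.0, 0.0), vec3(-1.0, 0.0, 0.0));

void main() {
    int side = int(inFaceInfo);       // which of the six faces this vertex belongs to
    vec3 n = NORMALS[side];
    vec3 t = TANGENTS[side];
    vec3 b = cross(n, t);             // bitangent follows from the other two
    TBN = mat3(t, b, n);              // tangent space -> model space, same for all 4 corners
    gl_Position = g_WorldViewProjectionMatrix * vec4(inPosition, 1.0);
}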

In the fragment shader I then do

...
// default tangent-space normal; equivalent to vec3(0.0, 0.0, 1.0)
vec3 tangentNormal = vec01.xxy;
#ifdef NORMAL
    // remap the normal map sample from [0, 1] to [-1, 1]
    tangentNormal = normalize((texture(m_NormalArray, finalTexCoord).rgb) * 2.0 - 1.0);
#endif
// tangent space -> model space via TBN, then model space -> world space
vec3 worldNormal = normalize((g_WorldMatrix * vec4(TBN * tangentNormal, 0.0)).xyz);
...

EDIT: vec01 is a const vec2(0.0, 1.0) that I use because it was mentioned on another site as an optimization; that means the first line reads as vec3 tangentNormal = vec3(0.0, 0.0, 1.0);
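Spelled out, the trick is just swizzling a constant (where exactly vec01 is declared doesn't matter):

const vec2 vec01 = vec2(0.0, 1.0);
// .xxy selects (x, x, y) = (0.0, 0.0, 1.0), the "flat" tangent-space normal
vec3 tangentNormal = vec01.xxy;   // same as vec3(0.0, 0.0, 1.0)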

And the point is to tessellate the faces just often enough that it still looks good when doing that texture lookup in the tessellation evaluation shader and using interpolated values in the fragment shader, and whether that would reduce the work for the GPU or not.

But for a face where all of the corners have the same TBN… what do you expect to be different per fragment with one quad, a quad that is split 16 ways, a quad that is split 100 ways?

TBN is constant across the whole face. Texture lookups are texture lookups… looking up the texel halfway across the face will give you the same value as if you looked it up halfway across the middle one of 100 tessellated sub-faces. No?

Or maybe there is something more complicated about your faces that I don’t understand.

What I mean is: imagine looking right at the face of a block which then fills a perfect 500 x 500 pixel area on your screen.

Currently that is 500 x 500 = 250_000 texture lookups (just for the normals; I also do texture lookups for parallax mapping and specular mapping) to get good lighting results for that face.

If I just used the same normal across the whole face by interpolating between the corner values (which are consistent, so I don't need to interpolate at all), I could get rid of the texture lookups in the fragment shader, but the face would always be shaded exactly the same across the whole face.
EDIT: not talking about the diffuse map texture lookup, that needs to be done anyway; I just mean the normal, parallax and specular map texture lookups.

But if I tessellate that face into 200 x 200 small quads and do the texture lookups in the tessellation evaluation shader, I get away with only 40_000 texture lookups and use interpolated values in the fragment shader (which are barely interpolated, because one of those small tessellated quads only fills about 2.5 x 2.5 pixels on average), so I would still get detailed lighting but with just 16% of the texture lookups.
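Just to make that idea concrete, here is a rough tessellation evaluation shader sketch (only m_NormalArray is taken from the fragment shader above; the texcoord varying name, the layer handling and the patch layout are assumptions):

#version 400 core

layout(quads) in;

uniform sampler2DArray m_NormalArray;          // same array texture the fragment shader uses
uniform mat4 g_WorldViewProjectionMatrix;

in  vec3 tcTexCoord[];                         // per-control-point (u, v, layer); name assumed
out vec3 tesTangentNormal;                     // interpolated by the rasterizer for the fragment shader

void main() {
    // bilinear interpolation across the quad patch
    vec3 uv  = mix(mix(tcTexCoord[0], tcTexCoord[1], gl_TessCoord.x),
                   mix(tcTexCoord[3], tcTexCoord[2], gl_TessCoord.x), gl_TessCoord.y);
    vec4 pos = mix(mix(gl_in[0].gl_Position, gl_in[1].gl_Position, gl_TessCoord.x),
                   mix(gl_in[3].gl_Position, gl_in[2].gl_Position, gl_TessCoord.x), gl_TessCoord.y);

    // one lookup per tessellated vertex instead of one per fragment;
    // explicit LOD because implicit derivatives only exist in the fragment stage
    tesTangentNormal = textureLod(m_NormalArray, uv, 0.0).rgb * 2.0 - 1.0;

    gl_Position = g_WorldViewProjectionMatrix * pos;
}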

Might be I still don't get your point; please be patient in that case, as I'm really trying to understand you.

So, effectively, you are talking about throwing “texel count” quads at your GPU fragment span pipeline instead of one span with some texture lookups?

…I’m not sure that’s going to save you as much time as you think. (I could even see performance going backwards.) You’ve traded something relatively fast (nice cohesive texture lookups) for per texel span setup and fragment generation.

The reason you would want to tessellate a face is if you want to represent the bumps as actual geometry. Is that what you are really talking about?

Well, now that you point me towards that fragment span pipeline thing (I assume that's also where backface culling happens? I was wondering, because then moving these texture lookups to the tessellation evaluation shader would probably increase the amount a lot, since they would be done for faces that are later backface culled and would never have done texture lookups in the fragment shader, because it would not execute for them)…

But that is basically the answer I wanted to hear, as it saves me from trying that approach (it would probably cost me quite some time, since I have only looked at tessellation shaders once so far and have no real experience).

And no, I'm not talking about actually moving those tessellated vertices around (although that's interesting too; I guess I don't need that, at least not for the voxel meshes). I was just thinking about the suggestion in the article you shared and was trying to find a way to effectively get smaller faces so I could move the calculations from the fragment to the vertex shader (or the tessellation evaluation shader). But from your last answer I get that it's most probably not worth it, so that's good to know. It's still good to think about things at least. Although not too often, sometimes it's useful when I do it.

Thanks again for clarifying, it kept me from wasting my time on pointless optimizations :smiley:

This would be a useful thing for curved surfaces for a couple of reasons… not the least of which being that linearly interpolating normals across curved faces is inaccurate.
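(In the non-tessellated case the usual band-aid is just to renormalize per fragment; a minimal sketch, assuming a varying I'll call interpolatedNormal:)

// linearly interpolated unit normals shrink toward the chord (at the midpoint the
// length drops to cos(theta/2) for normals theta apart), so at minimum renormalize
in vec3 interpolatedNormal;               // hypothetical varying from the vertex shader
...
vec3 n = normalize(interpolatedNormal);   // reduces, but does not remove, the error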

Although I currently don't have curved blocks, now I want to implement them…
but I should not; I should instead finally continue with the actual game :smiley:

Yeah, I hope to get back to that myself soon… SQUIRREL!
