Calculate Texture Coordinates of a Deformed Normalized Cube

Hello guys,

I’m currently working on a rock generator based on the GitHub - Erkaman/gl-rock: Procedural Generation of Rocks in WebGL project. The rock generation itself was pretty simple and works like a charm, but I’m facing a problem after the rock transformation. The generation process is based on a normalized sphere, which gives me this mesh.

At this point the texture coordinates of the mesh are pretty good and don’t need any processing. After the deformation process, however, the UVs are completely distorted and no longer fit the model correctly.

I managed to put together a UV generation step based on the deformation: I take each face separately and give it a portion of the texture based on the normal of the face. (For the top face, here is the process.)

For the top face, we take the x and z components of each point and normalise them so that they cover the portion of the texture assigned to that face. That gives us correct texture coordinates for the face. And here is the result.
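In code, the projection boils down to something like this (just a sketch written as GLSL for illustration; in the generator I do it on the mesh data, and u_faceMin / u_faceMax are assumed to hold the XZ bounds of the face):

// Planar projection for the "top" face: drop the depth (Y) component and
// rescale X/Z so the face covers the [0, 1] range of its texture region.
uniform vec2 u_faceMin; // smallest X/Z of the face (computed per face)
uniform vec2 u_faceMax; // largest X/Z of the face

vec2 topFaceUv(vec3 position) {
    vec2 planar = position.xz;                              // ignore the depth component
    return (planar - u_faceMin) / (u_faceMax - u_faceMin);  // normalize to the face bounds
}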

But as you can see, there is still distortion on some faces, because I don’t take the depth component of each point into account.

Do you guys have any ideas on how I can make the texture stretch according to the depth of each face?

(:rainbow: If you want any more information just ask :rainbow: )


Maybe look into tri-planar mapping.


I kind of wonder why you want to wrap a cubic texture regularly (if that is what you want) onto an initially spherical object… isn’t it easier to generate a spherical one, or at least a cylindrical one, first? Otherwise I can’t see how you could get rid of the corner-point distortions, wherever they end up on the final object, and it’s harder to control the nature of a particular distortion this way imho.

@nehon
Really good idea, thanks! I managed to make a shader that applies tri-planar mapping to the rock mesh, and it looks quite good.


Even if that’s not perfect. (I might have misunderstood the technique, but I think it’s really close to what tri-planar mapping is meant to achieve.)
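For anyone curious, the core of the technique fits in a few lines. This is only a minimal sketch of the idea, not my actual shader (the function, sampler and parameter names are made up):

// Tri-planar sampling: project the texture along the three world axes and
// blend the three samples with weights derived from the world-space normal.
vec4 triplanarSample(sampler2D tex, vec3 wPosition, vec3 wNormal, float scale) {
    vec3 blend = abs(normalize(wNormal));
    blend /= (blend.x + blend.y + blend.z);                 // weights sum to 1

    vec4 xProj = texture2D(tex, wPosition.yz * scale);      // projection along X
    vec4 yProj = texture2D(tex, wPosition.xz * scale);      // projection along Y
    vec4 zProj = texture2D(tex, wPosition.xy * scale);      // projection along Z

    return xProj * blend.x + yProj * blend.y + zProj * blend.z;
}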
@Torsion
In fact I want to be able to apply a texture to the mesh procedurally. (Blender has something called Minimize Stretch and I wanted to know how to simulate that behaviour.)

Minimize Stretch
Reference

Mode: View mode
Menu: UVs ‣ Minimize Stretch
Hotkey: Ctrl-V
The Minimize Stretch tool, Ctrl-V, reduces UV stretch by minimizing angles. This essentially relaxes the UVs.

Was your idea to generate a spherical texture based on the cubic texture?


It just looked a bit strange from the beginning. If I wanted to compensate for the non-uniformity of a distorted sphere, I’d start from an ideally covered ideal sphere, which you get with an appropriate texture for spherical mapping (like the Earth/Moon/Mars textures all around Google)… but in the topic description you refer to a distorted cube, so maybe it is your first picture that misled me. Speaking of a spherical map, there would be just two special cases at the poles instead of 8 corners on a cubic map, and depending on your texture there might even be no need to do anything about them. So yes, if starting from a sphere, I’d start from a texture made for a sphere…

@Torsion

In fact a normalized cube is basically a cube that is subdivided and then projected onto a sphere (cf. gamedevdaily.io).
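In other words (a one-line sketch):

// Normalized cube: keep the cube's topology (6 subdivided faces) but push
// every vertex onto the unit sphere.
vec3 cubeToSphere(vec3 cubeVertex) {
    return normalize(cubeVertex);
}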

The method you are mentioning must be something like this:
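A rough longitude/latitude (equirectangular) mapping, where dir is assumed to be the normalized direction from the rock’s centre to the vertex:

// Spherical texture coordinates from a direction on the unit sphere.
vec2 sphericalUv(vec3 dir) {
    float u = atan(dir.z, dir.x) / (2.0 * 3.14159265) + 0.5;    // longitude
    float v = asin(clamp(dir.y, -1.0, 1.0)) / 3.14159265 + 0.5; // latitude
    return vec2(u, v);
}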

But I don’t really find this method appealing, since the poles of the sphere are still distorted (there is less information there, which results in a strange pixel mess at the poles). My aim was to reduce the texture stretching as much as possible starting from a normalized cube with a cubic texture.

@nehon

Totally off-topic, but I need a shader guru to help me. I’m working with your shader node system. What is the best way to integrate the lighting process into my material? Do you have a shader node definition already defined somewhere? I couldn’t find it here

Or should I create my own lighting shader node to wrap your lighting shader behaviour?

Right, these are two special cases that should probably be treated separately (i.e. you could replace the mapping there with small polar tiles representing a spherical area, or something). I don’t insist it is faster or better, it was just something that popped out once I saw your pictures :slight_smile: Anyway, your approach is interesting, why not :slight_smile:


…then you end up with distortions where the tiles meet, because it’s exactly the same problem as the projected cube.

The projected cube is just pole tiles taken to the extreme, really.

On the original problem: I thought you could just reproject the cube onto the new coordinates, but I guess since your object is no longer ‘equilateral’ (so to speak) it would stretch anyway.

Agreed, but that was more in response to the “detail loss” problem. Generally, if you don’t want to see any distortion at all, you have to have 1 tile == 1 pixel, extremely speaking. Anything else implies additional constraints on the texture and/or the mapping mechanism, obviously. My point was that treating two cases would probably be easier than treating 8, nothing more. This doesn’t eliminate the need for texture preparation completely, ofc.

This is still on my todo list…

I just made a test to see how I would proceed if I wanted a custom lighting node. For those in need of a quick and dirty node with nothing but basic light computation: no specular and no parameters whatsoever.

ShaderNodesDefinitions {
    ShaderNodeDefinition Lightf {
        Type: Fragment
        Shader GLSL100: MatDefs/Lightf.frag
        Documentation{
            Quick and dirty ambient + diffuse lighting, fragment part (no specular, no parameters).
        }
        Input {
            vec3 normal
            vec3 viewDir
            vec4 lightDir
            vec3 lightVec
            vec4 diffuseColor
            vec4 inAmbientLightColor
        }
        Output {
            vec4 outColor
        }
    }
}

#import "Common/ShaderLib/Lighting.glsllib"

void main() {
   vec4 modelSpacePos = vec4(inPosition, 1.0);
   outPosition = worldViewProjectionMatrix * modelSpacePos;
   outWorldPosition = modelSpacePos * worldMatrixInverse;
   vec3 modelSpaceNorm = inNormal;

   vec3 wvPosition = (worldViewMatrix * modelSpacePos).xyz;
   wvNormal  = normalize(normalMatrix *modelSpaceNorm);
   viewDir = normalize(-wvPosition);

   vec4 wvLightPos = (viewMatrix * vec4(lightPosition.xyz,clamp(lightColor.w,0.0,1.0)));
   wvLightPos.w = lightPosition.w;
   outAmbientLightColor = inAmbientLightColor;
   lightComputeDir(wvPosition, lightColor.w, wvLightPos, vLightDir, lightVec);
}


ShaderNodesDefinitions {
    ShaderNodeDefinition Lightv {
        Type: Vertex
        Shader GLSL100: MatDefs/Lightv.vert
        Documentation {
            Quick and dirty ambient + diffuse lighting, vertex part (no specular, no parameters).
        }
        Input{
            vec3 inPosition
            vec3 inNormal
            mat4 worldViewProjectionMatrix
            mat4 worldViewMatrix
            mat4 worldMatrix
            mat3 normalMatrix
            mat4 viewMatrix
            vec4 lightPosition
            vec4 lightColor
            vec4 inAmbientLightColor
        }
        Output{
            vec4 outPosition
            vec4 outWorldPosition
            vec4 vLightDir
            vec3 lightVec
            vec3 wvNormal
            vec3 viewDir
            vec4 outAmbientLightColor
        }
    }
}

#import "Common/ShaderLib/Lighting.glsllib"
#import "Common/ShaderLib/BlinnPhongLighting.glsllib"

void main() {
    vec2 light = computeLighting(normal, viewDir, lightDir.xyz, lightDir.w * 1.0f, 0.0) ;
    outColor = diffuseColor * inAmbientLightColor + diffuseColor * (light.x);
}

@nehon
I just realised the amount of work needed to translate the entire shader library into the shader node system… I would like to help you, but I don’t really know how to proceed, since you might have your own idea of how you will do it. I also realised that you can’t really afford to have both the old shader method and the new shader nodes in the core, since modifying one would imply modifying the other. How are you going to proceed? Merge the old system with your shader node system? Make a script to translate one into the other?

Such wonder! Much shader! So Nehon!

I’m working on an editor… but you know… time’s a bitch…
