Modularize Shaders

I have been browsing through the new module code. Currently, for me, it adds another layer of complexity since I have to have even more files open to follow the flow. While doing that I noticed that you implemented structs. So here is kind of an introduction to modern rendering techniques, and why I do not like the current struct. I know it is a WIP and an early test and a whole lot of work, so consider this food for the brain and not criticism of the work done.

When designing the data model for the shaders there are two goals:

  • Reusable code
  • Decoupling the data source from the data the shaders work on, usually in order to actually achieve reusable code.

Now, when you design the data structures, you should also take into account: “where does the data get generated?”, “can the data be reused?”, and “how long is the data valid?”

As I can see in the PBRSurface code, you already noticed that, because you grouped the values:

    #struct StdPBRSurface
        // from geometry
        vec3 position; // position in world space
        vec3 viewDir; // view dir in worldSpace
        vec3 geometryNormal; // normals w/o normalmap
        vec3 normal; // normals w/ normalmap
        bool frontFacing; //gl_FrontFacing
        float depth;

        // from texture
        vec3 albedo;
        float alpha;
        float metallic;              // metallic value at the surface
        float roughness;
        vec3 diffuseColor;
        vec3 specularColor;
        vec3 fZero;
        vec3 ao;
        float exposure;
        vec3 emission;


        // computed
        float NdotV;

        // from env
        vec3 bakedLightContribution; // light from light map or other baked sources
        vec3 directLightContribution; // light from direct light sources
        vec3 envLightContribution; // light from environment 

        float brightestLightStrength;
    #endstruct
    #define PBRSurface StdPBRSurface 

Now consider if we split the struct into different ones.

First, extract the camera information:

struct CameraInfo {
    // ...
};

Then we could extract a separate SurfaceInfo, since that should be reusable between different materials:

struct CameraInfo {
    // ...
};

struct SurfaceInfo {
    // ...
};

struct MaterialInfo {
    // ...
};

struct FrameInfo {
    // ...
};

struct GeometryInfo {
    // ...
};

Ultimately we probably need FrameInfo too; that would hold the frame info like tpf …
If we want to support instancing, we might need another struct, GeometryInfo.
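
As a rough guess at what some of these could contain (the fields below are purely illustrative, not a concrete proposal):

struct SurfaceInfo {
    vec3 position;   // world-space position
    vec3 normal;     // world-space normal
    vec3 viewDir;
    float depth;
};

struct MaterialInfo {
    vec3 albedo;
    float metallic;
    float roughness;
};

struct GeometryInfo {
    mat4 worldMatrix;  // per-instance world transform
    mat3 normalMatrix;
};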

Now let’s forget about MaterialInfo for a while, because that one is likely going to hold material-specific stuff, but keep in mind that we have it.

Now let’s get to the reason why I would prefer such fine-grained data.

Start by considering each piece of data’s validity time and its reusability between different shaders and materials.

FrameInfo is produced by the CPU, updated each frame, and reusable between all shaders. This makes it a good candidate for passing the parameters as a uniform buffer: one API interaction per frame and we are good.
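
For example, a minimal sketch (assuming std140; tpf is the only field mentioned above, the others are guesses):

struct FrameInfo {
    float tpf;       // time per frame
    float time;      // total elapsed time (guessed field)
    int frameIndex;  // guessed field
};

layout (std140) uniform FrameInfos {
    FrameInfo frame;
};

One buffer upload per frame, and every shader that binds the block sees the same data.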

CameraInfo is also valid for at least a render pass; as a matter of fact, you need access not only to the current camera in use, but to other cameras too when applying shadow maps. So we probably end up with something like:

struct CameraInfo {
    mat4 viewMatrix;        // illustrative fields,
    mat4 projectionMatrix;  // just to make the example concrete
    vec3 position;
};

layout (std140) uniform CameraInfos{
    CameraInfo camera[NR_OF_CAMERAS];
};

So, of the structs listed above, SurfaceInfo might be the only one that is actually produced in the fragment shader and passed to later stages; all the others are ultimately just input decouplers for the fragment shader.
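
In other words, the material-specific fragment shader would mostly just fill a SurfaceInfo from its inputs and hand it on. A minimal sketch, assuming the illustrative fields from above and made-up varying names (wPosition, wNormal):

SurfaceInfo buildSurfaceInfo(in CameraInfo camera, in vec3 wPosition, in vec3 wNormal) {
    SurfaceInfo surface;
    surface.position = wPosition;                              // interpolated world-space position
    surface.normal   = normalize(wNormal);                     // interpolated world-space normal
    surface.viewDir  = normalize(camera.position - wPosition); // surface-to-camera direction
    surface.depth    = gl_FragCoord.z;
    return surface;
}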

Why all that?

Decoupling the GeometryInfo allows for instancing as we have it currently.
Decoupling and passing the MaterialInfo through a UBO allows for instancing with different material parameters.
Ultimately, in the draw-indirect methods from GL 4.6, you have to preload everything.
And then, for each object you want to draw, you only have to upload very little data:

struct DrawData { // name illustrative
    int geometryIndex; // access to geometryInfo[geometryIndex]
    int cameraIndex;   // access to cameraInfo[cameraIndex]
    int materialIndex; // access to materialInfo[materialIndex]
};

Since you preloaded everything, this allows you to render everything that shares the same VAO and the same shader variant in one draw call.
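
As a sketch of the vertex-shader side (assuming GLSL 460, where gl_DrawID identifies the sub-draw within a glMultiDrawElementsIndirect call; the buffer layout and names are made up):

#version 460

struct DrawData { // the per-object index struct from above
    int geometryIndex;
    int cameraIndex;
    int materialIndex;
};

layout (std430, binding = 0) readonly buffer DrawDataBuffer {
    DrawData draws[];
};

void main() {
    DrawData d = draws[gl_DrawID];
    // ... look up geometryInfo[d.geometryIndex], cameraInfo[d.cameraIndex]
    //     and materialInfo[d.materialIndex], then transform as usual ...
}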

I am very bad at explaining things, so I hope this at least makes a little bit of sense.


Yeah, that is the one unfortunate side effect of this refactoring: between the glsllibs and the struct approach, it ends up requiring a lot of swapping back and forth between files to work with the PBR fragment shader now.

Do you think it would help if the .glsllibs and the PBR struct were located in a different place or had more straightforward names? I originally had 2 glsllibs with more obvious names: PBRTextureReads.glsllib and PBRLightCalculations.glsllib (or something along those lines); I think now it’s called PBRLightingUtils, which I personally think is more obscure.

Also, I should mention that most of the struct-related stuff in that PR was added by @RiccardoBlb, so there may be a few things about it that I don’t have as good of a grasp on. And he might have some more insightful things to say in regards to your proposed changes too.

My original knee-jerk reaction was to dislike the struct system, and I personally didn’t mind leaving all the variable declarations and uniforms in the .frag file instead of using a struct. I already felt like my original changes (which only split up the texture reading and lighting calculation into 2 .glsllibs by using basic inout functions) were enough to solve the issues I’ve run into. But I also see the benefit of the struct approach, so as of now I’m personally okay with doing it either way.


I think I understand what you’re saying about adjusting the struct system: pretty much just split the current StdPBRSurface struct up into a few different structs, and then also add the capability to have more structs for places beyond the .frag shader? If I’m following correctly, then I think that sounds like a good approach for modularizing things even more. It would just need to be done carefully so we don’t overcomplicate things too much.

That has been my main concern: that all this modularization and refactoring could end up making it very difficult for someone to look at and understand how to fork or debug the PBR shader unless they are already fairly experienced with shaders. But maybe I’m wrong, idk. I’m curious to hear more opinions on whether all the added complexity is worth it, or if jme users feel like this will make them less likely to delve into forking or troubleshooting the PBR shader.

Yeah, currently with the uber-struct you are limiting reusability.
Let the PBR light calculations work only on the data that they actually need: a PBRMaterial struct and a SurfaceData struct. SurfaceData is generated in the vertex shader and could be sent to the frag shader without knowing that it’s PBR. Phong lighting would then of course require a PhongMaterial struct.

Then the various vertex operations could also work on the same data and could be reused across different shaders.
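
Concretely, the lighting entry points could look something like this (hypothetical signatures, just to show the decoupling):

// Lighting only sees the structs it needs; it does not care where they came from.
vec3 calculatePBRLighting(in SurfaceData surface, in PBRMaterial material, in PBRLight light);

// Phong shares the same SurfaceData but brings its own material struct.
vec3 calculatePhongLighting(in SurfaceData surface, in PhongMaterial material, in Light light);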


The work on the modular shader was just to make maintaining and creating new PBR-based shaders easier, by creating the idea of an extensible PBRSurface with some required fields that the dev has to fill in. So as long as you can provide a PBRSurface and a PBRLight, you can reuse the code.
E.g. you want triplanar mapping? You just create 3 surfaces; no need to rewrite the lighting logic at all.
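
For instance, a triplanar variant could boil down to something like this (a loose sketch: lightSurface() stands in for the shared lighting code, and surfaceX/Y/Z are the three PBRSurfaces built from the three planar projections):

vec3 blend = abs(normalize(wNormal)); // wNormal: world-space geometry normal
blend /= (blend.x + blend.y + blend.z);

vec3 color = blend.x * lightSurface(surfaceX, light)
           + blend.y * lightSurface(surfaceY, light)
           + blend.z * lightSurface(surfaceZ, light);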

That said, not only do I 101% agree with everything in your post, it is also my personal preference when it comes to implementing this kind of shader. It’ll probably need to be tailored to the way jme works so as not to overwhelm people trying to migrate their shaders, but I like the idea very much.
