I have been browsing through the new module code. Currently, for me, it adds another layer of complexity, since I have to keep even more files open to follow the flow. While doing that I noticed that you implemented structs. So here is a kind of introduction to modern rendering techniques, and why I do not like the current struct. I know it is a WIP, an early test, and a whole lot of work, so consider this food for thought and not criticism of the work done.
When designing the data model for the shaders there are two goals:
- Reusable code
- Decoupling the data source from the data the shaders work on, usually in order to actually be able to have reusable code.
Now, when you design the data structures, you should also take into account “where does the data get generated?”, “can the data be reused?”, and “how long is the data valid?”.
As I can see in the PBRSurface code, you already noticed that, because you grouped the values:
#struct StdPBRSurface
// from geometry
vec3 position; // position in w space
vec3 viewDir; // view dir in worldSpace
vec3 geometryNormal; // normals w/o normalmap
vec3 normal; // normals w/ normalmap
bool frontFacing; //gl_FrontFacing
float depth; // fragment depth
// from texture
vec3 albedo; // base color
float alpha; // opacity
float metallic; // metallic value at the surface
float roughness; // roughness value at the surface
vec3 diffuseColor;
vec3 specularColor;
vec3 fZero; // reflectance at normal incidence (F0)
vec3 ao; // ambient occlusion
float exposure;
vec3 emission;
// computed
float NdotV; // dot(normal, viewDir)
// from env
vec3 bakedLightContribution; // light from light map or other baked sources
vec3 directLightContribution; // light from direct light sources
vec3 envLightContribution; // light from environment
float brightestLightStrength;
#endstruct
#define PBRSurface StdPBRSurface
Now consider: what if we split the struct into different ones?
First, extract the camera information:
struct CameraInfo{
}
Then we could extract a separate SurfaceInfo, since that should be reusable between different materials. Altogether we would end up with something like:
struct CameraInfo{
}
struct SurfaceInfo{
}
struct MaterialInfo{
}
struct FrameInfo{
}
struct GeometryInfo{
}
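To make that a bit more concrete, here is a minimal sketch of how the fields could be distributed. The assignment below is just my reading of the grouping comments in StdPBRSurface, and the MaterialInfo fields are purely illustrative, not taken from the actual module code:
struct SurfaceInfo{
vec3 position; // position in world space
vec3 geometryNormal; // normals w/o normalmap
vec3 normal; // normals w/ normalmap
bool frontFacing; // gl_FrontFacing
float depth; // fragment depth
};
struct MaterialInfo{
// illustrative material parameters
vec4 baseColor;
float metallic;
float roughness;
vec3 emission;
};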
Ultimately we probably need a FrameInfo too. That would hold the per-frame info like tpf …
If we want to support instancing, we might need another struct, GeometryInfo.
Now let's forget about the MaterialInfo for a while, because that one is likely going to hold material-specific stuff, but keep in mind that we have it.
Now let's get to the reason why I would prefer such fine-grained data.
Start by considering how long each piece of data stays valid and how reusable it is between different shaders and materials.
FrameInfo is produced by the CPU, updated each frame, and reusable between all shaders. That makes it a good candidate for passing the parameters as a uniform buffer: one API interaction per frame and we are good.
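A minimal sketch of what that could look like; the only field mentioned so far is tpf, the rest of the names are my own:
struct FrameInfo{
float tpf; // time per frame
float time; // assumed: time since startup
};
layout (std140) uniform FrameInfos{
FrameInfo frame;
};
Every shader that binds this block sees the same buffer, so the CPU only has to fill it once per frame.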
CameraInfo is also valid for at least one render pass. As a matter of fact, you need access not only to the current camera in use, but to other cameras too when applying shadow maps. So we probably end up with something like:
struct CameraInfo{
}
layout (std140) uniform CameraInfos{
CameraInfo camera[NR_OF_CAMERAS];
};
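As a hedged usage sketch, assuming CameraInfo ends up carrying something like a viewProjectionMatrix (not part of the proposal above), a shadow lookup then just indexes another camera:
vec4 shadowCoord = camera[SHADOW_CAM_INDEX].viewProjectionMatrix * vec4(surface.position, 1.0); // SHADOW_CAM_INDEX and viewProjectionMatrix are illustrative names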
So of the structs listed above, SurfaceInfo might be the only one that is actually produced in the fragment shader and passed on to later stages; all others are ultimately just input decouplers for the fragment shader.
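A rough sketch of producing it in the fragment shader; the varying names are made up, and normal mapping and texture sampling are left out:
in vec3 wPosition;
in vec3 wNormal;

SurfaceInfo buildSurfaceInfo(){
SurfaceInfo surface;
surface.position = wPosition;
surface.geometryNormal = normalize(wNormal);
surface.normal = surface.geometryNormal; // normal mapping would be applied here
surface.frontFacing = gl_FrontFacing;
surface.depth = gl_FragCoord.z;
return surface;
}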
Why all that?
Decoupling the GeometryInfo allows for instancing as we have it currently.
Decoupling and passing the MaterialInfo through a UBO allows for instancing with different material parameters.
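That could follow the same std140 pattern as the cameras (NR_OF_MATERIALS is illustrative):
layout (std140) uniform MaterialInfos{
MaterialInfo material[NR_OF_MATERIALS];
};
Each drawn object then only needs an index into that array, which is exactly what the per-object data below boils down to.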
Ultimately, with the draw-indirect methods from GL 4.6, you have to preload everything.
Then, for each object you want to draw, you only have to upload very little data:
struct{
int geometryIndex; //Access to geometryInfo[geometryIndex];
int cameraIndex; //Access to cameraInfo[cameraIndex];
int materialIndex; //Access to materialInfo[materialIndex];
}
Since you have preloaded everything, this allows you to render everything that shares the same VAO and the same shader variant in a single draw call.
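As a hedged sketch of the shader side (DrawData and NR_OF_DRAWS are names I made up; gl_DrawID is a vertex-shader input that needs GLSL 4.60 or ARB_shader_draw_parameters):
struct DrawData{
int geometryIndex;
int cameraIndex;
int materialIndex;
};
layout (std140) uniform DrawDatas{
DrawData drawData[NR_OF_DRAWS];
};

void main(){
// one lookup per draw, everything else is already resident on the GPU
DrawData d = drawData[gl_DrawID];
CameraInfo cam = camera[d.cameraIndex];
MaterialInfo mat = material[d.materialIndex];
// ... transform with cam, shade with mat ...
}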
I am very bad at explaining things, so I hope this at least makes a little bit of sense.