Lighting in Custom Shader and GammaCorrection

Hello jMonkeys,

While recently reworking one of my shaders I came to the conclusion that it's time to redo my lighting approach, since it looks strange at night (I'll explain why below).

Similar to games like Minecraft, I flood-fill the light: each transparent block as well as each light-emitting block has a light level. Blocks don't just have one light level for sun and one for artificial light, but four channels instead: three for colored light (red, green and blue) and one for sunlight.
While meshing I pack that information (the neighbours' light levels, actually) into a vertex buffer so I can get the light level at each block (at each face, actually) in the shaders.
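The flood-fill itself can be sketched as a plain BFS over the four channels. This is only a toy 1D Java version with hypothetical names, to illustrate the idea; the post's actual implementation works on a 3D grid and isn't shown:

```java
import java.util.ArrayDeque;

public class LightFloodFill {
    // hypothetical 1D strip of blocks for illustration; real code works on a 3D grid.
    // light[i] holds 4 channels: r, g, b, sun
    static int[][] propagate(boolean[] transparent, int[][] emitted) {
        int n = transparent.length;
        int[][] light = new int[n][4];
        ArrayDeque<Integer> queue = new ArrayDeque<>();
        for (int i = 0; i < n; i++) {
            light[i] = emitted[i].clone(); // seed with emitted light
            queue.add(i);
        }
        while (!queue.isEmpty()) {
            int i = queue.poll();
            for (int j : new int[] {i - 1, i + 1}) {  // neighbours
                if (j < 0 || j >= n || !transparent[j]) continue;
                boolean changed = false;
                for (int c = 0; c < 4; c++) {
                    int spread = light[i][c] - 1;     // light loses one level per block
                    if (spread > light[j][c]) { light[j][c] = spread; changed = true; }
                }
                if (changed) queue.add(j);            // re-visit neighbours of updated blocks
            }
        }
        return light;
    }
}
```

A red lamp of level 15 at one end of a 5-block transparent strip ends up contributing level 11 at the far end, one level lost per block.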

Calculating the influence of the sun is straightforward: I can send the current sun color as well as its direction as uniforms to the shader, then inside the shader use the normal maps and do Phong lighting with the provided uniforms, taking the sun level at this face into account.

Since sunlight is usually close to white, I can multiply it with the color from the texture to dim it or show it at full brightness (for day/night), and multiply by the sun level at this face (so it's darker inside caves etc.). However, colored light, say red, has no green and blue components, so when I place a red lamp on a green grass block it does not get lit (multiplying red with green results in black).

On the other hand, when I add that colored light instead of multiplying it, it looks really ugly at night, since the surface loses all its texture (the sunlight is weak, so the texture color is close to black after multiplying; adding red light on top then results in quite plain red faces with no visible texture).

So I decided to treat all colored light as if it were right in front of the face (0,0,1) and then use the normals from the normal map, which vary around (0,0,1), to calculate dot products and thus vary the amount of colored light applied across the face. It looks OK, but I'm curious about other approaches. Also, the falloff of the light looks strange: all strongly lit faces are lit quite similarly, while the less light a face receives, the faster it gets dark.
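That kind of falloff is what a roughly 2.2 display gamma does to linear light levels written straight to the framebuffer: equal steps in the stored level become very unequal steps on screen, with the dark end collapsing fast. A small Java sketch of the effect, assuming the usual 2.2 power approximation:

```java
public class GammaFalloff {
    // The display applies roughly pow(x, 2.2); a linear light level written
    // straight to the framebuffer is therefore darkened non-linearly.
    static double displayed(double storedLevel) {
        return Math.pow(storedLevel, 2.2);
    }

    public static void main(String[] args) {
        // Equal steps in stored level are NOT equal steps in displayed brightness:
        // the bright end stays comparatively close, the dark end drops off fast.
        for (double level = 1.0; level >= 0.0; level -= 0.25) {
            System.out.printf("stored %.2f -> displayed %.3f%n", level, displayed(level));
        }
    }
}
```

For example, a stored level of 0.25 comes out below 0.05 on screen, which matches the "gets dark fast" observation.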

So I read about how monitors display colors, and from what I gather, it turns out that when I read a value from a texture I need to read it like

vec4 color = pow(texture(m_Tex, texCoords), vec4(2.2));

then do my calculations (I can use my linear falloff light values from the flood fill) and later output my color like

outColor = pow(calculatedColor, vec4(1.0/2.2));

But when I enable gammaCorrection I can just output my color like

outColor = calculatedColor;

because that's what the gammaCorrection setting is for?
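The effect of that decode/encode pair can be checked numerically. This Java sketch uses the same simple pow(x, 2.2) approximation from the snippets above (not the exact sRGB curve) and also shows why doing math in the wrong space changes the result:

```java
public class GammaRoundTrip {
    static final double GAMMA = 2.2; // simple power approximation, not the exact sRGB curve

    static double toLinear(double srgb)  { return Math.pow(srgb, GAMMA); }
    static double toSrgb(double linear)  { return Math.pow(linear, 1.0 / GAMMA); }

    public static void main(String[] args) {
        // decode -> (lighting math) -> encode round-trips cleanly
        double texel = 0.5;
        System.out.println(toSrgb(toLinear(texel))); // ~0.5

        // but averaging black and white in the two spaces gives different results:
        double srgbAvg   = (0.0 + 1.0) / 2.0;                              // 0.5
        double linearAvg = toSrgb((toLinear(0.0) + toLinear(1.0)) / 2.0);  // ~0.73
        System.out.println(srgbAvg + " vs " + linearAvg);
    }
}
```

The ~0.73 value is the physically correct on-screen midpoint between black and white, which is why lighting math should happen in linear space.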

I would like to give the user a setting so they can specify their own gamma in some reasonable range. How would that interfere with, say, the Lemur GUI? I expect it means I have to turn gammaCorrection off and do the correction manually with the user-specified value.

So, two questions. First: is treating the colored light the way I do reasonable (assuming it is right in front of the face so I can dot it with the per-fragment normal)? Are there better approaches?
And second: where does gammaCorrection take place, and how should I treat the colors inside the shaders to do the calculations correctly?

Fragment shader code (yup, it's a mess, so I tried to add some comments).

It's the main function only, since the rest is not needed for the logic:

void main() { 
    const vec2 vec01 = vec2(0.0, 1.0);

    vec3 finalTexCoord = texCoord;
    #ifdef PARALLAX
        finalTexCoord = smartSteepParallaxMapping(texCoord);
    #endif

    vec4 texColor = texture(m_TextureArray, finalTexCoord);
    //alpha test: discard fragments that are not opaque enough when transparency is used for this block
    if (((1.0 - step(0.9999, useTransparency)) * 2.0 - 1.0) * (1.0 - texColor.a) < -0.0001) {
        discard;
    }

    vec3 colorTex = texColor.rgb;                 //original color as looked up in the texture (assumes no colored light and full white sunlight)
    vec3 colorSun = m_SunColor.rgb * colorRGBS.a; //sun color provided by material parameter, scaled by the sun level at this block
    vec3 colorLight = colorRGBS.rgb;              //colored light at this block

    vec3 tangentNormal = vec01.xxy;               //default normal pointing away from the face
    #ifdef NORMAL
        tangentNormal = normalize(texture(m_NormalArray, finalTexCoord).rgb * 2.0 - 1.0); //per-fragment normal as looked up in the normal map
    #endif
    vec3 worldNormal = normalize((g_WorldMatrix * vec4(TBN * tangentNormal, 0.0)).xyz); //transform the normal from tangent space to world space using the TBN matrix from the vertex shader

    //float diffFactorRaw = max(dot(worldNormal, -m_SunDirection), 0.0);           //Phong
    float diffFactorRaw = pow(dot(worldNormal, -m_SunDirection) * 0.5 + 0.5, 2.0); //Phong, adjusted
    float specFactorRaw = 0.0;
    vec3 specColor = vec3(0.0, 0.0, 0.0);
    #ifdef SPECULAR
        //specFactorRaw = max(dot(normalize(viewDir), normalize(reflect(m_SunDirection, worldNormal))), 0.0); //Phong
        specFactorRaw = max(dot(worldNormal, normalize(-m_SunDirection + normalize(viewDir))), 0.0);          //Blinn-Phong
        specColor = texture(m_SpecularArray, finalTexCoord).rgb;
    #endif
    float specStrength = (specColor.r + specColor.g + specColor.b) / 3.0;

    float ambFactor = 0.15;                                         //ambient factor
    float diffFactor = 1.0 - ((1.0 - diffFactorRaw) * SUN_FACTOR);  //scale the diffFactor by matParam SunFactor
    float specFactor = step(0.25, diffFactorRaw)                    //suppress specular where the light hits at more than 90 degrees
                     * specStrength                                 //  specular strength looked up from the texture
                     * pow(specFactorRaw, 32.0);                    //  scaled by the raw specular factor raised to a power
    float ssaoFactor = 1.0 - (ssao * SSAO_FACTOR);                  //scale ssao from the vertex shader by the provided matParam

    vec3 baseWorldNormal = worldNormal; //baseWorldNormal is used for light that has no direction; it is assumed to come from right in front of the face
    #ifdef NORMAL
        baseWorldNormal = normalize((g_WorldMatrix * vec4(TBN * vec01.xxy, 0.0)).xyz); //with normal mapping enabled, assume the light points straight at the face, which yields different dot products across the face
    #endif
    float diffRGBFactor = dot(baseWorldNormal, worldNormal) * 0.5 + 0.5; //not sure this is a good approach

    vec3 diffuseColor = diffFactor * colorSun * colorTex;
    vec3 ambientColor = ambFactor * colorTex;
    vec3 diffuseRGBColor = diffRGBFactor * colorLight;
    vec3 specularColor = specFactor * specColor;//((vec01.yyy + colorTex + colorSun) / 3.0);//colorTex * colorSun;
    color = vec4((diffuseColor + ambientColor + diffuseRGBColor + specularColor) * ssaoFactor, 1.0);
}

Thanks for any input, and many greetings from the Shire,

Gamma correction is not usually a user option: either it's used or it isn't. One or the other.

When you pass colors to the shader, pass them gamma corrected using the setAsSrgb or getAsSrgb methods. Those should sort the input colors out for you. As for textures, they have to be redone with the knowledge that they will come out "lighter" or "washed out" - a.k.a. gamma corrected.

The graph below shows how traditional CRT monitors output their color; LCDs (or whatever we use these days) continue the trend on purpose. Gamma correction "flips" the curve so the response becomes linear. At least that's my understanding of it.

Thanks a lot for the quick reply.

I did not mean to make gammaCorrection itself optional; I just wanted to offer the user a small range (say between 1.8 and 2.2, I read something like that). So effectively gamma correction would still always be on, just a custom one instead of the built-in one.

So I should load the textures using, say, the assetManager, then manually convert them using the methods you mentioned, send them to the shaders and read them in as usual, but when outputting them I should output them like

outColor = pow(calculatedColor, vec4(1.0/m_GammaValue));

and turn off the built-in gammaCorrection so the scene is not too bright?
How would I make it work with other shaders? Would, say, my GUI be too dark, so I'd have to account for that?

You can safely consider every texture that is painted to be sRGB, while every texture that is generated (such as alpha, normals, roughness, etc.) is linear.

Gamma correction in jME does two things:

  1. Convert input textures to linear color space
  2. Convert the output to the default framebuffer to sRGB color space

If you choose a color from a color table or with a color picker, you should consider it sRGB unless the tool does the linearization on your behalf. This means you have to linearize it before assigning it to the material (in the way jayfella suggested, using setAsSrgb), because jME doesn't do that for you.

You don't need to implement your own color correction, also because it's not as easy as you think: every linear operation performed on sRGB images will produce wrong results, and this includes filtering, mipmapping and decompression, things that you can't control from a shader.


Thanks a lot for your reply as well.

Now that makes me wonder: does the conversion to linear space take place when loading textures through the assetManager? I'm calculating my own mipmaps before sending the textures to the GPU because of some transparency problems I had.

The core problem I had was that the colored light felt wrong, but that is probably because I have gammaCorrection turned off and still use the linear values from the colored light. Instead I should enable gammaCorrection and everything should work, except that the user doesn't get the option to specify a custom gamma value?

OpenGL has special format variations for sRGB images; the linearization is done automatically by the OpenGL implementation.
If you want to manually generate mipmaps, you need to linearize when you read and convert back to sRGB when you write.
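That read-linearize/average/re-encode step might look like the following toy Java sketch (single channel, values in 0..1, using the 2.2 power approximation rather than the exact sRGB curve):

```java
public class LinearMipmap {
    static double toLinear(double s) { return Math.pow(s, 2.2); }
    static double toSrgb(double l)   { return Math.pow(l, 1.0 / 2.2); }

    // Naive: average the stored sRGB values directly (wrong: result is too dark)
    static double mipSrgb(double a, double b, double c, double d) {
        return (a + b + c + d) / 4.0;
    }

    // Correct: linearize, average, convert back to sRGB for storage
    static double mipLinear(double a, double b, double c, double d) {
        return toSrgb((toLinear(a) + toLinear(b) + toLinear(c) + toLinear(d)) / 4.0);
    }

    public static void main(String[] args) {
        // 2x2 block: two white texels, two black texels
        System.out.println(mipSrgb(1, 1, 0, 0));   // 0.5  - darker than it should appear
        System.out.println(mipLinear(1, 1, 0, 0)); // ~0.73 - matches the true average brightness
    }
}
```

Averaging the raw sRGB bytes is exactly the kind of linear operation on sRGB data warned about above; it's why naively generated mipmaps of sRGB textures darken in the distance.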

If you want a custom gamma value, then you can do the sRGB conversion in your shader; the problem I was talking about concerns only the linearization.
To do that in jME you need to leave gamma correction disabled and enable only the linearization with

renderer.setLinearizeSrgbImages(true);

then, when you want to output to sRGB, you use the line you posted.

There are two things you should consider and try to avoid:

  1. Writing non-sRGB colors to RGB(A)8 images will cause them to lose accuracy
  2. The sRGB conversion you posted is an approximation, so converting back and forth between linear and sRGB will accumulate errors
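For reference, the exact sRGB encode defined by IEC 61966-2-1 is piecewise: a linear segment near black and a 2.4-exponent power segment above it. Comparing it with the pow(x, 1.0/2.2) approximation shows the divergence is largest in the darks, which is where the back-and-forth error mentioned above accumulates:

```java
public class SrgbExact {
    // exact sRGB encode (IEC 61966-2-1): linear segment near black, power segment above
    static double linearToSrgbExact(double l) {
        return l <= 0.0031308 ? 12.92 * l : 1.055 * Math.pow(l, 1.0 / 2.4) - 0.055;
    }

    // the common single-power approximation used in the snippets above
    static double linearToSrgbApprox(double l) {
        return Math.pow(l, 1.0 / 2.2);
    }

    public static void main(String[] args) {
        // the two curves differ most near black; round-tripping with mismatched
        // curves (encode with one, decode with the other) adds error there
        for (double l : new double[] {0.001, 0.01, 0.18, 0.5, 1.0}) {
            System.out.printf("%.3f: exact %.4f  approx %.4f%n",
                    l, linearToSrgbExact(l), linearToSrgbApprox(l));
        }
    }
}
```

At a linear value of 0.001 the exact curve gives about 0.013 while the 2.2 approximation gives about 0.043, so dark tones are where the approximation bites.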

I guess I'm slowly getting the idea…
So my setup should look like this?

  • Load textures using the assetManager with gammaCorrection turned off. Since I can consider every painted image to be sRGB, I first need to convert the loaded textures to linear color space.
  • Then I can do my custom mipmapping (based on MipMapGenerator.generateMipMaps(), just slightly adjusted to handle alpha differently).
  • Then I can bind these linear-color-space, mipmapped textures to my shaders.
  • In the shaders I read colors like texture(m_Texture, texCoords) without any conversion, since they are in linear color space already.
  • I can safely do any calculations in the shader.
  • Then I output my final color like outColor = pow(calculatedColor, vec4(1.0/m_GammaValue)); and additionally set renderer.setLinearizeSrgbImages(true);? Not sure I get this one.
  • Then I have post-processing filters like a fog shader or a custom tone-map filter.
  • Those get the sRGB color space values from the texture, and I need to manually convert them to linear space before interpolating colors or doing other calculations, then convert back to sRGB when outputting, and live with the fact that they lose some accuracy?

renderer.setLinearizeSrgbImages(true); tells jME to use the sRGB image formats so that OpenGL can do the linearization for you. From the javadoc:

      * If enabled, all {@link Image images} with the {@link Image#setColorSpace(com.jme3.texture.image.ColorSpace) sRGB flag}
      * set shall undergo an sRGB to linear RGB color conversion when read by a shader.
  • those get the sRGB colorspace values from the texture and i need to manually convert them to linear space before interpolating colors or other calculations, then converting back to sRGB when outputting and have to live with the fact that they lose some accuracy?

You don't have to manually convert them to linear if you use setLinearizeSrgbImages(true).
If your render passes and filters use formats that have enough accuracy, e.g. RGB16, you don't need to convert back and forth between linear and sRGB for each pass; you can have only the last pass do the linear → sRGB conversion.
