Deferred lighting problem (my own project)

I’m working on a lighting system of sorts, and I’m having some major trouble calculating the distance from the light to the fragment position. The system’s first pass (the regular rendering of the screen) renders the whole scene with a full white ambient light. The second pass then shades it through a Filter. (Deferred lighting, more or less; I’m not planning on making it super great, it’s for a small project.) I’m using the exact same method of getting the fragment’s position as the one used in the SSAOFilter’s frag shader: reconstructing the position from a depth buffer. From what I can gather by looking at it, it returns the position in eye-space.

As a test-case, I put a light at the origin, and a box slightly forward from the origin and within the light’s radius. I run the test-case and at first it looks fine, until I move the camera. When the camera moves, the light follows it. I quickly realized this was because the light’s world-space position was being treated as an eye-space position, and the origin in eye-space is the camera’s location. So, I multiplied the light’s position by inverse(g_WorldViewMatrix). That didn’t work, so I tried just the regular g_WorldViewMatrix. Again, the same result. I tried converting the position returned from the depth buffer into world-space by multiplying by basically every imaginable matrix and its inverse, and nothing helped.

Frag shader code:
[java]
vec3 getPosition(in vec2 uv) {
    // Reconstruction of the view-space position from the depth buffer
    float depthv = texture2D(m_DepthTexture, uv).r;
    // linearize the hardware depth value using the near/far planes
    float depth = (2.0 * m_FrustumNearFar.x) / (m_FrustumNearFar.y + m_FrustumNearFar.x - depthv * (m_FrustumNearFar.y - m_FrustumNearFar.x));
    // one-frustum-corner method: interpolate towards the far-plane corner
    float x = mix(-m_FrustumCorner.x, m_FrustumCorner.x, uv.x);
    float y = mix(-m_FrustumCorner.y, m_FrustumCorner.y, uv.y);

    return depth * vec3(x, y, m_FrustumCorner.z);
}

vec3 getNormal(in vec2 uv) {
    return normalize(texture2D(m_Normals, uv).xyz * 2.0 - 1.0);
}

void main() {
    int lightType = 0;
    vec3 position = getPosition(texCoord) - m_CameraPosition;
    if (lightType == 0) {
        float dist = length(m_LightPosition - position);
        if (dist > m_LightRadius) {
            discard;
        }

        float attenuation = 1.0 - (dist / m_LightRadius);
        attenuation = clamp(attenuation, 0.0, 1.0); // clamp returns a value; its result must be assigned

        gl_FragColor = texture2D(m_Texture, texCoord) * m_LightColor * vec4(attenuation);
        //gl_FragColor = vec4(clamp(distance(position, vec3(0.0)) / 200.0, 0.0, 1.0));
        gl_FragColor.a = attenuation;
    }
}
[/java]

Note that for the distance between the two I’ve tried both the length of their difference and the distance() function, and neither made any difference.
If any more code is needed, just ask.

PS: I realize there are probably many mistakes, likely major ones in the general structure of the thing, but this is my attempt at bettering my knowledge of GLSL, so I expect it to be riddled with errors. I wanted to figure this problem out myself, but after 2 days of trying to literally no avail, I have given up.

Instead of multiplying by the inverse g_WorldViewMatrix you need to multiply by the inverse view matrix.
The inverse g_WorldViewMatrix goes from view space to model space; you want to go from view space to world space, so that’s the inverse view matrix.

Also, if you are in a Filter, don’t use the global view matrices (the g_ ones), because those are not the ones from the scene; they come from the parallel-projection camera used to render the full-screen quad.
So you need to pass your own matrix to the material, fetching it from the camera in the filter, as sketched below.
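
For illustration, a minimal sketch of the shader side, assuming the filter passes the scene camera’s view matrix as a ViewMatrix material parameter (the uniform name is just an example, not an existing global):

[java]
// set from the filter with material.setMatrix4("ViewMatrix", vp.getCamera().getViewMatrix())
uniform mat4 m_ViewMatrix;

vec3 viewToWorld(in vec3 viewSpacePos) {
    // inverse(view) goes view -> world; inverse(g_WorldViewMatrix) would go view -> model instead
    return (inverse(m_ViewMatrix) * vec4(viewSpacePos, 1.0)).xyz;
}
[/java]

(Calling inverse() per fragment is wasteful; inverting the matrix once in Java and passing that instead would be cheaper.)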

@nehon thanks for clearing up some confusion.
However, after implementing "getCamera().getViewMatrix()" and using it to convert the light position, the result was identical again. This leads me to wonder if the error is outside the GLSL frag shader, but if I manually set gl_FragColor to something else it removes any odd effects.

Edit: Also, no combination of that, other matrices from the camera, or inverses of any matrices from the camera resulted in the correct changes; however, each matrix produced some change, which shows that the error does in fact lie somewhere in the shader.

I took the liberty of putting code tags around your code sample for readability.

This line:
vec3 position = getPosition(texCoord) - m_CameraPosition;
is a bit suspicious.
I'm not sure it does what you expect.

What you need is to have both position and m_LightPosition in the same space so you can compute a consistent distance.

getPosition returns the position of the fragment in view space.
m_LightPosition is in world space.

So, several options:

  • pass in the view matrix from the camera and multiply it with m_LightPosition (remember that to have a correct transformation you need to do matrix * vector, not the other way around)
  • pass in the inverse view matrix and multiply it with what getPosition returns
  • look into the water post-filter; there is a getPosition function there that returns the position of the fragment in world space using the ViewProjectionMatrixInverse (sent as a material parameter, not a global one; look in the filter code to see how it’s computed). A sketch of this approach follows below.
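
For that third option, here is a minimal sketch of a world-space getPosition(), assuming the filter computes the inverse view-projection matrix and passes it as a material parameter (the names are illustrative, not the actual water-filter code):

[java]
// assumed to be set from the filter, e.g. with
// material.setMatrix4("ViewProjectionMatrixInverse", cam.getViewProjectionMatrix().invert())
uniform mat4 m_ViewProjectionMatrixInverse;
uniform sampler2D m_DepthTexture;

vec3 getWorldPosition(in vec2 uv) {
    float depth = texture2D(m_DepthTexture, uv).r;
    // expand uv and depth from [0,1] to normalized device coordinates in [-1,1]
    vec4 ndc = vec4(uv * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);
    // unproject, then divide by w to undo the perspective projection
    vec4 world = m_ViewProjectionMatrixInverse * ndc;
    return world.xyz / world.w;
}
[/java]

With the fragment position in world space, the distance to m_LightPosition can be taken directly, with no conversion of the light at all.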

@nehon
[java]
void main() {
    int lightType = 0;
    vec3 position = getPosition(texCoord);
    if (lightType == 0) {
        // bring the world-space light position into view space to match getPosition()
        vec4 lightPos = m_ViewMatrix * vec4(m_LightPosition, 1.0);
        float dist = length(position - lightPos.xyz);
        if (dist > m_LightRadius) {
            discard;
        }

        float attenuation = 1.0 - (dist / m_LightRadius);
        attenuation = clamp(attenuation, 0.0, 1.0);

        gl_FragColor = texture2D(m_Texture, texCoord) * m_LightColor * vec4(attenuation);
        gl_FragColor.a = attenuation;
    }
}
[/java]
That’s the current code. Still the same problem, it seems.

How do you populate m_LightPosition and m_ViewMatrix?
Please post the full filter code (using [java] tags).

@nehon the Internet adapter on my work computer is being strange, so I can’t post the exact code, but setting those 2 variables is something like:

[java]
lightMaterial.setVector3("LightPosition", light.getPosition());
lightMaterial.setMatrix4("ViewMatrix", viewport.getCamera().getViewMatrix());
[/java]

In order to get the normal map, I use the exact code copied from the SSAOFilter’s postQueue() method in the Filter, as well as a Pass to get the normals.
Then (eventually, for each light) I run the shader with the light’s properties and write each one to the same output texture. I’m not really confident in the way I’m doing the whole Filter, because it’s my first actual attempt at one, so I’m pretty sure there are some mistakes; tomorrow, when my Internet is functional, I’ll post the full code to probably confirm that.

Also, sorry for not using the tags; it must’ve changed at some point recently when I was out of town and I didn’t know how to do it. Thanks for that haha.

@nehon
[java]
public class DeferredLightingFilter extends Filter {

//Constants
private AssetManager manager;
private RenderManager renderManager;
private ViewPort vp;
private int screenWidth, screenHeight;
//Lighting stuff
private Pass normalPass;
private Material lightMaterial;
private ArrayList<DeferredLight> lights;
//Other junk
private Vector2f frustumNearFar;
private Vector3f frustumCorner;

public DeferredLightingFilter() {
    super("DeferredLightingTest");
    lights = new ArrayList<>();
}

@Override
public boolean isRequiresSceneTexture() {
    return true;
}

@Override
public boolean isRequiresDepthTexture() {
    return true;
}

@Override
protected void postQueue(RenderQueue queue) {
    Renderer r = renderManager.getRenderer();
    r.setFrameBuffer(normalPass.getRenderFrameBuffer());
    renderManager.getRenderer().clearBuffers(true, true, true);
    renderManager.setForcedTechnique("PreNormalPass");
    renderManager.renderViewPortQueues(vp, false);
    renderManager.setForcedTechnique(null);
    renderManager.getRenderer().setFrameBuffer(vp.getOutputFrameBuffer());
}

@Override
protected void initFilter(AssetManager manager, RenderManager renderManager, ViewPort vp, int screenWidth, int screenHeight) {
    this.manager = manager;
    this.renderManager = renderManager;
    this.vp = vp;
    this.screenWidth = screenWidth;
    this.screenHeight = screenHeight;
    
    normalPass = new Pass();
    normalPass.init(renderManager.getRenderer(), screenWidth, screenHeight, Format.RGBA8, Format.Depth);

    lightMaterial = new Material(manager, "MatDefs/DeferredLighting.j3md");

// lightMaterial.setTexture("Output", geometryBuffer.getTexture());

    frustumNearFar = new Vector2f();

    float farY = (vp.getCamera().getFrustumTop() / vp.getCamera().getFrustumNear()) * vp.getCamera().getFrustumFar();
    float farX = farY * ((float) screenWidth / (float) screenHeight);
    frustumCorner = new Vector3f(farX, farY, vp.getCamera().getFrustumFar());
    frustumNearFar.x = vp.getCamera().getFrustumNear();
    frustumNearFar.y = vp.getCamera().getFrustumFar();

    lightMaterial.setVector2("FrustumNearFar", frustumNearFar);
    lightMaterial.setVector3("FrustumCorner", frustumCorner);
    lightMaterial.setVector2("ScreenRes", new Vector2f((float) screenWidth, (float) screenHeight));
    lightMaterial.setVector3("CameraPosition", vp.getCamera().getLocation());
    lightMaterial.setTexture("Normals", normalPass.getRenderedTexture());
    lightMaterial.setMatrix4("ViewMatrix", vp.getCamera().getViewMatrix());
}

@Override
public void preFrame(float tpf) {
    Texture2D frameColor = new Texture2D(screenWidth, screenHeight, Format.RGBA8);

    for (DeferredLight light : lights) {
        //Throw each light's properties into the shader and run it on the existing texture.
        //And pray to the bleebus that it works properly.
        lightMaterial.setFloat("LightRadius", light.getRadius());
        lightMaterial.setColor("LightColor", light.getColor());
        lightMaterial.setVector3("LightPosition", light.getPosition());

        Pass lightPass = new Pass();
        lightPass.setPassMaterial(lightMaterial);
        lightPass.init(renderManager.getRenderer(), screenWidth, screenHeight, Format.RGBA8, Format.Depth, 1, true);
        lightPass.setRenderedTexture(frameColor);
    }

    this.setRenderedTexture(frameColor);
}

@Override
protected Material getMaterial() {
    return lightMaterial;
}

public void addLight(DeferredLight light) {
    lights.add(light);
}

}
[/java]

I don’t see anything obvious reading the code.
Actually, looking at your code, I don’t see how the light could move with the camera…
Is there any way you could post a working sample, as a project maybe?

I’m sure this isn’t it… but how did you set the position of the light?

@nehon I’ll work on a test-case, shouldn’t be too long.

@pspeed I use a class called DeferredLight, and I set it originally with just a line like the following:

[java]myLight.setPosition(new Vector3f());[/java]

However, fearing that was the problem, I changed it to be more specific (though I figured there would be no difference, I decided I might as well be on the safe side):

[java]myLight.setPosition(new Vector3f(0.0F, 0.0F, 0.0F));[/java]

@vinexgames said: @nehon I'll work on a test-case, shouldn't be too long.

My fear was that there was an overlooked myLight.setPosition( cam.getPosition() ) somewhere… I figured not but when you get down to the bottom of the rabbit hole, you will reach for anything. :wink:

[java]
@Override
public void preFrame(float tpf) {
    Texture2D frameColor = new Texture2D(screenWidth, screenHeight, Format.RGBA8);

    for (DeferredLight light : lights) {
        //Throw each light's properties into the shader and run it on the existing texture.
        //And pray to the bleebus that it works properly.
        lightMaterial.setFloat("LightRadius", light.getRadius());
        lightMaterial.setColor("LightColor", light.getColor());
        lightMaterial.setVector3("LightPosition", vp.getCamera().getViewMatrix().mult(light.getPosition()));

        Pass lightPass = new Pass();
        lightPass.setPassMaterial(lightMaterial);
        lightPass.init(renderManager.getRenderer(), screenWidth, screenHeight, Format.RGBA8, Format.Depth, 1, true);
        lightPass.setRenderedTexture(frameColor);
    }

    this.setRenderedTexture(frameColor);
}

[/java]

Somehow that almost works. It works perfectly when the camera’s direction == Vector3f.UNIT_Z, but other than that it’s half right and half still by distance. Now I’m even more confused .___.

Edit: The light no longer follows the camera, but it’s still not right. I’ll try a few more things that I can think of…

Make sure to check this out: Commits · kwando/dmonkey · GitHub
GitHub - kwando/dmonkey at dev

@Setekh I really wish I would’ve remembered that post! >.< Looking at that code, I see that their approach was probably far more effective than mine anyway.
Is that branch up for use in projects, or are there limitations?

@vinexgames, it certainly works, but there are some issues in its current design, and I do not have the time to play with that project right now (too bad, because I want to).
Use it at your own risk, and some attribution to the authors would be appreciated if you decide to use it =)

I have managed to run it; it looks very nice. I had to change Ambient.frag to process only one directional light: for whatever reason, the 2nd and 3rd lights’ data was pure garbage, causing random memory artifacts when processed (they are put properly into the material parameters, so it has to be something elsewhere causing the issue). Nvidia GTX 660.

Edit:
Found the error.

[java]
vec4 dirColors = vec4(0);
[/java]

dirColors was not initialized to zero, so it was getting random values, causing the jitter. After changing the line to the above, everything works perfectly.

@kwando What sorts of issues? I ran a couple of the test-cases and it looks pretty nice. Does it support specular maps? If not, that’s something I have an idea for and may try to implement.

I’m thinking about the possibility of hacking it around and converting it to a 3-stage approach instead of a 2-stage one. The current implementation does a geometry pass outputting everything needed for the material (normals, diffuse, specular) in the first stage, and then renders the lights, which use the material data in the GBuffer to output the final colors. A 3-stage approach does just a very basic geometry pass first (just depth, normals, and possibly a glossiness factor), then does the lights, which output the total light contribution based on the normals/positions, and then a final, full geometry pass which is very similar to standard rendering, just taking the light contributions as an input texture instead of doing the light calculations.

This allows using distinct materials for the geometry without issues and makes porting existing materials considerably easier (you just take away the light calculation and use the input lightmap values; the rest stays the same). It also helps a bit with the size of the GBuffer, but at the cost of two geometry passes instead of one. Still, having full flexibility for materials is probably a clear win IMHO.
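
To make the third stage concrete, here is a minimal sketch of what its fragment shader could look like under this scheme (the m_LightBuffer name and the varyings are hypothetical, not code from the repository): standard shading with the lighting computation replaced by a texture fetch.

[java]
uniform sampler2D m_DiffuseMap;  // the material's own diffuse texture
uniform sampler2D m_LightBuffer; // light contributions accumulated in the second stage (hypothetical name)

varying vec2 texCoord;  // regular texture coordinates from the vertex shader
varying vec4 projCoord; // clip-space position passed along from the vertex shader

void main() {
    // project into [0,1] screen space to look up this fragment's accumulated light
    vec2 screenUV = (projCoord.xy / projCoord.w) * 0.5 + 0.5;
    vec4 lighting = texture2D(m_LightBuffer, screenUV);
    // standard rendering minus the light calc: modulate diffuse by the accumulated light
    gl_FragColor = texture2D(m_DiffuseMap, texCoord) * lighting;
}
[/java]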

@abies Any chance of posting a vid when you get this completed?