Feedback about calculating light per-fragment

tl;dr, I made some changes to my Lighting.j3md and I’d like to share it and get feedback.

Some background: my games use static lightmaps for most of their lighting, but I use dynamic lights sparingly for simple effects (explosions, flickering lights, etc.). I'm also currently not using specular.

I noticed that point lights looked bad; I could clearly see the triangles of the geometry, like this:

A quick search in the forum pointed to adding more triangles as the only solution. I had thought the lighting was calculated per-pixel, so I was disappointed when I looked at how the Phong model works and realized it's heavily dependent on interpolation.

I tried to move all the work to the fragment shader and I came up with the following formula based on distance:

I simply took 1/x^2 and moved the curve to the left and down a bit so it crosses the axis. But then I had to take the normals into consideration, and those I must interpolate. I found that if I multiplied the dot product of the light direction and the normal by a constant, the change near 90 degrees was more rapid and had less effect on the overall result.
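Concretely, the shifted curve works out to 1.04 / (5d + 1)^2 − 0.04 (the constants from my shader below). A quick Python sketch showing it starts at 1.0 at the light and actually reaches zero at a finite distance, instead of trailing off forever:

```python
# Attenuation curve: 1/x^2 shifted left and down so it starts at 1.0
# and crosses zero at a finite distance (light's inverse range assumed 1.0).
def attenuation(dist, inv_range=1.0):
    return max(0.0, 1.04 / (5.0 * dist * inv_range + 1.0) ** 2 - 0.04)

print(attenuation(0.0))  # full intensity right at the light
print(attenuation(0.5))  # partway along the falloff
print(attenuation(1.0))  # past the zero crossing: clamped to 0
```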

This is how it looks now:

These are the changes in my Lighting.frag:

       #ifndef PHONG_LIGHTING
         vec2 light = vec2(0.0);
         light.x = computeLightingPerPixel(lightDir.xyz, normal);
       #else
         vec2 light = computeLighting(normal, viewDir, lightDir.xyz, lightDir.w * spotFallOff, m_Shininess);
       #endif

I added a parameter to switch back to the default way; computeLightingPerPixel() is defined like this:

float computeLightingPerPixel(vec3 lightDir, vec3 normal) {
  float dotp = dot(lightDir, normal);
  // attenuate using the normal so it doesn't blink when crossing 0 degrees
  float side = clamp(8.0 * dotp, 0.0, 1.0);
  float front = step(0.0, dotp);
  float dist = length(worldPos - lightPos.xyz); // I pass these from the vertex shader
  float posLight = step(0.5, g_LightColor.w); // 0=Dir 1=Point 2=Spot 3=Amb
  float dirLight = 1.0 - posLight;
  return max(0.0,
     dirLight * dotp +
     posLight * front * side * (1.04 / pow(5.0 * dist * lightPos.w + 1.0, 2.0) - 0.04));
}

So I basically calculate length() for every fragment, which I guess is quite inefficient. Is that why this is not the standard procedure in games?
I'm happy with the result, and the performance seems OK; I need to add many lights to make the framerate drop, more than I plan to ever use.
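In case anyone wants to play with the curve outside a shader, here's a rough Python translation of computeLightingPerPixel() (just my sketch; worldPos and the light parameters become explicit arguments instead of varyings/uniforms, and light_type stands in for g_LightColor.w):

```python
import math

def compute_lighting_per_pixel(light_dir, normal, world_pos, light_pos, light_type):
    # light_type mirrors g_LightColor.w: 0=Directional, 1=Point, 2=Spot, 3=Ambient
    dotp = sum(l * n for l, n in zip(light_dir, normal))
    # attenuate using the normal so it doesn't blink when crossing 0 degrees
    side = min(max(8.0 * dotp, 0.0), 1.0)        # clamp(8.0 * dotp, 0.0, 1.0)
    front = 1.0 if dotp >= 0.0 else 0.0          # step(0.0, dotp)
    # light_pos = (x, y, z, invRange)
    dist = math.sqrt(sum((w - p) ** 2 for w, p in zip(world_pos, light_pos[:3])))
    pos_light = 1.0 if light_type >= 0.5 else 0.0  # step(0.5, g_LightColor.w)
    dir_light = 1.0 - pos_light
    return max(0.0,
               dir_light * dotp +
               pos_light * front * side *
               (1.04 / (5.0 * dist * light_pos[3] + 1.0) ** 2 - 0.04))
```

A fragment right under a point light (normal facing the light, distance 0) gets full intensity, and the falloff reaches exactly zero past the cutoff instead of trailing off forever.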

Since I’m very new to shader coding, I wanted to check if I’m missing something or there’s some obvious optimizations I can do.


With a lot of modern GPUs, subdividing a mesh further can be cheaper than expensive per-fragment calculations, since it only costs a small amount of VRAM, so you might want to test which gives better performance.
Regarding the length() in your computeLightingPerPixel() function: you're already passing worldPos in from the vertex shader, so it's being interpolated anyway. What's stopping you from doing that entire calculation in the vertex shader? The equation seems linear, so I think interpolation should work rather well.


The second screenshot looks nice. However, Phong lighting in jME is calculated per fragment (unless you enable vertex lighting); the fact that some values (e.g. normals) are interpolated and passed to the fragment shader doesn't change that (those fragments are part of the same plane).

I think the difference here is that you are not using Phong at all in your second screenshot, and maybe that goes more toward the look you are aiming for. Actually, I personally think the base Phong model doesn't look very good in general; the games I've seen that looked best with Phong either had a lot of baking or a tweaked model. That's why, IMO, it is best to use PBR whenever possible.


I mean, whether it’s phong or not depends on which hairs you want to split, I suppose.

Phong technically doesn’t talk about where the light component comes from… and to me that seems like the primary difference here.

For point lights, where the distance from the light matters, calculating that distance only at the vertices versus at each fragment is going to make a big difference. (Per fragment is what subdividing gets us closer to.)


The brightness across that surface will look much more correct in the subdivided second picture (or if distance is calculated across the surface instead of only at the ends).

grizeldi is right that the distance could be interpolated, though… saving a sqrt per fragment.

JME’s phong lighting suffers from other issues, too, though… because the light direction itself is interpolated, it suffers from the same issues that will plague interpolated normals. Essentially that reprojecting an interpolated direction vector will introduce artifacts over large distances. Double-whammy.

I made this image some time ago showing two different cases of interpolated direction vectors (tip to tip) that get reprojected. The first is a super extreme case where the normals are almost 180 degrees apart. The second is a more normal case.

Green color is angular interpolation (the ideal), red is linear interpolation + reprojection (normalize). And yellow (which you can’t even see in the top image) is the direct linear interpolation tip-to-tip.
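To put rough numbers on the drawing, here's a quick 2D Python sketch (my own toy example, not JME code) comparing the ideal angular interpolation against linear interpolation + reprojection:

```python
import math

def lerp_normalize(a, b, t):
    # linear interpolation tip-to-tip, then reprojection onto the unit circle
    v = (a[0] + (b[0] - a[0]) * t, a[1] + (b[1] - a[1]) * t)
    length = math.hypot(*v)
    return (v[0] / length, v[1] / length)

# two unit vectors 120 degrees apart (an extreme but legal case)
a = (1.0, 0.0)
b = (math.cos(math.radians(120)), math.sin(math.radians(120)))

# a quarter of the way along, the ideal (angular) interpolation sits at 30 degrees
ideal = 0.25 * 120.0
q = lerp_normalize(a, b, 0.25)
actual = math.degrees(math.atan2(q[1], q[0]))
print(ideal, actual)  # the reprojected vector lags well behind 30 degrees
```

Note the halfway point (t = 0.5) comes out correct by symmetry; it's everywhere in between that linear interpolation + reprojection drifts off the ideal angular path.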


That’s probably the best solution generally. Since I generate the level geometry using my own editor I’d need to modify the code that generates the mesh to add extra vertices, I don’t think it’d be very hard and I added it to my TODO list, but I wanted to experiment with a quicker solution first.

Yeah, I don’t think this is Phong at all; I don’t know what to call it, but it seems like a very “naive” formula.

mhh, I don’t understand, look at the same image:

the two extreme points are equidistant from the light, so interpolating will result in all the middle points having the same light intensity, but that’s clearly not correct; the points in the center are closer and should be illuminated more.

Think about it in 3D: if you have an equilateral triangle and the light is right above the center, the center would be brightest, with darker rings spreading outward until reaching the vertices. Linear interpolation would think everything should be equally dark, no?
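To put numbers on it (my own made-up setup: a unit equilateral triangle with the light hovering 0.1 above its centroid), the three vertices are all the same distance from the light, so anything interpolated from them is flat across the face, while the true distance dips sharply in the middle:

```python
import math

def dist(p, light):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, light)))

# unit equilateral triangle in the XZ plane, centroid at the origin
r = 1.0 / math.sqrt(3.0)  # circumradius of a triangle with side length 1
verts = [(r * math.cos(a), 0.0, r * math.sin(a))
         for a in (math.radians(d) for d in (90, 210, 330))]
light = (0.0, 0.1, 0.0)   # hovering just above the centroid

vertex_dists = [dist(v, light) for v in verts]
center = (0.0, 0.0, 0.0)
print(vertex_dists)         # all three identical -> interpolation stays flat
print(dist(center, light))  # much smaller: the middle really is closer
```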



That is exactly what I’m saying. I’m pointing out why subdividing (or fragment-based) is more accurate.

It’s a trade off. If you have giant triangles then JME’s approach doesn’t like point lights because of the image I hand-drew.


Oh, OK, that’s what I understood from your reply, but I got confused when you said I could interpolate the distance. You mean I can, but I won’t get the same result.
Actually I tried that and it didn’t look good (in this section of the map the triangles are very uneven).

Ah, yes… that’s correct. (ie: my statement before was wrong about interpolating distance.)
