Proper handling of point light halos

Hi all,

I have a design question because I’m just stuck. I want to render the source of point lights (in this special case airport runway lights) with a halo around the light. I’ve drawn a nice halo texture which I apply to a billboarding quad. The result should look like this (rendered with depth test disabled):

When the depth test is enabled, however, the halos are cut off where they intersect the ground:

In thick fog the halo should become quite big, on the scale of meters. So displacing the quads by some height, or moving them closer to the camera, would have unacceptable side effects.
The idea is to draw the whole halo texture if the center pixel is not occluded by an object.
As an airport can easily have a thousand lights, I do not think collision detection is the right way to achieve this (performance-wise).
My inclination is to check the depth at the quad's center pixel (x0, y0) in a shader and do something like

// Pseudocode: draw the halo only when the light center itself is visible,
// i.e. nothing in the depth texture is closer than the light.
if (depthFromDepthTexture(x0, y0) >= lightDistanceFromCamera) {
    gl_FragColor = haloTextureValue;
}

But I have no idea how to achieve this, even though I've written a few shaders already. Any other thoughts are welcome. The reason for all this is that I couldn't get the bloom filter to produce nice halos; they come out too small.

If you have an idea on how to deal with halo textures in this case please share it with me.

Try turning off depth write.

I mean, I presume they are all one big object instead of separate objects. (If they are separate objects then they are not in the transparent bucket where they should be.)

If you do web searches for alpha sorting, alpha z buffer, etc… you will find out why this happens. Short answer: the nearer quads get drawn first and fill the z-buffer even for the transparent pixels. Anything drawn after with depth test won’t draw there because the previously drawn quad has already filled in a nearer z value.

Actually depth write is off, but I see that I need to be clearer about what I want to achieve:
I am doing research on rendering runway lights in fog (day and night). Especially at night and in thick fog the halos become really big, due to scattering of the light by the fog particles.
The important thing is that the visibility of objects that do not emit light is less than the visibility of light sources. (I assume this is because of the high contrast between the two.) For example: visibility = 400 m, but Runway Visual Range (RVR) = 800 m. (For pilots, RVR is the key factor in decision making. RVR can be determined by counting lights along the runway edge, which are spaced at a well-defined distance.)

Basically I want to test a simplified model in which the color of the light does not attenuate over distance; only the size of the halo becomes smaller.
In fact, you never see a light source's color attenuated by the fog: either you see the light or you don't. As soon as you can see the light source, you know the color of the light.

For the reason stated above (visibility < RVR) I do not see a way to draw the objects and the halos in one rendering pass. My current setup is:

  1. Render all the objects (with depth write enabled)
  2. Attenuate the colors due to atmospheric effects (i.e. fog) using a FilterPostProcessor; a sketch of such a fog shader follows this list. I understand that this step discards the depth information unless I put the halos in the translucent bucket. Therefore:
  3. Render the halos in the translucent bucket. Each halo is a billboarded quad; they all share one material and are batched with the GeometryBatchFactory.
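To make step 2 concrete, here is a minimal sketch of what such a fog filter's fragment shader could look like. The parameter names follow jME's post-filter conventions, but m_FogColor, m_FogDensity and m_FrustumNearFar are my own assumptions, and the depth linearization is only approximate:

// Sketch of a distance-based fog post filter (step 2).
uniform sampler2D m_Texture;      // scene color from the previous pass
uniform sampler2D m_DepthTexture; // scene depth
uniform vec2 m_FrustumNearFar;    // (near, far) plane distances
uniform vec4 m_FogColor;
uniform float m_FogDensity;
varying vec2 texCoord;

void main() {
    vec4 scene = texture2D(m_Texture, texCoord);
    float d = texture2D(m_DepthTexture, texCoord).r;
    // Undo the nonlinear depth mapping to get a normalized view distance.
    float dist = (2.0 * m_FrustumNearFar.x)
               / (m_FrustumNearFar.y + m_FrustumNearFar.x
                  - d * (m_FrustumNearFar.y - m_FrustumNearFar.x));
    // Exponential-squared fog: 1.0 = no fog, 0.0 = fully fogged.
    float fog = clamp(exp(-pow(m_FogDensity * dist, 2.0)), 0.0, 1.0);
    gl_FragColor = mix(m_FogColor, scene, fog);
}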

This way I should have full control over the halos' size and color. But I don't quite know how to detect occlusion, even if I limit it to a depth check at the center of the halo.

If the halos are just billboarded Quads (as in JME Quads) then I guess depth sorting is backwards in the translucent bucket for some reason. You can control this by setting its comparator.

Note: 1 and 2 could be combined into one step with a custom shader. I have shaders that do atmospheric scattering as part of their normal fragment processing. They are even open source.

Not exactly sure what depth write has to do with anything here, since the quads are intersecting the terrain, which writes depth. This is a very common issue with particles and billboards… We already support soft particles, which would make those halos softly “fade” into the ground instead of clipping like they do now. For you this probably won't work, as you want to render them whole.
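For reference, the soft-particle trick boils down to fading a fragment's alpha by how close (in depth) the quad is to the geometry behind it. A rough sketch of the idea, not jME's actual soft-particle shader, with m_Resolution and m_Softness as assumed parameters:

// Soft-particle idea: fade alpha where the quad nearly touches the scene.
uniform sampler2D m_Texture;      // halo texture
uniform sampler2D m_DepthTexture; // depth of the opaque pass
uniform vec2 m_Resolution;        // viewport size in pixels
uniform float m_Softness;         // how quickly the fade kicks in
varying vec2 texCoord;

void main() {
    // Screen position of this fragment in [0,1] texture coordinates.
    vec2 screenUV = gl_FragCoord.xy / m_Resolution;
    float sceneDepth = texture2D(m_DepthTexture, screenUV).r;
    // Fade out where the quad gets close to the scene geometry. This
    // compares raw nonlinear depth values; a real implementation would
    // linearize both depths first.
    float fade = clamp((sceneDepth - gl_FragCoord.z) * m_Softness, 0.0, 1.0);
    vec4 halo = texture2D(m_Texture, texCoord);
    gl_FragColor = vec4(halo.rgb, halo.a * fade);
}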

One option is to disable depth testing and hide the halos yourself when they are behind another object. You can do this in a shader using a feature called VTF (vertex texture fetch): render the halos as point sprites, fetch from the depth buffer in the vertex shader, and perform the depth test there manually (this should be pretty fast).
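A rough sketch of that vertex shader, assuming jME's naming conventions (g_WorldViewProjectionMatrix, inPosition) and made-up m_DepthTexture / m_HaloSize parameters; note that a vertex texture fetch needs an explicit LOD:

// VTF sketch: depth-test each light once in the vertex shader and
// collapse the point sprite when the light center is occluded.
uniform mat4 g_WorldViewProjectionMatrix;
uniform sampler2D m_DepthTexture; // depth of the opaque pass
uniform float m_HaloSize;         // sprite size in pixels
attribute vec3 inPosition;

void main() {
    gl_Position = g_WorldViewProjectionMatrix * vec4(inPosition, 1.0);
    // Project the light center into [0,1] screen UVs and [0,1] depth.
    vec2 uv = gl_Position.xy / gl_Position.w * 0.5 + 0.5;
    float lightDepth = gl_Position.z / gl_Position.w * 0.5 + 0.5;
    float sceneDepth = texture2DLod(m_DepthTexture, uv, 0.0).r;
    // Manual depth test with a small bias; size 0 hides the sprite.
    gl_PointSize = (sceneDepth < lightDepth - 0.0005) ? 0.0 : m_HaloSize;
}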

Actually instead of using all those hacks, might be worth looking into volumetric fog techniques to get the real thing.

An alternative might be writing a simple post-processor for it:
feed it all the lights that are unoccluded from the camera, and render your halos into the framebuffer based on distance and fog, ignoring everything else.

Thank you so much for all your answers. With your help I have found a solution that fits my needs.

Unfortunately my hardware does not support vertex texture fetch, which would otherwise have been a very elegant solution. And I don't think my hardware could handle volumetric fog with so many lights, though I have in fact read a few papers on that topic. Might be an idea for a later project.

So here is how I’ve done it, for people with a similar problem:
I looked into soft particles and how they are done in jME. My initial idea was to “wash out” the soft particles just a little more. Then I noticed that the TranslucentBucketFilter renders geometry in a post-process step (I didn't know this was possible, but Empire_Phoenix's answer pointed me in that direction).
From there it was trivial to write my own post-process filter that hands the depth texture to the halo material, which answers the question of how to access the depth from within the halo fragment shader. There is no noticeable performance impact with 70 lights, and I expect the effect to stay fast with more.
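For anyone reproducing this, the occlusion check in the halo fragment shader can look roughly like the sketch below. It assumes the custom filter binds the scene depth as m_DepthTexture and that the vertex shader forwards the projected quad center as a varying; all of these names are illustrative, not the actual code:

// Halo fragment shader sketch: draw the whole halo only when the light
// center itself is unoccluded.
uniform sampler2D m_Texture;      // halo texture
uniform sampler2D m_DepthTexture; // depth texture handed in by the filter
varying vec2 texCoord;
varying vec4 centerCoord;         // quad center in clip space (from the VS)

void main() {
    vec2 uv = centerCoord.xy / centerCoord.w * 0.5 + 0.5;
    float lightDepth = centerCoord.z / centerCoord.w * 0.5 + 0.5;
    float sceneDepth = texture2D(m_DepthTexture, uv).r;
    // Center pixel occluded: drop the entire halo; otherwise draw it whole.
    if (sceneDepth < lightDepth - 0.0005) {
        discard;
    }
    gl_FragColor = texture2D(m_Texture, texCoord);
}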

Guys, you’re awesome, thank you once again!
