Best practices for glowing fog?

@toolforger said: Definitely. Only that the "depth" varying is written to by a vertex shader. It's essentially a per-pixel variable here, not an interpolated value created by the pipeline.

A varying is an interpolated value managed by the pipeline; no varying is created by the pipeline. You set the value at one vertex (in this case depth, which in your example was simply calculated from clip space and had nothing to do with what was already in the frame), you set it at the other vertices (all in the .vert), and then the pipeline interpolates those values for you across the triangle. And it’s a linear interpolation, so normal vectors, etc. technically need to be renormalized or they won’t be unit vectors.

Several people with more experience than you have told you that shaders are not the way to do this. You may think it’s simpler, but it isn’t, as the shader does not have the information it requires and there is no easy way to get it that information. It’s far more complicated than the point mesh approach. The point mesh is slightly more complicated than the billboard one… but is probably faster than either the shader or the billboard approach. Billboards would almost certainly be too slow for a large/complex model. Do the point mesh.
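For what it’s worth, here is a rough sketch of the point-mesh idea in jME3. The class name, the positions array, and the material settings are just placeholders for illustration, not anyone’s actual code:

```java
import com.jme3.asset.AssetManager;
import com.jme3.material.Material;
import com.jme3.material.RenderState;
import com.jme3.math.ColorRGBA;
import com.jme3.math.Vector3f;
import com.jme3.renderer.queue.RenderQueue;
import com.jme3.scene.Geometry;
import com.jme3.scene.Mesh;
import com.jme3.scene.VertexBuffer;
import com.jme3.util.BufferUtils;

public final class GlowPointCloud {

    /** Builds a single Geometry that renders every "on" voxel as one point. */
    public static Geometry build(AssetManager assetManager, Vector3f[] positions) {
        Mesh mesh = new Mesh();
        mesh.setMode(Mesh.Mode.Points);
        mesh.setBuffer(VertexBuffer.Type.Position, 3,
                       BufferUtils.createFloatBuffer(positions));
        mesh.updateBound();

        Material mat = new Material(assetManager, "Common/MatDefs/Misc/Unshaded.j3md");
        mat.setColor("Color", new ColorRGBA(1f, 0.9f, 0.7f, 0.08f)); // faint color per point
        mat.getAdditionalRenderState().setBlendMode(RenderState.BlendMode.Add); // glow accumulates

        Geometry geom = new Geometry("glowPoints", mesh);
        geom.setMaterial(mat);
        geom.setQueueBucket(RenderQueue.Bucket.Transparent); // draw after opaque geometry
        return geom;
    }
}
```

With additive blending, overlapping points simply add up on screen, which is what gives the density/glow impression.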

In the time you’ve spent tearing your hair out over shaders you could have tried the point mesh one - and if that still didn’t work, then it would be worth looking at other (more complex) options.

D’oh - of course the varying needs to be interpolated if written by a vertex shader and read by a fragment one.
Yeah, I found the registry page just yesterday and bookmarked it right away.
Page 103, section 10.23 (“Version”) of the GLSL ES 1.0 reference mentions it’s between 1.10 and 1.20 feature-wise. That would mean GLSL100 is actually more capable than GLSL110.

Now… what’s the recommendation on doing per-voxel depth calculation?

  • One separate depth buffer per backfacing fragment, created from Java, filled by its backfacing fragment shader?
  • One separate depth buffer per voxel, created from Java, filled by three (sometimes more) backfacing fragment shaders?
  • Cull the backfaces, create and fill a vertex buffer with backfacing fragment coordinates from Java, let the frontfacing fragment shader interpolate the matching backface coordinate and compute the depth from there?
Option 3 seems easiest to do with my current knowledge. (Options 1 and 2 require telling each shader what buffer to use, and I don’t know how to manage a multitude of buffers, so that looks scary to me.)

Which leads me to the question: where do I start reading about how to manage buffer objects from JME? Google gave me lots of pointers, but they all led to forum threads dealing with some specialized aspect.

Render all of the “on” voxels with BlendMode.Add or BlendMode.AddAlpha… then the values will be added together. Whether you render that as is or use it as input to some other pass is up to you but at least you get away with one render.
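For concreteness, a minimal sketch of that setup in jME3, assuming the stock Unshaded material (the names here are placeholders):

```java
// Imports from jME3: com.jme3.material.Material, com.jme3.material.RenderState,
// com.jme3.math.ColorRGBA, com.jme3.renderer.queue.RenderQueue, com.jme3.scene.Geometry.
void makeAdditive(Geometry voxelGeom, Material mat) {
    mat.setColor("Color", new ColorRGBA(1f, 1f, 1f, 0.1f)); // alpha acts as per-voxel density
    mat.getAdditionalRenderState().setBlendMode(RenderState.BlendMode.AddAlpha);
    voxelGeom.setMaterial(mat);
    voxelGeom.setQueueBucket(RenderQueue.Bucket.Transparent); // render after opaque geometry
}
```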

Activating transparency isn’t the problem.

The real issue is getting the alpha values right. Here’s a manually-built Gimp image showing the effect I’m after (voxel backgrounds are transparent, orange, grey, and black, respectively):

You wanted to accumulate “voxel density”. I was telling you how to do that without rendering one pass per voxel.

Okay, then “voxel density” is the term.

I don’t think that two-pass rendering would work.
Consider this situation:

For the overlapping screen real estate, the voxels would be fighting over what the Z value should be.

EDIT: I could render each voxel to an off-screen buffer and move it on-screen.
That would be three passes, I guess: one to determine Z, one to render the voxel, one to transfer the result to the screen.

My thinking has been that passing the xyz screen coordinates of the backfacing polygons to the frontfacing fragment shaders, and letting them do the interpolation and Z-depth calculation, would require fewer GPU resources.
Does that make sense?

I no longer have even a remote idea of what final effect you are going for. So I’m not sure I can contribute to this conversation anymore.

Good luck with it, though.

I’d say you should first write the backend for this and then think about the visual display. You won’t manage to bring all the requirements from both sides into an ideal, zero-friction match anyway, and exchangeable visual displays should be planned for in any case if this is supposed to be as substantial, code-wise, as your planning suggests.

I’m planning for a Voxel class, and in fact have started writing it. Just one attribute, the RGBA color. (Alpha as applied to a ray that goes through 1.0 of a voxel; this will be scaled to actual voxel depth.)
Limitations: Can’t cast or receive lights or shadows, will not properly deal with shapes inside the voxel, can’t handle a 3D texture.
Fortunately, my use case (density-based star field glow) shouldn’t be affected by that, except that the field will be a bit blocky.
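Something along these lines, as a sketch (the names may still change):

```java
import com.jme3.math.ColorRGBA;

/** Sketch of the planned Voxel class: one attribute, the RGBA color. */
public class Voxel {

    /** Alpha is the opacity of a ray crossing 1.0 of the voxel;
        it gets scaled by the actual depth the ray travels inside. */
    private final ColorRGBA color;

    public Voxel(ColorRGBA color) {
        this.color = color;
    }

    public ColorRGBA getColor() {
        return color;
    }
}
```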

The backend should be trivial: a function that calculates a color from a world coordinate.
I’m happy with a trivial backend; dealing with shaders for the first time is complicated enough as it is.
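In other words, something like this (the interface and method names are just placeholders):

```java
import com.jme3.math.ColorRGBA;
import com.jme3.math.Vector3f;

/** The whole "backend": a color (density in the alpha channel) per world coordinate. */
public interface VoxelBackend {
    ColorRGBA colorAt(Vector3f worldCoordinate);
}
```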

What’s an “exchangeable visual display” - different GLSL versions, multiple monitors, something else?
GLSL versions shouldn’t be a problem; the fragment shader just needs to multiply alpha values. The biggest challenge I’m facing right now is how to give the fragment shader the information it needs to determine that multiplication factor. The underlying problem is that I don’t have a good grasp of the available techniques yet, but I’m now far enough into the docs that I can start experimenting.

Done: looked at how the color data gets moved from Box to the shader.
In progress: simplifying Unshaded.j3md so that it just applies a color.
Next: extending the interface to the shader so that it knows what factor to multiply the alpha with.
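For that last step, the usual jME route appears to be a material parameter: declare it in the .j3md’s MaterialParameters block, set it from Java, and read it in the fragment shader as a uniform prefixed with m_. A sketch with a made-up parameter name:

```java
// "AlphaFactor" is a hypothetical name; it would need a matching
// "Float AlphaFactor" entry in the .j3md's MaterialParameters block and is
// then visible to the fragment shader as the uniform m_AlphaFactor.
mat.setFloat("AlphaFactor", 0.25f);
```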