Best practices for glowing fog?

If you do not allow fog objects behind fog objects, and you are not afraid of multiple passes, you could:

  1. render the depth of the backside of an arbitrary mesh into a depth buffer
  2. render the frontside of the mesh and look up the depth texture: fogDensity = backsideDepth - frontsideDepth. Should look pretty realistic, but it only works for non-overlapping fog meshes (overlapping in the z-direction of the camera). A sketch of both passes follows right below.
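
Roughly, in GLSL 1.10 the two passes could look like this. This is only a minimal sketch: the uniform names (m_BackDepth, m_Resolution) are made up, the linear depth is assumed to come from the .vert, and face culling is assumed to be flipped per pass on the Java side.

[java]
// Pass 1 .frag (front faces culled, so only backfaces render):
// store the linearized view-space depth in a color texture.
varying float viewDepth;   // computed in the .vert as (-viewPos.z - near) / (far - near)
void main (void) {
    gl_FragColor = vec4(viewDepth, 0.0, 0.0, 1.0);
}

// Pass 2 .frag (back faces culled, so only frontfaces render):
// look up the backface depth written by pass 1 and take the difference.
uniform sampler2D m_BackDepth;   // the texture rendered in pass 1
uniform vec2 m_Resolution;       // viewport size in pixels
varying float viewDepth;
void main (void) {
    float backDepth = texture2D(m_BackDepth, gl_FragCoord.xy / m_Resolution).r;
    float fogDensity = clamp(backDepth - viewDepth, 0.0, 1.0);
    gl_FragColor = vec4(1.0, 1.0, 1.0, fogDensity);
}
[/java]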

Oh, multiple passes would probably not be a problem (just a second pass, who cares…)
I’ve seen that backsideDepth-frontsideDepth just yesterday. It’s nifty, and it even correctly handles objects inside fog.
My problem is that I’m having serious trouble getting at useful shader documentation. I found the OpenGL pages all right, but they don’t really tell me much about 1.00 shaders (or is the first version 1.10?). And I want to stay compatible with hardware as old as possible; that’s going to be much less trouble even on newer hardware :slight_smile:

Fog behind fog actually isn’t a problem; fog intersecting fog would be. Make sure fog volumes don’t intersect, then :slight_smile:

That said, it seems that with proper docs, the shader approach would be MUCH simpler.
I just can’t find the docs :frowning:

(And I feel your pain, pspeed… having to rewrite your own perfectly working software just because some hardware idiot botched it up greatly sucks.)

Actually, the first thing I would try is based on a voxel approach, because it is probably the easiest one to implement.

Imagine a voxel grid of fog over your map, with a distance of 1 wu between voxels. On each voxel where you want fog, place a billboard quad twice the voxel distance in size. Make them all face the camera and apply subtle adaptive blending (maybe with an animated noise?).

The shader itself would be super easy, especially if you don’t want lighting. A grid of size 10x10x10 would generate 4000 vertices, which at larger scales could rule out older hardware…

But besides that, it’s probably the most straightforward implementation.
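
If I were to sketch the billboarding in a .vert: this assumes the model matrix is identity and that gl_Vertex.xy carries the quad corner offsets in [-0.5, 0.5]; the uniform names m_VoxelCenter and m_QuadSize are made up.

[java]
uniform vec3 m_VoxelCenter;   // fog voxel position in world space (made-up name)
uniform float m_QuadSize;     // twice the voxel spacing (made-up name)
void main (void) {
    // The first two rows of the view matrix are the camera's
    // right and up axes in world space (model matrix assumed identity).
    vec3 right = vec3(gl_ModelViewMatrix[0][0], gl_ModelViewMatrix[1][0], gl_ModelViewMatrix[2][0]);
    vec3 up    = vec3(gl_ModelViewMatrix[0][1], gl_ModelViewMatrix[1][1], gl_ModelViewMatrix[2][1]);
    // Expand the quad corner along those axes so it always faces the camera.
    vec3 worldPos = m_VoxelCenter + (right * gl_Vertex.x + up * gl_Vertex.y) * m_QuadSize;
    gl_Position = gl_ModelViewProjectionMatrix * vec4(worldPos, 1.0);
}
[/java]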

Yes, this seems interesting.
One thing I have seen mentioned is using a fragment shader for each voxel face.
Backfacing faces are drawn fully transparently (this just serves to bump up the z buffer).
Frontfacing faces compute fog depth as pixel Z minus current Z buffer value.

This has the added charm that it’s doing the right thing if some object is inside the fog voxel. (Not my use case actually, but it’s a nice windfall advantage and maybe the use case will come.)

I haven’t found out how to access the Z buffer from a fragment shader, unfortunately.
EDIT: I found gl_FragCoord which tells me the Z value, but that’s the Z for the current fragment’s pixel. I’d need the Z of the pixel behind it (i.e. the Z that was valid before the fragment shader started working on the current pixel).
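
(For illustration only, a throwaway .frag that shows exactly what gl_FragCoord.z is: the current fragment’s own window-space depth, not anything read back from the depth buffer.)

[java]
void main (void) {
    // Paint the fragment with its own depth: near geometry dark, far bright.
    // This value is what the fragment WOULD write if it passes the depth
    // test; the depth buffer itself is never read here.
    gl_FragColor = vec4(vec3(gl_FragCoord.z), 1.0);
}
[/java]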

I’m still unsure what exact GLSL version JME’s GLSL100 version identifier maps to: GLSL110? GLSL100 ES? GLSL100 non-ES (I never found the reference for that)?

Use the OpenGL 2 minimum requirement for GLSL, as this is the shader-based pipeline requirement for JME anyway.

@toolforger said: I haven't found out how to access the Z buffer from a fragment shader, unfortunately. EDIT: I found gl_FragCoord which tells me the Z value, but that's the Z for the current fragment's pixel. I'd need the Z of the pixel behind it (i.e. the Z that was valid before the fragment shader started working on the current pixel).

You can’t do it without multiple passes. Generally, in a highly parallel architecture like an OpenGL pipeline, buffers can be written to or read from, but not both at the same time.

In general, the way is:
Pass 1: write the backfaces to a depth buffer.
Pass 2: bind the depth buffer as a texture and compute your density with the frontfaces.

@EmpirePhoenix OpenGL 2.0 defines GLSL 1.10.
I see some JME shaders specifying GLSL100, e.g. core-effects/Common/MatDefs/Water/SimpleWater.j3md.
Is that just saying “I don’t care about GLSL version” (so effectively the same as “GLSL110”), or is something else going on?

For posterity, here are the links to the GLSL 1.10 reference:
Full reference manual: http://www.opengl.org/registry/doc/GLSLangSpec.Full.1.10.59.pdf
Quick Reference Guide (i.e. “cheat sheet”): http://dac.escet.urjc.es/rvmaster/rvmaster/asignaturas/g3d/glsl_quickref.pdf
Should I add these links (and those to newer GLSL references) to https://wiki.jmonkeyengine.org/legacy/doku.php/jme3:advanced:jme3_shaders ?
I could also add a paragraph explaining the GLSLxxx directives. (How about integrating Momoko_Fan’s PDF into that page?)


@zzuegg I'm seeing people record the previous depth in a shader using a varying variable, like this:

[java]varying float depth;    // handed to the .frag, interpolated per fragment
uniform float near_clip;     // frustum near plane distance
uniform float far_clip;      // frustum far plane distance
void main (void) {
	// position of this vertex in view (camera) space
	vec4 viewPos = gl_ModelViewMatrix * gl_Vertex;
	// linear depth, remapped so the near plane maps to 0 and the far plane to 1
	depth = (-viewPos.z - near_clip) / (far_clip - near_clip);
	gl_Position = ftransform();
}
[/java]
(GRRR, somehow I can't get the Java markup to work.)

No texture required, that's neat :-)
(A sampler2D texture is probably only necessary for blur and other multi-pixel operations.)
TODO: Check whether near_clip and far_clip are really the GLSL-provided clipping plane distances.

@all: To record the depth, I'd need to attach that fragment shader code to each and every shader that gets sent to the GPU.
Is there a way to tell JME to do that without touching every single geometry in the scene? (Also, I'd have to somehow specify that on the fog nodes, this fragment should be run after the color-generating fragment.)

This:
depth = (-viewPos.z-near_clip)/(far_clip-near_clip);

Is not the depth of geometry drawn already. It’s the depth of the geometry that is being drawn now… in clip space. Which is kind of odd to do, I think, since gl_Position.w will have that or some variation.
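
(To spell out the gl_Position.w remark: with a standard perspective projection, clip-space w is exactly the linear view-space distance. A sketch of the relationship, as it would appear inside a .vert:)

[java]
// With a standard perspective projection matrix (last row 0, 0, -1, 0),
// the clip-space w component equals -viewPos.z:
vec4 viewPos = gl_ModelViewMatrix * gl_Vertex;
vec4 clip    = gl_ProjectionMatrix * viewPos;
// clip.w == -viewPos.z (up to floating-point noise),
// i.e. the linear distance along the camera's view axis.
[/java]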

You don’t have to believe me but I will say it again: you cannot write to and read from the same buffer in the same shader… especially the .frag shader. The only way to read the previous values of the frame buffer is to have done it in a previous pass and shunted it over to a texture that you can sample.

And that’s still not going to let objects (or particularly parts of objects) pass in front of each other very well either.

You’ve already been given the best/easiest likely solution. Did it not work?

@toolforger said: That said, It seems that with proper docs, the shader approach would be MUCH simpler. I just can't find the docs :-(
You understand that shaders were not invented by the JME team, right? We don't have much doc about shaders, just as we don't have much doc about Java... Now you have this big thing called "the internet", with a couple of good search engines: Google, or even Bing if you like to hurt yourself. I guess you'll find valuable resources with these.

You’re overthinking. Get to work, try, experiment; you’ll get a lot further than by asking questions and not listening to the answers you’re given…


Sorry, this became a wall of text because I didn’t want to spam this thread with a separate post for each answer. I hope marking the sections with underlines will allow everybody to quickly find what interests them.

On the “not listening” tangent… I’m still waiting for an answer on how to interpret “Shader GLSL100” in .j3sn files, so I might not be the only one who’s not listening properly…
Actually I have been listening and googling and reading like mad. It’s just that you guys aren’t the only source I’m turning to, and some of the overhead comes from the need to integrate all these information sources. If you find my lack of faith in your force disturbing, well, I prefer knowledge over faith :slight_smile:

I don’t (yet) understand multipass. So much to read, so little time :slight_smile:
Seems like I need to set multiple passes up from the Java side, it’s not in the shader definitions. I’d have to (somehow) pass the depth buffer written by the first pass to the second pass - is that right?
(I guess I’ll be in trouble with two-pass anyway. I need to be able to have multiple voxels behind each other, so the buffering would have to happen on a per-fragment basis.)

On the provided shader code sample: The shader code is just what I found at https://github.com/imi/IMI-Max-patches-for-Max6/blob/master/Toolbox/_GL/depth_of_field/depth.jxs . It didn’t look shady or untested, so I thought it would be a workable approach.
Starting from that theory, I arrived at the following assumptions. I can’t say which of them are wrong:

  • Shaders run in parallel across pixels, but for each pixel, they run sequentially in back-to-front order. (The order could be different, but things can be arranged that way. I dimly recall that JME3 actually does that.)
  • If a shader sets a “varying” variable for a pixel, that variable stays available for any subsequent shaders that work on the same pixel.
  • Vertex shader runs on the backfacing triangles of the voxel, setting the “depth” variable (a “varying”).
  • Fragment shader runs on the forward-facing triangle pixels, picking up the “depth” that was set by the vertex shader from the backfacing triangle. (pspeed is right: if the vertex shader ran for the forward-facing triangle, the fragment shader would pointlessly get the depth of the forward-facing triangle itself. I plead guilty of being misleading about that.)
Which of these assumptions are wrong? What other assumptions would make the shader code in depth.jxs work?
EDIT: It seems that a varying variable isn’t supposed to survive between fragments. The builtin gl_FogFragCoord might, but this kind of creative abuse tends to bring out driver/firmware bugs, so I guess that’s it for this idea. I still have no idea how this approach could possibly have worked - maybe the programmer’s graphics card happened to forget to clear “varying” variables between fragments?

On the oddness of calculating the depth differently: Z values are non-uniform, so they need to be linearized before they can be used to calculate voxel depth.
I’m not sold on that specific formula yet. I found an article that does some in-depth analysis and arrives at a different one. If anybody is interested, I’ll post the link.
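
For reference, the usual linearization of a hardware depth value looks like this (assuming a standard perspective projection; GLSL provides no frustum-plane built-ins, so near/far must come in as uniforms, and the names here are made up):

[java]
uniform float m_Near;   // frustum near plane distance (made-up name)
uniform float m_Far;    // frustum far plane distance (made-up name)

// Turn a window-space depth (e.g. gl_FragCoord.z, in [0,1])
// back into a linear view-space distance. Paste into a .frag as a helper.
float linearizeDepth(float z) {
    float ndcZ = z * 2.0 - 1.0;   // remap [0,1] to NDC [-1,1]
    return (2.0 * m_Near * m_Far) / (m_Far + m_Near - ndcZ * (m_Far - m_Near));
}
[/java]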

Experimenting - well, it’s a bit difficult to conduct useful experiments. First, I’m only just gaining enough knowledge to even interpret failure modes. Second, I might be testing just my 3D card, not GLSL in general, so I might end up with something entirely unportable (and get lots of support tickets after release). Third, 3D cards have their own set of quirks, and if something fails, I wouldn’t know whether I’m staring at my own bugs or at a 3D card bug.
That said, I think I just acquired (barely) enough knowledge to gain insight from experiments, so I got a (very bland) test scene set up the day before yesterday, tonight it will be material definitions and shader code (and probably some hair-pulling).

Most of the assumptions are wrong, actually.

A shader is a single .vert and a .frag. The .vert runs per vertex, and the rasterizer turns the resulting triangles into rows of fragments (scan-line conversion). Varyings defined in the .vert interpolate across those fragments. If a fragment passes the depth-testing setup, then the .frag is called for it. (It may also be called anyway and rejected later… we can’t know.)

varyings are not for previous passes. They are for communication between the .vert and .frag.
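
(To make that concrete, a toy .vert/.frag pair from the same shader; the varying goes one way, within a single draw, and arrives interpolated:)

[java]
// toy .vert: write the varying once per vertex
varying vec3 vColor;
void main (void) {
    vColor = gl_Color.rgb;   // a per-vertex value...
    gl_Position = ftransform();
}

// toy .frag: read it, already interpolated across the triangle
varying vec3 vColor;
void main (void) {
    gl_FragColor = vec4(vColor, 1.0);   // ...arrives per fragment
}
[/java]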

The reason you cannot get the proper depth in the .frag has nothing to do with front facing or back facing. The depth buffer is write-only in the fragment shader. You can only write to it. You cannot read from it…no matter how hard you try you will never be able to read from it. It is write only. The varying “trick” you saw is not a trick. It is also not doing what you wanted. It is varying across the triangle that is currently being drawn. A very useful thing.

The varying is only alive during that shader. It doesn’t hang around for other shaders.

Multipass works in a few different ways depending on what you need. If you need to read the depth or color information in the second pass then the first pass writes this information to textures instead of directly to the frame buffer. Then the next pass has access to this information as a texture and can do with it what it wants.

For further reading, this was the second result in a Google search for “reading depth from a fragment shader”: http://gamedev.stackexchange.com/questions/10964/why-do-pixel-shaders-not-let-us-read-directly-from-the-framebuffer-or-the-depth

@toolforger said: Experimenting - well, it's a bit difficult to conduct useful experiments. First, I'm only just gaining enough knowledge to even interpret failure modes. Second, I might be testing just my 3D card, not GLSL in general, so I might end up with something entirely unportable (and get lots of support tickets after release). Third, 3D cards have their own set of quirks, and if something fails, I wouldn't know whether I'm staring at my own bugs or at a 3D card bug. That said, I think I just acquired (barely) enough knowledge to gain insight from experiments, so I got a (very bland) test scene set up the day before yesterday, tonight it will be material definitions and shader code (and probably some hair-pulling).

The best way to experiment with shaders is to take some of the existing ones and modify them in some way that seems logical… verify the results, try again, etc…

For example, fork Unshaded.j3md and its .vert and .frag files. Modify the .frag to always set red to 1.0 in the final color. Verify that the shader is running and tinting all of your geometry red. Then modify something else that seems sensible.
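
(Something like this toy .frag, not the real Unshaded.frag; m_ColorMap and texCoord are assumed names:)

[java]
uniform sampler2D m_ColorMap;   // assumed texture parameter name
varying vec2 texCoord;          // assumed to come from the .vert
void main (void) {
    vec4 color = texture2D(m_ColorMap, texCoord);
    color.r = 1.0;              // the deliberate, visible change
    gl_FragColor = color;
}
[/java]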

@toolforger said: On the "not listening" tangent... I'm still waiting for an answer how to interpret "Shader GLSL100" in .j3sn files, so I might not be the only one who's not listening properly... Actually I have been listening and googling and reading like mad. It's just that you guys aren't the only source I'm turning to, and some of the overhead comes from the need to integrate all these information sources. If you find my lack of faith in your force disturbing, well, I prefer knowledge over faith :-)
Mhhh, where to start. First: it's in the doc on the wiki. I get that you often have a TL;DR syndrome, but unfortunately I can't really help you with that. Try searching for "version" in this one doc: https://wiki.jmonkeyengine.org/legacy/doku.php/jme3:advanced:jme3_shadernodes. I still recommend reading it completely, but since you seem to lack shader knowledge I recommend reading up on that first. Second: you're stumbling on things that have no importance, or no relevance yet. Asking this question is like saying "I want to build a house, how does the microwave oven work?". You'll get there eventually, but you need to learn to start from the beginning, really...
@toolforger said: I don't (yet) understand multipass . So much to read, to little time :-) Seems like I need to set multiple passes up from the Java side, it's not in the shader definitions. I'd have to (somehow) pass the depth buffer written by the first pass to the second pass - is that right? (I guess I'll be in trouble with two-pass anyway. I need to be able to have multiple voxels behind each other, so the buffering would have to happen on a per-fragment basis.)
So you don't understand what you're told, and instead of trying to learn about it you just dismiss the solution... Dismissing pspeed's advice is probably the most unwise thing you can do... Some people would pay for his advice...
@toolforger said: On the provided shader code sample: The shader code is just what I found at https://github.com/imi/IMI-Max-patches-for-Max6/blob/master/Toolbox/_GL/depth_of_field/depth.jxs . Didn't look shady or like it would not have been properly tested, so I thought it would be a workable approach. Starting on that theory, I arrived following assumptions. Can't say which of them are wrong: - Shaders run in parallel across pixels, but for each pixel, they run sequentially in back-to-front order. (The order could be different, but things can be arranged that way. I dimly recall that JME3 actually does that.) - If a shader sets a "varying" variable for a pixel, that variable stays available for any subsequent shaders that work on the same pixel. - Vertex shader runs on the backfacing triangles of the voxel, setting the "depth" variable (a "varying"). - Fragment shader runs on the forward-facing triangle pixels, picking up the "depth" that was set by the vertex shader from the backfacing triangle. (pspeed is right: if the vertex shader ran for the forward-facing triangle, the fragment shader would pointlessly get the depth of the forward-facing triangle itself. I plead guilty of being misleading about that.) Which of these assumptions are wrong? What other assumptions would make the shader code in depth.jxs work? EDIT: It seems that a varying variable isn't supposed to survive to between fragments. The builtin gl_FogFragCoord might, but this kind of creative abuse tends to bring out driver/firmware bugs, so I guess that's it for this idea. I still have no idea why this approach could have possibly worked - maybe the graphics card of the programmer happened to forget to clear "varying" variables between fragments?
Wrong, wrong, wrong, wrong; there is nothing right about this at all, really. READ UP ON A BASIC SHADER WORKFLOW FIRST. Again, you're starting from the end!!!!
@toolforger said: Experimenting - well, it's a bit difficult to conduct useful experiments. First, I'm only just gaining enough knowledge to even interpret failure modes. Second, I might be testing just my 3D card, not GLSL in general, so I might end up with something entirely unportable (and get lots of support tickets after release). Third, 3D cards have their own set of quirks, and if something fails, I wouldn't know whether I'm staring at my own bugs or at a 3D card bug.
That's experimenting... how do you think I learned shaders?
@toolforger said: That said, I think I just acquired (barely) enough knowledge to gain insight from experiments, so I got a (very bland) test scene set up the day before yesterday, tonight it will be material definitions and shader code (and probably some hair-pulling).
Good, go ahead, it'll make you balder, but trading hair against knowledge is a fair deal.
@pspeed said: The reason you cannot get the proper depth in the .frag has nothing to do with front facing or back facing. The depth buffer is write-only in the fragment shader. You can only write to it.

That was my first approach but never discussed here.
“depth” is a “varying”, not the depth buffer.

@pspeed said: The varying "trick" you saw is not a trick. It is also not doing what you wanted.

That would mean that the whole shader code was bogus, because the shaders communicate via a “varying”, and the fragment shader needs to process depth values generated by a different fragment’s vertex shader, else the depth would always compute as zero.

@pspeed said: It is varying across the triangle that is currently being drawn. A very useful thing.

Definitely.
Only that the “depth” varying is written to by a vertex shader. It’s essentially a per-pixel variable here, not an interpolated value created by the pipeline.

@pspeed said: varyings are not for previous passes. They are for communication between the .vert and .frag.

Yes, the second sentence is 1.10 spec language.
It wasn’t 100% clear whether that was meant to be interpreted exclusively. 4.3 language is 100% clear on the topic though, so I guess that buries the idea.
It’s a pity actually, fragment-to-fragment communication would have been quite cool. But… well, I’m not going to redefine GLSL after all :wink:

@pspeed said: For further reading, this was the second result in a google search for "reading depth from a fragment shader": http://gamedev.stackexchange.com/questions/10964/why-do-pixel-shaders-not-let-us-read-directly-from-the-framebuffer-or-the-depth

Got it. Didn’t google for that because the approach didn’t use frame or depth buffer. It was interesting anyway because it confirmed the parallelization granularity.

@pspeed said: Multipass works in a few different ways depending on what you need. If you need to read the depth or color information in the second pass then the first pass writes this information to textures instead of directly to the frame buffer. Then the next pass has access to this information as a texture and can do with it what it wants.

Yep, I’m currently investigating how to run multipass shaders in JME.
Searching the forum already turned up enough links, I’ll just have to read them. sigh

There’s one problem with a two-pass approach.
Voxels may be behind each other. And they’re semitransparent, so I can’t simply occlude the one that’s farther away.
So I can’t use a screen-sized depth buffer. I’ll need to create a buffer per voxel - which would mean fragment shader cooperation, which rules that out.
So… one separate buffer per backfacing fragment. And phase-2 fragment shader would iterate through all the fragment buffers and select the one that (a) covers the current coordinate (oh, so the buffers need to have a coordinate associated) and (b) does not return the default value.
Or… create a buffer per voxel from the Java side (just create it, don’t send any data). Tell the fragment shaders which buffer they should use. I’ll have to read up on every step of that process, it’s a bit outside of what people normally do.
And a single-pass alternative… cull the backfaces but collect vertex coordinates, transform them to screen coordinates, send them all as a VBO, and tell the frontfacing shaders at what offsets in the VBO they’ll find the coordinates that they need. Let the shader interpret each backface as an infinite plane, compute the depth, then use the minimum depth of all backfaces (i.e. use the min() function instead of branching, I hear shaders hate branching).
Finally… the approach that I’m not thinking about but that everybody else knows about :wink:

Which route sounds best?

Again, if you want simplicity, try the billboarded quads. You don’t need any new shader; you can do it with Unshaded. You don’t need multiple passes, custom meshes, VBOs, VAOs, or anything that does not come with standard JME. If the solution does not look good, OK, go for a different method. But I would always start with a solution which can be implemented in very few lines of code. In the worst case it really does not look good and you have lost ~20 minutes.


@nehon been there, read it, will re-read it some more times.
The status of GLSL 1.00 is… strange.
The Wikipedia page on the OpenGL Shading Language (http://en.wikipedia.org/wiki/GLSL) doesn’t list GLSL 1.00 at all.
Google searches turn up GLSL ES 1.00, which I gather is actually WebGL (I dimly recall it’s saying it’s somewhere between GLSL 1.10 and 1.20).
The Khronos OpenGL Registry (http://www.opengl.org/registry/) doesn’t list GLSL 1.00 either.
Does GLSL 1.00 even exist?

I guess my problem isn’t tl;dr, it’s that I’m trying to cover all bases so that my code is solid enough to survive in the wild.
I’ve seen too much 3D software suffering from graphics cards woes. JME seems to be exceptionally stable, and I don’t want to lose that advantage by being careless with shaders.

Good, go ahead, it’ll make you balder, but trading hair against knowledge is a fair deal.

:smiley: :smiley: :smiley:
Actually I hope to achieve true baldness in time. It’s sexy :smiley:

@toolforger said: @nehon been there, read it, will re-read it some more times. The status of GLSL 1.00 is... strange. http://en.wikipedia.org/wiki/GLSL doesn't list GLSL 1.00 at all. Google searches turn up GLSL ES 1.00, which I gather is actually WebGL (I dimly recall it's saying it's somewhere between GLSL 1.10 and 1.20). http://www.opengl.org/registry/ doesn't list GLSL 1.00 either. Does GLSL 1.00 even exist?
GLSL 1.00 is the OpenGL ES GLSL version, yes. OpenGL ES is a lighter OpenGL designed for mobile and web platforms. This version marker is here to say "my shader works from this version onward". GLSL 1.1 was the GLSL version of OpenGL 2, and since OpenGL 3.3 the Khronos Group has made OpenGL and GLSL versions match, so there is no more confusion. But... they started over with GLSL ES, because GLSL ES v1.0 matches OpenGL ES 2.0...
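
For anyone skimming, these are the #version directives that mapping corresponds to (one per shader file; they are listed together here only for comparison):

[java]
#version 110   // desktop GLSL 1.10 -> shipped with OpenGL 2.0
#version 120   // desktop GLSL 1.20 -> shipped with OpenGL 2.1
#version 330   // from OpenGL 3.3 on, the GL and GLSL numbers match
// GLSL ES 1.00 (OpenGL ES 2.0) instead declares: #version 100
[/java]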

If you want all the specs of both OpenGL and GLSL since the beginning, check the Khronos OpenGL Registry: http://www.opengl.org/registry/
I use this a lot
Also, here are the GLSL ES 1.0 specs: http://www.khronos.org/registry/gles/specs/2.0/GLSL_ES_Specification_1.0.17.pdf
and here is the reference page of OpenGL ES 2.0 (the one actually used by JME on Android):
OpenGL ES 2.0 Reference Pages

A professional isn’t somebody who knows everything in advance; it’s just somebody who doesn’t make the same mistake twice :wink: