Sorry, this became a wall of text because I didn’t want to spam this thread with a separate post for each answer. I hope marking the sections with underlines will allow everybody to quickly find what interests them.
On the “not listening” tangent… I’m still waiting for an answer on how to interpret “Shader GLSL100” in .j3sn files, so I might not be the only one who’s not listening properly…
Actually, I have been listening and googling and reading like mad. It’s just that you guys aren’t my only source, and some of the overhead comes from the need to integrate all these information sources. If you find my lack of faith in your force disturbing, well, I prefer knowledge over faith.
I don’t (yet) understand multipass. So much to read, too little time.
Seems like I need to set multiple passes up from the Java side, it’s not in the shader definitions. I’d have to (somehow) pass the depth buffer written by the first pass to the second pass - is that right?
(I guess I’ll be in trouble with two-pass anyway. I need to be able to have multiple voxels behind each other, so the buffering would have to happen on a per-fragment basis.)
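If I understand the two-pass idea correctly, the second pass’s fragment shader would look something like the sketch below. To be clear about what’s made up: m_FirstPassDepth is a placeholder name for a depth texture I’d render in the first pass and bind as a material parameter from the Java side (jME3 hands material parameters to the shader as m_-prefixed uniforms), and I’m assuming g_Resolution is available as a world parameter:

```glsl
// Second pass, fragment shader (sketch, GLSL 1.00 style).
// m_FirstPassDepth: hypothetical texture holding the depth written by pass 1.
uniform sampler2D m_FirstPassDepth;
// g_Resolution: viewport size in pixels (a jME3 world parameter, if I read the docs right).
uniform vec2 g_Resolution;

void main() {
    // Look up the first pass's depth at this fragment's screen position.
    vec2 uv = gl_FragCoord.xy / g_Resolution;
    float backDepth = texture2D(m_FirstPassDepth, uv).r;

    // This fragment's own depth comes from gl_FragCoord.z; the difference
    // between the two would be the (non-linear) voxel thickness.
    float thickness = backDepth - gl_FragCoord.z;
    gl_FragColor = vec4(vec3(thickness), 1.0);
}
```

As noted above, this only buffers one depth per pixel, so voxels behind voxels would still be a problem.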
On the provided shader code sample: The shader code is just what I found at https://github.com/imi/IMI-Max-patches-for-Max6/blob/master/Toolbox/_GL/depth_of_field/depth.jxs . It didn’t look shady or untested, so I thought it would be a workable approach.
Starting from that theory, I arrived at the following assumptions. I can’t say which of them are wrong:
- Shaders run in parallel across pixels, but for each pixel, they run sequentially in back-to-front order. (The order could be different, but things can be arranged that way. I dimly recall that JME3 actually does that.)
- If a shader sets a “varying” variable for a pixel, that variable stays available for any subsequent shaders that work on the same pixel.
- Vertex shader runs on the backfacing triangles of the voxel, setting the “depth” variable (a “varying”).
- Fragment shader runs on the forward-facing triangle pixels, picking up the “depth” that was set by the vertex shader from the backfacing triangle. (pspeed is right: if the vertex shader ran for the forward-facing triangle, the fragment shader would pointlessly get the depth of the forward-facing triangle itself. I plead guilty to being misleading about that.)
Which of these assumptions are wrong? What other assumptions would make the shader code in depth.jxs work?
EDIT: It seems that a varying variable isn’t supposed to survive between fragments. The builtin gl_FogFragCoord might, but that kind of creative abuse tends to bring out driver/firmware bugs, so I guess that’s it for this idea. I still have no idea how this approach could possibly have worked - maybe the programmer’s graphics card happened to forget to clear “varying” variables between fragments?
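For the record, here’s my current understanding of how a varying actually behaves: each fragment only ever sees the value interpolated from the vertices of the triangle it belongs to, so nothing carries over from a back-facing triangle drawn earlier. A minimal sketch (variable names are mine):

```glsl
// Vertex shader (sketch)
varying float vDepth; // interpolated across the CURRENT triangle only

void main() {
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    vDepth = gl_Position.z / gl_Position.w; // written once per vertex
}
```

```glsl
// Fragment shader (sketch)
varying float vDepth;

void main() {
    // vDepth here was interpolated from the vertices of THIS fragment's
    // triangle - never left over from some previously rasterized back face.
    gl_FragColor = vec4(vec3(vDepth * 0.5 + 0.5), 1.0);
}
```

Which, if correct, is exactly why assumption two above can’t hold.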
On the oddness of calculating the depth differently: Z values are non-uniform, so they need to be linearized before they can be used to calculate voxel depth.
I’m not sold on that specific formula yet. I found an article that does some in-depth analysis and arrives at a different one. If anybody is interested, I’ll post the link.
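For reference, this is the linearization I keep seeing cited (assuming a standard perspective projection; the m_FrustumNear/m_FrustumFar uniform names are just placeholders I’d set from the Java side):

```glsl
// Convert a depth-buffer value in [0,1] back to linear eye-space distance.
// Sketch only - assumes a standard perspective projection matrix.
uniform float m_FrustumNear; // hypothetical material parameter
uniform float m_FrustumFar;  // hypothetical material parameter

float linearizeDepth(float z) {
    float zNdc = 2.0 * z - 1.0; // depth buffer [0,1] -> NDC [-1,1]
    return (2.0 * m_FrustumNear * m_FrustumFar)
         / (m_FrustumFar + m_FrustumNear - zNdc * (m_FrustumFar - m_FrustumNear));
}
```

The article I mentioned derives a variant of this, which is part of why I’m not sold on any one formula yet.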
Experimenting - well, it’s a bit difficult to conduct useful experiments. First, I’m only just gaining enough knowledge to even interpret failure modes. Second, I might be testing just my 3D card, not GLSL in general, so I might end up with something entirely unportable (and get lots of support tickets after release). Third, 3D cards have their own set of quirks, and if something fails, I wouldn’t know whether I’m staring at my own bugs or at a 3D card bug.
That said, I think I just acquired (barely) enough knowledge to gain insight from experiments. I got a (very bland) test scene set up the day before yesterday; tonight it will be material definitions and shader code (and probably some hair-pulling).