Like everyone, I am writing a bloxel game. Depth fog, to obscure the world loading in, can significantly improve the aesthetics of my game, so I attempted to add it. jME’s built-in fog filter is insufficient because it creates fog that starts fading in at the camera and never becomes fully opaque. I want fog that starts fading in near the edge of the view distance and becomes completely opaque at the view distance. After much digging through the tiny amount of documentation that exists on jME’s filters, along with the filters in the GitHub repo, I managed to create a functional filter that renders fog mostly as I want:
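The ramp described above (clear until near the view distance, fully opaque at it) can be sketched in plain Java; fogStart and viewDistance are illustrative names, not jME API:

```java
// Sketch of the desired fog ramp: no fog before fogStart, a linear
// fade between fogStart and viewDistance, fully opaque beyond that.
// (fogStart/viewDistance are illustrative names, not jME API.)
public class FogRamp {
    static double fogFactor(double distance, double fogStart, double viewDistance) {
        double t = (distance - fogStart) / (viewDistance - fogStart);
        return Math.max(0.0, Math.min(1.0, t)); // clamp to [0, 1]
    }

    public static void main(String[] args) {
        System.out.println(fogFactor(50.0, 120.0, 160.0));  // well inside: 0.0
        System.out.println(fogFactor(140.0, 120.0, 160.0)); // halfway: 0.5
        System.out.println(fogFactor(200.0, 120.0, 160.0)); // beyond: 1.0
    }
}
```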
With this filter I get a very strange artifact. The fog seems to lie in a plane parallel to the camera’s plane instead of in a sphere around the camera. This artifact is very noticeable when the camera moves left or right:
The presence of this artifact suggests that the linearDepth variable my shader finds is not actually the distance between the camera and the fragment in world space, but rather something like the depth along the camera’s view axis. As such, it seems that I need to somehow get the true world-space coordinates of the camera and the fragment and apply the distance formula, but I am at a loss for how to do this. I found some mutterings about how to get the world-space coordinate of a fragment, but I found nothing regarding the camera (I prefer to use something built in to jME to keep my code clean, without passing the camera position all over, as this is implemented via a FilterPostProcessor filter).
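For reference, if linearDepth comes from linearizing a standard perspective depth-buffer sample, the usual reconstruction (shown here in plain Java for illustration) yields depth along the view axis rather than Euclidean distance, which matches the artifact:

```java
// Converts a [0,1] perspective depth-buffer sample back to linear
// view-space depth for a standard OpenGL projection. Note this is the
// distance along the camera's view axis, not the true camera-to-fragment
// distance -- hence fog that forms a "plane" parallel to the camera.
public class DepthUtil {
    static double linearizeDepth(double depthSample, double near, double far) {
        double ndcZ = depthSample * 2.0 - 1.0; // back to NDC [-1, 1]
        return (2.0 * near * far) / (far + near - ndcZ * (far - near));
    }

    public static void main(String[] args) {
        System.out.println(linearizeDepth(0.0, 1.0, 100.0)); // near plane: 1.0
        System.out.println(linearizeDepth(1.0, 1.0, 100.0)); // far plane: 100.0
    }
}
```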
Is there some easier way to fix this issue than to find the world space coordinates? If not, how can I find the world space coordinates of the camera and fragment?
You will probably be sooooo much happier if you just add fog to the real shaders instead of trying to do post-proc fog. But that’s just my advice.
A few lines in the .vert to add a camera local Z varying and then a few lines in the .frag to calculate fog however you like… using whatever fog color you like. Probably 5-6 lines of code in total after forking the shaders (if you haven’t already).
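For illustration, the per-fragment blend those few lines perform is just a linear interpolation (GLSL’s mix()); here is the same math in plain Java:

```java
// What the forked .frag would do per fragment: blend the lit color
// toward the fog color by the fog factor (0 = no fog, 1 = all fog).
// Equivalent to GLSL's mix(sceneColor, fogColor, fog).
public class FogBlend {
    static double[] applyFog(double[] sceneColor, double[] fogColor, double fog) {
        double[] out = new double[3];
        for (int i = 0; i < 3; i++) {
            out[i] = sceneColor[i] * (1.0 - fog) + fogColor[i] * fog;
        }
        return out;
    }

    public static void main(String[] args) {
        double[] c = applyFog(new double[]{1.0, 0.0, 0.0},  // red fragment
                              new double[]{0.5, 0.5, 0.5},  // grey fog
                              0.5);                         // half fogged
        System.out.println(c[0] + " " + c[1] + " " + c[2]); // 0.75 0.25 0.25
    }
}
```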
This has the added benefit of letting you still have a real sky with fog. Also being able to have some materials (like lights) that don’t use the fog… and then using a black fog color at night to give cool night time distance effects.
It’s a real shame that Lighting.j3md was never converted to shader nodes because this would be super simple then.
Edit: the other cool thing is that you can base fog on camera distance instead of just depth… that way you won’t see farther near the edges of the screen. (Ever turn your head sideways to see farther in Minecraft? Yeah, then you know what I’m talking about. :))
I meant to address the technique that you’re describing in my question, but I forgot. I want to avoid forking materials as much as possible, as the jME materials are good for 95% of what is needed, and I am using jME instead of LWJGL precisely to avoid mucking around with GLSL. For this reason, a filter that can apply to all materials, period, seems like the best option, as it is very much set-it-and-forget-it when programming (i.e. I don’t need to rewrite the fog each time I use a new material).
Edit: the other cool thing is that you can base fog on camera distance instead of just depth… that way you won’t see farther near the edges of the screen. (Ever turn your head sideways to see farther in Minecraft? Yeah, then you know what I’m talking about. :))
That same effect is what is happening in my game (as is visible in the video), and I want to write a filter that fixes it somehow.
As I was trying to figure out how the various variables are passed into shaders in jME, I saw m_FrustumNearFar in a few places, but it wasn’t clear where it came from. It seems strange to set it manually via material.setVector2, but I can’t think of an alternative. I was also under the impression that m_FrustumNearFar was equal to g_FrustumNearFar, but this now appears not to be the case. Can you clear any of this up?
In jme, shader uniforms prefixed with g_ are set automatically by jme and uniforms prefixed by m_ are set by the user.
If you don’t want to be confused you can name it m_MyFrustumNearFar, etc.
In that case, what is the purpose of using an m_FrustumNearFar if it will contain the same information as g_FrustumNearFar? When I was trying to figure out filters, I replaced g_FrustumNearFar with hard-coded values equal to my frustum’s near and far and saw no change, so I concluded that they hold the same values.
It will not contain the same data, since g_FrustumNearFar (most probably) contains the frustum near/far of the post-processor’s camera.
You need the values from your scene’s camera.
Your 3D scene has a perspective camera, so all g_ variables that depend on the camera are set according to it.
Now, assuming this is the issue: a post-processor filter renders a full-screen quad. How does it do that? With a new orthographic camera. Thus shaders that implement post-processing effects get their g_ variables filled from the post-processor’s camera.
Any approach that doesn’t use trig functions is going to be giving you parallel depth. Basically, it’s the dot product with the view vector. Anything involving near/far planes is only changing the “units” basically… from view space to world space.
It’s not going to fix the parallel problem. For that you have to reverse engineer the hypotenuse of the triangle formed by x, y, and depth.
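One way to read “reverse engineer the hypotenuse”: rebuild the view-space position from the fragment’s NDC x,y and the linear depth, then take its length. A sketch assuming a symmetric frustum, with the half-FOV tangents as inputs (illustrative names, not jME API):

```java
// Reconstructs the true (radial) camera-to-fragment distance from the
// fragment's NDC coordinates, its linear view-space depth, and the
// tangents of the camera's half field-of-view. At the screen center
// this equals the depth; toward the edges it grows, which is exactly
// the correction that removes the "parallel plane" fog artifact.
public class FogDistance {
    static double radialDistance(double ndcX, double ndcY, double linearDepth,
                                 double tanHalfFovX, double tanHalfFovY) {
        double vx = ndcX * tanHalfFovX * linearDepth;
        double vy = ndcY * tanHalfFovY * linearDepth;
        return Math.sqrt(vx * vx + vy * vy + linearDepth * linearDepth);
    }

    public static void main(String[] args) {
        System.out.println(radialDistance(0.0, 0.0, 10.0, 1.0, 1.0)); // center: 10.0
        System.out.println(radialDistance(1.0, 0.0, 10.0, 1.0, 1.0)); // edge: larger
    }
}
```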