Best practices for glowing fog?

Hi all,

I need some advice on how to deal with space-filling glowing… hm… fog. (It’s supposed to approximate the visual effect of a volume of space with a given number and brightness of stars in it. Can’t use a skybox because point of view can change enough to show parallax. I guess it’s not limited to starfield glow anyway.)

From the “Light and Shadow” tutorial, I gather that I could use cubes with an Unshaded.j3md-based Material (I haven’t checked yet how to set these up; I just found https://wiki.jmonkeyengine.org/legacy/doku.php/jme3:beginner:hello_material#transparent_unshaded_texture, is that the right recipe to follow?).
Set the transparency per cube so that a thin veil looks different from a deep fog.
Keep the number of cubes low enough that the GPU can handle it (i.e. use progressively larger cubes at larger distances from the point of view).
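Something like this minimal sketch is what I have in mind (assuming the tutorial’s Unshaded.j3md recipe applies here; the color and alpha values are placeholders):

```java
// Inside simpleInitApp(); a minimal sketch of one fog cube.
// The color/alpha values are placeholders.
Box box = new Box(1f, 1f, 1f);
Geometry fogCube = new Geometry("fogCube", box);

Material mat = new Material(assetManager, "Common/MatDefs/Misc/Unshaded.j3md");
mat.setColor("Color", new ColorRGBA(1f, 1f, 1f, 0.1f)); // mostly transparent
mat.getAdditionalRenderState().setBlendMode(RenderState.BlendMode.Alpha);
fogCube.setMaterial(mat);

// transparent geometry needs the Transparent bucket to sort correctly
fogCube.setQueueBucket(RenderQueue.Bucket.Transparent);
rootNode.attachChild(fogCube);
```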

Sounds viable?
If no, why? What are the alternatives?

I just read the volumetric cloud rendering chapter of the new “Game Engine Gems 2” book (read it online, as it’s one of the few on Amazon with “look inside”). Based on that technique, you could use voxels and point sprites… basically a point cloud with point sprites, semi-transparent with blend mode AlphaAdditive.

Found the book on Amazon, but no “look inside” link :frowning:
Google Books does show lots of pages about water vapor density calculation (not useful for me) but snips the first salient pages about turning density to voxels to quads, sigh…

The chapter discusses using billboards instead of volumes. Which led me to the realization that the side faces of a cube will shine through the front face of the neighbouring cube, creating an artifact with too much glow.
I’ll have to think about how to vary the intensity of the glow at each surface point so that it reflects the cube volume behind it (according to the raycast from viewer through cube).
Which means I’ll need a specific shader for that, and I have zero experience with shaders. shit
Or is one of the delivered shaders suitable? I’m seeing quite a lot of shaders in Common/MatDefs, but I have no idea what each of them does, and they don’t come with comments. I do see a Fog.frag (plus Fog15.frag) and a Fog.j3md, so they might be the right thing to use already… hm, nah, it seems to simply add a general distance-based fog to the scene (just guessing).

I also found http://en.wikipedia.org/wiki/Volume_rendering. Seems to cover exactly what I want, except it doesn’t tell me what the right approach to use in JME would be :wink:

This link on amazon has the “Look inside” link for me:

I don’t know what country you are trying from… maybe it’s different.

It’s possible that the normal particle shader would work for the technique. I’m not sure what effect you are trying to achieve exactly so I can’t say.

I have voxel densities and want to create a translucent image of that. This would look similar to this image:

(Note for detail lovers: It’s a “maximum intensity projection”, which means that bright features completely eclipse less bright ones. That’s the one detail I’ll want to do differently, 'cause I’ll want the hints of thin wisps in front of or behind bright areas.)

Got the “look inside” link, thanks. Turns out I had to turn on Javascript.
And… page 30 “is not available at this time”. Seems like they are bent on frustrating me :wink:
Well, I think Wikipedia already gave me the proper keywords; now the question is: how much ready-made support does JME offer for voxel rendering, and how much would I have to do myself?

There are many ways to render voxels. An additive point cloud is easily done with a custom mesh of points. Point clouds with real Z and transparency get tricky, though.

That’s why I asked what you are trying to do. I think I kind of get it, but as you say, even your demo image isn’t really what you want, and the differences matter a lot for how to go about it.

The difference is just a minor detail, the salient point is that it’s showing transparent 3D voxel data :slight_smile:
For me, the voxels will be generated on the fly so I can easily do LOD.

The key point: assuming each voxel is represented by a Box with a transparent Material, that Material would need to be more or less transparent depending on how much box volume is behind it (as seen from the camera).

Approaches I’m seeing right now:

  1. Do it with a fragment shader. I just did a single “diagonal” read through some reference material, that’s probably not enough. It would make it easy to compute the voxel depth “behind” a surface point on a per-pixel basis, but it seems that accessing the back face of a Box from the front-facing fragment is a big no-no in GLSL. Maybe there’s a way around that, but my almost nonexistent expertise definitely isn’t enough for that kind of stuff.
  2. Use a Box, use a Material that’s lighted and getting more transparent in the corners. Tricky to get right, because the transparency ramp endpoints would depend on viewing angle and where the backfacing corners are. Wouldn’t make use of shaders but should be reasonably efficient. Downside: Hard to get right, so a lot of work testing all corner cases.
  3. Use particles. I haven’t mentioned this yet because it’s an ugly hack, but it should work: generate particles that don’t move (a rough sketch follows this list). It seems they must expire eventually, but that would just be a “twinkle, twinkle, little cloud” effect, and I could live with that.
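Here’s the rough sketch for (3), as far as I understand the particle API (texture path and lifetimes are placeholders; I haven’t verified the details):

```java
// Approach 3 as a sketch: particles that don't move.
// Texture path and lifetimes are placeholders.
ParticleEmitter emitter = new ParticleEmitter("voxelGlow", ParticleMesh.Type.Point, 32);
Material mat = new Material(assetManager, "Common/MatDefs/Misc/Particle.j3md");
mat.setTexture("Texture", assetManager.loadTexture("Effects/Explosion/flame.png"));
emitter.setMaterial(mat);

// keep the particles in place: no velocity, no variation, no gravity
emitter.getParticleInfluencer().setInitialVelocity(Vector3f.ZERO);
emitter.getParticleInfluencer().setVelocityVariation(0f);
emitter.setGravity(0, 0, 0);

// long uniform lifetime; expiry causes the "twinkle" mentioned above
emitter.setLowLife(60f);
emitter.setHighLife(60f);
rootNode.attachChild(emitter);
```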

I’m currently considering starting with particles, and once I find the time and expertise, I might switch to (2).
Sound reasonable, or am I overlooking an easy way to do what I want?

You don’t have to use the particle generator to make particles. You just have to make a custom mesh and set the type to Points. The mesh will need some other attributes… you can look at the particle emitter code to see which. And the material will need the quadratic value set (I found this by trial and error).

It’s by far the easiest approach to rendering voxels, and with BlendMode.AlphaAdditive, thicker sections will be brighter and so on, since each layer adds to the previous.
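The mesh part is something like this sketch (the float arrays stand in for your own voxel data):

```java
// One point per voxel; these arrays are placeholders for real voxel data.
float[] positions = { 0, 0, 0,   1, 0, 0,   0, 1, 0 };                   // xyz per voxel
float[] colors    = { 1, 1, 1, 0.5f,   1, 1, 1, 0.5f,   1, 1, 1, 0.5f }; // rgba per voxel
float[] sizes     = { 1, 1, 1 };                                         // one size per voxel

Mesh mesh = new Mesh();
mesh.setMode(Mesh.Mode.Points);
mesh.setBuffer(VertexBuffer.Type.Position, 3, BufferUtils.createFloatBuffer(positions));
mesh.setBuffer(VertexBuffer.Type.Color, 4, BufferUtils.createFloatBuffer(colors));
mesh.setBuffer(VertexBuffer.Type.Size, 1, BufferUtils.createFloatBuffer(sizes));
mesh.updateBound();
```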

Thanks, understood.
What’s the “quadratic value set”?

The particle material has a “Quadratic” parameter. I set mine to 6, I think. I never unrolled the math to see why it’s up to me to set it.

Ah okay.

I’m currently thinking about trying approach #2 anyway - turned out I don’t need half as much vector geometry as I thought.

Here’s the lay of the land, any comments appreciated:
a) Define each voxel using a minVector, a maxVector, and an RGBA color. (RGB defines the glow color and intensity of a 1x1x1 volume, A the transparency. RGB values should be pretty small - at R=G=B=0.01, a 100x100x100 voxel with full transparency will already amount to 100% white.)
b) Don’t show the voxels as Boxes (that wouldn’t work anyway). Instead, flatten them into billboard-like images that use RGBA color gradients to give the right impression.
c) To get the right gradients, split the billboard into a mesh like in https://docs.google.com/drawings/d/10jtZvDyc75WGCWYiRcIv2O4swPKZQC4M8e8NN33FR2k (embedding somehow doesn’t work).
Black lines show where the front-facing voxel edges are, dotted black lines back-facing edges, red small-dotted lines show inner mesh boundaries.
d) Assign a gradient to each mesh triangle, interpolating from the vertices. For example, the ABC triangle in the mesh would have A = B = full transparency, C = the RGBA color (multiplied by the voxel’s front-to-back distance at C). The gradient would probably have to be nonlinear if the alpha channel isn’t zero - I have yet to work out the math for that. Probably something exponential, maybe a gamma curve.

So I’m going to create a GlowingVoxel class.
Not sure how to construct this thing.
It needs a scenegraph geometry like a standard Box, but the mesh to use is Camera-dependent. Does the scene graph even know where the screen viewport camera is?

Have you considered a point mesh rather than a standard mesh? Should be a lot simpler for this purpose…

> @zarch said: Have you considered a point mesh rather than a standard mesh? Should be a lot simpler for this purpose…

Wish I’d thought of that. :wink:

> @pspeed said: Wish I’d thought of that. :wink:

I know you already said that, he seemed to have missed it though :stuck_out_tongue:

Yep, missed that. Thought it was relating to filling the voxel with particle dots to simulate a glow.
But of course I can set up a ParticleMesh with a single, immobile particle that has just the right texture :smiley:

Oh. Wow. It’s (ab?)using a Control attached to the geometry just to get a Camera.
Now that’s sneaky. I’ve been reluctant to try this, but if this is part of official jME3 code, I guess I shouldn’t worry anymore about mixing abstraction levels in this way.

Hm. Now I’ll have to extend ParticleEmitter. And I can’t (not easily anyway) because it’s closing off the list of allowable cases via a Type enum, limited to Point and Triangle.
Is that correct, or am I overlooking something?


~~On the shading front, I'm getting an exponential attenuation formula for the alpha channel. (The formula is exp(-d), where d is linearly interpolated between 0 and a voxel-attached value.) Is such a shader already available somewhere? Is it even doable with a 1.00 shader? (UPDATE: http://www.opengl.org/sdk/docs/manglsl/xhtml/exp.xml seems to say it's available right from 1.10. Now I'm wondering whether GLSL100 is the same as 1.10 or not...)~~

UPDATE 3: No exponentials necessary; color intensities are logarithmic, cancelling out the exponential function. I.e. alpha can be interpolated linearly.

I'm still working on the formula for the color (i.e. the glow). It's probably going to be a few additions, multiplications, and exp() calls.
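For reference, the standard emission-absorption model (Beer-Lambert attenuation), for a ray traversing a distance $d$ through a medium with constant extinction $\sigma$ and constant emission $C$ (my notation, not from the chapter), gives:

$$T(d) = e^{-\sigma d}, \qquad \alpha(d) = 1 - T(d) = 1 - e^{-\sigma d},$$

$$L(d) = \int_0^d C\, e^{-\sigma t}\, dt = \frac{C}{\sigma}\left(1 - e^{-\sigma d}\right).$$

So the accumulated glow saturates toward $C/\sigma$ for deep voxels.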

UPDATE 2: Off to re-read Kirill's/Momoko_fan's JME3 Materials PDF. It seems to be a bit old; is it still relevant?

If you use particle emitter just to get a point mesh then you are using a sledge hammer to swat a fly.

I will offer this idea only one more time and then bow out because I’m tired of repeating it.

It goes something like this:
Create a point mesh (not a particle emitter but a mesh of type Point).

Fill it with points. Give it one point for each voxel. You can look at the particle emitter to see how it fills in the point values… you may also want to use an atlas like it does.

Assign some foggy texture. I copied JME’s smoke texture for some of my stuff. It’s a nice atlas of smoke puffs.

Set the material parameter PointSprite to true.

Turn the blend mode on the material to AlphaAdditive… not Alpha… but AlphaAdditive. This will pay attention to the alpha on the texture but otherwise add the values to whatever is already there.

Set the quadratic material parameter to something like 6… though you can play with different values, I guess. It has something to do with how the point sprites are scaled.

When you talk about creating a billboard for each particle, it makes me shake my head just a little, because that’s exactly what point sprites are.

There may be several reasons this approach is not what you want but a) it’s worth a try, and b) it’s the simplest proposed method so far that would approach a fog version of the picture you showed.
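In code, the recipe above boils down to something like this sketch (from memory, so treat it as a starting point; “mesh” is the point mesh from earlier, and the texture path is JME’s test smoke atlas):

```java
// Material setup for the point cloud described above.
Material mat = new Material(assetManager, "Common/MatDefs/Misc/Particle.j3md");
mat.setTexture("Texture", assetManager.loadTexture("Effects/Smoke/Smoke.png"));
mat.setBoolean("PointSprite", true);
mat.setFloat("Quadratic", 6f); // play with this value
mat.getAdditionalRenderState().setBlendMode(RenderState.BlendMode.AlphaAdditive);
mat.getAdditionalRenderState().setDepthWrite(false); // additive layers need no Z writes

Geometry cloud = new Geometry("starFog", mesh);
cloud.setMaterial(mat);
cloud.setQueueBucket(RenderQueue.Bucket.Transparent);
rootNode.attachChild(cloud);
```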


Thanks, what you’re proposing is getting a lot clearer to me now.

  1. I didn’t understand that you’re proposing to merge all voxel points into a single point mesh. That’s certainly something to explore; I’m not ready to deal with that yet, though - right now, I’m aiming to get a single, huge (~10% screen size) voxel displayed correctly.
  2. I expect to do a space-filling glow of varying intensity. Unless I’m mistaken, that means large voxels to keep the triangle count manageable, and no visible overlaps or gaps between voxels to avoid artifacts. Point sprite mode doesn’t seem to offer a way to scale a texture; if that’s true, I’ll have to try a triangle mesh.
  3. PointSprite mode - that was new. Seems to be tied to a texture bitmap, which would mean a really large texture for large voxels. I’d probably have to assume roughly one screenful of texture memory for that as a worst case - could that become an issue? If yes, large voxels are probably better handled using a triangle mesh.
  4. Got the AlphaAdditive hint. I’ll have to experiment with that; each voxel could have a glow (RGB) and an opacity (alpha), and I’ll have to check which mode applies. If no mode works out, I’ll simply use two sets of voxels, one for glow and one for opacity.
  5. ParticleEmitter says that “Quadratic” is a float that has something to do with attenuation. That sounds as if the shader uses that parameter to define how large a circle from the texture it uses, rather than scaling the texture (see the attenuation formula after this list). I can’t say for sure, though - I haven’t found the shader code that uses Quadratic (if it’s even in shader code; I have such a huge backlog of code reading that I haven’t even tried to find out).
  6. I agree that BillboardControl would be overkill. I had thought they were some kind of sprite, but they seem to actively manipulate the scene graph to always turn the billboarded Spatial towards the camera. Nice approach but I agree it’s gross overkill compared to setting up a mesh.
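Regarding (5): classic fixed-function OpenGL point attenuation (GL_POINT_DISTANCE_ATTENUATION) scales the point size with eye-space distance $d$ as

$$\text{size}_{\text{derived}} = \text{size} \cdot \sqrt{\frac{1}{a + b\,d + c\,d^2}}$$

with constant, linear, and quadratic coefficients $(a, b, c)$. My guess is that jME’s “Quadratic” parameter plays the role of such a coefficient in the point shader, but I haven’t verified that against the shader code.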

Remember I’m just starting to understand what jME is doing. I’m doing my best, but the amount of information to absorb is quite a lot, and I have to go on assumptions mostly because I haven’t found the time yet to study everything in enough detail. Heck, sometimes I don’t even know what to look for - that’s why I’m asking stupid questions here.
Anyway… I hope I have demonstrated enough understanding to keep you from running away in disgust :wink:
(I understand it can be frustrating to clean up the misconceptions of a newbie; please have patience.)

Point sprites are limited in maximum screen size on some cards. Sometimes as small as 64 on-screen pixels.

Since I still don’t know what effect you are trying to achieve, I can’t really comment on whether that’s a problem with my approach or not. I’m still thinking sprite-per-voxel and basing it on the image you posted.

Good luck with your approach.

I think I can now describe in a nutshell what I want to achieve:
Display a voxel as a glowing, semitransparent cube. (By “glowing” I mean “doesn’t need lighting”, not “has a halo” or “casts light on other scene objects”.)

Voxels have a real in-scene extent, players could zoom in to have a voxel fill all of the screen.
I guess that means point sprites are out, and a triangle mesh it is.

My current plan is a mesh flattened into a plane that’s orthogonal to the viewing angle. The mesh would look like the image I posted, with nominal RGBA color in the center rectangle, the hexagonal area around the rectangle interpolated to full transparency at the borders.
The mesh plane and the image vertices will have to be constantly adapted to the viewing angle; the desired effect is an impression of a glowing, semitransparent cube seen from an angle.

I see I can use a Geometry subclass to carry the corner vertices (would be just like Box) and the mesh.
I’m not 100% sure within what part of the rendering process the mesh should be adapted to the viewing angle. My current best guess would be doing it inside runControlRender(RenderManager, ViewPort).
I’m assuming I can pick up implementation details from the particle system, particularly from its triangle mesh variant (though the latter uses just two triangles as a basis for a Material - I’ll restrict myself to a uniform RGBA color because textures won’t work anyway).
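As a sketch, the per-frame adaptation could live in a Control along these lines (GlowingVoxelControl is my name; the actual mesh rebuild is omitted, and lookAt is a stand-in for the real per-vertex adaptation):

```java
import com.jme3.renderer.Camera;
import com.jme3.renderer.RenderManager;
import com.jme3.renderer.ViewPort;
import com.jme3.scene.control.AbstractControl;

// Re-orients the flattened voxel mesh towards the camera each frame.
// A real implementation would also recompute the hexagon vertices for
// the current viewing angle instead of just rotating.
public class GlowingVoxelControl extends AbstractControl {

    @Override
    protected void controlUpdate(float tpf) {
        // nothing time-dependent to do here
    }

    @Override
    protected void controlRender(RenderManager rm, ViewPort vp) {
        Camera cam = vp.getCamera();
        spatial.lookAt(cam.getLocation(), cam.getUp());
    }
}
```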

I hope this writeup is clearer than my previous attempts.
I guess my problem is that I don’t know very well yet what goes without saying and what needs to be stated explicitly.
I try to improve on that with each iteration and hope I’m converging quickly enough without overtaxing your patience (but I fear I’m losing this struggle).

Point sprites can also be emulated with triangles and a clever vertex shader. Same effect, basically. I have to rewrite a couple of my sprite-based shaders to do it this way because things like my shader-based flames look really stupid on systems with sprite-size limits. (I’m looking at you ATI.)