(November 2016) Monthly WIP screenshot thread

Nothing super amazing, but it took a bit of thinking to get my head around it all. Multi-threading can be an interesting puzzle to solve at times, and it took me probably three days just to understand marching cubes and Paul's implementation. Anyway. Yay!

9 Likes

Cool. Can it be persisted?

Sure. It's just a map of overrides at this point. The work was in modifying the mesh from multiple threads. Even the box ray cast doesn't like it, because things are going in and out of the scene node. Perlin worms work pretty much the same way; I have those working too, but I need to spend some time cleaning up. I have done a lot of testing for a lot of things: boulders causing terrain damage, etc.
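A "map of overrides" on top of a procedural density field could look something like this minimal sketch. All names and the coordinate-packing scheme are my own illustration, not the poster's actual code:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of a density-override map for voxel terrain edits.
public class TerrainOverrides {
    private final Map<Long, Float> overrides = new HashMap<>();

    // Pack three voxel coordinates (21 bits each) into one long key.
    static long key(int x, int y, int z) {
        return ((long) (x & 0x1FFFFF) << 42)
             | ((long) (y & 0x1FFFFF) << 21)
             |  (long) (z & 0x1FFFFF);
    }

    // Record an edit (e.g. digging: density pushed below the iso level).
    public void setDensity(int x, int y, int z, float density) {
        overrides.put(key(x, y, z), density);
    }

    // The mesher asks here first and falls back to the procedural base field.
    public float density(int x, int y, int z, float baseDensity) {
        return overrides.getOrDefault(key(x, y, z), baseDensity);
    }
}
```

Persisting the terrain then only means serializing this map; the procedural base field is regenerated from the seed.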

Cheers guys =)

@iamcreasy I didn’t plan to make it wet, but since it’s in the PBR pipeline… add some spec, reduce roughness, darken the base color… 90% wet.
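The "reduce roughness, darken base color" tweak can be sketched as two tiny functions. The blend factors below are made-up illustration values, not the poster's:

```java
// Sketch of the "90% wet" look in a PBR pipeline: a wet surface is roughly
// the same material with lower roughness and a darker base color.
public class WetLook {
    // wetness in [0,1]; 0 = dry, 1 = soaked. Factors are arbitrary.
    public static float wetRoughness(float roughness, float wetness) {
        // water smooths the micro-surface
        return roughness * (1f - 0.8f * wetness);
    }

    public static float[] wetBaseColor(float[] rgb, float wetness) {
        // water darkens the albedo
        float k = 1f - 0.35f * wetness;
        return new float[] { rgb[0] * k, rgb[1] * k, rgb[2] * k };
    }
}
```

In jME these values would end up in the PBR material's Roughness and BaseColor parameters.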

@Tryder I did try that method; the results were OK and could make a decent starting point. For anyone wanting to go down this route, adding an octave of Voronoi noise first will give the rock a sharp, scalloped look.
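For anyone unfamiliar with it, one octave of Voronoi (Worley) noise is just the distance to the nearest hashed feature point, which is what produces those sharp cell ridges. A minimal 2D sketch (hash constants are arbitrary):

```java
// One octave of Voronoi (Worley) noise: the value at a point is the distance
// to the nearest feature point, producing a scalloped, cellular look.
public class Voronoi {
    static float hash(int x, int y, int seed) {
        int h = x * 374761393 + y * 668265263 + seed * 1274126177;
        h = (h ^ (h >>> 13)) * 1103515245;
        return ((h ^ (h >>> 16)) & 0x7FFFFFFF) / (float) 0x7FFFFFFF;
    }

    // 2D Worley noise: distance to the nearest feature point in the 3x3
    // neighbourhood of grid cells around (x, y).
    public static float worley(float x, float y) {
        int cx = (int) Math.floor(x), cy = (int) Math.floor(y);
        float min = Float.MAX_VALUE;
        for (int i = -1; i <= 1; i++) {
            for (int j = -1; j <= 1; j++) {
                float fx = cx + i + hash(cx + i, cy + j, 1);
                float fy = cy + j + hash(cx + i, cy + j, 2);
                float dx = fx - x, dy = fy - y;
                min = Math.min(min, dx * dx + dy * dy);
            }
        }
        return (float) Math.sqrt(min); // roughly in [0, ~1.4]
    }
}
```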

Tree attempt 8, with “smash” lit PBR leaves (“smash lighting” is an unfinished jme project thing I toy around with and is used to bake faux translucency into objects).

13 Likes

Jesus dude, how come EA or Ubisoft haven’t already kidnapped you in your sleep?

2 Likes

Hi @thetoucher. I am currently looking for real-time methods to compute light scattering in clouds. I reckon that you’re working on something similar with leaves, and I wonder if there is an option that I missed. How does it work?

You and me both =) I’m currently working on a more advanced fog solution for smoke and clouds (and fog); most of my current success is coming from volume-lighting-like solutions. I want better light shafts, so I’m working in that direction. Still very early though.

This smash lighting works offline by first lighting the mesh with a fake lighting rig (jme lights + extra area and hemisphere lighting) that is set up to mimic the world lighting. For my clouds, shadows are calculated using volumetric data; static mesh shadows are in the pipeline (they are scary, since I don’t want to ray-trace, I’m dubious about the accuracy of a shadow-cam-like solution, and area lights hurt my insides).

This lighting information is then blurred and warped - the core of smash lighting - which gives a translucency-like effect. It doesn’t use any volume information; it works at the mesh level.

The new lighting information is written to the mesh, and the mesh is written back to disk for later runtime use (an optional step). Finally, this new lighting information is fed into the final material (Unshaded, Lighting, PBRLighting, etc.) with a little tweaking, at runtime, with next to no performance hit.
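The "blur the baked lighting across the mesh" step could be sketched roughly like this. The radius-based neighbour average is my guess at the idea, not the actual smash-lighting code:

```java
// Sketch of the offline bake: baked per-vertex lighting is blurred across
// nearby vertices so light "bleeds through" the mesh, faking translucency.
public class SmashBake {
    // positions: [x0,y0,z0, x1,y1,z1, ...]; light: one scalar per vertex.
    public static float[] blurLighting(float[] positions, float[] light, float radius) {
        int n = light.length;
        float[] out = new float[n];
        float r2 = radius * radius;
        for (int i = 0; i < n; i++) {
            float sum = 0f;
            int count = 0;
            for (int j = 0; j < n; j++) {
                float dx = positions[3*i]   - positions[3*j];
                float dy = positions[3*i+1] - positions[3*j+1];
                float dz = positions[3*i+2] - positions[3*j+2];
                if (dx*dx + dy*dy + dz*dz <= r2) { sum += light[j]; count++; }
            }
            out[i] = sum / count; // each vertex averages its neighbourhood
        }
        return out;
    }
}
```

The blurred values would then be written into a vertex color buffer and picked up by the runtime material, which is why the runtime cost is near zero.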

As for clouds specifically: for the clouds I did a while back (which you are probably interested in; I can’t remember if I ever made a detailed post about them), I started by defining the cloud volumes (the shape of each ‘cloud’) using a density volume and skinning it with verts - a voxel-like approach, think Minecraft but with vertices rather than cubes. These cloud ‘meshes’ can be, and at one point were, modelled by hand in Blender, but it was taking too long to make large areas, so I went for a more procedural approach. The cloud vertex points are then ‘smash lit’ as described above, and point sprites are rendered at each vertex in each ‘cloud’… so it’s just particle-sprite based - big letdown, I know, but I’m yet to find a more appropriate solution for close-up / going-inside type clouds. If I ever get around to patching the “rolling / spinning” issues (where the edges of the clouds look like a buzz saw while rolling the camera) they won’t look half bad.
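The "skin the density volume with verts" step can be sketched as: keep one point per filled cell that touches an empty cell, then render a sprite at each kept point. The density function and iso threshold here are placeholders:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: sample a density volume on a grid and keep the "skin" - filled
// cells that have at least one empty neighbour. One point sprite per point.
public class CloudSkin {
    public interface Density { float at(int x, int y, int z); }

    public static List<int[]> skinPoints(Density d, int size, float iso) {
        List<int[]> points = new ArrayList<>();
        int[][] dirs = {{1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1}};
        for (int x = 1; x < size - 1; x++)
            for (int y = 1; y < size - 1; y++)
                for (int z = 1; z < size - 1; z++) {
                    if (d.at(x, y, z) < iso) continue;        // empty cell
                    for (int[] n : dirs) {                    // any empty neighbour?
                        if (d.at(x + n[0], y + n[1], z + n[2]) < iso) {
                            points.add(new int[]{x, y, z});   // surface cell
                            break;
                        }
                    }
                }
        return points;
    }
}
```

Interior cells are skipped entirely, which keeps the sprite count proportional to the cloud's surface rather than its volume.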

2 Likes

Thank you for the in-depth explanation and the links that you provided. You gave me quite some reading material. I also have a link for you, with a follow-up, regarding volumetric rendering.

A little citation:

A 3D texture is warped to fill the view frustum and dynamically updated with the density of air and fog at each texel. Each texel of the resulting volume is illuminated independently into a second 3D texture. Finally the illumination is accumulated into a third 3D texture so that each texel contains the amount of light scattered towards the camera along that direction and up to that distance.

This sounds very interesting for inhomogeneous media (e.g. clouds). The difference from the work by Dobashi et al. is that you finally ray march through that 3D texture instead of placing the 2D slices along the camera direction.

I figure that your approach is more suitable for homogeneous media, right?
Since you’ve been playing around with ray marching already, how would you rate the performance of the ray-marching technique with a 160×90×128 texture? Would the above-mentioned method be doable in real time, or should I go in another direction?

This might solve your problem: I once suggested automatic scaling and orienting of the impostor quads in the vertex shader to @MoffKalast , you can find it here. You need to provide an up vector to the impostor’s material, then they’ll keep their orientation when you roll the camera.
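The core of that suggestion is building the quad axes from a fixed world up vector instead of the camera's own up, so rolling the camera does not spin the sprite. A plain vector-math sketch of what the vertex shader would do (names are mine):

```java
// Orient an impostor quad from a view direction and a fixed world up vector.
public class Impostor {
    static float[] cross(float[] a, float[] b) {
        return new float[]{ a[1]*b[2] - a[2]*b[1],
                            a[2]*b[0] - a[0]*b[2],
                            a[0]*b[1] - a[1]*b[0] };
    }

    static float[] normalize(float[] v) {
        float len = (float) Math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
        return new float[]{ v[0]/len, v[1]/len, v[2]/len };
    }

    // viewDir: sprite-to-camera direction; up: fixed world up, e.g. (0,1,0).
    // Returns {right, up} axes for the quad. Degenerates when viewDir is
    // parallel to up - the "completely vertical camera" problem.
    public static float[][] quadAxes(float[] viewDir, float[] up) {
        float[] right  = normalize(cross(up, viewDir));
        float[] quadUp = cross(viewDir, right);
        return new float[][]{ right, quadUp };
    }
}
```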

Oh wow, nice find (the links) - that is pretty much exactly what I have in my head, and have about halfway working in progress.

My approach relies on homogeneous media, so it can eliminate the need for any ray marching ( == performance ^ infinity).
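The reason homogeneous media need no marching: with a constant extinction coefficient, the transmittance over a distance has the closed form exp(-sigma * d) (Beer-Lambert), while a ray march through the same constant medium only approximates it step by step. A small comparison sketch (my own illustration):

```java
// Closed-form transmittance vs. a step-by-step ray march through a
// constant-density medium.
public class Transmittance {
    public static double analytic(double sigma, double d) {
        return Math.exp(-sigma * d); // Beer-Lambert, no marching needed
    }

    // Typical march: accumulate per-step attenuation; converges to the
    // analytic value as the step count grows.
    public static double marched(double sigma, double d, int steps) {
        double dt = d / steps, t = 1.0;
        for (int i = 0; i < steps; i++) {
            t *= 1.0 - sigma * dt; // linear attenuation per step
        }
        return t;
    }
}
```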

I’ve always been anti ray marching in any form for realtime solutions, as I have it in my head that it’s always going to be too slow… that said, given some of the reading I’ve been doing, I will look into it again in the next few days/weeks/years (who really knows). – What I’m saying is I’m biased against the use of marching… but I’m almost always wrong :wink:

Cheers, I will have a look at what old 'kalast has done. None of my solutions could fix the issue of “what happens when the camera is completely vertical?” - the sprite is then orientated along the XZ axis, so there is no more “up” =/

No idea at all, sorry. 2M-odd samples (160 × 90 × 128 ≈ 1.8M) doesn’t sound that bad to me; that’s about one full-screen pass at average resolution.

Yes, you’d probably have to freeze some rotation angles of the up vector itself when going through a vertical look direction.

I will soon start with volumetric shading of clouds, and I will probably go the RESET (ray-marching) way though. I don’t see any other suitable solution for inhomogeneous media, which is a requirement. The videos, however, look very good, and hardware keeps getting better.
I guess this will be a lot of work too. :sweat_smile:

BTW: That blog also features PBR and deferred rendering, if anybody is interested.

Cheers

Hi,

Can you provide some working links to example code in your great article about volume lighting? Unfortunately, I’m too dumb to understand GPU Gems, so I’m looking for a working example to implement this effect in my game. As far as I understood, the whole trick is to have geometry modified in the vertex shader by the shadow map’s data and used to light up pixels in the fragment shader. And you know how to do it. :slight_smile:

1 Like


Didn’t bother finishing it (it’s finished in Blender); just playing around with shaders. Nothing impressive, just learning the basics. I have specular highlights just from sending position and colour; there’s no diffuse since that’s all lightmap. Bunch of UV maps, multiple diffuse and specular maps.

20 Likes

Mobs are spawning.
Technically, nothing special - just some lights, particles and texturing effects.

17 Likes