(November 2016) Monthly WIP screenshot thread

Thank you for the in-depth explanation and the links you provided. You gave me quite a bit of reading material. I also have a link for you, with a follow-up, regarding volumetric rendering.

A short quote:

A 3D texture is warped to fill the view frustum and dynamically updated with the density of air and fog at each texel. Each texel of the resulting volume is illuminated independently into a second 3D texture. Finally the illumination is accumulated into a third 3D texture so that each texel contains the amount of light scattered towards the camera along that direction and up to that distance.

This sounds very interesting for inhomogeneous media (e.g. clouds). The difference from the work by Dobashi et al. is that you ultimately ray march through that 3D texture instead of placing 2D slices along the camera direction.
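To make the third step of the quoted pipeline concrete, here is a minimal sketch of the accumulation pass for a single froxel column (one screen-space texel), written in Python as a stand-in for the shader. The function and parameter names are mine, the 0.2/0.5 values are made-up fog parameters, and the exponential-transmittance model is the standard one, not necessarily exactly what the linked implementation uses:

```python
import math

def accumulate_column(scattered, extinction, step_size):
    # March front to back along one froxel column so that slice i ends up
    # holding the light scattered toward the camera up to depth i, plus the
    # remaining transmittance at that depth.
    accumulated = []
    transmittance = 1.0
    in_scatter = 0.0
    for s, sigma_t in zip(scattered, extinction):
        # Light scattered in this slice, attenuated by everything in front of it.
        in_scatter += transmittance * s * step_size
        # Beer-Lambert attenuation across this slice.
        transmittance *= math.exp(-sigma_t * step_size)
        accumulated.append((in_scatter, transmittance))
    return accumulated

# Example: uniform fog along 4 slices (hypothetical values).
result = accumulate_column([0.2] * 4, [0.5] * 4, step_size=1.0)
```

With the accumulated texture in hand, the final composite is just one 3D texture lookup per pixel at the scene depth, which is why the heavy marching cost is paid once per froxel rather than per screen pixel.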

I figure that your approach is more suitable for homogeneous media, right?
Since you’ve been playing around with ray marching already, how would you rate the performance of ray marching through a 160x90x128 texture? Would the above-mentioned method be doable in real time, or should I go in another direction?

This might solve your problem: I once suggested automatic scaling and orienting of the impostor quads in the vertex shader to @MoffKalast; you can find it here. You need to provide an up vector to the impostor’s material; then they’ll keep their orientation when you roll the camera.
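The idea, roughly, is to rebuild the quad’s basis from the view direction and the supplied world-up vector instead of from the camera’s own axes. Here is a CPU-side Python sketch of what the vertex shader computes; the function and parameter names are illustrative, not the actual material parameter names:

```python
def cross(a, b):
    # Standard 3D cross product on (x, y, z) tuples.
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(v):
    n = (v[0] ** 2 + v[1] ** 2 + v[2] ** 2) ** 0.5
    return (v[0] / n, v[1] / n, v[2] / n)

def impostor_basis(view_dir, world_up):
    # The quad's right axis is perpendicular to both the view direction and
    # the provided world up, so it stays horizontal no matter how the camera
    # rolls (roll changes the camera's up, not view_dir or world_up).
    right = normalize(cross(view_dir, world_up))
    # Recompute the quad's up so the basis is orthogonal.
    up = cross(right, view_dir)
    return right, up

# Camera looking down -Z with world up +Y.
right, up = impostor_basis((0.0, 0.0, -1.0), (0.0, 1.0, 0.0))
```

Each corner of the quad is then offset from its center by `right` and `up` scaled by the impostor’s half-extents; because neither input changes under camera roll, the quad keeps its orientation.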