PBR, indoor scenes, light probes

My game takes place entirely indoors, so I want my environment maps and IBL to reflect that a bit. However, light probes assume that the model is centered on the light probe, or (equivalently) that the cube map is at infinity, i.e. you don’t care where the object is located, only about relative rotations (view rotation etc.).

This works great for outdoor scenes. But for indoor scenes, where say the ceiling lights are not far overhead, this is pretty inaccurate. What are the current ways this is solved in real-time engines? The obvious way is to have more than one light probe with some overlap and interpolation between the 4 or 8 nearest probes (bilinear, or trilinear with height). This seems reasonable and efficient.
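Roughly what I imagine per pixel is something like this (just a sketch to show the idea; the uniform names and the weight falloff are made up, not anything in the engine):

```glsl
// Sketch only: blend the nearby probes with normalized distance weights.
// NUM_PROBES and the m_* uniforms are made-up names, not existing engine uniforms.
const int NUM_PROBES = 4;
uniform samplerCube m_ProbeMap[NUM_PROBES];
uniform vec3 m_ProbePosition[NUM_PROBES];
uniform float m_ProbeRadius[NUM_PROBES];

vec3 blendedProbeSample(vec3 worldPos, vec3 dir) {
    vec3 sum = vec3(0.0);
    float wSum = 0.0;
    for (int i = 0; i < NUM_PROBES; i++) {
        // linear falloff inside each probe's radius, zero outside
        float d = distance(worldPos, m_ProbePosition[i]) / m_ProbeRadius[i];
        float w = clamp(1.0 - d, 0.0, 1.0);
        sum  += w * texture(m_ProbeMap[i], dir).rgb;
        wSum += w;
    }
    // normalize so overlapping probes blend smoothly
    return wSum > 0.0 ? sum / wSum : texture(m_ProbeMap[0], dir).rgb;
}
```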

It seems the code does not support this. But again, perhaps it is not so hard to add. Currently I get 500 FPS on fairly low-end hardware, so the extra 4x work per pixel doesn’t seem unreasonable.

Either way I don’t really know how else to attack this, so thoughts or other solutions are welcome.

Hey, light probes have a radius, and the reflection has a parallax correction depending on this radius. Unfortunately light probes for now only have a spherical radius, and when you are in a square room the reflection can be out of place in the corners.
There are several ways to fix this. None are implemented yet, but I’m actively reading up on how we could do this.
If you want to join the reading and chime in, you’re very welcome; two brains are usually better than one.

So first, you have Sebastien Lagarde’s blog post about parallax correction and probe blending in Remember Me.

Slightly outdated, but every technique used today is more or less based on it.
Basically, they have probes that sit in different areas and can have a spherical/parallelepipedic influence volume used for parallax correction. The blending is done by computing a blended cubemap every frame. Note though that the blending works on a per-object basis, not a per-pixel basis, so if you have an object that spans several probe areas, it doesn’t work well.
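The correction itself is actually tiny. A minimal sketch of the box-projection idea from that post, assuming an axis-aligned proxy box for brevity (all names here are made up):

```glsl
// Sketch of box-projected parallax correction, with an axis-aligned proxy box
// for brevity. boxMin/boxMax bound the room, probePos is where the cube map
// was captured. All names are hypothetical.
vec3 parallaxCorrect(vec3 reflDir, vec3 worldPos, vec3 boxMin, vec3 boxMax, vec3 probePos) {
    // intersect the reflection ray with the proxy box (slab method)
    vec3 planeA = (boxMax - worldPos) / reflDir;
    vec3 planeB = (boxMin - worldPos) / reflDir;
    vec3 furthest = max(planeA, planeB);
    float dist = min(min(furthest.x, furthest.y), furthest.z);
    // the hit point on the box, re-expressed as a direction from the capture point
    vec3 hitPos = worldPos + reflDir * dist;
    return hitPos - probePos;   // look up the cube map with this instead of reflDir
}
```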

The second interesting reading is from Robert Cupisz.

The idea here is more to have a probe field, with many probes placed by hand, and somehow make a tetrahedral mesh of the probe field. Then you compute an object’s barycentric coordinates in a given cell and use them as weights to blend between the 4 nearest probes. Note that this technique only talks about diffuse IBL lighting and not specular (radiance), though it can be adapted. Also note that here again this is supposed to work on a per-object basis, for dynamic objects moving around the scene; lighting is supposed to be baked for static objects… which is meh IMO.
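The blending part is basically just barycentric coordinates in the tetrahedron; a rough sketch (hypothetical names, p0..p3 being the 4 probe positions of the cell):

```glsl
// Sketch of the tetrahedral blend: barycentric weights of a position inside one
// cell of the probe mesh, used to weight the 4 probes of that cell.
// p0..p3 are the cell's probe positions (hypothetical names).
vec4 tetraWeights(vec3 pos, vec3 p0, vec3 p1, vec3 p2, vec3 p3) {
    // in practice you would precompute inverse(m) per cell on the CPU
    mat3 m = mat3(p0 - p3, p1 - p3, p2 - p3);
    vec3 w = inverse(m) * (pos - p3);        // first three barycentric coords
    return vec4(w, 1.0 - w.x - w.y - w.z);   // the fourth is whatever is left
}
```

The 4 weights then blend whatever is stored per probe (SH coefficients in Cupisz’s case).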

The last reading, which seems really promising but would be the hardest to implement, is McGuire’s precomputed light field probes:

http://graphics.cs.williams.edu/papers/LightFieldI3D17/

Here probes are automatically distributed in the area, no manual placement. Also, they don’t use cube maps; they flatten the maps down to square maps and have an atlas of all the probes in the scene, meaning that all lighting is on a per-pixel basis. That seems to solve a lot of issues from the previous techniques (parallax, tedious manual placement…), though it can be heavy on the precomputed data. For example, in the Sponza demo they have 64 probes…

So what’s the plan… IMO we need a combination of all those techniques: implement several of them and let the user pick the best one for the scene they’re working on…


Awesome nehon, I will read up, thanks. My google-fu just wasn’t finding anything past “this is how you set it up in $ENGINE”.

I will be a bit biased towards what would work well in my case, unfortunately. But automagic is what I want, as the levels are randomly generated and hence players would need to wait for baking…

But object-based should be fine for the most part in my case.

And then there is just the whole… I really like math and making shaders.

On a side note, does the PBR use shader nodes? From what I understand it doesn’t, does it?

[EDIT] Oh, and thanks for your very complete reply.

Interesting reading.

The first 2 approaches are more or less the same, i.e. averaging probes. If we only care about GI data we can have very compact probes via spherical harmonics. Fine-detail reflections (env maps) aren’t much different, plus parallax correction, but obviously with bigger maps.

Both of these appeal a lot. I have done a lot of that sort of thing before. I even have my own high-performance Delaunay triangulation and fast spatial search (finding the triangle I am in) with adjacency info. However I would probably use a simpler approach, after some code I wrote for smoothed-particle fluid dynamics: a fairly simple system of just taking a normalized weighting around the POI. This can be fast if done properly at the object level (per vertex in a pinch… but why). I am thinking that the math is perhaps also equivalent to taking parallax-corrected samples from the k nearest probes/maps and weight-summing them. Coherency would keep it fairly fast.
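The kernel weighting I mean is just something like this (a sketch; the kernel choice and the names are placeholders):

```glsl
// Sketch of the normalized weighting around the POI. probeWeight() is a smooth
// kernel that falls to zero at the probe's radius; names are placeholders.
float probeWeight(vec3 poi, vec3 probePos, float radius) {
    float d = clamp(distance(poi, probePos) / radius, 0.0, 1.0);
    float k = 1.0 - d * d;
    return k * k;   // smooth falloff, exactly zero at the radius
}
```

Evaluate that once per nearby probe around the POI, divide by the sum of the weights, and use the results as blend weights; the same normalization as the SPH code.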

I am tending towards a POI-based system, where more than one POI can be defined, so performance would be fairly scalable.

The last approach is very interesting. Pretty cool, and way too much devil in the details for my liking. Their example was using 2GB (if I read it correctly, and I like to think I did). Keeping branching coherent would be difficult. Also, these types of techniques always have lots of little details that turn out to be important but are often glossed over or omitted from the paper.

My requirements are fairly simple but fairly specific. I need real-time or fast bake times for a whole room (seconds or less) and automagic. I have randomly generated rooms and short loading delays are fine. Added bonus for a low VRAM footprint. However, in my case the view distance is fairly far away, i.e. a top-down shooter (or a bit of an angle), so automagic is probably going to be easy.

I will look at the code far more carefully tomorrow (almost 2am here). But I am thinking of moving the PBR shader to nodes; this would make me much more familiar with the code.

Please note: if this is something you would prefer to do yourself I can leave it be. Otherwise I guess it just ends up as a new Material anyway.

Well, if you are talking about the PBR to ShaderNodes thing, yeah I’d like to do it, but I might not do it soon enough for you, so go ahead.

I see you have LightProbeBlendingProcessor there. I assume the idea is to pass the probes and blend factors through to the shader.

Also, as for nodes, we will see :D. I will simply make my own material with nodes for now, the goal being exactly the same result. I also note that you have some debug in there, i.e. a debug node. This is interesting.

Once done, when done, if done, I will of course publish/post it for feedback.

The idea was to bake a cube map from different cube maps, but I’m not sure the idea is still relevant.
That’s pretty much the POI approach of Sebastien Lagarde.

In what way would that not be relevant? I mean, I would want to do it GPU-side generally. Again, all the work is in the env map for highly reflective surfaces.

Well, I’ve talked with Sebastien Lagarde. He now works for Unity and he told me that they don’t use that; it was done for Dontnod’s Remember Me game.
They use Cupisz’s approach (no surprise, the guy is from Unity too), and he said it was a solid approach.

Then I’d bet that McGuire’s paper will become the standard in no time. This guy defines standards, you know… :stuck_out_tongue:
So yeah… I’d rather spend time on his technique than on an old one (powerful though it is).

Though we could use a combination of things. IMO this paper is interesting:
http://jcgt.org/published/0003/02/01/
It allows you to store a cube map in a regular 2D map. This could help us have an atlas of all the probes in the scene and make it easier to blend on a per-pixel basis… with the tetrahedral blending of Cupisz.
Already this would be a huge step forward IMO.
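For reference, the mapping itself is really small; a sketch of the octahedral encode/decode along the lines of that paper:

```glsl
// Sketch of octahedral mapping: fold a unit direction onto a [0,1]^2 square and back.
vec2 signNotZero(vec2 v) {
    return vec2(v.x >= 0.0 ? 1.0 : -1.0, v.y >= 0.0 ? 1.0 : -1.0);
}

vec2 octEncode(vec3 v) {
    // project onto the octahedron, then fold the lower hemisphere over the upper one
    vec2 p = v.xy / (abs(v.x) + abs(v.y) + abs(v.z));
    p = (v.z < 0.0) ? (1.0 - abs(p.yx)) * signNotZero(p) : p;
    return p * 0.5 + 0.5;   // remap [-1,1] -> [0,1]
}

vec3 octDecode(vec2 uv) {
    vec2 p = uv * 2.0 - 1.0;
    vec3 v = vec3(p, 1.0 - abs(p.x) - abs(p.y));
    if (v.z < 0.0) v.xy = (1.0 - abs(v.yx)) * signNotZero(v.xy);
    return normalize(v);
}
```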

But yeah… maybe wait until clever people have used McGuire’s technique and released more detailed papers/implementations :wink:


So a day of some math and more reading. More details and code.

I also put another batch of beer down. The last lot turned out well. Here’s hoping for 2 for 2.

Cupisz’s examples only use a light field with 27 spherical harmonic coefficients (9 per color channel). This makes interpolation lightning fast. Extending this to env maps is clearly straightforward. I still see these as the same approach really; it’s just how you choose probes and weights that is different. I notice you already have interfaces in place for different strategies.

I think we are going to disagree about McGuire’s paper. It won’t be used any time soon as there are some glaring omissions that are glossed over (see what I did there). The largest being that it only works on static scenes.

In this approach every single light probe is a full-on deferred rendering buffer, with depth (“z”) and normals (“n”) for every single probe. That is crazy and a bucketload of data no matter how you pack it. They can even render the scene without any geometry at all. It requires fairly dense sampling. It already uses 2GB of VRAM and has a bunch of aliasing issues. Not to mention the challenge of getting consistent performance across different hardware.

Now try to work out moving geometry with that… their only suggestion was CPU-side work that was “similar”. So there goes hardware skinning; there goes a huge boatload of stuff.

Could it be done? Perhaps, with new hardware, but at what point are you just doing plain old full GI, aka RenderMan-style, anyway? It will be quite a trick to add moving geometry without going back to something closer to the “old-fashioned” light probe approach.

So the approach I would like to try is basically number 2 (Cupisz). However there is a bit of a variant that should make it simpler, but a bit less efficient: I would get an OBB-adjusted ray for each of the 3 probes around the POI, sample each probe normally and weight the results. It could well turn out to be too slow, but right now I get well over 200 FPS on pretty low-end hardware, so I have some spare cycles. At this stage the experiment would be for 2D probe sampling only. For now I am not going to move to nodes; I will just see if this works. The current PBR shaders are quite readable, so I should be able to manage at least as far as an experiment goes.
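In shader terms the variant would look roughly like this (a sketch only; uniform names are made up and the OBBs are reduced to AABBs for brevity):

```glsl
// Sketch of the variant: parallax-adjust the reflection ray against each of the
// 3 probes around the POI, sample each probe normally, and weight-sum the results.
// All uniform names are hypothetical.
const int K = 3;
uniform samplerCube m_Probe[K];
uniform vec3 m_ProbeCenter[K];
uniform vec3 m_BoxMin[K];
uniform vec3 m_BoxMax[K];

vec3 blendSpecular(vec3 worldPos, vec3 reflDir, float lod, vec3 weights) {
    vec3 result = vec3(0.0);
    for (int i = 0; i < K; i++) {
        // intersect the reflection ray with probe i's proxy box (slab method)
        vec3 a = (m_BoxMax[i] - worldPos) / reflDir;
        vec3 b = (m_BoxMin[i] - worldPos) / reflDir;
        vec3 furthest = max(a, b);
        float dist = min(min(furthest.x, furthest.y), furthest.z);
        vec3 dir = (worldPos + reflDir * dist) - m_ProbeCenter[i];
        result += weights[i] * textureLod(m_Probe[i], dir, lod).rgb;
    }
    return result;   // weights assumed pre-normalized to sum to 1
}
```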

Have to admit I do like the octahedron mapping as well. Much easier to atlas than cubemaps.


Then let’s roll: Cupisz’s tetrahedral tessellation with Cigolle’s octahedron mapping :wink:
That sounds so smart…


So I had some other code issues that needed attending to. But back to this.

Played around a bit with light probes. Played around with having a lot of them (255 to be exact).

So as I understand it, the current implementation precomputes the 9-coefficient SH evaluated for each normal direction into the irradiance map. So each light probe has a specular environment cube map, and a same-resolution “premultiplied by normal SH” irradiance cube map.

This then reduces rendering to sampling the irradiance cube and the correct mip level of the specular map, making live rendering very fast.

Another way, of course, is to store just the 9 SH coefficients for the irradiance of a light probe, and then evaluate against the normal vector at sample time. Indeed a light probe could be a matrix for this calculation (eq. 12 of http://graphics.stanford.edu/papers/envmap/envmap.pdf). And as I understand it this can be a varying, so the matrix can be constructed per vertex, i.e. cheaper than per pixel.
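For reference, eq. 12 boils down to a single quadratic form per color channel; a sketch of evaluating it (one channel shown, L00…L22 being that channel’s 9 SH coefficients):

```glsl
// Sketch of eq. 12 from the Ramamoorthi/Hanrahan paper: irradiance as a
// quadratic form E(n) = n^T M n with n = (x, y, z, 1). One color channel shown.
// The matrix is symmetric, so column/row order doesn't matter; it can be built
// once per probe (or per vertex) and evaluated with one mat4 multiply and a dot.
mat4 irradianceMatrix(float L00, float L1m1, float L10, float L11,
                      float L2m2, float L2m1, float L20, float L21, float L22) {
    const float c1 = 0.429043, c2 = 0.511664, c3 = 0.743125,
                c4 = 0.886227, c5 = 0.247708;
    return mat4(c1 * L22,  c1 * L2m2, c1 * L21,  c2 * L11,
                c1 * L2m2, -c1 * L22, c1 * L2m1, c2 * L1m1,
                c1 * L21,  c1 * L2m1, c3 * L20,  c2 * L10,
                c2 * L11,  c2 * L1m1, c2 * L10,  c4 * L00 - c5 * L20);
}

float irradiance(mat4 M, vec3 n) {
    vec4 nh = vec4(n, 1.0);
    return dot(nh, M * nh);
}
```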

The nice thing with that is we know it will work. The math is 100% linear, so everything combines linearly with no problems; no extra artifacts would be added. Of course sampling each light probe and weighting the results will also give the same answer.

Spec is not so nice. It is not linear. However, with dense enough sampling it would “tend to linear”. I am still thinking of sampling each env map with a parallax-adjusted ray and weight-summing the results. However this will produce bizarre artifacts from env maps placed far from each other.

At this stage I will leave cube mapping in place. Moving to octahedron mapping seems like something that fits nicely with moving to nodes.

Now the problem I am mostly trying to solve is triangulation. Not because that is hard, but because in many use cases, mine included, I won’t have or don’t need triangles/tetrahedrons. For example a long corridor can be a single probe every now and then with a single edge. Also, lots of probes is too slow to precompute. I realize that for pre-made maps this isn’t a problem, but I need to create probes as part of the “loading” screen. In fact the triangulation/edges/connectedness in my case can be part of probe placement.

Anyway, random thoughts and just an update of where I am at. Will keep cracking on. Feedback always welcome.

Out of interest, what is the slow part of calculating the light probes? I assume it’s the importance sampling step, but I didn’t profile it.


I haven’t been following this thread closely enough, but could this help:

And the demo is here:
http://codeflow.org/webgl/deferred-irradiance-volumes/www/

I guess with this kind of technique you have to rethink the way you place probes. For a corridor, with a naive approach, we would place like 3 probes along the center of the corridor. IMO with this approach you need 2 probes in each corner (one on the floor, one on the ceiling, if applicable) and maybe one in the center. I think that’s how they solve the light leaking and the parallax issue… but yeah, that makes like 5 probes instead of 3, and I guess the proportion is even bigger for full-fledged scenes… So storing 9 floats instead of the irradiance map could be a win :wink:

Speaking of which, I’m really considering doing this… it would save memory and wouldn’t make a lot of difference performance-wise… but if at some point we want to pass data from several probes to a shader, nothing is easier than a texture, and it’s very widely supported… so idk… we could support both and compare…

Generating the irradiance map is really fast; however the radiance maps take some time because of the importance sampling. I don’t remember how many samples we use (it depends on the mip level) but at some point it’s a long process.
The process is on the CPU, and even multithreaded it can take some time. But this could be offloaded to the GPU and IMO it would be faster.


As usual I had some other things to do in between.

So doing the probes on the GPU shouldn’t be too much of a problem. In fact it’s easy with pure OpenGL; I am just not sure how to fit it into the jME pipeline. I think I sort of get it.

The steps are:

  1. Render a cube map (6 90° views) to a texture.
  2. Render the appropriate set of quads/tris with the correct UV coords to convert to octahedral mapping, rendering to a texture again of course.
  3. Render that texture 9 times, multiplied by each of the 9 SH basis functions and the solid angle weight (sketched below).
  4. Sum each image, either with an OpenCL call or preferably with mipmaps/repeated 2x2 or 4x4 sample reduction.

Clearly some steps can be folded together. This should be many times faster than importance sampling on the CPU. Also, it is claimed that quite small octahedral maps can be used in this case. Back-of-the-envelope calculations at least have the reductions and the multiplications by the SH functions as super fast; rendering the scene 6 times for each probe perhaps not so much. Also the dependencies between steps will limit total performance, but there are other tricks that can be done.
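A minimal sketch of the step 3 pass, one SH coefficient at a time (texture/uniform names are placeholders; the solid-angle weights would be precomputed once):

```glsl
// Sketch of step 3: a full-screen pass over the octahedral map, one SH
// coefficient per draw. The probe radiance is already flattened to an
// octahedral map; per-texel solid angles are precomputed. Names are placeholders.
uniform sampler2D m_OctRadiance;   // the probe, flattened to an octahedral map
uniform sampler2D m_SolidAngle;    // per-texel solid angle weights
uniform int m_ShIndex;             // which of the 9 basis functions to project onto

in vec2 texCoord;
out vec4 outColor;

vec3 octDecode(vec2 uv) {          // octahedral map coords -> unit direction
    vec2 p = uv * 2.0 - 1.0;
    vec3 v = vec3(p, 1.0 - abs(p.x) - abs(p.y));
    if (v.z < 0.0) v.xy = (1.0 - abs(v.yx)) * vec2(v.x >= 0.0 ? 1.0 : -1.0,
                                                   v.y >= 0.0 ? 1.0 : -1.0);
    return normalize(v);
}

float shBasis(int i, vec3 d) {
    // real SH basis up to band 2, constants folded in
    if (i == 0) return 0.282095;
    if (i == 1) return 0.488603 * d.y;
    if (i == 2) return 0.488603 * d.z;
    if (i == 3) return 0.488603 * d.x;
    if (i == 4) return 1.092548 * d.x * d.y;
    if (i == 5) return 1.092548 * d.y * d.z;
    if (i == 6) return 0.315392 * (3.0 * d.z * d.z - 1.0);
    if (i == 7) return 1.092548 * d.x * d.z;
    return 0.546274 * (d.x * d.x - d.y * d.y);
}

void main() {
    vec3 dir = octDecode(texCoord);                // direction this texel represents
    float dw = texture(m_SolidAngle, texCoord).r;  // its solid angle
    vec3 L   = texture(m_OctRadiance, texCoord).rgb;
    // one term of the sum  c_i = sum over texels of  L(dir) * Y_i(dir) * dOmega;
    // step 4 then reduces this image down to the single coefficient c_i
    outColor = vec4(L * shBasis(m_ShIndex, dir) * dw, 1.0);
}
```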

In jME, as I understand it, I would need to use a ViewPort: set up a scene with the required quad and material, and do that for each step, rendering from texture to texture. This, like the EnvironmentCamera, would be an AppState.

Feedback and insights always welcome.


If it can be useful, this is the importance sampling ported to GLSL and OpenCL

To get the GLSL code you can run

gcc -DGLSL -E ImportanceSampling.h > ImportanceSampling.glsllib

or just add #define GLSL before including it in the GLSL shader.

I also have the Java part, but it’s mixed with LWJGL calls so it would require some time to clean up.

Yes, I have seen this, or at least others like it. However you don’t need to importance sample it; you can just integrate the whole thing, i.e. sum it. No branching and high coherence. You just need the correct solid angle correction (technically, the correct measure in the coordinate system being used). Since we are using octahedral mapping, pixels in the middle of a triangle will subtend a larger solid angle than pixels at the corners, but this is an easy one-time calculation.

Note also that it’s not uncommon to use 10,000 importance sampling steps, which is the same as a 100x100 texture. I have found 64x64 is more than enough for 9-coefficient SH (3rd order).

Sometimes I think people forget why we do things like importance sampling. It is a trick to speed up hard numerical integration(*). However, in this case it’s not really a hard integration, and losing coherence just doesn’t seem worth it.

(*) OK, so in some of the literature I have been reading, it’s a way around caring about solid angles and coordinate parameterizations on a sphere. But TBH it seems to mostly be popular because a lot of people cut and paste code, or pretty much just do what the other guy did, rather than understand why. We are not ray casting here to sample a complicated geometric scene.