Hey, light probes have a radius, and the reflection gets a parallax correction based on that radius. Unfortunately, light probes only have a spherical radius for now, so when you are in a square room the reflection can be out of place in the corners.
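For reference, the sphere parallax correction itself boils down to a ray-sphere intersection: you shoot the reflection ray from the fragment, find where it exits the probe's influence sphere, and look the cubemap up in the direction from the probe center to that hit point. A minimal numpy sketch (names are illustrative, not the engine's actual API):

```python
import numpy as np

def parallax_corrected_reflection(frag_pos, refl_dir, probe_center, probe_radius):
    """Correct a reflection lookup for a sphere-shaped probe volume.

    Intersects the ray (frag_pos + t * refl_dir) with the sphere of radius
    probe_radius around probe_center, then returns the direction from the
    probe center to the exit point. Illustrative names, not engine API.
    """
    refl_dir = refl_dir / np.linalg.norm(refl_dir)
    # Ray-sphere intersection: |frag_pos + t*refl_dir - probe_center|^2 = r^2
    oc = frag_pos - probe_center
    b = np.dot(oc, refl_dir)
    c = np.dot(oc, oc) - probe_radius ** 2
    disc = b * b - c
    if disc < 0.0:
        return refl_dir          # ray misses the sphere: fall back to raw direction
    t = -b + np.sqrt(disc)       # far intersection (fragment is inside the sphere)
    hit = frag_pos + t * refl_dir
    corrected = hit - probe_center
    return corrected / np.linalg.norm(corrected)
```

Note that when the fragment sits exactly at the probe center, the corrected direction equals the raw reflection direction, which is why the error only shows up away from the center (e.g. in the corners of a square room).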
There are several ways to fix this. None are implemented yet, but I'm actively reading up on how we could do it.
If you want to join the reading and chime in, you're very welcome: two brains are usually better than one.
So first, you have Sébastien Lagarde's blog post about parallax correction and probe blending in Remember Me.
It's slightly outdated, but every technique used today is more or less based on it.
Basically, they place probes in different areas, and each probe can have a spherical or parallelepipedic influence volume used for parallax correction. The blending is done by computing a blended cubemap every frame. Note, though, that the blending works on a per-object basis, not per pixel. So if you have an object that spans several probe areas, it doesn't work well.
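The parallelepipedic variant just swaps the ray-sphere intersection for a ray-box one. A rough sketch, assuming an axis-aligned box for brevity (a real probe volume would typically be an oriented box, handled by transforming into the probe's local space first):

```python
import numpy as np

def parallax_corrected_box(frag_pos, refl_dir, box_min, box_max, probe_pos):
    """Ray/AABB variant of the parallax correction (sketch).

    Finds where the reflection ray exits the box via the slab test, then
    returns the direction from the probe position to that exit point.
    Assumes the fragment is inside the box and refl_dir has no exactly-zero
    component (a shader version would guard against that).
    """
    refl_dir = refl_dir / np.linalg.norm(refl_dir)
    inv = 1.0 / refl_dir
    # Distances along the ray to each pair of box planes.
    t1 = (box_min - frag_pos) * inv
    t2 = (box_max - frag_pos) * inv
    t_far = np.min(np.maximum(t1, t2))   # exit distance (fragment is inside)
    hit = frag_pos + t_far * refl_dir
    d = hit - probe_pos
    return d / np.linalg.norm(d)
```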
The second interesting read is by Robert Cupisz.
The idea here is more to have a probe field: many probes placed by hand, with a triangulated (tetrahedral) mesh somehow built over the probe field. You then compute an object's barycentric coordinates within its cell and use them as weights to blend between the 4 nearest probes. Note that this technique only covers diffuse IBL lighting, not specular (radiance), though it could be adapted. Also note that here again it works on a per-object basis: it's meant for dynamic objects moving around the scene, and lighting is supposed to be baked for static objects… which is meh IMO.
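The barycentric weighting step can be sketched like this (hypothetical names; the per-probe data is just whatever you blend, e.g. SH coefficients for diffuse):

```python
import numpy as np

def tetra_barycentric(p, a, b, c, d):
    """Barycentric coordinates of point p inside tetrahedron (a, b, c, d),
    solved as a 3x3 linear system. This is the weighting step of the
    tetrahedral probe-field idea; probe positions are assumed inputs."""
    m = np.column_stack((b - a, c - a, d - a))
    u, v, w = np.linalg.solve(m, p - a)
    return np.array([1.0 - u - v - w, u, v, w])

def blend_probes(p, probe_positions, probe_data):
    """Blend the 4 cell probes' data (e.g. SH coefficients) by the
    barycentric weights of the object position p."""
    weights = tetra_barycentric(p, *probe_positions)
    return sum(w * data for w, data in zip(weights, probe_data))
```

A full implementation also needs to find which tetrahedron the object is in (and handle points outside the hull), which is the fiddly part of the technique.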
The last read, which seems really promising but would be the hardest to implement: McGuire's Precomputed Light Fields:
http://graphics.cs.williams.edu/papers/LightFieldI3D17/
Here, probes are automatically distributed in the area: no manual placement. Also, they don't use cubemaps; they flatten the maps down to square maps and keep an atlas of all the probes in the scene, meaning all lighting works on a per-pixel basis. That seems to solve a lot of the issues of the previous techniques (parallax, tedious manual placement…), though it can be heavy on precomputed data. For example, in the Sponza demo they have 64 probes…
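The flattening of a sphere of directions down to a square is usually done with an octahedral mapping (which is what that paper's probe atlas uses). A quick numpy sketch of the encode/decode pair, to give an idea of the parametrization:

```python
import numpy as np

def sign_not_zero(v):
    """Like np.sign, but returns 1.0 for 0.0 (needed on the octahedron seams)."""
    return np.where(v >= 0.0, 1.0, -1.0)

def oct_encode(d):
    """Map a unit direction onto [-1, 1]^2 via octahedral projection."""
    d = d / np.abs(d).sum()                      # project onto |x|+|y|+|z| = 1
    if d[2] < 0.0:                               # fold the lower hemisphere over
        x, y = d[0], d[1]
        d[0] = (1.0 - abs(y)) * sign_not_zero(x)
        d[1] = (1.0 - abs(x)) * sign_not_zero(y)
    return d[:2]

def oct_decode(e):
    """Inverse mapping: a point of [-1, 1]^2 back to a unit direction."""
    v = np.array([e[0], e[1], 1.0 - abs(e[0]) - abs(e[1])])
    if v[2] < 0.0:                               # unfold the lower hemisphere
        x, y = v[0], v[1]
        v[0] = (1.0 - abs(y)) * sign_not_zero(x)
        v[1] = (1.0 - abs(x)) * sign_not_zero(y)
    return v / np.linalg.norm(v)
```

This is what makes the per-pixel atlas practical: every probe becomes one square tile, and a direction lookup is just a 2D texture fetch after `oct_encode`.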
So what’s the plan… IMO we need a combination of all those techniques: implement several of them and let the user pick the best one for the scene they’re working on…