The `EnvironmentCamera` class is an invaluable tool for rendering environment maps and baking light probes. One thing that's awkward about using it, however, is that many advanced effects (especially atmospherics, like Rayleigh-Mie scattering and clouds) rely on screenspace techniques that are implemented as filters or scene processors, while `EnvironmentCamera` renders `Spatial` scenes through a set of internal viewports that those filters and processors cannot reach. It would be tremendously useful for realtime PBR baking if `EnvironmentCamera` supported screenspace effects.
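For context, here is roughly how a screenspace effect is normally attached to the main viewport from inside a `SimpleApplication` (where `assetManager` and `viewPort` are inherited fields); `BloomFilter` is just an illustrative stand-in for any filter:

```java
// Typical screenspace setup: a FilterPostProcessor wraps one or more
// filters and is attached to the viewport as a scene processor.
FilterPostProcessor fpp = new FilterPostProcessor(assetManager);
fpp.addFilter(new BloomFilter());
viewPort.addProcessor(fpp);
// EnvironmentCamera's internal viewports offer no equivalent hook today.
```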
I see two ways to support this cleanly:
1. Expose `EnvironmentCamera`'s viewports directly via a `ViewPort[] getViewports()` method (sketched after this list). The upside of this approach is that end users can do whatever they need to with the viewports. The downside is that this kind of access offers a lot of ways to mess up the internal `ViewPort` state.
2. Add `get`/`setProcessor()` methods to `EnvironmentCamera` and have it manage adding and removing processors on the viewports. The upside is that the viewports are never externally exposed and therefore cannot be altered. The downside is that this approach is more limited: it's conceivable that some `SceneProcessor` implementations may not support use on multiple viewports at once, so this may not allow use of all scene processors in `EnvironmentCamera`.
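To make the two options concrete, here is a rough sketch of what each might look like inside `EnvironmentCamera`. The method names and the `viewports` field are hypothetical; `viewports` stands in for whatever the internal array of cube-face viewports is actually called:

```java
// Approach #1: expose the internal cube-face viewports directly.
public ViewPort[] getViewports() {
    // Callers take responsibility for whatever they attach or modify.
    return viewports;
}

// Approach #2: keep the viewports hidden and manage the processor for
// the caller. A processor attached this way must tolerate being
// initialized on all six viewports at once, which not every
// SceneProcessor implementation supports.
public void setProcessor(SceneProcessor processor) {
    for (ViewPort vp : viewports) {
        vp.addProcessor(processor);
    }
}

public void removeProcessor(SceneProcessor processor) {
    for (ViewPort vp : viewports) {
        vp.removeProcessor(processor);
    }
}
```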
I personally favor approach #1. `EnvironmentCamera` is a specialized utility class that already takes some effort to use correctly, so I don't see exposing the internal state as a major downside. To cause an issue, users would have to be directly modifying `ViewPort` state, and at that point they probably either (a) know what they're doing, or (b) at least have an idea of how they got themselves into trouble and some possible ways out.
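For example, assuming the hypothetical `getViewports()` accessor sketched above (with `envCamera` being the attached `EnvironmentCamera` instance), attaching an atmospheric effect for a bake could look like this. `FogFilter` is just a stand-in, and note the separate `FilterPostProcessor` per viewport, since a single instance generally cannot be shared across viewports:

```java
// Attach a screenspace effect to each internal cube-face viewport.
for (ViewPort vp : envCamera.getViewports()) {
    FilterPostProcessor fpp = new FilterPostProcessor(assetManager);
    fpp.addFilter(new FogFilter());
    vp.addProcessor(fpp);
}
```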
Edit: This approach is also by far the more powerful of the two. If something can be done in a `ViewPort`, it can also be part of an environment snapshot. Approach #2 is substantially more limited, since it only provides a way to attach scene processors, nothing else.
Thoughts?