Finding the Potentially Visible Set through a window?

The benefit of occlusion culling is that it allows you to avoid rendering a stapler inside a drawer inside a desk inside an office inside a building when the building happens to be inside the view frustum. Beyond that, you usually don’t want to merely cull the stapler from the scene graph; you want to never even load the stapler’s model until the camera is at least in the office. For this purpose I turn to potentially visible sets to decide which models should be in memory based on the camera’s current position. By dividing the scene into cells and loading only the models that can potentially be seen from the camera’s current cell and nearby cells, I can avoid wasting any resources on distant occluded objects.

In most occlusion culling that I’ve read about, we strive to determine what is occluded based on the exact position of the camera. This places a burden on the CPU, but makes the math easier because we only need to worry about a single point of view. In contrast, the potentially visible set for a cell has to somehow determine what might be visible from all points in the cell. Since it is calculated before the game begins there is no time limit on the calculation, but a brute-force solution is impossible since there are near-infinite potential camera positions.

Surely the most important type of occlusion for this purpose is occlusion by a wall with a window, which allows someone to see into a building while standing some distance away. If the current cell were up against the window then we could assume that the entire room is potentially visible, but from a distance we cannot make that simplification. If we allowed that simplification from a distance, it could easily cause a chain reaction that puts the entire interior of a large building into the potentially visible set: each room contains a doorway into another room, forcing us to consider each room potentially visible in turn.

So the question is: how do we calculate a potentially visible volume when viewed through a window from a distance, when we only know that the camera will be somewhere inside a known volume? I have considered putting the camera in each corner of the convex hull of the cell. That allows us to calculate a series of visible volumes by using the point of view and the edges of the window, but then it’s not clear how to combine the volumes into one large volume. We surely cannot assume that the union of those volumes will cover everything potentially visible.
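For a single candidate viewpoint, the visible volume through a rectangular window is bounded by four planes, each spanned by the eye point and one window edge. Here is a minimal sketch of that per-corner calculation (all class and method names are my own invention, not from any engine):

```java
public class PortalVolume {
    // A plane stored as {nx, ny, nz, d}, with n·p >= d meaning "inside".
    static double[] planeFrom(double[] eye, double[] a, double[] b, double[] inside) {
        double[] u = {a[0] - eye[0], a[1] - eye[1], a[2] - eye[2]};
        double[] v = {b[0] - eye[0], b[1] - eye[1], b[2] - eye[2]};
        // Normal of the plane through the eye and the window edge a->b.
        double[] n = {u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]};
        double d = n[0]*eye[0] + n[1]*eye[1] + n[2]*eye[2];
        // Flip so a known interior point (e.g. a point beyond the window) counts as inside.
        if (n[0]*inside[0] + n[1]*inside[1] + n[2]*inside[2] < d) {
            n[0] = -n[0]; n[1] = -n[1]; n[2] = -n[2]; d = -d;
        }
        return new double[]{n[0], n[1], n[2], d};
    }

    // The four bounding planes of the view volume through a quad window.
    static double[][] volumePlanes(double[] eye, double[][] window, double[] inside) {
        double[][] planes = new double[4][];
        for (int i = 0; i < 4; i++) {
            planes[i] = planeFrom(eye, window[i], window[(i + 1) % 4], inside);
        }
        return planes;
    }

    static boolean pointInVolume(double[][] planes, double[] p) {
        for (double[] pl : planes) {
            if (pl[0]*p[0] + pl[1]*p[1] + pl[2]*p[2] < pl[3]) return false;
        }
        return true;
    }
}
```

This handles one eye position; the open question above is exactly how to merge such volumes for every point in the cell.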

Here is an image to illustrate the problem:

That is why portal culling is considered useful only in limited indoor environments. If the window is visible we render the entire room. If the window does not clip the frustum we skip the entire room.

Cryengine has a decent explanation about the tradeoffs:

http://docs.cryengine.com/display/SDKDOC4/Culling+Explained
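The run-time test described above ("if the window does not clip the frustum we skip the entire room") can be done conservatively: if every corner of the window lies outside the same frustum plane, the room behind it cannot be visible. A sketch, assuming planes stored as {nx, ny, nz, d} with n·p >= d meaning "inside" (my own representation, not any engine's API):

```java
public class PortalCull {
    // Conservative portal-vs-frustum test. May report "visible" for a window
    // that is actually hidden, but never culls a visible window.
    static boolean windowMaybeVisible(double[][] frustumPlanes, double[][] windowCorners) {
        for (double[] pl : frustumPlanes) {
            boolean allOutside = true;
            for (double[] c : windowCorners) {
                if (pl[0]*c[0] + pl[1]*c[1] + pl[2]*c[2] >= pl[3]) {
                    allOutside = false;
                    break;
                }
            }
            // All corners behind one plane: the whole window is outside the frustum.
            if (allOutside) return false;
        }
        return true;
    }
}
```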


From a decent distance, you will likely only see a limited amount through that window anyway. Add an environment reflection and you’ll see nothing.

Are you saying that the best solution is to have a small draw distance when looking through windows?

We could make windows opaque until the camera is in the cell adjacent to the window and that would make things easier by evading the issue. If that would be too short a draw-distance then we could simply decide that windows become opaque if they are two or three cells further away. This would force us to render several cells deep into a building even if the camera can only actually see one cell, but that still saves us from rendering the entire interior of large buildings.

Even if that would be good enough, it’s not an elegant solution. Is this really such a hard problem that I will be reduced to using such a crude approximation? Since this is preprocessing, I won’t even have the excuse of trying to save CPU time.
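The "opaque beyond two or three cells" idea amounts to a depth-limited traversal of the cell/portal adjacency graph. A minimal sketch, assuming cells are identified by integers and the adjacency map is built during preprocessing (hypothetical representation):

```java
import java.util.*;

public class CellLoader {
    // Collect every cell reachable through at most maxDepth portals from the
    // camera's cell; everything else stays unloaded.
    static Set<Integer> cellsToLoad(Map<Integer, List<Integer>> adjacency,
                                    int start, int maxDepth) {
        Set<Integer> visited = new HashSet<>();
        Deque<int[]> queue = new ArrayDeque<>(); // each entry is {cell, depth}
        queue.add(new int[]{start, 0});
        visited.add(start);
        while (!queue.isEmpty()) {
            int[] cur = queue.poll();
            if (cur[1] == maxDepth) continue; // portals beyond this cell are "opaque"
            for (int next : adjacency.getOrDefault(cur[0], List.of())) {
                if (visited.add(next)) queue.add(new int[]{next, cur[1] + 1});
            }
        }
        return visited;
    }
}
```

This is exactly the crude approximation being discussed: it over-loads rooms the camera cannot actually see, but bounds the cost for large buildings.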

Is it a static map? Take a look into BSP trees.

I do not know how to write this in Java code, but here are some drawings which might help:

What is seen:

Or simpler:

Maybe you can calculate the outer cameras, and then only calculate the vision for those two.

EDIT: you can use optics for this by treating the window as one dot (the red dot in the drawing).

Perhaps an interesting alternative is using cube maps to represent the room.

I have found a paper describing how this could be done using 5-dimensional constructive solid geometry. The paper is awfully concise and uses some advanced mathematical terminology, but it is fascinating.

Here is the pdf: Potentially Visible Sets (PVS), Mikko Laakso

The paper suggests that the only alternative to using its technique is to sample points. It seems that if I am willing to just assume that all portals are visible to a certain depth, then I ought to be willing to accept the approximation of a sampled set of points. Since a portal is never bigger than a cell, we can take the corner points and middle points of the potentially visible portals, create planes from those points using the edges of the viewing portal, and then test whether each cell overlaps the volume defined by those planes.
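Two building blocks of that sampling scheme can be sketched directly: generating the sample points on a quad portal (corners, edge midpoints, centre), and a conservative test of whether a cell's bounding box reaches the visible side of a plane. This is my own sketch of the approach, not code from the paper:

```java
public class PortalSampling {
    // Sample points on a quad portal: four corners, four edge midpoints, centre.
    static double[][] samplePortal(double[][] corners) {
        double[][] pts = new double[9][3];
        double[] centre = new double[3];
        for (int i = 0; i < 4; i++) {
            pts[i] = corners[i];
            for (int k = 0; k < 3; k++) {
                pts[4 + i][k] = (corners[i][k] + corners[(i + 1) % 4][k]) / 2;
                centre[k] += corners[i][k] / 4;
            }
        }
        pts[8] = centre;
        return pts;
    }

    // Conservative test: does the AABB reach the positive side of the plane
    // {nx, ny, nz, d} (with n·p >= d meaning "visible side")?
    static boolean aabbTouchesHalfspace(double[] min, double[] max, double[] plane) {
        // Evaluate the plane at the box corner furthest along the plane normal.
        double s = 0;
        for (int k = 0; k < 3; k++) {
            s += plane[k] * (plane[k] >= 0 ? max[k] : min[k]);
        }
        return s >= plane[3];
    }
}
```

A cell would then count as potentially visible if its box passes this test for every plane built from a sample point and the viewing portal's edges.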

On the other hand, there is no reason why I should be scared away by a little 5-dimensional constructive solid geometry. It is less intuitive but it should give exact answers efficiently if I can figure out exactly how it works.