As far as I can tell, depth is resolved the same way as the color textures by MultiSample.glsllib, i.e. by averaging the samples.
This doesn’t seem correct to me, since depth represents spatial data. Let’s say, for example, we are at the edge between a cube and the skybox: some samples might land on the skybox and some on the cube. The averaged result will place the depth somewhere between the cube and zFar, where in reality there is nothing.
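A minimal sketch of the problem (the sample values are made up for illustration; the resolve strategies mirror the options discussed below):

```python
# Hypothetical resolve of 4 depth samples at a cube/skybox edge.
# Assumed values: cube fragments at depth 0.5, skybox fragments at 1.0 (zFar).
samples = [0.5, 0.5, 1.0, 1.0]

# Averaging resolve (what resolving depth like color would do):
averaged = sum(samples) / len(samples)
print(averaged)  # 0.75 -> a depth at which neither surface exists

# Alternatives mentioned in this thread:
nearest = min(samples)   # "nearest depth": always a real surface, but biased toward edges
first = samples[0]       # "first sample": de facto no multisampling for depth
print(nearest, first)    # 0.5 0.5
```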
So, “random” or “nearest depth”. Random might be interesting (no way to tell for sure which sample comes first, I guess). Nearest seems like it would also cause edge artifacts.
Since there is no single right answer on how to mix depth, I guess the code took the position that most things interested in depth are gradient based and would be happier with an intermediate value than with some hard, incorrect cut-off.
A first-sample approach would de facto behave as if there were no multisampling for the depth.
Actually, the current code potentially turns edge pixels into values that exist nowhere in the scene, while the hard “cut-off” value would at least be a correct one.
Then we also need to consider that depth is non-linear: averaging post-projection depth values is not the same as averaging the underlying view-space distances.
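A sketch of the non-linearity issue, assuming a standard perspective projection with depth mapped to [0, 1] and made-up near/far planes of 0.1/100:

```python
def depth01(z, n=0.1, f=100.0):
    """Post-projection depth in [0,1] for view-space distance z
    (standard perspective mapping; n/f are assumed near/far planes)."""
    return (f * (z - n)) / (z * (f - n))

def inv_depth01(d, n=0.1, f=100.0):
    """Invert depth01: recover the view-space distance from a stored depth."""
    return (f * n) / (f - d * (f - n))

z_a, z_b = 1.0, 50.0                      # two surfaces covered by one pixel
avg_of_depths = (depth01(z_a) + depth01(z_b)) / 2
depth_of_avg = depth01((z_a + z_b) / 2)

# Because depth is hyperbolic in z, averaging the stored depths is heavily
# skewed toward the near surface: the result corresponds to a distance of
# roughly 2, nowhere near the midpoint distance of 25.5.
print(inv_depth01(avg_of_depths))
print(avg_of_depths, depth_of_avg)        # the two resolves disagree
```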