Shadows in VR (with instancing)

I got SSAO working in VR with instancing, which was relatively easy: it just needed an instancing-aware normal.vert, and then the filter works on the whole render (which includes both eyes).
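
For reference, the core of that instancing trick in the vertex shader looks roughly like this (a simplified sketch, not my exact shader; the Left/Right uniform names are illustrative): every geometry is drawn twice, and gl_InstanceID tells you which eye you're rendering.

// Sketch of an instancing-aware vertex shader; needs gl_InstanceID
// (GLSL 1.40+, or an instancing extension on older versions).
uniform mat4 g_WorldMatrix;
uniform mat4 m_ViewProjectionMatrixLeft;   // illustrative name
uniform mat4 m_ViewProjectionMatrixRight;  // illustrative name

attribute vec3 inPosition;

void main(){
    vec4 worldPos = g_WorldMatrix * vec4(inPosition, 1.0);
    // instance 0 = left eye, instance 1 = right eye
    mat4 vp = (gl_InstanceID == 0) ? m_ViewProjectionMatrixLeft : m_ViewProjectionMatrixRight;
    gl_Position = vp * worldPos;
    // squish to half the screen width, then scoot to the correct side
    gl_Position.x *= 0.5;
    gl_Position.x += (gl_InstanceID == 0 ? -0.5 : 0.5) * gl_Position.w;
}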

Shadows seem a bit trickier, though… currently, the DirectionalLightShadowFilter draws shadows over both eyes, as if instancing wasn’t happening (which is expected). Only one shadow map should be needed… so how do we go about squishing the shadows to half the screen width, and scooting two copies of them to either side?

Create a new filter handler that creates two AbstractShadowFilters sharing a single shadow map? The first filter gets the left camera’s projection matrix, the second gets the right. The post-shadow vertex shader would squish the shadow and move it to either side.

Not sure that is the best way to go about it (or if it’d even work). Is there a way to do it in one filter? Looking for some assistance… thank you!

The post-shadow pass works like this: find the position of a pixel in world space, project it into the shadow map’s light space, do the shadow test for this position, and paint it black if it’s occluded, white otherwise.
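
In shader terms, the test boils down to something like this (a simplified sketch; the uniform names are illustrative, and the real shader also deals with splits, bias, and filtering):

vec4 projCoord = m_LightViewProjectionMatrix * vec4(worldPos, 1.0); // project into light space
projCoord.xyz /= projCoord.w;                // light-space NDC
projCoord.xyz = projCoord.xyz * 0.5 + 0.5;   // to [0,1] shadow map coordinates
float storedDepth = texture2D(m_ShadowMap, projCoord.xy).r;
float shade = (projCoord.z > storedDepth) ? 0.0 : 1.0; // 0 = occluded (black), 1 = lit (white)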

There are two ways it’s done in jME:
As a geometry pass: all the shadow-receiving objects are re-rendered on screen with a forced technique (postShadow) and modulated over the original render (blend mode Modulate). The world position and the projection into light space are computed in the vertex shader and sent as varyings to the fragment shader. There, the shadow test is done to shade the pixel.

As a filter: the geometries are not rendered again; instead, the world position of each pixel is reconstructed from the depth buffer. Everything happens in the fragment shader. It’s usually a lot faster, as this technique only requires rendering one additional geometry (the fullscreen quad), whereas a geometry pass requires many more geometry renders depending on the complexity of the scene.

Now, with your technique, the problem is that you need the instance ID of a geometry to know where to render it, and you don’t have this information in a Filter context.

What I’d do for starters is go with the classic postShadow geometry pass. If you managed to make instancing work for the normal-pass technique for SSAO, doing it for the postShadow technique will be pretty similar, without needing to change the shadow processor code.

If you really want to use the filter approach, you’ll need to pass the ID information as a texture buffer to the filter shader. However, this would require another geometry pass to render this buffer, which would kill the benefit of rendering shadows in a filter. A sketch of what the filter side would look like follows.
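
In the filter’s fragment shader, that would look something like this (a sketch; m_InstanceIdMap is a hypothetical texture you would have to render and bind yourself):

// sample the per-pixel instance/eye ID rendered in the extra pass
float id = texture2D(m_InstanceIdMap, texCoord).r; // 0.0 = first instance, 1.0 = second
// then pick the matching eye's matrices for the shadow test based on id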

An alternative to this (and it would be beneficial for SSAO too) would be to render the needed buffers (normals, or instance IDs) during the geometry pass (when lighting is rendered) using MRT (multiple render targets, rendering several images in the same pass). I tested this for SSAO and it dramatically increased performance, because it saves a geometry pass. However, doing this requires a lot of changes in the FilterPostProcessor.
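
For example, the lighting fragment shader could write the extra buffers alongside the color in a single pass, something like this (a sketch; gl_FragData is the standard GLSL 1.2-era MRT output, and the extra attachments would have to be wired up on the Java side with FrameBuffer.addColorTexture(...) and setMultiTarget(true)):

// one pass, several outputs; litColor / normal / instanceId are assumed to be
// values the lighting shader already has (instanceId passed down from the
// vertex shader as a varying, since gl_InstanceID isn't visible here)
gl_FragData[0] = litColor;                        // the regular scene render
gl_FragData[1] = vec4(normal * 0.5 + 0.5, 1.0);   // packed normals for SSAO
gl_FragData[2] = vec4(instanceId, 0.0, 0.0, 1.0); // instance/eye ID for a shadow filter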

Another alternative, which I hope we’ll have in the near future in the engine, would be to render the shadows along with the lighting in the lighting pass. But that implies a lot of changes in the lighting shader, and maybe to the core engine.

So to sum it up… your best bet for now is the post shadow pass.

I’m really desperate for performance, so getting the filter to work is far more desirable. I’d like to exhaust all options there first, before resorting to the geometry pass. Trudging ahead, I’m making some interesting progress that I hope will lead to a solution:

I’ve created a new extension of DirectionalLightShadowFilter, which also uses a custom set of j3md/vert/frag shader files for shadow processing. The only real difference is that I’m including the inverse view-projection matrix of the right camera. My hope is that when the fragment shader runs over both eyes and tries to calculate the “world position” from the depth data and the UV of the filter quad, it will get the right position depending on which eye we are on and display the correct shadow result.

In the fragment shader, I always know I’m on the second “instance” when UV.x > 0.5 (i.e. the right side of the filter quad is being rendered). When I’m on the right side of the filter quad, I use the right camera’s inverse view-projection matrix.

It sounds like it should work in theory, although I’m sure I’m not implementing it right (at least).

This is how I’m trying to get the world position from within the PostShadowFilter.frag:

vec3 getPosition(in float depth, in vec2 uv){
    vec4 pos = vec4(uv, depth, 1.0) * 2.0 - 1.0;
    #ifdef INSTANCING
        pos.x *= 0.5; // trying to squish position to half-screen widths
        pos.x += (uv.x > 0.5 ? pos.w * 0.5 : pos.w * -0.5); // trying to move them to either side
        // now below, we pick which projection matrix to use
        pos = (uv.x > 0.5 ? m_ViewProjectionMatrixInverseRight : m_ViewProjectionMatrixInverse) * pos;
    #else
        pos = m_ViewProjectionMatrixInverse * pos;
    #endif
    return pos.xyz / pos.w;
}

There is also another matrix row that needs to be supplied:

    #ifdef INSTANCING
        // pick the third view-projection row for whichever eye this pixel belongs to;
        // the result is the projected depth used to select the shadow split
        vec4 useMat = (texCoord.x > 0.5 ? m_ViewProjectionMatrixRow2Right : m_ViewProjectionMatrixRow2);
        float shadowPosition = useMat.x * worldPos.x + useMat.y * worldPos.y + useMat.z * worldPos.z + useMat.w;
    #else
        float shadowPosition = m_ViewProjectionMatrixRow2.x * worldPos.x + m_ViewProjectionMatrixRow2.y * worldPos.y + m_ViewProjectionMatrixRow2.z * worldPos.z + m_ViewProjectionMatrixRow2.w;
    #endif

I update the values in my new DirectionalLightShadowFilter preFrame like so:

// reusable temp vectors, declared as fields to avoid per-frame allocation
private final Vector4f temp4f = new Vector4f();
private final Vector4f temp4f2 = new Vector4f();

@Override
protected void preFrame(float tpf) {
    shadowRenderer.preFrame(tpf);
    if( VRApplication.isInstanceVRRendering() ) {
        // right eye: inverse view-projection matrix, plus its third row for the shadow test
        Matrix4f right = VRApplication.getVRViewManager().getCamRight().getViewProjectionMatrix();
        material.setMatrix4("ViewProjectionMatrixInverseRight", right.invert());
        material.setVector4("ViewProjectionMatrixRow2Right", temp4f2.set(right.m20, right.m21, right.m22, right.m23));
    }
    // left eye: the viewport camera's matrices, same as the stock filter
    Matrix4f left = viewPort.getCamera().getViewProjectionMatrix();
    material.setMatrix4("ViewProjectionMatrixInverse", left.invert());
    material.setVector4("ViewProjectionMatrixRow2", temp4f.set(left.m20, left.m21, left.m22, left.m23));
}

There is a definite separation between the shadows for each eye, but they are not where I want them. This is managing shadows with only one filter… looking for assistance! Thank you!

So close!

This is what I’m now using for getPosition in the PostShadowFilter.frag:

vec3 getPosition(in float depth, in vec2 uv){
    #ifdef INSTANCING
        vec4 pos;
        if( uv.x > 0.5 ) {
            // right eye
            pos = vec4(uv.x * 4.0 - 2.5, uv.y * 2.0 - 1.0, depth, 1.0);
            pos = m_ViewProjectionMatrixInverseRight * pos;
        } else {
            // left eye
            pos = vec4(uv.x * 4.0 - 1.0, uv.y * 2.0 - 1.0, depth, 1.0);
            pos = m_ViewProjectionMatrixInverse * pos;
        }
    #else
        vec4 pos = vec4(uv, depth, 1.0) * 2.0 - 1.0;
        pos = m_ViewProjectionMatrixInverse * pos;
    #endif
    return pos.xyz / pos.w;
}

I’m not sure everything I need to change is contained in this function, but I feel like a few more tweaks, and the shadows will line up just where I want them… any help is greatly appreciated!

Hot damn. I did it.

This is sweet! Shadows in VR, with no additional passes over non-VR use. Hell yeah!

I had to move some multipliers out, so that each eye’s half of the screen maps back to the full [-1, 1] range:

vec3 getPosition(in float depth, in vec2 uv){
    #ifdef INSTANCING
        vec4 pos;
        if( uv.x > 0.5 ) {
            // right eye: x becomes 4.0 * uv.x - 3.0, mapping uv.x in [0.5, 1] to [-1, 1]
            pos = vec4(uv.x * 2.0 - 1.0, uv.y, depth, 1.0) * 2.0 - 1.0;
            pos = m_ViewProjectionMatrixInverseRight * pos;
        } else {
            // left eye: x becomes 4.0 * uv.x - 1.0, mapping uv.x in [0, 0.5] to [-1, 1]
            pos = vec4(uv.x * 2.0, uv.y, depth, 1.0) * 2.0 - 1.0;
            pos = m_ViewProjectionMatrixInverse * pos;
        }
    #else
        vec4 pos = vec4(uv, depth, 1.0) * 2.0 - 1.0;
        pos = m_ViewProjectionMatrixInverse * pos;
    #endif
    return pos.xyz / pos.w;
}

I was just about to post about that :slight_smile: glad you fixed it so fast.

What else is on the horizon for you in terms of VR development, besides finishing the game itself?

I’m really just trying to finish my game, which requires a ton of VR improvements :stuck_out_tongue: Not sure where improvements will be needed next, but we’ll soon find out!

Cool shot showing how this is all coming together (filters are all instancing-aware):

Nice work
And nice idea. :wink:

Looks fantastic! Looks like I’ll have to get a VR headset just for this!

I will soon start to use my VR headset for jME, aiming at a game for Steam with and without VR.
My question: Is any of this open source? It’s no problem if not.
I will make a thread for this question soon and find answers somehow.
Anyways, your improvements look fabulous and I think you are doing great PR for jME with this! :chimpanzee_smile:

Yes, this is all open source! Github repository is here:

Just got done testing with my Vive. Shadows work excellently. SSAO is a little funky, though: the SSAO effect isn’t exactly the same in each eye, so it doesn’t stereo-converge very well. Not sure how best to go about fixing it… perhaps darken the same areas in both eyes? If a spot gets darkened on the left, darken the same spot on the right, and vice versa. That should guarantee symmetry, and while it may not be exactly where the SSAO should be, it’d probably look better.
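
Something like this in the SSAO fragment shader might give that symmetry (just a sketch of the idea, untested; getAO() is a hypothetical stand-in for whatever computes or samples the AO value):

// look up the AO for this pixel and for the same screen spot in the other
// eye's half, then keep the darker value so both eyes agree
vec2 otherEyeUV = vec2(texCoord.x > 0.5 ? texCoord.x - 0.5 : texCoord.x + 0.5, texCoord.y);
float ao = min(getAO(texCoord), getAO(otherEyeUV)); // getAO() is hypothetical

As noted above, the same screen spot isn’t quite the same world spot in each eye, so the darkening would land slightly off, but at least both eyes would match.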

Do you have an option to deactivate instancing?
It would be nice to compare the usual code path and the instancing code path.

Instancing is disabled by default, because you need slightly modified shaders to use it. I haven’t converted all of the core jMonkeyEngine shaders to support VR instancing, because it currently conflicts with traditional uses of instancing. There is a PRECONFIG_PARAMETER.USE_VR_INSTANCING setting; you can look at where it is used in the code.
