VR rendering in one pass with instancing

Hey guys – something that has been talked about for UE4 & Unity is leveraging instancing to draw geometry for both eyes in one pass. This could provide a significant performance improvement over running a separate rendering pass for each eye. An overview is described in this presentation (starting on slide 12):

I know there exists an InstancedNode, which may be useful here. The presentation describes squishing the first set of objects onto the left side of the screen and the instanced copies onto the right. However, could you render the two sets to separate textures using “multiple render targets” instead, since all we really need is a texture for each eye to submit to SteamVR’s compositor…

InstancedNode class: jmonkeyengine/InstancedNode.java at master · jMonkeyEngine/jmonkeyengine · GitHub

Wondering what your thoughts are, @rickard and other jME3 pros…

If you want to take advantage of instancing, then both eye views would have to be rendered to the same render target.

Instancing saves draw calls because it can render many instanced geometries in a single draw call, and that one draw call can’t really be split across different render targets.
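
For context, a minimal sketch of what an instanced vertex shader looks like, assuming a per-instance world-matrix attribute (the attribute name and layout here are illustrative; jME’s own Instancing.glsllib packs its instance data its own way):

```glsl
#version 150

// Shared across all instances: one copy of the mesh data.
in vec3 inPosition;

// Per-instance attribute: backed by a vertex buffer whose divisor is 1,
// so it advances once per instance instead of once per vertex. A mat4
// attribute occupies four attribute slots. (Illustrative name/layout.)
in mat4 inWorldMatrix;

uniform mat4 g_ViewProjectionMatrix; // standard jME world parameter

void main() {
    // One draw call, many instances: same mesh, different transform each.
    gl_Position = g_ViewProjectionMatrix * inWorldMatrix * vec4(inPosition, 1.0);
}
```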

OK, understandable. Thank you.

My next question is… how could I tell the two geometries apart in the vertex shader? If I need the first instance to render on the left side, and the second on the right… I need to supply an instance ID to the vertex shader. Not sure how to go about that part…

You’d give them each their own transform.

Some of the vertex attributes are shared across instances and some can be per instance. Instancing uses custom vertex attributes to store the transform for each instance. JME has this built in, but note that you can also directly control which vertex buffers span all instances and which are repeated per instance.

You have access to an instance ID also, but you shouldn’t really need it. Though I guess it would be a way to avoid the extra transforms entirely: if instanceID == 0, use the left world transform; else use the right. Lighting.j3md was even rewritten to abstract out the normal transforms so that they could be implemented differently for instanced and non-instanced geometry without affecting the rest of the shader. You could tap into this, but I suspect you already have your own custom shaders.
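
A minimal sketch of that idea, assuming the application uploads a view-projection matrix per eye (the m_LeftViewProjection / m_RightViewProjection uniform names are made up for illustration):

```glsl
#version 150

in vec3 inPosition;

uniform mat4 g_WorldMatrix;        // standard jME world parameter
// Hypothetical per-eye matrices, uploaded by the application:
uniform mat4 m_LeftViewProjection;
uniform mat4 m_RightViewProjection;

void main() {
    // The driver sets gl_InstanceID: 0 for the first instance in the
    // draw call, 1 for the second, and so on.
    mat4 vp = (gl_InstanceID == 0) ? m_LeftViewProjection
                                   : m_RightViewProjection;
    gl_Position = vp * g_WorldMatrix * vec4(inPosition, 1.0);
}
```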

The difficulty is performing the clipping properly, IMO. And to know how to clip, you need to know the side…
That’s interesting, though…
Also, before going straight to instancing, there might be some other things to do… like what the presentation mentions about not rendering shadows twice and so on…

I presume I’d have to do 3 things to make this a practical reality:

  1. Transform the second instance over by about 65 mm (the eye separation).
  2. In the vertex shader, place geometry on the left or right half of the screen based on instance ID (see the sketch after this list).
  3. Automate creation of the second instance for objects added to the root node.
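
A rough sketch of how steps 1 and 2 might combine in a vertex shader. The fixed 65 mm offset and the sign conventions are assumptions, and a real implementation would use the per-eye matrices reported by OpenVR instead of a simple shift:

```glsl
#version 150

in vec3 inPosition;

uniform mat4 g_WorldMatrix;      // standard jME world parameters
uniform mat4 g_ViewMatrix;
uniform mat4 g_ProjectionMatrix;

void main() {
    // Instance 0 = left eye, instance 1 = right eye.
    float side = float(gl_InstanceID) * 2.0 - 1.0; // -1 left, +1 right

    // Step 1: shift the view by half the eye separation (~65 mm total,
    // so +/- 32.5 mm from center). Sign depends on the camera setup.
    vec4 viewPos = g_ViewMatrix * g_WorldMatrix * vec4(inPosition, 1.0);
    viewPos.x -= side * 0.0325;

    // Step 2: project, then squish each instance into its half of the
    // screen in clip space: ndc' = 0.5 * ndc +/- 0.5.
    vec4 clip = g_ProjectionMatrix * viewPos;
    clip.x = 0.5 * clip.x + side * 0.5 * clip.w;

    gl_Position = clip;
}
```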

The slides talk about how to go about clipping with “SV_ClipDistance and SV_CullDistance”; I’ve never used them, though.

I’d be able to determine which side is being rendered by the instance ID we’ve been talking about above.

Quite the undertaking, though; UE4 & Unity even acknowledge the work involved. Very interesting nonetheless. I currently skip shadows altogether, but implementing this might make shadows affordable.

That’s DirectX.
Not sure there is an OpenGL equivalent, at least not one that isn’t deprecated.
Last time I had to fiddle with a clip plane, I had to crop the projection matrix itself. Look at Camera.setClipPlane().

Hrm… OK.

However, I’d only have one camera, one pass & one projection matrix… but the scene would need to be clipped two different ways depending on instance ID.

I wonder if just waiting for & using Vulkan to cut draw call overhead will render this endeavor less fruitful.

Sounds like Vulkan will be harder to implement than VR instancing:

They are not mutually exclusive… ideally we’d have Vulkan & VR instancing. I’d still like to take on VR instancing when we come up with a strategy. Sounds like clipping will be a key complexity. I presume it would be possible in a fragment shader to discard pixels on the wrong side? However, that’d require messing with fragment shaders in addition to vertex shaders…

Just for giggles, I tried to accomplish clipping via a fragment shader. Basically, I pass gl_Position.x to the fragment shader and discard any values >= 0.5. It kinda works. I say “kinda” because, for some reason, it isn’t clipped exactly down the middle as you get close to objects. Things far away get clipped perfectly, but as things get close, objects push into the other side just a tad, in a perspective-dependent way. Not sure why…
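
For reference, a sketch of that experiment plus one likely explanation for the artifact: gl_Position is in clip space, so its x only corresponds to a fixed screen position after the perspective divide by w; comparing raw clip-space x against a constant makes the boundary drift with depth. Comparing gl_FragCoord.x (already in window coordinates) against half the viewport width sidesteps the problem entirely (the m_Resolution uniform name is an assumption):

```glsl
#version 150

flat in int eyeId;         // written by the vertex shader: 0 = left, 1 = right
uniform vec2 m_Resolution; // viewport size in pixels (assumed name)

out vec4 fragColor;

void main() {
    // gl_FragCoord is already in window coordinates, so the midline is
    // a constant, independent of depth.
    float mid = m_Resolution.x * 0.5;
    if (eyeId == 0 && gl_FragCoord.x >= mid) discard; // left eye keeps left half
    if (eyeId != 0 && gl_FragCoord.x <  mid) discard; // right eye keeps right half

    // If comparing clip-space x instead, divide by w first:
    //   float ndcX = clipPos.x / clipPos.w; // depth-independent, in [-1, 1]

    fragColor = vec4(1.0); // ...normal shading would go here
}
```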

Picking up from the last performance-improvement idea… if the second instance/eye needs a different projection matrix, how would that work with one camera? I’d have to provide two projection matrices to the vertex shader and pick one depending on the instance ID? Actually, that looks just like what is happening on slide 16 of the presentation in the original post…

Why not have a second camera?
A camera is essentially what computes the projection matrix for you.
You could send projection matrices from both cams to the same shader.
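
A sketch of that suggestion, assuming the per-eye matrices are uploaded as two-element uniform arrays (the m_Eye* names are hypothetical) and indexed by the instance ID:

```glsl
#version 150

in vec3 inPosition;

uniform mat4 g_WorldMatrix;           // standard jME world parameter
// Assumed material parameters, filled in from the two cameras
// (or straight from OpenVR's per-eye matrices):
uniform mat4 m_EyeViewMatrix[2];
uniform mat4 m_EyeProjectionMatrix[2];

void main() {
    int eye = gl_InstanceID; // 0 = left, 1 = right
    gl_Position = m_EyeProjectionMatrix[eye]
                * m_EyeViewMatrix[eye]
                * g_WorldMatrix
                * vec4(inPosition, 1.0);
}
```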

OpenVR will actually tell you what the projection matrix is for each eye, which is what I normally set on each camera. However, I may be able to just store the results from OpenVR and send them to the vertex shaders with one camera… I could still have a second camera, but I’m not sure what use it’d provide beyond holding a projection matrix.

On second thought… just keeping the second camera, even if it doesn’t have an associated active ViewPort or render pass, will make things a bit easier… simply for handling the math & playing nice with how jMonkeyVR currently works. Anyway, it’s a good suggestion, nehon. Playing with it now, but I can tell this won’t be an easy task.

@pspeed, you mentioned I have access to the instance ID… where is it? I’d like to access it from within the vertex shader, as described in the above slides.

My understanding is that you just define it and it’s set for you. JME doesn’t/can’t set it, as it’s all done inside a single draw call.

So declare it as the docs say and it should be set for you.
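
For the record, a minimal example of what “declaring it” amounts to: gl_InstanceID is built into GLSL 1.40 and later, while older versions expose it through an extension:

```glsl
#version 110
// gl_InstanceID is built into GLSL 1.40 and later; on older versions it
// comes from this extension under the name gl_InstanceIDARB instead:
#extension GL_ARB_draw_instanced : enable

void main() {
    // 0, 1, 2, ... within the draw call; the driver fills it in, and
    // nothing needs to be uploaded from the Java side.
    int id = gl_InstanceIDARB;
    // ...use id to pick the eye, then compute gl_Position as usual.
    gl_Position = gl_Vertex; // placeholder pass-through (GLSL 1.10 built-in)
}
```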

My work today on this:

In progress…

Making some progress here! Got stuff rendering split to each eye with instancing, which is happening automatically with a new VRInstanceNode. Currently requires some code in both the vertex & fragment shader, but I’m making an easy-to-implement shader include for each. Screenshot:

Lots of stuff isn’t working yet, but progress should be steady. Clipping between the eyes is shaky. The right-eye instanced geometry needs to use the right eye’s projection matrix (right now it just uses the left one). The skybox is skipped in the screenshot above.

git commit of current progress:

… and just like that, perfect clipping (using GL_CLIP_DISTANCE0 in the vertex shader, no fragment shader modification needed) & the second render getting the view matrix of the second camera (enabling depth):
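
For anyone following along, a minimal sketch of the vertex-shader side of that approach (GL_CLIP_DISTANCE0 itself is the enable flag, assumed here to be set on the engine side):

```glsl
#version 150

in vec3 inPosition;
uniform mat4 g_WorldViewProjectionMatrix; // standard jME world parameter

void main() {
    gl_Position = g_WorldViewProjectionMatrix * vec4(inPosition, 1.0);

    // Signed distance to the screen's vertical midline, in clip space.
    // Clip x and NDC x share a sign (w > 0 in front of the camera), so
    // instance 0 keeps x <= 0 (left half) and instance 1 keeps x >= 0.
    // The GPU culls fragments where the interpolated distance goes
    // negative; no fragment shader change is needed.
    float side = (gl_InstanceID == 0) ? -1.0 : 1.0;
    gl_ClipDistance[0] = side * gl_Position.x;
}
```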

Exciting! Now to make sure it works with SteamVR…

EDIT: Ignore the odd flat block in the left view… still working out some bugs in the auto-instancing code… also, eyes have been flipped so I can test cross-eyed for depth :wink:
