OpenVR Available, Convert?

Not sure I understand the concern with TexCoord2/3 … First of all, it is used in the engine; in fact, both unshaded and lighting materials use TexCoord2. It's no different than any other attribute.

Consider using a SceneProcessor instead of a Filter. The filtering system is not designed for low-level processing like this.
With a SceneProcessor you can choose to render any mesh you want and do other things which might be required for stereo 3D rendering.

The submit overhead for filters is probably very low … the main performance drain is # of pixels processed by the filter, so I am guessing that both approaches would have similar performance characteristics.
However do note that some filters will not expect the discontinuity in the middle of the screen where the two eyes intersect. For example, bloom, SSAO and possibly other filters do a “blur” like operation, so the left eye pixels will be blurred onto the right eye and vice versa, causing visual artifacts on the sides of each eye.
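One hedged sketch of how a filter could avoid bleeding across the seam (the side-by-side eye layout and the helper name are assumptions, not the library's actual API): clamp each blur tap's horizontal coordinate to its eye's half of the render target.

```java
public class EyeClamp {
    // Clamp a horizontal sample coordinate (0..1 across the full
    // side-by-side render target) so a blur kernel centered in one
    // eye's half never reads pixels from the other eye's half.
    static float clampToEye(float u, boolean leftEye) {
        float min = leftEye ? 0.0f : 0.5f;
        float max = leftEye ? 0.5f : 1.0f;
        return Math.max(min, Math.min(max, u));
    }

    public static void main(String[] args) {
        // A left-eye blur tap that strays past the seam is pulled back:
        System.out.println(clampToEye(0.52f, true)); // 0.5
        // A tap that stays inside its own half is untouched:
        System.out.println(clampToEye(0.48f, true)); // 0.48
    }
}
```

The same clamp would be done in the fragment shader in practice; this is just the logic in isolation.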

Yeah as Kirill said you just have to use them…
Set the buffer on the mesh, then use the attribute inTexCoord2/3/4/5 … in the shader
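As a rough sketch of that wiring: one plausible use is giving a fullscreen quad a second set of texture coordinates so each eye samples its half of a side-by-side texture (the layout here is an assumption; the actual jME3 calls are shown as comments, since they need the engine on the classpath).

```java
import java.util.Arrays;

public class StereoTexCoords {
    // UVs for a 4-vertex quad sampling the left half of a
    // side-by-side texture (u in 0..0.5).
    static float[] leftEyeUVs() {
        return new float[] { 0f, 0f,  0.5f, 0f,  0.5f, 1f,  0f, 1f };
    }

    // UVs sampling the right half (u in 0.5..1).
    static float[] rightEyeUVs() {
        return new float[] { 0.5f, 0f,  1f, 0f,  1f, 1f,  0.5f, 1f };
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(leftEyeUVs()));
        System.out.println(Arrays.toString(rightEyeUVs()));
        // In jME3 these would be attached to the mesh roughly like:
        //   mesh.setBuffer(VertexBuffer.Type.TexCoord,  2, leftEyeUVs());
        //   mesh.setBuffer(VertexBuffer.Type.TexCoord2, 2, rightEyeUVs());
        // and read in the vertex shader as:
        //   attribute vec2 inTexCoord2;
    }
}
```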

Thank you for the input, guys. Sounds like TexCoord2 & TexCoord3 are implemented – just wanted to confirm that. I updated the shader to pull from those attributes:

Perhaps I’ll wait for @rickard to dig into this to see if a SceneProcessor is the route to take. It may be.

We have our own SSAO shader in the VR library, so it could be tweaked if artifacts at the center are causing problems. However, I’ve noticed SSAO often kills performance too much in VR, and it is often just not used. We’ll cross that bridge when we get there, though… still need to get anything to display first. :smile:

Yea in general I wouldn’t suggest using filters at all for VR. One of the valve presentations mentioned that VR requires 5 times more pixels to process per second than a 1080p30 monitor. Filters are very heavy in terms of pixel processing, so only the best of the best graphics cards would be able to handle it.
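Ballpark arithmetic supports that claim. The HMD figures below (1080x1200 per eye, 90 Hz, roughly 1.4x oversampling per axis for distortion headroom) are assumptions for illustration, not numbers from the presentation:

```java
public class PixelBudget {
    public static void main(String[] args) {
        // Baseline: a 1080p monitor at 30 Hz.
        long monitor = 1920L * 1080 * 30;
        // Assumed HMD: 1080x1200 per eye, two eyes, 90 Hz,
        // with ~1.4x oversampling on each axis before distortion.
        double hmd = 1080 * 1.4 * 1200 * 1.4 * 2 * 90;
        System.out.printf("monitor: %d px/s%n", monitor);
        System.out.printf("HMD:     %.0f px/s (%.1fx)%n", hmd, hmd / monitor);
    }
}
```

With these assumed numbers the ratio comes out around 7x, so the "5 times more pixels" figure is plausible.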

I remember checking out SceneProcessors for something, maybe it was the Android VR project. At that time, it was not suitable. But I also think a more direct approach than Filters would be desirable. I think the StereoCamera might have to be ditched to achieve what we want now.

…which is strange because the filter post processor is just a SceneProcessor in the end. Ergo everything you could do in a filter you could do in a scene processor… without filter limitations.

Yeah. I can’t remember what the reason was…
It might have been that I wanted to add one SceneProcessor to the scene, but with the separate cameras I needed one for each eye.
In any case, I think we should look at this lib with fresh eyes. It still relies a lot on assumptions made over 2 years ago. There may be several areas where it can be improved.

Edit: I think I remembered wrong. I see now that I actually did use the SceneProcessor, but for a related project. I may not have considered it for the OculusFilter directly.

jherico writes here:

Mesh-based distortion instead pre-computes the distorted locations of a set of points on a rectangular mesh. You can then render this mesh with the scene texture painted on it with conventional texture coordinates and a simple shader.
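A minimal sketch of that precomputation, using a simple radial (barrel) model rather than any SDK's real distortion function; the coefficient k is made up, and real HMD SDKs supply per-eye values:

```java
public class DistortionMesh {
    // Precompute the distorted texture coordinate for one grid vertex:
    //   uv' = c + (uv - c) * (1 + k * r^2)
    // where c is the lens center (0.5, 0.5 here) and r is the distance
    // from that center. Run this once per vertex of the rectangular mesh.
    static float[] distort(float u, float v, float k) {
        float du = u - 0.5f, dv = v - 0.5f;
        float r2 = du * du + dv * dv;
        float scale = 1.0f + k * r2;
        return new float[] { 0.5f + du * scale, 0.5f + dv * scale };
    }

    public static void main(String[] args) {
        float[] center = distort(0.5f, 0.5f, 0.22f); // unchanged at the center
        float[] corner = distort(1.0f, 1.0f, 0.22f); // pushed outward
        System.out.println(center[0] + "," + center[1]);
        System.out.println(corner[0] + "," + corner[1]);
    }
}
```

The resulting UVs would be baked into the mesh once; the per-frame shader then only does a plain texture fetch.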

Reporting some progress on the OpenVR integration.
I’ve managed to get visuals but without any distortion, as of yet. It is done through a single AppState with two offscreen FrameBuffers. The scene is rendered in these and then displayed on two “distortion meshes” with the shader with separated TexCoords described earlier in the thread.
These are then viewed through the standard camera. So there are no Filters or SceneProcessors involved. Neither is an Application needed, yet. I’m saying “yet”, since I think where we ran into problems with the Oculus was when hardware got involved.
Like I said, there is no distortion. JOpenVRLibrary.VR_IVRSystem_ComputeDistortion seems to be returning a linear distribution of TexCoords. I don’t know if this is due to the “null driver”. I haven’t been able to find much on the forums about this.

Edit: Oh, and no visible 3D effect either. The same image seems to be displayed in both FBs. Haven’t looked into what causes this yet.

Edit 2: Yeah, JOpenVRLibrary.VR_IVRSystem_GetEyeToHeadTransform seems to be returning 0-transforms for both eyes.


Linear distribution of TexCoords may be expected for the null driver… have you tried plugging in your DK1 or DK2 & see if something else happens?

I believe an Application (e.g. VRApplication) will still be needed for handling the guiNode(), at the very least.

What is going on with the red boxes?

Shouldn’t the 3D effect be generated by having 2 cameras, separated apart by the inter-pupillary distance value? Is each FrameBuffer not the output of each camera…?
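For reference, the eye separation itself is simple vector math. This sketch (a hypothetical helper, not the library's API) offsets each eye camera from the head position along the head's right vector by half the IPD:

```java
public class StereoCameras {
    // Offset one eye camera from the head position along the head's
    // right vector: left eye moves -ipd/2, right eye +ipd/2.
    static float[] eyePosition(float[] head, float[] right,
                               float ipd, boolean leftEye) {
        float half = (leftEye ? -0.5f : 0.5f) * ipd;
        return new float[] {
            head[0] + right[0] * half,
            head[1] + right[1] * half,
            head[2] + right[2] * half
        };
    }

    public static void main(String[] args) {
        float[] head  = { 0f, 1.7f, 0f };   // head at 1.7 m eye height
        float[] right = { 1f, 0f, 0f };     // looking down -Z
        float ipd = 0.064f;                 // the 64 mm read from OpenVR
        float[] l = eyePosition(head, right, ipd, true);
        float[] r = eyePosition(head, right, ipd, false);
        System.out.println(l[0] + " " + r[0]);
    }
}
```

With OpenVR, GetEyeToHeadTransform should supply this offset directly, so the library would normally not construct the vector itself.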

I’m thinking that, as well. Have asked on the steamvr forum. Seems to be spammed with general usage questions, though. Little dev-talk.

We’ll see. If extending SimpleApplication like that is not recommended, I’d like to avoid it as far as possible. Besides, the guiNode might work differently with the new architecture.

Those areas should actually be out of bounds, i.e. what isn’t rendered by the shader. I expect that with correct distortion they would have more of an angle. I switched the default black to red to make it clear what was happening there.

If it works anything like the cardboard (I am assuming it does), the getEyePose matrix should return the correct offset. So yes, the cameras should be separated, but I don’t think we have to create the vector ourselves.

The Valve developers do sometimes post there, but I’m finding it more rare because it is getting spammed a bit. Unfortunate… However, I did reply to your post here:

One thing you can do is create the chaperone_info.json file yourself. Here is mine:

The GUI elements will need to be objects in 3D space, not projected as an overlay in the GUI rendering bucket. Replacing the guiNode with something that handles that automatically is a pretty elegant solution. Having a VRApplication can also automate initialization & destruction of the hardware. We’ve been complimented on how easy it is to integrate jME3 applications into VR applications, and extending SimpleApplication to a VRApplication is an important piece of that.

I was able to get an IPD of 64mm from OpenVR, so I’d hope to see correct offsets… but perhaps not using the null driver. Looks like we may need to plug in our Rifts to make more progress here…

I’d like to add: let’s try to fix one thing at a time. I don’t believe we should be refactoring the whole library’s structure at the same time as getting OpenVR distortion working. If we try to do two things at once, we might introduce unexpected bugs that would distract us from making OpenVR progress. After distortion is working, we can evaluate changes to VRApplication and other structures on their own merits. We know the current setup worked nicely with the Rift, so let’s just swap out what is necessary to get OpenVR working first.

No dice with the chaperone file. I’ll give them some more time to answer.

What makes me think it’s not the null driver is that the display that is shown when launching Steam VR seems to have distortion (the out of bounds areas are not square).

My approach to this is simply KISS. I want the simplest possible solution without any unnecessary overhead. Once a test application is running, one can ponder general changes. But without the simplest possible solution it’s much more difficult to assess what the least common denominator is.
The OpenVR API (so far) is much more direct than the Oculus Rift drivers so I see no reason to add more complexity to it just for the sake of sticking to old conventions. We also have the matter of controllers and base stations to consider, which is not at all covered in the existing project.

But it’s still too early to say what will be required in the end. I want to see some hardware working. I tried with my DK1, but it wasn’t detected.

Edit: There is no refactoring done at all up to this point. Just an AppState handling everything.

I’ve committed the WIP. Tell me if you have any luck hooking up your OVR to this.


I moved 14 posts to a new topic: Virtual Reality library architecture discussion

Thank you for the commit. I’ll have to boot into Windows to play with it.

I’m a big fan of KISS too, but don’t forget to apply KISS in the eyes of our end users too. Making a piece of the library “simple” might be offloading complexity to the VR developer, which isn’t good. If we can get rid of filters and streamline the library due to OpenVR being more direct – great, but I don’t want it to be at the cost of ease of use for developers. VRapplication & guiNode management was a simplification of VR development, irrespective of what SDK we use, in my opinion.

I did put in some VR input code into VRapplication, as seen here:

… using the VRInput object, meant to abstract out whatever input is available:

What SDK version do you have? It sounded like v0.4.4 SDK was the most compatible with SteamVR, while the latest is v0.6.0.1. I’ll have to plug in my DK2 to test.

The problem with “we can always go back and fix it later”, is that it rarely happens. As you expand the library, the temporary functions get more entangled and difficult to weed out.
Therefore I think that debating general issues about the library is important and beneficial for it. It helps us make better decisions while developing it further. Especially with this spare-time, online development, it’s the best forum we have.

I think the issue we’re facing here is that everything is kept in the same thread. This discussion should be split into several threads, so that information that may be crucial for the issue currently at hand is not obscured by more general discussions. That way the devs can also choose which discussions to involve themselves in, at their leisure.

I’ll see if I can restructure the last posts of this thread into a new one.

Edit: Discussion now continued here: Virtual Reality library architecture discussion (continued from OpenVR Available. Convert?)

Thank you for moving that section elsewhere.

I’m looking over the commit now.

This is precisely why I don’t want to mix restructure changes with OpenVR conversion:

SEVERE: Uncaught exception thrown in Thread[jME3 Main,5,main]
	at jmevr.util.VRGuiNode.attachChild(
	at jmevr.TestOpenVR.initTestScene(
	at jmevr.TestOpenVR.simpleInitApp(
	at com.jme3.system.lwjgl.LwjglAbstractDisplay.initInThread(

VRGuiNode is trying to grab the VR hardware from VRapplication, but you unexpectedly side-stepped VRapplication to extend SimpleApplication in the new TestOpenVR. Now I have to muddle through stuff like this to get back to distortion correction.