OSVR: Time to switch, methinks

With the announcement of Oculus dropping Extended mode, getting Direct Mode working is essential. OSVRguy tweeted this immediately in response to my request:

“Direct Driver mode also coming to OSVR very soon. Works on all OSVR-supported HMDs”

This type of response & direct support is what we need. Valve & their SteamVR forum have been quite tricky to get support from, which sometimes makes them hard to work with. OSVRguy is putting me in contact with an engineer to get better jMonkeyEngine support.

I’m also trying to see if we can get OSVR to help promote jMonkeyEngine as an integrated solution :smile:

Just got word from the Razer & OSVR CEO: he will help promote jMonkeyEngine & OSVR. This may be big for jMonkeyEngine. Let’s keep the momentum going!


Valve has responded regarding “Direct Driver” mode, and even hints at supporting Oculus’s Timewarp:

I’m putting a pause on switching to OSVR. It really is a tough decision. It looks like OpenVR will be getting Direct Driver mode, and actually will use the Oculus compositor for that device. That will allow OpenVR developers access to some specific Oculus-SDK related benefits (like Timewarp, according to Valve).

I also got some more details from OSVR, and apparently they do not build binary packages for Linux & Mac. We’d have to build them ourselves, or completely drop support for Linux & Mac. OpenVR, on the other hand, builds and provides them automatically.

OSVR supports a ton of devices, is completely open-source, and the CEO & engineers have reached out to us for support. I’m still not set on what I’ll use in the long term… however, I think it may be a good idea to let these SDKs become a bit more stable before jumping yet again. jMonkeyEngine works very nicely with OpenVR now, and it may work perfectly once the VR Compositor gets an update (expected next week). Hard to throw that progress away.

You also can’t argue against native integration with SteamVR, since that platform is so widespread. OSVR requires another runtime, which would likely be the third one asked of end users (Oculus Rift runtime, SteamVR runtime & OSVR runtime).

I’m still interested in OSVR support, so I might pick this up. When I have time is another matter, though. I’ll also be needing an OSVR HMD to test things, eventually.

I was hoping you would be interested in picking this up. OSVR would make more sense being integrated with the main jME3 branch, being open source and all. The publicity working with the OSVR guys would be excellent, too. I wish I had time to implement both OSVR and OpenVR, but I don’t. When I am not on my cell phone, I’ll post what I got from the OSVR engineers which should make getting started a little easier.

You won’t necessarily need an OSVR headset to test it. OSVR is supposed to work across many headsets, so if it works with one, it should work on others (otherwise, OSVR has a bug to fix).

nudge :wink:

Good thing Erlend nudged, because I had only seen your last reply.
I suppose I’m in contact with them too, but I haven’t talked to anyone on the engineering side, so any info would be helpful. I guess, though, that it’s another case of JNI/JNA against the existing core API before there is anything interesting to talk about :slight_smile:

Thank you for the nudges. Here is what I got:

"I’ll try to answer as many of the questions as I can. Note that this would be a good question to post on the developer mailing list, since the questions and answers are probably worth archiving publicly. If you post something similar to your question on the mailing list, I’ll re-post my answers there too. http://osvr.github.io/mailing-lists/

The general API structure for client applications/game engines (client, as opposed to devices “serving” data or capabilities) is a C API/ABI, with a header-only C++ wrapper for optional use - what I recently learned is apparently called an “hourglass API design”. While there are more headers bundled with an OSVR built snapshot, only a few are actually used for client applications - most are for “internal” APIs. The main project you’d interact with is OSVR-Core, which provides the core functionality on both client and device sides (which share common code in addition to their separate APIs). You’d use headers in osvr/ClientKit, which depends only on others in that directory and on some headers in osvr/Util (corresponding to the osvrClientKit and osvrUtil libraries). In the source code, the API headers are under inc, so that’s inc/osvr/ClientKit - they install to include/osvr/ClientKit when the project is built. We’ve tried to keep headers topical so that client app devs need not consider functionality they aren’t using, as well as to reduce build time by allowing minimal include sets.
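To make the described ClientKit lifecycle concrete from the Java side, here is a rough sketch of the call pattern a jME3 binding might follow. The `ClientKit` interface is a hypothetical JNA-style mapping (the real C functions `osvrClientInit`, `osvrClientUpdate` and `osvrClientShutdown` live in osvr/ClientKit); the `Stub` implementation exists only so the example runs without the native library.

```java
public final class ClientLifecycleSketch {
    /** Hypothetical shape of a JNA-style mapping of the ClientKit C API. */
    interface ClientKit {
        long clientInit(String appId);   // wraps osvrClientInit
        int clientUpdate(long ctx);      // wraps osvrClientUpdate, 0 == success
        int clientShutdown(long ctx);    // wraps osvrClientShutdown
    }

    /** Trivial stand-in so the lifecycle can run without native libraries. */
    static final class Stub implements ClientKit {
        public long clientInit(String appId) { return 1L; }
        public int clientUpdate(long ctx)    { return 0; }
        public int clientShutdown(long ctx)  { return 0; }
    }

    /** Init, one update tick, shutdown — the pattern of a minimal client. */
    public static int runOnce(ClientKit kit) {
        long ctx = kit.clientInit("com.example.jme3.osvr"); // app identifier
        int rc = kit.clientUpdate(ctx); // normally called once per frame
        kit.clientShutdown(ctx);
        return rc;
    }

    public static void main(String[] args) {
        System.out.println("update returned " + runOnce(new Stub()));
    }
}
```

The point is just the shape: one opaque context handle from init, a per-frame update call, and an explicit shutdown, matching the MinimalInit example output posted further down the thread.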

I’d recommend looking at the documentation that is generated from the source code and some additional text files in the repo: http://resource.osvr.com/docs/OSVR-Core/ - this is the “public” version that omits all the internal and “implementation detail” items. (Both versions are linked from the OSVR-Core section at http://osvr.github.io/contributing/ ) You might want to start at the “Writing a client application” section: http://resource.osvr.com/docs/OSVR-Core/TopicWritingClientApplication.html

Regarding platforms and binaries: At the moment I don’t know of any folks in the community working on Mac OS X support, but I do know it has built on OS X before and there are about 3 places I can think of that would need porting to run on OS X in the current source. It’s designed to be portable, including to Mac (and iOS), and the dependencies and underlying components work on Mac OS X, but nobody has stepped up and made those platforms a priority yet. It does build and run on Linux (in fact, building and testing on Linux is a part of our CI pipeline, so no OSVR-Core commit ends up approved to have a Windows binary uploaded unless it also builds and passes tests on Debian with both GCC and Clang), but we haven’t had any interest in uploading any pre-built binaries for generic Linuxes. It is subjectively easier to build on Linux than Windows (because of package management and prevalence of packages for dependencies), which might help a bit. I believe Valve is using a specific Ubuntu version as their “target” that they build Linux binaries on/for, but in general if you have the source it usually works best to have a native package for each distribution/version than a binary blob built against really old libraries that hopefully works on a range of systems. (We do have auto-built snapshots/binaries for Android being built and uploaded.)

The current rendering approach uses a generic (parameterized, with parameters supplied by the OSVR system) shader rather than a distortion mesh for applying any required distortion. The shader is fairly simple and can be found in a number of different languages (GLSL, Unity’s shader language, etc.) in various examples. I’m going to guess you’re wanting the OpenGL version, which is here: https://github.com/OSVR/distortionizer/blob/master/vizard/ShaderTest.frag and https://github.com/OSVR/distortionizer/blob/master/vizard/ShaderTest.vert (There’s a corresponding tool to determine the distortion/CA parameters used in the description and that shader interactively for arbitrary displays.) Hopefully that’s not too hard to integrate - there has been some interest in a mesh-based distortion option as well, but I’m not sure what timelines might be on that. Of course, you can generate a distortion mesh given the distortion model and/or shader, if that works better than using the shader directly.
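To give a feel for what that parameterized distortion shader computes, here is the same idea in plain Java: a simple one-coefficient polynomial radial distortion of a texture coordinate. The coefficient `k1` and the fixed center at (0.5, 0.5) are illustrative placeholders; the real parameters come from the OSVR display description / distortionizer tool, and the real shader also handles per-channel chromatic aberration.

```java
public final class RadialDistortionSketch {
    /**
     * Distort a texture coordinate (u, v) in [0,1] about the center
     * (0.5, 0.5) using the polynomial model r' = r * (1 + k1 * r^2).
     */
    public static double[] distort(double u, double v, double k1) {
        double du = u - 0.5;
        double dv = v - 0.5;
        double r2 = du * du + dv * dv;    // squared radius from center
        double scale = 1.0 + k1 * r2;     // radial scale factor
        return new double[] { 0.5 + du * scale, 0.5 + dv * scale };
    }

    public static void main(String[] args) {
        // The center is a fixed point of the distortion for any k1.
        double[] center = distort(0.5, 0.5, 0.25);
        System.out.println(center[0] + ", " + center[1]);
    }
}
```

A GLSL fragment shader version of this is essentially the same arithmetic applied to the incoming UV before the texture sample, which is why porting it to jME3's material system should be straightforward.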

Note that the current API has the client application responsible for parsing a JSON display descriptor and constructing appropriate matrices - while there are a number of implementations of such code you could model yours after, the matrix generation is being pulled into new APIs in ClientKit in the very near future - as in, I’m working on a standardized implementation in a branch, not quite done, but almost ready to merge. So, you might consider doing the input portion of your integration before the rendering portion, just stubbing out the rendering portion, given that you’ll have a much simpler time building your rendering implementation soon. It would be helpful to know what degree of “control” you want to maintain over your rendering setup, and what sorts of data your rendering system would expect to be output. (Pulling this functionality into core is more of a challenge of API design to suit a wide variety of client application architectures than it is a challenge of the actual implementation.)
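Until those standardized ClientKit APIs land, "constructing appropriate matrices" means something like the sketch below: building a per-eye perspective projection from field-of-view values read out of the JSON display descriptor. The symmetric-frustum simplification and the parameter names are assumptions for illustration; real HMDs typically need an asymmetric (off-axis) frustum per eye.

```java
public final class ProjectionSketch {
    /**
     * Column-major 4x4 OpenGL-style perspective matrix from a vertical
     * field of view in degrees, an aspect ratio, and near/far planes.
     */
    public static double[] perspective(double fovyDeg, double aspect,
                                       double near, double far) {
        double f = 1.0 / Math.tan(Math.toRadians(fovyDeg) / 2.0);
        double[] m = new double[16];     // all other entries stay zero
        m[0]  = f / aspect;              // x scale
        m[5]  = f;                       // y scale
        m[10] = (far + near) / (near - far);
        m[11] = -1.0;                    // perspective divide term
        m[14] = (2.0 * far * near) / (near - far);
        return m;
    }

    public static void main(String[] args) {
        double[] m = perspective(90.0, 1.0, 0.1, 1000.0);
        System.out.println(m[0]); // f/aspect is approximately 1.0 at 90 degrees
    }
}
```

Stubbing the rendering side out behind a method like this, as Ryan suggests, means only this one function needs replacing once the standardized matrix APIs merge.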

By design, a client application doesn’t have to concern itself with plugins, etc. so don’t let the number of plugins and repos unnerve you. OSVR provides generic interfaces for a range of VR devices and a “path tree” structure of semantic names by which your app/engine can be independent of the hardware and configuration used to provide the data used. Hopefully this info is helpful - you should be able to find your way around the client examples given the Doxygen link above. Let us know if you have any additional questions, or feedback on rendering data, and I’ll do what I can! (And, if I see a variant of your email on the mailing list, I’ll put a variant of this response on it as well as a reply. If nothing else, let the list know of your progress!)

Thanks for getting involved in OSVR!

Ryan


Ryan A. Pavlik, Ph.D.
Senior Software Engineer, Sensics, Inc."

The biggest complications I saw in getting OSVR support are the multiple headers & the lack of OSX/Linux binaries. OpenVR just has one openvr_capi.h that includes everything & has Windows/Linux/MacOSX builds done automatically. OSVR is also really big (I found it overwhelming, actually) and the examples were not as straightforward as OpenVR’s “hello world OpenGL” one.

Anyway, you will probably need to JNAerate a few header files to start. You might want to get in touch with Ryan yourself for more direct assistance (support@osvr.com).


Regarding binary builds, you can use the same approach as what jME3 uses for native bullet, i.e. inject the JNI entry points into the source code of the native library and then build it. Then you get a single DLL / SO / DYLIB file instead of the two you would get if you compiled the JNI library separately.

I received an OSVR HDK today and I’ll start getting ready for adding support. Considering we’ve done 3 plugins now (Oculus, Cardboard and OpenVR) I think this will go smoothly. I’ll shout if I need any help.
I will initially use the OpenVR plugin (Edit: for an application), though, as I need to get started right away.

@Momoko_Fan can you point to a text/link on the process used? I’ve done JNI before, but only by writing the C++ classes manually.


[OSVR] Connecting to default (local) host
[OSVR] Client context initialized for com.osvr.exampleclients.MinimalInit
[OSVR] Got connection to main OSVR server
[OSVR] Got updated path tree, processing
[OSVR] Connected 0 of 0 unconnected paths successfully
OK, library initialized.
[OSVR] Connection process took 18ms: have connection to server, have path tree
Library shut down, exiting.
[OSVR] Client context shut down for com.osvr.exampleclients.MinimalInit

What I mean is the JNI entry point source code is located in the same library which you’re wrapping.
The only disadvantage is that you cannot use the build scripts from that project since you’re using your own.

You can see how this is done for native bullet here: