I’ve continued my VR experiments and it’s been a tale of two halves. Android hasn’t gone well, but the action-based upgrade for JME’s OpenVR integration has gone well.
Android
Oculus do include documentation on using OpenXR bindings. However, it’s all very C++ focused and I got out of my depth very quickly. I believe we don’t use LWJGL in our Android apps because Android has Java support for OpenGL without it. That said, LWJGL now does support Android, which would probably make Android VR much easier.
Regardless, I am officially giving up on Android VR, for now at least (but I will revisit it someday, especially in light of OpenXR on LWJGL).
PC VR
Trying to upgrade JMonkeyEngine to use action-based input on OpenVR (rather than the legacy input it currently uses) has gone much, much better. (Perhaps the bug @grizeldi hit has been resolved?) I have got a hybrid application where I mostly use JME to boot up a VR context, then make some direct LWJGL calls to get action-based input values.
I realise that in the long term we may want to look at OpenXR, but this will hopefully take us up to 2020-era VR rather than 2016-era VR, which is a step up at least.
Action Manifests
In action-based VR the application declares the actions it wants, as well as mappings from buttons to those actions for the controllers the application is aware of. (The nice thing about this is that if a new controller comes along, users can configure the mappings themselves using, for example, SteamVR.) These files must exist at a physical file location, i.e. they can’t just be resources within the jar.
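Since the runtime needs a real path, one option is to copy the manifests out of the jar to a temporary directory at startup and point OpenVR at that. Here is a minimal sketch of that idea, assuming the manifests are bundled as classpath resources (extractManifest and the resource names are my own hypothetical choices):

import java.io.FileNotFoundException;
import java.io.InputStream;
import java.nio.file.*;

//copies a bundled resource out to disk so OpenVR can read it from a real file path
private static Path extractManifest(String resourceName, Path targetDir) throws Exception {
    Path target = targetDir.resolve(resourceName);
    try (InputStream in = Main.class.getResourceAsStream("/" + resourceName)) {
        if (in == null) {
            throw new FileNotFoundException("Missing resource: " + resourceName);
        }
        Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
    }
    return target;
}

//usage: the binding file must sit next to the manifest, because binding_url is a relative path
Path manifestDir = Files.createTempDirectory("vrManifests");
extractManifest("oculusTouchDefaults.json", manifestDir);
Path manifest = extractManifest("actionsManifest.json", manifestDir);
VRInput.VRInput_SetActionManifestPath(manifest.toAbsolutePath().toString());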
My action manifest files for this test were as follows:
actionsManifest.json
{
    "default_bindings": [
        {
            "controller_type": "oculus_touch",
            "binding_url": "oculusTouchDefaults.json"
        }
    ],
    "actions": [
        {
            "name": "/actions/main/in/OpenInventory",
            "requirement": "mandatory",
            "type": "boolean"
        },
        {
            "name": "/actions/main/in/test2",
            "requirement": "mandatory",
            "type": "boolean"
        },
        {
            "name": "/actions/main/in/scroll",
            "type": "vector2",
            "requirement": "mandatory"
        }
    ],
    "action_sets": [
        {
            "name": "/actions/main",
            "usage": "leftright"
        }
    ],
    "localization": [
        {
            "language_tag": "en_us",
            "/actions/main": "My Game Actions",
            "/actions/main/in/OpenInventory": "Open Inventory"
        }
    ]
}
oculusTouchDefaults.json
{
    "action_manifest_version": 0,
    "bindings": {
        "/actions/main": {
            "sources": [
                {
                    "inputs": {
                        "click": {
                            "output": "/actions/main/in/OpenInventory"
                        }
                    },
                    "mode": "button",
                    "path": "/user/hand/left/input/x"
                },
                {
                    "inputs": {
                        "click": {
                            "output": "/actions/main/in/test2"
                        }
                    },
                    "mode": "button",
                    "path": "/user/hand/left/input/y"
                },
                {
                    "inputs": {
                        "position": {
                            "output": "/actions/main/in/scroll"
                        }
                    },
                    "mode": "joystick",
                    "path": "/user/hand/left/input/joystick"
                }
            ]
        }
    },
    "category": "steamvr_input",
    "controller_type": "oculus_touch",
    "description": "Bindings for the jmetest demo for an oculusTouch controller",
    "name": "jmetest bindings for an oculusTouch controller",
    "options": {},
    "simulated_actions": []
}
Java code
Within Java I got handles to the actions during application initialisation (just after the call to VREnvironment#initialize), and set up the objects that talk to native buffers. Doing this once up front is more efficient than recreating them every frame.
public static void main(String[] args) {
    AppSettings settings = new AppSettings(true);
    settings.put(VRConstants.SETTING_VRAPI, VRConstants.SETTING_VRAPI_OPENVR_LWJGL_VALUE);

    VREnvironment env = new VREnvironment(settings);
    env.initialize();
    if (env.isInitialized()) {
        VRAppState vrAppState = new VRAppState(settings, env);

        VRInput.VRInput_SetActionManifestPath("C:/Users/richa/Documents/Development/jmonkeyVrTest/src/main/resources/actionsManifest.json"); //hard coded for experimental purposes

        //a single reusable native-ordered buffer that the handle calls write into
        LongBuffer longBuffer = BufferUtils.createLongBuffer(1);
        int error1 = VRInput.VRInput_GetActionHandle("/actions/main/in/OpenInventory", longBuffer);
        openInventoryHandle = longBuffer.get(0);
        int error2 = VRInput.VRInput_GetActionSetHandle("/actions/main", longBuffer);
        actionSetHandle = longBuffer.get(0);
        int error3 = VRInput.VRInput_GetActionHandle("/actions/main/in/test2", longBuffer);
        test2Handle = longBuffer.get(0);
        VRInput.VRInput_GetActionHandle("/actions/main/in/scroll", longBuffer);
        scrollHandle = longBuffer.get(0);
        System.out.println("Handle: " + openInventoryHandle);

        activeActionSets = VRActiveActionSet.create(1);
        activeActionSets.ulActionSet(actionSetHandle);
        activeActionSets.ulRestrictedToDevice(VR.k_ulInvalidInputValueHandle); // both hands

        //native structs the per-frame calls will populate; created once, reused every frame
        clickTriggerActionData = InputDigitalActionData.create();
        inputAnalogActionData = InputAnalogActionData.create();

        Main app = new Main(vrAppState);
        app.setLostFocusBehavior(LostFocusBehavior.Disabled);
        app.setSettings(settings);
        app.setShowSettings(false);
        app.start();
    }
}
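Those error ints are EVRInputError codes that I’m not actually checking yet. If you want to fail fast during setup, something like this would do it (a sketch; checkVrError is my own hypothetical name, while EVRInputError_VRInputError_None is LWJGL’s constant for success):

//throws if an OpenVR input call reported anything other than success
private static void checkVrError(int errorCode, String context) {
    if (errorCode != VR.EVRInputError_VRInputError_None) {
        throw new IllegalStateException(context + " failed with EVRInputError " + errorCode);
    }
}

It would be used like checkVrError(error1, "GetActionHandle(OpenInventory)").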
The use of an action manifest disables legacy inputs (so existing JME calls to VRInputAPI#isButtonDown stop working), which is expected; however the poses (where each hand is, what it’s pointing at) continue to work, which is pleasant.
Then, within every simpleUpdate, the action handles are used to get the current user input:
List<Geometry> handGeometries = new ArrayList<>();

@Override
public void simpleUpdate(float tpf) {
    VRAppState vrAppState = getStateManager().getState(VRAppState.class);
    int numberOfControllers = vrAppState.getVRinput().getTrackedControllerCount(); //almost certainly 2, one for each hand

    //build as many geometries as hands, as markers for the demo (will only trigger on the first loop or if the number of controllers changes)
    while (handGeometries.size() < numberOfControllers) {
        Box b = new Box(0.1f, 0.1f, 0.1f);
        Geometry handMarker = new Geometry("hand", b);
        Material mat = new Material(assetManager, "Common/MatDefs/Misc/Unshaded.j3md");
        mat.setColor("Color", ColorRGBA.Red);
        handMarker.setMaterial(mat);
        rootNode.attachChild(handMarker);
        handGeometries.add(handMarker);
    }

    VRInputAPI vrInput = vrAppState.getVRinput();
    for (int i = 0; i < numberOfControllers; i++) {
        if (vrInput.isInputDeviceTracking(i)) { //might not be active currently, avoid NPE if that's the case
            Vector3f position = vrInput.getFinalObserverPosition(i);
            Quaternion rotation = vrInput.getFinalObserverRotation(i);
            Geometry geometry = handGeometries.get(i);
            geometry.setLocalTranslation(position);
            geometry.setLocalRotation(rotation);
        }
    }

    //pull the latest action states from OpenVR, then read each action we care about
    int error3 = VRInput.VRInput_UpdateActionState(activeActionSets, VRActiveActionSet.SIZEOF);

    int error4 = VRInput.VRInput_GetDigitalActionData(openInventoryHandle, clickTriggerActionData, VR.k_ulInvalidInputValueHandle);
    if (clickTriggerActionData.bState()) {
        System.out.println("openInventory");
    }

    int error5 = VRInput.VRInput_GetDigitalActionData(test2Handle, clickTriggerActionData, VR.k_ulInvalidInputValueHandle);
    if (clickTriggerActionData.bState()) {
        System.out.println("test2");
    }

    VRInput.VRInput_GetAnalogActionData(scrollHandle, inputAnalogActionData, VR.k_ulInvalidInputValueHandle);
    System.out.println("Joystick control: " + inputAnalogActionData.x() + "," + inputAnalogActionData.y());
}
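Note that bState() is true for every frame the button is held down, so those printlns fire continuously. InputDigitalActionData also exposes bChanged(), so reacting only on the frame the button is first pressed is a one-line tweak (a sketch using the same clickTriggerActionData as above):

//bChanged() is true only on the frame the state flipped, so this fires once per press
if (clickTriggerActionData.bState() && clickTriggerActionData.bChanged()) {
    System.out.println("openInventory pressed this frame");
}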
What I plan to do next
I plan to better integrate this with JMonkeyEngine (i.e. avoid all the direct calls to LWJGL) via a new VRInputAPI (deprecating the old one) and put a PR in for that (as long as no one thinks I’m on totally the wrong track). I’ll also aim to add to the wiki for whatever ends up getting created (what should I do about documenting something that won’t be in the current version of JMonkeyEngine but will be in a future one?)