Hm… maybe future versions of the library will, so it might make sense to have two kinds of objects: a static one that represents the library and a non-static one that represents a Rift display.
It might be hard to correctly associate each feature with either the driver or the display, but it should still help with the transition once the Oculus libs can handle multiple displays. OTOH I won’t tell you off if you decide it’s not worth the effort (but maybe you’ll want to explain the reasons, so the next person who works on the lib knows why the decision was made that way).
True, true… In any case, I consider the whole library in flux until I (or someone) has solved the initialization problem. I’ll consider reverting the static object decision.
Any feedback from people using the library? Has it caused any problems? (Actually, most normal jME users won’t interact with the Oculus itself anyway. The idea was always to make it something you just switch on and off. Only other Java users using the classes directly should notice anything.)
At a glance, it does seem like Rift support is pretty simple. It’s the conclusion I came to when I was wondering what the heck was taking them so long to get it to market, but watch this video from PCGamer - they do a pretty in-depth review, including various “view modes”, and highlight the problems that arise from various setups. If you’re interested in purchasing one and providing the ability in your game, it’s definitely worth a watch, and it’s intriguing to see the complexity of trying to make it a “natural transition” from a regular monitor.
There have been requests about Linux support in the thread (@phr00t ?), and since the Oculus Rift now officially supports Linux, I thought I’d give it a try.
I’m not a Linux user, so I’d like someone who is to verify whether my train of thought on how to achieve this is correct:
- I grab the libovr.a files (32 and 64 bit) from the Linux SDK and add them to my own library.
- I need to compile my own C++ files to something that supports Linux instead of a .dll, linking the libovr.a files into that project. Can I do this on a Windows machine (with mingw or gcc)?
Thanks,
Rickard
I can’t give you exact instructions, but I can provide the Linux terminology.
Windows: “I want to create a DLL (dynamic-link library) from an object file (.o)”.
Linux: “I want to create a shared library (.so) from object files (.o), possibly bundled into a static archive (.a)”.
Try asking the search engine of your choice with that terminology; it should help.
(I can’t give the exact instructions because my last work on a C program predates even Linux.)
Building shared libraries is a pretty advanced task (because the GNU tools need to do that in a cross-platform fashion, which makes the issues horrendously complicated).
If your first attempt doesn’t work, try the following progressively more complicated tasks of compiling and running (see the command sketch after the list):
- A program from a single C source.
- A program from multiple C sources.
- A program consisting of a C source and a separately compiled .a file.
- A shared library (just compile to .so).
- A program consisting of a C source that loads that .so file.
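A minimal sketch of what those steps look like with gcc on Linux (all file names here are made up for illustration):

# 1. A program from a single C source
gcc -o hello hello.c

# 2. A program from multiple C sources
gcc -o app main.c util.c

# 3. A program plus a separately compiled .a archive
gcc -c util.c                  # produces util.o
ar rcs libutil.a util.o        # bundles it into a static archive
gcc -o app main.c libutil.a

# 4. A shared library (needs position-independent code)
gcc -fPIC -c mylib.c
gcc -shared -o libmylib.so mylib.o

# 5. A program that links against that .so
gcc -o app main.c -L. -lmylib
LD_LIBRARY_PATH=. ./app        # so the runtime loader can find libmylib.so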
Thanks! I’ll see what I can do.
Another update on the Oculus hardware:
- New hardware prototype “Crystal Cove” is out
- Now full HD resolution instead of 1280x800
- Movement smear reduced/eliminated by alternating screens with black
- Translation is now tracked (via infrared cameras (wow)), previous prototype tracked just rotations
- Translation tracking markedly improves immersion
- Translation tracking VIRTUALLY ELIMINATES MOTION SICKNESS
- Consumer version will have translation tracking and “at least” full HD
- Dev Kit with translation tracking and full HD may or may not be available before consumer version
Exciting news indeed.
Source: http://tinyurl.com/oculus-2014-01 (German)
Some sites (http://www.polygon.com/2014/1/7/5284258/oculus-unveils-rift-prototype-with-positional-tracking-and-mysterious) claim it is an OLED screen.
“Our new OLED panel in the prototype switches in well under a millisecond, so it’s faster than any LCD monitor on the market … what we’re doing is we’re taking the image and flashing it on when it’s correct, and only keeping that on for a fraction of a millisecond and then turning it off and then going black until the next pulse,”
Positional tracking is done by having the headset painted with white dots and using some kind of infrared webcam to do motion capture.
Heise confirms the OLED claim (Heise has a high reliability with technical details). The OLED is used for faster switching time (not higher frame rate), i.e. they wait until the image is stable, make it bright for a very short time (microseconds), then switch the display off again. Eliminates smear. The switching speed is impossible to do with LCD, so the OLED bit makes very much sense.
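For scale: at 60 fps a frame lasts about 16.7 ms, so keeping the panel lit for, say, half a millisecond is roughly a 3% duty cycle; the image becomes a strobe rather than a sample-and-hold picture, which is what kills the smear (at the cost of peak brightness). (These numbers are just an illustration, not Oculus specs.)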
The white dots are IR LEDs, Heise says, and they’re being tracked by a camera (singular). If the camera is hi-res enough, they could infer distance to camera from how large the LED pattern looks; not sure what resolutions you’d need though.
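Back-of-the-envelope, with a simple pinhole-camera model: distance ≈ f * D / d, where f is the camera’s focal length in pixels, D the physical spacing between two LEDs, and d their apparent spacing in pixels. So the depth precision is limited by how precisely you can measure d, which is where the resolution requirement comes from. (Just an estimation model, not how Oculus actually does it.)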
I’m a bit unhappy about the separate camera. An input device shouldn’t require more than plugging its cable in. But ah well, we’ll see what impact it will have when the hardware hits reality.
They will have to solve the situation with multiple Rifts in the same room: telling them apart, dealing with temporarily occluded LEDs, that kind of stuff. I think that’s solvable by adding a camera or two (to have a better chance to not be occluded) and combining their data; I hope they have some smart guys who can code that up without adding latency.
OTOH if (when) the Rift becomes a success, they can still add that, and they already are doing some very smart things in software; this might really work out in the end.
@toolforger said: I'm a bit unhappy about the separate camera.
To say the least. I think it sounds horrible.
I think that all additional peripherals lower the chance of it becoming a successful consumer product.
Not that I have any better solution, but I would have thought they’d go the magnetometer/magnetic field path.
A magnetic field gives you translation only in one direction. You need three.
Ultrasonic echolocation might be an alternative - one generator, four cheap microphones on the headset, plus maybe a DSP and the software.
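Roughly how that would work: the generator emits a ping, each microphone i measures its time of flight t_i, which gives a distance r_i = c * t_i (with c ≈ 343 m/s for sound in air); three such sphere radii pin the position down to two candidate points, and the fourth microphone resolves the ambiguity. The DSP’s job would mostly be correlating the ping against each microphone’s signal to extract the t_i values. (A sketch of the general trilateration idea, not a worked-out design.)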
@toolforger said: hope they have some smart guys who can code that up without adding latency.
I would definitely use the word “smart” along with John Carmack.
I meant smart coders for signal processing.
John is certainly a great leader, and probably a great game coder, but signal processing and the algorithmic stuff you need there is a different skill set. Doesn’t mean he can’t be great at that, though.
@rickard said: There have been requests about Linux support in the thread (@phr00t ?), and since the Oculus Rift now officially supports Linux, I thought I’d give it a try. I’m not a Linux user, so I’d like someone who is to verify whether my train of thought on how to achieve this is correct: 1. I grab the libovr.a files (32 and 64 bit) from the Linux SDK and add them to my own library. 2. I need to compile my own C++ files to something that supports Linux instead of a .dll, linking the libovr.a files into that project. Can I do this on a Windows machine (with mingw or gcc)? Thanks,
Rickard
I’ve been able to build a dynamic library (.so) in Linux and it works, though I don’t have a Rift yet.
The library implements the same JNI API present in your code (I borrowed it to implement in my modest Java engine).
You’ll probably find it useful to read the Makefile. If you have any questions, I’ll try to help.
The code is here: http://yombo.org/wp-content/uploads/2014/01/OculusLib.zip
About cross-compiling from Windows, I guess you can, with gcc.
Edit: Forgot to mention I’m using 64bit Kubuntu 13.10
Edit2: Well, I changed the Java class, package etc so it is not exactly the same JNI api.
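If anyone rebuilds it, two quick sanity checks with standard binutils (nothing specific to this library):

ldd liboculus.so                 # lists the shared libraries the .so depends on
nm -D liboculus.so | grep Java_  # the exported JNI entry points should show up here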
@yombo said: I’ve been able to build a dynamic library (.so) in Linux and it works, though I don’t have a Rift yet. The library implements the same JNI API present in your code (I borrowed it to implement in my modest Java engine). You’ll probably find it useful to read the Makefile. If you have any questions, I’ll try to help. The code is here: http://yombo.org/wp-content/uploads/2014/01/OculusLib.zip About cross-compiling from Windows, I guess you can, with gcc.
Edit: Forgot to mention I’m using 64bit Kubuntu 13.10
Edit2: Well, I changed the Java class, package etc so it is not exactly the same JNI api.
Thanks! I’ll see what I can do with this.
I tried an Oculus Rift the other day, pretty awesome 8)
*Added methods for direct access to orientation and acceleration on the OVR object. Requested by “rupy” on the Oculus Developer forums.
*Realized a new Quaternion was created on every update in the CameraControl class. Changed it to reuse the existing one instead.
Did you have time to try to do the Linux version?
@yombo said: Did you have time to try to do the Linux version?
Sorry, no. I had enough trouble with the JNI alone this time. My lack of C skills is impeding me.
Been spending some time trying to build the Linux version. I’ve gotten as far as it outputting “OculusLib.o”. The liboculus.so target is not successful, however, complaining about the following:
…/mingw32/bin/ld.exe: cannot find -lpthread
…/mingw32/bin/ld.exe: cannot find -lX11
…/mingw32/bin/ld.exe: cannot find -lXinerama
…/mingw32/bin/ld.exe: cannot find -ludev
I’m assuming it’s a missing library of some sort. Removing the flags gives me errors like the following:
OculusLib.o:oculusvr_input_OculusRift.cpp:(.text+0x2f): undefined reference to `OVR::System::Init(OVR::Log*, OVR::Allocator*)'
ending with
../mingw32/bin/ld.exe: OculusLib.o: bad reloc address 0x4 in section `.text$_ZN3OVR9Allocator11GetInstanceEv[__ZN3OVR9Allocator11GetInstanceEv]'
collect2.exe: error: ld returned 1 exit status
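For what it’s worth, those symptoms point to a toolchain mismatch rather than a missing flag: mingw’s ld produces and consumes Windows (PE/COFF) binaries, while libovr.a from the Linux SDK is an ELF archive, and -lpthread, -lX11, -lXinerama and -ludev only exist on a Linux system. Building the .so natively on Linux (or in a VM) sidesteps all of that. A minimal sketch, with the SDK and JDK paths assumed for illustration:

# Compile the JNI wrapper as position-independent code
g++ -fPIC -c OculusLib.cpp \
    -I"$JAVA_HOME/include" -I"$JAVA_HOME/include/linux" \
    -I/path/to/OculusSDK/LibOVR/Include

# Link it together with the Linux libovr.a into a shared library
g++ -shared -o liboculus.so OculusLib.o \
    /path/to/OculusSDK/LibOVR/Lib/Linux/Release/x86_64/libovr.a \
    -lpthread -lX11 -lXinerama -ludev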