So, the Linux 'nagging thread' has kind of dragged on for a while. Oculus has pledged Linux support, but has given no ETA on when we'll see positional tracking support for the DK2. It could be this week. It could be months.
However, I think that sitting around and waiting for Oculus to solve the issue kind of goes against the spirit of Linux anyway. If you want something done, you should be willing to do it yourself. To that end, I'm working on porting the current 0.4.2 SDK to Linux. I've managed to reverse engineer some of the code required to interact with the LEDs so that they can be turned on. The camera is natively supported. Right now the primary missing component to getting this done is the software for calculating a head pose based on an image from the camera.
Unfortunately this kind of math is outside my field of expertise. So I'm putting a call out to the community to see if interested parties might be able to assist with this.
If you want a Linux SDK and don't want to wait for Oculus to get around to it, and you've got the skills, here's a video of the LEDs captured from the tracking camera on Linux:
You can download the original file here:
https://s3.amazonaws.com/Oculus/oculus_rift_leds.webm
If you can write C or C++ code that will take that video, or an image from that video, and turn it into a head pose, let me know and I'll work with it to produce a viable Linux SDK with positional tracking.
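I'm told the standard tool for this last step is something like OpenCV's solvePnP, which recovers a pose from matched 3D/2D point pairs. To make the ask concrete, here's the rough shape I imagine the final routine taking -- purely a sketch on my part, assuming OpenCV as a dependency, with the genuinely hard parts (detecting the blobs and matching each blob to a specific LED) left out entirely:

```cpp
// Sketch only: assumes OpenCV, a known 3D model of the LED positions on the
// headset, and that each detected 2D blob has already been matched to its
// LED. None of that matching logic exists yet -- this is just the final
// pose-recovery step.
#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/core/core.hpp>
#include <vector>

struct HeadPose {
    cv::Mat rotation;     // 3x1 Rodrigues rotation vector (camera frame)
    cv::Mat translation;  // 3x1 translation vector, meters (camera frame)
};

// ledModelPoints: 3D LED positions on the HMD, in the HMD's own frame.
// ledImagePoints: matching 2D blob centers detected in the camera image.
// cameraMatrix / distCoeffs: camera intrinsics and distortion parameters.
HeadPose estimatePose(const std::vector<cv::Point3f>& ledModelPoints,
                      const std::vector<cv::Point2f>& ledImagePoints,
                      const cv::Mat& cameraMatrix,
                      const cv::Mat& distCoeffs)
{
    HeadPose pose;
    // At least 4 correspondences are required; with more, solvePnP does a
    // least-squares fit. A RANSAC variant could reject mismatched blobs.
    cv::solvePnP(ledModelPoints, ledImagePoints, cameraMatrix, distCoeffs,
                 pose.rotation, pose.translation);
    return pose;
}
```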
Anyway, do you have a limited Linux SDK you could put into JOVR in the meantime? Something that would at least get rotational tracking & SDK-side distortion working? I'd like to continue Rift development with my game, 4089, but when I try to use it in "Rift mode", it simply crashes because no Linux libraries can be found in JOVR 4.2.0.
https://github.com/PeterN/freetrack/tree/master/Freetrack
Thank you for stopping by! I'm eagerly awaiting the Linux SDK release. Are you able to share if it will be included in the next SDK release?
Are the LEDs supposed to flash in a specific pattern or are they supposed to be on continuously?
In regards to the math: http://en.wikipedia.org/wiki/Delaunay_tessellation_field_estimator
What I am thinking: calculate the DTFE for several known states of the HMD (this would need a Cartesian robot), then use a curve-fitting tool to get a functional representation of it for the coordinates, or store those in a LUT and interpolate.
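To illustrate just the lookup-and-interpolate half of that, reduced to one dimension: suppose we measured some scalar image observable (say, the pixel distance between two specific LEDs) at a set of known HMD depths on the rig; at runtime we then look the measurement up and interpolate between the two nearest calibration samples. This is only a toy sketch of that step -- a real version would be multi-dimensional, which is where the Delaunay tessellation would come in:

```cpp
// Toy sketch: 1D lookup table built from calibration samples, queried with
// linear interpolation. Both the observable and the use of depth as the
// output are placeholders for illustration.
#include <algorithm>
#include <vector>

struct CalibrationSample {
    double observable;  // measured image-space quantity at calibration time
    double depth;       // known HMD depth (meters) at that measurement
};

// 'table' must be sorted by 'observable' and contain at least two samples.
double lookupDepth(const std::vector<CalibrationSample>& table, double observed)
{
    if (observed <= table.front().observable) return table.front().depth;
    if (observed >= table.back().observable)  return table.back().depth;

    // First sample whose observable is >= the measurement.
    auto upper = std::lower_bound(
        table.begin(), table.end(), observed,
        [](const CalibrationSample& s, double v) { return s.observable < v; });
    auto lower = upper - 1;

    double t = (observed - lower->observable) /
               (upper->observable - lower->observable);
    return lower->depth + t * (upper->depth - lower->depth);
}
```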
An alternative, community-developed SDK may be what is needed for these projects to be able to use the Rift hardware.
http://github.com/bjornblissing/osgoculusviewer
WWW: http://www.vti.se/drivingsimulator
Twitter: http://twitter.com/bjornblissing
Co-author of Oculus Rift in Action
As already mentioned, it would be helpful to see the output of the camera when the LEDs are controlled with the Oculus SDK, and then reproduce the modulation. A 3D model of the relative LED locations is also necessary. Finally, you need an external sensor to test and validate the algorithm.
Personally, I plan to ditch the cheap camera solution (that's what Oculus calls it in the talk linked above) altogether and use the PrioVR head sensor instead. Unfortunately, the delivery of PrioVR is also late - looks like no one in the VR industry is able to get things done on time :roll:
1) An LED numbering scheme and careful measurements of the XYZ LED positions on the headset relative to some fixed point on the headset. If Oculus was feeling nice they could provide us with this data.
2) Some video with a mask for each frame indicating which LEDs are on for that frame. If we could get a capture of what Oculus is doing, we could use a similar scheme for modulating the LEDs' state to identify them (see the sketch at the end of this post).
We already have rotational data from the other sensors, so apart from correcting yaw drift, rotational information from the pose estimation is not strictly necessary.
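To sketch what I mean by point 2: if each LED blinks out a unique binary pattern over some number of frames, then any blob we can track across those frames effectively spells out its own ID. The pattern length and the idea of plain on/off keying are pure assumptions on my part -- we don't yet know the modulation scheme the official SDK actually uses:

```cpp
// Hypothetical decoder: each LED transmits a unique ID as an on/off pattern
// over kPatternLength consecutive frames. Tracking the same blob across
// frames (not shown) gives us its on/off history, from which we read the ID.
#include <vector>

static const int kPatternLength = 10;  // assumed, not the real scheme

struct TrackedBlob {
    float x, y;                   // current blob center in the image
    std::vector<bool> onHistory;  // lit or not, one entry per recent frame
};

// Returns the decoded LED ID, or -1 until a full pattern has been observed.
int decodeLedId(const TrackedBlob& blob)
{
    if (blob.onHistory.size() < static_cast<size_t>(kPatternLength))
        return -1;

    int id = 0;
    size_t start = blob.onHistory.size() - kPatternLength;
    for (int bit = 0; bit < kPatternLength; ++bit)
        id = (id << 1) | (blob.onHistory[start + bit] ? 1 : 0);
    return id;
}
```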
Because maybe, if we could scope it out with more precision than faith-based beliefs, there could be a way to partition the problem more incrementally (and have it solved sooner).
Also I question whether positional tracking is in fact just a matter of maths... the same way I question whether virtual reality is just a matter of binary numbers and photons.
I have done some USB tracing on a Windows machine and have figured out how to read the camera's EEPROM and how to enable the exposure synchronization. There is a block of data at 0x2000 in the EEPROM that I hope encodes the lens distortion parameters:
I have uploaded my code here: https://github.com/pH5/ouvrt
An open driver stack will be a huge benefit for the community, and actually for Oculus themselves too.
One of the greatest community shortcuts I've rediscovered is how a Blender developer offers a ready-made VirtualBox image at http://wiki.blender.org/index.php/User:Ideasman42/ArchLinuxVirtualBox -- literally, you download it, launch it with VirtualBox, and are sitting at a productive Blender dev prompt, replete with all the necessary tools, including an IDE.
Perhaps such an instant-on "VR dev image" is something we could work on as a community, in parallel to (and in support of) ongoing lower-level efforts?
In theory such a tool would immediately foster an influx of new hands to help in general, with even those on Windows and OS X able to boot within a VM and test Linux-side features like distortion.
I'm thinking a remastered Ubuntu LiveCD .iso would be sufficient if it had a pre-installed IDE and relevant github SDK pointers. The README then might only need three bullets:
Step 1: Download a bootable .iso
Step 2: Boot it into any x86_64 machine (or VM container)
Step 3: ~90 seconds later, hit F5 to debug some example C++ code (etc)
Do we think this is a good idea? If so, does anyone have experience remastering LiveCDs yet? And what open source IDE might offer newcomers to VR (and potentially Linux) the most pleasant first-time experience?
I really like this idea
I've made a (very) experimental live respin of Fedora 20, including Nvidia's vendor-supplied binary GPU driver and our entire VR software stack. It boots directly from USB stick, but it obviously can't run inside a VM due to lack of GPU access. The idea was to make it easier to test-drive our software, but it could serve as a starting point.
Interpreting the last 8 bytes of each of the nine 12-byte blocks as doubles yields:
697.363044, 697.464979, 379.949473, 229.406064, -0.507535, 0.327629, 0.000416, 0.001165, -0.126384
Those look suspiciously like the focal lengths and distortion center point that blackguest measured.
I guess the remaining values have to be the radial and tangential distortion coefficients in some order.
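For reference, this is the interpretation used above: nine consecutive 12-byte blocks, with the last 8 bytes of each read as a little-endian IEEE-754 double (so this assumes a little-endian host, which is fine on x86). The exact offset of the first block inside the 0x2000 data region is left as a parameter rather than guessed at:

```cpp
// Pull nine doubles out of an EEPROM dump: 12-byte blocks, last 8 bytes of
// each interpreted as a double. 'firstBlockOffset' is wherever the first
// block actually starts within the dump.
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

std::vector<double> parseCalibrationDoubles(const uint8_t* eeprom, size_t length,
                                            size_t firstBlockOffset)
{
    const size_t kBlockSize = 12;
    const size_t kNumBlocks = 9;

    std::vector<double> values;
    for (size_t i = 0; i < kNumBlocks; ++i) {
        size_t offset = firstBlockOffset + i * kBlockSize + 4;  // skip 4 bytes
        if (offset + sizeof(double) > length)
            break;
        double v;
        std::memcpy(&v, eeprom + offset, sizeof(v));
        values.push_back(v);
    }
    return values;
}
```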
Maybe I'm now grasping at straws, but the 2-byte blocks at 0x200a and 0x200c, interpreted as 16-bit unsigned integers, are:
22526, 44993
Could those be the intrinsic parameters' principal point in units of centipixels?
At this point it is possible to:
* Pull the LED/IMU positions (3D model of the HMD)
* Talk to the HMD, initialize LEDs, sync with the camera
* Identify and relatively stably track LEDs with blob and blink tracking
I've dropped a line to okreylos to see if he has a repo/code dump for his new work, but it seems like w/ the basics tackled, pose estimation is next (and then sensor fusion).
Since jherico's OculusRiftHacking repo hasn't had updates, I forked it to keep track of the existing code/tools: https://github.com/lhl/OculusRiftHacking
I'm happy to take pull requests if there's any extant code that I've missed (I just pulled the projects into the root, but I may do some shuffling if it becomes unwieldy, say rift vs. pose or other reference project code), or just let me know of something that's out there and I'll add it.
I'm documenting information I find in the wiki: https://github.com/lhl/OculusRiftHacking/wiki
Anyone with a github account can add/edit this documentation. Unlike OculusVR, I won't be deleting the wiki without notice (grumble grumble)
Also, to kick off some pose estimation discussion, I found an interesting paper by Fernando Herranz et al. on Camera Pose Estimation using Particle Filters, which describes a particle filtering algorithm for pose estimation. Positional/angular error seems large, but it was done with a 640x480 30Hz camera and there's no sensor fusion. I assume that the optical tracking can be relatively coarse / used primarily for drift correction against the much more accurate IMU data.
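To give a feel for what that kind of filter involves, here's the core predict/weight/resample loop, pared down to position-only tracking. This is only my sketch of the generic technique as the paper describes it, not their implementation; the measurement-likelihood function (which would score a hypothesis by LED reprojection error) is declared but left unimplemented:

```cpp
// Generic particle-filter step, position-only, for illustration. Noise
// magnitude and everything else here are made-up starting points.
#include <random>
#include <vector>

struct Particle {
    double x, y, z;   // hypothesized HMD position (meters, camera frame)
    double weight;
};

// Placeholder: how well does a hypothesized position explain the observed
// LED blobs (e.g. via reprojection error)? Assumed to exist elsewhere.
double measurementLikelihood(const Particle& p);

void particleFilterStep(std::vector<Particle>& particles, std::mt19937& rng)
{
    // 1. Predict: diffuse each particle with process noise (no motion model).
    std::normal_distribution<double> noise(0.0, 0.005);  // 5 mm, assumed
    for (Particle& p : particles) {
        p.x += noise(rng);
        p.y += noise(rng);
        p.z += noise(rng);
    }

    // 2. Weight: score each particle against the current camera frame.
    double total = 0.0;
    for (Particle& p : particles) {
        p.weight = measurementLikelihood(p);
        total += p.weight;
    }
    if (total <= 0.0)
        return;  // degenerate frame; keep the previous particle set

    // 3. Resample: draw a new set of particles proportional to the weights.
    std::vector<Particle> resampled;
    resampled.reserve(particles.size());
    std::uniform_real_distribution<double> pick(0.0, total);
    for (size_t i = 0; i < particles.size(); ++i) {
        double target = pick(rng);
        double acc = 0.0;
        for (const Particle& p : particles) {
            acc += p.weight;
            if (acc >= target) { resampled.push_back(p); break; }
        }
        if (resampled.size() == i)               // floating-point edge case
            resampled.push_back(particles.back());
    }
    particles.swap(resampled);
}
```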
I'm not a hardcore CV guy at all, so I'm looking forward to hopefully hearing some informed feedback here.
As long as a community version doesn't prompt Oculus to stop supporting their Linux drivers, I really like the idea, as I can't hack around with the low-level stuff atm.
I'm a computer science researcher working in medical imaging and machine vision. The positional tracking of the LEDs is a far simpler project than several others I've done recently, so skills won't be a problem... but time is.
I'd be far more inclined to dedicate some of my time to the project if there was a clearer picture of what the goals are and what the progress is.
Is someone heading the project who might be able to set up a quick site hosting some information, a repository of the latest bits of code, a list of what's being worked on by whom, and what is needed?
Oliver Kreylos has written up an excellent summary of his and others' findings so far here:
Part 1:
http://doc-ok.org/?p=1095
Part 2:
http://doc-ok.org/?p=1124
As jherico mentioned, he's hosting a project right now as a meta-repository w/ available code. It looks like okreylos/doc_ok/Oliver is chugging along - he just tweeted a pic w/ pose estimation a couple hours ago.
I've started a doc on the wiki w/ a roadmap: https://github.com/jherico/OculusRiftHacking/wiki/Roadmap
Looks like sensor fusion is the only missing piece once Oliver gets around to packaging his work. Oculus has published a fair amount on how they do things, and there's lots of existing work (see the Sensor Fusion page on the wiki for the results of a quick fishing expedition)
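For anyone who wants a mental model of what that missing piece involves, the simplest form is a complementary filter: trust the IMU for fast orientation changes and slowly pull the estimate toward the optical solution to kill drift. This is just a sketch of the general idea (yaw only, with a made-up blend factor), not what Oculus or Oliver actually do:

```cpp
// Minimal complementary-filter sketch: gyro-integrated yaw is fast but
// drifts; optical yaw is absolute but noisy and low-rate. Each camera frame
// we nudge the fused yaw a small fraction of the way toward the optical
// value. A real implementation would work on full quaternions and position.
#include <cmath>

// Wrap an angle difference into (-pi, pi] so we always correct the short way.
static double wrapAngle(double a)
{
    while (a >   M_PI) a -= 2.0 * M_PI;
    while (a <= -M_PI) a += 2.0 * M_PI;
    return a;
}

// imuYaw:     yaw integrated from the gyro (radians, drifts over time)
// opticalYaw: absolute yaw from the camera pose estimate (radians)
// blend:      how strongly to trust the optical value each camera frame
double fuseYaw(double imuYaw, double opticalYaw, double blend = 0.02)
{
    double error = wrapAngle(opticalYaw - imuYaw);
    return imuYaw + blend * error;
}
```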
After the components are done, there's the matter of wrapping up all the work in a proper statically compiled daemon and packaging it up. I'm assuming that since chunks of the code are GPLv2, that's what the daemon will end up being, but that shouldn't be a problem for people using it, since the way I see it, the daemon will just run as a system service and emit or broadcast tracking data...
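Something like this is all I have in mind for the "emit or broadcast" part: the latest fused pose pushed out over a local UDP socket each update, so client applications never have to link against the (possibly GPLv2) daemon at all. The port number and struct layout below are placeholders, not a proposed protocol:

```cpp
// Output side of the daemon only: send the latest pose to local clients over
// loopback UDP. Port and packet layout are placeholders.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <cstring>

struct TrackingPacket {
    float position[3];      // meters, camera frame
    float orientation[4];   // quaternion (x, y, z, w)
    unsigned int sequence;  // incremented every update
} __attribute__((packed));

int openTrackingSocket(sockaddr_in& dest)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    std::memset(&dest, 0, sizeof(dest));
    dest.sin_family = AF_INET;
    dest.sin_port = htons(47015);                   // placeholder port
    dest.sin_addr.s_addr = inet_addr("127.0.0.1");  // local clients only
    return fd;
}

void publishPose(int fd, const sockaddr_in& dest, const TrackingPacket& packet)
{
    sendto(fd, &packet, sizeof(packet), 0,
           reinterpret_cast<const sockaddr*>(&dest), sizeof(dest));
}
```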
We are getting close to beating Facebooculus to getting the DK2 working on Linux!