First of all, please, Oculus: set up a bug tracker or a mailing list, or something that will provide a direct path for reporting bugs and issues without going through this forum.
I'll post a couple of issues I've found so far with my minimal testing of the new SDK:
1. The LibOVR makefile is broken.
1.1 The shared library is built with a -soname that includes the path from the LibOVR directory to wherever the .so is placed. So for instance, here the soname of the shared library ends up being something like ./Lib/Linux/Release/x86_64/libovr.so.0, which is clearly wrong.
1.2 There is no installation target in the makefile.
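To sketch what the fix looks like (file names, object list, and the PREFIX default here are illustrative, not the SDK's actual ones): the linker should be given only the bare soname, and an install rule can create the usual symlink chain:

```make
# Illustrative Makefile fragment -- names and paths are assumptions.
PREFIX ?= /usr/local
soname = libovr.so.0
lib_so = ./Lib/Linux/Release/x86_64/libovr.so.0.4.3

$(lib_so): $(OBJS)
	$(CXX) -shared -Wl,-soname,$(soname) -o $@ $(OBJS) $(LDFLAGS)

.PHONY: install
install: $(lib_so)
	mkdir -p $(DESTDIR)$(PREFIX)/lib
	cp $(lib_so) $(DESTDIR)$(PREFIX)/lib/$(notdir $(lib_so))
	ln -sf $(notdir $(lib_so)) $(DESTDIR)$(PREFIX)/lib/$(soname)
	ln -sf $(soname) $(DESTDIR)$(PREFIX)/lib/libovr.so
```

The key point is `-Wl,-soname,libovr.so.0` with no path component, so the runtime linker resolves the library through the normal search path instead of a path baked in at build time.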
I've uploaded a quick fix for both these issues here:
http://mutantstargoat.com/~nuclear/tmp/libovr_makefile_install.patch
2. OVR_CAPI_GL.h declares the Disp member of ovrGLConfigData with the type _XDisplay*, which does not exist as a type name in C. There is a struct _XDisplay (notice: not typedefed under that name), and a typedef Display for it. So you'll either have to declare it as struct _XDisplay* Disp; or, more correctly, without relying on internal implementation-specific names: Display* Disp;
Obviously none of the test programs within the Oculus SDK are written in C, or this would have been caught: in C++ a struct tag by itself names the type, so _XDisplay* compiles there, while in C it does not.
3. In CAPI_GL_DistortionRenderer.cpp, in DistortionRenderer::Initialize, if the user didn't set the Disp member of ovrGLConfigData, the SDK attempts to open a new connection to the X server by calling XOpenDisplay(NULL). That is a mistake: the application may well be connected to a completely different X server than the one named by the DISPLAY environment variable, which is what XOpenDisplay falls back to when given a null argument. The correct approach would be to call glXGetCurrentDisplay() instead, which returns the exact connection already used by the client program.
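For illustration, the initialization path could be structured along these lines. This is a non-compilable sketch, not the SDK's actual code; it assumes the application has already made a GLX context current, and the cfg->OGL.Disp spelling is only meant to evoke the ovrGLConfig layout:

```c
/* Sketch: prefer the connection the application is already using. */
Display *disp = cfg->OGL.Disp;
if (!disp) {
    /* same connection as the client program's current GLX context */
    disp = glXGetCurrentDisplay();
}
if (!disp) {
    /* No current context either. Only now would a fresh connection be
     * justified at all -- and arguably even this should be an error. */
    disp = XOpenDisplay(NULL);
}
```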
I'll post more issues when I find them, but again, please provide a more direct avenue of reporting bugs.
This is not foolproof, as some systems use the gold linker (ld.gold), which doesn't follow the dependencies of libraries being linked, and even with the original GNU linker, dependencies can't be carried by the static version of the library anyway. But it doesn't hurt.
Ideally you would also provide a pkg-config file (libovr.pc), installed in $(PREFIX)/share/pkgconfig by the install rule of the makefile. pkg-config files list the command-line options required during compilation and linking, so that running pkg-config --libs libovr in the application's makefile outputs everything needed to link all of libovr's dependencies. This is de facto standard practice for UNIX libraries with complicated dependencies, so that the application doesn't have to know what each library's internal dependencies are.
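For example, a minimal libovr.pc might look like this. The dependency list in Libs.private is a guess at what LibOVR links against on Linux, so treat it as a sketch rather than the definitive set:

```
prefix=/usr/local
exec_prefix=${prefix}
libdir=${exec_prefix}/lib
includedir=${prefix}/include

Name: libovr
Description: Oculus VR SDK library
Version: 0.4.3
Libs: -L${libdir} -lovr
Libs.private: -lX11 -lXrandr -lGL -ludev -lpthread -lrt
Cflags: -I${includedir}
```

With that installed, an application makefile just runs pkg-config --cflags --libs libovr and gets the correct flags, regardless of what LibOVR's internal dependencies are on any given system.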
For now I'm maintaining a mercurial repository of the Oculus SDK with my changes so far: http://nuclear.mutantstargoat.com/hg/ovr_sdk/ If you clone it and run "hg diff -r 0:tip" you'll get a patch of all my changes from the original OVR 0.4.3 release.
webpage - blog - youtube channel
Co-author of Oculus Rift in Action
CMake is OK, I've used it a few times, but I'm not a huge fan; I prefer hand-crafted makefiles. Either way, this is not the venue for that argument.
webpage - blog - youtube channel
1) LibOVR causes a segfault in fglrx/Catalyst (AMD's proprietary driver) by incorrectly passing a GLX_FBCONFIG_ID as a GLXFBConfig to glXGetVisualFromFBConfig. I'm guessing that in nVidia's proprietary driver, GLXFBConfigs happen to just be the ID, but the GLX spec makes it fairly clear that you shouldn't assume that. It seems that in AMD's driver a GLXFBConfig is actually a pointer to a struct so you'll get a segfault when glXGetVisualFromFBConfig tries to dereference it. I don't really have any experience working directly with Xlib/GLX, but I've hacked together a modified implementation of SDKWindow::getVisualFromDrawable that seems to work.
    bool SDKWindow::getVisualFromDrawable(GLXDrawable drawable, XVisualInfo* vinfoOut)
    {
        Display* display = glXGetCurrentDisplay();

        /* look up the drawable's FBConfig ID, then resolve it back to a
         * real GLXFBConfig handle instead of passing the raw ID around */
        unsigned int value;
        glXQueryDrawable(display, drawable, GLX_FBCONFIG_ID, &value);

        const int attribs[] = {GLX_FBCONFIG_ID, (int)value, None};
        int screen;
        glXQueryContext(display, glXGetCurrentContext(), GLX_SCREEN, &screen);

        int numEls;
        GLXFBConfig* config = glXChooseFBConfig(display, screen, attribs, &numEls);
        if (config) {
            if (numEls) {
                XVisualInfo* chosen = glXGetVisualFromFBConfig(display, *config);
                *vinfoOut = *chosen;
                XFree(chosen);
                XFree(config);
                return true;
            }
            XFree(config);
        }
        return false;
    }

2) With the above change, the OculusWorld demo doesn't crash, but it seems to be a crapshoot as to whether the window ends up on the correct display. Even when it's on the correct display, it's not rotated to account for the portrait orientation of the Rift's display. I haven't attempted to work around it by rotating the display using xrandr since fixing the segfault, but oculusd seemed pretty unhappy when I tried that before, and the README suggests it's a bad idea.
My initial attempts at setting up a separate X11 screen for the Rift have failed so I don't know if that would fix the rotation issue.
Another issue: oculusd is not really a daemon.
It would be useful if oculusd was a daemon. One would start it from init as user nobody, or a special oculus user, or whatever, and it would sit there until applications need it (exactly like a Windows "service" does). To do this, oculusd must be able to "daemonize" itself by double-forking when it starts, to shed its controlling terminal. Then an init script can easily be created to let it start during system bootup.
Possible problems: if oculusd needs any X11 stuff, it might be trickier to make it a proper system daemon. In my spacenavd project I solve this by running an X11 detection routine using inotify on the X11 socket, deferring any X11-related initialization until after the X server has started running.
webpage - blog - youtube channel
edit: ovrHmd_ConfigureRendering is responsible for the shader program state leak
Other instances of state creep I found in the previous SDK version were leaving VBOs/IBOs bound, making straight-up client-side vertex array programs fail. I haven't tested whether this problem persists in this version of the SDK as well.
webpage - blog - youtube channel
Presumably this is why they're using a new shared context within the distortion renderer. Are you seeing state that's moving across that context boundary?
Co-author of Oculus Rift in Action
Are they? As I said, I haven't checked if the VBO problem persists with the new SDK, and I haven't looked into the source of LibOVR 0.4.3 yet at all.
My report of the shader program state creep was merely from outside observation: the simple fixed-function example program[1] I wrote last month using 0.4.2 failed to draw anything other than the grey-ish framebuffer clear color after updating to 0.4.3, and when I added a glUseProgram(0) call after ovrHmd_EndFrame, the problem went away. There are no other calls to glUseProgram anywhere, as I wasn't using shaders at all.
[1]: http://nuclear.mutantstargoat.com/hg/oculus2 (mercurial repo)
webpage - blog - youtube channel
Usage: ./oculusd [options]
Options:
-h | --help Print this message
-p | --pid Location of PID file
-d | --daemonize Daemonize
Have you tried using the command line argument to daemonize it, and are having an issue with it, or did you not realise that the argument was needed?
Yes. Actually most of the Linux issues seem to stem from difficulty in creating the shared context correctly. But once they get it working, there shouldn't be any more issues of state creep.
Co-author of Oculus Rift in Action
Ah, right, I missed that completely. Usually it's the other way around: daemons become daemons by default, and a -d argument keeps them in the foreground. Thanks for pointing it out; we can cross this one off the list then.
webpage - blog - youtube channel
Thanks, it seems to work here too. I was also getting segfaults on Debian Sid (fglrx driver package version 1:14.9+ga14.201-1), and your patch solves that for now.
This actually appears to work without issue for me with the demo:
I do have to manually send the window to the oculus screen but my window manager (awesomewm) requires me to do so anyway.
However, the RiftConfigUtil is very unhappy and returns "Error: Please do not rotate your rift's screen.", then segfaults:
A similar patch is likely required.
AFAIK that would require you to use a secondary video card and start an X server that's only connected to that, because you can only have one FGLRX driver instance bind to one PCIe card. I do have an onboard Intel GPU so I could in theory do this, but it's too much trouble for me. Best to try and fix these issues instead.
Really? You can have two 3D-accelerated X screens (as many as there are video outputs, actually) on my NVIDIA card using the nvidia driver. I'm not doubting you, since we're talking about different architectures and drivers and I'm only familiar with NVIDIA; I was just surprised.
Unless there is some other way of running two X servers that I'm not aware of.