I agree with that. It's been some time since you announced a new version of the SDK, and we think it may not be worth developing anything now against the current version if there will be a lot of changes in the API.
The only wasted time in starting now would be implementing the distortion shader and head-neck model, which won't be needed in the next SDK. The actual stereo rendering part won't change: just use the provided eye and projection matrices, and when you update they will continue to work properly.
The biggest change in the next SDK will be a replacement for the DX/OpenGL Present call, which should be just a one-line code change. That one call will handle all of the distortion, display timing, and time-warping.
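For reference, this is roughly what the per-frame loop looks like in the 0.3.1 preview's C API (sketched from the OculusRoomTiny sample; hmdDesc and eyeTexture are app-side variables as in that sample, and the exact signatures may differ slightly):

// Per-frame loop with the 0.3.x C API (SDK distortion path).
ovrFrameTiming timing = ovrHmd_BeginFrame(hmd, 0);
for (int i = 0; i < ovrEye_Count; i++)
{
    ovrEyeType eye = hmdDesc.EyeRenderOrder[i];
    ovrPosef pose = ovrHmd_BeginEyeRender(hmd, eye);
    // ... render the scene for this eye into its render target ...
    ovrHmd_EndEyeRender(hmd, eye, pose, &eyeTexture[eye].Texture);
}
// This one call replaces Present: it applies distortion, waits out the
// frame timing, and performs time-warp.
ovrHmd_EndFrame(hmd);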
There may be other code differences. According to the preview slides, the next SDK is written in C with C++ helpers instead of pure C++.
But it's still pretty minor, code-wise. There's still a lot of prototyping that can be done with the current SDK even without position tracking and time-warping.
I suspect it's far more likely that they're simply putting a C interface on top of the existing C++ codebase.
The driver library might be C++, but the C interface sits on top of that, and then the C++ helpers sit on top of the C interface.
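As a generic sketch of the layering being described (illustrative only, not Oculus's actual source), the pattern is a C++ driver core, a flat extern "C" API exported over it, and C++ convenience helpers wrapping the C API again:

// C++ driver layer (internal to the library)
class SensorImpl {
public:
    float yaw() const { return 0.0f; } // stub value for illustration
};

// Flat C interface exported on top of the C++ driver
extern "C" float sensor_GetYaw(void* sensor) {
    return static_cast<SensorImpl*>(sensor)->yaw();
}

// C++ helper layered back on top of the C interface
class Sensor {
public:
    explicit Sensor(void* h) : handle(h) {}
    float yaw() const { return sensor_GetYaw(handle); }
private:
    void* handle;
};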
That's interesting. I wonder where they're planning on putting the Sensor Fusion. The most obvious place would be the OVR Service.
Bah! I wish they'd just release the updated core SDK and do more incremental improvements. It's frustrating being blocked on some sections of the book because I know they're going to be impacted by API changes.
It is a significant change from the old API; however, it shouldn't take too long to update your project. More news to come. Thanks.
The slightly worrying thing: OVR Service?
I hope the Rift isn't going to require driver installation to work. My college doesn't give installation rights to staff or students. For example, I can't use my 3Dconnexion devices, Kinect, or Novint Falcon on the classroom computers because I'm not allowed to install drivers. I need to use my own laptop or Surface Pro 2 for stuff like that.
(Luckily my laptop kicks ass, so I don't mind, but it limits students.)
In terms of drivers, this may be necessary for certain features (e.g. for the camera).
I get this:
1>d:\documents\ovr_sdk_win_0.3.1\oculussdk\libovr\src\capi\d3d1x\CAPI_D3D1X_DistortionRenderer.cpp(33): fatal error C1083: Cannot open include file: '../Shaders/Distortion_vs.h': No such file or directory
I basically added it wherever I could add it, but still nothing. Visual Studio C++ 2010 Express. I've put my compile issues in a separate thread.
Kind regards,
Patrick
So, predictably, not only is the SDK 0.3.1 Preview statically linked, but they are releasing it for Windows only. No sign of Linux support, and no mention of any plans for it either. :evil:
Is this the start of the rot following the FB buyout?
I think their main focus right now is on the Windows platform, because things are running much more smoothly there compared to Linux. I tried both, and rendering on the Windows machine seems much smoother. I haven't tried it yet with the latest Ubuntu 14.04 (only 12.04 LTS), but I will over the next few days.
And as they stated, it's just a PRE-view. That probably means a normal release is supposed to support all platforms.
Mac and Linux support has been omitted from this release only, so no worries. Nevertheless, I can't wait to get my hands on the Mac version; the C API is much nicer to wrap than the C++ version. Any date on that, maybe?
I moved Doom 3 BFG to SDK 0.3.1 and the tracking (the only thing I am currently using, besides detection) has gone completely haywire and totally random. Here's the code:
#include "OVR_CAPI.h"
#include "Kernel/OVR_Math.h"
using namespace OVR;
...
// *** Oculus Sensor Initialization
ovr_Initialize();
// Create DeviceManager and first available HMDDevice from it.
// Sensor object is created from the HMD, to ensure that it is on the
// correct device.
//pManager = *DeviceManager::Create();
// We'll handle its messages in this case.
//pManager->SetMessageHandler(this);
// Release Sensor/HMD in case this is a retry.
//pSensor.Clear();
//pHMD.Clear();
hmd = ovrHmd_Create(0);
if (hmd)
{
    // Get more details about the HMD
    ovrHmd_GetDesc(hmd, &hmdDesc);
    if (ovrHmd_StartSensor(hmd, ovrHmdCap_Orientation | ovrHmdCap_YawCorrection | ovrHmdCap_Position | ovrHmdCap_LowPersistence, 0)) {
        hasOculusRift = true;
        hasHMD = true;
    }
}
else
{
    common->Warning("Oculus Rift not detected.\n");
    LoadVR920();
}
...
if (hasOculusRift && hmd) {
    //float predictionDelta = in_sensorPrediction.GetFloat() * (1.0f / 1000.0f);
    // Query the HMD for the sensor state at a given time. "0.0" means "most recent time".
    ovrSensorState ss = ovrHmd_GetSensorState(hmd, 0.0);
    if (ss.StatusFlags & (ovrStatus_OrientationTracked | ovrStatus_PositionTracked))
    {
        Posef pose = ss.Predicted.Pose;
        float y = 0.0f, p = 0.0f, r = 0.0f;
        pose.Orientation.GetEulerAngles<Axis_Y, Axis_X, Axis_Z>(&y, &p, &r);
        roll = -RADIANS_TO_DEGREES(r);  // ???
        pitch = -RADIANS_TO_DEGREES(p); // should be degrees down
        yaw = RADIANS_TO_DEGREES(y);    // should be degrees left
    }
}
It's basically the same as the previous code, only using SDK 0.3.1.
Is that because I'm not calling any of the timing functions in the rendering loop?
OK, who left a breakpoint inside ovrHmd_BeginFrameTiming?
I guess I won't be running this in Debug mode.
EDIT: Adding ovrHmd_BeginFrameTiming and ovrHmd_EndFrameTiming around the sensor reading didn't work. It's still acting like a random number generator.
EDIT: Update: I've fixed the problem by turning off prediction. Prediction doesn't seem to work at all if you aren't using the rendering functions, and there are apparently no sanity checks in their algorithms to stop it predicting on garbage data.
Temporarily change ss.Predicted.Pose to ss.Recorded.Pose if you are having this problem. But try to eventually implement rendering the proper way, because turning off prediction and rendering incorrectly (like I'm currently doing) is very bad.
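In other words, with the reading code from the snippet above, the stopgap is just:

// Stopgap: use the unpredicted reading until frame timing is done properly.
ovrSensorState ss = ovrHmd_GetSensorState(hmd, 0.0);
Posef pose = ss.Recorded.Pose; // instead of ss.Predicted.Pose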
2EyeGuy: I've seen the same erratic prediction behaviour when I disabled the call to ovrHmd_EndFrame() for debugging purposes. It's pretty wild (this forum needs a barf smiley).
I just ran into this (predicted values being chaotic while recorded values are fine) while porting my Ogre Oculus wrapper to the new SDK.
It seems the problem is ovrHmd_GetSensorState(hmd, 0.0) (which is what the SDK docs show).
The value of 0.0 doesn't seem to give the pose for the current time as documented.
Instead, it works correctly if you use: ovrHmd_GetSensorState(hmd, ovr_GetTimeInSeconds())
I haven't looked into the source to see what's going on, or whether I left some initialisation step out.
Yeah I noticed that when running my 'tracker test' example, which renders an outer transparent sphere with the current reading, and an inner solid sphere with the prediction reading. If I pass 0.0 into the method, the inner sphere starts off peaceful and then begins to spasmodically twitch around with increasing fury as the small noise values in the acceleration and angular velocity get increasingly amplified.
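So the minimal change that makes prediction behave, per the posts above (hmd as created earlier in the thread):

// Pass a real absolute time instead of 0.0 when polling the tracker.
ovrSensorState ss = ovrHmd_GetSensorState(hmd, ovr_GetTimeInSeconds());
Posef pose = ss.Predicted.Pose; // the prediction delta is now ~0, so this stays sane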
In the OculusRoomTiny example, they use ScanoutMidpointSeconds from the ovrFrameTiming returned by ovrHmd_BeginFrame() (or ovrHmd_BeginFrameTiming() if not using SDK rendering).
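Sketched out (names as in the 0.3.1 headers and sample; treat the exact signatures as approximate):

// Predict to the midpoint of the upcoming scanout rather than "now".
ovrFrameTiming timing = ovrHmd_BeginFrameTiming(hmd, 0);
ovrSensorState ss = ovrHmd_GetSensorState(hmd, timing.ScanoutMidpointSeconds);
// ... render using ss.Predicted.Pose ...
ovrHmd_EndFrameTiming(hmd);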
To facilitate prediction, ovrHmd_GetSensorState takes absolute time, in seconds, as a second argument. This is the same value as reported by the ovr_GetTimeInSeconds global function. In the example above, we specified a special value of 0.0 that is used to request the current time.
I just checked the code: ovrHmd_GetSensorState doesn't check for special values of time. The argument gets passed straight to SensorFusion::GetSensorStateAtTime(double absoluteTime), which uses it as-is to compute the prediction delta.
So a value of 0.0 results in a negative delta time (increasing in magnitude over time). Not healthy for prediction.
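In code terms, the failure mode is roughly this (an illustrative sketch, not the actual LibOVR source):

// Inside something like SensorFusion::GetSensorStateAtTime(absoluteTime):
double pdt = absoluteTime - ovr_GetTimeInSeconds(); // 0.0 - now => large negative delta
// The pose is then extrapolated by pdt using angular velocity and
// acceleration, so a delta that grows more negative every frame amplifies
// sensor noise instead of predicting slightly ahead.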
Based on the sensor state structure returned, I suspect they originally wrote that documentation against a version of ovrSensorState that returned only a single pose. The current version returns both the instantaneous pose and a predicted pose. So to use the predicted pose you have to pass in a real absolute time, but for the 'actual' (recorded) pose it doesn't matter what you pass in.