
Free DX 12 Rift Engine Code

lamour42
Expert Protege
Hi,

if you want to write code for the Rift using DirectX 12, you might want to take a look at the code I have published on GitHub: https://github.com/ClemensX/ShadedPath12.git

The sample engine is extremely limited in its draw abilities: I can only draw lines! But it may serve as a learning playground for DirectX 12 and Rift programming.

I find it fascinating how a bunch of simple lines suddenly becomes great when you can walk around them and view them from any direction while wearing the Rift!

The current state of the code is a first step in porting my older DX 11 engine to DX 12.
You are welcome to use any of the code in your own projects.

I want to express my gratitude to galopin, who came up with a detailed 8-step guide on how to combine DirectX 12 with Oculus SDK rendering; see this thread: https://forums.oculus.com/viewtopic.php?f=20&t=25900
When I found out that calling the Oculus API ovr_CreateSwapTextureSetD3D11 on a D3D11On12Device throws null-pointer exceptions, I would have given up if he had not given this advice!

Some features of the code example:

  • Engine / sample separation. Look at Sample1.cpp to see what you can currently do with this engine and how it is done.

  • Oculus Rift support (head tracking and rendering). See vr.cpp

  • Post-effect shader: copies the rendered frame to a texture. Rift support is built on top of this feature.

  • Uses threads to update GPU data. See LinesEffect::update()

  • Synchronizes GPU and CPU via fences

  • Free-float camera: use WASD or arrow keys to navigate, or just walk/turn/duck while wearing the Rift


Any feedback welcome.
56 REPLIES

cybereality
Grand Champion
"glaze" wrote:
"cybereality" wrote:
That's amazing!

Maybe I will revive my engine project and update to DX12.


I liked your engine blog posts.


Thanks. I'm glad someone appreciated them.

galopin
Heroic Explorer
"cybereality" wrote:
"glaze" wrote:
"cybereality" wrote:
That's amazing!

Maybe I will revive my engine project and update to DX12.


I liked your engine blog posts.


Thanks. I'm glad someone appreciated them.


A URL?

cybereality
Grand Champion
It's on my blog ( http://www.cybereality.com ).

Just click the three-line icon in the top right corner to access the 3D engine series.

Honestly, most of it was just my thoughts on development (not a lot of code), but I am considering doing some more posts with pieces of code, depending on how I feel about reviving the project.

lamour42
Expert Protege
Bone animation is in. See the ObjectViewer app.
Be warned, however, that until I provide documentation and tools for mesh creation, you will have a hard time creating your own animated objects.
If you are interested, my content creation chain is this:

Blender --> Collada Export --> parse Collada XML with Java and produce custom binary .b Format --> engine reads .b files at runtime
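As an illustration of the last step of that chain, here is a hypothetical reader for such a custom binary format. The real .b layout is not documented in this thread, so the header here ([uint32 vertexCount][vertexCount × 3 floats]) is purely an assumption for the sketch, as are the `MeshData`/`readMesh` names.

```cpp
#include <cstdint>
#include <cstring>
#include <stdexcept>
#include <vector>

// Hypothetical sketch of a reader for a custom binary mesh format like the
// ".b" files mentioned above. The assumed layout is
// [uint32 vertexCount][vertexCount * 3 floats], purely for illustration.
struct MeshData {
    std::vector<float> positions;   // x, y, z per vertex
};

MeshData readMesh(const std::vector<uint8_t>& bytes) {
    if (bytes.size() < sizeof(uint32_t))
        throw std::runtime_error("truncated header");
    uint32_t vertexCount = 0;
    std::memcpy(&vertexCount, bytes.data(), sizeof(vertexCount));
    const size_t payload = size_t(vertexCount) * 3 * sizeof(float);
    if (bytes.size() < sizeof(uint32_t) + payload)
        throw std::runtime_error("truncated vertex data");
    MeshData mesh;
    mesh.positions.resize(size_t(vertexCount) * 3);
    std::memcpy(mesh.positions.data(), bytes.data() + sizeof(uint32_t), payload);
    return mesh;
}
```

The point of a fixed binary format is that the runtime load is a couple of memcpys, with all the Collada XML parsing paid once at build time.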

Still, if you are interested in looking at code that does CPU-bound bone animation, you might want to take a look. Of course, GPU-bound animation is the ultimate goal, but that will come (much) later.

Here is the simplest example you can get: a single joint.


An animated worm:

cybereality
Grand Champion
Awesome!

lamour42
Expert Protege
"cybereality" wrote:
Awesome!


Thanks Cyber! It really helps to get some encouraging words along the way! 🙂

In the meantime I have added ambient, directional, and point lights to the engine, as well as support for background music and directional sound.

Although there are many, many things I would like to add and enhance (like shadows and terrain rendering), I think there is now enough functionality available to try for more entertaining demos. I will do exactly that, and certainly fix and enhance the engine along the way while building the demo.

cybereality
Grand Champion
@lamour42: Was there any trick to getting the 1 million objects running? I finally got something somewhat working but performance seems worse than with DX11. Even with just around 2,000 cubes, the performance is tanking. The code is really hacked together at this point, so I'm probably doing some silly stuff, but maybe you have some advice.

galopin
Heroic Explorer
"cybereality" wrote:
@lamour42: Was there any trick to getting the 1 million objects running? I finally got something somewhat working but performance seems worse than with DX11. Even with just around 2,000 cubes, the performance is tanking. The code is really hacked together at this point, so I'm probably doing some silly stuff, but maybe you have some advice.


There is no real trick to being fast; it should come naturally. You really need to put in a lot of effort to be slower than DX11. One possible cause is recycling an allocator/command list that is still in use by the GPU, fence-waiting for it to complete; doing so creates bad idle bubbles.
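To make the allocator-recycling pitfall concrete, here is a minimal sketch of the usual frame-ring fence pattern, with plain counters standing in for the real D3D12 objects (ID3D12Fence, command allocators) so it compiles anywhere; the function name and ring size are made up for illustration.

```cpp
#include <cstdint>
#include <vector>

// Sketch of the frame-ring fence pattern that avoids recycling an
// allocator/command list still in use by the GPU. Plain counters model a
// worst-case GPU that is never ahead: `gpuCompleted` stands in for
// ID3D12Fence::GetCompletedValue(), and the slot update for
// ID3D12CommandQueue::Signal(). Returns how often the CPU had to wait.
int countCpuWaits(int totalFrames, int ringSize) {
    std::vector<uint64_t> slotFence(ringSize, 0); // fence value per frame slot
    uint64_t gpuCompleted = 0;   // models fence->GetCompletedValue()
    uint64_t nextValue = 1;
    int waits = 0;
    for (int f = 0; f < totalFrames; ++f) {
        uint64_t& slot = slotFence[f % ringSize];
        if (gpuCompleted < slot) {  // GPU still owns this slot's allocator:
            ++waits;                // real code blocks on an event here
            gpuCompleted = slot;    // (SetEventOnCompletion + wait)
        }
        // ... safe now: reset allocator, record and execute command list ...
        slot = nextValue++;         // queue->Signal(fence, nextValue)
    }
    return waits;
}
```

With a ring of one slot the CPU waits on the fence every single frame after the first, which is exactly the idle bubble described above; a ring of two or three slots lets recording and GPU execution overlap.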

I will put my sample online at some point too, but for the moment it is my platform for reporting bugs to Nvidia/AMD/Microsoft. Things like Xbox One code can't go public and will have to be stripped out too 🙂

For Nvidia, I highly recommend driver 364.xx; they fixed some very bad memory corruption in the D3D12 drivers 🙂

lamour42
Expert Protege
Hi,

I disagree somewhat with galopin here. I find it very easy to be slower with DX12 than with DX11, because the driver layer is much thinner: things that may have been done in parallel under DX11 won't automatically be parallel in DX12.

To be fast, you should make sure that everything large is already on the GPU for processing, so that only minimal data, like the WorldViewProjection matrix and maybe some parameters, needs to be transferred to the GPU before a draw call.

Or, if you have to transmit larger amounts of data (e.g. vertex data), you have to make sure the transfer runs in its own thread so that everything else does not have to wait.
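A minimal sketch of that idea, assuming nothing about the engine's actual classes: a worker thread fills a staging buffer (standing in for a mapped D3D12 upload heap) while the caller keeps going, and the caller joins before the draw that consumes the data. `UploadJob` and its members are made-up names for illustration.

```cpp
#include <algorithm>
#include <thread>
#include <vector>

// Illustrative only: stage a large vertex copy on a worker thread so the
// main loop does not stall. `staging` stands in for a mapped upload heap.
struct UploadJob {
    std::vector<float> staging;
    std::thread worker;

    // Caller must keep `vertices` alive until finish() returns.
    void start(const std::vector<float>& vertices) {
        staging.resize(vertices.size());
        worker = std::thread([this, &vertices] {
            std::copy(vertices.begin(), vertices.end(), staging.begin());
        });
    }

    // Join before issuing the draw call that reads the staged data.
    void finish() {
        if (worker.joinable()) worker.join();
    }
};
```

In a real engine you would pair this with a fence so the GPU copy from the upload heap is also known to be finished, not just the CPU-side staging.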

And of course C++ comes into play big time. At one point I had very bad performance just because I iterated over my vertex data with a range-based for loop over a vector, forgetting to make the auto variable a reference.
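That pitfall is easy to reproduce. This small sketch (with a made-up vertex type, not the engine's) shows why it hurts: `auto` copies each element, so the loop burns time copying and its writes are lost, while `auto&` touches the stored element directly.

```cpp
#include <vector>

// A stand-in vertex type, not ShadedPath12's actual layout.
struct Vertex { float pos[3]; };

// Range-for with `auto`: every iteration copies the Vertex,
// and any modification only affects the temporary copy.
void scaleXByValue(std::vector<Vertex>& verts, float s) {
    for (auto v : verts)
        v.pos[0] *= s;      // lost: writes go to the copy
}

// Range-for with `auto&`: no copy, writes go to the vector's element.
void scaleXByReference(std::vector<Vertex>& verts, float s) {
    for (auto& v : verts)
        v.pos[0] *= s;
}
```

On a large vertex type the by-value loop also costs a full copy per element per frame, which is exactly the kind of silent overhead a profiler surfaces.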

To find performance bottlenecks, I strongly recommend trying the diagnostic features of Visual Studio. They give a very detailed look at everything that goes on in the CPU and GPU.

galopin
Heroic Explorer
Your C++ mistake is irrelevant to DX11 vs. DX12 here 🙂 My point is that there is no more hidden cost in the API, no black magic in the Present black box, and not a single costly call on the rendering side of things (minus multiple SetDescriptorHeaps inside a command list on Intel…). All the costly things are on the creation side, which is irrelevant to the main loop.

Most good practices from DX11 are still relevant: keep big things prepared once and for good in local (GPU) memory, and try to update as little as possible, since the GPU has slow bandwidth from main RAM. It is true that DX12 allows multithreaded feeding of command lists, something deferred contexts in DX11 failed to deliver improvements for. But in raw processing power, feeding a single command list, similar to a DX11 implementation, is still far faster. The fully set-up PSO allows a brainless driver now.

So unless you explicitly wait on a fence every frame and kill the CPU/GPU parallelism, you cannot be slow!

EDIT: haha "intel inside"…