[I copied this post from my old site.]
I spent some time in 2014 working on this for fun.
The reason for working on this was mostly that I thought it was cool. I was planning on writing games with the resulting renderer, but that never happened. :)
There are three kinds of artifacts in the video. The holes are caused by my system only loading triangles and not tessellating the N-gons in the source model. The stuck-image effect when the head moves quickly is a result of the Timewarp. Finally, I don’t remember what the noisy surfaces are, but I believe they are overlapping polygons.
The ray tracer was written on top of LibOVR pre-6.0. I started with a DK1 and later migrated to a DK2. I was lucky to have been around at a time when the hardware was relatively simple. Understanding how the DK1 worked was really easy, and migrating to the DK2 was an incremental step.
The most important things you need to write a ray tracing renderer for the Rift (circa 2014) are:
- the distortion function
- head orientation
- position information (DK2 only)
That’s pretty much it. The distortion function got a little more complicated for the DK2. The DK1 used a simple barrel distortion, straight from what you can find on Wikipedia, while the DK2 used a curve with a more intimidating name: an 11-point Cubic Hermite spline.
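As a sketch of what the DK1-style correction looks like, here is the textbook barrel-distortion polynomial. The coefficients below are made-up placeholders, not the SDK’s real values:

```python
# Hypothetical distortion coefficients -- the real DK1 values came from
# the SDK; these are placeholders for illustration only.
K0, K1, K2 = 1.0, 0.22, 0.24

def barrel_distort(x, y):
    """Classic polynomial barrel distortion: push a point radially
    outward by a polynomial in r^2 (the textbook/Wikipedia form)."""
    r2 = x * x + y * y
    scale = K0 + K1 * r2 + K2 * r2 * r2
    return x * scale, y * scale
```

Points near the center are left almost untouched, and the outward push grows with the distance from the lens axis.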
To get the distortion function, we needed to look in the SDK. The DK1 had the distortion function right in the docs, which even described how to write a shader to integrate VR into your game or app. I really loved its simplicity. The DK2 was trickier: they moved to a mesh-based approach for performance reasons, but it was still possible to figure out the actual analytical distortion function. I am not sure I could still get the distortion function today if I tried, since a lot of the Rift’s core functionality has moved into the firmware.
Lens distortion correction is super easy with ray tracing once you have a formula for the correction. Ray tracing is the modelling of light trajectories, so to get distortion correction, you just apply the function to the ray as soon as it leaves the camera.
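A minimal sketch of that idea, with illustrative names and a pluggable `distort` function (this is not the original code, just the shape of it):

```python
import math

def make_distorted_ray(px, py, width, height, screen_w, screen_h,
                       eye_to_lens, distort):
    """Build a primary ray for pixel (px, py), applying the lens
    correction as the ray leaves the camera. All names and units
    (meters) here are illustrative."""
    # Map the pixel to physical coordinates on the lens plane,
    # centered on the lens axis.
    x = (px / width - 0.5) * screen_w
    y = (py / height - 0.5) * screen_h
    # Warp the point the ray passes through with the correction function.
    dx, dy = distort(x, y)
    # Normalized direction from the eye through the warped point.
    n = math.sqrt(dx * dx + dy * dy + eye_to_lens * eye_to_lens)
    return (dx / n, dy / n, eye_to_lens / n)
```

With an identity `distort`, the center pixel maps straight down the lens axis; a barrel function bends the off-center rays outward.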
We also need the distance from the eyes to the lenses, the user’s IPD (interpupillary distance), and the physical dimensions of the screen. These values define the view frustum.
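For illustration, a simplified sketch of one eye’s asymmetric frustum, expressed as tangents of the half-angles. The names and the geometry simplifications are mine, not the SDK’s exact math:

```python
def eye_frustum(eye_x, screen_left, screen_right, screen_h, dist):
    """Tangents of the asymmetric frustum half-angles for one eye at
    horizontal position eye_x (e.g. panel center minus IPD/2 for the
    left eye), a distance `dist` behind its half of the screen."""
    tan_left = (screen_left - eye_x) / dist
    tan_right = (screen_right - eye_x) / dist
    tan_up = (screen_h / 2.0) / dist
    return (tan_left, tan_right, -tan_up, tan_up)
```

Because each eye sits off-center relative to its half of the panel, the left and right tangents generally differ in magnitude, which is exactly why the frustum has to be asymmetric.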
Regarding GPGPU languages, I started with Compute Shaders. I liked the idea of GPGPU shaders in OpenGL. However, I ended up switching to OpenCL when an Nvidia driver update made my ray tracer 30% slower. There were also annoying compiler bugs.
For the actual ray tracer, I am loading a single mesh from an OBJ file, splitting it into chunks using an octree, loading those chunks as individual leaves into a BVH (Bounding Volume Hierarchy), then discarding the octree. The octree step exists because the triangles may be scattered randomly in the file, and we want adjacent triangles close together when building the BVH. My BVH implementation is based on the Physically Based Rendering book. It uses the SAH (Surface Area Heuristic) construction method.
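The SAH cost that drives this kind of split selection can be sketched as follows (PBRT-style form; the cost constants here are placeholders):

```python
def sah_cost(left_area, left_count, right_area, right_count, parent_area,
             traversal_cost=1.0, isect_cost=1.0):
    """Surface Area Heuristic cost of a candidate BVH split:
    expected cost = traversal cost
                  + P(hit left)  * N_left  * intersection cost
                  + P(hit right) * N_right * intersection cost,
    where each hit probability is the child's surface area divided by
    the parent's."""
    return traversal_cost + isect_cost * (
        left_area / parent_area * left_count +
        right_area / parent_area * right_count)
```

The builder evaluates this for the candidate splits along an axis and keeps the cheapest one, falling back to a leaf when no split beats the cost of intersecting the primitives directly.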
For VR, you need to take into account the chromatic aberration of the lenses. This was done in GL as a post-processing pass, with a pixel shader that separated the R and B color components in polar coordinates, each scaled by its own constant and by the distance from the center of the lens. In other words, for any given pixel, the R and B color components would be multiplied by a number, and that number gets larger the further the pixel is from the center of the lens. The constants were found “heuristically”, which is a way of saying “I tweaked them until things looked good”.
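A sketch of that sampling scheme, with made-up constants standing in for the hand-tweaked ones:

```python
# Hypothetical per-channel scale constants -- the originals were tweaked
# by eye until things looked good.
CA_RED, CA_BLUE = 0.996, 1.014

def chroma_sample_coords(u, v, cx, cy):
    """Where to sample the R and B channels for output pixel (u, v):
    scale the offset from the lens center (cx, cy) by a per-channel
    constant, so the channel separation grows with distance from the
    center, countering the lens's own dispersion."""
    du, dv = u - cx, v - cy
    r_uv = (cx + du * CA_RED, cy + dv * CA_RED)
    b_uv = (cx + du * CA_BLUE, cy + dv * CA_BLUE)
    return r_uv, b_uv
```

At the lens center the channels coincide; toward the edges, R is pulled slightly inward and B slightly outward.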
This implementation had a simple Timewarp that rotated the rendered view based on the head-pose prediction it got directly from LibOVR. OpenCL renders a frame to a texture, and before VSync, the texture gets drawn by GL. The Timewarp implementation is just a rotation shader.
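A one-axis sketch of that rotation shader, assuming a pinhole projection and a yaw-only pose delta (the real version works with the full predicted orientation):

```python
import math

def timewarp_u(u, delta_yaw, tan_half_fov):
    """Reproject a horizontal texture coordinate through a small yaw
    rotation: take the output pixel's view direction, rotate it by the
    yaw the head turned since the frame was rendered, and re-project
    it into the already-rendered frame."""
    # pinhole view direction for this pixel (z points forward)
    x = (2.0 * u - 1.0) * tan_half_fov
    z = 1.0
    # rotate the direction by delta_yaw around the vertical axis
    c, s = math.cos(delta_yaw), math.sin(delta_yaw)
    xr = c * x + s * z
    zr = -s * x + c * z
    # perspective re-projection back to a texture coordinate
    return (xr / zr / tan_half_fov + 1.0) / 2.0
```

With a zero pose delta the lookup is the identity; a nonzero yaw shifts the lookup so the displayed image matches where the head is now, not where it was when the frame started rendering.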
Timewarp is interesting because it’s another place where ray tracing could be a big win (unless there is something I am not seeing). I wish I had implemented this: OpenCL shares a texture with GL, so to implement async Timewarp, we could use two textures instead of one. The GL renderer would simply grab the last finished frame, always drawing at the monitor’s refresh rate.
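The two-texture scheme could look something like this sketch, with plain Python objects standing in for the shared GL/CL textures:

```python
import threading

class FrameBufferPair:
    """Double-buffer sketch for async Timewarp: the OpenCL tracer
    writes into the back texture while GL warps and presents the most
    recently completed one at the display's refresh rate."""

    def __init__(self, tex_a, tex_b):
        self._textures = [tex_a, tex_b]
        self._front = 0              # index GL currently reads from
        self._lock = threading.Lock()

    def back(self):
        """Texture the ray tracer should render into."""
        with self._lock:
            return self._textures[1 - self._front]

    def publish(self):
        """Called by the tracer when a frame is finished: swap, so
        GL's next grab sees the new frame."""
        with self._lock:
            self._front = 1 - self._front

    def front(self):
        """Latest finished texture, for GL to warp and present."""
        with self._lock:
            return self._textures[self._front]
```

The key property is that GL never waits on the tracer: if no new frame has been published since the last VSync, it simply re-warps the previous one with the latest head pose.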
Real-time ray tracing is really fun. I don’t think ray tracing is going to “win” any time soon, much less for VR, but it was a fun experiment, and I learned a lot.
The paper “Understanding the Efficiency of Ray Traversal on GPUs” was interesting and useful. Another useful resource is the ompf2.com forum.
You can get the code here.