Work in progress: light field rendering in VR
2016-05-05
4 minute read

This is a work-in-progress update on my second light field render engine. The first one was described last October in the video Implementing a Light Field Renderer. It was based in part on Aaron Isaksen’s paper “Dynamically Reparameterized Light Fields”. The view synthesis was performed by reprojecting carefully selected quads from the source images (see video). It ran on the CPU and displayed the result in a desktop window.

Scope

For this second engine, the scope of the project is as follows.

1. View synthesis on the GPU, display in VR

The implementation is in CUDA and outputs to the Oculus Rift (DK2 and runtime 0.8 in this version). While I could get away with 30 to 50 ms to render sub-HD images in the previous project, VR requires that the total time to render both eyes be under 10 ms, and the resolution is much higher.
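
To give a concrete, if simplified, picture of what the per-pixel work can look like, here is a minimal CUDA sketch of a two-plane light field lookup in the spirit of the dynamically reparameterized light fields mentioned above. The data layout, the names, the 90° field of view and the nearest-camera sampling are my own illustrative assumptions, not the actual Hypercapsule kernel.

    // Minimal sketch of a two-plane light field lookup (not the actual
    // Hypercapsule kernel). The light field is assumed to be a grid of
    // camCount x camCount pinhole images; an eye ray is intersected with the
    // camera plane (s,t) and a focal plane (u,v), and the nearest source
    // camera is sampled with nearest-pixel lookups to keep the sketch short.
    #include <cuda_runtime.h>

    struct LightField {
        const uchar4* images;   // camCount*camCount images, tightly packed
        int   camCount;         // cameras per side of the grid
        int   imgW, imgH;       // resolution of each source image
        float camSpacing;       // distance between adjacent cameras
        float focalDist;        // camera plane to focal plane distance
    };

    __device__ uchar4 sampleLightField(const LightField lf,
                                       float3 rayOrigin, float3 rayDir)
    {
        // Simplifying assumptions: camera plane at z = 0, focal plane at
        // z = focalDist, ray pointing towards +z (rayDir.z > 0).
        float tCam   = -rayOrigin.z / rayDir.z;
        float tFocal = (lf.focalDist - rayOrigin.z) / rayDir.z;

        float s = rayOrigin.x + tCam   * rayDir.x;  // hit on camera plane
        float t = rayOrigin.y + tCam   * rayDir.y;
        float u = rayOrigin.x + tFocal * rayDir.x;  // hit on focal plane
        float v = rayOrigin.y + tFocal * rayDir.y;

        // Nearest source camera on the (s,t) grid, centered on the plane.
        float halfExtent = 0.5f * (lf.camCount - 1) * lf.camSpacing;
        int ci = min(max(__float2int_rn((s + halfExtent) / lf.camSpacing), 0),
                     lf.camCount - 1);
        int cj = min(max(__float2int_rn((t + halfExtent) / lf.camSpacing), 0),
                     lf.camCount - 1);

        // Project the focal-plane point into that camera's image, assuming a
        // 90-degree horizontal field of view for the source cameras.
        float camX = ci * lf.camSpacing - halfExtent;
        float camY = cj * lf.camSpacing - halfExtent;
        float px = ((u - camX) / lf.focalDist * 0.5f + 0.5f) * (lf.imgW - 1);
        float py = ((v - camY) / lf.focalDist * 0.5f + 0.5f) * (lf.imgH - 1);
        int ix = min(max(__float2int_rn(px), 0), lf.imgW - 1);
        int iy = min(max(__float2int_rn(py), 0), lf.imgH - 1);

        const uchar4* img = lf.images +
                            (cj * lf.camCount + ci) * lf.imgW * lf.imgH;
        return img[iy * lf.imgW + ix];
    }

A production version would at least blend the nearest cameras and filter the image lookups, but the structure stays the same: one ray per output pixel, a handful of memory fetches, no geometry.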

2. Quality vs ray budget optimizations

The particular constraint I’m using to drive the project is to try to get the best quality out of a given budget in Megarays. I settled on a 100-Megaray budget for the experimental phase. This is medium-sized: for comparison, the Lytro Illum physical light field camera clocks in at 40 Megarays, while one of the examples in my previous project was 900 Megarays. Gigaray-range light fields are often encountered for omnidirectional applications. Note that the examples in the video below are 300 Megarays; I still haven’t matched the quality I would like to reach with the 100-Megaray budget.
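
To make the Megaray budget concrete, the ray count is simply the number of source cameras multiplied by the pixels per source image. A back-of-the-envelope example with made-up capture parameters (not the ones used for the datasets below):

    #include <cstdio>

    int main()
    {
        // Illustrative numbers only: a 17x17 camera grid at 600x580 pixels
        // per view lands close to the 100-Megaray experimental budget.
        const long long cams   = 17LL * 17LL;    // source cameras in the grid
        const long long pixels = 600LL * 580LL;  // pixels per source image
        const double megarays  = cams * pixels / 1e6;
        printf("%.1f Megarays\n", megarays);     // prints 100.6 Megarays
        return 0;
    }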

3. Close-up with limited depth range

I’m focusing on close-up subjects with shallow depth.

Demo

[Embedded video demos]

There are a few important things that are not reflected in the video, some good, some bad.

  1. Resolution: The video shows the mirror window of the application, which uses a much lower resolution than the actual view rendered in VR (see the Performance section below).
  2. Scale and depth: Completely missing from the video, as is always the case for VR, is the good sense of scale and physicality of the subject. The head is rendered in VR at the actual physical size of the person’s head, which is a bit unsettling at first, in a good way.
  3. Uncanny valley: The quality is there (at a respectable distance) but the fact that the person is completely still is awkward. Because of this, it looks more like a photograph with depth than an actual person being there. Short breathing and blinking animation loops would go a very long way.
  4. Eye accommodation failure: The person’s head is well locked in the VR world thanks to positional tracking, and the quality is believable. Because of this, there is an expectation of increased definition when moving closer to the face. The eyes try to refocus progressively when getting closer, but the quality is capped by the source dataset. This gives the strange feeling that your eyes don’t work. I had not experienced this before.

Dataset details

Female head

  • 300 Megarays.
  • The original model is 768K polygons, with an 8K texture.
  • Capture: path tracing at 1000 samples per pixel.

Crystal

  • 300 Megarays.
  • Capture: path tracing at 4000 samples per pixel (and a specular depth of 48, because the crystal has many internal surfaces that light must traverse and bounce off).

Both light fields were captured in Octane 3.0 alpha using a custom Lua script.

Performance

The rendered view is 2364×2927 per eye. This is a pixel density of 2.0x on the Oculus DK2 with the eye relief I’m using. The images are then warped to the DK2 screen, which is 1920×1080. It runs comfortably within the 13 ms (75 fps) budget on a GTX 980 Ti.
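
One generic way to check a budget like this in a CUDA renderer is to wrap the per-eye synthesis launches in CUDA events. The sketch below does exactly that; the kernel body, buffer names and launch configuration are placeholders, not the actual Hypercapsule code.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Placeholder for the per-eye view synthesis kernel.
    __global__ void synthesizeEyeView(uchar4* out, int width, int height)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= width || y >= height) return;
        out[y * width + x] = make_uchar4(0, 0, 0, 255);  // real lookup goes here
    }

    int main()
    {
        const int width = 2364, height = 2927;           // per-eye resolution
        uchar4* eyeBuffers[2];
        for (int eye = 0; eye < 2; ++eye)
            cudaMalloc(&eyeBuffers[eye], width * height * sizeof(uchar4));

        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        dim3 block(16, 16);
        dim3 grid((width + block.x - 1) / block.x,
                  (height + block.y - 1) / block.y);

        // Time both eyes together: the total must stay within the frame
        // budget (13.3 ms at 75 Hz on the DK2, minus warp and compositor
        // overhead).
        cudaEventRecord(start);
        for (int eye = 0; eye < 2; ++eye)
            synthesizeEyeView<<<grid, block>>>(eyeBuffers[eye], width, height);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        printf("both eyes: %.2f ms\n", ms);

        for (int eye = 0; eye < 2; ++eye)
            cudaFree(eyeBuffers[eye]);
        cudaEventDestroy(start);
        cudaEventDestroy(stop);
        return 0;
    }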

Additionally, the view is interpolated and blitted to the low-resolution mirror window. This step happens just before posting the render to the Oculus compositor.
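
A mirror blit like this can be a single downscale kernel. The sketch below is a generic bilinear downscale with placeholder buffer names; it is not the actual implementation, which also has to hand the result over to the window’s swap chain.

    // Generic downscale blit from a full-resolution eye buffer to a small
    // mirror-window buffer. Bilinear interpolation of the four neighbouring
    // source pixels; a real minification filter would average more taps.
    __global__ void mirrorBlit(const uchar4* __restrict__ src, int srcW, int srcH,
                               uchar4* __restrict__ dst, int dstW, int dstH)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= dstW || y >= dstH) return;

        // Map the mirror pixel back into the high-resolution source.
        float fx = (x + 0.5f) * srcW / dstW - 0.5f;
        float fy = (y + 0.5f) * srcH / dstH - 0.5f;
        int x0 = max(0, (int)floorf(fx)), y0 = max(0, (int)floorf(fy));
        int x1 = min(x0 + 1, srcW - 1),   y1 = min(y0 + 1, srcH - 1);
        float ax = fx - x0, ay = fy - y0;

        uchar4 p00 = src[y0 * srcW + x0], p10 = src[y0 * srcW + x1];
        uchar4 p01 = src[y1 * srcW + x0], p11 = src[y1 * srcW + x1];

        float r = (1-ax)*(1-ay)*p00.x + ax*(1-ay)*p10.x + (1-ax)*ay*p01.x + ax*ay*p11.x;
        float g = (1-ax)*(1-ay)*p00.y + ax*(1-ay)*p10.y + (1-ax)*ay*p01.y + ax*ay*p11.y;
        float b = (1-ax)*(1-ay)*p00.z + ax*(1-ay)*p10.z + (1-ax)*ay*p01.z + ax*ay*p11.z;

        dst[y * dstW + x] = make_uchar4((unsigned char)(r + 0.5f),
                                        (unsigned char)(g + 0.5f),
                                        (unsigned char)(b + 0.5f), 255);
    }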

Obviously the most interesting thing about rendering light fields is that the render time is independent of the complexity of the scene. This is why I focus on examples that are hard or downright impossible to render in real time. The woman’s face is more than 750K polygons and uses a complex material and 8K textures. The crystal has particularly challenging light paths and caustics (not really visible with the settings I used, unfortunately) that can only be rendered realistically using ray tracing.

Application

The application in the video is called Hypercapsule. It borrows some parts from Capsule, the application from this post, which renders omnistereo images on the Oculus DK2. The “hyper” comes from the 4D mindset used in core parts of the implementation.

I’m not going to publish this application at this time; it’s a step towards something larger.

