Tips for creating stereoscopic 3D (S3D)

If people were meant to see 3D movies they would have been born with two eyes.
- Apocryphally attributed to Sam Goldwyn

Stereoscopic 3D is here to stay. Studios and manufacturers are pushing more shows and fancier tech, while digital workflows, impossible in the early days, mean a better experience for the viewer without the headaches of poorly balanced S3D.

This is an outline of our workflow for a native S3D show. At Lexhag the depth grade is done on the fabulous Mistika. For VFX I’ve invested in Eyeon Dimension.


Basic terms

The basic principles of S3D are covered in depth elsewhere online. In brief, the basic terms are:

S3D not 3D

Now, when I say I’m a 3D artist I have to be careful: 3D means different things to different people.

  • 3D means I make virtual models of things in a 3D computer graphics package (3dsMax etc).
  • S3D means stereoscopic 3D vision (I’m wearing silly glasses).

My site is mostly dedicated to CG models and animation, so when I talk about stereo vision I’ll always say S3D.

Interocular (io) distance

The separation between our eyes is called the interocular distance. As a rule of thumb this distance is 65mm.

Interaxial (ia) distance

Interaxial (ia) distance is the separation of the stereo camera pair. It should vary from shot to shot, and changing it alters our impression of scale.

Convergence (cv) distance

Each camera’s centre line crosses at the convergence point. This also places the viewing plane.

The Depth Budget

Each shot has a maximum amount of usable depth within which to create effective 3D.

Maximum Deviation

The maximum deviation on screen is a measurement of parallax. It limits what is safe for the viewer, avoiding eye strain and, in turn, headaches and sickness. Too much deviation must be avoided, especially positive parallax, which could (depending upon screen size and seating position) cause our eyes to diverge. It is possible to ‘break the rules’, but not continuously or by large amounts for long periods.

Guides for TV (as defined by Sky):

  • Positive parallax (appears behind the viewing plane): +2%
  • Negative parallax (appears in front of the viewing plane): -1%
  • Total parallax: 3%
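
To make those percentages concrete, here’s a quick sketch (my own arithmetic, not part of Sky’s document) converting the budget into pixel deviations for a given frame width:

```python
def parallax_budget_px(frame_width, positive_pct=2.0, negative_pct=1.0):
    """Convert percentage parallax limits into pixel deviations."""
    return {
        "positive_px": frame_width * positive_pct / 100.0,  # behind the screen
        "negative_px": frame_width * negative_pct / 100.0,  # in front of the screen
    }

# For 1080p TV delivery (1920 pixels wide):
print(parallax_budget_px(1920))  # {'positive_px': 38.4, 'negative_px': 19.2}
```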

The illusion of stereoscopic vision

It’s important to note that the illusion of S3D created on the screen is not the same as our perception of depth in real life. Our eyes have a fixed FOV. Cameras do not. We have brains designed to fill in the gaps. Cameras do not.

Interaxial (ia) and Convergence (cv) distances should vary shot to shot.

There is no ‘real life’ value to dictate interaxial distance, especially when close-ups may place someone’s head ten feet high on the cinema screen. Fixing the interaxial distance is hugely restrictive, and ia should in fact vary shot to shot.

Setting ia and cv is a creative decision to support the story. Together they define the scale of the viewer and the scene: we can view the shot as gods or as insects, and it’s up to the DoP and the director to decide.

1/30th rule of thumb

Interaxial distance will generally be around 1/30th of the distance to the convergence point. So if you’re converging 30m away, the camera pair should be 1m apart.
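
The rule is trivial to script. A minimal sketch (the function name is mine):

```python
def interaxial_from_convergence(cv_distance_m):
    """1/30th rule of thumb: ia is roughly 1/30th of the convergence distance."""
    return cv_distance_m / 30.0

print(interaxial_from_convergence(30.0))  # 1.0 m between the camera pair
print(interaxial_from_convergence(2.0))   # ~0.067 m, close to the 65mm io
```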

When shooting native S3D we’d ideally need two camera rigs:

  • A mirror rig, for action close to the camera (where ia is less than the width of the camera).
  • A side-by-side rig, for action far from the camera.

Shooting Native S3D

Our workflow for shooting native S3D

Shoot Parallel

Strictly, this would mean that the cameras are literally parallel, but it’s suggested the cameras are toed in slightly and made to converge on the furthest point in the scene.

Shooting parallel means the convergence point will be set in post. This is done by simply repositioning the plates.

  • The plate for the Left eye slides Right
  • The plate for the Right eye slides Left

As the plates are transformed in this way the convergence point moves closer to the viewer.
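
Here’s a minimal numpy sketch of the idea, assuming whole-pixel shifts (real tools reposition plates with sub-pixel filtering):

```python
import numpy as np

def converge(left, right, shift_px):
    """Slide the left plate right and the right plate left by shift_px pixels.

    Pulls the convergence point towards the viewer. The vacated edges are
    zero-filled (black), which is why overscanned plates help (see the next
    tip). Assumes shift_px > 0 and plates shaped (height, width) or (h, w, 3).
    """
    l = np.zeros_like(left)
    r = np.zeros_like(right)
    l[:, shift_px:] = left[:, :-shift_px]   # left eye slides right
    r[:, :-shift_px] = right[:, shift_px:]  # right eye slides left
    return l, r
```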

Shoot wider (more pixels) than your final frame format

Unless the images are oversized, sliding the plates left and right will of course create black bars at the edges of frame. Shooting overscanned plates saves a lot of time in post painting over black at the edge of frame.
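
A quick back-of-envelope for the margin needed, assuming the whole 3% budget might be re-converged in post:

```python
def capture_width(final_width, total_parallax_pct=3.0):
    """Pixels to shoot so re-convergence in post never reveals black edges."""
    margin = int(round(final_width * total_parallax_pct / 100.0))
    return final_width + 2 * margin  # a margin on each side of frame

print(capture_width(1920))  # 2036
```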

Be careful of issues that Flatten S3D

Generally the idea is to preserve detail in the image, and to pay attention to both 2D and S3D depth cues.

  • Use a long depth of field. Blurring reduces detail and flattens the S3D.
  • Use wider lenses than in 2D photography.
  • Use light to create points of interest at different depths in the scene.

S3D Post-production workflow

Post on a native S3D show breaks down into the following stages:

1. Triage.

  • Fix colour. Match histograms in each eye (see the sketch after this list).
  • Fix rotation, position, and scale in each eye.
  • Establish whether the shot needs complex and costly fixes.
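
For the colour fix, here’s a minimal single-channel histogram-matching sketch with numpy. It illustrates the general technique, not the Mistika’s implementation:

```python
import numpy as np

def match_histogram(source, reference):
    """Remap source pixel values so their distribution matches reference."""
    s_values, s_counts = np.unique(source.ravel(), return_counts=True)
    r_values, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    # For each source value, find the reference value at the same CDF position.
    matched = np.interp(s_cdf, r_cdf, r_values)
    return np.interp(source.ravel(), s_values, matched).reshape(source.shape)
```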

2. Pre Depth Grade.

Roughly set the depth of each shot using simple position and scale transforms. If the cameras were set parallel this is absolutely necessary, otherwise the entire shot will sit in front of the viewing plane, destroying the sense of S3D.

2a. Setting the convergence point (cv)

The convergence point is set by a simple translation of the left and right eye.

2b. Setting the Interaxial distance (ia).

If the depth volume needs changing, a new camera position must be generated for each eye. This requires each plate to be warped to reflect a new interaxial distance. The team must decide whether this is best done with a depth-based warp or a warp based on a disparity map (see the Dimension tools in the Fusion Studio manual). But sometimes the only choice is a painstaking handcrafted re-projection in Nuke, Fusion, pfDepth etc.
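
To illustrate the disparity-map route, here’s a heavily simplified backward-warp sketch (single channel, disparity sampled in the source view, no occlusion handling; real tools must do all of that properly):

```python
import numpy as np

def scale_interaxial(left, disparity, scale):
    """Warp a single-channel left plate as if ia were multiplied by scale.

    disparity holds the per-pixel horizontal offset (in pixels) from the
    left image to the right image. scale=1.0 returns the plate unchanged;
    scale=0.5 halves the apparent interaxial distance.
    """
    h, w = left.shape
    xs = np.tile(np.arange(w, dtype=np.float64), (h, 1))
    sample_x = np.clip(xs + (scale - 1.0) * disparity, 0, w - 1)
    x0 = np.floor(sample_x).astype(int)
    x1 = np.clip(x0 + 1, 0, w - 1)
    t = sample_x - x0
    rows = np.arange(h)[:, None]
    # Linear interpolation between neighbouring pixels on each scanline.
    return (1.0 - t) * left[rows, x0] + t * left[rows, x1]
```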

If the shot is going to VFX then the changes required to create a new interaxial distance MUST be baked into each plate. DVE transform information is easy to pass around, but per-pixel changes are not. Camera tracking is going to be nasty with warped plates, so if a camera track is needed then changes in ia MUST be done by the VFX team.

3. Pre Colour Grade.

This sets the look for each shot and will be passed to VFX as a LUT file.
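
Conceptually a 1D LUT is just a per-channel remap. A toy sketch (production grades typically use 3D LUTs via OpenColorIO or the app’s built-in LUT support):

```python
import numpy as np

def apply_1d_lut(image, lut):
    """Apply a 1D LUT: an array of output values for evenly spaced inputs 0..1."""
    positions = np.linspace(0.0, 1.0, len(lut))
    return np.interp(image, positions, lut)

# Example: a gamma-style LUT with 1024 entries applied to a random plate.
lut = np.linspace(0.0, 1.0, 1024) ** (1.0 / 2.2)
graded = apply_1d_lut(np.random.rand(1080, 1920), lut)
```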

4. VFX

Compositors need to see each shot in S3D. In principle they’ll see the final look of the show on their monitors. We want data degradation kept to a minimum as images are passed from the Mistika to VFX and back again.

Some optical flow operations, such as calculating motion vectors and zDepth, are computationally expensive. This data should be cached in RAM or pre-rendered into .exr files; recalculating it continuously slows down our workflow.
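
For the pre-render route, here’s a sketch using the OpenEXR Python bindings to cache a zDepth pass (the channel and file names are my choices):

```python
import numpy as np
import OpenEXR, Imath  # the python-openexr bindings

def cache_depth(path, z):
    """Write a single-channel float32 zDepth pass to an EXR file."""
    h, w = z.shape
    header = OpenEXR.Header(w, h)
    header['channels'] = {'Z': Imath.Channel(Imath.PixelType(Imath.PixelType.FLOAT))}
    out = OpenEXR.OutputFile(path, header)
    out.writePixels({'Z': z.astype(np.float32).tobytes()})
    out.close()

cache_depth('shot010_zdepth.0001.exr', np.random.rand(1080, 1920))
```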

4a. Dealing with Colour fixes and Colour Grades.

We have a linear workflow using a LUT file from the colour grade, but now there are several colour transforms to manage: those that balance each eye and those that create the look of the show.

It seems simplest to bake colour fixes (matching the left and right eye) into the plates, and pass colour grades as LUT files. With more automation, all colour fixes could be managed as data without baking anything into the plates before the final grade.

4b. DVE export and import

Ideally DVEs should be exported from the Mistika as data and imported as equivalent DVE nodes in Nuke or Fusion. We would like DVE data saved in the metadata of each frame. SGO has offered to help us develop this tech. I can develop loader scripts for Fusion, but I may need help creating loader scripts to import DVE info into Nuke.
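
While that tooling doesn’t exist yet, the data itself is simple. Here’s a hypothetical per-frame schema (every field name is my invention, not an SGO or Foundry format):

```python
import json

# Hypothetical DVE export: one record per frame, ready to drive an
# equivalent Transform node in Fusion or Nuke.
dve_data = {
    "shot": "shot010",
    "frames": [
        {"frame": 1001, "translate": [12.5, 0.0], "rotate": 0.0, "scale": 1.02},
        {"frame": 1002, "translate": [12.7, 0.0], "rotate": 0.0, "scale": 1.02},
    ],
}

with open("shot010_dve.json", "w") as f:
    json.dump(dve_data, f, indent=2)
```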

5. Final Depth Grade and Colour Grade

When VFX has completed its slate there will be a final depth and colour grade before the show is delivered.


Viewing S3D

I’m not sure there is an ideal solution for viewing S3D images. As usual it will be a trade-off between cash and quality.

Active Monitors (shuttered glasses)

Most Nvidia cards support S3D. Shuttered glasses also require a 3D Vision-ready monitor with a 120Hz refresh rate.

Passive monitors

Polarised light means cheap glasses without active shutters. The downside is that images can be half as bright and at half the resolution.

Anaglyph

Anaglyph is kinda old skool but works well to give a sense of depth without forking out for a 3D monitor and shuttered glasses. I carry a few cardboard glasses with me if I’m working away from my office.

It takes a while for our eyes to adjust, but eventually our brains compensate for the hue shift in each eye. This means we have to be careful about our colour perception: before doing colour corrections, remove the glasses and give your eyes time to settle back to normal.
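
Building a red-cyan anaglyph from a stereo pair is only a few lines of numpy (a basic sketch; optimised anaglyphs also rebalance colour to reduce retinal rivalry):

```python
import numpy as np

def anaglyph(left, right):
    """Red channel from the left eye, green and blue from the right."""
    out = right.copy()
    out[..., 0] = left[..., 0]  # assumes RGB channel order
    return out
```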


Links


Sky’s Basic Principles of Stereoscopic 3D.pdf

Autodesk Stereoscopic Whitepaper.pdf