My understanding of the RED camera workflow for visual effects

This article is a work in progress as I try to sort out the best VFX workflow for myself and those working with me.

It seems to me that the best option is to use REDLogFilm and save compressed 16-bit EXR files or 10-bit DPX files. REDLogFilm directly copies the Cineon spec, so we can use our standard log-to-linear workflows.

But …

There’s a lot to the RED system. Opinions differ, and it seems like we are overwhelmed by options, not all of which are fully understood. My gut tells me that this should be easier… If anyone out there wishes to contribute, drop me a line.


Workflows using REDCINE-X may involve debayering and reducing resolution. If you can save to R3D files then color depth is preserved.

With RED, the original raw capture is stored as an R3D file. The non-destructive recipe is stored as metadata in a separate RMD file… (see rmd-non-destructive-editing)

If you do not save to R3D files then the process can be destructive to varying degrees, depending on what you do. This article is about using non-R3D files in our digital workflow from the RED camera.

Use your eyes first

There’s a saying in VFX: if it looks right, it is right. No amount of funky tools or hardcore maths can compensate for a bad eye. However, I’m not happy unless I know that it will look just as right to everyone else in the production pipeline. To do this I need to make a well-informed guess about how the shot will appear when graded, and understand how the data is passed between different systems and software.

Digital workflows (use your brains as well)

When passing shots around between departments, it’s best to preserve as much detail as possible. It would be great if we could always keep our image data exactly as it was in the camera, but that rarely happens. Inevitably other file formats are needed, and there are definitely good and bad ways of going about this conversion.

Camera profiles and LUTs

When dealing with RAW or LOG camera files, issues arise because what we see is not what we get… Unless you use an appropriate camera profile and viewing LUT, things get confusing, or just plain ugly.

The main point is you JUST need the right camera profile and viewing LUT for your monitor. It’s not more complicated than that. We can preserve the image detail all the way through post and VFX to the final grade.
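To make the mechanics concrete, here is a toy sketch of what a 1D viewing LUT does: each pixel value is mapped through a lookup table (with interpolation between samples) before it reaches the display. This is purely illustrative; real pipelines use 3D LUTs and colour-managed viewers, and `apply_1d_lut` is a hypothetical helper of mine, not part of any RED tool.

```python
# Toy sketch of applying a 1D viewing LUT by linear interpolation.
# A viewing LUT only changes what you SEE on the monitor; the image
# data passed down the pipeline stays untouched.

def apply_1d_lut(value, lut):
    """Map a normalised 0-1 value through evenly spaced LUT samples."""
    pos = value * (len(lut) - 1)          # position along the table
    i = min(int(pos), len(lut) - 2)       # lower sample index
    frac = pos - i                        # blend between neighbours
    return lut[i] * (1 - frac) + lut[i + 1] * frac
```

With an identity table such as `[0.0, 0.5, 1.0]`, values pass through unchanged; a grading LUT would hold the samples of the intended display transform instead.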

File formats

We also want small files that are compatible with multiple systems. Converting formats is OK so long as we simply save the same RGB value for each pixel into a new format (the numbers are the same but the wrapping is different), or we can guarantee that the image degradation is imperceptible. For me the best options are:

  • DPX
  • EXR
  • Apple ProRes
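As an illustration of “the numbers are the same but the wrapping is different”, here is a hedged sketch of repacking 10-bit code values into a 16-bit container and back. Left-shifting by six bits is one common convention and is exactly invertible; the function names are mine, not from any particular tool.

```python
# Sketch: moving 10-bit code values into a 16-bit container without
# touching the underlying numbers. The shift is exactly invertible,
# so no image information is lost in either direction.

def ten_to_sixteen(code10):
    return code10 << 6    # 0..1023 -> 0..65472

def sixteen_to_ten(code16):
    return code16 >> 6    # exact inverse for values produced above
```

Because every 10-bit value round-trips exactly, a conversion like this is “non-destructive” in the sense used above, even though the file format changed.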

I’d like to include R3D files in this list, but at the moment they are proving problematic for our post-production workflow. It doesn’t seem like there’s an easy way to trim the R3D clips to the correct length without converting them to a new format. Without trimming the files there’s simply too much data to pass around, and working remotely would be impossible.

Bit Depth and Dynamic Range

See colour management at Sony Imageworks.

(This section is directly from the redcine-x-grading-tutorial)


The color space and gamma settings control how digital values get mapped to actual colors and shades.

  • The color space specifies the range of possible colors.
  • The gamma describes how a given numerical change in the file translates into a given brightness change in the image.
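As a toy illustration of the gamma point (assuming a plain power-law curve of 2.2, a deliberate simplification; RED’s own gamma curves are proprietary and more complex), the same numeric step in a file translates into very different brightness steps depending on where it sits on the curve:

```python
# Toy illustration of gamma: identical numeric steps in the file
# produce different brightness steps on the display, because the
# transfer curve is non-linear. Assumes a simple power-law gamma.

GAMMA = 2.2

def decode(file_value):
    """Map a normalised 0-1 file value to relative display brightness."""
    return file_value ** GAMMA

# The same 0.1 step in the file, taken near black and near white:
dark_step = decode(0.2) - decode(0.1)
bright_step = decode(0.9) - decode(0.8)
```

Here `bright_step` comes out much larger than `dark_step`: a power-law gamma spends more code values on the shadows, which is exactly why the choice of gamma curve matters when grading.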

In general, the color space and gamma should use the most up-to-date color science, such as

  • REDgamma3 instead of REDgamma2,
  • REDcolor3 instead of REDcolor2.

Manual Grading (use REDLogFilm)

The one exception is with fully manual grading, in which case many prefer to set gamma to REDLogFilm (while keeping the color space at its most up-to-date setting), since this standardizes the tones and makes footage more receptive to a wide range of grading styles.

That’s OK, so long as people read up on what to do

There is a ton of literature on cameras and color theory. A lot of it gets very technical very quickly, and that tends to make people panic.

But it doesn’t have to be complicated, and mostly you don’t need to know the maths.

Compositing with RED

According to Graeme Nattress from RED, you can simply use a Cineon node to linearise RED footage:

With REDLogFilm it follows the Cineon spec precisely and is valid over a wide ISO range (any ISO up to ISO 3200, at which point there’s some mild compression to fit all the code values in). It is easily and instantly invertible to linear light for VFX work via the standard Cineon equation in Shake, Nuke etc. (default log2lin settings work great, or just interpret the DPX as Cineon), and hence matches the RED EXR linear-light output.
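For reference, here is my sketch of that standard Cineon equation in Python, using the usual Cineon constants (black point 95, white point 685, 0.002 density per code value, 0.6 display gamma), which match Nuke’s default log2lin settings. This is my reading of the spec, not code supplied by RED:

```python
import math

# Standard Cineon log-to-lin and its inverse, as used for REDLogFilm.
REF_BLACK = 95            # 10-bit code value mapped to linear 0.0
REF_WHITE = 685           # 10-bit code value mapped to linear 1.0
DENSITY_PER_CODE = 0.002  # printing density per code value
NEG_GAMMA = 0.6           # film negative gamma

def cineon_log_to_lin(code10):
    """Convert a 10-bit Cineon/REDLogFilm code value to scene-linear light."""
    offset = 10 ** ((REF_BLACK - REF_WHITE) * DENSITY_PER_CODE / NEG_GAMMA)
    gain = 1.0 / (1.0 - offset)
    return gain * (10 ** ((code10 - REF_WHITE) * DENSITY_PER_CODE / NEG_GAMMA) - offset)

def cineon_lin_to_log(lin):
    """Inverse: scene-linear light back to a 10-bit Cineon code value."""
    offset = 10 ** ((REF_BLACK - REF_WHITE) * DENSITY_PER_CODE / NEG_GAMMA)
    value = lin * (1.0 - offset) + offset
    return REF_WHITE + NEG_GAMMA * math.log10(value) / DENSITY_PER_CODE
```

Code value 95 maps to linear 0.0 and 685 to linear 1.0, and the two functions invert each other exactly, which is what makes the REDLogFilm round trip lossless for VFX.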

Does the cameraman need to know YOUR chosen RED workflow?

No, they don’t. That’s never going to happen anyway…
Digital cameras are complicated. Each has its own unique quirks, and freelancers may use them infrequently. Some people will be up to speed and some will make mistakes. If the RED camera is not set correctly… the image can be underexposed.

It makes no difference to post-production: we can still use REDLogFilm for our pipelines.