Understanding DPX Files and How They Preserve Details


Karl Lee

Recommended Posts

Hi everyone.

 

Some of my questions are similar to those addressed in another thread recently started in this sub-forum, but I think mine are focused more specifically on the finer points of DPX files, so I thought I'd start a new thread.

 

Anyway, a couple of days ago I received a DPX sequence (as well as a ProRes4444 rendering of the DPX sequence) of some S16 I sent off for processing and transfer, and I'm really looking forward to tinkering with color grading for the first time (I have Adobe SpeedGrade CS6). While I understand that a DPX sequence, or a ProRes4444 rendering of a DPX sequence, is advantageous in color grading as opposed to grading a standard telecine transfer, I'm still trying to clarify my understanding of DPX files at a fundamental level and comprehend how they can preserve details that, in their original non-corrected form, may not initially be visible.

 

Here's what I'm trying to understand. Let's say, for example, that you have digital copies of the exact same frame of film. One is a single frame from a 1080p24 telecine transfer, and the other is a DPX image from a film scan. If I were to import the frame from the telecine transfer into SpeedGrade and decrease overall brightness to adjust for some blown out highlights, odds are that this would just result in the highlights becoming uniformly darker and without any increased detail. However, if I were to perform the same brightness adjustment on a DPX image, I may be able to reduce the size and brightness of the highlights and reveal details behind the highlights that previously were too bright to be visible.

 

Therein lies the crux of my confusion about DPX files. While I realize that DPX files are high resolution, not only in pixel dimensions but also in color depth, how do they pack this "extra information" into a single, static scanned image in a way that can be brought out by adjusting parameters like brightness? In essence, being able to reveal details behind highlights by adjusting brightness is almost as if you were adjusting the aperture on the lens, so I'm just trying to wrap my mind around the "magic" of how this is done and how the details behind the highlights are captured in the DPX file.

 

I guess I'm thinking about this along the same lines as an audio recording. Let's say that you have an audio recording with hot and, at times, clipped audio. Generally speaking, once the audio is clipped, there's nothing you can do to fix it. Even if you bring down the overall level of the recording upon playback, any clipping that occurred during the recording process is still present and audible. I'm not sure if this is a true apples to apples analogy, but it's just one way (incorrectly, perhaps) that I've been trying to approach an understanding of DPX files.

 


  • Premium Member

You could have a DPX file of a crappy high-contrast image with no latitude for correction -- it's just an uncompressed container for the image. Generally that image would be something very flat like 10-bit log, which is the main reason it has a lot of information for grading. The lack of compression is also good, but it depends: sometimes a mildly compressed file color-corrects just as well, without artifacts creeping in.

 

In other words, your log scan could be stored in compressed ProRes4444 or as uncompressed DPX files; either way, the dynamic range available for correction would probably be the same. The difference is whether the compression used for ProRes4444 would show up at times when color-correcting, or cause any small loss of detail.


  • Premium Member

I think theoretically you'd have to say that the details that you're pulling out are visible if you just look at the raw data in the file. When you view (certain types of) log DPX on a conventional display, the electronics may be deliberately clipping off, or at least compressing to the point of invisibility, highlight information which will become visible when you alter the luminance in grading.

 

There's nothing particularly different about the way a DPX file works. It stores pixel values. How those values are interpreted is what causes the useful behaviour.
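A concrete illustration of "how those values are interpreted is what matters": the classic Cineon-style log mapping often used with 10-bit DPX. The numbers below (reference black 95, reference white 685, 0.002 density per code value, 0.6 gamma) are the commonly cited Cineon defaults, used here purely as an illustrative assumption, not as a claim about any particular scan:

```python
def cineon_log_to_linear(code, ref_black=95, ref_white=685):
    """Map a 10-bit Cineon-style log code value to relative linear exposure.

    Codes above ref_white map to exposures greater than 1.0 -- that is the
    highlight detail a naive display pipeline simply clips to white.
    """
    black_offset = 10 ** ((ref_black - ref_white) * 0.002 / 0.6)
    gain = 1.0 / (1.0 - black_offset)
    return gain * (10 ** ((code - ref_white) * 0.002 / 0.6) - black_offset)

print(cineon_log_to_linear(95))    # reference black: 0.0
print(cineon_log_to_linear(685))   # reference white: 1.0
print(cineon_log_to_linear(1023))  # "super-white": well above 1.0
```

So the highlight detail isn't magic stored in the file format; it's log code values above reference white that a straight video display throws away but a grading tool can remap back into visible range.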

 

This is actually quite a useful discussion because it's far too common for people to get the idea that "dpx files have more information in them because look at this highlight." There's nothing particularly clever about DPX files. There's nothing particularly clever about ProRes, or DNxHD. The cleverness is simply that they are supported in software that does useful things with them.

 

P


  • Premium Member

I mentioned this in the other thread, but often post houses have separate scanning hardware for their telecine sessions and their "DI" or "UHD" scan jobs. Your HD telecine could be done on an old Spirit - if you ask for DPX, you could automatically be bumped over to an Arriscan for example. So now it has nothing to do with compression or file containers - the sensor in the scanner sees more information.

 

This is not a rule of course! You can get DPX out of old Spirit Datacine scanners. Just something I've noticed at a couple places I've had film scanned at.


As others have pointed out, it's more about the way the film is transferred than about the target file format. You can very easily make a DPX file off of a telecine if it's connected to a digital disk recorder that works in DPX. But because telecine systems are traditionally video-centric, with realtime color correction hardware always present in the pipeline from film to file, any color correction that happens at that stage would be baked into the QuickTime file, DPX, or videotape.

 

If the HD is done on a telecine and you're not seeing details in the highlights, it's probably because the colorist chose to blow those out to get the desired look. Data scanning is a different workflow, where the color correction stage is separate from the scanning stage. The idea is that the data scan should capture the full dynamic range of the film as faithfully as possible, so that the maximum amount of information is available in the digital files to grade to the desired look later. Thus, black levels tend to be elevated so as not to crush out any shadow detail, and whites are lowered so that you get all the highlight detail that's on the film, with the option to blow it out in the grade later.

 

-perry


Thanks for all of the information. Quite honestly, just about everything I know about log and Rec.709 color spaces I've learned within the last 24 hours. I've been reading through every resource I've been able to find on log and Rec.709 and how they relate to film scans and video, so I understand the whole concept a little better now than I did when I wrote my initial post. I guess at this point, I know enough to be dangerous :)

 

That said, am I correct in assuming that, ideally, the best option for color grading my OCN log film scan would be to apply a LUT in SpeedGrade that mimics the characteristic curve of the original film stock? Apparently SpeedGrade CC includes presets/LUTs for a number of Kodak film stocks, but unfortunately I'm using SpeedGrade CS6, which does not include the Kodak film stock characteristics. That said, I've read that LUTs can be converted to work across color grading applications, so are LUTs freely distributed and shared anywhere online? As I'm not an experienced colorist, having some variety of film LUT would be a good starting point and help me get going. I'm following up with the lab that scanned my film, so perhaps they'll be able to provide some suggestions.

 

Just to make sure I'm understanding this, a color grading preset or LUT intended to mimic a film stock is best suited for video recorded or film scanned in log mode, correct? I was watching an Adobe tutorial video in which a Kodak LUT was applied to video which appeared to be recorded in standard Rec.709, so I found that to be a little odd.

 

Finally, are the terms "Rec.709" and "linear" in terms of color space essentially interchangeable? I've seen it referenced using both terms, so I'd just like to make sure I'm clear on that.

 

There's still a lot to learn, but I'm getting there!


  • Premium Member

LUTs are simply color transformations. You can be working in any colorspace when you use a LUT. It is very common to use LUTs to transform a log image into a more desirable, contrasty image, and those Kodak characteristic curves sound like an interesting option to explore as a starting point!
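To make "LUTs are simply color transformations" concrete, here's a minimal sketch of applying a 1D LUT per channel: the table is just a sampled transfer curve, with linear interpolation between entries. The five-entry S-curve table is invented for the demo:

```python
def apply_1d_lut(value, lut):
    """Apply a 1D LUT to a value in 0..1.

    lut is a list of output samples at evenly spaced input positions;
    inputs between samples are linearly interpolated.
    """
    pos = value * (len(lut) - 1)
    i = min(int(pos), len(lut) - 2)   # index of the lower sample
    frac = pos - i                     # distance toward the upper sample
    return lut[i] * (1 - frac) + lut[i + 1] * frac

# A made-up S-curve that adds contrast to a flat log image:
contrast_lut = [0.0, 0.15, 0.5, 0.85, 1.0]
print(apply_1d_lut(0.5, contrast_lut))   # midtones pass through: 0.5
print(apply_1d_lut(0.25, contrast_lut))  # shadows pushed down: 0.15
```

Real grading LUTs are usually 3D (a lattice over all three channels at once), but the principle is the same: look up, interpolate, output.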

 

Rec.709 and linear are not interchangeable. Rec.709 is a standard that includes a defined color space and gamma. It's a top-to-bottom standard that was developed to create consistency between televisions. Linear vs. log is a more complicated thing that I'm not qualified to get into haha
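For what it's worth, the "gamma" part of Rec.709 is a published transfer function (the BT.709 opto-electronic transfer function): linear near black, then roughly a 0.45 power curve above it. This is exactly why Rec.709 is not "linear". A minimal sketch:

```python
def rec709_oetf(L):
    """Scene-linear light (0..1) -> Rec.709 encoded signal (0..1),
    per the ITU-R BT.709 transfer characteristic."""
    if L < 0.018:
        return 4.5 * L                     # linear segment near black
    return 1.099 * L ** 0.45 - 0.099       # power-law segment

print(rec709_oetf(0.18))  # 18% grey encodes to roughly 0.41, not 0.18
```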


Just to make sure I'm understanding this, a color grading preset or LUT intended to mimic a film stock is best suited for video recorded or film scanned in log mode, correct? I was watching an Adobe tutorial video in which a Kodak LUT was applied to video which appeared to be recorded in standard Rec.709, so I found that to be a little odd.

 

Finally, are the terms "Rec.709" and "linear" in terms of color space essentially interchangeable? I've seen it referenced using both terms, so I'd just like to make sure I'm clear on that.

 

There's still a lot to learn, but I'm getting there!

 

The usage of "linear" varies; it really means different things when you're talking about log vs. linear and when you're talking about Rec.709 linear.

 

Basically all digital sensors are linear by default, which means that highlights get far more nuances/shades than the shadows if stored in a linear file. That's because in digital, twice the amount of light on the sensor creates twice the electrical charge, so when storing data in a 256-shades-of-grey format (8-bit) you "waste" a lot of the shades on the highlights if the mapping is straight. Let's say you have a camera that can record 9 stops of dynamic range: the darkest 3 stops might have to fight over around 30 of those 256 shades, the middle 4 stops get about 90 shades, and the brightest 3 stops take the remaining 136. So the highlights get 3-4 times as many "shades" as the shadows. Now, my numbers aren't exact, but they illustrate the problem: there isn't a fair distribution of shades between the stops. What log does (or should do) is make sure that every stop gets the same number of "shades". So if we have 9 stops and need to squeeze them into those 256 shades, each stop gets about 28 shades (compared to linear, where the 3 shadow stops combined only got around 30). So that is "linear vs. log".
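The shade-counting argument above can be sketched in a few lines of Python. These are toy numbers (9 stops into 256 codes, no real camera spec); note that a strictly proportional linear split, where each stop holds twice the light of the one below, is even more lopsided than the rough figures quoted above:

```python
STOPS = 9
CODES = 256

# Linear storage: code values are proportional to light, and stop n spans
# light levels [2^n, 2^(n+1)), so each stop gets codes proportional to 2^n.
total_light = 2 ** STOPS - 1
linear_codes = [round(CODES * 2 ** n / total_light) for n in range(STOPS)]

# Log storage: every stop gets an equal share of the code values.
log_codes = [CODES // STOPS] * STOPS

print("linear, darkest -> brightest:", linear_codes)
print("log,    darkest -> brightest:", log_codes)
```

With strict linear encoding the single brightest stop eats half of all 256 codes while the three darkest stops share a handful, whereas log hands each stop the same ~28 codes; that equal split is what makes shadow detail gradeable.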

 

Then you have "linear" in another sense, which basically just means no gamma is applied; but as Dan pointed out, Rec.709 has a gamma. It can also mean that a "counter-gamma" has been applied (I don't know the proper term). For example, you can now shoot in a "flat mode" that looks like log but might not actually be log, and that might be stored in a codec that uses Rec.709.

 

LUTs, I would say, are good for shooting without baking in the look; they let you ballpark much better how the final image will look once graded. The most common LUT I would use while shooting in log or RAW is a Rec.709 LUT. Log has a very low-contrast, desaturated look, which is crap if you want to show the director/producer what it will look like.

Edited by Karl Eklund
Link to comment
Share on other sites
