
Camera Dynamic Range (DR) vs Display



I recently read the ACES Primer, where one of the headings was "bringing back the light meter". My understanding of this was that scenes would often be lit to accommodate the output device (display-referred) and would hence bottleneck the look of the film by not using the camera's full DR potential, especially (or at least) with respect to future exhibition.

It made me think about a question I have had for a while, regarding the lighting of a scene and the placing of the highlights. If the camera has 16 stops of DR but the displayed image (exhibition) will be in SDR with a lower bit depth (which, to my understanding, is indirectly tied to DR), does it make sense to place information into the upper and lower bounds of the camera's DR?

What will happen to the highlight information in stops 14/15/16 of the image if the exhibition can only display a DR of 10 or even 8 stops? Will it clip? Will the gradation in those stops' luminance just not be represented faithfully?

Or does it make more sense to light a scene in a way that keeps the information within those limited 8/10 stops? But then, what is the point of the information in the higher stop values, and what happens to it?


(I am aware that important information is always best placed on the straight-line, or gamma, portion of the characteristic curve.)

I hope my question makes sense. I have yet to find an answer that clearly lays out the reasoning, and I would greatly appreciate anyone willing to explain it to me.

Thank you for your time




DR is tied to bit depth only if the coding is linear, but this is not the case at all for most displays. Common colour spaces, such as Rec709, sRGB, or cinema projection, all use gamma curves to define the relationship between luma (0-100%, from black to white) and real-world brightness. Codes in the lower end are much closer to one another. Although the origin of this was related to the way Cathode Ray Tubes work, it also more or less follows the capacity of human vision to distinguish light levels. This is a very rare case of a technical non-linearity being favourable, even decades after CRTs have disappeared. LCDs still mimic the response of CRTs, and not only for backward compatibility: even if we could completely forget about CRTs, it would still be a good idea to use a non-linear curve. We might use a log curve instead of a gamma curve, but that would not be much different.
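Here is a quick sketch of what that means for code allocation, assuming full-range 8-bit codes, a pure 2.4 gamma and a display with infinite contrast (simplifications, not an exact Rec709 implementation):

```python
# Count how many 8-bit codes land in each one-stop luminance band below
# peak white, for linear coding vs a pure 2.4 gamma curve.

GAMMA = 2.4

def linear_light(code):
    """Relative display luminance (0..1) for an 8-bit code value."""
    return (code / 255.0) ** GAMMA

for stop in range(8):
    hi, lo = 0.5 ** stop, 0.5 ** (stop + 1)  # band one stop wide
    lin = sum(1 for c in range(256) if lo <= c / 255.0 < hi)
    gam = sum(1 for c in range(256) if lo <= linear_light(c) < hi)
    print(f"stop {stop + 1} below white: linear {lin:3d} codes, gamma {gam:3d} codes")
```

With linear coding, half of all codes are burned on the brightest stop; the gamma curve spreads them far more evenly across the stops, which is why bit depth and DR are only loosely coupled.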

Display DR depends on the definition you choose. To my knowledge, there are at least three (a quick conversion to stops follows the list):

- display contrast: this is the ratio of peak white to "black" level. LCD displays (whether TN, IPS or VA) are not pitch black when luma is 0; a consumer IPS panel is typically around 1200:1 to 1500:1. Other technologies, such as OLED, have true blacks and can be considered to have an infinite contrast ratio.

- ratio of the peak brightness to the tiniest difference between two adjacent levels anywhere on the brightness scale. Usually this tiniest difference is between the lowest and second-lowest levels. The value depends on the gamma curve. In Rec709 with a theoretical pure 2.4 gamma curve on a display with infinite contrast, the order of magnitude is a 1000:1 ratio (10 stops).

- ratio of the peak brightness to the threshold above which adjacent levels are close enough that the human eye cannot distinguish them. This is the criterion used in the BBC white paper WHP309 about HLG. The order of magnitude is a 40:1 ratio (5.27 stops). There are luma codes below this threshold, but they translate into differences in brightness that the eye is able to see.
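Turning such ratios into stops is just a base-2 log; a small sketch using the order-of-magnitude figures quoted above:

```python
import math

# Contrast-style ratios -> stops (figures are the rough ones quoted above).
for name, ratio in [("IPS panel contrast", 1500),
                    ("peak : smallest level step", 1000),
                    ("peak : JND threshold (WHP309)", 40)]:
    print(f"{name:30s} {ratio:5d}:1 -> {math.log2(ratio):5.2f} stops")
```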

There could be an additional definition: real-world DR. For example, if the light output by house lamps falls on your screen, it will hide the darkest details. This cannot be stated as a specification, since it depends on the viewing conditions. But it is of utmost importance, and it explains why, when producing for consumer TVs, the web or (worst case) smartphones, one might prefer to stay away from the darkest levels, since there is a high risk that viewers will miss them.

What happens if the camera can record more stops than the display? This is up to the production. The extra stops can simply be clipped. But most of the time they will be "compressed": differences between high brightness levels are reduced compared to what they would be if the curve were a pure gamma. This is the common "shoulder" part of so many tone-mapping curves.
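A minimal sketch of such a shoulder, with a made-up knee point and compression curve rather than any real camera's or ACES's tone mapping:

```python
import math

KNEE = 0.8  # scene-linear level (relative to display peak) where roll-off starts

def shoulder(x, knee=KNEE):
    """Map scene-linear x (may exceed 1.0) into the 0..1 display range."""
    if x <= knee:
        return x  # below the knee: untouched
    # Above the knee: smoothly squeeze everything into the remaining headroom.
    return knee + (1.0 - knee) * (1.0 - math.exp(-(x - knee) / (1.0 - knee)))

for x in (0.25, 0.8, 1.0, 2.0, 4.0, 8.0):
    print(f"scene {x:5.2f} -> display {shoulder(x):.3f}")
```

Several extra camera stops above the knee all land in the last 20% of the display range: compressed, not clipped.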

Edited by Nicolas POISSON

@Nicolas POISSON Thank you for the thorough answer. Yeah, by "indirectly tied to bit depth" I was referring to how, even in log encoding, for example, there still need to be enough bits assigned per stop to ensure an image with minimal posterization. The part about the DR definitions is also very interesting; I wasn't aware of the latter two, so I'll have to do more reading on them.
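A rough back-of-the-envelope of what I mean, assuming a hypothetical pure log encoding that spreads codes evenly across stops:

```python
# Codes available per stop for a pure log encoding covering S stops at B bits.
for bits in (8, 10, 12):
    for stops in (10, 16):
        print(f"{bits}-bit over {stops} stops: {2 ** bits // stops} codes per stop")
```

The more stops you squeeze into the same bit depth, the fewer codes each stop gets, and the greater the risk of banding.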

I especially appreciate the last part of your response. So would it be fair to say that, by compressing the highlights, the "gradation in the stops' luminance is just not represented faithfully" (relating to the wording of my original question)?


In a basic system, video cameras have a gamma of 0.45 (OETF: Opto-Electronic Transfer Function) to compensate for the inherent gamma 2.2 of CRTs (EOTF: Electro-Optical Transfer Function). The whole chain has a gamma of 0.45 × 2.2 ≈ 1 (OOTF), which means the brightness output by the CRT is simply proportional to the scene brightness.
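That chain in a few lines (a toy sketch, ignoring the linear toe that real Rec709 curves have near black):

```python
def oetf(scene):   return scene ** 0.45  # camera encoding
def eotf(signal):  return signal ** 2.2  # CRT-style display decoding

for scene in (0.01, 0.18, 0.5, 1.0):     # 0.18 = middle grey
    signal = oetf(scene)
    print(f"scene {scene:.2f} -> signal {signal:.3f} -> display {eotf(signal):.3f}")
```

The display column comes back (nearly) equal to the scene column: an end-to-end OOTF of 1.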

The problem is the rather limited DR, which is often cited as a telltale of the "video-ish" look. Even in the analogue era, during the '90s, some pro video cameras had highlight compression, which means the OOTF was not linear. Consumer cameras, however, mostly used the plain gamma 0.45 curve. A Sony TRV900, a popular consumer DV camera, could deliver around 6 stops of DR.

Things started to change in the late 2000s, especially when photo cameras entered the video market (the famous Canon 5D Mark II was released in 2008, IIRC). Both cameras and displays improved, but backward compatibility was important. The solution was to keep a 0.45 gamma curve for levels up to the mid-tones, and then use a highlight roll-off (= compression). What was formerly a pro feature became very common. At the beginning, this was presented as an additional "cine profile", but nowadays the plain "pure gamma 0.45" must have disappeared from most cameras. All built-in looks have a more or less pronounced roll-off. For the last 15 years, no video has been faithful to the scene luminance.
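A toy version of such a knee/roll-off; the knee point and slope are arbitrary illustrative values, not any manufacturer's actual profile:

```python
KNEE_IN = 0.9   # encoded level where the knee starts
SLOPE   = 0.15  # reduced slope above the knee

def oetf_with_knee(scene):
    """Scene-linear (1.0 = nominal white, may go above) -> encoded 0..1."""
    signal = scene ** 0.45
    if signal <= KNEE_IN:
        return signal  # mid-tones and below: classic gamma 0.45
    return min(1.0, KNEE_IN + (signal - KNEE_IN) * SLOPE)

for scene in (0.5, 1.0, 2.0, 4.0):
    print(f"scene {scene:.1f} -> encoded {oetf_with_knee(scene):.3f}")
```

Roughly two extra stops of highlights now fit before the encoded signal hits 1.0, where a plain 0.45 curve would clip anything above scene white.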

