
16bit Linear to 12bit Arri LogC. Interpolation in the shadow tones?



This question has been asked here before, but that thread is 7 years old, and since then much more information has become available to the public, so better explanations should also be possible now.

I'd like to take the example of ARRI's image processing pipeline. 

At the native ISO of 800 there are 7.8 stops available above 18% diffuse reflectance. Therefore 18% x 2^(7.8) ≈ 4011% diffuse reflectance is the clipping point.

As far as my understanding goes, all cinema camera sensors (but let's stick with ARRI) record light linearly, and the analog signal is converted linearly to 16-bit digital values (linear ADC), with the digital value proportional to the voltage generated (i.e. gamma = 1). Only then does the conversion from 16-bit linear values to 12-bit ARRI LogC take place. Let's also assume all of the 16-bit data is available for use, excluding noise and other limiting factors, in order to make life simpler.

Now let's say I want to expose an 18% gray card so that it ends up as middle gray in 12-bit ARRI LogC (I can't remember exactly what IRE they recommend, but let's assume it is 40% IRE). This is where I start getting confused.

On a linear ADC (assuming we expose correctly LINEARLY, so that the 18% gray card sits at about 0.449% LINEAR IRE, because 18/4011 ≈ 0.449%):

18% / 4011% x 65535 ≈ 294

This means that there are only about 294 tones from pitch black to the 18% gray card in the original data the linear sensor/ADC recorded. Yet once the 18% gray card is converted from 0.449% LINEAR IRE to 40% log IRE in 12 bits, there are 0.4 x 4095 ≈ 1638 tones from pitch black to the 18% gray card. Where are all these extra tones coming from? Is there interpolation happening here?
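
Here is a quick sketch of that arithmetic, using the numbers I assumed above (the 7.8 stops of headroom, a full-range 16-bit linear ADC, and my guess of a 40% IRE placement for middle gray in 12-bit LogC):

```python
# Assumed numbers from this post, not ARRI's actual pipeline:
# 7.8 stops of headroom above 18% gray, a full-range 16-bit linear ADC,
# and middle gray placed at 40% of the 12-bit LogC code range.
clip_reflectance = 18 * 2 ** 7.8                    # ~4011 (% diffuse reflectance at clip)
linear_code_gray = 18 / clip_reflectance * 65535    # linear code value of the 18% card
logc_code_gray = 0.40 * 4095                        # assumed 12-bit LogC code of the card

print(round(clip_reflectance))   # ~4011
print(round(linear_code_gray))   # ~294 codes between black and 18% gray in linear
print(round(logc_code_gray))     # ~1638 codes between black and 18% gray in LogC
```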

The whole point of log recording is to allocate more bit values to the darker areas, up to 18% gray, where we are more sensitive. Yet the logarithmic values are all just redistributions of the linear 16-bit values, if I understand correctly. Thus, there were never more than 294 tones of true information recorded between pitch black and 18% gray.

 

The only other thing I can think of is that when we expose the 18% gray card to sit at 40% IRE in 12-bit LogC, we are actually OVEREXPOSING on the linear readings, so that the 18% gray card is read as 1638/65535 x 4011% ≈ 100% diffuse reflectance (or 2.5% LINEAR IRE). That would give us 1638 tones of true information, because 100%/4011% x 65535 ≈ 1638 on the 16-bit linear scale.
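
And the same kind of sketch for that alternative reading, again using my assumed numbers:

```python
# If 12-bit LogC code 1638 corresponded 1:1 to 16-bit linear code 1638,
# the gray card would have to sit much higher in the linear signal.
clip_reflectance = 18 * 2 ** 7.8                       # ~4011%
implied_reflectance = 1638 / 65535 * clip_reflectance  # reflectance implied by code 1638

print(round(implied_reflectance))     # ~100 (% diffuse reflectance)
print(round(1638 / 65535 * 100, 2))   # ~2.5 (% of the linear range, i.e. LINEAR IRE)
```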

I hope I explained my conundrum clearly. If not, feel free to ask and I'll try to explain again. Thanks.

 

PS: I know this doesn't help in any way with being a great cinematographer. I just got curious haha. 


I am really not an expert on this topic, but I am interested. Maybe you have already read it, but just in case, you will find plenty of information in this document:

https://www.arri.com/resource/blob/31918/66f56e6abb6e5b6553929edf9aa7483e/2017-03-alexa-logc-curve-in-vfx-data.pdf

I do not know where the "1638 tones" value comes from. My understanding is that LogC maps 18% gray (the 294th step in 16-bit linear at ISO 800) to a relative value of 0.391, which would be 400 in 10-bit or 1601 in 12-bit. This does not mean there are 400 or 1601 "real" tones below it. After all, the LogC curve does not start at zero: pitch black is mapped to relative 0.0928 (at any ISO), which would be 95/1023 or 380/4095. If I understand correctly, the 294 lower tones are mapped to 95-400 in 10-bit LogC and to 380-1601 in 12-bit. So you have more steps in the output than were digitized by the ADC, which does not improve quantization, but at least preserves it (it seems almost 1:1 in 10-bit).
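
If it helps, here is a small sketch of the EI 800 LogC (v3) curve using the parameters published in that document (worth double-checking the constants against the PDF); it reproduces the values above, scaling the relative signal over the full code range and ignoring legal/extended range:

```python
import math

# LogC (v3) encoding parameters for EI 800, as published in the ARRI
# "ALEXA LogC curve in VFX" document linked above (please verify).
cut, a, b = 0.010591, 5.555556, 0.052272
c, d = 0.247190, 0.385537
e, f = 5.367655, 0.092809

def lin_to_logc(x):
    """Relative scene linear (0.18 = 18% gray) -> relative LogC signal."""
    if x > cut:
        return c * math.log10(a * x + b) + d
    return e * x + f

for x in (0.0, 0.18):
    y = lin_to_logc(x)
    print(x, round(y, 4), round(y * 1023), round(y * 4095))
# prints: 0.0  0.0928   95  380
# prints: 0.18 0.391   400 1601
```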

The drawback of "more quantization steps than needed in the lower tones" is "fewer quantization steps than possible in the higher tones". But you still have 623/1023 or 2494/4095 steps for the upper part (well, a bit fewer, since clip might not be at 100%). Maybe that is still more than enough?

Edited by Nicolas POISSON

Forgive me, but I haven't read the entirety of your post; I've just seen you're going down a rabbit hole.

 

https://gabrieldevereux.com/2022/01/15/misc_log_container/

 

Above is a fair bit of info I wrote on my website a while ago.

 

Here's the general gist (of how it works, briefly and in a crude fashion, as always).

Your linear signal has an offset (of 256 values for LogC) prior to encoding into the container. A camera log container has a linear bias: in the case of 12-bit LogC, it's linear up to 1024, and then compression takes place (the gist is that each stop above 1024 is compressed into 512 values). No interpolation occurs, except if you attempt to reconstruct the 16-bit linear curve after encoding.
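
As a crude toy model of that (not the published ARRI formula, just to show the shape of the idea), something like this fits a 16-bit linear range with a 256 offset into roughly 4096 codes once each stop above the linear section gets 512 values:

```python
import math

def toy_log_encode(raw16):
    """Crude illustration only, not ARRI's actual LogC maths.

    Remove a 256 black-level offset, pass the signal through linearly up to
    1024, then squeeze each stop (doubling) above 1024 into 512 code values.
    """
    lin = max(raw16 - 256, 0)
    if lin <= 1024:
        return lin
    return 1024 + 512 * math.log2(lin / 1024)

for raw in (256, 1280, 2304, 65535):
    print(raw, round(toy_log_encode(raw)))
# 256   -> 0     (black after the offset is removed)
# 1280  -> 1024  (top of the linear section)
# 2304  -> 1536  (one stop above the linear section)
# 65535 -> 4093  (near the top of a 12-bit range)
```

The mapping is fixed and monotonic, so nothing is interpolated on the way in.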

 

Re: more steps in the output than digitised by the ADC, that's not necessarily true. Most HDR cameras sample below the noise floor.
 

I'm not sure where everyone is getting their values from… but if you're looking in a bog-standard color NLE such as Resolve, the values aren't necessarily accurate in terms of WFM or RGB picker readouts.

 


Just now, Gabriel Devereux said:

Your linear signal has an offset (of 256 values for LogC) prior to encoding into the container. […]

“Photosite values are radiometrically linear representations of the energy they receive, represented as 16-bit unsigned integers, incorporating an offset such that a photosite receiving no energy would have a 16-bit unsigned integer value of 256.”

Just to take a direct quote from an ARRI RDD
