How does IRE correspond to exposure?


Evan Richards

There has to be a correlation, right? Let's say I shoot an 18% gray card and it sits at 50 IRE on a waveform monitor. What would an increase of one stop read? A decrease of one stop?


I'm trying to create some customized false-color LUTs, just for my own purposes, to analyze images in Resolve. I'd like to be able to say: gray = middle gray, one stop over that should be yellow, two stops over should be orange, three stops over should be red, etc. But I'm struggling with how to measure the exposure increments.
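(Editor's note: once the per-stop spacing of the camera's log curve is known, the banding logic itself is simple. Here is a minimal Python sketch; the 0.391 midpoint and 0.074-per-stop spacing are rough approximations for ARRI LogC3 at EI 800 in the mid-range, and the band names are placeholders, not any tool's actual scheme.)

```python
# Hypothetical false-color banding sketch. GRAY and STOP are approximate
# LogC3 (EI 800) values and only hold in the mid-range of the curve.
GRAY = 0.391   # approx. normalized LogC3 code value for an 18% gray card
STOP = 0.074   # approx. code-value change per stop near middle gray

def false_color(code_value):
    """Map a normalized code value (0..1) to a band name by stops over/under gray."""
    stops = (code_value - GRAY) / STOP
    if stops >= 3:
        return "red"
    if stops >= 2:
        return "orange"
    if stops >= 1:
        return "yellow"
    if stops > -1:
        return "gray"
    if stops > -2:
        return "blue"
    return "purple"
```

Because log curves compress toward the shadows, a fixed per-stop spacing like this drifts at the extremes; per-band thresholds measured from a chart test would be more accurate.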


Thanks!


  • Premium Member

Depends on the camera and the gamma setting you are measuring -- ARRI Log-C takes the 14 stops of dynamic range of the Alexa and spreads them out so that, when pointing at an 11-step grey scale, black falls around 15% and white falls around 65%, leaving the range above and below to record extra shadow and highlight information (on cameras with a narrower dynamic range like the Genesis/F35, in log mode, black was around 10% and white around 70%).

 

And it's a log curve, not a straight line, so a 1-stop change in exposure isn't going to produce the same change in IRE near the bottom or the top of the scale as it does in the middle. But in the middle, it seems to work out that 5%-ish equals a stop on an Alexa? 6%? Switch to measuring Rec.709, though, and each stop is probably more like a 10% change.
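(Editor's note: ARRI publishes the LogC3 encoding formula, so the per-stop spacing can be computed directly. A sketch using the published EI 800 constants; treat the figures as approximate, since real footage also passes through camera processing.)

```python
import math

# ARRI LogC3 (EI 800) encoding constants, from ARRI's published Log C formula.
A, B, C, D = 5.555556, 0.052272, 0.247190, 0.385537
CUT, E, F = 0.010591, 5.367655, 0.092809

def logc3(x):
    """Encode scene-linear reflectance x (0.18 = 18% gray) to normalized LogC3."""
    if x > CUT:
        return C * math.log10(A * x + B) + D
    return E * x + F

# One stop = doubling/halving scene-linear light around 18% gray.
for stops in range(-3, 4):
    lin = 0.18 * 2 ** stops
    print(f"{stops:+d} stop: {logc3(lin) * 100:.1f}%")
```

Run this way, middle gray lands near 39%, and the mid-range spacing comes out closer to 7% per stop, compressing toward the shadows -- the same ballpark as the 5-6% estimate above.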



 

In my mind I guess I was thinking exposure would be an objective measurement, and a one-stop increase would be the same irrespective of which camera is being used.

 

So, it seems like what you're implying is that maybe the best way to measure where the exposure lies would be to shoot a test and view it on a waveform monitor: expose a gray card to 50% (just for ease of measurement), then expose up one stop at a time until it reaches white, and down one stop at a time until it reaches black.

 

Thanks David!


  • Premium Member

 


If every camera had the same increase in signal for every stop of exposure, that would mean they all had the same dynamic range.

 

Waveforms were traditionally used to measure broadcast range, where on an 11-step grey scale black might be 0% and white would be 100%, and I think 18% grey was 45%? Caucasian skin tone at full exposure (Zone VI, I believe) would be 70%. But there were variations even in that. You have to decide in your tests whether you want to measure the changes in log gamma mode or display gamma mode, or both.
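(Editor's note: as one data point for the display-gamma case, the BT.709 OETF can be evaluated stop by stop. This sketch uses the pure BT.709 transfer curve only, ignoring any camera knee or monitor gamma, so real cameras will land differently.)

```python
# BT.709 opto-electronic transfer function (OETF), per ITU-R BT.709.
def bt709_oetf(L):
    """Map scene-linear light L (0..1) to a normalized signal value (0..1)."""
    if L < 0.018:
        return 4.5 * L
    return 1.099 * L ** 0.45 - 0.099

# Per-stop signal levels around 18% gray.
for stops in range(-3, 3):
    lin = 0.18 * 2 ** stops
    print(f"{stops:+d} stop: {bt709_oetf(min(lin, 1.0)) * 100:.1f}%")
```

On this pure curve, 18% gray lands near 41%, and the spacing between stops is not constant: it widens toward the highlights and narrows toward the shadows, which is exactly why a single "% per stop" figure is only a mid-range rule of thumb.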



Yes. I suppose that makes sense. When you put it that way, it seems so logical.

 

But then I think about it more... and it starts to confuse me. Maybe my definition of what a "stop" is is off. I had always thought of it as doubling or halving the light: open up the aperture by one stop and you double your light; drop in a .3 ND (one stop) and you cut the light in half. So it seems like, with two cameras of different dynamic range, as long as you start with the same exposure on both, doubling the light (increasing by one stop) would be the same increment on each, regardless of sensor. Double is double. Different dynamic ranges allow one camera to shoot MORE stops than another, but the stops themselves would be the same. How am I thinking about this wrong?
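(Editor's note: one way to see the distinction is that a stop always doubles scene-linear light at the sensor, but the step the waveform shows depends entirely on the encoding curve between sensor and monitor. A toy comparison, using a hypothetical pure-log encoding and a plain power-law gamma, not any real camera's curves:)

```python
import math

def pure_log(x, stops_of_range=14):
    """Toy log encoding: map linear [2**-14 .. 1] onto [0..1] uniformly per stop."""
    return 1.0 + math.log2(x) / stops_of_range

def gamma(x, g=1 / 2.4):
    """Toy power-law (display-style) encoding."""
    return x ** g

# Each pair below is exactly one stop apart in linear light.
for a, b in [(0.01, 0.02), (0.1, 0.2), (0.4, 0.8)]:
    print(f"log step: {pure_log(b) - pure_log(a):.3f}   "
          f"gamma step: {gamma(b) - gamma(a):.3f}")
```

The doubling is identical in every case; the pure-log curve turns each doubling into the same fixed signal step, while the power-law curve gives a different step at every level. So "double is double" holds for the light, but not for what the waveform displays.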


  • Premium Member

On film, the response to exposure is not linear except within the straight line portion of the characteristic curve. In digital, sensors respond to light in a linear fashion but often that is processed into some sort of gamma that is non-linear.

 

I'm not enough of a video engineer to explain IRE, or whether a voltage increase in a signal in an Alexa is different pre- and post-A/D conversion, etc., but just remember that your waveform is measuring a fairly processed signal. So when you say "two cameras should show the same increase in signal for every stop of exposure increase," you'd have to define what "signal" you were measuring to compare them. Plus, some sensors have complex outputs in order to achieve higher dynamic range -- for example, this is what ARRI writes about the sensor in the Alexa:

 

The Dual Gain Architecture simultaneously provides two separate read-out paths from each pixel with different amplification. The first path contains the regular, highly amplified signal. The second path contains a signal with lower amplification, to capture the information that is clipped in the first path. Both paths feed into the camera's A/D converters, delivering a 14 bit image for each path. These images are then combined into a single 16 bit high dynamic range image. This method enhances low light performance and prevents the highlights from being clipped, thereby significantly extending the dynamic range of the image.

