
gamma curve and log


Sam Kim


I'm working on a Sony F800 and I have put in a gamma curve that matches Panalog.

I'm also working on the Genesis and capturing Panalog.

 

By eye and in color-correction tests this works fairly well, but then a friend asked me to define the difference between a gamma curve and log.

 

I had no real answer for him. Please correct me if I'm wrong, but my understanding of logarithmic curves is that they take the sensor data and redistribute the quantization points along a better curve, to get the most from your sensor. Does a gamma curve not do the same thing?


  • Premium Member

Two usually different things. Both are nonlinear transfer curves, but log encoding (or what gets called log encoding) is usually intended to make better use of the quantization points (as you correctly point out), whereas gamma correction was originally intended to counteract the nonlinearity of CRT displays and their associated hardware.

 

Gamma is, strictly speaking, a power function: we raise the brightness of the image to a given power. It tends to affect the shadows and the lower midtones most obviously.
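For illustration, here's a minimal Python sketch of that power-function idea; the 2.2 exponent is just a commonly quoted example value, not any particular camera's figure.

```python
import numpy as np

def gamma_encode(linear, gamma=2.2):
    """Raise normalized linear light (0..1) to the power 1/gamma.

    This is the plain power-function sense of 'gamma' discussed above;
    real standards (e.g. Rec.709) also add a small linear toe near black.
    """
    return np.clip(linear, 0.0, 1.0) ** (1.0 / gamma)

def gamma_decode(encoded, gamma=2.2):
    """Inverse operation: roughly what a gamma-2.2 display chain does."""
    return np.clip(encoded, 0.0, 1.0) ** gamma

# Mid grey (18%) is lifted well above 0.18 once encoded:
print(gamma_encode(0.18))   # ~0.46
```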

 

Log encoding can mean more or less whatever the manufacturer wants it to mean - well, it shouldn't, but generally it does. It tends to be used, incorrectly, as a catch-all term for any power law (or curve) that greatly increases the brightness of shadows and midtones, which is somewhat similar to what a gamma power function does, but for different reasons, and there may be lumps and bumps in the curve that exist only to serve the engineer's idea of what is best in that particular circumstance. This is why we end up with proprietary names - S-Log, Panalog, etc. - they're not just literal log encodings, they're specially tweaked curves.

 

As a practical matter all real-world camera systems (and scanners, etc) will manipulate the power law applied to the raw sensor data in such a way as to alter gamma and possibly also apply log encoding, and they'll do that according to whatever the manufacturer or setup engineer happened to feel was most effective. There are few absolute mathematical rules in play.
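To give a feel for what a log-style curve does with code values, here is a toy sketch; every constant in it is made up for illustration, so it is not S-Log, Panalog or any other real manufacturer's curve.

```python
import numpy as np

# A purely illustrative "log-style" encode. All constants are assumptions.
MID_GREY_CV = 445        # assumed 10-bit code value for 18% grey
CV_PER_STOP = 90         # assumed code values allocated per stop
BLACK_CV = 95            # assumed code value floor

def toy_log_encode(linear, mid_grey=0.18):
    """Map linear scene values into 10-bit code, equal code range per stop."""
    stops_from_grey = np.log2(np.maximum(linear, 1e-6) / mid_grey)
    cv = MID_GREY_CV + CV_PER_STOP * stops_from_grey
    return int(np.clip(cv, BLACK_CV, 1023))

for label, lin in [("2 stops under grey", 0.045), ("18% grey", 0.18),
                   ("90% white", 0.90), ("2 stops over white", 3.6)]:
    print(f"{label:>18}: CV {toy_log_encode(lin)}")
```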

 

P


  • Premium Member

In layman's terms, Log is one type of gamma curve. Film negative, when scanned, naturally forms a log-shaped curve (density is roughly proportional to the log of exposure), whereas digital camera sensors are more or less linear and a gamma curve has to be applied to the signal.

 

Most Log formats like S-Log or PanaLog put white (Zone 10 on a chip chart) at something like 70 IRE instead of 100 IRE, the idea being that since you can record slightly over 100 IRE in level, you are capturing about 2.5 stops over white in terms of overexposure detail. This is different from shooting Rec.709, setting white at 100 IRE and using a heavy knee compression to hold more overexposure detail. Now I suppose you could do that and just underexpose everything by 2.5 stops to hold a similar amount of headroom to PanaLog, but then your midpoint is lowered and your image will be a bit noisier once you lift it.
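As a rough back-of-the-envelope check on that headroom figure (the 15 IRE per stop allocation above white is purely an assumption for illustration, not a published PanaLog or S-Log number):

```python
# Rough sketch of the headroom arithmetic above; the per-stop figure is assumed.
white_ire = 70                 # where the log curve places 90% white
clip_ire = 109                 # roughly where the recorded signal clips
ire_per_stop_above_white = 15  # assumed allocation in the highlight region

headroom = (clip_ire - white_ire) / ire_per_stop_above_white
print(f"about {headroom:.1f} stops of detail above white")   # ~2.6 stops
```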

 

Black seems to sit between 5 IRE and 10 IRE with most of these Log formats. I don't see as much advantage at that end; once something is black, it's black. Recording a lifted black may slightly help with some color correction of dark scenes, but it also tends to make you think you have more usable shadow detail than you actually have, because once you set the blacks back to 0 IRE, the shadows look darker.

 

Anyway, on Season Two of "United States of Tara" I had to record the Genesis in Rec.709 gamma, so I used the gamma tables and knee compression to create a fake PanaLog effect, but it wasn't the same; I couldn't get the same highlight protection. Luckily I was allowed to go back to PanaLog for Season Three.


"Anyway, on Season Two of 'United States of Tara' I had to record the Genesis in Rec.709 gamma, so I used the gamma tables and knee compression to create a fake PanaLog effect, but it wasn't the same; I couldn't get the same highlight protection. Luckily I was allowed to go back to PanaLog for Season Three."

 

You used the Genesis and shot to Rec.709, but created a gamma curve on top of that to fake PanaLog? It sounds like you had to fight against yourself a lot, then?

 

Thanks for the information.

 

What I'm understanding is that gamma curves are used because of the gamma of TV monitors and such, and that log encoding was made to match the film world and the characteristic curves of film. Is this right?

 

To me it still sounds like they're doing the same thing with linear sensor data, just with different output destinations in mind, and that the curves are manipulated by each company to match what they think best serves their product.

 

One thing that concerns me: my colorist mentioned that we're correcting everything to a Rec.709 plasma (no money for a big room with P3 color space), and I was wondering how that will look when projected in environments I have no control over. At school we have a 4K projector that the projectionist can set to Rec.709, so I can expect results similar to what's displayed in the color correct, but what about everywhere else? Is this just a normal thing, though? Is exhibition always up for grabs?


  • Premium Member
"...log encoding was made to match the film world and the characteristic curves of film. Is this right?"

 

Not exactly, although it's probably accurate to associate log shooting more with what would be thought of as "film" projects.

 

Consider a 10-bit image, with values from 0 to 1023. Let's say we want to create a ten-step greyscale, so we create a black (at brightness 0), a white (at brightness 1023), and equally spaced points in between, roughly every 100 code values. If we do that we have created a numerically linear greyscale. The problem is, in an image with linear encoding, viewed on a display chain with a gamma near 2, the greyscale won't actually look linear - there'll be big steps between the darker chunks and small steps between the lighter chunks. The image encoding does not allocate equal bandwidth to shadow and highlight detail, and we are not using the available data space as efficiently as we might.

[Image: top, a linear greyscale; bottom, a logarithmic greyscale.]
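To put rough numbers on that greyscale example, assuming a display gamma of exactly 2.0 purely for illustration:

```python
import numpy as np

# The ten-step greyscale above: equally spaced 10-bit code values treated as
# linear light, viewed through a display chain with gamma 2.0 (assumed here).
codes = np.linspace(0, 1023, 11).round().astype(int)   # 0, 102, 205, ..., 1023
luminance = (codes / 1023.0) ** 2.0                    # what the screen emits

# Size of each visible step in stops (ratios, roughly how the eye judges
# brightness); the patch at 0 is skipped to avoid dividing by zero.
for lo, hi in zip(luminance[1:-1], luminance[2:]):
    print(f"{np.log2(hi / lo):.2f} stops")
# The first step prints as ~2 stops, the last as ~0.3 stops: anything but even.
```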

Log encoding is designed to mitigate this by stretching out the shadow detail, making better use of the available range. Because of this the image will look unnaturally flat, even foggy, on a linear display, with what should be black or shadowy rendered as a midtone; however, when we come to grade, we can make manipulations without having to stretch out the shadow detail, since it is already stored in a stretched state, and we are less likely to run into quantization noise (banding or contouring) and other problems, such as compression artefacts, if we want to recover shadow detail.
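A quick sketch of how big the difference is at the bottom end: counting the distinct 10-bit code values available to the darkest two stops of a ten-stop scene, stored linearly versus through a simple, purely illustrative log encode.

```python
import numpy as np

# Count distinct 10-bit code values in the darkest two stops of a ten-stop
# scene, stored linearly vs. through an illustrative log encode that spends
# equal code range on each stop (not any manufacturer's actual curve).
stops = 10
scene = np.linspace(1 / 2**stops, 1.0, 200_000)   # linear scene luminance

linear_cv = np.round(scene * 1023).astype(int)
log_cv = np.round(1023 * (np.log2(scene) + stops) / stops).astype(int)

bottom_two_stops = scene <= 1 / 2**(stops - 2)
print("linear:", np.unique(linear_cv[bottom_two_stops]).size, "code values")  # ~4
print("log:   ", np.unique(log_cv[bottom_two_stops]).size, "code values")     # ~205
```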

This does cause enormous problems; I hear constant tales of people trying to view a log image on a monitor with no LUT in it and saying things like "I fired a huge light into the scene and nothing happened" - well, no, it won't appear to, because you are viewing a massively low-contrast image. Log images are not designed to be directly viewable; you need a lookup table in the monitor to view them meaningfully, and which lookup table you use is a creative decision.

That's what log encoding does. All the other stuff, about sitting the shadows up and bringing the highlights down so that nominal 100% is actually at only 70%, is where the creative mathematics and proprietary curve fiddling come in. All these claims about "you get two stops beyond 100%" are technically fairly meaningless. The manufacturer decides what to call 100% and where to place it numerically on the camera's output, chooses how to compress the highlights beyond that point, and is free to make any choice they like and even alter that choice with lookup tables. Cameras have whatever dynamic range they have; that is fixed in the silicon of the sensor, and trying to make it seem like more by sitting the blacks up is, in my view, cheating at best.

 

P


  • 3 months later...

What David is claiming is not totally correct... let me elaborate in a more down-to-earth manner, and a bit exaggerated, very close to what Phil's chart tries to show:

 

Let's assume that we want to capture an 8-stop image from a sensor with 12-bit internal processing (4096 steps of light intensity) into an 8-bit compressed H.264 file (256 steps), which is what a 5D camera does.

 

In "linear" the brightest stop, no. 8, consumes 128 steps, stop no. 7 consumes the next 64 steps, and stop no. 6 the next 32 steps. The top three stops of a linearly described image therefore "consume" 87.5% of the available code values in the file, leaving the lower five stops of mid-tones and shadows to be described by just 32 code values!

 

On "Log" we allocate 10 stops in equivalent segments like 25 bits for each stop. That results to a secondary data compression of an applied curve, which needs to be decompressed with the use of an appropriate 3D LUT in order to counteract accurately also the saturation of the converted image. By doing that we effectively recreate information that wasn’t visible in the original file but was there in the original image the sensor was capturing.

 

So the difference between Linear and Log is night and day…

