
Let's talk about linear to log, A-to-D in digital cameras


Charles Zuzak

Recommended Posts

That might change in the future, but as of now every digital camera sensor reacts proportionally/linearly to the number of photons it receives. So the curve of every camera, as far as RAW data is concerned, looks like this:

RTEmagicC_91c2b0f812.jpg.jpg

This is what you get with a linear x-axis (number of photons, lux, etc.).

 

 

bmcc_dng_lut.gif

 

Or like this with a log x-axis (stops of light, for example).

 

So that's how RAW is stored, and we can't do much about it (except with a non-linear sensor, not yet available).

 

All the encodings like LogC come from applying mathematical functions to this RAW data, and they're purely digital (nothing analog about them).

For example, if you want to see your file with the EI 800 LogC curve, Arri applies something like this: 0.25*log10(5.56*x + 0.05) + 0.39.

And then we get something where every stop has roughly the same number of code values as the others.
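As a rough illustration, here's a minimal Python sketch using the rounded coefficients quoted above (not Arri's exact published ones), showing how values one stop apart in linear light come out roughly evenly spaced after encoding:

```python
import numpy as np

def logc_approx(x):
    """Approximate LogC-style encoding of normalized linear sensor data.
    Coefficients are the rounded values quoted above, not Arri's exact ones."""
    return 0.25 * np.log10(5.56 * x + 0.05) + 0.39

# Linear values one stop apart (each double the previous)...
linear = np.array([0.01, 0.02, 0.04, 0.08, 0.16, 0.32, 0.64])
# ...come out roughly evenly spaced after encoding
# (the +0.05 toe offset squeezes the very darkest steps a little).
print(np.diff(logc_approx(linear)))
```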

 

So again, the dynamic range is limited by the number of times you can divide by two starting from the clipping value:

4096, 2048, 1024 ... all the way down to 1.
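Counting the halvings in code (a trivial sketch, assuming a 12-bit clipping value of 4096):

```python
value, stops = 4096, 0
while value > 1:
    value //= 2      # one halving = one stop down from the clipping value
    stops += 1
print(stops)         # 12 halvings before reaching 1
```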


Well, I think it's pretty hard to do otherwise.
Apparently the Nikon D2X has some processing prior to the ADC (for white balance): http://www.dpreview.com/glossary/camera-system/sensor-linearity

The Axiom camera might feature a non-linear response as well: https://apertus.org/axiom-alpha-imagesensor - curious to see how that turns out.

 

But basically, every camera that has "log encoding" applies it to counter the sensor's linear response.


  • Premium Member

All of them have at least some processing prior to the ADC; amplification, for a start. The key point is that various stages of processing take place before the image is ready to be recorded, and the output of those stages is processed until it has the desired response relative to the incident light. I don't want to dissuade people from being interested, or to obfuscate things unnecessarily, but the specifics of how that's done, and what sort of data is available at each stage, are particular to each design and not really that interesting from a cinematographic perspective.

 

When most people discuss this they're considering the number of bits recorded versus the absolute dynamic range of the sensor, a situation in which bit depth and number of stops need not have any particular relationship.

 

P


Because ADCs are linear devices - hence my question as to why there aren't ADCs with logarithmic responses.

 

There are. For years the phone company used them to compress audio signals. They were called 'companders', for 'compressor and expander': they 'fit' the dynamic range into what the phone company could carry while keeping the audio intelligible. The result was... well... 'phone sound'...

 

There are analog ways to do this, with devices that are non-linear, or via digital conversion and simple lookup tables (LUTs)...

 

Back in the olden days, when I was still an analog engineer, I designed such a thing - for a different application than audio, but based on the same principle of using an ADC and then a ROM to look up the output values given the ADC input values. Later on, for video signal processing, one would have a 'memory board' to do the LUT lookup.

 

These days, I'm told, thanks to the cheapness of processing elements, a lot of what used to be done with an actual memory device implementing the LUT is now done with a set of ALU elements computing the function directly.

 

But in general the concept is simple... Analog signal -> DSP -> Wyrdness out... to whatever satisfactory level of wyrdness is desired... for the price... there's always the price...
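A minimal sketch of that ADC-plus-ROM idea, with made-up sizes (a hypothetical 10-bit linear input feeding an 8-bit log-coded output):

```python
import numpy as np

ADC_BITS, OUT_BITS = 10, 8                      # hypothetical sizes, not any real camera

# Build the "ROM" once: each possible ADC code maps to a log-scaled output code.
lut = np.zeros(2**ADC_BITS, dtype=np.uint8)
codes = np.arange(1, 2**ADC_BITS)               # skip 0 to avoid log(0)
lut[1:] = np.round((2**OUT_BITS - 1) * np.log2(codes) / ADC_BITS).astype(np.uint8)

# "Conversion" is then just an indexed read, exactly like addressing a ROM.
adc_sample = 512                                # some linear ADC reading
print(lut[adc_sample])                          # its log-coded 8-bit value
```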

Edited by John E Clark

 

Yes, because sRGB is nowhere near linear.

 

P

 

It's not because sRGB is non-linear that my screen displays 9.5 stops from an 8 bit encoding. One could still, in principle, build a screen with a linear log relationship between those same two ranges. But screens are not made that way. They use an sRGB mapping between the two, which gives finer gradations in the brighter pixels than in the darker pixels. I assume screens are made this way because they were originally (or still are) easier/cheaper to manufacture this way, i.e. rather than for perceptual reasons - as if we might prefer finer gradations in the brighter pixels.

 

C

Edited by Carl Looper

I made this chart from measurements of the light emitted by different pixels on a given computer screen (for a film-out application). Varying the brightness/contrast of the screen will vary the range of the emitted light, but the curve keeps roughly the same non-linear shape. A camera can be understood as the inverse of this: light going into the camera is mapped to pixel values according to whatever curve the camera manufacturer finds balances engineering constraints and perceptual efficiency.

 

If cameras encode light ranges in a non-linear way, it suggests there is some physical/engineering reason for doing so (which might very well be the curves associated with screens!), as I don't see any perceptual reason for it. Why would we want finer gradations in the brighter pixels? Why not just a linear log scale, since that would be the most compact digital representation of light in terms of perception?

 

 

 

 

ComputerScreen.jpg

Edited by Carl Looper

  • Premium Member

 

 

It's not because sRGB is non-linear that my screen displays 9.5 stops from an 8 bit encoding.

 

Well, it is - by definition, if the display was absolutely linear, it couldn't have more than 8 stops of dynamic range.

 

Anyway, the reason sRGB has bigger gaps between the dark shades - that is, it brightens the shadows - is simply because it was designed for viewing in brighter environments than are typical for TVs. sRGB was designed for offices, not living rooms, so they punched it up a bit.

 

P


The OP's contrast between "linear" on the one hand and "log" on the other is rather misleadingly worded.

 

The term "linear" refers to the shape of the characteristic curve (to use traditional film jargon) in an otherwise log-scale chart. Is the curve a straight line, or does it bend? If it's a straight line, we call it linear. The x and y axes are still log scales. Pixel values are internally coded logarithmically: as bits. So it's not as if encoding light logarithmically would be "non-linear". Rather, one can have a linear mapping between stops and bits, or a non-linear mapping between the two. And it is actually a linear mapping between the two that would be the ideal mapping. But I suspect it's physical/engineering constraints that make this not possible.

 

An analogous case can be found in film. Ideally, film would have a linear mapping between exposure value and density; any non-linear mapping is to be treated as less than ideal. Both the linear and non-linear parts of the curve occupy log space. But physically (i.e. from a chemical engineering perspective) only part of the curve can be kept linear - the middle part - with the toe and shoulder becoming non-linear.

 

I suspect it's the analogous case with computer screens (projectors, etc.): that it's due to engineering limitations that the mapping between switches (bits) and light output can't hold to a linear mapping.

 

C


  • Premium Member

I think when most people use the term "linear" with respect to photography they're talking about linear light, that is, absolute photon counts.

 

The problem with all of this, and the reason that it's all of academic interest only (as I've been trying to say for some time) is that practically nothing ever actually works like that.


 

Well, it is - by definition, if the display was absolutely linear, it couldn't have more than 8 stops of dynamic range.

 

Anyway, the reason sRGB has bigger gaps between the dark shades - that is, it brightens the shadows - is simply because it was designed for viewing in brighter environments than are typical for TVs. sRGB was designed for offices, not living rooms, so they punched it up a bit.

 

P

 

A display which has 8 bits driving it can still be linear but drive any range of light output it likes. But I take your point about the viewing environment. So it's not due to engineering constraints?

 

C

Edited by Carl Looper

I think when most people use the term "linear" with respect to photography they're talking about linear light, that is, absolute photon counts.

 

The problem with all of this, and the reason that it's all of academic interest only (as I've been trying to say for some time) is that practically nothing ever actually works like that.

 

An absolute photon count is both linear and logarithmic; there is no contrast between these terms. If I encode 256 photons with the number 256, and 64 photons with the number 64, I have a logarithmic encoding of the light, because numbers in a computer are encoded logarithmically (base 2).

 

What's the difference between 1 and 2? 1 bit.

What's the difference between 128 and 256? 1 bit.

 

It is why we can store 256 different numbers (0-255) using only 8 switches (bits).
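In code, that relationship between a count and the number of switches needed is just a base-2 log (a small illustrative sketch):

```python
import math

# 256 distinct values (0..255) fit in 8 binary switches:
print(math.ceil(math.log2(256)))          # 8
# and every doubling of the range costs exactly one more bit:
for n in (2, 4, 256, 1024):
    print(n, math.ceil(math.log2(n)))     # 1, 2, 8, 10 bits respectively
```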

 

C

Edited by Carl Looper

The last line in that last post should probably read:

 

It is why we can store any one of 256 different numbers (0-255) using only 8 switches (bits).

 

Now, if it's the case that screens are preferably non-linear (brighter pixels having finer gradations) due to viewing conditions, then the optimal encoding when digitising light (stops to bits) would not be a linear one, but one with the same non-linear shape as the screen's mapping of bits to stops.

 

I don't know whether digital cinema projectors are designed any differently from screens, but if they were, then cameras for that pipeline would do best to match how the projector maps bits to light output. One would then get the best possible range between camera input and projector output.

 

However, once you factor in a whole heap of digital grading voodoo between camera input and projector/screen output, it hardly matters what the optimum data throughput would be. More to the point would be having standardised, consistent mappings in the pipeline, if one isn't to create bit rot through trial-and-error attempts to recover the original signal. And beyond that is all the creativity that wants to go on in post. Creatives insist on having an aversion to 1:1 mappings between an original image and the result on the screen. And that just means capturing the highest possible range with the highest number of bits, i.e. preferably more than what a screen/projector can actually manage, to give some room for that insistent creativity.

 

C

Edited by Carl Looper

...The image itself isn't made up of whole discrete stops; the luminance varies in intensity continuously.

 

Using this thought of David's as a stepping-off point...

 

Isn't luminance varying in discrete steps or quanta? Isn't this what the quantum physicists want to tell us.

 

Rather than think of a photon in such simple terms as just something that a pixel could count... let's accept that a photon is something that is not well enough understood. A sensor may notice... photon impact events. Perhaps it can count those. But what's happening in that event?

 

As an analogy, let's consider all those people crossing the sidewalk, with a sensor counting them. Let's ignore everything about them, except that their foot fell within range. Perhaps, ecstatic that we have understood something about the distribution of human traffic, we will be unable to see something else about the people themselves. Even on a simple physical level there is room to discern more.

 

To think that a thing, anything, a photon.. has no more to offer than the simple fact of its existence, to be counted....

 

Is there anyone else on the forum who finds this grating on a raw nerve...


  • Premium Member
A display which has 8 bits driving it can still be linear but drive any range of light output it likes.

 

In terms of absolute light output compared to some norm, yes, but it can still only have a 256:1 dynamic range, assuming all 256 steps are photometrically the same size. That's the point at issue and I'd have thought it should be absolutely axiomatic. Again, this is almost never the case in reality, and certainly not in the case of your computer monitor.

 

 

Isn't luminance varying in discrete steps or quanta? Isn't this what the quantum physicists want to tell us.

 

Yes, which is why sensors have shot noise.

 

And yes, quantum physics is confusing. For most of our purposes it's adequate to think of a photon as a little ball of energy that can be counted, though.

 

P


I'm quite happy to have revived this topic and to see people having this debate!

You guys have quite a lot to say about it, which is great (though a bit confusing).

 

Do we all agree on these four kinds of response curves for most professional cameras?

 

1A and 2A have something linear (like lux) on the x-axis, so basically that's how the camera and the computer see things:

 

1A/ y-axis = values coded by the ADC, before any processing.

For the Alexa example, that's how the 16-bit RAW data would look (if it were recorded).

RTEmagicC_91c2b0f812.jpg

 

2A/ The curve after LogC encoding (either 12-bit RAW, or 10-bit or 12-bit ProRes).

It's basically the values of 1A after going through a mathematical function involving their log.

 

graph2.jpg

 

 

NOW, 1B and 2B are the exact same curves, only we switch to a log scale on the x-axis (e.g. stops of light hitting the sensor).

That's because the log scale is what we're used to (as humans).

 

1B/

graph.jpg

 

 

2B/

arri_log_signal.jpg
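For anyone who wants to reproduce those four views, here's a rough matplotlib sketch; the LogC-style coefficients are the rounded ones quoted earlier in the thread, purely for illustration:

```python
import numpy as np
import matplotlib.pyplot as plt

stops = np.linspace(-12, 0, 200)                 # hypothetical 12-stop range up to clip
linear = 2.0 ** stops                            # linear sensor response (normalized)
log_encoded = 0.25 * np.log10(5.56 * linear + 0.05) + 0.39   # rounded LogC-style curve

fig, ax = plt.subplots(2, 2, figsize=(9, 6))
ax[0, 0].plot(linear, linear);      ax[0, 0].set_title("1A: linear data, linear x-axis")
ax[0, 1].plot(linear, log_encoded); ax[0, 1].set_title("2A: log-encoded data, linear x-axis")
ax[1, 0].plot(stops, linear);       ax[1, 0].set_title("1B: linear data, x-axis in stops")
ax[1, 1].plot(stops, log_encoded);  ax[1, 1].set_title("2B: log-encoded data, x-axis in stops")
plt.tight_layout()
plt.show()
```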

Edited by Tom Yanowitz

 

In terms of absolute light output compared to some norm, yes, but it can still only have a 256:1 dynamic range, assuming all 256 steps are photometrically the same size. That's the point at issue and I'd have thought it should be absolutely axiomatic. Again, this is almost never the case in reality, and certainly not in the case of your computer monitor.

 

I don't follow this at all.

 

A 1 bit image can drive my computer screen without any issue. It displays a black pixel (measuring about 0.8 EV) and a white pixel (measuring about 10.3 EV) between which there's a difference of 9.5 stops.

 

And reciprocally, a camera that can see 9.5 stops could encode that range using one bit per pixel.

 

And we can do the reverse. A camera that can only see a range of 1 stop could encode that 1 stop range using 10 bit numbers (0 to 1023). A screen capable of emitting only a 1 stop range could be driven by a 10 bit image.

 

However, there will be limits to the number of bits we'd actually want to use in any setup, and it's all to do with efficiency (cost/return analysis). One limit is noise: we don't want to allocate more bits if those extra bits just end up representing noise, as that would be a waste of memory. Another is the number of photons a system can detect: there's no point being able to numerically represent, say, 1024 photon detections if our system can only detect 512 photons. A third is what we can visually distinguish with our eyes: we don't want to store values for pixels between which we can't see any difference.

 

C.

Edited by Carl Looper

  • Premium Member

I think what Carl's trying to say is that the white of a one-bit display could be any number of photographic f-stops brighter than the black relative to some independent measure such as a scene's general exposure, which it could. What you can't have is 256 shades of grey distributed equally in linear light which have more than eight stops of dynamic range (or more properly, contrast ratio). This is a somewhat tricky concept and we're having to resort to increasingly synthetic and unusual situations to describe it, and I should reinforce that this is not stuff anyone needs to know to be a cameraman. Anyway.

 

If the "darkest black" is 1 photon, 2 photons is a stop brighter.

 

If the "darkest black" is 1000 photons, 2000 photons is also a stop brighter.

 

The absolute light levels can change, but the range is still n stops. The difference between f5.6 and f8 is a much larger amount of photons than the difference between f2.8 and f4.

 

To wit, in fantasy numbers:

 
Photons:    Stops:
1           0
2           1
3            
4           2
5
6
7
8           3
9
...
256         8      
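The same idea in a couple of lines of Python (illustrative only):

```python
import math

def stops_between(darker, brighter):
    """Photographic stops between two photon counts: the ratio on a log2 scale."""
    return math.log2(brighter / darker)

print(stops_between(1, 2))        # 1.0 stop
print(stops_between(1000, 2000))  # also 1.0 stop, despite far more photons being involved
print(stops_between(1, 256))      # 8.0 stops across a 256:1 range
```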
 

P


  • Premium Member

The problem, as someone has already stated, is the initial assumption that bit-depth determines the dynamic range. It does not. Bit depth determines the precision or granularity of the light values being recorded, not their overall range.

 

The dynamic range is determined by properties of the photosites of the sensor. As the OP stated, the ADC quantizes the voltage output of those photosites into a digital value.

 

Here's an example. Let's say you have two cameras: Camera 1 with four stops of dynamic range and Camera 2 with ten stops. The lower dynamic range of Camera 1 means that at a certain point it can no longer distinguish between darker and darker values - everything goes to black. At the other end, lighter and lighter values above a certain point all go to white. Assuming both cameras have photosites that output a voltage between 0 and 1 volt, and both use an 8-bit ADC, I've attached a table that shows what values from both cameras might look like. These are made-up numbers, just to illustrate the point.

 

Log curves in cameras are used to extend the dynamic range that can be recorded - not to account for the ADC conversion, but because the eye is less sensitive to changes in brighter values than in darker ones. Therefore cameras don't need to record changes in light values as finely in the brighter areas as they do in the darker.
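A rough numerical sketch of that trade-off, with made-up numbers: a 10-stop scene quantized to 8 bits, once linearly and once through a simple stop-based log curve.

```python
import numpy as np

stops = np.linspace(-10, 0, 11)            # scene values one stop apart, clipping at 0
linear = 2.0 ** stops                      # normalized linear light

lin_codes = np.round(linear * 255).astype(int)             # straight linear 8-bit coding
log_codes = np.round((stops + 10) / 10 * 255).astype(int)  # stop-based (log) 8-bit coding

print(lin_codes)   # deep shadow stops collapse onto the same few codes
print(log_codes)   # every stop gets a similar share of the 256 codes
```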

 

 

post-3524-0-88559500-1441573810_thumb.png


I think what Carl's trying to say is that the white of a one-bit display could be any number of photographic f-stops brighter than the black relative to some independent measure such as a scene's general exposure, which it could.

 

Yes, this is exactly what I'm saying. But the only part I don't follow is this:

 

What you can't have is 256 shades of grey distributed equally in linear light which have more than eight stops of dynamic range (or more properly, contrast ratio).

 

Why not? Is it a physics/engineering problem? I can understand it being a perceptual desire that they not be distributed evenly, but it's certainly not a mathematical problem.

 

For example, given a screen displaying a 9.5-stop range (or contrast ratio), I can select a distribution of grey levels across that range, each of which is 0.037109375 stops brighter than the previous, yielding an even distribution of 256 grey levels between the darkest pixel and the brightest. And to each grey level I can assign a number (from 0 to 255), and use that number to switch on the desired grey.
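Just to check the arithmetic (a two-line sketch):

```python
print(9.5 / 256)          # 0.037109375 stops per grey level
print(2 ** (9.5 / 256))   # each level is ~1.026x brighter than the previous, in linear light
```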

 

C


  • Premium Member

 

 

I can select a distribution of grey levels across that range, each of which is 0.037109375 stops brighter than the previous, yielding an even distribution of 256 grey levels

 

That's the misunderstanding.

 

If each step is a given number of f-stops brighter than the previous, then it is not an even distribution. It's not a linear ramp (it's a logarithmic ramp). If each step is a given number of counted photons brighter than the previous, then it's a linear ramp.

 

P


 

That's the misunderstanding.

 

If each step is a given number of f-stops brighter than the previous, then it is not an even distribution. It's not a linear ramp (it's a logarithmic ramp). If each step is a given number of counted photons brighter than the previous, then it's a linear ramp.

 

P

 

Ok, we're back to this question of what is meant by "linear". If by linear we mean the difference between 1 photon and 2 photons is the same as the difference between 1000 photons and 1001 photons, then I understand what you mean.

 

But what technology counts photons, or drives photon counts, using such a counting method? None, because it would be massively (ginormously, humongously) inefficient. The use of the term "linear" in this way is next to useless.

 

Rather, by "linear" is meant that the variation in the exponent is linear: 2^1, 2^2, 2^3, 2^4 ... or non-linear, as the case may be, in a "curved" variation of the exponent.

 

C

Edited by Carl Looper

I understand the confusion.

 

Properly speaking a linear signal would be one in which each increase in value would be x photons greater than the previous: 100 photons, 200 photons, 300 photons, 400 photons ...

 

If we introduce a log scale to count photons (such as stops), it just means we're counting the exponent rather than the number of photons. But once we've elected to count photons this way (counting the exponent), we can choose to do so linearly or non-linearly. If the exponent count proceeds 1, 2, 3, 4 ... or 100, 200, 300, 400 ... we would call both of these sequences linear. The corresponding photon count is, of course, non-linear.

 

Linear variation of the exponent (E):

 

E  COUNT
0  1
1  2
2  4
3  8
4  16
5  32

Equally linear variation of the exponent (E):

E  COUNT
0  1
2  4
4  16
6  64
8  256
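Both sequences in code, to make the point concrete (a trivial sketch):

```python
# Exponent counted in steps of 1 -> photon counts 1, 2, 4, 8, 16, 32
for e in range(0, 6, 1):
    print(e, 2 ** e)

# Exponent counted in steps of 2 -> photon counts 1, 4, 16, 64, 256
for e in range(0, 10, 2):
    print(e, 2 ** e)
```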

 

We use this way of counting photons because it's the simplest way of characterising how we otherwise see light with our eyes. We see equal changes in brightness in terms of equal changes in the exponent (rather than equal changes in the photon count).

 

If we introduce a curve into the exponent variation, it's for reasons, it would seem, to do either with physical engineering, or with the fact that we don't quite see equal changes in brightness as equal changes in the exponent (due to the influence of surrounding light, be it dark or bright).

 

But facilitating this new idea (counting the exponent in a non-linear way) doesn't mean abandoning the log space (stops to density, or stops to bits) that we set up in the first place. We just introduce a non-linear mapping (of the exponents) between the two log scales we've already adopted - mainly because the log scale is mathematically the simplest representation of the way our eyes see variation in brightness, the rest being fine-tuning on top of that representation to accommodate viewing conditions.

 

C

Edited by Carl Looper

If we look at this situation more closely, we'll find that the number systems we employ today are a nice fit for the way we perceive changes in light. If the changes in light are logarithmic (with respect to the photon count), the number of digits we use is also logarithmic with respect to the number being represented by those digits.

 

What does this mean? Well, to get a good idea, let's look at the earliest number (or counting) system, one still employed today. For each count a separate mark is made. Think of a prisoner in a jail cell counting the days:

 

Modern   Ancient
1        I
2        II
3        III
4        IIII
5        IIIII
6        IIIII I
7        IIIII II
8        IIIII III
9        IIIII IIII
10       IIIII IIIII

 

The ancient system is inefficient, since the number of marks increases linearly with the number being represented. If you were to get to a count of 1000, for example, you'd have 1000 marks on your clay tablet or rock wall, and to read it back you would have to count the marks. I've skipped a step in the above - using the mark for 5 to cross out the marks made for 1 to 4. Such crossing out is a step towards increasing the readability and efficiency of the marks with respect to the number they represent.

 

The ancient Romans adapted this primitive linear system so that the number of marks made increases at a rate slower than the count:

 

1   I
2   II
3   III
4   IV
5   V
6   VI
7   VII
8   VIII
9   IX
10  X

 

Eventually the Arabic number system becomes popular and takes over the world:

 

1, 2, 3, 4, 5, 6, 7, 8, 9, 10

 

The number of marks made (the number of digits) is now far fewer than the number otherwise being expressed by those digits. What has emerged is a non-linear mapping between the number being expressed and the number of digits used to express it. The written mark on paper is far more compact than the count. The number of digits is effectively a log-scale version of the original number. The base in this case is 10; in the case of computer systems (with on/off switches), the base is 2:

 

0   0000
1   0001
2   0010
3   0011
4   0100
5   0101
6   0110
7   0111
8   1000
9   1001
10  1010

 

So instead of representing, say, 1023 with 1023 marks, we can now express it in base 10 with 4 digits ("1023"), or in base 2 with 10 digits ("1111111111").
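The same comparison in a line or two of Python:

```python
n = 1023
print(len(str(n)))      # 4 digits in base 10
print(len(bin(n)) - 2)  # 10 digits in base 2 (bin() prefixes the string with '0b')
```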

 

The fact that our way of representing numbers is log-based (with respect to what is being counted), and the way we perceive equal changes in light is also log-based (with respect to photon counts), means there is no loss of efficiency, or indeed accuracy, in mapping photon counts to the number systems we use. The number of digits (be it base 10 or base 2) increases linearly with what we see as equal changes in the light. And it's the number of digits (the number of binary digits, or bits) that matters in a computer system, because that number determines how much memory is being used up (how much clay in the clay tablet is being used up).

 

C

Edited by Carl Looper
