
Let's talk about linear to log, A-to-D in digital cameras


Charles Zuzak



Hi Carl,

 

Forgive me for butting into the conversation you are having with Phil, but isn't a digital camera's sensor a photon counter? From my point of view, that's the basic premise of this thread... that this linear photon counter needs a log curve applied to properly mimic how our eyes operate.

 

A camera's sensor responds linearly to the light striking it (its output is directly proportional to the number of photons striking it). So the value output by a sensor increases by the same amount going from 1 to 2 photons as it does going from 1000 to 1001 photons. I hope we all understand that this is an example; I doubt any sensor can actually perceive 1 photon or a difference of 1 photon.

 

In the days when the dynamic range of sensors was limited, just reporting that proportion was good enough. But as DR has increased and the number of bits used to report the light values has grown by only a few, it's become more important to apply a log function for images to "look right".

 

Is that your understanding or am I missing your point to Phil?



Physically, photon detections are being transformed into a flow of electrons, so the number of electrons flowing will be proportional to the number of photon detections occurring. We might call this counting: the electrons are counting the photons, and a linear equation can represent the mapping between photons and electrons. The conversion of electrons to binary digits (through the ADC) is also counting, but we'd describe the mapping between electrons and bits using a non-linear equation. When decoding these bits back into a number (into a photo-electron count) we'd use the reciprocal non-linear equation, e.g. bits = 1010 = 1 * 2^3 + 0 * 2^2 + 1 * 2^1 + 0 * 2^0 = 8 + 2 = 10.

 

The digital signal (as distinct from the electron signal) begins life having already natively encoded the light logarithmically.

 

C

Edited by Carl Looper

Carl,

 

I don't seem to be able to get the quote function working, so I'm just going to address the point directly. The relationship between photons and electrons is linear, just as you say, but I disagree with your assertion that the mapping between electrons and bits is a non-linear equation. It isn't. It's perfectly linear. You are correct that binary is a log-based representation, but that log only occurs when I type 1011 to represent 11, which represents eleven ticks on the prison wall.

 

The log function of a camera changes the spacing between lower counts of electrons and higher counts of electrons. Both linear and non-linear functions are represented in binary form in digital storage, but with log the linear spacing is remapped using a function. So assuming my ADC counts 1 electron as 1: if I didn't use a log function, 1 electron would be 00000001 and 127 electrons would be 01111111. Using a log function, 1 electron is represented as 00000001 and 127 electrons is represented as 10011001 (the number 153). The same binary system is used to store both linear and log, but in log the spacing is compressed as you move up in value.

 

If we had all the bits in the world to work with, it would work just fine to keep the linear relationship between electrons and storage, but real-world limitations -- like the bandwidth of cables and the write speed of disks -- mean we create better images by mimicking how our eye/brain works, which is a non-linear relationship. It's not the binary that's encoding the non-linear relationship; it's an actual mathematical translation that's doing it. That's what the Alexa curves that Tom provided yesterday show.
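To make the difference concrete, here is a minimal sketch (mine, not from the thread, and not any camera's actual curve) contrasting a linear mapping and a log mapping of electron counts into 8-bit code values. The full-well value and scale constants are arbitrary illustrations.

import math

FULL_WELL = 255          # assumed clipping point in electrons, purely for illustration
MAX_CODE = 2 ** 8 - 1    # 8-bit output: codes 0..255

def encode_linear(electrons):
    # code value grows in direct proportion to the electron count
    return round(MAX_CODE * electrons / FULL_WELL)

def encode_log(electrons):
    # code value grows with the logarithm of the electron count,
    # so each doubling of the light gets roughly the same number of codes
    if electrons < 1:
        return 0
    return round(MAX_CODE * math.log2(electrons) / math.log2(FULL_WELL))

for e in (1, 2, 4, 16, 64, 127, 255):
    print(e, encode_linear(e), encode_log(e))

The log mapping compresses the spacing between the higher counts, exactly as described above.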

 


Limits on dynamic range are just engineering constraints on an otherwise ideal photon-to-electron count. There will be electron friction producing noise, and some limit on how many electrons can flow - the photons keep bombarding but the electrons refuse to increase their number in proportion to that bombardment. But it won't be a sudden stop to counting; it will trail off, so to speak, so I assume a good ADC, knowing the characteristics of a sensor, could very well make some corrections for the deviation from strict proportionality. But you wouldn't find this in published curves, as it's merely an internal, chip-specific mapping. The digital signal coming out the other end is what matters, and the ADC might very well tailor that to a non-linear mapping between stops and bits. For example, there's no reason a camera couldn't be following an sRGB curve, or any other curve. But as a base standard it would want to encode without such curves - i.e. where there is only a difference of pure proportionality (a linear relationship) between stops and bits.

 

C

Edited by Carl Looper

What I don't know is whether it's even possible to natively (physically) engineer a linear relation between stops and bits during ADC. The convention prior to the digital age was that a log scale (be it stops, EV, etc.) represents precisely how we see differences in light. Phil has suggested (and I have no reason to disagree) that this is not strictly true - due to lighting conditions - so the reason the sRGB curve (in a map between stops and bits) is not a straight line is because in a well-lit environment a strictly log-scale representation of light does not quite hold. There is effectively another non-linear mapping superimposed on top of the original non-linear mapping, to accommodate the deviation from traditional norms.
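For reference, the sRGB curve mentioned here is not a pure log or power curve: it is a power-law segment with a short linear toe near black. A sketch of the standard encoding formula, with input and output both normalised to 0..1:

def srgb_encode(linear):
    # 'linear' is relative luminance in [0, 1]; returns the sRGB-encoded value in [0, 1]
    if linear <= 0.0031308:
        return 12.92 * linear                      # linear toe near black
    return 1.055 * linear ** (1 / 2.4) - 0.055     # power-law segment elsewhere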

 

So this has nothing to do with a mapping between photon count and a log scale representation in traditional photometrics. It's in addition to such: to take into account lighting conditions on top of our traditional log scales (such as stops, EV, etc).

 

But underneath such could also be an engineering difficulty in natively mapping light to a log scale representation, so there could be a bit of "killing two birds with one stone" going on.

 

C

Edited by Carl Looper

Yep, it's a good point. I should not have implied that the ADC is applying the log function (or any mapping). The ADC is likely just doing a linear mapping and the log function is being applied in processing in the digital domain.

 

I just wanted to make two points in this thread: 1) bit depth does not equal dynamic range, 2) the log function of a camera is not a result of converting values to binary, it's an actual mathematical translation, and 3) log becomes necessary as DR increases and bit depth remains constant (or grows only a little), if you want a nice image. Okay -- that's three points. I was never very good at math; I'm always counting like "1, 1.4, 2, 2.8, 4...." :-)


So I agree with most of what you said in this thread... except for point 1). How could DR possibly be greater than the bit depth?

We both seem to agree on how the light information is then stored in different values.

So going back to my "1B" graph (the one that looks like a 2^x curve):

Say the sensor stores data in 10 bits.

Let's say its clipping point arrives when it receives 1023 photons.

1023 photons => mapped to the 1023 value (or 1111111111; I don't see how speaking in base 10 or base 2 changes anything)

1022 photons => value 1022

...

512 photons => value 512. One stop captured so far. OK, interesting info: every camera sensor has half of its values taken by the brightest stop of light. A stop that hardly anyone uses (except the ETTR geeks like me), fearing blown-out highlights.

256 photons => value 256. Two stops captured so far.

128 => 128 => 3 stops

64 => 64 => 4 stops

32 => 32 => 5 stops. As we progress what we might call "low light stops" are getting fewer and fewer gray scale values for themselves. One reason why underexposing is a bad idea.

16 => 16 => 6. Ok only 16 values, at this point we enter the zone of the "probably unusable stops" (unless you're going for an image with deep blacks with no details)

8 => 8 => 7 stops

4 => 4 => 8 stops. This is about where what I might call the practical DR stops end, as the number of gray scale values per stop becomes ridiculous.

2 => 2 => 9 stops (theoretical stops, not anywhere near practical stops)

1 => 1 => 10 stops

then everything will be mapped to the 0 value, and that's the end of even the theoretical DR stops.

 

That's why I maintain that DR < Bitdepth.

 

And then what a log function will do (with some tweaking) is:

1023 photons => value 1023

512 => 922

256 => 820

128 => 718

64 => 616

32 => 514

16 => 412

8 => 310

4 => 208

2 => 106

1 => 4

 

So we LOSE gray scale values in the highs (the last stop before clipping shrinks from 512 values to 102),

while the shadows are 'upscaled' (as one would upscale from 720p to 1080p).

As we can see in my 2B graph from the earlier post, Arri (and the others) don't go for the straight line (like in my example) but instead add a roll-off (I don't remember the word) to the shadows (the stops furthest away from clipping). That's probably because upscaling a stop that was captured with a ridiculous 2 or 4 gray scale values (like my 8th and 9th stops in my example) probably looks awful when upscaled to 102 values.

Plus they don't map their "1023 photons" to the max value, and that allows them different mapping possibilities (creating the ISO/EI options in camera).
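A minimal sketch (my own numbers and scaling, not Tom's exact math) of the kind of 10-bit log remapping described above, where each stop down costs roughly the same number of code values; because of rounding it lands within a code or two of the table above.

import math

MAX_PHOTONS = 1023   # assumed clipping point, as in the example above
MAX_CODE = 1023      # 10-bit output
FLOOR = 4            # assumed code value for a single photon, as in the table above

def log_encode(photons):
    if photons < 1:
        return 0
    # spread the ~10 stops evenly between FLOOR and MAX_CODE
    return round(FLOOR + (MAX_CODE - FLOOR) * math.log2(photons) / math.log2(MAX_PHOTONS))

for p in (1023, 512, 256, 128, 64, 32, 16, 8, 4, 2, 1):
    print(p, log_encode(p))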

Edited by Tom Yanowitz

The following is a graph of actual measurements I made with a light meter of the light being emitted by pixels on my computer screen. The light measurements are in EV, which is a log scale representation of the light being emitted. The number of photons will be proportional to 2 ^ EV. The grey line represents actual measurements.

 

The red line is the way I'd prefer the relationship to be between pixel values (bits) and the light emitted.

 

The red line represents a linear relationship between bits and stops (EV).

 

The grey line represents a non-linear relationship between bits and stops.

 

Neither line represents the relationship between photon count and stops.

 

 

ComputerScreen2.jpg


Isn't all of this the reason the ADC usually has a higher bit depth than the recorder uses? Once you apply gamma functions to the signal you can store more stops of DR than the bit depth, right? After all, the Alexa stores 14.5-stops of DR in 12-bit 4444 ProRes Log-C.



 

Yes, I do think that's the case. It couldn't be 14.5 stops of DR without their weird, cool "2x14bit = 16bit" sensor readout before creating 12-bit raw or ProRes files.

I'm curious to know the bit depth of these cameras that claim 17, even 20+ stops of DR.

Edited by Tom Yanowitz

One can store 14.5 stops of DR in a 1 bit file or a 100 bit file, or any other bit depth. One selects a bit depth that is neither too much information (i.e. which would otherwise encode useless noise) nor too little information - which would otherwise produce banding in the resulting signal. The latter can be alleviated through dithering.

 

The DR refers to the relationship between the sensor and the world at large. The world at large (the range of light there is) is much larger than our poor sensors can reliably accommodate in one go. We can get around this to some extent by having two or more sensors - each capturing a different range - and then cleverly compositing the data sets (editing out where the mapping veers off course towards a clipping threshold).

 

C

Edited by Carl Looper

A lot of this engineering / math stuff is beyond me, but my question is why does this perception continue to exist that bit depth = dynamic range? Is it just that the camera manufacturers are limiting their DR within the same range as the bit depth to avoid banding artifacts from gradients rather than due to some actual connection between the two?

 

In other words, can an 8-bit ADC on a sensor process and send out 12-stops of DR to be recorded downstream?


 

 

Yes.

 

An 8 bit ADC can encode 12 stops of DR, for sure.

 

By way of analogy, I could shoot a scene on film - where the film captures a range of 12 stops, for example - and then digitise that film using an 8 bit ADC.

 

I would then have an 8 bit encoding of a 12 stop scene.

 

A sensor/ADC does basically the same thing. The sensor sees a certain range (eg. 14.5 stops) and the ADC digitises that range using whatever bit depth it likes. The bit depth is chosen as a sweet spot between too much information and not enough.

 

The only relationship between DR and bit depth is that both are specified using a base 2 log scale: stops and bits.

 

A choice of bit depth will also tend to be influenced by digital conventions. For example, an initial 16 bit buffer might be chosen because it doesn't require costly bit-shift manipulations. Whether the ADC produces 16 bit data or not doesn't matter; some bits from the ADC might be zero or otherwise represent noise. An 8 bit buffer would also be convenient (but not if the ADC is producing 12 bit or 16 bit data). Once captured, however, some dedicated hardware might then repackage the data for more efficient transport, re-massaging the data in the 16 bit buffer so that it fits a 12 bit pipeline, whether some of the bits are zero padding, noise, or even valid data (!)
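As a toy illustration of that repackaging step (a sketch of mine, not any camera's firmware), 16-bit samples could be refitted to a 12-bit pipeline by dropping the four least significant bits, on the assumption that those bits carry only noise or zero padding.

def repack_16_to_12(samples):
    # keep the top 12 of each 16-bit sample; the discarded bits are assumed
    # to be noise or padding rather than useful signal
    return [s >> 4 for s in samples]

print(repack_16_to_12([0, 15, 16, 65535]))   # -> [0, 0, 1, 4095]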

 

C

Edited by Carl Looper

Hi David,

 

It's tough to answer why, but I have a thought exercise that might help. If I shoot a test chart that has tiles numbered 1 through 8 from white to black that cover 7 stops of exposure and I have a camera with a sensor that can see all 7 stops, but I encode it with 2 bits (four values: 00, 01, 10 and 11), the camera data will come over as 00, 00, 01, 01, 10, 10, 11 and 11 OR white, white, gray, gray, dark gray, dark gray, black, black. The dynamic range captured by this camera is 7 stops. It saw all 7 stops of EV, but the image would not be considered faithfully rendered.

 

Bit depth limits how well an image can be rendered, but it does not limit the dynamic range of the captured image. I think some people would say the image produced is 3 stops, therefore the bit depth has limited us to 3 stops. But 3 stops of camera DR would mean it saw tile 3 as white and tile 6 as black, or: white, white, white, gray, dark gray, black, black, black. I'm making some assumptions about the position of middle gray and the exposure settings, but I think you get the idea. I also apologize to the bit heads who know that generally 00 is black and 11 is white; I've reversed that for this example.

 

Edit from my original post: 8 tiles would actually be 7 stops of DR. I have corrected the example.
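A tiny sketch of that thought exercise (my own code, using the post's convention of 0 for white and 3 for black): eight chart tiles one stop apart, all seen by the sensor, but squeezed into the four codes a 2-bit encoding allows.

tiles = list(range(8))               # tiles 1..8 -> indices 0..7, one stop apart
codes = [t * 4 // 8 for t in tiles]  # squeeze 8 levels into the 4 available codes
print(codes)                         # -> [0, 0, 1, 1, 2, 2, 3, 3]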

Edited by Ivon Visalli

Why are DR and bit depth numbers, if not the same, nearly the same?

 

This is interesting. I don't really know. There probably is some sort of correlation (apart from both being log base 2 scales).

 

On one side, we can say bit depth can be chosen completely independently of cameras and dynamic range. We can test some guinea pig (e.g. Alex from A Clockwork Orange) to see how many bits are necessary before Alex just can't see any difference between one pixel value and the next. We might then say to ourselves, well, there's no need to store numbers with any more precision beyond, say, 12 bits, because Alex couldn't make any distinction between pixels beyond that. But is this bit depth threshold influenced by the range of light to which Alex is being subjected? Would it make a difference if the light had a range of one stop or 16 stops? Does playing Beethoven have any influence?

 

It would make a difference - not the playing of Beethoven, but the actual range of light used in the test. If I turn the contrast down on my computer screen, I wouldn't be able to make distinctions between certain pixels as easily as I otherwise could. If I turn it back up again, I might. The greater the range of light to which we subject Alex, the more bits he might demand be allocated (if banding is not to be produced).

 

But perhaps the easiest rationale to use in deciding bit depth is simply pixel count. The greater the pixel count, the more bits we need. Given a 256 x 256 image we can get away with 256 divisions of the light. But increase this to 1024 x 1024 and our divisions can become noticeable in certain images. We have to use dithering to stop banding. Photoshop, for example, has recently introduced automatic dithering on gradients - precisely to avoid banding.

 

So if and when we move to 8K images we'll need at least 13 bit encoding and/or corresponding dithering to avoid banding.

 

C

 

clockwork-horror.jpg

Edited by Carl Looper

The following is an example of a scene in which the dynamic range is the same (since they are all from the same original) but the bit depth is varied.

 

Of particular interest are the last two images: in both cases, the displayed pixels are only 4 bit pixels (1 of 16 grey levels) but the second of the two uses dithering (3% random noise added to the original) prior to being re-sampled as 4 bit pixels.
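A minimal sketch (mine, not the exact processing behind the attached image) of the same comparison: a smooth ramp quantised straight to 4 bits, and again after adding roughly 3% random noise before quantising.

import random

def quantise_4bit(value):               # value in [0, 1] -> one of 16 grey levels
    return min(15, int(value * 16))

ramp = [i / 255 for i in range(256)]
plain = [quantise_4bit(v) for v in ramp]
dithered = [quantise_4bit(min(1.0, max(0.0, v + random.uniform(-0.03, 0.03)))) for v in ramp]
# 'plain' steps through hard bands of identical codes; 'dithered' scatters the
# band edges so the ramp reads as smoother at image scale.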

 

 

BitEncoding.jpg

Edited by Carl Looper

Hey Carl, I can't see the pics.

 

I can see them. Probably just a temporary invisibility. They were uploaded to my server in Australia just moments before my post, so perhaps hadn't diffused out into the world at the same speed as the text.

 

C


 


Hi Tom,

 

Bit depth will absolutely limit the usability of images as DR increases, so yes, usable DR < bit depth. However, it doesn't limit the sensor's ability to see a wide dynamic range; it simply limits how finely you can slice the DR up. 20 stops of range can be chopped up into 1024 slices, or 3 stops of range can be chopped into 1024 slices. One image will look terrible, the other acceptable. If I read your analysis correctly, you are defining the limits of what acceptable slices might be, not range. To use your example, there's no reason why you couldn't map 40,960 photons > 1023, 17,815 photons > 512, 5,017 photons > 256, ... 20 photons > 1, but the image may not look right. In a camera with 10 bit output, they don't stop counting at 1023; they set the white point at 1023 and the black point at 0 and map the intervals in between. That DR might be 8 stops, it might be 12 stops. The bit depth defines the intervals.

 

The OP was asking (two years ago!) how high DR was being fit into limited bit depth. The answer is log curves. At some point as DR increases, even log curves won't produce acceptable images if bit depth stays fixed, so they will be forced to increase bit depth.

 

My question for the experienced cinematographers: do we really need to be chasing higher and higher DR? Are you generally using all the DR your camera now permits when you shoot? I recently worked with a cinematographer who said, "if I can't see into the shadows, I throw light in there". Isn't that our job, to produce a pleasing image within a given DR?


I just explained in my last post why DR is determined MAINLY by bit depth - that DR can in fact only be lower than the bit depth. (Capture bit depth, mind you, not storage bit depth.)

There's just no way (for a camera with a linear ADC) that you can have 14 stops of DR with a 10-bit ADC, for example, either in theory or in practice.

 

And you, Carl and Ivon, keep arguing that they aren't linked in any way.

I don't get it.

 

EDIT: I wrote this before Ivon's last post

Edited by Tom Yanowitz

For some reason I can't edit my previous post anymore.

 

OK, so your example, Ivon, is 40,960 photons > 1023 as a start. Fair enough.

So moving on:

 

40,960 photons mapped to 1023 (1024-1, max value in 10bit)

20,480 to 511 (512-1...) because, the sensor being linear, if you divide the photons by two you also divide the value by two.
10,240 to 255 (256-1...)
5,120 to 127
2,560 to 63
1,280 to 31
640 to 15
320 to 7
160 to 3
80 to 1
40 to 1
20 to 1
10 to 1
5 to 1
So it started to "clip" in the blacks at 80 photons, which is 9 stops under 40,960 photons.
So theoretical DR = 9 if bit depth = 10, no matter how many "photons" are needed to make the sensor clip in the whites.
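A sketch of that linear-ADC arithmetic (my own code; the low-end values differ from the table above by a code or so depending on how you round): 40,960 photons at clip, 10-bit output, the photon count halving each stop down.

def linear_code(photons, clip=40_960, max_code=1023):
    # straight proportional mapping from photon count to code value
    return min(max_code, photons * max_code // clip)

photons = 40_960
for stop in range(12):
    print(stop, photons, linear_code(photons))
    photons //= 2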

 


 

Hey Tom,

 

I think you got it in your last post, but to be clear... bit depth does not determine DR, but there is absolutely a link between bit depth and a good image. I don't think any of us have argued otherwise. I think Carl even said, you can produce an image from a camera with 20 stops of DR with a 1 bit depth. It's an interesting image, but not a faithful image.

 

Your example shows a log curve where each value is halved from the previous one. Try a curve that doesn't do that. Something like this:

 

40,960 > 1023

20,480 > 942

10,240 > 862

5,120 > 781

2,560 > 700

1,280 > 619

640 > 537

320 > 455

160 > 370

80 > 281

40 > 183

20 > 55

10 > 27

5 > 13

 

That's a curve that looks like this:

post-68545-0-38490700-1441654621_thumb.png


So for me your graph shows how we wish sensors would work ideally.

But didn't we settle some posts ago that any mapping like this one comes post-capture, and that the only possible graph for the actual capture is a straight line from (0,0) to the clipping point?

 

(by the way, what program do you use to draw graphs like this one? I need one :) )

Edited by Tom Yanowitz


Here is where it would be great if someone from a camera manufacturer stepped in! My degree is in electrical engineering, so I have some experience with ADC and digital processing, but not a lot. Now I'm totally speculating, so I'd love to be corrected by someone with actual knowledge of the engineering (or if someone has access to designs I'd love to review them)...

 

I think you already hit on it in your previous post. Looking at the curves you supplied earlier, there's a suggestion that for the Alexa the ADC is capturing in 16-bit and then the curve is implemented in 10- (or 12-) bit through digital processing.

 

I used Excel to generate the curve. If you have Excel, load values for X and Y in two columns, highlight the columns and select "Insert Chart". You'll be presented with different choices. You want a scatter plot (X vs. Y) with a line through it.


I suspect ADC designs will want to digitise a sensor signal using more bits than its target bit bandwidth, if only to allow some wiggle room for refitting the immediate digital signal to whatever the current downstream standard is. The downstream standard, for a very long time, was an 8 bit signal, evident in so many digital image formats - regardless of sensor design. And before that the standard was a 1 bit signal. My first computer had a 1 bit display.

 

Historically, there were digital images before there were cameras capable of creating those images. It was the graphic artists and programmers who were in control of the first digital images, and their first images were 1 bit images. Borrowing ideas from print (half-toning), they were able to obtain grey tones using dithering.

 

It will be the output device, be it a panel of blinking lights, or a computer screen, that determines how many bits per pixel are used to drive images on such.

 

The camera arrives later to manipulate the same bits using signals produced by a lens on a sensor. If the downstream display was only a 1 bit display, one might think there's no need to digitise the camera signal with more than 1 bit per pixel - however this is not true because although a display might be 1 bit, one can dither the sampling so it suggests a greyscale image on a 1 bit display.

 

The following is a depiction of such. Sampling a signal using only 1 bit per pixel would normally give us a high contrast result. But an ADC can be designed to still output 1 bit pixels while randomising, on a per-pixel basis, which part of the signal it digitises. Or, to get the same result, it could first digitise the signal at 8 bits and then do the dithering later - it doesn't make much difference.
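A minimal sketch (mine) of the two equivalent approaches described in that paragraph: quantising an 8-bit greyscale value to 1 bit against a fixed threshold, versus quantising it against a randomised per-pixel threshold so the 1-bit output still suggests intermediate greys.

import random

def one_bit_hard(value):        # value in 0..255
    return 1 if value >= 128 else 0

def one_bit_dithered(value):
    # the threshold is randomised per pixel, so a mid-grey comes out as a
    # roughly 50/50 scatter of black and white pixels rather than a hard edge
    return 1 if value >= random.randint(1, 255) else 0

ramp = list(range(0, 256, 16))
print([one_bit_hard(v) for v in ramp])
print([one_bit_dithered(v) for v in ramp])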

 

It will be the evolution of display technology rather than capture technology which determines choice of target bit depth and how many bits one might then want to use in the ADC. The DR of the sensor is not in any way shaped by this. Ideally one wants as much DR as possible, regardless of the number of bits used to digitise such or the eventual display. And conversely one will want as many bits as possible to digitise that DR, if only to have room to retone it for whatever bit display we want to show the result on.

 

 

BitEncoding2.jpg

Edited by Carl Looper
