
Let's talk about linear to log, A-to-D in digital cameras


Charles Zuzak


  • Premium Member

 

It will be the evolution of display technology, rather than capture technology, which determines the choice of target bit depth, and hence how many bits one might want to use in the ADC. The DR of the sensor is not in any way shaped by this. Ideally one wants as much DR as possible, regardless of the number of bits used to digitise it or of the eventual display. And conversely one will want as many bits as possible to digitise that DR, if only to have room to retone it for whatever bit-depth display we want to show the result on.

 

 

[Attached image: BitEncoding2.jpg]

 

Well, I actually prefer some limit to DR. At least at this point in my career. Given a few locations I've worked in, it's very nice that parts of them roll off into black :-)



Perhaps more important than DR, and the number of bits, is really how you distribute the grey tones across a given range.

 

The following shows two such mappings.

 

In the top distribution I've selected those pixel values that emit twice the number of photons as the previous one (at least as measured on my screen with a light meter). In other words, each step is one stop. It is the simplest approximation of how we perceive equal changes in light. It is a non-linear function where photon count = 2^EV (multiplied by some constant). The linear part is that each step increases the exponent (EV) in a linear fashion, i.e. by 1. But with respect to photon count the entire function is non-linear (because it involves a power expression).

 

In the bottom distribution is how sRGB screens map pixel values to emitted light. Here each pixel value used is 32 greater than the previous one. With respect to photon count it is a more complex non-linear function. With respect to an EV metric, the EV does not increase linearly: at the darker end it increases faster than at the brighter end (where it slows down).

 

But with respect to photon count, both mappings are non-linear.
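For anyone who wants to put numbers on this, here's a minimal Python sketch (mine, not from the original post; the sRGB formula is the standard piecewise one):

```python
import math

# The standard sRGB transfer function (8-bit code value -> linear light).
def srgb_to_linear(code):
    v = code / 255.0
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

# Top distribution: one stop per step, light = k * 2**EV, so each
# step adds exactly +1.00 EV by construction.

# Bottom distribution: code values spaced 32 apart. Print the EV of
# each step relative to the first:
base = srgb_to_linear(32)
for c in range(32, 256, 32):               # 32, 64, ..., 224
    print(f"code {c:3d}: {math.log2(srgb_to_linear(c) / base):+5.2f} EV")

# The EV gaps between successive codes shrink toward the bright end:
# fast EV growth in the darks, slowing in the highlights, as above.
```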

 

If I understand Phil correctly, the top one should look like an even distribution of grey tones if viewed in the dark (such as in a cinema), whereas the bottom one should look like a more even distribution of grey tones if viewed on a computer screen in a well-lit room.

 

 

[Attached image: EVsRGB.jpg]


Ah ok. Don't know why that's the case. I can see them plain as day.

 

So if you click on this link, does it work:

 

http://members.iinet.net.au/~carllooper/filmout/EVsRGB.jpg

 

 

If so, then these are the backlog:

 

http://members.iinet.net.au/~carllooper/filmout/ComputerScreen2.jpg

 

http://members.iinet.net.au/~carllooper/filmout/BitEncoding.jpg

 

http://members.iinet.net.au/~carllooper/filmout/BitEncoding2.jpg

 

 

C


Ok, not sure why links are not working. They work for me ???

 

Anyway, here's a plain-text link which you can copy into a web browser; I've consolidated two posts (and 3 images) into a web page:

 

http://members.iinet.net.au/~carllooper/filmout/Images.html

 

 

And here's the same link as marked up in the cinematography.com editor:

 

http://members.iinet.net.au/~carllooper/filmout/Images.html

 

 

C


Perhaps it's the tilde character: try this (where I've replaced the tilde character with %7E, its percent-encoding). I'm on a PC where my previous links otherwise work in IE, Chrome, Firefox.

The file sizes are very small - just using light jpegs.

 

http://members.iinet.net.au/%7Ecarllooper/filmout/EVsRGB.jpg

 

And another attempt at an inline image (using the %7E tilde replacement in the URL):

 

[Attached image: EVsRGB.jpg]

 

 

Could be some network traffic problem. I'm in Australia and the images are on my server here, and only just uploaded, so if there's some sort of traffic congestion somewhere the images might be stuck behind it. Perhaps tomorrow they'll all appear in this thread.

 

C


Any joy on images? Are previous links still unavailable?

 

My server provider is trying to argue it's due to a firewall on the client side, but what firewall would deny even a single jpeg (especially if it otherwise allows jpegs from elsewhere)? So I've tasked them with finding the real reason rather than some made-up one.

 

Perhaps it's some secret nationwide firewall being tested in the States - delaying some connections until some criterion is met (just to get into a conspiracy theory).

 

C


  • Premium Member


 

Yes, it's working now.

 

... they know you're on to them.


Ah okay - so it could be some sort of network congestion thing going on, where certain sites (such as mine) just end up at the back of the queue, awaiting overnight delivery to some network cache closer to various destinations around the world.

 

It'll be all those people watching Netflix, YouTube, Vimeo, etc. :)

 

C


How do I upload images to the cinematography.com "My Media" library?

 

I've always been uploading images to my server first and then cross-linking to them, but perhaps if I uploaded directly to cinematography.com it would work more reliably.

 

However I can't work out how to use the "My Media" library. When I click on it, a window comes up saying:

 

Your Media Library
Media you have uploaded to the community is available here for sharing.
Content you share will be visible by all members who can view this topic.
Select a media type to choose what to share.

 

Which is all well and fine, but how do I upload in the first place? I can't, for the life of me, see where or how to do it. There's not even a button or drop-down list on this window with which to answer the last line of the above.

 

 

I'll try a copy/paste operation ...

 

 

Is there an image visible below? (There is to me.) But as far as I can tell this is just another html link, no different from the previous links I've posted:

 

[Attached image: EVsRGB.jpg]

 

Carl


Ok, I've discovered how to attach images - the "More Reply Options" in the bottom right corner of the editor.

 

And against each attachment is the option to "Add to Post" which I'll now do here, and see what happens when published ...

 

[Attached images: post-48441-0-32317800-1441763961_thumb.jpg, post-48441-0-54238300-1441764016_thumb.jpg]

Ok, that's good. The images are now on the cinematography server (rather than my server) so presumably you can now see them, and I'll use this method from now on.

C

Back to the discussion.

 

There is some correlation between DR and bits. With a greater DR there will be less contrast across a span of pixels - and therefore the more finely you'd want to divide up that span in order to avoid banding (or staircasing). As you decrease the DR, the contrast between adjacent pixels becomes higher, and you might get away with fewer bits.

 

In the following we can see that a 2-bit display renders a 50% DR scene better than it does a 100% DR scene.

 

But this observable relationship between DR and bits isn't characterisable in any consistent way.

 

The only rational way to characterise a minimum number of bits is to do so with respect to pixel count. The DR can be whatever it likes. So, for example, a 256 x 256 image needs at least 256 grey tones (8 bits) if it wants to avoid the worst-case scenario, in which the scene requires a gradient change of just 1 pixel value per pixel across the width (or height) of the image. Any fewer than 8 bits and it will begin to band (in that worst case).

 

So a 512 x 512 image needs at least 512 grey tones (9 bits) in order to avoid the same worst-case scenario.

And by the same logic a 4K image would need at least 12 bits.
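That worst case is easy to reproduce (a sketch of my own, not from the thread): build a ramp that changes by one grey tone per pixel, then requantise it to fewer bits and count the surviving tones:

```python
import numpy as np

width = 256
ramp = np.arange(width) / (width - 1.0)    # worst case: 1 tone per pixel

def quantise(signal, bits):
    levels = 2 ** bits
    return np.round(signal * (levels - 1)) / (levels - 1)

# At 8 bits the 256-pixel ramp keeps one distinct tone per pixel.
print(len(np.unique(quantise(ramp, 8))))   # 256 tones: no banding
# At 6 bits only 64 tones remain: each becomes a 4-pixel-wide band.
print(len(np.unique(quantise(ramp, 6))))   # 64 tones: visible bands
```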

 

This is regardless of the acquired image (be it from a sensor or computer generated). It is, as we might say, axiomatic.

 

But as demonstrated in previous posts, banding is also alleviated by dithering (or, in film, by grain), or just by natural variation in scene detail (as most images will exhibit), and so the target minimum can be made smaller than that indicated by the pixel dimensions of the image.
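A minimal sketch of that idea (my own, using plain uniform noise as the dither; film grain or natural scene variation plays the same role):

```python
import numpy as np

rng = np.random.default_rng(0)

def quantise_with_dither(signal, bits):
    levels = 2 ** bits
    # Add up to half a quantisation step of random noise before
    # rounding, so the average local tone survives quantisation.
    noise = rng.uniform(-0.5, 0.5, signal.shape)
    codes = np.clip(np.round(signal * (levels - 1) + noise), 0, levels - 1)
    return codes / (levels - 1)

ramp = np.linspace(0.0, 1.0, 256)
banded = np.round(ramp * 3) / 3            # plain 2-bit: 4 hard bands
dithered = quantise_with_dither(ramp, 2)   # 2-bit, banding broken up
```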

 

C

[Attached image: post-48441-0-52121500-1441768474_thumb.jpg]


And as mentioned, if we introduce dithering (during ADC or afterwards, but prior to downstream encoding), or are otherwise transferring film (where the grain acts as a natural dither), our banding on a low-bit display is considerably alleviated.

 

Of course we wouldn't normally use a 2-bit display, but by using one here we can see the principle more clearly. The principle remains the same as we increase the bit depth: a better sense of tonal variation.

 

Grain/dither is our friend. Not our enemy.

 

[Attached image: post-48441-0-70086800-1441773516_thumb.jpg]


  • Premium Member

That's really nice stuff, Carl. It really shows the core issue of bit depth and dynamic range: low bit depth doesn't stop the dynamic range from being expressed, but it does limit the fineness of the steps between black and white.

 

Your two left-side images show that a bit depth of 2 didn't stop the higher dynamic range "camera" from finding white and gray more accurately (I'm especially looking at the white on the bridge of the nose with the lower dynamic range image). Once in post with our friend dither, the image from the higher dynamic range camera is able to retrieve more detail in the background. Though I kind of like how the lower DR image is blown out.

 

Carl, do you know how many stops of dynamic range are in the original image? And are you able to add the chroma back into the dithered images? I'm just curious how they look in color.



 

The image is just one I found on the net (apologies to whoever it belongs to):

 

ihaddadphotography.files.wordpress.com/2011/03/face9.jpg

 

So I don't know the dynamic range of the original capture. It could have been a very bright scene or shot on a very overcast day. It ultimately doesn't matter, because the ADC will just choose some upper and lower bound on the signal and divide it up, whatever the range of the original scene, or whatever the sensor was able to see. So in principle the original sensor signal could represent any number of stops extracted from the original scene, e.g. 18 stops (if such a sensor existed), and the corresponding ADC will just divide that entire signal up into however many divisions it likes. There's no reason it needs to use 18 bits - it could use 12 bits, or 8 bits, or 2 bits - to capture the entire signal. The bit depth is best determined by downstream standards rather than by what the sensor can see of the world at large.
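As a toy model of that point (my own sketch; a real ADC quantises an analog voltage, but the arithmetic is the same), the bounds chosen for the signal, not the number of stops it spans, decide how the codes are used:

```python
import numpy as np

def adc(signal, black, white, bits):
    # Normalise the signal between the chosen bounds, then divide
    # the result into 2**bits code values.
    norm = np.clip((signal - black) / (white - black), 0.0, 1.0)
    return np.round(norm * (2 ** bits - 1)).astype(int)

# A hypothetical 18-stop linear signal (relative exposure 1 .. 2**18)
# digitised at various bit depths: the full signal always fits.
signal = 2.0 ** np.linspace(0.0, 18.0, 1000)
for bits in (18, 12, 8, 2):
    codes = adc(signal, signal.min(), signal.max(), bits)
    print(bits, "bits ->", codes.min(), "..", codes.max())
```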

 

The greater the DR you have in capture, the greater the flexibility you have in post to readjust it to whatever lower DR is appropriate for the work. I like the lower DR image as well, but starting with a higher DR during capture just means one has more to play with in post (if that is one's thing).

 

C


This thread appears exhausted, but there's a correction to be made to what was previously implied.

 

Given a digital image that is 256 x 256, I said the least number of bits you'd need is 8 (256 divisions of the light) in order to avoid banding across the width of the image. And while this remains true, it was also implied that this is the most you would need, and it is this implication which is not true, for we can always imagine (or desire) even finer gradations of light across a digital image. Between one edge of the frame and its opposite side we might want a gradation of only one pixel value out of the 256 available - and this would cause a 128 pixel wide band.

 

And while the solution might be to increase the bits to allow for such a subtle gradation, we are once again back to: how many bits is enough? To which the answer would be that there is no answer. For every number of bits we nominate, we can imagine ever finer gradations of light we might otherwise want to render.

 

The answer, as previously provided (though not specifically in response to this particular clarification), is: dithering. It is what occurs "naturally" in film through the grain. For example, the following represents a gradient of just one pixel value across a 256 pixel wide image, but instead of the 128 pixel wide band that would otherwise result, the sought-after finer gradation is achieved by dithering. On the far left side of the image the pixel value is 128, and on the far right it is 127. Between these there are no other pixel values to draw on, yet there is a 256 pixel wide space in which to render the desired variation. It is dithering that allows us to render such a fine gradation without increasing the bit count:

 

[Attached image: post-48441-0-69630200-1441939548_thumb.jpg]
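One way to generate such an image (a sketch of my own; any dither pattern would do) is to make each pixel either 127 or 128, with the probability of 128 falling linearly from left to right so that the local average follows the sub-code-value gradient:

```python
import numpy as np

rng = np.random.default_rng(0)
width, height = 256, 64

# Target: a ramp from 128.0 down to 127.0 across 256 pixels, i.e.
# a gradation finer than one code value per column.
target = np.linspace(128.0, 127.0, width)

# Each pixel rounds up or down at random, weighted so the local
# average matches the target. Only values 127 and 128 ever appear.
p_up = target - 127.0                      # probability of picking 128
image = (127 + (rng.random((height, width)) < p_up)).astype(np.uint8)
```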

 

 

Below is what it would otherwise look like if we didn't employ dithering - one may need keen eyes to see the difference in this particular example, but it's there (and in situations involving colour it can become a lot more noticeable):

 

[Attached image: post-48441-0-59439600-1441939836_thumb.jpg]

 

 

Here are exactly the same two images, but with the contrast turned right up, so that we can see what is otherwise being rendered below immediate consciousness:

 

[Attached image: post-48441-0-43933900-1441940363_thumb.jpg]

 

C


Great discussion. Just read it all.

 

It seems there's been a lot of confusion between what the sensor "sees" and what it "shows", regarding bit depth, but many seem to have got it by now :)

 

However, it seems the original purpose of the post has faded away, which was the limited bit depth of the darkest values pre-log conversion.

 

If the difference between the darkest stop and the next darkest stop is just one value/bit at capture, it doesn't matter if the ADC converts that difference to, let's say, 100 values; there are still not going to be any usable values in between, because a linear sensor doesn't seem to be able to distinguish 100 shades between those two darkest stops.
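A quick sketch of the arithmetic behind this (my own illustration, assuming an ideal noise-free linear sensor digitising 10 stops into 10 bits):

```python
# Ideal linear 10-bit ADC spanning 10 stops: the brightest stop gets
# half of all codes, and each darker stop gets half as many again.
bits, stops = 10, 10
for stop in range(1, stops + 1):           # stop 1 = brightest
    lo = 2 ** (bits - stop)
    hi = 2 ** (bits - stop + 1) - 1
    print(f"stop {stop:2d}: codes {lo:4d}-{hi:4d} ({hi - lo + 1} values)")

# stop  1: codes  512-1023 (512 values)
# ...
# stop 10: codes    1-   1 (1 value) <- a single code for the darkest stop
```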

 

And after converting those linear values to log in order to ~evenly~ distribute those stops into an 8/10/12 bit image, those two darkest stops will still be easily distinguishable by us, without any usable information in between.

 

[Attached image: post-68223-0-66804900-1442232059_thumb.jpg]

The example above is a depiction of the two darkest stops after ADC log2 conversion, for an image captured with a camera able to see 10 stops, with a 10 bit sensor, converted to an 8 bit image. (I just put a black square next to a 10% bright square in an 8 bit image.)

 

The difference between those two shades is clear, and without dithering there might be clear banding in the darkest tones of the image.

 

But this doesn't seem to happen in reality, as it would be too noticeable, right? Maybe it's because of contrast curves, or because sensors don't use the lowest range they capture, maybe because of noise, or most likely because I'm misunderstanding something about the sensor's capture process.

