
RED ASA question URGENT! Need answered by tomorrow at 5pm!


Kyle Shapiro


  • Premium Member
Why didn't they build the camera for 3200K instead?

 

Theoretically they could have used exactly the same silicon with two different mask dye sets, one for 3200 and the other for 5600, just like film does. But the limiting factor is the silicon's sensitivity to blue, which is quite low. So, in designing the mask set, they start with a blue primary that's not too saturated. That's where the compromise between gamut and speed has to be made.

 

Once you have a blue, you need a red and green to make a reasonable balance. The warmer the light you want to balance for, the more red and green you have to filter out. Doing a mask set for 3200 would result in a slower camera, about equivalent to just hanging a blue filter on a camera with a daylight balanced mask set on the chip. So, that's what they do.
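As a rough illustration of that trade-off, here is a toy calculation. The channel response numbers are invented for illustration only, not real sensor data: the less blue energy there is in the light you're balancing for, the more the red and green channels have to be attenuated to match it, and that attenuation is the speed you give up.

```python
# Toy model of the mask-set speed trade-off described above.
# The channel numbers are made up for illustration; real sensors differ.
import math

# Hypothetical relative channel exposures for one sensor under two illuminants:
daylight_5600K = {"R": 1.0, "G": 1.0, "B": 0.5}   # blue is the weak channel
tungsten_3200K = {"R": 1.4, "G": 1.1, "B": 0.3}   # warmer light has even less blue

def stops_given_up(channels):
    """White balance by attenuating the stronger channels down to blue's level;
    the attenuation applied to the strongest channel is the speed given up."""
    return math.log2(max(channels.values()) / channels["B"])

print(f"Balanced for 5600K: ~{stops_given_up(daylight_5600K):.1f} stops lost")
print(f"Balanced for 3200K: ~{stops_given_up(tungsten_3200K):.1f} stops lost")
```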

 

 

 

 

-- J.S.



  • Premium Member
... Normally you will have a reserve of 2 f-stops to dial back. This is exactly what you get with a DSLR and RAW vs JPEG - you can dial back 2 f-stops in the highlights in the RAW image (where the JPG would already be clipped).

 

So Sony's marketing text just underlines what I already wrote before.

Robert, I appreciate that RAW gives more dynamic range than JPEG. But Sony clearly states in its Format Guide, and even your own words indicate, that the clipping we see in high-end CCD footage occurs at a point before the CCD's photosites have overloaded. The clipping we see in digital cinema CCD footage is done by the ADC during the original quantization.

 

But you and John have argued that the clipping we see in CCD footage is the actual point at which each photosite has overloaded, and that's just not the case. The photosites actually clip about two stops above what a 12-bit quantization designates as clipping.

 

Anyway, I feel like I've expended enough energy on this. And it's getting to be somewhat inconsequential, since I'm arguing about the behavior of a chip design that the Red does not use.


  • Premium Member
Keith,

 

Do you know whether CMOS cameras that bake in user-selected image adjustments perform gain, knee and white balance calculations on chip? Or are they typically done off-chip, with the sensor "only" doing light gathering and the initial analog-to-digital conversion?

 

I would think at least knee and gain would be done during the CMOS sensor's initial on-chip quantization, but I don't know for sure.

 

Thanks much.

 

Not possible. ADCs are strictly linear devices. Any such manipulation would have to be done pre-ADC, requiring millions of carefully matched analog circuits. It simply cannot be done with any current technology.

 

Look, no offense chaps, but I've been reading some absolute drivel here. Perhaps the following will clarify things a bit.

 

 

All standalone Analog to Digital converters have some means of setting the voltage values represented by their maximum and minimum digital output values. Here’s a connection diagram for the venerable CA3306 6-bit video ADC. (It’s a totally obsolete part now, but is easier to understand than most modern packages).

 

[Connection diagram for the CA3306 6-bit video ADC]

 

The minimum digital output the CA3306 can produce is binary “000000”, which is zero in decimal, and the maximum is binary “111111”, 63 in decimal. The output pins are the ones marked B1 through B6, and are set to zero volts for binary “0” and (usually) 5 volts for binary “1”.

 

Virtually all modern digital cameras use 10, 12, 14 or even 16-bit quantization, which means that instead of the maximum output being “111111” it might be “1111111111” or even “1111111111111111”, but the principles are exactly the same.

 

 

Now note pins 10 (Vref +) and 11 (Vref -).

Those two pins allow you to set the values of input voltage that produce the extremes “000000” and “111111” of the 6-Bit ADC.

 

Suppose we’re digitizing the output from a single photocell on a camera pickup device. In darkness there will be no (zero) voltage output, while the maximum voltage a silicon photocell can produce is about 0.6 Volts. After that, the effect is very much like an overflowing cup, no matter how much water (or light) you pour in, the level can never rise above 0.6V.

 

So in this case you would set pin 11 of the CA3306 to zero volts, and pin 10 to 0.6 Volts. That way, zero volts from the photocell would produce “000000” and 0.6 Volts would produce “111111”. Voltages in between would produce intermediate binary values.
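To make the mapping concrete, here is a minimal sketch of an idealised converter behaving the way the CA3306 example above describes. This is a simplified model, not the real part's behaviour:

```python
def adc(v_in, v_ref_minus=0.0, v_ref_plus=0.6, bits=6):
    """Idealised flash ADC: v_ref_minus maps to all-zeros, v_ref_plus maps to
    all-ones, and voltages in between map to intermediate codes. Inputs beyond
    the references simply clip at the rails."""
    full_scale = (1 << bits) - 1                              # 63 for 6 bits
    frac = (v_in - v_ref_minus) / (v_ref_plus - v_ref_minus)
    return max(0, min(full_scale, round(frac * full_scale)))

print(adc(0.0))   # 0   ("000000")
print(adc(0.3))   # 32  (about half scale, "100000")
print(adc(0.6))   # 63  ("111111")
print(adc(0.9))   # 63  (the overflowing cup: it can't go any higher)
```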

 

But now suppose you’re shooting in limited light, and the most voltage you can get from a photocell is 0.3 Volts. It will still work, but the digital output will then only range between “000000” and “100000”, so it effectively becomes a 5-bit ADC. You can mathematically multiply the binary output by a factor of two to restore the full 6-bit range, but this will introduce noise.

 

A far better approach is to change Pin 10 to 0.3 Volts. Then, input voltages between zero Volts and 0.3 Volts will produce binary outputs between “000000” and “111111”. The voltage range between 0V and 0.3V is then broken up into 64 steps instead of 32.

 

This is exactly what is done with modern 3-chip CCD cameras when you activate the “gain” settings.
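A quick sketch comparing the two approaches (digital multiplication versus lowering Vref+), using the same idealised converter model as above; the voltages and step size are illustrative only:

```python
def adc(v, v_ref_plus, bits=6):
    # Same idealised converter as in the earlier sketch, with Vref- fixed at 0 V.
    full_scale = (1 << bits) - 1
    return max(0, min(full_scale, round(v / v_ref_plus * full_scale)))

dim_scene = [i * 0.001 for i in range(300)]                 # 0.000 .. 0.299 V

digital_x2 = [min(63, adc(v, 0.6) * 2) for v in dim_scene]  # ~5-bit capture, doubled
rescaled   = [adc(v, 0.3) for v in dim_scene]               # Vref+ lowered to 0.3 V

print(sorted(set(digital_x2)))  # only even codes 0, 2, ... 62: half the steps missing
print(sorted(set(rescaled)))    # every code 0..63: the full 6-bit resolution is used
```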

 

Now, all this to the side, both silicon photocells and analog-to-digital converters are strictly linear devices. There is no way any sort of response curve manipulation can be done in the photocell itself or in the ADC. With CCD devices the video output is normally extracted one pixel at a time (sometimes two or four at a time), and in that case the CCD output can be passed through some sort of response shaping amplifier (or two or four, or whatever), since only a small amount of circuitry is required.

 

CMOS photosensors have a particular problem: while the photon gathering and storage process itself generates very little noise, the act of reading out the pixel values tends to introduce an enormous amount of noise. A CMOS sensor that outputs one pixel at a time as a CCD does would be far too noisy to be practical.

 

The noise is dramatically reduced if the clock speed is slowed down, but then the pixels cannot be read out fast enough to produce moving images. (This explains the great success of CMOS in digital still cameras, which can often only produce motion pictures at VGA resolution or less.)

 

The solution is to divide the digitization task up amongst more ADCs, which can run more slowly and so introduce less noise. Some CMOS imagers have one ADC for each row (or column), so if you had 1080 rows of 1920 pixels, for example, you could have 1080 ADCs running at 1/1080th the clock speed that would be needed for outputting the pixels directly. It’s this technique that makes CMOS imagers practical. Unfortunately, accurately setting the VRef+ value on thousands of ADCs is simply not practical, which is why the RED Mysterium and similar CMOS chips are totally dependent on controlling the light level to control the ADC range used. Gain can only be implemented by digital multiplication of the (sometimes noisy) ADC output.
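Back-of-envelope arithmetic for the row-parallel readout argument above; the frame rate and resolution are illustrative figures, not any particular camera's specification:

```python
rows, cols, fps = 1080, 1920, 24                 # illustrative figures only

single_adc_rate = rows * cols * fps              # one converter reading every pixel
per_row_rate = single_adc_rate / rows            # 1080 converters, each 1/1080th the speed

print(f"One ADC for the whole chip: {single_adc_rate / 1e6:.1f} Msamples/s")
print(f"One ADC per row:            {per_row_rate / 1e3:.1f} ksamples/s each ({rows} of them)")
```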

 

As for various manufacturers’ claims about 800% overload etc, all that means is that once the input climbs past the 7th bit of a 10-bit ADC for example, different post-processing is applied to the signal that occupies the upper 3 bits, in an attempt to give a more film-like response. The ADC would normally still be set up to produce 1111111111 (Decimal 1,023) when the photocells were producing 0.6V. All that happens is the viewfinder’s Zebra pattern (or whatever) is set to cut in around 128 instead of 1024.
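As a minimal sketch of that idea (the knee point, output level and scaling here are arbitrary illustrations, not any manufacturer's actual processing): linear codes up to the knee are stretched to near the top of the output range, and everything above the knee is squeezed into the remaining headroom instead of clipping.

```python
def apply_knee(code, knee=128, full_scale=1023, out_white=940):
    """Hypothetical film-look knee on a 10-bit linear code. Below the knee the
    signal is scaled so the knee lands at out_white (940 = 10-bit Rec.709 white);
    codes above the knee are compressed into the 940..1023 headroom."""
    if code <= knee:
        return round(code * out_white / knee)
    frac = (code - knee) / (full_scale - knee)
    return round(out_white + frac * (full_scale - out_white))

for c in (0, 64, 128, 256, 512, 1023):
    print(c, "->", apply_knee(c))   # 128 maps to "white"; 129..1023 become the rolloff
```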

 

This doesn’t really do anything that you couldn’t do just as easily by underexposing the CCD and fiddling with the gamma curves in Post Production. The big problem then is that most of the picture is only being captured with 7-bit resolution.
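The 7-bit figure follows directly from the arithmetic: if the exposure keeps the brightest part of the scene three stops (8x) below the ADC's full scale to preserve that highlight headroom, only 1/8 of the codes are left for the picture.

```python
import math

adc_bits = 10
headroom_stops = 3                                    # the "800%" highlight reserve
codes_for_image = 2 ** adc_bits / 2 ** headroom_stops
print(f"{codes_for_image:.0f} codes, about {math.log2(codes_for_image):.0f}-bit resolution")
# -> 128 codes, i.e. roughly 7 bits for most of the picture
```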

Edited by Keith Walters

Thanks Keith for the in-depth explanation. Well done and easy to understand.

I always find it difficult and time-consuming to explain such a topic in English (which is not my mother tongue), so I am glad you did this job.

Peter, I hope you now understand why your ideas won't work.


  • Premium Member

Keith, Thank you very much for your reply.

 

 

...

Suppose we’re digitizing the output from a single photocell on a camera pickup device. In darkness there will be no (zero) voltage output, while the maximum voltage a silicon photocell can produce is about 0.6 Volts. After that, the effect is very much like an overflowing cup, no matter how much water (or light) you pour in, the level can never rise above 0.6V.

 

So in this case you would set pin 11 of the CA3306 to zero volts, and pin 10 to 0.6 Volts. That way, zero volts from the photocell would produce “000000” and 0.6 Volts would produce “111111”. Voltages in between would produce intermediate binary values.

...

 

Are you saying that in the case of a camera like the F23, its VREF+ (pin 10 equivalent) is set to the absolute maximum voltage for the photosite (its overflow or clipping point)? Not at a level which is a small margin below the photosite's maximum possible voltage?

 

 

...

But now suppose you’re shooting in limited light, and the most voltage you can get from a photocell is 0.3 Volts. It will still work, but the digital output will then only range between “000000” and “100000”, so it effectively becomes a 5-bit ADC. You can mathematically multiply the binary output by a factor of two to restore the full 6-bit range, but this will introduce noise.

 

A far better approach is to change Pin 10 to 0.3 Volts. Then, input voltages between zero Volts and 0.3 Volts will produce binary outputs between “000000” and “111111”. The voltage range between 0V and 0.3V is then broken up into 64 steps instead of 32.

 

This is exactly what is done with modern 3-chip CCD cameras when you activate the “gain” settings.

...

 

And Red's sensor cannot do this. You yourself are arguing that this is a better approach when shooting in low light. Unfortunately for the Red, it has no way of rescaling its sensor when shooting in low light.

 

 

...

Now all this to the side, both silicon photocells and Analog-to-digital converters are strictly linear devices. There is no way any sort of response curve manipulation can be done in the photocell itself or in the ADC. With CCD devices the video output is normally extracted one pixel at a time (sometimes two or 4 at a time) and in that case, the CCD output can be passed through some sort of response shaping amplifier (or two or 4 or whatever), since only a small amount of circuitry is required.

...

 

And adding amplification to the CCD's analog signal is something that I brought up, and again Red's design is incapable of doing.

 

 

...

CMOS photosensors have a particular problem: while the photon gathering and storage process itself generates very little noise, the act of reading out the pixel values tends to introduce an enormous amount of noise. A CMOS sensor that outputs one pixel at a time as a CCD does would be far too noisy to be practical.

...

Unfortunately, accurately setting the VRef+ value on thousands of ADCs is simply not practical, which is why the RED Mysterium and similar CMOS chips are totally dependent on controlling the light level to control the ADC range used. Gain can only be implemented by digital multiplication of the (sometimes noisy) ADC output.

 

This is again saying that there are limitations to Red's approach of using a CMOS sensor, as it eliminates any possibility of changing values like gain. And changing gain in a CCD camera has distinct technical advantages to raising gain in post.

 

 

...

As for various manufacturers’ claims about 800% overload etc, all that means is that once the input climbs past the 7th bit of a 10-bit ADC for example, different post-processing is applied to the signal that occupies the upper 3 bits, in an attempt to give a more film-like response. The ADC would normally still be set up to produce 1111111111 (Decimal 1,023) when the photocells were producing 0.6V. All that happens is the viewfinder’s Zebra pattern (or whatever) is set to cut in around 128 instead of 1024.

 

This doesn’t really do anything that you couldn’t do just as easily by underexposing the CCD and fiddling with the gamma curves in Post Production. The big problem then is that most of the picture is only being captured with 7-bit resolution.

 

Doesn't your line, "The big problem then is that most of the picture is only being captured with 7-bit resolution." answer why adding a knee in-camera is better than significantly underexposing and changing gamma and highlights in post?

 

Also, please realize that when I used the term "initial quantization" I meant it as opposed to the next-stage quantization used to bring the data down to the proper resolution for recording. I realize that white balance adjustment would not be done to the analog signal. There would be an analog-to-digital conversion to, let's say, 14 bits, and the white balance would be applied to this 14-bit signal. Then the second quantization to 10 bits would take place in order to record to, say, HDCAM-SR.
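For what it's worth, here is a minimal sketch of that two-stage idea as I understand it (all values and gains are hypothetical, not any camera's actual processing): a linear 14-bit conversion, white-balance gains applied digitally, then a second quantisation down to 10 bits for recording.

```python
def wb_then_requantise(rgb14, gains=(1.8, 1.0, 1.4)):
    """rgb14: linear 14-bit codes (0..16383). gains: hypothetical per-channel
    white-balance multipliers. Returns 10-bit codes for recording."""
    balanced = [min(16383, round(c * g)) for c, g in zip(rgb14, gains)]
    return [b >> 4 for b in balanced]          # 14 -> 10 bits (drop 4 LSBs)

print(wb_then_requantise((6000, 9000, 5000)))  # e.g. [675, 562, 437]
```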

 

Thank you very much again for your replies.


  • Premium Member
Are you saying that in the case of a camera like the F23, its VREF+ (pin 10 equivalent) is set to the absolute maximum voltage for the photosite (its overflow or clipping point)? Not at a level which is a small margin below the photosite's maximum possible voltage?

 

Slightly higher, if anything. Silicon photosensors have a small saturation curve which is slightly better-looking than the hard cutoff you would get with an overloaded ADC. You would only be throwing away a very small part of the ADC's dynamic range in either case.

 

And Red's sensor cannot do this. You yourself are arguing that this is a better approach when shooting in low light. Unfortunately for the Red, it has no way of rescaling its sensor when shooting in low light.

Info on the Mysterium is virtually non-existent, but if there is such a facility, they don't appear to be exploiting it. It would be such a worthwhile improvement that, had they developed such a technique, I'm confident they would be using it. In any case, other CMOS sensors do not offer this feature.

 

And adding amplification to the CCD's analog signal is something that I brought up, and again Red's design is incapable of doing.

Analog pre-processing will get the very best out of a CCD chip, it's true, but I don't know whether such a thing is used with the F23 and the like. The service information is somewhat circumspect. It's like detail correction: it took me a long time to determine that "zero" is not the same as "off"!

 

This is again saying that there are limitations to Red's approach of using a CMOS sensor, as it eliminates any possibility of changing values like gain. And changing gain in a CCD camera has distinct technical advantages to raising gain in post.

 

Certainly, but with a 12-megapixel CCD sensor, this will come at a price! CMOS chips are much cheaper to fabricate. CCD fabrication plants can only make CCD chips - for them, CCDs are THE product. CMOS image sensors, conversely, can be made by a standard CMOS fab plant when it isn't busy making microprocessors or other much bigger money spinners.

 

For what it is, the RED is a very cheap camera.

 

Doesn't your line, "The big problem then is that most of the picture is only being captured with 7-bit resolution." answer why adding a knee in-camera is better than significantly underexposing and changing gamma and highlights in post?

It is, but the added complexity is seemingly not considered worth the trouble for ENG type cameras like the F23. Also, ADC noise is only important when it is greater than CCD analog noise, which is not insignificant.

 

To fart in church again, I remain totally confident that no silicon sensor will equal the performance of scanned 35mm color negative anytime soon. More prime-time stuff is being shot with digital cameras now, but notice that a lot of it is being done with PV's F23 CineAltas, which have been around for 10 years almost!

 

Clearly it is more an artifact of lower budgetary expectations than of better cameras. The world has changed indeed. I can remember when the peak earning lifespan of any model of Betacam was about 3 years if you were lucky!


  • Premium Member

Keith,

 

I want to thank you so much for your replies. It's clear that I misunderstood how the "extra" dynamic range of a CCD chip is handled. And to Robert and John I want to wholeheartedly apologize for that.

 

If I understand correctly, it seems that the first ADC quantizes the entire sensitivity of the chip, including what on a straight line relationship could be as much as three stops above white level. During the second quantization a knee is applied to this line to bring the white level down to a data level that is actually useful for recording/output. The first quantization does not pre-clip the CCD's values; the second quantization does rescale them by pulling down the upper right end of the curve.

 

You also bring up good points as to why the in-camera approach can be superior. But it is also clear that these techniques are largely unavailable to Red due to its use of a CMOS sensor. (I had suspected that a CMOS was not as amenable to low-level image processing as a CCD is; you have, however, clarified this point.)

 

For example, being able to adjust in-camera gain by lowering the output voltage designated as maximum can yield superior performance in low light conditions, but such an adjustment is only possible with a CCD not a CMOS sensor.

 

So while in-camera processing can be superior to saving it all for post, the extent to which meaningful in-camera image adjustments are possible depends upon camera design and the components used.

 

Thanks very much.


  • Premium Member
CCD fabrication plants can only make CCD chips - for them CCDs are THE product. CMOS image sensors conversely can be made by a standard CMOS fab plant when it isn't busy making microprocessors or other much bigger money spinners. ....

 

More prime-time stuff is being shot with digital cameras now, but notice that a lot of it is being done with PV's F23 CineAltas, which have been around for 10 years almost!

 

Theoretically you could make image-sensor CMOS chips in a conventional fab, but per Dalsa, in practice it takes some specialization to deal with the mixed-signal environment. They didn't mention that for single-chip color, you also have to work the dye filter mask application into the line. Here's loads of stuff from them, with a lot of links to PDFs down the right side, to keep us all busy for a while:

 

http://www.dalsa.com/sensors/products/ccd_vs_cmos.aspx

 

It's the F-900's that have been around for a decade or so. F-23 and its companion F-35 are the new ones. Sitcoms use a lot of F-900's, dramatic shows use Genesis, D-21, and Red, and even the F-900 with an optical adapter for 35mm film lenses.

 

 

 

 

-- J.S.


  • Premium Member
It's the F-900's that have been around for a decade or so. F-23 and its companion F-35 are the new ones. Sitcoms use a lot of F-900's, dramatic shows use Genesis, D-21, and Red, and even the F-900 with an optical adapter for 35mm film lenses.

-- J.S.

 

Ack, you’re right, the F900 came out in 1999. The F23 is only about 3 years old. My apologies to the F900, the F23, Sony, Panavision, George Lucas and selected other persons/entities I may have unintentionally offended.

 

Did I write all that? Sheesh….

Sorry, didn’t get much sleep on Monday night. I’ve had a very long Easter holiday weekend. (Good Friday and Easter Monday are public holidays here, not sure how it works in other countries).

 

<totally off-topic rant (since this thread is way beyond its use-by date anyway:-)>

 

Most of my 4-day break was spent listening to my wife and her sister heading off yet another attempt by her hobo brother to install himself in his elderly mother’s beach house, along with his hillbilly second wife and her gaggle of no-good trailer-trash relatives, with their assorted collection of feral offspring of dubious and obscure parentage. Most of the weekend was occupied by various screaming matches that went on long into the night.

 

Funny thing is, when I was talking about it at work yesterday, it seemed just about everybody had a similar story to tell about some black-sheep (ass)hole-into-which-soft-hearted/headed parents/grandparents shovel-endless-amounts-of-their-hard-earned-and-saved-money-to-absolutely-zero-result. Seems nobody is immune. I bet even Jim Jannard gets his share of this sort of thing, e.g.:

 

“I mean how the f*ck could you POSSIBLY come away from a divorce settlement with $400,000 only 4 years ago, and you’ve got ABSOLUTELY NOTHING left?! NOTHING!! Then you came crying to your 86 year old mother for yet another handout to bail you out of your latest financial crisis….”

 

“Of course nobody wants you in their house! Not when you both smoke like that overpriced gas-guzzling piece of poop you spent your last handout on, and you insist on bringing that pack of fleabag canine rodents everywhere whose first duty appears to be to seek out every nook and cranny they may have missed last time they were here, so they can p*ss and poop in it! Why don’t you get a real f*cking dog?! Just ONE dog!?”

 

“Yes we all have nice houses. Possibly because we don’t spend all our time and money playing poker machines, on booze, cigarettes, and buying yet more of other people’s worthless trash at garage sales! What?! Did you think I won this house in a raffle or something?!” (Significant parts of the various arguments were that they wanted to reserve two large “common” areas of the house to store the disgusting collection of cockroach-infested trash they’d accumulated at the aforesaid garage sales.)

 

Teary-eyed, trembling lower lip: “What? You think I should live out on the street?”

“Yeah, well at least the council sweeps them and takes the rubbish away occasionally!”

<end of rant>

 

By the way, I didn't use the word "poop". The editing thingie substituted that word for the S-word with the "i" asterisked out. F*ck is apparently acceptable though:-)

Sorry, please resume normal forum programming.

Edited by Keith Walters
