RED ASA question URGENT! Need answered by tomorrow at 5pm!


Kyle Shapiro


  • Premium Member
...

 

But our chip cameras have already digitized. How can they adjust anything now? The answer is that they digitize at a greater bit depth than they'll finally output. That allows them to re-sample digitally without losing quality.
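That re-sampling argument can be sketched numerically. A hedged toy model (pure Python; the 14-bit/10-bit depths and the 0.8 gain are illustrative, not any camera's actual pipeline):

```python
import random

def quantize(x, bits):
    """Round a 0..1 value to the nearest of 2**bits levels."""
    levels = (1 << bits) - 1
    return round(min(max(x, 0.0), 1.0) * levels) / levels

random.seed(0)
gain = 0.8  # a hypothetical digital adjustment applied after digitizing

err_hi = err_lo = 0.0
for _ in range(100_000):
    x = random.random()                    # "analog" light level
    ideal = quantize(x * gain, 10)         # adjustment done before any quantization
    err_hi += abs(quantize(quantize(x, 14) * gain, 10) - ideal)  # 14-bit intermediate
    err_lo += abs(quantize(quantize(x, 10) * gain, 10) - ideal)  # 10-bit all the way

print(err_hi < err_lo)  # True: the deeper intermediate tracks the ideal result better
```

With the deeper intermediate, the final 10-bit rounding is the only significant error left, which is the point about digitizing above the output depth.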

 

...

 

Right John. In the case of the F23, it quantizes to 14-bits while HDCAM-SR is 10-bits. My whole argument is that even ignoring loss due to RAW compression and decompression, under some shooting conditions adjusting quantization would have a more positive impact on the image than if quantization were kept locked and all adjustments were only made in post to .R3D files.

 

BTW, the term "off the sensor" gets a little muddy when talking about the Red because it uses a CMOS sensor, which, as you pointed out, quantizes on the chip. The same chip could be designed to quantize to 6-bit color and would look awful. But the fault would be in the quantization, not the light-gathering properties of the chip. With a CCD, "off the chip" means pre-quantization.

 

So I think some Red advocates hear "off the chip" and think it can't get any better than that.

 

 

...

Each photosite "well" clips hard if it gets filled up, and at the other end its dynamic range is limited by noise.

...

 

But the highlight rendition is a function of where the quantization sets the knee and the clipping is a function of where it sets the white point. Just because we have a uniformly clipped image in post does not mean that all the photosites were overloaded. It means that the quantization said "No mas, I'm already at 4,095 (or whatever the highest level is), so I'm labeling you 4,095 even though you're not."
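That "No mas" behavior is just code saturation. A toy 12-bit converter (hypothetical, not Red's actual circuit) makes the point:

```python
def adc_12bit(level, full_scale=1.0):
    """Map a light level to a 12-bit code, saturating at the top code."""
    code = int(level / full_scale * 4095)
    return min(max(code, 0), 4095)  # anything past full scale is labeled 4095

print(adc_12bit(0.5))   # 2047: mid-scale
print(adc_12bit(1.5))   # 4095: clipped, even though the photosite may not be full
```

A uniformly clipped image only tells you the converter ran out of codes, not that every photosite overloaded.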

Link to comment
Share on other sites


  • Premium Member
But the highlight rendition is a function of where the quantization sets the knee and the clipping is a function of where it sets the white point. Just because we have a uniformly clipped image in post does not mean that all the photosites were overloaded. It means that the quantization said "No mas, I'm already at 4,095 (or whatever the highest level is), so I'm labeling you 4,095 even though you're not."

 

That's the way it works on conventional three chip CCD cameras. IIRC, Red isn't like that. Because it has to quantize on the chip out there among the actual photosites, it has to be a lot simpler. Silicon real estate devoted to quantization and digitization takes away from photosensitive area, so they need to minimize it. It doesn't do knees or clipping -- at least not adjustable clipping. It may be clipping, but at very close to cell overload. Guessing now, maybe that's the reason for the "black sun" problem. Maybe it defaults to zero when it has to carry out of the MSB? It just quantizes everything, but to a really large bit depth. I don't have the notes handy, but it was something huge, like maybe 32 bits? I'm trying to remember this from a DCS presentation a long time ago. Anyhow, that's what comes off the CMOS sensor and into the next block in the diagram.

 

From there it gets turned into 12 bits and wavelet compressed, but all in Bayer spatial form. That gives you plenty to work with in post for color correction. So, all you do is post production color correction to make the look you want, like with film. In production, you hang colored glass filters to adjust your actual light sources to the Red's 5000 Kelvin. The video concept of white balance doesn't apply.

 

 

 

 

-- J.S.


  • Premium Member
...

I don't have the notes handy, but it was something huge, like maybe 32 bits? I'm trying to remember this from a DCS presentation a long time ago. Anyhow, that's what comes off the CMOS sensor and into the next block in the diagram.

 

From there it gets turned into 12 bits and wavelet compressed, but all in Bayer spatial form. ...

 

John,

 

So you're saying output is something like 32-bits off the sensor and then requantized down to 12-bits away from the sensor? Interesting.

 

... Anyway, you kind of motivated my avatar, LOL! Hope all is well and thanks for your help.


  • Premium Member

(Don't know what happened to the post above, but here it is again)

 

I don't have the notes handy, but it was something huge, like maybe 32 bits?

-- J.S.

32 bits?!

Not in this universe.

I don't know who told you that, but if nothing else, making a high-speed "flash" on-chip ADC with a resolution of 32 bits would require the bulk manufacture of monolithic resistors with a tolerance of about 0.000000023 percent. We can't even do that with precision hand-made laboratory components; the idea of thousands of copies of something like that being made automatically by gas soaking through microscopic holes in a lumpy layer of quartz is technically ludicrous.
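The tolerance arithmetic behind that claim: resolving the least significant bit of an N-bit flash converter means matching the ladder to one part in 2**N of full scale.

```python
bits = 32
tolerance_percent = 100 / 2**bits    # one part in 2**32, expressed as a percentage
print(f"{tolerance_percent:.1e} %")  # 2.3e-08 %, i.e. about 0.000000023 percent
```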

 

In any case, any conductor above absolute zero has an inherent background of electrical noise generated by random movements of the atoms, which is far greater than the theoretical resolution of a 32-bit ADC.

 

While there are ways of achieving such resolution, the results are inferred rather than directly measured, and the sampling rate is extremely low.

 

Yes, I know you can get PC sound cards that claim to have 32-bit resolution, but precisely what this means is never explained. Unless you want to record the AC waveform on a 330kV power line with an electret microphone in parallel with it, I'm not sure what use it would be to have that much resolution. The most movement that you can usefully get out of the average microphone is about 1mm before severe distortion starts to set in. So if that was your 0dB point, the smallest resolution step on a 32-bit ADC would be the voltage produced by a movement of 1mm divided by about 4 billion, which comes to about 0.00023 nanometres. Considering the diameter of a hydrogen atom is about 0.1 nanometres, that would have to be one bloody quiet recording studio!

 

And you're going to be doing this inside all the myriad hash-generating circuitry of a PC....
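Checking the displacement arithmetic above with the poster's own figures (1 mm of diaphragm travel split into 2**32 steps):

```python
step_nm = (1e-3 / 2**32) * 1e9  # 1 mm in metres, divided into 2**32 steps, in nanometres
print(round(step_nm, 5))         # 0.00023 nm, vs. roughly 0.1 nm for a hydrogen atom
```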


  • Premium Member
OK, likely I'm remembering the number wrong. The notion that it's more than 12 may still be correct, maybe?

-- J.S.

Possibly, but every extra bit requires a massive increase in the accuracy of the ADC's DAC resistors. Single 16-bit ADCs are doable (just), but they depend on custom post-assembly laser trimming.

I seriously doubt it could be done with hundreds or thousands of on-chip ADCs.

 

The Dalsa Origin has a 16-bit ADC, but the bottom four bits are mostly noise.

 

The RED manual seems to imply that the Mysterium has one ADC for every pixel, but that's got to be a misprint. All competing devices have one ADC for each row (or column), which means thousands of ADCs running at a much lower speed, but 12 million?


  • Premium Member

Keith,

 

Do you know whether CMOS cameras that bake in user-selected image adjustments perform gain, knee and white balance calculations on-chip? Or are they typically done off-chip, with the sensor "only" doing light gathering and the initial analog-to-digital conversion?

 

I would think at least knee and gain would be done during the CMOS sensor's initial on-chip quantization, BIDKFS.

 

Thanks much.


It makes no sense to me that you would argue for having basic image adjustments, like white balance, done at the post-production stage, where data quality is lower. Wouldn't you want the white balance calculation to be done at the sensor level, during the analog-to-digital quantization? That is when data is truly off the sensor.

 

Peter, that is the point where you got the RAW approach wrong.

The data quality in post is NOT lower; the only thing that happens to the raw data is the wavelet compression to keep the data rate manageable.

You could work with a WB filter if you want, as the sensor is balanced for daylight. It will affect noise levels in the blue channel, but that's it.

 

A RAW data workflow is not comparable to other digital formats with baked-in corrections and heavy compression, like HDV.

Re-white-balancing HDV, for example, is very bad.

 

White balance compensation in post is performed upon an image that has undergone three stages of processing: 1) the analog-to-digital quantization at the sensor level, 2) compression to .R3D, and 3) demosaicing.

 

I can understand you arguing against trying to lock down a look in-camera (even if I tend to disagree). But basic image adjustments, like white balance, I see no good reason to leave for post. Leaving everything for post seems more like a philosophy than a technical advantage.

 

As someone doing a lot of post-processing and VFX, I can assure you that RAW is a godsend for me - it leaves all possibilities open for tweaking and getting the most out of it. That doesn't mean you should be a careless shooter. A good DP will provide the solid basis for the best RAW footage, which you can manipulate into anything you want. Ideally you won't have to tweak anything, but you will have full freedom of choice if you change your mind later. It is very easy to set raw WB in post, but it can be hard to correct an improperly baked-in WB.

 

In the case of white balance, all it does is add another stage of processing to post, and it's never going to look better than if the math was done in-camera.

 

It won't look worse either, because the camera can't do anything to RAW that you couldn't do in post.

The extra stage in post is not relevant, as the data has to be processed anyway.

 

Oops, didn't see there were already a bunch of answers...

Edited by Robert Niessner

Do you know whether CMOS cameras that bake in user-selected image adjustments perform gain, knee and white balance calculations on-chip? Or are they typically done off-chip, with the sensor "only" doing light gathering and the initial analog-to-digital conversion?

 

I would think at least knee and gain would be done during the CMOS sensor's initial on-chip quantization, BIDKFS.

 

Peter, neither CMOS nor CCD bakes anything like knee or WB into the signal. They all record raw. This raw data is sent to an image-processing device where those adjustments are done. The concept of knee, superwhite and superblack exists only for broadcast formats; it is basically a manipulated gamma curve. Superwhite and superblack exist merely because broadcast NTSC/PAL signals couldn't carry that data, and it had to be cropped off, as signals that hot would otherwise cause all sorts of interference.

When you watch footage from DV, it is already cropped in the highlights relative to the raw - even with superwhites. If you look at raw data even from consumer cameras, you will see they have much better dynamic range than you would think from the compressed DV footage.


  • Premium Member
Do you know whether CMOS cameras that bake in user-selected image adjustments perform gain, knee and white balance calculations on-chip? Or are they typically done off-chip, with the sensor "only" doing light gathering and the initial analog-to-digital conversion?

 

It would typically be done off the sensor chip, if at all. Consider the Arri D-21. It can give you raw or TV, so it must make the TV from the raw somewhere downstream.

 

White balance is only something you need to bother with for a) live TV or b) recording to tape, because of the limits of tape. In post, we do color timing to make the look we want, not white balance. An intermediate white balance step would only impose the restrictions of an extra re-sampling, and degrade the image. It's unnecessary.

 

 

 

 

-- J.S.


  • Premium Member
... The concept of knee, superwhite and superblack is only for broadcast formats, it is basically a manipulated gamma curve. ...
This is simply not true. Your typical CCD sensor has much more dynamic range than the recording medium allows. During the first round of analog-to-digital quantization, the image processor applies a knee to the upper end. This has nothing to do with legal broadcast levels or what are normally called superwhites, between 236 and 255 on the 8-bit scale. It has to do with the fact that the sensor has a wider range of sensitivity than even fourteen bits would allow. But some point has to be called absolute white, and somewhere before that point a knee is applied so the highlights transition more gracefully to what is designated as pure white.
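As an illustration of what a knee does, here is a minimal sketch (the knee point and slope are arbitrary values for illustration, not any manufacturer's): levels below the knee pass through, and levels above it are compressed toward white instead of clipping abruptly.

```python
def apply_knee(x, knee_point=0.8, slope=0.25):
    """x is a linear level: 0.0 = black, 1.0 = designated absolute white."""
    if x <= knee_point:
        return x                                             # below the knee: untouched
    return min(knee_point + (x - knee_point) * slope, 1.0)   # above: gentler slope

print(apply_knee(0.5))            # 0.5: unchanged
print(round(apply_knee(1.2), 3))  # 0.9: 40% over "white" still lands below clipping
```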

 

Also, none of these chips record in RAW with capital letters. AYK, RAW is a lossy Red codec which records 12-bit mosaiced color. That's nowhere near what's coming off a CCD sensor.

 

My question is how much image processing is done on-chip by the CMOS sensors used in a typical camera that bakes in WB, gain, knee, etc. An example of such a camera would be the Sony EX-1/3 for the chip design and the F35 for a single-chip Bayer-pattern design. You guys seem to think that the CMOS sensors in cameras like these are limited to just a basic straight-line quantization without knee, gain or any image correction or adjustment applied. I'm just not sure if that's the case.


  • Premium Member
Your typical CCD sensor has much more dynamic range than the recording medium allows.

 

Correct for video tape, not for pure data.

 

My ? is how much image processing is done on-chip by the CMOS sensors used in a typical camera that bakes-in WB, gain, knee, etc.. An example of such a camera would be Sony EX-1/3 for the chip design and the F35 for single chip Bayer pattern design.

 

Little if any on CMOS, absolutely none on CCD. CCD's have to be read off the sensor chip and processed on separate chips. I'm pretty sure that F-35 is a vertical stripe CCD design. Jeff Cree of Sony did a presentation on the F-23 and F-35. He says that they're very much the same except for the optical section. The next board in the chain is the same in both, it just has two places to plug in the output from the CCD's, one for the F-35 single striped chip, the other for the three separate chips of the F-23.

 

A better example might be the D-21. It's a Bayer CMOS design, IIRC. There's a link to its manual up in the Arri folder, about 150 pages. I'm not going to try to read it tonight, but it might shed some light on this. The D-21 outputs both "ARRI Raw" and conventional old baked-in HDTV, so I hope there are some clues to the processing chain in there.....

 

 

 

-- J.S.


  • Premium Member
It would typically be done off the sensor chip, if at all. Consider the Arri D-21. It can give you raw or TV, so it must make the TV from the raw somewhere downstream.

 

White balance is only something you need to bother with for a) live TV or b) recording to tape, because of the limits of tape. In post, we do color timing to make the look we want, not white balance. An intermediate white balance step would only impose the restrictions of an extra re-sampling, and degrade the image. It's unnecessary.

 

 

 

 

-- J.S.

Arri's raw is not truly raw. It has been quantized to 12 bits. Whether they do the analog-to-digital conversion straight-line or apply a knee is something I don't know. Most CCD cameras apply a knee during the initial quantization, but I can't speak for CMOS Bayer ones.

 

As for your WB point, consider the F23. It applies WB at a higher data rate than the one at which the camera records. Now it's true that HDCAM-SR is "only" 10 bits, which could be called a limitation imposed by tape. But the F23 does its WB and initial knee adjustments during an analog-to-digital conversion which results in 14-bit color. Now you can say that RAW is good enough to apply WB in post, which may well be true. But you can't say that it's the same as applying WB to uncompressed 14-bit color.

 

As for the F23's approach being limiting, let me give you a situation where it would not be. The camera is recording in bright orange light outside. By doing a WB and applying it during the initial quantization, the F23 dials back the red level while it's still at 14 bits. Without applying a WB at this point, the reds could very well be clipped.

 

Now you can say that all the DP has to do is stop down. But maybe he/she doesn't want to for composition reasons. So throw on an ND. But now you're exposing for a red level that is really too high to begin with. So you're starving the other channels to keep red from blowing out. Now Red's codec offers enough color depth that this probably won't be a problem. But it could be. Look at what happens in indoor low light. The camera's histograms show an exposure level that overly biases the red channel. So people are exposing for red and, lo and behold, they find noise in the blues. Putting on CTB gels is a pretty crappy solution and is worse than baking in a WB correction. Instead of the camera baking blue into the image, the gel is baking blue into the light itself; that's even more of a blunt instrument than WB.

 

At this stage of the thread, I really feel like I'm starting to repeat myself. The main point that I'm making is that cameras (CCD ones, anyway) which do in-camera image adjustment do it at a higher data rate than even .R3D offers. Doing it in post may be close, may be good enough, but it's not the same. It is lower quality. It has to be, because .R3D is less than 14-bit uncompressed.

 

Now whether this also applies to CMOS cameras, I don't know, because I don't know what on-chip image adjustments they perform. If they all just quantize straight-line to 12 bits and do nothing else, then the only difference between in-camera and post is the loss due to .R3D. And I have to think that's pretty minimal.

 

Anyway, hope you all have a nice holiday weekend. :).


This is simply not true. Your typical CCD sensor has much more dynamic range than the recording medium allows. During the first round of analog-to-digital quantization, the image processor applies a knee to the upper end. This has nothing to do with legal broadcast levels or what are normally called superwhites, between 236 and 255 on the 8-bit scale. It has to do with the fact that the sensor has a wider range of sensitivity than even fourteen bits would allow. But some point has to be called absolute white, and somewhere before that point a knee is applied so the highlights transition more gracefully to what is designated as pure white.

 

Also, none of these chips record in RAW with capital letters. AYK, RAW is a lossy Red codec which records 12-bit mosaiced color. That's nowhere near what's coming off a CCD sensor.

 

My question is how much image processing is done on-chip by the CMOS sensors used in a typical camera that bakes in WB, gain, knee, etc. An example of such a camera would be the Sony EX-1/3 for the chip design and the F35 for a single-chip Bayer-pattern design. You guys seem to think that the CMOS sensors in cameras like these are limited to just a basic straight-line quantization without knee, gain or any image correction or adjustment applied. I'm just not sure if that's the case.

 

Sorry, but you seem to have the wrong impression of that technology.

I have worked for 3 years in development of high-end CCD scanners for Vexcel Corporation (now Microsoft Photogrammetry), and I surely know a thing or two about sensors and data, as you might imagine.

 

RAW is raw data from the sensor. And nothing else. RED RAW is compressed RAW data. And nothing else. Please do not argue around those facts with me.

The dynamic range of a sensor is no problem for 14-bit ADCs. Clipping is done by the sensor itself, as soon as the wells are 100% filled. Sensors record light linearly (our eyes respond logarithmically), resulting in a very dark RAW image with noise levels equally distributed at all levels. For viewing purposes you apply a gamma correction curve so brightness levels look correct. That means you spread the dark parts of the image, and that is where the high bit depth is needed. The highlights are no problem because they even get compressed.

Now the noise in the darks also gets boosted, and that's the reason why there is visible noise in the darks of digital video but almost no visible noise in the highlights. Basically, the more noiseless a sensor is, the better its dynamic range.
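A toy numerical sketch of that point (all values illustrative, not any real sensor's): identical sensor noise added to a shadow level and a midtone level spreads far more in the shadows once a display gamma is applied.

```python
import random
import statistics

random.seed(1)

def gamma(x):
    """A simple 2.2 display gamma; clamps at zero for the rare negative sample."""
    return max(x, 0.0) ** (1 / 2.2)

# Same noise (sigma = 0.002) on a deep shadow (0.01) and a midtone (0.5)
shadow  = [gamma(0.01 + random.gauss(0, 0.002)) for _ in range(10_000)]
midtone = [gamma(0.50 + random.gauss(0, 0.002)) for _ in range(10_000)]

print(statistics.stdev(shadow) > statistics.stdev(midtone))  # True: gamma amplifies shadow noise
```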

 

You can compress any dynamic range into any end format; the question is how dull it will look and when you will face banding.

For a more contrasty look on TV monitors you have to sacrifice DR and clip levels by applying an S-curved contrast enhancement; you fit your DR to the DR of your output medium.

So for instant good-looking pictures, all cameras do this DR limiting within their image processor before recording to tape. The RED and other RAW cameras do NOT do this. They record the full DR of the sensor.

Edited by Robert Niessner

  • Premium Member
... I'm pretty sure that F-35 is a vertical stripe CCD design. Jeff Cree of Sony did a presentation on the F-23 and F-35. He says that they're very much the same except for the optical section. The next board in the chain is the same in both, it just has two places to plug in the output from the CCD's, one for the F-35 single striped chip, the other for the three separate chips of the F-23. ...

 

John, my bad about calling the F35 a Bayer-pattern CMOS camera; it's clearly not.


  • Premium Member
Sorry, but you seem to have the wrong impression of that technology.

I have worked for 3 years in development of high-end CCD scanners for Vexcel Corporation (now Microsoft Photogrammetry), and I surely know a thing or two about sensors and data, as you might imagine.

 

RAW is raw data from the sensor. And nothing else. RED RAW is compressed RAW data. And nothing else. Please do not argue around those facts with me.

The dynamic range of a sensor is no problem for 14-bit ADCs. Clipping is done by the sensor itself, as soon as the wells are 100% filled.

I have no doubt that you worked in sensor design, but the fact of the matter is that Sony's high-end CCDs respond to a dynamic range WELL IN EXCESS of anything that's recorded to a 10-bit colorspace like HDCAM-SR 4:4:4. That is just a fact. Clipping is absolutely done during the ADC stage and not by the photosites of the F35's CCD.

 

And if you doubt this, look at the Viper. It really records raw in its FilmStream mode and can exhibit amazing dynamic range. Same idea behind the Dalsa.

 

I feel like I'm going around in circles here.


I have no doubt that you worked in sensor design, but the fact of the matter is that Sony's high-end CCDs respond to a dynamic range WELL IN EXCESS of anything that's recorded to a 10-bit colorspace like HDCAM-SR 4:4:4. That is just a fact. Clipping is absolutely done during the ADC stage and not by the photosites of the F35's CCD.

 

And if you doubt this, look at the Viper. It really records raw in its FilmStream mode and can exhibit amazing dynamic range. Same idea behind the Dalsa.

 

I feel like I'm going around in circles here.

 

Again: The ADC does NOT clip highlights. It converts an analog voltage level to a digital level. Period. No discussion. It is a technological fact. Photosites can only handle a certain number of photons before they are filled. That means clipping. Otherwise you would be claiming the CCD has unrestricted DR.

You are again mixing different technologies. HDCAM-SR is NOT RAW. It has uncompressed color channels, but it is converted from RAW by using curves and LUTs. For the sake of a better-looking image, some clipping is applied by using contrast enhancement.

And I never said that for those formats data processing will not clip DR. In fact, that's exactly what I said.

But RED RAW is NOT HDCAM-SR. It is RAW data coming from the sensor with NO image processing applied (except the wavelet compression and some calibration for dark current and basics).

 

Please read up on how CCDs and RAW data work; it is nicely explained here:

http://www.sc.eso.org/~ohainaut/ccd/CCD_proc.html

Edited by Robert Niessner

  • Premium Member
... Otherwise you would claim the CCD to have unrestricted DR. ...

It's not unlimited, but their clipping point with cameras like the F23/F35, Viper, and Dalsa is many times higher than where a 12-bit ADC would clip them. As for HDCAM-SR, I know that it's not RAW. I know that RAW is mosaic color off the sensor. What I'm saying is that what's off a CMOS sensor has already gone through the ADC phase.

 

With digital cinema CCD sensors, dynamic range is cut tremendously during the ADC quantization, especially if it's performed at 12 bits. I believe this is also the case with a CMOS sensor, but I can't say so definitively, since CCDs are known to have greater dynamic range than CMOS sensors.


No, you have the same dynamic range and control available in both cases. In one case, you're forced to do it on the set, in the other you have it in post.

 

I would modify that to say you have "effectively similar" range and control. It can't be the same because if you use in camera settings, you're working with the information prior to compression, at least on most cameras (not Red because it doesn't currently allow pre-compression settings by the user). In some cases, you're even working with the analog information prior to digitization, which allows for even more control due to an essentially unlimited number of value levels. If you use a post approach, you're working with the information post compression. How much of a difference that makes, particularly in the case of the current Red design, is questionable. But the fact that there is less information once the material is compressed is not arguable - and this is true even if you assume 12 bits internally and 12 bits externally (i.e., working with an image converted to linear light).


  • Premium Member
I would modify that to say you have "effectively similar" range and control. It can't be the same because if you use in camera settings, you're working with the information prior to compression, at least on most cameras (not Red because it doesn't currently allow pre-compression settings by the user). ....

 

Yes, I was talking about the Red/raw idea there, with no in-camera settings. Sorry if I failed to make that clear. A lot of what appears to be disagreement here is just failure to keep track of whether we're talking tape with a ten bit limit, or raw bit bucket whatever mode.

 

The reason this is important and worth thinking about is a fundamental metaprinciple: Time is money everywhere, but the exchange rate varies. Therefore, complexity should be migrated away from where time is expensive to where it's less expensive.

 

Post ain't exactly cheap, but it costs way less per second than production. At least for us, the beauty of the raw idea is that you just hang your filters, set a stop, and shoot it. Much like we did with film.

 

 

 

 

-- J.S.


  • Premium Member
Again: The ADC does NOT clip highlights. It converts analog voltage level to digital level. Point. No discussion. It is a technological fact. Photosites can only handle a certain amount of photons, until they are filled. That means clipping.

...

 

Robert, I appreciate your and John's input and willingness to share your knowledge, but your assertion (which FWICT John agrees with) pretty much directly conflicts with what Sony says about high-end digital cinema CCDs. I believe I've explained my position well and have given camera examples that illustrate the data level and workflow needed to truly capture a CCD's full dynamic range. If clipping were happening on the photosite, then the Viper's and Dalsa's wide data options would offer no advantage.

 

But here are Sony's own words:

 

"A major challenge to quantization is that today's best CCDs have tremendous

dynamic range, which film photographers call "exposure latitude." The CCD in

Sony's F35 digital cinema camera can sense meaningful detail up to 800% of

nominal peak white. An exterior daytime wedding scene, for example, will retain

the texture of the lace pattern in the bride's white gown without losing the

difference between the black wool and the black silk in the groom's tuxedo."

From the top of page seven, http://pro.sony.com/bbsccms/assets/files/m...rmats_Guide.pdf


Robert, I appreciate your and John's input and willingness to share your knowledge, but your assertion (which FWICT John agrees with) pretty much directly conflicts with what Sony says about high-end digital cinema CCDs. I believe I've explained my position well and have given camera examples that illustrate the data level and workflow needed to truly capture a CCD's full dynamic range. If clipping were happening on the photosite, then the Viper's and Dalsa's wide data options would offer no advantage.

 

But here are Sony's own words:

 

"A major challenge to quantization is that today's best CCDs have tremendous

dynamic range, which film photographers call "exposure latitude." The CCD in

Sony's F35 digital cinema camera can sense meaningful detail up to 800% of

nominal peak white. An exterior daytime wedding scene, for example, will retain

the texture of the lace pattern in the bride's white gown without losing the

difference between the black wool and the black silk in the groom's tuxedo."

 

Peter, I already know that text, but be careful quoting marketing blabla as technological fact. This is Sony marketing speech for people who don't know much about electronics. That's OK, but it is not a fact.

 

What does 800% above nominal peak white mean? It means its photosites get filled up after 8 times the amount of photons have hit them compared to the nominal peak. In other words: at best, up to 3 f-stops. As each color channel has a different total fill level, this would be the maximum for the 'best' channel, normally the green one. Other channels might clip earlier, leading to the typical overbright wrong-color video look, e.g. in skin-tone highlights with the red-to-yellow-to-white ramp. To avoid this you clip earlier and desaturate, which looks more like analog film highlights. Normally you will have a reserve of 2 f-stops to dial back. This is exactly what you get with a DSLR and RAW vs. JPEG: you can dial back 2 f-stops in the highlights in the RAW image (where the JPEG would already be clipped).
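The stop arithmetic in that reading checks out:

```python
import math

headroom = 800 / 100          # "800% of nominal peak white", relative to nominal
stops = math.log2(headroom)
print(stops)                  # 3.0: three f-stops above nominal white, best case
```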

 

So Sony's marketing text just underlines what I already wrote before.

Edited by Robert Niessner

Ok, this is getting a little too technical for me. What I understand is that the Red is 5000K and that's it, right?

So when I shoot night scenes with a lot of tungsten, I add an 80-something filter to the camera and rate it somewhere around 100 ASA to avoid a noisy image? Just like you do if you for some reason shoot 5205 at night. In that case there will be a lot of tungsten lights.

Why didn't they build the camera for 3200K instead? Then I could add an 85 during daylight takes, which makes much more sense in my world. But maybe my world is small and strange.

 

andreas


Why didn't they build the camera for 3200K instead? Then I could add an 85 during daylight takes, which makes much more sense in my world. But maybe my world is small and strange.

 

Could it be that on film you tend to always shoot with tungsten stock?

I almost always do....

