
RED ASA question URGENT! Need answered by tomorrow at 5pm!


Kyle Shapiro

Recommended Posts

I am shooting on the RED Camera tomorrow and wanted to know what ASA would most replicate the look of film stocks:

 

Kodak Vision 500T 5279 (for interiors)

&

Eastman EXR 100T 5248 (for exteriors)

 

Any information would be greatly appreciated. I need this answered by 5pm Pacific time tomorrow (4/3/09). Thanks in advance!

 

Due to urgency please feel free to e-mail me at kyle_shaps@yahoo.com

Edited by Kyle Shapiro


I don't think changing the RED's ASA would do anything to replicate a stock's look. That would be more about contrast and color reproduction in post.

That being said, the RED is natively 320 ASA, so plan on NDs outside and/or some deep stops.


-- there is no setting on the camera that replicates the look of specific film stocks.

Yes there is!!

... isn't there...?

Maybe that was build 21...

sorry...

Edited by Keith Walters

-- there is no setting on the camera that replicates the look of specific film stocks.

 

Indeed, that's the whole genius of the "raw" thing -- all it does on set is compress and record the data as it comes from the chip. Looks you can do in the comfort and convenience of a post facility, not while the cast and crew are on the clock. All you have to do is not crush the blacks or blow out the whites.

 

And yes, get the "hot mirror" ND's from Formatt or Tiffen.

 

 

 

 

-- J.S.


Realize that the camera is daylight balanced. I think the sweet spot is 320 ASA. I'd shoot it like reversal, so you don't want to overexpose it. When shooting interiors with tungsten lights you need to gel your lights with CTB or use a blue filter (80B, C, or D) on the camera. If you filter the camera, your effective ASA is now 160. Trying to color correct electronically in the camera isn't a good solution.

 

Also, IR issues are very real. If you are shooting in bright daylight and ND yourself down to a wide-open stop, you may experience IR exposure issues. The IR isn't affected by standard ND, so if your base stop was f/16 and you ND'd down to f/1.4, the IR would still be exposing at f/16.
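To put rough numbers on that, here is a minimal back-of-the-envelope sketch in Python. The 2% IR leakage figure and the filter densities are assumed, illustrative values only; the point is simply that a plain ND cuts the visible light but not the IR, so the IR share of the recorded exposure climbs with every stop of ND.

```python
# Illustrative only: a standard ND attenuates visible light by its rated density
# but passes most IR, so the IR fraction of the exposure grows as ND is added.

def stops_from_density(nd_density):
    """Convert ND optical density (0.9, 1.2, 1.8...) to stops of attenuation."""
    return nd_density / 0.3   # each 0.3 of density is one stop

visible_light = 1.0   # arbitrary units reaching the sensor with no filtration
ir_leak = 0.02        # assumed 2% IR contamination with no ND

for density in (0.0, 0.9, 1.2, 1.8):
    visible = visible_light / (2 ** stops_from_density(density))  # ND cuts the visible light
    ir = ir_leak                                                  # plain ND passes IR nearly unchanged
    print(f"ND {density:.1f}: IR is {ir / (visible + ir):.0%} of the recorded exposure")
```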


[off-topic...]

 

And yes, get the "hot mirror" ND's from Formatt or Tiffen.

Perhaps it was just the particular filter I had, or the phase of the moon or something, but the last show I used Tiffen ND filters on, I had terrible problems with soft shots. We did a test and found that the Tiffen actually diffused the shot a little.

 

Switching over to a spare set of Schneiders solved the problem. Put a bad taste in my mouth for Tiffen, but I know that they produce quality glass, so I'm not sure what to think.

Edited by Mike Thorn

I am about to shoot for the first time with the RED.

 

What are the IR issues with the RED?

 

If you're shooting in sunny conditions or under warm tungsten lights and using heavy ND (1.2 or higher), it's recommended you use a Hot Mirror or similar filter to get rid of the IR wavelengths and restore the image to normal.

 

If you search on Reduser, you'll find a couple tests showing the effect. Once you see it, you'll understand why it's important to avoid.


Indeed, that's the whole genius of the "raw" thing -- all it does on set is compress and record the data as it comes from the chip. Looks you can do in the comfort and convenience of a post facility, not while the cast and crew are on the clock. All you have to do is not crush the blacks or blow out the whites.

...

John,

 

I'm not so sure about the genius of the uber-metadata approach. It seems more like a necessity: the Red only has so much computational power. It chose to use that power for high image resolution and wavelet-based compression. I don't think the camera could do all that PLUS apply a color grade on the fly (which is what a "baked-in" approach does).

 

Now someone will say, why bake in the look? But the retort is that no matter what's being recorded, an image will be baked in. And what really matters is how far off what's being baked in is from what's wanted as a final look. In the case of the metadata approach, the image from the sensor can vary greatly from what the final look should be. In which case, more work and less flexibility will exist in post when compared with using a properly baked-in image. (I say properly because you can come very close in-camera but blow out the highlights, for instance, and then you're in worse shape than if you had recorded image corrections only as metadata.)

 

Another advantage in-camera image processing has is that it's done pre-compression. While R3D is very high quality, it's not uncompressed; it introduces a baseline error that in-camera processing avoids.

 

Furthermore, in-camera processing allows you to take advantage of dynamic range where you need it most. By being able to adjust gain, gamma, knee, etc., you can maximize the limited range for what's most important for the shot.
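As a rough illustration of what a knee adjustment does (the 0.8 knee point, 0.25 slope, and normalized levels below are made-up numbers, not any camera's defaults), here is a minimal sketch: above the knee point, highlights are compressed so extra scene range squeezes into the recorded signal instead of clipping.

```python
# A generic video-style knee, illustrative only (not any specific camera's curve).

def apply_knee(signal, knee_point=0.8, slope=0.25):
    """Compress signal values above knee_point by the given slope, clipping at 1.0."""
    if signal <= knee_point:
        return signal
    return min(knee_point + (signal - knee_point) * slope, 1.0)

for scene in (0.5, 0.8, 1.0, 1.5, 2.0):
    print(f"scene level {scene:.2f} -> recorded {apply_knee(scene):.3f}")
```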

 

Finally, in-camera processing allows the cinematographer and director greater control over the image. And this is not just an ego issue; it has to do with the fact that during post, metadata can become separated from the image or misinterpreted.

 

I think one of the ways that Red was able to come up with such an economical camera design was to offload all the image processing to post.


Shooting at 250 to 320 ASA in daylight balance is optimal, but it has nothing to do with making it look like 5279 vs. 5248 -- there is no setting on the camera that replicates the look of specific film stocks.

 

Does that mean that the RED allows you to jump through ASAs in the menus?


Does that mean that the RED allows you to jump through ASAs in the menus?

 

The ASA setting on the Red is like the ASA setting on a light meter. It's not like choosing film stocks or adjusting gain on a video camera. It sets where the finder display will show you exposure warnings. The chip is what it is and records the same data no matter what the ASA setting. If you want slower than 320, hang IR blocking ND's. If you want faster, you're out of luck for now.
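To make the light-meter analogy concrete, here is a toy sketch (this is not RED's actual firmware; the 12-bit full scale, thresholds, and warning names are assumed purely for illustration): the raw value written to disk never changes, only where the monitoring path flags exposure problems moves with the ASA setting.

```python
# A toy model of ASA-as-metadata: the stored raw value is constant, and the ASA
# only shifts the exposure warnings shown in the finder. All numbers are assumed.

RAW_CLIP = 4095       # assumed 12-bit sensor full scale
NATIVE_ASA = 320

def finder_warning(raw_value, asa):
    """Return a monitoring warning for an unchanging raw value at a chosen ASA."""
    preview = raw_value * (asa / NATIVE_ASA)   # higher ASA = brighter preview
    if preview >= RAW_CLIP:
        return "highlight warning"
    if preview < RAW_CLIP * 0.01:
        return "shadow/noise warning"
    return "ok"

raw_sample = 2600     # identical on disk at every ASA setting
for asa in (160, 320, 640):
    print(asa, finder_warning(raw_sample, asa))
```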

 

 

 

 

-- J.S.


I'm not so sure about the genius of the uber-metadata approach. It seems more like a necessity: the Red only has so much computational power. It chose to use that power for high image resolution and wavelet-based compression. I don't think the camera could do all that PLUS apply a color grade on the fly (which is what a "baked-in" approach does).

 

No, it's the other way around. Conventional video cameras are forced to bake in to some extent because the recording formats can't handle the full range from the sensor. By recording the full output of the chip, Red makes in-camera color grading irrelevant.

 

Now someone will say, why bake in the look? But the retort is that no matter what's being recorded, an image will be baked in.

 

No, like with film, Red is retaining everything that the light sensing technology it uses can produce. It doesn't need to discard information like a tape camera must. You set the iris and hang the filters you want in order to get started on the way to the final look.

 

In the case of the metadata approach,

 

This may be the fundamental error. There is no "metadata approach". You don't need any metadata at all, any more than you do with film. You can play with it if you want, or you can leave it all for post.

 

the image from the sensor can vary greatly from what the final look should be. In which case, more work and less flexibility will exist in post ...

 

The image on a camera original negative varies greatly from the final look, too. But it gives you far more flexibility in post. A baked in look throws away that flexibility.

 

Furthermore, in-camera processing allows you to take advantage of dynamic range where you need it most. .... Finally, in-camera processing allows the cinematographer and director greater control over the image.

 

No, you have the same dynamic range and control available in both cases. In one case, you're forced to do it on the set, in the other you have it in post.

 

I think one of the ways that Red was able to come up with such an economical camera design was to offload all the image processing to post.

 

That's true.

 

 

 

 

 

-- J.S.


The ASA setting on the Red is like the ASA setting on a light meter. It's not like choosing film stocks or adjusting gain on a video camera. It sets where the finder display will show you exposure warnings. The chip is what it is and records the same data no matter what the ASA setting. If you want slower than 320, hang IR blocking ND's. If you want faster, you're out of luck for now.

 

 

I was just dreaming of the possibility of being able to roll through different ASA's, like on my meter, without having to go through the technical avenue and testing.

 

Cheers John! ;)


No, it's the other way around. Conventional video cameras are forced to bake in to some extent because the recording formats can't handle the full range from the sensor. By recording the full output of the chip, Red makes in-camera color grading irrelevant.

 

...

John,

 

It makes no sense to me that you would argue for having basic image adjustments, like white balance, be done at the post production stage where data quality is lower. Wouldn't you want white balance calculation to be done at the sensor level during the analog to digital quantization? That is when data is truly off the sensor.

 

White balance compensation in post is performed upon an image that has undergone three stages of processing: 1) analog-to-digital quantization at the sensor level, 2) compression to .R3D, and 3) demosaicing.

 

I can understand you arguing against trying to lock down a look in-camera (even if I tend to disagree). But basic image adjustments, like white balance, I see no good reason to leave for post. Leaving everything for post seems more like a philosophy than a technical advantage.

 

In the case of white balance, all it does is add another stage of processing to post, and it's never going to look better than if the math was done in-camera.

 

BTW, the film negative analogy cuts both ways (pun intended, I guess). While a film negative looks much different from the final image, the film stock itself is selected based on color temperature, light intensity, and the desire to create a certain look.
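For concreteness, here is a minimal sketch of what white balance amounts to arithmetically, assuming the usual per-channel-gain model (the patch values and gains are made up, and this is not RED's actual math): the same multiplication can happen on the camera side before compression or on the decoded image in post; the disagreement above is about how much the intervening compression and demosaicing cost.

```python
# White balance as per-channel gains, illustrative numbers only.

import numpy as np

def apply_white_balance(rgb, r_gain, b_gain):
    """Scale red and blue so a neutral grey patch reads equal in all channels."""
    out = rgb.astype(np.float64)
    out[..., 0] *= r_gain
    out[..., 2] *= b_gain
    return out

tungsten_grey = np.array([[[180.0, 128.0, 70.0]]])   # warm-looking grey off a daylight sensor
print(apply_white_balance(tungsten_grey, r_gain=128/180, b_gain=128/70))
# -> roughly [[[128, 128, 128]]]; whether this multiply happens before or after
#    compression determines how much rounding and compression error it inherits.
```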


I made a test with the RED, and I rate it on my light meter at 125 ASA for tungsten light and 160 ASA for daylight. Of course you can use other sensitivities, but you lose detail in the shadows and increase noise. As you know, the 320 ASA in the camera is just metadata; it is not the nominal or effective sensitivity. Nominal or effective sensitivity depends on the signal-to-noise ratio and the saturation point of both the sensor and the electronics.

 

Alfonso Parra AEC

www.alfonsoparra.com


John,

 

It makes no sense to me that you would argue for having basic image adjustments, like white balance, be done at the post production stage where data quality is lower. Wouldn't you want white balance calculation to be done at the sensor level during the analog to digital quantization? That is when data is truly off the sensor.

 

White balance compensation in post is performed upon an image that has undergone three stages of processing: 1) analog-to-digital quantization at the sensor level, 2) compression to .R3D, and 3) demosaicing.

 

I can understand you arguing against trying to lock down a look in-camera (even if I tend to disagree). But basic image adjustments, like white balance, I see no good reason to leave for post. Leaving everything for post seems more like a philosophy than a technical advantage.

 

In the case of white balance, all it does is add another stage of processing to post, and it's never going to look better than if the math was done in-camera.

 

Peter, you seem to not understand RAW and metadata. Doing it in camera or doing it in post (at least with the RED) makes 0% quality difference. The RED's sensor is fixed at 5000K. The RAW is RAW data (although compressed). Actually, in a sense, the color temp is fixed in the RED since it's natively 5000K. The thing with the RED (and any other RAW camera) is that if you preset the color temp, then it wouldn't be raw.

 

Matthew


Peter, you seem to not understand RAW and metadata. Doing it in camera or doing it in post (at least with the RED) makes 0% quality difference. The RED's sensor is fixed at 5000K. The RAW is RAW data (although compressed). Actually, in a sense, the color temp is fixed in the RED since it's natively 5000K. The thing with the RED (and any other RAW camera) is that if you preset the color temp, then it wouldn't be raw.

 

Matthew

Matthew,

 

With Red, you simply can't set the white balance inside the camera, so saying "Doing it in camera or doing it in post (at least with the RED) makes 0% quality difference" makes little sense. You can't compare something that can't be done with something that can be done and say they are the same.

 

RAW is not raw data. It is lossily compressed. Information is lost when the image is recorded to the .R3D format. So white balance adjustment applied in post is done after lossy compression and demosaicing of the image.

 

There is this belief that the RAW file contains the "pure" Bayer image from the sensor. But this obfuscates the fact that the sensor itself does some image processing. A CMOS sensor, at the very least, has to convert the voltages into digital values, i.e. quantization. It also does some noise reduction and employs an OLPF (optical low-pass filter). Simply put, pure RAW, even before lossy compression, isn't truly pure. My argument is that basic image adjustments like white balance and gain should be performed at the sensor level, where the data is at its highest fidelity.
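Since demosaicing keeps coming up, here is a minimal sketch of what that step is, assuming a simple bilinear-style interpolation over an RGGB Bayer layout (purely illustrative, not RED's proprietary demosaic): each photosite stores one color sample, and the two missing colors at every site are estimated from neighboring samples.

```python
# A minimal bilinear-style demosaic over an assumed RGGB Bayer pattern,
# illustrative only: missing color samples are averaged from sampled neighbors.

import numpy as np
from scipy.signal import convolve2d

def demosaic_bilinear(bayer):
    """bayer: 2D float array in an RGGB layout. Returns an HxWx3 RGB image."""
    h, w = bayer.shape
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    rgb = np.zeros((h, w, 3))
    kernel = np.ones((3, 3))
    for channel, mask in enumerate((r_mask, g_mask, b_mask)):
        sampled = np.where(mask, bayer, 0.0)
        # Average each missing value from the sampled neighbors around it.
        total = convolve2d(sampled, kernel, mode="same")
        count = convolve2d(mask.astype(float), kernel, mode="same")
        rgb[..., channel] = np.where(mask, bayer, total / np.maximum(count, 1.0))
    return rgb

mosaic = np.arange(16, dtype=float).reshape(4, 4)   # a tiny synthetic mosaic
print(demosaic_bilinear(mosaic).shape)              # -> (4, 4, 3)
```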

 

The cameras that truly create digital negatives are the Viper and the Dalsa, because they record their very-high-dynamic-range sensors uncompressed at very high color depths. In their cases, in-camera image correction versus doing it in post probably wouldn't make any difference.

 

So it's not the concept of recording raw that's the problem, it's that there is raw and there is RAW, and they are not the same.

 

Now I'm sure Red's argument is that uncompressed recording is very impractical, and I agree, FWIW. But I also think that some hybrid of metadata and in-camera processing would be best for Red.

 

For example, people have been complaining about Red's low-light performance vis-à-vis its otherwise very nice image. I would argue this is a prima facie example of the lack of in-camera, user-adjustable image processing being a detriment. I would imagine that adding gain during the quantization and stretching the black area would increase the camera's low-light performance more than lifting the shadows would in post.

 

Almost anything is about tradeoffs. I can sit in front of my computer and come up with an improvement to the Red. Even if I'm right, big deal. The creators had to come up with a design philosophy and make choices. I just believe that the metadata-only approach was a philosophy and mantra for simplification more than it was practical or even optimal.


Matthew,

 

With Red, you simply can't set the white balance inside the camera, so saying "Doing it in camera or doing it in post (at least with the RED) makes 0% quality difference" makes little sense. You can't compare something that can't be done with something that can be done and say they are the same.

 

RAW is not raw data. It is lossily compressed. Information is lost when the image is recorded to the .R3D format. So white balance adjustment applied in post is done after lossy compression and demosaicing of the image.

 

There is this belief that the RAW file contains the "pure" Bayer image from the sensor. But this obfuscates the fact that the sensor itself does some image processing. A CMOS sensor, at the very least, has to convert the voltages into digital values, i.e. quantization. It also does some noise reduction and employs an OLPF (optical low-pass filter). Simply put, pure RAW, even before lossy compression, isn't truly pure. My argument is that basic image adjustments like white balance and gain should be performed at the sensor level, where the data is at its highest fidelity.

 

The cameras that truly create digital negatives are the Viper and the Dalsa, because they record their very-high-dynamic-range sensors uncompressed at very high color depths. In their cases, in-camera image correction versus doing it in post probably wouldn't make any difference.

 

So it's not the concept of recording raw that's the problem, it's that there is raw and there is RAW, and they are not the same.

 

Do you really think baking the color temp into the RAW data is going to make all that much of a difference? I mean, the sensor is locked at 5000K, so before or during post, the image is going to be stretched. Look at shooting at 3200K: it's just slightly more noisy. Would there be a difference if that correction was done before the data was compressed? I doubt there would be very much. If there was, why wouldn't they have designed the camera to do that in the first place? I mean, there is a valid complaint that the RED takes a hit in tungsten light.

 

You really can't say that any camera is truly RAW then. Both the Viper and the Dalsa have OLPF's like the RED, plus it sounds like they are changing bits of data before they store it in their "RAW" format.

 

 

For example, people have been complaining about Red's low-light performance vis-à-vis its otherwise very nice image. I would argue this is a prima facie example of the lack of in-camera, user-adjustable image processing being a detriment. I would imagine that adding gain during the quantization and stretching the black area would increase the camera's low-light performance more than lifting the shadows would in post.

 

People are complaining about the RED's low light performance? In comparison to what camera? I've shot some stuff at T3.1, ASA 1000 (so 320 native pushed to 1000) and it has about the same amount of noise as my JVC 110U HD has normally. Now, if you were to compare that to 500ASA film stock pushed 2 stops, the film might win. It's a question of how much noise before you think it degrades your image.

 

And yes, I do like to keep things as high quality as possible. But not all compression is bad. There is useless information being recorded when you shoot in a 4:4:4 codec. It's also why REDCODE is not a constant-bitrate codec. They knew some scenes needed more information and some needed less.

 

Matthew



In part we're getting bit by history here. Some of this stems from how we've dragged the analog vacuum tube concepts of gain and white balance forward into the digital world. Suppose we set up a Red, an F-900, and a TK-41 side by side:

 

All three cameras have lenses followed by OLPF's. I'm not sure if they tried to filter only the vertical axis on the TK-41, since horizontal was basically analog. (I've e-mailed an expert for more detail on the TK-41).

 

On the Red, the light now passes through tiny red, green, and blue filters on the individual photosites on the CMOS chip.

On the F-900, the light is separated out by dichroics in a prism block and sent to three separate CCD chips.

On the TK-41, the light is separated out to three image orthicon tubes; I'm not sure how they got the colors.

 

Both chip cameras convert photons to electrons in much the same way. Each photosite "well" clips hard if it gets filled up, and at the other end its dynamic range is limited by noise. On the Red, everything gets quantized and digitized right there at the individual photosites. On the F-900, the charges are shifted to the edge of the chip for the A to D step. Tubes I don't know much about, but eventually we have analog component RGB going to three video amplifiers. Amplifiers have gain, which is where we got that word, and DC offset. I'm pretty sure that they made careful adjustments of the three gains and three offsets to get a white balance.

 

But our chip cameras have already digitized. How can they adjust anything now? The answer is that they digitize at a greater bit depth than they'll finally output. That allows them to re-sample digitally without losing quality.
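A toy example of that headroom idea, with hypothetical bit depths (14-bit internal, 10-bit output; these are not any specific camera's numbers): applying a digital gain while the data is still at the higher bit depth preserves distinctions that are lost if the same gain is applied after truncation.

```python
# Hypothetical 14-bit internal / 10-bit output pipeline, illustrative only.

def gain_then_truncate(value_14bit, gain, out_bits=10):
    """Apply gain at full internal precision, then reduce to the output depth."""
    scaled = min(int(value_14bit * gain), 2**14 - 1)
    return scaled >> (14 - out_bits)

def truncate_then_gain(value_14bit, gain, out_bits=10):
    """Reduce to the output depth first, then apply the same gain."""
    truncated = value_14bit >> (14 - out_bits)
    return min(int(truncated * gain), 2**out_bits - 1)

# Two adjacent shadow values under a 4x digital gain:
print(gain_then_truncate(40, 4.0), gain_then_truncate(44, 4.0))   # 10 and 11: still distinct
print(truncate_then_gain(40, 4.0), truncate_then_gain(44, 4.0))   # 8 and 8: collapsed together
```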

 

In the case of the F-900, the camera "knows" what white should look like, or more accurately, what its digital output would be if the picture was all white. So,

 

 

.........

 

Gotta run, will edit later


Do you really think baking the color temp into the RAW data is going to make all that much of a difference? I mean, the sensor is locked at 5000K, so before or during post, the image is going to be stretched. Look at shooting at 3200K: it's just slightly more noisy. Would there be a difference if that correction was done before the data was compressed? I doubt there would be very much. If there was, why wouldn't they have designed the camera to do that in the first place? I mean, there is a valid complaint that the RED takes a hit in tungsten light.

 

I have no doubt that in most cases there would be hardly any difference between applying WB in-camera or in post. Whether there are cases where it would make a noticeable difference, IDK. Because Red's CMOS sensor spits out "only" 12-bit color and RAW is 12-bit color, I have to believe the only things avoided by going in-camera would be compression and demosaicing.

 

 

You really can't say that any camera is truly RAW then. Both the Viper and the Dalsa have OLPF's like the RED, plus it sounds like they are changing bits of data before they store it in their "RAW" format.

 

Except the Viper and the Dalsa record truly uncompressed data after very high levels of quantization. In the case of the Dalsa, I believe the CCD is quantized to 16 bits. With the Viper I believe it's 12 bits, but they do something else that Red doesn't, and it directly impacts dynamic range.

 

 

People are complaining about the RED's low light performance? In comparison to what camera? I've shot some stuff at T3.1, ASA 1000 (so 320 native pushed to 1000) and it has about the same amount of noise as my JVC 110U HD has normally. Now, if you were to compare that to 500ASA film stock pushed 2 stops, the film might win. It's a question of how much noise before you think it degrades your image.

 

You mention above that the sensor is "locked in." But that doesn't mean it's locked in at the correct level for all shooting conditions. What a camera like the F23 does is adjust how the initial quantization is done according to what the user wants and conditions dictate. Adjusting at the quantization level is actually a lot like pushing film.

 

Plus, it's not just about amplifying the signal; a sensor is sensitive to higher levels of light than are recorded as pure white. But some level has to be called pure white, and at some point the noise gets too great, so anything above the determined pure-white level is clipped.

 

So that perfectly horizontal line at the top of the waveform showing clipping is not there in the initial analog signal produced by the sensor. It is a result of the quantization from analog to digital. But if you really want the extra highlight info, you can get it by adjusting the quantization, i.e. setting the white level to a higher point. This will give you a much more workable image than if you just lowered the highlights in post.

 

The same goes for the blacks: there's analog info at lower black levels, but it's clipped before quantization. If this info were amplified before quantization, you would have better black detail than if you tried to raise, in post, the blacks that made it through the initial clipping done on the chip.
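A toy model of the highlight side of that argument, with arbitrary numbers (a normalized analog scale and a 12-bit converter, purely for illustration): analog levels above the A/D converter's white point all collapse to full scale, so raising the white point at quantization time keeps detail that no amount of post correction can recover.

```python
# Arbitrary, illustrative numbers: highlight detail above the default white
# point is clipped at quantization and cannot be recovered in post.

def quantize(analog_level, white_level, bits=12):
    """Map a normalized analog level into digital code values, clipping at full scale."""
    codes = 2**bits - 1
    return min(round(analog_level / white_level * codes), codes)

highlight = 1.3   # 30% above the default white point of 1.0
print(quantize(highlight, white_level=1.0))   # 4095: clipped, detail gone
print(quantize(highlight, white_level=1.5))   # 3549: highlight detail survives
```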

 

What the Viper does is essentially say, "You want all the data? Here it is," as it can record the full output of its three CCD sensors. It uses log-based color, which diminishes noise in the highlights and gives more color steps to the shadows and the mids. Now, CCDs generally have wider dynamic range than CMOS chips do, so I don't know how such an approach would work with the Red's chip. But the fact remains that the Viper truly provides what could be called a digital negative. There is nothing you can do in-camera that could help the picture. And this is why you can see images from the Viper that appear to have crazy amounts of dynamic range.
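As a sketch of why log encoding "gives more color steps to the shadows and the mids", here is a generic log curve (the black point, white point, and 10-bit output range are assumed; this is not the Viper's actual transfer function): each stop ends up with roughly the same number of code values, instead of the brightest stop consuming half of them as it would with linear encoding.

```python
# A generic log curve, illustrative only (not the Viper's actual math).

import math

def to_log_code(linear, black=0.001, white=1.0, out_max=1023):
    """Map a normalized linear value to a 10-bit code value on a log curve."""
    linear = max(linear, black)
    return round(out_max * math.log(linear / black) / math.log(white / black))

for stops_down in range(7):
    linear = 1.0 / (2 ** stops_down)       # one stop darker each step
    print(f"{stops_down} stops below white -> code {to_log_code(linear)}")
```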

 

So yes, RAW does record what's coming off the sensor, but after quantization. There are a lot of choices made during quantization which the Red user has no control over. It's my humble opinion that providing that control would yield a better image in some circumstances, and also allow the user to royally screw things up even more.

 

And yes, I do like to keep things as high quality as possible. But not all compression is bad. There is useless information being recorded when you shoot in a 4:4:4 codec. It's also why REDCODE is not a constant-bitrate codec. They knew some scenes needed more information and some needed less.

 

Matthew

 

I agree that uncompressed is unwieldy for the most part. But it isn't only info being lost to compression; there is also info that's clipped off by the sensor during the initial quantization, and the fact remains that amplifying the signal before quantization can be beneficial to black levels.

 

And this whole thread is in many ways a testament to the Red. The fact that it's being compared to cameras many, many times its cost speaks volumes. That said, I've also seen EX1 and even XH-A1 footage cut well with the Red, so it's obviously more about the Indian than the arrow.

