RAW... What is it really?


Hello all.

 

I know this is not a film stock or processing topic... but it's the digital equivalent.

 

I know what RAW shooting does as well as the next guy, but I am embarking on writing a very in-depth article on what it REALLY is, down to the technical mumbo jumbo. I am writing this article in an attempt to completely understand RAW myself and to help educate others who don't quite understand what it does for the cinematographer, and how it's not necessarily a 'do anything' process.

 

I am also exploring different types of RAW shooting, as it differs from camera to camera... I'll mainly be focusing on the Genesis and RED, although I'll be exploring still photography quite a bit as well.

 

I think many people in the indie world have the wrong impression of what RAW is, and almost rely on it as an excuse or crutch to cut corners, rather than treating it like its own digital 'stock'. When somebody tells me to light 'flat' and fix the lighting in post... clearly many are misinformed.

 

I am asking anyone here to point me to articles that explain the advantages and disadvantages, as well as the technical details of how RAW is pulled off a sensor.

 

All information is helpful, no matter how basic or advanced.

 

I thank you all in advance.

 

Best,

 

Ryan


  • Premium Member

You have to be a bit careful, because while "raw" customarily referred to unprocessed image data from (usually) Bayer filter array sensors, the guys at Red have cunningly redefined it to mean "Bayer data that is extremely heavily compressed". This is, at the very least, a highly questionable use of terminology. Also, not only does the Genesis not give you raw output, but the output would be in a rather different format if it did, since the Genesis is not strictly a Bayer camera. It does something vaguely similar, but processes the data down to RGB prior to output, so the distinction is irrelevant to anyone who isn't the designer.

 

The only camera currently offering what I'd call raw is the Arri D21.

 

P



Yes. I very much agree with you!

 

This is one of the main reasons I'm focusing on the issue... thanks for the help. I was pretty sure the Genesis gave you something like a RAW format... I spoke with one of the lead visual effects people on 'Superman Returns' when they did a presentation at my college years back. They showed us clips of the 'un-processed' image from the Genesis, and it looked like that washed-out, desaturated look I associate with RAW video.

 

This is why I'm jumping in!

 

Best,

 

-Ryan


Question for fellow skeptics: how dissimilar is RED's full debayering from software "upconversion"? It seems like both are filling in the blanks between pixels.

 

I obviously have an opinion, but I don't really know for sure.


The only camera currently offering what I'd call raw is the Arri D21.

 

Not the only one. The Phantom records uncompressed RAW, and the Silicon Imaging 2K allows uncompressed recording as well. And although it's no longer built or offered, so did the Dalsa Origin.

 

In some ways, the Viper would also qualify, even though it's a three-sensor RGB device, in that the signals are not processed in camera in Filmstream mode.


  • Premium Member

Ah, I didn't know that the SI2K did uncompressed; I thought it was all CineForm.

 

"raw" is as far as I know derived from the DSLR term, so to me it implies bayer data and I'd have trouble applying it to Viper. You might just as well apply it to an F900 with a particularly pumped up gamma curve in it for all the practical difference it makes. There's a difference between "raw" and "not intended for unmodified viewing".

 

Also:

 

How dissimilar is RED's full debayering from software "upconversion"?

 

Not very.

 

It seems like both are filling in the blanks between pixels.

 

Yes.

 

The algorithms are rather different, and there are advantages and disadvantages which affect the mathematical precision of the result in each case, but I suspect the question you're asking is "are they making things up in order to be able to call it 4K", and the answer is yes they are.
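
For illustration, here is a minimal sketch of the "filling in the blanks" that a software upconversion does: a bilinear 2x upscale that invents the in-between pixels by interpolation. This is purely illustrative; no vendor's actual scaler works this simply.

```python
# A sketch of what software "upconversion" does: invent in-between
# pixels by interpolating the ones you have. Purely illustrative.
import numpy as np

def upscale_2x_bilinear(img: np.ndarray) -> np.ndarray:
    """img: H x W (x C) array. Returns 2H x 2W (x C), linearly
    interpolating the new in-between samples."""
    h, w = img.shape[:2]
    # Sample positions of the output grid, in input coordinates.
    ys = np.linspace(0, h - 1, 2 * h)
    xs = np.linspace(0, w - 1, 2 * w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    fy = (ys - y0)[:, None]; fx = (xs - x0)[None, :]
    if img.ndim == 3:                      # broadcast over color channels
        fy = fy[..., None]; fx = fx[..., None]
    top = img[np.ix_(y0, x0)] * (1 - fx) + img[np.ix_(y0, x1)] * fx
    bot = img[np.ix_(y1, x0)] * (1 - fx) + img[np.ix_(y1, x1)] * fx
    return top * (1 - fy) + bot * fy
```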

 

P


Question for fellow skeptics: how dissimilar is RED's full debayering from software "upconversion"? It seems like both are filling in the blanks between pixels.

 

I suspect it's one of those questions that's a bit like "how long is a piece of string"!

 

I'm not much of an expert on this, but I get the impression the major difference is that software upconversion starts with the image already taken by the camera. It tries to do intelligent things based on this data to create an image that appears to be higher resolution.

 

Things like the Red and the HVX presumably work with data directly off the camera head, the CMOS or CCD itself. As such there may be more data to work from at that stage. For example, the data at the head of the HVX might be 4:4:4, whereas the information it has to deliver to the codec is only 4:2:2. It also has three different chips it can talk to in order to make a better guess at the values. As such it has more information to guess from, so it can make a more intelligent guess at what the missing resolution might be. I'm guessing Red must do something like this too, except it only has a single CMOS chip and is supposed to deliver 4:4:4 data, so I'm not sure what it gets up to.
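
As a rough illustration of the 4:4:4-to-4:2:2 step described above, here is a minimal sketch, assuming a YCbCr image held in a numpy array with an even width. The function name and the simple pair-averaging are illustrative, not any camera's actual pipeline.

```python
# A minimal sketch of 4:4:4 -> 4:2:2 chroma subsampling: keep luma at
# full resolution, halve the chroma horizontally. Assumes even width.
import numpy as np

def subsample_422(ycbcr: np.ndarray):
    """ycbcr: H x W x 3 array. Returns (Y, Cb, Cr) with half-width chroma."""
    y  = ycbcr[..., 0]                       # luma: one sample per pixel
    cb = ycbcr[..., 1]
    cr = ycbcr[..., 2]
    # Average each horizontal pair of chroma samples (4:4:4 -> 4:2:2).
    cb_422 = (cb[:, 0::2] + cb[:, 1::2]) / 2.0
    cr_422 = (cr[:, 0::2] + cr[:, 1::2]) / 2.0
    return y, cb_422, cr_422
```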

 

love

 

Freya


  • Premium Member
How dissimilar is RED's full debayering from software "upconversion"? It seems like both are filling in the blanks between pixels.

 

Yes, both are filling in some blanks. The blanks are a little different in the two cases.

 

Suppose what we want is full 4K RGB 4:4:4. That means that for each of four thousand places across our picture, we have a complete set of three numbers representing Red, Green, and Blue.

 

If we start with 2K full RGB, that's two thousand complete sets of three colors across, so we have to create new in-between sets of three numbers for each new pixel we want. (Vertically we also have to create complete new in-between rows).

 

If we start with "4K" Bayer, what we have is four thousand places across our picture, but we have only one color for each place, not three. (One row alternates Red-Green, the next Green-Blue.) To get from one color to three for each position, the math has to supply the other two colors.

 

So, both are in-betweening problems, but of different kinds -- apples and oranges.
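
To make the Bayer in-betweening concrete, here is a toy bilinear debayer for an RGGB mosaic. Real cameras and debayer software use far more sophisticated, adaptive algorithms, so treat this as a sketch of the problem, not of any product.

```python
# A toy bilinear debayer: each photosite knows one color, and the
# other two are averaged from the nearest neighbours of that color.
import numpy as np
from scipy.ndimage import convolve

def debayer_bilinear(mosaic: np.ndarray) -> np.ndarray:
    """mosaic: 2-D float array with an RGGB Bayer pattern.
    Returns an H x W x 3 RGB image via bilinear interpolation."""
    h, w = mosaic.shape
    r = np.zeros((h, w)); g = np.zeros((h, w)); b = np.zeros((h, w))
    r[0::2, 0::2] = mosaic[0::2, 0::2]   # red photosites
    g[0::2, 1::2] = mosaic[0::2, 1::2]   # green photosites on red rows
    g[1::2, 0::2] = mosaic[1::2, 0::2]   # green photosites on blue rows
    b[1::2, 1::2] = mosaic[1::2, 1::2]   # blue photosites
    # Interpolation kernels: green has twice the samples of red/blue,
    # so its kernel differs.
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
    return np.dstack([convolve(r, k_rb), convolve(g, k_g), convolve(b, k_rb)])
```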

 

 

 

 

-- J.S.


I really dig these comments.

 

What is RAW... and what defines it? These are things I am also interested in. It is different with every camera and is rarely truly RAW... which is basically saying it's just a certain type of light compression. Right?

 

As for the HVX and DVX... there used to be someone who would modify the DVX100 with a product called 'Andromeda'. He would tap into the signal from the CCDs before it went to compression and tape, and ended up getting a massive gain in resolution, 4:4:4 color space, and higher dynamic range. The data was transferred through an Ethernet cable to a computer.

 

Does that count as RAW? It serves many of the same purposes.

 

This is just an example of how the future, in my opinion, is moving toward image capture without the 'standard' camera compression or 'processing' we have today. The future is either capturing RAW and applying the processing later, or super camera/computers which will offer the same level of manipulation, determined ahead of time, letting the DP have digital emulsions to his/her liking. Basically scene file settings with much greater range and more customizable choices. We will more or less learn to light by digital emulsions rather than by different camera models.

 

But does that count as RAW as well? For instance, if a camera were capable of RAW like the RED, but the 'digital processing' which we currently add in post could be applied with the same degree of manipulation (but predetermined) on set, is that RAW?

 

RAW has to be processed sometime... so eventually, when a DP can do pre-production camera tests with RAW video, find the gamma curve and additional digital processing he/she wants, and then set them in a camera which can successfully apply these manipulations straight to the footage, that is what we will do, right? Is that RAW when it's done in camera?

 

If not, then RAW = doing your look in post, versus picking a medium (a digital emulsion), knowing how you chose it to reproduce the image, and then lighting for it.

 

If my previous hypothetical is correct... if future camera technology allows a cinematographer to set almost limitless manipulations to get his/her look in camera, applying the same manipulations you would otherwise do in post anyhow, then I'm interested in what the future of RAW holds. (Unless camera manufacturers try to hold down spending and simply make cameras that only shoot RAW, instead of pioneering great in-camera manipulation capability driven by pre-production LUTs done on computers.)

 

Of course there are always instances where obscure reasoning and special circumstances call for unusual tactics... therefore I am interested in continuing to explore what uses and exclusive advantages RAW can hold, such as whether it assists VFX shots, as it may. I don't know. If it does, then maybe the future of RAW is reserved for special shots, much like the film industry shoots SFX or VFX plates on VistaVision or other large formats.

 

RAW is a strange and much too ambiguous creature, at least for me, a young cinematographer trying to secure my future.

 

You can see how scatterbrained and excited I am about this. Thanks for your help so far. I hope I can answer my questions as fast as I'm finding them... because they are really stacking up.

 

I hope I am being clear. :)

 

-Ryan


  • Site Sponsor

"I've spoken with one of the lead visual effects people on 'Superman Returns', when they did a presentation at my college years back. They showed us clips of the 'un-processed' image from the genesis, and it looked like that washed out, desaturated RAW video."

 

 

I think in this statement you are confusing ungraded (probably log-encoded) footage with RAW in the DSLR or D-21 sense. 10-bit log 2K or 4K film scans look similar before being color-timed (graded), but neither the film scans nor the Genesis footage needs to be debayered: the Genesis output already has been, and RGB film scans never needed to be.

 

-Rob-


  • Premium Member

I think there are two senses of "raw" here.

 

There's the simple adjective, which describes something unprocessed or unmodified from an original state, which can be, and routinely is, used to describe original data from any camera system. This use, in the context of digital cinematography, is highly ambiguous, and I avoid it.

 

Then there's raw, often (for some reason) given in uppercase as RAW, used to describe uncompressed camera data, typically from Bayer or similar colour-subsampled sensors, which requires advanced processing before it is even viewable. I think there's probably broad technical agreement on this use.

 

Either way, I'd prefer to see a formalisation of this terminology. Describing the Viper as "raw" is arguably accurate English, but has no widely-agreed meaning in a technical sense. Something of that ilk would be better described as "unprocessed", but since there's only really one camera that does it that way, the onus is on the individual to be independently informed of the capabilities of the various systems.

 

This stuff is important because we should use terms which tell us useful things about how equipment works. We should be able to understand that a Bayer-sensor camera will require nontrivial and potentially time-consuming postproduction work to look its best. We should be able to understand that certain other cameras deliver RGB output directly. Muddying the water by claiming "raw" for things that aren't (Red, Viper) just makes us use more words than should be necessary to describe these situations, and when it's done for marketing reasons, to make things seem to be something they're not or capable of more than they are, it wastes everyone's time.

 

P


  • Premium Member
What is RAW... and what defines it, .... it's just a certain type of light compression... Right?

 

It's important to know that the word compression is used with two very different meanings.

 

There's the computer guys' meaning, which is taking a boatload of data and doing loads of clever math on it to produce a much smaller amount of data, from which the original data (lossless), or something close enough in picture quality (lossy), can be recovered by un-doing the math.

 

There's dynamic range compression, which means cramming typically 14 to 16 bits' worth of brightness info into the 8-to-10-bit limit of digital video tape. This is very much like audio compression, where they automatically turn the volume up and down to keep things in a narrower range.

 

As I understand the term, Raw means not using the second type of compression, but rather sending everything that comes from the sensors through to post.
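
As a rough sketch of that second kind of compression, here is an illustrative log curve that crams 14-bit linear sensor values into 10 bits. The curve is made up for the example; it is not any real camera's transfer function.

```python
# Squeezing 14-bit linear sensor values into a 10-bit signal with a
# log curve: a sketch of "dynamic range compression", not a real
# camera curve.
import numpy as np

def encode_log_10bit(linear_14bit: np.ndarray) -> np.ndarray:
    """Map 0..16383 linear code values onto 0..1023 logarithmically."""
    linear = np.clip(linear_14bit, 0, 16383) / 16383.0
    coded = np.log2(1.0 + 1023.0 * linear) / np.log2(1024.0)   # 0..1
    return np.round(coded * 1023.0).astype(np.uint16)

def decode_log_10bit(coded_10bit: np.ndarray) -> np.ndarray:
    """Approximate inverse: recover linear values (with rounding loss)."""
    coded = coded_10bit / 1023.0
    linear = (np.power(1024.0, coded) - 1.0) / 1023.0
    return linear * 16383.0
```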

 

He would tap into the signal from the CCDs before it went to compression and tape, and ended up getting a massive gain in resolution, 4:4:4 color space, and higher dynamic range. Does that count as RAW?

 

Maybe, depending on what was between the CCDs and the place he tapped in. The idea was certainly tending in the direction of raw.

 

For instance, if a camera were capable of RAW like the RED, but the 'digital processing' which we currently add in post could be applied with the same degree of manipulation (but predetermined) on set, is that RAW? Is that RAW when it's done in camera?

 

No, Raw means not doing anything in camera that you don't absolutely have to. Time is money everywhere, and while post isn't cheap, production is much more expensive. I advocate a sort of meta-principle that complexity should migrate away from where time is more expensive to where it's less expensive. By that criterion, Raw is good.

 

If not, then RAW = doing your look in post, versus picking a medium (a digital emulsion), knowing how you chose it to reproduce the image, and then lighting for it.

 

Yes, Raw means making your look in post. This is a good thing, because you have a dailies colorist working for you, covering your tush. We've tried the on-set LUT thing; it's been a disaster. Getting an adequate-quality monitor correctly set up and free of ambient light on location isn't in the cards. If the dailies colorists hadn't thrown out the LUTs, those DPs would have been fired. Establishing LUTs at the post facility during pre-production, using tests, is what works. You can then carry those LUTs to the set for viewing.
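
For what it's worth, a 1D viewing LUT of the kind described here is just a lookup table applied per channel. A minimal sketch, with a made-up S-curve standing in for a real grading LUT:

```python
# A 1-D LUT maps input code values to output code values, applied per
# channel. The curve below is a made-up example, not a real LUT.
import numpy as np

def apply_1d_lut(image: np.ndarray, lut: np.ndarray) -> np.ndarray:
    """image: float array scaled 0..1; lut: N output values for N
    evenly spaced inputs. Linear interpolation between entries."""
    positions = np.linspace(0.0, 1.0, len(lut))
    return np.interp(image, positions, lut)

# Example: a 1024-entry "viewing LUT" adding simple S-curve contrast.
x = np.linspace(0.0, 1.0, 1024)
viewing_lut = 0.5 - 0.5 * np.cos(np.pi * x)
graded = apply_1d_lut(np.random.rand(4, 4, 3), viewing_lut)
```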

 

The choice of film emulsions doesn't limit the range of looks you can get from the negative the way baking in a look on tape does. Any given film stock captures whatever range it does, with no in-camera manipulation, and leaves you to extract what you want from a far wider range than any chip can capture. So, in that sense, film is kinda like Raw on steroids. It gives you more to take into post and the DI than any electronic camera can. Film gives you a choice of several different wide dynamic ranges.

 

 

 

-- J.S.


We've tried the on-set LUT thing; it's been a disaster. Getting an adequate-quality monitor correctly set up and free of ambient light on location isn't in the cards. If the dailies colorists hadn't thrown out the LUTs, those DPs would have been fired. Establishing LUTs at the post facility during pre-production, using tests, is what works. You can then carry those LUTs to the set for viewing.

 

Any guesses as to why so few people (other than you and me, of course) seem to understand that?


  • Premium Member
The choice of film emulsions doesn't limit the range of looks you can get from the negative the way baking in a look on tape does. Any given film stock captures whatever range it does, with no in-camera manipulation, and leaves you to extract what you want from a far wider range than any chip can capture. So, in that sense, film is kinda like Raw on steroids. It gives you more to take into post and the DI than any electronic camera can. Film gives you a choice of several different wide dynamic ranges.

 

 

 

-- J.S.

 

I like.


  • Premium Member

Because that's how it looks! At least, that's the answer I've gotten when trying to explain to certain people on set not to worry that X or Y doesn't look right on the LCD or monitor.

Ahh, if only everyone could understand a waveform...


  • Premium Member
Any guesses as to why so few people (other than you and me, of course) seem to understand that?

 

 

I don't understand this logic at all. I've been shooting a Genesis show and we create LUTs on the fly all the time, on set. We have a beautiful, calibrated 24-inch CRT monitor, a talented DIT, and a dark environment.

We're shooting Panalog, so the only real reason for the LUTs is viewing dailies. I look at it as a quick and easy rough idea to communicate our intentions to studio execs. Everybody seems to get that it's only an approximation and we will time it in post.


  • Premium Member
I look at it as a quick and easy rough idea to communicate our intentions to studio execs. Everybody seems to get that it's only an approximation ....

 

Wow. You have no idea how lucky you are to have execs who get that.

 

 

 

-- J.S.


Wonderful posts!

 

My article will not only cover the technical and programming side of what RAW is in different formats, with its advantages and disadvantages; I am also exploring how it is used, what it means to the production, and its future.

 

I fully understand the process of shooting film, which is why a RAW (do-it-in-post) workflow doesn't scare me. I understand that processing the film negative is an important stage outside of on-set production, in which a cinematographer can use ENR (or a similar type of silver retention), pull, push, or other means of making great and wonderful manipulations to the image... all predetermined with tests, of course.

 

To play devil's advocate: why does this mean we should do our look in post? Film developing and processing can't be done in camera, which is why it isn't. If a cinematographer could eventually do pre-tests with a digital RAW (raw-ish) stock, find his look, then program that determined look into the camera/computer hybrid and shoot it the way intended, on set... I fail to see how someone in post is taking less time, especially if they are just applying the same predetermined look-up table as a starting point, outside of the specialty applications which I'm sure exist. It's not like non-RAW images can't have slight adjustments made by a colorist.

So if you can get your look on set, why shoot the entire film in RAW? To assist the cinematographer, or to ease the job? I'd like to see Wally (who sticks to the photochemical process) or Vittorio be told that getting the look in post is a better, more time-saving process.

On the other hand, I like the idea of building a LUT in pre-production and applying it in post, but if this is the future, I've heard many concerns that the cinematographer's control is in jeopardy because he/she is not kept on the payroll through post. I'm sure the ASC committees are hard at work on a good way of making everyone happy, but this is a concern of mine, and I have not heard anything big since those American Cinematographer supplements about finding the image. (I can't remember the name of the three-supplement series.)

 

These are, of course, questions I will be grilling the many people I plan to interview with: professional DPs, camera manufacturers, and colorists. So please take this not as an argument but as an in-depth survey. No one's comments on this thread will be used in my article unless I contact you privately first. But I am loving the feedback.

 

I think I'll save the rest of my ramblings for my notes and research!

 

Thanks again everyone... I knew this was the best place to get going!

 

Best,

 

-Ryan


John... quick follow-up question on 4K Bayer... My understanding of Bayer is that the RGB pixels are scattered across the chip, and that the "gap" between different blue pixels on the same line needs to be "filled in." Are you saying a 4K image is taken and it's just a color shift happening from line to line? Meaning that on a 4K chip, my understanding is that it's 1K blue, 1K red, and 2K green that somehow mathematically combine to 4K, despite 2K being the highest per-color count?

 

The reason I suspected it's more akin to a software "upconversion" is the render time involved. If it's just a color shift, then that doesn't account for why it takes something like a 15:1 ratio to debayer footage to 4K. Are you saying a full 4K image is captured line by line, but with alternating color samples?


  • Site Sponsor

I think the general rule of thumb is that you can get about 2/3 of the monochrome (i.e. 4K) resolution of a Bayer-mask sensor in actual resolution. So a "4K" Bayer-mask sensor can yield about 2.8K of actual resolution, measured as MTF. This is because the debayer algorithm can take advantage of the less-than-perfect color filtering of the RGB photosites. I think this is not as simple as just an up-rez, and if you look at cameras like the D21, which makes a real-time 1080p output from a 3K Bayer CMOS sensor, you get an idea of real-world specs.
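
The rule of thumb as arithmetic, taking the 2/3 factor as stated above rather than as a law (real figures depend on the debayer algorithm and the subject):

```python
# Effective luma resolution of a Bayer-mask sensor vs. photosite
# count, using the 2/3 rule of thumb quoted above. Illustrative only.
def effective_resolution(photosites_across: int, factor: float = 2.0 / 3.0) -> float:
    return photosites_across * factor

print(effective_resolution(4096))   # ~2731: roughly 2.7-2.8K from a "4K" sensor
print(effective_resolution(3072))   # ~2048: why a 3K sensor yields about 2K/1080p
```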

 

Film scans from a scanner like a Northlight have an individual photosite for each RGB channel, so a 4K scan is actually 4K for each channel: 4K R, 4K G, and 4K B, which is a "real" 4K image... plus most good scanners operate from an over-scanned image, i.e. the 4K image from a Northlight 2 is derived from an 8K sensor.

 

-Rob-


  • Premium Member
I think the general rule of thumb is that you can get about 2/3...

 

On a desaturated, broadband subject, yes.

 

the debayer algorithm can take advantage of the less-than-perfect color filtering

 

Precisely.

 

This is why people shooting monochrome test charts get good results out of Red, though I'd like to verify these better-than-3K numbers personally. Shoot a chart that goes black/white/black/white and you're OK.

 

It works somewhat less well on charts that go black/red/black/red, depending exactly where they decided to chop the filters and exactly what assumptions are being made by the debayer software about how saturated the subject is likely to be. DSLRs have been shown to make major mistakes in this sort of thing, particularly where the subject has edges of high saturation against low saturation. As you get with, oh, I don't know - greenscreen?

 

This is one reason I'm suspicious of all this stuff.

 

P


  • Site Sponsor
This is one reason I'm suspicious of all this stuff.

 

P

 

 

Most of the Bayer D-Cine stuff I have seen (Slumdog, Knowing) seems either soft or has bad color, or both... The Viper still makes the best D-Cine image, IMO, when it is run within its envelope and post is done right.

 

Hurt Locker looked awesome in S-16, though, and the Phantom high-speed material mostly looked great...

 

-Rob-


  • Premium Member
I understand that the film processing stage of any film negative is an important stage outside of the on-set production, in which a cinematographer can use ENR (or likewise type of silver retention), pull, push, or other means of making great and wonderful manipulations to the image...

 

Yes, but things like bleach bypass/silver retention are just one way to get to the same place. We had an MOW once that used bleach bypass for one sequence. They made a mistake, and one scene for that sequence was accidentally shot normally. We had the final colorist time the bypass stuff the way the DP wanted it, then see if he could match the mistake rolls to the bleach bypass. He nailed it. Problem solved.

 

The same color timing power is available to features. All the major studios have had a DI in their pattern budgets for a few years now.

 

.... then program that determined look into the camera/computer hybrid and shoot it the way intended, on set.... I fail to see how someone in post is taking less time,

 

Baking in something close to the final look on set is not an unreasonable approach; we have shows doing it now. It makes sense for shows that are recording on HD tape, where you have only 10 bits of depth. They do, though, take care not to go all the way in the direction they want, so they don't tie their hands in final timing.

 

A dailies colorist in post generally turns out a set of dailies in 4 - 6 hours. A DIT on set is on the clock for the whole production day, 14 - 18 hours. Payroll is the biggest cost on any show, so that's a significant amount of money. It gets even more significant if the more complicated shooting setup results in more time waiting on set.

 

Every show goes through a final color timing in post. You have to, in order to get exact matches across the cuts. Trying to do it all in dailies would be like making a jigsaw puzzle by first cutting up the blank board, then painting a picture on the individual pieces in hand rather than all assembled.

 

 

 

-- J.S.


  • Premium Member
John... Quick follow up question on 4K-Bayer... My understanding of bayer is that the RGB pixels are scattered across the chip and that between different blue pixels on the same line, the "gap" between them needs to be "filled in." Are you saying a 4K image is taken and it's just a color shift happening from line to line? Meaning that on a 4K chip, my understanding is that it's 1K blue, 1k red and 2k green that some how mathematically combines to 4K, despite 2K being the highest resolution? ...

 

The highest resolution is pretty close to 4K, because each R, G, or B photosite will pick up some information about the image (except where the subject is pure red, green, or blue).

 

But because color information is mosaiced across the 4K grid, color information from nearby pixels/photosites is "blended" to create a coherent image. The net result is supposedly an image of about 3.2K resolution.

