RED production schedule


Carl Brighton


The problem with all the existing 2K HD cinematography systems

 

Except for the Dalsa, there are no "existing 2K HD cinematography systems." Viper, Genesis, Sony F950, and Arri D20 are all HD cameras, and output 1920x1080 images. Besides Dalsa, the only camera that currently exists - sort of - that yields anything beyond that is the Silicon Imaging 2K model, which to my knowledge is not actually released yet.

 

Well neither the Dalsa nor the D-20 has been used on any serious production thus far

 

There is a very large miniseries about the CIA called "The Company" that just wrapped several months of production. The primary camera used was the D20, with film used for much of the overcrank and aerial work. The D20 images are quite impressive, although this is, as I said, a television miniseries, not a feature release (at least not in the US).



"Well neither the Dalsa nor the D-20 has been used on any serious production thus far, although this has been "imminent" for some years now. It can't all be put down to prejudice and conservatism."

 

When I was in London, the guys at Moving Picture Company had just wrapped a pretty large production for SKY called HOGFATHER that was shot on the D20. I don't know how you characterize a "serious" production, but HOGFATHER seemed to have crew members, money, craft service, and call sheets. The MPC people seemed to like the D20 and said they would use it again.

 

Alan Lasky



There's films being shot in Germany with the D20 as well. In fact there was an article in German news magazine 'Der Spiegel' about one of these shoots and the D20. I was a bit surprised actually that it appeared there, since this is their biggest weekly magazine and deals mostly with politics. You can find plenty of D20 info in ArriNews and VisionArri magazines too.


As for recording either the Red or the Dalsa: neither of them is more than about 400 Mbyte/second if you record the raw mosaic. I have a recorder on this desk, right now, that will record that, given an HD-SDI interface for it - and it costs under half what the Red camera does. So it can be done. Obviously, any serious user is going to want it uncompressed.
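A rough sanity check on that figure. The sensor dimensions, bit depth and frame rate below are illustrative assumptions for a ~4K Bayer sensor, not any vendor's published specs:

```python
# Back-of-envelope data rate for recording a raw Bayer mosaic.
# Assumed (hypothetical) figures: 4096 x 2304 photosites, 12 bits
# per photosite, 24 fps - one raw sample per site, no compression.
width, height = 4096, 2304
bits_per_site = 12
fps = 24

bytes_per_frame = width * height * bits_per_site // 8
bytes_per_second = bytes_per_frame * fps

print(f"{bytes_per_frame / 1e6:.1f} MB/frame")   # 14.2 MB/frame
print(f"{bytes_per_second / 1e6:.1f} MB/s")      # 339.7 MB/s - roughly the ~400 MB/s ceiling quoted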

 

Phil

 

Hey there Phil,

 

What sort of recorder is it? Also, have you had any chance to look at the Codex recorders?

 

 

All the best,


Hi there, I'm new to this forum, so I just 'wasted' (i.e. should have been doing something else) two hours reading the rest of this thread, just for fun. Anywhoo - I've a couple of questions, ready to be shot down.

 

1) Re: Resolution - based upon my understanding you could equate a 4K Bayer with a 4K chip. Assume you ignore all colour data; then you could say you are recording 4K's worth of luminance data sites - i.e. you could have a 'genuine' 4K black-and-white image (ignoring luminance lost to the colour filtration). Taking that as a base, you could then add colour data to the luminance data using the one value you have for each site (red, green or blue) and interpolating the other two colours for that site by mathematical jiggery pokery. I'm not saying that's how it is done, but in theory it could work like that. Following this, while the interpolated image wouldn't be identical to an image that recorded the actual luminance/colour info at each site, it would be closer to that than the raw Bayer image would be. Sure, there would be errors and occasional artefacts depending on what you are shooting - highly detailed, bright multicoloured cloth may be a problem (problems it would help a DP to know about) - but surely the important thing is how close it is to the original? So I don't see anything wrong with saying that the RED is higher res than a 2K camera - maybe not true 4K, but close enough for the price tag.
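A toy sketch of the kind of interpolation being described - filling in green at red/blue sites by averaging the four adjacent green samples on an assumed RGGB pattern. Real demosaicing uses far smarter, edge-aware filters; this is just the simplest possible version of the "jiggery pokery":

```python
# RGGB Bayer layout: even rows are R G R G ..., odd rows are G B G B ...
# Green is sampled at every site where (row + col) is odd.
def is_green(r, c):
    return (r + c) % 2 == 1

def interpolate_green(mosaic):
    """mosaic: 2D list of raw sensor values, one number per photosite."""
    h, w = len(mosaic), len(mosaic[0])
    green = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            if is_green(r, c):
                green[r][c] = mosaic[r][c]          # measured directly
            else:
                # estimate from the up/down/left/right green neighbours
                neighbours = [mosaic[rr][cc]
                              for rr, cc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1))
                              if 0 <= rr < h and 0 <= cc < w]
                green[r][c] = sum(neighbours) / len(neighbours)
    return green

mosaic = [[10, 20, 10, 20],
          [20, 30, 20, 30],
          [10, 20, 10, 20],
          [20, 30, 20, 30]]
g = interpolate_green(mosaic)
print(g[1][1])  # blue site at (1,1): average of its four green neighbours = 20.0
```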

 

2) Given that last point - hey, if they pull it off, sweet. It won't be like shooting film (duh), but it is great for low-budget filmmakers to have a camera that shoots really nice stuff and doesn't cost an arm and a leg. To me it's like the jump from DV to HDV - I can't afford to shoot on film, but HDV looks better to me than DV, whilst still being affordable.

 

3) If any of the RED guys still read this - here's a suggestion: have you thought about swapping to something like the Foveon X3 sensor (info here: http://en.wikipedia.org/wiki/Foveon_X3_sensor)? A 'proper' camera with this type of sensor would sound pretty sweet to me, since it is a single-chip design that does not need demosaicing (hence no arguments about actual resolution), is very light efficient, and could potentially allow you to cram more sensor sites onto a 35mm chip area without the aforementioned drop in S/N ratio (I think). Of course, without the need to pay for film, as data storage costs drop, you might just decide to go for a 65mm sensor and use some of those old lenses (or even make some new ones, oooh...)

 

Colin.


based upon my understanding you could equate a 4K Bayer with a 4K chip - assume you ignore all colour data, then you could say you are recording 4K's worth of luminance data sites - i.e. you could have a 'genuine' 4K black-and-white image (ignoring luminance lost to the colour filtration) - taking that as a base you could then add colour data to the luminance data using the data you have for each site (red, green or blue) and interpolating the other two colours for that site based on mathematical jiggery pokery.

 

Well OK, now consider this "thought experiment". You take your 4K "raw" luminance signal from the camera, do a frame grab, then print it out with a black-and-white printer. What you'll see is a monochrome image covered with grey dots which correspond to the shape of the Bayer filters.

 

Now put that back in front of the camera and adjust the framing and focus so that the images of the grey dots line up with their corresponding filter pixels on the camera's sensor. If you could get it precisely right, the camera would then produce the same output from that monochrome image as it would from the original colour scene that produced it, and so it would produce a colour image.

 

But it's not supposed to be producing a colour image because the scene you are shooting (the printout) is a monochrome image!

 

This, in essence, is the "aliasing" problem as applied to single-chip sensors: the same raw electrical output from the sensor can in theory be produced by two entirely different images, and an automated video processor has no way of working out which one is correct...
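The thought experiment can be run in a few lines of Python. Below is a toy 2x2 RGGB sensor with illustrative values: a colour scene and its grey-dot "printout" produce byte-identical raw output, so the sensor genuinely cannot tell them apart:

```python
# Sample a scene of (R, G, B) triples through an RGGB Bayer mask:
# one colour channel is read per photosite.
def bayer_sample(scene):
    raw = []
    for r, row in enumerate(scene):
        raw_row = []
        for c, (R, G, B) in enumerate(row):
            if r % 2 == 0 and c % 2 == 0:
                raw_row.append(R)          # red site
            elif r % 2 == 1 and c % 2 == 1:
                raw_row.append(B)          # blue site
            else:
                raw_row.append(G)          # green site
        raw.append(raw_row)
    return raw

# Scene 1: a saturated colour patch.
colour_scene = [[(200, 50, 0), (200, 50, 0)],
                [(200, 50, 0), (200, 50, 0)]]
raw = bayer_sample(colour_scene)

# Scene 2: the monochrome "printout" - each pixel is a grey dot whose
# brightness equals whatever the sensor sampled at that site.
grey_scene = [[(v, v, v) for v in row] for row in raw]

print(bayer_sample(grey_scene) == raw)  # True: identical raw output from two different scenes
```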


Except for the Dalsa, there are no "existing 2K HD cinematography systems." Viper, Genesis, Sony F950, and Arri D20 are all HD cameras, and output 1920x1080 images. Besides Dalsa, the only camera that currently exists - sort of - that yields anything beyond that is the Silicon Imaging 2K model, which to my knowledge is not actually released yet.

Eh?

I'm sorry; why is the Genesis at least not a 2K digital Cinematography system?

It takes standard Panavision 35mm lenses and other accessories, can record 4:4:4 2K HD, and probably more importantly, several features have been shot with it and subsequently shown in cinemas, occasionally at a profit. OK, 2K resolution may be a bit low-rent, but plenty of film-originated features have been post-produced at that resolution and it doesn't seem to affect their box-office performance.

 

Similarly, the Viper and the 950 have small, but significant cinema release track records.

 

Or are you saying that 1920 and "2K" aren't the same thing? I think they are....


"Now put that back in front of the camera and adjust the framing and focus so that the images of the grey dots line up with their corresponding filter pixels on the camera's sensor. If you could get it precisely right, the camera would then produce the same output from that monochrome image as it would from the original colour scene that produced it, and so it would produce a colour image."

 

Hmmm? True, it would indeed (re)produce the original colour image (though probably much darker, due to the additional light filtration by the Bayer filters) in that rare, difficult and unlikely case. But my original point was not to consider what the chip actually does; it was to argue that resolution is a useful term (in this discussion) only insofar as the 'processed' image of a certain size (4K Bayer) is better or worse at 'accurately' reproducing reality than an image of a smaller size formed from composites of three colour-specific, aligned monochrome images (a 2K 3-chip system). My argument was that some form of intelligent processing can reproduce reality better in that case - not as accurately as a 3-chip 4K system, but better than 3-chip 2K, I suspect.

 

Personally I don't have a problem with intelligent image processing at all - your brain does it all the time, 'filling in' the image information missing from your retina at the point where the retinal ganglia exit the eye as the optic nerve, i.e. your blind spot. In general this processing is pretty good and the gap in vision is not noticeable. If you want to experience what happens when the processing is tricked, have a look at this visual illusion: http://dragon.uml.edu/psych/bspot.html

 

In fact, going back to your 'monochrome image producing colour' argument - that's how all current imaging systems work, is it not? 3-chip cameras produce a colour image by intelligently combining three monochrome images formed by detecting physically aligned, but discrete, photons of different wavelengths separated by a prism; a Bayer filter forms it from three sets of monochrome images formed of 'not as closely aligned' photons of different wavelengths filtered out by the Bayer mask. The Foveon X3 does a similar job to the Bayer, as I understand it - effectively filtering the three colours by selectively reading photons of different wavelengths at different depths in a silicon layer, but still by using discrete 'monochrome' sensors. Hell, even your eye does it, recording different colour wavelengths through discrete cone cells at different points - in fact it's even more complicated, as human vision makes use of two other types of non-colour-aligned cells to add to all this signal processing. All of these systems form colour images from the intelligent processing of discrete monochrome images.

 

Anyway, that's what I think, how's about you Carl?

 

I'm still after thoughts on the Foveon, though - a potential new revolution in image processing, or just daft?

 

Col.


Personally I don't have a problem with intelligent image processing at all - your brain does it all the time, 'filling in' the image information missing from your retina at the point where the retinal ganglia exit the eye as the optic nerve, i.e. your blind spot - in general this processing is pretty good and this gap in vision is not noticeable - if you want to experience what happens when the processing is tricked have a look at this visual illusion: http://dragon.uml.edu/psych/bspot.html

A common argument, but fundamentally flawed.

 

It's true that your brain synthesizes the illusion of a high definition colour image by taking a very small number of samples of the luminance and chrominance information in any scene it's looking at, but there are actually three parameters involved: luminance, chrominance and position.

Because your eye is able to flick the fovea over the scene more or less at random and pick up a certain level of detail everywhere it samples, your brain is able to make accurate assumptions about the level of detail of the areas it doesn't sample.

 

If the camera and transmission system had some way of knowing what part of the scene the fovea was looking at at any particular instant, the data rate could be massively reduced, as you would only need high resolution in those places.

 

As it is, you have to provide the full luminance resolution on all parts of the receiver screen, because the action of the eyeballs is entirely unpredictable, and would be different for more than one viewer in any case.

 

 

Another major flaw is that people confuse RGB that is re-created from discrete RGB sources with RGB that is synthesized from incomplete sampling systems such as the Bayer mask. The luminance signal from a true RGB camera is made by adding together distinct ratios of the full-bandwidth red, green and blue signals, and these can be regenerated with a high degree of accuracy.

 

The luminance signal from a Bayer Mask does not contain such information, only an estimation of it.
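To illustrate the "distinct ratios" in question, here are the standard Rec. 709 luma weights for HD (shown only as a sketch of the weighted sum; a Bayer camera can compute the same sum, but with interpolated rather than full-bandwidth channel values at most sites):

```python
# Rec. 709 luma: a fixed weighted sum of full-bandwidth R, G and B.
def luma_709(r, g, b):
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# The three weights sum to one, so full-scale white maps to full-scale luma.
print(round(luma_709(1.0, 1.0, 1.0), 6))  # 1.0
```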

 

They are not.

Explain your reasoning.


I suggest you look up the resolution specs for 2K and you will find that HD falls short of them. The Genesis, D20 and Viper are all, by their own admission, HD cameras, not 2K cameras.

 

Hi Max,

 

Yes you are correct, it's only facility companies that try to pretend that an HD DI is as good as a 2K.

 

If you compare a Spirit transfer to an Arri scan there is a huge difference.

 

Stephen



Hi,

 

While it's important to make the distinction, you're talking about 128 pixels less resolution horizontally - I rather doubt it's ever going to be the deciding factor.

 

> HD is not 2K. Color space is also not the same.

 

Can be exactly the same - depends what you record them on.

 

Phil


If you compare a Spirit transfer to an Arri scan there is a huge difference.

 

Stephen

 

Stephen, have you compared a transfer on a Spirit 2K to Arriscan ?

 

And was the ArriScan working at higher than 2K and downsampling ?

 

I guess I'm steering off topic - well, a "Spirit 1" is subsampled chroma, so maybe I can get away with it!

 

-Sam Wells


Stephen, have you compared a transfer on a Spirit 2K to Arriscan ?

 

And was the ArriScan working at higher than 2K and downsampling ?

 

I guess I'm steering off topic - well, a "Spirit 1" is subsampled chroma, so maybe I can get away with it!

 

-Sam Wells

 

Hi Sam,

 

Yes I have. Arriscan 6K downsampled to 4K, Arriscan 3K downsampled to 2K, and Spirit DataCine upsampled to 2K DPX. Then straight back to film, no postproduction.

 

The Arriscan 6K downsampled was clearly sharper in the cinema - from the front, middle or back row - than the 3K downsampled. The Spirit was noisy and clearly lower resolution. The lens used was a 50mm Zeiss Superspeed from my Ultracam package at a T2.8/4 split. I wish I had had modern glass; however, getting the favours was an afterthought!

 

Stephen


I'm sorry; why is the Genesis at least not a 2K digital Cinematography system?

It takes standard Panavision 35mm lenses and other accessories, can record 4:4:4 2K HD, and probably more importantly, several features have been shot with it

 

Or are you saying that 1920 and "2K" aren't the same thing? I think they are....

 

They are not the same thing, no matter what verbiage one wants to use to indicate otherwise.

 

There is no such thing as "2K HD." HD is 1920x1080. 2K is 2048x1556, at least in its "full frame" guise. The native color space is also different, as is the sampling rate. In a video system, there is a maximum sampling rate (based on today's systems) of 4:4:4. The sampling rates of most scanners exceed that, because they're not locked in to a video subcarrier rate as their base.

 

I'm not necessarily claiming that the end results are vastly different, but they are different. 1920 is not 2048, at least not on this planet. And lenses, recording methods, and industry/commercial success do not magically turn something into something it's not. There are enough inflated claims and general misunderstandings already. Let's not add to them. The current-generation Genesis is an HD camera. Even Panavision doesn't dispute that, regardless of how it's used.
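A quick tally of the two rasters being compared, using the 2048x1556 "full frame" 2K figure quoted above:

```python
# Pixel counts: HD raster vs 2K full-frame scan raster.
hd = 1920 * 1080      # HD: 2,073,600 pixels
two_k = 2048 * 1556   # 2K full frame (DPX scan convention): 3,186,688 pixels

print(hd, two_k)             # 2073600 3186688
print(f"{two_k / hd:.2f}x")  # 1.54x - full-frame 2K carries ~54% more pixels,
                             # mostly from the taller 1556-line raster
```

Note the horizontal difference alone is only 128 pixels (2048 vs 1920), as pointed out earlier in the thread; the bigger gap is vertical.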


The current generation Genesis is an HD camera. Even Panavision doesn't dispute that, regardless of how it's used.

So the only apparent difference between the Dalsa and the Genesis is in the native sampling rate - despite the fact that, unless you use lenses designed for 65mm cameras (or still cameras), the Origin has nowhere near as many active horizontal pixels as the Genesis, which produces no-questions-asked 1920 x 1080 4:4:4 RGB.

 

But, because Dalsa chose to employ the same sort of creative resolution accountancy as RED, that automatically qualifies it as a "real" cinematography camera, even though its native resolution is less than that of other cameras that don't qualify.

 

OK, fine, got that.

 

I must say, it's a struggle keeping up with all this :blink:


This topic is now closed to further replies.
