
No Bayer Pattern CMOS systems


Tyler Purcell


  • Premium Member

I was having a heated debate with a colleague a few months ago about why the industry doesn't use three CMOS imagers (one for each color) and a beam splitter like CCD cameras do. This would solve many of the issues CMOS cameras have and deliver far greater color detail.

 

My point is that beam splitters reduce the amount of light hitting the imagers, and that adding two more imagers would be cost prohibitive, make the cameras larger, and require a lot more processing power (expense/weight/heat dissipation).

 

His point is that nobody cares about cost and size, since the cameras would still be smaller than an average 35mm film camera, and with electronics pricing going down, the cost of the added parts shouldn't be that much. Plus, it would be true RGB instead of this faux stuff we deal with today.

 

I mean, a few cameras have been made this way, but nothing 4K-capable currently exists.

 

What bothers me the most about this concept/conversation is that people simply don't care. Nobody does anything about it because what we have now is "good enough". People are focused on higher resolution, not necessarily on solving some of these technical issues.

 

Anyway, I'd like to know what people think about the idea of an S35-sized, 3-CMOS imager camera, and whether it would be worth developing at this juncture.


Related to your question, I've often wondered why Foveon chips don't get used more. They have a true, layered RGB structure, similar to film itself, with absolutely no Bayer artifacts, and without the need for beam splitters. They're in Sigma Merrill DSLRs, and produce stunning images. Maybe there are good reasons why they haven't been more widely implemented, but I'd like to know what they are.


  • Premium Member

Just seems that not enough work has been done with the Foveon concept yet to match the quality of Bayer CFA sensors, which keep getting improved. Every now and then you hear a rumor about future sensor designs that are more like Foveon but they haven't really shown up yet for HD, 2K, and 4K video.

 

Some people feel that silicon itself is a poor filter for color separation, so the color signal from a Foveon is very weak in saturation and has to be boosted.
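To put rough numbers on that boost-and-noise tradeoff, here's a little Python sketch; the matrix is purely illustrative and is not Foveon's actual calibration:

```python
import numpy as np

# Illustrative colour-correction matrix with big off-diagonal terms to
# "undo" poor channel separation. Made up for illustration; NOT
# Foveon's actual calibration.
correction = np.array([
    [ 2.2, -0.8, -0.4],
    [-0.7,  2.1, -0.4],
    [-0.3, -0.9,  2.2],
])

rng = np.random.default_rng(0)
# A flat grey patch with 1% sensor noise on each raw channel.
raw = rng.normal(loc=0.5, scale=0.01, size=(100_000, 3))

corrected = raw @ correction.T

print("noise before:", raw.std(axis=0))
print("noise after: ", corrected.std(axis=0))
# Each output channel's noise grows roughly with the root-sum-square of
# its matrix row, so the saturation boost is paid for in noise.
```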


  • Premium Member

The probable reason beam splitters are not used more is that they limit the optics you can use with the camera, and no one wants a camera that limits the lens choices to only a few custom-made options when making a movie.

Even as electronics pricing goes down and down, optics pricing stays more or less stable, or may even go up every now and then, especially for special optics made for a 3-MOS camera with a format larger than 2/3".

 

So I'd say the reason is the lenses, not the 3-imager camera technology itself. It is also much cheaper to make a Bayer sensor camera than a 3-CMOS camera, but that is less important in cine use, I think.


  • Premium Member

There are still 2/3" and smaller 3-sensor prism block cameras being made. I think it's mainly that no one wants to make something where the sensor width jumps from 10mm to 24mm; that would require a larger prism block, and then special lenses to adjust for the flange depth and the way colors focus on each sensor.

 

3-strip Technicolor used a 2-way prism block, and that alone limited the focal lengths used; I think 35mm was the shortest. They tried using wide-angle adaptors to get wider, but the quality wasn't there.


The Sony F23 had a beam splitter and three chips, but for the S35 F35 Sony went with the RGB-striped sensor instead. I have heard the reason for this was that there was not enough room to fit the prism and three S35 CCDs. To my knowledge there has never been a camera with a sensor larger than 2/3" behind a beam splitter, but I could very well be wrong.


  • Premium Member

The broad strokes of it are fairly straightforward. If you want an 18mm lens, either something has to be 18mm from the film plane, or you need correcting optics (a retrofocal design). You can't put something 18mm from the film plane if there's a prism block in that location.
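As a back-of-envelope illustration (a thin-lens simplification, and the prism path figures below are my guesses, not manufacturer specs):

```python
# Thin-lens simplification: a non-retrofocus lens focused at infinity
# has a back focal distance roughly equal to its focal length. The
# prism path figures below are my own rough assumptions, not specs.

def clears_prism(focal_length_mm: float, prism_path_mm: float) -> bool:
    """Could a simple (non-retrofocus) design clear the prism block?"""
    return focal_length_mm > prism_path_mm

PRISM_PATH_2_3_INCH_MM = 40.0  # assumed glass path for a 2/3" block
PRISM_PATH_S35_MM = 100.0      # assumed: scales with the ~2.5x larger sensor

for f in (18, 35, 50, 135):
    print(f"{f}mm lens: 2/3\" ok={clears_prism(f, PRISM_PATH_2_3_INCH_MM)}, "
          f"S35 ok={clears_prism(f, PRISM_PATH_S35_MM)}")
# Anything shorter than the prism path is forced into a retrofocus
# design, which is why every wide B4 lens is heavily retrofocal.
```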

 

This is part of the purpose of the relay group on the back of a broadcast lens where the back focus adjustment is. In extant 3-chip designs, there are also focus distance tweaks for each of the primaries.

 

The problem is illustrated by attempts to put a broadcast lens on a single chip camera, which I've done: below about f/4, it turns into a glowing, halating mess.

 

My feeling is that modern high resolution sensors have sufficient resolution to oversample their way out of any serious problems with colour precision, although naturally a true cosited RGB design would provide better colour resolution, greater sensitivity and dynamic range, and lower noise. It's still only better if the downsides aren't worse, though, if you see what I mean!

 

P


  • Premium Member

You can correct the other way; there are adaptors for putting B4 lenses on single-chip cameras which correct for the RGB focus issue and often enlarge the image a bit (often so that a B4 lens with the extender in will cover S35, albeit at significant stop loss).
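The stop loss follows directly from the magnification; a rough calculation, with approximate image-circle figures:

```python
import math

# Approximate image circles; treat both figures as assumptions.
B4_IMAGE_CIRCLE_MM = 11.0    # 2/3" sensor diagonal
S35_IMAGE_CIRCLE_MM = 28.0   # rough S35 diagonal

m = S35_IMAGE_CIRCLE_MM / B4_IMAGE_CIRCLE_MM
# Spreading the same light over m^2 times the area costs 2*log2(m) stops.
stop_loss = 2 * math.log2(m)
print(f"magnification ~{m:.2f}x -> ~{stop_loss:.1f} stops lost")
# A ~2.5x blow-up comes to roughly 2.7 stops, hence the significant loss.
```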

 

Going the other way may be more complex. The entire relay group on the back of a B4 lens does it. That is presumably what you'd have to add and it is not straightforward.

 

I think the problem is really what issues it would solve. The biggest sensors now have enough resolution that the loss of resolution to Bayer artifacting is not seriously objectionable.

 

P


That is what the C100 and C300 already did to a degree: they used the traditional Bayer pattern, but rather than debayering to 4K and downscaling the image, they used four RGGB pixels to form one final pixel for HD output. The F35 did this using columns of colored pixels: it spliced the columns together into three 1920 x 2160 images, downscaled them to 1920 x 1080, and then output a true RGB 4:4:4 HD image. It's one of the only cameras I am aware of to sample all three color channels at the Nyquist limit, two pixels for every final pixel in each channel.

 

A sensel in a sensor can only measure a single brightness value, so there is no way to make it measure three separate values; you need three sensels to do that.
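For anyone curious, a minimal Python/NumPy sketch of the binning I'm describing (assuming an R G / G B quad layout with red at the top-left):

```python
import numpy as np

def rggb_to_rgb(raw: np.ndarray) -> np.ndarray:
    """(2H, 2W) mosaic -> (H, W, 3) RGB, averaging the two green sites.

    Assumes an R G / G B quad layout; no demosaicking/interpolation.
    """
    r  = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2]
    g2 = raw[1::2, 0::2]
    b  = raw[1::2, 1::2]
    return np.stack([r, (g1 + g2) / 2.0, b], axis=-1)

# A 4K-wide photosite mosaic bins down to an HD true-RGB image:
mosaic = np.random.default_rng(0).random((2160, 3840))
print(rggb_to_rgb(mosaic).shape)  # (1080, 1920, 3)
```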

Edited by David Hessel

  • Premium Member

Riddle me this: why can't you make an imager where each pixel is red, blue, and green? Simply divide it just like LED displays do, where the pixels are divided in thirds like a pie.

You mean like the Panavision Genesis and Sony F35 have been doing for over 10 years? :rolleyes:

 

There are a number of issues involved.

Dichroic prisms with three separate sensors are far and away the best way to produce colour images. Period, regardless of what you've read on the Internet. Apart from the better colour filtering, they're much less affected by IR contamination.

People keep talking as though single sensor cameras using filter arrays are some fantastic new idea; actually, they've been around since the 1950s. It's the availability of massive portable computing power that made them competitive with 3-sensor designs.

The main impetus behind producing single-sensor cameras for cinematography work had a lot more to do with maintaining compatibility with existing lenses designed for 35mm film cameras than with image quality.

You certainly could make 3-chip "35mm" size sensor cameras, but you would then have to make new lenses to go with them because, as Phil mentioned, in most cases the rear element of existing lenses would want to sit in the middle of the prism block.

This is exactly the problem faced by Panavision's "Panavized" CineAlta cameras; your range of rentable lenses dropped from thousands to about six.

 

The other major advantage of 3-chip cameras is the lack of "latency"; that is, the "live" pictures coming out of the camera are delayed no more than about 30 milliseconds. Because of the amount of processing required, the latency of most HD Bayer cameras varies between about a second for an Alexa studio camera, and a couple of days for early Red cameras :P

 

The main reason the PV Genesis/Sony F35 use RGB colour striping (like an LCD TV screen) is simply that they were principally designed as TV studio cameras (hence 1920 x 1080 sensors), and straight RGB requires minimal processing time for full-resolution HD.


  • Premium Member

Yea I know there were a bunch of "attempts" to make this happen.

 

What kills me is that you COULD use optics to fix the flange distance issues.

You appear to be commenting well outside your experience.

Panavision made several abortive attempts at doing just that in the 80s and 90s, to allow 35mm film lenses to work with TV cameras. The results were universally horrible.

Panavision make some of the best lenses in the world; if they couldn't make it work, I seriously doubt anybody else could.

If you have the resources and technical skill to make it happen, by all means, please produce the goods; but please don't just come on here stating: "It must be possible."

Or at least describe the technical problems, and then suggest how they might be overcome. Otherwise, you're just writing science fiction.


  • Premium Member

I don't work at Panavision, so how could I know about anything they've done?

 

Furthermore, if you say they did it and the results were horrible, that means it's possible.

 

I'm not an optical expert, nor do I claim to be.

 

The technical problem is the fact that the imager in this scenario is too far away from the back element of the lens.

 

So all you need to do is have an adjustable lens, sitting between the lens and the camera body, that re-focuses the image. It would clearly consist of a few elements, but I can't imagine it being a problem. Good quality results are a totally different story; I would imagine the quality would be LOWER than not using one of these gadgets.


Having been involved in several of these camera designs, as well as aerial optics as used in scopes and periscope lenses (which is exactly what we are talking about here), I can tell you that it is a pretty horrible thing. The only way to even try to make it work is to start with the lens pretty far away and have the image land on an aerial image plane that is then re-imaged at a further distance. That's how you get past your prism block. But you have to make a pretty big aerial relay using doublets and triplets (multiple sandwiched optical elements), otherwise you suffer massive amounts of chromatic aberration and other artifacts.

You can get a pretty clean image (although it will NEVER be as good as it would be without this in the mix), but the camera will need to be 8"-12" longer and, oh yeah, you'll probably lose about two stops in the process. Also, if you have a three-chip prism block on a 35mm-sized imager then it's going to be HUGE, which means that the camera will of necessity be really fat or tall or both. In one direction or another it will have a big hump, like Igor.
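If you want to see where roughly two stops could go, here's a crude budget; every number in it is an assumption of mine, not a measured spec:

```python
import math

T_PER_SURFACE = 0.995    # assumed 0.5% loss per coated air-glass surface
AIR_GLASS_SURFACES = 16  # assumed doublet/triplet-heavy relay
GEOMETRIC_T = 0.30       # assumed pupil/vignetting losses in the relay

surface_T = T_PER_SURFACE ** AIR_GLASS_SURFACES
total_T = surface_T * GEOMETRIC_T
stops_lost = -math.log2(total_T)
print(f"surface losses: {1 - surface_T:.1%}; total ~{stops_lost:.1f} stops")
# Coating losses alone are small; it's the relay geometry that eats
# most of the roughly two stops.
```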

 

Getting all this stuff aligned properly and at high resolution (HD/4K/8K/whatever) is that much more of a challenge the larger they are, and I would expect such a camera to have to sell for at least $100K just to make it worth attempting. So now we're talking about a very, very expensive camera system that is huge, heavy, isn't particularly light sensitive and may or may not have optical quality loss, all so that one can use current glass.

 

Yeah, it's not going to happen, for a lot of good reasons. No matter what anyone thinks of Bayer pattern and chroma subsampling, just remember that as nature went through its own evolutionary process, it arrived at pretty much the same configuration inside our skulls. There's probably a reason for that.


  • Premium Member

I was going to say much the same thing.

 

It could in theory be done, but ultimately you're better off spending less money on sensor engineering for better results. Double the photosite count on the sensor and you've more or less overcome the losses of Bayer colour recovery. Not entirely, but to a point where it's probably meaningless to argue.
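A quick sanity check of the arithmetic (linear resolution scaling as the square root of sample density):

```python
# On a Bayer sensor, green is sampled at half the photosite density and
# red/blue at a quarter; linear resolution goes as sqrt(density).
for name, density in (("green", 0.5), ("red/blue", 0.25)):
    print(f"{name}: ~{density ** 0.5:.0%} of full linear resolution")

# Doubling the photosite count doubles each channel's density, which
# brings green (the channel carrying most luma detail) up to the target
# density; red/blue would need 4x to match a co-sited sensor exactly,
# hence "more or less" rather than "entirely".
```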

 

I suspect eventually we will have to start quite aggressively cooling sensors to reduce the noise floor and that's also likely to make things big and heavy. It hasn't been an issue so far because the relationship between noise and exposure time hasn't made it worthwhile as it often is in astrophotography. This may eventually change.

 

P


Sensors have been "aggressively cooled" for many years. The inside of the Dalsa Origin was one big heat sink. The original Phantom HD was about 12 pounds, and almost 9 of them were spent on huge copper heat sinks and a heat-radiating casing. There's Peltier cooling, heat pipes, directed fans, liquid cooling, on and on. Performance of the chips has gotten much better, but then we just keep cranking up resolution and sensitivity, so temperature control remains one of the major barriers to performance.

 

The Canon 8K demo system that uses 4x Odyssey7Q+ recorders to capture 8K60p is most impressive to me not because they built an 8K S35 chip with nice sensitivity and dynamic range, but because they could do so without the thing melting.


  • Premium Member

My thought was about below-ambient cooling, or at least "to ambient" cooling, which nobody is currently doing in cinema cameras. There's a difference between keeping the heat out of the thing to avoid it destroying itself and addressing thermal noise in the way that astronomers do. Dealing with the issues of condensation and the intrinsic inefficiency of Peltier coolers - and acoustic noise - will make current thermal engineering look comparatively straightforward.

 

It may still be a while before this becomes necessary, because the comparatively short exposures of motion picture (and most stills) work aren't really affected by thermal noise in the same way as the multi-minute star shots. That, and everyone seems to be happy with several LSB of camera output being noise.
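To illustrate with the usual rule of thumb that dark current in silicon roughly doubles every ~6°C (the exact figure varies by sensor, and the dark current number below is assumed, not from any real camera):

```python
import math

def dark_noise_e(dark_e_per_s: float, temp_c: float, ref_c: float,
                 exposure_s: float) -> float:
    """Dark shot noise (electrons RMS), doubling dark current per ~6 degC."""
    current = dark_e_per_s * 2 ** ((temp_c - ref_c) / 6.0)
    return math.sqrt(current * exposure_s)  # shot noise = sqrt(accumulated e-)

# Assumed sensor: 1 e-/s/pixel dark current at 20 degC.
for temp in (40, 20, 0, -20):
    film = dark_noise_e(1.0, temp, 20.0, 1 / 48)  # 180-deg shutter, 24fps
    astro = dark_noise_e(1.0, temp, 20.0, 300)    # five-minute star exposure
    print(f"{temp:+d} degC: {film:.3f} e- per film frame, {astro:.1f} e- astro")
# Short cine exposures barely see dark current; multi-minute exposures
# do, which is why astronomers cool and we mostly haven't had to.
```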

 

P


  • Premium Member

Peltier elements are already used in some modern digital cinema cameras to assist in cooling the sensor. If I remember correctly, at least Blackmagic and ARRI are using them.

 

Btw, now that we're once again in the anamorphic/large-sensor phase of the periodic image format cycle (as with 3D before it), people will most likely take the bigger single-sensor camera over the 3-chip one where optics are concerned. With a bigger sensor you also need more FFD and possibly special lenses, but you get more "wow factor" with today's audiences compared to a 3-CCD camera with great color response. Most of the audience wants the film's specs to look good on paper too, not only on screen.

