
There's no particular reason why a CCD is better at interlaced images than a CMOS imager. Certainly some types of CCDs were built to make use of the engineering opportunities created by only having to read out every other line in one go, but there's nothing intrinsic about it.

 

CCDs were developed from PMOS memory devices, a type of semiconductor manufacturing technology that was current in the late 60s. The light-sensitivity of these devices was considered a problem; one of the reasons microchips are packaged in black resin is that there was once (and sometimes still is) a need to isolate them from light. Obviously it didn't take long for some bright spark to realise this characteristic could be used to create an imaging device which would be much more convenient than the bulky, hard-to-run vacuum-tube types in use at the time. Of course as with any new technology, the early CCD cameras were feeble, and were considerably outperformed by the tube cameras they replaced, but there's been a lot of development since then.

 

CMOS imagers work in fundamentally the same way - it's still a silicon photodiode turning photons into electrons via the photoelectric effect, which is the only phenomenon we know of that does that. The advantage is that CMOS is a more modern technology, using much more advanced materials and manufacturing techniques. Because of this, it's possible to put other devices on the same substrate - that is, the same piece of silicon - as the image sensor itself. This usually means the output amplifiers and analogue-to-digital converters, making the sensor much cheaper, easier to implement, and more compact. This is the principal reason it's become feasible to make USB webcams and camera phones for prices we like.
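
To make that photon-to-number chain a bit more concrete, here's a minimal sketch (in Python) of what a single photosite and its on-chip ADC do. The quantum efficiency, full-well capacity, conversion gain and ADC depth are invented, illustrative figures, not the numbers for any real sensor.

```python
# Illustrative model of a single photosite's signal chain:
# photons -> electrons (photoelectric effect) -> voltage -> digital number.
# All parameter values are made up for demonstration only.

QUANTUM_EFFICIENCY = 0.5      # fraction of incoming photons converted to electrons
FULL_WELL_ELECTRONS = 20000   # electrons the photosite can hold before clipping
CONVERSION_GAIN_UV = 5.0      # microvolts of output per electron
ADC_BITS = 12                 # depth of the on-chip analogue-to-digital converter
ADC_FULL_SCALE_UV = FULL_WELL_ELECTRONS * CONVERSION_GAIN_UV

def photosite_output(photons: int) -> int:
    """Return the digital number a photosite would report for a given photon count."""
    electrons = min(photons * QUANTUM_EFFICIENCY, FULL_WELL_ELECTRONS)
    voltage_uv = electrons * CONVERSION_GAIN_UV
    # The on-chip ADC quantises the analogue voltage into 2**ADC_BITS levels.
    return round(voltage_uv / ADC_FULL_SCALE_UV * (2**ADC_BITS - 1))

print(photosite_output(10000))   # a mid-range exposure
print(photosite_output(100000))  # overexposed: clips at full well
```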

 

A topical example of this is Red's Mysterium sensor, which, given its capabilities, must be using at least several (probably several dozen) sets of amplifiers and ADCs - many CMOS sensors have a transistor per pixel to handle signal amplification. You could argue that it's really several smaller sensors that happen to sit side by side. There are upsides and downsides to doing this. It's really the only way to make a very high resolution sensor that can run very fast with a low noise floor, which is exactly what they've done. The downsides are that these sub-sensors (usually referred to as "bins" or "binning areas") use different output amplifiers, and since you're still in the analogue domain, in-tolerance variations can cause shading errors (basically, different bits of the picture look reddish/greenish/bluish/dark/bright/whatever). This you have to fix digitally. Also, plonking all this extra circuitry down among the photosites means the fill factor goes down (gaps between pixels) and the pixels themselves must shrink if resolution and physical sensor size stay the same. These issues reduce signal-to-noise ratio and dynamic range, and can exacerbate aliasing, particularly in Bayer designs.
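
As a rough illustration of the kind of digital fix mentioned above, here's a hedged sketch (Python again, with invented numbers) of per-region gain and offset correction: each readout region gets its own calibration pair, as might be measured from a flat-field exposure, and is corrected before the regions are stitched back together. The region layout and calibration values are hypothetical and have nothing to do with Red's actual pipeline.

```python
import numpy as np

# Hypothetical sensor split into four vertical readout regions, each with its
# own output amplifier. In-tolerance gain/offset differences between the
# amplifiers show up as visible shading, which we correct digitally.

REGIONS = 4

# Per-region (gain, offset) pairs, as might be measured from a flat-field
# exposure during calibration. Values are invented for illustration.
calibration = [(1.00, 0.0), (1.03, 2.0), (0.97, -1.5), (1.01, 0.5)]

def correct_shading(raw: np.ndarray) -> np.ndarray:
    """Apply per-region gain/offset correction to a raw frame (H x W)."""
    corrected = raw.astype(np.float32).copy()
    region_width = raw.shape[1] // REGIONS
    for i, (gain, offset) in enumerate(calibration):
        cols = slice(i * region_width, (i + 1) * region_width)
        # Invert the amplifier's gain and offset so all regions match.
        corrected[:, cols] = (corrected[:, cols] - offset) / gain
    return corrected

# A uniform grey scene as each amplifier would actually report it:
frame = np.empty((8, 16), dtype=np.float32)
for i, (gain, offset) in enumerate(calibration):
    frame[:, i * 4:(i + 1) * 4] = 100.0 * gain + offset

print(correct_shading(frame))  # all values land back near 100
```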

 

It is possible to do some of these tricks with particularly clever CCDs - witness the JVC GY-HD100, some versions of which had a shading problem between the two halves of its chip - but it's a lot easier with CMOS.

 

CMOS image technology makes whizzy new things possible; expect to see more of it.

 

Phil
