Rolling shutter an issue?


Michael Peploe

Hi Nick,

 

I picked the Lartigue photograph because it is always used as an example of a focal plane shutter exposing a moving subject and the distortion that results. Spinning-shutter motion picture cameras have a similar shutter and can create the "bendys" as things move across the frame.

 

Please accept my apologies to the heirs of Mr. Lartigue as I naively thought that his photograph was so famous as to not need a credit :unsure:

 

While we're on this subject, I've often heard people remark that the motion of electronic motion picture cameras seems odd to them, even when shot at 24p. Perhaps we have grown so accustomed to the "look" of the spinning shutter from movies that images from digital shutters are perceived a little differently.

 

And, please excuse my ignorance: Do electronic imagers expose every pixel simultaneously, or do they scan from top to bottom (or side to side)?

 

-bruce

 

Hi Bruce,

 

I don't give a toss about the heirs of Mr Lartigue, but as 'Image Makers' on a cinematography board we should try and give authorship credit, even if the photograph is so famous as not to need one.

 

My question about the 'bendy' effect in the Lartigue picture was its relevance to a spinning shutter, as I am unsure whether a focal plane shutter would have been used in a camera from 1912 (did Mr Lartigue just cap the lens?). On fast pans with film cameras I tend to see motion blur more than distortion - maybe I should look at some narrow-shutter-angle whip pans.

I would love to get that 'Lartigue' effect.

 

Reading the link to the first REDuser showthread that Michael Peploe highlighted, there is a great answer on how a CMOS chip with a rolling shutter works - post #159.

 

Nick




D'you think it'd be possible to discuss this camera without immediately flying into a rage every time someone has the unmitigated temerity to question it? Christ.

 

The thing is, you force me to say this:

 

I challenge you to find any other electronic camera intended primarily for cinematographic work which has rolling shutter artifacts that bad.

 

There you go. I wasn't going to say it, but you force my hand.

 

Phil



 

Nick,

Mr. Lartigue did use a focal plane shutter. In fact, to find the photo, I googled "focal plane shutter". There is an interesting article about the effect made famous by Mr. Lartigue at: http://maisonbisson.com/blog/post/10531/ . On my leaf-shutter still cameras, the motion distortion is more like a checkerboard pattern than a shape change.

 

The author states that digital cameras expose the pixel sites consecutively and create their own kind of distortion, which he links to examples of. While I don't know how accurate that claim is for all digital cameras, it would be interesting to know how a mechanical focal plane shutter (Canon 1Ds, Arri D20) interacts with the CCD/CMOS exposure.
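The geometry of the Lartigue-style lean is easy to work out. Here is a minimal sketch (all the numbers are made up for illustration, not taken from the actual photo): a slit travels across the frame over some finite time, and a subject moving sideways shifts by a fixed amount between the first and last row exposed.

```python
# Toy model of focal-plane (slit) shutter skew: the slit takes a finite
# time to cross the frame, and a sideways-moving subject shifts during
# that time, leaning the image. All numbers below are illustrative only.

def slit_shutter_skew(subject_speed_px_s, slit_speed_rows_s, frame_height_rows):
    """Horizontal shift (in pixels) accumulated between the first and
    last row exposed by a travelling slit shutter."""
    traversal_time = frame_height_rows / slit_speed_rows_s  # s for slit to cross
    return subject_speed_px_s * traversal_time

# A subject crossing at 2000 px/s, a slit crossing 1000 rows in 1/30 s:
shift = slit_shutter_skew(2000, 30000, 1000)
print(shift)   # ~66.7 px of lean from the top of the frame to the bottom
```

The same arithmetic applies to a rolling electronic shutter if you substitute the line-readout rate for the slit speed.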

 

-bruce



 

 

Bruce

 

Thanks for the link, interesting article on slit scan shutters.

 

 

Nick


The thing is, this type of distortion artefact is actually kind of interesting - it'd be great to be able to have control over it (direction and speed wise).

 

Did a search on 'rolling shutter camera' this morning and came across this post from Russ Andersson, creator of SynthEyes (a well-regarded matchmoving application), about the rolling shutter problems with the HV20.

 

http://cybermessageboard.fatcow.com/ssonte...topic.php?t=478



 

 

Someone on that forum says the RED is using a global shutter. I have not heard anyone from RED confirm this. Isn't the RED a CMOS rolling shutter?

Edited by Michael Peploe

We are using a new methodology that lands read/reset somewhere between a global shutter and a typical CMOS rolling shutter. Closer to a global shutter, especially if you look at something like the HV20. We'll publish read/reset specs.

 

Jim


And ALL film cameras have "rolling shutters". I assume they are usable for cinematography?

 

Nonetheless they might be considered "global" in the sense that the whole frame is being exposed continuously; no area of the frame is being "read out" during the moving-shutter period.

 

-Sam

Edited by Sam Wells


> Do electronic imagers expose every pixel simultaneously, or do they scan from top to bottom (or side to side)?

 

With CCDs, this is a fairly easy question to answer. By default, all the photodiodes are active all the time, which is why things like Viper have a physical shutter. Charge in each cell, caused by photons impacting charged silicon, is transferred down line by line towards the bottom row, where it's read out sequentially. This is why, when you turn a Viper on, until sync has been achieved between the electronics and the shutter, you get what looks very much like film-style out of phase shutter effects - the photodiodes are still accumulating charge as the image "falls down" towards the read-out row, causing familiar vertical streaking. It is because there is a conductive path between vertically adjacent photocells that bright light sources can cause vertical streaking - blooming - by causing photodiodes to literally overflow with electrons during the readout period. You can tell in which axis a sensor reads out by forcing it to bloom.
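That transfer-smear mechanism can be caricatured in a few lines. This is a deliberately crude one-dimensional sketch (one column of a full-frame CCD, no mechanical shutter): rows shift toward the readout register one step at a time, and because light keeps falling during the transfer, a single bright source paints charge into every row that passes beneath it.

```python
# Toy 1-D model of full-frame CCD readout without a mechanical shutter:
# while charge is shifted row-by-row toward the readout register, light
# is still falling on the array, so a bright source smears into a
# vertical streak along the transfer axis.
ROWS = 6
scene = [0, 0, 100, 0, 0, 0]           # one bright source at row 2 (top = 0)
sensor = scene[:]                       # charge at the end of nominal exposure
readout = []
for _ in range(ROWS):
    readout.append(sensor.pop())        # bottom row transferred out...
    sensor.insert(0, 0)                 # ...remaining rows shift down one
    sensor = [s + l for s, l in zip(sensor, scene)]  # light during transfer

image = list(reversed(readout))         # reassemble top-to-bottom
print(image)                            # [100, 100, 100, 0, 0, 0] -> streak
```

Forcing a bloom and seeing which way the streak runs, as described above, is just observing which axis this shift-and-accumulate loop operates along.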

 

When the bottom (physically the top, often) row of the sensor is read out, it's fed through amplification and, inevitably these days, A-D conversion stages. On a CCD, this is liable to be separate circuitry, since the semiconductor process used to manufacture CCDs is fairly primitive - similar to PMOS memory manufacturing of the 1970s, where the light sensitivity of the resulting memory cells was considered a problem. CMOS sensors work in fundamentally the same way, but use a different manufacturing process, allowing amplification and even A-D conversion stages to be incorporated on the same substrate, as well as incorporating charge-to-voltage converters (often a Darlington pair of transistors) with each pixel.

 

Because of this, CMOS sensors, and some advanced CCDs such as those used in the original (and possibly current) F900s and the JVC GY-HD100 series, may read out several sub-areas of the sensor in a variety of ways, limited more or less only by the imagination of the designer and the speed and linearity of the amplification and A-D. Something like the Red sensor will inevitably use a large number of separate readout areas, amplifiers and A-D stages to mitigate the bandwidth limitations created by high resolution, high frame rate imaging. Doing this creates problems with per-area shading differences, as were notoriously visible in some early HD100s. CMOS sensors are also subject to per-pixel shading errors, termed photo-response nonuniformity. Both these types of errors are due to manufacturing tolerances in the semiconductor fabrication process, which affect the linearity of both the per-pixel transistors and any onboard amplifiers and A-D, and must be corrected in the camera's DSP based on black balance and shading setup.
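The DSP correction being described is essentially a per-tap inverse of each amplifier's gain and offset errors. Here is a hypothetical sketch (the tap counts and error values are invented): a flat grey line read out through four mismatched amplifier taps shows visible banding, which disappears once the calibration measured at black balance (offsets) and flat-field setup (gains) is applied.

```python
# Hypothetical sketch of per-tap shading correction. A line is read out
# through several parallel amplifier "taps", each with a slightly
# different gain and offset; the DSP inverts those errors using tables
# from black-balance (offset) and flat-field (gain) calibration.
TAPS = 4
tap_gain   = [1.00, 1.03, 0.97, 1.01]   # per-tap amplifier gain errors
tap_offset = [0.0, 2.0, -1.5, 0.5]      # per-tap black-level errors

def read_line(true_line):
    """Simulate one line read out through TAPS parallel amplifiers."""
    seg = len(true_line) // TAPS
    out = []
    for t in range(TAPS):
        for px in true_line[t * seg:(t + 1) * seg]:
            out.append(px * tap_gain[t] + tap_offset[t])
    return out

def correct_line(raw_line):
    """Invert the per-tap errors using the calibration tables."""
    seg = len(raw_line) // TAPS
    out = []
    for t in range(TAPS):
        for px in raw_line[t * seg:(t + 1) * seg]:
            out.append((px - tap_offset[t]) / tap_gain[t])
    return out

flat_field = [50.0] * 16                 # a uniformly lit line
raw = read_line(flat_field)              # shows per-tap banding
fixed = correct_line(raw)                # banding removed
print(max(raw) - min(raw), max(fixed) - min(fixed))
```

Per-pixel PRNU correction works the same way, just with a gain table per photosite rather than per tap.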

 

Because CMOS sensors can incorporate a lot of onboard electronics, they are usually easier to use than CCDs and have more features, such as windowing. Apart from the problems mentioned previously, CMOS sensors generally have poorer dynamic range and a greater susceptibility to parasitic sensitivity - that is, collecting charge during the electronic shutter period.

 

A rolling shutter is faster and easier to create because it does not require the entire sensor to be read out simultaneously - instead, it can be read out line by line, with a finite delay between each, reducing bandwidth. Obviously the Red sensor isn't taking the entire frame period to read out the whole sensor or the bottom line would be 50% out of phase with the top one, as is the case with some machine vision cameras, but the artifact of it can still be detected.
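The visible consequence of that per-line delay is easy to render. This toy sketch (delays exaggerated so the effect is obvious in a tiny ASCII frame) samples each row at a slightly later instant, so a vertical bar moving horizontally comes out as a diagonal - the "bendys":

```python
# Toy rolling-shutter render: each row of the frame is sampled one time
# step later than the row above it, so a vertical bar moving sideways
# comes out slanted. The line delay is exaggerated for visibility.
ROWS, COLS = 8, 16
BAR_SPEED = 1                       # columns the bar moves per line delay
frame = []
for row in range(ROWS):
    t = row * BAR_SPEED             # later rows see a later instant
    bar_col = (2 + t) % COLS        # where the bar is at that instant
    frame.append(''.join('#' if c == bar_col else '.' for c in range(COLS)))

print('\n'.join(frame))             # the vertical bar renders as a diagonal
```

Shrinking `BAR_SPEED` toward zero is the equivalent of reading the lines out faster relative to subject motion, which is why a quick readout (as claimed for the Red) shows less skew than a slow one even though both are "rolling".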

 

The only way around this is to use an even larger number of A-D converters and amplifiers to read out ever smaller areas of the array, generally beef up the frame stores into which they feed, and perform other electronic upgrades, or use a frame-transfer chip like Viper, which has a duplicate, light-shielded area of silicon into which to transfer the complete frame at the end of every shutter period - and use a rotating shutter. Doubling the size of the array doubles the number of rejects and halves the yield of a single wafer.
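The yield penalty is, if anything, understated. Under the standard first-order Poisson die-yield model (Y = exp(-D·A), with D the defect density and A the die area - the numbers below are illustrative, not real fab data), doubling the die area halves how many dies fit on the wafer *and* squares each die's survival probability:

```python
# First-order (Poisson) die-yield model: Y = exp(-D * A). Doubling die
# area A halves the dies per wafer and squares the per-die survival
# probability, so good-die output drops by more than half. The defect
# density and areas below are illustrative only.
import math

def good_dies(wafer_area_cm2, die_area_cm2, defects_per_cm2):
    dies = wafer_area_cm2 / die_area_cm2                  # ignoring edge loss
    yield_fraction = math.exp(-defects_per_cm2 * die_area_cm2)
    return dies * yield_fraction

small = good_dies(wafer_area_cm2=700, die_area_cm2=4, defects_per_cm2=0.2)
large = good_dies(wafer_area_cm2=700, die_area_cm2=8, defects_per_cm2=0.2)
print(small, large, large / small)   # doubling area more than halves output
```

This is why a frame-transfer design like the Viper's, which needs roughly twice the silicon of the active area, carries a real cost penalty.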

 

Obviously, all this costs money. It's probably not impossible to make a 4.9K Bayer camera with a frame shutter, but it's probably not possible to make it for 17k-plus-required-accessories.

 

Phil


Apart from the problems mentioned previously, CMOS sensors generally have poorer dynamic range and a greater susceptibility to parasitic sensitivity - that is, collecting charge during the electronic shutter period.

Great post, Phil! Interesting stuff! Are you a cinematographer or an EE? I thought CMOS had the potential for greater dynamic range than CCDs? Also, since CMOS sensors are immune to vertical smear (a clear advantage over CCDs), I thought this benefit made them more "capable" in other areas as well. Perhaps you could elaborate. Also, could you explain what visual compromise there is due to CMOS's "greater susceptibility to parasitic sensitivity"?


The only way around this is to use an even larger number of A-D converters and amplifiers to read out ever smaller areas of the array, generally beef-up the frame stores into which they feed, and perform other electronic upgrades, or use a frame-transfer chip like Viper which has a duplicate, light-shielded area of silicon into which to transfer the complete frame at the end of every shutter period - and use a rotating shutter. Doubling the size of the array doubles the number of rejects and halves the yeild of a single wafer.

 

It's actually not that hard to do, and can be done with a single A/D converter if you wanted . . . CMOS sensors, unlike inflexible CCDs, can add transistors at each photosite to act as a charge storage pool so that the pixel can integrate while reading out, giving you global shutter capabilities. The only problem with this approach is reduced fill factor and increased PRNU, as well as fixed pattern noise, which reduces overall dynamic range.

 

FillFactory, for instance, has been making global-shuttered CMOS sensors for years, and they are used in the likes of the Phantom V-series or CineSpeed CAM series of cameras, which are global shuttered (6T pixels vs. standard 3T pixels).

 

For example, their generic LUPA4000 series only has two A/D converters, and can do a global shutter at 4 Mpix at 15 fps (so it could do a 2Kx1152 image at 24 fps through windowing). The only issues are that the imager is an industrial sensor, not one designed for cinema, and so is not very good-looking for cinema use, again mainly due to the pixel design (6T design = high PRNU and fixed pattern noise, along with shading errors). They make other sensors for cinema use, so global shutters do not preclude good image quality, but that's just the basic characteristics of their off-the-shelf LUPA part.
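The windowing claim checks out as back-of-envelope arithmetic, at least under the rough assumption that total pixel throughput is the binding limit (real readout scaling is more complicated than this):

```python
# Back-of-envelope pixel-rate check for the windowing claim above,
# assuming total pixel throughput is the limiting factor (a first
# approximation only - real sensors don't scale quite this simply).
full_rate = 2048 * 2048 * 15        # ~63 Mpix/s: full 4 Mpix frame at 15 fps
windowed  = 2048 * 1152 * 24        # ~57 Mpix/s: 2K x 1152 window at 24 fps
print(windowed <= full_rate)        # the windowed mode fits in the budget
```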

 

EDIT: Split and modified last paragraph as clarification . . . see my next post below.


BTW, quick clarification on my above post . . . I am *not* saying that the Phantoms or CineSpeed CAMs are using the LUPA4000 from FillFactory . . . re-reading my post I realize that the two might get connected there . . . the LUPA4000 is a generic part from FillFactory and is not cinema-friendly (it's a great industrial sensor, but it wasn't designed for cinema screens). The Phantoms and the CineSpeed CAM, on the other hand, use custom sensors fine-tuned for their respective cinema-targeted applications.

 

I apologize if my above post was misleading, and I am editing it to make it clearer . . . I'm basically saying global shutters are not hard to do, at least not as complex as an FT CCD or as costly, but that they do create some more non-uniformity issues over a traditional 3T design. My post should not be interpreted to say that the Phantoms and the CineSpeed CAM produce dodgy image quality - as those who have used the cameras will know, that is certainly NOT the case.

