Questions about aspect ratios and CCDs


Häakon

I have some questions about aspect ratio, and I'll start by apologizing in advance if the answers are obvious or common knowledge. I'm a young student very interested in film; I'm not a major industry player or "pro" like some of the other people who post on these forums.

 

I am getting more and more into shooting motion pictures, and sometimes the discussion around aspect ratios really confuses me. After seeing Canon's recent announcement of "true 16x9" CCDs on the new XL2, there have been a lot of comparisons to other cameras (most obviously and notably the AG-DVX100) about "effective pixel count" and "true 16x9" that have left me a bit in the dark.

 

First off, I found this page with information on the CCD used in Canon's new XL2 camcorder. I am a bit confused as to why a camera with "true 16x9" chips would use a CCD that is 4x3 in shape, and why they'd choose to make 1/4 of its area completely "inactive." I guess this all comes down to the age-old debate over which aspect ratio is ultimately "better," but it seems to me that if you've got a 4x3-sized chip, then its native, or "max," resolution would be in 4x3 mode. Yes, I realize that a rectangle (let's say 16x9) is longer than a square (say 4x3 - I know this isn't really a "square," but play along), so it would appear that a "widescreen" 16x9 image has "more data" to the left and the right. People are conditioned to this response because they're used to seeing 4x3 pan-and-scan versions of widescreen movies that were originally shot at an intended 16x9 (or wider) aspect ratio, and when it's "formatted to fit their TV," they lose a lot of the picture.

But really, if the chip itself is 4x3 in shape, then by making 1/4 of it "inactive," aren't you just "letterboxing" the chip to get your "true" 16x9 area? And if so, why on earth would you make the 4x3 area even SMALLER by "double letterboxing" it? The 4x3 area that the camera actually uses looks like it covers only about 2/3 of the total CCD (maybe even less?), which seems incredibly wasteful to me. And considering everyone is so big on resolution and having the "maximum effective pixel count" possible, why would a manufacturer choose to make the system work like this? Is it so that the 16x9 mode actually looks "wider" when you use it and the manufacturer can claim it's the camera's "true" mode? Sorry, but if I'm going to spend $4,000 on a camera, I want to get my money's (CCD's) worth.

To my understanding, the Panasonic AG-DVX100(A)'s "16x9" mode is more of a "letterbox" mode: the CCD is 4x3 in nature, and it renders the top and bottom "inactive" to shoot in "16x9." Which, if the chip actually is 4x3 in physical size, makes sense to me. This Canon chip looks like it's already letterboxed to begin with, but then letterboxes again on the left and right to make up a 4x3 image. That means they've thrown out all kinds of resolution and have effectively wasted half of the CCD. I'm not making any claims or pointing fingers; I'm just trying to understand why it is being done this way. Are my observations anywhere close to what's really happening?
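Here's a quick back-of-the-envelope check of that "2/3 of the CCD" guess, as a little Python sketch. It only assumes the layout I described above (a 4x3 chip, a full-width 16x9 band across it, and a 4x3 window using just the height of that band) - illustrative fractions, not Canon's published figures:

from fractions import Fraction

# Illustrative geometry only, based on the layout described above --
# not Canon's published specifications.
chip_w, chip_h = Fraction(4), Fraction(3)      # a 4x3 chip, arbitrary units

band_h = chip_w * Fraction(9, 16)              # full-width 16x9 band across the chip
window_w = band_h * Fraction(4, 3)             # 4x3 window using only the band's height

chip_area = chip_w * chip_h
print(float(chip_w * band_h / chip_area))      # 16x9 band  -> 0.75 of the chip
print(float(window_w * band_h / chip_area))    # 4x3 window -> 0.5625 of the chip

So if the chip really is laid out that way, the 16x9 mode would cover about 75% of the area and the 4x3 mode only about 56% - even less than the 2/3 I guessed.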

 

Also, "16x9" is not a common motion picture (movie/film/whatever) ratio as far as I'm to understand. (16x9 works out to roughly 1.77x1, whereas most motion pictures are shot at either 1.85:1 or 2.35:1... right?) Now I realize that HDTV has an agreed upon aspect ratio of 16x9, but if the 24p cameras are more aimed at independent movie makers and not HDTV producers (they aren't HD-resolution, anyway), then why on earth are the "widescreen" modes in these cameras 16x9 and not 1.85:1?? Frankly, with the options that have been presented to me, I feel like always shooting in 4x3 mode and then letterboxing it to 1.85:1 in post, because then at least I can crop/recenter the image vertically to meet my aesthetical preferences if I feel like something wasn't shot perfectly - with no loss in resolution. (If I shoot in "16x9 letterbox" with the Panasonic, then I'm then stuck with the image I have, and if I shoot 4x3 with the XL2 then the resolution is seemingly much worse than the AG-DVX100, so there's really no point in using that camera at all!) Essentially, unless the actual chip in the camera is physically 16x9 in size, then it's never really "truly" 16x9... (and the chip should really be 1.85:1 in size anyway.)

 

These may be basic questions, but the answers seem really unclear to me. It's very exciting that 24p digital cameras are now accessible in the "prosumer" domain, but the choices are slim (and with my confusion over the topics above, I'm not 100% ready to jump in). Since I don't have $60,000 to spend on a "professional-level" camera, I'm trying to find the best solution for my needs - something that will actually let me make the best-quality films I can on my own until I get to that point. I've been very impressed with the capability of the AG-DVX100 (I've seen both good and bad footage shot with it, but the best of the good looks quite acceptable to me - especially Ralph Oshiro and Ted Lederman's work), and I'm definitely leaning in that direction. With the advent of the XL2, obviously I want to do my research before shelling out four thousand dollars, but the design of its CCD just makes me feel like it's a huge step backward - and all so they can claim "true 16x9," since they've hardcoded it so that the 16x9 ratio yields the highest resolution the camera is "allowed" to shoot (regardless of the fact that the CCD isn't "truly" 16x9 at all). Ugh!

 

If anyone can touch on any of my questions and provide some clarity, I'd really appreciate it. I feel like my confusion is at least somewhat legitimate. It certainly would be much easier if there weren't so many aspect ratios to choose from in the first place! :)


Guest Pete Wright

Haakon

 

I have not read your whole post, sorry. But here is a little bit of what I know.

 

Not all cameras use the native resolution of the chip. In the XL2 that's the case in 4:3 mode, where the camera uses only 720x480 or 720x576 pixels. Using the native resolution should be best; the next best thing would be capturing a lot more pixels than the output needs, for instance 1,500x1,000, and scaling down.

 

Canon is using about half the pixels of the chip in 4:3 mode. They could have used all of them; instead they use a smaller central area and disregard roughly 50% of the pixels. The pixels that are actually used are referred to as the active pixels.
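To show why "about half" is a reasonable figure, here is the geometry as pixel counts in Python, using a hypothetical 960x720 grid of square photosites on a 4:3 chip. These are illustrative numbers, not Canon's official specifications:

# Hypothetical 960x720 grid of square photosites on a 4:3 chip --
# purely illustrative, not official Canon numbers.
total_w, total_h = 960, 720

wide_w, wide_h = total_w, total_w * 9 // 16        # full-width 16:9 band: 960 x 540
four3_w, four3_h = wide_h * 4 // 3, wide_h         # 4:3 window inside it:  720 x 540

total = total_w * total_h
print(f"16:9 active: {wide_w * wide_h} of {total} ({wide_w * wide_h / total:.0%})")
print(f"4:3 active:  {four3_w * four3_h} of {total} ({four3_w * four3_h / total:.0%})")

That works out to roughly 75% of the photosites in 16:9 mode and 56% in 4:3 mode, which is where "about half" comes from.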

 

If Canon had chosen to use all the pixels of the chip, the camera would be better: it would have a shallower DOF, and less sharp lenses could be used - the existing lenses would be enough.

 

Pete


  • 2 weeks later...

There is a lot of nonsense talked about resolution and pixel count, and Canon's new XL2 camera has become part of it. The absolute pixel count does not say everything about the resolution a human eye perceives in the image. That perceived resolution varies a lot by individual and by what else is done to the data.

 

Secondly, there is marketing hype, and the Canon camera is full of it, along with an absence of real detail such as frequency response or the actual pixels recorded.

 

I live in PAL land and Canon have carefully NOT said what the vertical resolution of the camera is: they have implied it is the full PAL 576 lines, but it does not actually say so anywhere. The same issue applies in both NTSC and PAL land for the horizontal resolution: they may well have a source CCD which is 950/960 pixels horizontally, but the DV spec ONLY records 720 rectangular pixels (and 540 lines of frequency detail), so they are definitely processing this source data in the camera to put the picture on tape.

Herein lies the devil in the detail: if they do this really well in the frequency domain, then they are effectively downconverting, in the digital domain, and ONLY horizontally. This should be good news, as it will give a reduction in noise and should sharpen edges towards their theoretical maximum. However, simple logic says that 950 source pixels is only a small step up from 720, so the conversion is likely to cause all sorts of artifacts: what if a line is sharp and vertical - how do two pixels portray it? Well, we will see how good their engineers' filters are!
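If you want to picture what that horizontal-only downconversion looks like, here is a generic Python sketch using a polyphase low-pass filter to take a 960-sample scanline down to 720 samples (a clean 3:4 ratio). This is NOT what Canon actually does inside the camera, just the standard digital-domain approach, assuming numpy and scipy are available:

# Generic horizontal-only downconversion: 960 samples per line -> 720 (3:4).
# NOT Canon's actual in-camera processing, just an illustration.
import numpy as np
from scipy.signal import resample_poly

rng = np.random.default_rng(0)
frame = rng.random((480, 960))             # hypothetical 960-photosite-wide frame

dv_frame = resample_poly(frame, up=3, down=4, axis=1)   # filter + decimate horizontally
print(dv_frame.shape)                      # (480, 720)

# A sharp single-pixel vertical line shows how the filter spreads (and rings)
# detail across neighbouring output pixels -- the artifact question above.
line = np.zeros(960)
line[480] = 1.0
print(np.round(resample_poly(line, 3, 4)[356:365], 3))

How much ringing appears around that single spike is entirely down to the filter design, which is exactly the bit Canon have not told us about.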

 

So, as a simplified answer to the first question: yes, 16:9 IS a COMPROMISE that the broadcast boys decided on.

 

TV guys have used Super 16 for years and telecined it to 16:9, and many still do. ALL these standard-definition systems end up with a SPATIAL resolution of 720 by 576 pixels in PAL land. These pixels ARE NOT square (like your computer's) but rectangular in shape. That was a clever bit of skulduggery by the TV boys to fit in with existing specifications: one bit in the data stream can say the data is 16:9 or 4:3 in ratio, but the actual total data content is the same - 720 pixels per horizontal line (everywhere in the world!!!).
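As a simple illustration of what that one bit does, here is the pixel aspect ratio implied by flagging the same 720-wide frame as 4:3 or 16:9, in Python. This uses the simplified "display aspect divided by storage aspect" relationship; the exact broadcast figures differ slightly for historical (analogue blanking) reasons:

from fractions import Fraction

# Same 720-pixel-wide frame, flagged two different ways. Simplified model:
# pixel aspect = display aspect / storage aspect (exact broadcast values
# differ slightly for historical reasons).
def pixel_aspect(width, height, display_aspect):
    return display_aspect / Fraction(width, height)

for system, (w, h) in {"PAL": (720, 576), "NTSC": (720, 480)}.items():
    for flag in (Fraction(4, 3), Fraction(16, 9)):
        par = pixel_aspect(w, h, flag)
        print(f"{system} {w}x{h} flagged {flag}: pixel aspect ~ {float(par):.3f}")

The data on tape is identical either way; only the shape the pixels are displayed at changes.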

 

Then we get to questions of what is lost when you do a 5:1 compression: if the noise is low in the original source, surprisingly little. The coding schemes are now quite good, and while they are NOT like a film-sourced image, for most of us they are good enough - on our TVs, at any rate.
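For anyone wondering where the 5:1 figure comes from, the arithmetic is roughly this (assuming 8-bit sampling, DV-style chroma subsampling averaging 1.5 bytes per pixel, and the nominal ~25 Mbit/s DV video data rate):

# Rough arithmetic behind the ~5:1 figure: 8-bit sampling, DV-style chroma
# subsampling (4:1:1 NTSC / 4:2:0 PAL, both ~1.5 bytes per pixel on average),
# against the nominal ~25 Mbit/s DV video data rate.
DV_VIDEO_MBPS = 25

for system, (w, h, fps) in {"NTSC": (720, 480, 30000 / 1001),
                            "PAL": (720, 576, 25.0)}.items():
    uncompressed_mbps = w * h * 1.5 * 8 * fps / 1e6
    print(f"{system}: ~{uncompressed_mbps:.0f} Mbit/s uncompressed, "
          f"about {uncompressed_mbps / DV_VIDEO_MBPS:.1f}:1 versus DV")

Which is also why low noise in the source matters so much: noise looks like detail to the coder and eats into that budget.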

 

Many sources say that the actual films we see in cinemas only deliver about 800 lines of resolution horizontally, and I have certainly seen films that looked like that. Most are much better, and although XL2 results can be good, they will never be as good as film. In my view film is still widely used because badly shot film can be "fixed" in telecine; badly shot DV is c**p whatever else you do with it.

 

Well-exposed, well-lit DV can look really good, but you need to be better at the art - not a beginner, as is so often suggested.

 

This trick with the CCD is just a side issue dreamed up by the marketing boys: what matters is how well the Canon boys have processed those 950 pixels to get the DV data onto tape, as 720 rectangular pixels with a digital frequency response of 540 lines of perceived resolution. The XL1 did a damn good job, so if they repeat that, the pictures may well be great!

 

Other questions come in then: is there ringing on sharp transition edges (we have all seen this around a person's face against a dark background)? Do you see picture quality changes as the camera pans in any direction? Do you get contours enhanced? Why do we need skin-tone correction in western countries, and does it apply to black and brown skins?

 

We all need to see real pictures to judge this - test-chart shots and frequency plots - so we can see what they have thrown away to get it onto DV.

 

Then there is the real marketing question: if a little guy like JVC can invent and implement an MPEG HDV system, why can't Canon? Notice I did not suggest the technical boys had anything to do with that question: it is obviously possible, so what is scaring them?

 

Well, the answer comes down to your first question: most of us do not need HD. Most of us just want 800 lines of REAL pixels, not bobbing about, and at 50 fps, because we would PERCEIVE that as REAL SHARP - and HDV might give us that, and we might not pay more for better....

 

Now let me see some screen shots of test charts and real images....
