
Canon D5 Mk2 to 35mm mini-test results


Jean Dodge


  • Premium Member
Wow, that's unfortunate. All sensor chips are inherently RGB. Ideally, conversion to YUV would happen once, if at all, and only for TV use. Any idea which luminance equation/matrix is built into the camera?

-- J.S.

 

True, but as you know most video cameras record YUV because they chroma subsample. Since the 5DM2 records 4:2:0, it's not going to use RGB.


  • Premium Member

It's actually Y'CbCr, not YUV, though that mistake is pretty much endemic now. It does make the right matrix info a bit harder to find.

 

There's a bunch of rotate, offset and scale crap going on between these color spaces, and loads of different ways to do it, with and without super blacks/whites:

 

http://en.wikipedia.org/wiki/YCbCr
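
For anyone who wants to poke at this numerically, here's a minimal Python sketch of the two common luma equations (BT.601 and BT.709) plus the 8-bit studio-range scaling; which of these matrices the camera firmware actually applies is exactly the open question here:

# A sketch, not the camera's actual pipeline. R, G, B are
# gamma-corrected values in the 0.0 - 1.0 range.

def luma_bt601(r, g, b):
    # SD-era coefficients
    return 0.299 * r + 0.587 * g + 0.114 * b

def luma_bt709(r, g, b):
    # HD coefficients
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def to_8bit_studio(y):
    # Studio range: nominal black at 16, nominal white at 235.
    return round(16 + 219 * y)

# The same pure red lands on different code values per matrix:
print(to_8bit_studio(luma_bt601(1, 0, 0)))  # 81
print(to_8bit_studio(luma_bt709(1, 0, 0)))  # 63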


-- J.S.


  • Premium Member
True, but as you know most video cameras record YUV because they chroma subsample. Since the 5DM2 records 4:2:0, it's not going to use RGB.

 

True. For digital cinema, and for anything that needs blue/green screen work, chroma subsampling in the camera is a bad idea. That's one thing that the Red guys got right. At least Canon was smart enough to use 4:2:0 rather than 4:2:2.


-- J.S.


  • Premium Member
I only wrote YUV b/c you did and didn't want to seem pedantic with YCbCr. :P

 

Yup, guilty as charged. The difference does become significant when we're talking digital, and integers from 0 - 255 or 16 - 235 rather than analog levels and real numbers in the 0 - 1 range.

 


-- J.S.


  • Premium Member
True. For digital cinema, and for anything that needs blue/green screen work, chroma subsampling in the camera is a bad idea. That's one thing that the Red guys got right. At least Canon was smart enough to use 4:2:0 rather than 4:2:2.


-- J.S.

 

I'm curious, why are you saying 4:2:0 is a better choice than 4:2:2? Because the space saved in the chroma can be used for more picture detail?

 

As for the Red, it's 4:4:4, but the Bayer mask, as you know as well as anyone on the planet, is 4:2:0. So Red's red and blue samples were originally half horizontal and half vertical resolution, but subsequently interpolated up to full-resolution 4K. The Red takes great images and is excellent for keying, so I'm not trying to denigrate the camera. But in its 4:4:4, the first four carries more real resolution than the other two, IMHO. That said, Red does a very good job of filling in the missing pieces.


  • Premium Member
I'm curious, why are you saying 4:2:0 is a better choice than 4:2:2? Because the space saved in the chroma can be used for more picture detail?

 

As for the Red, it's 4:4:4, but the Bayer mask, as you know as well as anyone on the planet, is 4:2:0.

 

With 4:2:0, you save more space and lose no picture quality at all. It subsamples both horizontally and vertically. 4:2:2 subsamples horizontally only. The human visual system has equal horizontal and vertical resolution, so there's no advantage to subsampling one way but not the other. 4:2:2 is really a dinosaur. It dates back to analog component video, before we had cheap enough memory to have frame stores. You could only deal with the signal instantaneously as it scanned across horizontal lines. Now that we have memory, we should forget about 4:2:2.
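
A rough sketch of the storage arithmetic behind that comparison, assuming 8-bit samples and a hypothetical 1920x1080 frame:

# Bytes per uncompressed frame under each subsampling scheme.
W, H = 1920, 1080
luma = W * H
schemes = {
    "4:4:4": luma + 2 * W * H,                # full-resolution Cb and Cr
    "4:2:2": luma + 2 * (W // 2) * H,         # chroma halved horizontally
    "4:2:0": luma + 2 * (W // 2) * (H // 2),  # chroma halved both ways
}
for name, size in schemes.items():
    print(name, size, f"({size / luma:.2f}x the luma data)")
# 4:4:4 -> 3.00x, 4:2:2 -> 2.00x, 4:2:0 -> 1.50x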

 

It's not exactly right to say that a Bayer mask is 4:2:0. It has some 4:2:0-ishness to it in that it has twice as many sampling locations for green as it does for either red or blue alone. Green is the biggest component in luminance, and the human visual system resolves luminance a lot better than color, so both are using the same underlying perceptual reasoning.

 

But the Bayer array also has some 4:4:4-ishness to it in that its data represents the colors Red, Green, and Blue, rather than luminance and two color differences.
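
As a toy illustration of that point, the repeating 2x2 Bayer tile and its sample densities:

# Half the photosites sample green, a quarter each sample red and
# blue -- 4:2:0-ish densities, but the samples are R, G, B values,
# not luma and color differences.
tile = [["G", "R"],
        ["B", "G"]]
flat = [c for row in tile for c in row]
for channel in "RGB":
    print(channel, flat.count(channel) / len(flat))  # R 0.25, G 0.5, B 0.25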

 

Flipping back and forth between RGB and Y'CbCr isn't lossless, and shouldn't be taken lightly. Because we're constrained to integers in both spaces, and the matrices have decimals in them, there are roundings or truncations going on both ways.
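
A minimal sketch of that round-trip loss, using BT.601 coefficients and 8-bit studio range (whether the camera uses this exact matrix is, again, the open question):

# RGB -> Y'CbCr -> RGB with integer quantization at both ends.

def rgb_to_ycbcr(r8, g8, b8):
    r, g, b = r8 / 255, g8 / 255, b8 / 255
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = (b - y) / 1.772   # scaled so the range is -0.5 .. +0.5
    cr = (r - y) / 1.402
    return (round(16 + 219 * y),
            round(128 + 224 * cb),
            round(128 + 224 * cr))

def ycbcr_to_rgb(y8, cb8, cr8):
    y = (y8 - 16) / 219
    cb = (cb8 - 128) / 224
    cr = (cr8 - 128) / 224
    r = y + 1.402 * cr
    b = y + 1.772 * cb
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return tuple(min(255, max(0, round(c * 255))) for c in (r, g, b))

# Count sampled RGB triples that do not survive the round trip.
errors = sum(
    (r, g, b) != ycbcr_to_rgb(*rgb_to_ycbcr(r, g, b))
    for r in range(0, 256, 8)
    for g in range(0, 256, 8)
    for b in range(0, 256, 8)
)
print(errors, "of", 32 ** 3, "sampled colors fail to round-trip exactly")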


-- J.S.


Jean,

 

Avid has sorted out RGB to YUV HD/SD (and vice versa) issues very well. If you were editing on Avid, I'm sure a solution could be tracked down, and I believe it's as simple as choosing the proper import settings.

 

Just by way of education: digital images store values between 0 and 255 (assuming 8-bit color). RGB images use this entire range. YUV does too, but the range between 0 and 15 is called super black, and between 236 and 255, super white. So in YUV land, 16 is considered nominal black and 235 nominal white.

 

Depending on your delivery, having values below 16 or above 235 may or may not cause a problem. For broadcast, it's a big no-no. Most cameras that record YUV record the super black and super white info, but not all NLEs properly use it. Some see YUV and just truncate everything below 16 to black and everything above 235 to white. Avid does not do this.
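
A small Python sketch of the two import behaviors described above, using numpy and made-up sample values:

import numpy as np

# Hypothetical 8-bit Y' samples from a camera recording the full
# 0-255 range, super blacks and super whites included.
y = np.array([0, 8, 16, 128, 235, 245, 255], dtype=np.uint8)

# The lazy NLE: truncate on import; detail outside 16-235 is gone.
clipped = np.clip(y, 16, 235)
print(clipped)    # [ 16  16  16 128 235 235 235]

# A gentler import: rescale the full range into legal studio range,
# so the super-range detail survives (compressed, not discarded).
rescaled = np.round(16 + (y.astype(float) / 255) * (235 - 16)).astype(np.uint8)
print(rescaled)   # [ 16  23  30 126 218 226 235]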

 

Cineform has had problems with YUV footage containing super blacks and whites. To accurately represent the values between 0 and 15, it sometimes resorts to using negative integers, per a discussion with David Newman of Cineform. It's supposed to work out in the wash, but it does not always.

 

From the stills you've posted, it looks like the camera is recording YUV from 0 to 255, but the import process truncated everything below 16 and above 235. That's why the darkest blacks have lost their detail.

 

HTH.

 

It seems that Final Cut Pro is on the trail of getting this right too, but I'm not sure they're there yet. QuickTime Pro 7.6 is said to address some of what was wrong; Cineform is trying too. Obviously, we are searching for a workflow that gets us from "here," i.e., what the Canon 5D Mk2 shoots, to "there," a 35mm film-out, as well as possible.

 

The 30p-to-24p conversion using interpolation in the Cinema Tools portion of FCP shows great promise. In the real world of sitting in the theater screening 35mm film, it worked quite well. Preserving color space is the next hurdle. Taking the 8-bit clips into a 10-bit space also seems promising. Combining the two operations is the challenge.


  • Premium Member

Jean, it sounds like FCP is your weapon of choice (or necessity).

 

If you have the option of using Avid, I'll gladly help research a solution for you. But if FCP is the platform you'll be using, then I don't think finding an Avid solution would be helpful.


  • Premium Member
With 4:2:0, you save more space and lose no picture quality at all. It subsamples both horizontally and vertically. 4:2:2 subsamples horizontally only. The human visual system has equal horizontal and vertical resolution, so there's no advantage to subsampling one way but not the other. 4:2:2 is really a dinosaur. It dates back to analog component video, before we had cheap enough memory to have frame stores. You could only deal with the signal instantaneously as it scanned across horizontal lines. Now that we have memory, we should forget about 4:2:2.

 

...

 

John, interlaced video is still being shot today. I have to think 4:2:2 is an advantage when shooting interlaced, as it's the same situation you were describing with analog component back in the day: a frame losing half its vertical resolution by being split into two temporally different fields. If the color space then takes away another half of the vertical resolution and half of the horizontal resolution, isn't the result one-quarter vertical resolution and one-half horizontal resolution?

 

I understand your logic when it comes to progressive. I've always thought that 4:2:2 was better than 4:2:0, but invariably the 4:2:2 was shot with a better codec or uncompressed, and the 4:2:0 was mostly HDV. But I also think there should be an advantage to having the extra vertical resolution. I mean, if you had to represent a natural shape using blocks, wouldn't blocks shaped x wide by 1/2x tall allow for greater detail than blocks shaped x wide by x tall?
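
A quick sketch of the arithmetic in that question, assuming a hypothetical 1920x1080 interlaced source:

# Chroma grid per field (each field is 1920x540). Any vertical
# chroma subsampling would stack on top of the field split.
W, H = 1920, 1080
field_h = H // 2                      # 540 lines per field

chroma_422 = (W // 2, field_h)        # (960, 540): halved horizontally only
chroma_420 = (W // 2, field_h // 2)   # (960, 270): 1/4 of the frame's 1080 lines

print("4:2:2 per field:", chroma_422)
print("4:2:0 per field:", chroma_420)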


  • Premium Member
John, interlaced video is still being shot today. ... I mean if you had to represent a natural shape using blocks, wouldn't blocks shaped x wide by 1/2x tall allow for greater detail than blocks shaped x wide by x tall?

 

I guess TV news, sports, special events and such might work interlaced on the 1080i networks. Maybe soaps and late nights, too? Certainly all of scripted prime time entertainment has been 1080p/24 for over five years. IIRC, we did our last interlaced show circa 2002. For film-out, 24p/25p is the only thing that makes sense. Do you know of any other applications still shooting interlaced?

 

You're right that interlace would be limited to using 4:2:2 rather than 4:2:0, because vertically adjacent pixels don't exist at the same moment in interlace; adjacent lines belong to different fields.

 

The human eye sees with equal horizontal and vertical resolution (If that weren't true, we'd be rotating stuff all the time to see it better). In the development of anamorphic systems, they did some testing on images with differing horizontal and vertical resolution. The results are that if the ratio of resolutions is 2:1 or less, the average viewer accepts the picture and sees it as having the lower of the two resolutions. Much beyond 2:1, and folks notice that the tops and bottoms of things are either sharper or softer than the sides.


-- J.S.


Jean, it sounds like FCP is your weapon of choice (or necessity).

 

If you have the option of using Avid, I'll gladly help research a solution for you. But if FCP is the platform you'll be using, then I don't think finding an Avid solution would be helpful.

 

 

We're not married to FCP by any means, and we have seen that .avi files seem to handle the camera's codec better... so yeah, maybe Avid is the way to go.

 

If you are curious and have the free time to look at a clip in an Avid system, PM me. The clips are on a Mac-friendly server, but there are ways to get you some footage... if you want to play. Again, this is a cinematography forum and not a post one, but who am I to look a gift horse in the mouth? You are too kind.

 

We originally figured FCP would be the way to go with this "low end" camera's DIY philosophy, but whatever works best is obviously going to be attractive to end users.

 

There are a boatload of tech issues we have gathered together, and I've got at least four FCP users lined up willing to try inventing a workflow for us... just friends and cohorts. I can quickly get you up to speed with what we know and have tried in the last 48 hours... which is not a whole lot, but it might help.


  • 5 months later...
The exposure control workaround is a pain in the arse, but it becomes second nature eventually. With manual-iris lenses you can stop down while pointed at almost any light source until you trick the camera into displaying an approximation of the ISO and shutter speed that works for you, and then lock the value with the * button. Getting repeatable results take to take was a concern, but it seemed to work out okay in the tests we did.

 

I'm looking into buying one of those cheap photo frames that displays JPEGs in a slide show and shooting a series of grey cards as a way to easily repeat brightness values to hold to the lens... another user-forum suggestion I picked up in research. Silly, but it should be effective. As you may know, the camera uses a combination of ISO and shutter speed to control exposure once you take away iris control as its third option. At the slower ISOs, 100 and 200, the shutter speeds are all over the place, up to and beyond 1/160th of a second. ND filters come into play here to make sure you are shooting 1/50th at the f-stop you want, and the camera also has two stops of exposure compensation available on the thumb wheel. (For a 1/50th-second exposure, always choose and lock 1/40 on the display; a 1/50th display reading can sometimes force a 1/100 shutter in actual practice.)

 

At ISOs higher than 200, it is our understanding that the shutter speed is always 1/30th.

 

Here's the table we went by. I can't take credit for it, but I can't refute anything in it either:

 

------------

From tests performed by Jon Fairhurst and Mark Hahn, the following findings have been made:

 

When shooting video with Nikon lenses or any lens where you are setting aperture manually:

 

Rule 1. Camera shoots at 1/33 of a second, any time the ISO is above 100, or above 200 with HTP mode employed. There is no way around this no matter what shutter speed reads out on the LCD.

 

Rule 2. At ISO 100, or 200 with HTP set, you can adjust shutter speeds. In the following table, the left column shows the LCD reading and the right shows the actual shutter speed the camera will use.

 

LCD reads -> Actual shutter

 

1/40 -> 1/50

1/50 -> 1/50 or 1/100

1/60 -> 1/100

1/80 -> 1/100

1/100 -> 1/100

1/125 -> 1/125

1/160 -> 1/160

1/200 -> 1/200

 

Rule 3. With a non-aperture-control lens, even higher shutter speeds than the 1/200 shown can be attained, despite Canon's stated shutter-speed limit of 1/125.

 

---------------------------
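
If it helps anyone on set, here's a hypothetical Python lookup encoding the quoted table; the structure and names are mine, not from the tests:

# Display reading -> possible actual shutter speeds, per the
# Fairhurst/Hahn table above. The 1/50 row is ambiguous, which is
# why the advice is to lock 1/40 when you want a true 1/50.
LCD_TO_ACTUAL = {
    "1/40":  ["1/50"],
    "1/50":  ["1/50", "1/100"],
    "1/60":  ["1/100"],
    "1/80":  ["1/100"],
    "1/100": ["1/100"],
    "1/125": ["1/125"],
    "1/160": ["1/160"],
    "1/200": ["1/200"],
}

def actual_shutter(lcd_reading):
    return LCD_TO_ACTUAL.get(lcd_reading, ["unknown"])

print(actual_shutter("1/40"))   # ['1/50']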

 

There is some exposure-related speculation concerning bayonet adaptors that include contacts for the autofocus/auto-iris electronics, but we have not chased down that rabbit hole ourselves yet. It seems that without a connected circuit to the lens, the camera's tiny brain assumes a value of f/2.8 or f/2.0 and keeps it there, which is enough to work from as a good start. In theory I suppose you could hack that method and gain some more manual control back, but we've been too busy to ponder it.

 

No one said this was easy. But it is hackable, to the extent that the camera's specific advantages can be used and controlled. I think it is best used as a spy cam and night-vision thing for night exteriors in downtown mixed lighting, and as a way to steal shots inside clubs and museums, etc. For day-to-day production I don't fully trust it not to overheat or act up, but if I were making a vérité doc I'd give it serious consideration over any DVX-type HD unit, and I think it may also be better than my beloved S16 Aaton in a lot of ways.

 

Focus pulling won't be fun with Nikkors either, and the 7" 720-line monitor is not the world's best eyepiece, etc. There are clearly many, many issues that can be worked out in future camera systems, but again, this is an exciting format and a lot of fun to shoot with.

 

Am I correct to think the above exposure problems or limitations only apply to the 5D, and not to the 7D, which seems to have full manual exposure control when in movie mode?


  • Premium Member
Am I correct to think the above exposure problems or limitations only apply to the 5D, and not to the 7D, which seems to have full manual exposure control when in movie mode?

 

Hi,

 

The post you quoted was dated 4th May. The 5D has since had a firmware update, so it now has full manual exposure control.

 

Stephen


The 5D only does 30 fps, which is the deal-breaker for many.

 

 

Yeah, I never cared for the 5D, and now that we have the 7D, why should we? I just wanted to make sure the 7D had full manual exposure control and wouldn't override anything I did with my exposure.


  • Premium Member

Two problems, as far as I know, remain unsolved:

 

- The codec is awful, bad enough that even when you downconvert it to standard-def DV, you can still get contouring and quilting in things like graduated skies. There isn't exactly a dual-link output on these things, so there is no solution to this.

 

- At least in the 5D, they were leaving out so much of the chip that it caused quite severe aliasing problems. Since the 7D is a different sensor, this may be better or worse, but I doubt they've solved it entirely.

 

P


  • Premium Member
... The codec is awful, bad enough that even when you downconvert it to standard-def DV, you can still get contouring and quilting in things like graduated skies. There isn't exactly a dual-link output on these things, so there is no solution to this. ...

 

Hi Phil: Can you tell if there's an example of the "contouring and quilting" artifacts you mention visible on the shadowed ceiling (upper-right of frame) in the scene at about the 26-sec. point in this 7D video?

http://www.vimeo.com/6882279

 

... or perhaps that's just a Vimeo Flash compression artifact and not necessarily something caused by the 7D's in-cam compression?

 

Just curious, thanks.


  • Premium Member

Could be either; it's hard to tell, since both Vimeo and the 7D use the same sort of compression.

 

That's probably been recompressed at least twice, though, if not three times, between the camera and the web, with different degrees of H.264 involved each time. It's a testament to the codec that it still looks reasonably good.

 

P


  • Premium Member

Thanks, Phil. I suspected that was the case, but wasn't sure.

 

As I think you've said before, several recent cameras produce video that looks better than their specs would otherwise indicate.

 

Now, if they'd just add a measly headphone jack and an audio AGC on/off switch to the 7D ... ;)


  • Premium Member

The Japanese word for it is kaizen, 改善 (literally "change good"). Much as people like to read a lot of mysticism into anything written in kanji, the standard translation is "improvement" and as far as I know the term has no ancient mystic philosophical baggage attached.

 

Actually, it's a development practice as much as it is an approach to making money. They pour an extremely large amount of money into developing this stuff. In the West, we like to hit it out of the park: do all our work in one hit, then sit back. What this actually means is that we never get anything out, and when we do, it's far too expensive. My impression of outfits like Canon is that they develop something, release it as soon as they can get a vaguely workable product, and keep the dev labs open and the cash flowing.

 

P


  • Premium Member
Two problems, as far as I know, remain unsolved:

 

- The codec is awful, bad enough that even when you downconvert it to standard-def DV, you can still get contouring and quilting in things like graduated skies. There isn't exactly a dual-link output on these things, so there is no solution to this.

 

- At least in the 5D they were leaving out so much of the chip that it caused quite severe aliasing problems. Since the 7D is a different sensor, this may be better or worse but I doubt they've solved it entirely.

 

P

 

How's the HD signal out of the plug? Is it clean or compressed? If it's clean, wrangling it onto a hard drive isn't out of the question. If it can route to both the UDMA card and a hard drive, even better.

 

Does ISO affect quilting?

 

I have an automatic assumption, likely incorrect, that Japanese businessmen aren't as concerned about getting us the best product as they are about making money. Maybe I'm too cynical.

