
Canon 5D Mk II to 35mm mini-test results


Jean Dodge


Just came back from screening 35mm film-out test footage from the Canon 5D Mk II. No major problems that haven't been discussed elsewhere... it looks good, and adding the film grain to low-light shots makes it seem "right" to me; it also covered a few artifacts like solarization and aliasing on a few occasions. Overall we were very pleased with the look.

 

We did the 30p-to-24p conversion in the Cinema Tools portion of FCP, and it seemed okay. We projected less than 300 feet of film, though, so I can't say these tests were definitive.

 

Non-linear editing systems are not importing the files correctly, however, and the crushed blacks everyone sees after taking the footage into FCP or Avid can supposedly be avoided by using the right software from CineForm or something similar.

 

Here's a good link to what I wish we had found prior to the tests

 

http://cineform.blogspot.com/2009/01/full-...m-canon-5d.html

 

All our blacks were crushed, so most of the test was a waste of time, frankly, but it was a good shakedown cruise for shooting with the camera. We learned a lot, and if anyone wants the gory details, feel free to send me a message.

 

The two hurdles seem to be (a) getting the footage into a non-linear editor without crushing the blacks, and (b) converting it from 30p to 24p, which has been done a variety of ways that need to be stacked up side by side before conclusions can be drawn as to which method is best.

 

Other than that, the camera works quite well. It's full HD with 8-bit color, and it has a great low-light, shallow-DoF look that can't be captured any other way as far as I can tell.

 

We shot with manual focus AI Nikkors and did car shots, day for night and some basic narrative stuff trying to recreate a lot of typical problem shots. I'm a film guy, and can't stand the video look, but this camera comes very very close to getting it right. It would be a delight to shoot verite with.

 

The look is worth the hassle. Getting the right lens package together is next, along with learning to adapt a lot of DVX tricks regarding support and post workflow, but this camera will make a movie that looks better than any DVX film I've ever seen. The overall look of the lenses puts the camera in a category above Super 16mm, and the low-light performance is incredible.


  • Premium Member
The two hurdles seem to be (a) getting the footage into a non-linear editor without crushing the blacks...

Hi Jean,

 

Have you seen these articles yet?

 

http://prolost.com/blog/2009/1/20/5d-crushing-news.html

http://prolost.com/blog/2009/1/22/quicktim...-5d-movies.html

 

Sounds like a workaround using Apple Color and QuickTime 7.6 instead of using CineForm.


Satsuki - yes, we've followed the Google trail of breadcrumbs to CineForm and ProLost now...

 

Phil... still exploring other avenues. By DV do you mean DV, the little 1/4" digital video tapes? Or D-5? Please expand... I've followed many of your posts over the last two days, and you seem to be deep into this stuff. Kudos for that, but here you've lost me.

 

I'm a film guy who HATED everything about video, including the fershluginer video taps we have to deal with. When I direct, I stand on the off side of the camera, consult with the DoP and trust he's getting good floating compositions based on a glance or two at the monitor or a peek thru the eyepiece. Then I work with the actors and try to get the performances and timing right. I trust my DP and also hope he trusts me to do my job. When I worked as a focus puller I learned my lenses and watched the actors and the dolly marks, not the monitor so much. If it got "a picture" I didn't tweak it much. That's my intro to video, besides renting blu-rays and watching them.

 

However, now that "digital cinematography" is seemingly on the verge of looking sufficiently film-like in certain conditions, it's creeping into my life. I'm learning this stuff reluctantly, and a bit late in the game, I admit. So the learning curve is steep and a lot of it is not second nature yet.

 

My role this last week as we did these tests was simply to build the package, assist the DP and consult with the editor who was doing his own research into the workflow. We rushed to a film-out despite the obvious crushed blacks simply to accelerate our learning curve - and we learned a lot but it's clear we need to keep pushing this little-camera-that-could. Sometimes you have to fail to learn your lessons.

 

I've also read about CoreAVC, a program that works with H.264 and was recommended by the CineForm blogger David Newman. His posts are linked from the ProLost ones, but here is a good one with things about the blacks and the codec that everyone who shoots with this camera should know:

 

http://cineform.blogspot.com/2009/01/full-...m-canon-5d.html

 

For anyone just coming to this stage, I recommend doing a search on his CineForm Insider blog to read everything he has written, using "Canon 5D" as your search term.

 

The night before our film-out on the Arri recorder, I had incorrectly assumed it was the 30p-to-24p conversion that was crushing our blacks and affecting the contrast so badly. (I was in the edit room, but only after four 15-hour shooting days. My attention span was clearly not what it could have been!) Now I see it is a simple error in getting the media into a non-linear editor with the right settings/standards, but one which is still being sorted out by the vast majority of 5D users.

 

We'll continue our tests now but will probably wait a bit until we do another film-out. We saw enough in the screening room about color and film grain and what the process does to smooth out aliasing and solarization in dull shadow areas - the film helps cover some of the faults of the camera, and in the very low-light scenes the stuff looks amazing.

 

I've got plenty to say about working on set with the camera, too, but will have to write more later. Again, we're very pleased with and excited about the camera. Crippled as it is, it still does some tricks you can't do any other way.


  • Premium Member

Great info and links, Jean. Thanks for sharing with us. What are you doing to deal with the notorious rolling shutter? Also, isn't there a common beef with the exposure-control workaround?


  • Premium Member

You're right, I'm being imprecise. By DV I mean DV25, also called DVSD, which is the compression codec used for the little 1/4" tapes. In its guise as a computer video compressor, it's a useful low-stress way of working with standard-def offline material with a good compromise between image quality, file size, and the computer horsepower required to encode or decode the material. More recently, DVCPRO-100, also called DVHD, has become popular for this, since it allows for reasonable 720p high definition pictures with a similar level of compromise, but it's a bit tricky to handle on anything other than a Mac running Final Cut.

 

As to crushed blacks - it's difficult to glitch-fix this without a step by step description of your workflow, but I've done everything as file based and I have not hit any problems - I'm looking right now at a monitor displaying both the original h.264 HD material, and the PAL-resolution offline side by side, and the colours appear identical. One reason I like to use free tools like ffmpeg for this sort of thing is that they have a marked tendency not to try and be clever; to simply take the image data and blast it from one file into another without concerning themselves with "correct" levels. Of course, this means you take responsibility for not creating a situation which gives you problems, but it also gives you the flexibility to get out of those situations when they occur.
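
For what it's worth, here's the sort of file-based pass I mean, driving ffmpeg from a small script. This is only a sketch with made-up filenames, and whether the levels come through untouched depends on your build and the pixel-format path it takes, so check the result against the original on a waveform or a split screen before trusting it.

import subprocess

# Minimal sketch of a file-based DV offline: hand the original h.264 clip to
# ffmpeg and let -target pal-dv set the DV codec, 720x576, 25 fps and PCM
# audio in one step. Filenames are hypothetical.
src = "MVI_0001.MOV"            # clip straight off the CF card (assumed name)
dst = "MVI_0001_offline.avi"    # standard-def offline for the edit

subprocess.run(["ffmpeg", "-y", "-i", src, "-target", "pal-dv", dst], check=True)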

 

I'd have to look into this a bit more, which I suspect I will eventually have to do when I come to online the short we did, in order to figure out exactly how to fix it, but in the meantime, I'm accepting bookings!

 

P


Great info and links, Jean. Thanks for sharing with us. What are you doing to deal with the notorious rolling shutter? Also, isn't there a common beef with the exposure-control workaround?

 

Rolling shutter has not been a deal-breaker thus far. The car shots we tested used 1/30th (1/33 actual) and 1/50th shutter rates and shallow DoF, and as such, if there were slanted telephone poles out there, we didn't notice them much. When we did need to stop down to, say, f/8 to use a 35mm lens to hold focus on two front-seat occupants in a hostess-tray-type setup, the motion blur of driving blurred the background sufficiently so it didn't look like video, where there is excessive DoF. When the car stops, we noted that a cut to a different angle would be best. Odd things like this became obvious after a day or so of shooting.

 

Panning in general is an issue; pan speeds need to be worked out as a table/guide, much as spinning-shutter pan speeds have to be dealt with in film. The wide Nikkors are still in the shop getting rear flanges milled down to fit the Canon camera, but it's obvious they distort horribly, and the barrel-distortion effects that can be hidden in a still image are bad in a pan - though sometimes easily disguised in a tracking shot on a slant. We'll likely spring for a newer wide-angle lens like the Nikon 15mm f/3.5 if we decide we really need a super-wide. (Tests not complete here.)

 

As for jello-cam, we didn't see any at slower shutter rates and normal pan speeds. The Nikon D90 is much, much worse at this. I wouldn't shoot LAWS OF GRAVITY with this system, but so far, so good. (LOG is a notoriously handheld indie feature from back in the day, filled with whip pans and handheld 360 shots. Fun, but audiences got seasick.)

 

Personally, we liked the car-shot stuff - driving 60 mph looks like 60 mph and has a very natural feel to it. In fact we didn't see any "judder" issues anywhere in panning, but we plan to do more tests soon. Also, the small size of the camera lets you put it into places you could never get a film camera. The silly thing is wider than it is long, and often operates better as an SLR than with the FF rig and HH stuff. Imagine taking an Arri SR onto a roller coaster and then compare that with taking a Nikon.

 

The exposure-control workaround is a pain in the arse but becomes second nature eventually. With manual-iris lenses you can stop down while pointed at almost any light source until you trick the camera into displaying an approximation of the ISO and shutter rate that works for you, and then lock the value with the * button. Getting repeatable results take to take was a concern, but it seemed to work out okay in the tests we did.

 

I'm looking into buying one of those cheap photo frames that displays JPEGs in a slideshow and shooting a series of grey cards as a way to easily repeat brightness values to hold to the lens... another user-forum suggestion I picked up in research. Silly, but it should be effective. As you may know, the camera uses a combo of ISO and shutter speed to control exposure once you take away iris control as its third option. At the slower ISOs, 100 and 200, the shutter speeds are all over the place - up to and beyond 1/160th of a sec. ND filters come into play here to make sure you are shooting 1/50th at the f-stop you want, and the camera also has two stops of exposure compensation available with the thumb wheel. (For a 1/50th of a sec exposure, always choose and lock 1/40 on the display; a 1/50th display can sometimes force a 1/100 shutter in actual practice.)

 

At ISOs higher than 200, it is our understanding that the shutter rate is always 1/30th (1/33 actual).

 

Here's the table we went by (a quick lookup sketch of it follows below). I can't take credit for this, but I can't refute anything in it either:

 

------------

From tests performed by Jon Fairhurst and Mark Hahn, the following findings have been made:

 

When shooting video with Nikon lenses or any lens where you are setting aperture manually:

 

Rule 1. The camera shoots at 1/33 of a second any time the ISO is above 100, or above 200 with HTP mode enabled. There is no way around this, no matter what shutter speed reads out on the LCD.

 

Rule 2. At ISO 100, or 200 with HTP set, you can adjust shutter speeds. The following table shows the LCD reading on the left and the right shows the actual shutter speed the camera will use.

 

LCD reads -> Actual

 

1/40 -> 1/50

1/50 -> 1/50 or 1/100

1/60 -> 1/100

1/80 -> 1/100

1/100 -> 1/100

1/125 -> 1/125

1/160 -> 1/160

1/200 -> 1/200

 

Rule 3. With a non-aperture-control lens, shutter speeds even higher than the 1/200 shown can be attained, despite Canon's indication of a 1/125 shutter-speed limit.

 

---------------------------
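
Purely as an illustration (my own sketch, not from Fairhurst and Hahn), here are the same rules as a little Python lookup you could keep on a laptop at the camera:

# Illustrative lookup of the Fairhurst/Hahn findings quoted above.
# Rule 1: above ISO 100 (or above 200 with HTP) the camera reportedly sits
# at 1/33 no matter what the LCD says. Rule 2 applies otherwise.
LCD_TO_ACTUAL = {
    "1/40":  "1/50",
    "1/50":  "1/50 or 1/100",   # ambiguous per the table - avoid locking this one
    "1/60":  "1/100",
    "1/80":  "1/100",
    "1/100": "1/100",
    "1/125": "1/125",
    "1/160": "1/160",
    "1/200": "1/200",
}

def actual_shutter(lcd_reading, iso, htp=False):
    if iso > (200 if htp else 100):
        return "1/33"                                   # Rule 1
    return LCD_TO_ACTUAL.get(lcd_reading, "unknown")    # Rule 2

print(actual_shutter("1/40", iso=100))   # -> 1/50, the reading we lock for a 1/50 exposure
print(actual_shutter("1/50", iso=400))   # -> 1/33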

 

There is some exposure-related speculation concerning bayonet adaptors that include a connection to the autofocus/auto-iris interface, but we have not chased down that rabbit hole ourselves yet. It seems that without a connected circuit to the lens, the camera's tiny brain assumes a value of f/2.8 or f/2.0 and keeps it there, which is a good enough starting point. In theory I suppose you could hack that method and gain some more manual control back, but we've been too busy to ponder it.

 

No one said this was easy. But it is hackable, to the extent that the camera's specific advantages can be used and controlled. I think it is best used as a spy cam and night-vision thing for night exteriors in downtown mixed lighting, and as a way to steal shots inside clubs and museums, etc. For day-to-day production I don't fully trust it not to overheat or act up, but if I were making a verite doc I'd give it serious consideration over any HD DVX-type unit, and I think it may also be better than my beloved S16 Aaton in a lot of ways.

 

Focus pulling won't be fun with Nikkors either, and the 7" 720-line monitor is not the world's best eyepiece, etc. There are clearly many, many issues that can be worked out in future camera systems, but again, this is an exciting format and a lot of fun to shoot with.


  • Premium Member

Further to earlier reply, here's the problem in the flesh:

 

post-29-1241455908.jpg

 

This is a screen capture of the monitor window in Premiere Pro 2. On the left is the DV offline, on the right is the original h.264.

 

This does at least prove that Premiere is capable of reading the entire dynamic range out of the original h.264 file, although it's not clear at what stage the offline was damaged and we don't know if the full range will survive a render out to file.

 

P


Jean: Thanks for posting. Do you have any screen grabs of the crushed-black 5D HD footage you can post? I know, the best way to see it would be on film, but obviously that is not an option. Thanks again.

 

 

 

Check this link for a better explanation of the crushed-black issue, with examples, waveforms, etc.:

 

http://cineform.blogspot.com/2009/01/full-...m-canon-5d.html

 

It looks like that, and any screen shots I could post are similar. On film, it looks like that, only slightly worse, due to the small amount of added contrast. Until we have proper HD footage to load into the Arri recorder, I'd reserve any judgement on what the 35mm looks like, really.

 

If you have further needs and a legitimate reason to see some grabs, go ahead and send me a PM and I'll try to help you out. This being a cinematography forum I'm hoping to keep the discussion here to on-set stuff for the most part.

 

Sadly, as we've all noticed, the job description of cameraman has now been changed to cameraman-plus-tell-the-editor-how-to-do-his-job-too, sometimes. I am of the opinion that the DIT should report to the editing department, not the cinematographer. However, I seem to be in the minority...


Thanks Phil, great work. I'm about to post something to some post-production user sites that specifically address workflow issues - this being a cinematography forum - but I am riveted by your posts... and want to hear more. Mind if I PM you? Where else do you stand by the virtual water cooler and dispense these pearls of wisdom?

 

I was thinking of trying to take the larger issue of feature-length workflow to an FCP user forum, or the cluttered and rumor-fraught cinema5D forum... but I'm wary of getting overwhelmed with bad intel, and then being forced to invade Iraq and waterboard my neighbors 88 times a month. :P


  • Premium Member
We did the 30p-to-24p conversion in the Cinema Tools portion of FCP, and it seemed okay. We projected less than 300 feet of film, though, so I can't say these tests were definitive.

 

What you need to do in testing frame rate conversion is shoot something with smooth uniform and moderately rapid motion. Some nicely done gear head pans, car by's, stuff like that. Then look at the output frame by frame.

 

The tests we did showed that the rate conversion consisted of passing three frames thru unmodified, then combining the next two frames in a blur-between. If you shot with an effective 180 degree shutter at 30 fps, what you get at 24 fps is the equivalent of three frames with a 144 degree shutter and one with a 432 degree shutter. If you consider the middle frame of the three and the blur-between to be "right", then the other two straight across frames are a little early and a little late, which gives the motion a sort of 3-2-ishness.
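
To make that cadence concrete, here's a toy sketch of the pattern as described above (my own reading of the output, not the actual code inside any of these tools):

import numpy as np

# For every five 30p source frames, pass three through untouched and average
# the last two into a single "blur-between", giving four 24p output frames.
# With a 180-degree shutter at 30 fps (1/60 sec exposure), each pass-through
# frame played at 24 fps corresponds to 360 * 24/60 = 144 degrees; the blend
# spans 1/30 + 1/60 = 1/20 sec, i.e. 360 * 24/20 = 432 degrees.
def convert_30p_to_24p(frames):
    out = []
    for i in range(0, len(frames) - 4, 5):
        a, b, c, d, e = frames[i:i + 5]
        out.extend([a, b, c])                             # three frames pass straight through
        blend = ((d.astype(np.uint16) + e.astype(np.uint16)) // 2).astype(np.uint8)
        out.append(blend)                                 # one 50/50 blur-between
    return out

src = [np.full((1080, 1920), g, dtype=np.uint8) for g in range(30)]  # one second of grey frames
print(len(convert_30p_to_24p(src)))   # -> 24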

 

Big talking head CU's work OK for mouth motion, but we did have some in our tests where a blur-between hit on an eye blink, and made a very obvious mess of the eyes.

 

All that being said, it's still massively stupid to have to work around the 30 fps problem. They need to make the camera do 24. If they don't, perhaps somebody will do it as an aftermarket modification.

 

 

 

-- J.S.


  • Premium Member

I've never seen Avid crush blacks except when it's been told to do so (often times inadvertently). On import settings, there is a choice for 601 or RGB levels. If you choose 601, Avid assumes black is 16 and white is 235, if you choose RGB, Avid will assume black is 0 and white is 255. There are similar settings for export. And it will depend on what format you choose to export to.

 

Because Avid is used extensively in broadcast, one thing it does very well is manage how super blacks and whites are handled. It may take some time to find the correct combination of settings, but it should be a relatively simple process.
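
As a rough illustration of what those two interpretations do to the same pixel values (my own sketch of the idea, not Avid's code):

import numpy as np

# "601" treats 16-235 as the legal range; "RGB" treats 0-255 as the legal range.
def interpret_as_601_levels(y):
    """Assume 16 is black and 235 is white, stretching to full range."""
    y = np.asarray(y, dtype=np.float32)
    return np.clip((y - 16.0) * 255.0 / 219.0, 0, 255).round().astype(np.uint8)

def interpret_as_rgb_levels(y):
    """Assume 0 is black and 255 is white, squeezing into 16-235 for video."""
    y = np.asarray(y, dtype=np.float32)
    return np.clip(y * 219.0 / 255.0 + 16.0, 0, 255).round().astype(np.uint8)

shadows = np.array([0, 4, 8, 12, 16, 20])   # full-range shadow values from the camera
print(interpret_as_601_levels(shadows))   # [ 0  0  0  0  0  5] - everything under 16 flattens to black
print(interpret_as_rgb_levels(shadows))   # [16 19 23 26 30 33] - the detail survives, just shifted up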


I've never seen Avid crush blacks except when it's been told to do so (often times inadvertently). On import settings, there is a choice for 601 or RGB levels. If you choose 601, Avid assumes black is 16 and white is 235, if you choose RGB, Avid will assume black is 0 and white is 255. There are similar settings for export. And it will depend on what format you choose to export to.

 

Because Avid is used extensively in broadcast, one thing it does very well is manage how super blacks and whites are handled. It may take some time to find the correct combination of settings, but it should be a relatively simple process.

 

I don't know Avid; I barely know FCP. I am a cameraman and have little experience with digital post or video of any kind. However, the article I linked to above discusses this issue, and it has to do with the way the Canon 5D codec is YUV, not RGB, perhaps. There also seem to be frame-rate issues, regarding 30p vs. 29.97, etc., that are not what they seem. However, regarding anything off a tripod, don't quote me. I want VERY much to use this camera on set and HAND OFF the compact flash cards to an expert. To me that is the job of a cameraman: to get great images and be able to briefly CONSULT with post people in layman's terms and trust them to do their job and not mine.

 

Again, the POST discussion starts with this, maybe:

 

http://cineform.blogspot.com/2009/01/full-...m-canon-5d.html

 

And I am trying to learn all this, so thanks to all for adding to the discussion.

 

brave new world... indeed.


Further to earlier reply, here's the problem in the flesh:

 

post-29-1241455908.jpg

 

This is a screen capture of the monitor window in Premiere Pro 2. On the left is the DV offline, on the right is the original h.264.

 

This does at least prove that Premiere is capable of reading the entire dynamic range out of the original h.264 file, although it's not clear at what stage the offline was damaged and we don't know if the full range will survive a render out to file.

 

P

 

 

I see a clear difference in the two... where are the grey hairs I see on the left? It's better than "horribly crushed blacks" (sounds like an industrial accident in antebellum days) but still there is lost info there.

 

But you seem to be onto something.


  • Premium Member
..., and it has to do with the way the Canon 5D codec is YUV, not RGB, perhaps.

 

Wow, that's unfortunate. All sensor chips are inherently RGB. Ideally, conversion to YUV would happen once, if at all, and only for TV use. Any idea which luminance equation/matrix is built into the camera?

 

 

 

 

-- J.S.


What you need to do in testing frame rate conversion is shoot something with smooth uniform and moderately rapid motion. Some nicely done gear head pans, car by's, stuff like that. Then look at the output frame by frame.

 

The tests we did showed that the rate conversion consisted of passing three frames thru unmodified, then combining the next two frames in a blur-between. If you shot with an effective 180 degree shutter at 30 fps, what you get at 24 fps is the equivalent of three frames with a 144 degree shutter and one with a 432 degree shutter. If you consider the middle frame of the three and the blur-between to be "right", then the other two straight across frames are a little early and a little late, which gives the motion a sort of 3-2-ishness.

 

Big talking head CU's work OK for mouth motion, but we did have some in our tests where a blur-between hit on an eye blink, and made a very obvious mess of the eyes.

 

All that being said, it's still massively stupid to have to work around the 30 fps problem. They need to make the camera do 24. If they don't, perhaps somebody will do it as an aftermarket modification.

 

 

 

-- J.S.

 

This is rapidly devolving into a post-production thread... but so be it.

 

What method did you use to convert from 30p to 24p? Ours is outlined here by "Denver," to whom I give all credit (and blame). It uses interpolation of frames in the Cinema Tools section of FCP and retains info rather than throwing it away, as many compression-based methods do.

 

http://www.cinema5d.com/viewtopic.php?f=13...p;sk=t&sd=a

 

The thread begins with an overview, and Denver's method is outlined farther down.


Wow, that's unfortunate. All sensor chips are inherently RGB. Ideally, conversion to YUV would happen once, if at all, and only for TV use. Any idea which luminance equation/matrix is built into the camera?

 

-- J.S.

 

 

re : Any idea which luminance equation/matrix is built into the camera?

A: No.

 

Read the links, and ask an expert, but yes, I guess Canon wanted to output "broadcast safe" color info and made sure that was baked in, somehow.

 

Someone remind me again why I became a cameraman and not an SMPTE engineer... my brain hurts.


Thanks to all who have posted... very helpful, and I'm thankful even though I complain sometimes.

 

To consolidate where this thread seems to be going -

 

Shooting with the Canon 5D Mk II is a bit of a pain, but it gets great images you can't get in low light with any other camera at the moment. The issues with manual control and 30p seem hackable by using manual-iris lenses and certain workarounds, and then converting the frame rate via the interpolation methods available in FCP, or other methods untested. No guarantees that it will be 100% artifact-free 24p, but it looks usable most of the time. More tests will tell more, but film-out is a good way to smooth over and hide flaws, and the 1/33rd and 1/50th of a sec shutter rates look acceptable and fairly pleasing to the eye. It's not excessively smeary or "wrong" to see the slow-shutter look of 1/33rd of a sec.

 

Outputting to 35mm smooths over some digital artifacts and adds slight contrast, but it shouldn't be attempted until one has a full understanding of what the camera is putting out as media, and how to devise a workflow that does two things:

 

1) converts from 30p to 24p, or some version thereof like 23.xxxx, etc., without excessive artifacts, or to 25p for EU film-out.

 

2) retains the best colors, contrast and blacks for the color-correction stage, so the film-out is as good as it can be.

 

Having only seen a very brief test, I still think the frame rate conversion is mostly solved. Anything more is icing on the cake. Normal stuff looks normal. Sound sync is still somewhat troublesome but will be figured out.

 

Retaining good colors seems possible, but we are starting with compressed QT files with broadcast-standard 8-bit color in full HD, so don't expect more than you can get from that as a starting point.

 

As for improving colors, now we are marching into uncharted territory, but some basic color correction should be possible. How far one can push that is not known.

 

If anyone is wondering about the images themselves and the form factor of working on set with the camera, I can speak at length about that from personal experience. In my opinion it will be a great doc camera that is going to allow some amazing footage to be captured in formerly inaccessible places with a low profile.

 

I am not a post person, but as a potential producer of films with the camera I am trying to learn more. Again, thanks to all who added to the discussion, and keep it coming.


  • Premium Member

This isn't baked into camera output, as evidenced by the fact that Premiere is capable of at least rendering it unadulterated to the screen.

 

I've tried a few other approaches involving various tools, but without success; I've got some time tomorrow and I'll look at it again.

 

P


  • Premium Member
What method did you use to convert from 30p to 24p?

 

We compared Twixtor and Digital Film Tree's in-house process. The results were pretty much the same, DFT's rendered about 25 - 30% faster. DFT also did some hand-tweaking of the start points to keep the blur-betweens from hitting in really bad places.

 

Eye blinks were a killer. If in the 30 fps source material you have an eye open, then one frame of it closed, and the next it's open again, there are no motion vectors to be found, because the eyelid moves too fast. A blur-between of frames 1 and 2, or 2 and 3, just looks like a double exposure. It's strange, but at 24 fps, you see that as it really is. It looks like a double exposure.

 

So, the way to do this would be to rough cut your sequence at 30 fps, then go back and add 15-frame handles to both ends of everything. Put it all thru the 30-to-24 conversion, then look at it. Tweak the ins and outs, and re-convert any shots that had blur-betweens land in a bad place. Drag the sound along as the other guy explained. Then fine cut the 24 fps version.
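
For the tweaking step, a quick sketch of how you could predict where the blur-betweens land for a given in-point (based on the three-plus-blend cadence I described earlier, not on DFT's or Twixtor's actual process):

# Map a 30p source frame index to its 24p output frame, and flag whether it
# ends up inside a blur-between. Shift a shot's in-point a frame or two if a
# blink or other critical moment maps onto a blended frame.
def map_30p_frame(i):
    group, pos = divmod(i, 5)
    if pos < 3:
        return group * 4 + pos, False   # passes through untouched
    return group * 4 + 3, True          # the last two frames of each group get blended

for src_frame in (0, 3, 4, 7, 9):
    out_frame, blended = map_30p_frame(src_frame)
    print(f"30p frame {src_frame:2d} -> 24p frame {out_frame:2d}, blended: {blended}")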

 

There has to be a crystal oscillator somewhere in that camera. Somebody will be able to find it and hack in a replacement that runs at 0.8 times the original frequency. Digital circuits have no problem at all running just a tad bit slower. The display may say 30, but you'd have a true 24 fps camera, and perfect motion rather than half assed fixes.

 

 

 

 

 

-- J.S.


This isn't baked into camera output, as evidenced by the fact that Premiere is capable of at least rendering it unadulterated to the screen.

 

I've tried a few other approaches involving various tools, but without success; I've got some time tomorrow and I'll look at it again.

 

P

 

That's good news, if by "this" you mean broadcast standards not being baked in. People have reported some sort of success using Color inside Final Cut Studio to get a better image from the H.264 files, but that is a means of taking the 8-bit color and setting it in a 10-bit space. Post colorists have told us face to face that "this won't help, the info isn't there to begin with," but I have seen it work in Photoshop by moving a compressed file like a JPEG into a higher bit depth and then having more "virtual" color space to manipulate. The bumblebee doesn't know it can't fly.

 

I've got at least two more Final Cut users working on this, but we're all working off the clock at this point... so I have to continually be thankful to anyone who posts good intel...

 

thanks


We compared Twixtor and Digital Film Tree's in-house process. The results were pretty much the same, DFT's rendered about 25 - 30% faster. DFT also did some hand-tweaking of the start points to keep the blur-betweens from hitting in really bad places.

 

Eye blinks were a killer. If in the 30 fps source material you have an eye open, then one frame of it closed, and the next it's open again, there are no motion vectors to be found, because the eyelid moves too fast. A blur-between of frames 1 and 2, or 2 and 3, just looks like a double exposure. It's strange, but at 24 fps, you see that as it really is. It looks like a double exposure.

 

So, the way to do this would be to rough cut your sequence at 30 fps, then go back and add 15-frame handles to both ends of everything. Put it all thru the 30-to-24 conversion, then look at it. Tweak the ins and outs, and re-convert any shots that had blur-betweens land in a bad place. Drag the sound along as the other guy explained. Then fine cut the 24 fps version.

 

 

That makes SOME sense to me. (Mongo only pawn in game of life.) That sounds like two methods that use interpolation, just like the Denver/Cinema Tools thing. Have I got that right? I understand the 15-frame handles; I was suggesting the same thing to our green, rushed editor and he said, "Sure, maybe next time. Right now, we gotta get to the lab in traffic!"

 

There has to be a crystal oscillator somewhere in that camera. Somebody will be able to find it and hack in a replacement that runs at 0.8 times the original frequency. Digital circuits have no problem at all running just a tad bit slower. The display may say 30, but you'd have a true 24 fps camera, and perfect motion rather than half assed fixes.

 

 

Yes, there is a sizable award already promised to the hacker who can make a 24/25p firmware update. And the rumor mill has it on the grapevine that the crew of IRON MAN 2 has an unpublished firmware update from Canon that won't be made public. What a great conspiracy theory that one is... they are in Morocco! But all that is hearsay, conjecture and unsupported baloney. I have, however, seen the raw code of a cracked Canon firmware - it's been published, and there are several lines that seem promising...

 

 

What I have in my hands and can see in front of me is what I am concerned with. I can share my experiences and that's all I'd be willing to stand by. This is all new to me. But it seems like it's going to work. I've seen it on the big screen. Thanks for the input... exciting stuff.


  • Premium Member

Rumour has it that Canon has a 24p firmware update for the thing which they're planning to release as soon as someone cracks it. I'm not sure exactly what they think they're gaining by this, but that's the reasonably-well-backed-up story.

 

As to the crystal swappage trick - well, it works on computer graphics cards! On a more serious note, let the record show that if anyone wants to give me a 5D they don't mind me possibly destroying, I am more than happy to patch its innards up to a logic analyser and start poking about. There are likely to be obscure additional problems related to I/O clocking for the CF cards, among other things. But I hesitate to do it if Canon are just sitting on a much cleaner firmware fix.

 

P


  • Premium Member

Jean,

 

Avid has sorted out RGB to YUV HD/SD (and vice versa) issues very well. If you were editing on Avid I'm sure a solution could be tracked down. And I believe it's as simple as choosing the proper import settings.

 

Just by way of education: digital images are stored between the values of 0 and 255 (assuming 8-bit color). RGB images use this entire range. YUV does too, but the range between 0 and 15 is called super black and between 236 and 255 super white. So in YUV land, 16 is considered nominal black and 235 nominal white.

 

Depending on your delivery, having values below 16 or above 235 may or may not cause a problem. For broadcast, it's a big no-no. Most cameras that record YUV record super-black and super-white info. But not all NLEs properly use this info. Some see YUV and just truncate everything below 16 to black and everything above 235 to white. Avid does not do this.

 

CineForm has had problems with YUV footage containing super blacks and whites. To accurately represent the values between 0 and 15 it sometimes resorts to using negative integers, per a discussion with David Newman of CineForm. It's supposed to work out in the wash, but it does not always.

 

It looks, from the stills you've posted, like the camera is recording YUV from 0 to 255, but the import process truncated everything below 16 and above 235. That's why the darkest blacks have lost their detail.
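
A toy demonstration of that failure mode, with made-up values:

import numpy as np

# Full-range (0-255) YUV from the camera pushed through an importer that
# assumes 16-235 and simply clips anything outside it.
shadow_ramp = np.arange(0, 32, dtype=np.uint8)    # 32 distinct near-black luma values
clipped = np.clip(shadow_ramp, 16, 235)           # what a truncating import leaves behind

print(len(np.unique(shadow_ramp)), "shadow levels in ->", len(np.unique(clipped)), "levels out")
# 32 levels in -> 16 levels out: everything below 16 collapses into one flat black,
# which is exactly the crushed-shadow look in the stills.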

 

HTH.

