Chris Nuzzaco

Basic Member
  • Posts

    25
  • Joined

  • Last visited

Profile Information

  • Occupation
    Cinematographer

Contact Methods

  • Website URL
    http://www.chrisnuzzaco.com
  1. I've colorized about 90% of the projects I've worked on as a DP. The big difference between myself and other DPs, however, is that instead of sitting next to the colorist... I am the colorist. My advice is to invest time into learning the various tools of the trade, and you'll have better leverage as a DP. As for the power struggle, it's up to the director. If you don't like that, then I suggest networking with directors who are really big into artistic delegation when it comes to things like lighting and color - these guys tend to be actor/performance-focused directors.
  2. Some people actually do just that. Not everyone has the money though, so it can be very unrealistic, and it would be of more interest to a DP than a director. If that's the case, I would request some stuff that's been shot and already polished in post to view. The issue with viewing Red proxies is that they haven't been corrected, and they have a generic viewing LUT (Redspace) that most likely doesn't show the look the DP intends the footage to have. You see where I'm going here? If you get handed .r3d files, you're entering a very strange forest at the moment if you want to see the quality they can have. I highly suggest requesting a 1080p demo of finished Red footage to view if you just wanna watch it and see if you're impressed by it. I understand it's 1/4 the resolution of Red, but it is the same amount of data to deal with, which is my point. The files actually take a few seconds to load, so no, I don't expect it to work like greased lightning at 4K. You can actually view the .r3d files in RedCine, so apparently there is a way to work natively with them; I guess I just wish that was available more readily in the other applications people use when dealing with the Red. Again, this is just a temporary hangup, and I know the support is coming. I just can't believe how "super beta" the workflow was when they released the camera.
  3. You know, this is something that was mentioned to me during my "Miami Mayhem Color Correction" (LOL). I was told most people have just color corrected in FCP via the proxies and then mastered from that. That notion kinda blew me away. I don't know how true it is, but man, it kinda sucks if people have to resort to that. As for just viewing the footage myself to check it, I have a big hangup with the whole Redspace thing being the default for proxies. I want to view the footage in a flat-looking, "raw" view. Redspace will apparently blow out highlights that still have detail in them. I understand why they made it that way (it kinda creates this "illusion" that Red has "overhead"), but to an experienced raw shooter like myself, it's kind of silly. Yes, it's good for a client to see a "nicer" image, but the DP's gotta see what they need to see too. I'm still awaiting the day I can edit and load in my own custom viewing LUTs. I don't recall this feature being implemented yet (but it IS available on my Andromeda mod :P).
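The custom viewing LUT idea above is simple in principle: a 1-D LUT is just a table of output values that linear pixel values get remapped through before display. This is a minimal sketch of the concept, not Red's or Andromeda's actual implementation; the function name and normalized float data are assumptions for illustration.

```python
import numpy as np

def apply_viewing_lut(linear, lut):
    """Map normalized linear pixel values in [0, 1] through a 1-D viewing LUT.

    `lut` is a 1-D array of output values sampled at evenly spaced input
    positions; np.interp handles the in-between interpolation.
    """
    xs = np.linspace(0.0, 1.0, len(lut))
    return np.interp(np.clip(linear, 0.0, 1.0), xs, lut)

# A flat "raw view" LUT is just identity; a contrasty viewing LUT
# (like the Redspace look complained about above) would not be.
identity_lut = np.linspace(0.0, 1.0, 256)
frame = np.array([0.0, 0.18, 0.5, 1.0])
flat_view = apply_viewing_lut(frame, identity_lut)
```

A contrasty client-friendly LUT and a flat DP-friendly LUT can then be swapped without touching the underlying raw data, which is the whole appeal of custom viewing LUTs.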
  4. Kinda, but not really. He wants to see what the Red can really do, and that means going beyond the lame QuickTime proxies and looking at the .r3d files in a superior format, which means some kind of conversion. It's the conversion process that partly killed me; the other aspects certainly didn't help either (the whole conform in RedCine just not working, period, for example). I don't see much use, for example, in watching underexposed footage that I can't somehow tweak and examine. I would love to play back the files at full quality, right away. I know that's hard and comes with certain technical complexities, but it isn't "impossible". My Andromeda DVX100 system, while lower res (1K), actually pumps out the same amount of data per second as the Red (it's fully uncompressed 10 bit data, and the storage is about 2GB a minute, just like Red), which means it should be equally tough for a computer to debayer and play back the file natively in QuickTime, at full quality. The kicker is this: I can. A user made a QuickTime component that does just that. The bigger issue, though, is how do you deal with gamma, sharpening, or color balance? Right now this Andromeda component does nothing: it's straight-up linear gamma (a dark-looking image on a 2.2 gamma monitor), the colors are locked off to whatever the camera captures, and it's totally unsharpened. It is a bit of a drawback, but at least I can quickly work with the files at full quality, and in any good color correction program I can get to work right away, with no more use of the rendering program. It would be great if Red had this option, in addition to the lower res proxies. The proxies absolutely have their uses in the post workflow; I just want this other option to be available as well. It sounds like it will happen sometime in the future now that Adobe is on the scene, but realistically, how long will it be before they release their solution? That's the million-dollar question.
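Why does straight linear gamma look dark on a 2.2 gamma monitor? Linear values have to be gamma-encoded before display, or the monitor's own 2.2 response crushes the midtones. A minimal sketch of that encoding step (a plain power function, not Red's or Andromeda's actual display transform):

```python
import numpy as np

def encode_gamma(linear, gamma=2.2):
    """Encode normalized linear light for display on a gamma-`gamma` monitor."""
    return np.clip(linear, 0.0, 1.0) ** (1.0 / gamma)

# Middle grey (18% linear) encodes to roughly 0.46. Displayed raw,
# unencoded 0.18 sits in the deep shadows on a 2.2 monitor, which is
# exactly why the untouched linear frames look so dark.
mid_grey_encoded = encode_gamma(np.array([0.18]))[0]
```

This is also why a "do-nothing" playback component is still usable for grading: the data is all there, it just needs the curve applied downstream.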
  5. Hey Guys, I recently flew down to Miami to colorize a film I DP'd with the Red camera back in August. It was a film shot mostly in the wilderness, using natural sunlight, underexposed, and I used some bounce cards whenever possible. I have to say, as much as I was impressed with how well the footage held up under some very brutal color correction work, the whole post workflow freaking SUCKS. I'm sorry, but it's true. We were "prescribed" a certain workflow that was regarded as being solid. I won't get into all the nitty gritty details, but essentially when I arrived, what I found was a disaster that ended up forcing me to stay in Florida an extra week to get the job done (not that staying in Miami is "bad" lol....). Why? Let's just say the game plan sorta failed in multiple ways. I'll be more clear: the OFFLINE editing of Red files in FCP seems to be fairly decent; it's the online editing that is the major pain, and also where most of the issues seem to lie. In a nutshell, we had to render out 1080 10 bit uncompressed 4:2:2 QuickTime files, which were quite large. That's to be expected, but what slayed me was the reason: I couldn't get the 10 bit HQ ProRes files (the QuickTime codec that everyone seems to swear by) that were rendered to work in After Effects at the proper bit depth (I suppose I now have beef with Apple?). Why? I have no clue. Same with RedCine basically going AWOL on us, so I had no way whatsoever to utilize the white balancing tools in RedCine, which I think are decent enough, and far better than trying to white balance via curves (I personally hate doing that). If it weren't for RedLine, and a homemade FCP to After Effects EDL script available online that *actually works*, we would have been dead in the water.
So in other words, Red made a camera that can put out a mean image (when you are mindful of certain things like using IR cut filters, etc...), but they kinda leave you dead in the water if you wanna actually see and work with that image at its full quality level, as opposed to the crappy proxies that apparently use Redspace. I don't wanna sound like I'm totally ragging on the Red One; I think the camera can do some great stuff, I just hated the post workflow. THIS is the reason why they persuaded Adobe to support the Red camera lineup, in my opinion. Yeah, it's kind of a rant, but can you blame me? I guess the silver lining is that I got to spend an extra week in Miami... Oh, and by the way, I highly recommend getting a MacPro station that is pretty loaded if you want to do some serious work on Red footage. It just comes with the territory, man.
  6. Actually it won't. The neat thing about this type of technology is actually DOF control. If you read up on it more, you'll find that you can actually stop the camera down when shooting in bright sunlight, then in post, narrow the DOF. Simply amazing stuff.
  7. Hey Guys. I've read a lot of discussion about the new big sensors from Red, especially that huge 28K monster and the medium format one. Most of the argument centered around lenses, and rightfully so. Especially focus pulling. Here's my theory: they might be laying the sensor groundwork for light field cinematography, a moving version of this stuff: http://www-graphics.stanford.edu/papers/lfcamera/ http://www.notcot.com/archives/2008/02/adobe_lightfiel.php This type of imaging DEMANDS larger sensors (read: higher resolution) to really work at a professional level that's actually useful. What I find so darn ironic about this, assuming it's one of their targets, is that all the focus pulling arguments people are laying out are more or less vaporized! Another thing I noticed was Adobe's development with the technology, and their newfound chummy devotion to supporting Red's cameras. Makes you wonder.
  8. The irony is that for you to have good 2K results using Bayer-filtered sensors, you actually DO need sensors around the 5-6K resolution. It's the whole interpolation of color channel resolution thing that causes a 5K image at 100% to not really display a true 5K worth of res. The 4K Red One is more like a solid 3K, but if you want it to be really nice in terms of color channel quality when down-resing, 5K and 6K are more appropriate. It does have drawbacks: the bigger the sensor, the harder it is to have super fast frame rates, etc... I just hope it has a global shutter this time around....
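The arithmetic behind the "4K is more like 3K" claim can be sketched quickly. On a Bayer sensor only half the photosites sample green (and a quarter each sample red and blue), so measured resolution after demosaicing typically lands around 70-80% of the photosite count. The 0.75 factor below is a common rule of thumb, not a Red spec:

```python
def bayer_effective_width(photosite_width, factor=0.75):
    """Rule-of-thumb effective luma resolution of a Bayer sensor.

    Demosaicing interpolates the missing color samples at each photosite,
    so real measured detail is roughly `factor` times the photosite count.
    """
    return photosite_width * factor

# A 4096-wide Bayer sensor delivers roughly 3K of real detail...
effective_4k = bayer_effective_width(4096)      # 3072.0
# ...and you need roughly 5.5K of photosites for a true 4K deliverable,
# matching the post's point that 5-6K sensors suit a 4K/2K finish.
needed_for_true_4k = 4096 / 0.75
```

The same logic scales down: a clean 2K master is comfortable from 3K+ of Bayer photosites, which is why downsampling Red footage looks so solid.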
  9. For those interested in seeing just how flexible the image is without downloading that huge zip folder: Linear (unsharpened): http://www.chrisnuzzaco.com/special_uploads/linear.tif LOG/ "Viper" Like (unsharpened): http://www.chrisnuzzaco.com/special_uploads/shadowlift.tif
  10. Like many other people here... I can't figure out how to edit my posts! LOL Here is the image file sequence (source footage for the clips). It's big and beefy... 463MB: http://www.chrisnuzzaco.com/special_uploads/filter_test.zip
  11. That's interesting... Lately I've seen lots of "less than stellar" posts at Red addressing all kinds of issues, like image noise when shooting under tungsten, etc. The original camera's lens mount had issues that were posted prior to and after the new mount. Granted, a lot of "issue" posts are more related to operation and lack of experience working with RAW cameras, but I've seen plenty of posts regarding QC issues like knobs, connectors, etc. that haven't been deleted. I did once get a bit harassed by a moderator in the Off Topic forum for posting some of my Andromeda research there, which many found interesting and exciting because it showed just how powerful RAW data is, even from a DVX100 (FYI, you can check out the latest improvements at the Reel Stream Forum and here under the DVX forum). Supposedly some people wanted to keep the Off Topic forum "Red Only", but if you have ever been to that forum, you know as well as I do that it's far from that. So I suspect I was being a little targeted, but I was happy to see several members post in my defense.
  12. http://www.chrisnuzzaco.com/special_uploads/filtertest.tif That is a cropped screen shot from AE. I'm not sure how much information gets captured during a screen shot though, so it might not be a full 16 bit file, but it's much nicer than the QuickTime I compressed. Remember to view with a proper gamma of 2.2; your browser might display it wrong. This is an 8 bit Animation file, uncompressed. Remember... it's 8 bit, and I shot in 10 bit, so if you try to mess with this file you are missing a full 2 bits of data (that's a lot of extra data), and this has already been tweaked by me anyway. About 220 MB: http://www.chrisnuzzaco.com/special_uploads/filter_test.mov For those wanting to give editing the footage a whack... Uncompressed 16 bit tiff image file sequence. LINK UPLOADING These frames were captured linearly at 10 bit and rendered linearly into a 16 bit space, so the images will look dark on your average 2.2 gamma monitor, but all the data is there. It was also rendered without sharpening. I chose to use more advanced sharpening methods in After Effects; Sculptor HD's sharpening abilities are a bit rudimentary, and they tend to accent signal noise. These frames have no lateral CA compensation applied either. A little bit of information about the shot: this was shot using a special manual white balance setting I created that forces the blue and red channels to equal the green channel's gain. Basically it's a "uni-gain" setting. This effectively turns your camera into a Viper FilmStream-like system, where optical white balance brings the best results. The camera is natively balanced for daylight, so I had to use an 80A filter on top of my 90ccM filter. If you load the frame into Photoshop and isolate each channel, you will see that all three channels have the exact same amount of noise, which is very low; this is due to the optical white balance. The shot was rendered unsharpened.
In After Effects I broke the shot up into individual channels and compensated for the lens's lateral CA, then I applied an unsharp mask to the footage, making sure not to accentuate signal noise. Sharpening is much easier when you have footage with a very low noise floor. It looks even better when you see it play :)
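A rough sketch of the unsharp-mask-without-amplifying-noise idea described above. This is not the actual After Effects settings used; a 3x3 box blur stands in for a proper Gaussian, and the parameter values are illustrative. The threshold is the key piece: differences too small to be real edges get zeroed, so fine signal noise is left alone.

```python
import numpy as np

def box_blur3(ch):
    """3x3 mean blur with edge padding (pure-NumPy stand-in for a Gaussian)."""
    p = np.pad(ch, 1, mode='edge')
    h, w = ch.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def unsharp_mask(channel, amount=0.6, threshold=0.01):
    """Thresholded unsharp mask on one normalized float channel.

    Run per channel, after each channel has been scaled to align
    lateral CA, as in the workflow described in the post.
    """
    detail = channel - box_blur3(channel)
    detail = np.where(np.abs(detail) < threshold, 0.0, detail)  # skip near-noise detail
    return np.clip(channel + amount * detail, 0.0, 1.0)

# A hard step edge gets extra local contrast; flat areas pass through untouched.
step = np.full((8, 8), 0.2)
step[:, 4:] = 0.8
sharpened = unsharp_mask(step)
```

The post's observation holds in the math: the lower the noise floor, the lower the threshold can go, and the more real detail survives the sharpening pass.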
  13. Wow, that's really cool, but I think that's more of a research camera than a cinema camera. I used to shoot high speed video for an equine research hospital; they really could have used that little bugger. Thanks for the link.
  14. David, I'm well aware of this :) Thanks for diving into it more; I just skimmed over my basic approach. As for noise varying shot to shot, I think that's more an issue of always exposing for the highlight but not bothering to check the shot's contrast, and thus you end up changing the ISO in post (for those wondering, I have one "pet" LUT that I use for almost everything at the moment, so I don't change LUTs during a scene unless there is some kind of special shot). For example... When I light a scene, I create a contrast game plan of sorts: I figure out how far under the highlight I want things in a scene to always be, and then I light from there. I might want to always keep one particular actor's face 2 stops under the brightest highlight, the walls at a constant 3 stops under, etc. I do make exceptions, though; specular highlights are usually left as is. Funny you mention the clipped headlights... Headlights are very similar to specular highlights if you can't mess around with them at all. The contrast, especially at night, can just be really out there. Sometimes you have to let stuff blow out in order to get the rest of the shot to show up due to sheer contrast. Claudio could have exposed for the headlights, but man, everything else in the scene would have looked like junk. Better to just let them blow out, and if they have the post funds, mess with the headlights in post; at least that's what I would have done (unless I could somehow dim or put ND on them). Thanks,
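The "contrast game plan" above is just stop arithmetic: each stop halves the light, so holding a subject N stops under the key highlight pins its linear value at highlight divided by 2 to the N. A tiny sketch with the highlight normalized to 1.0 (the function name and normalization are illustrative, not a tool the post mentions):

```python
def stops_under(highlight_linear, stops):
    """Linear sensor value of a subject held `stops` below the key highlight.

    Each stop halves the light, so N stops under is highlight / 2**N.
    """
    return highlight_linear / (2.0 ** stops)

# The example plan from the post, brightest highlight normalized to 1.0:
face = stops_under(1.0, 2)    # actor's face held 2 stops under -> 0.25
walls = stops_under(1.0, 3)   # walls held 3 stops under        -> 0.125
```

Lighting to fixed stop offsets like this is what keeps the post-side ISO/LUT constant from shot to shot, which is the noise-consistency point being made.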
  15. Harsh highlight clipping is the big issue so many people do not seem to understand about digital cinema. I've been shooting RAW HD for almost a year now, and one of the big things I've learned about good RAW HD - well, any HD really - is figuring out what the ultimate contrast is going to be AFTER you color correct the image. Ultimately, in post, you will probably need to kick the footage's shoulder up a bit in order to create the necessary "roll off" into the highlights. Because of this, you need to be very careful about where you place your brightest highlight, exposure-wise. I totally understand exposing to the right, but I also understand when I've gone too far, and how far is too far all depends on the final image and the subject matter you are shooting. Another often confusing thing about shooting RAW HD in particular is the fact that you basically have two different responses for the camera. One is linear, which provides the max amount of data you can capture, and the other is your footage's ultimate "look" and the subsequent curve needed to create it. You can certainly shoot in a manner that uses the camera's full linear range and create contrast in post, but odds are you will need to do more sophisticated work to pull it off if you want neither clipped highlights nor to lose all of your shadow detail while still having a nice bold black. Just because you can do that in post doesn't mean you should, especially if you have a limited budget, in which case you should really just light the shot in a manner that looks good once the final "look" or curve is applied to the footage - and I can guarantee the image will be much cleaner as well. Even with RAW HD, the more you jerk footage around, the worse it can end up looking. Make sense? As for metering, I base my ISO ratings off clipping in the green channel. The green channel is the first to blow out most of the time, so that's why I base the rating off of that channel.
To expose a scene, I simply find the brightest object in the scene and then use the f/stop (or t/stop) that my meter gives me. I light and meter from the highlights down. Easy as pie! Hope that helps.
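The highlight "roll off" described above can be sketched as a simple shoulder curve: linear below a knee point, then an asymptotic compression toward white instead of a hard clip. This is a minimal illustration of the idea, with a made-up knee value, not the actual curve used in the grade:

```python
import numpy as np

def shoulder_rolloff(linear, knee=0.7):
    """Soft shoulder: pass-through below `knee`, exponential roll-off above.

    Values above the knee compress asymptotically toward 1.0 rather than
    clipping hard, which is the highlight "roll off" a grading curve adds.
    """
    x = np.asarray(linear, dtype=float)
    span = 1.0 - knee
    over = np.clip(x - knee, 0.0, None)
    return np.where(x <= knee, x, knee + span * (1.0 - np.exp(-over / span)))

# Midtones pass through untouched; bright highlights ease into white
# instead of slamming against it, so near-clipped detail survives.
graded = shoulder_rolloff(np.array([0.5, 0.7, 1.0, 2.0]))
```

This also shows why highlight placement at the shoot matters: the further a value sits above the knee, the more the shoulder has to squeeze it, so an overexposed highlight lands flat and lifeless even when it technically isn't clipped.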