David Mullen ASC

Everything posted by David Mullen ASC

  1. You’d want the look of warm direct sun and cool soft skylight — and the sun gets warmer and warmer as it goes down, and also dimmer relative to the cold skylight. There are many ways to get that effect, so there’s no right or wrong way. You can use a tungsten lamp for the sun and a daylight lamp for the soft skylight and set the color temp of the camera halfway between, but you can also tweak that further with gels. Certainly you could leave the tungsten sun ungelled and keep raising the camera’s color temperature setting to warm it up, but then you’d have to keep gelling your skylight lighting bluer to compensate. Which mix of gels and settings you use just depends on the situation, like the view out the window.
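A quick sketch of that "halfway" white balance, worked in mired space (the scale that gel shifts are roughly linear in). The 3200K tungsten and 5600K daylight values are illustrative assumptions, not from the post:

```python
def kelvin_to_mired(k):
    """Mired = 1,000,000 / Kelvin; color shifts from gels are roughly linear in mireds."""
    return 1_000_000 / k

def mired_to_kelvin(m):
    return 1_000_000 / m

tungsten = 3200   # assumed warm "sun" source, Kelvin
daylight = 5600   # assumed cool skylight source, Kelvin

# A perceptually even halfway point is the mired-space midpoint,
# not the Kelvin average (which would be 4400K).
mid_mired = (kelvin_to_mired(tungsten) + kelvin_to_mired(daylight)) / 2
print(round(mired_to_kelvin(mid_mired)))  # ~4073 K
```

Setting the camera near that midpoint leaves the tungsten source reading warm and the daylight source reading cool by roughly equal amounts.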
  2. Equivalent to shooting on a 50mm in Super-35 so not wide-angle enough in Super-16 for that look you are talking about.
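The format equivalence above can be checked with a simple width-ratio calculation; the gate widths here are approximate, commonly cited figures, not values from the post:

```python
S35_WIDTH = 24.9   # mm, approximate Super-35 gate width (assumption)
S16_WIDTH = 12.52  # mm, approximate Super-16 gate width (assumption)

def s16_equivalent(focal_s35_mm):
    """Focal length on Super-16 giving roughly the same horizontal
    field of view as the given focal length on Super-35."""
    return focal_s35_mm * S16_WIDTH / S35_WIDTH

print(round(s16_equivalent(50), 1))  # ~25.1 mm
```

So a lens that frames like a 50mm on Super-35 behaves like a roughly 25mm on Super-16 — a normal field of view, not a wide-angle one.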
  3. Most cinematographers aim to get it right in camera and dailies, otherwise we’d have to put up with the director, producer, and studio execs complaining about the look being wrong or it not cutting together visually. We can’t wait for final color-correction months later to “get it right” because we’d have probably been fired long before that.
  4. First of all, keep in mind that your initial resolution is 4320 x 3240 -- the resizing to take out the squeeze doesn't add resolution. So in theory, if your image has a 2X squeeze and you want to create a 2:1 image, then your max resolution comes from a square area of the sensor, so the max resolution from your recording is 3240 x 3240. Now how you get to an unsqueezed 2:1 frame from that sensor area is up to you; it depends on your delivery format: 4K DCP with 2:1 inside 1.85 (black borders top & bottom)? Inside 2.39 (black borders on each side)? 16x9 UHD video (3840 x 2160) with a 2:1 letterbox? Whichever you choose, your resolution limit is 3240 x 3240. So I guess you would start by cropping and unsqueezing to 6480 x 3240. But then whether you master for a theatrical 4K DCP first, then 3.8K UHD from that, depends on your delivery requirements. Cinema 4K DCP containers (aspect ratio / resolution):
     Flat (1.85) / 3996 x 2160
     Scope (2.39) / 4096 x 1716
     Full Container (1.90) / 4096 x 2160
     I assume "full container" is used for digital IMAX releases, but I'm not sure you'd be guaranteed of that, so I'd probably work within the 1.85 or 2.39 containers.
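The crop-and-desqueeze arithmetic in that post can be sketched as a small function — this just formalizes the same math (squeezed width needed = height × target AR ÷ squeeze), it isn't from any mastering tool:

```python
def max_unsqueezed_frame(sensor_w, sensor_h, squeeze, target_ar):
    """Largest crop of an anamorphic recording that yields target_ar after desqueeze.

    Returns (crop_w, crop_h, out_w, out_h): the crop in recorded (squeezed)
    pixels, then the desqueezed output dimensions.
    """
    # Squeezed width needed for the target aspect ratio, limited by the sensor:
    crop_w = min(sensor_w, int(sensor_h * target_ar / squeeze))
    # Height limited by what the full sensor width can supply at that ratio:
    crop_h = min(sensor_h, int(sensor_w * squeeze / target_ar))
    return crop_w, crop_h, int(crop_w * squeeze), crop_h

# 4320 x 3240 recording, 2X squeeze, 2:1 target:
print(max_unsqueezed_frame(4320, 3240, 2.0, 2.0))  # (3240, 3240, 6480, 3240)
```

For the 2X-squeeze, 2:1 case, the crop is the square 3240 x 3240 area described above, unsqueezing to 6480 x 3240.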
  5. Eastmancolor print stock first appeared in 1950, Vision 2383 print stock in 1999, so surely no one really believes that after 49 years, Kodak forgot how to get blues to reproduce accurately without a cyan bias.
  6. Blu-rays of older movies come from all sorts of sources and are post-processed in different ways -- it's mainly about budget. Look at something like "Out of Africa" for example, mainly transferred from a color-timed IP on a telecine to HD and SD, but then in the late 2000s, Universal decided they should remaster it in 4K as a form of archiving it, so they scanned the original negative on an Arriscanner at 6K, then downsampled it to 4K and color-corrected that. But I'm sure the colorist looked at previous home video transfers OK'd by Sydney Pollack (and hopefully David Watkin) and maybe even screened a print, but that's not standard operating procedure. Some movies have more budget allotted to digital remastering than others. Look at the "Star Trek" movies -- for some reason "Star Trek 2" had two digital remasters from scans of the original negative done in the 2000s, whereas all the others just had an HD telecine from an IP until very recently. One common mistake I see on transfers of old movies -- where clearly they haven't watched a print or even watched the movie with the sound on -- is transferring day-for-night scenes as if they were day scenes.
  7. I think, to some extent, you have to give up on the notion of perfect recreation of past processes because you are comparing apples to oranges -- digital to film, additive color to subtractive color, digital projection to print projection, etc. Even if you completely stayed out of the digital realm, you can't make new prints of old negatives that match exactly to the print stocks of the past, especially not to Technicolor dye transfer printing. You can't make new prints of new negatives either that match old negatives and old print stocks. And even if you had old print stock in storage, you couldn't make a new print of an old negative that matched exactly because the old negative has aged. And you couldn't use that old print stock on new negative to match the old look either because that's a new negative. And I'm just talking about staying completely out of the digital realm. Now throw in digital and the variations increase exponentially! Digital to P3 projection, with or without a 2383 emulation, film to digital to film, film to digital theatrical, to streaming, to broadcast, UHD, HDR, etc. So all you can do in the end is eyeball things. And if it's not a look that can be seen by human eyes, it's not particularly relevant anyway. Sure, it would be nice to have plug-ins in D.I. theaters that have labels like "3-strip Technicolor to dye transfer print", "5254 to dye transfer print", etc. but I'm sure in the end, half the filmmakers would say it doesn't look right to their eyes based on personal memory. We're creating visual art here, not doing a science experiment. Evoking old processes for a look is fine, but it's again about triggering an emotional response in the viewer, not putting up comparison charts and running side-by-side projectors, etc. You have to understand that for the most part, Kodak engineers weren't interested in creating looks, that was up to the filmmakers.
They were interested in accuracy and consistency, and even more so as you went past the camera negative into the duplication and print stocks. And Fuji and Agfa followed Kodak's lead for the most part. Most studio print releases in the 1990s and 2000s mixed up Kodak, Fuji, and Agfa print stocks, so they tried to be similar. What happened with Vision 2383 print stock in 1999 was that Kodak was trying to improve the stock technically -- sharper, better blacks, etc. but their first version was a bit on the contrasty side (in their attempt to improve blacks) -- and thus more saturated, because sharpness / saturation / contrast are tied together -- so over the course of a year or two, they pulled back on the contrast and what we ended up with wasn't too far off from the previous stock. But in terms of color reproduction, Kodak engineers have always mainly aimed for accuracy, not for creating color biases. Of course, that's based on engineers looking at test charts and faces, so there is always the human factor. Filmmakers, in the meanwhile, are always trying to create a distinct look -- some doing it within the system of Kodak negative and print stocks under normal processing, so they used odd lenses or filters or did it with lighting, etc. Some added variations to the system like push and pull processing, flashing, skip-bleach printing. By the late 90s, Kodak, Fuji, and Agfa were starting to put out special-look stocks, mainly ones with lower contrast, though Fuji had some higher-contrast negative stocks. Kodak and Fuji also made a higher-contrast print stock. All of that went out the window by the mid-2000s as digital color-correction took over, all optimized to use laser recorders sending digital files to intermediate dupe stock following LAD specifications. So lots of variations introduced by digital color-correction, but conversely more standardization / fewer options on the printing end. And then digital projection killed most of the use of film prints anyway.
  8. A home video transfer from a film element would be color-corrected in Rec.709. If both theatrical and home video masters were needed, probably in P3 with the film print LUT then a color trim pass for Rec.709 for broadcast/home video. The whole orange-teal trend is a CREATIVE CHOICE, it's not built into any system... and if it were, which it isn't, it would easily be removed if not wanted, that's the whole point of color-correction. No one spends over $100,000 on color-correcting a feature film and says "if only I could have gotten that teal bias out of my blues! If only there was some way to turn a knob and shift the blue channel on the green-magenta axis... hopefully someone will invent such a device someday."
  9. I don't think you understand the color-correction process -- you can make a shade of blue any color you want. I just spent an afternoon at Fotokem with the ASC Master Class in a D.I. suite looking at MacBeth Charts with a 2383 LUT on the projector, of film and digital test footage. There was no teal bias to the blue patch of the color chart. Teal was teal, blue was blue, purple was purple...
  10. There is no teal-orange look built into the film print emulation LUT — you can color-correct the movie in any direction you want, you could turn it purple or b&w if you wanted to. Shifting your blues to cyan is a purely creative decision. Neutral skin tone is mostly a stylistic choice that was more common in the past; in fact, it was practically mandated in the old studio era — you were limited in how often you could deviate from perfect flesh tones because that was considered the goal of proper color photography, and color biases were only allowed for select sequences. Look at movies set in the age of candlelight and firelight in the 1940s through mid-1960s — faces were rarely lit orange. Today it’s almost the opposite, we rarely use a neutral key light on a face. The point of color-correction is not to technically emulate a print stock, it’s to reproduce the colors as the filmmakers intended. So in the case of an older movie, a starting point would be to first view a print or color-timed intermediate dupe element if possible to see how it looked originally. You’re over-thinking the importance of the print stock in terms of emulating it exactly. I mean, what if they just wanted to strike a new print off of a negative from a 1980s movie? The original print stock doesn’t exist any more. So does that mean that old movies should never have new prints made? I can’t imagine Tarantino not allowing new prints to be shown of “Pulp Fiction” because the original print stock doesn’t exist now.
  11. Film print emulations are not used to create a “film look” or a “print look” — they are used in a D.I. when you want to create one master that works for both a DCP and a film-out by limiting corrections to the color space that a film print can contain. Contrast can be set to any creative look you want even if using the film print emulation.
  12. The 80A filter… but you lose two stops of light with it, which is why no one shoots daylight-balanced film and uses a blue filter when shooting under tungsten to get a neutral image — they are more likely to use daylight film in that scenario because they want more orange out of tungsten or firelight. I think the only case where the two-stop loss of the 80A would be acceptable is if you are shooting day-for-night on daylight-balanced film and want a blue cast and have a lot of exposure from sunlight to cut down anyway.
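The exposure penalty above is just filter-factor arithmetic — an 80A is commonly rated at a factor of 4 (the two stops mentioned in the post); the 250-speed daylight stock here is an illustrative example, not from the post:

```python
import math

def stops_lost(filter_factor):
    """Stops of light lost to a filter: each stop is a factor of 2."""
    return math.log2(filter_factor)

def effective_ei(rated_ei, filter_factor):
    """Exposure index to rate the stock at once the filter is in place."""
    return rated_ei / filter_factor

print(stops_lost(4))          # 2.0 stops for an 80A (assumed factor of 4)
print(effective_ei(250, 4))   # 62.5 -> rate a hypothetical 250D stock at ~64
```

Losing two stops means quartering the effective film speed, which is why the correction is rarely worth it outside the day-for-night case described above.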
  13. "Pulp Fiction" was shot in 4-perf 35mm 2X anamorphic for contact printing through release prints. You may just be responding to the harder, more old-fashioned lighting, or maybe the video transfer is not as good, but certainly in a movie theater, "Pulp Fiction" had a somewhat cleaner, sharper look on the big screen than "Lord of the Rings" did. Also, at first only parts of "Lord of the Rings" went through a D.I., but over the years the entire movie has, so what you are seeing comes from a scan of the original negative, whereas likely any home video transfers of "Pulp Fiction" are still using an interpositive dupe run through an HD telecine (I assume your frame grabs are from HD Blu-rays.) Also, "Lord of the Rings" was mostly shot on EXR 200T, some day work on 100T, so it's not as fast a stock as if it were all shot on 500T stock. And being spherical, there would be fewer focus issues than with anamorphic.
  14. The regular Alexa and the Alexa LF have the same sensor so the same-sized photosites, same sensitivity - the LF is just two of the Alexa sensors stitched together. The difference is that when presented on the same-sized screen, the LF image is enlarged less so the noise is physically smaller, which does allow you to get away with a notch higher ISO if desired. One could also argue that having less visible noise causes a small increase in useable dynamic range at the low end.
  15. Nothing other than a full-frame lens covers full-frame (traditionally 36mm x 24mm). I assume by "normal" you mean Standard/Super 35mm cine format? A T-stop is just an f-stop mark that has been moved to compensate for the transmission quality of the lens to make it more accurate in terms of exposure.
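The T-stop definition above has a simple formula behind it: since exposure scales with transmission and one stop is a factor of 2 in light, the f-number is divided by the square root of the transmittance. The f/2.0 lens with 80% transmission below is a hypothetical example, not from the post:

```python
import math

def t_stop(f_stop, transmittance):
    """T-stop: the f-stop adjusted for the lens's actual light transmission.

    One stop = a factor of 2 in light, and the f-number scale moves by
    sqrt(2) per stop, so the f-number is divided by sqrt(transmittance).
    """
    return f_stop / math.sqrt(transmittance)

# Hypothetical lens: geometric f/2.0 but only ~80% transmission
print(round(t_stop(2.0, 0.80), 2))  # ~2.24
```

So that hypothetical lens would be marked roughly T2.2 — the mark is simply shifted so exposure readings stay accurate.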
  16. “Eyes Wide Shut” was pushed two stops. Technically you get increased contrast and grain, but your blacks get worse, not better, because of an increase in base fog density. However, with digital color-correction you can fix the black level. If printing film, you can help the blacks by printing down a denser-than-normal negative. “Eyes Wide Shut” pushed 500T by two stops but rated the film at 1600 ASA instead of 2000 ASA, giving themselves a little wiggle room for printing by having a bit more density.
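The EI arithmetic in that post works out as follows — each pushed stop doubles the nominal exposure index, and rating below the pushed EI buys back some negative density (this is just the stop math, not a lab recommendation):

```python
import math

def pushed_ei(base_ei, stops):
    """Nominal exposure index after pushing development by a number of stops."""
    return base_ei * 2 ** stops

def density_margin_stops(nominal_ei, rated_ei):
    """Extra exposure (in stops) gained by rating the stock below the pushed EI."""
    return math.log2(nominal_ei / rated_ei)

print(pushed_ei(500, 2))                           # 2000
print(round(density_margin_stops(2000, 1600), 2))  # ~0.32 stop of extra density
```

So rating 500T at 1600 rather than the nominal 2000 gives roughly a third of a stop of extra density on the negative — the printing "wiggle room" described above.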
  17. Think carefully: how can your camera capture a new image 24 times a second but have an exposure time per capture longer than 1/24th of a second? That effect in “Chungking Express” comes from shooting at 6 or 8 fps with a 180 degree shutter angle, so that the per-frame blur is from 1/12 or 1/16 exposure times, plus the steppiness from having fewer motion samples per second. They shot film and step-printed it, but in digital you just play it back at the shooting speed.
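The rotary-shutter relationship behind those numbers is exposure time = (shutter angle / 360) ÷ fps; this sketch just evaluates it for the frame rates mentioned above:

```python
def exposure_time(fps, shutter_angle_deg=180):
    """Per-frame exposure time in seconds for a rotary shutter."""
    return (shutter_angle_deg / 360.0) / fps

for fps in (24, 8, 6):
    t = exposure_time(fps)
    print(f"{fps} fps -> 1/{round(1 / t)} s")  # 24->1/48, 8->1/16, 6->1/12
```

At 24 fps and 180 degrees the exposure is 1/48 s — always shorter than 1/24 — while dropping to 6 or 8 fps stretches it to 1/12 or 1/16 s, producing the long-blur, steppy look described.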
  18. I think the problem from not being level was uncorrectable -- like I said, it's like a barrel distortion effect but each third is rectilinear so there has to be a visual bend/break to accommodate the natural curvature from a very wide angle lens image tilting. And while one could rotate the side camera/movements to correct the horizon line, then the details that cross through the panel lines would no longer line up below or above the horizon. Look at these Widelux photos shot by Jeff Bridges and how the horizon bends when the camera isn't level: https://walldone.com/magazine/the-dude-has-got-a-widelux
  19. Sure, what works for a wide shot in a room might not work for a close-up, and you can't get the close-up lighting in the wide shot. On the other hand, wide shots also need to be lit well and work on their own too, they don't just exist as a vehicle for getting into the close-up. Often I've had a hot slash of light on some shiny furniture that puts a small reflective kick of light onto the actor's face that is only visible in a close-up, so it's not distracting in the wide... but usually I flag it off in the close-up. Silverware on a table is a classic example, you get in close with the camera and then you notice a spot of light on the cheek or chin, etc. from a reflection. I think every cinematographer would love to light a wonderful wide master and have that lighting look great on the close-ups as well with no changes. Things would go a lot faster on set and there would be no concern with matching. Sometimes you manage it or get pretty close. David Watkin was notorious for beautifully lighting a room and not changing things for the coverage unless absolutely necessary. It's not a bad approach by any means, it just means forgoing a certain level of glamorization of the actors.
  20. The effect is similar to using a nearly fish-eye lens with barrel distortion -- as you tilt up or down, the horizon bends. It's just in the case of Cinerama, it has zero barrel distortion... but on the flip side, the bending has no curvature, it just bends at the panel joins. It's almost like an analog barrel distortion that has been digitized into three discrete positions. You see in this shot below that the camera is slightly looking down at Debbie Reynolds, but because there is no straight horizon, the tilting of the side panels is not objectionable. In motion, like with aerials, tilting up or down was like a smile switching to a frown or the reverse, if you know what I mean.
  21. First see if the filter is labeled "Schneider-Kreuznach" or just "Schneider" and separate them into the two groups if you see both labels. After the first year or so of their Hollywood Black Magic filters being manufactured, the design was adjusted and the filters relabeled, but I don't remember if "Schneider-Kreuznach" was the first or the second generation. So perhaps the same thing happened with Classic Softs.
  22. You don't want to get too scientific about lighting... if you want perfect continuity, you don't change the lighting between the master and the close-up. If you change the lighting, you've already admitted that it's OK to change the lighting because the change/improvement is more important than the perfect matching. So now you've entered the vaguer world of "how much can I change the lighting before it becomes too obvious for most viewers?" That's a feeling. It also depends on the shot size changes. If your wide master is waist-up on the actor and your close-up is chest-up, along the same axis, no shift in camera angle, then the difference in shot sizes is not extreme enough to allow much change to the lighting either. But if you are going from head-to-toe to chest-up in size, the change is big enough to allow more of a change in lighting. And if you absolutely know that there has to be a cut to some other person or to a reverse angle in between the two sizes on the actor, you can get away with more of a change. Sure, you can measure the key to fill ratio on every set-up and match it exactly if you want. But often matching is more done by feeling. If in your close-up, you softened the key and wrapped it more around the face, perhaps you don't want the fill to be exactly the same level as in the wide because the key is covering more of the face now -- it looks better to reduce the fill. Or conversely, maybe you didn't want to use any fill in the wide shot for a higher contrast effect with the key coming from one side... but in the close-up you decide you need to add a little light into the shadows to see the other eye. Or maybe you were on a hazed set and the actor looked lower in contrast in the wide shot due to haze between them and the lens, but when the camera is closer, you get more contrast on the face and decide you need a little more ambient fill.
  23. I don't think there are simulations for obsolete film stocks, especially one that disappeared 60 years ago from the market -- you'd have to start with any current color negative simulation software. Yes, that V-shape was a common problem with Cinerama since it used three lenses with three vanishing points -- if the camera wasn't pointed straight out but instead tilted up or down, the two side panels had a different angle to them than the center one.