
David Mullen ASC

Everything posted by David Mullen ASC

  1. Yes and no, sort of depends on your sensitivity about such things. You can match them to some degree, but fundamentally you cannot get more saturation and more brightness with subtractive color because more saturation means more dye, which cuts light, whereas with additive color you can increase both saturation and brightness.
  2. Now that we have laser projection with better contrast and deeper blacks, I suppose one could map the color gamut of a dye transfer print and try to create a LUT to emulate it, but I suspect it still won't feel the same as a projected dye transfer print due to the difference between subtractive and additive color.
  3. As far as creating positive b&w matrices from a digital source, that's not hard IF the b&w matrix stock existed, which it doesn't. After the demise of 3-strip cameras, Technicolor had to create the b&w matrices from a color negative anyway, so one could laser record a color negative from the digital master and then make the b&w matrices from that negative. I suppose in theory it would be possible to laser record color separations directly to the b&w matrix stock, I don't know, but one could definitely laser record to b&w fine-grain film as negative images and then optically make the b&w positive matrices from that.
  4. The death of Technicolor dye transfer in the 1970s was mainly due to the economics -- it's an expensive process up front but a low cost-per-print, so it was only cost-effective with large print orders, which were on the decline in the 1970s. It was resurrected briefly in the late 1990s as an experiment, at a time when print orders were larger than ever due to simultaneous release worldwide, but now the problem with dye transfer was twofold: (1) it took too much time to set up, at a time when studios were used to delivering a cut negative just two or three weeks before release, and Technicolor needed a month to set up the matrices properly and time them; (2) the prototype printer couldn't crank out thousands of prints quickly -- in fact, during that time, studios were used to delivering multiple dupe negatives to more than one lab in order to make so many prints at once. And it certainly wasn't cost-effective compared to making normal color prints. The look of the prints comes from the dyes and the whole subtractive color look of prints in general, so there's no reason to scan them or scan b&w separations from color negative origination to create the look for digital projection, which is an additive color process. At best, like with the D.I. of "The Aviator", you could create a print emulation LUT that simply cut off the color frequency ranges of each color channel to the area that the color dyes were limited to, similar to the Vision print LUT. ("The Aviator" used a LUT to limit the range of the color channels from a scan of the color negative to match the color response of a 3-strip Technicolor camera, which had less crosstalk between channels compared to color negative.)
  5. The distortion of facial features from being too close isn't even a camera issue -- it happens with your own eyes. Forget about a face for a moment and imagine being on a hilltop looking at a distant mountain, with a medium-sized boulder in the foreground next to you. As you kneel down and get closer to the boulder, it gets bigger in your vision relative to the distant mountain. It's the same with a face: at a distance, the depth from the nose to the ears is small compared to their distance from you, but get very close and the distance from you to the nose might be the same as the distance from the nose to the ears, making the nose look bigger relative to the ears. So the sizes of the facial features at, let's say, one foot away are the same whether the lens is a 10mm or a 1000mm -- it's just that the 1000mm lens can't hold the whole face in the frame. However, if you panned around with the 1000mm lens, shot pieces of the face, and then stitched them together, you'd see the whole face with the same facial distortions as the 10mm lens.
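The geometry above is easy to sketch numerically. A minimal Python example, assuming an illustrative nose-to-ear depth of about 10 cm (my number, not from the post), comparing how much larger the nose renders than the ears at different camera distances under a simple pinhole model where apparent size scales as 1/distance:

```python
# Pinhole model: apparent size of a feature scales as 1/distance.
# Assume the nose tip sits ~0.1 m closer to the viewer than the ears
# (an illustrative figure, not a measurement).

def apparent_ratio(distance_to_nose_m, nose_to_ear_depth_m=0.1):
    """How much larger the nose renders than the ears, as a ratio."""
    distance_to_ears = distance_to_nose_m + nose_to_ear_depth_m
    return distance_to_ears / distance_to_nose_m

# Far away (3 m): nose and ears render at nearly the same scale.
print(round(apparent_ratio(3.0), 3))   # 1.033
# One foot (~0.3 m): the nose renders about a third larger than the ears.
print(round(apparent_ratio(0.3), 3))   # 1.333
```

The focal length never appears in the calculation, which is the point of the post: only the camera-to-subject distance sets the distortion.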
  6. As Karim points out, you are incorrect. Perspective / distortion / compression of a face has nothing to do with focal length; it has to do with the camera-to-subject distance. Focal length just provides the field of view. So you can match camera distance and field of view between two formats and get the same perspective on a face in close-up, and what you'd mainly see is a difference in depth of field.
  7. Bigger soft lights farther away mean even bigger flags! In this case, a slower fall-off over a greater distance is not going to help make the subject brighter and the background darker, the opposite happens. What you need are light control tools -- eggcrates for soft frames plus flags, teasers, etc. Are there any theatrical lighting pipes near the front edge of the stage to rig from? If you want to kill the light off of the screen, then stand where the screen is and look out - if you can see the light source, then it's hitting the screen. I recently lit a stage space for a soft frontal key using two Skypanel 120s end-to-end, then the grips hung an 8' x 4' black teaser on a piece of wood out a few feet in front of the lights to take it off of the upper half of a curtain in the background.
  8. Early on, they used spun glass in frames, or large silks. But considering the powerful output of the lights used, the silks weren't used too often due to the fire hazard. Spun glass was eventually replaced by Tough Spun (fiberglass). There might have been some use of frosted glass for small lights, I don't know. Heat was an issue in general and particularly with gels, since they were made of gelatine. See: https://spectrum.rosco.com/index.php/2010/11/the-first-100-years-from-gelatine-to-roscolux
  9. I hardly use a meter, but it's not because I think it would be inaccurate under LED lighting -- maybe it would be under a very narrow wavelength of saturated color. But even if it were inaccurate under that condition, it might be consistently inaccurate, enough to help with matching levels shot to shot. I usually have the DIT grab frames of the master and I then compare my coverage to see if the look has drifted in terms of balance, color, etc. But when lighting the coverage, I usually start by having the lens set to the same f-stop as the master, and often I'm not changing the background lights, just the foreground. But for pre-lighting a set before the camera is there, you would work with a light meter, or your gaffer might, at least to get to the base f-stop you hope to use. It's less critical today since digital cameras have a range of ISO settings they can work in.
  10. It's a good question and I've never gotten a consistent answer from colorists -- some would rather work with more normal density even if it means pushing, some would rather just lift the footage themselves. In theory, all pushing does is add density to whatever information got captured by the film; it doesn't add information. So maybe some shadow detail gets lifted above the noise floor, but then the noise floor / base fog level itself gets lifted by pushing. It would be worth testing.
  11. The easiest way to find out is to load another cartridge of Tri-X and compare the f-stop it selects to what your light meter tells you, assuming you know the shutter angle. Does this camera have no manual switch for daylight versus tungsten? My only Super8 experience was with a Sankyo XL60. Either way, if there is only a potential 2/3-stop mistake, I'd develop it normally instead of guessing.
  12. If there is an internal reflective meter, it reads the light after the filter, so it automatically compensates for the internal 85 filter going in, or any filter in front, just as it compensates for overcast versus sunny conditions. The 85 filter has a 2/3-stop loss. If it has an external meter, next to the lens, it probably would be set up to compensate for the internal 85 filter being switched in but wouldn't know if an ND filter was added in front. Just a guess.
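The stop arithmetic here generalizes: each stop of loss halves the light, so a loss of s stops means a transmission of 2 to the power of minus s. A minimal Python sketch (the 0.6 ND combination is my own illustrative addition, not from the post):

```python
# Each stop halves the light, so a loss of `stop_loss` stops
# corresponds to a transmission factor of 2 ** -stop_loss.

def transmission(stop_loss):
    """Fraction of light passed by a filter with the given stop loss."""
    return 2.0 ** -stop_loss

# An 85 filter's 2/3-stop loss passes ~63% of the light:
print(round(transmission(2 / 3), 2))       # 0.63
# Stacking a hypothetical 0.6 ND (2 stops) on top: 2 2/3 stops total.
print(round(transmission(2 / 3 + 2), 3))   # 0.157
```

Stop losses of stacked filters simply add, which is why they are more convenient to reason with on set than raw transmission percentages.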
  13. The foam may have had a dual purpose of dampening sound...
  14. You could get fine-grained results on 1980s high-speed stocks at night as long as you didn't underexpose them too much. And 5247 with a one-stop push wasn't that grainy. Again, it's all about the degree of underexposure. If you ended up printing your footage with printer lights in the 30s, it was going to look finer-grained than if it printed in the teens. But you need more light if you are going to be able to expose enough to print night work above the teens on the scale. So these night scenes usually used fast lenses and additional lighting. Most people would say that "high-speed" film was introduced in 1981 with a type of Fuji 250T, followed quickly by the first 5293 250T (not the late-90s 5293). You could see the change in the Oscar nominations for 1982 movies for cinematography: Gandhi (all 5247), Sophie's Choice (5247 and fast 5293), Tootsie (fast 5293), E.T. (all 5247), Das Boot (fast Fuji).
  15. There are free FOV calculators online. You can also use your smart phone on the location and a director's viewfinder app like Artemis or Cadrage -- there are quite a few (not free, though). APS-C is the same sensor size as Super35, so you could also take an APS-C still camera with a zoom to the location and determine the likely focal length. (Of course, you could use any camera format and then calculate the crop factor to find the equivalent focal length...)
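The crop-factor calculation mentioned in the parenthetical can be sketched in a few lines of Python. The sensor dimensions below are approximate, commonly quoted figures (actual camera apertures vary slightly by model), and the crop factor is taken as the ratio of sensor diagonals:

```python
# Crop factor = ratio of sensor diagonals. A lens on the smaller format
# frames like (focal_length * crop_factor) on the larger format.
import math

def diagonal(width_mm, height_mm):
    return math.hypot(width_mm, height_mm)

# Approximate dimensions -- exact apertures vary by camera model:
SUPER35 = (24.89, 18.66)    # 4-perf Super 35 aperture
FULL_FRAME = (36.0, 24.0)   # 35mm still "full frame"

crop = diagonal(*FULL_FRAME) / diagonal(*SUPER35)
print(round(crop, 2))       # ~1.39

# So a 50mm on full frame frames roughly like a 36mm on Super 35:
print(round(50 / crop, 1))
```

So if a full-frame still camera at the location shows the framing you want on a 50mm, a Super 35 cine camera would need roughly a 36mm to match the field of view from the same spot.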
  16. You'd probably have to work with photometric data of lamps based on foot-candles. The inverse square law comes into play; unfortunately, one aspect of lighting through windows is that the natural slow fall-off of natural light is replaced by the faster fall-off from artificial lighting outside the window, which cannot be as far away as the sky and sun. But obviously, the bigger and farther away you can get, the slower you can make the fall-off rate. I'm afraid that practically what tends to happen is that you get the largest lights you can afford and can rig, even if they are not enough. Yes, experience sometimes tells you when something will be overkill.
  17. Yes, use black on the camera, dolly, crew, and wall behind the camera, cut the light off of that area, and minimize who can be in the room (park the director & monitor behind the black wall, not in front of it, etc.). Shallow focus will help.
  18. What I was referring to is that occasionally Kaminski will smear a little Vaseline or something oily on the edge of the filter in front of the lens. I've done something similar in the past. A more subtle version is John Seale using an oily fingerprint on some glass in front of the lens to soften a close-up, as seen in "The English Patient".
  19. A lot of “Citizen Kane” was shot on a 24mm Cooke, but a 35mm was the more common wide-angle lens used. (35mm was also the widest prime that could be used on a 3-strip Technicolor camera, but a few times a wide-angle attachment was used to get wider, with mixed results.) 40mm and 50mm were popular, sometimes a 75mm for close-ups.
  20. Well, technically you can't use all of Full Aperture with a 2X anamorphic lens on a 4-perf camera because that would be a 2.66 : 1 frame, not 2.35 : 1. But it is common today to shoot framed for 2.35 centered on Full Aperture rather than offset for a soundtrack, though I don't know if anyone calls that "Super Anamorphic". But it sounds like you can use a Full Aperture / Silent 4:3 groundglass.
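The 2.66 : 1 figure follows directly from the aperture geometry. A quick Python check, assuming the commonly quoted 0.980" x 0.735" Full / Silent Aperture dimensions (exact camera apertures vary slightly):

```python
# A 2x anamorphic lens squeezes twice the horizontal field onto the frame,
# so the projected aspect ratio = (aperture aspect) * 2.

width_in, height_in = 0.980, 0.735   # Full / Silent Aperture (approximate)
squeeze = 2.0                        # 2x anamorphic

aperture_aspect = width_in / height_in      # 4:3, i.e. ~1.33
projected_aspect = aperture_aspect * squeeze
print(round(projected_aspect, 2))    # 2.67 -- commonly quoted as 2.66:1
```

Since the delivery standard is 2.39/2.35 : 1, some of that width has to be cropped, which is why a 2.35 frame line is marked inside the Full Aperture groundglass.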
  21. I know an AC who worked for DP Nicola Pecorini on it before William Fraker was brought in, but I thought there was a third DP also, maybe as an interim? Maybe there were only the two listed.
  22. IF you want maximum softness, you have to fill the frame evenly without hot spots, but there’s no rule that you have to achieve maximum potential softness any more than you have to use the heaviest diffusion gel in front of a light.
  23. Fogs, Double Fogs, and Low Cons were particularly popular in the mid-1970s.
  24. With b&w film, the difference is only 1/3-stop in terms of extra sensitivity to daylight (or lower sensitivity to tungsten.) But this isn't a factor if you are shooting digital. In terms of color contrast response, since you're shooting in color, it doesn't matter much either. I wouldn't worry about mixing the lights. And you can switch your monitor to b&w to judge the results.