Everything posted by David Mullen ASC

  1. Depends on a lot of factors, like what is cutting the light into a shaft: is there a window frame? How large? Does the light have to fill it? Or is this an empty space where the light itself has to stay contained in a shaft? How much of a spread do you want? How much intensity? How far away can you place the light? How even in exposure do you want the area where the light hits?
  2. It would be easy to take stills to see the effect of using two 85 filters on a blue sky and correcting back to neutral versus normal filtering, but again, assuming digital color-correction, it's just as easy to darken the blue channel in post if that's what you want.
  3. In that example, there was a lot of overall contrast added somewhere in the chain, probably in color-correction. And of course, in digital color-correction, you can isolate the blue channel and darken it (see the sketch after this list). The first 85 filter on tungsten stock just gets you to neutral, as if you shot daylight stock. The second one warms up the image quite a bit; that will slightly darken a blue sky, but it's more that by making the image more orange, the blue sky stands out a little more (of course, the orange filter also makes the sky less blue; it shifts it a bit toward green). If you have a really blue sky like that, a pola filter will do more to darken and saturate it, but if you like the orange highlights, then go ahead and use the extra 85 filter. Or do both, or use an 85/Pola combo so there is only one piece of glass. But keep in mind that your reference photo is higher in contrast than normal, and increasing the contrast tends to increase the saturation.
  4. There can be multiple reflections in the eye, just as in real life. If anything, the traditional center point eye light is what is unnatural.
  5. Yes, that is anamorphic lens breathing, which looks a bit different from spherical lens breathing due to the change in the compression/squeezing of the background as it goes out of focus, but there’s also probably a bit of traditional breathing (focal length shift) mixed in with that. Anamorphic mumps, as with CinemaScope, is an entirely different issue: that’s when, as you follow-focus on a face getting closer, the squeeze starts to become less than 2X, so with the consistent 2X unsqueezing during projection, the face starts to look fatter (see the mumps arithmetic after this list).
  6. A number of movies have done that -- "Good Night and Good Luck" as well, I believe. Shooting color and turning it b&w, whether digital or film, has been more common over the years. Truth is, if you want really clean b&w, shoot digital, like "Ida" for example. B&W reversal was mainly just a lot more high-contrast, sharper, and generally finer-grained (depending on the stock) than b&w negative, so you can increase contrast in post, but contrasty lighting will also help.
  7. If you liked the skin tones, then why did you let the colorist change them? Doesn't he work for you? Were you recording raw? If so, then the color temperature setting on the camera is usually only metadata; if you weren't shooting raw, then you were baking in any color temperature settings.
  8. Nothing wrong with warming up skin tones by lighting through unbleached muslin, but being a daylight source, if the camera was set to 3200K, then the light isn't warm, it's still cool. You can make that light look warm, cool, or neutral depending on your base color temperature setting. Your colorist should be more clear as to what the problem was with color-correcting the skin tones in the shot: too green, too magenta, too desaturated, too saturated? Was the camera set up properly for the color rendition you intended?
  9. I had the same experience with a screening of "2001", the audience found the deadpan dialogue very funny ("Without your space helmet, Dave... I think you'll find it... very difficult.") And I don't think they were wrong to laugh.
  10. "Key" exposure generally just means exposing the subject for what the meter says will render the subject with "normal" brightness. It could be f/16 or f/2.8! A low-key image has smaller areas of normal brightness, even a few tiny areas of overexposure, but a large amount of the frame is very underexposed or black. It's not a scientific term, it's describing a feeling.
  11. A diffusion filter is only necessary if you think it is necessary based on your visual goals and the image you are getting on the set. It certainly doesn't hurt to carry some filters even if you end up not using them, just in case the sharpness of the image is too unflattering or you desire a certain effect like glowing highlights. However, keep in mind that if you don't really know whether you need the filters or not, you could always just shoot clean and do the filtering in post if it turns out to be necessary after the fact; you aren't stuck with a too-sharp image (but you can be stuck with a too-soft image). There are different types of diffusion filters: some are "mist" filters that cause bright areas to halate, often causing some loss of contrast while softening the image; others blur fine details but don't contribute much halation. Some combine a mist filter and a softening filter. You just have to decide what look you want and why you are using the filter.
  12. You’d cross-dissolve two passes of the same shot on two IP copies onto a new negative, one IP with the blurred areas. Now how they blurred it, I don’t know — perhaps just on a clear piece of film in front of the IP in the projector gate of the optical printer, with some Vaseline applied with a paint brush.
  13. I think it’s done in an optical printer since it dissolves over the shot rather than slides into the frame.
  14. These labels are sort of after-the-fact applications; you just light a scene the way you want to... but I'd say that the final effect in these examples is neither high-key nor low-key. Even with some dark walls, the overall impression is not one of darkness or moodiness, so it's not low-key. But because of the dark walls, it would be hard to call it high-key either. The actors alone, though, are lit high-key; it's more highlights than shadows.
  15. I find the Vision-3 stocks to be quite well-matched, other than the 50D which is so fine-grained that it tends to stick out. The graininess is directly tied to speed, so 200T is slightly finer-grained than 250D but not by much. Plenty of movies have been shot in 35mm on 250D and 500T because they match fairly well -- for example, the three Batman movies shot by Wally Pfister for Christopher Nolan. Your eye tends to see the graininess in lower-contrast situations with lots of midtones, which is why often you want to shoot day scenes on slower-speed stock than night scenes.
  16. Traditionally, contrast for b&w negative was adjusted by development, i.e. pushing. You don't necessarily have to underexpose to compensate; you could rate normally and push one stop for more density & contrast. But if you are finishing digitally, then probably normal development is fine and you can increase contrast in the grade. Color contrast filters only increase contrast based on the color content of the scene; they don't increase contrast overall no matter what is in the frame. You see some increase in contrast outdoors with the yellow-orange-red filters because the shadows often have blue skylight in them, so they go a bit darker than the highlights, and skin tones get lighter because they have red in them. But indoors, while you'd still get the lightening of skin tones with those filters, you don't really change the contrast overall. And one could argue that shooting under tungsten light indoors is already like using a yellow-orange filter on the camera outdoors.
  17. All digital cameras have some sort of noise suppression, so I think some of these textures just involve turning that down or off so that people who want that sort of noisy texture don't have to employ super-high ISOs (along with the loss of dynamic range) to get it.
  18. Yes, 2.8K Arriraw is a smaller sensor area than 3.4K Open Gate -- so yes, a crop (see the rough crop arithmetic after this list). You can't really downsample raw to raw easily; you usually debayer and downsample from raw to RGB. So you could record 3.4K Arriraw and downsample in post to a 2.8K RGB codec without cropping... though I don't know why you would, since 2.8K isn't a deliverable format. Some recording formats in the Alexa do involve rescaling, but not in raw. If you record 3.8K (UHD) ProRes, for example, it is an upscale of a 3.2K sensor area. There are some 1080P/2K recording formats that downscale from a 2.8K sensor area.
  19. Overexposure can still be below the clip point; cameras like the Alexa can have highlights many stops overexposed and still not get clipped, depending on the subject’s tone (a quick way to put that in stops is sketched after this list).
  20. In terms of color/contrast, the whole DI system was originally built to create a digital master that could be recorded back to intermediate film stock and printed to 2383, matching Vision negative film printed directly to 2383 -- and then digital cinema projection was designed around getting digital files projected to match 2383. So there should be no color shift between a DI recorded out to film and printed, or a DI digitally projected, compared to the same film printed to 2383, if matching is what you want. In fact, movies like "Oppenheimer" have the DI and the film print projected split-screen in the DI theater to make sure the digital version matches the film print, because in that case it was timed on film first, which is rare. And that's one of the reasons there is a 2383 print emulation LUT used in DI sessions, so you stay within the color gamut that print film can reproduce for movies that plan to be released both in film prints and digitally. However, this is all based around shooting negative and printing it on 2383, not using 2383 inside a camera to record an image onto. 2383 was not designed to reproduce the colors of real life accurately but to reproduce the colors on the negative film accurately into a positive image. It's all designed as a 2-step system. That's one reason why color negative has that orange color mask in it, so that the colors in the print end up being accurate. Print film has higher contrast than negative film because it has to in order to get decent blacks when a bright projector lamp is pushing light through the print and projecting the image on a white screen (just like why reversal slide film has more contrast than negative films).
  21. Maybe — but you’d probably want to put your 16mm lenses on the digital camera and crop the sensor to the 16mm area. You’re testing the degree to which the stacked filters affect front-focus, as well as sharpness, on certain focal lengths. I’d be concerned that with a larger sensor and lenses made for that larger sensor, the degree of shift in front-focus might be different. I could be wrong, though. Ultimately what matters is how it affects your 16mm photography, so the best thing would be to test it in 16mm, shoot some line resolution charts, and zoom way in on the scan. But checking it in digital might also reveal something.
  22. Two filters isn't too bad; it depends on the focal length -- it starts to affect longer lenses, but I'm referring more to ones with a telephoto effect. I suggest a test.
  23. They shot at a low frame rate so the people were sped up (when played back at 24 fps) while the man stayed very still so he wouldn’t look sped up — but near the tail of that sped-up shot, the editor slowed the footage down for a moment.
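
A minimal sketch of what "isolate the blue channel and darken it" means for items 2 and 3 above. This is only an illustration of the idea in plain numpy, not how any particular grading application implements it; the gain value is an arbitrary assumption.

```python
import numpy as np

def darken_blue(rgb, gain=0.8):
    """Scale only the blue channel of an RGB image (values 0.0 to 1.0).

    gain < 1.0 deepens blues (a darker, more saturated sky);
    gain > 1.0 lifts them.
    """
    out = rgb.astype(np.float64)      # work on a float copy
    out[..., 2] *= gain               # channel index 2 = blue
    return np.clip(out, 0.0, 1.0)

# Example: a flat "sky blue" patch before and after a 0.75x blue gain.
# Blue drops from 0.90 to 0.675; red and green are untouched.
sky = np.full((4, 4, 3), [0.35, 0.55, 0.90])
print(sky[0, 0], "->", darken_blue(sky, gain=0.75)[0, 0])
```

In a real grade this would normally happen in a controlled color space, usually behind a qualifier or sky matte so skin tones are not pulled along; the point here is only that the blue record can be adjusted independently of red and green.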
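
For the anamorphic mumps point in item 5: if the lens's effective squeeze at close focus drops below 2X while projection (or the post pipeline) still applies a fixed 2X unsqueeze, the face is stretched horizontally by the ratio of the two. A small sketch with assumed squeeze values, just to show the relationship:

```python
PROJECTION_DESQUEEZE = 2.0   # the fixed unsqueeze applied in projection/post

def apparent_width_factor(actual_squeeze):
    """Horizontal stretch when the lens squeeze has drifted below 2X."""
    return PROJECTION_DESQUEEZE / actual_squeeze

# As the subject gets closer and the effective squeeze falls off,
# the face is rendered progressively wider than it really is.
for squeeze in (2.0, 1.9, 1.8, 1.7):
    print(f"effective squeeze {squeeze:.1f}X -> face drawn at "
          f"{apparent_width_factor(squeeze):.0%} of its true width")
```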
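
For the 2.8K vs. 3.4K question in item 18: because the photosite pitch on the sensor is fixed, the recorded photosite count is directly proportional to the sensor width used, so the crop is just the ratio of widths. The pixel counts below are nominal round figures used as assumptions for illustration, not official specs:

```python
# Nominal horizontal photosite counts (assumed round figures).
OPEN_GATE_WIDTH = 3424   # ~3.4K Open Gate
CROP_WIDTH = 2880        # ~2.8K 16:9 ArriRaw

# With a fixed photosite pitch, the crop factor relative to Open Gate
# is simply the ratio of recorded widths.
crop_factor = OPEN_GATE_WIDTH / CROP_WIDTH
print(f"horizontal crop factor: {crop_factor:.2f}x")   # ~1.19x

# Field-of-view equivalence: a 25mm lens on the 2.8K crop frames roughly
# like a ~30mm lens would on the full Open Gate width.
print(f"25mm on the 2.8K crop ~ {25 * crop_factor:.0f}mm on Open Gate")
```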
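
For item 19, "stops overexposed" can be put in numbers: each stop is a doubling of light, so a linear scene value v sits log2(v / 0.18) stops above an 18% middle grey. The values below, and the idea of a clip point sitting several stops above key, are illustrative assumptions rather than Alexa specifications:

```python
import math

MIDDLE_GREY = 0.18   # 18% reflectance reference

def stops_over_key(linear_value):
    """How many stops a linear scene value sits above middle grey."""
    return math.log2(linear_value / MIDDLE_GREY)

# A white card (~90% reflectance) is only about 2.3 stops over key, so a
# hot highlight can be well over normal exposure and still fall below a
# clip point that sits, say, 6 or more stops above key (assumed figure).
for value in (0.18, 0.90, 1.80, 5.76):
    print(f"linear {value:4.2f} -> {stops_over_key(value):+.1f} stops over key")
```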