David Mullen ASC

Everything posted by David Mullen ASC

  1. It would be hard enough to find places to project 16mm at all, but very few of those have anamorphic projection lenses, and those were 2X, so they would only make sense for a scope image. For anything 1.66, 1.77, 1.85, etc., a matted image was more likely in the past. You'd need a 1.3X or 1.4X anamorphic projector lens to show 1.77 squeezed onto 1.33.
  2. The guys were crossing the gas station when the car smashed through their pane of glass and hit the hydrant, all before the car hit the wooden fruit cart.
  3. Perhaps they were using black coroplast cut-outs for lensers and felt it was easier to attach them on the inside of the mattebox?
  4. Just depends on the effect you want. "Mist" filters are designed to create halation around lights, but they also soften detail, partly because the mist particles are diffracting light rays and blurring fine detail (which is what a regular diffusion filter like a net is designed to do), but also because there is some contrast loss, and contrast affects sharpness. Another popular mist filter is Tiffen GlimmerGlass. There are filters designed to create less misty halation and just more softening, so less of that glowing, hazy effect. Some of the best are Tiffen Diffusion/FX and Schneider Radiant Softs (those are subtle, so you have to use the heavier strengths). Something like a Schneider Classic Soft mainly softens, but there is a sort of blurred halation around lights due to the size of the "lenslets" in the glass. More subtle is the HD Classic Soft, which uses smaller lenslets. And then there is the Schneider Hollywood Black Magic, which combines a 1/8 Black Frost, for a base amount of misty halation, with degrees of HD Classic Soft for softening. And there are black nets of various types. I would take a look at the Tiffen Digital Diffusion/FX #1 or the Black Diffusion/FX #1 for starters. Really, the more important thing is lighting (and dialing down any artificial sharpening levels in the camera). Light the subject to reduce wrinkles and baggy skin (i.e. frontal and soft), keep the electronic sharpness level down in the camera, use a light filter... and then do any additional softening in post if needed.
  5. If you record 3328 x 2496 (4:3) with a 2X squeeze, you end up with a 2.66 : 1 image once desqueezed. But how much you crop to get from 2.66 to 2.40 depends on the delivery specs -- is it a 4K DCP or 16x9 UHD (3840 x 2160) with a 2.40 letterbox? Because with the DCP, a scope image area is 4096 x 1716 (2.387 : 1). With 16x9 UHD home video / broadcast, you can letterbox "scope" to whatever you like: 2.35, 2.40, 2.39, etc.
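A minimal sketch of that desqueeze-and-crop arithmetic, using only the numbers mentioned above (the 4K DCP scope frame of 4096 x 1716):

```python
# Desqueeze-and-crop arithmetic for a 3328 x 2496 (4:3) 2X anamorphic recording.
rec_w, rec_h = 3328, 2496          # 4:3 recording resolution
squeeze = 2.0

desq_w = rec_w * squeeze           # width after 2X desqueeze
print(desq_w / rec_h)              # ~2.667 : 1 desqueezed aspect ratio

# Cropping that width down to match a 4K DCP scope frame (4096 x 1716):
dcp_ar = 4096 / 1716               # ~2.387 : 1
crop_w = round(rec_h * dcp_ar)     # desqueezed width to keep at full height
print(crop_w)
```

The same arithmetic applies to any other target ratio: only `dcp_ar` changes.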
  6. Anamorphic is traditionally 2X, not 2.4X. But often the final delivery aspect ratio is 2.40 : 1. Which means the actual area of the sensor used is 1.20 : 1. You look at the pixel count of the recording and take the vertical number of pixels and multiply by 1.2X to get the number of horizontal pixels that will be used for 1.20 : 1 image area. But in terms of the post conversion (desqueeze and crop), there are a number of paths.
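To put numbers on that, here is the same calculation with an assumed 2880 x 2160 4:3 recording (those sensor figures are my example, not from the post):

```python
# Finding the sensor area used for a 2X anamorphic, 2.40 : 1 finish.
v = 2160                    # vertical pixel count of an assumed 4:3 recording
used_h = v * 1.2            # horizontal pixels used for the 1.20 : 1 image area
final_ar = used_h * 2 / v   # 2X desqueeze gives the delivered aspect ratio
print(used_h)               # ~2592 pixels of the recorded width are used
print(final_ar)             # ~2.4 : 1 after desqueeze
```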
  7. A filter that is a flat piece of glass isn’t going to create a ring flare like that, that’s happening with a lens element. A cheap diopter filter might, or a magnifier glass, but then you’d have to figure out how to get the lens to focus at a distance. Seems simpler to just use older lenses…
  8. All color imaging systems relying on RGB filtering, whether film or digital, have to allow some crosstalk between the three primaries to record information about the shades in between. Too much crosstalk, though, and you get a desaturated image, so picking the color separation filters is as much an art as a science. And if the bandwidth of each separation filter is too narrow, you not only get more saturation in the primaries (something like the old Technicolor look) and fewer of those in-between shades, you also lose sensitivity due to the density of the filters.
  9. Here’s an example I pulled from my own work. The middle frame is how the final image looked on home video. Above it is a lower-contrast attempt and at the bottom is a high-contrast version. You see that when a lot of contrast is added you start to need to add light to areas that you don’t want to drop out. Originally shot on Kodak 5219 500T rated at 320 ASA.
  10. It's easy to time a scan to be contrasty. If you are going for something like a skip-bleached print look where the blacks really look crushed, oddly enough you do have to think about fill light because of the drop in shadow detail. For the most part, of course, you want that look, but when you need to see something in the shadows, you have to add some light there. Eyelights are also helpful.
  11. Filter designs for diffusion tend to show up in the bokeh or when you stop way down and get too much depth of field, so it might be interesting to shoot a test of a tiny bright slit of light against black, to look at halation, on a macro lens stopped way down and then pan past the slit of light, then zoom into the image and see the interaction between the light and the filter structure. It would be misleading in terms of the look the filter was designed to create in normal situations but it may help you understand how the filter is interacting with light rays. Diffusion filters tend to be "mist" designs or use something to diffract focus around certain points (patterns in glass, dimples or bumps, threads of a net, etc). Or they combine both types. But keep in mind that particles designed to "mist" and glow are also diffracting focus so they act to soften detail, just not as heavily as a filter specifically designed to blur detail.
  12. Another important factor is the changes in strength over a series. Modern filters tend to be better spaced in strength with more subtle jumps, whereas a 1/2, 1, and 2 Tiffen Soft/FX are pretty distinct between them. So it depends on how much you plan on switching between filter strengths to modulate the effect. But you'd have to come up with your own nomenclature I'm afraid to describe what you are seeing.
  13. A long time ago I heard that many labs don't do pull-processing for 16mm, just pushing. Anyway, it may be simpler to just color-correct the scan for a low-contrast, desaturated look. You could use 200T for less grain.
  14. It's a bit of both: there was a difference in look due to the nature of the color filter arrays and the recording formats of the time, and the Sony F35 didn't use a Bayer mosaic pattern but an RGB stripe approach, so each of the three colors was captured equally rather than 50% green, 25% blue, 25% red. And with CCDs, you don't have rolling shutter issues. But whether any of that made the image more "filmic" is up for debate. You could look at the first "Captain America" movie, shot mostly on the Panavision Genesis (similar to the Sony F35), but with the final sequence set in modern times shot on the ARRI Alexa.
  15. Actually increasing the ISO does not increase the amount of light hitting the sensor -- that's only determined by light level, f-stop, and shutter time (and, I guess, the size of the photosite and any use of microlenses over it). The ISO setting just determines the amount of gain applied to the signal.
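A toy illustration of that point, with arbitrary units and numbers purely for the arithmetic (not a real exposure model):

```python
# Light reaching the sensor depends on scene light, f-stop, and shutter time;
# the ISO setting only applies gain to the resulting signal.
def light_captured(luminance, f_stop, shutter_s):
    # relative exposure: proportional to luminance * time / f_stop^2
    return luminance * shutter_s / f_stop ** 2

def recorded_signal(luminance, f_stop, shutter_s, iso):
    # ISO acts as a gain factor applied after capture
    return light_captured(luminance, f_stop, shutter_s) * (iso / 100.0)

base = (1000, 2.8, 1 / 48)   # arbitrary scene level, f-stop, shutter time
# Doubling ISO doubles the signal, but the light captured is unchanged:
print(recorded_signal(*base, iso=800) / recorded_signal(*base, iso=400))  # 2.0
print(light_captured(*base))  # independent of ISO entirely
```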
  16. Maybe it's a typo and they meant "pulled"? Or maybe he wants a negative that is 2-stops denser than normal?
  17. 2.40 from 3-perf is slightly larger negative area than from 2-perf (which is 2.66 : 1 full aperture so you crop the sides to get to 2.40) but also shooting 3-perf allows you to stabilize shots, not have to deal as often with hairs in the picture area or gate flare, and if you have to make a 16x9 non-letterboxed version, it is ideal. Here is my framing chart from an ARRI Alexa movie I shot in 16x9 but framed for 2.40 with a 1/4 offset from top, but I've done the same thing for some 3-perf movies:
  18. If you’re talking about shooting 35mm, it’s partly an issue of whether your producer will budget for 4-perf 35mm so you can shoot with 2X anamorphic lenses. If the only issue / goal is getting a 2.40 aspect ratio then shooting 3-perf and cropping is the best route. 4-perf 35mm anamorphic would look finer-grained but no one seems to mind film grain anymore. Use anamorphic only if you want that anamorphic look.
  19. Thanks for the memories! I used to go to all those presentations of the latest Kodak and Fuji stocks.
  20. Yes, the narrator is saying that the viewer (in a theater) is watching a projection print made off of a dupe negative rather than off of the original negative. I remember this presentation. Vision 320T (1996) was a replacement for their lower-contrast 200T 5287 (1994), which was Kodak's response to Agfa XT320. Unfortunately Agfa got out of the motion picture camera negative market right around this time.
  21. It's pretty much dead overhead because he's almost leaning forward of it and then catching it when he tilts his head up. It's reflected in his forehead.
  22. Geoff's method is how they did it, it's a slow-motion shot of a pan of water reflecting a soft light. The texture of the ripples all depends on how you are agitating the water, the size of the water, and the frame rate of the camera. The shape of the reflection is a lighting issue. It's all very organic and hit-or-miss depending on all those factors, you just have to play with it. You can probably first try doing it using your iPhone shooting slow-motion for practice.
  23. Seems to be one soft source judging by its reflection in the arm of the sofa chair.
  24. Certainly it seems larger-than-full-frame is more of a niche market in still photography, and I suspect the same will be true in filmmaking. Also, the benefits of large sensors are not quite as visible on the screen as the benefits of large film formats like 65mm and IMAX. You could intercut regular Alexa, Alexa LF, and Alexa 65 footage in one movie more seamlessly than you can intercut 35mm and 65mm because grain size is not a factor and resolution is always variable in terms of perception.