Everything posted by Michael Nash

  1. Uh, you mean the title didn't tip you off? Grindhouse = inner city theater that runs poorly maintained prints of exploitation films...
  2. Everything about film is an "artifact"! It's two-dimensional, runs at 24fps, has a limited contrast range, a grain structure, its own color gamut; then there's the optics, the lighting, production design, acting... The difference here is that certain artifacts inherent to the medium have become accepted by audiences, and have even become conventions that the audience expects to see. Stylized looks like bleach-bypass and that cross-processed "Tony Scott look" have been popular for years, so you can't say that movies have to be naturalistic or that audiences don't want artifacts. The important difference is being able to choose which image artifacts to include, and not be stuck with the ones you don't want. If you can use it, it's an asset; if you can't avoid it, it's a problem. When is a plant a weed? When you don't want it. But it's still a plant.
  3. It depends what you mean by "better." These adapters allow 1/3" chip video cameras to have the shallow depth of field their native focal lengths don't allow. That's it. They do so at the expense of resolution, contrast, and light transmission. With 16mm film you're usually fighting for all the sharpness and resolution you can get, using the sharpest, most contrasty glass and the slowest ASA film practical. I wouldn't say an Ultracon 5 filter plus ND.6 would make my 16mm footage look better, no matter how "crazy-shallow" the depth of field. 16mm film also has an image area close to that of a 2/3" chip video camera, which can get significantly shallower depth of field at the same field of view compared to 1/3" video. 1/3" prosumer video cameras also have a limited dynamic range and steep gamma compared to film, and a contrast-lowering groundglass can help mitigate that harsh contrast by flattening out the gamma response a little. With 16mm film you don't need that because it's... film.
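    If it helps to see the numbers, here's a rough Python sketch of why the bigger chip gets shallower focus at a matched field of view. The sensor widths and the width/1000 circle-of-confusion rule are assumed round figures for illustration, not anyone's spec sheet:

        import math

        def dof_total(focal_mm, f_number, subject_m, coc_mm):
            """Approximate total depth of field via the hyperfocal distance."""
            s = subject_m * 1000.0  # work in millimetres
            H = focal_mm ** 2 / (f_number * coc_mm) + focal_mm  # hyperfocal distance
            near = s * (H - focal_mm) / (H + s - 2 * focal_mm)
            far = s * (H - focal_mm) / (H - s) if s < H else float("inf")
            return (far - near) / 1000.0  # back to metres

        for name, width_mm in [('1/3" chip', 4.8), ('2/3" chip', 9.6)]:
            # Scale focal length to hold the same horizontal field of view,
            # using a 25mm lens on the 2/3" chip as the reference.
            focal = 25.0 * width_mm / 9.6
            coc = width_mm / 1000.0  # assumed circle-of-confusion rule
            print(name, round(dof_total(focal, 2.8, 3.0, coc), 2), "m total DOF")

    At the same f2.8 and framing, the 1/3" chip holds roughly twice the depth of field of the 2/3" chip, which is the whole reason these groundglass adapters exist.
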
  4. Did you see The Lookout? Any of The Company miniseries (TV)? I got about 1/3 of the way through I Know Who Killed Me (on DVD) before I noticed something different about the color that tipped me off. But then, that's about 1/3 farther than anyone got with that movie... :P
  5. Dante Spinotti talks about splitting the two formats here. It's interesting: he talks about choosing the Genesis for how well it sees into the shadows compared to film, yet the shadows in the trailer are completely black (looks great though). Can't trust too much from an online trailer I guess, or maybe they changed their mind during color correction. Trailer: http://www.apple.com/trailers/fox/deception/ Now how can I try one of those new "Prima" lenses? ;)
  6. This is a little out of my territory, but could it be a problem with the authoring of the DVD? Be sure that you're creating a true 24P DVD and not a 60i one.
  7. Kodak Hollywood has monthly demo screenings of their film products: http://www.kodak.com/US/en/motion/hub/news/demo.jhtml
  8. If the whites are clipped in camera, there's nothing you can do to make them "not white" in post -- the information in those areas is simply gone. You can bring them down to 100 and they should be legal. Bringing them down farther will just make them a pale gray and the contrast of the image will start to flatten out. If bringing the whites down to 100% makes the rest of the image look too dark, you can bring the mids back up with the 3-way color corrector or the Levels tool. Use your waveform monitor to compare the "before" and "after" images to set the level.
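    For what it's worth, the math behind "pull the peaks to 100, then lift the mids back" is simple enough to sketch in a few lines of Python. The 1.09 peak (109 IRE superwhite) and the 0.85 gamma are made-up illustrative numbers; on a real job you'd do this with the 3-way or Levels and judge it on the waveform:

        def legalize(luma, peak=1.09, mid_gamma=0.85):
            """luma: values on a 0-1 scale where 1.0 = 100 IRE; superwhite exceeds 1.0."""
            scaled = [v / peak for v in luma]  # pull 109 IRE peaks down to 100
            return [max(0.0, v) ** mid_gamma for v in scaled]  # gamma < 1 lifts the mids

        before = [0.0, 0.18, 0.5, 1.09]  # black, mid gray, bright skin, superwhite
        print([round(v, 3) for v in legalize(before)])  # [0.0, 0.216, 0.516, 1.0]
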
  9. I assume you're talking about the 16mm format... You can estimate field of view with a simple geometric rule of thumb: a lens that's twice as long sees half as wide, so a 5.9mm will see roughly twice as wide as a 12mm on the same format. I'm not familiar with this lens, but I would expect noticeable barrel distortion at that focal length.
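    The exact version of that rule of thumb is the angle-of-view formula, easy to check in Python. The 10.26mm gate width for regular 16mm is the commonly quoted figure; treat it as an assumption here:

        import math

        def horizontal_aov(focal_mm, gate_width_mm=10.26):
            """Horizontal angle of view in degrees."""
            return math.degrees(2 * math.atan(gate_width_mm / (2 * focal_mm)))

        for f in (5.9, 12.0, 24.0):
            print(f"{f}mm -> {horizontal_aov(f):.1f} degrees")

    Note the doubling rule is only approximate at the wide end, where the tangent stops being linear: the 5.9mm comes out around 82 degrees, a bit less than double the 12mm's 46.
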
  10. Usually you'll have a target f-stop in mind, and you design the lighting package around that. Or at least a range of f-stops (say 2.8-5.6). For example, with anamorphic lenses you usually want to avoid the lowest f-stops because the image tends to lose sharpness wide open, so you try to light to an f4 or f5.6 (or whatever your tests determine) for the best image. Sometimes with scenes like action/stunts or physical comedy you know you need enough depth of field to hold the action in focus, so you wouldn't handicap yourself by lighting to just any light level. High frame rate shots also require more light, so for sequences with highspeed shots you usually light for the highest light level needed, and scrim the lights or ND the camera for the other shots to maintain a consistent f-stop between shots.
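    The arithmetic behind "light for the highest frame rate and scrim down for the rest" is just a log: every doubling of frame rate costs a stop, shutter angle held constant. A quick Python check with some assumed frame rates:

        import math

        base_fps = 24
        for fps in (24, 48, 96, 120):
            stops = math.log2(fps / base_fps)
            print(f"{fps} fps needs {stops:+.2f} stops over {base_fps} fps")
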
  11. And how much IR light do you need for a high f-stop, so that you have enough depth of field to hold the whole scene, since you can't pull focus in the dark? :blink: I'm assuming the shots will be locked off with film also, since you can't see to operate -- unless you use some kind of IR videotap.
  12. Yes, it's the same tungsten filament as a standard incandescent, just in a halogen-gas-filled envelope. http://en.wikipedia.org/wiki/Halogen_lamp
  13. http://dvcreators.net/monitoring-your-work-in-final-cut-pro/
  14. Chocolate filters

    In the strictest sense, "Digital Intermediate" simply means that the original footage is manipulated digitally, as opposed to photochemically, before being output to its release format. It might be fair to say that color correction done during the telecine transfer isn't exactly a DI, but the process and controls are very similar: you're transferring the film into digital information and manipulating it digitally.

    Most telecine suites have pretty good control over colors, but of course it depends on the equipment they have (and the skill of the colorist). Ask your colorist what kind of color-correction hardware he has, and how well it allows him to isolate independent colors.

    There's no right or wrong way to go about color correction as far as using filters versus post goes. The more you do optically, the more you bias the information that's on the negative, which can be harder to cancel out later without digital artifacts. The more neutral you keep the negative, the more digital artifacts you might create if you have to push the look too far in post. So it's a balancing act, and you have to test each process all the way through post to determine which method gives you the best results within your means.
  15. Chocolate filters

    How are you finishing this film? On film or video? Because telecine color correction essentially IS a DI. Or did you mean the lab color timer for your film print?
  16. Chocolate filters

    No, it's not silly, and as a beginner you're asking all the right questions. This sort of thing has been done since the early days of film, when actors wore white makeup to keep their reddish skintones from coming out too dark on orthochromatic film. These days, horror films that use a bleach-bypass technique will use intensely bright red blood on set so that it will appear with the appropriate saturation on screen. Keep in mind that skin tones are just as important as (if not more so than) wardrobe, so make sure to test that your "sickly warm" background still separates from warm fleshtones.
  17. Obviously your needs are more specific. 120 fps might be exactly what you need for food, but can be overkill to simply slow down human motion in a drama. It just depends what effect you need to create. Sounds like the Phantom is really more your thing if you're not shooting film. Have you tried Twixtor to create "tweener" frames to slow the footage down even more? I've seen nice results in online clips, but those don't show the full quality.
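    To be clear about what those tweener frames are: Twixtor does motion-estimated interpolation, which I can't reproduce here, but the crudest possible stand-in (a weighted blend of the two neighboring frames, illustrative Python only) at least shows what a retime to a non-integer factor has to synthesize:

        def blend(frame_a, frame_b, t):
            """t in [0, 1]: 0 returns frame_a, 1 returns frame_b."""
            return [(1 - t) * a + t * b for a, b in zip(frame_a, frame_b)]

        # A half-way frame between two 3-pixel "frames":
        print(blend([0, 128, 255], [16, 120, 251], 0.5))  # [8.0, 124.0, 253.0]
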
  18. Thanks. However, there are plenty of people using NLEs for online, especially when they're starting with a compressed codec like HDCAM, DVCPRO-HD, or now AVC-Intra. I'm not arguing the quality, but it does matter to those productions where the NLE is the "end of the line."
  19. Music is just one more element of film, like cinematography, editing, production design, acting and so forth. Filmmakers can use it overtly or subtly, just like any other element. There's no one way it "should" be used. The problem I've noticed on some Hollywood films is that the music seems to be administered onto a finished film after the fact, and ends up feeling "decorative" rather than integrated into the design of the film. When that happens, music that's too ear-catching and memorable can just end up feeling hokey and distracting, like a wallpaper pattern that's too loud for the room. Like the other elements of filmmaking, when music is well integrated into the design of the film you end up feeling the effect, regardless of whether you're consciously aware of it. That doesn't mean that the score has to be "buried" in the edit or forgettable either; just that it has to be incorporated into the design. You can take complete or even well-known pieces of music like "The Blue Danube Waltz" or "Mrs. Robinson" and successfully integrate them into the design of the film.
  20. It probably goes without saying that you need to view your work on the same type of display that you'll be delivering for. In other words, you really need to output to an NTSC CRT to see how the work will look on that type of display. As I understand it, FCP does apply gamma correction that simulates a CRT. But you do still need to set up your monitor properly to get it as close to "accurate" as possible, even for home color-correction.
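    As a toy illustration of why that matters (this is not FCP's actual internal math, just the general idea): converting a value between the old 1.8 desktop display gamma and the 2.2 a video CRT expects (both figures assumed historical round numbers) shifts the midtones noticeably:

        def regamma(v, from_gamma=1.8, to_gamma=2.2):
            """Re-encode a 0-1 value from one display gamma to another."""
            return (v ** from_gamma) ** (1.0 / to_gamma)

        print(round(regamma(0.5), 3))  # 0.567 -- the mids come up visibly
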
  21. Right, I realize that -- my question is exactly how do you make complete frames out of fields? Is there a specific box that's used, or a common way on NLEs?
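    Since the question came up, here's the idea in miniature (illustrative Python, not any particular box): the two classic approaches are "weave" (interleave the two fields into one frame; fine for static shots, combs on motion) and "bob" (line-double a single field; clean motion, half the vertical resolution). Real hardware and NLE deinterlacers are motion-adaptive blends of the two:

        def weave(field_upper, field_lower):
            """Interleave upper/lower field lines into one full frame."""
            frame = []
            for a, b in zip(field_upper, field_lower):
                frame += [a, b]
            return frame

        def bob(field):
            """Line-double one field into a full-height frame."""
            frame = []
            for line in field:
                frame += [line, line]  # or interpolate between lines
            return frame

        print(weave(["U0", "U1"], ["L0", "L1"]))  # ['U0', 'L0', 'U1', 'L1']
        print(bob(["U0", "U1"]))                  # ['U0', 'U0', 'U1', 'U1']
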
  22. 60fps is absolutely enough to get excited about -- ask anyone who owns an HVX200 or Varicam. Everyone's needs are different, but for dramatic effect 60 fps gives pretty nice results. Slowmo in sports isn't always the same thing, especially when you consider it's usually interlaced. Having true highspeed progressive capture can be a real asset for some productions. The obvious difference is you're using the full capture of the chips, and not losing any resolution or having to interpolate pixels in post. Obviously people have been able to get nice results from interlaced capture, so it's a matter of workflow and preference. If you're pleased with the F900 you might also enjoy the current Panasonic HPX3000. For some of us the flexibility in frame rates is desirable. It's "horses for courses." What's your technique for getting your slow-mo in post with the F900?
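    The overcranking arithmetic, for anyone following along (plain Python, nothing camera-specific): shoot high, play back at the project rate, and the slowdown is just the ratio:

        for shot_fps in (60, 120):
            for play_fps in (24, 30):
                print(f"{shot_fps} fps played at {play_fps} fps = "
                      f"{shot_fps / play_fps:.2f}x slow motion")
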
  23. The ChromaDuMonde chart includes several different references all in one chart. http://www.dsclabs.com/chromadumonde.htm

    In simplest terms, it's useful to see how your imaging system (film or video camera) is responding to known real-world values. The basic things you want to look for are exposure (gray), contrast (dynamic range and gamma), color response (hue and saturation), and sharpness (resolution and detail). There can be different charts for each thing you want to measure, and you often want to measure them across the entire frame, not just the middle of the image (since optical distortions are one thing you're trying to evaluate).

    Grayscales let you see tonal distribution as well as dynamic range. They're not used for exposure per se. Color charts let you see color response around the full circle of the spectrum, and at different saturations and luminances. The striped "trumpets" are for resolution, detail and aliasing artifacts in different directions. A Siemens Star is usually used for backfocus. Color bars are used as a signal reference, independent of the camera or film's performance.

    You don't need to use a chart if you already know how your imaging system is behaving and what it will give you. But if you don't already know, or some variable in the system has changed, then a chart can provide a real-world known reference to compare against. Charts are especially useful when you have multiple cameras that you're trying to match. The references on the chart are more precise and easier to see (especially with test equipment) than subjectively matching general images.
  24. It's a matter of optics. 3-chip cameras use a prism to split the light into three colors focused at slightly different distances, while film and single-sensor cameras don't. Lens CA can be independent of this, but might be compounded more by one system than another.
  25. The "changing angle" has nothing to do with it. For all practical purposes just forget about the shutter wiping the frame; with a film camera/rotating shutter the effect is negligible and it doesn't affect exposure. All that matters for exposure is the amount of time the shutter is opened and exposing the film to light. The angle of the shutter is just a measure of the size of the opening, which determines the amount of time the film is exposed, at a given frame rate. It really doesn't matter what direction the shutter moves.
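    The exposure arithmetic in that paragraph fits in a few lines of Python; note that the sweep direction never enters into it:

        def exposure_time(shutter_angle_deg, fps):
            """Time the shutter is open per frame, in seconds."""
            return (shutter_angle_deg / 360.0) / fps

        for angle in (180, 90, 45):
            t = exposure_time(angle, 24)
            print(f"{angle} degrees at 24 fps -> 1/{1 / t:.0f} sec")
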