Michael Rodin

Everything posted by Michael Rodin

  1. You can't grade out what isn't in the footage. If the camera doesn't distinguish enough yellow and red hues within -1.5/+2 stops around key to capture the full skin color palette (no video does at this point), you won't magically create those hues in post. Color contrast is lost, digital waves goodbye and leaves you with plastic portraits. Shooting RAW doesn't mean you capture all the color that's in the scene. For some of us it'll ruin the movie night. Déformation professionnelle, you see.
  2. As good as film? No. And it has nothing to do with sharpness. All modern stocks and video cameras have enough microcontrast to look sharp, and there are lenses and diffusion to control it on both mediums. Unless the movie is B&W, the two things responsible for realism (and believability - of the image, not the whole film) are shadow/highlight handling and color. They're interconnected, which means both are flawed in video. Simply put, video has unrealistic color. And please, don't start about scenes shot with a strong color cast, green skies, etc... :) There's such a thing as the color accommodation of vision, which makes us adjust to casts and hue shifts. Skin lit with blue light still looks "real" if it has all the hue range we're accustomed to seeing and its colors saturate naturally depending on lightness. But if it turns into an almost flat color with little or no color contrast and oversaturates in the highlights, it looks fake. Alexa has those problems, the F35 is worse, the F23 even worse, Reds from bad to awful depending on the sensor. Fuji Eterna Vivid - 99.9% lifelike color response, I'd say. For example, video cameras don't "see" the red (the blood-vessel color) in skin correctly; they all color it too uniformly. To me it's a deal breaker. I even used to put a pink enhancer filter on the lens and use faint amber gels to get more red saturation so the red would come out more in grading, but the problem is too complex to solve this way. As to 48fps - as much as it sucks, one can always go back to 24. If the hucksters... pardon, producers, strip us of film, manufacturing will shut down and there's hardly any way back.
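To make the highlight complaint above concrete, here's a toy sketch (my own numbers, not any real camera's processing): when the RGB channels of a digital signal clip independently, the channel ratios that carry skin hue are destroyed, while a film-like soft "shoulder" compresses all channels together and keeps the ratios closer to intact.

```python
# Toy illustration only -- not any actual camera pipeline or film model.

def clip_digital(rgb, limit=1.0):
    """Hard-clip each channel independently, as naive encoding does."""
    return tuple(min(c, limit) for c in rgb)

def shoulder_film(rgb, limit=1.0):
    """Crude soft roll-off: every channel is compressed by the same curve,
    so the channels keep their ordering as exposure rises."""
    return tuple(limit * c / (limit + c) for c in rgb)

skin = (1.8, 1.2, 0.9)          # overexposed warm skin tone, linear light
print(clip_digital(skin))       # -> (1.0, 1.0, 0.9): red/green ratio is gone
print(shoulder_film(skin))      # ratios compressed but still ordered R > G > B
```

The clipped version turns the warm tone into a near-flat color (R equals G), which is exactly the "almost flat color with no color contrast" effect described.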
  3. But, actually, doesn't he? :) From what I've read, he chose the Alexa for things other than the image.
  4. We as lighting cameramen don't edit scripts and can only influence direction or acting so much. So the only place we add value to a movie is the image. As to crap films rescued by film, I have none to list: excellent cinematography - and a technically flawless image at that - hasn't saved any crap movie, at least for a general audience. But the opposite has happened, IMO, with pretty much every good film shot on video. With all due respect to a master cinematographer: Deakins delivered excellent lighting and color choices which would have provided an immersive "cinematic reality" image quality - if he'd had a tool with accurate-to-the-eye color and highlight/shadow handling. Those tools go by names like Eterna, Vision, etc... Shot on an Alexa, there are things that distract - me consciously, the audience not so consciously. The No. 1 thing is weird color saturation, especially of warm colors, very visible on faces. Will that spoil the show for the audience? No way. But it sort of robs the movie of its visual beauty anyway. His films shot on 35, to me at least, are noticeably better at creating realism than the Alexa ones. And I won't trust loading to virgins; they probably don't know what's 99 vs 9P, or think it's something to do with 69...
  5. Actually, 4K projection, digital LieMAX and other kinds of "innovative" nonsense are for pixel-peepers, while film origination is for a better viewing experience, i.e. for consumers. An average viewer doesn't care about resolution and grain (as long as it's not the size of a football) - they want an immersive experience, which means a natural-looking image, as if we were looking at the scene (no matter how unrealistic or out-of-this-world it is) through our own or a character's eyes. And digital fails to deliver a natural-looking image in the majority of lighting/color situations, which makes the audience's subconscious nag even harder: "you're watching some pointless sh-t, aren't you?" And these days it's already hard enough for a director - and the DoP, PD, editor - to get the audience to suspend disbelief.
  6. I am clearly not an editor. I have edited my own material for various needs (not for a final cut, I mean) though, on legacy Final Cut and Avid, and - much less - Premiere. Of these, Avid was the fastest and easiest to work on. Take trimming, for example. On many systems it takes fiddling with the mouse. On Avid: hit "S" for the next cut, hit P or { or } to choose which side to trim, trim with the J/K/L or M/</> keys, and press Space anytime to review the cut.
  7. Do they use anything other than that? Here it's mostly Avid and some slowly disappearing Final Cut 6 and 7 bays.
  8. Don't ever put an HMI on a dimmer or flicker box. With a magnetic ballast you'll fry the ignitor or waste a globe; an electronic one will either shut off or burn down. You can get DMX-controlled blinds/shutters for them though, usable with units up to 18K. There are special xenon-based fixtures for imitating lightning - Lightning Strikes and the like. Keep in mind they'll require a significantly overpowered genny.
  9. It could be done in front of a large white cyclorama with rather dense fog. You can try backlighting it with rows of PAR64s or Jumbos. It might also be possible in a warehouse-like space without windows, but that will require a more even and powerful backlight to hide the unwanted background behind the overexposed fog.
  10. The signal coming from the port on the back of the F900 is parallel digital, and should be component 4:2:2, so there's actually no conversion (like analog to digital) going on in the Miranda box. Which is good, as it means the Miranda doesn't impact signal quality and is no worse (or better) than the HD-SDI output of the F900R.
  11. Konvas feed-side friction clutch. EDIT: the f-ing forum software doesn't accept any image format; I'll sort it out later.
  12. Yes, my fault - the feed shaft sits on another friction clutch, not coupled to the motor. I can post a technical drawing of a Konvas friction clutch.
  13. Excuse my terminology, I learned it all in Russian... Constant-speed transport of the film is done by a pair of sprocket drums which engage the perfs and sit on the feed and take-up sides of the movement. In some cameras (pre-XL Panaflex, the Soviet "Rodina") there's one drum: its upper part engages the film on the feed side and its lower part on take-up. There are rollers which press the film against the drum(s), plus guide rollers. The constant-speed part must be gentle with the film (low stress on the perfs, reasonable bend radii) and, of course, maintain precise speed (which it does as long as the motor does, because it's geared to the claw movement). Then there's the claw movement, which is responsible for the intermittent motion. It's linked to the shutter and advances the film while the shutter is closed. When the shutter opens, registration pin(s) go into the perfs to hold the film steady in the gate. (There's still a large number of non-registered cameras in use, though.) Between the movement and the drum(s), loops of film are formed which are critical for maintaining constant feed/take-up speed while the film travels intermittently in the gate. Were it not for the loops, the whole mass of film would need to travel intermittently and experience huge accelerations (that would mean stress on the film, and motors in the kilowatt range). The feed and take-up shafts in a magazine are mechanically coupled to the camera motor on most cameras. But because the film reel's diameter constantly changes while the camera is running, you can't hard-couple it to the motor and drive it directly at constant speed. So they run the mag at the nominal speed and let the reels slip on friction couplings (фрикционы), keeping the torque constant. Cameras like Moviecam and Panaflex have separate mag motors with a "soft" torque characteristic - they give more torque and slow down as the load increases - eliminating the friction couplings and the whole mass of mechanics that comes with them, making the camera quieter and gentler on film.
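The arithmetic behind that friction coupling is easy to check. A rough sketch (35mm 4-perf at 24fps; the perf pitch and the core/full-reel radii are my approximations): the film's linear speed is fixed, so the rpm the reel must turn at falls several-fold as the reel fills up, which is exactly why a hard-coupled constant-speed shaft can't work.

```python
import math

PERF_PITCH_MM = 4.75                      # approx. standard 35 mm pitch
film_speed = 24 * 4 * PERF_PITCH_MM       # mm/s: 24 fps x 4 perfs/frame

def reel_rpm(radius_mm):
    """rpm a reel of the given radius must spin at to keep film speed constant."""
    return film_speed / (2 * math.pi * radius_mm) * 60

print(round(reel_rpm(25)))    # near the core: ~174 rpm
print(round(reel_rpm(110)))   # near a full reel: ~40 rpm
```

A shaft driven at fixed speed would be wrong by a factor of four over the roll; the slipping clutch (or a soft-torque motor) absorbs that mismatch.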
  14. Doesn't work either. It could be greenish BG or even a kicker that made skin look more pink by contrast. Try zooming in and judging skin tone separately.
  15. I constantly use a 1-degree spot to get isolated readings of specific textures or details. It also helps to separately meter the lit and shadowed sides of faces. With a 5-degree lens you're often doing what's called "integrated metering", i.e. taking an average of highlights, half-shadows, etc., which kind of defeats the purpose of spot metering - the precise control of contrast. You can get closer to the subject to meter a smaller area, but that slows you down and once again defeats the purpose - spot readings are usually best done from the camera position, as they're all about reflected light.
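For a sense of scale: the metered patch is a circle of diameter 2·d·tan(θ/2) at distance d. The sketch below assumes a 3 m meter-to-subject distance (my example figure, nothing from the thread):

```python
import math

def spot_diameter_mm(distance_mm, angle_deg):
    """Diameter of the circle a spot meter of the given acceptance
    angle reads at the given distance."""
    return 2 * distance_mm * math.tan(math.radians(angle_deg) / 2)

print(round(spot_diameter_mm(3000, 1)))   # 1-degree spot at 3 m: ~52 mm
print(round(spot_diameter_mm(3000, 5)))   # 5-degree spot at 3 m: ~262 mm
```

Roughly 5 cm versus 26 cm: the 1-degree spot reads a cheek or a shirt collar, while the 5-degree lens averages most of a head and shoulders.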
  16. Hypergammas on the F23 produce smoother highlights compared to the F900, I meant. On this camera they're adjusted to hold detail at 3 stops over "100% video white" exposure (Sony calls it 800% dynamic range). On the F900, Hypergammas keep highlights that are 2 1/3 stops over. The F900 can also be made to shoot S-Log, but it's a bit too noisy for that.
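The percent figure is just the stops-over-white headroom expressed as a linear ratio, 100% × 2^stops. A quick check (the 2 1/3-stop conversion is my arithmetic, not a Sony spec sheet):

```python
def over_white_percent(stops):
    """Convert stops of headroom over 100% video white to Sony-style
    'dynamic range' percent: each stop doubles the ratio."""
    return 100 * 2 ** stops

print(over_white_percent(3))          # 3 stops over white -> 800
print(round(over_white_percent(7/3))) # 2 1/3 stops -> ~504, i.e. ~500%
```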
  17. The two HD-SDI outputs are on the so-called Interface Box. It came from the factory with every camera, but rentals could be selling used cameras without it. They aren't regular BNC connectors though; they're 2-pin Lemo - at least on early cameras.
  18. The F23 does have a wider and more accurate color response compared to the F900. It's on par with today's best cameras in that sense. Highlights are smoother with the same-numbered Hypergammas (which aren't actually the same, as they're still adjusted for different cameras' latitude). Canon HD-EC zooms don't do this camera justice. It deserves Fuji E Series or Zeiss Digis ;)
  19. I'd second the question: what exactly are you after? The aforementioned Lomo, for example, made all kinds of lenses in the Soviet era: moderately soft all-arounders with low microcontrast, center-sharp and edge-soft, high-resolution with pretty strong contrast, really soft portrait lenses, etc. Take two 75mm T2 Lomos - a late OKC17-75 and an OKC1-75 - both are 6-element Double Gauss designs, but the performance (and the "look" or "feel" as a result) is radically different. Then there's the "75mm soft" and various prototypes... it's almost as if no two Lomos are the same :) Those bokeh-effect lenses must be the Helios 40-2, known for swirly bokeh and very bad edge definition (which can be an advantage for a portrait lens).
  20. First, the green screen must be farther away. Currently it's working as a reflector, lighting the actor with greenish spill. It will be close to un-keyable if you leave it that way. With the screen far enough back you'll have room to place lights - over its edge or in front of it. You either need a wider green screen or... Have you tried covering only those parts of the window which will be overlapped by the actor's figure? If she'll be sitting like this for the whole scene, just leave green behind the camera-right third of the window. The rest can be easily rotoscoped.
  21. There is fill, otherwise we wouldn't see texture and modelling in the shadows. The fill is quite high above the lens and maybe a bit to the left.
  22. There's only one key light. It's also set at quite a normal height/angle; it's just her leaning towards the lens that makes the nose shadow go further down towards her upper lip. The key is not all that soft. It could be a big-eye Fresnel with very thin diffusion like Hampshire Frost - or it could be bare, with makeup preventing the skin from looking more specular. The fill likely comes from the key side. Then she's cross-backlit by two softlights.
  23. Seems adequate. But you'd better ask at LiftGammaGain, as I know nothing about LCDs and neither do most of us here. Basically you need a monitor which will display colors precisely enough within the Rec. 709 spec and which you can calibrate to a '709 gamma.
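For reference, the Rec. 709 camera-side transfer curve (OETF) is simple to write down; note that grading displays are typically calibrated to a plain power-law gamma (e.g. BT.1886's 2.4) rather than this curve, so this is background, not a calibration recipe:

```python
def rec709_oetf(L):
    """ITU-R BT.709 OETF: linear scene light (0..1) -> code value (0..1).
    Linear segment below 0.018, power segment above."""
    return 4.5 * L if L < 0.018 else 1.099 * L ** 0.45 - 0.099

print(round(rec709_oetf(0.18), 3))  # 18% mid grey -> ~0.409
```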
  24. You don't need 3 grading monitors. One grading monitor and one for the interface is enough. Later you might get another one to display scopes, but for now getting an adequate grading monitor is the first priority.