Michael Rodin

Profile Information

  • Occupation
    Cinematographer
  • Location
    Moscow
  • My Gear
    Cinealtas & Arri SR line

Contact Methods

  • Skype
    mihail_rodin
  1. I had a Blackmagic CD with older drivers and codecs (from around 2005, I suppose). I recall they worked in Adobe software and VLC. I could upload it here in a couple of weeks.
  2. How many stops there are between 0 and 100% is limited by your camera's signal-to-noise ratio. Say, an F900 had, as far as I remember, 54 dB, which means you won't ever distinguish more than 9 stops in the 0 to 95 (or wherever the knee circuit kicks in) IRE range - 6 dB equals a stop. Then you add 3 stops (=800%) of range crammed nonlinearly into the highlight (say, 95...105) region - either using knee or with some gamma that has a shoulder (like a film H&D curve). In the latter case it's a bit more complicated, since the whole "percents" terminology comes from the world of broadcast-legal video that doesn't necessarily get color graded.
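The 54 dB → 9 stops arithmetic above can be sketched like this (the 6.02 dB/stop constant is just 20·log10(2) for a voltage ratio; the F900's actual figure should be checked against its spec sheet):

```python
def usable_stops(snr_db: float, db_per_stop: float = 6.02) -> float:
    """Rough estimate of distinguishable stops below clip from SNR in dB:
    each ~6 dB of signal-to-noise is one more halving you can still see."""
    return snr_db / db_per_stop

print(usable_stops(54))  # a 54 dB camera head -> roughly 9 stops
```

This is back-of-envelope only; knee/shoulder processing redistributes those stops, it doesn't add any.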
  3. It's a static detail shot, isn't it? Does anything moving obscure the image in the viewfinder? You could shoot the image in the telescope separately, with the exact focal length needed to get the right image scale, far enough from the eyepiece, and composite it into the shot. As to focusing: the image of the moon comes out of the telescope in practically parallel rays, which means focusing the taking lens around infinity. And the focusing distance for a (split) diopter will be 1/D if we need parallel rays after it. A short-focus single-element diopter might unacceptably degrade edge definition and introduce CA, though.
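The 1/D relation above, as a tiny sketch: a close-up diopter of power D collimates light from an object sitting at its front focal distance, which is 1/D metres.

```python
def diopter_subject_distance(power_diopters: float) -> float:
    """Distance (m) at which a close-up diopter of the given power turns the
    subject's light into parallel rays (object at the front focal point)."""
    if power_diopters <= 0:
        raise ValueError("need a positive diopter power")
    return 1.0 / power_diopters

print(diopter_subject_distance(2))    # +2 diopter -> subject at 0.5 m
print(diopter_subject_distance(0.5))  # +1/2 diopter -> subject at 2.0 m
```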
  4. Interesting... Because, if I remember correctly, the parallel port on the back of the non-R F900 output a 10-bit signal straight from the DSP, just upstream of the downsampling/3:1:1 circuitry. I had the service manual vol. 2 somewhere, though - I should look it up.
  5. CA has nothing to do with the lens being "HD" or not - that's a meaningless marketing term, since it isn't the case that only "HD lenses" resolve 110 lp/mm or, say, 40 line pairs at decent contrast. High-end EFP and box lenses, and especially D-cinema lenses like the Fujinon E series or Canon FJ, are better corrected for CA at wide apertures.
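For context on where figures like 110 lp/mm come from: the Nyquist limit of the sensor sets the resolution a lens must deliver, and it depends only on pixel count and sensor width. A sketch, assuming a 2/3" sensor with an active width of roughly 9.6 mm (illustrative - check the actual camera spec):

```python
def nyquist_lp_per_mm(horizontal_pixels: int, sensor_width_mm: float) -> float:
    """Sensor Nyquist limit in line pairs per mm: one line pair needs two pixels."""
    return horizontal_pixels / (2.0 * sensor_width_mm)

# Assumed ~9.6 mm active width for a 2/3" chip (illustrative value)
print(nyquist_lp_per_mm(1920, 9.6))  # -> 100.0 lp/mm for 1920-wide HD
```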
  6. Pushing doesn't increase sensitivity (or granularity, for that matter) much - the film's characteristic (H&D) curve is only slightly shifted to the left (towards lower exposure, by 1/3 stop or so) and quite noticeably upwards, which means denser fog and generally denser shadows. So yes, a one-stop push pulls out more shadow detail, but it doesn't precisely compensate for one stop of underexposure. What it does, first of all, is increase midtone contrast and let you see color there, while with normal processing strongly underexposed midtones are murky and desaturated. I would worry less about overprocessing the rest, since it was an overcast day and there were hardly any important details exposed more than 3 stops over key - unless you were shooting in the shade and shifting your exposure accordingly. A two-stop push might look too contrasty to you on normally exposed shots, but it won't be much grainier than a 1.5-stop one. The sky will get lighter and lose detail, clouds will stand out less, and it can look almost uniformly white. Highlight detail will largely still be there, just a little too dense for the telecine/scanner.
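A toy numerical model of the shift described above - a logistic stand-in for an H&D curve, with a push modeled as ~1/3 stop of speed gain per stop of push plus extra base density. All constants here are made up for illustration; real curves come from lab sensitometry strips.

```python
import math

def density(log_e, d_min=0.2, d_range=2.0, k=4.0, pivot=0.0):
    """Toy H&D curve: density vs log10 relative exposure (logistic toe/shoulder).
    Purely illustrative, not any real stock's curve."""
    return d_min + d_range / (1.0 + math.exp(-k * (log_e - pivot)))

def pushed_density(log_e, stops=1.0):
    """Push modeled as a small leftward speed shift plus denser fog/shadows."""
    speed_shift = (stops / 3.0) * math.log10(2)  # ~1/3 stop per stop of push
    fog = 0.08 * stops                           # assumed base-density increase
    return density(log_e + speed_shift) + fog

# A tone 3 stops under the pivot gains visible density when pushed one stop:
shadow = -3 * math.log10(2)
print(density(shadow), pushed_density(shadow, 1))
```

The point the numbers make: the pushed shadow is denser (more readable), but the gain is far less than the full stop of underexposure it's meant to cover.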
  7. The way sunlight looks has nothing to do with intensity; what's important is falloff (or rather the lack of noticeable falloff) and quality - it's a practically perfect point source when it's not overcast. You can try to simulate it at any light level. The last thing to do, though, is set a light half a meter from the face :) The farther away, the more it looks like sun. Mirrors are useful for this.
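The "farther away = more sun-like" point is just the inverse-square law across the depth of the subject. A sketch, assuming a face roughly 15 cm deep (an arbitrary illustrative figure):

```python
def falloff_ratio(distance_m: float, depth_m: float = 0.15) -> float:
    """Inverse-square intensity ratio between the far and near side of a
    subject ~depth_m deep. 1.0 means perfectly even, sun-like light."""
    return (distance_m / (distance_m + depth_m)) ** 2

print(round(falloff_ratio(0.5), 3))   # lamp at 0.5 m: ~0.59, obvious falloff
print(round(falloff_ratio(10.0), 3))  # lamp at 10 m: ~0.97, nearly flat
```

At the sun's distance the ratio is indistinguishable from 1, which is why bounced mirrors read as sunlight.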
  8. It won't be a 16-stop sensor then :) Since 4-6 stops of highlights (or up to 11 if there's a pre-knee) will all be rendered as white (full). The same goes for shadows. You do need sufficient A/D bit depth to utilize the full range of the photosites. Recording bit depth, on the other hand, has little to do with latitude.
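Rough arithmetic behind why a shallow linear A/D caps the usable range: an ideal n-bit linear converter can only span about n stops between full scale and a one-code floor, regardless of what the photosites deliver. (Real cameras are limited by sensor noise well before this ideal figure, and log/knee encoding before recording changes the picture entirely.)

```python
import math

def linear_adc_stops(bits: int) -> float:
    """Stops between full scale and a 1-LSB floor for an ideal linear ADC."""
    return math.log2(2 ** bits - 1)

print(linear_adc_stops(10))  # ~10 stops max from a 10-bit linear conversion
```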
  9. You can't get away without any color correction - even with impractically tight control of exposure and color temperature on set, you'll see color/contrast variations on uncorrected scans with just a print-emulation LUT applied. At the very least you need an equivalent of optical printer lights - which are exposure offsets in DI - to level them out. And you'll need to adjust contrast for your "print stock" - the output medium - at least on a per-emulsion basis if you're shooting "for print" as in the optical days. Unless you're shooting expired film, you aren't correcting for inaccuracies in the stock's color reproduction (except maybe if you were shooting under weird discharge lamps, etc.) - Vision3 color is neutral, slightly on the warm side, and subjectively very natural under any full-spectrum light. Far more color error comes from the scanner, in particular from how it subtracts the mask.
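A minimal sketch of what "printer lights as exposure offsets in DI" means, assuming linear-light scan values. Real printer lights are per-channel points on a log-exposure scale; here it's simplified to a single offset in stops, which is the same idea:

```python
def apply_offset(rgb, stops):
    """Per-shot exposure balance in linear light, the DI analogue of
    printer lights: +1 stop doubles every channel."""
    gain = 2.0 ** stops
    return [c * gain for c in rgb]

# Two takes of one scene scanned slightly apart get leveled before the print LUT:
print(apply_offset([0.18, 0.18, 0.18], 0.5))
```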
  10. No, it'd only be like this if cameras all recorded linear without any internal processing and their sensors were completely noiseless. In short, DR is mostly a sensor/encoding issue, usually not connected with recording bit depth.

      Less shortly... The analog part's DR is limited not only by full-well capacity but by thermal noise (Ikegami used Peltier elements to cool their CCDs, which made for the lowest-noise camera back in the day) and shot noise (which makes a difference at the tiny currents inside there) as well. Then there's a preamp circuit. In CMOS that means a great many MOSFET amplifiers (or even differential amps), which are impossible to get precisely matched and which of course add their own noise. I don't know of any CMOS camera that used anything nonlinear in the sensor to compress the range before digitization. Kodak designed a sensor with two amps per pixel at different gains - a technology now used in Alexas and Varicams. CCDs have an advantage (not only) here: they need a single preamp, which can be much more elaborate (we're not trading off real estate on silicon for more amp transistors) and can include complicated nonlinear stuff like pre-knee. This means what gets sampled by the A/D isn't necessarily a full-range signal - it can be compressed, it can be two signals, and there's always noise, which makes redundant bit depth, well, redundant.

      Then I guess you mixed up A/D converter bit depth with recording bit depth. I doubt there has ever been a pro camera with 8-bit A/D converters - 25 years ago they were already all 10-bit. The bit depth of A/Ds is generally such that the lowest bit(s) contain nothing usable. On cameras that output encoded video (either 709 or log) there's gamma correction taking place before bit depth gets lowered for recording; and if we're recording uncompressed raw, we're basically going straight from the A/D.
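The two-amps-per-pixel idea mentioned above can be sketched as follows. This is a toy illustration, not any manufacturer's actual pipeline; the gain ratio, clip level and validity threshold are all made-up values:

```python
def merge_dual_gain(low_gain_dn, high_gain_dn, gain_ratio=8.0, clip=4095):
    """Combine two readouts of one pixel: the high-gain path resolves shadows
    above its read noise; where it clips, fall back to the low-gain path.
    Both results are returned on the low-gain scale."""
    if high_gain_dn < clip * 0.9:          # high-gain sample not yet clipped
        return high_gain_dn / gain_ratio   # normalize to low-gain scale
    return float(low_gain_dn)              # highlight: trust the low-gain amp

print(merge_dual_gain(40, 320))     # shadow pixel -> high-gain path
print(merge_dual_gain(3000, 4095))  # highlight pixel -> low-gain path
```

The merged signal spans more stops than either single readout, which is exactly why it decouples sensor DR from any one converter's range.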
  11. How could a tungsten fresnel - and a 300W one at that - be dangerous at 2 meters, from behind? It's not an HMI, where you could get UV exposure if, say, the back lid is missing on an ancient '70s HMI fresnel. At 300 watts it doesn't heat up a small room too fast either, even though something like 270W goes into emitting infrared (compare that to the couple of kilowatts that radiators and fan heaters consume) - some of it focused into the beam, the rest heating the body. It's not that easy to start a fire even with the beam of a 10K fresnel, and a 300W one can hardly melt foam even up close. It's VNSP PARs and other pinpoint sources that burn gels and ignite things all the time. Make sure, though, that there's some space and nothing flammable above the fixture. Remember school physics - convection? All the hot air from the fixture goes up.
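Rough worst-case arithmetic behind "doesn't heat up a small room too fast", assuming an illustrative 30 m³ room, sealed, with zero losses to walls or ventilation (so reality is much milder):

```python
def air_temp_rise_per_hour(heat_w, room_m3, air_density=1.2, c_p=1005.0):
    """Idealized K/h rise of a sealed room's air from a fixture's waste heat.
    No wall losses or air exchange: an upper bound, not a prediction."""
    joules_per_hour = heat_w * 3600.0
    air_mass_kg = room_m3 * air_density       # kg of air in the room
    return joules_per_hour / (air_mass_kg * c_p)

print(round(air_temp_rise_per_hour(270, 30), 1))   # ~270 W fixture
print(round(air_temp_rise_per_hour(2000, 30), 1))  # vs a 2 kW radiator
```

Even in this no-loss bound the 300W fixture is an order of magnitude gentler than the radiator; with real walls and leakage the room equilibrates far below these figures.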
  12. A Czech lab in Zlin, called Bonton, is reputable and not too far away. The nearest one is in Bucharest.
  13. The Ikegami HL59 had a better camera head than the 970, with very solid color science - surpassed only by D-cinema cameras like the F35 years later - but no progressive scan, of course.