Michael Rodin

Basic Member
Michael Rodin last won the day on July 11 2018

My Gear: Cinealtas & Arri SR line
  1. Properly stored F64D is fine and holds up well against newer stocks. It has noticeably less latitude than Kodak '03, as it's slightly more contrasty and doesn't go up to very high densities (say, 1.5-2-2.4 RGB, which is hard for telecine and non-HDR scans anyway) like '03 does. It's grainier too. But one can argue the warm hues are more lifelike compared to Kodak's ochre/carmine/cadmium palette. If you come across F64D, do a sensitometry test. If it's still got some 20...30 ASA and tolerable fog, color should be OK.
  2. LAD is basically a quick one-measurement check for under-/overdevelopment, originally intended for color negative. As far as I remember, Kodak recommended slightly different gray patch densities for various stocks, all of them around 0.8-1.2-1.6 RGB Status M. For B&W, 0.7 over base-plus-fog sounds like a safe gray density. Optimum exposure depends on the post process (and, to some extent, the stock), though. In the photochemical world, a "correctly" exposed negative would print at midrange printer lights. DI is much more complicated, and you have to test to figure out the densities that yield the best scan.
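The LAD idea above can be sketched as a tiny density check. This is a minimal illustration, not Kodak's procedure: the 0.8/1.2/1.6 RGB aims come from the post, while the ±0.05 tolerance and the function name are my own assumptions.

```python
# Illustrative LAD-style check: compare measured Status M densities of the
# LAD gray patch against aim values. Aims are the ballpark figures from the
# post; the tolerance is an assumed, not official, number.
LAD_AIM = {"R": 0.80, "G": 1.20, "B": 1.60}
TOLERANCE = 0.05  # assumed acceptable deviation, in density units

def check_lad(measured: dict) -> dict:
    """Per-channel deviation from aim; positive = too dense (overdeveloped)."""
    return {ch: round(measured[ch] - aim, 2) for ch, aim in LAD_AIM.items()}

deviations = check_lad({"R": 0.86, "G": 1.27, "B": 1.66})
# All three channels ~0.06-0.07 dense: consistent with slight overdevelopment.
```

A lab would read the patch on a densitometer and trim developer time or temperature until the deviations sit inside tolerance.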
  3. Are we talking about developing Kodak Vision film cut into 36-exposure rolls for shooting stills, or processing film footage at home? The latter just isn't possible without a machine.
  4. I had a Blackmagic CD with older (around 2005, I suppose) drivers and codecs. I recall they worked in Adobe software and VLC. I could upload it here in a couple of weeks.
  5. How many stops there are between 0 and 100% is limited by your camera's signal-to-noise ratio. Say, an F900 had, as far as I remember, 54 dB, which means you won't ever distinguish more than 9 stops in the 0 to 95 (or wherever the knee circuit kicks in) IRE range; 6 dB equals one stop. Then you add 3 stops (= 800%) of range crammed nonlinearly into the highlight (say, 95...105) region, either using knee or with some gamma that has a shoulder (like a film H&D curve). In the latter case it's a bit more complicated, since the whole "percents" terminology comes from the world of broadcast-legal video that doesn't necessarily get color graded.
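The dB-to-stops arithmetic above is just a logarithm change of base: one stop is a doubling of signal, i.e. 20·log10(2) ≈ 6.02 dB. A quick sketch (the 54 dB F900 figure is the poster's recollection, used here only as an example input):

```python
import math

# One stop = doubling of signal voltage = 20*log10(2) ~ 6.02 dB.
def snr_db_to_stops(snr_db: float) -> float:
    """Upper bound on distinguishable stops given an SNR in dB."""
    return snr_db / (20 * math.log10(2))

print(round(snr_db_to_stops(54)))  # ~9 stops, matching the estimate above
```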
  6. It's a static detail shot, isn't it? Does anything moving obscure the image in the viewfinder? You could shoot the image in the telescope separately, with the exact focal length needed to get the right image scale far enough from the eyepiece, and composite it into the shot. As to focusing, the image of the moon will come out of the telescope in practically parallel rays, which means focusing the taking lens around infinity. And the focusing distance for a (split) diopter will be 1/D meters (for a diopter of power D) if we need parallel rays after it. A short-focus single-element diopter might unacceptably degrade edge definition and introduce chromatic aberration, though.
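The 1/D relation above is the standard thin-lens fact that a subject at a lens's focal distance produces parallel (collimated) rays. A one-liner to make the units explicit (function name is mine, not from the post):

```python
# Subject distance that yields parallel rays behind a close-up diopter:
# the diopter's focal length, which is 1/D meters for a power of D diopters.
def diopter_subject_distance_m(power_diopters: float) -> float:
    return 1.0 / power_diopters

# e.g. a +2 diopter wants the subject (here, the eyepiece image) at 0.5 m
```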
  7. Interesting... Because, if I remember correctly, the parallel port at the back of the non-R F900 output a 10-bit signal straight from the DSP, just upstream of the downsampling/3:1:1 circuitry. I had the service manual vol. 2 somewhere, though; I should look it up.
  8. Chromatic aberration has nothing to do with the lens being "HD" or not ("HD" is a largely meaningless marketing term; it isn't as if only "HD lenses" resolve 110 lp/mm or, say, 40 line pairs at decent contrast). High-end EFP and box lenses, and especially D-cinema lenses like the Fujinon E series or Canon FJ, are better corrected for CA at wide apertures.
  9. Pushing doesn't increase the sensitivity (or granularity, for that matter) much: the film's sensitivity (H&D) curve is only slightly shifted to the left (towards lower exposure, by 1/3 stop or so) and quite noticeably upwards, which means denser fog and generally denser shadows. So yes, a one-stop push pulls out more shadow detail, but it doesn't precisely compensate for one stop of underexposure. What it does, first of all, is increase midtone contrast and let you see color there, while with normal processing strongly underexposed midtones are murky and desaturated. I would worry less about overprocessing the rest, since it was an overcast day and there were hardly any important details exposed more than 3 stops over key - unless you were shooting in the shadow and shifting your exposure accordingly. A two-stop push might look too contrasty to you on normally exposed shots, but it won't be much grainier than a 1.5-stop push. The sky will get lighter and lose detail, clouds will stand out less, and it can look almost uniformly white. Highlight detail will largely still be there, just a little too dense for the telecine/scanner.
  10. The way sunlight looks has nothing to do with intensity; what's important is falloff (or rather the lack of noticeable falloff) and quality - it's a practically perfect point source when it's not overcast. You can try to simulate it at any light level. The last thing you want to do, though, is set a light half a meter from the face :) The farther away, the more it looks like the sun. Mirrors are useful for this.
  11. It won't be a 16-stop sensor then :) Since 4-6 (or up to 11 if there's pre-knee) stops of highlights will all be rendered as white (full). The same goes for shadows. You do need sufficient A/D bit depth to utilize the full range of the photosites. Recording bit depth, on the other hand, has little to do with latitude.
  12. You can't get away without any color correction - even with impractically tight control of exposure and color temperature on set, you'll see color/contrast variations on uncorrected scans with just a print-emulation LUT applied. At the least, you need an equivalent of optical printer lights - which are exposure offsets in DI - to level them out. And you'll need to adjust contrast for your "print stock" - the output medium - at least on a per-emulsion basis if you're shooting "for print" like in the optical days. Unless you're shooting expired film, you aren't correcting for inaccuracies in the stock's color reproduction (except maybe if you were shooting under weird discharge lamps etc.) - Vision3 color is neutral, slightly on the warm side, and very subjectively natural under any full-spectrum light. Far more color error comes from the scanner, from how it subtracts the mask in particular.
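The "printer lights as offsets in DI" point above can be sketched in a few lines: in a log-encoded scan, an exposure/color trim is just a per-channel additive offset applied before any print LUT. The 10-bit 0-1023 range and all the numbers here are illustrative assumptions, not values from any real grading system.

```python
# Minimal sketch of per-shot "printer light" trims on log-encoded scans:
# additive per-channel offsets, clamped to an assumed 10-bit code range.
def apply_offsets(code_values, offsets):
    """code_values: (r, g, b) 10-bit log codes; offsets: per-channel trims."""
    return tuple(max(0, min(1023, cv + off)) for cv, off in zip(code_values, offsets))

# Level a slightly cool, underexposed shot against the rest of the scene:
balanced = apply_offsets((430, 445, 460), (25, 15, 5))
```

In a real pipeline the same idea appears as offset/exposure controls in the grade, applied shot by shot to even out on-set variation before the print emulation.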
  13. No, it'd be like this if cameras all recorded linear without any internal processing and their sensors were completely noiseless. Shortly: DR is mostly a sensor/encoding issue, usually not connected with recording bit depth. Less shortly... The analog part's DR is limited not only by full-well capacity but by thermal noise (Ikegami used Peltier elements to cool their CCDs, which made for the lowest-noise camera back in the day) and shot noise (which makes a difference at the tiny currents inside there) as well. Then there's a preamp circuit. In CMOS, that's a huge number of MOSFET transistors (or even differential amps), which are impossible to get precisely matched and of course add their own noise. I don't know of any CMOS camera that used anything nonlinear in the sensor to compress the range before digitization. Kodak designed a sensor with two amps per pixel with different gain, a technology used in Alexas and Varicams now. CCDs have an advantage (not only) here: they need a single preamp, which can be much more elaborate (we're not trading off real estate on silicon for more amp transistors) and can include complicated nonlinear stuff like pre-knee. This means what gets sampled by the A/D isn't necessarily a full-range signal - it can be compressed, it can be two signals, and there's always noise, which makes redundant bit depth, well, redundant. Then I guess you mixed up A/D converter bit depth with recording bit depth. I doubt there has ever been a pro camera with 8-bit A/D converters - 25 years ago they were already all 10-bit. The bit depth of A/Ds is generally such that the lowest bit(s) contain nothing usable. On cameras that output encoded video (either 709 or log) there's gamma correction taking place before bit depth gets lowered for recording. And if we're recording uncompressed raw, we're basically going straight from the A/D.
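The two-amps-per-pixel idea mentioned above can be sketched as a merge of two readout paths: a high-gain path that resolves shadows cleanly and a low-gain path that holds highlights. This is only a conceptual sketch; the gain values, clip level, and the hard switchover are assumptions of mine, not how any real Alexa/Varicam pipeline combines the paths.

```python
# Conceptual dual-gain readout merge (assumed parameters, not real hardware values).
LOW_GAIN = 1.0
HIGH_GAIN = 16.0
CLIP = 1.0  # full-scale of each readout path, normalized

def merge_dual_gain(signal: float) -> float:
    """Estimate the linear photosite signal from two gain paths."""
    high = min(signal * HIGH_GAIN, CLIP)
    low = min(signal * LOW_GAIN, CLIP)
    # Trust the cleaner high-gain path until it clips, then fall back to low gain.
    return high / HIGH_GAIN if high < CLIP else low / LOW_GAIN

# Deep shadows read through the high-gain amp; highlights survive via low gain,
# extending usable DR beyond what one path's noise floor and clip would allow.
```

Real implementations blend the two signals smoothly around the crossover rather than switching hard, precisely to avoid a visible seam in the transfer curve.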