Everything posted by Ben Syverson

  1. You can do it easily from QuickTime Player 7 in 10.6, or the Pro version of QuickTime Player in 10.5 and earlier: Export > QuickTime Movie to Picture.
  2. Paul, there is a waveform visualizer in After Effects which might be really useful in this context. The key will be applying the appropriate audio filtration -- didn't I read that optical sound has some kind of EQ or nonlinear response curve in order to make it physically larger on film? Real waveforms for dialog tend to be very small relative to the full height.
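
If After Effects ends up being awkward for this, here's a rough sketch of drawing a variable-area style strip directly from the audio. The dimensions, the crude resampling and the compression exponent are all placeholder assumptions, not anything from a real optical-sound spec:

```python
# Rough sketch: render mono audio as a symmetric "variable-area" style strip.
# Dimensions, resampling and the compression exponent are placeholder assumptions.
import numpy as np
from PIL import Image

def render_variable_area(samples, width_px=378, height_px=2400, gamma=0.5):
    """samples: mono float array in [-1, 1]; one output row per resampled value."""
    idx = np.linspace(0, len(samples) - 1, height_px).astype(int)  # crude nearest-neighbor resample
    env = np.abs(samples[idx]) ** gamma       # nonlinear boost so quiet dialog isn't a hairline
    half = (env * (width_px // 2 - 1)).astype(int)

    strip = np.zeros((height_px, width_px), dtype=np.uint8)
    center = width_px // 2
    for row, h in enumerate(half):
        strip[row, center - h:center + h + 1] = 255   # clear (white) = modulated area
    return Image.fromarray(strip)

# Example: a 1 kHz test tone, one second at 48 kHz
t = np.linspace(0, 1, 48000, endpoint=False)
render_variable_area(np.sin(2 * np.pi * 1000 * t)).save("track_strip.png")
```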
  3. Ha, no compensation needed. :) I've gone over the thread and think I mistook the DG5 for a film recorder, when it's actually a monitor! The resolution of the DG5 is 3840 x 2400, right? So the full (silent) aperture area would be 3200 x 2400. If the sound area is 2.94mm wide out of 24.89mm, that translates to 378 horizontal samples for audio. I'm assuming we want stereo, so that's 189 samples per channel. That's about 7.56 bits, though it's a bit misleading because the values aren't linear -- so the result will already sound different from a "normal" digital audio file at 7 or 8 bits. Based on the numbers I scribbled in my last post, it looks like the monitor will NOT be the limiting factor in the system, so that's good news. It's more likely to be the film's grain or MTF that limits the effective bitdepth. Another way to say it is, you'll probably never see pixelation in the final film print of your optical track, even with a microscope.
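
For anyone double-checking the scribbles, the arithmetic is just:

```python
# Back-of-envelope numbers from the post above.
import math

full_aperture_px = 3200          # horizontal pixels covering the 24.89 mm silent aperture
aperture_mm = 24.89
sound_area_mm = 2.94

sound_px = full_aperture_px * sound_area_mm / aperture_mm   # ~378 samples across the track
per_channel = sound_px / 2                                  # stereo -> ~189 per channel
effective_bits = math.log2(per_channel)                     # ~7.56 "bits" of amplitude resolution

print(round(sound_px), round(per_channel), round(effective_bits, 2))  # 378 189 7.56
```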
  4. This is a great idea... It's too bad all of the engineering talent is going into digital these days. You would need a modified film transport, but you'd be able to use the full 35mm width for picture. The registration mark could be printed along with metadata such as timecode, between the frames. Kodak would of course have to start manufacturing perfless 35. Because the transport would be independent of any perforations, you could have it switch aspect ratios on the fly (just adjust the speed and mask). 35 x 14.9mm for 2.35:1 would be twice the film area of Super-35 (521.5 sq mm vs 263.6). 1.85 would have 2.5X the film area (662 sq mm vs 260.4)! You would wind up with far greater detail, tonality and grain structure, without any increase in film/development costs.
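
The area math, as a quick sketch:

```python
# Film-area comparison for the hypothetical perfless-35 transport.
FILM_WIDTH_MM = 35.0

def perfless_frame_area(aspect_ratio):
    """Full-width frame: the height follows directly from the aspect ratio."""
    return FILM_WIDTH_MM * (FILM_WIDTH_MM / aspect_ratio)

print(round(perfless_frame_area(2.35), 1))  # ~521.3 sq mm (521.5 above rounds the height to 14.9mm first)
print(round(perfless_frame_area(1.85), 1))  # ~662.2 sq mm
```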
  6. Optical printing... Strike a black and white print of a color negative, then optically fade it with the color original onto color stock. You think that's amazing? Look up how they did bluescreen photochemically!
  7. 1) Start fight scene. 2) Motion blur! 3) Someone has won, apparently? Call me an old fart at 29, but I like to understand what's happening on-screen. Spielberg always has a great style for action. The camerawork and editing are fast but perfectly comprehensible, which actually increases the tension in my book. The best action scenes in The Dark Knight were like that; clear but dramatic. Maybe all action directors should have to shoot with 15 perf cameras.
  8. Yeah, the keycode stuff would be experimental... I'm thinking I'll play around with it post-scan, but not worry too much about it as a control mechanism. I'm getting excited to get this thing running! I'll upload some test scans soon...
  9. Paul, thanks for your notes! That does make sense. I'm looking at the keycode/edge numbers as a bonus -- the main thing I was interested in capturing was the perfs, so I could use them for stabilization. But now that you have me thinking about them more, they could potentially be pretty useful... The edge numbers could serve as a "sanity check" during the stabilization process, to make sure I'm not skipping frames. If it seems like I'm skipping a lot of frames, I might have to read the keycode in realtime (not a problem since my system will be so slow) and have the computer control the forward/advance mechanism. (Right now it's a "dumb" intervalometer.) In terms of workflow, it may make sense to use the HD video mode and the "fast" advance on the projector (2-3 fps) to read a roll quickly into a compact format. That can be used for the edit, and then the edge information used to create an "EDL" for the final scan. Storage-wise, I'm sort of trying to plug my ears and say "lalala." :) All of my early tests will be 100' rolls, which works out to ~40 GB. I'll be getting a cheapo TB drive or two for those tests, and after that who knows... I'm not even 100% sure this will be a workable system. I know that right now the filmstrip projector is a little too rough (okay, way too rough) with the film. I'm seeing scratches, mostly in the perf area, and need to eliminate those before I run any negatives of consequence through.
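
The storage "lalala" math, roughly -- the per-frame RAW size is just a guess for my camera:

```python
# Rough storage estimate for scanning rolls one DSLR RAW frame at a time.
FRAMES_PER_FOOT = 16        # 35mm, 4-perf
RAW_MB_PER_FRAME = 25       # assumption: ballpark RAW size for my camera

def roll_gigabytes(feet):
    return feet * FRAMES_PER_FOOT * RAW_MB_PER_FRAME / 1024

print(round(roll_gigabytes(100), 1))   # ~39.1 GB -- the "~40 GB" above
print(round(roll_gigabytes(400), 1))   # a 400' roll would be ~156 GB
```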
  10. So I got the projector... Focus variability won't be a problem, as the film is held sandwiched between two plates of ANR glass. Registration doesn't seem to be a problem either -- the film advance mechanism is simple, well-built and accurate. The biggest problem will be getting the light source placed correctly... It may involve some Dremeling. The ANR glass means the light has to be 100% even and soft, or you'd see the texture. Shooting with a 1:1 macro lens on a full frame DSLR, I'm able to image 100% of the film, including the perfs and keycode, which should help with registration. The actual image area winds up being about 3.5K, but so far I'm happy with the detail I'm resolving. I'll post some examples soon...
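
For reference, the resolution math at 1:1 works out like this. The sensor width and pixel count below are placeholders for illustration, not my actual camera:

```python
# How much of a 1:1 capture lands on the picture area vs. perfs/keycode.
# Sensor width and pixel count are placeholder assumptions for illustration.
SENSOR_WIDTH_MM = 36.0
SENSOR_WIDTH_PX = 5000
IMAGE_AREA_MM = 24.89       # full-aperture picture width
FILM_WIDTH_MM = 35.0        # total film width, perfs included

image_px = SENSOR_WIDTH_PX * IMAGE_AREA_MM / SENSOR_WIDTH_MM        # at 1:1 the sensor sees 36mm of film
perf_px = SENSOR_WIDTH_PX * (FILM_WIDTH_MM - IMAGE_AREA_MM) / SENSOR_WIDTH_MM

print(round(image_px))   # ~3457 px of actual picture -- roughly the "3.5K" range
print(round(perf_px))    # ~1404 px spent on perfs and keycode, which is what I want for registration
```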
  11. Paul, negative film is already "HDR" compared to digital -- I say just shoot with the unmodified Mitchell and do a nice scan on your rig! :)
  12. Check out this crazy person making his own Kodachrome in his garage: http://www.flickr.com/photos/dark_orange/s...57603226919391/
  13. I've thought about it, but the startup costs on the manufacturing side would be pretty steep. Someone will do it, but the software will probably suck. What you really want is a full suite of scopes (RGB parade, vector, etc), configurable framelines, focus "peaking" and a 200% zoom for checking pixel-level stuff. Instead, we'll probably wind up with a big red record button and a live histogram if you're lucky. :)
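
For what it's worth, something like focus "peaking" isn't even hard. A crude sketch, with an arbitrary threshold that a real tool would expose as a control:

```python
# Crude focus-peaking sketch: paint high-frequency detail red.
# The threshold is arbitrary; a real tool would expose it as a control.
import numpy as np

def focus_peak(rgb, threshold=0.1):
    """rgb: float array (H, W, 3) in [0, 1]; returns a copy with the peaking overlay."""
    luma = rgb @ np.array([0.2126, 0.7152, 0.0722])
    gy, gx = np.gradient(luma)                 # gradient magnitude as a cheap high-pass stand-in
    edges = np.hypot(gx, gy) > threshold
    out = rgb.copy()
    out[edges] = [1.0, 0.0, 0.0]
    return out

# Exercise it on noise standing in for a live frame
peaked = focus_peak(np.random.rand(1080, 1920, 3))
```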
  14. There's a big difference between latitude and dynamic range (aka signal-to-noise ratio), but the distinction sometimes gets lost... Latitude refers to how many "stops" of light a system can represent, and dynamic range / SNR refers to the amount of "information" in the record, versus noise. They are certainly interrelated; you may have X stops of latitude, but some of them may be so noisy that they're unusable. The higher the SNR, the more latitude you're likely to be able to capitalize on. To give a concrete example, think of a hypothetical 14 bit DSLR sensor. Because image sensors are highly linear, that would suggest 14 stops of latitude. However, noise keeps that from being a reality. Say we have an average variation (noise) of 10 values out of the sensor's 16384 possible values. That makes the SNR of the sensor 64.28 dB, and the effective bitdepth of the sensor 10.67, or 10.67 stops of latitude. We can expect an extra 3.3 stops in the shadows, but they will be more noise than signal. Film is a little different. With negative film, shadows can quickly drop to nothing (0 density), while highlights may burn in far beyond the normal exposure range. In that way, it's the opposite of digital. We can expect extra stops in the highlights, but they will be increasingly noisy. Film is also very nonlinear -- the response curve is shaped by the engineers to "look right" when printed or inverted (whereas linear digital sensors need a strong gamma curve applied to look right). The thing is, because film has extended range in the highlights instead of the shadows, it will "feel" like it has much more latitude than digital, because we rarely need to see detail in the highlights, but we often want to see into the shadows. All of that is the very long LONG way to say that you're safe without resorting to HDRI. HDR/multipass would reduce the level of noise from your sensor, but that noise level is probably already extremely good. You will be capturing the entire response curve of the film because a strip of negative film is so low contrast. Film's full latitude is "compressed" into a range of something like 3-4 stops of linear light transmission, and your DSLR has more than enough dynamic range to capture that faithfully. Way more. The key is really to get the base/shadow of the film up into the highlights on the sensor, which will place those valuable few stops of film exposure data in the best part of the sensor's response.
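
Here's that 14-bit example spelled out:

```python
# The 14-bit sensor example, spelled out.
import math

full_scale = 2 ** 14      # 16384 code values
noise = 10                # the assumed average noise, in code values

snr_db = 20 * math.log10(full_scale / noise)     # ~64.3 dB
effective_bits = math.log2(full_scale / noise)   # ~10.7 usable bits, i.e. ~10.7 clean stops

print(round(snr_db, 2), round(effective_bits, 2))
```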
  15. The point is obviously to do this on the iPad, not the phone...
  16. True, true... I would be curious to see results from scanning Vision3 in particular, as they make a big deal about diversity in grain size as a way to increase latitude...
  17. Ah, so with such a high magnification (over 1:1), you may be slightly oversampling the grain (well, the dye clumps anyway). I'd be curious to see a more recent sample shot -- my guess is that the grain is well resolved and larger than a single pixel.
  18. The short answer is "yes," this is possible. You would need to manufacture a custom cable to go from the dock connector to mini USB, and you would have to reverse engineer Canon / Nikon's remote shooting protocol to build the software. No one has done it yet, though it seems like it's just a matter of time.
  19. From the compositor's perspective... If you're going to put anything on that screen, it should be four small white "+" signs, one in each corner, against a black background. Any reflection or light hitting the screen will key perfectly if it's against black, and the crosses will let you motion track anything onto the screen. If you have talent passing in front of the monitor, you'll have to rotoscope, but that's preferable to getting footage with a blown out green screen and having to fake the reflection. If you really have a lot of foreground subject passing in front of the monitor, especially with fine detail, you can fill the screen with a dark green, but don't do a 100% green. A dark green will still let you pull the reflections.
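
The nuts and bolts of why black works: in linear light, the screen's reflections simply add on top of whatever graphic you put there. A bare-bones sketch, assuming the insert has already been corner-pinned with the tracked crosses and both images are linear float RGB:

```python
# Why a black screen keys "for free": reflections on black are purely additive light,
# so in linear color you can sum the plate over the replacement graphic.
# Assumes the insert has already been corner-pinned using the tracked "+" marks and
# is zero outside the screen area; both images are linear float RGB in [0, 1].
import numpy as np

def comp_black_screen(plate_linear, insert_pinned_linear):
    comp = plate_linear + insert_pinned_linear     # additive ("plus") over the black monitor
    return np.clip(comp, 0.0, 1.0)

# Dummy data standing in for real footage
plate = np.zeros((1080, 1920, 3), dtype=np.float32)
insert = np.zeros_like(plate)
result = comp_black_screen(plate, insert)
```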
  20. Sorry, I misunderstood; you're using a camera without an OLPF! Paul, are you filling the DCS's frame horizontally with the full width of the Techniscope frame? Or are you framing such that you can photograph two frames / four perfs at once? In other words, is your magnification 1:1, or is it even higher? With Techniscope I would be tempted to turn the camera sideways, shoot 1:1, and capture 4 frames at a time!
  21. Paul, I think you're correct that grain will break up aliasing of image detail. The only thing I would be even theoretically concerned about is "grain aliasing," which is what happens when your pixels are about the same size as your grain. It happens a lot on 4000 DPI Nikon film scanners, and has the effect of exaggerating grain. However, I think the OLPF and Bayer interpolation will keep grain aliasing from being a problem. I wouldn't sweat these details unless you start to see artifacts.
  22. Paul, keep on trucking... Your rig is an inspiration! My feeling is that multiple passes and HDR are probably overkill. If you're able to get a nice bright exposure (expose to the right) such that the base of the negative is photographing as very close to pure white (R1.0, G1.0, B1.0), you should be able to capture the entire exposure range of the negative in one pass. In other words, I can't imagine a situation where the base of the film is being captured at 1.0 and the densest highlight is clipping at 0.0. Luckily for us, the distribution of "values" in a negative is extremely nonlinear, so you should be able to pull 100% of the negative's latitude out of a single RAW file. The key is really to get the DSLR's exposure as far to the right as possible. I wouldn't even worry too much about highlight clipping on the base/shadow portion of the image, as that area actually still has information (mostly in the green channel -- it can be recovered with the "recovery" slider). That will minimize noise and maximize signal, due to the highly linear response of the sensor (50% of the data is devoted to the brightest highlight stop, leaving only 50% for the rest of the image). That tiny amount of digital noise will disappear into the grain, and along with the grain will prevent banding throughout the post process. As for RED vs Techniscope... Everything is a trade-off. With RED you trade a large upfront expense for low operating costs and an easy digital workflow. With film, you trade a relatively large operating cost and cumbersome workflow for image quality advantages and extreme flexibility. When Kodak introduces a new film, you're getting a sensor upgrade for free...
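
To make the "50% of the data in the top stop" point concrete:

```python
# How a linear 14-bit RAW spreads its code values across stops.
total = 2 ** 14

remaining = total
for stop in range(1, 7):
    print(f"stop {stop} below clipping: ~{int(remaining / 2)} code values")
    remaining /= 2
# stop 1: 8192 (half of everything), stop 2: 4096, ... stop 6: 256
```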
  23. I understand wanting to sell these "as-is," but these quotes from your auctions are downright misleading: ...and... Oh really? I mean, best of luck with the sale, but from this buyer's perspective, that is a major RED FLAG. And a personal pet peeve.
  24. If you can seat it so that half the screen is providing illumination and the other half is sticking out, you can still control it... But getting the device in there might still be a challenge. Might be a job for a well-placed mirror? The light the iPhone (or iPod Touch) emits is white LED light filtered through a color LCD/TFT array and polarizer. I don't have a spectrometer, so I can't measure the spikes ultimately emitted... But I have shot a few tests using the device as a backlight for negative reproduction, and find that I get great results. I tend to be pragmatic -- if I was really interested in 100% perfect colorimetry, I would certainly not be shooting negative film, with its extremely bizarre response curve. ;) I figure I'm going to make creative adjustments to the color and tonal curve anyway, so as long as it "looks good" in the end (which is why one would shoot film), I'm happy.
  25. Some notes on this old thread... The reason you're getting a lot of noise in the blue channel is not grain, but the orange mask... Are you using a ~3200°K hot light? If so that will exacerbate things. The DSLR sensor has a native white balance, so by forcing the WB on the camera to 2500°K or relying on Photoshop to remove the mask, you're basically taking a very weak blue channel and pushing it to oblivion. Ideally you want to be shooting close to the native WB of the camera (daylight, more or less), using a calibrated blue backlight to neutralize the orange mask. That means your light should be substantially bluer than daylight. When you open the uncorrected camera files in Photoshop, you want to see three similar-looking channels that mainly need to be inverted. The less image processing you have to do, either in-camera or in PS, the less noise. If you're looking for a good programmable RGB light source, you may only need to look as far as your cell phone. LED backlights can be incredibly even and consistent. Not to plug, but a while ago I developed an iPhone app named "Catchlight" that provides a fast interface for adjusting the color temperature of light emitted from the screen. Recently I opened it up and threw a 35mm neg on the screen. After setting the camera to 5600°K, I adjusted the light color until the orange mask was photographing as white. Once I imported that file, all I needed to do was invert and apply a minor Levels adjustment to introduce some contrast. I wound up with less noise than I get from my dedicated Nikon film scanner. I'm going to try to get a low-volume 35mm scanning workflow set up at home, and I really think getting the right light color is key...
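
If anyone wants to script the invert-and-levels step, the core of it is something like this. The base-patch location and level points are placeholders; the real values come from sampling an unexposed edge of your own scan:

```python
# Minimal sketch of neutralizing the orange mask and inverting a photographed negative.
# The base-patch location and level points are placeholders; real values come from
# sampling an unexposed edge of your own scan. Expects linear float RGB in [0, 1].
import numpy as np

def invert_negative(scan, base_patch=(slice(0, 50), slice(0, 50)),
                    black_point=0.02, white_point=0.85):
    # 1. Sample the clear film base and use it as the per-channel white reference.
    base_rgb = scan[base_patch].reshape(-1, 3).mean(axis=0)
    neutralized = np.clip(scan / base_rgb, 0.0, 1.0)
    # 2. Invert: dense negative highlights become bright positive highlights.
    positive = 1.0 - neutralized
    # 3. Simple "Levels": stretch between chosen black/white points for contrast.
    positive = (positive - black_point) / (white_point - black_point)
    return np.clip(positive, 0.0, 1.0)

# Dummy data standing in for a RAW-converted scan
print(invert_negative(np.random.rand(1000, 1500, 3).astype(np.float32)).shape)
```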