First scans


Paul Bruening

I'm not sure how it can sit behind my film area and still be accessible for control.

If you can seat it so that half the screen is providing illumination and the other half is sticking out, you can still control it... But getting the device in there might still be a challenge. Might be a job for a well-placed mirror?

 

The light the iPhone (or iPod Touch) emits is white LED light filtered through a color LCD/TFT array and polarizer. I don't have a spectrometer, so I can't measure the spikes ultimately emitted... But I have shot a few tests using the device as a backlight for negative reproduction, and find that I get great results. I tend to be pragmatic -- if I was really interested in 100% perfect colorimetry, I would certainly not be shooting negative film, with its extremely bizarre response curve. ;) I figure I'm going to make creative adjustments to the color and tonal curve anyway, so as long as it "looks good" in the end (which is why one would shoot film), I'm happy.

Paul, what do you do about dust? And how many images/second (or seconds/image!) do you capture?

 

Any chance you can post a pic? I'm intrigued how you've arranged all this mechanically.

 

I'm putting an air scrubber in the film path. It's an old trick from the optical days of printing. It should cut way down on the digital dust busting. The Kodak doesn't have any dust cleaning mechanism in it. I'll just have to knock dust off it with air. I don't have to change lenses on it so the chances of gathering dust are low. If I can go with something like a Canon T2i, it has its own methods of cleaning its sensor as well as putting me up to 5K Bayer. But, that's all part of my very uncertain future.

 

The Kodak is notoriously slow to save at its maximum image and file size. It cycles at just under 8 seconds per frame. The new Canons are WAY faster.

 

I'm fine with the results I'm getting with gelled halogens. I still prefer the broad spectrum light they deliver as opposed to the spiked light of LEDs. It's still the best results for cheap that I can easily arrive at in my sticks and string level of engineering.

 

I'll do a pic of it when it's finally running dependably and no longer an ongoing experiment. I would like to take this opportunity to once again praise Bruce McNaughton for all his genius in this project.


I love it. I'm rooting for ya. I hear you on the tungsten source, but have you thought of using a gel pack? You probably can't hit the nail on the head with it, but if you get your light source closer to the inverse of the orange mask, you'll be doing yourself a favor, I would imagine. The fine tuning of the invert would then be in white balance, but you'd be starting closer to the mark and your light would still be continuous spectrum. I assume some level of CTB and a touch of plus green would get you close. The only give/take in that route is how much you'd have to boost the light's wattage or lengthen the exposure time, which would affect heat management or scan times, respectively.

 

Hit me up with a bit of scratch (read: not much; I have medical bills too, I can sympathize) and I can build you a custom footage:frame LCD counter to slap on the front of it. I have all the routines in a library for my Archimedes; it would probably only take me 2 or 3 hours. Actually, it would probably just require that I dig up the old working file for that routine. Don't know if it helps or not, but I'd like to help if I can.


I don't know the camera personally, but a reasonably high-end 13.5MP DSLR ought to perform well (although the DCS series famously had no OLPF at all).

 

I think you'll have more problems getting the optics right. Most true RAW DSLRs output completely uncompressed data, or data compressed using only mathematically lossless techniques; you should be able to tell by examining the size of the RAW files. Googling, I find that the DCS Pro/n (which I understand to be identical other than the Nikon lens mount) will shoot up to 18 raw frames into its 512MB buffer, implying a frame size of about 28MB; a 14-bit 13.5MP raw image should be around 24MB uncompressed, so this passes a basic sanity check.
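That buffer arithmetic is easy to check. A quick sketch (the 512MB buffer and 18-frame figures are from the specs quoted above; the packed-14-bit vs. padded-16-bit storage layouts are my assumptions):

```python
# Sanity-check the implied DCS raw frame size quoted above.
buffer_mb = 512                      # camera buffer, MB
frames_in_buffer = 18                # raw frames the buffer holds
implied_mb = buffer_mb / frames_in_buffer

pixels = 13.5e6                      # 13.5 MP sensor
packed_mb = pixels * 14 / 8 / 1e6    # 14 bits/pixel, tightly packed
padded_mb = pixels * 16 / 8 / 1e6    # 14 bits stored in 16-bit words

print(f"implied: {implied_mb:.1f} MB/frame")   # ~28.4
print(f"packed:  {packed_mb:.1f} MB")          # ~23.6
print(f"padded:  {padded_mb:.1f} MB")          # ~27.0
```

Either storage layout lands close to the implied 28MB per frame, which is consistent with little or no compression.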

 

There will still be noise in the image; all image sensors have noise. Shoot HDR doubles or triples and you can minimize that. At that point you should be able to come up with fairly convincing 2-3K-ish 16-bit TIFFs with all 16 bits stuffed with useful information; I suspect the sane solution from there on in would be to batch process them through something like ImageMagick with a big S curve to make them viewable. This would probably go for any reasonably modern DSLR.
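For the S-curve step, a minimal sketch of the idea in pure Python. The smoothstep curve here is just a stand-in for whatever viewing curve you'd actually build; ImageMagick's own curve operators would do the same job on whole TIFFs:

```python
def s_curve(x):
    """Smoothstep S-curve on a 0-1 value: rolls off shadows and
    highlights and steepens the midtones, making linear scan
    data viewable on a display."""
    return x * x * (3.0 - 2.0 * x)

def apply_curve_16bit(sample):
    """Push one linear 16-bit sample through the curve."""
    x = sample / 65535.0
    return round(s_curve(x) * 65535.0)

# Endpoints are pinned; a quarter-tone value gets pushed down,
# increasing apparent contrast.
print(apply_curve_16bit(0), apply_curve_16bit(65535))  # 0 65535
print(apply_curve_16bit(16384) < 16384)                # True
```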

 

What's been said about having lots of blue in the backlight is very true; the light source on something like a Northlight scanner looks practically cyan.


Hello Michael,

 

Thank you for your supportive words. I am using full CTO on the light. I'll try a little green per your recommendation. I did my first tests (so long ago) with unaltered halogen. The amount that I had to push the color correction was junking up the data, no doubt. The light is bounced into the head through mirrors, so heat's no big problem. The long light path and mirror angles solved my uneven light distribution on the back board. I covered the back board with a rectangle of bright white paper topped by a rectangle of full frost plastic. That eliminated the bits of backboard that were coming through the film and appearing as little, faint blotches in my images; low f-stops didn't knock those out of focus enough on their own. Even with all this light travel, diffusion and gelling, there's light to spare for f/5.6 at 1/30th.

 

Your offer of constructing a counter is kind. Bruce's (Paul Moorecroft's) software manages all that for me, already.

 

Phil's answer was his usual generous helpfulness. I'd expressed concern to him in a PM about the validity of my rig. I'd begun to question it after XiaoSu's recommendation to get a RED, apparently as an alternative to my heavy commitment to Techniscope and my telecine/scan system. Given that my sensing technology is in the same ballpark as a RED, his comment caused me to reassess.

 

I think that the low or no compression of my RAW data stacks up quite well against RED's 9:1 and 12:1 compression. I can't say whether the lack of an OLPF on the Kodak is significant to my final output; Adobe crunches OLPF in software, so I feel okay about that. Tests will reveal if it's a usable approach. Being able to multi-scan HDRI means that I can pull all of the latitude from the negative. Photoshop CS2 and later has a well-respected HDRI thingie already in its software. I also get all that color and contrast enhancement that comes with HDRI. So, I feel that I can knock the socks off of RED in those departments. From the tests I have already scanned, it is apparent that I can save a thin negative (knowing there are, inherently, some losses to image quality) with better results than an equivalently underexposed RED image. I assume I will get roughly the same compensation on overly thick negatives.

 

All-in-all, I still think I have a viable system. As well, as DSLR technology gets better and cheaper, my system can grow with the times. My current calculations put my overhead down to $0.000491 per scan. That's about $85.00 to single-pass scan a 120-minute feature in 4K Bayer, or $170.00 for an HDRI version. I have no reservations about that.
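Those per-feature figures check out, assuming a 24 fps frame count (my assumption; the $0.000491-per-scan figure is from the post):

```python
# Back-of-envelope check on the scanning overhead quoted above.
cost_per_scan = 0.000491        # dollars per frame, from the post
fps = 24                        # assumed frame rate of the feature
frames = 120 * 60 * fps         # 120-minute feature
single_pass = frames * cost_per_scan

print(f"{frames} frames")                         # 172800 frames
print(f"single pass: ${single_pass:.2f}")         # ~$84.84
print(f"two-pass HDRI: ${2 * single_pass:.2f}")   # ~$169.69
```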


Paul, keep on trucking... Your rig is an inspiration!

 

My feeling is that multiple passes and HDR are probably overkill. If you're able to get a nice bright exposure (expose to the right) such that the base of the negative photographs as very close to pure white (R1.0, G1.0, B1.0), you should be able to capture the entire exposure range of the negative in one pass. In other words, I can't imagine a situation where the base of the film is being captured at 1.0 and the densest highlight is simultaneously clipping at 0.0. Luckily for us, the distribution of values in a negative is extremely nonlinear, so you should be able to pull 100% of the negative's latitude out of a single RAW file.

 

The key is really to get the DSLR's exposure as far to the right as possible. I wouldn't even worry too much about highlight clipping on the base/shadow portion of the image, as that area actually still has information (mostly in the green channel; it can be recovered with the "recovery" slider). That will minimize noise and maximize signal, due to the highly linear response of the sensor (50% of the data is devoted to the brightest stop, leaving only 50% for the rest of the image).
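The bright-base-then-invert idea can be sketched in a few lines of pure Python. The base colour readings here are hypothetical, and a real pipeline would work on whole decoded RAW frames rather than single samples:

```python
def invert_negative(rgb, base_rgb):
    """Invert one linear RGB sample from a scanned negative.
    Dividing by the measured film-base colour cancels the orange
    mask (base maps to 1.0); then 1 - x flips negative to positive."""
    out = []
    for sample, base in zip(rgb, base_rgb):
        normalized = min(sample / base, 1.0)  # clamp slight overshoot
        out.append(1.0 - normalized)
    return out

# Hypothetical readings: clear base photographs as a bright orange.
base = (0.95, 0.75, 0.55)
print(invert_negative((0.95, 0.75, 0.55), base))      # base -> black
print(invert_negative((0.475, 0.375, 0.275), base))   # denser area -> mid grey
```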

 

That tiny amount of digital noise will disappear into the grain, and along with the grain will prevent banding throughout the post process.

 

As for RED vs Techniscope... Everything is a trade-off. With RED you trade a large upfront expense for low operating costs and an easy digital workflow. With film, you trade a relatively large operating cost and cumbersome workflow for image quality advantages and extreme flexibility. When Kodak introduces a new film, you're getting a sensor upgrade for free...

I am using full CTO on the light.

 

EDIT: CTB, actually.

 

I have room to spare on the light so I'll keep adding CTB and some green until I get as close to white base as possible, as you fine fellows have recommended.

 

I really do appreciate all the supportive words. It does mean a lot to me. Thanks for all the help.

I can't say whether the lack of an OLPF on the Kodak is significant to my final output. Adobe crunches OLPF in software.

 

You can't really do that.

 

The problem with low pass filtering is twofold: first, you can't physically manufacture a filter which has 100% transmission up to a certain spatial frequency and none above it. Second, while you can (almost) do that in software, you need to oversample significantly in order to achieve it; the basic rule is that you can't low-pass filter after the image has hit the sensor, because the damage is already done.
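The filter-before-decimating point in a tiny 1-D sketch (pure Python, with a crude box filter standing in for a proper low-pass kernel):

```python
def box_downsample(samples, factor):
    """Downsample by averaging each block of `factor` samples:
    a crude low-pass applied at the ORIGINAL resolution before
    the sample rate drops."""
    return [sum(samples[i:i + factor]) / factor
            for i in range(0, len(samples) - factor + 1, factor)]

# A one-pixel on/off checker sits above the new Nyquist limit.
fine_detail = [0.0, 1.0] * 8

print(box_downsample(fine_detail, 2))  # all 0.5: detail averaged to grey
print(fine_detail[::2])                # naive decimation aliases to all 0.0
```

Decimating first (the second print) folds the unresolvable detail back in as a false signal; filtering first (the first print) averages it away, which is what the oversampling buys you.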

 

The good news is that you can probably just throw it slightly out of focus and achieve similar results, especially on a flat-plane subject like a film frame. Evaluating how much defocus to use is the issue, and the tradeoff of sharpness for aliasing performance is always a matter of opinion.

 

P


I've been surfing the still photo forums searching for more information on the DCS and OLPF. There doesn't seem to be much agreement on the subject. I do find a couple of mentions that RAW data can be better manipulated in software for aliasing than JPEG. There are statements that bracketing eliminates aliasing. There are also mentions that down-sampling can reduce or eliminate aliasing. Since my 4.5K scans have to be down-sampled at fractional rates (like 4,500 to 3,840 horizontal), it may be a self-solving problem. The most common complaint seems to come from wedding photographers shooting wedding veils. Maybe I could only write stories of people getting divorced. I'm not happy with the softening that OLPFs do, given the amount of enlargement my images will have to sustain in big-screen presentations. As well, I'm wondering if the inherent irregularities of film grain will temporally break up aliasing. Maybe the human eye won't catch one "jaggie" frame at 24 FPS.


Paul, I think you're correct that grain will break up aliasing of image detail. The only thing I would be even theoretically concerned about is "grain aliasing," which is what happens when your pixels are about the same size as your grain. It happens a lot on 4000 DPI Nikon film scanners, and has the effect of exaggerating grain.

 

However, I think the OLPF and Bayer interpolation will keep grain aliasing from being a problem. I wouldn't sweat these details unless you start to see artifacts.

Edited by Ben Syverson

I did not know that about pass scanners like the Nikon. We have a poster who built his own VistaVision scan rig from a pass scanner. I wonder what grain aliasing issues he's encountered.


Ben,

 

I was thinking further that if I did run into a stretch of film that grain-aliased noticeably, I could simply track the DCS in or out, with the accompanying slight losses or gains in framing, to solve it. My thinking is: all I have to do is change the magnified size of the problematic grain so it no longer matches the size of the photosites.


Sorry, I misunderstood; you're using a camera without an OLPF!

 

Paul, are you filling the DCS's frame horizontally with the full width of the Techniscope frame? Or are you framing such that you can photograph two frames / four perfs at once? In other words, is your magnification 1:1, or is it even higher?

 

With Techniscope I would be tempted to turn the camera sideways, shoot 1:1, and capture 4 frames at a time!


 

One 2-perf frame at a time, filling the DCS frame horizontally. There was much emailing back and forth on this issue with Bruce McNaughton. That NC was 2-perf geared when I bought it. Putting it back to 4-perf and modifying the gate/pressure plate that much was too much redesign and cost at my budget. So, we kept it as a 2-perf exposure rig. Though, in hindsight, a 4-perf pull-down would have given me two formats that could be scanned (where the 2-perf format would split the two exposed frames in Adobe), which might have had some potential to make money off the rig if I had gone that way. Sideways exposure was discussed for dual Techniscope 4-perf; it was his idea. But budget decided all of that in the end.

Ah, so with such a high magnification (over 1:1), you may be slightly oversampling the grain (well, the dye clumps anyway). I'd be curious to see a more recent sample shot -- my guess is that the grain is well resolved and larger than a single pixel.

 

Film is a pan-resolution capture medium. Some dye clouds are big and some are very small with every possible size in between (I don't know what the actual specs and percentages are on that). Because of this there will always be some amount of grain aliasing in my scans even if insignificantly low. Finding a sweet spot for each uniquely processed stock may be a minor concern.


Latitude and dynamic range are intertwined in single exposures. Doesn't HDRI in my scans push both? Won't I get more stops out of the film as well as better detail in lows and highs? Or am I missing something about the characteristic curve of film?


Okay. I actually decided to use my brain on this instead of just bugging you fellas for an answer.

 

I'll never get more stops from the film, because when the characteristic curve comes to an end at both corners of the graph, that's all she'll do. I can get more stops out of the sensor, since it already has a lower dynamic range than the film. I can also get the illusion of more stops from the film, knowing it's just a trick. That also means I'll be goofing with the toe and shoulder of the film's characteristic curve. I do get better and tunable latitude by HDRI scanning. But the buck still stops at the corners of the film's graph.


There's a big difference between latitude and dynamic range (aka signal-to-noise ratio), but the distinction sometimes gets lost... Latitude refers to how many "stops" of light a system can represent, and dynamic range / SNR refers to the amount of "information" in the record, versus noise. They are certainly interrelated; you may have X stops of latitude, but some of them may be so noisy that they're unusable. The higher the SNR, the more latitude you're likely to be able to capitalize on.

 

To give a concrete example, think of a hypothetical 14-bit DSLR sensor. Because image sensors are highly linear, that would suggest 14 stops of latitude. However, noise keeps that from being a reality. Say we have an average variation (noise) of 10 values out of the sensor's 16384 possible values. That makes the SNR of the sensor about 64.3 dB, and the effective bit depth of the sensor 10.68, or roughly 10.7 stops of latitude. We can expect an extra 3.3 stops in the shadows, but they will be more noise than signal.
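Those numbers reproduce directly from the stated noise floor:

```python
import math

# Reproduce the hypothetical 14-bit sensor example above.
full_scale = 2 ** 14                 # 16384 code values
noise = 10                           # average noise, in code values

snr_db = 20 * math.log10(full_scale / noise)
effective_bits = math.log2(full_scale / noise)

print(f"SNR: {snr_db:.2f} dB")                            # ~64.29 dB
print(f"effective bits: {effective_bits:.2f}")            # ~10.68
print(f"stops lost to noise: {14 - effective_bits:.2f}")  # ~3.32
```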

 

Film is a little different. With negative film, shadows can quickly drop to nothing (0 density), while highlights may burn in far beyond the normal exposure range. In that way, it's the opposite of digital. We can expect extra stops in the highlights, but they will be increasingly noisy. Film is also very nonlinear -- the response curve is shaped by the engineers to "look right" when printed or inverted (whereas linear digital sensors need a strong gamma curve applied to look right). The thing is, because film has extended range in the highlights instead of the shadows, it will "feel" like it has much more latitude than digital, because we rarely need to see detail in the highlights, but we often want to see into the shadows.

 

All of that is the very long LONG way to say that you're safe without resorting to HDRI. HDR/multipass would reduce the level of noise from your sensor, but that noise level is probably already extremely good. You will be capturing the entire response curve of the film because a strip of negative film is so low contrast. Film's full latitude is "compressed" into a range of something like 3-4 stops of linear light transmission, and your DSLR has more than enough dynamic range to capture that faithfully. Way more.
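The "3-4 stops" figure follows from density arithmetic, assuming a typical colour-negative density range of roughly 1.0-1.2 above base (my assumption for the illustration):

```python
import math

# Density is log10 of attenuation, so a density range D spans
# log2(10 ** D) stops of linear light transmission.
for density_range in (1.0, 1.2):
    stops = math.log2(10 ** density_range)
    print(f"density range {density_range}: {stops:.2f} stops")
# density range 1.0: 3.32 stops
# density range 1.2: 3.99 stops
```

Any modern DSLR resolves far more than four stops cleanly, which is why a single well-exposed pass can capture the whole curve.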

 

The key is really to get the base/shadow of the film up into the highlights on the sensor, which will place those valuable few stops of film exposure data in the best part of the sensor's response.

