
DIY Film Scanner (With Samples)


Josh Gladstone


Hint:

 

You could use a single resistor per color channel and trim the values of each of the three (or four) resistors to balance the LEDs, e.g. 20 ohms for green, 50 ohms for red, etc.
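For picking those trim values, the usual series-resistor arithmetic applies: R = (V_supply − V_forward) / I. A quick sketch (the forward voltages and the 20 mA target current below are illustrative assumptions, not measured values for any particular LED):

```python
# Per-channel ballast resistor: R = (V_supply - V_forward) / I_target.
# Red LEDs drop less voltage than green/blue ones, so they need a larger
# resistor for the same current -- hence different values per channel.
def ballast_ohms(v_supply, v_forward, i_target):
    return (v_supply - v_forward) / i_target

print(round(ballast_ohms(5.0, 2.0, 0.020)))  # red   (~2.0 V drop) -> 150 ohms
print(round(ballast_ohms(5.0, 3.2, 0.020)))  # green (~3.2 V drop) -> 90 ohms
```

In practice you'd then nudge each value up or down to balance the measured (or perceived) brightness, as suggested above.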

 

I assume you will have a PWM setup controlled by the Arduino


one resistor per channel would probably work, but after talking to some EE friends, the recommendation was to keep them separate. I think this is a pretty good explanation of why: http://electronics.stackexchange.com/questions/22291/why-exactly-cant-a-single-resistor-be-used-for-many-parallel-leds

 

I mean, it'd probably be ok, but it's not that big a deal to have multiple resistors if I'm using an array for them. 5 extra minutes of soldering, pretty much.

 

Yeah - the Arduino PWM pins are being used for this so you can vary the intensity of each color independently to either do pure R, G and B with a mono camera, or mix a pretty good white for a color camera or black and white film.

 

-perry


  • 2 weeks later...

Another quick update: The PCB arrived this morning for the RGB+IR lamphouse. This one is a bit big, but fits the Imagica. I am toying with tweaking the design (seems every time I finish a design I learn some more tricks that could minimize the size. Now I'm obsessed with shrinking it!). In any case, this will use SMD components, which I'm about to learn how to work with for the first time. Should be fun. I'm hoping to have it fully assembled in the next couple of days.

 

https://www.flickr.com/photos/friolator/17037539117/in/set-72157644369553789


  • 3 weeks later...
  • 2 weeks later...

Hey Josh,

 

I just wanted to say I liked the video of the Bolex projector you posted. I never post here, but I did something similar last year playing with a Raspberry Pi.

 

 

It's not the same as the work you put into it, but fun nonetheless. Sometime I need to finish that project.

Edited by Justin Miller

  • 2 weeks later...

So, I've successfully gutted my Bell & Howell projector and written a really basic Arduino sketch that's successfully advancing frames. I'm pretty stoked about all this.

 

Today I'm receiving a pretty cheap USB Microscope camera from Amazon that I'm going to try to get to work with the whole setup. Josh, can you post your Python program? I think it'll work with the camera I bought, because the computer should treat it like a webcam, and I know you said your program worked with your iSight...

 

Once I have a complete setup in place, regardless of how *good* it is, I will share with the forum!


  • 7 months later...

This is a fascinating thread. Anyone have any updates, particularly on the USB microscope cameras?

 

I am interested in capturing some Super 8 or Regular 8 mm home movies from the 60s and 70s.

 

The quick and dirty method is to just take a video of the film as it's projected on a screen. This capture method must give better results, but can anyone comment on how much better the final product is?


Hey Sam,

 

Let me just say this about cameras: from my limited experience, no matter how much a manufacturer claims to comply with standards, it always takes some work to get a camera working. For example, I just got an Imaging Solutions L-W-5s-1394 (2.5K camera upgrade!), but I haven't been able to get OpenCV to interface with it yet, despite the claim that it and my current camera both use the DCAM/IIDC standard. So that's going to take some work.

 

But I digress. The version I had working with the iSight was from a long time ago, when I was using an HDV camera with a removable lens as my capture camera. OpenCV was able to natively pull images from those cameras; it wasn't until I got into machine vision cameras that I needed to look at other libraries. But anyway, here's a basic Python program that should be able to pull images out of the iSight. It was hanging on my system, but that looks like it might be an OpenCV version issue, or possibly a Yosemite issue. The code should work. Will it work with a webcam? Maybe? But honestly I'd be surprised if it were that easy. Anyway, here's the code:

#!/usr/bin/env python
import cv2

cv2.namedWindow("preview")
vc = cv2.VideoCapture(0)
rval = vc.isOpened()

while rval:
    rval, frame = vc.read()
    if not rval:
        break
    cv2.imshow("preview", frame)
    key = cv2.waitKey(20)
    if key == 27:  # exit on ESC
        break

vc.release()
cv2.destroyAllWindows()

Also, as a bonus (?), here's some really old code I had. Basically this was the very first proof-of-concept version I had working with the HDV camera. I hacked apart a computer mouse and modified the mouse and projector so that it would click once per revolution. A preview video is shown in a window, and every time the preview window is clicked, the current frame is written to the hard drive. So, once I situated the cursor and hooked the projector-mouse combo up to the computer, it would capture each frame. It was able to capture at fairly reasonable rates (like 15-20fps if I remember correctly! Probably has something to do with it being HDV, as opposed to raw images coming off the vision cameras). But it was hard to time the capture exactly, so eventually the shutter would creep in. Then I removed the shutter, and after a while it would capture a pulldown blur. I just couldn't get it to be reliable, so that's when I moved to the stepper motors and vision cameras. Anyway, here's the code. This was a long, long time ago, so no guarantees anything works.

 

#!/usr/bin/env python


      ###########################################################
   ###                                                           ###
###                       FrameCapture v .9                         ###
   ###                                                           ###
      ###########################################################


import cv2, argparse, time


framecount = 0
shuttercount = 3
gocapture = 0
syncdelay = 0.0


parser = argparse.ArgumentParser(description='Example with non-optional arguments')
parser.add_argument('-l', action='store', dest='capture_folder', default='/FrameCapture/Caps',
                    help='Path to Capture Folder, i.e. /Users/USERNAME/Desktop')
thePath = parser.parse_args().capture_folder


def onmouse(event, x, y, flags, param):
    global shuttercount
    global framecount
    global syncdelay
    if gocapture > 0:
        if event == cv2.EVENT_LBUTTONDOWN:
            if shuttercount >= 6:
                time.sleep(syncdelay)
                cv2.imwrite(saveLoc, frame)
                framecount += 1
                shuttercount = 1
                print 'Frame Captured --> ' + saveLoc
            else:
                print '*' * shuttercount
                shuttercount += 1


print ''
print '-------------------------------------------------------------------------'
print '[] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] ['
print '-----------------------------------------------------------------------/'
print '      \|/ | O -   ^^         |                  |           _   _     |'
print '     --O--|/ \        O  ^^  |   ^^   |||||     |     ___  ( ) ( )   _/'
print ' /\   /|\ |         --|--    | ^^     |O=O|     |_ __/_|_\,_|___|___/'
print '/  \/\    |~~~~~~~~~~~|~~~~~~|        ( - )     |   -O---O-       |'
print '  /\  \/\_|          / \     |       .-~~~-.    | -- -- -- -- -- /'
print ' /  /\ \  |                  |      //| o |\\   |______________ |'
print '--------------------------------------------------------------_/'
print '[] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] ['
print '------------------------------------------------------------ '
print '                            FrameCapture v .9   (c)2013'
print ''
cv2.namedWindow('Initializing Camera...')
camera_index = 0
vc = cv2.VideoCapture(camera_index)
print 'Camera Online. Click in Video Window to Activate.'
print 'Press [h] for help. Press [esc] to Exit.'


if vc.isOpened(): # try to get the first frame
    rval, frame = vc.read()
else:
    rval = False


while rval:
    saveLoc = thePath + '/Frame_' + str(framecount).zfill(3) + '.tiff'
    cv2.imshow('Capture', frame)
    cv2.setMouseCallback('Capture', onmouse)
    rval, frame = vc.read()
    key = cv2.waitKey(20)
    if key == 27: # exit on ESC
        print 'Goodbye.'
        break
    if key == 32: # spacebar
        if gocapture < 1:
            gocapture = 10
            print 'Starting Capture...'
        else:
            gocapture = 0
            print 'Ending Capture...'
    if key == 46: # . (period) key
        syncdelay += .01
        print 'Delay:', syncdelay
    if key == 44: # , (comma) key
        if syncdelay >= 0.01:
            syncdelay -= .01
            print 'Delay:', syncdelay
        else:
            print 'Delay: 0.00'
    if key == 48: # 0 key
        syncdelay = 0
        print 'Delay: 0.00'
    if key == 78 or key == 110: # n key
        camera_index += 1 # try the next camera index
        vc.release()
        vc = cv2.VideoCapture(camera_index) # reopen capture on the new index
        print 'Switching Cameras...'
    if key == 72 or key == 104: # h key
        print ''
        print 'HELP:'
        print '    To change capture location, invoke FrameCapture with FrameCapture.py -l /PATH/TO/LOCATION'
        print '    Press [space] to start/stop capturing frames.'
        print '    Press [period] to increase delay. Press [comma] to decrease delay. Press [0] to reset delay.'
        print ''

Enjoy! I'll try to answer any questions if anybody's got any!


Sorry, just for anybody starting out. If you want to run that program, you need to have Python and OpenCV installed. Then save it as FrameCapture.py, just drag it into a terminal window, hit enter, and it'll run. If you want to specify a save location, after you drag it into the terminal window, add "-l" for location, followed by the path to the directory you'd like the images saved in. Hope that makes sense!


  • 4 weeks later...

New poster here, but I've benefited immensely from this discussion, and hopefully I'll have more to add. I've recently gotten into video and had an interest in digitizing my family's small collection of Super 8 films. I figured I'd try to build a telecine-type device to do it, and as my background is in microbiology, why not try to use a microscope as the image collector?

 

I’m still very early in the process, but thought I’d put out some of the things I’ve discovered in trying to use a USB microscope to do this.

 

I did some calculations early on: the horizontal field of view at max magnification of the Celestron Pro scope (pushed all the way to the object) is about 7 mm, judging from the videos online.

Super 8 frames are 0.211" x 0.158" (4:3).

There are a number of capture resolutions on the scope (theoretically, see below):
- 1280 x 960 (1.3 MP)
- 1600 x 1200 (2 MP)
- 2048 x 1536 (3 MP)
- 2592 x 1944 (5 MP)

So, assuming we captured at max resolution with the scope pushed all the way up (without modifying it):
5.79/7 x 2592 = 2143, so roughly a 2143 x 1611 image for each frame, or about 3.5 MP per frame. Not too bad.
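The same arithmetic can be sketched as a quick calculation (assumptions: a 7 mm horizontal FOV, square pixels, and the commonly published Super 8 camera-gate size of 5.79 x 4.01 mm):

```python
# Estimate the pixels that land on a Super 8 frame, given the scope's FOV.
sensor_w, sensor_h = 2592, 1944        # 5 MP capture mode
fov_w_mm = 7.0                         # eyeballed horizontal field of view
frame_w_mm, frame_h_mm = 5.79, 4.01    # Super 8 camera gate (~0.228" x 0.158")

px_per_mm = sensor_w / fov_w_mm        # assumes square pixels
frame_w_px = int(frame_w_mm * px_per_mm)
frame_h_px = int(frame_h_mm * px_per_mm)
print(frame_w_px, frame_h_px, round(frame_w_px * frame_h_px / 1e6, 1))
```

Depending on which frame dimensions you plug in and how the vertical scale is estimated, the height comes out anywhere from roughly 1480 to 1610 px, i.e. around 3.2-3.5 MP per frame.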

Selection: I looked around at a number of cheap scopes, and it seems there are actually only a few low-cost models made. Really, the only low-cost options are direct USB scopes; the next step up includes a screen, and beyond that you're looking at a separate scope and camera, which is much more expensive. I think there are likely only a few manufacturers, but their scopes are OEMed to many different firms that put them out. I ended up purchasing the "Crenova® UM012C USB Digital Microscope 5MP Video Microscope". This is theoretically the same microscope as the "Celestron 5 MP Handheld Digital Microscope Pro" and other brands, but it was the cheapest at the time on Amazon.

 

Plug it into a Windows computer (I've tried Windows 7 and 10) and the drivers load automatically. It installs as a webcam (which it essentially is, but with a different lens), so any software that can interface with Windows to capture webcam video can use it. The disc that came with the scope didn't read, so I downloaded the microscope software from the Celestron site, and it worked fine. I took some shots with the scope of the worst/least-cared-about Super 8 movie that I had. As a first proof of concept, I put the film on top of a tablet for backlighting and used the scope to take some shots. You can see the tablet's pixels behind the frame. I tried a bunch of different resolutions using the Celestron-supplied software, but didn't see much of a difference above 2 MP. Of course, that might change with a proper diffuser and backlighting. Also, the film has a number of scratches and imperfections, so more study is needed.

 

[attached image]

 

I had originally thought that I might build a variant of the Kinograph (http://kinograph.cc/), but finding this forum made me think that the better route would be using a projector to move the film. This is my current plan.

 

As far as software, I have tried a number of libraries: Microsoft Expression, OpenCV, and AForge. I've had the best luck with AForge, which has a good forum and lots of example code. I've modified a demo app for my purposes that lets me adjust the capture resolution and capture JPGs from the image stream programmatically using Visual Basic. I think this will work well for developing the automation scripts. I'm not sure if I'm going to go the Arduino path or use more direct control like a LabJack. Another TBD.

 

I just got a Sankyo Dualux-1000 today, which seems to work well. I took out the lens and pressed the scope in; it's about the same diameter, and it stuck in place. While it is not optimally placed (it is currently too far from the film for a good capture), I took some shots that might be interesting. Also, I was using my bike headlight as the light source, so not an optimal test, but another proof of concept. The front part of the scope seems to be screwed on, so it might be easy to unscrew it and get it closer. It would be nice to do this non-destructively to the projector. It would also be nice not to have to use any additional lenses and just snap the film directly.

 

[attached image]

 

 

 

[attached image]

 

 

 

More details on the microscope: though it is advertised as 5MP (2592x1944), I can only seem to get that resolution out of the Celestron software. When I use other programs to adjust/capture, I only get up to 2048x1536. I'm not sure why:

· It could be that the camera is not really 5MP, and they are just upscaling. I don't have anything with good detail to look at at that resolution, so eyeballing it hasn't worked so far.

· It could be that the 5MP mode uses some special driver mode that isn't available through DirectShow, which is what my software uses. There is also a 'snapshot mode' for webcams, which tends to be high resolution, but my initial search hasn't found a higher-rez mode.

 

In all though, I’m not sure how high a rez I’ll need, or if 5MP will be overkill.

 

Modes from the Directshow driver reported from the scope:

 

Stream Format Properties tab for the microscope (color space/compression - frame rate - output size):

MJPEG - 15FPS - 2048x1536 (default)

MJPEG - 15FPS - 1600x1200

MJPEG - 25FPS - 1280x960

MJPEG - 30FPS - 800x600

MJPEG - 30FPS - 640x480

No adjustments are available for frame rate or on the 'Compression' tab.

YUY2 - 5FPS - 2048x1536 (Default)

YUY2 - 5FPS - 1600x1200

YUY2 - 10FPS - 1280x960

YUY2 - 20FPS - 800x600

YUY2 - 30FPS - 640x480

No adjustments are available for frame rate or on the 'Compression' tab.

I don’t know much about the color space or compression quality differences between MJPEG and YUY2 in terms of practical outcomes. Anyone have any tips?

 

I’ll be working on this over the course of the winter and will update as it progresses.

 


Your photos are way (4x) too big and mess up the page layout, and this is on a modern large-screen Windows system. Let alone on a tablet or phone.

Huh, that's odd. They autoscale for me, and only open large when clicked.


Yep - the big images muck up the layout on a windows box under firefox or google chrome.

 

But Microsoft Edge is okay.

 

This has always been the case here on cinematography.com.

 

Someone posts an image bigger than usual, and the width

of the entire thread window then stretches to accommodate

the larger image meaning one then has to use the horizontal

scroll bar to find the end of any text, unlike here where I've

used manual line wrapping.

Edited by Carl Looper

Just finished capturing about 40 reels of Normal 8 and Super 8 film with my DIY telecine device.

I could not have done it without some great tips on this forum. Thanx all!

 

some technical details and samples :

 

Stunning piece of work. I'm impressed by the sheer quality of the workmanship and attention to detail.

Personally, I got my stepper all rigged up and working at xmas, but then I stalled when it came to designing the LED light source. Would you be willing to share more detail about how you did that?


@Phil : No experience with negatives, I only got "projectable" reels from my father and grandfather.

 

@Simon : Thanks, I had a fun time building it.

 

About the LED light source:

 

I started by taking 9 RGB LEDs from an RGB LED strip I had and soldering them onto a small PCB.

The results were disappointing, because the wavelength of the green LED was way too close to that of the blue one, resulting in poor color rendering (the characteristics of the light source, the film dyes, and the CCD's Bayer filters need to match). I ordered separate green LEDs (545 nm) and used these instead of the strip's green LEDs. The result is much better.

 

I use a basic current source to dim the light; see the schematic for a single color.

I did not use PWM because I was worried it could cause flickering: at a 1 kHz PWM frequency the LED flashes once every millisecond, so with an exposure time of, say, 10 ms the CCD sensor can receive 9 or 10 pulses, which would give a 10% difference in luminance from frame to frame.
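The flicker argument above can be put in numbers (a rough worst-case model, assuming the PWM clock free-runs relative to the camera shutter and each pulse contributes equal light):

```python
# Worst-case frame-to-frame variation from unsynchronized PWM dimming:
# an exposure window spanning N full PWM periods can catch N or N+1 pulses,
# so adjacent frames can differ by roughly one pulse in N.
def pwm_flicker(pwm_hz, exposure_s):
    pulses = int(pwm_hz * exposure_s)
    return 1.0 / pulses

print(pwm_flicker(1000, 0.010))   # 1 kHz, 10 ms exposure -> 0.1 (10%)
print(pwm_flicker(20000, 0.010))  # raising PWM to 20 kHz  -> 0.005 (0.5%)
```

In other words, you'd want the PWM frequency much higher than 1/exposure, or you do what Pol did and dim with a constant-current source instead.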

Getting uniform lighting of the film was a bit of a struggle, but the method I use now works fine. The light is mixed and reflected in a white tube, and the front cover contains two layers of light-diffuser foil recovered from an LCD backlight.

 

To be really honest, I also tested with a halogen light and a good IR filter (to avoid burning the film and flooding the CCD), and this also gave very good results. A halogen light is easier and requires no electronics.

 

The disadvantage is of course that the blue CCD pixels do not receive as much light as the red and green ones, causing some loss of dynamic range. But in my case the loss was approx. 30% (white in the film is detected as red = 100%, green = 100%, blue = 70%), which is not that much considering the 12-bit range of the sensor.
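That 30% blue deficit can be expressed in stops and in raw sensor levels (a back-of-the-envelope sketch, assuming a linear sensor response):

```python
import math

# If white meters at only 70% in the blue channel, the blue pixels lose
# log2(1/0.7) stops of headroom and use 70% of the available 12-bit codes.
blue_fraction = 0.70
stops_lost = math.log2(1.0 / blue_fraction)   # ~0.51 of a stop
levels_used = int(4096 * blue_fraction)       # ~2867 of 4096 levels
print(round(stops_lost, 2), levels_used)
```

About half a stop out of a 12-bit range, which matches the "not that much" assessment above.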

 

The real advantage of LEDs lies in the ability to supply very short, intense bursts of light, ideal if you want to keep the exposure time very short. For my 3 frames/s prototype it wasn't needed.

 

Pol

 

[attached images: light source and schematic]

 

