
Josh Gladstone


Posts posted by Josh Gladstone

  1. Has anyone tried to build a new one? It looks pretty simple. I bet you could do it with an Arduino pretty easily. Plus, then you could program it to do all sorts of crazy / accurate things. Interesting thought...

  2. Doesn't the Müller scanner use a laser to register the perfs? That should still work, right?

     

    On the other end, I've converted a projector into a scanner, which has its own issues, but processing density certainly isn't one of them. I've got a bunch of Super 8 stuff I scanned on YouTube, and all of it was hand-processed as negative.

  3. I've actually got a Nikon to ACL adapter, so they did make them.

     

    I'd keep an eye on ebay, a decent looking ACL kit just sold today for like $550.

     

    If the sensor sizes are the same, you should get very similar results, so if it works on the BMPCC, I can't think of a reason it wouldn't on a super16 ACL, other than lens mounting or possibly backfocus issues.

  4. I agree. I've been looking for one of the Eclair ergonomic grips for mine. I've got some sort of aftermarket ergonomic grip, but it's just not quite at the right angle to be comfortable. But he's asking $300 for one. That's nuts. I paid $800 for the whole camera!

  5.  

     

    Indeed, E-100 is a tricky stock, especially for beginners starting out on, say, Super 8, who aren't accustomed to manual exposure and are easily discouraged when the results that come back from the lab aren't immediately gratifying :)

     

    Kodak Alaris actually states in their press release that they are going to "…reformulate…" the Ektachrome 100 stock they want to launch. I'm unclear whether this is more a marketing reference than a chemical-engineering term at this stage. It will be interesting to see whether the E-100 stock will indeed be the old 5285 / 7285, or a new type with greater latitude.

     

    Here's an interview where they give more details:

    Film's not Dead: So, in terms of the formulation, is it exactly the same, or has it been upgraded?

     

    Kodak Alaris T. J. Mooney: Well that is still TBD (to be discussed) which is part of the reason why the availability is set for later this year, in the fourth quarter. Bringing back a film is not as simple as you might think. There's a very significant R&D (Research & Development) that is necessary to re-formulate the product based on component availability and any equipment changes that have been made or any changes to environmental health and safety regulations. So the intent here is to bring back a daylight 100 speed Ektachrome film. Saturation levels and performance characteristics are still TBD at this point but in terms of the old Ektachrome it will certainly be along those same lines and we'll know more as we go along.

  6. Yeah, that's about 4x what I was expecting the price to be. It's a crystal sync Super 8 camera, so that's great, but they also mentioned only four frame rates: 18, 24, 25, and 36 fps. So no time lapse or long exposure, even though the hardware should be capable of it. It also seems like they may have removed the on-board camera microphone (they seemed to make a point of saying that you could record to the SD card if you plugged in an external mic). Not a huge deal, but still. Definitely going to need to see some more footage and find out a lot more details. I was planning on pre-ordering at $500, but I can't really see spending $2,000 on a Super 8 camera. Still love it though, and I love the efforts Kodak is making.

  7. Unfortunately 120 film base is much thinner so it's not just a matter of cutting. If the 135 market is tiny, 120 must be well nigh invisible. Sorry. You can still get Velvia in 120.

     

    Oh, wow that's interesting. I just always assumed they cut everything from the same sheets. Learn something every day.

  8. I had more or less dismantled my scanner a while back (it had been fairly unreliable and I needed the parts for something else), and I've been (slowly) repairing and reworking the whole setup, including rewriting some code to work with newer versions of pydc1394 and OpenCV. I also upgraded the vision camera to an Allied Vision Guppy GF-503c, so I'll be able to scan at over 2K now. I also got an ISG LW-5-S-1394, which is another 2.5K camera, but I'm unable to get images off of it, so maybe I'll look at that as well (although the Guppy has a slower frame rate, it works fine, so maybe I won't. We'll see). Hopefully I'll be able to make the scanner more reliable with this rewrite. I'll post some stuff when/if I get everything working.

  9. Overcranking? You'd eat through your film a lot quicker, but you would get some nice slow motion shots out of it. (I'm not saying this is a good idea, but it is an option.)

     

    Also, you could intentionally overexpose a stop and pull process the film. Your lens may also stop down smaller than f/16.

  10. Are you sure it's on the mirror? Could it be on the ground glass? As far as I'm aware Beaulieus have two pieces of ground glass sandwiched together with a glue, and over time that glue can deteriorate and cause viewfinder distortion/irregularities. My personal R16 is slightly blurry on the left side, but clear on the right side, so I can only get hard focus with part of the viewfinder.

     

    Anyway, from what I've read, cleaning the ground glass on a Beaulieu is a huge pain and requires disassembling the whole camera, so I've been told if you can live with it as it is, you should.

  11. Not sure if this helps, but I do believe they did have some large ludicrous lights that you could use with your camera to allow low-light and indoor shooting. Think wedding movies. Like this:

     


    http://www.ebay.com/itm/1950s-WORKING-Acme-Lite-Mov-e-Lite-Home-Theater-Light-Lamp-Bar-for-Movie-Camera-/221978257927?hash=item33aeef6a07:g:~ggAAOSwMmBV2Kfx

     

     

    Of course you have to plug them in, so they're not very portable. Probably not a lot of use in nightclubs. Unless maybe there was a battery pack for them? But I really don't know much about them.

  12. I'd love to see this in a side-by-side or top-bottom 3d format! I've got a GearVR, so I'd love to check it out with true 3D playback instead of Red/Blue. Plus I could watch it in 3D in a giant virtual theater!!

  13. Yeah Simon, you basically want to come up with a way to know when one full revolution has occurred, and then stop the motor at that point, either with an interrupt or by constantly polling whatever sensor you have.

     

    The first way I did it was to hack apart a mouse, and use the mouseclick to trigger the image capture. This did work pretty well at first, but ultimately because there's a physical thing pressing against a physical part, it wears out and breaks. And because one roll of film is going to click that thing something like 3000+ times, it wore out pretty quickly. So ideally you want a way to detect when a frame is finished being pulled down and is ready for capture without physical contact.

     

    The solution I came up with was a photoresistor behind a hole behind the shutter, with an LED on the other side of the shutter. When the shutter is not between the LED and the photoresistor, it detects a lot of light; when the shutter passes between them, it detects less or no light. Every three shutter passes equals one frame, so my program counts three shutter passes and then stops the motor.

     

    I got this idea from looking at the Müller HM Data Framescanner, which was the initial inspiration to try and build a scanner in the first place. But there may be other better ways to detect the position of the stepper motor. A hall effect sensor - magnet combo, possibly? Or a rotary encoder on the motor itself? Lots of possibilities. I do plan to look into it more some day soon.
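    That three-pass counting loop can be sketched in a few lines of Python. This is just a sketch of the idea, not my actual scanner code: read_light() and stop_motor() are hypothetical stand-ins for whatever your hardware interface provides (GPIO, a serial link to an Arduino, etc.), and the threshold value is made up.

```python
THRESHOLD = 500        # hypothetical sensor reading separating "light" from "blocked"
PASSES_PER_FRAME = 3   # three shutter passes equals one frame pulldown

def wait_for_frame(read_light, stop_motor):
    """Poll the photoresistor and stop the motor after one full frame pulldown.

    read_light() returns the current light level; stop_motor() halts the
    stepper. Both are stand-ins for your real hardware interface.
    """
    passes = 0
    blocked = False
    while passes < PASSES_PER_FRAME:
        level = read_light()
        if level < THRESHOLD and not blocked:
            # the shutter blade just moved between the LED and the sensor
            blocked = True
            passes += 1
        elif level >= THRESHOLD:
            # the shutter cleared the gap; arm for the next pass
            blocked = False
    stop_motor()
```

    The edge detection (only counting a pass when the level first drops) is the important part; polling the raw level alone would count the same pass many times.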

  14. Sorry, just for anybody starting out. If you want to run that program, you need to have Python and OpenCV installed. Then save it as FrameCapture.py, just drag it into a terminal window, hit enter, and it'll run. If you want to specify a save location, after you drag it into the terminal window, add "-l" for location, followed by the path to the directory you'd like the images saved in. Hope that makes sense!

  15. Also, as a bonus (?), here's some really old code I had. Basically this was the very first proof-of-concept version I had working with the HDV camera. I hacked apart a computer mouse and modified the mouse and projector so that the projector would click the mouse once per revolution. A preview video is shown in a window, and every time the preview window is clicked, the current frame is written to the hard drive. So, once I situated the cursor and hooked the projector-mouse combo up to the computer, it would capture each frame. It was able to capture at fairly reasonable rates (like 15-20fps if I remember correctly! Probably has something to do with it being HDV as opposed to raw images coming off the vision cameras). But it was hard to time the capture exactly, so eventually the shutter would creep in. Then I removed the shutter, and after a while it would capture a pulldown blur. I just couldn't get it to be reliable, so that's when I moved to the stepper motors and vision cameras. Anyway, here's the code. This was a long, long time ago, so no guarantees anything works.

     

    #!/usr/bin/env python
    
    
    
    
          ###########################################################
       ###                                                           ###
    ###                       FrameCapture v .9                         ###
       ###                                                           ###
          ###########################################################
    
    
    
    
    import cv2, argparse, time
    
    
    framecount=0
    shuttercount=3
    gocapture=0
    syncdelay=0.0
    
    
    parser = argparse.ArgumentParser(description='Example with non-optional arguments')
    parser.add_argument('-l', action='store', dest='capture_folder', default='/FrameCapture/Caps',
                        help='Path to Capture Folder, i.e. /Users/USERNAME/Desktop')
    thePath = parser.parse_args().capture_folder
    
    
    def onmouse(event, x, y, flags, param):
        global shuttercount
        global framecount
        global syncdelay
        if gocapture > 0:
            if event==cv2.EVENT_LBUTTONDOWN:    
                if shuttercount >= 6:
                    time.sleep(syncdelay)
                    cv2.imwrite(saveLoc, frame)
                    framecount += 1
                    shuttercount=1
                    print 'Frame Captured --> ' + saveLoc
                else:
                    print('*')*shuttercount
                    shuttercount +=1
    
    
    print ''
    print '-------------------------------------------------------------------------'
    print '[] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] ['
    print '-----------------------------------------------------------------------/'
    print '      \|/ | O -   ^^         |                  |           _   _     |'
    print '     --O--|/ \        O  ^^  |   ^^   |||||     |     ___  ( ) ( )   _/'
    print ' /\   /|\ |         --|--    | ^^     |O=O|     |_ __/_|_\,_|___|___/'
    print '/  \/\    |~~~~~~~~~~~|~~~~~~|        ( - )     |   -O---O-       |'
    print '  /\  \/\_|          / \     |       .-~~~-.    | -- -- -- -- -- /'
    print ' /  /\ \  |                  |      //| o |\\   |______________ |'
    print '--------------------------------------------------------------_/'
    print '[] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] ['
    print '------------------------------------------------------------ '
    print '                            FrameCapture v .9   (c)2013'
    print ''
    cv2.namedWindow('Initializing Camera...')
    camera_index = 0
    vc = cv2.VideoCapture(camera_index)
    print 'Camera Online. Click in Video Window to Activate.'
    print 'Press [h] for help. Press [esc] to Exit.'
    
    
    if vc.isOpened(): # try to get the first frame
        rval, frame = vc.read()
    else:
        rval = False
    
    
    while rval:
        saveLoc = thePath + '/Frame_' + str(framecount).zfill(3) + '.tiff'
        cv2.imshow('Capture', frame)
        cv2.setMouseCallback('Capture', onmouse)
        rval, frame = vc.read()
        key = cv2.waitKey(20)
        if key == 27: # exit on ESC
            print 'Goodbye.'
            break
        if key ==32: # spacebar
            if gocapture < 1:
                gocapture=10
                print 'Starting Capture...'
            else:
                gocapture=0
                print 'Ending Capture...'
        if key == 46: # . key
            syncdelay += .01
            print 'Delay:', syncdelay
        if key == 44: # , key
            if syncdelay >= 0.01:
                syncdelay -= .01
                print 'Delay:', syncdelay
            else:
                print 'Delay: 0.00'
        if key == 48: # 0 key
            syncdelay = 0
            print 'Delay: 0.00'
        if key == 78 or key == 110: # n key
            camera_index += 1 # try the next camera index
            vc.release()
            vc = cv2.VideoCapture(camera_index)
            print 'Switching Cameras...'
        if key == 72 or key == 104: # h key
            print ''
            print 'HELP:'
            print '    To change capture location, invoke FrameCapture command with FrameCapture.py -l /PATH/TO/LOCATION'
            print '    Press [space] to start/stop capturing frames.'
            print '    Press [period] to increase delay. Press [comma] to decrease delay. Press [0] to reset delay.'
            print ''

    Enjoy! I'll try to answer any questions if anybody's got any!

  16. Hey Sam,

     

    Let me just say this about cameras: from my limited experience, no matter how much a manufacturer claims to comply with standards, it always takes some work to get a camera working. For example, I just got an Imaging Solutions LW-5-S-1394 (2.5K camera upgrade!), but I haven't been able to get OpenCV to interface with it yet, despite the claim that it and my current camera both use the DCAM/IIDC standard. So that's going to take some work.

     

    But I digress. The version I had working with iSight was from a long time ago when I was using an HDV camera with removable lens as my capture camera. OpenCV was able to natively pull images from those cameras; it wasn't until I got into machine vision that I needed to look to other libraries. But anyway, here's a basic Python program that should be able to pull images out of the iSight. It was hanging on my system, but it looks like that might be an OpenCV version / possibly a Yosemite issue. The code should work. Will it work with a webcam? Maybe? But honestly I'd be surprised if it was that easy. Anyway, here's the code:

    #!/usr/bin/env python
    import cv2


    cv2.namedWindow("preview")
    vc = cv2.VideoCapture(0)

    rval = True


    while rval:
        rval, frame = vc.read()
        cv2.imshow("preview", frame)
        key = cv2.waitKey(20)
        if key == 27: # exit on ESC
            break