Josh Gladstone

Everything posted by Josh Gladstone

  1. I think this is looking a bit better! https://www.facebook.com/KodakMotionPictureFilm/videos/1273261012795891/
  2. Has anyone tried to build a new one? It looks pretty simple. I bet you could do it with an Arduino pretty easily. Plus, then you could program it to do all sorts of crazy / accurate things. Interesting thought...
  3. Sure, why not? Better if someone gets some use out of it. Let me know if you're interested.
  4. Doesn't the Müller scanner use a laser to register the perfs? That should still work, right? On the other end, I've converted a projector into a scanner, which has its own issues, but processing density certainly isn't one of them. I've got a bunch of super 8 stuff on YouTube that I scanned with it, and all of it was hand-processed as negative.
  5. I've actually got a Nikon to ACL adapter, so they did make them. I'd keep an eye on eBay; a decent-looking ACL kit just sold today for like $550. If the sensor sizes are the same, you should get very similar results, so if it works on the BMPCC, I can't think of a reason it wouldn't on a super16 ACL, other than lens mounting or possibly backfocus issues.
  6. It might not help, but here's some super16 I shot that's Tri-X processed as a negative. This stuff was hand-processed in D76 and scanned on a homemade scanner, so it's pretty dirty and uneven, and a lab won't be using D76, so I'm not sure if it's even that helpful. But here it is:
  7. I agree. I've been looking for one of the Eclair ergonomic grips for mine. I've got some sort of aftermarket ergonomic grip, but it's just not quite at the right angle to be comfortable. But he's asking $300 for one. That's nuts. I paid $800 for the whole camera!
  8. Look what just came up on eBay: http://www.ebay.com/itm/192122951485
  9. In their podcast, they talk about how they are actively looking into bringing back Kodachrome and other legacy film stocks. Wow. https://soundcloud.com/the-kodakery/discussing-the-new-kodak-super-8-camera-live-from-kodak-studios-at-ces (at about 24:10)
  10. Yeah, that's about 4x what I was expecting the price to be. It's a crystal sync super 8 camera, so that's great, but they also mentioned only four frame rates: 18, 24, 25, and 36 fps. So no time lapse or long exposure, even though the hardware should be capable of it. It also seems like they may have removed the on-board camera microphone (they seemed to make a point of saying that you could record to the SD card if you plugged in an external mic). Not a huge deal, but still. Definitely going to need to see some more footage and find out a lot more details. I was planning on pre-ordering at $500, but I can't really see spending $2,000 on a super 8 camera. Still love it though, and I love the efforts Kodak is making.
  11. Oh, wow that's interesting. I just always assumed they cut everything from the same sheets. Learn something every day.
  12. I had more or less dismantled my scanner a while back (it had been fairly unreliable and I needed the parts for something else), and I've been (slowly) repairing and reworking the whole setup, including rewriting some of the code to work with newer versions of pydc1394 and OpenCV. I also upgraded the vision camera to an Allied Vision Guppy GF-503c, so I'll be able to scan at over 2K now. I also got an ISG LW-5-S-1394, which is another 2.5K camera, but I'm unable to get images off of it, so maybe I'll look at that as well (although the Guppy has a slower frame rate but works fine, so maybe I won't. We'll see). Hopefully I'll be able to make the scanner more reliable with this rewrite. I'll post some stuff when/if I get everything working.
  13. Okay, so I'm sure it's just a mistake on the part of the packaging designers, but take a look at this packaging prototype: What do you think? Probably nothing, right? (http://www.underconsideration.com/brandnew/archives/new_logo_and_identity_for_kodad_by_work_order.php)
  14. Overcranking? You'd eat through your film a lot quicker, but you would get some nice slow motion shots out of it. (I'm not saying this is a good idea, but it is an option.) Also, you could intentionally overexpose a stop and pull process the film. Your lens may also stop down smaller than f/16.
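     Just to put numbers on the overcranking idea from the post above: with a rotary shutter, the per-frame exposure time is (shutter angle / 360) / fps, so doubling the frame rate costs you about one stop of light per frame. A quick sketch of that relationship (the 180° shutter angle is just an assumed example; check your camera's actual shutter):

def exposure_time(fps, shutter_angle=180.0):
    # Per-frame exposure time in seconds for a rotary shutter.
    return (shutter_angle / 360.0) / fps

print(exposure_time(24))  # ~1/48 s at 24 fps
print(exposure_time(48))  # ~1/96 s at 48 fps: one stop less light per frame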
  15. Are you sure it's on the mirror? Could it be on the ground glass? As far as I'm aware, Beaulieus have two pieces of ground glass sandwiched together with glue, and over time that glue can deteriorate and cause viewfinder distortion/irregularities. My personal R16 is slightly blurry on the left side but clear on the right, so I can only get hard focus with part of the viewfinder. Anyway, from what I've read, cleaning the ground glass on a Beaulieu is a huge pain and requires disassembling the whole camera, so I've been told that if you can live with it as it is, you should.
  16. Looks great! Makes me really want to get on rebuilding mine! How do you like the Sankyo projector?
  17. Not sure if this helps, but I do believe they did have some large ludicrous lights that you could use with your camera to allow low-light and indoor shooting. Think wedding movies. Like this: http://www.ebay.com/itm/1950s-WORKING-Acme-Lite-Mov-e-Lite-Home-Theater-Light-Lamp-Bar-for-Movie-Camera-/221978257927?hash=item33aeef6a07:g:~ggAAOSwMmBV2Kfx Of course you have to plug them in, so they're not very portable. Probably not a lot of use in nightclubs. Unless maybe there was a battery pack for them? But I really don't know much about them.
  18. I'd love to see this in a side-by-side or top-bottom 3d format! I've got a GearVR, so I'd love to check it out with true 3D playback instead of Red/Blue. Plus I could watch it in 3D in a giant virtual theater!!
  19. Yeah Simon, you basically want to come up with a way to know when one full revolution has occurred, and you stop the motor at that point, either with an interrupt or by constantly polling whatever sensor you have. The first way I did it was to hack apart a mouse and use the mouse click to trigger the image capture. This did work pretty well at first, but ultimately, because there's a physical thing pressing against a physical part, it wears out and breaks. And because one roll of film is going to click that thing something like 3000+ times, it wore out pretty quickly. So ideally you want a way to detect when a frame is finished being pulled down and is ready for capture without physical contact. The solution I came up with was a photoresistor behind a hole behind the shutter, with an LED on the other side of the shutter. When the shutter is not between the LED and the photoresistor, it detects a lot of light; when the shutter passes between them, it detects less or no light. Every three shutter passes equals one frame, so my program counts three shutter passes and then stops the motor (there's a rough sketch of that loop below). I got this idea from looking at the Müller HM Data Framescanner, which was the initial inspiration to try and build a scanner in the first place. But there may be other, better ways to detect the position of the stepper motor. A hall effect sensor and magnet combo, possibly? Or a rotary encoder on the motor itself? Lots of possibilities. I do plan to look into it more some day soon.
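     Just to make that counting logic concrete, here's a rough Python sketch of the polling approach. The read_light(), start_motor(), stop_motor(), and capture_frame() helpers are hypothetical stand-ins (the real versions depend on your ADC, motor driver, and camera library), and the threshold is something you'd tune for your own LED/photoresistor pair; the point is just the "count three shutter passes, then stop and capture" loop, not my actual scanner code:

import time

LIGHT_THRESHOLD = 512   # tune for your LED/photoresistor combo (arbitrary units)
PASSES_PER_FRAME = 3    # three shutter passes = one full frame pulldown on my projector

def wait_for_shutter_pass(read_light):
    # Block until the shutter blocks the light path and then clears it again.
    while read_light() > LIGHT_THRESHOLD:   # shutter not yet in front of the LED
        time.sleep(0.001)
    while read_light() <= LIGHT_THRESHOLD:  # shutter still blocking the LED
        time.sleep(0.001)

def advance_one_frame(read_light, start_motor, stop_motor, capture_frame):
    # Run the motor for one frame's worth of shutter passes, then grab the frame.
    start_motor()
    for _ in range(PASSES_PER_FRAME):
        wait_for_shutter_pass(read_light)
    stop_motor()
    capture_frame()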
  20. Sorry, just for anybody starting out: if you want to run that program, you need to have Python and OpenCV installed. Then save it as FrameCapture.py, drag it into a terminal window, hit enter, and it'll run. If you want to specify a save location, after you drag it into the terminal window, add "-l" for location, followed by the path to the directory you'd like the images saved in (there's an example just below). Hope that makes sense!
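     For anyone who wants to see exactly what that looks like, the invocation is something along these lines (the path is just an example; point -l at whatever folder you want the frames saved to):

python FrameCapture.py -l /Users/USERNAME/Desktop/Caps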
  21. Also, as a bonus (?), here's some really old code I had. Basically this was the very first proof-of-concept version I had working with the HDV camera. I hacked apart a computer mouse and modified the mouse and projector so that it would click once per revolution. A preview video is shown in a window, and every time the preview window is clicked, the current frame is written to the hard drive. So, once I situated the cursor and hooked the projector-mouse combo up to the computer, it would capture each frame. It was able to capture at fairly reasonable rates (like 15-20fps if I remember correctly! Probably has something to do with it being HDV as opposed to raw images coming off the vision cameras). But it was hard to time the capture exactly, so eventually the shutter would creep in. Then I removed the shutter, and after a while, it would capture a pulldown blur. Just couldn't get it to be reliable, so that's when I moved to the stepper motors and vision cameras. Anyway, here's the code for this. And this was a long long time ago, so no guarantees anything works.

#!/usr/bin/env python
#include "highgui.h"
###########################################################
###                                                     ###
###                  FrameCapture v .9                  ###
###                                                     ###
###########################################################

import cv2, argparse, time

framecount = 0
shuttercount = 3
gocapture = 0
syncdelay = 0.0

parser = argparse.ArgumentParser(description='Example with non-optional arguments')
parser.add_argument('-l', action='store', dest='capture_folder', default='/FrameCapture/Caps',
                    help='Path to Capture Folder, i.e. /Users/USERNAME/Desktop')
thePath = parser.parse_args().capture_folder

def onmouse(event, x, y, flags, param):
    # Called on every mouse event in the preview window; a left click is the
    # "one revolution" signal coming from the hacked-apart projector mouse.
    global shuttercount
    global framecount
    global syncdelay
    if gocapture > 0:
        if event == cv2.EVENT_LBUTTONDOWN:
            if shuttercount >= 6:
                time.sleep(syncdelay)
                cv2.imwrite(saveLoc, frame)
                framecount += 1
                shuttercount = 1
                print 'Frame Captured --> ' + saveLoc
            else:
                print '*' * shuttercount
                shuttercount += 1

print ''
print '-------------------------------------------------------------------------'
print '[] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] ['
print '-----------------------------------------------------------------------/'
print ' \|/ | O - ^^ | | _ _ |'
print ' --O--|/ \ O ^^ | ^^ ||||| | ___ ( ) ( ) _/'
print ' /\ /|\ | --|-- | ^^ |O=O| |_ __/_|_\,_|___|___/'
print '/ \/\ |~~~~~~~~~~~|~~~~~~| ( - ) | -O---O- |'
print ' /\ \/\_| / \ | .-~~~-. | -- -- -- -- -- /'
print ' / /\ \ | | //| o |\\ |______________ |'
print '--------------------------------------------------------------_/'
print '[] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] [] ['
print '------------------------------------------------------------ '
print ' FrameCapture v .9 (c)2013'
print ''

cv2.namedWindow('Initializing Camera...')
camera_index = 0
vc = cv2.VideoCapture(camera_index)
print 'Camera Online. Click in Video Window to Activate.'
print 'Press [h] for help. Press [esc] to Exit.'

if vc.isOpened():  # try to get the first frame
    rval, frame = vc.read()
else:
    rval = False

while rval:
    saveLoc = thePath + '/Frame_' + str(framecount).zfill(3) + '.tiff'
    cv2.imshow('Capture', frame)
    cv2.setMouseCallback('Capture', onmouse)
    rval, frame = vc.read()
    key = cv2.waitKey(20)
    if key == 27:  # exit on ESC
        print 'Goodbye.'
        break
    if key == 32:  # spacebar: toggle capturing on/off
        if gocapture < 1:
            gocapture = 10
            print 'Starting Capture...'
        else:
            gocapture = 0
            print 'Ending Capture...'
    if key == 46:  # period: increase sync delay
        syncdelay += .01
        print 'Delay:', syncdelay
    if key == 44:  # comma: decrease sync delay
        if syncdelay >= 0.01:
            syncdelay -= .01
            print 'Delay:', syncdelay
        else:
            print 'Delay: 0.00'
    if key == 48:  # 0: reset sync delay
        syncdelay = 0
        print 'Delay: 0.00'
    if key == 78 or key == 110:  # n key (note: only bumps the index, doesn't reopen the capture)
        camera_index += 1  # try the next camera index
        print 'Switching Cameras...'
    if key == 72 or key == 104:  # h key: help
        print ''
        print 'HELP:'
        print '  To change capture location, invoke FrameCapture with FrameCapture.py -l /PATH/TO/LOCATION'
        print '  Press [space] to start/stop capturing frames.'
        print '  Press [period] to increase delay. Press [comma] to decrease delay. Press [0] to reset delay.'
        print ''

Enjoy! I'll try to answer any questions if anybody's got any!
  22. Hey Sam, let me just say this about cameras: from my limited experience, no matter how much a manufacturer claims that they comply with standards, it always takes some work to get the cameras working. For example, I just got an Imaging Solutions LW-5-S-1394 (2.5K camera upgrade!), but I haven't been able to get OpenCV to interface with it yet, despite the claim that both it and my current camera use the IIDC DCAM standard. So that's going to take some work. But I digress. The version I had working with iSight was from a long time ago, when I was using an HDV camera with a removable lens as my capture camera. OpenCV was able to natively pull images from those cameras; it wasn't until I got into machine vision that I needed to look to other libraries. But anyway, here's a basic Python program that should be able to pull images out of the iSight. It was hanging on my system, but it looks like that might be an OpenCV version / possibly a Yosemite issue. The code should work. Will it work with a webcam? Maybe? But honestly I'd be surprised if it was that easy. Anyway, here's the code:

#!/usr/bin/env python
#include "highgui.h"

import cv2

cv2.namedWindow("preview")
vc = cv2.VideoCapture(0)
rval = True

while rval:
    rval, frame = vc.read()
    if not rval:  # stop if the camera stops returning frames
        break
    cv2.imshow("preview", frame)
    key = cv2.waitKey(20)
    if key == 27:  # exit on ESC
        break
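     If you do want to try it with a webcam, one quick way to figure out which index OpenCV sees it on is just to probe the first few indices and check which ones open. A minimal sketch (plain OpenCV, nothing specific to my scanner setup):

import cv2

# Probe the first few camera indices and report which ones OpenCV can open.
for index in range(4):
    cap = cv2.VideoCapture(index)
    if cap.isOpened():
        ok, frame = cap.read()
        print('Camera %d opened, first read %s' % (index, 'ok' if ok else 'failed'))
    else:
        print('Camera %d could not be opened' % index)
    cap.release()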
  23. An SR3 should be pretty damn steady...