Josh Gladstone

DIY Film Scanner (With Samples)


Guest Peter Charuza

Wow, this is really coming along great! That's awesome you're developing your own film too, double whammy. The latest setup, based on macro photography with bellows, is definitely pulling much more resolution out of the negative. What are you looking to improve upon next? Dynamic range? HDR each frame from multiple photos at various exposures? The helicopter shot was a great exposure, so you are definitely able to get a lot of information from each frame.

 

How do you feel your method compares to scanning the film on a flatbed scanner, automating that process much like you have, with motors pulling the film through, etc.? This method would be heavy on the software side, both for grabbing each frame from a giant scan and for controlling the machinery.

To be honest you could even cut the film up and have it run in shorter sections all lined up on the scanner to best save space and maximize frames per scan.
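For what it's worth, the frame-grabbing half of that software problem is mostly cropping at a fixed pitch. Here's a minimal sketch (all names hypothetical, assuming an already-aligned strip scan with evenly spaced frames):

```python
import numpy as np

def frames_from_strip(strip, frame_height, frame_width, y0=0, x0=0):
    """Crop successive frames out of one tall flatbed scan of a film strip.

    Assumes the frames sit at a fixed pitch down the strip and are already
    aligned; a real tool would register each frame off the perforations
    rather than trusting the pitch.
    """
    frames = []
    y = y0
    while y + frame_height <= strip.shape[0]:
        frames.append(strip[y:y + frame_height, x0:x0 + frame_width])
        y += frame_height
    return frames

# Synthetic "scan": 10 frames of 40x30 px stacked vertically.
scan = np.arange(400 * 30).reshape(400, 30)
frames = frames_from_strip(scan, 40, 30)
```

Perforation detection and multi-strip stitching are where the real work would be, but the core loop is this simple.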



How do you feel your method compares to scanning the film on a flatbed scanner, automating that process much like you have, with motors pulling the film through, etc.? This method would be heavy on the software side, both for grabbing each frame from a giant scan and for controlling the machinery.

To be honest you could even cut the film up and have it run in shorter sections all lined up on the scanner to best save space and maximize frames per scan.

 

Here you have all instructions and software for flatbed scanning:

http://wkurz.com/

 

http://hosting.aktionspotenzial.de/CineToVidWiki/index.php/Hauptseite

Guest Peter Charuza

Thanks! That's so funny I found the same link last night snooping around. I'm going to give this a try this weekend and see how it comes out. I have a ton of 16mm shot that I need to digitize. At the very least I have a whole short film that I never got to edit due to the cost of Telecine. This would be a dream if the quality is decent.


Hey Peter, thanks for the kind words! I actually did experiment with HDR/bracketing exposures and then stitching them back together in post. I had some results posted on the previous page, but here's the video:

 

The real issue with it was that it took 3-4 times longer in an already slow process, and the results weren't much better (if at all) than just a single exposure.
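(For anyone curious what merging brackets involves: below is a toy well-exposedness blend in the spirit of Mertens-style exposure fusion, not necessarily how the video above was made. OpenCV ships a fuller version as cv2.createMergeMertens().)

```python
import numpy as np

def merge_exposures(stack):
    """Merge bracketed exposures of one frame by well-exposedness weighting.

    A sketch only. Each pixel is weighted by how far it sits from pure
    black/white (mid-gray is ideal), then the exposures are averaged with
    those weights. Inputs are float images scaled 0..1.
    """
    stack = np.asarray(stack, dtype=np.float64)
    weights = 1.0 - np.abs(stack - 0.5) * 2.0   # 1 at mid-gray, 0 at clip
    weights = np.clip(weights, 1e-6, None)      # avoid divide-by-zero
    return (stack * weights).sum(axis=0) / weights.sum(axis=0)

# Two brackets of the same frame: one nearly blown out, one nearly black.
bright = np.full((4, 4), 0.95)
dark = np.full((4, 4), 0.05)
merged = merge_exposures([bright, dark])
```

As noted above, though, the merge itself is cheap; it's the 3-4x capture time that kills the workflow.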

 

I never really considered going the flatbed route. I did look at it when doing my initial research, but decided against it, and I'm not really sure why. I guess for workflow reasons. The Müller HM framescanner is sort of my dream goal, and that's a machine vision design. Maybe that's why I went that direction.


Does anyone have any comments on applying this to 35mm? I am trying to come up with a solution for transferring old nitrate based 35mm on a low/no budget.


Yes, actually. The scanner I made broke and I'm in the process of making a new one, as well as a 16mm version. I should be able to use pretty much the same setup on both, and there's no reason to think 35mm would be any different (assuming the stepper motor is able to turn the 35mm mechanics).


Hi Josh! I bookmarked your post and thread a while back and have finally started delving into the Arduino world in preparation for making my own version of this project! I was wondering if you could share with me StepCapture and your Arduino sketch? I bought the stepper motor and have an Arduino Uno. I'm thinking about trying to mod this for a DSLR setup but I just wanted to see the guts of how everything is working for you. An electrical plan would be cool to see too!


Awesome! I'm absolutely happy to help! It should totally be modifiable for DSLR use, you'd just have to remove the machine vision capture code and replace it with code to fire your solenoid / trigger capture however you're doing it, and then maybe add a delay to ensure your frame was captured and advance to the next frame. Should be very doable.

 

Just a quick word of caution about the DSLR route, though. Most DSLRs have a shutter rated for something like 50,000-100,000 exposures. So, with each roll of Super 8 film having somewhere around 3,400 frames, you might find that you're wearing your DSLR out very quickly. That's one of the reasons why I went the machine vision route.
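The arithmetic, for anyone weighing this trade-off (frames per roll taken from the estimate above):

```python
def rolls_before_wear(shutter_life, frames_per_roll=3400):
    """Rough count of Super 8 rolls a DSLR shutter survives.

    frames_per_roll ~3400 is the per-cartridge estimate from the post above;
    shutter_life is the manufacturer's rated actuation count.
    """
    return shutter_life // frames_per_roll

# A 100,000-exposure shutter lasts roughly 29 rolls; a 50,000-exposure
# shutter roughly 14 -- not much of an archive before replacement.
low = rolls_before_wear(50000)
high = rolls_before_wear(100000)
```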

 

Anyhow, I'm about to replace the belt in my super 8 scanner, so I'll make you a video overview of how all the hardware and software work. And I'll upload all the code somewhere too. I can show you my 16mm scanner also, if you're interested. It's pretty much exactly the same but larger.

Edited by Josh Gladstone


Good point about the DSLR! I'll probably just use my old 30D since I don't care too much about it. Or maybe buy a used one online to care even less about.

 

Yes! That video would be incredibly appreciated. As for where to put the code... GitHub?

 

Less interested in the 16mm because I don't have enough shot that I'd want to build a version for it, but I'm sure someone else will stumble upon this and want it so I say go for it!

 

Also, I noticed we're both in LA. I'd love to come check out your setup, if that wouldn't be weird for you. No pressure there, feel free to just post the video and code. Thanks for your help!

Just a quick word of caution about the DSLR route, though. Most DSLRs have a shutter rated for something like 50,000-100,000 exposures. So, with each roll of Super 8 film having somewhere around 3,400 frames, you might find that you're wearing your DSLR out very quickly. That's one of the reasons why I went the machine vision route.

 

Hi, this is a great project! Concerning DSLRs, and bypassing the shutter wear: why not use the camera in video mode and output the frames through the HDMI socket?

I have a Lumix GH4 capable of 4K video, and it can feed an external display or an external recorder, so it should be able to feed a video card as well. There won't be any sync issue if your camera delivers a progressive image. Does that make sense?

Edited by Damien Dubois


You could capture one frame from a stream of video. My very first setup was with a Sony HDV camera over FireWire, and much to my surprise at the time, OpenCV handled it out of the box. I'm not sure about 4K over HDMI. I mean, I know it's possible, I have a 4K TV hooked up via HDMI, but I have no idea about capturing it. Theoretically it's possible though. Why not?

 

So yes, I'm sure you could make a DSLR work with my setup in a variety of ways. Of course, I've never tried it, so I can't vouch for a DSLR's lifespan or reliability or what have you. But yes, it should be able to work.

 

That said, I do have a Sony 720p FireWire machine vision camera sitting around somewhere. I've been meaning to sell it on eBay for over a year, but I'm too lazy to ever get around to it. I think I'd sell it for like $200 if anyone is interested. I can dig it out, give you more specs, and test it out if there's interest. If not, I'll put it on eBay. Eventually...

 

edit: video and code coming real soon I promise.

Edited by Josh Gladstone


It can be done via HDMI or through the YAGH console for the GH4 using four SDI outputs. An Odyssey 7Q+ records 4K through HDMI input, and the 7Q records it via SDI.

 

I wonder if aliasing would come up between variations in camera and sensor size.


Right, but physically connecting is one thing, and actually pulling bits out of the camera is another. My machine vision camera uses a standard interface, so I'm using pyDC1394, an open-source driver, to communicate with it. But I have never tried to get data from anything over HDMI or SDI. I'd imagine once you get into graphics card drivers, things get complicated, but I really have no idea. Maybe OpenCV reads that stuff out of the box like it did with HDV.

Edited by Josh Gladstone


What about using a Blackmagic Intensity Pro to capture the 4K GH4 signal onto a computer? Then you could use any software you needed. I'm not sure how this applies to machine vision cameras, though. Just speaking to your point about recording a 4K video signal via HDMI.


But what is "any software you need"? What are you going to use to display the images and capture the frames? Normally you'd use Final Cut Pro or DaVinci Resolve or whatever, but if you go that route, how do you actually trigger it to capture a frame and write it to a hard drive at the proper moment? In my case, I decided to write a Python program and used OpenCV to handle all the image displaying and writing. That worked out of the box with an HDV camera, but needed a driver and totally different capture code to work with the uncompressed data coming off the vision camera. So it's possible OpenCV would also work with an Intensity Pro, and it's also possible that frame capturing is a feature of Resolve, or that someone else has already written software to do just that, but since I've never tried any of those things, I can't really say.


Here's the video. For some reason I forgot to describe the shutter sensor on the back of the projector, but basically I drilled a hole in the shutter housing and taped a light dependent resistor to the back side of the hole. Then on the front side, in front of the shutter, there's a large LED pointed right at it (clearly visible in the video). So that LDR senses the light levels as the shutter passes between it and the LED, allowing the Arduino to sense thirds of a rotation.
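The counting logic that turns those light readings into thirds of a rotation can be sketched on its own (illustrative Python mirroring what the Arduino does with the LDR; the threshold matches the sensorThreshold in the sketch below):

```python
def shutter_passes(readings, threshold):
    """Count dark->light transitions in a stream of LDR readings.

    Each time the reading crosses the threshold from below, one shutter
    blade has cleared the LED, so three counts equal one full rotation of
    a three-bladed shutter.
    """
    count = 0
    last_lit = False
    for r in readings:
        lit = r > threshold
        if lit and not last_lit:
            count += 1
        last_lit = lit
    return count

# Simulated readings as three blades pass between the LED and the LDR:
readings = [30, 240, 250, 25, 20, 235, 30, 245, 40]
```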

 


So I'm just going to copy/paste the code at the bottom of this. Keep in mind, I'm really, really not a programmer; I taught myself all this stuff, and pretty recently, so it's very hacky. Also, my 16mm version has a couple of features the Super 8 version doesn't, so there's some leftover code. Namely, there's a sonar sensor to detect rollout, and an RGB LED in the front that changes colors depending on whether or not it's capturing, has an error, etc. So pretty much ignore anything pertaining to setting colors or rollouts.

 

I'll briefly walk through the code here, and then I'll paste it below. The first part is all setting up pins and variables. Below that is a section commented "USEFUL VARIABLES". This is where you can change the stepper speed, which might be higher or lower depending on the size/power/torque of your stepper motor. The lower the number, the faster the motor will go (it's the microsecond delay between steps); too low and the motor won't spin. shutterOffset is how many times you want the light sensor to count a low/high pass before it triggers a capture (i.e., how many blades the shutter has). sensorOffset is used if the light sensor stops the shutter in a position where it's covering the frame; it adds however many extra steps you specify to move the shutter out of the way. sensorThreshold is the brightness reading above which the sensor counts as uncovered.

 

Speaking of the light sensor, the next bit down, and then a bit below void setup(), has to do with that. The light sensor takes a few milliseconds to poll, so you don't want to be doing that after every step of the motor; that would slow it down a lot. But you also don't want to poll only every now and then, because that would be less accurate. So the best way is to run it as an interrupt, and that's what the ISR and adcReading bits are for.

 

The "startbyte==" stuff is all about communicating with the python application, toggling LEDs, stepping the motor forward on keypress, etc. So I guess you can more or less ignore this part.

 

AdvanceMotor sets the motor in motion until the interrupt counts three shutter passes (or however many you set), then stops and prints "99", which is what the Python software watches for to know that the stepper is done advancing. The software then captures a frame and advances the motor again. And so on.
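For anyone wiring up their own host side, here is a rough Python sketch of that handshake. This is not the actual StepCapture code: the port is faked with in-memory streams so the logic runs without hardware, and in real use you'd pass in a pySerial port and an OpenCV capture call instead.

```python
import io

def scan_loop(port, capture_frame, n_frames):
    """Host-side sketch of the advance/capture handshake.

    `port` is any object with write()/readline() (e.g. a pySerial Serial).
    Send 'x' (ASCII 120) to request one full frame advance, wait for the
    Arduino to print "99" when the shutter count is reached, then grab a
    frame and repeat. The firmware waits for three buffered bytes before
    reading a command, hence the padding.
    """
    for _ in range(n_frames):
        port.write(b"x\x00\x00")              # request one frame advance
        while True:
            line = port.readline().decode().strip()
            if line == "99":                  # Arduino: advance complete
                capture_frame()
                break

# Fake the serial port with in-memory streams so this runs anywhere.
incoming = io.BytesIO(b"99\n99\n")
sent = io.BytesIO()

class FakePort:
    def write(self, data):
        sent.write(data)
    def readline(self):
        return incoming.readline()

frames = []
scan_loop(FakePort(), lambda: frames.append("frame"), 2)
```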

 

I think that'll get you going. Feel free to ask questions. Again, apologies for the messy code. Just looking at some of it briefly, I already see stuff that could easily be cleaned up, but this is sort of a work in progress.

 

 

 

 

 

//ArduinoStepCaptureControl some old code may remain from:
//ArduinoStepperControlv5.0
//2012-04-14 jdreyer
//Stepper motor control program
int stepsToMoveNeg;
int StartByte;
int i;
int userInput[3];
int digitalVal;
int dir;
int steps;
int previous=0;
int val;
char direction;
int negVal;
int StepperPosition=0;
int byte1;
int byte2;
float Version=5.0;
const int buttonPin = A0;
int buttonState = 0;
long startTime;
long timeDifference;
int startStop=0;
const int DirectionPin=6;
const int StepPin=7;
const int LedPin=4;
const int Led2Pin=5;
const int workLedPin=12;
const int redPin = 9;
const int greenPin = 10;
const int bluePin = 11;
const int trigPin = 3;
const int echoPin = 2;
int sensorCount = 1;
int stepsTweak = 20;
int sensorState = 0;
int lastsensorState = 0;
int motorGo = 0;
int indicatorLEDToggle = 3;
int sensorOffsetCount = 0;
int delayTime = 1000;
// USEFUL VARIABLES
// ESPECIALLY WHEN YOU CHANGE PROJECTORS
const int stepSpeed= 200; // The speed of the stepper motor (Default: 200)
int shutterOffset = 3; // The number of blades in the shutter (Default: 3)
int sensorOffset = 0; // If the sensor doesn't cause the shutter to stop in the correct position (Default: 0)
int sensorThreshold = 220; // Brightness threshold for shutter light sensor (Default: 80)
//
//
#define COMMON_ANODE
const byte adcPin = 0;
volatile int adcReading;
volatile boolean adcDone;
boolean adcStarted;
void setup() {
  Serial.begin(115200);
  pinMode(LedPin, OUTPUT);
  pinMode(Led2Pin, OUTPUT);
  pinMode(workLedPin, OUTPUT);
  pinMode(DirectionPin, OUTPUT);
  pinMode(StepPin, OUTPUT);
  pinMode(redPin, OUTPUT);
  pinMode(greenPin, OUTPUT);
  pinMode(bluePin, OUTPUT);
  // setColor(0, 0, 0);
  digitalWrite(DirectionPin, LOW);
  delay(10);
  digitalWrite(workLedPin, HIGH);
  ADMUX = bit (REFS0) | (adcPin & 0x07);
}
ISR (ADC_vect)
{
  byte low, high;
  low = ADCL;
  high = ADCH;
  adcReading = (high << 8) | low;
  adcDone = true;
} // end of ADC_vect
void loop()
{
  ///////////////////////////
  if (adcDone) {
    adcStarted = false;
    if (adcReading > sensorThreshold) {
      sensorState = 1;
    } else {
      sensorState = 0;
    }
    ///////////////////////////
    if ((lastsensorState == 0) && (lastsensorState != sensorState)) {
      sensorCount += 1;
    }
    if (Serial.available() > 2) {
      StartByte = Serial.read();
      if (StartByte == 5) {
        Serial.println(StepperPosition);
      }
      if (StartByte == 118) {
        Serial.println(Version);
      }
      if (StartByte == 70) {
        digitalWrite(LedPin, HIGH);
        delayMicroseconds(10);
      }
      if (StartByte == 69) {
        digitalWrite(LedPin, LOW);
        delayMicroseconds(10);
      }
      if (StartByte == 72) {
        digitalWrite(Led2Pin, HIGH);
        delayMicroseconds(10);
      }
      if (StartByte == 71) {
        digitalWrite(Led2Pin, LOW);
        delayMicroseconds(10);
      }
      if (StartByte == 75) {
        digitalWrite(workLedPin, HIGH);
        delayMicroseconds(500);
      }
      if (StartByte == 74) {
        digitalWrite(workLedPin, LOW);
        delayMicroseconds(500);
      }
      if (StartByte == 100) {
        Serial.println("99");
      }
      if (StartByte == 120) {
        // One full rotation forward
        motorGo = 1;
      }
      if (StartByte == 121) {
        // One full rotation backward
        motorGo = 2;
      }
      if (StartByte == 122) {
        // Partial rotation forwards
        motorGo = 3;
      }
      if (StartByte == 123) {
        // Partial rotation backwards
        motorGo = 4;
      }
      if (StartByte == 125) {
        MotorStartup();
      }
      if (StartByte == 40) { // Indicator LED Red
        // setColor(255, 0, 0);
      }
      if (StartByte == 41) { // Indicator LED Red Low
        // setColor(1, 0, 0);
      }
      if (StartByte == 42) { // Indicator LED Green
        // setColor(0, 255, 0);
      }
      if (StartByte == 43) { // Indicator LED Green Low
        // setColor(0, 1, 0);
      }
      if (StartByte == 44) { // Indicator LED Blue
        // setColor(0, 0, 255);
      }
      if (StartByte == 45) { // Indicator LED Blue Low
        // setColor(0, 0, 1);
      }
      if (StartByte == 46) { // Indicator LED White
        // setColor(255, 255, 255);
      }
      if (StartByte == 47) { // Indicator LED White Low
        // setColor(1, 1, 1);
      }
      if (StartByte == 48) { // Indicator LED Off
        // setColor(0, 0, 0);
      }
      if (StartByte == 49) { // Indicator LED Toggle
        if (indicatorLEDToggle == 0) {
          indicatorLEDToggle = 3;
        } else {
          indicatorLEDToggle -= 1;
        }
        // setColor(0, 255, 0);
      }
      if (StartByte == 255) {
        for (i = 0; i < 2; i++) {
          userInput[i] = Serial.read(); // store each byte in the array
        }
        byte1 = userInput[0];
        byte2 = userInput[1];
        val = (byte2 << 8) | byte1;
        delayTime = val;
        // Serial.println(delayTime);
      }
    }
    ///////////////////////////
    adcDone = false;
  }
  if (!adcStarted)
  {
    adcStarted = true;
    ADCSRA |= bit (ADSC) | bit (ADIE);
  }
  ///////////////////////////
  whatToDo();
  lastsensorState = sensorState;
}
void whatToDo() {
  if (motorGo == 1) {
    AdvanceMotor();
  } else if (motorGo == 2) {
    RetreatMotor();
  } else if (motorGo == 3) {
    AdvanceALittle();
  } else if (motorGo == 4) {
    RetreatALittle();
  }
}
void AdvanceMotor() {
  if (sensorCount >= shutterOffset) {
    if (sensorOffset > 0) {
      SensorOffsetMovement();
    }
    sensorCount = 0;
    motorGo = 0;
    // setColor(0,255,0);
    delay(delayTime);
    Serial.println("99");
    // rolloutCheck();
  } else {
    digitalWrite(StepPin, LOW);
    digitalWrite(StepPin, HIGH);
    delayMicroseconds(stepSpeed);
  }
}
void RetreatMotor() {
  if (digitalRead(DirectionPin) == 0) {
    digitalWrite(DirectionPin, HIGH);
    delay(50);
  }
  if (sensorCount > (shutterOffset + 1)) {
    sensorCount = 0;
    motorGo = 0;
    // setColor(0,255,0);
    delay(50);
    digitalWrite(DirectionPin, LOW);
    delay(50);
    motorGo = 3;
  } else {
    digitalWrite(StepPin, LOW);
    digitalWrite(StepPin, HIGH);
    delayMicroseconds(stepSpeed);
  }
}
void AdvanceALittle() {
  if (sensorCount >= 1) {
    if (sensorOffset > 0) {
      SensorOffsetMovement();
    }
    delay(10);
    sensorCount = 0;
    motorGo = 0;
    // setColor(0,255,0);
  } else {
    digitalWrite(StepPin, LOW);
    digitalWrite(StepPin, HIGH);
    delayMicroseconds(stepSpeed);
  }
}
void RetreatALittle() {
  if (digitalRead(DirectionPin) == 0) {
    digitalWrite(DirectionPin, HIGH);
  }
  if (sensorCount >= 1) {
    sensorCount = 0;
    motorGo = 0;
    // setColor(0,255,0);
    digitalWrite(DirectionPin, LOW);
  } else {
    digitalWrite(StepPin, LOW);
    digitalWrite(StepPin, HIGH);
    delayMicroseconds(stepSpeed);
  }
}
void SensorOffsetMovement() {
  delay(50);
  while (sensorOffsetCount < sensorOffset) {
    digitalWrite(StepPin, LOW);
    digitalWrite(StepPin, HIGH);
    delayMicroseconds(350);
    sensorOffsetCount += 1;
  }
  sensorOffsetCount = 0;
}
void MotorStartup() {
  if (buttonState < sensorThreshold) {
    digitalWrite(DirectionPin, HIGH);
    while (buttonState > sensorThreshold) {
      buttonState = analogRead(buttonPin);
      digitalWrite(StepPin, LOW);
      digitalWrite(StepPin, HIGH);
      delayMicroseconds(stepSpeed);
    }
    delay(100);
    digitalWrite(DirectionPin, LOW);
    while (buttonState < sensorThreshold) {
      buttonState = analogRead(buttonPin);
      digitalWrite(StepPin, LOW);
      digitalWrite(StepPin, HIGH);
      delayMicroseconds(stepSpeed);
    }
  }
  if (buttonState > sensorThreshold) {
    digitalWrite(DirectionPin, HIGH);
    while (buttonState > sensorThreshold) {
      buttonState = analogRead(buttonPin);
      digitalWrite(StepPin, LOW);
      digitalWrite(StepPin, HIGH);
      delayMicroseconds(stepSpeed);
      // Serial.println(buttonState);
    }
    delay(500);
    digitalWrite(DirectionPin, LOW);
    while (buttonState < sensorThreshold) {
      buttonState = analogRead(buttonPin);
      digitalWrite(StepPin, LOW);
      digitalWrite(StepPin, HIGH);
      delayMicroseconds(stepSpeed);
    }
  }
}
void setColor(int red, int green, int blue) {
  if (indicatorLEDToggle == 2) {
    if (red > 0) {
      red = 30;
    }
    if (green > 0) {
      green = 30;
    }
    if (blue > 0) {
      blue = 30;
    }
  } else if (indicatorLEDToggle == 1) {
    if (red > 0) {
      red = 1;
    }
    if (green > 0) {
      green = 1;
    }
    if (blue > 0) {
      blue = 1;
    }
  } else if (indicatorLEDToggle == 0) {
    red = 0;
    green = 0;
    blue = 0;
  }
#ifdef COMMON_ANODE
  red = 255 - red;
  green = 255 - green;
  blue = 255 - blue;
#endif
  analogWrite(redPin, red);
  analogWrite(greenPin, green);
  analogWrite(bluePin, blue);
}
void rolloutCheck() {
  long duration, inches, cm;
  pinMode(trigPin, OUTPUT);
  digitalWrite(trigPin, LOW);
  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH);
  delayMicroseconds(10);
  digitalWrite(trigPin, LOW);
  pinMode(echoPin, INPUT);
  duration = pulseIn(echoPin, HIGH);
  if (duration > 1000) {
    Serial.println("666");
  } else {
    Serial.println("333");
  }
}
Edited by Josh Gladstone


Interesting idea. If you didn't mind getting down-and-dirty, you could run the HDMI feed live into a video capture program of some kind and just trigger the computer to take screen grabs at the moment you would otherwise trigger a data capture or a shutter release, then crop in post. Obviously there'll be some quality loss there, but it's certainly a quick solution if you're not super interested in post-processing the image!


Cool. When I'm done with it (hopefully in a few weeks), I'll post some footage of my 35mm scanner. It's an older Imagica scanner, with the guts removed. Using an Arduino Mega to control it, currently building a custom LED (RGB) lamphouse. It'll be slow, but will do multi-flash per color RGB scanning with a mono sensor. I'm using a cheap color machine vision camera right now (4.6k), but it's just for prototyping. Once it's done, I'll swap it out for a higher end camera.

 

The camera's frame grabber board has a separate C++ library that gives you full control over most of its functionality, so I bought that from the manufacturer. As long as I use one of their CameraLink boards in the final version, it's just a swap-out replacement and will work with nearly any CameraLink camera.

 

The control software is being built in RealStudio (currently Xojo, but I use an older version). I built a whole serial command and response system in the Arduino, and a library of transport functions that you can call from the application. Basically, I send it a serial message and it does what I tell it, then reports back. The scanner I'm basing this on had 5-phase stepper motors and motor drivers (big external boxes, but conceptually similar to the one you're using), so I'm controlling steppers for forward and reverse motion, lens focus, camera platform focus, and pin-registration/pressure plate. Kind of mind blowing what you can do with a $20 controller like the Mega.

 

The plan is to get PCBs made for this once I'm satisfied it's working properly, then mount everything in a 1 rack unit box inside the chassis. It's taken far too long to get to this point, but it's nearly done and now it's becoming a lot more fun.

 

What are you using to invert the negative and remove the orange cast? Is that a filter in OpenCV? I'll be doing all the image processing in memory using ImageMagick, so that's not particularly hard to deal with; it'll just take some calibration to get it just right.

 

Here's some early footage of mine, taken a few weeks ago, with the transport finally responding as expected to my commands:

 

https://www.youtube.com/embed/YWuAcmAf2ww

 

 

 

-perry

Edited by Perry Paolantonio


That's super cool Perry. I like that you can tell it to go to a certain frame and it'll drive to that point.

 

Where'd you get the Imagica from? I've always wanted to build something less projector-y. The Müller HM was the initial inspiration for me to get into building a scanner in the first place, so ideally I'd love to make something multiformat with a simple film path. But first I still need to perfect what I have so far. Some day!

 

Yeah, OpenCV handles all the image processing, debayering, writing to disc, even talking to the camera and setting exposure.

(Basically, for inverting 8-bit images, you just set each pixel to 255 minus its current value. So if it was 255, now it's 0, and if it was 0, now it's 255. For 16-bit images you'd do 65535 minus the current value.)
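That inversion is a one-liner in NumPy, picking the ceiling from the image's dtype (a sketch, not the actual scanner code):

```python
import numpy as np

def invert(img):
    """Invert a scan as described above: new = max_value - old.

    255 for 8-bit images, 65535 for 16-bit; np.iinfo reads the ceiling
    straight from the array's integer dtype.
    """
    max_val = np.iinfo(img.dtype).max
    return max_val - img

neg8 = np.array([[0, 128, 255]], dtype=np.uint8)    # 8-bit negative row
neg16 = np.array([[0, 65535]], dtype=np.uint16)     # 16-bit negative row
```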

 

For the orange hue, I actually have a small cyan filter in front of the lamp. I always have some diffusion in there, but for color negative, I add a cyan filter. Works pretty great.


You really got that Imagica 3000 torn down Perry!

 

It looks like they moved the tri-linear CCD in that older machine; our ImagerXE moves the film and the CCD stays put.

 

 

You might want to look at these:

 

http://www.edmundoptics.com/optics/windows-diffusers/optical-diffusers/holographic-diffusers/1363

 

Rennie uses them in the Xena lamp houses, and they make a nice even field out of the gate. The Xena also has a software field-correction calibration.

 

There's an 8.8K, 49-megapixel sensor coming out in a few months from the company that was Kodak-Truesense.

 

I like the 6.6K ex-Kodak monochrome sensor as a possible replacement for the 4K sequential-color monochrome sensor in our Xena.

 

http://imperx.com/ccd-cameras/b6620/


Kind of mind blowing what you can do with a $20 controller like the Mega.

 

Also this. Seriously. I didn't know how to do any of this stuff about a year and a half ago, and now I've got a million ideas for projects to tinker with.


