
RED BUILD 14 FULL TEST


Zac Halberd


Hi Jim,

 

I understand that DR has changed little since Claudio's test. He had to repeat his first test with another camera due to 'issues'.

 

Are you saying that a Red with Build 14 beta will now beat an F23? I understand that Build 15 beta is not yet available.

 

Stephen

 

We'll leave competitive testing to others. Build 15 is a big improvement on many image quality fronts. Just want to make sure posts reflect all the info, not just selected sound-bites to support a position.

 

We are in development... always will be.

 

Jim


  • Premium Member
And Claudio's test was with an old build with issues, which you forgot to mention. You seem to be selective in the facts you choose to post.

Yes I know the grass is always greener on the other side, but people are shooting today with your camera, not tomorrow. At some point you'll have to let the camera speak for itself, instead of continually undermining people's tests by saying that the 'next' build will be so much better.


  • Premium Member

I hate to throw something entirely subjective into the mix, but I've sat there and looked at a Red and the image it's putting up on the preview monitor, and I've said:

 

Eleven stops my arse.

 

It is so clearly not eleven stops that you can tell by looking!

 

P


Graeme, my problem is that a few months ago we ended that particular discussion the hard way, with Jim coming forward and posting a halt to discussion of the DR 'issues' that arose when I picked up the test Adam did. The post was something like 'I don't allow Graeme to talk about that matter any more!'

 

That was probably because our engineering discussion didn't have good marketing results for RED. Or at least that is what I understand from his act.

 

I perfectly understand his position, and I didn't continue any further, since the problem you had was obvious...

 

That is one thing; but having the nerve to argue with a poor 1st AC who happens to post his humble personal adventure with THE SAME IMATEST PICTURE that forced him to shut down our discussion back then, that is another thing...

 

You guys at RED must decide: do you accept Imatest and transmissive charts as a basis of discussion for DR measurement, or not?

 

Because I see the boss using Imatest to tell a poor guy (and a community of cinematographers that listens without being able to argue properly) that his camera delivers 11,4 stops, while at the same time the lead engineer, at the first real conflict with a real engineer, immediately tries to discredit the case with the classic FUD move.

 

Jim and Graeme, PLEASE talk to each other before posting; you are not alone...

 

JIM, BEWARE, WE ARE NOT IDIOTS!

 

Yes, I know it was a bad coincidence (for RED) that I happened to read that thread this morning...

 

I'm hoping that, after all this, RED becomes a better camera... and not that all of us become better (more forgiving, more tolerant) RED users...

 

Cameras should serve cinematographers, NOT vice versa!

 

 

 

 

Back to the engineering discussion...

 

Stephen, Max, and others: I'm willing to run through my Imatest software a still frame (TIFF) shot on a Viper, F23, or other camera, of a Danes Picta or Stouffer transmissive step chart, properly lit as Imatest specifies.

 

The chart is very cheap, Stephen. Danes is from the Czech Republic, and it costs something like 50 euros. The chart is the Danes Picta TS28D, calibrated in 28 steps of 0,15D each, a total of 14 stops. For US residents, the Stouffer T4110 has 41 steps of 0,10D each, a total of 13,6 stops. I use the Danes.

 

It needs a black card surface to mask the backlight, a diffused light source like a four-lamp Kino Flo, and a darkened room. First do a black balance, then adjust the camera's white balance to the white point of the Kino Flo. Then adjust the iris, with the help of a waveform monitor, until the brightest step reaches the absolute top of the waveform; capture a 16-bit TIFF frame with no processing; email it to me; and I will reply with the analysis...

 

That's it, simple and easy.

 

Now, where Graeme finds the difficulty in that process I don't know. Probably getting good results needs a lot of tweaking in the curves, who knows...
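
The chart arithmetic above is easy to verify; a quick check using only the figures quoted in the post (0,30D of density equals one stop):

```python
# Sanity check of the chart figures quoted above: 0.30D of density = 1 stop.
STOP_DENSITY = 0.30

def wedge_range_stops(n_steps: int, step_density: float) -> float:
    """Total density of the step wedge expressed in photographic stops."""
    return n_steps * step_density / STOP_DENSITY

print(wedge_range_stops(28, 0.15))  # Danes Picta TS28D: 14.0 stops
print(wedge_range_stops(41, 0.10))  # Stouffer T4110: ~13.7 stops (quoted as 13,6)
```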


  • Premium Member

Hi Evangelos,

 

I was hoping somebody else would do the test; I don't want the fanboys screaming 'foul' if their illusion gets shattered.

 

I will get a chart myself so I can make some personal tests of Red once the camera comes out of beta.

 

 

Stephen


Because talking about this wastes lots of time with endless back and forth.

 

Backlit wedge tests are currently my best method. I don't like the stopping down the lens / multiple shot method due to the abundance of frames / data to analyse. It's also quite open to human error and relies on the calibration of the lens, among other things. Also, by DR, we want to know what the dynamic range can be at one time, not over time, so doing it all in one frame makes sense. I think we agree on this!

 

Imatest is a clever program indeed, but for me it's not a definitive way to test DR. That is because it is not consistent: using different curves gives different results (it should not, in my opinion), and when I put known linear data through a known gamma curve, the software did not accurately tell me which gamma value I'd used. So to put it plainly: no, I don't accept Imatest as a tool for measuring dynamic range. I've said why, and I've said so multiple times. If I get different results by changing things which should not change the answer, I cannot accept it.

 

As Jim says, we're not interested in doing comparative tests ourselves - but we do test constantly to help us improve what we do, and we use our own methods for that, as I've outlined before. If Imatest worked properly with known linear light data (no futzing of curves allowed) I'd have been happy to use it, but without that, I'll probably have to write my own internal tool.
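
A minimal sketch of the inconsistency Graeme describes, on simulated data rather than Imatest's actual internals: the same linear sensor values, pushed through different curves, cross a fixed per-patch SNR threshold at different depths, so a naive step counter reports different 'DR' figures for identical data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated linear-light wedge: one patch per stop, brightest = 1.0,
# constant read noise in linear units (illustrative numbers only).
READ_NOISE = 1e-3
patches = [2.0 ** -stop + rng.normal(0.0, READ_NOISE, 10_000)
           for stop in range(14)]

def usable_steps(encode) -> int:
    """Count patches whose encoded-domain SNR (mean/std) exceeds 10."""
    count = 0
    for p in patches:
        e = encode(np.clip(p, 1e-6, None))
        if e.mean() / e.std() > 10.0:
            count += 1
    return count

print(usable_steps(lambda x: x))             # linear data: the true count
print(usable_steps(lambda x: x ** (1/2.2)))  # a gamma curve 'adds' a step
print(usable_steps(lambda x: x ** (1/4.0)))  # a steeper curve 'adds' more
```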


  • Premium Member
The only way I've ever done it is to lay up some ND on a lightbox and shoot that.

How precisely is ND made? Doing, say, a 15-stop wedge test that way, wouldn't it be subject to cumulative error -- sort of like measuring a 15-foot wall by using a one-foot ruler, making a mark, and moving it along 15 times?

 

 

 

-- J.S.
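
John's ruler analogy can be put to numbers. A small Monte Carlo, assuming each stacked one-stop ND carries an independent density error of 0,02D (an invented tolerance, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumption: each nominal 0.30D (one-stop) ND is off by an independent
# random density error with sigma = 0.02D. The tolerance value is invented.
SIGMA_D, N_FILTERS = 0.02, 15
stacks = rng.normal(0.0, SIGMA_D, size=(100_000, N_FILTERS)).sum(axis=1)

print(stacks.std())         # ~0.077D: independent errors grow as sqrt(15)
print(stacks.std() / 0.30)  # ~0.26 stop of uncertainty on a 15-stop wedge
# A *systematic* bias (every filter 0.02D thin) would grow 15x instead,
# which is the worse case the ruler analogy suggests.
```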


  • Premium Member

Edit - actually, yeah, you're right.

 

I've no idea how well it's made. I wouldn't try to enter this argument with such a Rube Goldberg arrangement, but I'd have thought it was OK for casual evaluation - especially comparative evaluation using, say, quarter stop increments.

 

P


Jim and Graeme, it's easily understood that if you feed in a different image you will get different results... Even my four-year-old son knows that different actions have different results...

 

So please stop repeating that argument.

 

If you really want accurate results, then take a shot, pass it through your RED Alert in a way that does not alter the original curve (there must be some kind of original curve in the RAW recording), export a 16-bit TIFF, and that's it, you are ready...

 

Don't forget to tell us what that curve is, so we can verify it independently, and then I'm OK with it.

 

Alternatively, if you don't accept Imatest for your own reasons, PLEASE DO SHARE your internal tool with the rest of us, so that all of us have a way to compare your CAMERA to the rest of the cameras... But note that, for this tool to be accepted by us, it has to take a generic input like TIFF, not something special to RED...

 

Jim, all the rest is just monkey business...

 

If you don't want things like that to happen, then don't come forward and engage people in small talk. There are consequences; that's why Sony, Panasonic, and the others don't come forward. If you think you are brave, you have to withstand the pressure and be OBJECTIVE AND HUMBLE about your product.

 

Or stay in the shadows and let your camera speak for itself: read everything, analyze it, and act. Then your ACTIONS will be accepted and praised.

 

Graeme, our time is also very precious, so if you think this is a waste, stop answering posts EVERYWHERE!

 

Phil, John: the step charts are created from monochrome positive film, and they come with density measurements that verify the accuracy of every single step, so you get an absolute strip of NDs from clear to 4,20D. A stop is 0,30D, so 4,20D is 14 stops... And because they are backlit, it's easy to create an even light.

 

The step charts are very useful when you are playing with your camera, because they let you immediately see the results of your actions.


  • Premium Member

Evangelos:

What exactly are your test results based on - single frame grabs, or a series of frames (i.e. "motion" pictures)?

 

Somewhere in this thread (I think), I saw somebody (and I think it was you) mention something about digital processing being able to "pull images out of the noise". I can't find it now.

 

Certainly if you average enough frames you can produce miraculous noise reduction, but that only works with static images! So are these tests based on frame grabs or actual live video?

 

Regarding your tests generally, I still think you are unlikely to make much impression on non-technical DOPs and the like by showing them a lot of graphs.

 

But all the ones I know (or have known in the past) seem to have no trouble picking the difference between film- and video-derived material, just by looking at it. This is why I still think my white LED array is the most foolproof approach. Apart from issues of cheapness and portability, unlike most other systems, I can fit switches to the LEDs so you can have just one on or all of them on, which will help eliminate problems caused by internal reflections in the lens used.

 

However, probably the most important feature is that it will be relatively easy to explain and demonstrate to non-technical operators exactly what is being measured, and exactly what real-world results are being obtained, at least as far as most of them would want to be informed.


  • Premium Member

I didn't read everything to see if this has been answered yet. I'm working with a RED this week that does have a tape hook. I have marked its position on a photo of the body. It is approximately 1/8" (eyeballing that distance, but it is pretty close) behind the sharp corner that marks the front of the body.


  • Premium Member
I hate to throw something entirely subjective into the mix, but I've sat there and looked at a Red and the image it's putting up on the preview monitor, and I've said:

 

Eleven stops my arse.

 

It is so clearly not eleven stops that you can tell by looking!

 

P

That pretty much sums up how most "film or video" decisions are made, at least for ones expected to have an audience outside the producer's extended family :lol:

 

Although the expression is more generally "'Indistinguishable from film' my arse".

 

Certainly there are producers who have bet the farm (well, more correctly, their financiers' farm) on all-digital movie production over the past decade, but they do not seem to have set too many lasting trends. Certainly the "Making Of" segment on the DVD release is not likely to catalogue any reasons a producer may have had to regret his choice...

 

This is the reality: Once a producer has "cut his cloth" so to speak, he is going to find some way to make the project work, either by simply reducing his range of shooting options, spending a lot more on flattening difficult lighting situations, or pretending the dodgy image quality is the "look" he was aiming for and so on.

 

To put it another way, once his cloth is cut, we only see the end product on the catwalk. How many corsets, workouts at the gym, how much crash dieting or finger-down-the-throat the model had to endure to fit into the dress he made from the cloth he cut is rarely if ever discussed :P


Jim and Graeme, it's easily understood that if you feed in a different image you will get different results... Even my four-year-old son knows that different actions have different results...

 

So please stop repeating that argument.

 

That you don't seem to understand why this is an issue is the cause of our problems. Until you do understand it, we can discuss this no further, as I have tried on multiple occasions to get across that this is a fatal flaw in the analysis software.

 

I know for certain that my linear light data has as much DR as the camera can have. There is NOTHING I can do to it that will improve its DR, as it's basically a dump of the sensor data that was captured. Everything I can do to it (in the form of altering the linearity of the data with a curve) will either preserve the DR or make it worse. Now... I put a curve on it, and the software Imatest now tells me I have more DR. No I don't. I have the same amount - the exact same amount I started with. I put a different curve on it, and the number goes up again. Now do you understand?

 

Graeme
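
Graeme's invariance claim is straightforward to sketch: push 12-bit linear data through any invertible curve, undo the curve, and a DR measurement made in linear light is unchanged. A toy measure below, not RED's internal tool:

```python
import numpy as np

rng = np.random.default_rng(2)

# 12-bit linear sensor data: a patch near the noise floor and a bright one.
dark = np.clip(rng.normal(8.0, 2.0, 10_000), 0.0, 4095.0)
bright = np.clip(rng.normal(3500.0, 2.0, 10_000), 0.0, 4095.0)

def dr_stops(dark: np.ndarray, bright: np.ndarray) -> float:
    """Toy DR measure: bright signal over dark-patch noise, in stops."""
    return float(np.log2(bright.mean() / dark.std()))

curve = lambda x: (x / 4095.0) ** (1 / 2.2)  # an arbitrary invertible curve
inverse = lambda y: 4095.0 * y ** 2.2        # its exact inverse

print(dr_stops(dark, bright))                                  # linear data
print(dr_stops(inverse(curve(dark)), inverse(curve(bright))))  # same number
```

If a tool instead has to remove the curve by guesswork, any error in the guess shows up as a different 'DR' figure, which is the inconsistency he objects to.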



I saw someone punch a bus once.

 

The bus wasn't hurt.

 

R.


Graeme

 

OK, I will follow your thinking...

 

When I try to emulate what you are saying with cameras like the Varicam and the CineAlta F900R, I can't change the results of Imatest.

 

So with all the other cameras I can't reproduce what you are saying.

 

Why?

 

Because your camera records RAW data, which means your software does the debayering, the subsequent noise reduction, and the unavoidable sharpening, according to a given curve. Right?

 

I think yes, more or less...

 

Every time you give your software (RED Alert, REDCINE) a different request (curve, sharpening, gain/ASA, etc.) to create a new output from RAW, the internal algorithms dynamically adjust the parameters (debayering strength, noise reduction, antialiasing, etc.) to produce the best result. Right?

 

Again, I think yes...

 

That, effectively, IS CHANGING the results of Imatest; you should have been aware of this a long time now.

 

All the other cameras don't have that RAW/debayering step; that's why I can't reproduce your findings...

 

Even so, with all the other cameras we just do the following:

 

1. Open the box and take out the camera.

2. Turn on the camera.

3. Do a black balance.

4. Select the BEST-PERFORMING SETUP according to the manufacturer's data, or use the settings you typically use with that camera on real projects.

5. Adjust the white balance point to the appropriate Kelvin of the light source.

6. Point the camera at an appropriate transmissive step-chart target, as I have described numerous times.

7. Grab JUST ONE frame (not many, as Keith understood).

8. Export it as a 16-bit TIFF.

9. Analyze it with Imatest.

 

That?s it.

 

So, set up a RED camera as it is going to be used on a real shooting project, following the above steps.

 

Choose the real settings that you are going to use on a real project, export the TIFF frame, and analyze it.

 

Better yet, choose your best-performing settings, do the export... analyze it.

 

That IS your camera's performance...

 

Graeme, from your work I understand that you are a very good engineer; why can't you see all that?

 

To compare two cameras, we simply use the best settings chain on each system and analyze the results.

 

What is so difficult to understand? Imatest is stupid software that doesn't know what's a RED and what's a Varicam. It just analyzes noise figures.

 

As for using Imatest to guess which gamma curve you used: I don't think Imatest was designed to do that, so don't expect accurate results there.

 

But that is not a valid argument for trashing the usability of Imatest.

 

On DR measurements, expect very accurate results...

 

Keith,

 

Imatest analyzes, within a single frame, the noise level at a specific 'ND level' by measuring the mean deviation of the noise in that step.

 

It translates that noise level into a relative f-stop level.

 

Then it counts how many steps are visible with a noise level of, say, no more than 0,1 f-stop.

 

It then converts the counted steps to f-stops. For example: 28 steps visible with a noise level of no more than 0,1 f-stop, at 0,1D per step, equals 2,8D; divided by 0,3D (which is one f-stop), that equals 9,33 f-stops of latitude with a very low noise level.

 

How low a noise level? Less than 0,1 f-stop.

 

That expression is easier for a cinematographer to understand than, say, 'SNR -56dB'...

 

So Imatest reports five levels of quality, i.e. how many stops fall under each noise level:

 

"Very good quality"         High       < 0,1 f-stop noise level
"Usable quality"            Mid-High   < 0,25 f-stop noise level
"Very problematic quality"  Medium     < 0,5 f-stop noise level
"Very bad quality"          Low        < 1 f-stop noise level
"Crap!?"                    Total      all visible steps, up to a 5 f-stop noise ceiling

 

I think this is the most cinematographer-friendly approach to the DR/latitude measurement problem... and it is politically correct for an engineer; it is common ground for the two professions (cinematographers and engineers).

 

All the graphs are for those who can understand them and they exhibit very critical quality details that are crucial for a DIT or an engineer.

 

Keith, I strongly recommend that you go and read the Imatest how-to, to better understand what I am talking about.
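
The counting procedure Evangelos describes can be reconstructed in a few lines; the per-step noise figures below are invented, and the thresholds are the quality bands from his table (a sketch of the method as described, not Imatest's source):

```python
import numpy as np

# Invented noise readings for a 41-step, 0.10D-per-step wedge: noise in
# f-stops rises as the steps get darker (illustrative values only).
noise_f_stops = 0.02 * 2.0 ** (np.arange(41) / 5.0)
STEP_DENSITY, STOP_DENSITY = 0.10, 0.30

def latitude_stops(ceiling: float) -> float:
    """Count steps whose noise stays under the ceiling, convert to stops."""
    visible = int((noise_f_stops < ceiling).sum())
    return visible * STEP_DENSITY / STOP_DENSITY

# The worked example from the post: 28 clean steps * 0,1D / 0,3D = 9,33 stops.
print(28 * 0.10 / 0.30)

for band, ceiling in [("High", 0.10), ("Mid-High", 0.25),
                      ("Medium", 0.50), ("Low", 1.00), ("Total", 5.00)]:
    print(band, round(latitude_stops(ceiling), 2))
```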


  • Premium Member

So Evangelos, if I understand you correctly, your concern is that the RED has no provision for monitoring the actual off-chip signal, and the RED RAW compression process may disguise some of the noise. Since noise is theoretically impossible to compress (at least in any "lossless" way), the compression process itself must be introducing some sort of noise reduction as a by-product.

 

Which often mistakes low level detail for noise and removes it, giving the famous "plastic faces" look.

 

I still think my idea is better, since you can easily do a series of test shots which will allow you to cover the full gamut of everybody's ideas about how noisy the dimmest LED should be allowed to look.

 

Somehow I think the answer is going to lie somewhere between the 6 stops and 11 stops currently being claimed.

 

I don't care overmuch; I'm more interested in comparing the performance of different cameras than in trying to set standards for "Digital".


Yes Keith, the RED camera uses a CMOS Bayer-pattern sensor, so if you read the theory carefully you will find that:

 

"Since each pixel is filtered to record only one of three colors, two-thirds of the color data is missing from each. To obtain a full-color image, various demosaicing algorithms can be used to interpolate a set of complete red, green, and blue values for each point." (from Wikipedia)

 

So demosaic (debayer) algorithms manipulate the RAW image data dynamically to give the best results. This can't be described as a lossless process compared to a 3-CCD sensor, a stripe-array sensor like the Genesis has, or a Foveon, where you have all primary colors and the resulting RGB image directly from the sensor.

 

Due to the nature of Bayer-pattern sensors, a sort of noise reduction in the process is unavoidable. If you have basic knowledge of how a digital SLR works with RAW-converter software like Capture One (for Bayer-pattern CMOS sensors like RED's), you will know that a myriad of parameters like sharpening, noise reduction, curves, etc. intervene in the final image.
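
The averaging built into naive demosaicing is easy to show on synthetic data; a sketch using plain bilinear interpolation on a flat grey field, nothing RED-specific:

```python
import numpy as np

rng = np.random.default_rng(3)

# Flat grey scene: every photosite records the same signal plus its own noise.
SIGNAL, SIGMA = 100.0, 5.0
raw = rng.normal(SIGNAL, SIGMA, size=(256, 256))

# At a red/blue photosite, bilinear demosaicing fills in green as the
# average of the four green neighbours above/below/left/right.
green = (raw[:-2, 1:-1] + raw[2:, 1:-1] + raw[1:-1, :-2] + raw[1:-1, 2:]) / 4.0

print(round(raw.std(), 2))    # ~5.0: noise at the measured photosites
print(round(green.std(), 2))  # ~2.5: averaging four samples halves the noise
```

This is why a noise measurement made after demosaicing is not a measurement of the sensor alone.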

 

Graeme quote: "I put a curve on it, and the software Imatest now tells me I have more DR. No I don't. I have the same amount - the exact same amount I started with. I put a different curve on it, and the number goes up again."

 

So for someone like Graeme to claim that kind of problem with the use of Imatest is quite odd.

 

He can always use a typical curve that his end users will use, like Rec. 709 or REDLog; it's very simple.

 

Since I haven't seen his final word, I will keep a 'wait and see' position.

 

As for your idea, I think it's a useful test tool for on-set, real-time evaluation, but I don't think it compares, for laboratory use, to a tool like Imatest.

 

But Keith, the subject of this thread is not 'methods and tools of DR measurement'...

 

So I will stick to the subject and wait for a reasoned response from Graeme.

 

And why not have some other qualified people from the community offer their opinions?


What you don't seem to follow is that the curve (as long as it doesn't clip highlights or crush shadows out of existence) should not and cannot alter DR. It certainly cannot IMPROVE it over the linear light export (given sufficient bit depth, which surely a 16-bit TIFF has for 12-bit sensor data). Now, in your video-centric world, a curve can make a difference, as you're going from a high-bit-depth A-to-D to often 8-bit tape, and the nature of that curve and how it distributes code values is important. However, in a digital cinema data-centric way of working, where I can access those 12 bits of linear light sensor data in linear light form, without a curve applied, I expect to be able to measure the DR in that form and get a reasonable answer, which I can't with that software.

 

This discussion is getting very pointless now. I don't know how many times I can explain things. Going off on a tangent into demosaicing doesn't help: demosaicing does not alter the dynamic range at all, in any way, and the myriad of parameters you mention are irrelevant.

 

Just think it through... if a curve could improve the DR of a camera, don't you think a) we'd be doing it, and b) everyone else would be boosting their DR with a magic curve?


  • Premium Member

I could try and get involved in this argument but I really can't be bothered because there are vastly more effective ways to do it.

 

Red need to publish - ideally uncompressed but they can't - a TIFF or DPX of a step chart, a zone plate and probably a couple of other assorted sharpness targets. Until they do this it's all drivel.

 

The cost of doing this is practically nothing and the benefit incalculably high, so their squirming excuses and reticence to come up with the goods suggests, as if it still needs suggesting, that the thing has vastly lower performance than they claim.


Due to the nature of Bayer-pattern sensors, a sort of noise reduction in the process is unavoidable.

 

From elementary probability theory, the SNR should improve for a sum of noisy samples of uncorrelated random variables. If we are shooting a *static* scene; if the temporal noise on the sensor is uncorrelated between neighboring samples of the same frame, and between samples at the same position on different frames; if neighboring samples of the same color are assumed to have (more or less) the same signal value; and if the temporal noise is also uncorrelated with the signal values, then the SNR should improve, and the SNR can be used to derive a measure of higher dynamic range in this sense.

 

It is easy to see that naive deBayering techniques that use samples of the same color for reconstruction benefit from improved dynamic range in the manner described above, since some sort of averaging is done somewhere in the process. However, more sophisticated deBayering techniques look at the samples of different colors in addition to higher-order derivatives and curvatures, and it may be difficult to quantify in closed form the exact dynamic range gain/loss from the deBayering process.

 

But it appears to me that the dynamic range after deBayering may differ from that of the actual sensor frame, whether as a gain or a loss.

 

Temporal filtering can certainly help to improve dynamic range in the manner described above. Hence, even if a single frame is used for the dynamic range calculation by software such as Imatest, if that single frame was derived by temporal filtering then it should show more dynamic range than the actual sensor frame.
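
joofa's square-root law is a few lines of numpy, under his stated assumptions (static scene, uncorrelated noise):

```python
import numpy as np

rng = np.random.default_rng(4)

# A static scene sampled over 16 frames with uncorrelated temporal noise.
SIGNAL, SIGMA, N_FRAMES = 50.0, 10.0, 16
frames = rng.normal(SIGNAL, SIGMA, size=(N_FRAMES, 100_000))

print(round(SIGNAL / frames[0].std(), 1))            # single frame: SNR ~5
print(round(SIGNAL / frames.mean(axis=0).std(), 1))  # averaged: SNR ~20, a sqrt(16) = 4x gain
```

A single-frame analysis has no way to tell whether the frame it was handed already benefited from such filtering.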


Zac, good post.

 

Each camera comes with its own problems. The newer ones are better (the hard drive is great) but it's still in beta (testing) mode.

I definitely agree with you about the video output. It was such a drag to have two monitors on at the same time, particularly for handheld: we can't view the frame rate on the other monitor, and the RED LCD disables all other video outputs. I highly suggest getting that second monitor!

Another complaint I have is the power issue. We charged all the batteries before the shoot and always had two batteries on the charger, but by two-thirds of the way through the day we always ended up on AC (and we had handheld, Steadicam, and jib arm). I suggest getting two chargers; it seems that one charger, though it can hold two batteries, only charges one at a time, and it takes a while too. Keep in mind we are using pretty much a computer, and computers use a lot of power.

There's no question about image quality, and the camera is easy to use, but there's plenty of room for improvement. -Natalie


  • Premium Member
But it appears to me that the dynamic range after deBayering may differ from that of the actual sensor frame, whether as a gain or a loss.

 

Temporal filtering can certainly help to improve dynamic range in the manner described above.

 

Certainly you can discard part of the dynamic range, but there's no real way to gain more range.

 

The high end is quite clearly limited at the point where the wells fill up. More photons make more electrons until you reach that limit, after which the sensor clips hard. Beyond that, more photons get you nothing, and may even spill over and bloom into adjacent areas.

 

The low end is kinda squishy. When you have very few photons hitting each photosite, the number of electrons from thermal or other noise sources becomes significant compared with the number of electrons that get there the "right" way. It's a subjective aesthetic call as to how much noise you want to tolerate in the shadows.

 

Temporal reduction in apparent noise actually happens naturally. That's why a freeze frame done entirely on film will look a little grainier than the motion image leading up to it. An old editor's trick to get around that is to pick two successive frames that have little or no motion between them, and rock them instead of freezing on just one frame.

 

 

 

 

-- J.S.
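
J.S.'s two limits translate directly into the usual engineering DR figure; a sketch with invented sensor numbers:

```python
import math

# Invented example values: a photosite clips hard at 30,000 electrons
# (full well) and has a read-noise floor of 6 electrons.
FULL_WELL_E, READ_NOISE_E = 30_000.0, 6.0

# Hard-clip ceiling over noise floor, expressed in stops.
print(round(math.log2(FULL_WELL_E / READ_NOISE_E), 1))        # ~12.3 stops

# The low end is a taste call: demanding shadows twice as clean as the
# raw noise floor costs a stop of usable range.
print(round(math.log2(FULL_WELL_E / (2 * READ_NOISE_E)), 1))  # ~11.3 stops
```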


Joofa, I would like to know your full name and your occupation; but besides that, you are doing a very good, deeply technical analysis, and I understand it fully.

 

To add something to the mix for John and Graeme:

 

The way this method (of measuring DR) works is that we expose the image just before the brightest step clips, that is, between 98% and 100%, so no photons drive a hard clip, since with that exposure there is NO clipping.

 

All the brighter steps, especially the first third, are noiseless on almost every camera...

 

So what we actually analyze are the dark steps. Because of the nature of the measurement, these steps are very sensitive to noise reduction.

 

In reality there is a limit: there are steps that don't trigger the sensor at all, so they contain no data, and those steps are the absolute limit of the system (sensor/debayering/curve).

 

But what is usable is what matters to a cinematographer. For the rest of the steps, those that carry some information, the heavier the noise reduction, the better the results Imatest reports, but only up to a point.

 

That point is the absolute limit of the system that I described just above.

 

That point, yes, Graeme, does not change.

 

But by changing the EFFECTIVE noise reduction, yes, you can make the result look more or less favorable. As I have said in the past, you can also go into Photoshop and hand-paint the steps, which will give you the best results of all.

 

That is not the point, though; we are not playing games here.

 

The point is to get an image AS IT IS GOING TO BE USED BY THE USER.

 

And then measure it. AGAIN, THAT'S VERY SIMPLE.
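
The exposure rule in that method (brightest step at 98-100%, no hard clip) is mechanical enough to check in code; a hypothetical helper, with TIFF loading and patch extraction assumed rather than shown:

```python
import numpy as np

def exposure_ok(top_step: np.ndarray, lo: float = 0.98, hi: float = 1.0) -> bool:
    """Check the wedge exposure described above. `top_step` holds the pixels
    of the brightest patch, scaled to [0, 1] (a 16-bit TIFF divided by 65535;
    loading and patch extraction are assumed, not shown)."""
    level = float(np.median(top_step))          # robust level of the patch
    clipped = float((top_step >= 1.0).mean())   # fraction of hard-clipped pixels
    return lo <= level <= hi and clipped < 1e-3

# Synthetic patch sitting at ~99% of full scale with negligible clipping.
patch = np.random.default_rng(5).normal(0.99, 0.003, 5_000)
print(exposure_ok(patch))  # True
```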

