
The 1st needed convincing, the grip wasn't sure, everyone else was sold


Keith Mottram

Recommended Posts

If readers who have been following this forum would like to see uncompressed D20 images (tifs), I will be hosting them temporarily here:

 

www.markallen.net/d20/

 

3 individual frames.

 

Recorded to SR at 8-bit 4:2:2, digitised as 8-bit uncompressed via a Blackmagic DeckLink HD card, exported to TIFF from FCP.



Guest Jim Murdoch
If readers who have been following this forum would like to see uncompressed D20 images (tifs), I will be hosting them temporarily here:

 

www.markallen.net/d20/

 

3 individual frames.

 

Recorded to SR at 8-bit 4:2:2, digitised as 8-bit uncompressed via a Blackmagic DeckLink HD card, exported to TIFF from FCP.

 

Er, I dunno; it looks like video with a 35mm-like depth of field.

 

Has the girl got something wrong with her skin, or is that patterning an artifact of the conversion to 8-bit luminance?

 

I don't think the pictures look particularly sharp either, even where they're supposed to be in maximum focus.

 

The maximum resolution I can display them at is 1,280 pixels wide. 6 megabytes does seem awfully large, though, even for an uncompressed file.


  • Premium Member
If readers who have been following this forum would like to see uncompressed D20 images (tifs), I will be hosting them temporarily here:

 

www.markallen.net/d20/

 

3 individual frames.

 

Recorded to SR at 8-bit 4:2:2, digitised as 8-bit uncompressed via a Blackmagic DeckLink HD card, exported to TIFF from FCP.

 

Hi,

 

Could you tell me what the lenses were, and the approximate T-stop?

 

Many thanks

 

Stephen


  • Premium Member
Keith,

 

Did you see a difference between S4s and Master Primes? I have never seen a comparison!

 

Cheers,

 

Stephen

 

To be honest, no, and I have no log of which were used for which shot; I think they were just the DP's preferences for one over the other at certain focal lengths. With regard to the softness of the images, I have worked with a large amount of 35mm scanned to HD and I wouldn't say I notice a considerable difference (apart from the lack of grain!). When the footage is played back at HD it does not feel soft. These shots are ungraded, if anyone is wondering.

 

Keith


  • 2 weeks later...
Sometimes I think the technical aspects of moviemaking are overrated -- I almost agree with Kubrick, who felt that the technical side of filmmaking could be learned in a week (but perfected over a lifetime, of course).

 

 

Sorry to go OT, but I believe Kubrick said that film theory could be learned in a week. Correct me if I'm wrong.


  • 4 months later...
Huh? Well, 1,920 x 1080 x 3 = 6,220,800, is it not?

 

Anyway, I've seen an e-mail from John Galt about this. He said the 12,441,600 pixels are arranged in square "macropixels" of two red, two green and two blue each. I would have thought they would have been arranged like this:

 

RGB|RGB|RGB

BGR|BGR|BGR

------------------

RGB|RGB|RGB

BGR|BGR|BGR

 

to reduce aliasing effects, which is what they do with single-chip standard-definition cameras that use this technique, but he said no, they're arranged like this:

 

RGB|RGB|RGB

RGB|RGB|RGB

------------------

RGB|RGB|RGB

RGB|RGB|RGB

 

and he didn't seem to understand why you'd want to do it any other way. He didn't seem to have any answer to the question: "why do they simply average adjacent pairs of pixels, instead of having just one taller pixel (which would give a better signal-to-noise ratio)?" either. He stopped replying after that!

 

I'm certainly no expert, but I think you are misunderstanding the way Bayer arrays work:

 

http://www.arri.com/news/newsletter/articl...9211103/d20.htm
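For anyone trying to follow the two ASCII diagrams above, here's a rough NumPy sketch of the difference between them and of the averaging of vertically adjacent same-colour photosites into a single 1920x1080 RGB frame. All the array names, shapes and values here are my own illustration of the idea being discussed, not anything Arri or Panavision have published.

import numpy as np

H, W = 1080, 1920
rng = np.random.default_rng(0)

# Pretend sensor: two photosite rows and three photosite columns (R, G, B)
# feed each output pixel, i.e. 1080 x 1920 x 6 = 12,441,600 photosites.
sites = rng.random((2 * H, 3 * W))

def demix_aligned(sites):
    # The layout Galt described: every row reads R G B | R G B, so red is
    # always column 3x, green 3x+1, blue 3x+2. Average each vertical pair.
    top, bottom = sites[0::2, :], sites[1::2, :]
    pair = (top + bottom) / 2.0
    return np.dstack([pair[:, 0::3], pair[:, 1::3], pair[:, 2::3]])

def demix_staggered(sites):
    # The layout Jim expected: R G B over B G R, so red's partner sits in
    # column 3x+2 of the row below, and blue's in column 3x.
    top, bottom = sites[0::2, :], sites[1::2, :]
    r = (top[:, 0::3] + bottom[:, 2::3]) / 2.0
    g = (top[:, 1::3] + bottom[:, 1::3]) / 2.0
    b = (top[:, 2::3] + bottom[:, 0::3]) / 2.0
    return np.dstack([r, g, b])

print(demix_aligned(sites).shape)    # (1080, 1920, 3)
print(demix_staggered(sites).shape)  # (1080, 1920, 3)

The only difference is which bottom-row column each colour is averaged with: staggering shifts the red and blue sampling positions on alternate rows, which is presumably the anti-aliasing argument being made above, while the aligned layout keeps the R, G and B samples vertically coincident.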


He didn't seem to have any answer to the question: "why do they simply average adjacent pairs of pixels, instead of having just one taller pixel (which would give a better signal-to-noise ratio)?" either. He stopped replying after that!

 

Hi Jim,

 

As I can see from one of your earlier posts in this thread, you remember my post from a year ago in which I gave my explanation of why the Genesis uses that particular sensor design.

 

For everyone who didn't see that post, here's my explanation: the rows alternate, with one row of pixels overexposing to capture shadow detail and the next row underexposing to capture highlight detail (see the attachment).

 

post-5255-1146250428.gif

 

So, if you have 12,441,600 "pixels" (really photodiodes) to begin with, and you average every two rows of photodiodes into one row, you are left with 6,220,800 photodiodes. Then, because you need a red, a green, and a blue photodiode to produce a single pixel, you divide 6,220,800 by three, which gives 2,073,600, i.e. an approximately 2K, full-color image. It's as simple as that.
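Just to lay that arithmetic out (these are only the numbers from the paragraph above, nothing camera-specific):

photodiodes = 12_441_600              # total photodiodes on the sensor
after_rows = photodiodes // 2         # every two rows averaged into one
pixels = after_rows // 3              # one red, one green, one blue per pixel
print(pixels, pixels == 1920 * 1080)  # 2073600 True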

 

This seems to be the best explanation for using 12 million photodiodes to produce a 2-million-pixel image: it is the most advantageous way to use multiple photodiodes for each primary color of every pixel, and it would account for the increased contrast-handling capability of the Genesis.

 

And now that I think more about it, Jim is right in saying it would be too complex to build a chip in which the actual sensitivity of each row of pixels is different from that of the adjacent row. So, instead, they most likely place strips of ND filter over every other row of pixels so those rows can capture highlights better. However, after averaging the two rows together, the image would appear underexposed. The simple solution to that problem would be to recalibrate the sensitivity of the entire sensor to counteract the darkening caused by the ND filters.
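To illustrate the hypothesis (and it is only a hypothesis; nobody outside Panavision has confirmed any of this), here's a toy simulation of what averaging a plain row with an ND-filtered row would do to the highlight response. The 2-stop ND value and all the numbers are mine, picked purely for illustration:

import numpy as np

full_well = 1.0                                 # normalised clipping level of a photosite
scene = np.linspace(0.0, 4.0, 9)                # scene luminance, up to 4x the clip level

plain_row = np.clip(scene, 0.0, full_well)      # unfiltered row clips at 1.0
nd_row = np.clip(scene * 0.25, 0.0, full_well)  # row behind 2 stops of ND holds up to 4.0

combined = (plain_row + nd_row) / 2.0           # average each vertical pair of rows
recalibrated = combined / 0.625                 # undo the overall darkening, (1 + 0.25) / 2

print(np.round(recalibrated, 2))
# roughly [0.  0.5  1.  1.1  1.2  1.3  1.4  1.5  1.6]
# Below the clip point the output tracks the scene exactly; above it the
# response rolls off gently instead of clipping hard, which is the kind of
# extra highlight handling being attributed to the Genesis here.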

 

I think those are the (former) "secrets" John Galt couldn't reveal.

 

 

-Ted Johanson

Edited by Ted Johanson

But isn't that the point? What's the purpose in using improved technology if it doesn't save you anything? The whole paradigm with this thing seems to be to make it as awkward, old-fashioned and backward as a film camera, for no better reason than ensuring the crew are employed.

 

 

I think you're slightly wrong here. To give an example: imagine doing a car commercial, with dusty dirt roads and beaches, in, say, South Africa, where a lot of crane and remote-head shots are done. When I put a 435 on a remote head, I find plugs for the RCU connection and the power connection right on the head. Set-up time: one minute. When I want to do the same with your usual HD camera, which was designed for broadcast use, I can count myself lucky if there's even a remote control unit that's of any use to me as an AC, and there's a good chance I will have to run extra cabling along the crane arm. Set-up time: who knows. You see, there are situations where it makes perfect sense to build a machine "as awkward, old-fashioned and backward as a film camera".

Where every HD camera loses out is the combination of heat and dust. With a 435 I just pull a plastic bag over it, cut a hole in the front and tape the seams of the hole to the edges of the mattebox. Absolutely safe. When you try this with an HD thingie there are two possibilities: either the plastic bag melts and dust comes in and fu**s up the mechanical parts of the cam (which are crappy compared to an old-fashioned film camera's), or the camera just overheats and the electronics die.


The Genesis doesn't use a Bayer Array, it uses exactly the layout I showed you.

I know perfectly well how a Bayer Array works, and the shortcomings it has.

Ah - forgive me, I thought we were discussing the D20 here. Obviously the discussion switched to a different camera 5 pages ago, and I missed it.

 

For anyone who's interested, here are some small patches (actual pixels, no resizing) cut out of some D20 HD images from tests we did a few weeks ago.

 

Arri D20, shooting 4:4:4 uncompressed into the s.two. Camera shooting in "User 6" mode, which, if memory serves, is one of the higher gain (1 or 2 stops) video-ish colourspace settings, with highlight compression on.

 

Images taken into Shake, and saved out as highest quality jpegs. Yes, yes, yes, I know that conversion to jpeg potentially brings up a load of issues, but I think that you will still see some things of interest in these images. Most specifically the high level of noise in the red and blue relative to the green, and also the low resolution and "blockiness" of the red and blue relative to green. Also some obvious and colourful moire. Quite worrying. YMMV.

 

orig_image.jpg

rgb0wv.jpg
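If anyone wants to put numbers on what's visible in those crops, this is roughly how I'd do it: a quick-and-dirty sketch using NumPy and Pillow, where "patch.jpg" is just a stand-in for whichever crop you save out. Pick an area that should be flat, like smooth skin or a grey card, so the standard deviation is mostly noise.

import numpy as np
from PIL import Image

patch = np.asarray(Image.open("patch.jpg").convert("RGB"), dtype=np.float64)

for i, name in enumerate(("red", "green", "blue")):
    chan = patch[:, :, i]
    # on a nominally flat patch, the standard deviation is mostly noise
    print(f"{name}: mean {chan.mean():6.1f}  std dev {chan.std():5.2f}")

If the red and blue channels really are noisier and blockier than the green, as they look in the crops above, their standard deviations should come out noticeably higher.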

 

In the camera's defence, I should add that we shot a load of material of more "naturalistic" subjects, projected it all in a high-end grading suite with a high spec projector, and sat for a couple of hours commenting on how wonderful the material looked.

 

So it's probable that Arri have made the necessary compromises in the right places. Trouble is - I have a load of chromakey shots to worry about! ;)


  • 2 weeks later...

Update: there's a chance the issues visible in these images were one-offs. Similar tests with another D20 didn't exhibit them. Further testing is in progress, so for now, please don't assume that all D20s will have these problems... of course, if you are planning on using this new-ish camera, you will be doing your own tests anyway.

