Your DREAM SPECS for a Digital Cinema Camera?


Tom Lowe


> Six chips of 65mm size

 

I think you only need two, if they're Bayered and sufficiently high-res, and 65mm is just wannabe chasing of an expensive format. 2/3" video is perfectly usable for dramatic work, and full-aperture 35 is actually a pain in the neck depending on sensitivity; going any larger than that is just masturbatory "I want to shoot 65" without having given any thought to the grief you'll cause yourself.

No, this is based on experience in building silicon chips, and on the dynamic range of B&W versus a Bayer pattern. Using a pure B&W sensor I was getting a full 3 stops of latitude over the Bayer-patterned chip. In addition, the larger size allows for better light collection per pixel, thanks to larger photosites, which also means reduced noise. And finally, lenses made for the larger format give a shallower depth of field, almost a 3D effect without the need for fancy glasses. I shoot 4x5 large-format stills, and let me tell you, the difference between that and a Canon 1Ds Mark II is absolute, and obvious once you've shot with both, and I have.
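The photosite claim is easy to sanity-check from photon statistics: shot-noise SNR scales with the square root of the photons collected, so quadrupling the photosite area buys roughly one stop. A quick sketch (the full-well numbers here are hypothetical, purely for illustration):

```python
import math

# Photon shot noise: SNR = N / sqrt(N) = sqrt(N), so in decibels
# the shot-noise SNR of a photosite collecting N photons is 10*log10(N).
def shot_noise_snr_db(photons):
    return 20 * math.log10(photons / math.sqrt(photons))

small = shot_noise_snr_db(10_000)  # hypothetical small photosite full well
large = shot_noise_snr_db(40_000)  # 4x the area, hence 4x the photons
print(round(small, 1), round(large, 1), round(large - small, 1))
```

The 6 dB gain is about one stop of extra usable range, which is the mechanism behind "larger photosites, less noise".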
> a complete avoidance of codecs by storing all data as single-frame TIFF's (32-bits per color)

 

Craziness. All this drivelling on about high bit depth is daft - few modern video cameras have more than four or five bits of real data. Absolutely none of them have more than eight, and yet people bang on about ten bit like it's more important than oxygen. For grief's sake just record it at eight and diffusion-dither it up to whatever you want - at least the noise will be Gaussian that way!

Ahem, you realize you shot your own argument in the foot here. "Few modern video cameras have more than four or five" is precisely the problem. I never said the camera would capture 32 bits per color, only that it would store that. There is a solid reason to do so: it lets higher-end processes, such as HDR, be applied later without re-encoding the data and losing information in the conversion. Future-proof your product, eliminate waste down the pipeline, and give yourself room to work with.
> A physical mirrored shutter, no electronic junk in the way

 

I'd rather have a viewfinder that showed me what the image actually looked like; a hybrid design with information optionally superimposed over the image using a transflective display would be ideal. Flickery mirror-shutter cameras give me a headache. Personal preference, I understand, but all this banging on about a colour viewfinder is gibberish when it previews neither colour, exposure, nor sharpness with any accuracy.

David actually brought up a brilliant option: use both, the mirrored shutter as well as an electronic viewfinder, giving you the best of both simultaneously. Now, I'll admit, for the mirrored shutter I'd still like a HUD setup to superimpose information into the viewfinder, but that's my preference.
> Storage directly to a RAID HD array

 

Surely you want flash.

I should have been more specific: a RAID array of solid-state drives.

> No audio stored on the unit

 

This is just curmudgeonliness. Single-system sound takes expensive make-work drudgery out of postproduction and should be used unless there's a reason why not - and the only reason why not is if the radio link broke down on the take you end up using.

No, this is simple practicality. By separating the audio and video we eliminate a huge issue I've had with every single digital camcorder I've worked with: they do video well but not audio, or vice versa. I'd rather have them separate, so I can mix and match to get the desired results. I am no audio guy; I would not want to even touch the requirements of my friend Romm, who is the audiophile. He feels the same going the other way. I'd sooner keep them separate, especially since not all shots use sound, during which you'd have weight, battery, etc. all being sucked down for something unused. Give me a separate solution and make me happy.

This is a lovely list of the requirements of the dedicated film Luddite who's come up with a spec list based on nothing more than the fact that it's difficult to meet. It's not very practical.

 

P

This part made me laugh, especially as anyone who knows me would never classify me as a Luddite. I worked for a computer company for 2 years, helping to design silicon chips, only to have our work destroyed when every parts supplier was pressured by a giant in the field to cancel the type of bridge chips the product needed. I worked on making a replacement, but by then we'd lost the momentum, and the market collapsed. I know how to make a silicon chip, from the acid baths used down to what the difference between an AND and an XOR gate is. I began in this field working with pure digital setups, the first-gen JVC DV cameras.

But for me, it is about delivering the best results for the price we have to work with. This was a dream camera, and I wrote my dream: a camera able to deliver the optical illusions desired, with a maximum of editing capability and a minimum of data loss, simple and direct enough to be relatively easy to manufacture, with less junk to worry about. A simple, direct optics system with capturing capability, that's all. Nothing more than a digital form of an Eyemo: a camera so simple a chimpanzee can run it, yet able to go nose to nose with the best in the business in the hands of a master operator.

 

Yes, I know the list is impractical, but it is only a dream. For a practical system, I would use 2/3" sensors, but with an integrated Micro70 or similar built into the system. By integrating it into the body, including the reduction optics, you would still capture the DoF without the headache. Another option, one I would love to see explored, is to license the Four Thirds mount from Olympus and adapt it to a video camera: a huge range of established lenses and a proven sensor design technology, all there to use. It just needs licensing and adapting to the task.

 

So many options; I just threw out my own dream, based on my own shooting experience. In the end, I feel that if we properly debated this, we'd find more common ground amongst us than differences. We all want to make beautiful pictures that dance across the screen. We each have our own ways to do that. And I'm proud of each and every one of us for the work we do.

Link to comment
Share on other sites


Phil, where the hell do you get off saying that no sensor can deliver more than 8 bits? That's just so wrong. Are you talking log or lin? I can take a Phantom with its adjustable bit depth and see a difference between 12- and 14-bit depth. If I drop down to 8, the image really suffers. Sometimes you speak so definitively on subjects of which you clearly do not have complete knowledge. Just like the crap about Bayer-pattern sensors having lower resolution than their 3-chip counterparts - as if those sensor systems didn't have their own issues that made their MTF as low as the Bayer-pattern design's. That's the difference between theoretical knowledge and functional knowledge.


> I can take a Phantom with its adjustable bit depth and see a difference between 12- and 14-bit depth

 

Good grief, you must have inhumanly good eyes to see contouring at -72 dB. If what you're saying were true, all this 8-bit downconverted stuff I've worked with - everything from handycams to the Viper - would be completely noise-free. And it clearly isn't. Far, far from it.
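For reference, the -72 dB figure here is the level of one 12-bit code step relative to full scale, at the usual ~6.02 dB per bit:

```python
import math

def quant_step_db(bits):
    """One quantization step relative to full scale, in dB (~ -6.02 dB per bit)."""
    return 20 * math.log10(2 ** -bits)

for bits in (8, 12, 14):
    print(bits, round(quant_step_db(bits), 1))  # 8: -48.2, 12: -72.2, 14: -84.3
```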

 

It's quite possible that the Phantom is doing something else when you change bit depth that you can see, but visible differences at the 12/14-bit level are emphatically not due solely to the increased or decreased bit depth. I'm sure the Hubble Space Telescope has a sensor that's worth recording at those depths, but I have yet to see any production camera that does.

 

And as for recording at a far higher bit depth than the images warrant - fine, if you want to pay for the (currently horrendously expensive) flash... until then I'll record at a lower bit depth and diffusion-dither it back up again!

 

Phil
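The "record at eight and diffusion dither" idea Phil keeps returning to looks like this in practice. A minimal Floyd-Steinberg sketch (the function name and the 10-bit-to-8-bit choice are mine, purely illustrative) that trades banding for noise-like error:

```python
import numpy as np

def floyd_steinberg_10_to_8(img10):
    """Quantize a 10-bit grayscale image to 8 bits, diffusing each pixel's
    quantization error onto its neighbours (Floyd-Steinberg weights), so
    banding is traded for noise-like dither."""
    work = img10.astype(np.float64)
    h, w = work.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            old = work[y, x]
            new = int(np.clip(round(old / 4.0), 0, 255))  # 10-bit -> 8-bit code
            out[y, x] = new
            err = old - new * 4.0                         # error in 10-bit units
            if x + 1 < w:
                work[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    work[y + 1, x - 1] += err * 3 / 16
                work[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    work[y + 1, x + 1] += err * 1 / 16
    return out

# A shallow 10-bit ramp that would band badly if simply truncated:
ramp = np.tile(np.linspace(100.0, 108.0, 256), (16, 1))
dithered = floyd_steinberg_10_to_8(ramp)
print(round(float(ramp.mean()), 2), round(float(dithered.mean()) * 4.0, 2))
```

Scaled back to 10-bit units, the dithered image's mean tracks the original closely even though each pixel carries only 8 bits; the quantization error has become spatial noise instead of contours.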

> Trust me, people are actively working on it.

And have been for some decades, and it's always "a couple of years away".

Like X million other breakthroughs we've been on the verge of for Y number of years.

Try looking up the word "boondoggle". :lol:

 

> Enough work is being done actively in this area. This technology is just maturing and sooner or later you will see the results.

Speculating on the impact of future technologies before they have actually been invented was the original basis of the term "Science Fiction".

 

Considering that there aren't even 640 x 480 non-linear sensors available, I think a RED-sized version might still be some ways off.

> I know how to make a silicon chip, from the acid baths used down to what the difference between an AND and an XOR gate is.

 

Err... what?

 

I also know the differences between AND and XOR gates (and OR and NOR and NAND and NOT, for that matter), but not so much about the acid baths.

 

Maybe Graeme Nattress can use you on the RED team. :lol:

> Forget dream camera specs, just create a type of film stock that is reusable and I'm happy.

Considering that it is regarded as bad practice for professionals to re-use video tape, and considering what film goes through in processing, I would be very wary of re-using film stock. Besides, film is usually cut into all sorts of odd lengths in production, so you would need some way to splice it all back together.


> Err... what?
>
> I also know the differences between AND and XOR gates (and OR and NOR and NAND and NOT, for that matter), but not so much about the acid baths.
>
> Maybe Graeme Nattress can use you on the RED team. :lol:

Making a silicon chip is not that different from processing film, you know. Just instead of developer, they use various baths, some very acidic, depending on the final result or the process used. Almost all of them are thoroughly toxic, hence the hazmat suits in that section of any silicon plant.

 

And no thanks, I gave up on chips when I realized that, while I knew enough to assist, I was no engineer. I just know enough about how to handle them, and some fundamentals of design. Good for debugging a circuit, but that's about it.


> Phil, you completely miss the important point that your 8-bit video is gamma encoded, whereas sensor bit depth is generally linear light bit depth.

 

Y... yes, which is why we have 14-bit converters on 8-bit recording systems. Even the Viper does this, within the limits of being a log output device.

 

This is the entire crux of what I'm talking about.

 

Phil
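The linear-versus-gamma point in one toy calculation: with straight linear 8-bit quantization, a tone 10 stops below clip rounds to code 0, while a gamma encode still gives it a usable code. This sketch assumes a plain power law; real transfer functions such as Rec. 709 add a linear toe, but the shape of the argument is the same:

```python
def lin8(x):
    """Straight linear quantization of a [0, 1] signal to 8 bits."""
    return round(255 * x)

def gamma8(x, g=1 / 2.2):
    """Simple power-law gamma encode to 8 bits (illustrative, not Rec. 709)."""
    return round(255 * x ** g)

shadow = 1 / 1024  # a tone 10 stops below clip
print(lin8(shadow), gamma8(shadow))  # -> 0 11
```

Which is why a 14-bit linear converter feeding an 8-bit gamma-encoded recording is not the contradiction it first appears to be.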


> Speculating on the impact of future technologies before they have actually been invented was the original basis of the term "Science Fiction".

 

Keith, I have given you a reference in the relevant area. I can only do so much. If you want to keep your eyes closed, it is up to you. I can't argue with somebody who is not willing to listen.

Edited by DJ Joofa
> A lot of the criticism about this larger format involves trying to pull focus on a sensor that size. But hey, the Vista Vision guys did it, right?

They were using fairly slow 1950s Leica glass, and generally lighting to fairly deep stops. A bunch of the Vista gear was stored in a basement that got flooded about ten years ago. A friend of mine got some lenses dumpster diving after that.

-- J.S.

> Six chips of 65mm size

Hmmm -- how do you get the same image onto all six? Even a three chip prism block is kinda cumbersome, and limits you to no faster than f/1.4.

 

There is already an IMAX-sized three-chip camera, Lockheed Martin's "Blue Herring". It's rumored to be the camera out of the old KH-12 spy satellites. Last I saw, the prototype they had working was somewhat smaller than a Mini Cooper.

-- J.S.

One argument I've discovered for an optical viewfinder: with a 90-second boot-up time on the RED, it can be frustrating to grab a powered-down camera and quickly set up a shot, only to find you can't see anything until it's done booting.

Instead of an Arri-style spinning mirror, how about a much smaller, simpler, and lighter flip-up mirror, like a traditional still SLR's? It would be only for viewing with the power off or during booting.

 

To me, one of the really nice things about electronic cameras is that you can see a reasonable-size image with both eyes open, and mount the finder in a convenient position. Pushing one eye against a finder and closing the other might not seem like such a big deal at first, but by the end of an 18-hour day, the toll adds up.

-- J.S.


> Hmmm -- how do you get the same image onto all six? Even a three chip prism block is kinda cumbersome, and limits you to no faster than f/1.4.
>
> There is already an IMAX-sized three-chip camera, Lockheed Martin's "Blue Herring". It's rumored to be the camera out of the old KH-12 spy satellites. Last I saw, the prototype they had working was somewhat smaller than a Mini Cooper.
>
> -- J.S.

It would require three prisms in total, and yes, the glass would have to sit further away. In my theoretical work, the flange distance grew to the same as on your Hasselblad or Pentax 645. You first have your basic image splitter, passing 70% of the light one way, 30% the other. Behind that, on each side, you'd have a three-chip prism block feeding the sensors. Cumbersome it is, but results it would give. Using a simple filter-plus-exposure-adjustment test with a B&W sensor a few years back, I found myself with incredible range, and the larger photosites would compensate for the slower speeds.


> Last I saw, the prototype they had working was somewhat smaller than a Mini Cooper.
>
> -- J.S.

Then parking on a city street should be no problem; what's not to like?

 

 

"Huh? Is that the dolly or the camera?" "Yes."

 

 

-Sam "It's a floor wax AND a dessert topping" Wells

> A lot of the criticism about this larger format involves trying to pull focus on a sensor that size.

You have a tad less field of view than anamorphic (instead of a 40mm you'd use a 50mm), so it is really quite feasible.

 

Unfortunately there aren't that many 65mm lenses around. They are also not as fast as their 35mm counterparts, since they have to cover a larger area.

> You first have your basic image splitter, passing 70% of the light one way, 30% the other. Behind that on each side, you'd have the three chip prism block to each sensor.

A 70-30 split would shift the two sensor assemblies a little over a stop apart. I'd think you could push it a lot farther, maybe 98-2, which would be between 5 and 6 stops. You need to overlap a little of the straight-line parts of both sensors' response curves, but the offset is basically what you gain.

-- J.S.
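J.S.'s figures check out as simple base-2 logarithms of the split ratios:

```python
import math

def split_offset_stops(a, b):
    """Exposure offset, in stops, between the two sides of an a:b beam split."""
    return math.log2(a / b)

print(round(split_offset_stops(70, 30), 2))  # 1.22: "a little over a stop"
print(round(split_offset_stops(98, 2), 2))   # 5.61: "between 5 and 6 stops"
```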


First of all, you do not need to be a billionaire to come up with specifications that will outperform a RED camera, even at no additional cost. All you need is creativity and brain power. That said, what the moviegoer wants is the IMAX experience with the gigantic screen. But when the screen becomes so big that it occupies the periphery of the visual field, why maintain a constant resolution, since no one looks out of the corner of their eye? What we need is a space-variant resolution that concentrates detail toward the fovea of the visual system and allocates less toward the peripheral areas.


> What we need is a space variant resolution that concentrates the resolution more towards the fovea of the visual system and generates less resolution towards the peripheral areas of the human visual system.

 

Some video compression schemes have been built around the notion of foveation. Do an online search on foveation, compression, etc.

 

Ref.: Sanghoon Lee; Pattichis, M.S.; Bovik, A.C., "Foveated video compression with optimal rate control" IEEE Transactions on Image Processing, Volume 10, Issue 7, Jul 2001 Page(s):977 - 992.

Edited by DJ Joofa
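Not the scheme from the cited paper, just the basic idea in a sketch: keep full resolution at a chosen fixation point and average progressively larger blocks with eccentricity. The fovea position and the `step` growth rate are arbitrary assumptions:

```python
import numpy as np

def foveate(img, fovea, step=8):
    """Crude foveation: replace each pixel with the mean of a block whose
    size grows with distance from the fovea (full resolution at the centre)."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot(yy - fovea[0], xx - fovea[1])
    out = np.empty((h, w), dtype=np.float64)
    for y in range(h):
        for x in range(w):
            b = 1 + int(dist[y, x] // step)      # block size: 1 px at the fovea
            y0, x0 = (y // b) * b, (x // b) * b  # snap to the block grid
            out[y, x] = img[y0:y0 + b, x0:x0 + b].mean()
    return out

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(32, 32))
reduced = foveate(frame, fovea=(16, 16))
print(reduced.shape)  # (32, 32): same frame size, detail survives only near the fovea
```

A real codec would spend bits rather than blur pixels, but the resolution-versus-eccentricity trade is the same.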
> Keith, I have given you a reference in the relevant area. I can only do so much. If you want to keep your eyes closed, it is up to you. I can't argue with somebody who is not willing to listen.

Sorry, but I have never cared much for Debate by Hyperlink. No offense, but I don't think I've ever found a single one of your posts particularly informative. Not that you're alone in that, of course.


> A 70-30 split would shift the two sensor assemblies a little over a stop apart. I'd think that you could push it a lot farther, like maybe 98-2, which would be between 5 and 6 stops. You need to overlap a little bit of the straight line parts of both sensors, but the offset is basically what you gain.
>
> -- J.S.

Good call.

 

You know, I wish the Foveon technology in the Sigma SLR had taken off. That would make three-chip designs less necessary.


> If we apply fovean compression to an existing sensor we will lose resolution. What we really need is a dedicated 4k fovean sensor.

???

 

You cannot "apply" Foveon to an existing sensor. It is a new sensor structure, as I understand it, enabling each pixel to record every color, rather than interpolating through a Bayer pattern. But yes, as of yet the largest Foveon would rate as only 2K.


> If we apply fovean compression ...

 

> You cannot "apply" foveon to an existing sensor. ...

 

What we have here are two similar words with different meanings.

 

Thomas is talking about tailoring projected resolution to the human eye, giving more to the fovea. However, there's no info on how to do it, or whether it has ever been done successfully.

 

Nate is talking about Foveon, a trade name for a multi-layer color sensor chip, and the name of the company that makes it.

-- J.S.

