
ACTUAL Pixel resolution of "35mm" CCD sensors.


Guest Jim Murdoch


  • Premium Member
Hi,

 

> With movie film, you can easily go 12 stops overexposed and still recover a workable image with a good telecine.

 

Is anyone else here willing to agree with that?

 

Phil

 

Although Kodak researchers (Dr. Roger Morton and his team) have published results showing that color negative film can still capture information (e.g. extreme specular highlights) up to 15.9 stops above an 18% gray, no one claims that a negative overexposed by even 12 stops produces a "workable" image. Don't confuse dynamic range (the ability to capture any detail at all) with exposure latitude (how far you can over- or underexpose an image and still produce good results).

 

No doubt that film has BOTH great dynamic range and latitude, but they are not the same.
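(Just to put that 15.9-stop figure in linear terms, here's a quick back-of-envelope calculation -- plain Python, numbers for illustration only:)

import math

middle_gray = 0.18          # 18% gray, relative to diffuse white = 1.0
stops_above = 15.9          # highlight detail reported in the Kodak study

highlight = middle_gray * 2 ** stops_above
print(f"Highlight luminance: {highlight:,.0f}x diffuse white")       # ~11,000x
print(f"Range above middle gray: {2 ** stops_above:,.0f}:1")         # ~61,000:1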



Anyway, if you're going to cite the ability of a medium to help you get away with a mistake, you could just as well argue that video is more tolerant of underexposure.

 

Actually, and maybe ironically, in the Genesis test the 5218 held the shadows better than the Genesis did. But overall I was still very impressed with the Genesis image. If I were shooting television, I would push the producer to seriously consider it.


  • Premium Member
Is your argument that an RGB RGB RGB array has colour detail only equivalent to a 4:2:2 three-chip camera?

It's also important to remember that 4:2:2 is a luminance/color-difference space, which is smaller than the RGB color space. Most of the improvement FotoKem is getting in their new uncompressed full-RGB workflow from S-16 probably comes from working in RGB instead of YUV.
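(For anyone who hasn't dug into the numbers, a rough Python/NumPy sketch of what 4:2:2 does to colour detail compared with full RGB. The matrix constants are the standard Rec. 709 ones; the pixel values are made up:)

import numpy as np

# Toy RGB scanline, values 0-1: 8 pixels, each with full R, G, B.
rgb = np.random.rand(8, 3)

# Rec. 709 style RGB -> Y'CbCr (luma plus two colour-difference channels).
y  = 0.2126 * rgb[:, 0] + 0.7152 * rgb[:, 1] + 0.0722 * rgb[:, 2]
cb = (rgb[:, 2] - y) / 1.8556
cr = (rgb[:, 0] - y) / 1.5748

# 4:2:2 keeps every luma sample but only every other chroma sample horizontally,
# so colour detail is averaged across pairs of pixels.
cb_422 = cb.reshape(-1, 2).mean(axis=1)
cr_422 = cr.reshape(-1, 2).mean(axis=1)

print(y.size, cb_422.size)   # 8 luma samples, only 4 chroma samples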

 

 

 

-- J.S.


  • Premium Member

Hi,

 

It's not inherently smaller; that depends on the gamut you choose to represent. If it's a digital signal, converting it to RGB does quantise quite a lot, which is why video cameras tend to have high bit depth ADCs before converting to YUV for tape. But that's really more to do with the colour spaces they're forced to use by the capabilities of the CCD and the requirements of the tape format.

 

Phil


With movie film, you can easily go 12 stops overexposed and still recover a workable image with a good telecine.

 

First of all, why would anyone want to shoot 12 stops overexposed? And second, I wouldn't call such an image "workable" regardless of the telecine used.

 

Jim, it's obvious that you've already made up your mind regardless of the reality of the situation, and are not going to be dissuaded from your rigid opinion that is clearly based on theory and not any kind of practical experience. You haven't even seen any material that might or might not give you any real information. Continuation of this thread is pointless.


  • Premium Member
Yeah, I've heard that, but you/they mean like:

 

RGBRGBRGBRGBRGB

RGBRGBRGBRGBRGB       

RGBRGBRGBRGBRGB

RGBRGBRGBRGBRGB etc,

 

that is utterly impossible. The RGB triplets have to be "shuffled" and heavily averaged,

 

who needs Panavision or Arri?

I asked Nolan Murdock at the demo, and Panavision is being very careful to say absolutely nothing about the size, shape, and arrangement of their pixels. They are saying that they have 12.4 Megapixels, but absolutely no comment beyond that.

 

So, we can only speculate. Remember that the Viper had 4320 subpixels in the vertical direction on a 2/3" chip. Scaling to a 1.78 extraction from super-35 yields about 19,900 x 11,200, or 222.9 million little squares. Obviously, addressing each one of them individually as pixels isn't happening. Dividing by 12.4 meg, though, it looks like you get about 18 little squares per pixel. Taking three colored pencils and a piece of graph paper, there are lots of interesting ways to make interlocking arrangements of contiguous groups of 18 little squares. This could go a long way towards solving the problem I mentioned before, of a distant red light falling entirely on a red or blue pixel in a Bayer mask. And it looks like a lot of patent or trade secret type stuff could be going on here. Looking at the fabric patterns in the Daviau tests, I'm inclined to believe that they're doing something clever with the size and shape of pixels.
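(The arithmetic above in one place, purely as a back-of-envelope check -- nothing here comes from Panavision:)

# How many Viper-sized "little squares" fit per Genesis pixel.
subpixel_grid = (19_900, 11_200)        # scaled to a 1.78 extraction from Super-35
total_squares = subpixel_grid[0] * subpixel_grid[1]
genesis_pixels = 12.4e6                 # the only figure Panavision will confirm

print(f"{total_squares / 1e6:.1f} million squares")              # ~222.9 million
print(f"{total_squares / genesis_pixels:.1f} squares per pixel") # ~18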

 

As for who needs Arri or Panavision, the answer is people who work on movies 14 - 20 hours a day, six days a week, for months. The ergonomics and logistics of professional shooting can't even be imagined by those who haven't been there. For instance, consider the gear head. You turn a wheel with your left hand to pan, you turn another wheel with your right hand to tilt. Difficult to learn at first, but actually easier to use at the wrong end of a long day. We need Panavision and Arri because they can understand why being cabled to a rack of electronics the size of a fridge isn't going to fly. As Joe said, "Quantity has a certain quality all its own."

 

 

 

-- J.S.


  • Premium Member
I just read an article that Fotokem has begun posting Law and Order in 2K

True. They're scanning a full 2048 wide on the Spirit and working uncompressed RGB all the way thru until they output to tape for delivery. They online on Avid Nitris, which lets you carry repos and simple effects thru automatically from offline. It looks like a good, cost-saving interim system. I've OK'd it for any of our shows that want to try it.

 

 

 

-- J.S.


True. They're scanning a full 2048 wide on the Spirit and working uncompressed RGB all the way thru until they output to tape for delivery. They online on Avid Nitris, which lets you carry repos and simple effects thru automatically from offline. It looks like a good, cost-saving interim system. I've OK'd it for any of our shows that want to try it.

 

1. What's the point of doing this?

2. What format are they scanning to, since there's no existing 2K tape format? And if it's not tape, how are they storing/recalling the material for assembly?

3. Again - what's the point?

Edited by mmost

I just looked at Fotokem's website and found it there in the news section.

 

This is what they claim:

 

This method greatly improves television content quality in both 16mm and 35mm by providing compression-free imagery with true film color space, greater dynamic range, and higher resolution than HD, while providing archival and future-proof masters.

 

Plus, if they are delivering from HDCAM-SR, which is said to be slightly compressed 4:4:4 RGB, a 2K scan isn't that far away and you get to oversample before going to tape.

Edited by tenobell

  • Premium Member
1. What's the point of doing this?

2. What format are they scanning to, since there's no existing 2K tape format? And if it's not tape, how are they storing/recalling the material for assembly?

3. Again - what's the point?

The point is to apply best practices to S-16 and make something inexpensive look good enough for the interim before we go to big chip cameras for TV. They're scanning directly to their internal servers, so the whole show exists only on FotoKem's non-removable hard drives until you have them spit out a tape. Perhaps a little scary, but it's on them to make it work. Mike, I'm pretty sure you know Paul. Give him a call and go take a look.
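(To get a feel for why that material lives on servers rather than tape, here's a rough data-volume estimate. The frame size and bit depth below are assumptions for illustration, not FotoKem's actual scan settings:)

# Rough storage estimate for uncompressed 2K RGB -- assumed numbers only.
width, height = 2048, 1152        # assumed 1.78 extraction; real scan height may differ
bits_per_sample, channels = 10, 3
fps, minutes = 24, 44             # roughly one hour-long episode of cut material

bytes_per_frame = width * height * channels * bits_per_sample / 8
total_gb = bytes_per_frame * fps * 60 * minutes / 1e9
print(f"{bytes_per_frame / 1e6:.1f} MB/frame, ~{total_gb:.0f} GB per episode")
# ~8.8 MB/frame and ~560 GB per episode -- no wonder there's no 2K tape format.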

 

 

 

-- J.S.


This method greatly improves television content quality in both 16mm and 35mm by providing compression-free imagery with true film color space, greater dynamic range, and higher resolution than HD, while providing archival and future-proof masters.

 

Plus, if they are delivering from HDCAM-SR, which is said to be slightly compressed 4:4:4 RGB, a 2K scan isn't that far away and you get to oversample before going to tape.

 

After looking at the news item, I realize that they're doing this not with "Law and Order" but with "L&O: Trial By Jury," a show that is not on the air yet and has a lot of lead time. My guess, having not talked to anyone at Fotokem (although I may call someone out of curiosity), is that they're basically doing a DI finish, with everything before that (i.e., dailies) on a tape format, be it HD or SD. While I realize that there can be a slight increase in "picture quality," this is an approach that likely requires more time and a lot more money than a "standard" HD finish, unless they're "giving it away." I suppose there's some value to this, but the bottom line is -- it's a TV show. It will never be projected theatrically. The difference in "quality" will be so minimal on a home screen as to be insignificant. There will also be a lot more dust busting due to the additional film handling, although I suppose that could be done to the HD delivery master only. The possible bottom line: they're doing this because they can, not because it's sensible and not because they can make money with it - because I don't know of any studio that is going to pay more for post-production than the cutthroat pricing they already have.

 

I don't think NBC shows deliver on SR, although I could be wrong about that. John?


The point is to apply best practices to S-16 and make something inexpensive look good enough for the interim before we go to big chip cameras for TV.  They're scanning directly to their internal servers, so the whole show exists only on FotoKem's non-removable hard drives until you have them spit out a tape.  Perhaps a little scary, but it's on them to make it work.  Mike, I'm pretty sure you know Paul.  Give him a call and go take a look. 

-- J.S.

 

Ah, so it's 16mm. That actually makes more sense (although I still think this is an elaborate and internally expensive solution in search of a problem). I will call Paul C., however, and I will go over and take a look. Thanks.


Guest Jim Murdoch
First of all, why would anyone want to shoot 12 stops overexposed? And second, I wouldn't call such an image "workable" regardless of the telecine used.

 

It never ceases to amaze me how many times I hear "professional" cinematographers make that same statement.

 

Unless you're shooting in a totally lighting-controlled environment, like a TV studio or a sound stage, parts of the film image are always going to be overexposed, sometimes massively so. Do you have any idea at all what the actual difference in photon level is between, say, an actor's face and sunlight reflecting off wet leaves behind him? Or flashes of sunlight off windows, or even polished car paint? Hundreds of times? Thousands? Try millions.
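(To put numbers on that: each stop is a doubling, so scene contrast ratios translate into stops like this -- illustrative arithmetic only:)

import math

# Scene contrast ratios versus the stops needed to hold both ends.
for ratio in (100, 1_000, 100_000, 1_000_000):
    print(f"{ratio:>9,}:1  ->  {math.log2(ratio):.1f} stops")
# 100:1 is about 6.6 stops; a million to one is about 20 stops -- far more
# than any single exposure can hold, which is why something always clips.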

 

No, you wouldn't normally overexpose the whole image 12 stops and try to telecine that, but anybody who shoots anything in a "practical" outdoor situation is going to have to live with the fact that at least some parts of the emulsion, however small, are going to wind up overexposed. Sure, you try to avoid this as far as possible, but there comes a point where some overexposure is unavoidable.

 

And my point is, that where you do get those overexposed bits, a good telecine can still extract something from them and put some semblance of an image there on the TV screen. Maybe not perfect, but it's a whole lot better than a white blob, which is all a video camera will give you!

 

 

A video camera will give you a white blob, period; no amount of post-production trickery can extract anything from that white blob. Video cameras are OK for "shot before a live studio audience" stuff; "real-world" photography demands something more bulletproof.

Jim, .... You haven't even seen any material that might or might not give you any real information....

 

You got that right. I'll believe it when I actually get to shoot something with a Genesis myself and run my own tests. Naturally that will never happen. As I said, I heard exactly the same claims made about the CineAlta, and the real thing didn't hold up.

 

Continuation of this thread is pointless.

 

Oh yeah; for whom? As long as a few of the readers here are encouraged to comment on the excessively diaphanous nature of the Emperor's attire, my work is done.

 

Actually "my work" was to get some actual hard data on the Genesis and its misbegotten relatives. So far, that has been pointless. If I wanted B.S. I could have gotten plenty of that from Panavision and Dalsa with far less trouble :rolleyes:


Guest Jim Murdoch
Although Kodak researchers (Dr. Roger Morton and his team) have published results showing that color negative film can still capture information (e.g. extreme specular highlights) up to 15.9 stops above an 18% gray, no one claims that a negative overexposed by even 12 stops produces a "workable" image.

Well surely, any sort of image is better than a white blob, which is what I meant by "workable".

 

After all, you can take a 35mm stills camera and progressively overexpose the same scene a stop at a time, up to 14 stops (depending on what film you're using), and a minilab will still give you 14 recognizable prints. Admittedly the last few will be somewhat bleached and "cooked-looking", but most of them would be accepted by an uncritical layman. It's hard to get video cameras these days where you can disable the auto-exposure mechanisms, but when you can, it only takes a few stops of overexposure to make the picture completely unusable.

 

For the quality-conscious professional, it's just one less thing to have to worry about.


  • Premium Member
It's hard to get video cameras these days where you can disable the auto-exposure mechanisms

 

Are you talking about professional video cameras or consumer cameras? Because if you're talking about auto exposure on a professional camera, just turn off the auto-iris function on the lens; it's the button right next to the zoom controls on most ENG lenses. All of the "Cine" lenses (Digiprimes, Fujiprimes, etc...) that I have used don't have an auto-iris function.


  • Premium Member
It's hard to get video cameras these days where you can disable the auto-exposure mechanisms,

I can tell you for sure that the Genesis doesn't have an auto-exposure mechanism. It uses the same lenses as the Panavision film cameras. I'm not sure, but the tests could have been done by swapping the exact same lenses onto both bodies. I know that they shot consecutive separate takes rather than side by side; the main reason I can see for taking the extra time to do that would be to completely eliminate the question of differences between lenses by using the same ones.

 

As I've said before, it may be that they have patent or trade secret reasons for not revealing any specifics on Genesis.

 

 

 

-- J.S.


  • Premium Member

Hi,

 

Regarding servo iris on video lenses:

 

The handgrip assembly on most video lenses, which contains the zoom and iris servos, VTR and return-video buttons, zoom rocker and other electronics, is designed to be removable. Four small screws and it's free, and you end up with something that looks almost exactly like the kind of lens you'd expect on a film camera, with standard gearing on the zoom and iris rings where the servos clamp on.

 

Of course as a general rule it's easier to leave this item be, as having a nice smooth motorised zoom is generally a bonus, but if you need to add a mattebox, front rods or other lens accessories that get in the way, don't hesitate to remove it.

 

Phil


Guest Jim Murdoch
Are you talking about professional video cameras or consumer cameras? Because if you're talking about auto exposure on a professional camera, just turn off the auto-iris function on the lens; it's the button right next to the zoom controls on most ENG lenses. All of the "Cine" lenses (Digiprimes, Fujiprimes, etc...) that I have used don't have an auto-iris function.

No, I don't mean the auto-iris. Most video cameras have what might be termed "hidden" sensitivity or "gain" controls, in the sense that you can't turn them off. I suspect the main reason they do this is to make the video camera behave at least superficially like film.

 

Because when this function is disabled (sometimes by "brute force"), in most cameras the overload characteristic becomes totally S.H.!


Guest Jim Murdoch
I can tell you for sure that the Genesis doesn't have an auto-exposure mechanism. It uses the same lenses as the Panavision film cameras. I'm not sure, but the tests could have been done by swapping the exact same lenses onto both bodies. I know that they shot consecutive separate takes rather than side by side; the main reason I can see for taking the extra time to do that would be to completely eliminate the question of differences between lenses by using the same ones.

 

As I've said before, it may be that they have patent or trade secret reasons for not revealing any specifics on Genesis. 

-- J.S.

No, as I mentioned above, I don't mean auto-iris.

As for "trade secrets", getting a 12-stop dynamic range out of a system that uses a 12-bit analog-to-digital converter on the "front end" is a doozy!

 

Well it's not; it's straight science fiction.
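(For readers wondering where the skepticism comes from, here's the arithmetic on a purely linear 12-bit ADC. This is an illustration of the general problem, not a description of how the Genesis front end actually works:)

# Code values available in each stop of a linear 12-bit ADC (4096 levels).
full_scale = 4096
for stop in range(1, 13):
    top = full_scale / 2 ** (stop - 1)
    bottom = full_scale / 2 ** stop
    print(f"stop {stop:2d} below clip: {int(top - bottom):4d} code values")
# The brightest stop gets 2048 codes; the 12th stop down gets exactly 1 --
# so a straight linear 12-bit encode can't deliver 12 usable stops without
# some kind of non-linear processing or dual sampling up front.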


Guest Jim Murdoch
While I realize that there can be a slight increase in "picture quality," this is an approach that likely requires more time, and a lot more money than a "standard" HD finish, unless they're "giving it away." I suppose there's some value to this, but the bottom line is --- it's a TV show. It will never be projected theatrically. The difference in "quality" will be so minimal on a home screen as to be insignificant.

 

"Slight increase"? Actually, the difference between 16mm and 35mm origination (and SD and HD origination for that matter) is easily visible on ordinary PAL or NTSC. The fact is, the more picture data you have to start with, the more apparent resolution you can shoehorn into the bandwidth available. I can tell 35mm origination from 16mm origination, even on VHS.

 

It's all in the detail correction, of course, but it's a vastly simpler process to "fake" detail on an inherently low-resolution delivery system if you have a high-resolution image to start with (the case with 35mm to NTSC) than to have the image processing circuitry try to "guess" where the detail might go, which is the case with most practical television cameras.

 

There's no substitute for either cubic inches or origination resolution!
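("Detail correction" in this sense is basically edge enhancement. A minimal one-dimensional sketch, with a made-up scanline, of how a high-pass boost fakes sharpness on a soft edge:)

import numpy as np

# Toy scanline with a soft edge, as you'd get after downconverting a sharp original.
line = np.concatenate([np.zeros(8), np.linspace(0, 1, 5), np.ones(8)])

# Classic detail/aperture correction: add a scaled high-pass of the signal back in.
kernel = np.array([-1.0, 2.0, -1.0])                # simple 1-D high-pass
high_pass = np.convolve(line, kernel, mode="same")
gain = 0.5                                          # the "detail" knob
enhanced = np.clip(line + gain * high_pass, 0, 1)

# The edge gets steeper (with slight over/undershoot), reading as extra sharpness.
print(np.round(enhanced, 2))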

Edited by Jim Murdoch
