
How many "pixels" does film consist of?


Niklas


Does anybody know how many pixels you would need in order to simulate a 35 mm movie frame? I've heard something like 80 Mb of data for one frame of Jurassic Park, but I don't know if that's correct. 10K x 10K? Or even more? Less?


  • Premium Member

Hi,

 

Very, very complicated, politically influenced and difficult-to-answer question.

 

In current practice, most digital work for film is done at 2048x1556 pixels, but bear in mind that this represents a scan of the full gate area, so a widescreen frame is a chunk cut out of the middle of it. This is generally considered to be a bit below what 35mm motion picture film is capable of once you take the four stages of duplication between camera original and theatrical print into account. This resolution is used for things including digital intermediate and effects work.
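
As a rough illustration, here is a minimal Python sketch of how a 1.85:1 widescreen extraction might be taken out of a 2048x1556 full-aperture scan; the centred placement and exact active-area dimensions are my assumptions for illustration, since real extractions follow the facility's framing charts:

    # Sketch: take a 1.85:1 widescreen region out of a 2048x1556 full-aperture scan.
    # The centred placement used here is an assumption for illustration only.
    full_w, full_h = 2048, 1556
    aspect = 1.85

    crop_w = full_w                    # use the full scan width
    crop_h = round(full_w / aspect)    # about 1107 pixels tall
    top = (full_h - crop_h) // 2       # centre the extraction vertically

    print(f"1.85:1 extraction: {crop_w} x {crop_h}, starting {top} pixels down")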

 

There is a current industry initiative to standardise twice that resolution, 4096 by however many pixels you need for your aspect ratio, for digital cinema distribution. It is probably fair to say that this is generally considered to be reasonable; in fact, I personally feel that direct scans of 4K images from modern neg stock projected digitally will probably look somewhat sharper than the average current print. Bear in mind also that "the average current print" is very variable. Super-35 to anamorphic via an optical printer is probably the softest possible way to go, and if you choose to add special techniques such as bleach bypass you can end up with a print which has comparatively low resolution. Shoot ISO50 stocks in cinemascope with good lenses and contact print, and resolution will be much higher.

 

If you're talking about capturing every bit of detail on the negative then things get more complicated. ISO50 stock would require a higher resolution than ISO800. You can look at the sharpness tests produced by film manufacturers to work out how big the scan would have to be, but there does come a point where you're just defining the shape of the grains more accurately, which isn't actually getting you any sharper images on the output. 10K, or a scan 10240 pixels across, would certainly do that, but there doesn't seem to be any sensible reason to go that far.

 

Several other things cloud the issue further:

 

- Digital images captured digitally, on something like the upcoming Kinetta camera with true 2K resolution, should look sharper than images shot on film then scanned since they only have the artifacts of one system rather than two piled together.

 

- Digital images digitally projected should look better than those scanned back out to film, for the same reason.

 

- The preceding two considerations are BOTH issues with current practice since material is almost always shot on film, scanned to digital, and then put back out to film again for distribution. This shows the digital technology in the worst possible light.

 

- Digital projection avoids dirt, scratches, cue dots, jumping splices and many other undesirable characteristics of film.

 

You may conclude from this that digital postproduction technology really needs both practical digital capture (which is probably imminent) and digital distribution (which is certainly not) to shine.

 

Phil


  • Premium Member

Agree with Phil. 2K is not enough and 4K is. You could scan at 6K or 8K or whatever, really, but it wouldn't do much good except take up hard disc space. Then there's the whole Nyquist thing, which says you should oversample by 100 percent...


  • Premium Member

Almost everyone can see the difference between a 2K and a 4K scan of a 35mm production. 2K is most common today, but 4K is likely to become more common as costs and throughput improve ("Spiderman 2" used 4K). Recent work by Kodak's Dr. Roger Morton and his team shows that the primary advantage of going beyond 4K is further reduction of aliasing artifacts:

 

http://www.electronicipc.com/journalez/det...=45390011120508

 

http://www.electronicipc.com/journalez/det...=45390011120705


The Nyquist theory does not talk about oversampling!

 

The normal, "one to one", so to speak, sampling is based on the Nyquist theory. The theory states that if you have an analog signal with two different peak values, for example a black line on a test target, you need to sample it with two different digital values (two pixels in this case).

 

In other words: on the film there is a transition from black to white; this makes one black line on film, and it should be sampled with two pixels, one for the black part and one for the white part.

 

It has nothing to do with sampling at 8K and downsampling to 4K. Sampling at 4K is the Nyquist way of sampling 85 lines/mm.
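
As a small Python sketch of that relationship (the 24 mm Super 35 gate width used here is my assumption):

    # Sketch: two pixels per line pair (Nyquist), over an assumed 24 mm Super 35 gate width.
    gate_width_mm = 24.0
    pixels_across = 4096

    pixels_per_mm = pixels_across / gate_width_mm
    nyquist_lp_per_mm = pixels_per_mm / 2.0    # one black + one white pixel per line pair

    print(f"{pixels_across} pixels over {gate_width_mm} mm -> {nyquist_lp_per_mm:.0f} lp/mm")
    # roughly 85 lp/mm, the figure quoted above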

 

And one more thing: people often forget the lenses. The MTF curves in film specifications show the resolution of the film only. Any lens, no matter how good, will always reduce the maximum resolution captured on film.

 

The formula is: 1/system resolution = 1/lens resolution + 1/film resolution.

 

If you have a very fine-grained film like 5245, for example, and use it on a daylight shot at an aperture of f/16, here is what you get:

 

Let's say that 5245 can resolve 200 lp/mm on very high contrast detail; this is a bit less than 10K for Super 35.

 

ANY lens can only resolve about 93 lp/mm at f/16. This is the diffraction limit. It is impossible to pass it because of the laws of physics. Not to mention that not every lens will reach the diffraction limit; this is the best resolution in THEORY.

 

Use the formula and you get about 63 lp/mm, and that is only 3K resolution.
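
Here is the same arithmetic as a small Python sketch. The ~1500/f-number diffraction estimate and the 24 mm gate width used to turn lp/mm into a pixel count are my assumptions, not exact figures:

    # Sketch: combine lens and film resolution with 1/system = 1/lens + 1/film,
    # then convert lp/mm to an approximate horizontal pixel count.
    # The 1500/f-number diffraction estimate and 24 mm gate width are assumptions.
    def system_res(lens_lp_mm, film_lp_mm):
        return 1.0 / (1.0 / lens_lp_mm + 1.0 / film_lp_mm)

    def to_pixels(lp_mm, gate_width_mm=24.0):
        return 2 * lp_mm * gate_width_mm   # two pixels per line pair

    film = 200.0                           # e.g. 5245 on very high contrast detail
    lens_f16 = 1500.0 / 16                 # ~93 lp/mm diffraction limit at f/16

    combined = system_res(lens_f16, film)  # ~63 lp/mm
    print(f"{combined:.0f} lp/mm -> about {to_pixels(combined):.0f} pixels, i.e. roughly 3K")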

 

When figuring out resolution you must not forget the lens. With the best possible lens in the universe at f/16, a 10K film stock will give you 3K images, and that is on high contrast targets. In reality, with typical contrast, you would get less than 3K at that aperture.

 

At f/2.8, for example, the diffraction limit is 530 lp/mm, which would give you 145 lp/mm on film with 5245, and that is a bit less than 7K. But consider that lenses do not reach their diffraction limit at such wide apertures because of aberrations, so you wouldn't get 7K with common lenses. And one more thing that is very important: these figures are for high contrast targets of 1000:1. In reality you usually don't get such contrasty detail, and the resolution for lower contrast detail is lower.
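
Running the same kind of calculation at f/2.8 (again with my assumed 1500/f-number diffraction estimate and 24 mm gate width) gives the figure quoted above:

    # Same calculation at f/2.8, self-contained; same assumptions as the sketch above.
    film = 200.0
    lens_f28 = 1500.0 / 2.8                          # ~530 lp/mm theoretical diffraction limit
    combined = 1.0 / (1.0 / lens_f28 + 1.0 / film)   # ~145 lp/mm
    print(f"{combined:.0f} lp/mm -> about {2 * combined * 24:.0f} pixels, i.e. just under 7K")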

 

Considering all this, you could say that for all but 100 ISO or 50 ISO films (in the best conditions: high-key lighting, the best lenses at their best apertures) 4K can be considered the film resolution. For those best cases, the resolution can be as high as 6K.

 

But that doesn't mean that film should be scanned at only 4K or 6K. On film, detail can be diagonal or out of phase with the sensor elements, so to get a really good reproduction of a frame you should always scan at a higher resolution than the one on the film. If your film can capture 4K, you should scan at 6K.

 

Not to mention that film does not give you a fixed grid like digital. This is why film will be superior to digital capture devices even when digital capture surpasses film in resolution. You can never have a perfectly smooth diagonal straight line in a digital image, nor can you have the line placed anywhere between the pixels: it is either on one pixel or on the other, or on both.


  • Premium Member

You'd also think that regardless of how sharp the image is -- let's say it was shot with diffusion, a bad lens, and is out-of-focus to boot! -- the negative should still be scanned at enough resolution to resolve each individual color dye grain and reproduce it onto a new piece of film.

 

Of course, if the image is soft, one could in theory work at a lower pixel resolution but you are essentially "degraining" the image (may look nicer, who knows, which is why 2K seems to work OK for 16mm stuff). But ideally, I'd think the goal is to make a "perfect" digital copy of the original.


  • Premium Member

Hi,

 

I'd be happy so long as you had roughly one pixel for every grain. I don't think it's relevant to worry about a "perfect digital copy of the original" - because at some point you're just duplicating the faults of the original. If you look at it this way, a filmout of digitally-acquired material is poor because the film doesn't resolve every square block of colour, which is clearly ludicrous.

 

If some of your artistic techniques rely on the faults of film - grainy bleach bypass, or whatever - then I would contend that if it's subtle enough for a 4K scan to miss, then it's subtle enough to go without. Printing a neg changes the quality of the image; it's not unreasonable to expect that digital distribution would also change it, without that necessarily implying a drop in quality.

 

I'd prefer to consider the goal to be to create equal perceived image quality. I know that's rather up in the air and impossible to define, but I think the discussion in this thread goes to show that it actually is pretty much impossible to define since it's so dependent on circumstances.

 

Phil


  • Premium Member

I guess the best criterion would be whether the next step up in quality can be seen by a skilled cinematographer, all else being equal. 4K vs. 2K passes that test. 65mm vs. 35mm passes that test. 35mm vs. 16mm passes that test. Etc.

 

Of course, when comparing different media, there's more to the picture than resolution or the number of pixels or grains: latitude, dynamic range, color gamut, flesh reproduction, etc.


You'd also think that regardless of how sharp the image is -- let's say it was shot with diffusion, a bad lens, and is out-of-focus to boot! -- the negative should still be scanned at enough resolution to resolve each individual color dye grain and reproduce it onto a new piece of film.

 

Of course, if the image is soft, one could in theory work at a lower pixel resolution but you are essentially "degraining" the image (may look nicer, who knows, which is why 2K seems to work OK for 16mm stuff).  But ideally, I'd think the goal is to make a "perfect" digital copy of the original.

David,

 

When I said all that, I never meant that scanning should go below 4K; I was just comparing film resolution to its digital equivalent. Even if the resolution on the negative is 3K, as in the example I mentioned, the scanning should be at least 4K. One reason is that a 4K scan is more precise in showing fine detail, and another is the grain size, as you mention. I agree that the grain structure should be preserved.

 

From all the example 4K scans I have seen, the grain structure is visible even for 5245 film. One typical grain "particle" is shown as a small group of pixels, which is good. For grainier films, a 4K scan will show grain particles with even more pixels; the pattern is preserved.

 

When you resample a 4K scan up to 8K, for example, the multiple-pixel grain particles resemble the actual grain you would get if you had scanned at a real 8K. Single-pixel grain would only look like softened noise if you doubled the resolution.

 

I have been experimenting with this in my photo editing program. I used a microscope image of a section of film; the enlargement in that image was equivalent to about a 26K scan. I downsampled to 8K, 6K and 4K and then experimented with upsampling back to 26K. The grain structure made by upsampling from 8K was closest to the 26K original, but even 4K gave an adequate grain structure. The film was Velvia, a fine-grained stock, so I think it can speak for most of the grainier films on the market.
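
For anyone who wants to repeat the test, here is a rough Python/Pillow sketch of the procedure; the file name is a placeholder, and the scale factors simply approximate the 26K/8K/6K/4K relationship described above:

    # Sketch of the downsample/upsample grain test using Pillow.
    # "film_microscope.png" is a placeholder for a high-magnification image of a film frame.
    from PIL import Image

    original = Image.open("film_microscope.png")   # treated as the "26K-equivalent" reference
    w, h = original.size

    for target_k in (8, 6, 4):                     # roughly 8K, 6K and 4K equivalents
        factor = 26 / target_k
        small = original.resize((round(w / factor), round(h / factor)), Image.LANCZOS)
        back = small.resize((w, h), Image.LANCZOS) # upsample back to the reference size
        back.save(f"regrained_from_{target_k}k.png")  # compare grain against the original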

