The truth about 2K, 4K and the future of pixels


Tenolian Bell

Firstly, what is an OLPF?

 

Also, can someone provide me with more information about aliasing? What is meant by this term?

John,

 

This information is readily available in many other posts on this forum and also on Wikipedia. These questions have been asked and answered here too many times already, so please do a search first and try to find these answers for yourself before you ask these questions here - it's a waste of bandwidth and of people's time to do otherwise. Phil already explained what the OLPF was in this very thread, post #12.


To be scrupulously fair, I don't think I ever gave a definition of what OLPF stands for.

 

Optical low-pass filter - basically a blurring filter: a groundglass screen mounted a tiny distance from the imaging sensor, which has the effect of uniformly diffusing the image into an (ideally) Gaussian distribution of itself, to avoid aliasing.

 

Aliasing you can indeed look up on Wikipedia.

 

P


Actually, there are a number of other strains of creative accountancy going strong.

 

One is the old "projected film only has a horizontal resolution of somewhere between 800 and 1,500 lines" claim (the exact figure depending on how hyperventilated the fanboy is).

 

This may or may not be true, but how is it relevant? At the start of the post chain we have either a film negative or an electronic sensor. Assuming it is shot by someone who knows what he's doing, a scanned 35mm negative is always going to produce a higher-quality image than a signal from an electronic camera. After that, every step in the chain, whether it's a film release or a digital release, is identical (or can be, at any rate).

 

Another old chestnut is "The maximum horizontal resolution ordinary 35mm movie film can resolve is about 3,500 lines. The Blah-blah-blah digital camera is a 4K camera, therefore the Blah-blah-blah digital camera has a resolution greater than film."

 

First off, let's be precisely clear about what is meant by "lines". "4,000 lines horizontal resolution" means 2,000 vertical black lines on a white background (or vice versa). Obviously the finest pattern of vertical lines a 4K sensor or screen could possibly display is 2,000 vertical black lines on a white background, where every other pixel in the horizontal direction is black. (If you had 4,000 white lines you would simply have a white screen!)
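To pin that counting down (a throwaway sketch in Python; the function name is my own invention):

def finest_pattern(photosites: int) -> tuple[int, int]:
    # The finest pattern N photosites can display: every other photosite
    # black, i.e. N/2 black lines + N/2 white lines = "N lines" in the
    # terminology above.
    black_lines = photosites // 2
    total_lines_figure = photosites
    return black_lines, total_lines_figure

print(finest_pattern(4000))  # (2000, 4000): a "4K" row at its absolute best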

 

So, does the Blah-blah-blah digital camera with a 4K sensor therefore have a resolution greater than film? No!!! With film, you could start off shooting so that the alleged 3,500 vertical lines are framed to just fill the active emulsion area and then zoom in or move the camera towards the chart, and you could have any number of lines focussed on the film's active emulsion area you like, anywhere you like, down to whatever distance the lens will allow you to focus to, and the film will capture them all. No aliasing, no nonsense. Maybe 3,500 lines don't make it out of today's scanners, but you can see them under a microscope, so they're there for future use.

 

Repeat that with a so-called 4K digital camera sensor with no optical filtering, and you will get nothing like that. There will be times when the white and black lines straddle individual photosites, and the result will be the average of the two, i.e. grey.

Have a look at this picture:

The "corrugated iron" bit at the top represents a small part (about 50 lines) of a resolution chart of a little bit less than 4K, say about 3.6K, focussed on the image sensor.

The numbered "cups" underneath represent a single horizontal row of the individual photocells. The amount of electric charge they have accumulated is represented by the shade of grey.

 

[Image: scanneryk0.jpg - the resolution-chart pattern above a numbered row of photosites, shaded by accumulated charge]

 

 

Photosites #1 and #2 directly coincide with dark and bright parts of the resolution pattern, so #1 has virtually no charge while #2 is fully charged.

But if you look closely, you'll see that this situation does not quite apply to photosites #3 and #4: #3 is slightly lighter than #1, and #4 is slightly darker than #2. #5 and #6 are even more so, and so on.

Now look at #10 and #11. Both of those are equally straddling dark and bright areas, so they both charge up by roughly equal amounts. After that, the pairs of adjacent photosites start to move back toward the condition of #1 and #2.

Eventually, when you get to #21 and #22, the situation is identical to #1 and #2. The cycle then repeats every 22 or so photosites across the width of the screen. What the TV screen displays looks something like this (this is meant to represent a small segment of the screen). The blank areas represent what happens around #10 and #11, where adjacent photosites are delivering the same or a very similar average signal. This is just one example of what is known as "aliasing".

 

[Image: moirevp7.jpg - the resulting moiré banding across a small segment of the screen]
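If you'd rather poke at this than squint at the diagram, here is a minimal simulation of the effect (Python with numpy assumed; the chart frequency and photosite count are the ones used above):

import numpy as np

PHOTOSITES = 4000      # one horizontal row of a "4K" sensor
CHART_LINES = 3600     # a ~3.6K chart: 1800 black + 1800 white lines
OVERSAMPLE = 100       # sub-samples per photosite for area integration

# The chart as a square wave across the sensor width: 0 = black, 1 = white.
x = np.linspace(0.0, 1.0, PHOTOSITES * OVERSAMPLE, endpoint=False)
chart = np.floor(x * CHART_LINES) % 2

# Each photosite's charge is the average light falling over its own width.
charge = chart.reshape(PHOTOSITES, OVERSAMPLE).mean(axis=1)

# Neighbour-to-neighbour contrast: near 1 where photosites line up with the
# lines (like #1 and #2 above), near 0 where both straddle a black/white
# edge equally (like #10 and #11).
print(np.round(np.abs(np.diff(charge[:40])), 2))
# The printed contrast sweeps from ~0.9 down to 0 and back in a regular
# beat along the row - on screen, the banding in the picture above.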

 

So how can you make the sensor display this resolution chart properly?

 

Well, the obvious answer is that you need some way to make the photosites able to accumulate more charge on one side than the other. Which is about as feasible as making a cup of coffee black on one side and white on the other!

 

The ONLY thing you could do is basically cut the photosites in half, so there are at least twice as many photosites as lines in the image. To correctly resolve 3,500 lines (in the sense of producing an image identical to the one you could capture on a 35mm film emulsion) you need a 7K (and preferably an 8K) sensor. Since the same applies in the vertical direction, you need twice as many photosites vertically as well, which means four times as many overall.
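To put numbers on that (a trivial worked sum, nothing more):

def photosites_needed(lines: int) -> int:
    # Two photosites per line, per the argument above.
    return 2 * lines

print(photosites_needed(3500))  # 7000 across: hence a 7K (preferably 8K) sensor
# The same factor of two applies vertically, so four times the photosites overall.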

 

However, an 8K sensor is not really feasible at the moment (at least for movies), so instead of splitting the photosites in half, all you can really do is make sure that the minimum width of any line in the image is at least the width of two photosites. Which means there can only really be 2,000 lines optically focussed on the 4,000 photosites. You do this with the aid of an Optical Low Pass Filter (OLPF).

 

An OLPF is a lot more sophisticated than a simple diffusion filter, in that the softening effect only kicks in past a certain level of detail. The effect is rather like having your taking lens focussed on a groundglass (similar to a video tap, or a small-format-video-to-35mm-film lens adaptor) but having the small camera slightly out of focus. That way, no matter how carefully you focus the main lens, you can never get the image quite up to 100% sharpness.
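To see what the filter buys you, here is the same toy simulation with a pre-sensor blur standing in for the OLPF (numpy and scipy assumed; a real OLPF is built from birefringent quartz and has a far sharper cutoff than the Gaussian used here, so treat this purely as a sketch):

import numpy as np
from scipy.ndimage import gaussian_filter1d

PHOTOSITES, OVERSAMPLE = 4000, 100

def max_pair_contrast(chart_lines: int, olpf: bool) -> float:
    x = np.linspace(0.0, 1.0, PHOTOSITES * OVERSAMPLE, endpoint=False)
    chart = (np.floor(x * chart_lines) % 2).astype(float)
    if olpf:
        # Stand-in OLPF: blur radius on the order of one photosite width.
        chart = gaussian_filter1d(chart, sigma=OVERSAMPLE)
    charge = chart.reshape(PHOTOSITES, OVERSAMPLE).mean(axis=1)
    return float(np.abs(np.diff(charge)).max())

print(max_pair_contrast(3600, olpf=False))  # ~1.0: the beating pattern above
print(max_pair_contrast(3600, olpf=True))   # ~0.02: the too-fine chart is gone
print(max_pair_contrast(1000, olpf=True))   # ~0.35: coarser detail gets through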

 

This of course is an extreme situation. Aliasing is most objectionable where there are large areas of repetitive detail in an image, such as striped shirts or picket fences. In the case of the single sharp edges of objects, the main effect of aliasing is that the sharp edge is physically moved to the nearest available pixel, since the pixels themselves cannot be moved. This produces the characteristic "jaggies".

 

The upshot of this is that it is not strictly necessary to filter down to the "half-the-pixels" limit, allowing you to trade some aliasing artifacts for extra resolution.

 

However the definition of "acceptable" seems to depend on whether you are selling or using the equipment...


hmmm,

 

interesting stuff

 

I was guessing maybe the individual photosites would be 'snooted' off (sorry about my ignorance of the terms) so they only see the corresponding area of the OLPF that would normally be in focus on the sensor - this is so the image doesn't get even more blurred on the other side (the 'not helpful' blur, or is it helpful?)

 

... and if this is the case, wouldn't it cause issues with an actively driven OLPF? In that the snooting (again, sorry!) would also have to move with the OLPF while the sensor stayed still (as in, get longer and shorter)? Maybe there is some neat optical way to achieve this, akin to collimating the light emitted from the OLPF to be perpendicular to the sensor?

 

Am I also correct in saying that, by its nature, an image passed through an OLPF onto a sensor, as in these cameras, will lose relatively more contrast at the edges in the process (perceived, at least)?

Am I also correct in saying that, by its nature, an image passed through an OLPF onto a sensor, as in these cameras, will lose relatively more contrast at the edges in the process (perceived, at least)?

An OLPF of the type used in high-end cameras is an extremely complex device, often assembled from several ultra-thin layers of specially treated quartz. It is nothing like an ordinary diffusion filter, and even the much cheaper ones used in CCTV cameras just look like a completely clear piece of glass. (Most of them have an IR filter glued on, so they look slightly greenish, but that's all.) You can't really see the low-pass function unless you look through it under a microscope, and even then it's hard to see.

The only effect it has is to soften the sharpest parts of the image; it should have no other effect.


What I don't get is why a cinematographer would want to remove the optical low-pass filter. Is it because he wants to see the artifacts more clearly? But then again, big resolution numbers sell more cameras and artifact reduction does not count for anything.



Some time ago, I presented an idea that I indirectly stole from Mr. Pytlak. (Mussberger says I stoled it!) I noticed this thing by doing my own scans. The idea is this:

 

Film is a pan-resolution image. In my scans I could find some dye sites and grain sites small enough that they could be resolved only by unreasonably high-resolution pixels. Since the tiniest of these sites exceeded all of my scanning technologies, up to 12,000 dpi with interpolation, I've estimated the smallest would need an equivalent digital resolution of around 32,000 to 64,000 dpi. That's just a guess, of course.
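For anyone who wants to translate those dpi figures into pixel counts, a rough conversion (a sketch only; I'm assuming a full-aperture 35mm gate width of about 24.9 mm, so adjust for your format):

MM_PER_INCH = 25.4
GATE_WIDTH_MM = 24.9  # assumed full-aperture 35mm gate width

def pixels_across(dpi: float) -> int:
    return round(dpi * GATE_WIDTH_MM / MM_PER_INCH)

for dpi in (4000, 12000, 32000, 64000):
    print(dpi, "dpi ->", pixels_across(dpi), "pixels across the frame")
# 4000 dpi comes out at ~3900 pixels, roughly a "4K" scan; the 32,000-64,000
# dpi guess above would mean something like 31K-63K pixels across the frame.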

 

Since those tiny sites are statistically few, it is a fair estimate that 4K to 8K can capture the greater volume of sites fully. Some dye and grain clusters are so big that even 4K is a little too fine. What I have seen first-hand is that 2K captures very few individual clusters, even the biggest. For this reason, I call 2K pixels only "pixel proxies" of the film grain. To me, 4K is the minimum resolution to represent any of the delightful qualities of film. 8K would be better, but little of that resolution will ever make it to the screen, and it is a bit of overkill.

 

4K is good for scanning. 2K isn't worth the bother if you really care. If you don't care, then shoot digital in the first place.
