
2008 - The New Year's Resolution


Werner Klipsch


My New Year's resolution was not to get involved in any more of these discussions. Fortunately, it is still 2007 :lol:

 

I must confess I am rather confused by Mr Jannard's recent statements on Reduser about detail correction with the RED.

 

Actually, overall I am somewhat confused about many of the claims made about how much resolution the Mysterium sensor has (or any Bayer mask sensor, for that matter).

 

Phil Rhodes insists that the absolute guaranteed resolution of a Bayer mask 4K sensor can only be 1K if we are talking about red, green and blue, and not synthesizing any part of the image.

 

I do not understand this reasoning. If you consider the Bayer sensor as being made up of sub-pixels, each with two green, one red and one blue photosite, then there will indeed be 2K of these sub-pixels across the sensor, and 1,152 vertically. From this you can at least assuredly generate a genuine 2K 16 x 9 image.
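
For anyone who wants to sanity-check that arithmetic, here is a minimal Python/numpy sketch of the grouping described above, assuming a conventional RGGB Bayer layout; the 4096 x 2304 photosite count is an illustrative figure implied by the 2K x 1,152 result, not RED's published specification:

```python
import numpy as np

def bayer_to_half_res_rgb(raw):
    """Group each 2x2 RGGB Bayer cell (R G / G B) into one RGB pixel.
    Every output value comes straight from a photosite; nothing is
    interpolated or synthesized."""
    r  = raw[0::2, 0::2]        # red photosites
    g1 = raw[0::2, 1::2]        # first green photosite in each cell
    g2 = raw[1::2, 0::2]        # second green photosite
    b  = raw[1::2, 1::2]        # blue photosites
    g  = (g1 + g2) / 2.0        # average the two greens
    return np.dstack([r, g, b])

# Hypothetical "4K" Bayer frame of 4096 x 2304 photosites.
raw = np.random.rand(2304, 4096)
print(bayer_to_half_res_rgb(raw).shape)   # (1152, 2048, 3): a genuine 2K image
```

This is the 'no questions asked' image: half the photosite count in each direction, with no synthesized detail at all.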

 

After that, with the extra detail information from the extra green photosites, I will allow that further detail can be mathematically calculated and added to the base 'no questions asked' 2K, to give a usefully higher resolution image. The D-20 does a good job of this, and I see no reason why the RED should not do as well or better.

 

Where I and some of my colleagues have trouble is that Mr Jannard in some of his recent posts on Reduser makes the point that some people may find the RED images not as sharp as expected because they "do not add any detail correction".

 

So, at which point does the synthesized detail above 2K stop being "enhanced picture information" and become "detail correction"? At the point where it becomes irritating, perhaps?

 

I would much rather the finer detail correction was derived from the original RAW data, rather than re-synthesized from already synthesized data, which is what would happen if the extra correction is done downstream.


  • Premium Member

Jim Jannard is referring to edge enhancement, i.e. "sharpening", when he says "detail correction", not de-Bayering algorithms to derive RGB data from the Bayer pattern. The converted RAW files don't use edge enhancement, unless that feature is selected when doing the conversion, so they feel a little soft. Truth is that most digital images ultimately get some degree of sharpening somewhere in the chain. I think most of us would rather the camera not add any artificial sharpening so we can add it to taste later.

 

Most people say that a de-Bayered image with a decent de-Bayering algorithm has roughly 2/3 or 3/4 of the original resolution, not half or less than half. RED says it's an effective 3.5K, I'm more likely to say 3K, but it doesn't really matter... the numbers aren't the point, the point is how much information, how much fine detail, the image seems to resolve on the big screen without aliasing artifacts. All that matters for now is that it is competitive with 35mm, since that is the industry standard. So far, it seems any of the digital cameras in the 2K to 4K range are in the 35mm ballpark, sharpness-wise. Look at "The Golden Compass", which has a lot of Genesis footage in it, and that's an HD recording. Of course, one reason it blends well is probably that the 35mm footage was all posted in 2K, but the movie looked pretty good on the big screen. There was a softness, but it added to the faux period feeling.


Single digital sensors need an OLPF (optical low pass filter) to eliminate aliasing. That filter softens the native image. Sharpening, unsharp mask, or OLPF Compensation (maybe a more elegant term) is necessary if you want to get back the original resolution of the sensor. Some do not want to get back the lost sharpness of their footage. They like the "softer, creamier" look of no OLPF Comp. Some want to snap it back. Some may even want to over-sharpen to get it to look more like traditional HD footage.
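
The usual implementation of that kind of sharpening is an unsharp mask: blur the image, subtract the blur to isolate the high frequencies, and add a fraction of them back. A rough Python sketch, where the radius and amount values are purely illustrative and not anything RED uses:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img, radius=1.0, amount=0.5):
    """Boost edge contrast lost to an OLPF-like blur.
    img: 2D float array in [0, 1]. radius: Gaussian sigma in pixels.
    amount: strength of the boost; 0 leaves the image unchanged."""
    low = gaussian_filter(img, sigma=radius)   # the smooth part of the image
    high = img - low                           # the fine detail riding on it
    return np.clip(img + amount * high, 0.0, 1.0)
```

Note that this only redistributes contrast around detail the sensor actually recorded, a point taken up later in the thread.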

 

Canon's new 1Ds MKIII manual has a section dedicated to sharpening and why it is important for their cameras. It is the nature of the beast.

 

We choose NOT to apply any in-camera sharpening so that the user has all his choices. But in head-to-head tests where RED RAW footage is being compared to in-camera sharpened footage, we just want to make sure everyone knows what's happening.

 

Jim

 

Jim Jannard

www.red.com


Isn't the primary reason RED applies no sharpening to the image because sharpening isn't even possible on a RAW image? ;)

 

It's sort of like saying "We don't apply any sharpening to a Zip file."

 

That is not correct. A lot of things happen (and can happen) between the sensor and a RAW file. We just choose not to do them.

 

Jim


  • Premium Member
Most people say that a de-Bayered image with a decent de-Bayering algorithm has roughly 2/3 or 3/4 of the original resolution, not half or less than half. RED says it's an effective 3.5K, I'm more likely to say 3K, but it doesn't really matter... the numbers aren't the point, the point is how much information, how much fine detail, the image seems to resolve on the big screen without aliasing artifacts. All that matters for now is that it is competitive with 35mm, since that is the industry standard.

Slightly off-topic, but a few times here and in other places I have heard people claim that the best practical resolution 35mm film projection can offer is "about 1,500 lines". This is assuming a new print, a competent projectionist, and a good quality, clean lens that hasn't been dropped too many times.

 

But what do they mean by 1,500 lines?

 

Do they mean that such a system is only just capable of displaying 750 vertical white lines on a black background that occupies the width of the screen? And if so, how visible are these lines? Does "visible" mean "clearly visible to the average cinema patron with optimally corrected eyesight", or: "the best that can be seen by any means possible"?

 

Or do they mean "lines" as in "columns of pixels", which is what a lot of people seem to think? In other words "1.5K projection", suggesting that this is inferior to 2K digital projection.

 

For alias-free projection, a display of 1,500 vertical lines would necessitate a projector with at least 3,000 pixels horizontally, and in theory, a video camera with the same horizontal pixel count.

 

Another of the many annoying practices in this sort of discussion is the comparison of film prints, produced entirely by traditional photochemical processes, with video-derived images which have all the benefits of electronic post-production "crispening" and other image enhancement. In the real world, there is nothing at all to stop images scanned from negative film getting the same treatment.


1500 lines is 1500 lines per picture height, which is 750 line pairs per picture height. A line pair is a white and black line together. The best reference I've found for this is here, http://www.cst.fr/IMG/pdf/35mm_resolution_english.pdf where film was tested through the stages of production through to typical real world cinema projection, where best performance was 875 lines per picture height.

 

For a projector to display 1500 lines it needs 1500 pixels. You're confusing things by thinking of aliasing at this point, and confusing frequency, which would equate to line pairs, with pixels and lines, which differ by a factor of 2: there are twice as many lines as line pairs.

 

Graeme


  • Premium Member
The best reference I've found for this is here, http://www.cst.fr/IMG/pdf/35mm_resolution_english.pdf where film was tested through the stages of production through to typical real world cinema projection, where best performance was 875 lines per picture height.

 

I can believe that -- it explains why 2K projection seems on par with really good 35mm answer print projection. Now if we never, ever had to make a 35mm IN and prints, then 2K D.I.'s and 2K mastering would probably be fine for 2K projection (though 4K scanning would still have some benefits even for downrezzing to 2K).

 

But my feeling is that if you still have to make movie release prints from an IN/IP, then starting out with the best possible 35mm negative is still a good idea, whether that means a 4K D.I. or no D.I. or 4K digital origination.

 

Of course there are other benefits from originating at a higher resolution than the presentation format (i.e. oversampling). And there would be some benefits from 4K digital projection in terms of reducing stairstepping, etc.

 

After seeing how good "Walk Hard: The Dewey Cox Story" looked shot in HD on the Genesis and digitally projected in 2K, it makes you think that the real weak link is the fact that we still make 35mm release prints from dupes.


After seeing how good "Walk Hard: The Dewey Cox Story" looked shot in HD on the Genesis and digitally projected in 2K, it makes you think that the real weak link is the fact that we still make 35mm release prints from dupes.

 

I saw that picture on film and it looked quite good as well. In fact, I had no idea it was a Genesis origination until the end credits. Maybe it's not as weak a link as you think...


  • Premium Member
I saw that picture on film and it looked quite good as well. In fact, I had no idea it was a Genesis origination until the end credits. Maybe it's not as weak a link as you think...

 

The last Genesis movies I saw before that were "Superbad", a reel of "Balls of Fury" (just to check out the Genesis photography), and ten minutes of "I Now Pronounce You Chuck and Larry" (what an awful movie...).

 

"Superbad" was a bit soft and muddy at times in the print I saw. "Balls of Fury" was OK but the skintones were a bit desatured and metallic-looking. What little I saw of "I Now Pronounce You Chuck and Larry" looked pretty good. But they were all somewhat pastel and soft compared to the rich colors and sharpness of the 2K projection of "Walk Hard".

 

"Flyboys" looked good. I wonder if part of the issue is whether some theaters are getting prints off of the original digital negative?


The last Genesis movies I saw before that were "Superbad", a reel of "Balls of Fury" (just to check out the Genesis photography), and ten minutes of "I Now Pronounce You Chuck and Larry" (what an awful movie...).

 

"Superbad" was a bit soft and muddy at times in the print I saw. "Balls of Fury" was OK but the skintones were a bit desatured and metallic-looking. What little I saw of "I Now Pronounce You Chuck and Larry" looked pretty good. But they were all somewhat pastel and soft compared to the rich colors and sharpness of the 2K projection of "Walk Hard".

 

I guess these little comedies are a perfect testing ground of sorts for the technology. Very few people go to see them for the cinematography (no offense to the DP's, they know it better than I do). :/


  • Premium Member
1500 lines is 1500 lines per picture height, which is 750 line pairs per picture height. A line pair is a white and black line together. The best reference I've found for this is here, http://www.cst.fr/IMG/pdf/35mm_resolution_english.pdf where film was tested through the stages of production through to typical real world cinema projection, where best performance was 875 lines per picture height.

 

For a projector to display 1500 lines it needs 1500 pixels. You're confusing things by thinking of aliasing at this point, and confusing frequency, which would equate to line pairs, with pixels and lines, which differ by a factor of 2: there are twice as many lines as line pairs.

 

Graeme

I'm somewhat confused.

 

Suppose I had a rectangular test card with 1,024 vertical black lines drawn across its width, each one separated from the next by an equal area of white. That would constitute a 2,048-line, "2K", horizontal resolution test pattern, would it not?

 

If we set up a 2K or better HD camera (or used a high resolution film scan) so that the 2,048 lines just fitted into the width of the image capture area, then the resultant pattern of 2,048 lines should just be projectable by a "2K" projector. If you could see the projected pixels on the screen, they should follow a repeating dark-white-dark-white etc pattern, each horizontal row consisting of 1,024 bright, and 1,024 dark, pixels. This would be an exact reproduction of the original test chart.

 

Is that correct?


If you have such a card, divided into 2048 areas, with alternate areas filled black and white, you have a card that contains, if it fills the FOV, a resolution of 2k lines or 1k line pairs.

 

And yes, if you have a >2k camera, you should be able to accurately sample that image. For 2k projection, you'd be at the mercy of whatever downsampling filter you use to create a 2k image from your >2k camera image.
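
How much that filter choice matters is easy to show. A small illustrative Python/PIL experiment (the pattern and sizes are hypothetical): a 4k-wide frame of alternating one-pixel black and white columns, taken down to 2k two ways:

```python
import numpy as np
from PIL import Image

# Alternating 1-pixel black/white columns: detail at the full 4k limit,
# beyond anything a 2k image can legitimately carry.
w, h = 4096, 64
cols = (np.arange(w) % 2) * 255
img = Image.fromarray(np.tile(cols, (h, 1)).astype(np.uint8))

# Naive nearest-neighbour decimation keeps only one of the two column
# phases, so the chart collapses to solid white (or solid black): an alias.
print(np.asarray(img.resize((2048, h), Image.NEAREST)).mean())   # 255.0

# A Lanczos resize low-pass filters before sampling, giving uniform mid
# grey: the honest 2k rendering of detail it cannot resolve.
print(np.asarray(img.resize((2048, h), Image.LANCZOS)).mean())   # ~127.5
```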


  • Premium Member
I'm somewhat confused.

 

Suppose I had a rectangular test card with 1,024 vertical black lines drawn across its width, each one separated from the next by an equal area of white. That would constitute a 2,048-line, "2K", horizontal resolution test pattern, would it not?

 

If we set up a 2K or better HD camera (or used a high resolution film scan) so that the 2,048 lines just fitted into the width of the image capture area, then the resultant pattern of 2,048 lines should just be projectable by a "2K" projector. If you could see the projected pixels on the screen, they should follow a repeating dark-white-dark-white etc pattern, each horizontal row consisting of 1,024 bright, and 1,024 dark, pixels. This would be an exact reproduction of the original test chart.

 

Is that correct?

If this thought experiment 2k camera is lined up with the chart in such a way that the black and white lines land exactly on columns of photosites, you get the black-white columns on the screen as you say. But suppose it gets panned over by just half a photosite pitch. Then every photosite sees equal amounts of black and white, and the resulting picture is uniform gray. If the camera is a little more than 2k, or not lined up perfectly, some areas on the screen would land exactly black-white, while others land exactly gray, and in between those areas, things would ramp between those extremes, with alternating darker and lighter pixels.
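
That thought experiment is easy to reproduce numerically. A numpy sketch under an idealised area-sampling model, where each photosite simply integrates the light across its own width (no lens blur, no OLPF):

```python
import numpy as np

def sample_chart(n_photosites, n_lines, phase):
    """Area-sample a vertical black/white line chart with one row of
    photosites. phase is the pan offset in photosite widths."""
    oversample = 64                      # fine grid standing in for the optics
    x = (np.arange(n_photosites * oversample) + 0.5) / oversample
    chart = np.floor(x * n_lines / n_photosites + phase) % 2   # 0 = black, 1 = white
    return chart.reshape(n_photosites, oversample).mean(axis=1)

row = sample_chart(2048, 2048, phase=0.0)   # lines land exactly on photosites
print(row.min(), row.max())                 # 0.0 1.0 -> crisp black/white columns

row = sample_chart(2048, 2048, phase=0.5)   # panned half a photosite
print(row.min(), row.max())                 # 0.5 0.5 -> uniform grey
```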

 

That would result in really horrible pictures, which is why real world cameras have -- and need -- an Optical Low Pass Filter, or OLPF.

 

Harry Nyquist and Claude Shannon figured all this stuff out a long time ago. The answers in a nutshell are that if you want to get N samples without aliasing like this, you have to filter out all detail above N/2. To do that with an optical filter, because we don't have access to anti-photons, the rolloff has to start at N/4.

 

 

 

 

-- J.S.


If you have 2048 lines (samples) that means you can store any frequency up to 2048/2, ie 1024 line pairs. 1024 line pairs is of course 2048 lines. Of course, any frequency above 1024 line pairs must be removed to avoid aliasing, and indeed, optical filters don't roll off high frequencies very fast, and hence must be brought in early. As you say, if you could have optical filters with negative taps we'd get faster rolloff, but we may also get ringing and halos too....


  • Premium Member

Graeme, would it be possible for you guys to put together a test/demo only camera without the OLPF? That would make it possible to demonstrate visually what all this sampling theory Nyquist/Shannon stuff really means. Bill Schreiber at MIT did something like that 20+ years ago, using a shot of a dollar bill to demonstrate what interlace looks like without filtering. There's nothing like an eye-hammering demo to make the point. ;-)

 

Come to think of it, would it be possible to make cameras with interchangeable OLPF's, so you could have a variety of them for people to choose from? It might work out somewhat like the subtle differences in look between film stocks.

 

 

 

 

-- J.S.


I don't think interchangeables would be good, as you'd risk getting dust on the sensor. And you'd have to have a way of recording the filter used in the metadata. It would add a large degree of complexity. Obviously it can be done, but it could be very tricky to do right. Also, different OLPFs would be different thicknesses, and that would throw your focus off unless you could compensate somehow.... It's not trivial.

 

As for a demo - how about I walk around NAB wearing a zone plate T-Shirt and people can point cameras at me :-)


  • Premium Member
If you have 2048 lines (samples) that means you can store any frequency up to 2048/2, ie 1024 line pairs. 1024 line pairs is of course 2048 lines.

Are you sure about that?

 

Imagine a second chart, almost the same as the first, except it has 100 fewer lines; ie there are only 1948 lines (or 974 line pairs).

 

At a casual glance the two charts look identical. If I shot an image of the new chart with suitably fine-grained film and projected it with a film projector, there would with any luck be 1948 lines visible on the screen. And 2048 with the earlier chart of course.

 

When projecting the image of the 2048 line chart with a 2K projector, as mentioned earlier, the light modulating elements of the projector would have to be activated in a light-dark-light-dark-light and so on sequence. That would indeed produce a pattern of 2048 vertical lines on the screen.

 

But what sequence of light and dark pixels on a fixed-position light emitting-matrix of 2048 elements could produce 1948 vertical lines across the screen?

 

In some places the white lines would approximately line up with the light-generating elements, but in others they would want to straddle two elements. There is no way you can make a pixel bright on one side and dark on the other! What you would tend to get is adjacent pixels that both want to be half-and-half, which will come out as mid-grey. The result would be a series of bands of black and white lines alternating between bursts at the native pixel rate of the projector and mid-grey - the classic "aliasing".

 

This is the fundamental difference between "lines" projected from film, and "lines" projected by a video projector. People freely interchange the terms when they are not interchangeable.


Lines are like square waves - they have infinite frequency content, and therefore we've just violated the sampling theorem. If instead we'd used a sinusoidal pattern we'd be OK. But video projectors lack a reconstruction filter - they just show the samples as-is, rather than doing reconstruction as you'd get in a CD player on audio waveforms, for instance.

 

What you have pointed out is the inability of any sampled system to properly sample an image that has frequencies that are too high in it.

 

Measurement of lines is completely interchangeable - you've just changed the rules midway. In your thought experiment, you've omitted the optical low pass filter, which would send your fine pattern of lines near the maximum limit to a uniform grey. That's what we see (or should see) on any resolution chart approaching maximum resolution. Also, the MTF of the system will be reducing contrast significantly at that point anyway.

 

You can think of film as either having its own built-in OLPF due to the random grain structure, or that the random grain structure breaks up any aliasing into a random pattern that you cannot detect.

 

Sharp-edged resolution charts are used because sinusoids are harder to print nicely, but if you want truer measures sinusoids are more useful. Also, zone plates are more useful than trumpets, as they make the aliasing pop out as extra circles in the image that were not there in the target.
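
For anyone who has not met one, a zone plate is trivial to generate; a minimal numpy sketch, with an arbitrary size and frequency sweep. Decimating it without a low-pass filter makes the spurious extra circles appear immediately:

```python
import numpy as np

def zone_plate(size=512, peak_freq=np.pi):
    """Sinusoidal zone plate: spatial frequency rises linearly from zero
    at the centre to peak_freq radians/pixel at the edge of the frame."""
    y, x = np.mgrid[-size // 2:size // 2, -size // 2:size // 2].astype(float)
    phase = peak_freq * (x * x + y * y) / size   # d(phase)/dr = peak_freq at r = size/2
    return 0.5 + 0.5 * np.cos(phase)

zp = zone_plate()
aliased = zp[::4, ::4]   # naive 4:1 decimation, no filtering: extra rings appear
```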


  • Premium Member
As for a demo - how about I walk around NAB wearing a zone plate T-Shirt and people can point cameras at me :-)

I found a shirt once that looked a whole lot like multiburst -- and it wasn't even intended to be a joke. ;-)

 

 

 

 

-- J.S.


  • Premium Member
You can think of film as either having its own built-in OLPF due to the random grain structure, or that the random grain structure breaks up any aliasing into a random pattern that you cannot detect.

 

Film grain is a random sampling structure, and random fine detail can alias against it just like regular detail against a regular sampling structure. It's just that random aliasing is very much less subjectively objectionable. Consider for example a day exterior shot with crosslit trees in the distant background. Even if it's a dead calm day, no wind at all, the shot could come back looking like the leaves were fluttering in the breeze.

 

That led me to consider an imaging system based on a pseudo-random sampling array. The idea would be to design a block or tile of a few thousand sample sites, roughly square, but designed so that copies of it would interlock like jigsaw puzzle pieces. The exact same structure would have to be carried all the way through post and the final projector or display. It should look a lot like film freeze frames as far as grain and aliasing are concerned. Of course, it would be utterly impractical, as we'd have to build a whole separate infrastructure. Optimizing those OLPF's would be a much smarter way to approach the problem.
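
That difference between coherent and random aliasing can be sketched in a few lines of numpy -- an idealised point-sampling toy, not a real sensor model. Detail just beyond the grid's limit aliases into a clean phantom wave on a regular grid, but into grain-like noise on a jittered one:

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

rng = np.random.default_rng(0)

def scene(x):
    # Detail at 1.02 cycles per sample pitch: just beyond what the grid
    # can hold. Regularly sampled, it aliases to a slow 0.02 cycle wave.
    return 0.5 * np.cos(2 * np.pi * 1.02 * x)

n = 4096
regular = scene(np.arange(n, dtype=float))                   # fixed grid
jittered = scene(np.arange(n) + rng.uniform(-0.5, 0.5, n))   # randomised positions

# Smoothing keeps the coherent phantom wave but averages the noise away.
print(uniform_filter1d(regular, 25).std())    # ~0.22: a bogus low-frequency wave
print(uniform_filter1d(jittered, 25).std())   # ~0.07: incoherent, grain-like noise
```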

 

 

 

-- J.S.


Single digital sensors need an OLPF (optical low pass filter) to eliminate aliasing. That filter softens the native image. Sharpening, unsharp mask, or OLPF Compensation (maybe a more elegant term) is necessary if you want to get back the original resolution of the sensor. Some do not want to get back the lost sharpness of their footage. They like the "softer, creamier" look of no OLPF Comp. Some want to snap it back. Some may even want to over-sharpen to get it to look more like traditional HD footage.

 

 

We choose NOT to apply any in-camera sharpening so that the user has all his choices. But in head-to-head tests where RED RAW footage is being compared to in-camera sharpened footage, we just want to make sure everyone knows what's happening.

 

Jim

 

Jim Jannard

www.red.com

 

Dear Jim,

 

Regarding: "Single digital sensors need an OLPF (optical low pass filter) to eliminate aliasing. "

Actually, all CCD and CMOS sensors need this, single or triple chip.

 

 

You are quite correct to avoid level-dependent "sharpening".

 

For what it might be worth, might I also suggest you alter your statement about what this actually does, and about why you would want to leave it until the end of the chain.

 

You said: "Sharpening, unsharp mask, or OLPF Compensation (maybe a more elegant term) is necessary if you want to get back the original resolution of the sensor"

 

It is just that nothing creates more teeth-grinding among engineers than manufacturers claiming detail correction "restores missing detail" or "restores the original resolution"!

It does no such thing. The best it does is give a comfort-zone illusion that the pictures are sharper, by drawing thin lines around certain parts of the image.

 

Consider: you have two of your milk girls; one wears a pink dress, the other the same dress but with red pinstripes.

Put through the spatial filter, both dresses will look just pink.

 

The camera sensor only sees pink on both dresses. How could any electronic circuit possibly know that one is plain pink and the other is striped? Or that it perhaps has tiny red polka dots? All the "sharpening" can do is make the outline of the dresses look sharper.

 

Which is still useful, as it will give the illusion of sharpness, but it assuredly does not really "restore the missing detail"!
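
The pinstripe argument can be put in numbers. A small numpy sketch, with an arbitrary blur width and gain: once a low-pass filter has removed the stripes, no amount of unsharp masking brings them back, because the mask can only amplify what survived the filter:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# "Pinstripes": a stripe pattern with a 2-pixel period, like the red
# stripes on the pink dress.
stripes = np.tile([0.0, 1.0], 512)

# A blur wide enough to remove the stripes entirely (the OLPF in this story).
seen = gaussian_filter(stripes, sigma=4.0)
print(seen.std())                        # ~0: the sensor just sees plain "pink"

# Unsharp masking, however strong, only amplifies what is there.
low = gaussian_filter(seen, sigma=4.0)
sharpened = seen + 10.0 * (seen - low)   # absurdly strong sharpening
print(sharpened.std())                   # still ~0: the stripes do not come back
```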

 

Another point you should raise is that you have put a lot of time and expense into making a camera that restores the beloved shallow depth of field to cinematographers who want to use digital cameras. (It still surprises me what a shock it seems to be to some would-be "Hitch-Bergs" that anybody would deliberately WANT something to be out of focus :lol:)

 

You are in front and the focus is on you. The girls stand a respectful 600mm behind you. You have put in extra NDs because you want focus only on you, not them. The sharpening circuit says "What is this? Why aren't these lovely ladies in focus? Has the Focus Puller gone to sleep? I had better sharpen them up before somebody notices! And, I had better sharpen up that bus in the background, and why are those spots on Mr Jannard's chin not sharper? Better fix those up as well!"

 

So suddenly the depth of field has increased without you asking it to, and a couple of freckles and other minor blemishes on your face turn into nice strong black dots and crisp lines, and people on the bus suddenly want to offer you their seat :lol:

 

Amateurs of course love the twin champions of tiny imager size and detail correction, and they get an awful shock when they discover how hard it is to focus a 35mm format camera on a moving target!

 

By the way, commiseration on your recent QC troubles. Welcome to my world :rolleyes:


Because of the MTF rolloff as you get to higher and higher detail, a small amount of sharpening can compensate for this, to an extent, without going over the edge into producing those horrible, thick halos you see around the edges on some systems. It's therefore OK to add a very, very small amount early on. However, as you point out, it's best to add what you need at the end, if needed. If you're downsampling, you'd probably just pick an appropriate downsampling filter and not need sharpening at all.


  • Premium Member
I found a shirt once that looked a whole lot like multiburst -- and it wasn't even intended to be a joke. ;-)

 

-- J.S.

I'm not sure of the situation for NTSC, but for PAL, anything with lines 22.5 degrees from the vertical is deadly. For demonstrating this, I once bought a cheap white silk tie and drew several blocks of parallel 22.5-degree lines on it, of varying spacings, with a black felt-tip pen. No matter how you zoomed in or out on the thing, it produced violent cross-colour.

 

I also once saw a contestant on a quiz show wearing a shirt decorated with blocks of black and white vertical lines, close-spaced on the left and right sides of the blocks and getting more widely spaced toward the centre. Once again, it didn't matter what they did with framing; the thing produced a glorious psychedelic display.

Normally people appearing on such shows are told to bring a couple of shirts, in the hope of finding at least one that will not strobe. I don't know what happened there.

 

Hint: If you want a shot at being placed in the front row of the audience, wear a plain beige shirt and brown pants. "Dress to impress" and you'll almost certainly get relegated to the back row :rolleyes:


  • Premium Member
By the way, commiseration on your recent QC troubles. Welcome to my world :rolleyes:

Given the battery current that the RED draws, and the fact that there are no moving parts, the RF component superimposed on the logic supply rails must be phenomenal. It will be interesting to see how well their bypassing components hold up over time.

 

With all the high-tech componentry used in modern devices, it's truly amazing how many expensive machines are laid low by that humble relic of the 1920s, the electrolytic capacitor! A ten thousand dollar machine can easily be sunk by a 10 cent capacitor, when the manufacturer should have used a $1 capacitor!

