Debayering resolution loss



Hi,

 

just one quick question: I've normally read that a 4K raw file will only retain around 75% of its resolution after debayering and conversion to a video file. Now I guess Red is trying to confuse me. In this article they say that "A 4K Bayer sensor is capable of producing full 4K 4:4:4 RGB files". In context they're talking about being able to produce 4:4:4 video and apply 4:2:0 chroma subsampling afterwards, but taken on its own, that part of the sentence is actually wrong, isn't it?

 

Thanks in advance.


  • Premium Member

No, they are correct. You are confusing file size with measurable resolution. Generally you would debayer a 4K raw file into 4K RGB (three 4K channels, one for each color). You could downrez that to 2K RGB if you wanted. The fact that the image itself would measure less than 4K if you had shot a chart is a separate issue. If you did a 4K RGB scan of 35mm film, the files would be 4K each, but that doesn't mean the image itself resolves 4K worth of detail. If the scanned shot was out of focus, it would still be a 4K RGB file.
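To illustrate the file-size point, here's a minimal numpy sketch of my own (not any manufacturer's pipeline; the grid size and sample values are made up): the pixel dimensions of the debayered output are fixed by the photosite grid, no matter how much real detail the image contains.

```python
import numpy as np

H, W = 2160, 4096                     # a "4K" RGGB photosite grid

# One sample per photosite -- whatever the lens delivered, in focus or
# not, this is all the sensor records.
raw = np.zeros((H, W), dtype=np.float32)
raw[0::2, 0::2] = 0.8                 # red sites
raw[0::2, 1::2] = 0.5                 # green sites
raw[1::2, 0::2] = 0.5                 # green sites
raw[1::2, 1::2] = 0.2                 # blue sites

# Crude nearest-neighbour "demosaic": copy each 2x2 cell's samples to
# every pixel of the cell. Real algorithms interpolate far more
# cleverly, but the output dimensions are exactly the same either way.
rgb = np.empty((H, W, 3), dtype=np.float32)
rgb[..., 0] = np.repeat(np.repeat(raw[0::2, 0::2], 2, 0), 2, 1)
rgb[..., 1] = np.repeat(np.repeat(raw[0::2, 1::2], 2, 0), 2, 1)
rgb[..., 2] = np.repeat(np.repeat(raw[1::2, 1::2], 2, 0), 2, 1)

print(raw.shape, rgb.shape)           # (2160, 4096) (2160, 4096, 3)
```

Both files are "4K" by their dimensions; how much detail they measurably resolve is a separate question entirely.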


  • Premium Member

Strictly speaking - and only strictly speaking - yes.

 

What we're talking about here is the frequently-discussed difference between the resolution of a file and the amount of real information in it. Ultimately, I can take a cellphone snapshot and scale it up to any resolution I like, but the amount of actual picture information in it won't reflect that resolution.

 

 

Bayer images tend to achieve 60-70 per cent of the stated resolution on black and white subjects. That is, if you aim the camera at a series of black and white stripes, you will get some large proportion of the entire sensor resolution, assuming you are using a sufficiently optimistic debayering algorithm. There are problems associated with this approach which may lead to sharp edges in red or blue being aliased to sharp edges in green or greyscale, because the algorithm must make assumptions about how much colour detail there is in the image. Generally there isn't much, because most scenes aren't very saturated, but of course this assumption falls down in important cases such as chroma key scenes, where foreground subjects are highly saturated - particularly in the hypothetical situation of someone wearing red clothing against a blue screen.
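For the curious, the simplest form of that interpolation can be sketched in a few lines of Python. This is a toy "bilinear" green reconstruction of my own devising, not any camera's actual algorithm - real debayerers add edge-direction guesses on top of this, which is where those assumptions about colour detail come in:

```python
import numpy as np

def bilinear_green(raw):
    """Estimate a full-resolution green plane from an RGGB mosaic by
    averaging the four green neighbours at each red/blue site."""
    H, W = raw.shape
    p = np.pad(raw, 1, mode="edge")
    # Up/down/left/right neighbour average for every pixel...
    avg = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4.0
    green_mask = np.zeros((H, W), dtype=bool)
    green_mask[0::2, 1::2] = True     # green sites on even rows (RGGB)
    green_mask[1::2, 0::2] = True     # green sites on odd rows
    # ...but keep the measured value wherever the site really is green.
    return np.where(green_mask, raw, avg)

# A flat scene: green is 0.5 everywhere, over saturated red/blue content.
raw = np.zeros((8, 8))
raw[0::2, 1::2] = 0.5                 # measured green samples
raw[1::2, 0::2] = 0.5
raw[0::2, 0::2] = 0.9                 # red samples
raw[1::2, 1::2] = 0.1                 # blue samples

g = bilinear_green(raw)
print(g[2:6, 2:6].min(), g[2:6, 2:6].max())   # interior is exactly 0.5
```

At every red or blue site the algorithm is guessing green from the neighbours; more aggressive algorithms guess harder, which is what buys the extra measured resolution - and the cross-colour artifacts.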

 

So, if you aim a Bayer camera at a coloured test chart - say one with green and red, or worse, blue and red, stripes - things look a bit less rosy. In this instance it's entirely feasible that you would see only about half the full sensor resolution, meaning a camera claimed to be 4K would struggle to equal the sharpness of an HD camera in this specific area of high saturation.
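The half-resolution figure falls straight out of the sampling arithmetic (back-of-envelope numbers, not a measurement of any real camera):

```python
# In an RGGB mosaic, red and blue appear in every other column, so
# pure-colour detail runs out of samples twice as early as greyscale
# detail does.
sensor_width = 4096                          # photosites across a "4K" sensor

full_grid_nyquist = sensor_width // 2        # 2048 line pairs across the frame
red_samples_per_row = sensor_width // 2      # red occupies alternate columns
red_channel_nyquist = red_samples_per_row // 2   # 1024 line pairs

print(red_channel_nyquist / full_grid_nyquist)   # 0.5
```

Which is exactly the "half the full sensor resolution" situation on a saturated red/blue chart.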

 

The underlying question, which is now very old and has been argued over for years, is whether Red and other manufacturers are being entirely honest when they describe a Bayer sensor that's n-thousand photosites wide as producing an n-K picture. My position has always been that this is a bit of a liberty. There is no doubt that the resulting images contain less information than those from a true 4K digital cinema camera would. That said, nobody seriously contends that devices such as the Epic lack adequate sharpness for any job in the world; it's just that, for me and some others, this is an attempt to redefine what "4K" means in a way that doesn't serve anyone except the company.

 

P


It's not actually the debayering that causes a resolution loss though, is it?

I mean, the resolution isn't there on the red and blue channels in the first place, because there are fewer sensor pixels in those colours?

 

Or does the debayering process actually cause a further loss of resolution in some way?

 

Freya


  • Premium Member

 

 

I mean, the resolution isn't there on the red and blue channels in the first place, because there are fewer sensor pixels in those colours?

 

Well, quite. Although you could call some of the more aggressive algorithms lossy inasmuch as they can sacrifice quite a bit in terms of cross-colour artifacts in order to make black and white test charts look good.


  • Premium Member

The question isn't whether there is resolution loss in debayering from raw (for one thing, you can't use the raw image without debayering so whatever resolution the raw file had before debayering is academic), the question is whether the results have enough resolution for your work. There is nothing special about the number 4.

 

The whole "4K isn't really 4K" arguments are silly because it's not like anyone had been shooting with a "true" 4K RGB format in the past. 35mm film should be scanned at a higher resolution than it resolves to avoid aliasing, and it seems to actually resolve about 3.5K at best in truth (and film doesn't resolve equally in each color layer) and thus should be scanned at 4K or 6K ideally.

 

Of course, VistaVision and 65mm and IMAX would all resolve even more than 35mm.

 

But people still get confused because of the difference between file sizes / pixel dimensions and measurable detail. As I said, you can take your 5K raw Canon still camera and shoot a totally out of focus image and it's still going to be a 5K raw file that can be converted to 5K RGB... but the image itself has no measurable resolution.

 

And even if you shot on an 8K raw camera and created 8K RGB files that actually could resolve 4K detail in each color more or less, the moment you put on an older lens or used some diffusion filter or shot slightly out of focus, the image would resolve less than 4K.

 

Not to mention most of these digital cameras use Optical Low Pass Filters to reduce high frequency detail to avoid aliasing artifacts, so they are deliberately limited by design to avoid resolving fine detail that would moire.


  • Premium Member

 

 

The whole "4K isn't really 4K" arguments are silly because it's not like anyone had been shooting with a "true" 4K RGB format in the past.

 

I must respectfully disagree.

 

I'll try to be brief as it's been discussed endlessly, but I think this is a matter of precedent. Sony have now actually produced a camera - the F65 - which by their numbers should be capable of filling a 4K image with information right up to that image's Nyquist limit. If we were all comfortable with people claiming 4K pictures from a Bayer sensor with 4K linear resolution, Sony had no competitive reason to do that. They could have built a Bayer camera with 4K linear resolution and made more or less the same claims for it that they do already (give or take the upcoming beyond-4K output options).

 

The fact that Sony chose to build the F65 with excess resolution is deeply heartening because it suggests a willingness on their part to trust the market to understand why their engineering is better. I am greatly encouraged by this feat of bullshit-resistance on the part of the film industry, but I would much rather there had never been any bullshit to resist.

 

P


  • Premium Member

I said "in the past"... I don't consider the F65 a past camera. The "4K isn't really 4K" arguments pre-date the Sony F65 and started with the Dalsa Origin camera. But with the Sony F65, the 5K Epic and soon the 6K Dragon sensor for the Epic, obviously the whole 4K isn't 4K argument is getting even more unnecessary, though I'm sure Red loves to point out that cameras using sensors with 4096 pixels across can't really deliver "4K".

 

I'm sort of over this -- anything in the 3K to 6K range is fine. I'm more likely to make that decision the way one decides what film stock or lens to use: based on the story... some projects need to be sharper than others. Besides, dynamic range, color rendition, noise/sensitivity, and ergonomics all have to be factored in as well, not to mention workflow.


 

Well, quite. Although you could call some of the more aggressive algorithms lossy inasmuch as they can sacrifice quite a bit in terms of cross-colour artifacts in order to make black and white test charts look good.

 

So there are different debayering algorithms, some that produce cleaner results than others?

 

Freya


The question isn't whether there is resolution loss in debayering from raw (for one thing, you can't use the raw image without debayering so whatever resolution the raw file had before debayering is academic), the question is whether the results have enough resolution for your work. There is nothing special about the number 4.

 

Well, that's true; I was just interested in whether there was the potential for a loss of some kind in the debayering process itself. In practical terms it doesn't really matter, of course!

 

As for there being nothing special about the number 4, there's not really anything special about the number 1080 either, but things do get mandated at various points.

 

Freya


Please also consider the shutter speed of the camera. At 24fps and a 1/50th sec exposure, there are usually very few frames that are as detailed as the camera is capable of recording. I would guess that most frames with camera and/or subject movement resolve less than 1K.
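Some rough arithmetic on that (the pan speed is an assumption I've picked purely for illustration):

```python
# How much a pan smears detail during a 1/50th sec exposure.
frame_width_px = 4096
shutter_s = 1 / 50
pan_seconds_across_frame = 5          # assumption: frame width crossed in 5 s

speed_px_per_s = frame_width_px / pan_seconds_across_frame
blur_px = speed_px_per_s * shutter_s
print(round(blur_px, 1))              # 16.4 px of smear per frame
```

Detail finer than that smear is simply gone for the duration of the move, which puts the effective resolution well below the sensor's pixel count.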

 

I have noticed when watching 4k projected that the loss of detail when the camera moves can be quite jarring. Anyone else feel this way?


  • Premium Member

 

Well, that's true; I was just interested in whether there was the potential for a loss of some kind in the debayering process itself. In practical terms it doesn't really matter, of course!

 

As for there being nothing special about the number 4, there's not really anything special about the number 1080 either, but things do get mandated at various points.

 

Freya

It's almost the opposite -- you'd think that with 50% of the photosites being filtered for green, and 25% each for red and blue, at best you'd only get 50% of the original raw resolution instead of 75%. So clearly some detail is extracted or interpolated from the surrounding photosites when building RGB out of raw.
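Those percentages are just a matter of counting sites on the mosaic (trivial arithmetic, included for completeness):

```python
# Counting samples per channel on a "4K" RGGB sensor.
H, W = 2160, 4096
total = H * W

green = total // 2                    # half the sites are green
red = blue = total // 4               # a quarter each for red and blue

print(green / total, red / total, blue / total)   # 0.5 0.25 0.25
```

So the measured 60-75% figures sit well above the naive 50% floor, and the difference is what the interpolation recovers.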

 

Keep in mind that many cameras that record 1920 x 1080 also don't measure detail out to 1.9K, I think it's more like 1.5K at best. And when you deliver a 1080P master to a distributor or broadcaster, they don't measure how many lines of resolution the image is resolving (besides, your movie doesn't contain a line resolution chart in it anyway), they just look to make sure it "feels" like HD resolution and has a minimal amount of artifacts like noise and compression.


If we knew how to read every grain of sand on the beach... each one would tell the story of the seemingly endless waves that had rhythmically moved it. The light from the sun that had landed upon it. The interactions it had had with its neighbours. The gravitational attraction to the earth, the moon and the other proximate bodies in the solar system. And more...

In comparison, a pixel, as far as I can tell, counts photons that land in a rectangle. Relatively large numbers of photons, a relatively small rectangle, yielding an averaged value. This is a one-dimensional piece of information. Hats off, I guess, to the programmers who are able to simulate a photograph from these. The word interpolation often comes up, but the word simulation always seems more apt.

I'm not sure why as a society we are abandoning the fact of the photograph, I mean the sense of reality inherent in a photograph, the literal imprint that one thing has imposed upon another (and hence the cinema), in favour of these digital simulations.

Chris, I'm not sure where your pithy jokes place you, what your opinions really are. Ask yourself, do you love the beach or not?

Cheers,
Gregg.
PS. I finally got banned from the Kiwi 48hrs forum. Those mutterbumfers !


A 4K Bayer sensor is capable of producing full 4K 4:4:4 RGB files

 

 

Of course this statement is simply false.

 

You can interpolate 4k 4:4:4 RGB files from a 4k Bayer sensor, but calling that "4k 4:4:4 RGB" would render any technical description of resolution or sampling useless. A VGA webcam could be combined with software that makes it a "true 8k 120fps" camera...

 

"4k Bayer != 4k RGB"-critique is propably even more relevant today since RED (and a few others) started the "k-Marketing", suggesting that 4k is at least twice as good as 2k - but the professional video world has been technically more precise till then - something that we should return to, IMHO. Otherwise we would only end up with cameras with more and more "K" and not necessarily higher IQ.


  • Premium Member

It's not false -- you can create 4K RGB files from a 4K raw recording. People do it all the time! It just doesn't measure to 4K in picture resolution, but then, there is no guarantee that the original 4K raw image did either -- it could be out of focus, for example, but it's still a 4K raw file and it still can be debayered to a 4K RGB file.

 

"4K" is just the number of horizontal pixels in the file, in the raw file and then in the 4K RGB files. It's not a measurement of actual image resolution.


  • Premium Member

I don't want to endlessly continue this as I think most of the people contributing to the thread understand the issues. But...

 

Don't you see that specifying cameras based on their output resolution, as opposed to their actual capacity to resolve detail, is a licence to print money for camera manufacturers? You could claim anything, given software, and producers will believe it. That's not OK.

 

P


Even if most images are mostly blurred (motion, out of focus, etc.), the rest, by logic, won't be.

 

On occasion it'd be nice to have a defined, and therefore comparable, measure of the resolution of those parts.

 

It is my understanding that people, in general, used to have that.

 

I hear that resolution is wasted on blurred imagery for most of us, and I understand that many compression algorithms are based on that premise, but there are other fields that may put blurred imagery to use - of relevance would be motion estimation and VFX...

 

I'm going to try very hard to make this my last post in this thread. Promise.

 

 

