
Getting 4K from a 4K sensor


Keith Walters

Recommended Posts

  • Premium Member

I can't believe the level of heated discussion there has recently been about whether a 4K sensor really can produce 4K colour images. Obviously some of the people here understand the technology, whilst it probably goes over the heads of many others.

 

The following is part of a report I wrote on electronic imaging generally. It's not an in-depth treatment of Bayer processing by any means, but it might help some of you to get some understanding of what various people are getting at.

 

It's not meant in any way to refer to any particular manufacturer's camera, one way or the other. I'll leave you to fight that out amongst yourselves :lol:

 

 

 

The common concept of the output of a Bayer Mask is a series of macro-pixels, each based on sets of four monochrome pixels, two of which are green-filtered, and one each of red- and blue-filtered. On this basis, a 4:3 monochrome sensor with 4096 pixels horizontally and 3080 pixels vertically could only reasonably be expected to produce a maximum of 2048 x 1540 RGB macropixels when Bayer masked.

 

This admittedly was the case in the past, because with CCD cameras and analog signal processing, pixel data could only be accessed and processed as continuous streams, somewhat after the fashion of a PAL or NTSC chroma decoder. Extraction of the colour information required the use of glass delay lines and other fixed-parameter components that severely restricted the signal processing options available. It was also common practice to use inferior alternatives to Bayer masking, simply to reduce the complexity of the necessary analog processing to manageable levels. This is one reason early single-chip colour cameras tended to have poor colour rendition compared to otherwise equivalent 3-chip models.

 

Modern CMOS imagers use much more flexible data acquisition systems and modern digital techniques mean pixel data no longer has to be processed in a linear manner. This allows much greater flexibility in the choice of which actual pixels are used to make up each macropixel. As a consequence, virtually all single-chip electronic cameras (still and motion) now use Bayer masking.

 

This diagram represents an 8 x 16 pixel section of a Bayer-masked sensor. From the 128 pixels we can create 32 macropixels (numbered here 1 to 32). In an old-fashioned analog Bayer-filtered processor, virtually the only option was to combine 32 fixed groups of four pixels to produce the 32 identical macropixels shown here. If we let 16 pixels represent 16K, it would seem that the maximum luminance resolution we could extract from a row of 16K pixels must be 8K (and 4K from an 8K sensor, 2K from a 4K sensor and so on).

[Image: firstblocksa6.gif]

 

However, modern all-digital signal processing allows a variety of tricks that let us do quite a lot better than that, tricks that were simply not possible with analog processing.

 

Instead of each macropixel being limited to a fixed block of four pixels, by shifting the selection by one pixel, eight alternative macropixels can be generated.

 

This animated GIF illustrates the principle. Instead of being limited to the information contained in the one group of four pixels, by shifting the selection block up, down, left and right by one pixel as well as in all four diagonal directions, the processing computer can create eight more macropixels, each of which has at least one pixel in common with the macropixel in the centre.

[Animated image: bayeranigd2.gif]

This means that for every macropixel generated, there is available a choice of eight "hints" on which way the signal amplitudes would be headed if the macropixel in the centre actually did consist of four RGB pixels. This allows a processing computer to make very intelligent estimates of what the values of the actual four RGB pixels in the centre should be. While this can never precisely recreate the original four pixels, it gets a lot closer than earlier techniques could, and in ideal circumstances can come very close to matching the monochrome specification of the sensor.
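To make the idea a bit more concrete, here is a minimal sketch in Python/NumPy of the simplest version of this kind of overlapping-neighbourhood reconstruction (plain bilinear demosaicing). It is purely illustrative; the function name and the crude averaging are my own, and it is certainly not any manufacturer's actual algorithm, which would use far more sophisticated edge-aware weighting.

```python
import numpy as np

def demosaic_bilinear(raw):
    """Crude bilinear demosaic of an RGGB Bayer frame.

    Every photosite keeps its own measured colour and borrows the two
    missing colours from its immediate neighbours, so the output has one
    full RGB pixel per photosite instead of one per 2x2 block.
    """
    h, w = raw.shape
    # Masks marking which photosites carry which colour (RGGB layout assumed).
    r_mask = np.zeros((h, w), dtype=bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), dtype=bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    def fill(mask):
        # Keep the known samples, then estimate the gaps as the average of
        # whatever same-colour neighbours fall inside a 3x3 window.
        # (Edges wrap around here; a real implementation would treat borders properly.)
        known = np.where(mask, raw, 0.0)
        count = mask.astype(float)
        acc, cnt = np.zeros_like(known), np.zeros_like(count)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                acc += np.roll(np.roll(known, dy, axis=0), dx, axis=1)
                cnt += np.roll(np.roll(count, dy, axis=0), dx, axis=1)
        return np.where(mask, raw, acc / np.maximum(cnt, 1e-9))

    return np.dstack([fill(r_mask), fill(g_mask), fill(b_mask)])

# Example: an 8 x 16 "sensor" like the diagram above, filled with noise.
rgb = demosaic_bilinear(np.random.rand(8, 16))
print(rgb.shape)   # (8, 16, 3): one RGB pixel per photosite
```

The point is simply that the output carries one full RGB value per photosite position rather than one macropixel per 2x2 block; the real algorithms differ in how cleverly they weight the neighbouring "hints".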

 

This sort of thing is routinely done with digital still cameras, which is why most of them take several seconds to process each shot. Doing this for motion pictures would require an enormous amount of computer power, and as far as I know cannot yet be done in real time, at least not at 4K resolution, and especially not at 24 frames per second! (Most of the secret algorithms seem to be more involved with prioritising the available computer time than actual pixel processing).

 

This is a completely different process from the "sharpening" used in traditional 3-chip RGB cameras or the Genesis. In those, all the detail correction circuitry is able to do is look for abrupt variations in the brightness level between adjacent blocks of pixels and increase the contrast in the transition region. Used judiciously, this can give a reasonably convincing illusion of higher resolution. The biggest problem is that, apart from sharp edges that have been softened by the imager and/or lens system, most images contain elements that are naturally soft or meant to be out of focus. Unfortunately, an electronic circuit sometimes cannot tell the difference, which is why excessive use of detail correction often produces unnatural-looking images, particularly of faces.

 

It is commonly stated that a Bayer-masked sensor's resolution must fall off drastically when the scene being imaged consists largely of pure red or pure blue, since there are only half as many blue- and red-filtered pixels as there are green ones.

 

However this assumes that the colour filters are 100% efficient. That is, the red filter passes only red light and nothing else, while the green and blue ones similarly only pass green and blue light.

 

Actually, there's no rule that says the filters have to be made like that. A perfectly good colour image can still be obtained if the red filter only filters out, say, 50% of the green and blue light. (It would actually be pink, rather than red.) The advantage of this is that every pixel would have some response to all wavelengths of light, and after compensating for the known filtering characteristics of each pixel, a full-resolution luminance signal can still be obtained. Modern signal processing techniques are also quite good at recreating full-bandwidth RGB from lower resolution chrominance signals, as long as a full-bandwidth luminance signal is available.
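As a rough illustration of that last point, here is a small Python/NumPy sketch showing how full-resolution RGB can be rebuilt from a full-resolution luminance plane plus half-resolution colour-difference planes. The coefficients are the standard-definition luma weights quoted later in this thread; the unscaled colour differences and the nearest-neighbour upsampling are simplifications of my own.

```python
import numpy as np

# Standard-definition luma weights (the .59 / .30 / .11 figures quoted below).
WR, WG, WB = 0.299, 0.587, 0.114

def recombine(luma_full, cb_low, cr_low):
    """Rebuild full-resolution RGB from a full-resolution luma plane plus
    2x-subsampled, unscaled colour-difference planes (Cb = B - Y, Cr = R - Y).
    """
    # Nearest-neighbour upsample of the half-resolution chroma.
    cb = np.repeat(np.repeat(cb_low, 2, axis=0), 2, axis=1)
    cr = np.repeat(np.repeat(cr_low, 2, axis=0), 2, axis=1)
    r = luma_full + cr
    b = luma_full + cb
    # Y = WR*R + WG*G + WB*B, so G falls out of the other three.
    g = (luma_full - WR * r - WB * b) / WG
    return np.dstack([r, g, b])
```

Because the fine detail rides on the luminance signal, the soft chroma is largely unnoticeable except on highly saturated fine detail.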

 

So it is at least theoretically possible to get something like 4K RGB from a 4K Bayer masked sensor, but this is heavily dependent on the quality of the software used. Only rigorous field testing will prove this one way or the other.

 

There are also a number of engineering difficulties involved in the manufacture of the Bayer mask itself. All colour reproducing systems require precision colour filters, and each has its own challenges.

 

With colour film, the colours you see in the processed camera negative are produced by so-called "chromogenic" dyes. A chromogenic dye is a coloured chemical compound created by the chemical reaction of two or more colourless substances (referred to as "dye couplers" in film terminology). Colour coupling can be triggered by the presence of oxidizing agents similar to the hypochlorite ions found in common laundry bleach. Such ions are conveniently produced during film processing, when the silver halide crystals are broken down to silver and bromine, iodine or chlorine atoms.

 

The result is that wherever the silver grains are produced, a chromogenic dye "shadow" of the grains is produced as a by-product. The silver grains can then be dissolved out, leaving the dyes behind.

 

Accurate colour reproduction places quite stringent requirements on the performance of any chromogenic dyes produced. A major problem is that the choice of dyes available is severely restricted by the fact that they not only have to be the right colour; their "ingredients" also have to actually work in the coupling reaction!

 

One way of assisting with this is to add colour pre-correcting dyes to the silver halide particles in the negative emulsion. Because this is carried in the mild water-based environment of a gelatin solution, and is only ever going to be exposed to light for very brief periods over the operational life of the negative, the choice of available dyes is very large. This is one thing that contributes to the excellent colour fidelity of modern colour films.

 

In the case of a 3-chip (or 3-tube) video camera, the separation of the incoming light into its red, green and blue components is achieved by a precision dichroic prism assembly. The colour separation principle is the same one that produces the colours seen in films of oil on water. By building up numerous microscopically thin layers of glass with different refractive indices on the surfaces of the prisms, engineers can produce extremely precise colour separation, and being made from colourless glass, the filters will not fade or otherwise deteriorate over time.

 

With single-chip cameras using filter masks, the requirements of the dyes used are quite onerous. Apart from the fact that they have to stand up to years of exposure to light without fading or otherwise changing, they have to be compatible with the ultraviolet polymer curing and etching processes used to form the microscopic coloured filters onto the chip surface. All of this tends to compromise the colour filtering ability of the filters. A certain amount of digital post-correction is possible, but for the moment scanned colour film still seems to have the edge for colour quality.

Edited by Keith Walters

  • Premium Member

That's generally quite a good write-up, Keith. I'd just add a few things:

 

The moving GIF illustrates a two dimensional three tap filter. They go farther than just three taps in real world designs, but I haven't been involved in that recently enough to know what's current -- perhaps a dozen taps?

 

The human visual system resolves brightness much better than it resolves color. This happy fact makes a whole lot of things easier. If colors don't quite line up with brightnesses, our brains "fix" it for us.

 

Brightness depends far more on green than on red and blue. A ballpark luminance equation looks something like Y = 0.7G + 0.2R + 0.1B. (In actual systems, like Rec 601 or 709, the coefficients are specified to three or four decimal places.) Having two separate greens in the 2x2 Bayer block helps a lot, but the matter is quite complex.

 

The final quibble is the misuse of the word pixel. I've been guilty of it, too. Chips have photosites, not pixels. A photosite is a light sensitive area on the chip. From the photosites we get raw data. After the data have been processed into complete sets of color and brightness information for positions in a grid, we finally have pixels. A pixel is a set of color and brightness data for one place in the theoretical grid. It doesn't necessarily -- and usually doesn't -- correspond to any given photosite on a chip.
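To put that distinction in concrete terms, a hypothetical pair of data structures (the names are entirely my own, not anyone's actual API) might look like this:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class RawBayerFrame:
    """What the chip delivers: one number per photosite, plus the
    colour-filter layout needed to interpret those numbers."""
    samples: np.ndarray   # shape (rows, cols), one raw value per photosite
    cfa_pattern: str      # e.g. "RGGB"

@dataclass
class PixelFrame:
    """What we have after processing: a complete colour/brightness
    dataset for every position in the output grid."""
    rgb: np.ndarray       # shape (height, width, 3)
```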

 

 

 

-- J.S.


  • Premium Member
The moving GIF illustrates a two dimensional three tap filter. They go farther than just three taps in real world designs, but I haven't been involved in that recently enough to know what's current -- perhaps a dozen taps?

 

I realize it's far more complex than that simple illustration suggests. The intention is only to give less-technical readers at least a basic understanding of how manufacturers can achieve something that at first glance seems impossible.

The human visual system resolves brightness much better than it resolves color. This happy fact makes a whole lot of things easier. If colors don't quite line up with brightnesses, our brains "fix" it for us.

 

Actually, I have made some animated graphics that dramatically demonstrate that exact principle. I was thinking of posting them here as well. The problem is, the necessary image files are huge, because the GIF format isn't very efficient at encoding actual photos.

 

Brightness depends far more on green than on red and blue. A ballpark luminance equation looks something like Y = 0.7G + 0.2R + 0.1B. (In actual systems, like Rec 601 or 709, the coefficients are specified to three or four decimal places.) Having two separate greens in the 2x2 Bayer block helps a lot, but the matter is quite complex.

 

Actually the figures are .59G +.3R +.11B, at least for standard definition colour TV. I've had to design the RGB matrices for quite a few pieces of video equipment in my time, so those numbers are sort of burned into my brain!

 

Those ratios are simply the average values obtained in the 1950s by asking a very large number of volunteers to rate monochrome images synthesized from varying mixtures of the red, green and blue channels of early colour TV cameras. We used to play around with the ratios quite a lot in the days of analog cameras (when you COULD play around with things like that). As long as the luminance came out with the correct one volt amplitude, even quite large changes seemed to make no significant difference to the picture, whether viewed as colour or monochrome. Manufacturers used to specify ultra-precision metal film resistors for the Y matrix, but it was really just a case of "useless precision".

The final quibble is the misuse of the word pixel. I've been guilty of it, too. Chips have photosites, not pixels. A photosite is a light sensitive area on the chip. From the photosites we get raw data. After the data have been processed into complete sets of color and brightness information for positions in a grid, we finally have pixels. A pixel is a set of color and brightness data for one place in the theoretical grid. It doesn't necessarily -- and usually doesn't -- correspond to any given photosite on a chip.

-- J.S.

This is indeed an area where there has been a lot of heated argument. But since the term "pixel" actually means "picture element", I don't know if it's entirely inappropriate when referring to a pickup device.

 

It is the originating resolution that really matters of course, not the "container" the final images are delivered in. You could shoot on VHS and convert that to "4K", and by some people's definitions that would constitute "4K". In my opinion, signals should only properly be referred to as "4K" when there are four thousand horizontal pixels in the output image, all of which can differ significantly from each other.


  • Premium Member

> I don't know if it's entirely inappropriate when referring to a pickup device.

 

Quite; also guilty as charged.

 

> In my opinion, signals should only properly be referred to as "4K" when there are four thousand

> horizontal pixels in the output image, all of which can differ significantly from each other.

 

Didn't think I could possibly be the only person in the world who thought so :)

 

Phil


In any sampling system, there's no way you can max out its resolution so that 1000 samples gives 1000 lines of resolution unless you allow for massive aliasing artifacts. Output measured resolution from your sampled system is more a factor of the level of optical low pass filtering you use than of whether it is single chip or three chip. They impact things a bit, but sampling theory and aliasing are the main concerns.

 

For years we've had interlaced video where the measured vertical resolution has always been significantly lower than the pixel count due to the necessary anti-twitter interlace filtering. Basically measured resolution has never been equal to the amount of samples, and I don't see that changing.

 

Graeme


  • Premium Member
Actually the figures are .59G +.3R +.11B, at least for standard definition colour TV.

That's close to the Rec 601 numbers: Y = 0.299R + 0.587G + 0.114B. Most others tilt less to red and more to green:

SMPTE 240M: Y = 0.212R + 0.701G + 0.087B

Rec 709: Y = 0.2126R + 0.7152G + 0.0722B

About 10-12 years ago, I wrote a little program that simulated encoding using the 601 matrix and decoding with the 709 inverse matrix, and vice versa. The color errors landed in the worst possible place: big enough to irritate the cinematographers, but small enough that the bean counters wouldn't want to spend a dime fixing things. Fortunately, most of the hardware guys saw getting this right as a worthwhile feature, so they implemented it.
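For the curious, that mismatch is easy to reproduce in a few lines. This is not the original program described above, just a rough Python/NumPy sketch using the published luma weights, with arbitrary test colours:

```python
import numpy as np

def ycbcr_matrix(kr, kb):
    """RGB -> YCbCr matrix for given luma weights (kg = 1 - kr - kb)."""
    kg = 1.0 - kr - kb
    return np.array([
        [kr, kg, kb],
        [-0.5 * kr / (1 - kb), -0.5 * kg / (1 - kb), 0.5],
        [0.5, -0.5 * kg / (1 - kr), -0.5 * kb / (1 - kr)],
    ])

M601 = ycbcr_matrix(0.299, 0.114)     # Rec 601 luma weights
M709 = ycbcr_matrix(0.2126, 0.0722)   # Rec 709 luma weights

# Encode saturated test colours with the 601 matrix, decode with the 709
# inverse, and look at the resulting RGB errors (columns: R, G, B, yellow).
rgb_in = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0]], float).T
rgb_out = np.linalg.inv(M709) @ (M601 @ rgb_in)
print(np.round(rgb_out - rgb_in, 3))   # non-zero: visible hue/saturation shifts
```

Greys come through untouched (the colour differences are zero whatever the weights), so the errors only show up on saturated colours.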

This is indeed an area where there has been a lot of heated argument. But since the term "pixel" actually means "picture element", I don't know if it's entirely inappropriate when referring to a pickup device.

But if we use the word Pixel to mean Photosite, or apply it to raw data from a chip, what word do we use when we *really* need to talk about pixels? After all, most of what we do with image data is done in the form of these complete datasets for each position in a grid.

In my opinion, signals should only properly be referred to as "4K" when there are four thousand horizontal pixels in the output image, all of which can differ significantly from each other.

That's sort of like saying that your car has 200 horsepower, but only when you're going up a steep hill with the gas pedal pressed to the floorboards. What we're concerned with here isn't the resolution of specific signals or images, but rather the maximum resolution we can get from a particular piece of equipment or system -- much as you can still call a car 200 Hp, even when it's parked.

 

So, how do we quantify resolution? The whole business of talking about so many "K" pixels/photosites really isn't adequate. It might give you some notion of what's going on if you compare different chips using the same design, say Red, Dalsa, and D-20, which are all Bayer masked, or comparing the various 2/3" three chip cameras with each other. But it has no value in comparing a three chip with a Bayer.

 

To really know what's happening, we need to use things like zone plates and multiburst signals. And then we find out that resolution doesn't always cut off nice and clean; there's something called a modulation transfer function that really tells you what you have, and it's not just a single simple number. The shape of the curve isn't even always the same. For instance, shoot with an NTSC camera and a 1080p camera, and compare the camera original NTSC with a downconversion from the HD. (The downconversion will have more area under the curve, and a much steeper drop.)
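A zone plate, incidentally, is simple to generate. Here is a small Python/NumPy sketch (the size and frequency range are chosen arbitrarily) that also shows the kind of aliasing you get when such a chart is resampled without an adequate low-pass filter:

```python
import numpy as np

def zone_plate(size=512, kmax=np.pi):
    """Circular zone plate: spatial frequency rises linearly with radius,
    so one chart sweeps every frequency up to (and past) a given limit."""
    y, x = np.mgrid[-size // 2:size // 2, -size // 2:size // 2]
    r2 = x.astype(float) ** 2 + y.astype(float) ** 2
    return 0.5 + 0.5 * np.cos(kmax * r2 / size)

chart = zone_plate()
# Crude "sensor": average 2x2 blocks, halving the sampling rate with only a
# weak box-filter aperture standing in for a proper OLPF.
sampled = chart.reshape(256, 2, 256, 2).mean(axis=(1, 3))
# Wherever the chart's frequency exceeds the new Nyquist limit, `sampled`
# shows spurious low-frequency rings -- the aliasing an OLPF is there to stop.
```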

 

 

 

-- J.S.


> I don't know if it's entirely inappropriate when referring to a pickup device.

 

Quite; also guilty as charged.

 

> In my opinion, signals should only properly be referred to as "4K" when there are four thousand

> horizontal pixels in the output image, all of which can differ significantly from each other.

 

Didn't think I could possibly be the only person in the world who thought so :)

 

Phil

 

For the record Phil, I have NEVER disagreed with your position on the numbers concerning RED's chip and output; I'm simply not affected by it the same way you are.

 

Jay


  • Premium Member
In any sampling system, there's no way you can max out its resolution so that 1000 samples gives 1000 lines of resolution unless you allow for massive aliasing artifacts. Output measured resolution from your sampled system is more a factor of the level of optical low pass filtering you use than of whether it is single chip or three chip. They impact things a bit, but sampling theory and aliasing are the main concerns.

 

For years we've had interlaced video where the measured vertical resolution has always been significantly lower than the pixel count due to the necessary anti-twitter interlace filtering. Basically measured resolution has never been equal to the amount of samples, and I don't see that changing.

 

Graeme

 

Yes, I know. Actually I have some nice animated graphics that illustrate that principle as well, but they require operator input so I can't see how I could show them here. It looks quite impressive on a good data projector though.

 

As you say, the theoretical maximum resolution that a monochrome sensor with 4,000 horizontal photosites could image is a pattern of 1000 vertical white lines on a black background, which is counted as "2,000 lines". To resolve a full colour picture with that level of detail, at first glance it would seem that you would need 12,000 photosites horizontally, filtered alternately red, green and blue.

 

I don't believe there is any sensor that has that many photosites, and certainly not one that can capture 24 frames per second. However the plus side would be that only relatively simple digital processing software is needed to extract the 4,000 pixels worth of RGB data. Which is what Sony did with the Genesis chip, but of course that only gives 1,920 pixels.

 

My only point was that on that basis, while at first glance it seems impossible that a chip with only 4,000 photosites can do a job that would appear to need 12,000, the information is actually in there if you have the software to extract it. How well your software or anybody else's actually does this out in the field remains to be demonstrated, but what I've seen so far is certainly impressive, particularly for such a cheap camera. I'm sure eventually there will be cameras that can produce convincing 4:4:4 4K in real time, and I can think of a lot more lucrative markets for them than making movies too.

 

Another point that seems to have gone over many people's heads is: "How important is 4:4:4 RGB anyway?". Unless you're planning on doing a lot of green-screen work, nobody apart from a few University boffins is really going to be able to tell the difference between 4:4:4 and 4:2:2; whether it's on the big or the small screen. With normal TV programming or cinema release films, how often do you ever see any post-production techniques other than simple cuts?

 

In my personal experience working for one of the manufacturers of studio equipment, chroma-key capability seemed to be one of those things that just everybody setting up a small TV production studio thought they should have, but rarely did anybody do anything really useful with it :rolleyes:


In my opinion, signals should only properly be referred to as "4K" when there are four thousand horizontal pixels in the output image, all of which can differ significantly from each other.

 

 

So... Can anyone authoritatively say if the Red camera is able to deliver 4,000 horizontal pixels that can differ significantly from each other?

 

Keith, by the way, that was a very excellent post. The health meter of this forum has just gone up a bit. :)


As you say, the theoretical maximum resolution that a monochrome sensor with 4,000 horizontal photosites could image is a pattern of 1000 vertical white lines on a black background, which is counted as "2,000 lines". To resolve a full colour picture with that level of detail, at first glance it would seem that you would need 12,000 photosites horizontally, filtered alternately red, green and blue.

 

4000 photosites can sample 4000 lines, or 2000 line pairs, not the numbers you quote above. This is, of course, if you allow massive aliasing. In practical terms, a goal of 3000 (given that optical low pass filters are basically a 2 tap filter, and hence don't roll off the frequency very fast) is right for us. To resolve 4000 lines of luma resolution on a Bayer pattern sensor, I'd need somewhere between 5000 and 5500 depending on the strength of the OLPF. To get at least 4000 on chroma all the time, I'd need an 8k sensor. That would then give 6k luma, and at least 4k chroma.
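Expressed as back-of-envelope arithmetic (the 0.7-0.75 luma factor and the 0.5 worst-case chroma factor are rules of thumb read off the figures above, not exact values):

```python
def bayer_resolution(photosites_across, luma_factor=0.73, chroma_factor=0.5):
    """Rough measured resolution from a Bayer sensor with a sensible OLPF."""
    return {"luma": luma_factor * photosites_across,
            "chroma": chroma_factor * photosites_across}

print(bayer_resolution(4096))   # ~3000 luma, ~2048 chroma
print(bayer_resolution(8192))   # ~6000 luma, ~4096 chroma (the 8k case above)
```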

 

I don't believe there is any sensor that has that many photosites, and certainly not one that can capture 24 frames per second. However the plus side would be that only relatively simple digital processing software is needed to extract the 4,000 pixels worth of RGB data. Which is what Sony did with the Genesis chip, but of course that only gives 1,920 pixels.

 

Well, NHK have an 8k science experiment... You've got to think of the Genesis chip as a lot of RGB macropixels. It's not designed for a higher resolution to be extracted from it (or else they'd have used a Bayer pattern). At the moment, they should be optically low pass filtering for the 1920x1080 resolution, and therefore there shouldn't be more resolution than that getting to the sensor anyway.

 

My only point was that on that basis, while at first glance it seems impossible that a chip with only 4,000 photosites can do a job that would appear to need 12,000, the information is actually in there if you have the software to extract it. How well your software or anybody else's actually does this out in the field remains to be demonstrated, but what I've seen so far is certainly impressive, particularly for such a cheap camera. I'm sure eventually there will be cameras that can produce convincing 4:4:4 4K in real time, and I can think of a lot more lucrative markets for them than making movies too.

 

Another point that seems to have gone over many people's heads is: "How important is 4:4:4 RGB anyway?". Unless you're planning on doing a lot of green-screen work, nobody apart from a few University boffins is really going to be able to tell the difference between 4:4:4 and 4:2:2; whether it's on the big or the small screen. With normal TV programming or cinema release films, how often do you ever see any post-production techniques other than simple cuts?

 

In my personal experience working for one of the manufacturers of studio equipment, chroma-key capability seemed to be one of those things that just everybody setting up a small TV production studio thought they should have, but rarely did anybody do anything really useful with it :rolleyes:

 

Chromakey is important, but again, aliases could screw that up as much as lower chroma resolution. In the end, it's the balance of all things.

 

Graeme


  • Premium Member

"4000 photosites can sample 4000 lines, or 2000 line pairs, not the numbers you quote above. This is, of course, if you allow massive aliasing. In practical terms, a goal of 3000 (given that optical low pass filters are basically a 2 tap filter, and hence don't roll off the frequency very fast) "

 

That would be heavily dependent on the physical structure of the photosites of the imaging sensor, or am I missing something?

 

It would seem to me that if you had a test chart representing 2,000 cycles horizontally printed as a series of 2,000 vertical lines, and (with the low-pass filter removed) you were able to precisely align and focus the image so that each of the dark and light areas was exactly coincident with one of the columns of photosites, then the camera would indeed be able to recreate the pattern of 2,000 light-dark cycles, that is to say, 4,000 lines.

 

However if you then moved the camera slightly closer to or further away from the chart, you would immediately have the situation where some of the vertical lines are still exactly coincident with single photosites, while others would be sitting precisely astride two adjacent photosites, and others would be somewhere in between.

 

If there was no "dead space" between the pixels, in theory there would not be any problem, but since there is no sensor exactly like that, you are going to get the situation where the "white" cycles that are exactly coincident with photosites will produce full output, while those that coincide with the gaps in between are going to produce a lower output. (I was under the impression that CMOS sensors tend to have fairly small active sensor areas compared to CCD). This will produce the familiar "superimposed-flyscreen" moire effect.

 

I thought the only way of avoiding this is to ensure that no feature of the focussed image is smaller than a square of four imaging pixels, which is of course what the low pass filter does.

 

I still don't understand how 3,000 samples of information can be carried by a 4,000 sample "container" though. Unless you mean that the software is able to automatically detect and reduce the resolution of just the troublesome areas, as intelligent HD to SD downconverters can do.
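(For anyone who wants to play with it, the straddling effect described above is easy to simulate. Here is a one-dimensional Python/NumPy sketch; all the numbers, including the fill factor, are made up purely for illustration.)

```python
import numpy as np

def sample_chart(n_sites=400, cycles=190, fill_factor=0.6, oversample=64):
    """Image a sinusoidal line chart onto a single row of photosites.

    `cycles` is just below the Nyquist limit of n_sites/2, so the chart
    slowly drifts between landing on photosite centres and straddling
    adjacent ones; `fill_factor` models the dead space between the
    active areas.
    """
    x = np.linspace(0.0, 1.0, n_sites * oversample, endpoint=False)
    chart = 0.5 + 0.5 * np.cos(2 * np.pi * cycles * x)
    per_site = chart.reshape(n_sites, oversample)
    # Only the central `fill_factor` fraction of each site collects light.
    lo = int(oversample * (1 - fill_factor) / 2)
    return per_site[:, lo:oversample - lo].mean(axis=1)

row = sample_chart()
# Plotted one value per photosite, `row` shows a slow rise and fall in local
# contrast along the sensor -- the "superimposed flyscreen" moire effect --
# even though the chart itself has perfectly uniform contrast.
```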


Microlenses are used to increase the fill factor. Although a larger fill factor can reduce the nasties of aliasing, they cannot act as anti-aliasing filters or reduce aliasing below what sampling theory dictates.

 

Remember, your test chart for the "lines" should be sinusoidal in nature, as a square wave has, effectively, infinite frequency because of the sharp edge.

 

Graeme


  • Premium Member

> To resolve 4000 lines of luma resolution, on a bayer pattern sensor, I'd need somewhere between

> 5000 to 5500 depending on strength of the OLPF. To get at least 4000 on chroma all the time, I'd need

> a 8k sensor. That would then give 6k luma, and at least 4k chroma.

 

It seems to me that this is something not unadjacent to what I've been trying to say for the last... well, ever since Dalsa launched, and you've been vociferously disagreeing with me.

 

What gives?

 

Phil


I dunno. Ever since people began asking about Bayer stuff I've been saying you get about >70% of the sensor resolution as measured luma resolution. Try to get much more than that (we're getting just over 75%, which I'm happy about) and you're using too weak an OLPF and you're in for a nasty alias surprise. However, you don't just want resolution - I've seen some demosaics from some stills cameras that are as nasty as hell in attempts to "win" on resolution. We've put a lot of effort into the raw compression / demosaic combo, doing some things differently from how you'd expect, and of course, that leads to the visual image which people are enjoying looking at, and enjoying working with.


  • Premium Member

> Ever since people began asking about Bayer stuff I've been saying you get about >70%

 

...yes, and I've been saying I don't believe it, and you've been pointedly and repeatedly refusing to release anything which would serve to substantiate either claim.

 

But even then, I'd be pretty cheesed off to pay for ten gallons of fuel and get seven, if you catch my meaning.

 

Phil


> Ever since people began asking about Bayer stuff I've been saying you get about >70%

 

...yes, and I've been saying I don't believe it, and you've been pointedly and repeatedly refusing to release anything which would serve to substantiate either claim.

 

But even then, I'd be pretty cheesed off to pay for ten gallons of fuel and get seven, if you catch my meaning.

 

Phil

 

Phil... with all due respect, we don't really care what you choose to believe. No one that has seen the 4K footage projected has ever had any complaints about the resolution from RED or thought it was less than advertised. It is not our job to convince you of anything.

 

We are doing quite a few new things that you obviously haven't figured out... and we are not about to tell you, or anyone else, what that is. That just wouldn't make sense.

 

If you had some theory that translated to the real world... maybe people would pay more attention to your rant. But your theories just don't hold up to the big screen. You are arguing that we are 2K or less, and others in the industry are arguing if we should be compared to 65mm film. So why are we listening to you?

 

If you want Graeme to give you a lesson... don't bother to ask.

 

Jim

Edited by Jim Jannard

  • Premium Member

I'm really getting tired of going round and round this.

 

Being compared to 65 is more to do with low noise than resolution. That notwithstanding, your material, digitally projected, should considerably outresolve 35mm, which is barely a K and a half by the time it hits the screen even in absolutely optimal circumstances. That's not the point at issue and I've said so several times.

 

I have no expectation that you'll take my interest in any way seriously; I fully appreciate you have no need to. That said it's no secret that you are the money rather than the brains behind this project and it's therefore pointless to engage you in technical debate; I spend half my life informing passionless, disinterested management of issues like this and I'm always relieved that I have no such responsibility to politicise the truth here.

 

So, I'll limit my response to this: the more you refuse to shoot and release test charts, the more suspicious I, and I don't doubt others too, will become. If you were as confident as you claim to be, you'd have done it long ago. Why not do it? Why not silence me once and for all? I'm sure it'll be proven that you are making perhaps fractionally less than what you're claiming to make, that is, a 2 to 2.5K image after every loss in the book. That would still be a major achievement and I don't know why it isn't enough for you.

 

Phil


But sampling theory says all sampled systems give less measured resolution than the number of pixels on the sensor, unless you allow for nasty aliasing, which I think we can both agree is not something we want on a motion picture image. That fact doesn't change if you're a bayer pattern single sensor or a 3 chip system with prism or any other kind of sampling.


1) The Mysterium©(patent pending) chip is a 4k chip. It shoots a mottled, strangely uneven, patchy, mosaicy 4k grayscale image. And nobody can deny that. RED never claimed they shoot 4k RGB. Those are your words Phil. It's a technicality, yes. But so is all of the crap you've been spewing about 1k (we have you on record as claiming 1k, so you can drop your "oh I've always said 3k" bullshit).

 

2) Even if the image was a perfect down-sampled 4k RGB image you're only going to see a very very small sliver of the image with 4k worth of detail unless you stop down to F22 or something crazy.

 

In film scans that I've seen, the image has usually lost considerable resolution from the front of the face to the back.

 

Add on to that motion blur and you could probably intercut 1.5k without anybody ever noticing.

 

The Bourne Supremacy could have shot about 50% of its material on SD and nobody would have noticed.

 

Your crusade for "true 4k" loses all relevance in a real world shooting situation imo.

Edited by Gavin Greenwalt

> we have you on record...

 

No, you don't :)

 

Phil

 

Phil... from your post in Nov. 2006 it looks like we have you on record 70 times:

 

For the seventieth time.

 

Say the sensor is 4096 pixels across (it's something like this, I can't be bothered to look it up). Given the standard Bayer pattern, the green-channel "virtual sensor" is at one half that resolution, and the red and blue channels are at one quarter. The highest resolution data is at one half the total size of the sensor. One half. Zero point five. OK?

 

That's 2K.

 

The red and blue channels are at half that.

 

That's 1K.

 

So, you could consider that the picture off this sensor has a similar amount of real information to a 4:2:2 subsampled 2K image.

 

This is perfectly legitimate; it's how most DSLRs work, but if you're not upfront about it, you're being deceitful.

 

Phil

 

You spew out so much bs that it is no surprise that you can't remember from one day to the next what you are saying. Now why don't you go heckle somebody else?

 

Jim

Edited by Jim Jannard

Phil... from your post in Nov. 2006 it looks like we have you on record 70 times:

The red and blue channels are at half that.

That's 1K.

Read it again: that quote does not say the camera is 1K, it merely says that a typical 4K Bayer pattern would have red & blue channels sampling 1K each... it's right there in the text if you look, Jim.

 

You spew out so much bs that it is no surprise that you can't remember from one day to the next what you are saying. Now why don't you go heckle somebody else?

I understand you don't like Phil's tack on this issue, but I think he's got a valid question. Graeme & Phil had the discussion rolling well a couple of days ago... why can't we get back to that?

 

If you are trying to say that the issues Phil's digging at are covered by some NDA, corporate secrets, fine. Then just say so. But don't mistake Phil, or the rest of those asking these questions, for hecklers.

Edited by Daniel Sheehy

Daniel is correct, Jim. You do have an obvious habit of trying to verbally bully and slick-talk anyone who seriously questions you. Your manipulating Phil's comment is just another example of it. I find your behavior to be completely unprofessional and condescending.

 

Did you originally come to this site to procure real insight from real industry professionals, or was it to market your latest toy? If Phil's criticisms are actually complete nonsense and just an attempt to heckle you, then why do you continually respond to him? I dare say that it is because he has valid points that you are afraid to address, and you want him to go away.

 

What are your scientific credentials to argue this technology with him on any intensive level? I was actually learning something from his debates with the other experts until you started in again. Why don't you do what you do best (marketing and promotion), and leave these guys to educate those of us who are actually interested in learning something here.

 

BTW, please don't call or Email me to tell me to fu** off again, because I will post it this time.

Edited by Ken Cangi

Daniel is correct, Jim. You do have an obvious habit of trying to verbally bully and slick-talk anyone who seriously questions you. Your manipulating Phil's comment is just another example of it. I find your behavior to be completely unprofessional and condescending.

 

Did you originally come to this site to procure real insight from real industry professionals, or was it to market your latest toy? If Phil's criticisms are actually complete nonsense and just an attempt to heckle you, then why do you continually respond to him? I dare say that it is because he has valid points that you are afraid to address, and you want him to go away.

 

What are your scientific credentials to argue this technology with him on any intensive level? I was actually learning something from his debates with the other experts until you started in again. Why don't you do what you do best (marketing and promotion), and leave these guys to educate those of us who are actually interested in learning something here.

 

BTW, please don't call or Email me to tell me to fu** off again, because I will post it this time.

 

Graeme has stated, for the record, that Phil claimed we were a 1K camera (person to person) at IBC last year. His post infers that our sensor is sub 2K 4:4:4. He has also posted that we are at best a 2K camera (does that also infer that it is less?).

 

As for our personal conversation, looks like you just did post it. At the end of our 20 minute "personal" conversation I told you that I only wished you the best. Still do.

 

Jim


This topic is now closed to further replies.