
Help me out here


Keith Walters


I could spend the next hour raising my blood pressure by listing and pointing out what's wrong with all the horrible inanities in this thread, but Tim Tyler says I'm not allowed to criticise Red anymore so I'll limit myself to this:

 

Jannard is selling an idea - something that a lot of people desperately want to be true, four hours of 4K on a DVD at production quality. It does not exist, and it will never be built because it cannot be built. The camera they advertised two years ago still has not been built and cannot be built. By their reckoning, an F35 is a 6K camera. It's ludicrous. I have never, ever come across a company which relied on stretching the truth this far, and got away with it.

 

Yes, yes, there are people selling these attractive fictions to producers all over the world, and since producers are often a) not that smart and b) endearingly desperate to believe in anything that can save them money, I'm sure these people will do very well. But at the end of the day, most of them are engineers who are honest enough, at least with themselves, to know that they're selling fantasies. I think this is a pretty sad reflection on the industry as a whole.

 

P

 

I'm glad you've made public the restriction that TT has imposed on you about criticising RED.

I don't agree with it, but these forums are privately owned and largely unregulated.

 

Back on topic... In the same way that producers have seen that HD isn't the right format for many productions, I don't see any change yet in their decision-making process in regard to RED.

RED is getting the press but will have to prove itself.

Sure, some producers will make the wrong decision and use RED (as happened with HD), and some will make the wrong decision by not using RED (as with HD).

 

But with HD, five years on, the dust has settled. Although the HD TV audience is being deprived of the highest-quality pictures through the use of less-than-best cameras, we can say that HD has taken off in the TV market; the cinema audience's experience, however, is largely the same as it was pre-HD.

 

 

Perhaps 2011 (say 18 months after the EPIC is due to be launched and 30 months after REDones started being used in anger) is the time to take the census.

 

The caveat is our ability to understand the impact of the sheer number of REDones sold on how fast it is taken up.

 

Internet forums spread good and bad news with equal abandon and inaccuracy!

 

 

 

 

Mike Brennan


Jannard is selling an idea - something that a lot of people desperately want to be true, four hours of 4K on a DVD at production quality. It does not exist, and it will never be built because it cannot be built. The camera they advertised two years ago still has not been built and cannot be built.

 

Has Red delivered on absolutely everything they've ever promised for the Red One? No. RGB recording got dropped, and the camera doesn't have 1080p live output, at least so far. The camera body weighs more than it was supposed to. The raw port was canceled. Some of the workflow stuff hasn't come together quite as well or as fast as some people expected it to.

 

On the other hand, when I reserved prior to IBC 2006, there was no 4K on-board recording, and there was no raw mode. And for narrative work, I'd much rather have on-board 4K raw than on-board 1080p RGB. In fact, that little spec change more than makes up for everything else, in my book.

 

I suspect what you really mean is that the camera they advertised "cannot be built" because you don't think the Red One is a 4K camera and you don't think anyone can currently build something that would meet your definition of a 4K camera. Fine. But everyone who matters knew what "4K" meant in this context. Red never promised to deliver a camera that resolved 2000 vertical line-pairs; everyone knew from the start they were delivering a bayer-sensor camera with around 4000 horizontal photosites, which they did.

 

Just to get this on the record... if Red's distribution codec for the Red Ray produces a 4K image, but it's not 4:4:4, it's 8-bit, and there are some compression artifacts if you look closely, will that constitute failure in your mind because it won't meet your personal definition of what "4K" is?

Edited by Chris Kenny

> But everyone who matters knew what "4K" meant in this context.

 

Yes. It means "2K", only with additional lying.

 

P

 

This is getting past silly.

 

4K is a capture resolution. 4096x2304 (16:9) with a single sensor, Bayer pattern.

4K is an output resolution. 4096x2304 (16:9) which are the file sizes we generate.

 

We have acknowledged that actual resolution is about 3.2K. This number has been independently verified several times and results posted on CML, among other places. Which you have seen.

 

If you make a 4K scan from film, is it not a 4K capture and 4K file? Even though the most resolution we can find acknowledged from any studio is 3.2K on slow film and less than 3K from fast film?

 

You have no right to call us liars. You have no justification to do so. We have absolute justification for the above.

 

This sounds more and more like a personal problem. Get some help.

 

Jim



3.2K worth of Luma I can agree on, but you most certainly do not get 3.2K worth of Chroma from the Red. Which is where it differs from film and 3 chip cameras.

 

Unfortunately in real life shooting situations we don't film black and white resolution charts, but colored objects.

I could spend the next hour raising my blood pressure by listing and pointing out what's wrong with all the horrible inanities in this thread, but Tim Tyler says I'm not allowed to criticise Red anymore so I'll limit myself to this: (...) I think this is a pretty sad reflection on the industry as a whole.

P

 

Phil, the truth is going to come out eventually, just as it did with Sony/NHK's pathetic Hi Vision system, just as it did with the CineAlta, and just as it did with the Genesis. They're TV cameras, adequate for TV, not OK for cinema release movies. Producers overwhelmingly voted with their feet on this issue, but people still try to argue the toss. Dynamic range, start and finish.

 

I know trying to get the truth is sometimes like wading through an endless platypus-infested swamp in rubber boots, but there is only so much swamp, and only so many platypuses, and at least half of them are harmless females.

 

Consider: if it were not for the platypus and its hideous venom, our rivers would be overrun with man-eating crocodiles. You must accentuate the positive: a platypus will only sting you if you annoy it; a crocodile will eat you whether he is in a good mood or not.

 

The Aborigines could train Platypus to catch the tasty freshwater Barramundi. You can't really train a crocodile to catch anything, and besides, they would rather eat the crocodile.

 

Everything has its place. Video-8 has its place, Mini-DV has its place, HDV has its place, DVD has its place, 1/5" CMOS cellphone cameras have their place, VHS has its place. Yes even the RED has its place.

 

Does the RED do the job it was designed to do? Who knows; I don't. But I will find out. Eventually.

 

I am sure you are doing the job you were designed to do. Is Jim Jannard doing the job he was designed to do? Who knows; we really don't know what it is.

 

OK Jim Jannard says the RED is 4K, you say it's 2K. George Lucas thought 1440 x 800 was good enough for Star Wars. Some people here always see the glass as half-empty. Others try to see it as half full. Me; I am more concerned about what George's glass was full of, but that's just me.

 

So the truth is out there, and so is a lot of bullshit.

 

Please do not mistake the Internet for reality.

 

Normal programming will resume shortly.

 

By the way, have you really been thrown off sets?

This is getting past silly....

 

You have no right to call us liars. You have no justification to do so. We have absolute justification for the above.

 

This sounds more and more like a personal problem. Get some help.

 

Jim

 

I can never understand why you care. You have thousands of loyal fans over on Reduser hanging on your every word, and a mere handful of critics over here, (most of whom are not overly critical of the RED itself, just the ignorant blatherings of some of its fan base). But you break the hearts of all your Reduser fans to come posting over here.

 

 

While you are here, do you want to take a stab at answering the question I started this thread to ask? Is REDdrive really going to be able to record 2 hours of 4K on an ordinary 2-layer DVD blank?

 

Just 2 hours of proper 1920 x 1080 would be nothing to sneeze at.

 

And will the Epic have a 4:3 sensor?


Phil, the truth is going to come out eventually, just as it did with Sony/NHK's pathetic Hi Vision system, just as it did with the CineAlta, and just as it did with the Genesis. They're TV cameras, adequate for TV, not OK for cinema release movies. (...) I can never understand why you care. You have thousands of loyal fans over on Reduser hanging on your every word, and a mere handful of critics over here, (most of whom are not overly critical of the RED itself, just the ignorant blatherings of some of its fan base). But you break the hearts of all your Reduser fans to come posting over here.

I definitely can't be taken as anything other than a RED fan, even after the most recent announcement.

 

In part because of these silly approaches to the art of making (big) pictures.

 

Maybe you think that your uneducated engineering obsession is more important than what a contemporary Iklimler/Climates* [imdb LINK] can bring.

 

Uneducated, not because of whether that's your backyard or not. As a matter of fact, I don't give a damn what it really is -- other than the sad clown part, since you used to post here as Jim Murdoch.

 

Uneducated, because you only seem to know what the big pictures are made of. You only seem to...

 

Instead, people like George Lucas (!...) always see your glass as half-empty.

 

It could be harmless if Jim's obsession with the Epic weren't kind of lame.

 

:(

 

 

* Just an example (among many others) shot with CineAlta.


...but Tim Tyler says I'm not allowed to criticise Red anymore so I'll limit myself to this:

 

That's not what I wrote, Phil.

 

Jannard wrote that Red was "working 24/7 to make our project better every day." and you replied "No, you're not." as if you had some insight into Red corporate operations. I asked you to not start unsubstantiated arguments like that.


3.2K worth of Luma I can agree on, but you most certainly do not get 3.2K worth of Chroma from the Red. Which is where it differs from film and 3 chip cameras.

 

Unfortunately in real life shooting situations we don't film black and white resolution charts, but colored objects.

 

Of course, 3-chip cameras often have offsets in their chips which stop them delivering properly co-sited chroma in 4:4:4. And of course, if you try to get a triple of 1920x1080 sensors to actually measure out at 1920x1080 resolution, you will find that you've contaminated your precious image with significant levels of aliasing artifacts. To me, the only correct approach, in both engineering and aesthetic terms, is to properly optically low-pass filter, achieving significant attenuation at the full resolution of the sensor and making sure that any modulation caused by aliasing is at a practical minimum. That's why we "only" get resolution out to around 3.2K (~78% of the linear resolution, a slightly higher number than I originally quoted at the start of the project, but I've learned a lot since then).

 

Say you have 12mp of photosites. What is the best way to arrange them?

 

3x2.66k chips (the prism + three chip approach)

12mp = 4.61k bayer pattern sensor (what we do, near enough)

12mp RGB stripe, 4.61k (Genesis approach)

 

If you properly optically low-pass filter, the three-chip approach will give you about 2.4k resolution in R, G and B, but it's limited to using 2/3" chips and lenses. It will give a great image, but you've limited your lens choice. You may get lower dynamic range than current HD cameras because of the finer pitch on the photosites.

 

The bayer pattern approach will give around 3.6k luma resolution and 2.3k chroma resolution. Notice that you don't lose much chroma resolution at all compared to the 3-chip approach, and as our vision is relatively insensitive to chroma resolution, all you'll see when projected is the higher luma resolution. However, if you're not careful in your demosaic algorithm, you may still get some chroma moire. We deal with this completely in our demosaic though.

 

The RGB stripe approach is tricky. If you don't have an OLPF, you'll get 4.61 / 3 = 1.53k (you need a triple of photosites to get each pixel, as the arrangement of photosites is unsuitable for the kind of demosaicing we can do with bayer patterns). However, the Genesis uses a non-square-pixel approach of 1920x3 horizontally and 1080x2 vertically = 12.4mp, giving a max resolution before adding the OLPF of 1.9k. That would probably be around 1.7k with a proper OLPF.

 

All of the above are compromises; however, I do think the above shows that the bayer pattern is the one that gives you the most perceptually relevant resolution for your budget of photosites. In the above I'm taking into account that we generally use a stronger OLPF on bayer pattern sensors, and the effects of a known high-quality demosaic algorithm.
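The photosite-budget arithmetic above can be sketched in a few lines. This is only the back-of-envelope version of the estimates in the post, assuming 16:9 sensors, the ~78% OLPF factor for bayer luma, and a naive triple-of-photosites demosaic for the stripe case; none of these numbers are measured results.

```python
# Back-of-envelope comparison of a 12 MP photosite budget, following the
# rules of thumb in the post: an OLPF-limited bayer sensor delivers roughly
# 78% of its linear photosite count as luma resolution, and a naive RGB
# stripe needs a triple of photosites per output pixel. The percentages
# are the post's estimates, not measured figures.

BUDGET = 12_000_000  # total photosites

# Three-chip: the budget is split across 3 sensors, each 16:9.
per_chip = BUDGET / 3
h_3chip = (per_chip * 16 / 9) ** 0.5  # horizontal photosites per chip
print(f"3-chip:  ~{h_3chip / 1000:.2f}K per channel before OLPF")

# Bayer: all photosites on one 16:9 sensor.
h_bayer = (BUDGET * 16 / 9) ** 0.5
print(f"bayer:   ~{h_bayer / 1000:.2f}K photosites across")
print(f"         ~{0.78 * h_bayer / 1000:.2f}K luma after OLPF (78% rule)")

# RGB stripe without the Genesis non-square-pixel trick:
print(f"stripe:  ~{h_bayer / 3 / 1000:.2f}K if demosaiced as naive triples")
```

Run as-is, this reproduces the figures quoted in the post (roughly 2.67K per channel for three chips, 4.62K of photosites and ~3.6K of luma for the bayer sensor, ~1.54K for the naive stripe).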

 

The 3-chip and RGB stripe numbers above are "guesstimates" only, and would vary with the specific implementation of the OLPF and sensors. I've not seen, for instance, an F35 or Genesis pointed at a zone plate, so I don't know how strongly or weakly they set their OLPF, and I don't know if they do any software processing to account for the fact that the RGB on their sensor is not co-sited as it is in the three-chip approach.

 

Graeme


Graeme

 

Have you pointed the Red at a Zone Plate which is colored, rather than black and white? Say red and blue, or even different shades of the same color. That would impact the perceived sharpness, now wouldn't it?

 

I have the feeling that measuring resolution off a chart is similar to judging lenses by how they look on a lens projector.


Yes I have, but it's hard to see the slightly lower chroma resolution other than on such a test chart. It's not the kind of thing you notice in actual use of the camera. That's because we hardly ever find a chroma edge without an associated luma edge, and our eyes just don't see sharp chroma only edges that well. As I said, every camera is a compromise, but I think, for the reasons I pointed out above, that the Bayer pattern is the most perceptually useful compromise for a given number of photosites.

 

Graeme


So why won't you publish zone plates?

 

The flaw in the argument that's being presented here is the "given number of photosites" thing, as if it were possible to split the chip into three without incurring any additional costs from semiconductor and optical assembly manufacturing. This is an obvious smokescreen; what bayer pattern sensors do is give you the best result for a given cost, which is of course what Red is really all about.

 

And I repeat. It may be fine for its cost. But it is not what it's being sold as.


Guest tylerhawes
AFAIK - and maybe someone who works at a fab can chime in here - isn't one large chip generally more expensive than three smaller chips of 1/3 the photosite count?

 

Yes, flaws in silicon chips occur on average every X microns square, so the bigger the chip, the more you have to throw away in production due to imperfections.
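The defects-per-area point can be illustrated with a toy Poisson yield model, where the chance a die survives falls off exponentially with its area. The defect density and die area below are invented purely for illustration; real fab economics are far messier.

```python
import math

# Toy illustration of why one big chip can cost more than three small
# ones: with defects scattered randomly over the wafer, the fraction of
# defect-free dies falls exponentially with die area (Poisson yield).
# D0 and the die area are made-up numbers for illustration only.

D0 = 0.5  # assumed defects per cm^2

def poisson_yield(area_cm2: float) -> float:
    """Fraction of dies with zero defects at the assumed density."""
    return math.exp(-D0 * area_cm2)

area = 4.0  # cm^2 for the single big sensor (assumed)

# Relative silicon cost per good part ~ silicon consumed / yield.
cost_big = area / poisson_yield(area)
cost_three_small = 3 * (area / 3) / poisson_yield(area / 3)

print(f"big-die yield:   {poisson_yield(area):.2f}")
print(f"small-die yield: {poisson_yield(area / 3):.2f}")
print(f"relative cost, 1 big: {cost_big:.1f} vs 3 small: {cost_three_small:.1f}")
```

Under these assumed numbers, three small dies come out several times cheaper in silicon than one large one, even though you need all three to test good.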


Plus the block, filters, mounting arrangements, and you're saying it's still cheaper? No.

 

Low-end consumer DV cameras have gone bayer for years principally because it is cheaper, not because it is better.

 

P


I can never understand why you care.

 

Maybe Jim doesn't like being called a liar over and over. When I opened a RED file in After Effects it was a 4K file. What matters is what that file looks like on TV or projected in a film theater. I don't know why, but I (me, just me) found the RED footage texture to be nicer than that of the F35 (at NAB). But we are splitting hairs. It's evolving all of the time. I think the more interesting part of the whole RED thing is the RAW workflow (and I've got it working fine).

 

bob

What I'm saying here can, by the way, be confirmed in about 10 minutes of testing with some high-resolution images and a copy of Photoshop. Spit out JPEGs of the images scaled to various sizes, but with the same quality level selected.

OK, I've got an old copy of Photoshop at home. Let me try to reproduce the illusion. Scaling, though, may be part of the problem. It's a resampling, which imposes its own losses and Nyquist limit. A better test might be to set a camera up on a locked off tripod, and shoot one image full telephoto and another full wide. Crop the wide to match the content of the long, eliminating the scaling issue. Or trading it off for zoom lens design issues.... ;-)
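The caveat that resampling brings its own Nyquist limit can be shown numerically: a frequency above half the sampling rate is sampled into exactly the same values as a lower, folded frequency. A minimal stdlib sketch:

```python
import math

# Minimal Nyquist/aliasing demo: sample a sine above half the sampling
# rate and it becomes indistinguishable from a lower, folded frequency.
# Frequencies are in cycles per unit length; rate in samples per unit.

def sample(freq: float, rate: float, n: int):
    """Sample sin(2*pi*freq*x) at n points spaced 1/rate apart."""
    return [math.sin(2 * math.pi * freq * k / rate) for k in range(n)]

rate = 100.0                    # Nyquist limit = 50 cycles/unit
high = sample(70.0, rate, 32)   # 70 cycles/unit: above Nyquist
alias = sample(30.0, rate, 32)  # folds to |70 - 100| = 30 cycles/unit

# The two sampled signals agree sample-for-sample up to sign
# (and floating-point noise).
max_diff = max(abs(a + b) for a, b in zip(high, alias))
print(f"max sample-for-sample difference: {max_diff:.2e}")
```

Which is the point about scaling in Photoshop: once you resample, detail above the new Nyquist limit isn't merely lost, it folds back into the image as something else.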

 

Anyhow, this notion of complexity diminishing with resolution is something that I believed about 10 - 15 years ago. I posted something on OpenDTV based on it, and ironically it was one of the MPEG or JPEG guys who replied with the correction.

 

 

 

 

 

-- J.S.

... and just as it did with the Genesis. They're TV cameras, adequate for TV, not OK for cinema release movies. Producers overwhelmingly voted with their feet on this issue, ....

We have a couple shows on Genesis at the moment. Those cameras aren't gathering any dust. They're not cheap, either. Shooting ratio is the big factor. The more you need to grind, the more things tip towards tape.

 

 

 

-- J.S.


Plus the block, filters, mounting arrangements, and you're saying it's still cheaper? No.

 

Low-end consumer DV cameras have gone bayer for years principally because it is cheaper, not because it is better.

 

P

 

Well, as you know, cheap cameras with bayer sensors (even high-end DSLRs in JPEG mode) don't exactly use sophisticated demosaic algorithms. In software, we have the luxury of doing some pretty clever stuff.

 

For small chips, yes, one is cheaper than three, but also remember that that one chip is highly unlikely to have as many photosites as the three chips combined. As chips get larger, they get more expensive, but it's not a linear progression based on area - more exponential - so as you get up to very big chips (i.e. S35 size and beyond) I'd hazard that the three smaller chips are cheaper, even with the associated prism and alignment issues. But the thing is, you and I don't know that for certain - it's just friendly speculation between us :-) So yes, at the low-resolution end a single chip is much cheaper, but I don't think so at the large end of things. Either way, I think the argument is an interesting one.

 

Graeme

Of course, 3 chip cameras often have offsets in their chips which stop them delivering properly co-sited chroma in 4:4:4.

This is something I've been wondering about. A 2/3" chip puts 1920 photosites across an image area 0.378" wide, which works out to a pitch of 0.0002". Standard machine shop work is done to +/- 0.002", and really high end stuff sometimes achieves +/- 0.0001", but it ain't cheap and it ain't easy.

 

How do they glue those suckers onto a prism block with such precision that the photosites line up? Or do they even try? Some of the specs I've seen (from Panasonic, IIRC) give total numbers of pixels about 5% higher than the standard format, so my guess is that they put a test pattern through the block and use it to hardwire a correction for where the chips actually landed.
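The tolerance arithmetic above, run as a quick sanity check (the figures are the ones quoted in the post):

```python
# Alignment arithmetic from the post: 1920 photosites across the 0.378"
# wide image area of a 2/3" sensor, compared against typical machining
# tolerances.

width_in = 0.378
photosites = 1920
pitch = width_in / photosites
print(f"photosite pitch: {pitch:.6f} in (~{pitch * 25.4 * 1000:.1f} microns)")

shop_tol = 0.002       # standard machine-shop tolerance, inches
high_end_tol = 0.0001  # very high-end tolerance, inches
print(f"standard tolerance  = ~{shop_tol / pitch:.0f} photosites of error")
print(f"high-end tolerance  = ~{high_end_tol / pitch:.1f} photosites of error")
```

Even the high-end ±0.0001" figure comes out to about half a photosite, which makes the guess about calibrating chip placement out electronically look plausible.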

 

 

 

-- J.S.


Anyhow, this notion of complexity diminishing with resolution is something that I believed about 10 - 15 years ago.

 

-- J.S.

 

Many of these concepts of smaller-vs-bigger frame compression ratios apply to single frames. For video, the temporal relationship also enters the analysis, and perhaps the level of detail complexity of smaller vs. bigger frames is not as important.

 

As I mentioned before, for regular spatio-temporal video compression we have simultaneously employed direct SD- and HD-sized acquisition at various bit rates (so no resampling of data was required to get one from the other) and seen that the relationship is more complex than the phrase "higher spatial size necessarily means better compression."

Edited by DJ Joofa
That's because we hardly ever find a chroma edge without an associated luma edge, and our eyes just don't see sharp chroma only edges that well.

It's actually even stranger than that. Our brains do a lot of processing on the data from our eyes, and they even go so far as to "adjust" chroma edges to fit nearby luma edges. Here's an example that I made up and have framed in the office:

 

Take two sheets of colored script revision paper of about the same reflectance (I used blue and green). Butt them together on a table top long edge to long edge, and tape them the whole way. Then flip them over and draw a squiggle line with a black sharpie crossing over the blue-green boundary several times. (Say about like a sine wave making excursions of 3/4" on each side and six cycles in all). Hang this thing up and look at it from about 50 feet away.

 

The wavy black sharpie line will appear to be the boundary between the colors, not the straight edges of the sheets of paper.

 

Walk in closer, and at some point the auto-correction gets overwhelmed, and you see the boundary where it really is.

 

This thing has saved the tush of television over and over again. This is why NTSC-2 worked with a rather minimal color subcarrier. This is why 4:2:2 works for looking at, but not so great for chroma keys. This plus the fact that luminance depends far more on green than red and blue is why the Bayer pattern is a good idea.
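The point about 4:2:2 being fine to look at can be sketched numerically using the common Rec. 601 luma weights: keep every luma sample, store chroma at half rate, and the only error shows up on a pure chroma edge - exactly the kind of edge our eyes tolerate but chroma keyers don't. The conversion weights below are the standard Rec. 601 ones; the tiny test "image" is made up.

```python
# Minimal 4:2:2-style chroma subsampling sketch. Luma (Y) is kept at
# full resolution; Cb/Cr are stored at half rate and reconstructed by
# repetition. Weights are the common Rec. 601 luma coefficients.

def to_ycbcr(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = (b - y) * 0.564
    cr = (r - y) * 0.713
    return y, cb, cr

# A one-line image with a sharp red-to-blue chroma edge at an odd index.
row = [(255, 0, 0)] * 3 + [(0, 0, 255)] * 5
ycc = [to_ycbcr(*px) for px in row]

# 4:2:2: keep every Y sample, but only every second Cb sample...
cb_sub = [ycc[i][1] for i in range(0, len(ycc), 2)]
# ...then reconstruct the missing chroma samples by repetition.
cb_rec = [cb_sub[i // 2] for i in range(len(ycc))]

cb_err = max(abs(px[1] - rec) for px, rec in zip(ycc, cb_rec))
print("luma error: 0 (every Y sample kept)")
print(f"worst Cb error at the chroma edge: {cb_err:.1f}")
```

All the luma detail survives untouched; the large Cb error sits on a single pixel at a pure chroma edge, which is precisely where the eye/brain "power steering" described above covers for it.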

 

Edit to add: Another thing, adjusting convergence on a CRT computer monitor, I just noticed that there's a wide range between noticeably too far right and too far left. Real convergence must be in the middle, but that's our eye/brain chroma power steering again. ;-)

 

 

 

-- J.S.


The wavy black sharpie line will appear to be the boundary between the colors, not the straight edges of the sheets of paper.

 

Walk in closer, and at some point the auto-correction gets overwhelmed, and you see the boundary where it really is.

 

I think this phenomenon is similar to the famous "Mach bands" effect, which happens when human vision changes contrast perception at boundaries. The Mach band effect results in seeing stuff that is not there.

 

This plus the fact that luminance depends far more on green than red and blue is why the Bayer pattern is a good idea.

 

The actual situation is more complex than just green vs. red and blue. Kindly Google "Opponent axis" as a starting point.


This topic is now closed to further replies.
