
I hate RED


Chris D Walker


I Hate RED

 

I don't feel this way about any other digital camera. I have this hatred for the RED because of the mass hysteria of its marketing, its numbers and its fandom. F23 and F35? Fine. Genesis? I can live with it. RED? I'm screaming in my head. This is entirely about what has been said about the camera, not the camera itself.

 

1) K's - Simply put, a 4K Bayer sensor doesn't resolve 4K. Marketing team, stop leaning on this number. And as for how many K's a camera claims, how far is too far?

 

2) Coming soon... - For a camera that supposedly obsoleted obsolescence, there's always another build, then an updated sensor, then a new viewfinder.

 

3) Goodbye, intuition - Fashion and men's magazines have recently been using REDs and EPICs to photograph their subjects, Esquire in particular, which said: "This makes it a lot easier on the photographer since he doesn't need to know, intuitively, when the best few seconds are to snap a stream of shots." Wait, the photographer doesn't have to click his shutter anymore? Wow, an artist is at work.

 

4) Wavelet goodbye - I'm no software expert, but data at a roughly 10:1 compression ratio (REDCODE 36) can't realistically be called uncompressed RAW.

 

5) Indie Baby - "It's a professional camera for the prosumer market. Hooray!" I dispute that. There are many great cameras, both film and digital, that deliver an equally good-looking image for less than a RED. Films get made on Super 16 for under $30,000. That said, how much does storage for compressed 'uncompressed' 4K 12-bit data cost? Plus lens rental, a DIT, colour grading, archiving and so on?
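To put rough numbers on points 4 and 5, here is a quick back-of-the-envelope sketch in Python. The 4096 x 2304 raster, the 12-bit depth and the nominal 36 MB/s figure for REDCODE 36 are my own assumptions rather than anything stated in this thread, so treat the output as ballpark only.

```python
# Rough, back-of-the-envelope numbers for points 4 and 5 above.
# Assumptions (not from the thread): 4096 x 2304 photosites, 12 bits per
# photosite, 24 fps, and REDCODE 36 taken at face value as ~36 MB/s.

FRAME_W, FRAME_H = 4096, 2304      # assumed 4K Bayer raster (photosites)
BITS_PER_SITE    = 12              # 12-bit RAW, one sample per photosite
FPS              = 24
REDCODE36_MBPS   = 36.0            # nominal data rate, megabytes per second

MB = 1024 * 1024

frame_bytes  = FRAME_W * FRAME_H * BITS_PER_SITE / 8
uncompressed = frame_bytes * FPS / MB              # MB/s before compression

ratio       = uncompressed / REDCODE36_MBPS        # implied compression ratio
per_hour_gb = REDCODE36_MBPS * 3600 / 1024         # storage per shooting hour

print(f"Uncompressed Bayer rate : {uncompressed:6.1f} MB/s")
print(f"REDCODE 36 rate         : {REDCODE36_MBPS:6.1f} MB/s")
print(f"Implied compression     : {ratio:6.1f} : 1")
print(f"REDCODE 36 per hour     : {per_hour_gb:6.1f} GB")
```

Under those assumptions the implied ratio comes out around 9:1, in line with the roughly 10:1 figure in point 4, and an hour of recording lands in the region of 125-130 GB, which is the kind of number the storage question in point 5 is really about.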

 

Everything I've written concerns how the RED has been sold by its manufacturer. How the camera itself performs in the field is not my point; I'll let others comment on that. I hate being fed hype. Sony doesn't do this. Panavision doesn't do this. Arri doesn't do this. Conduct yourselves in a gentlemanly fashion, RED marketing team.

 

Rant over. Has what I've said been fair?

 

Before I reply, let me just say that I do not own a RED.

 

1) Disagree: They acknowledge the true resolution of the RED ONE's Mysterium sensor to be 3.2K, and the Mysterium X to be 3.7K after debayering, and they make a point of telling people that their upcoming 5K camera is what will deliver a true, full-resolution 4K.

 

Agree: That being said, I am discouraged that they make 4K seem like the be-all and end-all when I might be purchasing a 3K Bayer camera from them (about 2.6K measured RGB) that hasn't even come out yet.

 

2) Disagree: For starters, they had one camera, their first camera, and a lot of people didn't just jump on board, they took a leap of faith. The idea at the time was that the camera could be upgraded from then on (and to a degree it can, because you can keep adding better accessories and now even the new sensor). I suppose they realized it was not obsolescence-proof enough, so after two years, yes, two years, they offered their customers the option to trade in their camera at full purchase value towards the new modular system. Can you think of any other camera company that has done that? Or any company, for that matter? Toyota is letting people return their cars only on a "case by case basis", and their products are potentially fatal. The old accessories will also work on the new camera systems, by the way. For more, I suggest you read a little about the company's new program on reduser.net.

 

Agree: Nothing is ever truly going to be obsolescence-proof, so maybe they shouldn't say that. I don't know; it's hard to see your side on this one. When my Panasonic DVC30 started becoming obsolete, Panasonic didn't offer me anything towards trading it in for the HMC40 (the HD version of the DVC30).

 

3) Disagree: This is ridiculous; you can't possibly hold a camera company responsible for the way people use its products. That's like faulting a computer company because a student, instead of using his laptop to write his paper, used it to go online and buy a pre-written one from someone else.

 

4) It's not uncompressed, it's a proprietary codec, and it performs very well. DVCPRO HD at 1080p is actually stored as 1280x1080, I'm sure resolves to slightly less, and runs at about 1 GB/min. With REDCODE 36 you end up with roughly six times the pixel count at only about double the data rate.
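For what it's worth, that comparison roughly checks out on its own figures. A tiny sketch using the numbers quoted above; the 4096 x 2304 recorded raster for RED is my assumption, not something stated in the post:

```python
# Quick check of the comparison above, using the poster's own figures:
# DVCPRO HD stored as 1280x1080 at roughly 1 GB/min, REDCODE 36 at a
# nominal 36 MB/s.  The 4096 x 2304 RED raster is an assumption.

dvcpro_pixels = 1280 * 1080
red_pixels    = 4096 * 2304               # assumed recorded raster

dvcpro_rate  = 1024 / 60.0                # ~1 GB/min expressed in MB/s
redcode_rate = 36.0                       # MB/s

print(f"Pixel ratio    : {red_pixels / dvcpro_pixels:.1f}x")   # ~6.8x
print(f"Data-rate ratio: {redcode_rate / dvcpro_rate:.1f}x")   # ~2.1x
```

Roughly seven times the pixels for about twice the data, which is in the same ballpark as the "six times the resolution at double the rate" claim.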

 

5) If you really wanted to compare apples to apples, you would have to include the cost of a 2K telecine to get close to equal stats. You would have to rent lenses for any camera (unless it's a fixed-lens prosumer model like an HVX or something). The amount you save with RED by not paying for processing, telecine, or film stock could easily pay for a DIT. Apparently you don't color grade unless you are using RED, but for the rest of us color grading happens whether the footage originated on film or digitally. And everything I have shot on film, I keep as a digital archive of the master and all the footage.

 

 

Look, if there is one thing I hate it's RED fanboys, but RED is a great company. They look out for their early adopters and they are connected with their user base; hell, I have even gotten direct responses to questions from Jim Jannard. I don't know who the CEO of Panasonic is, but I'll tell you he didn't help me out when I had questions about the HMC150.

 

Marketing is part of life in this country. That may suck for some of you out there, but it is the result of something good called a free market: competition yields better products. So the RED wasn't full 4K right away, but it was still a 3K camera that looked great printed to film or on 2K projectors when we went to the movies, and it was a fraction of the cost of many digital cinema or film camera systems. Would you rather it be like the Soviet Union, where they just churned out products without giving us a choice? Anyone who has used a K-3 that suffered from poor quality control knows what I'm talking about.

 

Yeah, there is a lot of hype, but when they can't live up to something they promised, they admit it (for example, they recently stated that boot times for the new cameras would be closer to 10 seconds rather than the 2 seconds originally planned).

 

You have every right to your opinions and I hope you give RED a chance, but really, man, if this gets you so angry, I'd hate to see what happens when you go to McDonald's and your mashed-up defrosted burger doesn't look like the picture on the menu.



No, not at all. On hearing it, yeah, OK, maybe. But when I take a 4K frame and put it up on a 40" 1080p screen, then continually zoom in more and more without noticing any reduction in the sharpness of the image, and then zoom in the exact same amount on a 1920x1080 frame and watch it turn into a pixelated mess, I start to believe.

 

Something interesting, though: a story I heard, and my details may be a bit off.

 

Some scientists at Kodak did a test where they set up a 4K projector on an IMAX screen in front of an audience. They first projected four squares in two rows: black-white, with white-black underneath. They kept increasing the number of squares in a checkerboard pattern until they essentially got up to 4K resolution, where the black and white squares were effectively the pixels. The thing is, before they even reached full 4K, the audience couldn't make out the detail of the checkerboard pattern and everything looked gray.

 

Sometimes I think, especially since cineplex-sized IMAX screens are gaining popularity over the giant ones, that we are getting to the point where we won't be able to see the difference in resolution. That doesn't mean a 4K or higher image won't be superior when scaled down, compared with formats shot and projected at the same resolution, or that it won't make VFX compositing look cleaner, or that reframing won't look better.

Some scientists at Kodak did a test where they set up a 4K projector on an IMAX screen in front of an audience. They first projected four squares in two rows: black-white, with white-black underneath. They kept increasing the number of squares in a checkerboard pattern until they essentially got up to 4K resolution, where the black and white squares were effectively the pixels. The thing is, before they even reached full 4K, the audience couldn't make out the detail of the checkerboard pattern and everything looked gray.

 

I heard about that test. The thing is that you obviously want enough pixel resolution that you can't see pixels, so I wonder how far past that point you want to go. In other words, if people can only see 3-point-something K of information on a big screen, then maybe 4K is the optimal resolution, putting you just beyond the threshold of being able to see any more detail.

The thing is, before they even reached full 4K, the audience couldn't make out the detail of the checkerboard pattern and everything looked gray.

 

Well, yeah. You can't display more than half as many cycles as you have pixels without aliasing. It's Nyquist's theorem again, the same as for sensors. The display has to have a low-pass reconstruction filter. You need filters for both the analog-to-digital and the digital-to-analog conversions. They get you both ways, like income tax and sales tax. ;-)

 

http://en.wikipedia.org/wiki/Reconstruction_filter
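To make the Nyquist point concrete, here is a toy numpy sketch (my own illustration, not anything from the post above): ask a 16-pixel row to carry more cycles than half its pixel count and the pattern folds back to a lower frequency, which is exactly what the filtering on both the capture and display sides is there to prevent.

```python
# Toy illustration of the Nyquist limit: a pattern finer than half the
# pixel count aliases back to a coarser one once it is sampled.
import numpy as np

n_pixels = 16                      # pretend this is one row of a display
x = np.arange(n_pixels)
nyquist = n_pixels // 2            # max cycles representable across the row

for cycles in (4, 8, 12):          # 12 cycles is beyond the 8-cycle limit
    row = np.cos(2 * np.pi * cycles * x / n_pixels)
    spectrum = np.abs(np.fft.rfft(row))
    seen = int(np.argmax(spectrum[1:]) + 1)   # dominant frequency actually present
    print(f"{cycles:2d} cycles requested -> {seen:2d} cycles seen "
          f"(limit = {nyquist})")
```

Twelve cycles across sixteen pixels comes back looking like four: the detail isn't preserved, it's misrepresented, which is why you band-limit before sampling and reconstruct with a low-pass filter afterwards.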

 

 

 

 

-- J.S.


Rant over. Has what I've said been fair?


And there's nothing about that which strikes you as a bit suspect?

 

P

 

What is highly "suspect" IMHO is that most camera manufacturers keep using completely outdated video compression schemes like the ones based on DCT (the Discrete Cosine Transform, as in JPEG). Wavelet-transform (DWT) based compression has, as a theory, been with us for many, many years. Red is one of the very few who actually managed to use it (it is quite CPU intensive), even though everyone knows perfectly well that DWT is the way forward!

 

DCT is completely outdated, but I'm sure you know that very well! The question is: why the heck do they all keep using DCT now that it has been proven (by Red) there are powerful enough CPUs for real-time DWT encoding of 30 frames/s in 4K?

 

HDCAM SR, for instance, although "less compressed" stricto sensu than REDCODE RAW, looks absolutely awful as soon as you push things a little too hard in the blacks: 8x8-pixel blocks everywhere. Yuck! Try to do the same thing with a DWT-compressed image like those from the RED ONE: you'll get noisy blacks, of course, if you're pushing the noise floor up, but you sure won't get ANY visible compression artifacts whatsoever!

 

If I, as an engineer, had to choose between uncompressed and ANY type of DCT-based compression (HDCAM, HDCAM SR, DVCPRO HD, HDV, DV, and so on), I'd choose to work with uncompressed! No questions asked!

 

But if I had to choose between uncompressed and Wavelet Transform based compression schemes, I'd go with the wavelet without any hesitation! The very small difference between uncompressed and DWT compressed is hardly visible at all, and certainly doesn't seem like a big price to pay for the huge reduction in the amount of data!

 

Most of the commercial success of the RED ONE comes from the fact that they were smart enough to find a technical solution for using a wavelet-transform-based compression scheme. You can hardly blame Red for having chosen a state-of-the-art compression algorithm... You should certainly blame the ones who didn't: Sony, Panasonic, etc.


What is highly "suspect" IMHO is that most camera manufacturers keep using completely outdated video compression schemes like the ones based on DCT (the Discrete Cosine Transform, as in JPEG). Wavelet-transform (DWT) based compression has, as a theory, been with us for many, many years. Red is one of the very few who actually managed to use it (it is quite CPU intensive), even though everyone knows perfectly well that DWT is the way forward!

 

I suspect there are many reasons why you don't change your infrastructure every time something new comes along. There are large investments in systems, and you need to get a full return on them before moving on.

 

I know of one production that considered using RED but went with HDCAM SR because of its tight post-production schedule and the delays due to rendering. All these codecs have advantages and disadvantages; it's a matter of selecting the correct one for the job in hand. For much broadcast work there is no time for post-production colour correction, so a RAW format may not offer enough of an advantage (or any) to justify the change.

there are powerful enough CPUs for real-time DWT encoding of 30 frames/s in 4K?

 

There are. There are also storage systems fast enough to record it uncompressed. Both have upsides and downsides; the problem with doing high-res wavelets is that the silicon to do it will pull a lot of power. That may not be a reason not to use it, but it's a perfect example of the maxim that compression is actually nothing but a disadvantage in absolutely every area other than storage bandwidth. And storage bandwidth is really just a cost factor.

 

The other thing I have to pick you up on is this 4K thing again. There may be CPUs powerful enough to do real-time wavelets on 4K, but Red doesn't have one. It doesn't need one. It isn't doing real-time 4K. I hate to keep going on about this, but this is exactly, precisely why it is important: if you base technical analysis on advertising copy, you will not make good decisions as a result. This is why terminology matters.

 

Addendum: with regard to compression artifacts, wavelet technology is actually on very dangerous ground. There are two specific problems with wavelets: first, they can create an incredibly low trans-codec signal-to-noise ratio while simultaneously looking like absolute poop; second, the artifacts are effectively impossible to repair. In the ideal case, wavelets produce artifacts that just look like an approximately Gaussian blur (depending on which wavelet you actually used), which is a pretty benign artifact. In the bad case, though, the image just looks like a badly-resampled texture in a 3D computer game, which is exceedingly ugly and, critically, effectively impossible to fix. Many of the advantages of wavelets can be duplicated in a DCT codec that does intelligent deblocking ("loop filtering" in H.264 terminology) and does not suffer these insidious problems.
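A toy illustration of the "ideal case looks like a blur" behaviour, entirely my own sketch rather than anything Red publishes: do a one-level Haar transform, throw away the small detail coefficients (a crude stand-in for quantisation), and the reconstruction comes back locally smoothed instead of breaking into blocks.

```python
# One level of a Haar wavelet on a 1D signal: discarding fine-detail
# coefficients smears the error smoothly (blur-like) rather than
# clumping it into 8x8 blocks the way coarse DCT quantisation does.
import numpy as np

def haar_forward(signal):
    """Pairwise averages (coarse part) and differences (detail part)."""
    pairs = signal.reshape(-1, 2)
    avg  = pairs.mean(axis=1)
    diff = (pairs[:, 0] - pairs[:, 1]) / 2.0
    return avg, diff

def haar_inverse(avg, diff):
    out = np.empty(avg.size * 2)
    out[0::2] = avg + diff
    out[1::2] = avg - diff
    return out

rng = np.random.default_rng(1)
signal = np.linspace(0.0, 1.0, 16) + 0.05 * rng.standard_normal(16)

avg, diff = haar_forward(signal)
diff_kept = np.where(np.abs(diff) > 0.05, diff, 0.0)   # crude "compression"
rebuilt = haar_inverse(avg, diff_kept)

print("per-sample error:", np.round(np.abs(rebuilt - signal), 3))
```

Each discarded coefficient just replaces a pair of samples with their average, so the damage reads as softness; whether that failure mode is "benign" or "impossible to fix" is exactly the argument above.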

 

Wavelet is not a panacea. I think the reason people aren't using it is the cost of the silicon and the power consumption, but in any case the idea that it's even twice as good as DCT (bearing in mind we don't accept raw SNR figures as definitive) is sheer fantasy. It's one of several fantasies that I think Red rely on rather heavily.

 

P

Wavelet is not a panacea. I think the reason people aren't using it is the cost of the silicon and the power consumption, but in any case the idea that it's even twice as good as DCT (bearing in mind we don't accept raw SNR figures as definitive) is sheer fantasy. It's one of several fantasies that I think Red rely on rather heavily.

 

Phil, I'm not an engineer, and I only barely keep up with an understanding of the underlying technology platforms.

 

I am inherently sceptical of compression, and have certainly seen the shortcomings of other formats, but I'd put it way down the list of issues holding back RED picture fidelity. I've done a film and a TV series with RED, and I'm amazed at how good the compression is. It's certainly not getting in the way of pushing the picture around as much as other sensor limitations are.

 

I'm sure you'll tell me that the compression artefacts are there, but I can't seem to see them in my everyday work. I can't say the same about almost any other camera / format.

 

I have the greatest respect for your knowledge, but sledging RED for using this kind of compression, when it's clear (well, in my experience) that it's far above anything else, doesn't make a lot of sense. I'm genuinely trying to take on your point of view, but I really don't understand what your issue actually is, because I just don't see it playing out in my own real-world experience. Can you be any more definitive about what makes it some kind of subterfuge? How would it play out in the final image, and what production conditions would reveal the fundamental problems of wavelet-based compression?

 

I can understand your sensitivity over the constant use of the term 4K if numbers are important to you. Long ago in my mind I accepted that RED creates a 4K (pixel) file out of a 3.2K (at best) image off the sensor. Most imaging professionals probably think the same way. Anyone who has half a clue about RED knows that the camera doesn't record 4K.

 

It's interesting, though, because I don't think anyone else actually bothered to measure the shooting resolution of other digital formats. I was involved in some tests that were done by a local rental company. We couldn't even get 3.2K out of RED, but we also couldn't get anywhere near 1920 out of most HD cameras either. RED seemed to come closest to its stated resolution.

 

I also realised long ago that most manufacturers fib about their products, or, at best, present their results in the most favourable possible light. Battery performance, for example. Lens apertures not being what's written on the barrel. "No ramping" is a favourite. In my mind, Sony are the worst offender.

 

I've never been a fan of pointing cameras or lenses at charts, though, because they make for very boring subjects. Are they really so different from other manufacturers wanting their products to be seen in the best possible way? Is it that they are more successful than most? Is it that their lower cost of entry has meant a flood of inexperienced imaging *experts* who think the best way to be a DP is to hitch your horse to one camera?

 

jb

I'm sure you'll tell me that the compression artefacts are there, but I can't seem to see them in my everyday work.

 

Long ago in my mind I accepted that RED creates a 4K (pixel) file out of a 3.2K (at best) image off the sensor...

 

I was involved in some tests that were done by a local rental company. We couldn't even get 3.2K out of RED, but we also couldn't get anywhere near 1920 out of most HD cameras either.

 

Wavelet artefacts look like soft focus and/or a sort of micro-fine version of the distortion used in shower door glass. People tend to be less aware of and less bothered by that than by the little squares we all see so clearly when the wheels come off of DCT compression. So, wavelets fail in a more "viewer friendly" way.

 

The Red has 4K+ Bayer masked photosites across the sensor. Its raw data output is 4K Bayer, one of three colors for each of those 4K locations. From that, they upconvert to 4K pixels, meaning three colors for each location, or whatever you want. Is that really equal to 3.2K worth of co-located three color samples? It's apples and oranges, very subjective.

 

The "1920" cameras, too, are counting photosites across their chips. It's just that they have three chips on a prism block. All of them run into the Nyquist limit, which says that you can't shoot details finer than two photosites worth, or you'll get artefacts. So, all of them have Optical Low Pass Filters (OLPF's) to prevent that. And they all cheat a little on the Nyquist limit, trading off a few artefacts for some more sharpness. That's why you can't get anywhere near 1920 of anything out of them. The limit, N/2, is 960.

 

Bayer cameras have two Nyquist limits, one for green, and a lower one for red and blue.

 

 

 

 

-- J.S.

I can understand your sensitivity over the constant use of the term 4K if numbers are important to you. Long ago in my mind I accepted that RED creates a 4K (pixel) file out of a 3.2K (at best) image off the sensor. Most imaging professionals probably think the same way. Anyone who has half a clue about RED knows that the camera doesn't record 4K.

 

Even this is an example of the problem of being unspecific - the camera does record "4K" -- 4K RAW to be specific. Or it doesn't record "4K" -- 4K measurable resolution to be specific. So it's confusing to say that it does or doesn't record 4K, because both are true, it does... and it doesn't. This is the problem of throwing around the term 4K without a descriptive suffix.

 

And even saying that it creates a 4K file out of 3.2K is also imprecise or misleading because in fact, it doesn't specifically uprez 3.2K to 4K, it generates 4K RGB from 4K RAW Bayer CFA, which happens to only measure out at 3.2K (or whatever it actually is). But it's still a process that goes from 4K RAW to 4K RGB.

 

Now I suppose you can argue that RAW to RGB conversions are a form of uprezing or upscaling if you think of 4K RAW Bayer as being 2K green and 1K each of red & blue, but that is a fairly inaccurate way of describing what a smart algorithm does to figure out the color information.
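For anyone wondering what that RAW-to-RGB step actually involves, here is a deliberately bare-bones bilinear demosaic of an RGGB Bayer mosaic in numpy. It is only an illustration of the general idea; real debayer algorithms are far smarter than averaging neighbours, which is exactly the point about what the algorithm has to figure out.

```python
# Bare-bones bilinear demosaic of an RGGB Bayer mosaic.  Illustration only:
# production debayer code uses much smarter, edge-aware interpolation.
import numpy as np

def neighbourhood_sum(img):
    """Sum of each pixel's 3x3 neighbourhood (zero-padded at the borders)."""
    p = np.pad(img, 1)
    h, w = img.shape
    return sum(p[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3))

def demosaic_bilinear(raw):
    """raw: 2D array with one colour sample per photosite, RGGB layout."""
    h, w = raw.shape
    yy, xx = np.mgrid[0:h, 0:w]
    masks = {
        "r": (yy % 2 == 0) & (xx % 2 == 0),
        "g": (yy % 2) != (xx % 2),
        "b": (yy % 2 == 1) & (xx % 2 == 1),
    }
    rgb = np.zeros((h, w, 3))
    for i, c in enumerate("rgb"):
        known = raw * masks[c]
        # Average of the known same-colour samples in each 3x3 neighbourhood.
        rgb[..., i] = neighbourhood_sum(known) / np.maximum(
            neighbourhood_sum(masks[c].astype(float)), 1e-9)
    return rgb

# Sanity check: a flat mid-grey mosaic should come back grey in all channels.
raw = np.full((8, 8), 0.5)
print(np.round(demosaic_bilinear(raw)[2:6, 2:6].mean(axis=(0, 1)), 3))
```

One sample per photosite goes in, three estimated values per pixel come out; how cleverly the missing two-thirds are estimated is what separates a good debayer from a crude one.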

Wavelet artefacts look like soft focus and/or a sort of micro-fine version of the distortion used in shower door glass. People tend to be less aware of and less bothered by that than by the little squares we all see so clearly when the wheels come off of DCT compression. So, wavelets fail in a more "viewer friendly" way.

 

The only time I've seen this kind of artefact is when the blacks are highly lifted and you have flat fields of a low shadow tone, near black or just above. I had always assumed it had more to do with the fixed noise of the sensor. Mind you, if that's the worst it can do, then I've only seen it appear in the most drastic of situations. I still think I prefer it to DCT.

 

Is that really equal to 3.2K worth of co-located three color samples? It's apples and oranges, very subjective.

 

I was just making the point that you have to arrive at your own *position* on how you view the numbers, and filter through the marketing talk to the real-world outcomes. I think we do that with all cameras, not just RED.

 

jb


I actually hate the "K" terminology. People still believe it means something, when it's almost completely meaningless. Originally it referred exclusively to the horizontal resolution of film scans, which was meaningless enough (different scanners will pull different amounts of detail from the same negative, at different noise levels, and pixels don't guarantee detail), but when I first heard it extended to digital sensors, I must have groaned so loud you could hear it coast to coast.

 

You might as well measure overall film quality by running time. Or think of a press person requesting a "300 DPI image." Okay, 300 DPI at what size? On its own, "DPI" is 100% meaningless.

 

If you want to measure detail, you need to switch to line pairs per mm (MTF is too confusing in discussion). If you want to measure tonality, you need to switch to SNR (aka dynamic range). If you want to measure latitude, you're screwed, because there is no accepted way to measure it (the best way is to pick an "acceptable" SNR and figure out how many stops you can reproduce above that SNR, but "acceptable" is wildly subjective). Pixel dimensions have an impact on some of these factors, but NOT in a direct way.
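That parenthetical about latitude can be made concrete. A small sketch of "pick an acceptable SNR and count the stops above it"; the full-well and read-noise figures are invented for the example, and the noise model is just shot noise plus read noise, so the exact numbers are illustrative only.

```python
# "Pick an acceptable SNR and count the stops above it."  The sensor
# figures below are invented; the point is how much the answer moves
# with the threshold you choose.
import math

FULL_WELL  = 30_000   # electrons at clipping (assumed)
READ_NOISE = 8.0      # electrons RMS (assumed)

def usable_stops(acceptable_snr):
    """Stops between clipping and the signal where SNR hits the threshold.

    Uses SNR = s / sqrt(s + read_noise^2) and solves for s.
    """
    t = acceptable_snr
    s = (t**2 + math.sqrt(t**4 + 4 * t**2 * READ_NOISE**2)) / 2
    return math.log2(FULL_WELL / s)

for snr in (1, 2, 5, 10, 20):
    print(f"acceptable SNR {snr:2d}:1 -> {usable_stops(snr):4.1f} stops")
```

With these made-up numbers the answer swings from roughly 12 stops at SNR 1:1 down to about 6 stops at SNR 20:1, which is why a single quoted "latitude" figure, without the threshold behind it, doesn't tell you much.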

 

Sorry, this is a major pet peeve.

If you want to measure detail, you need to switch to line pairs per mm (MTF is too confusing in discussion).

 

Ah, but then how many mm's do you have?

 

That's fine for film, where you know about 16/35/65. For pictures in general, how about line pairs across the whole picture? The TV tradition is line pairs per picture height, while in film scans, they count samples across the frame. Or maybe compromise on line pairs on the diagonal? ;-)

 

 

 

 

-- J.S.


Most of the commercial success of the RED ONE comes from the fact that they were smart enough to find a technical solution for using a wavelet-transform-based compression scheme. You can hardly blame Red for having chosen a state-of-the-art compression algorithm... You should certainly blame the ones who didn't: Sony, Panasonic, etc.

 

There is a lot more to Red's commercial success than the use of wavelet compression. Silicon Imaging was using real-time wavelet compression (licensed from CineForm) at least two years before Red ever appeared, and they haven't enjoyed anywhere near the success Red has. The Red success story is a combination of the compression technology, the use of a large-format sensor (and PL-mount lenses), the use of commodity recording media, their "rebel"-style marketing approach (love it or hate it, it worked), their attention to their customer base, the personal involvement at every level of their owner/CEO, and, last but most certainly not least, their pricing structure. The combination of these and other factors was able to tap into the "live the dream" mindset of a lot of people who could never previously afford a camera capable of the kind of images the Red can deliver. And it continues to do so, growing a direct sell-through market that never really existed before and forcing many to question the long-term viability of the high-end camera rental business. That takes a lot more than clever use of compression.


Apologies for the delay in replying. For the life of me I couldn't find the link to the LA Times (Below) :lol:

 

 

Hi Keith, interesting numbers and experience you have there, which gave me some historical perspective. I don't know how to do a proper comparison of Red with the cameras of the early 1990s. Within the current league of cameras, Red is certainly financially attractive compared to considerably more expensive digital film cameras.

 

Well, as the old saying goes: “Those who ignore the lessons of history are doomed to repeat them”.

The notion of introducing a program of replacing film cameras with video, as a panacea for all the financial ills of Panavision (and other companies), is far from new.

There were a couple of abortive attempts at this by PV in the early 1980s using tube-based standard definition cameras.

This link will take you to some archived articles from the LA Times

 

About halfway down the page you can read about a 1989 plan to equip seven of the then-new NHK/Sony "Hi-Vision" HDTV cameras with PV lenses and thus revolutionize the industry!

 

Now, those particular cameras were an absolute joke! Just when the rest of the TV industry was celebrating the move to CCD cameras and the consequent elimination of all the drawbacks of tube-based cameras, NHK/Sony wanted to introduce their "HARP" (High-gain Avalanche Rushing amorphous Photoconductor) Saticon tubes. Apart from being a major step backwards operationally, these didn't produce anything like what we would consider a true HD picture today; they had terrible comet-tailing, were noisy, had a very short tube life, and could be permanently damaged by being pointed at a studio light!

 

The power consumption of a complete camera/recording deck was about 3 kilowatts, so the only place they could be realistically used was in a TV studio.

Sony were laughed off the set when they tried to introduce these to Hollywood in the early 1990s.

 

Back to the well-worn drawing board: George Lucas was originally supposed to shoot Star Wars Episode 1 on Panavized HDW750s with custom-designed 1.33:1 anamorphic lenses. That never happened, and there were still no anamorphic lenses available when he was ready for Episode 2. There's actually not a lot of difference between the 750 and the F900s Lucas eventually wound up using, raising speculation as to the true significance of the extra "150" added to the model number...

 

But after all that, for Episode 3, G.L. turned round and hired newer Sony cameras and lenses from upstart startup Plus8Digital. He apparently refused to have anything to do with Panavision after that; at the time I worked for PV's major Australian competitor, and the production people kept asking us if we could sub-hire certain items from PV for them, because they weren't allowed to deal with them directly!

 

(Star Wars 3 was Plus8Digital’s only real rental success story, by the way. Eventually they went broke and were bought by Panavision).

 

These projects seem to be classic boondoggles: enterprises whose only purpose was to maintain the employment and justify the existence of overpaid executives.

However much the wannabe producers want to praise the chutzpah and progressiveness of the people who instigated these programs, the sad facts are:

 

A. They weren’t doing it with their own money

B. There was never any plausible mechanism whereby they could realistically hope to recover even their investment, let alone make the minutest dent in PV's half-billion-and-growing bond debt.

C. It’s the eternal same ol’ same ol’: The actual acquisition cost of making most movies, TV shows and commercials is normally well down into single-digit percentages of the production budget, and the actual camera/media cost component of that, is normally in the low 2-dgit percentages. Everything else, (lights, grip crew etc etc) are going to cost exactly the same. Obviously nobody did “go figure”: The magical word “Digital” explained everything

 

Dalsa pulled the plug.

The Kinetta went nowhere, etc., etc.

 

OK, finally, digital cameras are making some headway into US Prime Time TV production, but that's almost entirely for political reasons. After all, lots of shows are still being shot on F900s, which date back over 10 years!

 

I have based my comments on Red's specs. On the other hand, I also realize that, in practice, what I have seen of Red footage in a few films such as "Knowing" and "The Book of Eli" was not as impressive as those specs suggest.

 

I have yet to see any RED-shot footage that I would describe as anything but "ordinary". The fact that not a single Prime Time US network series is using it seems to say it all. That's not to say that it's a completely useless camera system; it's more a case of "there's nothing terribly wrong with it, but there's nothing terribly right with it either", at least from a completely unbiased viewpoint.

 

It’s supposed to be widely used in Australia; yet nobody seems too keen to tell me on what projects. (I think I already know…)

 

The Epic may turn out to be the camera that the Red One was supposed to be, but who knows? They've made the job of selling it to the industry far harder than it need have been.


Ah, but then how many mm's do you have?

 

That's fine for film, where you know about 16/35/65. For pictures in general, how about line pairs across the whole picture? The TV tradition is line pairs per picture height, while in film scans, they count samples across the frame.

Samples do not equal lines. Pixels do not equal detail... Once you're able to fully separate the sampling rate from the signal, you can start to figure out what an imaging system can do.

 

My advice is always to take the lowest lp/mm of the system (lens, scanner, sensor, etc.), multiply it by two (Nyquist), then multiply it by the sensor dimensions in mm. You'll get the "real megapixels" of the sensor.

 

For example, color negative tends to resolve around 60-70 lp/mm with decent contrast; your lenses can do better, and a good 4K scanner can most likely do even better. Assuming 65 lp/mm, a 2.35:1 extraction of Super 35 would give you 24.89 x 10.6 mm, for a total of 1617.85 x 689 line pairs. To resolve such a frequency, you would need twice as many samples, which gives you 3235.7 x 1378 pixels, or roughly 4.5 "real megapixels" on an ideal sensor. That number tells you the actual amount of detail the system can record.

 

Because of Bayer sensors and AA/optical low-pass filters, you usually need 1.25-1.5x as many pixels as a theoretical "ideal" sensor to reproduce a frequency. So, taking the optimistic 1.25x multiplier, we know that we would need at least a 4044 x 1722 pixel sensor to reproduce the amount of detail in that S35 extraction.
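Here is the same recipe wrapped in a few lines of Python so the steps are explicit; the 65 lp/mm figure and the frame dimensions are the ones from the post, and the little function is just my restatement of that arithmetic, not a standard formula from any particular tool.

```python
# "Real megapixels": lowest lp/mm in the chain, doubled for Nyquist,
# times the frame dimensions in mm, with an optional Bayer/OLPF penalty.
def real_megapixels(lp_per_mm, width_mm, height_mm, bayer_factor=1.0):
    px_w = lp_per_mm * 2 * width_mm * bayer_factor
    px_h = lp_per_mm * 2 * height_mm * bayer_factor
    return px_w, px_h, px_w * px_h / 1e6

# Ideal sensor for a 2.35:1 Super 35 extraction at 65 lp/mm:
w, h, mp = real_megapixels(65, 24.89, 10.6)
print(f"ideal sensor : {w:.1f} x {h:.1f}  (~{mp:.1f} MP)")

# With the optimistic 1.25x Bayer/OLPF multiplier:
w, h, mp = real_megapixels(65, 24.89, 10.6, bayer_factor=1.25)
print(f"with 1.25x   : {w:.1f} x {h:.1f}  (~{mp:.1f} MP)")
```

The first line reproduces the 3235.7 x 1378 figure (about 4.5 MP), and the second lands on the 4044 x 1722 sensor quoted above.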

 

But of course it is not so simple. Grain moves around, which is good (error diffusion in spatial sampling). Pixels are fixed, which leads to all kinds of problems (aliasing, interpolation errors, various artifacts). Resolving the signal and having the signal look "good" or "natural" are very different things.

Samples do not equal lines. Pixels do not equal detail... .

 

True. For the public, line pairs per millimeter isn't a particularly useful concept. Line pairs actually reproduced from top to bottom (or side to side, if you prefer) would mean something. So, how about shooting Marconi charts, zone plates, etc.? An end-to-end test trumps the theoretical calculations.

 

 

 

 

-- J.S.

OK, finally, digital cameras are making some headway into US Prime Time TV production, but that's almost entirely for political reasons. After all, lots of shows are still being shot on F900s, which date back over 10 years!

 

I have yet to see any RED-shot footage that I would describe as anything but "ordinary". The fact that not a single Prime Time US network series is using it seems to say it all.

 

It's more than just headway. Our company (which I'm not allowed to identify when I post here) has no film shows left at all on the studio side. We did a scripted dramatic series on the old Red for a major cable network in the 09-10 season. This pilot season, we have 8 pilots for major broadcast networks. Two are on the new Red MX, two on D-21, two on F-35, and two sitcoms haven't started yet. Ordinary works just fine at that price point. I doubt we'll ever do a film show again for TV.

 

 

 

 

-- J.S.

It's more than just headway. Our company (which I'm not allowed to identify when I post here) has no film shows left at all on the studio side. We did a scripted dramatic series on the old Red for a major cable network in the 09-10 season. This pilot season, we have 8 pilots for major broadcast networks. Two are on the new Red MX, two on D-21, two on F-35, and two sitcoms haven't started yet. Ordinary works just fine at that price point. I doubt we'll ever do a film show again for TV.

-- J.S.

Nothing using PV cameras?

What was that I said about people opening up markets, only to have somebody else take it from them? :rolleyes:

If you have a dependable Post infrastructure in place for RED footage, there's no reason why people wouldn't use it.

Not everybody has your confidence.

