
RED production schedule


Carl Brighton



I've been on a 3-week hiatus for the holidays but recently I'm back prepping for the next episode of "Big Love" that I shoot, which starts Monday, so expect to hear less from me...

 

My biggest bit of skepticism regarding the RED camera is still the price point of $17,500 per camera -- I just don't see how they will make a profit, but that's not really my problem! As for the rest, I'm honestly looking forward to it and to what people will start doing with it.



I've been on a 3-week hiatus for the holidays but recently I'm back prepping for the next episode of "Big Love" that I shoot, which starts Monday, so expect to hear less from me...

 

My biggest bit of skepticism regarding the RED camera is still the price point of $17,500 per camera -- I just don't see how they will make a profit, but that's not really my problem! As for the rest, I'm honestly looking forward to it and to what people will start doing with it.

 

 

Spending time with family. An ice storm hit and I missed the flight to NY, so I got to play hooky for a few days. Got to get back to work too.

 

Yeah, I know. The price tag is awfully low. It's good, but I know the foundry where the die is being made and it can't be cheap, and there are a lot more production costs to consider. It seems awfully low for the projected performance. It will be something if they can hold the price.

 

Best of the new year to everyone. Got to get back to work.


Carl, don't get upset.

I'm not upset. Curious and puzzled, but not upset.

Anyway David probably assumed you understood more about the technology than you did.

There's not really a lot TO understand, since RED never really tell us anything that couldn't apply to any number of other cameras :blink:

 

The problem with just sticking more processing power inside the camera is cost, power consumption, heat, etc. General-purpose processors don't work efficiently on high data rates and can't keep up in a serial (or multiple parallel serial channel) real-time environment. What you need is a processor specifically optimized for this type of data, one that is integrally configured to work with, or integrated on the same die with, a digital/analog signal processor. The R&D isn't cheap and the process won't be cheap, unless there is a commodity push that trickles up the funds to pay for high-performance, high-end cameras. The technology will improve in the CMOS, with more preprocessing going on board the die.

 

At the moment it is cost-, waste-heat- and energy-prohibitive. That may not be true in the future.

I know what you're saying, but that wasn't really my question. Whether you do the signal processing on the fly in the camera, or process it later from recorded RAW data (which to my mind would be a better idea anyway), you're still getting an automated machine to try to complete a sort of video crossword, turning a string of red, green and blue samples into the same number of RGB samples. A newspaper crossword usually has only one solution, but interpolated video often has more than one solution! I've never had a satisfactory explanation as to how they actually accomplish this.
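The crossword analogy can be made concrete with a toy sketch (purely illustrative; this has nothing to do with RED's actual pipeline): two different RGB scenes that produce byte-for-byte identical Bayer mosaics, because the differences fall entirely in channels the colour filter array never samples at those photosites.

```python
import numpy as np

# Two deliberately different full-RGB scenes (4x4 pixels).
rng = np.random.default_rng(0)
scene_a = rng.integers(0, 256, size=(4, 4, 3))
scene_b = scene_a.copy()

# Change scene_b ONLY in channels the Bayer pattern never samples at
# those sites. RGGB layout: R at (even,even), G at (even,odd) and
# (odd,even), B at (odd,odd).
scene_b[0, 1, 0] = (scene_a[0, 1, 0] + 50) % 256  # red under a green filter
scene_b[1, 1, 1] = (scene_a[1, 1, 1] + 50) % 256  # green under a blue filter

def bayer_sample(img):
    """Simulate an RGGB Bayer sensor: one colour sample per photosite."""
    h, w, _ = img.shape
    mosaic = np.zeros((h, w), dtype=img.dtype)
    mosaic[0::2, 0::2] = img[0::2, 0::2, 0]  # R sites
    mosaic[0::2, 1::2] = img[0::2, 1::2, 1]  # G sites
    mosaic[1::2, 0::2] = img[1::2, 0::2, 1]  # G sites
    mosaic[1::2, 1::2] = img[1::2, 1::2, 2]  # B sites
    return mosaic

# Different scenes, identical mosaics: the "crossword" has two solutions.
print(np.array_equal(bayer_sample(scene_a), bayer_sample(scene_b)))  # True
print(np.array_equal(scene_a, scene_b))                              # False
```

Any demosaic algorithm fed that mosaic has to pick one of the candidate scenes (or something else entirely); the samples alone cannot decide.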


I'm not upset. Curious and puzzled, but not upset.

 

There's not really a lot TO understand, since RED never really tell us anything that couldn't apply to any number of other cameras :blink:

I know what you're saying, but that wasn't really my question. Whether you do the signal processing on the fly in the camera, or process it later from recorded RAW data (which to my mind would be a better idea anyway), you're still getting an automated machine to try to complete a sort of video crossword, turning a string of red, green and blue samples into the same number of RGB samples. A newspaper crossword usually has only one solution, but interpolated video often has more than one solution! I've never had a satisfactory explanation as to how they actually accomplish this.

 

I'm back on our production's final development and can't go into the mathematics, but basically the algorithms are matrix mathematics and other predictive algorithms. The Arriflex page at <http://www.arri.com/news/newsletter/articles/09211103/d20.htm> gives you a basic concept of what is being done with the raw Bayer data.

 

For a better understanding, there are a couple of papers that provide good mathematical insight into the process: http://graphics.cs.msu.su/en/publications/.../prog2004lk.pdf and

http://www.ima.umn.edu/preprints/oct2006/2139.pdf

 

I have used algorithms similar to some of the examples in the latter. My approach was to establish the boundaries of the domains for patterns using similar algorithms, then to process using some of my brain-theory-based algorithms, which apply finite inductive sequence determinants along vectors from the primary pixel cluster. I've used these for reconstruction of lost datum or data in other applications.

 

If the Arri site's explanation is all you need, then fine. Otherwise, the other papers will give you a more precise understanding.
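For anyone who doesn't want to wade through those papers, the core idea behind the better class of demosaic algorithms can be sketched in a few lines (a gross simplification, not the method from either paper): instead of averaging blindly, compare gradients and interpolate along the smoother direction.

```python
import numpy as np

def edge_directed_green(mosaic, y, x):
    """Estimate a missing green sample at (y, x) in an RGGB mosaic.

    Simplified edge-directed interpolation: compare the horizontal and
    vertical gradients of the neighbouring green samples and average
    along the smoother direction, so edges are not smeared across.
    Assumes (y, x) has green neighbours on all four sides.
    """
    left, right = mosaic[y, x - 1], mosaic[y, x + 1]
    up, down = mosaic[y - 1, x], mosaic[y + 1, x]
    if abs(left - right) < abs(up - down):
        return (left + right) / 2.0  # horizontal direction is smoother
    return (up + down) / 2.0         # vertical direction is smoother

# A vertical edge: left columns dark, right columns bright.
m = np.array([[10.,  10., 200., 200.],
              [10.,  10., 200., 200.],
              [10.,  10., 200., 200.],
              [10.,  10., 200., 200.]])
# At (1, 1) the vertical gradient is zero, so we average up/down and
# keep the edge sharp instead of mixing 10 and 200 across it.
print(edge_directed_green(m, 1, 1))  # 10.0
```

A naive average of all four neighbours would have produced 57.5 here, bleeding the bright side into the dark side of the edge.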

 

I hope this helps. I've got to get back to work.


Thank you very much Lance, for your documented reply! I found the Arri paper to be a very clear, although quite simple, explanation of the basic philosophy behind the algorithms used for de-Bayering. It is very clear, in my opinion and in my personal experience, that a very good de-Bayer algorithm can achieve a little more than 80% of the raw sensor's resolution. On the other hand, I have read several times in this discussion that people thought the "Mysterium" sensor was a "4K sensor", when in fact it has always been claimed to be 4520 (h) x 2540 (v)... And, unless I am completely mistaken, 80% of the resolution from a "4520x2540" sensor makes an actual total resolution that is slightly above the "4K" mark... Therefore, unless I've missed some important information, I believe we definitely can call the future Red a "true 4K camera", whatever that means for you (sorry: English is not my mother tongue)...


I hope this helps. I've got to get back to work.

Erm... WHO do you actually work for, if it's not a rude question? When I Google "Lance Flores" and "film" I get Mockingbird Films, who don't sound like any sort of "bleeding edge" software company, at least not going by their website. How does Graeme Nattress fit into all of this?

 

Incidentally, how does one actually go about writing in-camera image processing software? Do you start with some sort of non-realtime PC-based editor/emulator and then compile to some sort of dedicated high-speed processor, or do you have to write directly in assembly language to get the speed up? That sort of thing has always been a mystery to me.


Erm... WHO do you actually work for, if it's not a rude question? When I Google "Lance Flores" and "film" I get Mockingbird Films, who don't sound like any sort of "bleeding edge" software company, at least not going by their website.

 

How does Graeme Nattress fit into all of this?

 

MF is not a software company.

 

I don't know what you mean, it is a vague question. He works for the RED company, and from what I gather he is involved in the processing algorithms. I don't know him. I was just trying to clarify some of your questions.

 

 

Incidentally, how does one actually go about writing in-camera image processing software? Do you start with some sort of non-realtime PC-based editor/emulator and then compile to some sort of dedicated high-speed processor, or do you have to write directly in assembly language to get the speed up? That sort of thing has always been a mystery to me.

 

Geez .. I don't have time to go through that in detail. You start off with a lot of knowledge and experience: you first need expertise in optical physics (light theory), semiconductor technologies, digital processing, analogue electronics, and mathematics. An extensive understanding of those fields and you're set.

I haven't done any assembler or machine code since '74, when I did some instruction code for the AMD 2900 slice processor, which is more akin to the process of machine processing of Bayer algorithms .. like PALs mixed with shift/barrel registers etc., but I got away from most of that when I was able to move back into the fields of physics and mathematics. I speak using relative examples which I believe you can grasp.

Understand, these electronics are not like general processors. They are more in the structure of RISC processors, only even more optimized to work with other specialized machines like DSPs. The firmware, hard logic structure, and loadable instructions are customized instructions/data generated on a more traditional computer using specialized programs. Since the data from the CMOS sensor lines are analog in nature, and the compensation, equilibration, and all post-sensory information have to undergo extensive post-sensory processing, the solution is quite complex and beyond the scope of further explanation on this thread.


You might be able to find out more on his site......

Naw, that doesn't really tell you anything

 

I'm not sure what exactly he does for Red, but he is very well known for his FCP plugin solutions.

Unfortunately, his software is only available for the Mac platform which limits my objectivity.

I hate Macs so much that I'm convinced my negative thought processes ALONE are responsible for the unnaturally high crash rate I always experience with them. People who love Macs always insist they never crash when THEY use them, so I don't know what other explanation there is :P

At least the PC format doesn't engage in that sort of emotional blackmail; they seem to crash just as consistently no matter what you think of them :D


The Mac I'm working on now has been up and running for weeks. I code on it, I watch movies, email, web, etc. etc. The only reason it's weeks, not months, is that it wasn't being used over Xmas so it got switched off....

 

You were asking what I do at RED? Primarily image processing / codec development, but I'm involved in many, many aspects of the camera.



Hi,

 

> Incidentally, how does one actually go about writing in-camera image processing software?

 

At its very lowest level, like this:

 

U55: EN port map(A => mul_80_PROD_not_0, B => n174, K => n122);

 

...but usually in a higher level language!

 

I would presume it's on an FPGA, a field-programmable gate array, which is a semiconductor device consisting of many thousands of logic gates which can be programmatically interconnected to create arbitrary logic devices. It's possible to implement entire CPUs on FPGAs (although it would generally be faster and more efficient to use a discrete CPU device in that instance). The programming languages used for this sort of work are VHDL and Verilog, and you usually write the code in some sort of development software on a PC and then flash it onto the FPGA. FPGAs with enough gates, clocked fast enough to do realtime 3D colour transforms and other work on HD video, are not particularly difficult to come by. Writing FPGA software is a weird experience for someone used to general-purpose programming on PCs, because you can write blocks of code which all get executed at once, from the same clock source, and then define the way they are interconnected. Various parts of the device can be in various clock domains. Many FPGAs have other hardware built into them, such as Xilinx's RocketIO series, which have fast serial transceivers capable of synthesising and decoding HD-SDI.

 

Ordinarily, someone like Sony or JVC would develop their image processing devices on an FPGA, then send the formal description for it to a semiconductor foundry and have ASICs - application-specific integrated circuits - made. These aren't reprogrammable like an FPGA, but they're much cheaper to reproduce. However, the costs of doing this are such that an outfit like Red, who aren't expecting to ship as many cameras as, say, Canon, would probably just build all their cameras using FPGAs. There are downsides to doing this - it's generally possible to make things smaller and less power-hungry using ASICs, particularly floating-point mathematical operations such as those involved in the sort of image compression Red claim to be doing. It's also likely that there is some DSP hardware involved, very possibly something like Analog Devices' Blackfin, which could be loosely described as a programmable logic device with mathematical capabilities bent towards domain analysis and things like Fourier transforms. Often, a DSP will be controlled by an FPGA. This probably explains the relative bulkiness of the Red camera. Prominent users of FPGAs include things like the Da Vinci colour corrector and many HD-SDI input cards.

 

Phil


Whoah! I'm overwhelmed; So much information, and so little hand-waving and "arguments from ignorance" :D

 

What I was really getting at was: do they (ie RED etc) start the design process by downloading samples of RAW CMOS sensor data onto the hard drive of a PC, work out their processing algorithms in a non-real-time programming environment, and once they'd gotten THAT to (slowly) grind out the sort of images they want, duplicate the data processing algorithms in some sort of multi-pipeline custom processor, so they can work in real time?

 

I thought that might be the explanation why they've only shown such limited amounts of RED footage so far. Otherwise, you'd think that once they'd started shooting footage, they could produce any amount.

 

I can understand that the Negative Entropy Decoding paradigm must be a tough nut to crack!

 

You were asking what I do at RED? Primarily image processing / codec development, but I'm involved in many, many aspects of the camera.

OK, it was just that Lance Flores sounded like he was more involved in the project than you are. He seems to know a lot about it, at any rate.



Hi,

 

Yes, I would expect that they would confirm their dematrixing, uh, matrix on a computer then flash it onto the camera - if they've even got to the stage of flashing it onto a camera, which I suspect they may not have done.

 

Either way it's not going to be as clever as they're desperately trying to make it sound. Photoshop does it every time you open a camera RAW file. Dematrixing cannot ever be much more than a carefully weighted average; the choices come when you decide how safe it is to assume that the two red pixels you have are separated by another one of similar value, and how likely it is that the value of the green pixel that's actually in between them has anything to do with the value of the red pixel you haven't got. It's just a matrix operation. Obviously they want you to believe that it is very clever and new and that they are magicians doing unprecedented work, but the safe assumption is that they're not. They want you to believe this because they wish to divert attention from the fact that they're really rendering about a 2K image (less, with antialias filtering), blowing it up and calling it 4K. Then blowing it down and calling it a 27Mbyte/second bitstream. Sigh. Make a decision, guys!
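That "carefully weighted average" can be written down in a few lines. Here is a minimal sketch, assuming a plain bilinear demosaic of the green channel only (real cameras use smarter, edge-directed variants): each missing green value is just the mean of its sampled neighbours, so nothing beyond the recorded samples is ever created.

```python
import numpy as np

def bilinear_green(mosaic):
    """Reconstruct the green plane from an RGGB mosaic by plain averaging.

    Green is sampled at (even,odd) and (odd,even) sites; at the other
    sites we take the mean of the 4-connected neighbours (edge pixels
    use whichever neighbours exist). Purely a weighted average -- no
    new information is created.
    """
    h, w = mosaic.shape
    green = np.zeros((h, w), dtype=float)
    sampled = np.zeros((h, w), dtype=bool)
    sampled[0::2, 1::2] = True
    sampled[1::2, 0::2] = True
    green[sampled] = mosaic[sampled]
    for y in range(h):
        for x in range(w):
            if not sampled[y, x]:
                neigh = [green[j, i]
                         for j, i in ((y - 1, x), (y + 1, x),
                                      (y, x - 1), (y, x + 1))
                         if 0 <= j < h and 0 <= i < w]
                green[y, x] = sum(neigh) / len(neigh)
    return green

# A flat grey scene survives perfectly; detail finer than the green
# sample spacing would not.
flat = np.full((4, 4), 128.0)
print(np.allclose(bilinear_green(flat), 128.0))  # True
```

Note that because green sites form a checkerboard, every missing site's 4-connected neighbours are sampled sites, so the averaging never chains estimates off other estimates.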

 

They could happily shoot as much material as anyone wanted to see. Postprocessing it is not a big deal, or if it is they have a competence problem. They're just choosing not to, and then objecting to people making the safe assumption that they have something to hide!

 

Phil


"from the fact that they're really rendering about a 2K image (less, with antialias filtering), blowing it up and calling it 4K."

 

Last year you claimed it was a 1k image Phil. At least make your mind up about which anti-Bayer CFA FUD you want to go on about.

 

It amazes me that in one post you can describe, in language anyone can understand, all about FPGAs and ASICs and their programming, and in the next go so extremely off the planet about Bayer CFA stuff.


"from the fact that they're really rendering about a 2K image (less, with antialias filtering), blowing it up and calling it 4K."

 

Last year you claimed it was a 1k image Phil. At least make your mind up about which anti-Bayer CFA FUD you want to go on about.

 

It amazes me that in one post you can describe, in language anyone can understand, all about FPGAs and ASICs and their programming, and in the next go so extremely off the planet about Bayer CFA stuff.

 

Graeme,

 

Hopefully you will shortly be able to demonstrate that the resolution is 4 times that of the Viper.

 

Stephen


Whoah! I'm overwhelmed; So much information, and so little hand-waving and "arguments from ignorance" :D

<snip>

I can understand that the Negative Entropy Decoding paradigm must be a tough nut to crack!

OK, it was just that Lance Flores sounded like he was more involved in the project than you are. He seems to know a lot about it, at any rate.

 

I am not involved in the RED project. I have just responded to what I perceived as genuine interest from individuals with rational questions, lending my experience from my background as a physicist, mathematician, and sometime engineer to the application questions within the disciplines in which I am experienced. I am intimately familiar with the issues because I have been evaluating camera technologies for use in the pictures I am working on. That's all. I just happen to have designed an 8K camera and the related electronics and math about two years ago, and I understand what the folks at RED, ARRIFLEX, DALSA, and others are doing. My interests are in the resulting performance for our application. That's it.



Hi,

 

Graeme, you're doing it again. You're running around in circles screaming "he's wrong, he's wrong," but without offering any counterpoint. Tell me why I am wrong. Explain yourself - you are going to be expected to do so if you're going to run around disagreeing with people, otherwise they're justified in assuming that your motives are commercial.

 

OK. I'm simplifying the demosaicing a lot, but this is a cinematography forum, not a calculus forum, and references to the IEEE publication of Kimmel are unlikely to go down well - but if you want to have that discussion, fine, let's do it, and let's not write off a useful technical discussion as "FUD". Yes, you can do clever interpolation; you can do feature and edge detection (actually, I suspect in a seventeen grand camera you can't, but you theoretically could) but what you are still doing is interpolating, fudging, and here's the kicker - it's nothing you could not, with appropriate modification, do to 2K data shot on any camera, and call that 4K as well.

 

If you still think I'm wrong, then hallelujah, because with Red's patented image processing algorithm, the Viper's a 4K camera as well! Great!

 

Phil



I think people get a little too obsessed over the numbers thing, even though it's understandable because it is a simple way to describe resolution. I do it all the time myself.

 

Truth is that 35mm movies shown in the theaters seem to have all sorts of resolution levels due to the combination of the film stock, exposure, lenses, filtration, lighting contrast, post path, release print path, theater projection conditions, etc.

 

All that matters ultimately to me is whether these newer digital cameras can create images that compete on the level of 35mm for resolution, exposure latitude, and color information. And that doesn't mean that I think that's the only image attributes that matter.

 

The only way of really knowing how good these new cameras are is to shoot comparative tests of both charts and real world conditions -- comparisons to 35mm.

 

I think when RED and Dalsa get touchy about the whole "is a 4K Bayer-filtered sensor really 4K resolution for RGB after de-Bayering?" they fall into the trap of playing the numbers game, probably because they are selling the product as a 4K camera, not as in "it has a 4K Bayer-filtered sensor", but implicitly that it creates images that are equivalent in resolution to a 4K RGB scan of 35mm color negative. After all, they are selling the camera to people who think 35mm is the gold standard for motion picture imaging.

 

Truth is that in some ways, for all we know, the 4K RED image may look more detailed than a 4K RGB scan of 35mm (probably not, but I'll wait and see...). But really it starts to be an apples and oranges comparison, a film scan versus a digital sensor capture.

 

I'm not sure the numbers alone are really going to tell you what impression the final image will make on the viewer when shown on the big screen, which is ultimately the only thing that matters.

 

Obviously Phil when you say you could do the same uprezzing tricks to a 2K image, you don't mean you can do the same things to an image from a 2K Bayer-filtered sensor and make it the same resolution as one from a 4K Bayer-filtered sensor -- I assume you mean compared to a 2K RGB file either derived from a 2K film scan or a camera with separate 2K RGB sensors.

 

It would be an interesting comparison test, let's say, uprezzing a RAW 1.9K capture from a Viper to 4K RGB versus the conversion of a 4K Bayer-filtered Red image to 4K RGB. But what if the results actually showed that the de-Bayered RED image appeared to have more resolution than the uprezzed Viper image?



Hi,

 

> Obviously Phil when you say you could do the same uprezzing tricks to a 2K image, you don't mean you can

> do the same things to an image from a 2K Bayer-filtered sensor and make it the same resolution as one from

> a 4K Bayer-filtered sensor -- I assume you mean compared to a 2K RGB file either derived from a 2K film

> scan or a camera with separate 2K RGB sensors.

 

Yes.

 

> It would be an interesting comparison test, let's say, uprezzing a RAW 1.9K capture from a Viper to 4K RGB

> versus the conversion of a 4K Bayer-filtered Red image to 4K RGB. But what if the results actually showed

> that the de-Bayered RED image appeared to have more resolution than the uprezzed Viper image?

 

I assume it would, by a few tens of percent. Yes, it would be very interesting to see Viper upscaled using modern techniques and put against Red. Can't see it happening, somehow.

 

Phil


David, thanks for trying to be a voice of reason, but Phil came up to me at IBC and told me that the RED was really a 1k camera. On other forums we've had very reasonable discussions about Bayer CFA sensors, pros, cons, benefits, drawbacks, even resolution, but it's not easy to have such informed discussions here. Perhaps the only worse place to discuss such things is the Sigma forum at DPReview :-)

 

As someone who's done some serious R&D into upscaling algorithms, I've got to say that such a comparison of an HD-resolution 1920x1080 RGB image from an RGB sensor or 3-chip/prism system to a 4k Bayer pattern image would be most interesting. I'd be pretty confident in saying that the equivalent resolution of an optimal RGB image would have at least 3k as the horizontal dimension, and that as for which looked better, the native 4k or the 3k upscaled would be very dependent on the scaling algorithm chosen; and even then, as scaling always introduces artifacts, the 4k could still very well be preferable.
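The "scaling always introduces artifacts" point has a trivial illustration (a sketch only, unrelated to any shipping scaler): linearly upscaling a hard edge from 1920 samples to 4096 necessarily manufactures in-between values the original never contained, softening the transition.

```python
import numpy as np

# A hard black-to-white edge across a 1920-sample scanline.
src = np.zeros(1920)
src[960:] = 255.0

# Linear upscale to 4096 samples (HD-width RGB stretched toward "4K").
x_src = np.linspace(0.0, 1.0, src.size)
x_dst = np.linspace(0.0, 1.0, 4096)
up = np.interp(x_dst, x_src, src)

# The source edge is a single-sample jump; the upscaled edge contains
# manufactured intermediate grey values -- a softer transition.
print(np.any((up > 0) & (up < 255)))   # True: interpolated greys exist
print(np.any((src > 0) & (src < 255)))  # False: the original had none
```

A fancier kernel (Lanczos, spline) changes the character of the manufactured values (ringing instead of softness) but not the fact of them.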

 

Graeme


Well, getting back to my original post, WHEN and WHERE can I expect to walk past an actual operating RED camera and see my ugly mug live, projected in glorious 4K (whatever that may mean) or at the very least, on a 50 inch 1920 x 1080 LCD monitor?

 

All talk of upscaling algorithms and other hocus-pocus to the side, the RED should at least be able to produce respectable 4:2:2 1920 x 1080 images with only simple processing. Just that alone for $17K would be quite an achievement. I have reservations about its suitability for green-screen work, but no doubt those can be laid to rest with a simple demo of that too!



Hi,

 

> The RED should at least be able to produce respectable 4:2:2 1920 x 1080 images

 

Yes, it should be able to do that. If you really want every pixel to be real RGB data with only the tiniest proportion of erroneous data generated by the scaler, you'll want to knock it down by half again - that's what I meant by 1K, as Nattress well knows. Scaled Bayer images have huge problems with sharp colour edges.

 

Obviously, they want to call it a 4K camera and sell it as a 4K camera, but it's a fairly unavoidable technical fact that the images will contain significantly less information than a true 4K RGB device. You can't really get away from that; it's just basic mathematics. No matter how much anyone denies this, it will remain true.

 

I'm still happy to engage in a technical discussion of the scaling algorithm and why you feel you've been able to do something nobody else ever has. Why are you avoiding this discussion? You are giving the impression that you are, well, full of it.

 

Phil


"Scaled Bayer images have huge problems with sharp colour edges." Since when? Pro photographers have used Bayer-pattern DSLRs for years now without these so-called issues stopping them producing superb images. I'm sure a lot of people here use high-end DSLRs as part of their production work, again without issue.

 

Phil, you're still saying that 1k RGB = 4k Bayer, which is utter nonsense. Surely you've got a digital camera with a Bayer-filter sensor, so you can post up some example issues?


This topic is now closed to further replies.
