
RED production schedule


Carl Brighton


I seem to remember reading that Jim Jannard was quoting "late 2006" for the first production prototypes of the RED camera, and delivery to customers in the second quarter of 2007.

 

However, December 2006 has come and gone and I haven't heard anything further. Interestingly, the RED website mentions a last-chance reservation window from Jan 21 to Jan 24, but not much else.

 

The photo gallery is now just a series of "renders" of the RED One, whatever that means, and their "News" section contains several items, all with the exact same date: 2006-08-10.

 

Does Jannard still post anywhere? I haven't seen anything from him for ages.


Apparently there's a new forum, one that is sponsored by RED itself. But to cut to the chase, you can get some details from Mike Curtis' hdforindies.com blog. He's very much drunk the RED Kool-Aid, but it can be a decent clearinghouse. RED has a functioning prototype that is actually in the production-style camera body. There are some pictures of it sitting next to a Sony F900R and a Panasonic HVX200.

 

I understand there are some plans to sneak-debut the camera soon, but you won't catch me telling you the where and when.

 

They are apparently on or close to schedule, although a recent change in their specs (which they did always say were fully subject to change) makes it functionally much more of a very nice 2K camera in the vein of the Silicon Imaging SI-2K than a 4K camera. And frankly, that's what most people would use it for in general production anyway. It does do 4K, but capture options are limited. Still, absolutely stunning technology and a real sign of where the industry is eventually headed.


I've gone over to RedUser.net, and I was intrigued by

this post by Stephen Williams

 

He says that "cinematography.com" have banned further discussion of the RED until a working camera is available for testing. I don't recall seeing that here anywhere. They didn't stop me starting this discussion!

 

www.reduser.net is the site if you're interested.

 

I don't know what you mean about making it more of a 2K camera. I'd say current changes are exactly the opposite of that.

 

Graeme

I have to confess this is one bit I don't understand. If you have a 4K chip Bayer-masked, it means the green is only sampled 2,000 times and the red and blue 1,000 times each. You might be able to synthesize 4000 horizontal pixels that are all different from each other, but how representative are they of the actual 4000 pixels of the original light image?

 

Also some of your images are captioned "Shot without a low pass filter." What exactly does that mean?


I've gone over to RedUser.net, and I was intrigued by

this post by Stephen Williams

 

He says that "cinematography.com" have banned further discussion of the RED until a working camera is available for testing. I don't recall seeing that here anywhere. They didn't stop me starting this discussion!

I have to confess this is one bit I don't understand. If you have a 4K chip Bayer-masked, it means the green is only sampled 2,000 times and the red and blue 1,000 times each. You might be able to synthesize 4000 horizontal pixels that are all different from each other, but how representative are they of the actual 4000 pixels of the original light image?

 

Also some of your images are captioned "Shot without a low pass filter." What exactly does that mean?

 

Hi,

 

Check out the last post by Tim Tyler.

http://www.cinematography.com/forum2004/in...18361&st=45

 

Stephen


No, if you look at the above-mentioned Tim Tyler post, you're not allowed to "tout" a camera that either doesn't yet exist or isn't available for independent evaluation. I think it's OK for one to talk about the RED in general terms, but not as though you've actually used one, which is what a lot of fanboys were starting to sound like.

 

Perhaps you'd better come over to reduser.net though if we're not allowed to talk about RED here?

 

Graeme

No thanks, I'll just sit on the sidelines. I don't really care one way or the other about the RED, and I get the distinct impression anybody who asks any sensible questions there is going to be shown the door rather quickly, a la some of the other fanboy-oriented forums.

Edited by Carl Brighton

Answer is: very representative. If you stick a high-quality algorithm on the reconstruction, you get really, really nice images.

 

Graeme

Errr, OK, so if I take a small block of pixels from a digital photo, blow them up in Photoshop so you can see the actual pixels, use the Photoshop eyedropper thingie to analyse the red, green and blue components of each pixel, and then make a new block of pixels using just the red, green and blue components in a standard Bayer pattern, you'll be able to tell me what the red and blue components were on the green pixels, the green and blue components were on the red pixels, and the green and red components were on the blue pixels?

 

I'm not saying you can't do it, but I'd love to know HOW you do it! Can this technique be applied to consumer camcorders?

Edited by Carl Brighton

Carl, http://scien.stanford.edu/class/psych221/p...ngchen/main.htm will probably give you a good idea of how it all works.

 

Traditionally, consumer cameras have used over-simple algorithms. We can use much better algorithms because we have the horsepower in camera and in computer. There's no reason why consumer cameras today couldn't do the same.
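
To give a rough idea of what even the "over-simple" end of the scale looks like, here's a naive neighbour-averaging de-Bayer sketched in Python. It assumes an RGGB mosaic and is purely illustrative -- the textbook baseline, emphatically not RED's actual reconstruction:

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(mosaic):
    """Naive de-Bayer of an RGGB mosaic (H x W array) into an H x W x 3 RGB image.

    Missing colour samples are filled with the average of the sampled
    neighbours in a 3x3 window -- the "over-simple" baseline, nothing more.
    """
    h, w = mosaic.shape
    rows, cols = np.mgrid[0:h, 0:w]

    # Photosite masks for an RGGB layout:  R G
    #                                      G B
    r_mask = (rows % 2 == 0) & (cols % 2 == 0)
    b_mask = (rows % 2 == 1) & (cols % 2 == 1)
    g_mask = ~(r_mask | b_mask)

    kernel = np.ones((3, 3))
    rgb = np.zeros((h, w, 3))
    for ch, mask in enumerate((r_mask, g_mask, b_mask)):
        sampled = np.where(mask, mosaic, 0.0)
        known_sum = convolve(sampled, kernel, mode='mirror')
        known_cnt = convolve(mask.astype(float), kernel, mode='mirror')
        interpolated = known_sum / np.maximum(known_cnt, 1.0)
        # Keep the real samples where they exist, interpolate everywhere else.
        rgb[..., ch] = np.where(mask, mosaic, interpolated)
    return rgb
```

A better de-Bayer looks at edges and at correlations between the colour channels before deciding how to fill in each missing sample, which is where both the quality and the computational cost come from.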


Carl, http://scien.stanford.edu/class/psych221/p...ngchen/main.htm will probably give you a good idea of how it all works.

 

Traditionally, consumer cameras have used over-simple algorithms. We can use much better algorithms because we have the horsepower in camera and in computer. There's no reason why consumer cameras today couldn't do the same.

 

Well, OK, so who is actually developing this algorithm? I didn't find that link especially informative; all he seemed to be telling us was what needs to be achieved, rather than how to achieve it. At the end he says:

 

"As far as the future work is concerned, an algorithm that is both superior in image reproduction and computationaly efficient is still worth pursuing given the fact that none of the existing algorithm satisfies both criteria."

 

>> There's no reason why consumer cameras today couldn't do the same.

Then why don't they?



You sound like you're trying to pick a fight, Carl. What's the point of being so combative?

 

This is an extremely old argument we've already had here several times on de-Bayering algorithms and "true" resolution. You're not going to get any satisfactory answers.

 

The simple answer is that a simple algorithm just takes all the red, green, and blue photosites and derives RGB from them, whereas a complex algorithm makes educated guesses about the colors at each photosite based on its neighbours (partially due to the fact that each photosite doesn't perfectly filter out the other two colors) and then reconstructs that information.

 

As for exact details of that algorithm, to some degree, I'm sure that might come under the category of a corporate secret.

 

Ultimately all that matters is the end result, not the numbers, so when the camera comes out, test it for yourself and decide if you like the level of resolution.

 

Until then, my own private belief is that a 4K Bayer-filtered camera would roughly be a "3K" camera -- in other words, I don't believe that half the resolution is lost but I don't believe it is lossless either. But like I said, all that matters are end results, not math.
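
Just to put rough numbers on that hunch -- the photosite counts are straight Bayer bookkeeping, but the "recovery" fractions below are pure assumption on my part, not measurements:

```python
# Photosite bookkeeping for an assumed 4096 x 2304 Bayer sensor.
width, height = 4096, 2304
total = width * height
green, red, blue = total // 2, total // 4, total // 4
print(f"{green:,} green, {red:,} red, {blue:,} blue photosites out of {total:,}")

# If de-Bayering recovers some fraction of the mosaic as "true" detail,
# one way to express it is an equivalent horizontal resolution of
# roughly sqrt(fraction) * width.
for fraction in (0.5, 0.7, 0.8):
    print(f"{fraction:.0%} recovered -> ~{(fraction ** 0.5) * width:.0f} horizontal pixels")
```

Anything in that range lands between the high-2K and mid-3K neighbourhood, which is roughly where my "3K" guess comes from.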

 

As for why digital still cameras don't put more powerful processors into them, I think you can make an educated guess as to why.


From what I'm reading here, I'm getting the impression that this fancy footwork with filtering does not have to be limited to the RED, but may be a more general algorithm for improving resolution with any image gathering device.

 

I can see it now: running my Super 8 footage through the RED algorithm once, I get 16mm; run it through again and get 35mm; a third time, IMAX! Now we're talking serious resolution. Isn't software amazing?


Well, OK, so who is actually developing this algorithm?

 

Carl, I think that Graeme himself is in charge of developing this algorithm.

 

I have a feeling that Tim banned Red One "touting" for exactly this reason - when you have a camera that can't yet be proven, especially one that's attempting to take a novel route like the Red One, there's no way to prevent a technical discussion from turning into a lot of speculation, skepticism, and occasionally, hostility.

 

Personally, I'm glad that Graeme and the Red team have been so forthcoming about development of the Red One, and I personally enjoy reading his and other Red staff's discussions at reduser.net. I have my healthy skepticisms, as I think most people probably do to some degree - but I think the point is, there's really no need to discuss that aspect of things, as you can have the "I don't think it's possible" vs. "I do think it's possible" debate all day, with nothing resolved. Until we actually find out if it is possible (by the production release and industry testing of the Red One camera), we might as well discuss another topic.

 

I don't really care one way or the other about the RED, and I get the distinct impression anybody who asks any sensible questions there is going to be shown the door rather quickly, a la some of the other fanboy-oriented forums.

 

It sounds like you've witnessed a few people "shown the door" at reduser.net, probably not because they asked a sensible question, but because they found it necessary to argue the feasibility of Red's goals. I think the implied point of the forum is that by participating, you're confident in the project and are interested in seeing it succeed. If you're not, why would you waste your time? Let the camera come out and make up your mind about it then, and continue to use and discuss the camera gear that's available today and proven to be effective now.


Robert's comment just reminded me of something I wanted to ask about. Has anyone discussed / heard of something called "Alchemist"? David Lynch mentioned using "Alchemist" on Inland Empire to "upres" the image from his PD-150 to something akin to Hi-Def resolution. I think he mentioned something about how it fills in the missing information when "upresing."

Edited by George Gordon

Robert's comment just reminded me of something I wanted to ask about. Has anyone discussed / heard of something called "Alchemist"? David Lynch mentioned using "Alchemist" on Inland Empire to "upres" the image from his PD-150 to something akin to Hi-Def resolution. I think he mentioned something about how it fills in the missing information when "upresing."

 

Hi,

 

Saw it at IBC 2 years ago, made by Snell & Wilcox. It's quite impressive.

 

Stephen


From what I'm reading here, I'm getting the impression that this fancy footwork with filtering does not have to be limited to the RED, but may be a more general algorithm for improving resolution with any image gathering device.

 

I can see it now: running my Super 8 footage through the RED algorithm once, I get 16mm; run it through again and get 35mm; a third time, IMAX! Now we're talking serious resolution. Isn't software amazing?

 

Ummm, yes and no... and it depends if you swallowed the red pill or the blue pill, Neo. The more data, the more accurately you can resolve images. More importantly, the more accurate the data, the more accurately you (or rather a good algorithm) are able to predict; moreover, you can extrapolate/interpolate information and predict that which extends beyond the data you have captured. You apply heuristics and other deterministic algorithms to resolve density or color and end up with (hopefully) a more accurate image than you originally acquired. It is a naturally occurring phenomenon in nature and in synthesized visual recognition.

 

I read in another thread someone referring to film as storing "real" optical images and digital cameras creating synthetic or artificial images, or something to that effect. David noted that by that person's own definitions, what we were seeing was artificial. This was an astute observation and corollary, and a point worth reiterating here.

 

The point is that none of what film, present solid-state sensors, and the human visual sensory system acquire is precisely "real." Because in the real world, Neo, the energy which propagates as reflections from objects or as original emission is discrete. Human vision, digital cameras, and film essentially sense three colors (generally speaking; there are methods of sampling more) and interpolate this information, or at least portions of it, into images we deem a reasonable facsimile of the "real" image. This is done in real time, starting at the optic chiasma, where the bio analog/digital data is correlated (by algorithms). The data is then pre-processed through several more algorithmic steps at different locations and arrives at the visual cortex, where the final processing and portions of the recognition process take place. This is a rather simplified explanation, but the essential point is that we don't see the discrete real world anyway. What we see through our eyes, what cameras see, whether a silver-salt emulsion or present solid state, all depend on some algorithmic interpretation of the real world.

 

So, Super 8 to IMAX algorithms and processing. It's possible, but it would not end up looking like 70mm or 120mm film frames; it could be better or worse depending on your perspective of what the outcome should be. It's all an illusion, but that's what film making, for the most part, is: illusion. Like purple and lavender: no such colour. They're just colors we made up in our evolution because they served as a benefit for the organism.

 

And the probability that such a process would be created for Super 8... about the same as the Cubs matching the World Series record of the Yankees in our lifetime.

 

Which all begs the question, "what is real" Neo?



Trying to use post-processing to "fill in the blanks" of resolution, color, exposure information, etc. is somewhat hit-and-miss -- some things are easier to do successfully than others, but they are all workaround solutions. Plus they can create artifacts if the image is pushed in certain directions.

 

There is no real substitute for capturing more information to begin with, but in the real world, we make do with what we can afford. Even 35mm seems like a compromise if you look at 65mm photography...

 

Really high-resolution sensors (let's say 24MP) would help for Bayer-filter cameras because you'd be starting with more information, but that's still not practical for cine cameras. Plus you may have a problem with sensitivity as you make smaller and smaller photosites to fit more of them into a 35mm-sized area.


Trying to use post-processing to "fill in the blanks" of resolution, color, exposure information, etc. is somewhat hit-and-miss -- some things are easier to do successfully than others, but they are all workaround solutions. Plus they can create artifacts if the image is pushed in certain directions.

 

There is no real substitute for capturing more information to begin with, but in the real world, we make do with what we can afford. Even 35mm seems like a compromise if you look at 65mm photography...

 

Really high-resolution sensors (let's say 24MP) would help for Bayer-filter cameras because you'd be starting with more information, but that's still not practical for cine cameras. Plus you may have a problem with sensitivity as you make smaller and smaller photosites to fit more of them into a 35mm-sized area.

 

Exactly, David. The more economically feasible data you can get, the better, resolution being only one attribute. Cell size well noted. The problem with just scaling more cells into a 35mm format is that the inherent noise stays about the same, reducing the S/N ratio and costing a couple of bits. Some of this can be corrected using a 5T sense-amp circuit with more feedback, and other mechanisms like sense-line equilibration techniques applied to the data line of an optical sense/storage cell. The cost would be to sensitivity, but there are ways to compensate. You're right: even with such enhancements and the improvements they bring, you can't reduce the cell size to gain resolution with appreciable results. You have to go with a larger cell, and thus you must move to a larger format. There are other problems (challenges) you create doing this, but there are many benefits as well, like a substantially (I anticipate, but I'll know in a few hours) higher S/N ratio, which will substantially improve the dynamic range.
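
For anyone who wants to see where the "couple of bits" comes from, here is a crude back-of-envelope in Python. It assumes a purely shot-noise-limited photosite whose full-well capacity scales with area, and the reference pitch and electron counts are made-up round numbers, so treat the output as illustrative only:

```python
import math

def peak_snr_db(full_well_electrons):
    # Shot-noise-limited peak SNR: signal / sqrt(signal) = sqrt(signal).
    return 20 * math.log10(math.sqrt(full_well_electrons))

reference_pitch_um = 8.0      # hypothetical large photosite
reference_full_well = 40_000  # hypothetical full-well capacity at that pitch

for pitch_um in (8.0, 4.0, 2.0):
    # Assume full-well capacity scales with photosite area.
    full_well = reference_full_well * (pitch_um / reference_pitch_um) ** 2
    snr_db = peak_snr_db(full_well)
    print(f"{pitch_um:.0f} um pitch: ~{full_well:,.0f} e- full well, "
          f"peak SNR ~{snr_db:.0f} dB (~{snr_db / 6.02:.1f} bits)")
```

Quartering the cell area costs about one bit of shot-noise headroom before read noise and the circuit tricks above even enter the picture, which is the basic reason shrinking photosites hurts.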


There is no real substitute for capturing more information to begin with, but in the real world, we make do with what we can afford. Even 35mm seems like a compromise if you look at 65mm photography...

 

And a compromise it is, looking at jaw-dropping 65mm. And in the real world we are forced to live within our means, and the means of the executive producer.


You sound like you're trying to pick a fight, Carl. What's the point of being so combative?...

"Combative?" I'm simply asking the question, immediately I'm trying to pick a fight. Why I don't apost on any of the fanboy sites.

 

>>...As for why digital still cameras don't put more powerful processors into them, I think you can make an educated guess as to why.

 

No, I've no idea why. Well, apart from the possibility that there aren't any. Or are we meandering into the land of the 100 mpg carburettor that was suppressed by the fuel companies? :-) In any case, many digital cameras have RAW outputs where the "special sauce" of whatever flavor you choose can be added later.



Carl, I've had my run-ins with the RED team and the RED fans, and all I can do is warn you that they are extremely sensitive to ANY skepticism or to being confronted to "explain" themselves. So proceed however you want to... but don't expect to get more information out of them than CML, Cinematography.Com and I don't know who else combined have managed to get out of them in the past. They don't really have to tell you anything, so unless you are prepared to use a lot of honey, you aren't going to get anywhere with this line of inquiry.

 

Labelling the video sites where the RED team is most welcomed and praised as "fanboy" sites, whether or not there is some validity in that, will only send up warning flags to them that you are skeptical.

 

I've taken a "wait and see" attitude now. Ultimately the camera will be out there for all of us to try out and judge for ourselves. Dave Stump, ASC has been testing the RED prototype for them now and then and he can be trusted to put it through the works without an agenda. And once the final camera comes out, everyone else will be taking it apart so plenty of info will come out at that point.

 

Before it hits the market, there is zero incentive for the RED team to say anything that isn't 100% positive about their product -- and that's hardly surprising.


Carl, I've had my run-ins with the RED team and the RED fans, and all I can do is warn you that they are extremely sensitive to ANY skepticism or to being confronted to "explain" themselves. So proceed however you want to... but don't expect to get more information out of them than CML, Cinematography.Com and I don't know who else combined have managed to get out of them in the past. They don't really have to tell you anything, so unless you are prepared to use a lot of honey, you aren't going to get anywhere with this line of inquiry.

I know what you mean, however I've noticed that occasionally they let real information slip out, despite their best efforts :D Occasionally I get asked questions about supposed "upcoming" technologies and I like to be able to answer them as intelligently as I can.


"Combative?" I'm simply asking the question, immediately I'm trying to pick a fight. Why I don't apost on any of the fanboy sites.

 

>>...As for why digital still cameras don't put more powerful processors into them, I think you can make an educated guess as to why.

 

No, I've no idea why. Well, apart from the possibility that there aren't any. Or are we meandering into the land of the 100 mpg carburettor that was suppressed by the fuel companies? :-) In any case, many digital cameras have RAW outputs where the "special sauce" of whatever flavor you choose can be added later.

 

Carl, don't get upset. There is so much unnecessary bickering and chauvinism over image acquisition, instead of professional constructive exchange, that it is not funny. I'm sure David is just trying to keep issues from boiling over, as they often do. You guys are lucky to have a couple of guys like David who are willing to help so many with so much. For the life of me I don't understand why he's not always on a feature. I don't know him personally, but I've asked a number of directors and cinematographers I know about him. All have said he was easy to get along with, highly competent and very creative. He seems to be the least provocative of all the people with the most to contribute.

 

Anyway, David probably assumed you understood more about the technology than you did. This technology is an old, well-known process being used in a rather new marketplace, which is evolving the process in a different direction. So you gotta think of it as an emerging technology. The problem with just sticking more processing power inside the camera is cost, power consumption, heat, etc. General-purpose processors don't do efficient work on high data rates and can't keep up in a serial (or multi-parallel serial channel) real-time environment. What you need is a processor specifically optimized for this type of data, configured to work with, or integrated on the same die with, a digital/analog signal processor. The R&D isn't cheap and the process won't be cheap, unless there is a commodity push that trickles up the funds to pay for high-performance, high-end cameras. The technology will improve in CMOS, with more preprocessing moving on board the die.
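
To put a number on the "high data rates" part, some quick arithmetic (the sensor dimensions, bit depth and frame rate here are round-number assumptions for illustration, not anybody's published spec):

```python
# Raw Bayer data rate: one 12-bit sample per photosite, per frame.
width, height = 4096, 2304
bits_per_sample = 12
frames_per_second = 24

bits_per_second = width * height * bits_per_sample * frames_per_second
print(f"~{bits_per_second / 1e9:.1f} Gbit/s off the sensor "
      f"(~{bits_per_second / 8 / 1e6:.0f} MB/s), before de-Bayering or compression")
```

That works out to roughly 2.7 Gbit/s, a sustained stream that a general-purpose CPU of the day simply can't chew through in real time, which is why the work ends up in dedicated silicon next to the sensor.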

 

At the moment it is cost, waste-heat and energy prohibitive. That may not be true in the future.
