
Is it 4K?


Dimitrios Koukas


From the specs, I have the feeling that the best it can do is HD-SDI 4:4:4 RGB at 2K.

Please correct me if I am wrong here...

Any criticism is more than welcome.

Dimitrios Koukas

 

You are wrong here: RED is 4K, and at its highest resolution of 2540p it records 60 progressive frames at 12 bit.

However, there are three different recording modes, and one of them indeed has a max of 2K.

 

Method 1

Sensor shooting 4K or 2540p at up to 60p, 2K at up to 120p.

For recording this uncompressed, the RAW output is used.

You will need a powerful disk array for 2540p/4K @ 60p @ 12 bit (see the rough numbers below).
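To put numbers on "powerful array", here is a back-of-the-envelope sketch in Python, using the 4900 x 2580 element count quoted later in this thread (thread figures, not official RED specifications):

```python
# Back-of-the-envelope raw data rate: 4900 x 2580 photosites,
# 12 bits per sample, 60 frames per second. These are the figures
# quoted in this thread, not official RED specifications.

photosites = 4900 * 2580        # 12,642,000 elements
bits_per_sample = 12
fps = 60

bits_per_second = photosites * bits_per_sample * fps
print(f"{bits_per_second / 1e9:.1f} Gbit/s")    # ~9.1 Gbit/s
print(f"{bits_per_second / 8 / 1e9:.2f} GB/s")  # ~1.14 GB/s sustained
```

That ~9 Gbit/s matches the RAW-port figure Andrew quotes further down, and sustaining ~1.1 GB/s of writes in 2007 indeed takes a serious disk array.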

 

 

Method 2

4K @ 30p, 2K @ 60p

Record to compact digital media, which includes CompactFlash, SATA on card, REDRAM, an attachable disk RAID, or ExpressCard. This is REDCODE, which is compressed RAW.

You can buy most of the standardized media directly.

 

Method 3

1080 @ 30p, 720 @ 60p.

The "classic" pipelines for HDCAM SR, 2K DDR, etc., à la Panavision Genesis, Sony 750/950, Arri D20, GV Viper. This route goes via HD-SDI.

 

For a full overview visit:

http://red.com/formatoptions.htm

 

If you want a good introduction to the RED camera and some 35mm/16mm scan comparisons in 2K/4K:

http://www.reduser.net/forum/showthread.php?t=1487

 

BTW, I have two directors in Thessaloniki, Greece, who are very interested in the camera.

We have two RED cameras ordered, one of them quite early last year, so we are in the group that doesn't have to wait until next year to get their cameras.

We plan on meeting in Greece this summer for some meetings regarding a new co-production, and to give them the chance to use the RED for some test shots.

If you are interested in joining, just let me know.


4K is defined as a format 4096 pixels across, containing 4096 unique samples for each color (RGB).

From what I read on the REDuser website, the RED sensor has 5.5 x 5.5 micron photosites.

 

So for a 24 mm wide frame on a Bayer sensor (the safe area in the S35mm format) we get:

24 mm / 0.0055 mm ≈ 4363 photosites, i.e. roughly 6.5% oversampled relative to 4096.

 

Roughly 6.5% of oversampling on a Bayer sensor is a rather small margin for getting 4096 unique readings in all color compositions of the picture. For B&W, yes, but for color it is a bit too tight.
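The arithmetic as a short sketch (assuming, as discussed below, that 5.5 micron is the center-to-center pitch):

```python
# Photosite count across a 24 mm wide frame at a 5.5 micron pitch.
# Assumes 5.5 um is center-to-center spacing -- whether it is pitch
# or photocell size is exactly the open question raised below.
frame_width_mm = 24.0
pitch_mm = 0.0055

photosites = frame_width_mm / pitch_mm   # ~4363.6
oversampling = photosites / 4096 - 1     # ~0.065

print(f"{photosites:.0f} photosites across the frame width")
print(f"{oversampling:.1%} oversampled relative to 4096")
```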

 

However, we don't know whether 5.5 micron is the dimension of each photocell or the center-to-center spacing, since photocells have to be separated.

 

Let's wait till Monday for this answer.

 

Andrew Ray


Let's wait till Monday for this answer.

 

Andrew Ray

 

Hello Mr. Ray,

 

The sensor used by the RED One has 12,642,000 elements, in a 4900 (H) x 2580 (V) matrix.

 

I recommend not going into another discussion about the different design approaches to readout, pattern rebuilding, debayering, RGB ordering methods, the number of individually colored pixels, etc., as this information is not public, and I suppose it won't become public by Monday either.

 

Be it Panavision or Arri, Hasselblad or Leica, Nikon or Canon, RED or Dalsa: camera manufacturers usually don't discuss all these details in public, as they are considered business secrets for good reasons.

 

I was positively surprised that some developers from RED shared some details with me via PM and conversations, but as that was back in September and December '06, they may have changed anyhow for the industrial production models.


Jan, any manufacturer that hides its specifications is usually hiding something that is not to its business advantage. Judging by the openness of the RED camera's creation process, they will not hide specifications that would not harm their business.

 

Some lens manufacturers hide MTF graphs and other technical specifications only when the specs do not look good. If the specs look good, they trumpet them all over the media.

Yet it is easy enough to test and make MTF charts if need be.

The same applies to camera sensors.

Remember the Panasonic HVX200 sensor mystery?

We rented the camera, and it took us four hours to figure out that they use 0.5K sensors. We put it on test charts and got 700 lines, so pixel shifting was in place.

They did more damage than good by hiding it from professionals.

 

It is my opinion, and only mine:

I have no respect for manufacturers that refuse to give me specs without a valid business reason; it just takes a few days longer to figure them out anyway.

 

Andrew


Jan, any manufacturer that hides its specifications is usually hiding something that is not to its business advantage. Judging by the openness of the RED camera's creation process, they will not hide specifications that would not harm their business.

 

I have no respect for manufacturers that refuse to give me specs without a valid business reason; it just takes a few days longer to figure them out anyway.

 

Andrew

 

Hello Andrew,

 

They published all the specifications of the sensor a long time ago, and they can be found on their website as well.

 

I was quite impressed that RED always informed everyone about the aspects where they didn't reach their original design goals. Weight was one example; another is the fact that some color spaces and resolutions originally planned will only come later, etc.

 

However, and this is crucial to understand: RED has published all the usual industry technical specifications, such as sensor resolution, size, array matrix, sensitivity, dynamic range, noise, etc.

What they, and other camera manufacturers, don't disclose is the "soft" part of their design. RED can record RAW, and how you translate a RAW image into an RGB image is not hard science alone, but in great part an artistic decision. You can aim for sharp, you can aim for soft, all based on the exact same readouts from the sensor.

 

Judging the quality of digital interpolators means, as with most A/D and D/A designs, balancing several tradeoffs.

 

As the RAW -> RGB/YUV/XYZ conversion can be done in standalone software when using RED, these software-based aspects can be changed at will by RED at any time; RED could even offer different methods of decoding the sensor's data. I suggested that RED could offer user-selectable debayering in the software, maybe even with an option for third-party designs, but this isn't priority number one at the moment.
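To make concrete what a "debayering method" even is, here is a minimal bilinear sketch in Python/NumPy. It is the simplest textbook interpolation, assumes an RGGB layout, and is in no way RED's algorithm:

```python
import numpy as np
from scipy.ndimage import convolve

def debayer_bilinear(raw):
    """Bilinear demosaic of a single-channel Bayer mosaic.

    Illustrative only: the simplest textbook interpolation, assuming
    an RGGB layout -- not RED's (or anyone's) production algorithm.
    """
    h, w = raw.shape
    planes = np.zeros((h, w, 3))

    # Scatter each photosite into its own sparse color plane (RGGB).
    planes[0::2, 0::2, 0] = raw[0::2, 0::2]   # R
    planes[0::2, 1::2, 1] = raw[0::2, 1::2]   # G on red rows
    planes[1::2, 0::2, 1] = raw[1::2, 0::2]   # G on blue rows
    planes[1::2, 1::2, 2] = raw[1::2, 1::2]   # B

    # Classic bilinear kernels: every missing sample becomes the mean
    # of its nearest same-color neighbors; known samples pass through.
    k_g = np.array([[0, 1, 0],
                    [1, 4, 1],
                    [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1],
                     [2, 4, 2],
                     [1, 2, 1]]) / 4.0

    return np.dstack([convolve(planes[..., 0], k_rb, mode='mirror'),
                      convolve(planes[..., 1], k_g,  mode='mirror'),
                      convolve(planes[..., 2], k_rb, mode='mirror')])
```

Everything called an artistic decision above lives in that last step: swap the kernels for edge-directed or frequency-based interpolation, and the exact same sensor readout comes out sharper or softer.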

 

But none of this influences the hard design of the camera: its sensor, its noise, weight, etc.

 

I think a pretty good analogy to keep in mind is that we are dealing with a pipeline here, as in film; some things in this pipeline are fixed, others are not.

 

Simplified, on 35mm we have

lens -> camera body -> stock -> lab

and on RED we have

lens -> camera body -> sensor -> REDCINE

 

Now, what happens in the lab or in the software is a flexible choice, while the first three steps aren't (hopefully, as long as the film stock has been handled appropriately, in the case of 35mm).

 

The hard tech specs for camera and sensor have been published by RED; for the lab/REDCINE step we will certainly see several tunings, changes, and options becoming available, and all we can do there is speculate.

 

However, I really look forward to taking delivery of our RED cameras here (we have a ~200 and a ~900 reservation, so we luckily won't have to wait until 2008) and then putting them through our own measurements.


Hello Jan, yes, reading it out of the sensor and de-Bayering is still more art than science.

So I understand Panasonic not wanting to show how they get more from less.

I like the approach of Dalsa and a few other manufacturers: they publish as many specs as they can, and then right next to the specs there is a white paper explaining how they got there. Sometimes you do not want to publish anything because you don't want a competitor to figure out how you got there; this I understand. But in the case of Panasonic, come on, pixel shifting is a well-known trick to get a higher final pixel count.

 

I see that RED has a RAW output port, so if someone does not like what he gets from the manufacturer's de-Bayering algorithm or RGB conversion, he can always take the RAW and use whatever is out there, or will be out there in the future.

 

Knowing Graeme, and the ability of CMOS cells to be directly addressed at lightning speed, he will turn this 12MP of Bayer data into 24MP.

http://www.nattress.com/

He is behind REDCODE.

 

As opposed to a DSLR still camera, movie cameras deal with dynamically changing patterns, so there is room for creativity beyond the basic resolution of the sensor; each frame is different, so there is more room to recover light information based on the light changes from frame to frame.

 

Andrew



I didn't want to get into another "is 4K Bayer really 4K?" discussion, which is why I asked Dimitrios to explain his first post -- perhaps he was confused by the HD outputs that the camera lists in its specs, for example.

 

I wasn't assuming that he was referring to de-Bayering resolution loss, because even if he were, you wouldn't get only 1920 x 1080 of resolution from a RAW 4K Bayer-filter source except with the dumbest of de-Bayering algorithms (i.e., extract 2K worth of green pixels and 1K worth of red and blue). If you really believed that, then you wouldn't even get 4:4:4 1920 x 1080 out of the camera.
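For illustration, the "dumbest" algorithm described above is essentially the following sketch (assuming an RGGB layout; this is the worst case, not how real converters work):

```python
import numpy as np

def debayer_halfres(raw):
    """The 'dumbest' de-Bayer: collapse each 2x2 RGGB cell into one
    RGB pixel, so a 4096-wide mosaic comes out 2048 wide. Purely an
    illustration of the worst case discussed above."""
    r = raw[0::2, 0::2]                            # one R per cell
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0  # average the two Gs
    b = raw[1::2, 1::2]                            # one B per cell
    return np.dstack([r, g, b])                    # shape (H/2, W/2, 3)
```

Any real de-Bayer interpolates the missing samples at full grid resolution instead, which is why the usable resolution lands well above half the photosite count.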


As opposed to a DSLR still camera, movie cameras deal with dynamically changing patterns, so there is room for creativity beyond the basic resolution of the sensor; each frame is different, so there is more room to recover light information based on the light changes from frame to frame.

 

Hello Andrew,

 

Theoretically speaking, it should even be possible to construct more information by comparing sequences of CMOS RAW, basically an inverted MPEG encoding: not to reduce information, but to gain it.

 

However, I am 100% positive that this is, even for Graeme Nattress, too much rocket science in 2007.

 

Besides, doing that on 12.6MP @ 12 bit @ 60p would, ahem, require a little bit too much computation overhead for typical long-form productions. Or the camera department rushes into the VFX/CGI departments and steals their rendering racks. B)

 

However, I am quite interested in what we can do with recording 4K @ 60p for output at 2K @ 24/25p.

In the last decade we overcranked with purpose, as it was expensive on 35mm. Now that it seems to be becoming basically cost-neutral, I am looking forward to new styles and tricks in the beginning era of, let's say, "oversampled" shooting.

 

And yes, Graeme is certainly a real asset for RED. I had some short personal mail exchanges with him in the past, and he is a very qualified person.


Jan, I like your metaphor about inverted MPEG.

When you look at three-dimensional space projected onto a two-dimensional surface in motion, it is actually a reverse compression of the infinite elements of that space.

I do agree with you 100%: spatial movement in the picture, as opposed to stills, is actually compressed information about the still picture's infinite elements. That is a good, simplified way of presenting it. Wow! I never thought about it this way.

 

Most cameras, still or motion, do a much better RAW-to-final-output conversion in the software supplied with the camera than in the camera itself.

I think it is because the camera's CPU is not as fast as the computer we use for post-processing to begin with, and it would require a DSP with higher power consumption, and thus higher heat dissipation. What I like, though, is that he is using RAW with wavelet compression.

http://en.wikipedia.org/wiki/Wavelet_compression

The nice feature of this compression is that the image is presented at the full resolution of the sensor as soon as you stop or slow down the pan or movement on the screen.

 

If you get some fast movement in the frame, you lose a bit of resolution across the whole screen, but then motion blur doesn't let the human eye recognize many details anyway. This is as opposed to non-wavelet compressions, which show you blobs of quadrants flying all over the screen.

Just try pressing the pause button while playing most MPEG videos.

 

With wavelet, once you pan a wide-angle view with tons of detail in it and then slow down a bit or stop, the whole screen becomes crystal clear, and incidentally that is the moment when we want to show the viewer most of the details anyway. We don't show fast action scenes using wide-angle lenses, do we?

Having a RAW output clocking in at 9 Gbit per second makes all this compression talk redundant, but it is good to have the compression for low-budget productions or previews anyway.
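As a toy illustration of that multi-resolution behavior, here is a sketch using the PyWavelets (pywt) library. It has nothing to do with REDCODE's actual codec, whose design is not public; it only shows that discarding the finest detail bands still reconstructs a complete, softer image:

```python
import numpy as np
import pywt

# Toy illustration of the multi-resolution property described above,
# not REDCODE's actual codec: decompose a frame, discard the finest
# detail bands, and a softer but complete image still reconstructs.
frame = np.random.rand(512, 512)            # stand-in for a luma frame

coeffs = pywt.wavedec2(frame, 'db2', level=3)

# Zero the finest level of (H, V, D) detail coefficients -- a crude
# analog of what a bitrate-starved wavelet codec discards first.
coeffs[-1] = tuple(np.zeros_like(d) for d in coeffs[-1])

approx = pywt.waverec2(coeffs, 'db2')
print(frame.shape, approx.shape)            # same size, softer detail
```

The point of the sketch: the coarse approximation survives intact, which is why detail returns the moment the motion (and the need to starve those fine bands) goes away.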


I'm sure some day we'll also get some fancy motion-flow algorithms involved for supersampling the path of, say, the color green through an image, even as it traverses a blue well.

 

With enough processing power, I wouldn't be surprised if temporal de-Bayer algorithms eventually approached the theoretical 1:1 resolution factor.


Actually, David, 1920 x 1080 = 2,073,600.

A 12MP Bayer sensor has 3MP of red, 3MP of blue, and 6MP of green.

Yes, I make the same mistake again and again.

 

I sometimes think that 4K is 4MP; funny how brains work.

 

4K is 4096 x 2304 at a 16:9 screen aspect ratio.

So 9.4MP.

 

Gavin, temporal de-Bayering at 120 fps will actually give you a 1:1 de-Bayer.

It is possible to do it in post on your computer if you have RAW, but I have yet to see a hardware DSP optimized for such an algorithm so that it could be installed in the camera.

As Jan said, 2007 is too early for this.

At 60 fps you may get a loss of sharpness trying it, and at 30 fps it surely won't work.


A 12MP Bayer sensor has 3MP of red, 3MP of blue, and 6MP of green.

How exactly the CMOS imager's array of photosensitive diodes is distributed among the colors is up to the designers, and regarding RED, we don't know that for sure as of yet.

We can assume it, probably yes, but there are lots of different approaches.

 

What we know for sure is that it is a regular CMOS, not a Foveon.


Gavin, temporal de-Bayering at 120 fps will actually give you a 1:1 de-Bayer.

It is possible to do it in post on your computer if you have RAW, but I have yet to see a hardware DSP optimized for such an algorithm so that it could be installed in the camera.

As Jan said, 2007 is too early for this.

At 60 fps you may get a loss of sharpness trying it, and at 30 fps it surely won't work.

 

I'm not sure we'll ever be able to do it in camera, because optimally it would require the entire clip to be recorded before the algorithm starts running.


How exactly the CMOS imager's array of photosensitive diodes is distributed among the colors is up to the designers, and regarding RED, we don't know that for sure as of yet.

We can assume it, probably yes, but there are lots of different approaches.

 

What we know for sure is that it is a regular CMOS, not a Foveon.

 

Someone from the RED team mentioned on the REDuser forum that Mysterium is a Bayer CMOS sensor.

http://en.wikipedia.org/wiki/Bayer_pattern

 

Unless it is a modified version of it, anything is possible.

 

 

I'm not sure we'll ever be able to do it in camera, because optimally it would require the entire clip to be recorded before the algorithm starts running.

 

Actually, if we use picture movement just to enhance the resolution of the picture, you don't want to use more than four frames for the calculations, I think.

Anything more will introduce more blur.

I imagine there will be a kind of four-frame sliding window throughout the sequence.


Someone from the RED team mentioned on the REDuser forum that Mysterium is a Bayer CMOS sensor.

http://en.wikipedia.org/wiki/Bayer_pattern

 

Unless it is a modified version of it, anything is possible.

Actually, if we use picture movement just to enhance the resolution of the picture, you don't want to use more than four frames for the calculations, I think.

Anything more will introduce more blur.

I imagine there will be a kind of four-frame sliding window throughout the sequence.

 

Right, but in order to know where you are with pixel flow, you usually also want to know where you're going.

 

So a minimum would be three frames: one past, one present, and one future. Ideally, of course, you would want more, to ignore grain, noise, and other inconsistencies.

 

So I think no matter how you slice it, the camera needs more data than it has at the instantaneous moment of capture to analyze the pixel vector.

 

Technically, with a sufficiently advanced algorithm, the computer could sort through all of your footage, do automatic clean-plate generation, and start supersampling based on other shots, angles, and frames. So if you have a slow pan across a wall, it would not only do a temporal debayer, it would also auto-align previously debayered frames and up-res your footage. :ph34r:
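As a skeleton of the sliding-window scheme being discussed, the accumulation step might be shaped like the sketch below. It is hypothetical throughout: `estimate_flow` and `warp` are stand-ins for whatever motion estimator and resampler one would actually use, and plain averaging is far too crude for real super-resolution; the point is only the past/present/future window structure.

```python
import numpy as np

def temporal_accumulate(frames, estimate_flow, warp, window=1):
    """Sketch of the accumulation step of a motion-compensated
    temporal de-Bayer. Hypothetical throughout:

    frames        : list of 2D Bayer mosaics
    estimate_flow : motion estimator, flow = estimate_flow(src, ref)
    warp          : resampler aligning src to ref via the flow field
    window        : frames on each side; window=1 is the three-frame
                    past/present/future minimum discussed above

    Neighboring mosaics, warped into alignment, contribute photosites
    the Bayer pattern missed in the current frame; the enriched mosaic
    would then be handed to a spatial de-Bayer.
    """
    out = []
    for i, ref in enumerate(frames):
        acc = ref.astype(float)
        n = 1
        for j in range(max(0, i - window), min(len(frames), i + window + 1)):
            if j == i:
                continue
            flow = estimate_flow(frames[j], ref)  # where pixels moved
            acc += warp(frames[j], flow)          # align neighbor to ref
            n += 1
        out.append(acc / n)                       # enriched mosaic
    return out
```

Note that every output frame needs both past and future data, which is exactly the argument above against running it in camera.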

