RED ONE footage


Emanuel A Guedes

  • Premium Member

Mr. Worth makes an interesting point, although to be scrupulously fair, Nyquist applies to all cameras, regardless of whether they use a Bayer filter array or a three-chip splitter block. According to Nyquist, a Viper is only capable of resolving just under 1000 lines, and we don't have any problem calling that a full HD, ergo practically 2K, camera system. For this reason alone it's worth crediting Red with 2K. That figure could be severely affected by the large amounts of low-pass filtering required to avoid aliasing on a Bayer sensor, but we'll give them the benefit of the doubt here.

 

The reason I credit them with more is that there are techniques involving the statistical likelihood that everyday images are more or less monochromatic which can increase the apparent resolution of a BFA; effectively, this comes down to treating the three overlaid colour channels as a pixel-shifted setup. These techniques are very helpful and add materially to the sharpness of any BFA image. The problem with this is that they are asking us to believe that this technique, even with all the low-pass filtering, is allowing them to effectively double the native resolution of the camera.

 

This is a big ask, especially for a company that screams "pixel shifting and up-rezzing not spoken here" on its website.

 

Phil



When one looks at the Red camera one gets the impression that the human visual system has finally been replicated; however, this is very far from the fact. The Red sensor does not mimic the design of the human retina. To do this, the pixels would have to be arranged in a series of concentric circles. The pixels themselves would have to be space-variant in order to mimic the periphery of the human visual system, which means the resolution must vary, with higher resolution at the focal point and lower resolution at the circumference. The shape of the pixels must not be boxes but rather truncated pyramids. In other words, to properly design a synthetic retina we must have a knowledge of architecture. What Red gives us instead is more pixels thrown at the problem, solving it with brute computational power. Like an architect who designs the world's tallest building but cares nothing about what the building looks like: the building is very tall and very impressive, but it still looks like a box.

 

For years these electronics companies have been giving us crummy interlaced video so ridden with artifacts as to make the picture unwatchable. The 1080i format gave us what we call the super-video look. Now Red comes along with a fully progressive, ultra-high-definition, high-speed scanning system, and now we are to leap with joy because it no longer looks like video but rather like a computer screen. In other words, we have been fed crumbs so long that now that we have a chance to eat a good meal we are satisfied and seek no further improvement. As I have said, I have always been an advocate of progressive scanning, but why do we have to stop at progressive scanning? Why not go for a completely new paradigm in chip design?

 

Of course, if one wishes, alternate pixel designs can be mimicked using conventional sensors. However, this results in a severe resolution loss: I have calculated that only 500 concentric circles with 1000 pixels per circle can be generated using the Red sensor if these pixels are going to incorporate space variance.

 

Perhaps the saving grace of the Red camera is that it is modular and upgradable. If one desires a radical new chip design, the old one can be swapped out. Also, I can expect Jim to be a little more open to listening to my ideas than a giant mega-corporation. Of course, Jim's camera is good enough and can be considered overkill. However, what will happen when the competition steps up to the plate and offers their 4K or even 8K cameras? What will set Red apart when an arms race starts over which camera can offer the most pixels? The advantage Red will have is that Red listens. The mega-corporations have an attitude that bad things happen to people who ask for progressive scanning. And when these corporations finally do offer us progressive scanning, we dare not ask for more.


  • Premium Member
Why do we care what Phil thinks?

 

Unless you're royalty, the question is "Why should I care what Phil thinks?" and the answer to that is: I don't know, it's your choice. Personally, despite the fact that Phil might seem antagonistic, he has actively enlivened debate on this board, he speaks from a position of research (though you might disagree with his prognosis), and quite frankly, speaking your mind is in my opinion an admirable trait; that's what Brits do. Now I could ask what you hoped to contribute with your post, but I've got a feeling there was no point in it whatsoever.

 

keith


It'd be a shame to have Jim Jannard & RED leave this forum after having attempted time & time again to establish constructive discussions on what & how to improve the RED camera as a cinematography tool.

 

Let's be real here... Jim Jannard goes out and makes a lot of cash selling sunglasses... in my humble $.02 opinion, someone like that shouldn't and wouldn't be afraid of what one person or a couple of people say... if he were, he'd probably still be selling sunglasses at the local swap meet, which he isn't...

 

So I can't understand what all this fear is about... why is everyone so afraid just because one or two people think it's not real, that it doesn't work... that it doesn't live up to what they said it would do...

 

Again, in my humble opinion, when the little guy starts shooting with the camera every day, and those who have gone before (like with any other camera) have the workflow down, that's when the camera will finally be a hit... that's who Red marketed to at the beginning... the little guy... not the big guns...

 

Would I use the Red... YES... when I feel all the issues have been sorted out...


I'm not sure what you mean by "worse case" resolution for pure red or blue images.

Exactly what I said: worst case. If you film a subject that only creates response on the red photo sites, you are down to this 1K resolution. For realistic subjects you get response from all photosites together: you get luminance information from all 4K of photo sites. If you shoot a black and white film with Red you end up with way beyond 1K real effective resolution. If you shoot in color it's still >> 1K, since all the sites contribute all the time to the final image, before and after debayering. Film any real colors in the world and analyse the pictures. Film resolution charts. You do not get 1K or 2K effective resolution. You get more. Not only perceived but actual, measurable resolution. You don't get full 4K resolution with color footage, but there is a lot in between 1K and 4K.

Back to the 1K thing. The Nyquist-Shannon theorem states that you must capture double the amount of information of your output device to accurately reproduce what you are sampling (in this case, an image). Considering this, even if the Red sensor did capture 2K of data for red, green and blue (it currently only captures 2K for green), your target resolution (based on the theorem) would be 1K. Let me state for the record that this is not my opinion; it is simply what the theorem states, and it is observed in many other digital technologies, digital audio and print among them.
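The sampling requirement being invoked here can be demonstrated numerically. Below is a minimal Python sketch (purely illustrative; the 4 Hz rate and 3 Hz signal are arbitrary choices, not anything from this thread) showing that a frequency above the Nyquist limit produces exactly the same samples as a lower "alias" frequency:

```python
import math

def sample(freq_hz, rate_hz, n_samples):
    """Sample a unit-amplitude sine of freq_hz at rate_hz samples/sec."""
    return [math.sin(2 * math.pi * freq_hz * n / rate_hz)
            for n in range(n_samples)]

rate = 4.0                       # 4 samples/sec -> Nyquist limit is 2 Hz
above = sample(3.0, rate, 16)    # a 3 Hz sine, above the limit...
alias = sample(-1.0, rate, 16)   # ...is indistinguishable from a folded 1 Hz sine

# The two sample sequences are identical: the 3 Hz content aliases to 1 Hz.
assert all(abs(a - b) < 1e-9 for a, b in zip(above, alias))
```

Once the samples are taken, no reconstruction can tell the two signals apart, which is why oversampling (or low-pass filtering before the sensor) is the only cure.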

Once more, real footage is sampled all the time from all photo sites. There is no 1K sampling going on. Each photo site contributes. If you film a bright red object the red 'pixels' react strongest, but the other two react as well, just less so. All sites always react to luminance in the object; how much depends on the frequency distribution coming from the object and the design of the sensor.


I think we need to look at people's comments like these in the right context. In comparison to an HVX200, the Red is pretty great!

 

I don't see Red vs. HVX200 as a fair comparison at all. You can purchase three HVX200s - optics included - for the price of one Red body with no lens. That fact alone puts them in two entirely different categories. I think Red is priced in a grey zone, and will probably get more attention in the rental market, although I could be wrong.


I don't see Red vs. HVX200 as a fair comparison at all. You can purchase three HVX200s - optics included - for the price of one Red body with no lens. That fact alone puts them in two entirely different categories. I think Red is priced in a grey zone, and will probably get more attention in the rental market, although I could be wrong.

 

 

It's not a comparison. What I'm saying is that a lot of people interested in the Red are coming from using an HVX200 or equivalent. The quality is nowhere near similar, which is why you read all the "second coming" comments on the reduser forum. Compared to what they have been using up until now (an HVX200 in a lot of cases!) the Red is blowing their minds!

 

However, the comments you read from people who are used to working with film are less "fanatical." They've seen good quality before! I think they're more interested in the workflow benefits of digital.

 

In conclusion! :P People coming from an hvx200 will think the Red is all that and a bag of chips. People used to working with film are more intrigued by the digital workflow, since they're used to high quality images with film.

 

At least, that's how I'm seeing it.

 

 

Jay


If you shoot a black and white film with Red you end up with way beyond 1K real effective resolution.

It is impossible to shoot pure black and white with a Red, or any Bayer-filtered camera, at ANY resolution. You are always shooting "color," because you will never get the full spectrum of luminance, since each pixel has either a green, blue or red filter on it. The closest you could get, in my opinion, would be a "2K B&W image shot through a green filter." This is, of course, without any post-processing. You could certainly "average" the output of red, green and blue. But as I mentioned earlier, I am just talking about the sensor data itself, sans any post-processing.

 

There is no 1K sampling going on. Each photo site contributes. If you film a bright red object the red 'pixels' react strongest, but the other two react as well, just less so.

Hold on a second. You wrote that just now, but initially you wrote this:

 

Exactly what I said: worst case. If you film a subject that only creates response on the red photo sites, you are down to this 1K resolution.

I'm confused. Which do you mean? In any case, the red pixels will only respond to red light. Other colors are filtered out. This is necessary to isolate each color channel.

 

You don't get full 4K resolution with color footage, but there is a lot in between 1K and 4K.

Well, yes. There's 2K of true green data. However, you can only match this with 1K of red and 1K of blue. So you still come up short in reconstructing even a true 2K image. Remember, you need 2K of green, 2K of red and 2K of blue just to create a 2K color image.
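The photosite arithmetic being argued over here follows directly from the Bayer layout, and can be sketched in a few lines of Python (the 4096 x 2304 grid is an illustrative stand-in, not RED's published sensor spec):

```python
# Each 2x2 Bayer cell holds two green, one red and one blue photosite,
# so green gets half of all sites and red/blue a quarter each.
def bayer_counts(width, height):
    total = width * height
    return {"green": total // 2, "red": total // 4, "blue": total // 4}

counts = bayer_counts(4096, 2304)  # illustrative "4K" grid dimensions
# Green always has as many sites as red and blue combined:
assert counts["green"] == counts["red"] + counts["blue"]
```

Along each row of 4096 photosites there are only 2048 green and 1024 red/blue samples, which is where the "2K G + 1K R + 1K B" shorthand in this thread comes from.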

 

Mr. Worth makes an interesting point, although to be scrupulously fair, Nyquist applies to all cameras, regardless of whether they use a Bayer filter array or a three-chip splitter block.

Yep, you're absolutely right. Interesting how all these other digital technologies cannot get away without observing Shannon's theorem (although 22.05 kHz audio was common a while back), but digital cinema can. This will change. We should be striving for 8K acquisition for 4K output. This would put digital cinematography in line with CD-quality audio and print-quality photos.

 

There's still a long way to go.


  • Premium Member
Does anyone have time enough to count all the grains on an S35 film frame? Or maybe those numbers are already posted somewhere?

I asked Kodak about that maybe 5-10 years ago. Depending on which emulsion, it's somewhere in the ten-billion-to-a-trillion range (10^10 - 10^12). Fast stocks are at the low end; print and IP/IN stocks at the high end. Comparing grains directly to photosites, though, has some problems. Digital cameras usually give you 8-14 bits per photosite, while film grains are strictly one binary digit: exposed or not. They're also random in size and shape, and potentially overlapping.
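The one-bit-per-grain point can be made concrete with a small counting sketch (Python, purely illustrative): averaging n binary grains over a patch yields only n + 1 distinct levels, so matching the level count of one multi-bit photosite takes a surprisingly large patch of grains.

```python
# A photosite with b bits resolves 2**b levels; a grain is binary (exposed or not).
# Averaging n binary grains yields n + 1 distinct levels, so matching one
# b-bit photosite by area-averaging takes 2**b - 1 grains in the patch.
def grains_to_match(bits):
    return 2 ** bits - 1

assert grains_to_match(12) == 4095   # one 12-bit photosite ~ thousands of grains
assert grains_to_match(8) == 255
```

This is one reason grain counts and photosite counts aren't directly comparable: film trades spatial density for tonal depth.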

 

 

 

-- J.S.


90% of this thread has been very interesting and educational. I'd like to thank those participating in the debate about the Bayer pattern. I feel I understand it now better than I ever have (that first post on it was excellent).

 

At one point one of you suggested we get to an 8k sensor for true 4k output.

 

Even at this current resolution, we are getting complaints that the footage is "too sharp" or "CGI-like." At the end of the day, when the math is completed, don't you think RED's sharp enough? Current post-production limitations, and the sharpness of the image as it stands, bring me to the conclusion: "For right now, whatever RED is resolution-wise is enough!"

 

As a RED customer and soon-to-be user... I do understand RED's true OUTPUT resolution now, and I would not complain if next to the 4K text on the camera there were a "*" and next to that it said:

- Native 4K sensor -

 

Of course, this is true.. no one is debating that the sensor is 4K, they are debating what the camera is outputting.

 

Jay


At one point one of you suggested we get to an 8k sensor for true 4k output.

What I meant was true 8K for 4K output. That means shooting true 8K, and downrezzing to 4K to eliminate aliasing. From Wikipedia:

If the sampling condition is not satisfied, then frequencies will overlap; that is, frequencies above half the sampling rate will be reconstructed as, and appear as, frequencies below half the sampling rate. The resulting distortion is called aliasing; the reconstructed signal is said to be an alias of the original signal, in the sense that it has the same set of sample values.

This may seem like overkill, but the point is that an 8K image downrezzed to 4K will look better than a 4K image presented 1:1. I believe certain telecine machines (Thomson/Philips?) can do this. They scan at 4K for 2K output.
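The downrez step being described is, in its simplest form, a box filter: average each 2x2 block of the oversampled image into one output pixel, which suppresses the frequencies that would otherwise alias. A toy Python sketch (illustrative only, not any vendor's actual resampler):

```python
# 2x2 box-filter downrez: the supersample-then-downsize idea in miniature.
def downrez_2x(img):
    """Halve an image's dimensions by averaging each 2x2 block."""
    return [[(img[2*y][2*x] + img[2*y][2*x + 1] +
              img[2*y + 1][2*x] + img[2*y + 1][2*x + 1]) / 4.0
             for x in range(len(img[0]) // 2)]
            for y in range(len(img) // 2)]

img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
assert downrez_2x(img) == [[3.5, 5.5], [11.5, 13.5]]
```

Real scalers use better kernels than a plain box, but the principle is the same: every output pixel is supported by several true samples, so the result is cleaner than a 1:1 image at the target size.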

 

This concept follows in the footsteps of digital audio and print output, as mentioned earlier. It's no different. The Shannon theorem applies here as well.

 

This is where we really want to be for the best possible photographic reproduction.


  • Premium Member

That's why I think the whole "is 4K Bayer really 4K?" debate is somewhat academic -- the only practical issue in my mind is whether it compares favorably in resolution to 35mm color negative, which is the current gold standard, and my initial impression of the RED and Dalsa images is that it does. It appears to have the sharpness of 35mm with the grainlessness of bigger film formats. Now in other areas, like contrast, skintone reproduction, etc. it reminds me more of DSLR photography, which is to be expected... plus I actually like DSLR photography... but I can see someone objecting to that "digital look" just as when people complained about the skintone reproduction of early D.I.'s -- it starts to become an issue of personal taste.

 

To know what "true" 4K RGB is like, you'd have to design a monochrome sensor and shoot three passes through RGB filters (or make a prism block with three 4K sensors), and since that really isn't very practical right now, nor resembles anything we currently use, it's sort of pointless. Now you could compare 4K Bayer to a 4K RGB scan of 35mm color negative, but again, my initial impression is that the RED and Dalsa 4K images compare favorably to 4K scans of 35mm. I have yet to see any really thorough comparison tests, though, but it may turn out that these 4K Bayer-filtered cameras actually are outresolving 35mm color negative. I once saw some comparison tests of a 12MP DSLR image versus a 35mm frame which seemed to suggest this was happening.

 

It gets into that weird area where you need a 4K RGB scan of a 35mm movie film frame in order to capture all the grain information, yet that doesn't necessarily mean that you'd get the equivalent resolution if you used a 4K RGB 3-sensor set-up.

 

Anyway, if I get to shoot some comparison tests between the RED and 35mm (scanned to 4K) and find them similar in detail & sharpness, I'm not really going to care whether 4K Bayer-filter is actually some sort of theoretical "3K" or "2.5K".


It is impossible to shoot pure black and white with a Red, or any Bayer-filtered camera, at ANY resolution.

What I meant is that the more black and white your subject is, the more all photo sites contribute to the picture all the time. And the narrower the frequency spectrum hitting the sensor, the more only one of the three types of photo sites responds.

I'm confused. Which do you mean? In any case, the red pixels will only respond to red light. Other colors are filtered out. This is necessary to isolate each color channel.

Separation is not perfect and there is crosstalk. And objects are usually not made of pure R, G or B color but a mixture of colors. And this is used for getting higher resolution out of the sensor. Dalsa explicitly says so in their presentations. Are they lying?

I suggest you look at actual footage and then decide whether you see/measure 1K, 2K or 3-4K of resolution, and whether it depends on what you photograph (what frequency spectrum is hitting the sensor) or not. If Red and Dalsa are 1-2K, then 35mm film is 1K.


Not to be too off-topic here, but I just happened to talk to a friend today, who's providing some [non-RED] 4:4:4 workflow services to [insert major Hollywood studio here]. I mumbled something about RED to him. He said something about 4:4:4. I said, something about ProRes 4:2:2 proxies. He said something about . . . I don't remember, but it was something like, "I don't think RED . . . " Anyway, I said, "Well [the RED footage] looks pretty damned good on a 4K projector." He said, "You saw RED footage from a 4K projector?" He had concerns regarding the RED footage holding up, but then again, he's never seen RED footage shown on a 4K projector.

 

I also heard what's going on at the major Hollywood rental houses (you know which ones). I was impressed!


Hey, I don't blame you there. The initial price of the RED was $17,500, and I just checked Spectra's website: for that much money, I could shoot, process, and Rank-telecine 22,000 ft. of 16mm film. There is just no contest as to which I would rather do. I would love to have that much celluloid just chilling in my fridge. The RED image is nice, I won't lie. But it does look strange to me as well, and it's not the kind of image that tells the story visually the way that I would want it told. For those it works for, more power to you. For me, I have to have a certain look and atmosphere that only film can give, and small gauge tends to be better for my stories.

 

 

...

 

And, Matthew, not trying to start an argument here, but for that money you could shoot a one-off 16mm film. With the Red camera you would be able to shoot what most people would agree are better-quality images, over and over again.

 

However. Who said anything about buying?

 

Rental cost of the Red + acc's would be WELL below the 16mm belt.

 

 

Try again. The RED is an $18,000 camera, a zoom lens is $9,000 and a set of primes is $20,000: roughly $50,000 all in.

 

Taken in the context of the whole discussion: if you want to shoot a film and you're renting your equipment, all things equal, by the time you're editing on your NLE I would be very surprised if 16mm were cheaper than shooting with a Red One.

 

Now again, it's a tool. The more tools the better. You choose the one you prefer, or the one you think is best for the particular job you're doing.

 

Cheers,

Damien


Separation is not perfect and there is cross talk. And objects are usually not made of pure R or G or B color but a mixture of colors. And this is used for getting higher resolution out of the sensor. Dalsa explicitly says so in their presentations. Are they lying?

Believe me, I know what you're getting at. And I don't necessarily disagree. But in order to make "intelligent decisions" with the data, we must employ the demosaicing algorithm (which is still interpolation, despite the pixel offsets and "educated guessing" as to the true value of the information).

 

This is a type of pixel shift. Refer to Phil's post:

 

The reason I credit them with more is that there are techniques involving the statistical likelihood that everyday images are more or less monochromatic which can increase the apparent resolution of a BFA; effectively, this comes down to treating the three overlaid colour channels as a pixel-shifted setup. These techniques are very helpful and add materially to the sharpness of any BFA image.

And according to Red, based on this screen capture of their web site and how the average person would interpret it, they are not employing this technique:

 

pixel_shift.jpg

 

You mentioned that Dalsa states they are demosaicing the Bayer data so that the offsets in R, G, and B contribute to the resolution of the demosaiced image. I believe this is true, just as Adobe's Camera Raw or Lightroom does the same.
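For illustration, the simplest version of this kind of reconstruction is bilinear demosaicing: real samples pass through untouched, and missing values are interpolated from their neighbors. A toy Python sketch of the green channel on an RGGB mosaic (purely illustrative; this is not RED's or Dalsa's actual algorithm):

```python
# Toy bilinear recovery of the green channel on an RGGB Bayer mosaic.
def green_at(mosaic, x, y):
    if (x + y) % 2 == 1:            # RGGB: green sites sit where x + y is odd
        return mosaic[y][x]         # real sample passes straight through
    h, w = len(mosaic), len(mosaic[0])
    # At red/blue sites, green is the average of the in-bounds neighbors.
    neighbors = [mosaic[ny][nx]
                 for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1))
                 if 0 <= nx < w and 0 <= ny < h]
    return sum(neighbors) / len(neighbors)

mosaic = [[1, 2],
          [3, 4]]
assert green_at(mosaic, 1, 0) == 2     # a real green sample
assert green_at(mosaic, 0, 0) == 2.5   # interpolated at a red site
```

Every interpolated value is a guess informed by real samples, which is exactly why people in this thread call demosaiced output something between the photosite count and the full grid resolution.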

 

The text on Red's site that says "Pixel Shifting and Up-Rezzing not spoken here" would lead one to rationally conclude that they are not doing either of the following:

 

1. Using offsets in Bayer data to recover a more accurate image than NOT using the offsets

2. Adding any more resolution to the image than was already there

 

Frankly, I know that they are doing number 2, because the "4K" screen shots they have on their site are clearly up-rezzed: the interpolation artifacts are very obvious (possibly due to the type of demosaicing algorithm they are using). Here is a 200% view of the helmet of the WWI soldier in some of the Peter Jackson movie screen grabs. It is a PNG file, which is not recompressed as a JPEG would be. Also, it was scaled using Photoshop's "Nearest Neighbor" algorithm, which simply duplicates adjacent pixels, so the "rounding" or "smoothing" of the weave on the helmet was not added by me:

 

009000.png

 

The artifacts of demosaicing to 4K are clearly evident, especially in the top left quadrant of the frame. Notice the rounded corners of the intersecting threads of the weave. I can also dispel any excuse that this is due to JPEG compression (the original image on their site was a JPEG), as JPEG artifacts do not look like this. This is clearly the work of demosaicing, interpolation, or, in the case of this 4K image, up-rezzing.

 

Not that this is bad. It's actually very decent. But it's not good enough for me to consider this a 4K image.

 

Please don't think I'm here to talk trash about Red. I'm not. As I stated before, the camera will most certainly produce nice images. However, I feel it's important to point out the facts. I personally don't feel that the claim about pixel shifting and up-rezzing should be on Red's site; it's certainly bordering on misleading.

Besides, Red, and many others here, always seem to downplay the technical limitations of the camera in favor of "how pretty the images are." To me, and I am speaking for myself, the technical abilities dictate how pretty the images are. The reason I say this is that facts, that is, numbers in the form of true pixel resolution and dynamic range, are not open to interpretation. If it's 4K, it's 4K: you have the information. If it's 2K, it's not 4K: you only have half of the information.

In the case of Red, I already know the limitations of Bayer and what creative pixel manipulation must be done in software to achieve 4K. Demosaicing is actually a responsibility of the software rather than the camera's sensor. So I feel the claims about Mysterium would be better delivered as "the combination of our Mysterium sensor and our clever de-Bayering software." Because, as I've already pointed out, the Bayer-filtered sensor is only giving you a portion of the information you need -- even for a 2K image.


The artifacts of demosaicing to 4K are clearly evident, especially in the top left quadrant of the frame. [...] This is clearly the work of demosaicing, interpolation, or, in the case of this 4K image, up-rezzing. [...] Because, as I've already pointed out, the Bayer-filtered sensor is only giving you a portion of the information you need -- even for a 2K image.

 

Crossing the Line was shot with REDCODE, which means that the data was compressed. Could this compression not account for what you are seeing here? If the RAW data were captured from the sensor uncompressed, I imagine this would be eliminated.


Crossing the Line was shot with REDCODE, which means that the data was compressed. Could this compression not account for what you are seeing here? If the RAW data were captured from the sensor uncompressed, I imagine this would be eliminated.

The artifacts in question look like scaling artifacts to me. I don't believe this is what you'd see with image compression.

 

Even if this was a result of compression, there is still data loss. So now we're back to where we started, taking 2K G + 1K R + 1K B, demosaicing (interpolating), and then compressing (additional data loss). What's left after all that? How will this affect your color correction options?


I think we have all found some common ground here. The RED is groundbreaking and I want one. As a high-end video camera it's the best at the lowest price. This technology is amazing and gives all kinds of new opportunities to indie filmmakers. Now, we can agree it's not a film camera; the look is different and in my opinion will never be able to compete with 35mm at the cinema. Although that's not to say RED films won't be shown.

 

Brilliant stuff.


...

 

So - and I really hate to beat a dead horse here - knowing this, why are you still marketing it as a 4K camera?

 

I'm sick and tired of asking entirely reasonable questions and having people fly off the handle. The fact that someone might somewhere have started marketing a device as something it's not, and their embarrassment at being questioned, is not my problem.

 

You won't believe the amount of intemperate foul language I had to cut out of this before I posted it.

 

Phil

 

Red are advertising their camera by the number of pixels on the sensor, which actually has 4.9K, of which only 4.5K maximum may be used for recording (when going uncompressed), when and if that option becomes available. Presently those that have a Red can only record using a 4096 by 2048 portion of the sensor. Sensor size is what all DSLR manufacturers use to advertise their cameras. As to the measured resolution, Graeme has stated it is more something along the lines of 3K, here:

http://www.reduser.net/forum/showpost.php?...mp;postcount=19

 

So Red are advertising, just like most companies, to make things sound great. They don't specify whether they're referring to sensor resolution or the output (measured) resolution of the full-color image. However, if you ask precise questions, they usually respond, and I don't get the impression they're doing false advertising. They're just being smart and a little sensational, but who can blame them for that?

 

Cheers,

Damien


  • Premium Member

Mr. Mullen;

 

You're right. It's a perfectly acceptable picture and I've always said so.

 

I just consider it to be a point of order that we shouldn't let people bandy these impossible claims about. It raises all sorts of ugly possibilities: what else aren't they being straight about? If we let this go, what will someone try next?

 

Phil


...

I just consider it to be a point of order that we shouldn't let people bandy these impossible claims about. It raises all sorts of ugly possibilities: what else aren't they being straight about? If we let this go, what will someone try next?

 

Phil

 

Is this sort of advertising any different from Panasonic passing the HVX200 off as a 1080p camera? Or what about Canon calling their new 1Ds Mark III a 21-megapixel camera? I mean, we all know it's a Bayer sensor, so really it's only 7 megapixels, right?

 

I have to post this because I have a feeling there's something wrong with the grabs from Crossing the Line. This 100% crop is from a shoot that fxguide.com did when they first got their camera more than a week ago. Most of the so-called 'debayering' artifacts aren't apparent at all. I think the Crossing the Line footage had other problems of some sort. Also, it looks to me like there's more than 2k of resolution here... maybe not true 4k, but obviously more than 2k.

 

EXT_Depth-ResCompare.jpg


The artifacts in question look like scaling artifacts to me. I don't believe this is what you'd see with image compression.

 

Even if this was a result of compression, there is still data loss. So now we're back to where we started, taking 2K G + 1K R + 1K B, demosaicing (interpolating), and then compressing (additional data loss). What's left after all that? How will this affect your color correction options?

 

Actually, in the Red workflow the compression comes before the demosaicing; that is all that is done to your data before it is recorded. You can then decompress and demosaic to 4:4:4 uncompressed if you like.

 

Cheers,

Damien


Actually, in the Red workflow the compression comes before the demosaicing; that is all that is done to your data before it is recorded. You can then decompress and demosaic to 4:4:4 uncompressed if you like.

 

Cheers,

Damien

 

It's true that the RAW data is compressed first, but even decompressed and demosaiced footage isn't true 4:4:4 since it comes from a bayer sensor and you can't get detail back that's already been lost in compression.

 

Although I'll argue that it's more than 4:2:2. The layout of the photosites and the way every pixel responds at least a little bit to every color means that an efficient algorithm can pull quite a bit of resolution out of bayered data.
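The 4:2:2 comparison can be made concrete by counting chroma samples per 4x2 pixel block under the standard J:a:b subsampling notation (a small Python sketch; the schemes shown are the common broadcast ones, not anything RED-specific):

```python
# J:a:b notation: 'a' chroma samples in the first row of J pixels,
# 'b' additional chroma samples in the second row.
def chroma_samples_per_block(j, a, b):
    """Chroma samples (per chroma channel) in a J x 2 pixel block."""
    return a + b

assert chroma_samples_per_block(4, 4, 4) == 8   # 4:4:4 -> full chroma resolution
assert chroma_samples_per_block(4, 2, 2) == 4   # 4:2:2 -> half horizontal chroma
assert chroma_samples_per_block(4, 2, 0) == 2   # 4:2:0 -> quarter chroma overall
```

By that yardstick, well-demosaiced Bayer data yields more chroma detail than a 4:2:2 pipeline but less than a true three-sensor 4:4:4 capture, which is the point being argued above.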

Edited by Evan Owen

  • Premium Member

> Is this sort of advertising any different than Panasonic passing the HVX200 off as a 1080p camera?

 

No, and that's just as silly. Or Silicon Imaging calling theirs a 2K camera.

 

> Or what about Canon calling their new 1Ds Mark III a 21 megapixel camera?

 

If you look back up in the thread that's exactly what I said - customary use has been to specify Bayer sensors in megapixels and 3-chip blocks in dimensions. I have no problem with the idea that a Red is a 12 megapixel camera, that's absolutely fine, because everyone understands what that means. What you can't do is start using the metric that has been used in one field to describe developments in another.

 

> Also, it looks to me like there's more than 2k of resolution here... maybe not true 4k, but obviously more than 2k.

 

...which is exactly what I've been saying all along. You're not the first person to demonstrate this.

 

Phil

