Understanding Red Codec


Deji Joseph

I haven't used a Red camera personally, but I was reading about the Redcode raw format and discovered its data rate is only around 28mb/s. I thought this was low for a camera that shoots over 3K when a 5D's H.264 files have a 50mb/s data rate. I was wondering if someone could explain this to me.


  • Premium Member

"Redcode" is just JPEG-2000, used to compress the bayer mosaic data into four planes (two green, one each for red and blue). The technology is extremely similar to Cineform.

 

P

 

Interesting -- How do they separate out the two greens? And why would they want to? I'd think you'd want to keep them together because the differences between adjacent samples would be smaller, and therefore easier to compress. The green channel would have double the data of the others, but why not?


-- J.S.


  • Premium Member

No idea. Probably some sort of implementation detail; I suppose it's easier to engineer something to repeatedly compress the same sized frame, than it is to have it do two small frames then one big one. They're presumably using an off-the-shelf JP2K encoder ASIC for this work and it seems reasonable that a video-oriented device would have limitations on the frame size it can handle; the green channel data would be somewhat bigger than an HD frame, for instance.

 

It was once possible to crop these planes out of an R3D file in a hex editor and feed them to any reasonably tolerant JP2K decoder; it actually didn't look too bad just interpreted as RGB, give or take a bit of single-pixel fringing on sharp edges. Not that you'd have wanted to use that decode for anything, but it always did strike me that doing it that way actually looked quite a bit better than the Red's monitor output.

 

P


  • Premium Member

"Redcode" is just JPEG-2000, used to compress the bayer mosaic data into four planes (two green, one each for red and blue). The technology is extremely similar to Cineform.

 

P

Interesting. I was always under the impression that they simply treated the entire "Raw" output from the sensor chip as if it was a monochrome signal with a lot of high frequency information, and used JPEG2000 to compress/decompress that before feeding into a fairly standard De-Bayer chain in the processing computer.

 

Which is sort of like the traditional concept of digital stills "Raw", where the sensor data is basically intercepted on its way to the camera's de-Bayer circuitry and diverted for storage directly into a memory chip. Theoretically this gives the user an unlimited number of bites of the decoding "cherry", but it's all-too-often touted as yet another bolt-on substitute for skill, competence and talent :rolleyes:.

 

But from what you've said, what they're really storing is red and blue signals with roughly the same resolution as 1080p, and a green signal something less than twice that.

 

Which, data-overhead-wise, would be not much different to converting it to the 1080p RGB format that comes out of the Genesis/F35.

 

So, really Redcode is more or less a variant on the old HDCAM format, with a bit more resolution, using a more advanced (JPEG2000) compression codec, and stored on flash memory instead of tape.

 

There would thus appear to be no technical rationale for the claims of Redcode's allegedly superior image quality (which by the way, I've never actually seen any evidence of).

 

I'm sure if Sony felt there was any real merit in the "Raw" concept, they would by now have offered it as an option on their high-end cameras, basically diverting the output of three ADCs straight into the recording device.

 

The Epic is certainly getting some impressive bookings lined up, but I can't help noticing that they're almost all for 3-D films. I'm beginning to suspect that this has far more to do with the system's compactness and lightness than considerations of picture quality. It's a bit like people loudly commenting on the deficiencies of some of the cheaper flat panel TVs currently on the market (eg 50 inch 1024 x 768 Plasmas etc). The real situation is, people aren't necessarily buying "HDTV sets", they're mostly buying nifty new TVs that they can mount on the wall and which get lots of extra channels.


  • Premium Member

Interesting. I was always under the impression that they simply treated the entire "Raw" output from the sensor chip as if it was a monochrome signal with a lot of high frequency information

 

The compression smear would cause enormous, unworkable amounts of crosstalk between the RGB values if you did that.

 

So, really Redcode is more or less a variant on the old HDCAM format, with a bit more resolution, using a more advanced (JPEG2000) compression codec, and stored on flash memory instead of tape.

 

I wouldn't go that far; you're conflating YCrCb data with an RGB bayer matrix, which is not the same thing. It would, of course, be possible to output the raw frame over HD-SDI, which is what D21 does or did and what Alexa is reputed to soon do with ArriRaw; but it must be stored uncompressed or you'd wreck it. Or, you could do what Red do, separate the components out, and compress them individually.

 

 

There would thus appear to be no technical rationale for the claims of Redcode's allegedly superior image quality (which by the way, I've never actually seen any evidence of).

 

I'm not quite sure what you mean by this - what claims, and what do you feel has negated them? My own thesis here is that Red have wasted no time in jumping around telling everyone they're very clever for inventing an amazing new codec, whereas what they've actually done is bought a $10 JP2K encoder IC and taken the only approach they technically could in using it.

 

Now may not be the time to bring this up, but all of this does rather offend my sensibilities. I don't think cameras should output information that needs subsequent processing by the company's proprietary software. I think they should output information in formats for which the standards are free or cheap and unencumbered by patents. Anything else artificially constrains workflows, and data wrangling on anything other than the most trivial productions is now a full time job. It's hard enough; let's not make it harder. Digital camera technology puts the onus onto post enough without making them go through some enormous rendering process just to get to dailies.

 

Anyway.

 

I'm sure if Sony felt there was any real merit in the "Raw" concept, they would by now have offered it as an option on their high-end cameras, basically diverting the output of three ADCs straight into the recording device.

 

But the concept doesn't really apply to many Sony cameras. The F35 (and thus Genesis) is single-chip and could theoretically have some sort of output representing its unprocessed sensor data, although with the three-stripe chip it wouldn't be Bayer data. Perhaps you could make a claim that it would be possible to get a better-looking RGB image out of it with the more complex mathematics available in offline processing. However, Sony (and thus Panavision) chose to create a device that has standard, easy-to-handle SDI outputs which you can record onto the device of your choice (including non-Sony products). This is why the cameras are bulky and expensive and consume lots of power; Red could not have done that and achieved their price point, and their camera is still big and bulky and consumes a lot of power. Think what the damn thing'd have looked like if it had a hardware debayering engine in it.

 

Thomson did more or less what you're suggesting with Viper, a true raw device that still output viewable RGB because it was a 3 chip camera. It's noisy and soft but I still rate it.

 

P


  • Premium Member

The compression smear would cause enormous, unworkable amounts of crosstalk between the RGB values if you did that.

Do you know that for a fact though? With standard compression I'm sure you'd be right. However Raw Bayer data contains large amounts of highly repetitive information, which is the favourite food of compression engines. I'd assumed that Nattress & co had come up with some sort of variation of JPEG2000 that was optimised for positional accuracy of pixels of a certain size. But apparently not.
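The "crosstalk" point is at least easy to demonstrate numerically. A toy sketch (channel gains, layout and sizes are all invented for the demo) comparing sample-to-sample differences in an interleaved mosaic against one separated plane:

```python
import numpy as np

# Even a perfectly smooth scene produces large pixel-to-pixel jumps once
# the colour channels are interleaved into one "monochrome" mosaic.
h, w = 64, 64
y, x = np.mgrid[0:h, 0:w]
scene = (x + y) / (h + w - 2)          # smooth grey ramp, 0..1

# Simulate unequal channel responses (arbitrary gains for the demo).
r, g, b = 0.9 * scene, 0.7 * scene, 0.3 * scene

mosaic = np.empty((h, w))
mosaic[0::2, 0::2] = r[0::2, 0::2]     # RGGB layout assumed
mosaic[0::2, 1::2] = g[0::2, 1::2]
mosaic[1::2, 0::2] = g[1::2, 0::2]
mosaic[1::2, 1::2] = b[1::2, 1::2]

def mean_abs_hdiff(a):
    """Mean absolute difference between horizontally adjacent samples."""
    return np.abs(np.diff(a, axis=1)).mean()

interleaved = mean_abs_hdiff(mosaic)
separated = mean_abs_hdiff(mosaic[1::2, 1::2])   # the blue plane alone
# The interleaved mosaic shows far larger adjacent-sample differences than
# any single plane, so a transform coder would spend bits (and smear error)
# on "detail" that is really just channel interleaving.
```

On a smooth ramp the interleaved mosaic's differences come out more than an order of magnitude larger than the separated plane's, which is one plausible reading of the crosstalk argument above.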

 

I wouldn't go that far; you're conflating YCrCb data with an RGB bayer matrix, which is not the same thing.

Electronically, no, but as far as data overhead requirement goes, would there be that much difference? Even though the colour-difference signals normally spend most of their time hovering around the "zero axis" (ie 0111111111 in a 10-bit system) you still have to make provision for the possibility of the occasional "1111111111" or "0000000000" excursion. So regardless of the picture content, the camera has to continuously record a stream of 10-bit numbers for the Cr and Cb signals, at half the rate of the luminance channel.

 

In a Bayer mask, the red and blue channels only get sampled at half the rate of the green, which is arguably similar to the 50% luminance-equivalent bandwidth. Instead of sitting at a DC mid-point like a colour difference signal, the zero point is simply zero or close to it, but you still need the full 10- or 12-bit range to cope with peak brightness. So as far as data overhead goes, not a lot of difference.

 

The green isn't quite sampled at twice the red and blue rate since each successive sample is sited one pixel down, so by and large, the data rate is surely not going to be all that different to a 2K RGB sampling system.
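The back-of-envelope arithmetic supports that. A quick sketch, using illustrative frame sizes and bit depths (not official Red or Sony specifications):

```python
# Samples per frame: 10-bit 4:2:2 1080p versus a 12-bit ~4K Bayer mosaic.
def mbits_per_frame(samples, bits):
    return samples * bits / 1e6

# 1080p 4:2:2: one Y sample per pixel, plus Cb and Cr at half the
# horizontal rate -- two samples per pixel in total.
ycbcr_422 = mbits_per_frame(1920 * 1080 * 2, 10)

# Bayer mosaic: exactly one sample per photosite, whatever its colour.
bayer_4k = mbits_per_frame(4096 * 2304 * 1, 12)

# At these figures the uncompressed Bayer frame is only ~2.7x the 4:2:2
# frame -- the same ballpark, rather than the 4x a naive "4K vs 2K"
# comparison of full RGB frames would suggest.
```

So, under these assumed figures, an uncompressed ~4K Bayer frame carries roughly two to three times the data of a 10-bit 4:2:2 1080p frame, which is broadly consistent with the "not a lot of difference" argument.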

 

I'm not quite sure what you mean by this - what claims,

Well, mostly the never-ending litany of "Film is Dead" - "Sony is Dead" - "Panavision is Dead" - "I can't believe I'm not looking through a department store window!" etc etc, of various forums (including this one in the early days). All allegedly due to some fantastic new software and recording system....

 

and what do you feel has negated them?

 

Well, mostly my wasting time, money and popcorn going to the cinemas to check them out. (There's SFA Red-sourced TV footage that I know about in this country).

My own thesis here is that Red have wasted no time in jumping around telling everyone they're very clever for inventing an amazing new codec, whereas what they've actually done is bought a $10 JP2K encoder IC and taken the only approach they technically could in using it.

Quite, although not exactly a new concept :rolleyes: (At this point some flannelmouthed fanboy is sure to jump in and exclaim: "But that's just marketing!" as if saying that is the magic spray that excuses any and all corporate dodginess...)

 

Now may not be the time to bring this up, but all of this does rather offend my sensibilities. I don't think cameras should output information that needs subsequent processing by the company's proprietary software. I think they should output information in formats for which the standards are free or cheap and unencumbered by patents. Anything else artificially constrains workflows, and data wrangling on anything other than the most trivial productions is now a full time job. It's hard enough; let's not make it harder. Digital camera technology puts the onus onto post enough without making them go through some enormous rendering process just to get to dailies.

Hence the supposedly inexplicable popularity of the 5D et al.


  • 2 weeks later...
They're presumably using an off-the-shelf JP2K encoder ASIC for this work

 

JP2K is built on top of a wavelet transform, as is Cineform. And as is Redcode. Now it could be that Redcode does use the same compression algorithm as JP2K (I don't know), but that does not necessarily follow from the fact that Redcode (like JP2K and Cineform) uses a wavelet transform.

 

The wavelet transform is in the public domain and is well defined. One of the first applications of wavelet transforms was image compression, but it is not limited to that application. The wavelet transform itself is not lossy: it rearranges information in a way that is completely reversible. When used in the context of image compression, the wavelet transform provides information in a form that is easier (and more effective) to compress. However, the actual compression of a wavelet-transformed image is open to different algorithms; the compression algorithm is not a component of the wavelet transform.

 

Since the wavelet transform itself is not lossy, it means raw data can be saved in a format where it has been wavelet transformed but without any loss.
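That reversibility is easy to demonstrate with the simplest member of the family. A minimal one-level Haar transform and its exact inverse (illustration only: JPEG-2000 itself specifies the 5/3 and 9/7 filters, not plain Haar):

```python
import numpy as np

def haar_1d(x):
    """One-level Haar transform: pairwise averages and differences."""
    x = np.asarray(x, dtype=float)
    avg = (x[0::2] + x[1::2]) / 2          # low-pass (averages)
    diff = (x[0::2] - x[1::2]) / 2         # high-pass (details)
    return avg, diff

def inverse_haar_1d(avg, diff):
    """Exact inverse: reconstruct each pair from its average and detail."""
    out = np.empty(avg.size * 2)
    out[0::2] = avg + diff
    out[1::2] = avg - diff
    return out

signal = np.array([9.0, 7.0, 3.0, 5.0])
avg, diff = haar_1d(signal)                # avg = [8, 4], diff = [1, -1]
restored = inverse_haar_1d(avg, diff)
# restored == signal exactly: the transform only rearranged information.
```

Loss only enters when the detail coefficients are quantised or thrown away by the coding stage that follows, which is exactly the distinction the post draws.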

 

Carl

Edited by Carl Looper

  • Premium Member

No, Carl - Redcode actually is JP2K. Before they started encrypting it, some JP2K decoders would actually read the individual frames.

 

 

When all us doubters were saying "you can't create a camera of that spec for that price", one of the principal things that allowed them to do it was probably using an off the shelf encoder.

 

P


Hi Phil,

 

I'm not sure what you're saying "No, Carl" to.

 

I didn't say Redcode wasn't using a JP2K encoder. In fact I explicitly said "I don't know":

 

Now it could be that Redcode does use the same compression algorithm as JP2K, (I don't know)

 

Rather, what I was saying (and this remains correct) is that just because Redcode uses a wavelet transform (like Cineform) it doesn't follow that it uses JP2K. I mean if that was the case, it could also follow that it uses Cineform! However, if it does indeed use JP2K (as you say), then of course it will be using a wavelet transform, since JP2K uses a wavelet transform.

 

Carl


Hi Phil,

 

further to my previous point, I can't find anything confirming that Redcode was (or is) the product of a JP2K ASIC other than your anecdote: that once upon a time JP2K decoders could read Redcode. The JP2K output could very well have been the result of an additional transcoding step in which the data was rearranged to be compatible with JP2K decoders. That makes sense to me as well. I understand some 4K projectors were hardwired to read JP2K.

 

Also, it doesn't necessarily follow from the fact that JP2K decoders can no longer read Redcode that Redcode is now an encrypted version of JP2K data. It could very well be that Redcode now represents data at an earlier step in the output pipeline - before it was transcoded to JP2K.

 

Just presenting alternative speculation.

 

Carl


  • Premium Member

Find an old pre-encryption R3D file, straight out of a camera, and look at it in a hex editor. Find the JPEG-2000 magic number in it, just after all the timecode and take information. Copy that region of the file, paying attention to the length field in the JP2K header. Paste that into another file and save it with an appropriate filename extension. Load it into the default image viewer for JPEG-2000 files on an Apple Mac. You will see a recognisable reconstruction of the frame, with some minor chroma fringing due to the erroneous assumption that it's RGB and not Bayer data.
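That hunt could be scripted rather than done by hand. A speculative sketch that scans a blob for raw JPEG-2000 codestreams; the marker values come from the JPEG-2000 standard, but the assumption that an old R3D embeds codestreams verbatim is just the anecdote above:

```python
# JPEG-2000 codestream markers (from the standard):
SOC_SIZ = b"\xff\x4f\xff\x51"   # start-of-codestream followed by SIZ
EOC = b"\xff\xd9"               # end-of-codestream marker

def extract_codestreams(data: bytes):
    """Yield every candidate JP2K codestream found in a blob.

    Naive search: 0xFFD9 can in principle occur inside entropy-coded
    data, so a real tool would parse the marker segments properly.
    """
    pos = 0
    while True:
        start = data.find(SOC_SIZ, pos)
        if start < 0:
            return
        end = data.find(EOC, start)
        if end < 0:
            return
        yield data[start:end + len(EOC)]
        pos = end + len(EOC)

# Toy blob: header junk, one fake codestream, trailing junk.
blob = b"R3D?hdr" + SOC_SIZ + b"payload" + EOC + b"tail"
streams = list(extract_codestreams(blob))
```

Each extracted stream could then be written out as a `.j2c` file and handed to any tolerant JPEG-2000 decoder, as the post describes.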

 

Redcode is JPEG-2000. Frankly this was known long before the camera was released when someone made an unguarded reference to "the JPEG board" in a press conference.

 

P


Hi Phil,

 

thanks for the info. But you are again misreading my post. I did not say your anecdote was false; I have no reason to disbelieve you. But you say "Redcode is JPEG-2000", and for the current version of Redcode that can no longer be demonstrated. You now add that the current version of Redcode is encrypted JP2K. That could very well be the case, but there is no direct evidence of it.

 

Certainly if there was a JP2K chip then it could be a proposition. But there may be no such chip; e.g. the JP2K encoding could have been done via programmable hardware in the camera, or the wavelet transform done in fixed hardware but the JP2K-specific coding done in software. There are all sorts of alternative propositions.

 

Now I'm not against speculation. I'm speculating as well. But then I'm not arguing as if what I'm saying is the case.

 

Carl

