Help me out here


Keith Walters


  • Premium Member

I am afraid I am having a hard time swallowing this:

Red Ray

 

It is claimed that with some new codec, they can store about 2 hours of 4K video on a standard dual-layer recordable DVD (i.e. one that uses, appropriately enough, a red laser).

 

Since 4K video has four times as much information as standard 1920 x 1080, it would seem to follow that using the same sort of encoding, the same disc would hold eight hours of standard HD, or four hours on a single-layer disc, which means two regular-sized movies!

 

Since you only get about six minutes of already significantly compressed 4K footage on an 8 GB CF card, how do they manage to get another 20:1 compression out of it?
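For what it's worth, the arithmetic can be sketched out. This uses the figures quoted in this thread (8 GB ≈ 6 minutes on CF, and a nominal 8.5 GB for a dual-layer DVD), so treat the exact numbers as assumptions rather than official specs:

```python
# Back-of-envelope check of the "another 20:1" figure.
# Assumed inputs (from the posts above, not official specs):
cf_gb, cf_minutes = 8.0, 6.0        # ~6 min of compressed 4K on an 8 GB CF card
dvd_gb, dvd_minutes = 8.5, 120.0    # claimed 2+ hours on a dual-layer DVD

cf_rate = cf_gb / cf_minutes        # GB per minute as recorded in-camera
dvd_rate = dvd_gb / dvd_minutes     # GB per minute needed to fit the disc
extra_compression = cf_rate / dvd_rate
print(f"~{extra_compression:.0f}:1 additional compression needed")  # ~19:1
```

So the claim does indeed imply roughly another 19:1 on top of the in-camera compression.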

 

I am not saying it cannot be done, but I'll believe it when I see it!

 

Mr Sony will not be amused :lol:



I'm sure it's possible to compress 2 hours of 4K material onto a single DVD (well, a 9 GB disc at least) using some of the codecs available today (MPEG-4, H.264, DivX, Xvid, etc. - although I've never tried compressing 4K, so I don't know for sure), and I'm sure it wouldn't look half bad. To be fair, though, if this is their plan for a playback system for theatrical distribution, wouldn't quality be of the utmost importance? Compressing your footage down to 1.25 MB/s would be pushing it.

 

Well, assuming they've already prototyped this thing, they've presumably deemed it suitable. I hope the discs prove reliable for playback over the 2-hour running time - "transition between layers might cause a slight pause".

 

It does look suitable for dailies playback, though does that mean transferring footage back to a CF card at the end of the day to view it again, or is it possible to play back and control the player via the FW800 connection? I'm assuming that the player will be decoding/debayering the 4K footage on the fly.

 

Oh well, personally I'd rather they used NAB to release products, or to announce ones that are genuinely soon-to-be-released. Early 2009 - are any other companies announcing products that far out?


Since 4K video has four times as much information as standard 1080 x 1920, it would seem to follow that using the same sort of encoding, the same disc would hold eight hours of standard HD, or four hours on a single-layer disc, which means two regular sized movies!

 

In the current implementation, Red's R3D files store linear-light data. If Red's 4K video codec acted on this data type, it would have to be even more efficient than, say, a 4K HD-style codec, since Red's 4K data is linear light while typical HD data is gamma corrected. Gamma correction puts the signal into a perceptually better space for encoding; many compression algorithms, such as JPEG, inherently assume a perceptual-space signal. Red's codec may therefore have to work extra hard on linear-light data.
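To illustrate the point about perceptual spaces, here is a toy sketch. The simple 1/2.2 power-law gamma and 8-bit quantization are my assumptions for illustration, not Red's actual pipeline; it shows how much better a dark linear value survives quantization when it is gamma-encoded first:

```python
# Quantize a deep-shadow linear-light value to 8 bits directly, vs. passing
# it through a 1/2.2 gamma curve first. Illustrative only.
def quantize(x, levels=256):
    """Round x (in 0..1) to the nearest of `levels` code values."""
    return round(x * (levels - 1)) / (levels - 1)

linear = 0.002                                    # a dark shadow intensity

err_direct = abs(quantize(linear) - linear)       # quantized in linear light
roundtrip = quantize(linear ** (1 / 2.2)) ** 2.2  # quantized in gamma space
err_gamma = abs(roundtrip - linear)

print(err_direct, err_gamma)  # the gamma-space error is far smaller
```

The same code budget (256 levels) wastes most of its precision on highlights when applied to linear data, which is why linear-light material is harder to encode efficiently.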


  • Premium Member
They can't do a lot of the things they claim to do. You and I are the only people who seem to notice.

Well, the only people who post here at any rate :rolleyes:

 

But I would never say they absolutely cannot do it. I would, however, rather they refrained from implying they "will do" something when they only "hope to do" it.

 

I have always believed it is possible to store good 1920 x 1080 HD video (and multichannel audio) on ordinary dual-layer DVDs. 4K is stretching it, but a single demonstration would suffice to allay my skepticism.

 

(That is, a demonstration where I put the disc in and connect up the cables, not just a post from some trouser-talking fanboy waffling about how "fantastic" it is).

 

I am not sure what sort of pictures you would be getting with 4K, though. To get cinema-quality images I think you would need some sort of fractal-based compression system, another technology that has been "just around the corner" for the past few decades :lol:

 

To me the whole Blu-ray thing has little or nothing to do with concerns over the image quality consumers receive, and everything to do with trying to enforce DRM (Digital Rights Management). DRM is a dead horse that has long since been dragged away and made into dog food and is now fouling people's sidewalks (hopefully Sony's :-), but the various studio executives still go through the motions of pretending to do something about the problem.

 

A full-resolution HD system that uses ordinary DVD discs would certainly put the cat among the pigeons. Would there eventually be no difference between home video resolution and cinema resolution?

Edited by Keith Walters

  • Premium Member
Well assuming they've already prototyped this thing and they've deemed it suitable. I hope the discs prove reliable for playback over the 2 hour running time - "transition between layers might cause a slight pause".

 

Most modern DVD players (even really cheap ones) have enough RAM to store about 7 seconds of full-quality sound and video, so the transition is no longer seen. I would imagine the same would apply to RED Drive.
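If that 7-second figure is about right, the buffer involved is modest. A quick sketch, assuming DVD's roughly 10 Mbit/s maximum program rate (my assumption, not a quoted spec):

```python
# RAM needed to buffer ~7 seconds of a DVD-rate stream across a layer change.
max_mbps = 10.0                              # assumed peak video+audio rate, Mbit/s
buffer_seconds = 7
buffer_mb = max_mbps * buffer_seconds / 8    # divide by 8: bits to bytes
print(f"~{buffer_mb:.2f} MB of buffer RAM")  # ~8.75 MB
```

Under 10 MB of RAM, which is why even cheap players can hide the layer change.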

 

In any case, even if it took two (or even three) DVDs to store a full 4K movie, that would be of little significance in a cinema situation. I am sure that loading a movie into a projector would be a process analogous to installing software on a computer. The discs (with a format unreadable by a PC DVD drive) would be loaded one by one, and the movie data would be "unpacked" and stored on the projector's hard drive, with no way of reading it out again.

 

Because there would be no equivalent to a software-based DVD player on a PC, it would be just about impossible for a hacker to work out how the data is encrypted.


In any case, even if it took two (or even three) DVDs to store a full 4K movie, that would be of little significance in a cinema situation. I am sure that loading a movie into a projector would be a process analogous to installing software on a computer. The discs (with a format unreadable by a PC DVD drive) would be loaded one by one, and the movie data would be "unpacked" and stored on the projector's hard drive, with no way of reading it out again.

 

Has an internal hard drive been mentioned elsewhere for this thing? I'm going off the link you first posted, so all my assumptions are based on that. I get the impression they're planning to use regular DVD media for this process, not some personalised RED media that can only be read or written by a custom RED drive.


Guest tylerhawes

First of all, I don't see RED saying they are using standard DVDs, only that it will be off-the-shelf recordable media. That means it can be, and I logically assume it is, Blu-ray discs. A dual-layer Blu-ray disc is 50GB. The fact that it is called RED RAY seems to jibe with that theory. So that alone makes this not so far-fetched.

 

Secondly, the wording on the site is:

  • PLAYS 4K, 2K, 1080P, 720P AND SD FROM RED DISC AND RED EXPRESS
  • ALSO PLAYS NATIVE RAW R3D FILES FROM COMPACT FLASH

Note that "also plays native RAW R3D" applies only to compact flash. So the 2+ hrs. of 4K is for RED-compressed RGB images, not the raw linear-light images. That means it can be in video gamma, on a level playing field with other compression.

 

So, considering that 4K 12-bit RAW files take up approx. 1 GB/minute, or about 120 GB for 2 hours (not exact, I know), and that these could instead be 8-bit RGB images stored to a 50 GB disc, this does not seem so unrealistic.
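Putting those rough numbers together (the 1 GB/minute and 50 GB figures above are approximations, as noted):

```python
# How much further the ~120 GB of 4K RAW would need to shrink to fit a
# 50 GB dual-layer disc. Approximate figures from the post above.
raw_gb_per_min = 1.0
runtime_min = 120
raw_total_gb = raw_gb_per_min * runtime_min   # ~120 GB of 12-bit RAW
disc_gb = 50.0
needed_ratio = raw_total_gb / disc_gb         # only ~2.4:1 beyond the RAW
print(f"{raw_total_gb:.0f} GB -> {needed_ratio:.1f}:1 onto a {disc_gb:.0f} GB disc")
```

Only about 2.4:1 on top of the RAW files, if the disc really were Blu-ray capacity; the 9 GB DVD case implies a far steeper ratio.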


First of all, I don't see RED saying they are using standard DVDs, only that it will be off-the-shelf recordable media. That means it can be, and I logically assume it is, Blu-ray discs.

 

Well, according to Ted Schilowitz, Leader of the Revolution for Red, it does use standard DVDs, as Mike Curtis reports from NAB 08...

 

"this allows for burning 2+ hrs of 4K footage, with 5.1 audio, to a standard DVD-9. They will be using an RGB codec, so you’ll be able to master your show with whatever tools you use, then run it through their unannounced compression software to a file format that goes on a standard DVD-9 burnable, red laser, normal DVD..."

 

Now it could be that Mike Curtis has misunderstood or misheard something that Red told him, but he does refer several times to a standard DVD-9, red laser, normal DVD. Note also the use of a red laser; that would suggest it isn't Blu-ray, and that the "Red Ray" name is more a play on words than anything else.

 

Again, this is all speculation, and I'm sure by the time the final spec arrives in early 2009 it will have changed to include an HD-DVD player instead. I'm interested to see how they do with these new products; hopefully, at the very least, it gets the bigger, older players in the industry to rethink a couple of things.

Edited by Will Earl

Guest tylerhawes
Well according to Ted Schilowitz, Leader of the Revolution for Red it does use standard DVDs, as Mike Curtis reports from NAB 08...

 

Oh. :ph34r:

 

Now it could be that Mike Curtis has misunderstood or misheard something that Red have told him

 

Doubtful. I know Mike. He's pretty careful and detail-oriented.

 

Well golly, now I'm wondering along with the rest of you. I know JPEG2000 compression (which AFAIK is basically what RED is using/tweaking for their codecs) is efficient, but 4GB/hr for 4K? I'm not going to worry about this anymore until it's available for purchase :)


I was confused by this announcement. Is the Red Ray designed for long term lossless archiving of the footage? Or is it just supposed to be a viewing tool that has the benefit of outputting 4K (assuming you have a suitable display) albeit at some level of compression?

 

I assume if there was any way of losslessly compressing the raw data further, they'd have done it already in the camera.


9 GB for two hours is about 10 megabits/second. H.264 can do perfectly watchable 2K at that data rate, and it's a five-year-old algorithm at this point. It wouldn't be all that surprising if a newer algorithm could do somewhat better, particularly if you were willing to accept something extremely computationally intensive, which would presumably be fine for the Red Ray, which obviously contains a fairly high-end DSP.
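For anyone who wants to verify that figure (using decimal gigabytes, which is how disc capacities are usually quoted):

```python
# Average bitrate of a 9 GB disc played out over two hours.
disc_bits = 9e9 * 8          # DVD-9 capacity in bits (decimal GB assumed)
seconds = 2 * 3600
mbps = disc_bits / seconds / 1e6
print(f"{mbps:.0f} Mbit/s")  # 10 Mbit/s
```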

 

And remember, given two images of the same subject at different resolutions, you can almost always get away with a higher compression ratio for the larger one. For any natural image (i.e. anything other than something like a single-pixel checkerboard pattern filling the entire frame), the larger image is going to contain less detail, on average, per unit of area. The upshot is that compressing 4K to the same size as 2K doesn't require an algorithm four times as efficient; it maybe requires one twice as efficient.

 

All of that said... I won't be too surprised if in the end they decide they need to bump the data rate a bit and only end up with 90 minutes on a disc. (Note that the device has a FW800 port, presumably for reading content from external hard drives, so there would still be a solution for playing back longer content without interruption.)


I was confused by this announcement. Is the Red Ray designed for long term lossless archiving of the footage? Or is it just supposed to be a viewing tool that has the benefit of outputting 4K (assuming you have a suitable display) albeit at some level of compression?

 

It's a viewing tool. To get down to those kinds of bit rates, they have to be using inter-frame compression, which isn't a great idea for acquisition (not that that's stopped some vendors...) and probably even (though there's no info about this) dropping down to 8-bits.

 

In addition to working with this apparently new distribution codec, the player can also play back REDCODE RAW files directly from CF cards or attached Red Drives (maybe other attached hard drives too?), which might make it useful to have kicking around on-set or in an edit bay. Actually, this last use case could be particularly interesting. If this device can play back REDCODE RAW files at 2K or 1080p at high quality, hooking it up to a capture card provides a very low-cost mechanism for transcoding REDCODE RAW to uncompressed HD (or ProRes or whatever) in real time. That makes Red's cameras useful in applications where the long transcode times needed to output high quality footage have thus far made them somewhat impractical.

 

(Assimilate's new product, SCRATCH CINE, can also do real-time 2K transcoding of REDCODE RAW, but costs $30K with appropriate hardware.)


  • Premium Member
And remember, given two images of the same subject at different resolutions, you can almost always get away with a higher compression ratio for the larger one. For any natural image (e.g. not something like a single-pixel checkerboard pattern over the entire image), the larger image is going to contain less detail, on average, per unit of area.

Do you have proof of this? Are you sure it's true?

 

 

 

-- J.S.


Do you have proof of this? Are you sure it's true?

 

In my fairly extensive experience with compressing video and still images, yes. An image with four times the number of pixels will generally be between two and three times the size, with modern compression algorithms, particularly lossy algorithms. This is rather unsurprising when considering how such algorithms work. A larger image is almost always going to have more redundancy in it.

 

Playing around with exporting JPEGs from Photoshop will usually show this effect, even when exporting differently sized images with the exact same quality settings. And that's actually slightly under-representing this effect, because you can generally get away with knocking compression quality down a bit at higher resolutions, while still ending up with a much better looking image. Take a look at the HD trailers on Apple's web site, for instance. The 1080p versions are often less than 40% larger than the 720p versions, despite having more than twice as many pixels. And I'd imagine Apple knows a thing or two about distribution codecs; those videos are encoded with Apple's own H.264 implementation.

 

Downscaling an image to a lower resolution is, when you get right down to it, basically just an extremely primitive form of data compression. Is it really surprising that modern image compression algorithms, which analyze image content, can reduce file size while maintaining important detail better than scaling algorithms, which do no such analysis?


And remember, given two images of the same subject at different resolutions, you can almost always get away with a higher compression ratio for the larger one. For any natural image (e.g. not something like a single-pixel checkerboard pattern over the entire image), the larger image is going to contain less detail, on average, per unit of area. The upshot is that compressing 4K to the same sizes as 2K doesn't require an algorithm four times as efficient; it maybe requires one twice as efficient.

 

We have used the reverse of this many times in our calculations. In my experience, a general rule of thumb is that if an image's size is reduced by a factor of 2, the data rate required to keep more or less the same image quality is reduced by sqrt(2) (and not by a factor of 2, as might be expected).
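As a sketch of that rule of thumb (the 20 Mbit/s starting figure is a made-up example for illustration, not a measured rate):

```python
import math

# Rule of thumb above: reducing image size by 2 reduces the bitrate needed
# for comparable quality by ~sqrt(2), not by 2.
full_size_mbps = 20.0                   # hypothetical rate at full size
half_size_mbps = full_size_mbps / math.sqrt(2)
print(f"~{half_size_mbps:.1f} Mbit/s")  # ~14.1, not 10
```

In other words, the smaller image still needs proportionally more bits per pixel, which is the same observation as "bigger images tolerate higher compression ratios" seen from the other direction.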


  • Premium Member
A larger image is almost always going to have more redundancy in it.

That's something I used to believe back in the early days of DCT compression. Try a thought experiment:

 

Suppose we take a picture with a crosslit tree in the background. We shoot it once at low resolution, getting two or three leaves on each pixel. The bright leaves and shadows between them average out, and all the pixels land in a fairly narrow range of color and brightness. Now, shoot exactly the same composition, only with higher resolution, so that we have two or three pixels on each leaf. Some pixels get the bright green of the sunlit leaves, but others very nearby land in the dark shadows. In this case, the high resolution image has less redundancy than the low resolution image.

 

Of course there are a lot of examples that work the other way. For instance, the gradient between the nearly white sky at the horizon to the bluer sky above. Do it over 100 pixels and the pixel to pixel difference will be greater than if you make the same transition over 1000 pixels.

 

Come to think of it, if everything really worked the second way, there would be no point to making higher resolution imaging systems.

 

 

 

 

-- J.S.


Suppose we take a picture with a crosslit tree in the background. We shoot it once at low resolution, getting two or three leaves on each pixel. The bright leaves and shadows between them average out, and all the pixels land in a fairly narrow range of color and brightness. Now, shoot exactly the same composition, only with higher resolution, so that we have two or three pixels on each leaf. Some pixels get the bright green of the sunlit leaves, but others very nearby land in the dark shadows. In this case, the high resolution image has less redundancy than the low resolution image.

 

Sure, if you've got the entire frame filled with material like that, but that's pretty rare. And remember, 2 hours in 9 GB is 10 megabits/sec as an average bitrate. A modern multi-pass variable bitrate codec, if it encountered a few shots like that in a feature-length film, could easily allocate more bits to preserving that detail.


  • Premium Member
Sure, if you've got the entire frame filled with material like that, but that's pretty rare.

That's just an extreme thought experiment to prove the existence of material that reveals additional complexity when you resolve it better. In fact, such material is everywhere: in fabrics, furniture, pictures on the back wall, text on paper, actors' hair, etc. If it weren't so, there would be no point in going to higher resolutions.

 

To the extent that we think we're getting away with more compression on finer pixel grids, we're kidding ourselves. Mostly because it looks OK on monitors that don't have enough resolution to show us what's really happening.

 

 

 

-- J.S.


That's just an extreme thought experiment to prove the existence of material that reveals additional complexity when you resolve it better. In fact, such material is everywhere: in fabrics, furniture, pictures on the back wall, text on paper, actors' hair, etc. If it weren't so, there would be no point in going to higher resolutions.

 

Yes, but it rarely fills the entire frame (there are usually low-detail areas somewhere else in the frame), it rarely changes completely between each two adjacent frames (relevant to inter-frame compression algorithms), it essentially never fills an entire program (relevant to VBR compression algorithms), and some of it looks just as believable to the human visual system even if, in a mathematical sense, information is lost (relevant to lossy compression algorithms).

 

To the extent that we think we're getting away with more compression on finer pixel grids, we're kidding ourselves. Mostly because it looks OK on monitors that don't have enough resolution to show us what's really happening.

 

Well... if we follow this argument to its logical conclusion, using data compression techniques is never beneficial versus simply reducing resolution. This is clearly not the case. REDCODE 28 has about the same data rate as uncompressed SD video. Feed them both to a 4K projector aimed at a 60' screen, and we're not "kidding ourselves" about the former looking vastly better than the latter.
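A rough check of that comparison, assuming REDCODE 28 denotes roughly 28 MB/s and taking 8-bit 4:2:2 NTSC-size frames for the SD side (both are my assumptions, and the figures are approximate):

```python
# Uncompressed 8-bit 4:2:2 SD vs. an assumed REDCODE 28 data rate.
sd_bytes_per_frame = 720 * 486 * 2           # 4:2:2 averages 2 bytes per pixel
sd_mb_per_s = sd_bytes_per_frame * 30 / 1e6  # ~30 fps, decimal megabytes
redcode_mb_per_s = 28.0                      # assumed meaning of "REDCODE 28"
print(f"SD ~{sd_mb_per_s:.0f} MB/s vs REDCODE ~{redcode_mb_per_s:.0f} MB/s")
```

Uncompressed SD works out to roughly 21 MB/s, i.e. the same ballpark, which is the point of the comparison.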

 

Within the bounds that a compression algorithm is designed to work in, delivering a larger image and then cranking the compression ratio up to keep the file size down will almost always produce a better image, because, as I said before, a modern compression algorithm is far better at deciding which detail it can get away with discarding than is a downscaling algorithm.

 

BTW, here's Red's workflow guy on this subject.


Within the bounds that a compression algorithm is designed to work in, delivering a larger image and then cranking the compression ratio up to keep the file size down will almost always produce a better image,

 

Not always true. Such scenarios only work for certain ratios of image sizes and data rates.

We have routinely seen cases where a compressed HD image looks poorer than a lower-data-rate, SD-sized image that has been upscaled, after decompression, to the same size as the HD image.

 

a modern compression algorithm is far better at deciding which detail it can get away with discarding than is a downscaling algorithm.

 

In *practice*, the implementation is kept as simple as possible. Typically, implementations of compression algorithms opt for a simple objective function for parameter optimization, to cope with the large search space.

Edited by DJ Joofa

  • Premium Member
Well... if we follow this this argument to its logical conclusion, using data compression techniques is never beneficial vs. simply reducing resolution.

No, that's not at all what I'm saying. What I'm saying is that for any value of x, applying x:1 data compression to an image using the same algorithm does about as much harm no matter whether the image you start with is 4K or SD.

 

That's because contrast and detail in real-world objects vary randomly throughout whatever range of magnification we want to apply to them, whether we're looking at the whole planet or at bacteria through a microscope. (OK, when you get down beyond optical microscopy into electron microscopy, there may be stuff I'm not aware of.)

 

Look at it another way: Take a 4K image, crop 640 x 480 out of it. Apply your choice of compression to both images, maybe a few different algorithms and ratios to be sure of having some easily recognizable artifacts. A/B them pixel for pixel on a large projection screen.

 

Compare the common area. According to the "big images compress better" theory, that region in the compressed 4K should look a lot better than it does in the cropped version using the same compression.

 

The reason we think that big images compress better is that we don't look at them pixel for pixel on a big projection screen. We look at them on CRTs that don't have a snowflake's chance of actually displaying 2K resolution, let alone 4K.

 

BTW, the guy in the link looks to be an end user, not a Red employee, right?

 

 

-- J.S.


Compare the common area. According to the "big images compress better" theory, that region in the compressed 4K should look a lot better than it does in the cropped version using the same compression.

 

If you compress the images such that you're using the same number of bits per pixel averaged over the entire image, it very well could. Because, of course, in the 4K image, there might be areas of low detail which can be represented with fewer bits, meaning that the actual compression ratio for the specific area of the image you're looking at might be lower. This cropping test is actually more likely to show an advantage for the larger image than a test involving two differently scaled versions of the same frame, assuming you choose to examine a high-detail area of an image which also has some low-detail areas.

 

What I'm saying here can, by the way, be confirmed in about 10 minutes of testing with some high-resolution images and a copy of Photoshop. Spit out JPEGs of the images scaled to various sizes, but with the same quality level selected.

 

BTW, the guy in the link looks to be an end user, not a Red employee, right?

 

Anyone on RedUser with their name in red is a Red employee.


  • Premium Member

I could spend the next hour raising my blood pressure by listing and pointing out what's wrong with all the horrible inanities in this thread, but Tim Tyler says I'm not allowed to criticise Red anymore so I'll limit myself to this:

 

Jannard is selling an idea - something that a lot of people desperately want to be true, four hours of 4K on a DVD at production quality. It does not exist, and it will never be built because it cannot be built. The camera they advertised two years ago still has not been built and cannot be built. By their reckoning, an F35 is a 6K camera. It's ludicrous. I have never, ever come across a company which relied on stretching the truth this far, and got away with it.

 

Yes, yes, there are people selling these attractive fictions to producers all over the world, and since producers are often (a) not that smart and (b) endearingly desperate to believe in anything that can save them money, I'm sure these people will do very well. But at the end of it, most of them are engineers who are honest enough, at least with themselves, to know that they're selling fantasies. I think this is a pretty sad reflection on the industry as a whole.

 

P


This topic is now closed to further replies.
