3D HD CineAlta, etc.


Guest Ultra Definition


How about the 'L3' compression used in the Origin?

 

This is supposed to knock a 1 Gb/sec feed down to 300 Mbps, but it's totally lossless. That is, the original image can be precisely reconstructed from the compressed one.

 

:blink:


Hi Phil, this may be going a little bit OT, but you wouldn't happen to know anything about LZW compression, would you? I tried Googling a bit and couldn't come up with much, but I know that LZW compression is, like Huffman, lossless. Is there any reason why this hasn't become as popular as Huffman compression (or maybe it has, I just haven't heard much about it), e.g. is it not as efficient, or as fast, etc.?

 

(And to go even more off-topic, ;) , would it be possible to apply another lossless algorithm on top of an existing one, e.g. LZW-compressing a Huffman-compressed signal?)


I think compression is just a shortcut to achieving some technical features that are technologically impossible or too expensive now.

But as the technology advances, more storage will be available, and there will be no need for compression.


Guest Ultra Definition

That is true, but you'll still use compression for distribution. For acquisition, 10-bit sampling will be about equivalent to the latitude of negative stock. But for distribution you don't need this latitude.


  • Premium Member

Hi,

 

> would it be possible to apply another lossless algorithm on top of an existing

> one, e.g. LZW-compressing a Huffman-compressed signal

 

Er, yes, and you've no idea how close you've come to a rather epiphanical realisation there. Lempel-Ziv-Welch (developed by Terry Welch on top of earlier work by Lempel and Ziv) is a dictionary-based sequence-substitution scheme, and stacking schemes in exactly that way is how some very common formats work - DEFLATE, the format behind ZIP and gzip, runs a Lempel-Ziv pass and then Huffman-codes the output. What you can't usefully do is run LZW over data that has already been Huffman-compressed: well-compressed output looks statistically random, so there's almost nothing left for the second pass to find.
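
To make that concrete, here's a minimal sketch using only Python's standard-library codecs - zlib's DEFLATE, which really is an LZ pass followed by a Huffman pass, with LZMA standing in as the second lossless algorithm. The exact byte counts will vary with the input; the point is how little the second pass achieves:

```python
import lzma
import zlib

# Some highly redundant sample text.
data = b"the quick brown fox jumps over the lazy dog " * 1000

once = zlib.compress(data, 9)   # DEFLATE: an LZ77 pass, then a Huffman pass
twice = lzma.compress(once)     # a second lossless scheme stacked on top

print(len(data), "->", len(once), "->", len(twice))

# Both stages stay perfectly lossless...
assert lzma.decompress(twice) == once
assert zlib.decompress(once) == data
# ...but the second pass gains almost nothing: DEFLATE output is already
# close to statistically random, so there's little redundancy left to find.
```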

 

> Is there an explanation at the beginner level?

 

Grumble, mutter, oh, alright then. Propeller caps on, everyone.

 

Huffman works by encoding statistically frequent sequences with a short code, and statistically infrequent sequences with longer ones. This immediately reveals the flaw in the system: certain kinds of data compress very poorly. The example most often given uses the alphabet.

 

It's generally known that the most common letter in English-language text is E. The ASCII standard (which you're reading right now) encodes lowercase e as character 101, or 01100101 in binary. So, if you were to replace every instance of the letter E in a text with just a single bit, 1, you'd save seven bits of space for every E. Given that roughly 13% of the letters in most English text are Es, that's not an inconsiderable saving - and the original text can be precisely recovered, with zero loss.
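
As a back-of-envelope check (illustrative figures, not measurements from any real text):

```python
# What the single-bit-for-E trick saves, assuming 8-bit characters and
# E making up roughly 13% of the text.
e_share = 0.13
bits_saved_per_char = e_share * 7   # each E shrinks from 8 bits to 1
print(f"{bits_saved_per_char:.2f} bits/char, ~{bits_saved_per_char / 8:.0%} smaller")
# -> 0.91 bits/char, ~11% smaller
```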

 

Obviously, we can do better than that - there are other common characters (T is the next) which could be represented using shorter codes. In fact, even if some of the very uncommon characters (Q, Z, X) ended up needing longer strings than they had before, just to ensure they were unique, we could still save space overall.

 

The critically clever bit of Huffman encoding is the way the code tables are designed so that the shortest code strings are assigned to the most common characters. It should be noted at this point that Huffman encoding is not completely optimal - it's the best you can do while giving every character a whole number of bits, but it doesn't reach the theoretical minimum size; if it did, layering another scheme on top (as DEFLATE does) wouldn't gain you anything. Anyway, let's pretend that we only have five letters in the alphabet, with frequencies running from 5 down to 1, just for the sake of example:

 

A 5

B 4

C 3

D 2

E 1

 

To skip over a lot of explanation (Google if you want to know more), it turns out that the optimal way to create the code strings for each letter is to build a tree structure. We take the two least frequent characters - D and E in this case - combine them to form a new character standing for either, and add their frequencies:

 

A 5

B 4

C 3

DE 3

 

Repeat, combining the two least likely characters each time, until only one group remains:

 

A 5

B 4

CDE 6

 

AB 9

CDE 6

 

Then we can easily draw it as a tree - each fork can go one of two ways, which we represent as a binary 1 or 0:

 

(root)
  1 -> AB
         1 -> A
         0 -> B
  0 -> CDE
         1 -> C
         0 -> DE
                1 -> D
                0 -> E

 

 

Assuming in each case that the upper branch is numbered 1 and the lower branch 0, we can see that:

 

A = 11

B = 10

C = 01

D = 001

E = 000
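
If you'd rather see that as code, here's a minimal Python sketch of the same merge-the-two-least-frequent procedure (an illustration only, not production code; heap tie-breaking can flip some 1s and 0s relative to the hand-drawn tree, but the code lengths come out the same):

```python
import heapq

def huffman_codes(freqs):
    """Build a Huffman code table from a {symbol: frequency} dict by
    repeatedly merging the two least frequent groups, as above."""
    # Each heap entry: (combined frequency, tie-breaker, {symbol: code so far}).
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, least = heapq.heappop(heap)    # least frequent group
        f2, _, second = heapq.heappop(heap)   # second least frequent group
        # Label one branch 1 and the other 0; which side gets which bit
        # doesn't affect the code lengths.
        merged = {s: "1" + c for s, c in second.items()}
        merged.update({s: "0" + c for s, c in least.items()})
        tie += 1
        heapq.heappush(heap, (f1 + f2, tie, merged))
    return heap[0][2]

print(huffman_codes({"A": 5, "B": 4, "C": 3, "D": 2, "E": 1}))
# -> {'A': '11', 'B': '10', 'C': '00', 'D': '011', 'E': '010'}
# Same code lengths as the hand-built table; some 1/0 choices differ.
```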

 

The way this can be applied to digital picture data is hopefully pretty obvious; brightnesses of the pixels in the Y, U and V planes are also encoded as numbers.

 

However, unlike the English language, the statistical likelihood of pixels being any given brightness - or, in the case of the implementation in Digital Betacam, any degree of brightness more or less than the one that came before - is not known in advance. It is given that E is the most common letter in English; it is not given that any image will have a particular luminance distribution. This means that while you can create the tree once for the English language and it should be nearly optimal for any English-language text, a Huffman image compressor must first examine every frame and create a code table based on the likelihood of whatever phenomenon it is using - usually the difference in brightness between adjacent pixels, which tends to be small. Therefore a Digital Betacam recorder must examine each pixel (414,720 pixels for the Y plane, half that for the U and V) and work out a statistical probability table, then create the code table by combination of the least-likely pairs. Considering that Digibeta records in ten-bit colour depth, so each sample may have one of 2^10, or 1,024, values, this is a pretty big job.
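
As a rough sketch of that per-frame statistics pass (purely illustrative, not Sony's actual implementation; the synthetic frame here is a stand-in for real video, where the differences cluster near zero):

```python
from collections import Counter

# A hypothetical 720x576 ten-bit luma frame (values 0..1023).
frame = [[(x * x + y) % 1024 for x in range(720)] for y in range(576)]

# Tally the difference between each pixel and its left-hand neighbour -
# the phenomenon the per-frame code table is built from.
histogram = Counter(
    row[x] - row[x - 1] for row in frame for x in range(1, len(row))
)

# 'histogram' is the statistical probability table from which this frame's
# Huffman tree would be built, merging the least likely pairs first.
print(histogram.most_common(5))
```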

 

Decoding Huffman is, unusually for video codecs, somewhat less of a brain teaser due to one trick of encoding I didn't mention above. Consider our example code table:

 

A = 11

B = 10

C = 01

D = 001

E = 000

 

If we were to swap around some of the positions in our tree diagram, not changing the probability levels but just which side of the branch things were on, we could arrange that each letter was actually encoded with a number representative of its order in the alphabet:

 

(root)
  1 -> CDE
         1 -> DE
                1 -> E
                0 -> D
         0 -> C
  0 -> AB
         1 -> B
         0 -> A

 

Notice that this is essentially the same deal inverted. It produces the following code table:

 

A = 00

B = 01

C = 10

D = 110

E = 111

 

Each character is encoded with a sequential binary number, in alphabetical order. This is a clever bit of thinking, because it means that you do not have to send the entire code table for the decoder to be able to recover the image. Instead, you need only send the length of the code string for each character group:

 

A,B or C = 2

D or E = 3

 

Thereby, if you get a three-bit code 110, you know that it must be the lowest-numbered item in the three-bit group, or a D. This is important, because while our code table is small for a five-character data set, it's enormous for a 1,024-value data set and would take up lots of data space.
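
Here's that trick as a minimal Python sketch - rebuilding the whole table from nothing but the code lengths, which is what's nowadays called a canonical Huffman code (again an illustration, not any particular product's implementation):

```python
def canonical_codes(lengths):
    """Rebuild the full code table from a {symbol: code length} dict."""
    # Sort symbols by (code length, symbol) so codes come out in order.
    symbols = sorted(lengths, key=lambda s: (lengths[s], s))
    table, code, prev_len = {}, 0, 0
    for s in symbols:
        code <<= lengths[s] - prev_len   # lengthen the code when needed
        table[s] = format(code, f"0{lengths[s]}b")
        code += 1
        prev_len = lengths[s]
    return table

# The decoder only ever needs the lengths, never the whole table:
print(canonical_codes({"A": 2, "B": 2, "C": 2, "D": 3, "E": 3}))
# -> {'A': '00', 'B': '01', 'C': '10', 'D': '110', 'E': '111'}
```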

 

And that's how Huffman encoding works. Anyone with decent high-school maths should be able to work out why symbol frequencies that follow sequential Fibonacci numbers produce the deepest possible tree, and therefore the longest codes - the worst case for Huffman.
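
A quick demonstration of that, reusing the huffman_codes() sketch from above:

```python
# Fibonacci frequencies force every merge to involve the previous merge's
# result, producing the most lopsided tree - and the longest codes -
# that Huffman can make.
fib = {"a": 1, "b": 1, "c": 2, "d": 3, "e": 5, "f": 8, "g": 13}
print({s: len(code) for s, code in sorted(huffman_codes(fib).items())})
# -> {'a': 6, 'b': 6, 'c': 5, 'd': 4, 'e': 3, 'f': 2, 'g': 1}
```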

 

Phil


Yes, but once the technology gets really cheap, the question is "why not?" If something is so cheap that it is a waste of time making compression algorithms, then why use compression at all?

Of course I'm not talking about the near future; I'm talking about the technology maybe 20 or 30 years from now.

Why bother drinking artificial orange-flavoured soda if you can buy real orange juice for the same price?


Guest Ultra Definition

Tenobel's quote:

So untrue. 10-bit is a start, but it's only a start.

 

So untrue? It gives you 10 f-stops of latitude.


  • Premium Member

Hi,

 

What?

 

Increasing bit count gives you finer gradations between colours, which can avoid banding. It has absolutely no impact on or connection with the dynamic range of the imaging device.
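
A two-line illustration of the point, with the sensor's full range normalised to hypothetical 0..1 units:

```python
# More bits divide the SAME black-to-white interval into finer steps;
# the endpoints - the dynamic range - don't move at all.
for bits in (8, 10):
    levels = 2 ** bits
    print(f"{bits}-bit: {levels} levels, step {1 / (levels - 1):.5f} of full range")
```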

 

Phil


Guest Ultra Definition

Good sensors have the same latitude, or dynamic range (10 f-stops); latitude and dynamic range are basically the same thing.

So 10 f-stops is not good enough? How many stops do you think colour negative stock has, or would you like it to have? You can't live with 10 stops?


I'm sorry, I was thinking about the dynamic range, which is higher than 10 stops.

But I wouldn't agree with you. The dynamic range of film density is not the same thing as latitude. Latitude is the dynamic range of usable images - in other words, how much you can over- and underexpose and still capture detail.

The dynamic range of film density is the difference between the highest and lowest density on the negative.


I'm getting so sick of this. "Ultra Definition" is such a joke of a tag name - you seem to make up your own definitions for things all the time. A grand mish-mosh of terms and comparisons between things that DO NOT COMPARE. I've just come back from a weekend of actual shooting - yes, I really work in this business and use these tools that you blab about - and I find post after post of misinformation, deceptive terms and misguided statements. So many of the things you say here are absolutely wrong. Have you ever had a real, serious conversation with an HD engineer, or someone at a post-production facility who actually deals with the different formats you talk about all the time? Do you even know anything outside of your Sony handbook? It's infuriating, because you seem to have all the time in the world to spread your bullshit, and people such as myself can't even spend the time to read through it, much less correct it for others. You make grand statements again and again about technical measurements that are patently wrong, and you act as if you know everything, when it is obvious to someone like me who does know a bit that you are wrong about so very many things. My favorites are how all the world is going to MPEG (it's not) and how Panasonic's DVCPRO HD only utilizes 40 Mbps in 24p (completely wrong).

You know what I would respect? I would respect someone who asked if others knew what was up in the world of HD. And maybe someone who said NOTHING until after the NAB conference in Las Vegas next month. But plainly you are not being paid to do that. Either you are a shill for Sony (you seem to love all their products - do you still own a Betamax?), or you are simply someone who likes to spout off about things he really doesn't know about for the sake of hearing his own voice. The Cliff Clavin (from "Cheers") of the cinematography internet world.

Just shut up for a while, won't you?


Guest Ultra Definition

Listen, you guy who claims to know high definition. You were making comment after comment here about my statements, never talking specifics. Just because you said that you've shot a couple of features in HD does not make you an expert. If someone is a shill, it is probably you - probably for Panasonic. I like Sony because they make the best HD products. So if you have anything specific to say, say it. Your general statements mean nothing.

 

I wanted to be nice, and I did not want to argue with you and some others on this board who talk about areas they don't understand. So I did not correct some incorrect statements; I let it go. But you're so arrogant, and so ignorant when it comes to HD.

 

This, Mitch Gross, is the only specific thing you said:

My favorites are how all the world is going to MPEG (it's not) and how Panasonic's DVCPRO HD only utilizes 40 Mbps in 24p (completely wrong).

 

I already explained that all new Sony formats are MPEG: the blue-laser-based XDCAM, the new CineAlta SR, HDV, Blu-ray, and IMX. There is no new Sony format that does not use MPEG-2 or MPEG-4. You don't get it, do you?

 

As for Panasonic's Varicam, which is DVCPRO HD: shills for Panasonic have been claiming that it is less compressed than Sony's CineAlta, that it has more color information than CineAlta, that it is more movie-like than CineAlta, and that the bit stream is 100 Mbps at 24p.

 

As a result of these untrue statements, quite a number of Varicams were sold. None was ever used for a major motion picture. You know why? Because the real DPs know that the effective bit stream at 24 fps is not 100 but only 40 Mbps; they know that the format is more compressed than CineAlta. But people like you spread a lot of false information about this inferior Panasonic system.

 

Sure, Varicam records in 720p, which is the lowest grade of HD. It lies halfway between standard-definition 480p, where DV belongs, and the maximum level of HD, 1080p, which is Sony's CineAlta.

 

Panasonic's Varicam records in 4:2:2, while CineAlta records in 17:6:6. Panasonic's DVCPRO HD, where Varicam belongs, is 63.4% more compressed than CineAlta. This is despite the fact that CineAlta has 267% of the number of pixels that Varicam has. Sony's chroma is 41.7% more compressed, relative to luma, than on the Panasonic system. But because the Panasonic is more compressed overall, guess which system has more compressed colors. Do a little bit of calculating and you'll see that what the Panasonic shills are claiming is not true. The Panasonic does not have better colors than CineAlta because of lower color compression. It is nonsense that the Panasonic shills have been claiming.

 

When it comes to the bit rate, it records 100 Mbps at 60 fps. So it records slow motion; that is all it records. Then you go through a frame-rate down-converter and remove 60% of those frames to get 24 fps. So you discard most of the information and are left with 40% of it - a 40% effective stream compared to the acquisition stream.
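
(The arithmetic behind that claim, spelled out: 100 Mbps recorded at 60 fps, keeping only 24 of every 60 frames, leaves 100 x 24/60 = 40 Mbps of effective picture data.)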

 

You know what? There are people like you who have shot with Panasonic's Varicam, were brainwashed by Panasonic, and then go around making untrue statements about the Varicam system.

 

The truth is that the system is more compressed than a $300 DV camera - 34% more, actually. Did you know that? Just because it is HD does not mean it is great HD. The only practical high-grade HD is the new CineAlta SR.

 

So please, if you don't understand the technical stuff, don't make these untrue general statements. You are a film DP who has shot a couple of movies in HD; that does not make you an expert on HD technology.

 

I'd debate you on the subject anytime, anywhere. But please, again, don't make untrue statements. You are doing a disservice to this film community, which needs to learn about HD sooner or later, and the stuff that Panasonic shills have been spreading is outrageously untrue.

 

Now, to be technically correct: by "shill" I mean not necessarily a person paid by Panasonic, but someone who spreads untrue propaganda about their products.

 

So unless you want to talk specifics, don't make untrue general statements.

 

I have a break on my project; that is why I have the time to post right now.

 

If someone wants to learn some of the HD basics, here's some good Microsoft info:

 

http://www.microsoft.com/windows/windowsme...gHDFormats.aspx


Guest Ultra Definition

1080p is the highest level of the HD standard. Of course a company can create higher resolution.


  • Premium Member
Just because you said that you've shot a couple of features in HD does not make you an expert.

And what are your qualifications if I may ask?

 

Since I can't find any credits under 'Joe Tallen' on www.imdb.com, nor does this name show ANY results with a search engine, all your posts are pretty worthless in my opinion.

 

And please don't go insulting members like Mitch on this forum who actually know what they are talking about.


If you thought that the two Rodriguez films I mentioned had sufficient quality, the SR system should be about equal to celluloid by the time it gets projected. I do not mean equal for a fine DP to analyze. It will be good enough for the producer, it will be good enough for most directors, and it will be more than enough for the average theater-goer. It will not be better than or equal to film; it will be good enough. CineAlta was not good enough.

This is the significance of CineAlta SR: it is good enough to compete with celluloid.

 

Truly sorry, Ultra... Even my layman girlfriend wondered why "Once Upon a Time..." looked so bad. Worse cinematography is hard to come by, I'd say.

And why, oh why, if you really care about the quality of the images you yourself produce, would you even go near the statement "good enough"???

For a harsh-language approach: Screw that!

That clearly tells the whole story of you: all-knowing about technicalities, and dumbfounded when it comes to achieving art that makes yourself feel fulfilled. You cannot fulfill the nature of a script and your own visions with a "good enough" approach. Any director with common sense would throw you off the set if the phrase "good enough" was uttered in conjunction with his work.

Having used film, CineAlta, DVCAM and IMX, with or without PS35: nothing has so far come even mentionably close to film.

Why is good enough your best?? Why not the best?? You have nothing to strive for then, as I see it. Why?????????????


Guest Ultra Definition

Please read again what I said. I said that the new CineAlta SR would be good enough, and I named who it would be good enough for. It is many times less compressed than CineAlta. And am I a DP? No, I'm not.


But it doesn't matter for whom it is "good enough". My whole point was that no work, regardless of occupation or who you are aiming at, is your best if it is only good enough.

Speaking in such terms renders the DP, or any other professional, useless. Few are the producers who choose quality over profit. And as for the masses: they find VHS and digital broadcasting good enough. Most can't tell the difference between video and film... Is that what we as DPs should aim for and be good enough for???? Where's the pride in that?

Sorry, man. I just don't comply with your thinking.


Guest Ultra Definition

And so the tale goes. One day the man woke up, and what did he smell? Roses? No. Coffee? No. What did the man smell? HD? No. He smelled soup. That is because for everyone else HD had become good enough, because that is where the money was. He smelled soup because he had just woken up on the sidewalk next to the soup kitchen. Yes, he had in his shopping cart a stack of 35mm film stock, and he bought ice, whenever he could, to prolong the film's life. He knew that he was the last person on Earth to own film, and he wondered why no one was buying it, or even robbing him, on skid row. Even the bums laughed at him. He thought to himself: what a strange world. Then he met his competition, some guy named Frederik from Sweden, also pushing a cart. :lol: But it was not you, Frederik, it was a different Frederik. Explanation for the Swedes: it's a joke. You guys in Sweden are too serious. Look at all that Bergman stuff. ;)

