Sony wanted the Genesis to be 4K;


Keith Walters

I can tell you right now that the 1080p .TS file captures I have of Revenge of the Sith (recorded from HDTV) look WAAAY better than 90% of the film-originated 1080p Blu-ray files I have seen. I will make a thread about this at some point.

 

This depends on your definition of "better".

 

Most of the time in Revenge of the Sith, the only real elements that were actually photographed were the actors. Graphic artists and rotoscopers were able to spend months meticulously creating and perfecting everything else in the frame.

 

Meanwhile, 90% of film-originated material is shot on stage with real elements or out in real locations, where you work within degrees of limitation, uncontrollable variables, and some degree of controlled chaos. These challenges are reflected in the photography.

 

I'm sure you look at ROTS as appearing better because of its sharpness and lack of grain. Others would consider that look sterile with no texture and no character.

 

There are many filmmakers who would much prefer the unpredictable, organic, tactile feel of shooting reality.


  • Premium Member
Funny, I have literature from Arri which says Mega-PL. I guess they used all three names at different points.

At least they got their point across: you cannot use PL-mount lenses on them.


  • 2 weeks later...
Guest Glen Alexander
Absolutely true. I don't think many people who choose film do so to be trendy. I know I don't. I use both and use what works best for a job, not because one is easier or cooler or looks sharper on a chart.

 

Some people still use good old film because in the middle of the Mojave in August, when it is 50°C, your digital LCDs will melt and your hard drives will fail... old film just keeps going.


Ok, since it hasn't been said,

 

 

Who is John Galt?

 

 

 

 

 

 

 

 

 

 

 

Before you point me up the thread, I shall state that this is a question not meant to be answered, but instead is meant to prove that I, contrary to what some may think, can and do read books.


  • Premium Member
Some people still use good old film because in the middle of the Mojave in August, when it is 50°C, your digital LCDs will melt and your hard drives will fail... old film just keeps going.

Actually, you do need to protect film from extreme heat.

 

 

 

-- J.S.


  • 4 weeks later...

Here is some additional information that I think you may find of interest. A few weeks ago, John Galt and Larry Thorpe gave a presentation called "Demystifying Digital Camera Specifications" to a group at Panavision. It addresses issues raised in this thread and a number of others as well. Panavision has just posted the entire presentation on our website and we will announce it next week. But since this thread is directly concerned with the heart of the presentation, I thought I would let you have a sneak peek. The presentation can be found at http://media.panavision.com/ScreeningRoom/...Box_Office.html I'll be curious to hear what everyone thinks.

 

Regards,

 

Andy

 

Andy Romanoff

Panavision

Woodland Hills


  • Premium Member
Panavision has just posted the entire presentation on our website... The presentation can be found at http://media.panavision.com/ScreeningRoom/...Box_Office.html I'll be curious to hear what everyone thinks.

 

Is there any chance you could have an option for lower-resolution versions, the way movie companies do for theatrical trailers? I have a reasonably fast ADSL connection, but after 10 minutes all I had managed to download was the introductory sped-up shot of people wandering around in the theaterette, and the first 20 seconds of Larry's introduction.

 

I've downloaded perfectly adequate clips about the RED and other cameras from YouTube.

If you really need high resolution images to illustrate the presentation, PowerPoint might be better.


I'm glad to see the clarity and depth of this presentation made available, as there is so much partial information and misinformation out there. But I must state that there are positives and negatives to all three color-sensor pattern variants (3-chip with prism, Bayer-pattern color mask, stripe-pattern color mask). I'm not going to go into a personal treatise of my own here, but I will say that there are some darn good reasons why every digital still camera manufacturer on the planet has chosen Bayer-patterning. I won't go into the pluses or minuses of the Sony-Panavision striped-pattern route, but will say that they do get nice pictures.

 

There are tradeoffs with every choice, because, well, that's what makes choices.


Guest Glen Alexander
Actually, you do need to protect film from extreme heat.

 

 

 

-- J.S.

 

Yes, without a doubt; that's what the mini-fridge was for. But compared to the screaming of the fans trying to keep the hard drives cool, that's nothing.


Guest Glen Alexander
....some darn good reasons why every digital still camera manufacturer on the planet has chosen Bayer-patterning. ....

 

It's a numerically and economically cheap way to get a color image; that's the only justification I can see.

Bayer reduces overall sensitivity and only sparsely samples the color spectrum.


  • Premium Member
Here is some additional information that I think you may find of interest... I'll be curious to hear what everyone thinks.

 

 

Thanks for sharing this, Andy.


  • Premium Member
I'll be curious to hear what everyone thinks.

This is generally a pretty good introduction. I looked at the 480p version, which has more than enough resolution for this purpose, but even on the office LAN with the company's internet connection, it takes a while to download. So, I second the motion for a dial-up or PowerPoint-only version.

 

Part 2, Slide 59: It would be a good idea to mention at this point that an optical filter has to start rolling off at F/4 (a quarter of the sampling frequency) in order to be effectively out by F/2 (Nyquist), but that the electronic filters ahead of the A/D can be much steeper, "brick wall" filters. Same for resampling filters in digital, which is why we can pick up top-octave resolution by oversampling and down-converting. The curves show it, but it's worth saying out loud.
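For anyone who wants to see the steepness difference in numbers rather than curves, here's a rough sketch (assuming numpy/scipy; the cutoff values are arbitrary illustrations of my own, not figures from the presentation, and the oversample/down-convert step itself isn't modeled):

```python
import numpy as np
from scipy import signal

# Spatial frequency axis in cycles per photosite pitch; Nyquist is F/2 = 0.5.
f = np.linspace(0.0, 0.5, 257)

# Gentle "optical" low-pass: to be well attenuated by F/2 it has to start
# rolling off around F/4, which costs response in the top octave.
optical_mtf = np.exp(-((f / 0.25) ** 2))

# Steep digital FIR ("brick wall"-ish) ahead of the resampler can stay near
# full response until just below Nyquist and then cut hard.
taps = signal.firwin(101, cutoff=0.45, fs=1.0)
_, h = signal.freqz(taps, worN=f, fs=1.0)
digital_mtf = np.abs(h)

for fi, o, d in zip(f[::64], optical_mtf[::64], digital_mtf[::64]):
    print(f"f = {fi:.3f}   optical ~ {o:.2f}   digital ~ {d:.2f}")
```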

 

Part 5, Slide 99: A typo: On the right side by the gray rectangle, it should say 540 line pairs per picture height, not Lp/mm.

 

Part 7: There's no mention of differing sensitivity between rows of photosites. David mentioned that you get more dynamic range by basically having RGB vertical stripes and alternating clear and ND horizontal stripes. That's an interesting distinction between your chip and the Bayers.
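Whether the chip really does this is disputed further down the thread, but the idea itself is easy to sketch: merge each clear row with its ND neighbor, trusting the ND row wherever the clear row has clipped. This is a toy illustration only; the row layout, the two-stop ND value, and the clip threshold are all assumptions of mine, not anything confirmed by Panavision or Sony.

```python
import numpy as np

def merge_clear_nd_rows(raw, nd_stops=2.0, clip=1.0):
    """Toy merge of alternating clear / ND photosite rows into one output.

    Assumes an even number of rows, with even rows unfiltered and odd rows
    behind an ND worth `nd_stops` stops. Where the clear row clips, fall back
    to the ND row scaled back up; elsewhere keep the cleaner clear-row signal.
    """
    clear = raw[0::2, :].astype(np.float32)
    nd = raw[1::2, :].astype(np.float32) * (2.0 ** nd_stops)  # undo the ND
    return np.where(clear >= clip, nd, clear)

# e.g. hdr = merge_clear_nd_rows(np.random.rand(1080, 1920))
```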

 

 

 

-- J.S.


Calling bayer 4:2:0 is comparing apples to oranges.

 

Let's say you have 8M photosites to work with for a 16:9 single-sensor camera. If you arrange these in a vertical RGB stripe pattern, you end up with a sensor with 6528x1224 photosites. The theoretical maximum resolution you can get out of this camera is 2176x1224 pixels. In reality, of course, it will be lower. The good news is, you will get that on each color channel. In other words, you've got a 4:4:4 camera.

 

Pretty impressive.

 

But now let's take those same 8M photosites and arrange them in a bayer pattern. You end up with a sensor with 3771x2115 photosites. Exactly what useful resolution you end up with after processing the image depends on a lot of factors and can be argued over endlessly, but pointing bayer cameras at resolution test charts has confirmed repeatedly that bayer sensors, even after taking into account low-pass filtering and all the rest of it, have luma resolution upwards of 70% of their photosite count in each direction. In other words, this hypothetical bayer camera should resolve better than 2640x1480. Chroma resolution will, of course, be lower than luma resolution, however, meaning you don't have a 4:4:4 camera.
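(A quick sketch of that arithmetic, for anyone who wants to check it; the rounding lands within a few photosites of the figures above.)

```python
# Hypothetical 16:9 single-sensor camera with ~8M photosites (as above).
PHOTOSITES = 8_000_000
ASPECT = 16 / 9

# Vertical RGB stripes: three photosites per output pixel along each row.
rows_stripe = int((PHOTOSITES / (3 * ASPECT)) ** 0.5)      # ~1224
cols_stripe = round(ASPECT * rows_stripe)                   # ~2176
print(f"stripe: {3 * cols_stripe} x {rows_stripe} photosites -> "
      f"{cols_stripe} x {rows_stripe} full-RGB pixels")

# Bayer mosaic: photosite grid is the pixel grid; take ~70% per axis as the
# usable luma resolution after demosaicing, per the chart tests quoted above.
rows_bayer = int((PHOTOSITES / ASPECT) ** 0.5)              # ~2121
cols_bayer = round(ASPECT * rows_bayer)                      # ~3771
print(f"bayer:  {cols_bayer} x {rows_bayer} photosites -> "
      f"~{round(0.7 * cols_bayer)} x {round(0.7 * rows_bayer)} luma")
```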

 

But think this through. The bayer camera isn't 4:4:4, but this is at least in part not because it provides lower chroma resolution, but rather because it provides higher luma resolution, and the 4:x:x notation simply defines one relative to the other!

 

Given that humans are far more sensitive to luma resolution than chroma resolution, it's very difficult to avoid the conclusion that for a given number of photosites, for general purpose imaging, a bayer pattern provides the best image.

 

Of course, you don't get anything for free. The big catch with bayer is that extracting a useful image from the sensor data requires far more processing. And this is probably the primary reason why you've seen bayer adopted almost universally for photo cameras, but many vendors seem to avoid it for motion picture acquisition. A professional still photographer might shoot a couple of thousand images in a day. A motion picture camera rolling at 24 frames/second shoots that many in a minute and a half. Running every one of those images through a sophisticated debayer algorithm requires vast computational resources. Processing gets faster every year, though, and file-based acquisition means you don't have to process sensor data at full quality in real-time on-camera anymore. I expect that as a result of these trends, bayer-pattern sensors will become more common in large-format motion picture cameras over time.
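To give a feel for what even the cheapest possible demosaic involves per frame, here is a minimal bilinear sketch (my own numpy/scipy illustration, assuming an RGGB mosaic; real debayer algorithms in cameras and post tools are considerably more sophisticated and more expensive than this):

```python
import numpy as np
from scipy.signal import convolve2d

def demosaic_bilinear(raw):
    """Rough bilinear demosaic of one Bayer (RGGB) frame.

    For each color, keep the known photosite values and fill the gaps with a
    3x3 average of neighboring known samples. A camera or post tool has to do
    this (or much better) 24+ times per second on multi-megapixel frames.
    """
    h, w = raw.shape
    r_mask = np.zeros((h, w), bool)
    r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool)
    b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)
    k = np.ones((3, 3), np.float32)                 # 3x3 neighborhood kernel
    rgb = np.zeros((h, w, 3), np.float32)
    for c, mask in enumerate((r_mask, g_mask, b_mask)):
        known = np.where(mask, raw, 0.0).astype(np.float32)
        num = convolve2d(known, k, mode="same")
        den = convolve2d(mask.astype(np.float32), k, mode="same")
        rgb[..., c] = np.where(mask, raw, num / np.maximum(den, 1e-6))
    return rgb

# e.g. frame = demosaic_bilinear(np.random.rand(2160, 3840))
```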


  • Premium Member
Part 7: There's no mention of differing sensitivity between rows of photosites. David mentioned that you get more dynamic range by basically having RGB vertical stripes and alternating clear and ND horizontal stripes. That's an interesting distinction between your chip and the Bayers.

Indeed it is. There's more to a good image than just resolution.


  • Premium Member
Part 7: There's no mention of differing sensitivity between rows of photosites. David mentioned that you get more dynamic range by basically having RGB vertical stripes and alternating clear and ND horizontal stripes. That's an interesting distinction between your chip and the Bayers.

 

-- J.S.

I have never had anybody from Panavision confirm that they actually do that, although apparently technical details are just as hard to come by inside the company as outside it. I rather suspect this is simply an internet generated factoid.

 

I would like to have someone from higher up confirm or deny it though.

 

However, I don't really see how it could achieve much, at least not with a CCD sensor.

 

First of all, by effectively slicing each photosite in half, you would halve its sensitivity, which would subtract at least one stop from whatever gain in dynamic range you achieved.

 

The biggest problem though is that, even though you have all probably seen diagrams depicting the physical structure of a CCD sensor, in reality all those P-N semiconductor junctions are about as physically solid as a smoke ring.

 

If you had NDs fitted to some of the photosites, certainly they would receive less light and so not go into saturation on highlights. Unfortunately the adjacent unfiltered ones would, and when they overloaded they would start to leak current carriers into all the adjacent photosites, effectively "flooding" them. The PN junctions on the chip are no more of a barrier than the rope fences they put up to control bank queues!

 

As far as I am aware, this technique might get two or three extra stops of DR, but that is all, and after subtracting the sensitivity loss due to splitting up the photosites and so on, you would really gain very little.

 

While it is possible to construct barriers between the photosites to reduce this problem, these simply eat up more silicon "real estate" and reduce the sensitivity still further.

 

Personally, I think the real reason Sony designed the Genesis chip the way they did is simply that by applying the appropriate masking, the same piece of silicon can serve as a 2K 4:4:4 sensor, or a 5K Bayer masked sensor.


Of course, you don't get anything for free. The big catch with bayer is that extracting a useful image from the sensor data requires far more processing...

 

Hi Chris, do you really think DSP is the problem now? I can't get sustained high frame rates from my DSLR, but it can process a RAW image very fast and give me JPEGs (in fact it does a more efficient job if it doesn't keep and record the RAW). Mechanical issues aside, the bottleneck would seem to be how fast they can get data off the chip itself, the buffer size, and read/write speed to and from the buffer.

 

Indeed, it can make a viewable JPEG from the RAW capture much faster than my computer can.

 

I note Canon seems to be using more or less the same "Digic" DSP (or close cousins of it) in its still cameras and in the Bayer-sensor HV-20; Sony does likewise (Alpha DSLRs and the EX-1).

 

-Sam


Guest Glen Alexander
Yes, and you can add me to the top of that list. Fuji/Kodak have such a great range of stocks on offer now. I know the argument is cost, but film is still kicking poop out of any digital format.

 

As a side note, I am new to the EU labs. Can you list some of the good labs for developing anamorphic 35mm B&W Kodak stock? Where are the places to get good B&W in the EU?

 

Yes, I could Google, but I prefer getting someone's actual experience.


Guest Glen Alexander
...stuff deleted

 

Mechanical issues aside, the bottleneck would seem to be how fast they can get data off the chip itself, the buffer size, and read/write speed to and from the buffer.

 

...stuff deleted

 

How fast you get data off the chip depends on a variety of factors: dark current, integration time, serial and/or parallel shift-and-add time, etc. The buffer has little to do with it. In addition, you have to account for the A/D conversion delays, etc.


Hi Chris, do you really think DSP is the problem now?

 

I think it has been a problem, and we're starting to see it get solved now. If you look at Red's situation, for instance, RGB recording and live 1080p output, both once planned features for the Red One, haven't materialized. But these are now planned features for the cameras Red will introduce next year. This all strongly implies that the electronics in the Red One are right on the cusp of being able to handle this (Red considered it possible for a long time), but aren't quite there, and the electronics improvements in next year's cameras will take them over the top. (Of course, in terms of bringing sufficient computational resources to bear on the problem, it probably also helps that one of next year's cameras is only 3K, while the other costs twice as much as the Red One.)

 

And sure, you've got low-end bayer cameras like the HV20... but there you're looking at many fewer photosites than a DSLR or Red sensor, and I would guess the debayer algorithm is simpler than what Red or DSLR engineers would consider ideal for professional products. You'd expect that to become possible rather sooner.


  • Premium Member
But think this through. The bayer camera isn't 4:4:4, but this is at least in part not because it provides lower chroma resolution, but rather because it provides higher luma resolution, and the 4:x:x notation simply defines one relative to the other!

Yes, in most cases, trading lower chroma resolution for higher luma resolution is a good deal. The one case where it isn't is pulling keys. I'd guess that still photographers do less of that, and when they do, they can afford the time to clean things up by hand, because it's only one frame, not 24 per second.

 

While it's true that the OLPF on a Bayer camera has to allow more red and blue aliasing than green, I doubt that that's as bad as it sounds. Perhaps the human visual system's way of fitting the color onto the brightness image masks it. Does anybody have any images they can post showing the red/blue aliasing from a Bayer camera?

 

 

 

 

-- J.S.


How fast you get data off the chip depends on a variety of factors: dark current, integration time, serial and/or parallel shift-and-add time, etc. The buffer has little to do with it. In addition, you have to account for the A/D conversion delays, etc.

 

Yes, I should have written "as opposed to...."

 

-Sam


Yes, in most cases, trading lower chroma resolution for higher luma resolution is a good deal. The one case where it isn't is pulling keys.

 

I'm not even so sure of this. Remember, Bayer-pattern sensors don't actually sacrifice chroma resolution for luma resolution, relative to striped RGB sensors. They actually sacrifice blue and red channel resolution for green channel resolution (which happens to effectively increase luma resolution because the eye is more sensitive to green).
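The standard Rec. 709 luma weighting makes that concrete; green carries roughly 70% of luma, so green samples are where extra photosites pay off most:

```python
def luma_709(r, g, b):
    """Rec. 709 luma: green dominates the sum, so added green samples buy the
    most perceived (luma) resolution per photosite spent."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b
```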

 

If we return to our hypothetical 8MP single-sensor camera, if that sensor is RGB striped, it has 2.67M photosites for each color. If it's Bayer-pattern, it has 4M green photosites, and 2M each for blue and red. This doesn't seem like it would be a disaster for chromakey, as long as you're using a greenscreen rather than bluescreen anyway. Maybe I'm missing something, but, what you really care about with greenscreen is whether something is green or not-green, right? So what should matter is not the fact that red and blue only get 2M photosites each, but the fact that there are 4M green photosites and 4M not-green photosites. Couldn't this actually work better for greenscreen than having 2.67M green photosites and 5.33M not-green photosites?
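A crude green-difference key shows why the "green vs. not-green" framing matters. This is only a toy sketch of mine (the gain value is arbitrary), and production keyers do vastly more, but the matte edge quality is driven largely by how well the green channel is resolved:

```python
import numpy as np

def green_difference_matte(rgb, gain=2.0):
    """Toy green-difference key on a float RGB image in [0, 1].

    The matte depends on how much green exceeds the larger of red and blue,
    so its edge detail comes mostly from the green channel's resolution.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    screen = np.clip(gain * (g - np.maximum(r, b)), 0.0, 1.0)
    return 1.0 - screen   # 1.0 = keep foreground, 0.0 = transparent screen
```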

 

Most people seem to say (and my own experience seems to confirm) that in practice Red footage keys rather well. I haven't done side-by-side testing with a striped RGB camera like the Genesis, though, nor have I run across any examples of such testing. (And the Genesis has a higher photosite count anyway, so even if it did do better that wouldn't really settle the debate over which color filter arrangement gets the best results for a given number of photosites.)

 

(Mind you, the discussion in this post is specifically about striped RGB vs. Bayer. With three chip cameras, another variable enters the equation, because they provide much better color separation than any type of single sensor camera.)

Edited by Chris Kenny
