Sony wanted the Genesis to be 4K;


Keith Walters


As for 4K, eventually you get to a place where you're just not going to squeeze that much more juice out of the lemon in a way that really affects the volume in the glass. Of course the SFX folks would disagree, but that is because they are under the marketing spell of the computer industry that keeps making 'bigger' and 'better'... that's another topic.

 

Where did you get this idea that us VFX folks only care about the resolution of an image? Whether it's a film scan or a rendered CG element, the resolution of an image is only a small part of trying to get the best image possible.



  • Premium Member
Many people here appear to hold Mr Galt's technical credentials in high regard. I'm afraid I'm not one of them. In fact there are very few people here who haven't dropped the occasional technical "howler"*.

 

 

I remember when I publicly confronted him about Panavision's mistake with the slab of aluminum they threw in front of the F900 to make their mounting plate. The problem was that it breathed. Panavision had created this myth that you had to reset back focus every ten minutes because, as they put it, with HD it was so critical it never stayed in focus. And the world took it hook, line and sinker, while those of us who had been shooting video for 20-plus years laughed, because we knew it was not a condition of the format but of the manufacturing. It was a myth they created to cover up the fact that their re-engineered F900 wasn't engineered all that well. Thankfully that myth now only exists in the original Panavision manual for the camera and in a few folks who never heard the truth about back focus and video cameras.


  • Premium Member
Where did you get this idea that us VFX folks only care about the resolution of an image? Whether it's a film scan or a rendered CG element, the resolution of an image is only a small part of trying to get the best image possible.

 

Okay it was a bad joke. Sorry! :)


  • Premium Member

> Even if I only shot for TV broadcast, I'd be quite depressed at the level of compression being

> used by many digital broadcasters

 

I walked into a friend's apartment a couple of weeks ago, and remarked immediately on how good his TV picture was. Of course, this particular friend hasn't yet "upgraded" his TV to digital - and even with the composite encoding artifacts, it still looked better!

 

P


Did you not read the very first line:

 

"I am not sure which is the best folder to put this in, as it covers quite a few issues, including RED. It's interesting wherever it belongs."

 

"Including Red" not only Red. Just thought Genesis thread would make sense or how about 2k vs 4k sensors. Sony has also stated that they are making a 4k camera. Maybe Sony vs Sony.

 

bob



"Also, what kind of dynamic range does a Nikon D3 have?"

 

Hi Bob, I've been too busy with mine to attempt any specific measurement.

 

I had one page bookmarked which seemed credible (and comprehensible to me, which not all are!), suggesting 10 stops, but I can't find it.

But in any case, I'm concerned with what I'd call a working dynamic range, i.e. "what can I do and what can't I do when actually shooting things in the real world."

 

As a guesstimate I could say 8 stops, but this must be qualified... First, I'm still nervous about exposing far to the right; yes, I know that (with a RAW file especially) one can 'optimize' s/n etc., but, for instance, unlike film, rather bright areas will not be distributed across both the straight-line portion and some of the shoulder, meaning you can get a digital exposure that is technically "OK" but does not look OK, well, to me - so here you either accept the "dayglo on velvet" digital look (I don't) or you assemble your post-manipulation ducks-in-a-row strategy...

 

On the other end, the ability to dig and recover clean shadow detail from the D3 (even from JPEG) is remarkable. I don't think a conservative estimate of 5 stops under mid-grey is overly optimistic. I know I've gone further, but here it's a question of a visually acceptable noise floor, and I'd say that's hopelessly subjective -- in the sense of working practice. I also find myself asking "Will I want to crush the blacks slightly?" in a low-light situation - killing the darkest detail, but offsetting any low-level funkiness with the fool-the-eye smoothness a good "black" reference gives.
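For anyone curious, here is the kind of back-of-the-envelope arithmetic behind a guesstimate like that -- a toy model with assumed numbers only, not anything measured from a D3:

```python
import numpy as np

# Toy model only (assumed numbers, not D3 measurements): treat "working"
# dynamic range as the ratio, in stops, between the RAW clipping level and
# the darkest level whose noise you would still accept.
def working_stops(clip_level, read_noise_dn, min_acceptable_snr=2.0):
    """Usable stops = log2(clip level / acceptable shadow floor)."""
    shadow_floor = min_acceptable_snr * read_noise_dn
    return np.log2(clip_level / shadow_floor)

# Example: a 14-bit RAW file with ~3 DN of read noise and an SNR floor of 2.
print(round(working_stops(clip_level=16383, read_noise_dn=3.0), 1))  # ~11.4 stops
```

Move the acceptable noise floor up or down and the "working" number shifts by a couple of stops, which is exactly why these comparisons end up so subjective.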

 

Anyway, this is from the POV of really working with it -- if I wanted to compete in the internet DR Olympics I'm sure I could find a way to make competitive claims equalling or exceeding most or all digital cinema cameras, but I'm more concerned with, you know, getting the pictures...

 

Still exploring what I can do with this weird cool alien machine :)

 

-Sam


> Even if I only shot for TV broadcast, I'd be quite depressed at the level of compression being

> used by many digital broadcasters

 

I walked into a friend's apartment a couple of weeks ago, and remarked immediately on how good his TV picture was. Of course, this particular friend hasn't yet "upgraded" his TV to digital - and even with the composite encoding artifacts, it still looked better!

 

P

 

"Broadcast quality" doesn't mean much any more.

 

R.


  • Premium Member
"Including Red" not only Red. Just thought Genesis thread would make sense or how about 2k vs 4k sensors. Sony has also stated that they are making a 4k camera. Maybe Sony vs Sony.

 

bob

My point in starting this thread is that many in the "2K is good enough" camp point to "Sony's" decision to make the Genesis a 1920 x 1080 camera as proof that 4K cameras are a waste of time, since presumably Sony would know what they are doing.

 

But now it appears that Sony's engineers did in fact agree that 4K cameras have something to offer. The question that remains is whether they were talking about using the existing Genesis chip with a Bayer mask, or producing an actual "4K" RGB chip.

 

They have never really answered the question of why the Genesis chip is 5760 x 2160 pixels instead of 5760 x 1080, which would presumably give better sensitivity.


  • Premium Member
They have never really answered the question of why the Genesis chip is 5760 x 2160 pixels instead of 5760 x 1080, which would presumably give better sensitivity.

 

What do you mean "they've never really answered..."? -- It's been common knowledge that the Genesis sensor has an extra row each of red, green, and blue filtered photosites that are ND'd to provide extra highlight information to increase dynamic range. The camera is already rated in the 500 ASA range, so who needs more sensitivity when you have a chance to improve the dynamic range?
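A rough sketch of the general idea (my own illustration of how ND'd sister photosites can extend highlight range, not Sony's actual Genesis processing):

```python
import numpy as np

# Illustration only (not Sony's actual pipeline): pair each normally filtered
# photosite with an ND-filtered neighbour, and wherever the normal reading has
# clipped, substitute the attenuated reading rescaled by the ND factor.
def merge_nd_rows(normal, nd, nd_factor=4.0, clip=1.0):
    """Extend highlights by falling back to the ND'd reading where 'normal' clips."""
    normal = np.asarray(normal, dtype=float)
    nd = np.asarray(nd, dtype=float)
    return np.where(normal < clip, normal, nd * nd_factor)

# Example: scene values 0.2 and 3.0. The second clips at 1.0 on the normal
# photosite but survives on the ND'd one (3.0 / 4 = 0.75) and is recovered.
print(merge_nd_rows([0.2, 1.0], [0.05, 0.75]))  # -> [0.2 3.]
```

The trade is exactly the one described above: those extra rows cost potential sensitivity but buy highlight information that would otherwise have clipped.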


  • Premium Member
> Even if I only shot for TV broadcast, I'd be quite depressed at the level of compression being

> used by many digital broadcasters

 

I walked into a friend's apartment a couple of weeks ago, and remarked immediately on how good his TV picture was. Of course, this particular friend hasn't yet "upgraded" his TV to digital - and even with the composite encoding artifacts, it still looked better!

 

P

I believe in most countries the broadcasting authorities allow the individual broadcasters to decide how many channels they are going to try to cram into their allocated channel bandwidth, with predictable results. In Australia (at present anyway) broadcasters are allowed to transmit only three standard-definition programs and one high-definition program in the standard 7 MHz channel bandwidth.
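A rough back-of-the-envelope split shows why the cramming matters (assumed figures of my own; the real payload of a 7 MHz DVB-T channel depends on the modulation and guard interval chosen):

```python
# Assumed figures only: a 7 MHz DVB-T multiplex carrying roughly 23 Mbit/s,
# shared among different numbers of standard-definition programs.
mux_mbps = 23.0
for programs in (3, 4, 5, 6):
    print(f"{programs} SD programs -> ~{mux_mbps / programs:.1f} Mbit/s each")
```

The more programs a broadcaster squeezes in, the less bitrate each one gets, and the compression artifacts follow directly from that.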

 

The result is that, here at least, the digital version of a transmission is always superior to the analog version, and it does in fact live up to most of the promises of digital TV. However, due to pressure from cable operators, there are severe restrictions on multi-channeling, so at the moment commercial broadcasters are only allowed to transmit either three identical versions of their SD program, or they can use one as a program guide. Only this year (after 8 years!) have stations been allowed to transmit a different program on their HD sub-channel!

 

Many of the early digital set-top boxes were really only suitable for strong-signal areas, and suffered severe dropouts with only moderate levels of interference. Newer models, apart from being ridiculously cheap, seem to be largely immune to these problems.


  • Premium Member
What do you mean "they've never really answered..."? -- It's been common knowledge that the Genesis sensor has an extra row each of red, green, and blue filtered photosites that are ND'd to provide extra highlight information to increase dynamic range. The camera is already rated in the 500 ASA range, so who needs more sensitivity when you have a chance to improve the dynamic range?

"Common knowledge"?

I have seen that explanation repeated endlessly, but it still doesn't make sense to me.


Much of this is why the Phantom HD camera's sensor is 1920 photosites across in a Super-35 image area (the total resolution is 2048 x 2048). For the Phantom 65, to get a 4K imager (4096 x 2440), the photosites are exactly the same size, but now the chip is the size of a 65mm film negative. You don't get something for nothing. There are good reasons for large photosites beyond sensitivity.
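Just to put rough numbers on that (approximate active widths of my own choosing, not official Vision Research specifications):

```python
# Assumed, approximate widths (a Super-35-ish chip and a 65mm-ish chip), just
# to show that the photosite pitch stays about the same while the chip grows.
def pitch_um(active_width_mm, photosites_across):
    """Photosite pitch in microns."""
    return active_width_mm / photosites_across * 1000.0

print(round(pitch_um(25.0, 2048), 1))  # ~12.2 um across a Super-35-ish width
print(round(pitch_um(52.0, 4096), 1))  # ~12.7 um across a 65mm-ish width
```

Same-sized photosites, twice the count, twice the chip width -- which is the "you don't get something for nothing" point.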


"I think 4:4:4 1080P comes close to 35mm resolution."

 

I can tell you right now that the 1080p .TS file captures I have of Revenge of the Sith (recorded from HDTV) look WAAAY better than 90% of the film-originated 1080p Blu-ray files I have seen. I will make a thread about this at some point.

 

I think the answer to the resolution vs dynamic range issue might be a 65mm-size sensor with 4K resolution. Something like that might really win the Pepsi challenge against 35mm film. Of course, you'd need a lot of new 65mm lenses, but it might be worth it.


I think Panavision's excuse for rejecting 4K technology is that some people just want to hold technology back. The fact is that a lot of people are not yet ready for high definition and think that standard-definition video is good enough. So when ultra-high definition is invented that threatens to make high definition obsolete, the promoters of obsolete equipment make up even more excuses that somehow obsolete equipment is better because it has more dynamic range, more sensitivity, better low-light performance and 3 CCDs. For a lot of people it is easier to fight off advances in technology than to accept them.


Guest tylerhawes
I think the answer to the resolution vs dynamic range issue might be a 65mm-size sensor with 4K resolution.

 

Wouldn't DoF be quite an issue then? There is such a thing as too much DoF, and while the detail of 70mm is great and everyone seems to dream of a budget to shoot it, in reality would you really want to be doing all your productions with DoF like that?

 

I think Panavision's excuse for rejecting 4K technology is that some people just want to hold technology back.

 

I don't think that's really fair to Panavision. After all, they have built their success on years of innovation and pushing new technologies into their film cameras. And they were one of the earliest birds to the digital acquisition party. Regardless of what you think of Genesis, it's a fact that several studio features were shot on it before anyone laid eyes on a RED One prototype. So I think I will give them the benefit of the doubt that their choice to go HD and not 4K with Genesis was engineering and market driven, and not a conspiracy of all the big camera manufacturers to deprive us all of 4K.


  • Premium Member
That could be due to the oh-so-elusive Kell factor.

Elusive indeed -- here the term "Kell factor" is generally used to mean the resolution reduction (from progressive) needed to keep interlace from having too much small-area flicker/twitter. The numbers are usually about 0.65 to 0.7. Since interlace used half as much bandwidth, its advantage was however much you could push the Kell factor above 0.5. Raymond Kell himself actually wasn't responsible for that, and objected to the use of his name for that term.
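To put those ballpark figures in context (a quick illustration of my own using the numbers above):

```python
# Quick illustration with the ballpark figures quoted above: perceived
# vertical resolution is roughly the active line count times the Kell factor.
active_lines = 483     # roughly the active picture lines of NTSC
kell_factor = 0.7      # the commonly quoted ballpark value
print(round(active_lines * kell_factor))  # ~338 lines of usable vertical detail
```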

 

 

 

 

-- J.S.


Elusive indeed -- here the term "Kell factor" is generally used to mean the resolution reduction (from progressive) needed to keep interlace from having too much small-area flicker/twitter. The numbers are usually about 0.65 to 0.7. Since interlace used half as much bandwidth, its advantage was however much you could push the Kell factor above 0.5. Raymond Kell himself actually wasn't responsible for that, and objected to the use of his name for that term.

 

Originally, the term Kell factor applied only to progressive scanning, and understandably so, because interlaced TV had not been invented when people did their analyses of the loss of vertical detail in progressive scanning. The loss of vertical detail due to interlacing is sometimes also called the "Kell factor of interlacing"; however, it is not to be confused with the original meaning of the Kell factor, which arises from the scanning spot size in progressive video.

 

That is, in the original concept for progressive video the issue was vertical detail lining up with the scanning spot, while in interlaced scanning the issue is the flicker/twitter resulting from the interlaced fields not being displayed at the same time. It appears even Adam Wilt mixed up the two on this page:

http://www.adamwilt.com/TechDiffs/FieldsAndFrames.html

 

There is no scanning spot in digital sensors today. However, the original concept of the Kell factor arose from the fact that vertical detail may not line up with the scanning spot, and the same idea can still be applied to digital sensors. For example, one gets different responses if alternating black and white lines, approaching the size of the photosites on a digital sensor, are displaced slightly so that they do not line up properly with the photosites and their effect is averaged out to some extent.
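A small numerical toy model of that alignment effect (my own illustration, not a formal Kell-factor derivation):

```python
import numpy as np

# Toy model: alternating black and white lines exactly one photosite wide,
# sampled by box-like photosites, once aligned with the photosite grid and
# once shifted by half a pitch so each photosite straddles two lines.
def sampled_contrast(phase, n_pairs=64, oversample=100):
    """Modulation left after box-averaging the line pattern over photosites."""
    x = np.arange(n_pairs * 2 * oversample) / oversample       # position in photosite pitches
    pattern = np.floor(x + phase) % 2                          # 0/1 line pattern, shifted by 'phase'
    photosites = pattern.reshape(-1, oversample).mean(axis=1)  # each photosite averages its area
    return photosites.max() - photosites.min()

print(sampled_contrast(0.0))  # aligned lines keep full contrast (1.0)
print(sampled_contrast(0.5))  # straddled lines average out toward grey (0.0)
```

Aligned lines survive at full contrast, while lines straddling the photosites average out toward a uniform grey -- which is exactly the kind of loss the Kell factor was meant to capture.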


It appears even Adam Wilt mixed up the two on this page:

http://www.adamwilt.com/TechDiffs/FieldsAndFrames.html

 

Don't know how to edit my original post so I am responding to my own post.

 

I corresponded with Adam Wilt, and indeed he is aware of the difference between the original usage of the term Kell factor, as applied to vertical detail loss in progressive scanning, and the flicker/twitter resulting from interlaced scanning. However, his response was that he does not want to scare the audience away by going into details. Fair enough, though I still believe that the term Kell factor should be reserved for its original intended meaning of resolution loss in progressive video.

 

On the other hand, Kell was not the only person trying to determine this elusive number. At least half a dozen people were approaching the issue from different angles, and in fact Mertz and Gray published their findings in July 1934 (factor = 0.53) before Kell et al. (factor = 0.64) in November 1934, a figure Kell later revised upwards.


I think the answer to the resolution vs dynamic range issue might be a 65mm-size sensor with 4K resolution. Something like that might really win the Pepsi challenge against 35mm film. Of course, you'd need a lot of new 65mm lenses, but it might be worth it.

It's called the Phantom 65, available for rental from Abel Cine Tech. We're working on making everything available for it that you could ever need for a full production.


  • Premium Member
Originally, the term Kell factor applied only to progressive scanning, and understandably so, because interlaced TV had not been invented when people did their analyses of the loss of vertical detail in progressive scanning.

IIRC, the RMA system of the mid to late 1930's (343 total lines) was interlaced. Mark Schubin at a tech retreat years ago mentioned that he had found a U.S. patent on interlace, in the Nipkow disk days, granted to someone named Garcia in 1915.

 

 

 

-- J.S.


IIRC, the RMA system of the mid to late 1930's (343 total lines) was interlaced. Mark Schubin at a tech retreat years ago mentioned that he had found a U.S. patent on interlace, in the Nipkow disk days, granted to someone named Garcia in 1915.

 

 

 

-- J.S.

 

It is possible that interlaced TV existed in the 1930s and perhaps earlier. However, I wanted to make the distinction between the analyses some people did of the detail loss in progressive scanning, as opposed to interlace twitter, which is a separate phenomenon.


  • Premium Member
Much of this is why the Phantom HD camera's sensor is 1920 photosites across in a Super-35 image area (the total resolution is 2048 x 2048). For the Phantom 65, to get a 4K imager (4096 x 2440), the photosites are exactly the same size, but now the chip is the size of a 65mm film negative. You don't get something for nothing. There are good reasons for large photosites beyond sensitivity.

 

Mitch, what sort of lens mount is the Phantom 65 going to use? Is there likely to be a possibility of mounting medium-format still camera lenses on it? The idea of shooting 4K through some of the old Pentax 67 lenses I've used gives me goosebumps! I imagine it would also make access to glass that could be used with the camera much easier - there's certainly a few more MF lenses floating around than 65mm Cine lenses!

 

 

I think Panavision's excuse for rejecting 4K technology is that some people just want to hold technology back. The fact is that a lot of people are not yet ready for high definition and think that standard-definition video is good enough. So when ultra-high definition is invented that threatens to make high definition obsolete, the promoters of obsolete equipment make up even more excuses that somehow obsolete equipment is better because it has more dynamic range, more sensitivity, better low-light performance and 3 CCDs. For a lot of people it is easier to fight off advances in technology than to accept them.

 

And Thomas, I just can't see why any technology company would want to hold back their technology - being at the cutting edge is what brings customers to them. And I'm yet to see projected digital footage that looks better than what I've seen from the Genesis.


  • Premium Member

The Phantom 65, as far as I'm aware, has a Maxi-PL mount, same as the Arri 765. Arri have rehoused Hasselblad lenses and a Cooke zoom (an 18-100mm with a relay system to spread the image out to 65mm size, losing about 2 stops in the process).


Currently there are two standard mounts for the Phantom 65: the Mamiya 645 medium format and the Arri Super-PL (aka Mega-PL; I've never heard it called Maxi-PL). And we are developing something else as well...

 

The camera shoots amazing images. Not all 4K cameras are equal; there is a lot more to it than hitting a resolution number. We really believe in this product. We have a lot of IMAX shooters talking to us.

