
Mike Curtis on RED


Mark Allen


Do you seriously think the resolution of MF film is less than that of a 22 or even 33 mega"pixel" Bayer-based sensor? They may be equal to or better than MF film in terms of grain, but that's as far as it goes. Resolution is not better and neither is the dynamic range. The same can be said for a typical digital SLR compared to 35mm film. A 33MP Bayer-based back has a resolution no greater than 16.5 true MP under ideal conditions and no less than 8.25 true MP under the very worst conditions. MF film has at the very least 30 true MP resolution (no one would dare dispute that figure as being too high).

 

I am curious as to where you are getting these figures from...?



Not that many of them have any idea what they are talking about....

As I have had to point out to my critics on too many occasions, TV cameras are my "Trade"; I am far more conversant with what they can and what they can't do than most of the sad wannabes who post here.

Apologies, I guess it's a Scottish accent. Sorry to lump you in with the British; I hear those are like two different countries. (http://www.jimart.co.uk/) Those are incredible paintings, by the way.

 

Anyway, I find that most of these blokes are quite happy wannabes. Call us wannabes, if you will, but not one of us is remarkably sad.


I think the professional industry as a whole is still recovering from the hype by Sony and others when the Sony F900 was released in 2000. The internet was full of "Film is Dead" websites and the newspapers took off on that theme too.

 

 

I've been out shooting the last few days. One night, after a few wrap beers, we looked back at this time and laughed.


Guest Jim Murdoch
Anyway, I find that most of these blokes are quite happy wannabes. Call us wannabes, if you will, but not one of us is remarkably sad.

Well of course you aren't sad; you're all fully convinced that once Jim Jannard works his $17,500 miracle, that's all you'll need to be the next Steven Spielberg. Who wouldn't be happy?


Guest Jim Murdoch
I honestly don't get half of Jim Murdoch's open hostility to every company developing a digital camera. Either he accuses a company of hyping something... or hiding something if they DON'T hype it.

Oh OK, so you do get half of it, then. That's better than most of the illiterate sad sacks here. Well done!

 

I don't have any problem with people developing digital cameras. In fact, I've remarked more than once that if RED can come up with any sort of "35mm-sized" HDTV camera for under 20 grand, that would be a very impressive achievement. The push toward producing workable "Digital Cinematography" cameras has produced some damned fine TV cameras, but so far, cinematography-wise, all we've seen is megabuck cameras with megacent performance, which I don't think is terribly impressive at all.

 

Why does this upset me? Well I like dogs and horses, but I don't particularly like horse and dog sh!t on my front lawn....

 

 

You're never going to win with a guy like that...

Well you got that right at least :lol:


  • Premium Member
John Pytlak provides tons of useful information... so we just live with the inevitable cheerleading for film that goes along with his posts ;) If anyone is going to be a fan of film, it's probably someone who works for Kodak after all.

 

37 years at Kodak certainly helps make me a "fan of film". :) Although I find myself taking more photos with my Kodak DX6490 digital camera than with my film SLR, I much prefer the prints made on silver halide color paper to those from my home printer --- quality over convenience. Over half of Kodak's revenue is now from digital imaging, and Kodak Entertainment Imaging is part of that. Kodak Digital Cinema is the largest independent supplier of digital pre-show systems, and led the way with a JPEG2000-capable Digital Cinema system, demonstrated at ShowEast last year. Yet only about 1 percent of theatre screens worldwide are equipped for Digital Cinema, SEVEN years after the Digital Cinema hype of ShoWest 1999. The rest use 35mm FILM -- still the most cost-effective way of filling large screens with feature entertainment.

 

Kodak is in the unique position of having leading-edge technology and capabilities in BOTH film and digital imaging. I think Kodak Entertainment Imaging President Eric Rodli sums it up well:

 

http://www.kodak.com/US/en/motion/about/ei...1.4.3&lc=en

 

Ours is a business that's more than a hundred years old, but is always reinventing itself because our business -- and our industry - is powered by the human ability to tell stories in motion.

 

As our customers' stories have become more imaginative, we've expanded the tools used to create them. So, 'change' has been a way of life with us, but every enduring change has been one that brought technical, business, and creative value.

 

That's the way we view changes taking place in our industry today - as opportunities to open up new possibilities for human creativity, provide additional tools for the artist, and enable storytellers to work at increased levels of quality.

 

That's why we're excited about the future of entertainment -- film and digital. Film is proven technology - but we're out to prove it can be even better. Digital is good and is getting better, but we're working with our customers to drive it to much higher standards.

 

Although film and digital are often described as competitors, in more important ways they are 'creative allies,' expanding choices for filmmakers and extending each other's capabilities. We're working to make the film-digital combination even more powerful...

 

So, as you can see - we're working on much more than meets the eye. As you visit other parts of this site, you will find specific products, services, and systems. Those will change on a regular - and often, accelerating -- basis.

 

We will broaden our capabilities in additional technologies. We will participate more fully in services. We will add new dimensions to the Kodak brand.

 

But, some things will not change. No one will set higher standards of imaging quality. No one will provide more reliable information and support. No one will be more trusted as a member of the industry. And no one will listen more closely - or respond more imaginatively - to its customers.

 

All of that is part of our heritage - and, in Entertainment Imaging, that heritage continues. Thank you for allowing us to help you tell your stories.


  • Premium Member

Can I just say that currently onlining even one hour's worth of footage at 10-bit 4:2:2 is a royal pain in the ass, even on high-end equipment (yes, even on Discreet or Quantel). How is anyone expected to monitor Red's footage, let alone colour correct it, composite it, etc.? Small data rates and wavelet 'from the future' compression might help you record, but it means nothing when it has to be turned into deliverables. People need to remember that filmmaking is a collaborative process; as much as I'd like a 4K or 8K or whatever camera, there will be no revolution until it is televised (ba da boom), and there seems to me a deep shortage of people with 4K screens around here. The Red camera might be better value than 'camera x', but that means nothing to a producer who has to wonder how the hell the images shot are going to end up on Digibeta six months down the line!

And please can people stop talking about the DV revolution; all that has done is ruin the quality of television images and fill two-bob universities with 'meeede-ya' students. How many films have you seen in the cinema shot on DV? And how many idiots waste everyone's time thinking they are professional because they have shot some DV footage and slowed it down in Premiere? And no, I'm not just bitter about rates getting slashed; I'm just bored of people believing that knowing how to press a button equates to having creative and professional knowledge.

 

Having said all that, I really can't wait to see the footage from Red's camera, especially if, as I've said before, simple allowances are made to accommodate scope lenses.

 

keith


  • Premium Member
Oh come on; the overwhelming majority of people here fully support your efforts.

Not that many of them have any idea what they are talking about....

As I have had to point out to my critics on too many occasions, TV cameras are my "Trade"; I am far more conversant with what they can and what they can't do than most of the sad wannabes who post here.

 

Jim, I'm curious, do you actually shoot anything or do you solely expound at length and sans tact about the technical deficiencies of various cameras?


  • Premium Member
Can I just say that currently onlining even one hour's worth of footage at 10-bit 4:2:2 is a royal pain in the ass, even on high-end equipment (yes, even on Discreet or Quantel).

Is the problem one of maintaining real-time playback, lengthy rendering time, or what? Thanks.


Now to argue for the other side, the real problem is that film works. It works just fine; the main complaint that younger indie people have is mostly that it is too expensive, but 4:4:4 HD cameras and higher aren't any cheaper than 35mm either. I suppose they feel that digital has the potential to come down in price like all electronic technology, whereas they expect film and film cameras to stay at the same price points. But you have to recognize that you're spending a lot of time and money just to get to the quality level of 35mm film, which we've had for over one hundred years.

 

But this is unavoidable: first the technology has to match film, and then it has to somehow be cheaper and more convenient to use than film. Otherwise, what's the point of switching? In the meantime, while we live and work in a world with both film and digital, it's more a question of which serves the particular needs of the project better. We're not really at a stage where digital does everything that film does for us, so we will have both technologies in use for awhile.

 

 

Very well said, David.

 

On the other hand, if you look at the professional stills market, in many ways film is better than digital, and digital is now more commonplace because it's better than film in many ways also. Aesthetically, the advantages of either can be argued till the cows come home, but that doesn't mean that how you shoot is any less important. From the time I started shooting with a Nikon FE at nine years old up through undergrad and graduate school (in photography and film), the one thing I was taught over and over and over was that how you shoot (left and right brains) is more important than what you're shooting with (be it an Instamatic or an F5). In the professional stills world, the practical side (which in many ways affects the aesthetic side) has been deeply affected (or infected) by the digital workflow advantages. But as you said, digital doesn't do everything for us, and it shouldn't, and film should still be a viable option as long as people want to shoot with it. Just as with color and b/w.

Now, I'm sure people will argue in response that I'm saying that how you shoot is more important than the aesthetic results. That's not what I'm saying... For example, for all my stills photography I shoot mainly 50-year-old, fully manual, no-built-in-light-meter Leica M2s and M3s with very specific lenses (both newer and older lenses). The practical operation of the camera is as important to me as the aesthetic quality because it influences the aesthetic results.

On the other hand, there are many situations where a nice full-frame digital would be much more suitable and more conducive to a better aesthetic end result.

Horses for courses...

 

Deanan


Guest Jim Murdoch
And please can people stop talking about the DV revolution; all that has done is ruin the quality of television images and fill two-bob universities with 'meeede-ya' students. How many films have you seen in the cinema shot on DV? And how many idiots waste everyone's time thinking they are professional because they have shot some DV footage and slowed it down in Premiere? And no, I'm not just bitter about rates getting slashed; I'm just bored of people believing that knowing how to press a button equates to having creative and professional knowledge.

 

keith

 

I have a cousin in Sydney who works for Sony's telephone helpline there. I haven't had any reason to contact Sony myself anywhere for some time, but he said that over there, at least, you can no longer just phone up and ask to speak to somebody in the broadcast or professional division. All you can do is leave your name and a very rigorous list of details and wait for someone to phone you back. In theory, this applies even if you're the chief engineer of the Nine network, although he would probably have somebody's mobile number!

 

And the reason for this is that they were getting snowed under by the sheer number of dreamers and other time-wasters who suddenly thought they were Broadcast Professionals because they had just bought a MiniDV handycam!

 

This is very much the same situation that bedevils people wanting to find a publisher for a book: every idiot with a PC and Microsoft Works suddenly thinks he/she is a professional writer, and the publishing companies were getting swamped with submissions, the vast majority of which were absolute garbage, so virtually none of them will accept unsolicited manuscripts any more.


I have a cousin in Sydney who works for Sony's telephone helpline there. I haven't had any reason to contact Sony myself anywhere for some time, but he said that over there, at least, you can no longer just phone up and ask to speak to somebody in the broadcast or professional division. All you can do is leave your name and a very rigorous list of details and wait for someone to phone you back. In theory, this applies even if you're the chief engineer of the Nine network, although he would probably have somebody's mobile number!

 

And the reason for this is that they were getting snowed under by the sheer number of dreamers and other time-wasters who suddenly thought they were Broadcast Professionals because they had just bought a MiniDV handycam!

 

This is very much the same situation that bedevils people wanting to find a publisher for a book: every idiot with a PC and Microsoft Works suddenly thinks he/she is a professional writer, and the publishing companies were getting swamped with submissions, the vast majority of which were absolute garbage, so virtually none of them will accept unsolicited manuscripts any more.

 

The statement above by you, Mr Murdoch, is extremely patronizing towards most consumers of electronic goods and media. I am sure you recoil when you hear the term "prosumer".

 

I hope that you will redeem yourself in future posts.


  • Premium Member
I honestly don't get half of Jim Murdoch's open hostility to every company developing a digital camera. Either he accuses a company of hyping something... or hiding something if they DON'T hype it. You're never going to win with a guy like that so don't bother even getting into an argument with him.

 

He just wants to sound/feel important. It's cute when he refers to himself as an industry expert, as if the masses are awaiting his verdict on things... erm... I mean his 'following'.

 

I hope that you will redeem yourself in future posts.

 

He won't.

 

Just ignore him, like the rest of us. >8)


Guest Jim Murdoch
The statement above by you, Mr Murdoch, is extremely patronizing towards most consumers of electronic goods and media. I am sure you recoil when you hear the term "prosumer".

 

I hope that you will redeem yourself in future posts.

What are you waffling about? I'm just saying, Sony have had to severely restrict access to people in the professional division because of all the people who suddenly imagine they're "professionals" and want to tie up the phone lines trying to get an education at Sony's expense. I don't imagine for one minute that Sony like doing that. It would no doubt be different if Sony thought they were ever going to buy anything...

 

"I am sure you coil when you hear the term "prosumer".

 

I do; the term is really a euphemism for "wannabe wanker"....


Do you seriously think the resolution of MF film is less than that of a 22 or even 33 mega"pixel" Bayer-based sensor? They may be equal to or better than MF film in terms of grain, but that's as far as it goes. Resolution is not better and neither is the dynamic range. The same can be said for a typical digital SLR compared to 35mm film. A 33MP Bayer-based back has a resolution no greater than 16.5 true MP under ideal conditions and no less than 8.25 true MP under the very worst conditions. MF film has at the very least 30 true MP resolution (no one would dare dispute that figure as being too high).
I am curious as to where you are getting these figures from...?

Very simple. I regularly shoot 135 negative film and scan it at 10 megapixels. Since a film scanner is used, it is able to obtain a true full-color sample for every pixel location. This means that the scanner produces true 10 megapixel resolution for each scan. The scanner can actually do 40 megapixel scans, and at that resolution I find it is possible to reveal much more detail on the film than the TRUE 10 megapixel scan can capture.

 

By the way, a 10 megapixel scan from 135 format is about the equivalent of a 2.5K scan of Super35 film. I know everyone here agrees that Super35 is easily worth more than 2.5K!

 

Anyway, assuming 135 format is only 10 TRUE megapixels, then by simple math, a negative with ~4 times as much area can produce ~4 times as many "pixels".
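As a back-of-the-envelope check of the "10 megapixel from 135 is about a 2.5K Super35 scan" equivalence and the area scaling above, here is a minimal sketch (Python). The frame dimensions are assumed nominal values on my part (36 x 24 mm for 135, roughly 24.9 x 18.7 mm for Super35 full aperture, 56 x 70 mm for 6x7), not figures from the post: matching the pixel pitch of a 2.5K Super35 scan over a 135 frame gives roughly 9 MP, and a 6x7 negative at the same pitch gives roughly 40 MP.

```python
# Back-of-the-envelope check of the "10 MP from 135 ~= 2.5K Super35" claim.
# Frame sizes are assumed nominal dimensions, not figures from the post.

def pixels_at_pitch(width_mm, height_mm, pitch_um):
    """Pixel dimensions and count of a frame sampled at a given pixel pitch (microns)."""
    px_w = width_mm * 1000 / pitch_um
    px_h = height_mm * 1000 / pitch_um
    return px_w, px_h, px_w * px_h

# Super35 full aperture (~24.9 x 18.7 mm) scanned at "2.5K" (2560 px across)
s35_width_mm = 24.9
pitch_um = s35_width_mm * 1000 / 2560          # ~9.7 um per pixel

# 135 full frame (36 x 24 mm) at the same pitch
w, h, total = pixels_at_pitch(36.0, 24.0, pitch_um)
print(f"135 at Super35 2.5K pitch: {w:.0f} x {h:.0f} ~= {total / 1e6:.1f} MP")

# 6x7 medium format (~56 x 70 mm): ~4.5x the area of 135,
# hence roughly 4x+ the pixels at the same pitch
_, _, mf_total = pixels_at_pitch(56.0, 70.0, pitch_um)
print(f"6x7 at the same pitch: ~{mf_total / 1e6:.0f} MP "
      f"({56 * 70 / (36 * 24):.1f}x the area of 135)")
```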

 

As for the digital camera resolution not being the full 33 megapixels, that has simply to do with Bayer pattern sampling. It is the green photodiodes which give the digital camera most of its detail. In a Bayer pattern, there are half as many green photodiodes as the total "pixel" count of the camera. The remaining half of the photodiodes are evenly distributed between red and blue. Typical Bayer demosaicing algorithms will actually copy edges from the green channel of the image into the red and blue channels. The edges from the green channel are used since the green channel has more resolution (after all, more pixels were used for it). This helps remove some of the unwanted "color sparkle" edge artifacts that are inherent in Bayer-captured images. It also helps the red and blue channels appear to have greater resolution. This process is a major cause of edge halos because all too often, edges with differing luminance are copied into another channel with conflicting edge luminance. This is why edge halos are usually gray.

 

Under the worst conditions (ie. red objects on black background), the green channel will not contain any detail. Therefore, there are no edges to copy from the green channel into the red channel. Thus, the image will only have a resolution (pixel count) as great as the number of red photodiodes (1/4 the advertised megapixels).
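To make the 16.5 / 8.25 "true MP" arithmetic explicit, here is a trivial sketch (Python) of the photosite-counting argument described above: in an RGGB Bayer mosaic half the sites are green and a quarter each are red and blue, so the counts below are upper bounds on per-channel sampling, not a model of any particular demosaicing algorithm.

```python
# Photosite counting for an RGGB Bayer mosaic: each 2x2 block holds
# 2 green, 1 red and 1 blue site, so for an advertised count of N pixels:
#   green sites = N / 2   (best case: detail carried by the green channel)
#   red sites   = N / 4   (worst case above: red detail on a black background)

def bayer_channel_counts(advertised_mp):
    """Return (green, red, blue) photosite counts in megapixels."""
    return advertised_mp / 2, advertised_mp / 4, advertised_mp / 4

for mp in (22, 33):
    green, red, _blue = bayer_channel_counts(mp)
    print(f"{mp} MP Bayer back -> {green:.2f} MP of green sites (best case), "
          f"{red:.2f} MP of red sites (worst case)")
```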

 

 

-Ted Johanson


By the way, a 10 megapixel scan from 135 format is about the equivalent of a 2.5K scan of Super35 film. I know everyone here agrees that Super35 is easily worth more than 2.5K!

-Ted Johanson

 

Curious: What should 65mm neg be scanned at, in your estimation?


Under the worst conditions (ie. red objects on black background), the green channel will not contain any detail. Therefore, there are no edges to copy from the green channel into the red channel. Thus, the image will only have a resolution (pixel count) as great as the number of red photodiodes (1/4 the advertised megapixels).

-Ted Johanson

 

May I ask which bayer reconstruction software you know uses this algorithm?

 

I would not assume that everyone uses such a simplistic algorithm for their reconstruction.

 

 

Anyway, assuming 135 format is only 10 TRUE megapixels, then by simple math, a negative with ~4 times as much area can produce ~4 times as many "pixels".

 

But not necessarily 4x the resolution, as even many good medium format lenses don't generally resolve to the same level as a good 35mm lens. I shoot 6x7 with some great Mamiya lenses and scan on a Nikon 9000, and only one lens resolves somewhat close to most of my Leica lenses.

 

Not all pixels are created equal but that doesn't mean they're not pixels :)

 

Deanan


  • Premium Member
The statement above by you, Mr Murdoch, is extremely patronizing towards most consumers of electronic goods and media. I am sure you recoil when you hear the term "prosumer".

 

I hope that you will redeem yourself in future posts.

 

Got to be honest, I'm not a big fan of the word 'prosumer', if it is applied to mean a product that a consumer/non-professional uses to produce professional results. It is demeaning to professionals and, although it is great for training purposes, it is ultimately a highly confusing oxymoron. I also think there is a somewhat daft amount of hostility towards Jim Murdoch; I always enjoy his 'voice of reason', but then again I don't take everything quite as seriously as others here. It seems bizarre to me that people can criticize his open and visible presence when they cannot even log on under their own name. Despite his tone, he tends to offer a strong counterbalance to all the hot air that others mistake for qualified opinions. I don't know, perhaps it is one of those Atlantic translations, but sometimes it does seem to me that people occasionally take things a little bit too much at face value in the former colonies....

As far as dpgoulder's question goes, it is not really a question of real-time playback or rendering; those are really just hardware issues. The real problem is that software designed for working in compressed SD formats is now being 'retrofitted' for HD and higher, and it is just not tested enough in the real world before it is released. An hour of 10-bit HD, off the top of my head, is around 600 gigs, with a data rate that will push a bloody good RAID to its limits, which the computer will then have to keep track of; if you allow for a real-world situation, then it is likely that you will be accessing at least a terabyte for that hour. Basically, if you can remember back to the early days of SD non-linear and all the problems computers had with working, it is very similar now, only with HD obviously. So I personally allow a hefty amount of time for an HD online, despite the fact that in principle I am conforming from tape the same way I would with SD footage (and yes, working with data is only solving some of the issues). Most high-end software for HD-and-beyond work is really beta software as far as I'm concerned; FCP is now in its third year of HD and I still think it is nowhere near 100% solid, and I'd say the same about Adrenaline and Nitris. I have also experienced the misery and endless hair-pulling of conforming at 2K, so if someone said we were doing a film with an experimental 4K workflow I'd say fine, this will be fun, but do not expect to meet any deadlines at all. As I also previously pointed out, if you can, with cunning compression, reduce data rates, storage, etc. and monitor at 2K+, but you ultimately have to deliver at 1080p for broadcast, then you will suddenly have to deal with the issue of turning this cunning compression into 'real world' data.

In conclusion, everything is possible, but from a professional workflow standpoint I cannot see the 'Red' revolution happening any time soon. That is not to say that the Red camera will not become an extremely useful tool for us all, but don't expect to be shooting 4K films reliably at any budget in the near future.
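The ~600 gigs per hour figure above is easy to sanity-check with a rough sketch (Python). The assumptions here are mine, not from the post: uncompressed 1920x1080 at 25 fps, 4:2:2 chroma subsampling, 10 bits per sample, active picture only, no audio or blanking overhead. That lands a bit under 600 GB; counting full HD-SDI overhead, or running at 30 fps, pushes it toward the figure quoted.

```python
# Rough sanity check of uncompressed 10-bit 4:2:2 HD storage per hour.
# Assumptions: 1920x1080 active picture, 25 fps, no audio, no blanking.

width, height = 1920, 1080
fps = 25
bits_per_sample = 10

# 4:2:2 -> per pixel: 1 luma sample + 0.5 Cb + 0.5 Cr = 2 samples on average
samples_per_pixel = 2

bits_per_frame = width * height * samples_per_pixel * bits_per_sample
bytes_per_second = bits_per_frame * fps / 8
bytes_per_hour = bytes_per_second * 3600

print(f"data rate : {bytes_per_second / 1e6:.0f} MB/s")
print(f"one hour  : {bytes_per_hour / 1e9:.0f} GB (~{bytes_per_hour / 1e12:.2f} TB)")
```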

 

keith


Guest Jim Murdoch
I don't know, perhaps it is one of those Atlantic translations, but sometimes it does seem to me that people occasionally take things a little bit too much at face value in the former colonies....

 

Funny you should mention that.

I've just had a private e-mail from Jim Jannard basically saying that:

"I would appreciate that you do not stop by our booth at IBC."

 

Apparently I have "no respect for his project" :D


Guest Jim Murdoch
Jim, I'm curious, do you actually shoot anything or do you solely expound at length and sans tact about the technical deficiencies of various cameras?

Oh I see, so you're one of those people who thinks just because I haven't served a Chef apprenticeship at Maxim's of Paris, I can't be taken seriously when I say that certain restaurant food is crap.

 

I have a TV set, and go to the movies every now and then. That is all the test equipment I need really. Apart from shooting countless film tests, no I've never actually made a program on film. Most of my experience in that field comes from bozos who use inferior equipment or don't know what they're doing and then come to me expecting me to wave some sort of magic wand and turn miniDV into 35mm Big TV.

 

I'll tell you one thing, even complete morons can turn out stunning images shooting on 35mm film. There's so much correction available in Post it's hard to go wrong. (Although there are people who can stuff it up, God knows how:-)

 

Regarding "technical deficiencies of various cameras" the problem generally isn't so much the quality of the cameras themselves so much as that we keep getting endless "New Generation" cameras which work remarkably little better than the previous generation....


May I ask which bayer reconstruction software you know uses this algorithm?

I would not assume that everyone uses such a simplistic algorithm for their reconstruction.

 

Well, obviously not the one which Dalsa is using for the Origin - judging by the examples I have seen. They appear to be using some more generic algorithm which is simply filling in the voids by using neighboring "pixels".

 

Take a look at the red channel in an image from (almost) any digital camera. How do you think it could be so high resolution even though only 1/4 of the photodiodes were actually red? Did you ever notice that where there isn't an edge in the green channel, the red channel is left to fend for itself and therefore looks extra pixelated? Did you ever notice that some edges in the red and blue channels have a luminance that is exactly the same as that of the same edge in the green channel? Ultimately this process causes high frequency details to become desaturated.

 

I wouldn't call this method "simplistic". I found it much harder to duplicate than the regular Bayer-reconstruction algorithms. That's right, I have actual experience in building these algorithms.

 

Haven't you ever noticed all of those edge artifacts on digital camera images - do you think they're simply caused by over-sharpening or something?

 

If you have any experience at duplicating the results of Bayer demosaicing algorithms and have a better explanation for the edge artifacts, I'd love to hear your side of the story.

 

But not necessarily 4x the resolution, as even many good medium format lenses don't generally resolve to the same level as a good 35mm lens. I shoot 6x7 with some great Mamiya lenses and scan on a Nikon 9000, and only one lens resolves somewhat close to most of my Leica lenses.

True, but that applies to the digital back as well. I'm sure there are lenses out there that are capable of pushing MF film to the limit. 30 megapixels is quite a conservative figure for medium format - especially when considering the capability of the sensor itself without limitations imposed upon it by the system.

 

 

-Ted Johanson


Curious: What should 65mm neg be scanned at, in your estimation?

Well, if a Super35 frame is capable of resolving at 4k, then vertically run 65mm negative is capable of resolving at 8k and IMAX is capable of resolving at 12k. This of course doesn't take into account things like focus, depth of field, lens quality, etc.

 

Just a bit of trivia here...if one does scan IMAX at 12k, 10bit, then IMAX film uses the equivalent of 420 megabytes per frame, 10 gigabytes per second (at 24FPS), and about 24 terabytes for a typical length IMAX feature (40 minutes). WOW!
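Those figures are easy to reproduce with a quick sketch (Python). The frame dimensions and packing are assumptions on my part (a 15/70 frame of roughly 70 x 49 mm, scanned at 12,000 pixels across, three colour channels at 10 bits each, no padding); the result lands within roughly ten percent of the numbers quoted above, with the difference down to the assumed dimensions and file packing.

```python
# Rough reproduction of the "IMAX at 12K" storage arithmetic.
# Assumptions: 15/70 frame ~70 x 49 mm, 12,000 px across, 3 channels,
# 10 bits per channel, no padding, 24 fps, 40-minute feature.

frame_w_mm, frame_h_mm = 70.0, 49.0
px_across = 12_000
px_down = round(px_across * frame_h_mm / frame_w_mm)   # ~8400

megabytes_per_frame = px_across * px_down * 3 * 10 / 8 / 1e6
gigabytes_per_second = megabytes_per_frame * 24 / 1e3
terabytes_per_feature = gigabytes_per_second * 40 * 60 / 1e3

print(f"{px_across} x {px_down} px -> {megabytes_per_frame:.0f} MB per frame")
print(f"{gigabytes_per_second:.1f} GB/s at 24 fps")
print(f"{terabytes_per_feature:.0f} TB for a 40-minute feature")
```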

 

 

-Ted Johanson


Well, obviously not the one which Dalsa is using for the Origin - judging by the examples I have seen. They appear to be using some more generic algorithm which is simply filling in the voids by using neighboring "pixels".

Hardly. We've been working on the algorithms for about four years now, with image science PhDs dedicated specifically to the algorithms. Our algorithms take into account very specific crossover points and spectral contributions of each biased photosite. Every pixel contributes to the edge quality, not just the green, as you're implying.

 

I can't say I know which images you've seen, but there has been a continual improvement and some of the older images are not close to where they could have been.

 

Take a look at the red channel in an image from (almost) any digital camera. How do you think it could be so high resolution even though only 1/4 of the photodiodes were actually red? Did you ever notice that where there isn't an edge in the green channel, the red channel is left to fend for itself and therefore looks extra pixelated? Did you ever notice that some edges in the red and blue channels have a luminance that is exactly the same as that of the same edge in the green channel? Ultimately this process causes high frequency details to become desaturated.

 

If you shoot something completely red, you're implying that only the red sites are picking up red objects, when in fact each site picks up a broader spectrum than just the color of that pixel. However, in some cases where the exposure is quite low, the contribution in the other pixels becomes less useful and the red resolution starts to suffer. However, this does not mean that this happens all the time and that the technology is therefore useless. Likewise, you also get edge artifacts on film because of the layer depths (ie. the crappy and soft cyan edges on filmed greenscreen elements). (I think David might have seen some of the comparisons.)

 

The process by which we can create a repeatable pixel spectrum response is one of the very useful patents we got when we purchased the Philips division that designed the sensor for the Viper and the Spirit.

 

High frequency desaturation tends to happen more often when you try to overcorrect for desaturation. Likewise, in film your high frequency can get muddy and blocked up because not all layers are on the same focal plane. Then you also have the color crosstalk problems which also alter saturation. If you want to talk about desaturation, let's talk about the limited gamut of film, where you can never get the nice saturated reds, yellows, etc. no matter how hard you try.

 

I wouldn't call this method "simplistic". I found it much harder to duplicate than the regular Bayer-reconstruction algorithms. That's right, I have actual experience in building these algorithms.

 

I honestly would like to see the results of your algorithm, as we're constantly looking for anything that might work better for certain cases. If you're working on your own algorithms, then you certainly know that the state of the algorithms only continues to improve. And yes, I am working on reconstruction algorithms also (currently a realtime one on the GPU, and next week I'll be training on a pixel processing coprocessor to do the same there (along with a few other people here)).

 

If you have any experience at duplicating the results of Bayer demosaicing algorithms and have a better explanation for the edge artifacts, I'd love to hear your side of the story.

It's a combination of things, not just the Bayer algorithm, that causes edge artifacts. It has to do with the color temperature of the sensor vs the color temperature of the scene, the exposure level, the lens design and the low-pass/IR filter.

 

True, but that applies to the digital back as well. I'm sure there are lenses out there that are capable of pushing MF film to the limit. 30 megapixels is quite a conservative figure for medium format - especially when considering the capability of the sensor itself without limitations imposed upon it by the system.

-Ted Johanson

 

The 80mm and 150mm for the Mamiya 7 are two of the sharpest MF lenses I've used, and overall they are considered to be some of the sharpest MF lenses. However, they don't resolve nearly as close to the grain as, say, the Summicron 50. I would guesstimate (highly accurate :) that it's about a 50-60% gain in resolution instead of the expected 100% gain (i.e. a 3K dpi scan vs a 4K dpi scan shows little improvement in resolution: 8.2K across 7cm vs 11K across 7cm).
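For reference, the conversion behind those scan figures is just dpi times the frame width in inches, and 3K dpi to 4K dpi is only about a 33% increase in linear sampling; a trivial sketch (Python):

```python
# Scanner dpi -> samples across a 7 cm medium-format frame width.
MM_PER_INCH = 25.4
frame_width_mm = 70.0

for dpi in (3000, 4000):
    samples = dpi * frame_width_mm / MM_PER_INCH
    print(f"{dpi} dpi -> {samples:,.0f} samples (~{samples / 1000:.1f}K) across 7 cm")

print(f"linear sampling increase from 3K to 4K dpi: {4000 / 3000 - 1:.0%}")
```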

 

Deanan


Hardly. We've been working on the algorithms for about four years now, with image science PhDs dedicated specifically to the algorithms. Our algorithms take into account very specific crossover points and spectral contributions of each biased photosite. Every pixel contributes to the edge quality, not just the green, as you're implying.

 

You must be joking! Four years?! I see consistently equal or better results from Canon digital SLRs on a regular basis. Most notably, I don't see "color sparklies" as often as I see them in the Origin samples, in spite of its obviously heavy use of low-pass filtering. Perhaps that's what's causing the problem: excessive blurring of the image is giving the edge detection a hard time.

 

How good do you think this stuff (Bayer reconstruction) can get anyway? You've got a limited amount of data to work from and that's the way it will always be. No amount of mathematical ingenuity can truly replace lost detail. If it can, then you might as well get to work on an algorithm that can also restore lost highlight detail.

 

I can't say I know which images you've seen, but there has been a continual improvement and some of the older images are not close to where they could have been.

 

I've seen images from many brands of cameras...mostly Canon, Kodak, Nikon, Sony, Minolta. They're all generally the same. Some of the older cameras seem to have used a very inaccurate interpolation in which an overly complicated method was used to try to determine the direction of an edge to fill in the missing "line". I tried that interpolation method once and it produced the most idiotic-looking results I have ever seen. It really messes up random, high frequency details.

 

If you shoot something completely red, you're implying that only the red sites are picking up red objects, when in fact each site picks up a broader spectrum than just the color of that pixel.

 

That's obvious...we've all seen spectral response curves. The point is that it's all too often not overlapped enough to provide sufficient detail to the green channel.

 

However, in some cases where the exposure is quite low, the contribution in the other pixels becomes less useful and the red resolution starts to suffer. However, this does not mean that this happens all the time and that the technology is therefore useless.

 

This happens a lot more than many people would like to admit: ie. red signs or letters against a dark background. The darker the background, the worse the effect. The end result is that the red on the object looks to have been sub-sampled by JPEG compression.

 

Nobody ever said the technology is useless. While it may look acceptable, it is a far cry from the accuracy of film or 3CCD systems (please don't mention all of the technical problems with 3CCDs).

 

Likewise, you also get edge artifacts on film because of the layer depths (ie. the crappy and soft cyan edges on filmed greenscreen elements). (I think David might have seen some of the comparisons.)

 

True, but at least it happens in a more natural and pleasing way. There are no bluntly obvious monotone edge halos, because edges aren't deliberately copied from one channel to another in a desperate attempt to hide color sparklies and low resolution.

 

Likewise, in film your high frequency can get muddy and blocked up because not all layers are on the same focal plane.

 

Again, true. But this all happens as part of a truly natural process. It is the effect of optical softening and is most prominent in the red channel. The difference in resolution between the green and red layers in film is nowhere near as pronounced as the problems induced by wildly unnatural Bayer reconstruction algorithms.

 

Then you also have the color crosstalk problems which also alter saturation.

 

I don't think it is anywhere near as noticeable as you make it out to be. The simple fact remains that edge copying has a FAR more pronounced effect on the saturation of high frequency details; far more so than color-crosstalk and layer softness combined.

 

If you want to talk about desaturation, let's talk about the limited gamut of film, where you can never get the nice saturated reds, yellows, etc. no matter how hard you try.

Really? Have you never seen a Kodachrome or Velvia slide projected? And what about this image...

 

[attached image: superia1600.jpg]

 

It is a crop from an image shot using FujiFilm Superia 1600. I didn't increase the saturation. All I did was scan and color balance it. I think this high-speed film has surprisingly good separation. Would you really want the red to be any more saturated than that? It looks quite accurate compared to the original scene.

 

If you're working on your own algorithms, then you certainly know that the state of the algorithms only continues to improve.

 

No, I don't know that. Your statement seems to imply that Bayer reconstruction algorithms will forever improve. How can that be possible? Does that mean that someday we'll be using 1 PIXEL cameras which use an extremely advanced alien algorithm to construct a 10 million pixel image filled with incredible details and colors from the original scene?

 

It's a combination of things, not just the Bayer algorithm, that causes edge artifacts. It has to do with the color temperature of the sensor vs the color temperature of the scene, the exposure level, the lens design and the low-pass/IR filter.

 

I know that. That's why I said "one of the major causes" or something to that effect.

 

The 80mm and 150mm for the Mamiya 7 are two of the sharpest MF lenses I've used, and overall they are considered to be some of the sharpest MF lenses. However, they don't resolve nearly as close to the grain as, say, the Summicron 50. I would guesstimate (highly accurate :) that it's about a 50-60% gain in resolution instead of the expected 100% gain (i.e. a 3K dpi scan vs a 4K dpi scan shows little improvement in resolution: 8.2K across 7cm vs 11K across 7cm).

 

What do you expect? A 33% increase in resolution isn't going to work wonders; it will provide "little improvement in resolution", nothing more.

 

 

-Ted Johanson


Guest
This topic is now closed to further replies.
