Dalsa Hollywood open for business?

Guest Jim Murdoch

Excuse me if this is old hat, but the news item is dated Oct 31 2005:

Dalsa Digital Cinema

They're still firm in their insistence that it's a "4K" camera, even though most 35mm lenses will only cover something under 3K of it, and 3K through a Bayer mask gives you quite a lot less than 3K of actual resolution!

 

Apparently they're ready and waiting for your call, so, who's going to be first cab off the rank?

Edited by Jim Murdoch

Hi Everybody!

 

Jim, I must confess that I share your apparent frustration with Dalsa. Most notably, I am frustrated by their claims of having created a 4K camera. The fact is that their "breakthrough" 4K camera truly has no more than a MAXIMUM of 4 megapixels of resolving power. This is due to the inherent drawbacks of using a Bayer pattern CFA sensor. Many people reading this may think they already know about these problems, but they most likely attribute them simply to sharpening, chroma subsampling, or compression. Very often that is not the case. If anyone is interested in knowing more about what I am talking about, please read on; some of the problems induced by Bayer pattern sampling are explained below. Those of you with a passion for three-chip cameras, read on; you'll get a kick out of this.

 

To understand this, let's first consider the true meaning of the word "pixel". The word "pixel" comes from the two words "picture element". A pixel is the building block of a digital picture. As we all know by now, each pixel is made up of three color components: R, G, and B.

 

The big confusion over megapixels is created when manufacturers call a color-blind photodiode a pixel. A photodiode is NOT an entire pixel; its true role is that of a single color component of a pixel. A photodiode cannot detect color, only brightness. Therefore, a color filtering dye must be placed over the photodiode so that it sees only ONE of the primary colors. Three of these photodiodes are necessary to produce a full color pixel. So, in reality, the so-called 8 megapixel Dalsa camera is more correctly called an 8 megaphotodiode camera.

 

Each photodiode is, however, treated as though it were a full color pixel by filling in the two missing color component values. This is done by examining the nearest photodiodes of the same color as the missing components. For example, to find the missing red and blue values for a green photodiode, look at the nearest red and blue photodiodes and use their values. This is known as interpolation, and it is essentially an attempt to estimate the values that were most likely present at a given location.
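As a rough sketch of what that interpolation looks like, here is a deliberately naive nearest-neighbor average on a toy 4x4 RGGB mosaic. The numbers are invented, and real cameras use far more sophisticated (usually secret) algorithms; this only shows the principle of filling in the two missing components from neighboring sites.

```python
# Toy demosaic: each photodiode records one brightness value; the two
# missing color components are averaged from nearby same-color sites.
mosaic = [
    [10, 20, 12, 22],
    [30, 40, 32, 42],
    [11, 21, 13, 23],
    [31, 41, 33, 43],
]

def color_at(y, x):
    """RGGB layout: which primary the filter dye passes at (y, x)."""
    if y % 2 == 0:
        return 'R' if x % 2 == 0 else 'G'
    return 'G' if x % 2 == 0 else 'B'

def interpolate(y, x, want):
    """Average the nearest photodiodes of the wanted color."""
    samples = []
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            ny, nx = y + dy, x + dx
            if (dy or dx) and 0 <= ny < 4 and 0 <= nx < 4 \
                    and color_at(ny, nx) == want:
                samples.append(mosaic[ny][nx])
    return sum(samples) / len(samples)

def full_pixel(y, x):
    """Build an (R, G, B) triple for one photodiode site."""
    own = color_at(y, x)
    return tuple(mosaic[y][x] if c == own else interpolate(y, x, c)
                 for c in 'RGB')

print(full_pixel(0, 0))   # red site: R measured, G and B interpolated
```

The point of the sketch: two of the three values at every site are guesses, which is exactly why a photodiode is not a full pixel.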

 

Virtually every single-chip digital camera (including Dalsa's) uses a Bayer pattern CFA. In every Bayer pattern CFA, there are just as many green "pixels" as there are red and blue "pixels" combined! That means an 8 megapixel camera, for example, truly has only 4 million green "pixels", 2 million red "pixels", and 2 million blue "pixels". Bayer pattern CFAs were designed to contain more green "pixels" to take advantage of the fact that the human eye (and brain) perceives most of an image's detail from the green wavelengths of the spectrum. So, in simple terms, the green "pixels" on the sensor give the image its detail, and the red and blue "pixels" are there to provide color.

 

Now, because the green component of an 8 mega"pixel" image was captured with only 4 million photodiodes, and because green provides the detail for the image, virtually all single-chip digital cameras have only half the advertised pixel count; and that is true only under ideal conditions.

 

Now you may be asking, "If the red and blue components have only half the pixel count of the green component, why do they appear to be the same resolution as the green component?" The simple answer is "edge detection". Because the green component has the most detail, its edges are carefully copied to the red and blue components. This process not only gives the appearance of increased resolution; it also helps to hide the artifacts caused by sampling the three color components of a single pixel from differing locations on the xy plane of the sensor.
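To make the "edge copying" idea concrete, here is a toy 1-D sketch: take the high-frequency detail from a densely sampled green signal and graft it onto a softer red signal. The signal values and the 3-tap blur are invented for illustration; real demosaicers work in 2-D with much better filters.

```python
green = [10, 10, 10, 80, 80, 80]     # sharp edge, densely sampled
red_soft = [20, 20, 30, 40, 50, 50]  # same edge, softly resolved

def box_blur3(sig):
    """3-tap box blur (window clamped at the ends) as a crude low-pass."""
    out = []
    for i in range(len(sig)):
        window = sig[max(0, i - 1):i + 2]
        out.append(sum(window) / len(window))
    return out

# High-frequency "edge" detail lives in green minus its blurred self.
green_detail = [g - b for g, b in zip(green, box_blur3(green))]

# Copy that detail onto red: the red edge steepens to match green's.
red_sharpened = [r + d for r, d in zip(red_soft, green_detail)]
print(red_sharpened)
```

Note the overshoot on either side of the edge in the result: the red channel now carries structure borrowed from green, which is the kind of artifact being described when the two channels don't really agree.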

 

If you don't believe this to be true, you can prove it to yourself. Simply use your Bayer CFA digital camera to take a picture of a completely red object on a black background. Because the green channel will contain no edge detail, no edges can be copied from it. That means the red channel must be left untouched, and it therefore shows its original edge detail. Hence, you will notice increased softness and pixelation. This is NOT due to JPEG subsampling of the red component; that can be proven by looking at the red component in an area where the green component contained an edge. It will have a sharper edge than the area where green had no edge. There are many irreparable artifacts introduced by copying edges from one component to another, but they are too complicated to go into at this time.

 

This ultimately means that with ANY Bayer CFA digital camera, if you take a picture of a completely red or blue object on a black background, the actual pixel count will be 1/4 of the advertised pixel count. It also means that if you were to take a picture of a completely gray scene, the green component would be able to provide almost completely accurate edge data to the red and blue components with very few adverse effects, thus providing a nearly accurate 1/2 of the advertised pixel count. A gray scene is an example of the most ideal condition for a Bayer CFA; a completely red or blue object on a black background is an example of the worst possible condition. So, when using a Bayer CFA digital camera, the best pixel count that can be expected is 1/2 of the advertised pixel count, and the worst that can be expected is 1/4.
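The pixel-count bookkeeping above is simple enough to check with a few lines, using the advertised 8-megaphotodiode figure as the example:

```python
ADVERTISED = 8_000_000        # advertised "pixels" = photodiodes

green = ADVERTISED // 2       # half of all Bayer sites are green
red = blue = ADVERTISED // 4  # a quarter each are red and blue

best_case = green             # gray scene: green detail transfers cleanly
worst_case = red              # pure red on black: only red sites resolve

print(green, red, blue)       # 4000000 2000000 2000000
print(best_case, worst_case)  # 4000000 2000000
```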

 

Now you are probably asking, "If an 8 mega"pixel" image really has a maximum pixel count of only 4 megapixels, why is an 8 megapixel image needed to maintain ALL of the detail?" This is because the green "pixels" on a Bayer pattern sensor are arranged in a diagonal fashion: every column and every row contains a green photodiode. This is not the case with the red and blue photodiodes. If you remove any column or row to reduce the size of the image, you will be removing green "pixels". Green "pixels" provide the image with its detail, so removing them should be avoided.

 

I hope this can be understood easily enough. It's quite a complicated subject, so I apologize if I haven't made everything clear.

 

So, Dalsa, I suppose you just forgot to mention anywhere the fact that your camera has unavoidable artifacts and lower resolution than advertised due to the use of a Bayer CFA?

 

 

-Ted

 

Oh, I must also say that I am annoyed by Dalsa's claims of "12 stops of latitude". Has anyone else noticed the many discrepancies on Dalsa's web site? I will cite some examples from here http://www.dalsa.com/dc/origin/dc_sensor.asp and here http://www.dalsa.com/dc/origin/origin.asp

 

First, they say "This gives us more dynamic range, which means much wider exposure latitude than any other cinematography sensor, CCD or CMOS." Doesn't that sound as if they are implying that their sensor now has a larger dynamic range than negative film stocks? I mean, after all, isn't film considered a "cinematography sensor"? Later, on the same page, they say "Origin's exposure latitude is comparable to the best film stocks". So now it's only comparable, not better?

 

Second, they say that their camera "...offers at least 12 stops of linear response...". On another page, they say their sensor offers "...more than 12 stops of exposure latitude...". You'll notice great ignorance at work here. In one place they use the phrase "linear response", which is basically the same thing as "dynamic range"; in another place they use the phrase "exposure latitude". Now, somebody please correct me if my many years of experience have provided me with incorrect knowledge: dynamic range and exposure latitude are two different things!

 

Dynamic range refers to the total number of stops between the brightest white and darkest black a sensor can sense in a single exposure. Exposure latitude refers to the number of stops left over after subtracting the displayed dynamic range from the total dynamic range. A good example: if your displayed dynamic range is 12 stops and your total dynamic range is 12 stops, you have no latitude. But if your displayed dynamic range is only 5 stops, then you can move that range around inside the total dynamic range by a total of 7 stops.
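Under the definitions above (note that working cinematographers often use "latitude" more loosely than this), the arithmetic of the example is just:

```python
def exposure_latitude(total_stops, displayed_stops):
    """Stops the displayed range can be shifted within the total
    dynamic range, per the definition above."""
    return max(total_stops - displayed_stops, 0)

print(exposure_latitude(12, 12))  # 0 -> no latitude at all
print(exposure_latitude(12, 5))   # 7 stops of room to move
```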

 

 

-Ted


  • Premium Member

Hi,

 

It should be pretty good. Even if you take the half-count figure, which is the harshest assessment of a Bayer sensor, that's still a full 2K 4:4:4 sensor, larger than any other current camera. And it's a large chip - larger, if memory serves, than 35mm full gate - so the dynamic range should be excellent. Necessarily conjecture, of course, but it looks good on paper. And I personally think that with proper processing you could call it 3K, especially if you were storing it as 4:2:2.

 

The reason I'm posting this, though, is that they do actually have quite a good looking camera here. So why all the double-dealing with the resolution? It's not a 4K camera. Deal. It's probably the best in the world as regards resolution anyway.

 

Phil


I think this whole argument is a moot point when we consider that most 3-chip cameras are going through an inordinate amount of prefiltering and compression to tape; after all is said and done, the "4K" Dalsa is definitely on top.

 

Go over to the Dalsa site and look at the images. Guess what: they look good, better than any other digital camera I've seen. And furthermore, I don't see any Bayer artifacting.

 

Also, these crappy resolution numbers that people keep quoting are complete garbage unless all you're doing is a very dumb 3x3 interpolation. There are a number of adaptive interpolators that get you far above 4:4:4 2K resolution (approaching the 3K mark for "true" resolution). The fact is, if you know the chromaticity spectrum sensitivity of the sensor, then every pixel actually picks up a certain amount of light, and you can use every pixel for luminance calculations (offsetting whether they are green, red, or blue pixels, and again, assuming overlap between the chromaticity spectrum sensitivities of the three channels). So that gives you a "4K" luma sensor if you intelligently and adaptively interpolate the raw Bayer data.
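The "every photodiode carries luminance" point can be sketched like this. The weights are Rec. 601-style placeholders, not any real sensor's measured spectral curves, and the inversion is only exact for neutral (gray) subjects:

```python
WEIGHT = {'R': 0.299, 'G': 0.587, 'B': 0.114}  # assumed luma shares

def reading(luma, color):
    """What a photodiode under the given filter records for a neutral
    patch of the given luminance (a simplifying assumption)."""
    return luma * WEIGHT[color]

def luma_from_site(value, color):
    """Invert the assumed filter response to recover a luma sample."""
    return value / WEIGHT[color]

# For a gray patch, every site - red, green, or blue - recovers the
# same luma, so all of the photodiodes yield luminance samples.
for c in 'RGB':
    print(round(luma_from_site(reading(100, c), c), 6))  # 100.0 each time
```

The counterpoint in the thread holds too: for a saturated red subject the green and blue sites record almost nothing, so those per-site luma estimates fall apart, which is where the two sides of this argument diverge.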

 

Again, the images look good, so I think in the end this is all moot and a lot of hand-waving over nothing.


Jason, you say you see no artifacts? Are you looking for edge artifacts on full-sized samples? I don't mean edge artifacts as in color artifacts. I am talking about unusual edges in the red or blue component of the image that are not supposed to be there. Some people would write this off as sharpening artifacts, but that's not the case.

 

My so-called "crappy" resolution "numbers" were apparently too hard for you to understand. I never said anything about exclusively using "a very dumb 3x3 interpolation". I said that the full process of edge copying is used to hide edge color artifacts and cause an apparent increase in resolution in the red and blue components. I am talking about the best de-Bayering algorithms in the world!

 

I guess I don't quite understand your point about a "4K luma sensor". Sure, if you wanted a B&W image off the sensor, then you would get true 4K resolution. But the image would have an unusual pattern in it, due to the possibility of the red and blue pixels having more or less luminosity than the green pixels. The absolutely indisputable fact of Bayer pattern sensors (when dealing with full color) is that their pixel count is, at best, only 1/2 the advertised count. Sometimes it can be as bad as 1/4 the advertised count. And at any time it can vary wildly, within the same image, between 1/2 and 1/4.

 

Every single time I bring up this point, someone says, "So what about the resolution? It's how it looks in the end that matters." While it is true that the final image's appearance matters most, many people think it looks "OK" simply because they don't see any serious pixelation. It is because of interpolation that pixelation isn't much of a problem. But not even the most advanced interpolation algorithms from 50 years in the future will truly add real-world detail. When speaking of resolution, one must remember that resolution refers to the ability to actually resolve high frequency objects; it does NOT refer to the lack of obvious pixelation!

 

Here is a crop from an image shot with the glorious, better-than-thou-art Canon EOS 20D digital still camera:

post-5255-1131042727.jpg

 

This is a good place to find some obvious de-Bayering artifacts. Just one look at this image shows blatantly obvious inaccuracies. If you compare the different components of this image, you'll notice the following...

 

1. Because the green component has no edges inside of the flowers, the red component has completely soft looking edges inside of the flowers.
2. Because the green component does have an edge on the outside of the flowers, the red component flowers have a sharp outside edge.
3. The outside edges of the flowers in the red component "mysteriously" have the same luminance as the green component's outside flower edges.

 

Now, if you just stop and think for a while about the problems this can ultimately cause, you'll realize what a problem it could be. Just because it looks okay, that doesn't mean it truly is okay. If you were to compare the same scene captured as a Bayer sampled image and as a fully sampled (per-pixel RGB) image, I don't think you'd accept the obviously flawed Bayer sampled image!

 

Nobody ever said in this thread that Dalsa's camera isn't the best in some way. My whole point is to say that Dalsa is misleading people.

 

 

-Ted

Edited by Ted Johanson

> Nobody ever said in this thread that Dalsa's camera isn't the best in some way. My whole point is to say that Dalsa is misleading people.

 

I think Dalsa is misleading people in the same manner that every other still camera manufacturer is misleading their customers.

 

Your points are well taken.

 

And the images on the Dalsa site are half-resolution, so you can't see the "edge" artifacts you talk about.

 

All I can say is this: almighty film, scanned at 4K, only has 1800 line-pairs/mm *after* enhancement, and 1530 lp/mm before. This is from a test by Kodak themselves. You can read it at this site.

 

So, if that "film" scan qualifies as "true" 4K resolution, and the Dalsa, which can also resolve a similar amount of lp/mm (according to their white paper), is not "4K", then I'm confused about what constitutes what, and why film should be considered "right" all the time when there are obviously some shortcomings.

 

If Dalsa is a "liar", then so is Kodak, IMHO.

 

BTW, if the Dalsa were "only" 3K, that would still mean 1500 lp/mm (at least in perfect theory), which is still comparable in resolution to a "4K" film scan.

 

The biggest problem with the Dalsa is that there's no way to record their data in an efficient manner (and it's a huge camera). I think it's a waste of time to slam them as being unethical.

 

BTW, what lens did you use with that 20D? Your lens looks soft in general, and there's quite a bit of motion blur (or it may be lens blur if you're near the edge of the frame). I can attest that a good L-series lens is much better at resolving fine detail than the "normal" line of Canon lenses. That shot looks like it was taken with a 28-135 or one of the 24-85 or 28-105 series of "stock" lenses that come with the kits. If so, then a lot of what you're seeing comes from the crappy, low-contrast nature of those lenses rather than the resolution deficit you're talking about, although I'm not denying that the deficit exists. I'm just saying that your specific example may have more to do with lenses and/or in-camera JPEG modes.

Edited by Jason Rodriguez

  • Premium Member

> And it's a large chip - larger, if memory serves, than 35mm full gate -

True, and therein lies the rub. Lenses made for 35mm film don't *cover* the whole chip. You can only use the middle 75% or so. That's why people talk about it being 3K rather than 4K. Of course, you could get lenses from the 65/70 system, or re-package 35mm still camera glass.
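The "3K rather than 4K" figure follows directly from that coverage estimate. The 4096-photosite width below is an assumed round number for illustration, not a published Dalsa specification:

```python
SENSOR_WIDTH = 4096   # assumed horizontal photosite count ("4K")
LENS_COVERAGE = 0.75  # middle ~75% covered by 35mm cine glass

usable = int(SENSOR_WIDTH * LENS_COVERAGE)
print(usable)         # 3072 -> roughly "3K" of usable width
```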

 

So why would they do that? Dalsa is a specialized chip foundry that makes imagers. They made an experimental ultra-HDTV chip for Hitachi, and got the rights to the mask set in the deal. The purpose of this camera is to make them some money from that asset.

 

 

 

-- J.S.


  • Premium Member

Hi,

 

> So, if that "film" scan qualifies as "true" 4K resoution

 

...which I hesitate to blindly accept in all but the very best case...

 

> and the Dalsa, which can also resolve a similar amount of lp/mm (according to their white paper), is not "4K",

 

It isn't. It can at best be considered 3K with interpolation; 2K without. This is not really opinion, it's fairly well technically indisputable. I would consider it misleading to call Dalsa's Origin camera a 4K acquisition device.

 

> then I'm confused on what constitutes what, and why film should be considered "right" all the time, when

> there are obviously some short-comings.

 

Well, fairly clearly, film should not be considered "right all the time". I have spent dozens of hours staring at 2K and 4K film scans, including downsamples of each other, side by side on calibrated monitors. In most cases, you're just resolving more grain. Line pair scores are calculated using a metric which in no way resembles practical images. I continue to feel that the numbers are not of much use for comparison until people start to take grain into account, and the way the effect of grain on perceived resolution is reduced in a moving image, and until people start being upfront about their Bayer maths. Publishing the algorithm would be a start.

 

This stuff winds me up. Manufacturers seem to be obsessed with giving out ever bigger numbers. Message to Panasonic, Sony et al: do not assume that the professional market is as stat-chasing as your domestic customers. And Dalsa seems to be following this route, too. Professional film and TV people are going to ask the awkward questions, and you're going to be left looking sillier than if you'd just been honest in the first place. And Dalsa has no reason to do this anyway. Even by the worst possible metric, theirs is the highest resolution camera that I can currently rent - assuming I can currently rent one. So why the run-around?

 

Phil


Here is the info on that picture, according to the EXIF data:

Lens: EF 17-35mm f/2.8L USM
Focal length: 27mm
EI: ISO 200
Aperture: f/5.6
Exposure time: 1/100 second

Also note that the crop was taken from very near the center of the frame.

 

As for your technical specification comparisons between negative film stocks and the Dalsa sensor, I have some problems to point out. First of all, you say that Kodak says film has a maximum of 1800 line pairs per millimeter. Don't we all wish that were possible! They actually say it has 1800 lines per picture height, which equates to 3200 lines per picture width. Now, are you sure the Dalsa white papers are dealing with the same resolution measurements? Are they dealing with line pairs per millimeter, lines per millimeter, lines per picture width, lines per picture height, line pairs per picture width, or line pairs per picture height?
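To keep those units straight, here is a small converter. The 16:9 aspect ratio is an assumption chosen because it matches the 1800-to-3200 figures above; a line pair is one black plus one white line:

```python
ASPECT = 16 / 9  # assumed picture aspect ratio (width / height)

def lines_per_width(lines_per_height, aspect=ASPECT):
    """Convert lines per picture height to lines per picture width."""
    return lines_per_height * aspect

def line_pairs(lines):
    """Two lines (one black, one white) make one line pair."""
    return lines / 2

lpw = round(lines_per_width(1800))
print(lpw)              # 3200 lines per picture width
print(line_pairs(lpw))  # 1600.0 line pairs per picture width
```

Mixing any two of these units (per mm vs. per picture dimension, lines vs. line pairs) changes the number by a large factor, which is exactly the ambiguity being complained about.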

 

Kodak's example was also shot on the old Vision 500T stock; not exactly the most ideal stock for testing ultimate resolution.

 

Dalsa admits in their own sensor white paper that "Some film negatives have been tested to exceed 4000 lines of horizontal resolving power". That can be found in the first sentence on page 3. Then they go on to blab their brains out about internegatives, prints, etc. and generational loss, as if an image captured on film MUST remain on film!

 

Let's also not forget that Dalsa's sensor is larger than a Super35 frame, so not all lenses will fill the frame. Therefore, not all of the frame can be used.

 

Has anyone ever seen any proof that the Origin has 12 stops of exposure latitude?

 

 

-Ted


Guest Jim Murdoch

> They actually say it has 1800 lines per picture height. That equates to 3200 lines per picture width.

 

Can we just get something straight here? (Not Ted, just some of the other posters.) If the film has a resolution of 3,200 lines across its width, that means that if you looked at it under a microscope and counted the lines, there would actually be 3,200 of them (i.e. 1,600 black lines on a white background). To equal that with a video camera, it would have to have at least 6,400 pixels across its sensing surface, and in practice a bit more, to allow for losses in the spatial "softening" anti-aliasing filter that's essential for any solid-state scanning system.
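For the sampling arithmetic behind that figure: the theoretical (Nyquist) minimum is two samples per line pair, and Jim's two-pixels-per-line number amounts to a further 2x of practical margin on top of that. A sketch, with the margin factor as an explicit assumption:

```python
import math

def min_pixels(line_pairs_across, margin=1.0):
    """Horizontal photosites needed to resolve the given number of
    line pairs across the width; margin > 1 models practical
    headroom above the bare Nyquist limit."""
    return math.ceil(2 * line_pairs_across * margin)

print(min_pixels(1600))        # 3200 at the theoretical Nyquist limit
print(min_pixels(1600, 2.0))   # 6400 with a 2-pixels-per-line margin
```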

 

The reason you don't need a spatial filter for film is that its grain pattern is totally random, so visible artifacts are most unlikely to occur, and if they ever do, they only persist for 1/24th of a second.

 

You may have heard manufacturers talking from time to time about film scanners with up to 16K horizontal resolution. Obviously they must think there is a valid reason for doing this.

 

> Has anyone ever seen any proof that the Origin has 12 stops of exposure latitude?
> -Ted

Hah! Has anybody seen any real proof of half the crap they claim about these new cameras? They seem awfully reluctant to let too much footage get out into the real world, where someone who understands the limitations of the technology might be tempted to point out the excessively diaphanous nature of the Emperor's new Glad Rags!


  • Premium Member

> Has anyone ever seen any proof that the Origin has 12 stops of exposure latitude?
> -Ted

Hi,

 

I am interested to see a test versus the Viper. I have seen almost 10 stops of DYNAMIC RANGE from a Viper. I have been told on this forum that the Varicam has 12 stops; however, I don't believe that's possible.

 

Stephen

