
RED ONE footage


Emanuel A Guedes


One way to do that, if they go for a whole new chip, would be to intersperse two sets of photosites: one set of big ones for the toe through the mid range, and another set of smaller ones, with ND in addition to the color filtration, for the mid range to high end -- sort of like woofers and tweeters in audio. Of course, you'd have to solve the crossover problem, just like with audio. And the high end would be severely undersampled. But who knows, you might be able to come within a couple stops of doubling the dynamic range. Interesting?
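
For anyone who wants to play with the idea, here is a toy numeric sketch of that two-photosite scheme (my own invention for illustration, not any real sensor design; the 4-stop ND and the crossover point are assumptions):

```python
import numpy as np

# Toy model of the "woofer/tweeter" photosite idea sketched above.
# Hypothetical design, not any real sensor: big photosites handle the toe
# and mids; small ones behind (assumed) 4 stops of ND handle the highlights.
ND_STOPS = 4          # assumed ND strength on the small photosites
FULL_WELL = 1.0       # normalized clipping level for either photosite

def big_site(scene):
    return np.clip(scene, 0.0, FULL_WELL)            # sensitive, clips first

def small_site(scene):
    # ND cuts the light 16x, so this site clips 4 stops later; rescale back.
    return np.clip(scene / 2**ND_STOPS, 0.0, FULL_WELL) * 2**ND_STOPS

def blend(scene):
    """The 'crossover': trust the big site until it nears clipping, then
    fade over to the small site (the analog of an audio crossover filter)."""
    b, s = big_site(scene), small_site(scene)
    w = np.clip((b - 0.8 * FULL_WELL) / (0.2 * FULL_WELL), 0.0, 1.0)
    return (1.0 - w) * b + w * s

# A ramp spanning ~9 stops: the blend tracks the scene where either
# photosite alone would have clipped.
scene = np.logspace(-5, 4, num=10, base=2.0)
print(np.round(blend(scene), 3))
```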

-- J.S.

 

Very interesting! I'm amazed at how much you guys know about all this!

 

I remember reading something David Mullen suggested: if you had a chip that had a couple more stops of dynamic range than film, you could apply some sort of envelope (don't remember what he called it) to round off the edges, similar to the way film behaves.

 

It would be interesting to see the results.

 

 

Jay

Link to comment
Share on other sites


It is impossible to shoot pure black and white with a Red, or any Bayer-filtered camera, at ANY resolution. You are always shooting "color," because you will never get the full spectrum of luminance since each pixel has either a green, blue or red filter on it. The closest you could get, in my opinion, would be a "2K B&W image shot through a green filter." This is, of course, without any post-processing. You could certainly "average" the output of red, green and blue. But as I mentioned earlier, I am just talking about the sensor data itself, sans any post-processing.
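
A toy count makes the green bias concrete (an RGGB layout and a naive quad average are assumed; this is an illustration, not any camera's pipeline):

```python
import numpy as np

# Toy RGGB mosaic: count the photosites per channel, then see what the
# naive 2x2-quad average (one "B&W" pixel per quad) is actually weighing.
H, W = 4, 8
pattern = np.empty((H, W), dtype='<U1')
pattern[0::2, 0::2] = 'R'
pattern[0::2, 1::2] = 'G'
pattern[1::2, 0::2] = 'G'
pattern[1::2, 1::2] = 'B'

for c in 'RGB':
    print(c, (pattern == c).mean())   # R 0.25, G 0.5, B 0.25

# Averaging a quad of raw values therefore gives (R + 2G + B) / 4:
# a luminance estimate biased toward green, not a neutral gray.
r, g, b = 0.9, 0.5, 0.2               # hypothetical flat-field channel values
print((r + 2 * g + b) / 4)            # 0.525, pulled toward the green value
```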

Hold on a second. You wrote this just now, but initially you wrote this:

I'm confused. Which do you mean? In any case, the red pixels will only respond to red light. Other colors are filtered out. This is necessary to isolate each color channel.

Well, yes. There's 2K of true green data. However, you can only match it with a quarter as much red and a quarter as much blue. So you still come up short in reconstructing even a true 2K image. Remember, you need 2K of green, 2K of red and 2K of blue just to create a 2K color image.

Yep, you're absolutely right. Interesting how all these other digital technologies cannot get away without observing Shannon's theorem (although 22.05 kHz audio was common a while back), but digital cinema can. This will change. We should be striving for 8K acquisition for 4K output. That would put digital cinematography in line with CD-quality audio and print-quality photos.

 

There's still a long way to go.

 

You are confused about the nomenclature. A 4K image means an image with 4K PIXELS across, not 4K CYCLES (LINES) across. As such, the question of lines and the Nyquist theorem itself is irrelevant.

 

Tripurari


Exactly what I said. Worst case: if you film a subject that only creates a response on the red photosites, you are down to this 1K resolution.

 

Such an image is impossible, since the green filter will let through some of the red, regardless of how red it is. The red, green and blue response curves of most filter arrays overlap each other - just like those of the human eye.

 

Tripurari


  • Premium Member
Very interesting! I'm amazed at how much you guys know about all this!

 

I remember reading something David Mullen suggested: if you had a chip that had a couple more stops of dynamic range than film, you could apply some sort of envelope (don't remember what he called it) to round off the edges, similar to the way film behaves.

This isn't something I know about; I'm not sure if anybody's tried that with a chip design. I'm just speculating that it would be one way to substantially increase the dynamic range of a single-chip camera. This is a long, long way from becoming a shipping product.

 

Given more dynamic range than film, yes, you certainly could design some kind of filter to match the shoulder and toe characteristics. It would probably be implemented as a look-up table. Save the raw data, and pick your film stock in post.
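
As a rough illustration of what such a look-up table might do, here's a minimal sketch (the curve is an invented Reinhard-style rolloff, not any actual film stock's characteristic):

```python
import numpy as np

# Sketch of the "envelope" idea: map scene-linear values with more range
# than film through a curve with a film-like toe and soft shoulder, baked
# into a look-up table. The curve is a generic soft rolloff, purely
# illustrative, not any real stock's characteristic curve.
WHITE = 8.0   # assumed scene-linear value that maps to display white

def film_like(x, toe=0.004):
    shoulder = x * (1 + x / WHITE**2) / (1 + x)   # rolls off, never hard-clips
    return np.maximum(shoulder - toe, 0.0) / (1.0 - toe)

# Bake a 12-bit LUT: save the raw data on set, pick (or swap) the look in post.
lut = film_like(np.linspace(0.0, WHITE, 4096))
raw = np.array([0.0, 0.18, 1.0, 4.0, 8.0])        # scene-linear samples
idx = np.round(raw / WHITE * 4095).astype(int)
print(np.round(lut[idx], 3))  # highlights compress smoothly instead of clipping
```

Baking the curve into a LUT keeps the raw data untouched, which is exactly what lets you "pick your film stock in post."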

 

 

 

-- J.S.


  • Premium Member
You are confused about the nomenclature. A 4K image means an image with 4K PIXELS across, not 4K CYCLES (LINES) across.

To be even more precise, 4K photosites across, under a Bayer mask. Pixels aren't the same thing as photosites, especially in single-chip designs like Bayer.

 

 

 

-- J.S.


You are confused about the nomenclature. A 4K image means an image with 4K PIXELS across, not 4K CYCLES (LINES) across. As such, the question of lines and the Nyquist theorem itself is irrelevant.

Fair enough; let me put it this way: you will need to acquire double the pixel count of the target output resolution to satisfy the Nyquist-Shannon theorem. Example:

 

Desired output resolution: 640x480

Native sensor resolution (assuming 3x CCDs in a prism arrangement): 1280x960


[snip]

Demosaicing is actually a responsibility of the software rather than the camera's sensor. So I feel the claims about Mysterium would better be delivered as "the combination of our Mysterium sensor and our clever de-bayering software." Because, as I've already pointed out, the Bayer-filtered sensor is only giving you a portion of the information you need -- even for a 2K image.

 

There's a difference between data and information. The Bayer filter throws out 2/3 of the data, but how much information does it lose? Image data has a lot of redundancy, as evidenced by its high compressibility.
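
A quick way to see the redundancy point (toy data; any lossless compressor makes the same case):

```python
import zlib
import numpy as np

# Sketch of the data-vs-information point: natural images are highly
# redundant, so lossless compression shrinks them a lot. A smooth gradient
# (a stand-in for typical image content) vs. pure noise (no redundancy).
rng = np.random.default_rng(0)
gradient = np.tile(np.arange(256, dtype=np.uint8), (256, 1))   # redundant
noise = rng.integers(0, 256, (256, 256), dtype=np.uint8)       # incompressible

for name, img in [("gradient", gradient), ("noise", noise)]:
    raw = img.tobytes()
    packed = zlib.compress(raw, 9)
    print(f"{name}: {len(raw)} -> {len(packed)} bytes "
          f"({len(packed) / len(raw):.1%})")

# The gradient compresses to a tiny fraction; the noise barely at all.
# Redundant data can be discarded (as the Bayer filter does) while losing
# far less *information* than the raw byte count suggests.
```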

 

Coming back to sensors: square lattices have greater diagonal resolution than vertical/horizontal by a factor of sqrt(2). If you are interested in a non-astigmatic image (a reasonable goal, especially since you seem to be interested in worst-case analysis), this extra data capacity can be used to convey color information. Starting with a non-astigmatic image, you can use a Color Filter Array (CFA) to modulate chroma info into the (empty) diagonal high frequencies. The demosaic algorithm can easily heterodyne this chroma info back to baseband. If you work out the details, you will find that more than 2K worth of information can be retrieved.
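
Here's a small sketch of that frequency-domain view, assuming a toy RGGB mosaic (the carrier positions are the point, not the exact numbers):

```python
import numpy as np

# Frequency-domain view of a CFA: mosaicking modulates chroma onto
# high-frequency carriers. Build a smooth color image, apply an RGGB
# Bayer mask, and look at where the spectrum concentrates energy.
N = 128
y, x = np.mgrid[0:N, 0:N] / N
img = np.stack([0.6 + 0.3 * x,            # smooth gradients: baseband only
                0.5 + 0.2 * y,
                0.4 + 0.1 * (x + y)], axis=-1)

mask = np.zeros((N, N, 3))
mask[0::2, 0::2, 0] = 1                                   # R
mask[0::2, 1::2, 1] = 1; mask[1::2, 0::2, 1] = 1          # G
mask[1::2, 1::2, 2] = 1                                   # B
mosaic = (img * mask).sum(axis=-1)

spec = np.abs(np.fft.fftshift(np.fft.fft2(mosaic)))
c = N // 2
print("baseband (luma):        ", spec[c, c].round(1))
print("corner carrier (chroma):", spec[0, 0].round(1))
print("edge carriers (chroma): ", spec[c, 0].round(1), spec[0, c].round(1))

# The source image had no high-frequency content, yet the mosaic shows
# strong components at the Nyquist corner and edges of the spectrum:
# that is chroma, heterodyned up by the CFA, recoverable by demodulation.
```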

 

The gist of this long-drawn-out argument is that the sensor actually retains a lot of information, and that the output is not mostly synthetic data invented by software.

 

Tripurari


Fair enough; let me put it this way: you will need to acquire double the pixel count of the target output resolution to satisfy the Nyquist-Shannon theorem. Example:

 

Desired output resolution: 640x480

Native sensor resolution (assuming 3x CCDs in a prism arrangement): 1280x960

 

When the output resolution is also measured in pixels and not cycles? No!

 

If the desired output resolution is 640x480, you only need a sensor with 640x480 photosites, ignoring practical issues such as a non-ideal Optical Low Pass Filter.

 

Tripurari


  • Site Sponsor
I remember reading something David Mullen suggested: if you had a chip that had a couple more stops of dynamic range than film, you could apply some sort of envelope (don't remember what he called it) to round off the edges, similar to the way film behaves.

 

 

http://www.fujifilm.com/about/technology/s..._ccd/index.html I do not know how much resolution you can get out of these, or whether they surpass film for latitude. I would think chip size vs. noise would be an issue, as would whether it can be clocked for higher frame rates...

 

-Rob-


Hi guys, I'd like to weigh in on what we've seen so far from the red camera.

 

This thread at the very least has been entertaining to a point, but I think it would be more productive to carry the discussion to another more practical and objective level.

 

Case in point: in my view the images we've seen from the RedOne have great resolution, and look like something between HD and film. Whether or not the camera is 4K is irrelevant to a well-told story, shot cinematically, with high production values. But I will admit, film does still seem to have an edge in its ability to convey emotion and to aid the storytelling process. It remains to be seen how well filmmakers can adapt the Red's visual, spatial, and cinematic qualities to various scenarios, genres, etc. I don't think anyone can deny that the Red achieves a quality surpassing the film of yesteryear, and if audiences of that era saw something shot with the Red today, I dare say the difference would be irrelevant next to the impact of the story and the way the story is told.

 

Now, in comparing the Red to film, I have to say there is something subtle lacking in the texture of the image; it does not seem organic, and comes across a bit CGI/plastic. I don't think that is something that has been totally addressed. If the Red has resolution to spare, and it produces a clean image, what can we do to emulate the way our eyes, and film, display what's going on around us? This is not something specific to the Red. Superman Returns displayed a lack of texture which I believe hurt the film (mediocre story aside), even though the quality of the images was fine.

 

So, I'm asking: will someone step up to the plate and start a dialogue on how the Red may be used, by itself or in conjunction with XXX, to enable a more organic look in the images it produces? Regardless of the medium, content is king. However, art is most effective in the cinematic world when it imitates life, especially in how the images are presented, captured, etc. So, again, how can the Red's potential be used more effectively to circumvent this byproduct of digital capture?

 

Can the Red Team, distinguished members of this forum, guests, or just anyone with an idea or potential solution weigh-in?

 

I don't think it's enough to just have a clean, high-resolution picture; we need some texture (not necessarily grain, mind you; I'm not suggesting we degrade or muddy the picture... unless that's the only option). All the other tools seem to be there, workflow testing aside, so let's see what this baby can do... at the very least, we will see a revolutionary leap in what is achievable, and at best there will be an evolution. Either way, as an aspiring filmmaker, I'm just happy to be here while others are working out the bugs and refining things, so that when I am ready to create my masterpiece (ahem, hopefully one day!!!!!!) I will be able to make an informed decision as to the best tools available for the job.

 

Just to say a bit about myself: I am a film major at college, focused on directing, but I have a long number of years in the media, using everything from S-VHS, Betacam SP, MiniDV, and Digital Betacam to 35mm for commercials and broadcast programming. Most recently I've rather fallen in love with 16mm while I've been at college, and have an Eclair NPR, which I would part with if there were a comparable video camera which could produce the raw, evocative images 16mm can create (no way I could afford a Red, unfortunately...).

 

Mr. Jannard: congratulations on your vision and faith. I am sure you and your team can say

 

"...I took the one less traveled by..." borrowed from Robert Frost.

 

Krystian.


  • Premium Member

oh, i'm sure i'll regret this because phil can be a real son of a $#@! but following all this, he seems to be laying out pretty legitimate questions. jim, you keep asking him to make a real contribution, but what he's saying seems pretty straightforward. i haven't seen a real rebuttal of his claim. saying, "others think it looks good" isn't quite an answer. no disrespect meant in an otherwise disrespectful thread, but combat the idea, not the person.

 

i'm just a humble, lo-profile chap making a living aiming cameras and lights.


Personally I feel that at some point a line in the sand has to be drawn on this sort of thing. Either you are interested in upholding technical standards, or you're willing to forgo them for convenience, but either way don't lie about it. I stand for cutting the crap, but other people may feel differently. But fine, okay, nobody's interested. Dismiss it as noise. I'm clearly fighting a losing battle here; Red is being discussed on the 2K-444 list of CML, a place known for the rigidity of its rules, even though the compressed output is neither 2K nor 444 nor anything like it.

 

This is the beginning of a slippery slope down which producers would just love us to slide, towards a point where manufacturers can sell their products to us as anything they like, with or without reference to reality, and where the balance between cost and technical standards is hopelessly upset. This balance has been jealously guarded by generations of scientists and engineers, and it's an insult to every one of them that we are so eager to roll over for a guy who's trying to sell us a 2K camera as 4K the same way he used to persuade us that ten-dollar sunglasses are worth a hundred. I'm dismayed that Mr. Nattress, a man who I'd previously have esteemed as among those keepers of technical excellence, is willing to be involved.

 

All I can say is stand by for a plummeting of standards elsewhere, too. The precedent set here is lethally dangerous. This is an immensely sad moment; it makes me feel as if the principles I've always worked for are being dismissed out of hand.

 

Phil

 

Phil,

 

I wouldn't be so alarmed over this situation, for the simple reason that no one is likely to get something other than what they expect. Technically savvy people will know about the Bayer issue, while the technically disinclined will expect RED's picture to be similar to a digital still camera's with a similar pixel count, thanks to aggressive marketing by digital still camera manufacturers.

 

I cannot imagine a realistic situation in which someone would feel cheated.

 

Tripurari


oh, i'm sure i'll regret this because phil can be a real son of a $#@! but following all this, he seems to be laying out pretty legitimate questions. jim, you keep asking him to make a real contribution, but what he's saying seems pretty straightforward. i haven't seen a real rebuttal of his claim. saying, "others think it looks good" isn't quite an answer. no disrespect meant in an otherwise disrespectful thread, but combat the idea, not the person.

 

i'm just a humble, lo-profile chap making a living aiming cameras and lights.

 

Phil's arguments seem "straightforward" because of his silent assumption that data and information are the same thing.

 

-Tripurari


  • Premium Member

I get worried whenever people start treating picture information as redundant or disposable. Experience has taught the motion picture industry that treating any picture information as disposable can have unpleasant consequences, most noticeably when performing chroma keying. This data will be seen by systems that do not see the way we do, and assumptions based on the human visual system can turn out to be unhelpful.

 

> Color Filter Array (CFA) to modulate chroma info into the (empty) diagonal high frequencies.

> The demosaic algorithm can easily heterodyne this chroma info back to baseband. If you work out the details,

> you will find that more than 2K worth of information can be retrieved.

 

Right. That's what we're talking about when we credit them with maybe more than 2K after LPF. But really, we're being nice. Do you really think they have implemented multidimensional (usually 5D, if I remember correctly) frequency-domain analysis of a 4,500-pixel-wide Bayer image in a software codec to run on desktop PCs? If they have, what does that imply about the HD-SDI outputs and viewfinder?

 

Phil


The gist of this long drawn argument is that the sensor actually retains a lot of information and is not mostly synthetic data being invented by software.

Well, to be fair, you can't really say one way or another without specifying 1) an input resolution and 2) an output resolution. If I begin with a Mysterium sensor and output 8K, then yes, it is mostly synthetic. If it is 4K, then what? The consensus seems to be 2.5K, which I'll go along with, so yes, I'd say you're right. But all of this clever trickery doesn't necessarily work all the time, under all lighting conditions. You mentioned that I am interested in "worst-case analysis." Well, yes -- because those situations are very real, and are the ones where half-measures always fail to deliver.

 

Observe the following demosaiced photo:

 

debayer_01.jpg

 

Now, look at this 300% view:

 

debayer_02.png

 

What's wrong with this picture? Despite our "intelligent and informed guess," incorrect decisions were made during demosaicing. This isn't the best demosaicing algorithm, but it is a real one nonetheless (a hardware implementation, from a Canon DSLR). The point is, the data cannot be recovered 100%, and even though we can get close, there are still times when errors will occur, because we are making up the missing data.
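
To see how such errors arise, here's a toy demosaic of a hard edge using naive bilinear interpolation (illustrative only; this is not the Canon algorithm above):

```python
import numpy as np

# Mosaic a hard vertical edge (neutral gray step), then reconstruct with
# naive bilinear interpolation: a toy algorithm, purely for illustration.
N = 8
truth = np.where(np.arange(N) < N // 2, 0.2, 0.8)               # step edge
img = np.repeat(truth[None, :], N, 0)[..., None] * np.ones(3)   # neutral gray

mask = np.zeros((N, N, 3))
mask[0::2, 0::2, 0] = 1                                         # RGGB
mask[0::2, 1::2, 1] = 1; mask[1::2, 0::2, 1] = 1
mask[1::2, 1::2, 2] = 1
mosaic = (img * mask).sum(-1)

# Bilinear fill: average each channel's known neighbors in a 3x3 window.
out = np.zeros_like(img)
for ch in range(3):
    known = mask[..., ch]
    num = sum(np.roll(np.roll(mosaic * known, dr, 0), dc, 1)
              for dr in (-1, 0, 1) for dc in (-1, 0, 1))
    den = sum(np.roll(np.roll(known, dr, 0), dc, 1)
              for dr in (-1, 0, 1) for dc in (-1, 0, 1))
    out[..., ch] = num / den

# At the edge the channels disagree: the gray scene comes back fringed,
# because each channel was sampled (and therefore guessed) at different
# positions.
print(np.round(out[2, N // 2 - 2:N // 2 + 2], 2))   # rows of RGB triplets
```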


When the output resolution is also measured in pixels and not cycles? No!

 

If the desired output resolution is 640x480 you only need a sensor with 640x480, ignoring practical issues such as a non-ideal Optical Low Pass Filter.

Still doesn't satisfy the theorem, which is my whole point. If you think I'm wrong, then please correct me, because from what I've been led to believe, sampling a scene at the exact same rate as the output does not observe the Nyquist-Shannon theorem.


I get worried whenever people start treating picture information as redundant or disposable. Experience has taught the motion picture industry that treating any picture information as disposable can have unpleasant consequences, most noticeably when performing chroma keying. This data will be seen by systems that do not see the way we do, and assumptions based on the human visual system can turn out to be unhelpful.

 

> Color Filter Array (CFA) to modulate chroma info into the (empty) diagonal high frequencies.

> The demosaic algorithm can easily heterodyne this chroma info back to baseband. If you work out the details,

> you will find that more than 2K worth of information can be retrieved.

 

Right. That's what we're talking about when we credit them with maybe more than 2K after LPF. But really, we're being nice. Do you really think they have implemented multidimensional (usually 5D, if I remember correctly) frequency-domain analysis of a 4,500-pixel-wide Bayer image in a software codec to run on desktop PCs? If they have, what does that imply about the HD-SDI outputs and viewfinder?

 

Phil

 

Fair enough, though I'd put the figure closer to 3K than 2K, assuming that color-difference signals have roughly half the frequency bandwidth of luminance or any of the primary colors.

 

I don't know what kind of software codec RED is running, but it is certainly possible to apply the necessary filters in reasonable time. You can do it really quickly if you use the graphics card. The big advantage of a RAW workflow is that you can do all of this offline.

 

As for the viewfinder and HD-SDI, you can get away with lower-order filters since they are at half resolution. Again, I don't know what RED is doing, but real-time processing using FPGAs is not impossible.

 

-Tripurari


  • Premium Member

Actually that -is- something that I'd suggest. Ensure that any software produced to go with this camera makes maximum possible use of the pixel shaders (or unified shaders, now) afforded by modern graphics cards. It's something of a moving compatibility target, but they're immensely powerful and very suitable for that sort of work, and could take a huge load off the processor.

 

Traditionally it isn't much done, but it could be, and even more so now with the popularity of the PCIe bus.

 

Phil


Actually that -is- something that I'd suggest. Ensure that any software produced to go with this camera makes maximum possible use of the pixel shaders (or unified shaders, now) afforded by modern graphics cards. It's something of a moving compatibility target, but they're immensely powerful and very suitable for that sort of work, and could take a huge load off the processor.

 

Traditionally it isn't much done, but it could be, and even more so now with the popularity of the PCIe bus.

 

Phil

 

I believe Rob Lohman and Graeme have been saying for some time now that REDCine is GPU-dependent, so I imagine they're making heavy use of pixel shaders. It'll be interesting to see what kind of performance it'll have with modern GPUs. Real-time 4K (sorry Phil) decompression and demosaicing isn't far off, I imagine...


Still doesn't satisfy the theorem, which is my whole point. If you think I'm wrong, then please correct me because from what I've been lead to believe, sampling a scene at the exact same rate as the output does not observe the Nyquist-Shannon theorem.

 

I don't see why you are worrying about the Nyquist theorem at all. The Nyquist-Shannon sampling theorem is just that: a sampling theorem. You are trying to relate the DISCRETE signal from the sensor to the DISCRETE signal of the display without any re-sampling.

 

Here's an alternate argument - the Nyquist theorem has two variants:

1. sampling a continuous signal to obtain a discrete signal

2. re-sampling a discrete signal to obtain another discrete signal

 

You can't be talking about 1, because a continuous signal is measured in cycles (or lines), not pixels; "sampling a continuous signal of 4K pixels" is a meaningless statement. Case 2 is also ruled out, because no one is forcing you to resample.

 

A third way of looking at it is that the sampling theorem isn't merely about sampling; rather, it places a fundamental limit on how much information a discrete signal can hold. The amount of information a 4K-sample signal can hold is the same at the sensor as it is at the display. There is no way to stuff more information into 4K at the display by using higher-resolution sensors. (In practice this limit isn't reached, because of non-ideal LPFs; hence the popularity of oversampling.)

 

(There is also the issue of reconstruction filtering, which is usually left to the human visual system.)
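
A quick numeric illustration of why only variant 1 is the dangerous one (a toy sine example, my own illustration):

```python
import numpy as np

# Variant 1: sampling a CONTINUOUS signal is where Nyquist bites.
# A 3 Hz sine sampled at 4 samples/sec (Nyquist limit: 2 Hz) is
# indistinguishable from a -1 Hz sine: it aliases.
fs = 4.0
t = np.arange(16) / fs
three_hz = np.sin(2 * np.pi * 3.0 * t)
minus_one_hz = np.sin(2 * np.pi * -1.0 * t)
print(np.allclose(three_hz, minus_one_hz))   # True: the samples are identical

# Variant 2: a discrete 4K-sample signal displayed at 4K samples involves
# no resampling, so the theorem imposes nothing further on it.
```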

 

-Tripurari


...he seems to be laying out pretty legitimate questions. jim, you keep asking him to make a real contribution, but what he's saying seems pretty straightforward. i haven't seen a real rebuttal of his claim. saying, "others think it looks good" isn't quite an answer.

I have to say here that I understand where Phil is coming from.

 

There is absolutely no denying that Jim & his team have done a great job; the images look great. However, it's how the finished product looks on the screen that really counts, not the raw footage. If you're not going to do a lot of post work, then you've got great footage and you don't really need to worry about whether or not it is true 4K or something less... it simply looks great.

 

But if you've got a lot of post, keying or compositing work to do, then it becomes vital to have the real figures... how it looks is not relevant till you've finished. It is for these people that being consistent with the terminology is important. The way I see it, Phil has a point about what you call it. If you say it's 4K, then it's only fair for the VFX guys to expect to work with 4K. If it's not really 4K, then there's nothing wrong with saying so, to those to whom it is important. I mean, 16mm can look really good, but if I'm doing VFX on it, the post guys deserve to know what they're getting... it has real implications for them.

 

Having said that, the images I have seen look impressive.... they remind me of HDR photos.


Wow, another digital image. The only finished work I've seen from the Red One is the Crossing The Line trailer. It looks like a cartoon! There are a couple of pics at Reduser that look like CG or hyper-reality. I guess some people like that CG look? I would like to see some footage that looks better than film. 4K just looks fake to me. I thought Red was going to replace film?

Just catching up on this thread after chasing some other passions in the last couple of months. I may have a sense of what you're objecting to in the "Crossing the Line" demo. Now, I've actually seen the demo on a 4K projector, twice: once about 6 feet from the screen, the second time about 18 feet from the screen. And it looked pretty goddamned good. No noise. Tack-sharp. But still, I think I know what you mean. I think we'll have a better sense of RED's "look" once we see some low-key, studio-lit interiors. You know, some high-contrast, color-saturated stuff with, say, a girl's face in the shot. Much of the RED footage so far has been daylight exteriors: flat-ish looking, bright, outside kinda stuff. But in video, where the biggest imaging hurdle has historically been limited contrast-handling ability, flat in this sense is "good," because it translates into more captured dynamic range. What I mean is, we haven't seen the kind of photography that I would like to shoot: colorful, contrasty, lit night interiors and night exteriors. Even a nice contrasty daylight interior would be nice (think "breakfast cereal" :30 spot kinda kitchen; it's daylight, but very contrasty, because it's studio-lit).

 

Anyway, here's what I can't wait to see . . .

 

A beautiful model in an interior scene. Lots of practicals in the shot. A 2K Xenon streaking in through a rear set window, bouncing off a polished floor, and gently bouncing all around the set to finally put a hint of its lumens on the model's face. The Xenon also 3/4-backlights the model's hair, very hot. Burning hot. Two-stops kinda hot. And it splashes the camera-left side of her cheek and the nape of her neck with light just as hot. Then, a super-soft key on, say, the camera-right side of the model's face, so that the light goes from, say, 1/3 to 1/2 stop of "overexposure," and falls off to the camera-left side of her face, to a level as dark as the stray light on the set will allow. In the background, lots of colored glass elements. Lens this at a medium focal length, at about f/4.0. The production design motif is perhaps a highly stylized, modern nightclub look. Actually, this could be either a high-key or low-key scene, as long as there's lots of saturated color and a lot of lighting contrast. This is what I can't wait to light for a RED demo.

 

Ralph Oshiro


Here's another one I'd like to see lensed with RED . . .

 

A night exterior. Mercury vapor street lights. Street completely wetted down. A row of 18K HMI Fresnels raking the street. Some police cars in the background with their lights and strobes on. A space light, nearer to camera, filling the actors. LOTS of contrast. LOTS of rich blacks. LOTS of saturated color.

