
Epic HDR


Adrian Sierkowski


  • Premium Member

Hey all,

Anyone else seen the new Epic HDR stuff over at Reduser?

 

http://reduser.net/forum/showthread.php?t=49668&page=2

 

Just curious about thoughts, impressions, etc. At first blush it looks very nice, though a bit noisy on their step chart. The image outdoors should grade well, but it looks very similar to most stock I've shot, which in and of itself is an accomplishment. I'd love to see something with heavy backlighting, before and after, to see how much one can really get (perhaps with reflective readings for a reference).

My biggest concern is what such shooting does to motion, inasmuch as it has to either make multiple exposures of the same scene or really push/pull the image with processing.

 

So yes, ideas, thoughts?



  • Premium Member

I'd need a frame of reference to make any meaningful evaluation, like one frame in normal mode, one in HDR mode, and maybe a frame of film as well. I can't just look at a single day exterior shot and tell if there is added DR unless it is an extreme case, like backlit water and clouds near sunset, or a dark interior with big windows looking out at a sunny background.

 

But in principle, it sounds like a great development, one of the final steps in matching and/or exceeding film negative.


  • Premium Member

You know, I'd not mind that, Brian, for a sci-fi/comic-book SHORT film. I don't know if I could handle it for a feature. And I dunno, David, I don't think, honestly, digital will ever be equal to or greater than film (nor that we should think in such terms, really). Rather, Digital should be Great and be an awesome next-door neighbor to Film, who is also Great, and they get coffee sometimes and stare at people they find beautiful, who are rarely the same person.


Yeah, I guess I didn't mean to knock the process entirely, but just the fact that, like 3D, people could start doing really crude HDR for everything as a shortcut for lighting and all.

 

It has its uses, but god, it'd be awful if that sort of HDR became the default for indie filmmakers.


  • Premium Member

It sounds like the EPIC HDR mode will be adjustable in strength so one could avoid an odd look and just be able to capture a natural-looking dynamic range.

 

My point about matching what film gives us was not so much the idea that digital will be exactly like film, but that since digital will eventually replace film for the majority of production, what we need are cameras that don't involve technical compromises compared to shooting film negative; I'm not talking so much about the artistic issues. I've been watching the Blu-ray of "The Red Shoes" and thinking about how that is a lost look, so it's not like we can always have what we had in the past; there is always something lost with progress.

 

But from a practical standpoint, I think what most people want are all the advantages of shooting film negative PLUS any more advantages from shooting digital. And one of the biggest advantages of film has simply been its extended dynamic range. I took advantage of that every day when I was shooting that golf movie in Texas last month.

 

I caught some of "Covert Affairs" last night in HD, a TV show shot on the Red One (not sure if it is the MX version, but I believe so), and it looked quite lovely except for a scene at Niagara Falls where the lead actress is on the phone in backlit morning or late-afternoon light with the falls behind her... the falls were mostly clipped, burned out. Which is annoying simply from the perspective of shooting a major sequence at Niagara Falls and having your star location clipped in the background. I thought that if they had been shooting on film they could have held more detail in the background.


I sure would be happy for the day when a digital camera can match the quality and resolution of 5/65.

 

Not because I relish seeing that glorious format mothballed for good, but just because, at this point, the odds of me getting to shoot something in that hypothetical format seem much better than shooting 65mm film, for which the odds are getting ever slimmer.

 

Really, I wish I could've been born much, much earlier, so I could've camera-opped in the days of Camera 65 and Todd-AO, or the heyday of IMAX in the '80s and '90s.

 

Alas!

 

Still, those latest Epic stills are VERY encouraging, if I neglected to say so before. Which drives me all the more nuts, since I can't afford to buy an Epic, and even renting one would put a mighty big dent in my savings! :(

 

BR


  • Premium Member

My biggest concern is what such shooting does to motion, inasmuch as it has to either make multiple exposures of the same scene or really push/pull the image with processing.

 

So yes, ideas, thoughts?

The only way I can see this even being remotely possible is by some sort of multiple-exposure system like the Arriscan uses, but that only works because it is essentially photographing the same image (i.e. the frame of film) twice in succession. With real-world images that could only ever work if there were no or very limited motion between "shots", which is hardly feasible.

 

If that test chart they're showing is really producing an 18-stop brightness range, it means the brightest part of the image has to be 2^18 times (roughly 260,000 times) as bright as the darkest part. To achieve that would require some extraordinary conditions, since the tiniest bit of reflection onto the darker bars will completely ruin the ratio. The chart would have to be set up in a large blacked-out warehouse completely lined with black curtains.
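Just to put numbers on that (a trivial Python sketch, nothing more):

    # Each stop doubles the brightness, so an N-stop range implies a
    # 2**N : 1 contrast ratio between the brightest and darkest chart patches.
    def contrast_ratio(stops):
        return 2.0 ** stops

    for stops in (11, 13, 18):
        print("%d stops -> about %.0f:1" % (stops, contrast_ratio(stops)))
    # 18 stops -> about 262144:1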

 

Apart from that you would need an extraordinarily good lens, since any internal reflections would have the same effect as light bouncing off the walls.

 

The bottom line question is, what is the actual tonal range focussed onto the chip? Astronomers often get around this by using optical fibres moved around by micro-servo systems instead of focussing onto an actual flat surface; it would be interesting to subject the same lens/chart combination to such a setup.


  • Premium Member

I'd need a frame of reference to make any meaningful evaluation, like one frame in normal mode, one in HDR mode, and maybe a frame of film as well. I can't just look at a single day exterior shot and tell if there is added DR unless it is an extreme case, like backlit water and clouds near sunset, or a dark interior with big windows looking out at a sunny background.

 

Or maybe a RED One, shooting the same scene :rolleyes:


  • Premium Member

That's why I posted over here, Chris. I know, for one thing, I'll get some useful insights into it here vs. Reduser, where I'm afraid of banishment just for questioning!

 

And I agree, Keith, an R1 shooting the same scene, with all files given as RAW, would be very useful. But we can keep on dreaming till the cameras come out, at least.


  • Premium Member

I really do try not to add to these sorts of threads, but eesh, sometimes...

 

As far as I can recall, back in 2006 they claimed they'd have a fully usable camera in a year and they didn't. They claimed it would shoot 4K RGB images (I specifically asked). It doesn't. They claimed it would shoot 120fps at full resolution. It doesn't. They claimed it would have uncompressed output. It doesn't. They claimed it would boot in under a minute. It doesn't. They claimed it would have the same dynamic range as a Viper and it only maybe sort of might do now, four years later, after significant changes.

 

As far as I can see, these people have a record of making claims which are either clearly impossible or nearly so. Now they're claiming 18 stops of dynamic range and it does make me wonder.

 

Personally I would have hoped that the cinematography market would have been just a touch more savvy than the sunglasses market, but apparently it really doesn't matter what you can actually do - grandiose claims appear to be all the market really needs. As someone who takes care not to promote things I can't provide, I find this a little irksome.

 

P


Fair enough, Phil, fair enough. Though I'm curious: any ideas how they might be making the HDR? I figure it's some savvy frame blending, as Keith mentions, which is what scares me. Any other thoughts?

 

Adrian, I think you've got it. In a lot of ways, HDR reminds me of the early attempts at color. Kinemacolor was essentially a crude form of frame blending, where two successive frames coloured red and green were combined on screen via the "persistence of vision" effect. The process, and many like it, failed to catch on because of bad fringing, since the images were not captured simultaneously. Technicolor resolved this problem but had to use beamsplitters to capture multiple images of the same thing at once. Eastman Kodak then did Technicolor one better by eliminating the need for beamsplitters and multiple strips of film, devising what we know today: a single strip of film with multiple dye layers.

 

HDR is very similar because you're creating a depiction of reality that relies on that reality being captured from the same vantage point at the same time, albeit through slightly different exposures, and then recombined. Either you do successive exposures as still photographers have done, or you use some form of beamsplitter with two or more cameras (as those Russian fellows did in the video link I posted), or you find a way to create translucent sensors that can be layered and programmed to each over- or underexpose.
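To make the successive-exposure route concrete, the merge step looks roughly like this (a simplified Python/numpy sketch of generic bracketed-exposure blending, not a claim about how Epic actually does it):

    import numpy as np

    def merge_brackets(frames, exposure_times, clip=0.95):
        # frames: linear-light images in [0, 1], identical framing;
        # exposure_times: their relative exposure times.
        # Scale each frame back to a scene-referred estimate, throw away
        # clipped pixels, and average whatever survives at each pixel.
        acc = np.zeros_like(frames[0], dtype=np.float64)
        count = np.zeros_like(frames[0], dtype=np.float64)
        for frame, t in zip(frames, exposure_times):
            usable = frame < clip                    # ignore blown-out pixels
            acc += np.where(usable, frame / t, 0.0)  # back to scene radiance
            count += usable
        return acc / np.maximum(count, 1.0)

The catch is that loop over frames: anything that moves between those exposures gets averaged into a smear, which is exactly the motion worry.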

 

The Epic has to be using some very advanced form of the first method, successive exposures blended together, which could raise some concerns about photographing action-type scenes.

 

BR


Honestly, Brian, it concerns me for any type of scene... I mean, what happens, for example, if an actor were to blink in an ECU eye shot!

 

Ditto. The only way I can envision proper, true HDR would involve several cameras arrayed around a plate beamsplitter, so you can capture the identical image twice, as well as retain the power to adjust your exposure through each lens. But with only two cameras taking the over- and underexposure, you'll just get the really low-rent HDR one sees on Flickr from amateurs. True, quality HDR needs a wider range of exposures for better blending... at least three exposures, and better yet, five (-2, -1, 0, +1, +2).
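For reference, those EV offsets work out to these relative exposure factors (a trivial Python sketch):

    # A -2/-1/0/+1/+2 bracket expressed as relative exposure factors.
    ev_offsets = [-2, -1, 0, 1, 2]
    factors = [2.0 ** ev for ev in ev_offsets]
    print(factors)  # [0.25, 0.5, 1.0, 2.0, 4.0]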

 

BR


  • Premium Member

I am wondering, perhaps, if they found a way to vary the exposure from each pixel, or groups of pixels, on the chip. For example, 3 pixels normal, 3 over, 3 under, etc., and then figured out a way to interpolate the data from them, kinda like super Bayer-mask magic. Of course, you'd then take a resolution hit in reality, though you could still be recording 5K or whatever. I think, if something like that worked at all, then it ought to work at multiple frame rates, as they're claiming. Just my quick thought on how it might work.
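Something like this is what I'm picturing, purely as speculation and simplified to alternating rows instead of groups of three (a Python/numpy sketch; none of it is based on anything RED has said):

    import numpy as np

    def merge_interleaved(raw, under_stops=2.0, clip=0.95):
        # Pretend even rows get normal exposure and odd rows are under-exposed
        # by `under_stops`. Assumes an even number of rows, linear values in
        # [0, 1], and (crudely) ignores the one-row spatial offset.
        normal = raw[0::2, :].astype(np.float64)
        under = raw[1::2, :].astype(np.float64) * (2.0 ** under_stops)

        # Use the normal rows until they clip, then fall back to the scaled
        # under-exposed rows, which still hold highlight detail.
        merged = np.where(normal < clip, normal, under)

        # Rebuild full height by interpolating the rows we gave up; this is
        # where the real-world resolution hit comes from.
        out = np.empty_like(raw, dtype=np.float64)
        out[0::2, :] = merged
        out[1::2, :] = 0.5 * (merged + np.roll(merged, -1, axis=0))
        return out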

I kinda would love to make a 3x Bayer-mask camera with a prism and varying ND on each side of it, so as to make an in-camera HDR, then record each as its own little file (ok, not-so-little file) for later recombination. 'Course, I could see how that'd screw with a lot of things too...


We will probably not know how they do it until they release Epic. Jim said the data is doubled in HDR mode - immediately that suggests two consecutive exposures, yet Jim claims there is no trade-off in HDR mode except the larger amount of data (twice as much). Then it must be done in the same exposure... and if so it must lead to lower resolution, which again they indirectly say will not happen - as in "The only disadvantages are that you record twice the data," which suggests two consecutive exposures. This just beats me!

 

But RED did an incredible job on REDCODE, which is an amazingly efficient codec; maybe they have done magic again?

 

I am excited and curious, but I have to see it to believe it, and for it to be usable to me, I don't think two different frames (different in time) merged into one will work.

Edited by Eirik Tyrihjel

  • Premium Member

Ditto. The only way I can envision proper, true HDR would involve several cameras arrayed around a plate beamsplitter, so you can capture the identical image twice, as well as retain the power to adjust your exposure through each lens.

BR

Yeah, but all such schemes work on the assumption that once a particular pixel reaches overload on the high-sensitivity sensor, it will thereafter quietly and demurely sit at that clipping level, allowing you to simply add on the unclipped data from the corresponding pixel of the low-sensitivity sensor.

 

That idea is as old as the hills. The problem has always been that the excess charge generated in the overloaded pixels of the high-sensitivity sensor tends to overflow into the surrounding unsaturated pixels, producing blue fringing and other nasties. Various schemes have been tried to prevent such horizontal charge leakage; to my knowledge none has been able to provide more than 1-2 stops of improvement without introducing other forms of image impairment.
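For clarity, the naive merge those schemes assume looks something like this (a toy Python sketch; the gain ratio and clip level are made-up numbers):

    import numpy as np

    def naive_dual_sensitivity_merge(high, low, gain_ratio=16.0, clip=0.95):
        # Trust the high-sensitivity pixel until it clips, then swap in the
        # low-sensitivity pixel scaled up by the gain ratio. This only works
        # if a clipped pixel really does sit quietly at the clip level; once
        # excess charge blooms into its neighbours, the swapped-in values no
        # longer line up and you get exactly the fringing described above.
        high = np.asarray(high, dtype=np.float64)
        low = np.asarray(low, dtype=np.float64)
        return np.where(high < clip, high, low * gain_ratio)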

 

I'm afraid it's just another same ol' same ol': CMOS and CCD sensors have been around for nearly 40 years; if it was that simple to improve the dynamic range, it would have been done long ago.

 

Or alternatively, if Jannard and co have really found a way to solve this problem, why are they only applying it to such a minor-league product? The surveillance and military applications alone would make Epic sales look like coins shaken out of the sofa....

 

The sensor in the Arriscan was specially designed for this sort of operation, but at the expense of sensitivity, which is not an issue in a film scanner. Also, it only ever works in monochrome mode, so there are no colour fringing issues.


  • Premium Member
Well, nevertheless it's a damn good codec, and it made 4K processing possible for any small-time operation, even on a laptop computer.

 

Well actually, again, no. Interesting choice of words, because 4K processing is still an enormous pain in the neck, just as much as it was when Dalsa tried it. At least Arri put a debayer board in the Alexa, so you aren't forced to do half the work that (in my view) the camera should be doing as a very expensive part of your post process. I suspect that whenever you cut Red stuff on your MacBook, you're working in considerably less than 4K resolution and with a considerably poorer debayer than the one about which Red like to make their most grandiose claims.
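To give a sense of what a cheap proxy decode looks like, here's a generic half-resolution debayer where every 2x2 Bayer cell collapses to one RGB pixel (an illustrative Python sketch, not Red's actual pipeline):

    import numpy as np

    def half_res_debayer(raw):
        # Quick-and-dirty half-resolution decode of an RGGB mosaic: a 4K
        # mosaic comes out as a 2K RGB image, with the two green samples
        # averaged.
        r = raw[0::2, 0::2].astype(np.float64)
        g1 = raw[0::2, 1::2].astype(np.float64)
        g2 = raw[1::2, 0::2].astype(np.float64)
        b = raw[1::2, 1::2].astype(np.float64)
        return np.dstack([r, (g1 + g2) / 2.0, b])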

