
Epic HDR


Adrian Sierkowski


Hmmm .... We're getting bright points from the short highlight exposure, but dim trails from the longer shadow detail exposure.... Doesn't that sound backwards? Shouldn't we be getting dim from the short exposure and bright from the long one? .... If in fact that's what they're doing?

 

The bright point/dim streak thing looks a lot like an old-time tube TV camera. I'm not sure whether or not it's related to the HDR technology. Could it be that the clearing of photosites between frames doesn't happen completely in severely overloaded areas?

 

 

 

-- J.S.

 

As I said, this is pure theorizing from what is logically possible and what I see; in other words, wtf do I know?

 

I don't think you fully follow the theory, though - since the second exposure is longer, the trail is spread over multiple photosites, creating a dim trail. If the camera were static, that trail would possibly be a dot brighter than the first dot, but it would be in the exact same place, and it would have its information replaced by the detail in the other dot (from the shorter exposure), hence the HDR preserving more highlight detail...

 

As I said before, this process could have some weird motion artifacts...



 

If they post a moving shot with bright sunlight and deep shadow both holding detail, I might change my tune pretty damn quick, though. I want to believe; I just can't help being skeptical.

 

Thoughts?

One of the concerns is what you might be giving up in exchange for the extended highlight range.

There must be some sort of trade-off; otherwise, why would you want to switch it off, or dial in different levels of HDR?

 

Anyway, JHJ was fairly specific that no multiple exposures are involved (well, as specific as he ever gets).

 

The only other way I could see this working is that they have some way to control the "shutter speed" on a pixel-by-pixel basis, but you would still have the problems of charge carrier migration from adjacent overloaded cells. Or maybe that's what the "comet tailing" we see actually is. If that's the case, what they've really achieved is a way to eliminate the colour fringing this normally produces.

 

Oh well, I'm sure all will be revealed in due course, just like whether RedRay (or whatever they call it now) is applicable to other HD formats.

(Yes, yes, I've got any number of patronizing prats anxious to reassure me that "of course it is," but that is just the opinion of a lot of patronizing prats. :P )



You know Keith, I'd not want to have it any other way ;) Despite your "bashing," and I do use that term lightly, I for one have always learned something, or gotten something to think about, from all your posts on digital systems. And to think they said the only things good to come out of Australia were Crocodile Dundee and continuity editing.

 

I don't think that pixel-by-pixel would work, though, as you'd run into problems as soon as you start moving the camera. Btw, without multiple exposures can we even call this HDR? Perhaps something like Extended Range Mode (ERM!) would be better.


 

The only other way I could see this working is that they have some way to control the "shutter speed" on a pixel-by-pixel basis, but you would still have the problems of charge carrier migration from adjacent overloaded cells. Or maybe that's what the "comet tailing" we see actually is. If that's the case, what they've really achieved is a way to eliminate the colour fringing this normally produces.

 

 

I don't mean to sound negative, but I'm just highly skeptical that the engineers at RED have discovered a way to do something that was until now undoable.

 

I'm going to rant briefly, but first let me say that I shoot more with the RED now than any other camera, and I do believe that it is currently the best intersection of budget, footage quality, and practicality.

 

/start rant

 

My problem with RED is the effing hype - the magical 'mysterium' sensor just happens to be the same size, megapixels, and native color temp as an APS-C DSLR chip. Nothing mysterious about that to me. The K's are just a byproduct; hell, the raw was quite possibly just a byproduct of how they pull info off the chip (too much data to transcode and store at that speed).

 

Now, I do think hooking up a DSLR chip to a computer and streaming motion images off it is a pretty freaking genius idea - I'd have 10,000 times more respect for the company if they'd just admitted it from the start. They should shout about it - it's a really, really clever idea. Instead they wrap everything in this pretentious jargon about revolutions and mysteries.

 

I guess it works for the fan boys, but I think that the more solemn and reflective types who actually make a living off of cinematography might like a little honesty and transparency with the tools they are expected to entrust their careers to.

 

But that is simply not the market that RED chose to appeal to, and that is my problem with RED.

 

/end rant

 

Anyway, my point is ... well ... my point is ...

 

Oh yes:

 

Jim doesn't actually say that it isn't two exposures; he says (for some reason I cannot post a screen grab of this, so you'll just have to trust me):

 

Quote:

Originally Posted by Gavin Greenwalt View Post

So you are just doing two exposures.

 

Nope... "just two exposures" gives disjointed (unconnected and stuttering) motion.

Jim


 

 

There's a huge difference between 'not two exposures' and 'not just two exposures'.

 

Marketing smoke and mirrors, my friends.



Hey Matt, for those times things go wrong, I have this awesome form of Kool-Aid. It's not RED, but it's got a cool name too: Yuengling. I need one or two before wading into REDuser. I agree 100% with you about the marketing hype and the RED. It's put a bad taste in my mouth for a long time (not to mention never being able to get any helpful information out of many "REDUsers" when I hit problems, because the camera is perfect and it must be something I'm doing wrong). I also agree with you that it has to be some form of multiple exposures. If it's "not two," maybe it's 3, maybe 4, who knows. It's just not 2. I also really, really doubt this will work as well in the wild as it does in theory on paper. I'm just hoping, like mad, that after the "growing pains" of the original RED, people won't be as rabid about this new feature and use it just to use it. In the end, such things are killing our bread and butter as DoPs; we often use tools just to say we did, with no appreciation of what is really appropriate for the shoot. Of course, many times these choices are rammed down our throats by budget or higher-ups who haven't got my own form of Kool-Aid, but still, it hurts us all as soon as we start believing that something is the be-all and end-all.

I don't want to run a super risk of having this thread deteriorate into an endless debate about the RED company, or what is better (the film/digital debate), or anything of the sort, but I need to speak to the fact that the more I read about RED, the less I care and the less I take them seriously as a company. If RED really wants to succeed, it has to EARN its respect from cinematographers much the same way Arri, Panavision, Aaton, Cooke, etc. have: with years and years of hard work.

Still, one hopes they perhaps did come up with something magical -- and that's what sells, that faith -- because it may put more good tools into our hands. May we pray we use them the right way.



You know Keith, I'd not want to have it any other way ;) Despite your "bashing," and I do use that term lightly, I for one have always learned something, or gotten something to think about, from all your posts on digital systems. And to think they said the only things good to come out of Australia were Crocodile Dundee and continuity editing.

 

I don't have any problem with 35mm movie cameras being replaced by better and more economical digital cameras.

I do, however, have massive problems with being ear-bashed about this happening when it clearly hasn't.

Particularly from people for whom it is clearly and painfully obvious that they have zero or near-zero experience or understanding of how just about ANY commercial project (film or video) is actually organized, financed, or executed.



I don't think you fully follow the theory, though - since the second exposure is longer, the trail is spread over multiple photosites, creating a dim trail. If the camera were static, that trail would possibly be a dot brighter than the first dot, but it would be in the exact same place, and it would have its information replaced by the detail in the other dot (from the shorter exposure), hence the HDR preserving more highlight detail...

 

Yeah, but if the exposure in the trail area is limited by the motion of the pan causing the image of the source to move over several photosites during the exposure, wouldn't that put an upper limit on the exposure that's possible? So, the short exposure would be the lesser of the short-shutter time and the image transit time? So, we should expect a dim spot and a streak, or an indistinguishable spot and a streak, not a bright spot and a streak.....? I still kinda doubt that multi-shuttering is what they're up to here.
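A back-of-the-envelope version of that transit-time argument, with made-up numbers (nothing here is from RED; the function and all values are hypothetical):

    # Effective integration time per photosite for a small bright source
    # whose image is moving across the sensor during a pan.
    def effective_exposure(shutter_s, pixel_pitch_um, image_speed_um_per_s):
        dwell_s = pixel_pitch_um / image_speed_um_per_s  # time the image spends on one site
        return min(shutter_s, dwell_s)

    # Say 5 um photosites and the source's image crossing 5,000 um of sensor per second:
    print(effective_exposure(1 / 48.0, 5.0, 5000.0))   # 0.001 s -- transit-limited
    print(effective_exposure(1 / 384.0, 5.0, 5000.0))  # 0.001 s -- also transit-limited

On those numbers both exposures come out transit-limited, which is why the argument above predicts an indistinguishable spot and streak rather than a bright spot.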

 

 

 

-- J.S.


Yeah, but if the exposure in the trail area is limited by the motion of the pan causing the image of the source to move over several photosites during the exposure, wouldn't that put an upper limit on the exposure that's possible? So, the short exposure would be the lesser of the short-shutter time and the image transit time? So, we should expect a dim spot and a streak, or an indistinguishable spot and a streak, not a bright spot and a streak.....? I still kinda doubt that multi-shuttering is what they're up to here.

-- J.S.

 

OK, I see what you're saying now and you have a good point, but let me ask you this...

 

If you were going to combine two exposures into one to create an HDR image, wouldn't you have to manipulate the images somewhat in order to get a 'realistic' image? Jim said it's not 'just' combining two exposures...

 

To explain:

 

Let's say A is a wider shutter exposure that captures most of the mid tones and shadow detail, but the highlights are blown out.

 

Now let's say B is a tight shutter exposure to preserve additional highlight detail that A couldn't hold.

 

If you simply combined the two images, the bright spots in B would be darker than the blown-out bright spots in A, right? And I imagine it would look pretty odd for the highlights in A to have a dark center... even if it held more detail...

 

So, what I would do is create some kind of algorithm that reduced the brightness of the highlights in A, and then I would increase the brightness of B so that it seemed that B continued the brightness curve of A but held the highlight detail.

 

And that could explain why the streaks from A (the longer exposure) are dimmer than the points from B (the shorter exposure).
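For what it's worth, here is a minimal Python/NumPy sketch of that idea; the function name, the 8x ratio, and the knee point are all my own inventions, not anything RED has published:

    import numpy as np

    def merge_hdr(a, b, ratio=8.0, knee=0.9):
        # a: long exposure, b: short exposure, both float arrays scaled 0-1
        b_lifted = b * ratio  # lift B so its brightness curve continues A's
        w = np.clip((a - knee) / (1.0 - knee), 0.0, 1.0)  # 0 in the mids, 1 where A blows out
        return (1.0 - w) * a + w * b_lifted  # crossfade to B's detail where A is blown

The output runs past 1.0 (up to the ratio factor), i.e. a wider-range image, and that scaled-up B track is exactly the sort of thing that would make a short-exposure point read brighter than the long-exposure trail.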

 

B)



 

So, what I would do is create some kind of algorithm that reduced the brightness of the highlights in A, and then I would increase the brightness of B so that it seemed that B continued the brightness curve of A but held the highlight detail.

 

Of COURSE!!!

(Smacks forehead)

An Algorithm! Why didn't I think of that :P

 

Sorry, this forum has had a long history of various manufacturers' representatives telling us that all problems will solved "once we develop an algorithm...", as if they're somehting you can buy on eBay B)

 

The problem is (groundhog day, groundhog day ...) once a pixel gets overloaded, (or above a certain resolution is not captured and so on) no amount of algortihms are ever going to put back what was not there in th efirst place.

 

Having said that, it is possible that what they're really using is some sort of "steadyshot" type algorithm to remove the smearing artifacts from multiple exposure that they don't actually use, just like the noise reduction they don't actually use :)



If you simply combined the two images, the bright spots in B would be darker than the blown-out bright spots in A, right? And I imagine it would look pretty odd for the highlights in A to have a dark center... even if it held more detail...

 

So, what I would do is create some kind of algorithm that reduced the brightness of the highlights in A, and then I would increase the brightness of B so that it seemed that B continued the brightness curve of A but held the highlight detail.

 

What you'd have to do is something like old-fashioned luminance keying. Discard the crushed blacks in your low exposure and the blown-out whites in your high exposure. Your target format would have, of course, more dynamic range than the two source exposures. So, you'd slide the brights to the top and the darks to the bottom, and then combine them in that wide-range format. What you'd want to do is tweak both of them first so the mids pretty much match. Then you'd have to do a kind of crossover weighted averaging through the mids.
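As a sketch, and only a sketch (the thresholds and names are invented), that keying-plus-crossover merge might look like:

    import numpy as np

    def keyed_merge(dark, bright, ratio=8.0, black=0.05, white=0.95):
        # bright: the long (high) exposure, dark: the short (low) one, 0-1 floats
        dark_m = dark * ratio  # tweak the short exposure so the mids roughly match
        w = np.clip((bright - black) / (white - black), 0.0, 1.0)
        out = (1.0 - w) * bright + w * dark_m  # crossover weighted average through the mids
        holes = (bright >= white) & (dark <= black)  # keyed out in BOTH exposures
        return out, holes

The holes flag marks the failure case described below: pixels where a moving bright object left nothing usable in either exposure.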

 

That would work for still pictures where nothing moves between the exposures, or for motion if you shot with some kind of beam splitter camera and had simultaneous bright and dark frames to work with. But the exposure durations would also have to match identically.

 

The rub is single chip motion. Some pixels would get keyed out and discarded in both, because a bright object moved, leaving holes in your result. Others would have both exposures competing to fill them, and confusing the algorithm because they're not mids, and not the same subject matter in both frames.

 

 

 

 

-- J.S.


Of COURSE!!!

(Smacks forehead)

An Algorithm! Why didn't I think of that :P

 

Sorry, this forum has had a long history of various manufacturers' representatives telling us that all problems will solved "once we develop an algorithm...", as if they're somehting you can buy on eBay B)

 

The problem is (groundhog day, groundhog day ...) once a pixel gets overloaded, (or above a certain resolution is not captured and so on) no amount of algortihms are ever going to put back what was not there in th efirst place.

 

Having said that, it is possible that what they're really using is some sort of "steadyshot" type algorithm to remove the smearing artifacts from multiple exposure that they don't actually use, just like the noise reduction they don't actually use :)

 

Keith: You think you're funny, I guess, but I don't take kindly to your facetiousness, which is one of the pitfalls of having any kind of discussion on an open board.

 

I'm well aware that once a pixel is overloaded there is no information there, as should be apparent from my posts. I believe I was fairly clear that the algorithm is not some kind of magic that retrieves lost highlights from a single exposure, but rather a theoretical attempt to understand how RED might be integrating two shots to create an HDR effect, much like 'Merge to HDR' in Photoshop is an algorithm composed of a series of Photoshop functions.

 

But since you didn't even spell check your own post I guess expecting you to read mine before responding is a lot to ask :)

 

John: I agree that for any kind of motion HDR to be done perfectly, a prism set-up would be best.

 

However, unless the Epic has substantially changed, it seems unlikely that that is what they are doing, and for me this is just an attempt to examine how RED might be creating this HDR effect with the limited tools available to them - i.e., with a single chip.

 

I also agree with you that if a luminance key was applied to both images it would create holes.

 

What I would do would be to only apply a luminance key to exposure B (highlight detail), and lay it over exposure A (mids and shadow detail). That way any holes in B would show the shadow and midtone details beneath, while the additional highlight detail from exposure B would cover the blown highlights in A.

 

I would also lower the white output levels of A so that it's highlights don't seem brighter then B.

 

And yes, this would create artifacts in motion because the two shots are not identical with only a varying exposure. I believe the comet tailing of the lights is an example of that.
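In the same spirit (everything here is illustrative: the key level, the white reduction, and the 8x ratio are made up), that B-over-A composite might be:

    import numpy as np

    def b_over_a(a, b, ratio=8.0, key=0.8, a_white=0.9):
        # a: long exposure (mids/shadows), b: short exposure (highlight detail)
        b_m = b * ratio  # match B's mids to A's
        alpha = np.clip((b_m - key) / (1.0 - key), 0.0, 1.0)  # key B: opaque only in its highlights
        return (1.0 - alpha) * (a * a_white) + alpha * b_m  # lowered A whites under keyed B

Anywhere B's key goes transparent, A's mids and shadows show through, so there are no holes; the price, as noted, is the mismatched motion blur between the two source frames.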



And yes, this would create artifacts in motion because the two shots are not identical with only a varying exposure. I believe the comet tailing of the lights is an example of that.

 

Come to think of it, this would be like that 1920s two-color film process that used alternating red and blue frames. Anything that moved had color fringes. This would have exposure fringes instead.

 

 

 

-- J.S.


It's not that complicated, just clever.

 

The Mysterium-X sensor was designed for HDR in the first place. That's why it works.

 

The Mysterium-X in HDR mode takes two different exposures: a normal exposure (exposure time 1/48) plus a much shorter one (1/384) for highlight detail.

 

These two exposures can either be combined in-camera using a fixed tone mapping, called “EasyHDR,” with no increase in data rate, or recorded separately for combining in post, called “HDRx,” with double the data rate.
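For scale (my arithmetic, not Frank's): 1/48 divided by 1/384 gives an exposure ratio of 8, and log2(8) = 3, so on those settings the short exposure buys roughly three extra stops of highlight headroom, assuming the two tracks merge cleanly.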

 

Frank



It's not that complicated, just clever.

 

The Mysterium-X sensor was designed for HDR in the first place. That's why it works.

 

The Mysterium-X in HDR mode takes two different exposures: a normal exposure (exposure time 1/48) plus a much shorter one (1/384) for highlight detail.

 

These two exposures can either be combined in-camera using a fixed tone mapping, called “EasyHDR,” with no increase in data rate, or recorded separately for combining in post, called “HDRx,” with double the data rate.

 

Frank

Yeah, that's what everybody had figured.

And it's hardly a new idea.

It works pretty well with still images.

People have been doing that for years with ordinary stills cameras to produce realistic-looking images of things like operating TV sets in normally-lit living rooms, monitors in production studios and so on.

You don't necessarily need digital cameras either; the same technique also works for film, although the processing is somewhat more complicated.

 

Except, Jim Jannard says it doesn't work like that.

That's what most of the speculation is about: if it doesn't use multiple exposures, how DOES it work?

But it's a bit like wanting to argue about how flying saucers fly: first, show me some real evidence that they actually exist B)



But since you didn't even spell check your own post I guess expecting you to read mine before responding is a lot to ask :)

Well, I know you don't think I'm funny, but

The presence of a few simple "mechanical" typing errors indicates that the writer clearly is not using a spelling checker, and the absence of spelling errors in the vast bulk of the text indicates a level of literacy that most posters here are too illiterate to even realize they should dream about.

But no spelling checker is going to help you when you don't know the difference between the possessive "its" and the contraction "it's," as in "it is":

I would also lower the white output levels of A so that it's highlights don't seem brighter then B.

It's one of the most fundamental and immutable laws of Internet forums: Any post intended to highlight another poster’s spelling or grammatical deficiencies, is going to contain a spelling or grammatical deficiencies…

:P


Keith, I'm glad to see you're reading my posts a little more carefully ;)

 

Now perhaps we can put aside this unpleasantness and return to the topic at hand?

 

Except, Jim Jannard says it doesn't work like that.

 

The argument that the process Frank and I describe is not the one being used is based on Jim's denial of it, but I cannot find the post where Jim says it doesn't work like that - perhaps you could link to it?

 

All I could find is one where he denies that it's 'just' two exposures, and with your knowledge of grammar I'm sure you'll agree that that's a fairly ambiguous comment :)



Keith, I'm glad to see you're reading my posts a little more carefully ;)

 

Now perhaps we can put aside this unpleasantness and return to the topic at hand?

 

 

 

The argument that the process Frank and I describe is not the one being used is based on Jim's denial of it, but I cannot find the post where Jim says it doesn't work like that - perhaps you could link to it?

 

All I could find is one where he denies that it's 'just' two exposures, and with your knowledge of grammar I'm sure you'll agree that that's a fairly ambiguous comment :)

 

Well I don't know whether I'm just not entering the correct search words, or the "Ministry of Truth" have gotten into the act, but I can't find any reference to that at all on RedUser now.

 

Yet in this article, they say it is just two exposures.

It now sounds like it works more or less the way everybody expected it to work.

The question is not whether it's possible, but how predictable the results are going to be.


2 weeks later...

OK, it looks nice and all, but:

 

"We didn't take a meter with us... ugh."

 

Really? That takes the images I'm looking at from useful to a curiosity. In truth, I have no way of knowing what I'm looking at in terms of what this is showing me. To my eyes it looks like a frame stolen from a lot of ungraded material... film, some digital, certainly DSLRs (raw still mode, of course). I would love to post that on RED, but I'd just get flamed for it; still, I certainly wish someone would cajole them into showing us a proper frame with some kind of reflected readings! I will admit it is an accomplishment in getting more out of their systems, which is quite beneficial. But if I were to drop 50K for an EPIC, I'd want to see a direct comparison to other systems.



More HDR tests:

 

We had a similar setup on the first day of this season's "NCIS/LA", only a smaller door farther away, and no gaps between barn boards. The Alexa handled that at least this well, but they decided to burn out the exterior in final timing. It's time for a side by side shootout between this, Alexa, and the Sony 9000 PL. (Disclosure: I work for CBS TV Studios, we make "NCIS/LA".)

 

 

 

 

-- J.S.


 

The Mysterium-X in HDR mode takes two different exposures: a normal exposure (exposure time 1/48) plus a much shorter one (1/384) for highlight detail.

 

That's not exactly how it works. Basically, Jim was right when he said they didn't combine multiple exposures.

 

In fact, what they do is multiple readings of the same exposure. The very first reading happens very shortly (1/384th of a second if you're shooting 24fps) after the sensor's reset. This reading will provide the highlight details. Then comes the "regular" reading of the sensor's data.
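A toy model of that read-twice idea (assuming, and it is only an assumption, that the early read is non-destructive; all numbers are invented):

    import numpy as np

    def dual_readout(flux, t_short=1/384.0, t_long=1/48.0, full_well=1.0):
        # flux: light per photosite per second; ONE exposure, sampled twice
        short_read = np.minimum(flux * t_short, full_well)  # early, non-destructive read
        long_read = np.minimum(flux * t_long, full_well)    # normal end-of-frame read
        return short_read, long_read

    # A highlight at flux = 200: the long read clips (200/48 > 1) while the
    # short read (200/384 = 0.52) still holds detail.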

 

This is a very clever method I believe, because you don't actually have to try to merge two different exposures which would only provide good results if nothing was moving in the frame.

 

The only problem they have is related to the motion blur, which will obviously be very different between the two readings: the "highlight reading" will have very little motion blur, while the "regular reading" will have... regular motion blur. Merging those two readings can therefore theoretically produce some very "funky-looking" motion blur.

 

That's where you have two possibilities (in post):

 

1) "Magic Motion". That's how they call the "algorithm" which, in fact, doesn't do anything at all: it simply merges those two different motion blurs together without changing them. Results are "stunning" according to the happy few who have seen it. They claim the resulting motion looks more "organic", less "stuttery", much closer to what you'd get with a mechanical shutter, thereby implying what we all know: electronic shutters are inadequate to render motion correctly.

 

2) "MNMB" ("More Normal Motion Blur"). This is an algorithm developed by The Foundry (Nuke, Furnace) where they interpolate what the motion blur on the "highlight reading" should have been if this had been exposed 1/48th of a second instead of 1/384th of a second. Chances are this might not work all the times though, because it works as an interpolation...

Edited by Emmanuel Decarpentrie


In fact, what they do is multiple readings of the same exposure. The very first reading happens very shortly (1/384th of a second if you're shooting 24fps) after the sensor's reset. This reading will provide the highlight details. Then comes the "regular" reading of the sensor's data.

 

That would explain why they had such difficulty with Adam Wilt's suggestion to put the short exposure/reading/dataset at the end rather than the beginning of the long one. Adam's idea was to produce trailing rather than leading blurs on bright objects.

 

Given that this works fine for the vast majority of shots, perhaps they could provide a way to turn it off for those special cases in which the leading blur is a problem. Or, if the two readings are stored separately, that could be decided in post.

 

 

 

 

-- J.S.


http://provideocoalition.com/index.php/awilt/story/red_visit_21_september_hdrx/

 

http://provideocoalition.com/index.php/awilt/story/red_visit_21_september_epic/

 

Hehe, you guys are too funny. The end result is all that matters; whether you use film or digital, who cares? We're in the business of making great pictures and the sound to accompany them. A lot of people seem to think they're stuck in a cult that says it's evil for them to get these great images with anything but film.

 

I think the Epic will surpass 35mm film (and larger-format film, with their bigger sensors in the future) in every way technically: higher resolution, higher dynamic range, and equal or better color reproduction. The only reasons I could see to shoot film would be for fun, or because you want film grain and film scratches. I personally prefer a smooth, clear image to film prints. But that is always subjective for everyone on an individual basis.

 

P.S. To whoever was complaining about RED's big sensors being a pissing match... OF COURSE having bigger sensors with more resolution matters! Why do you think everyone makes such a big deal when something is shot on IMAX film or something larger? You don't need to output to 4K. A lot of my projects finish on an SD DVD. But the RED footage looks a million times better than footage shot on any SD or HD camera, hands down.

