
Epic HDR


Adrian Sierkowski


That would explain why they had such difficulty with Adam Wilt's suggestion to put the short exposure/reading/dataset at the end rather than the beginning of the long one. Adam's idea was to produce trailing rather than leading blurs on bright objects.

 

Given that this works fine for the vast majority of shots, perhaps they could provide a way to turn it off for those special cases in which the leading blur is a problem. Or, if the two readings are stored separately, that could be decided in post.

 

 

 

 

-- J.S.

 

I think the two of you have it backwards. Even though the camera is panning right to left in the fire truck scene, the buildings and the lights on them are actually moving left to right. The short exposure comes at the end, leaving the sharp highlights on the left side, which is the trailing edge. A little confusing, I know, but at least the picture in motion doesn't look weird because of it.

 

The Foundry is working on this stuff too, so a lot of improvements will probably come in the options for processing the two streams after the fact.



I think the two of you have it backwards.

 

I really don't think so ;)

 

Even though the camera is panning right to left in the fire truck scene, the buildings and the lights on them are actually moving left to right. The short exposure comes at the end, leaving the sharp highlights on the left side, which is the trailing edge.

 

The pan goes from right to left, so the lights are moving from left to right, and the "short exposure" is indeed on the left side, which proves my point: the "short exposure" reading is made first, when a relatively low number of photons have been captured by the sensor. Then comes the "long exposure" reading, with 8 (or more) times as many photons, hence more motion blur. The trick is to do those two readings without resetting the sensor between the two. Otherwise, you won't be able to smoothly merge the two readings together. It is simply impossible to merge two consecutive exposures, unless there is nothing in motion in the frame...
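
To make that merge concrete, here is a minimal sketch in Python. This is my own illustration, not RED's actual pipeline, and it assumes linear pixel values, a known clip level and an 8:1 ratio between the two readings:

import numpy as np

def merge_readings(short_read, long_read, ratio=8.0, clip=1.0):
    """Merge two non-destructive readings of the SAME exposure.
    short_read: values read out early, after 1/ratio of the integration time.
    long_read:  values read out at the end of the full integration time.
    Both are linear and share one photon stream, so wherever the long reading
    has clipped, the short reading (scaled up by ratio) can stand in for it."""
    short_read = np.asarray(short_read, dtype=float)
    long_read = np.asarray(long_read, dtype=float)
    clipped = long_read >= clip                  # highlights lost in the long reading
    return np.where(clipped, short_read * ratio, long_read)

# Toy frame: a bright light clips the long reading but not the short one.
long_reading = np.array([0.10, 0.45, 1.00, 1.00])    # 1.00 = clipped
short_reading = np.array([0.0125, 0.056, 0.40, 0.95])
print(merge_readings(short_reading, long_reading))   # -> [0.1  0.45 3.2  7.6]

Because the two readings come from the same uninterrupted integration, the scaled short values line up with the long ones even on moving subjects; with two separate exposures, anything in motion would land in different places and the substitution above would fall apart.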

 

Adam Wilt's idea of making the "short exposure" reading after the long one doesn't make much sense at all from an engineering perspective: in that case, unless you reset the sensor between the two readings, the "short exposure" reading will have blown-out highlights, since it comes after the long one, which is exactly what we wanted to prevent in the first place...


 

Given that this works fine for the vast majority of shots, perhaps they could provide a way to turn it off for those special cases in which the leading blur is a problem. Or, if the two readings are stored separately, that could be decided in post.

 

 

That's exactly why they called The Foundry for help :) Basically, that's what The Foundry's "MNMB" does: it visually removes this "leading motion blur" by generating a similar-looking motion blur for both readings! Thus, when you merge the two readings with The Foundry's "MNMB", you'll get a regular-looking motion blur, exactly as if you hadn't used HDR...
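
I have no idea what the Foundry's MNMB actually does internally, so take this only as a toy 1-D illustration of the general idea (entirely my own guess, using a crude box blur and a known, purely horizontal motion): smear the short reading so its highlights trail the same way as the long reading's before the two are merged.

import numpy as np

def match_motion_blur(short_read, blur_len, ratio=8.0):
    """Give the short reading a synthetic motion blur comparable to the long
    reading's, so the merged image carries one consistent-looking blur instead
    of razor-sharp highlights stuck to one edge of each streak."""
    kernel = np.ones(blur_len) / blur_len              # crude box blur along the motion path
    blurred = np.convolve(short_read, kernel, mode="same")
    return blurred * ratio                             # scale to the long reading's exposure

# A single bright pixel in the short reading becomes a soft streak,
# so it no longer reads as a hard leading edge after the merge.
short = np.zeros(12)
short[5] = 0.8
print(np.round(match_motion_blur(short, blur_len=5), 3))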

Link to comment
Share on other sites

Speaking of this "MNMB" thing, I've been thinking that chances are this feature might very well consistently give spectacular results. Why? Because it isn't pure interpolation. Unlike what the Foundry does with its high-end "Furnace" plugin, whose "Motion Blur" tool is indeed capable of "guesstimating" a moving object's motion blur at a higher (more open) shutter angle, in this case they DO have the correct-looking motion blur. No need to guesstimate that: the "long exposure" reading's (LER's) motion blur is correct. The only issue is that the LER's motion blur isn't quite color-correct. But it's a much easier job to "guesstimate" the correct-looking color of a motion blur than to guesstimate the motion blur itself... In my humble (engineer's) opinion, at least...

Edited by Emmanuel Decarpentrie

  • Premium Member

After laboriously hosing all the Fanboy Comments off the sidewalk, it would appear that the Epic HDR simply exploits the ability of CMOS sensors to take multiple non-destructive samples of each pixel as the exposure progresses, and basically selects the best ones to be assembled into the final image, giving extended tonal capture range.
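
As a rough sketch of that "pick the best samples" idea (a simplification under my own assumptions, not RED's actual implementation): with several non-destructive readouts taken at known fractions of the exposure, you can keep, per pixel, the longest readout that hasn't clipped and normalise it back to full-exposure terms.

import numpy as np

def assemble_from_samples(samples, fractions, clip=1.0):
    """samples: readouts taken at the given fractions of the full exposure.
    Non-destructive, so each later sample contains all the earlier photons.
    For every pixel, keep the latest sample still below the clip level and
    scale it to full-exposure terms, which extends the usable highlight range."""
    result = np.asarray(samples[0], dtype=float) / fractions[0]   # earliest sample as fallback
    for s, f in zip(samples[1:], fractions[1:]):
        s = np.asarray(s, dtype=float)
        ok = s < clip                                   # later sample still unclipped?
        result = np.where(ok, s / f, result)            # prefer the longest unclipped sample
    return result

# Three readouts of a tiny 1x3 "frame" at 1/8, 1/2 and the full exposure:
samples = [np.array([[0.01, 0.05, 0.12]]),
           np.array([[0.04, 0.20, 0.55]]),
           np.array([[0.08, 0.40, 1.00]])]
print(assemble_from_samples(samples, [0.125, 0.5, 1.0]))   # last pixel falls back to the 1/2 sample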

 

This is hardly new, but up to now has only been practical with still images.

 

After reading a series of Gavin Greenwalt's pointed posts on Reduser (which appear to be pushing him perilously towards a Stephen Williams type ban :rolleyes: ), it appears that what they are really developing (or attempting to develop) is a hardware/software package that can synthesize a high dynamic range image from the multiple-exposure data of moving images. I don't know whether you need a special version of the new camera/sensor, but you will almost certainly need a fancy post-production software package, at what cost is anybody's guess. But also, as with the so-called "4K" image, the result is always going to be what a computer thinks was there, not an actual record.

 

How well this will actually work "in the wild" (as Tom Lowe is so fond of saying) very much remains to be seen, but Gavin has thrown up some interesting scenarios.

 

I can't help wondering if that is not what happened to RedRay (or whatever it's called now). I suspect that all it really is, is some sort of MPEG4'd version of RedCode, where they apply inter-frame compression to the RedCode "RAW Slideshow". If that is in fact what they do, I could see how there could be severe problems with accurate colour rendition. In other words, if you carefully selected your showreel you could get very impressive-looking results, but it wouldn't work in the real world.

 

I'm afraid it's still very much all sizzle and very little Porterhouse.


  • Premium Member

By the way, if you're into Software-Porn, how do you like this:

 

 

[screenshot of the chart-generator applet: screenshotic.jpg]

 

This is a little applet I've been writing over the past couple of months.

It allows you to print out high-quality custom resolution charts on cheap inkjet printers. You can make charts up to 10,000 lines, or down to any figure you like (although I seriously doubt anything above 2,000 lines is going to be useful for anything).

 

Now very few printers can print to that sort of resolution, so this program gets around that by breaking the charts up into a user-selectable number of 2,000 x 1600 pixel bitmap images. It can make stepped charts with either log or lin spacing, or log or lin sweeps.

 

The lines are proper sinewaves, which avoids aliasing problems with single-sensor cameras.
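
If anyone wants to play with the idea while waiting for the real thing, here's a bare-bones sketch of the same principle in Python (just a toy, not the actual applet, and with my own choice of units: frequency in cycles per picture width): a horizontal sine sweep, log or linear spaced, rendered into one 2,000 x 1,600 tile and written out as a plain PGM so no imaging library is needed.

import numpy as np

def sine_sweep_tile(width=2000, height=1600, f_start=100, f_stop=1000, log_sweep=True):
    """One tile of a horizontal resolution sweep. The 'lines' are sine waves
    rather than square waves, which keeps aliasing down, and the frequency
    rises from f_start to f_stop cycles per picture width."""
    x = np.linspace(0.0, 1.0, width)
    if log_sweep:
        freq = f_start * (f_stop / f_start) ** x        # logarithmic frequency spacing
    else:
        freq = f_start + (f_stop - f_start) * x         # linear frequency spacing
    phase = 2 * np.pi * np.cumsum(freq) / width         # integrate frequency for a continuous phase
    row = 0.5 + 0.5 * np.sin(phase)                     # sine between 0 and 1
    tile = np.tile(row, (height, 1))
    return (tile * 255).astype(np.uint8)

tile = sine_sweep_tile()
with open("sweep_tile.pgm", "wb") as f:                 # simple greyscale PGM
    f.write(b"P5\n%d %d\n255\n" % (tile.shape[1], tile.shape[0]))
    f.write(tile.tobytes())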

 

It also does custom-resolution Bayer patterns for checking critical focus and OLPF performance.

 

You can adjust the resolution to your own requirements, and even re-scale the images to match the dimensions of any existing charts you may already have.

 

Here's a couple of scaled-down images.

You can't really compress the actual images without ruining their accuracy, and they're about 9MB each! The only real option is to generate them on the spot on your computer. (The entire program consists of a single executable file, a whopping 92K in size; talk about ultra compression!)

 

[image: 100 to 1,000 lines chart]

[image: 100 to 1,000 lines log sweep chart]

 

And the cost? Nothing, it's freeware.

I've been sorting out a couple of annoying bugs in what little spare time I've had over the past few weeks. I've only got to sort out a few more minor things and it will be ready to launch.


Hahahaha! I'm assuming that you haven't seen the second motion clip of HDRx from inside the barn with zero processing.

 

Or should I have started talking about the latest NFL trade rumors? It would have had as much relevance to this thread as your last post, Keith. Just keep on ignoring the real-world results. I'm sure everyone wants to know more about lines of software coding. We don't care about what a usable image looks like.

 

Also, HDR modes are in all the new sensors, including the Scarlets, and the software for processing it is REDCINE-X, which is free. For the Foundry's processing techniques, it's $375 for Storm.

 

Wow..... that sure is gonna break the bank for anyone who is already buying an Epic........

Edited by Brian Langeman

  • Premium Member

Hey Keith, let me know when that's finished; I wouldn't mind taking it for a spin.

I'm going to put it on a file hosting site when I've checked it with enough versions of Windows. One of the problems at the moment is that it's almost impossible to send an executable file as an email attachment these days, no matter how you try to disguise it. The email servers assume it's a virus and it simply never arrives at the other end, as I only recently discovered!


  • Premium Member

Hahahaha! I'm assuming that you haven't seen the second motion clip of HDRx from inside the barn with zero processing.

 

Or should I have started talking about the latest NFL trade rumors? It would have had as much relevance to this thread as your last post, Keith. Just keep on ignoring the real-world results. I'm sure everyone wants to know more about lines of software coding. We don't care about what a usable image looks like.

 

Also, HDR modes are in all the new sensors, including the Scarlets, and the software for processing it is REDCINE-X, which is free. For the Foundry's processing techniques, it's $375 for Storm.

 

Wow..... that sure is gonna break the bank for anyone who is already buying an Epic........

No, I have not been in that barn at all, as it happens. Neither have you. Why didn't they shoot the same scene with a RED One for comparison?

And OK so if we're now talking HDR Hahahaha-HandyCam, why don't they just go out and shoot some wild footage at random? Why do we keep seeing the same damned scenes over and over?

That's what's known as a rhetorical question, by the way; I don't need any imaginative answers (which is all I will get :rolleyes: ).


If you don't want an answer, then don't ask the question. The shot in the barn WAS random. That's why they didn't have another camera on hand to compare the results with. I didn't ask if you were there, but if you'd seen the shot. Don't put words into my mouth.

 

You're turn to answer a rhetorical question.

 

Why won't you discuss how the clip looks?


  • Premium Member

 

You're turn to answer a rhetorical question.

 

Why won't you discuss how the clip looks?

 

That's not actually a rhetorical question. (And it's "your", not "you're".)

But I shall answer it anyway:

"Because, Little Grasshopper, I have absolutely no idea what the actual lighting conditions were like."

 

That was a random shot?

 

So, what were they doing with an expensive camera like that, in a barn like that? Why didn't they just point the camera out the window of RED headquarters, for example? You don't really have to look hard to find difficult lighting in your own backyard. And there would doubtless be plenty of other RED cameras lying around there.

 

I've evaluated plenty of broadcast video cameras in my time, and I never found it necessary to do anything other than point the thing out the window to get a general feel of its dynamic range. Sure, you can do quantitative measurements with calibrated light boxes and so on, but they seldom tell the whole story. Reliance on precision measurement is a clear sign of inexperience.


 

This is hardly new, but up to now has only been practical with still images.

 

After reading a series of Gavin Greenwalt's pointed posts on Reduser (which appear to be pushing him perilously towards a Stephen Williams type ban :rolleyes: ), it appears that what they are really developing (or attempting to develop) is a hardware/software package that can synthesize a high dynamic range image from the multiple-exposure data of moving images. I don't know whether you need a special version of the new camera/sensor, but you will almost certainly need a fancy post-production software package, at what cost is anybody's guess. But also, as with the so-called "4K" image, the result is always going to be what a computer thinks was there, not an actual record.

 

How well this will actually work "in the wild" (as Tom Lowe is so fond of saying) very much remains to be seen, but Gavin has thrown up some interesting scenarios.

 

I can't help wondering if that is not what happened to RedRay (or whatever it's called now). I suspect that all it really is, is some sort of MPEG4'd version of RedCode, where they apply inter-frame compression to the RedCode "RAW Slideshow". If that is in fact what they do, I could see how there could be severe problems with accurate colour rendition. In other words, if you carefully selected your showreel you could get very impressive-looking results, but it wouldn't work in the real world.

 

I'm afraid it's still very much all sizzle and very little Porterhouse.

 

I don't know the exact English translation for the French "mauvaise foi" (roughly, "bad faith"), but, with all due respect, I'm afraid that's exactly what you're suffering from.

 

First of all, if this is "hardly new", can you please give us a few references to illustrate your point? I have been working as an engineer in this industry for more than 20 years, but I have never heard of anything remotely similar to what they are doing. Sure, there were a few white papers or theoretical studies about "multiple readings of CMOS sensors", but so far, no one has ever tried to integrate that concept into a digital video camera.

 

Next, you seem to believe they are doing multiple exposures. Not correct! It's multiple READINGS of the same exposure. In fact, it's very comparable to what Arri does with the Alexa's Dual Gain sensor or what Fuji does with its "SR super CCD". Fuji, Arri and Red all end up with two pictures of the same exposure. They all have to mix those two pictures together to increase DR! The only difference is that RED's solution causes "motion blur" issues, while Arri's (or Fuji's) dual gain sensor is less efficient at artificially increasing DR but doesn't have any "motion blur" issues to deal with.
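
To illustrate the comparison (a toy in the spirit of a dual-gain sensor, not Arri's actual processing, and the 16:1 gain ratio is just a number I picked for the example): because both readouts describe one and the same integration, the merge needs nothing more than a per-pixel choice, and there is no motion-blur mismatch to worry about.

import numpy as np

def merge_dual_gain(high_gain, low_gain, gain_ratio=16.0, clip=1.0):
    """Toy dual-gain merge: the high-gain path gives clean shadows, the
    low-gain path keeps the highlights, and since both come from the same
    exposure, a simple per-pixel crossover is enough (a real implementation
    would blend smoothly around the seam)."""
    high_gain = np.asarray(high_gain, dtype=float)
    low_gain = np.asarray(low_gain, dtype=float)
    clipped = high_gain >= clip                 # where the high-gain path runs out of headroom
    return np.where(clipped, low_gain * gain_ratio, high_gain)

print(merge_dual_gain(np.array([0.02, 0.60, 1.00]),
                      np.array([0.00125, 0.0375, 0.30])))   # -> [0.02 0.6  4.8]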

 

Next, you claim that the task of combining those two pictures will be very difficult and will require a software package whose "cost is anybody's guess". What do you think is so difficult about merging two readings of the same exposure? Do you need expensive software to be able to read Fuji's SR super CCD pictures? Do you need expensive software to take advantage of Arri's dual-gain inflated DR? Did you even notice the fact that RED (like Arri) offers the option to do this operation inside the camera? There is nothing particularly complex about merging two readings of the same exposure! The trick for Red will be to work around the motion blur issues, if necessary! That's what the Foundry has been working on!

 

There is no reason to put them down! The HDR trick they use is clever! And, at this point, there is simply no reason to believe RED's HDR will give inferior results to Arri's DG HDR. We'll soon see what it does "in the wild".

Edited by Emmanuel Decarpentrie

  • Premium Member

Emmanuel, I am no engineer, but by taking 2 readings of 1 exposure, you are basically making 2 exposures for all intents and purposes; that is, if the RED is doing it with a 1/x (some speed) reading and then a 1/48th reading, as is theorized. As for showing it, they go out to this barn and play around with the camera, and they don't even bring a LIGHT METER?! Or show the info screen on the RED camera with its histograms etc., or even switch the HDR off to show how it looks w/o it? That's suspicious, and it really should make any professional stop and say, wait a minute...


but by taking 2 readings of 1 exposure, you are basically making 2 exposures for all intents and purposes

 

Yes and no: I'd rather say that it's two different pictures coming from a single exposure, because the word "exposure" always refers to a single shutter cycle. If it really were two different exposures, the moving objects would never overlap, which would make it impossible to mix the two pictures.

 

they go out to this barn and play around with the camera, and they don't even bring a LIGHT METER?

 

I agree! This is not a serious test! I'm pretty sure we will get more serious tests in the coming weeks/months. They keep repeating that it is far from finished yet. Graeme and The Foundry are writing tons of code as we speak.

 

But it doesn't really matter in my opinion, for I KNOW the results will be impressive! Why? Simply because the method they use is one of the most clever I've ever witnessed. The only possible issue they can run into is with the motion blur. And they seem to have seriously addressed that issue: the motion in the barn video is far from ugly or awkward. And yet, this hasn't even been treated by the Foundry's MNMB algorithm.

 

All the other potential issues (like possibly wrong tone mapping, etc.) are going to be fixed sooner or later. I wouldn't say it is easy, but it can be done! The Arri Alexa is the best example I have in mind: its own "HDR" images, which, in a sense, are also made from two different readings of a single exposure, are beautiful! If you don't believe in RED's HDR potential, then you clearly must hate the Alexa's pictures, IMHO. But in my opinion, the Alexa offers the best alternative to film I've ever witnessed.

 

Let's think "potential" here, and let's not be bothered by some quickly made non professional test/preview of a feature that is still in a very early stage of development. All in all, I believe this RED made HDR thing really has a big potential! And I'm so sure this thing is going to be huge that I invite the skeptics to come back to this discussion two years from now. Let's give them enough time to fix everything, for Red is far from being an example when it comes to "respecting their own development's deadlines". :)

Edited by Emmanuel Decarpentrie

  • Premium Member

True true. I think our own issue, Emmanuel, is just semantics over how we define exposure cycles, and in the end it isn't a big big deal. I certainly don't hate the Alexa images, nor do I hate the RED images; but I also wouldn't let either of them marry my daughter (if I had one and if you know what I mean). And, while I can respect RED for intent, I must chide them for their marketing, yet again. Yes it sells cameras, but at the same time, it makes it increasingly difficult to get through the hype and zealots around it. I would love to think potential, but in the end, I'm being presented an actual image from the sensor/system and I would love to begin to form an understanding of how it works, what I'm getting etc, yet I am being confounded on both of those fronts which is quite irritating.


True true. I think our own issue, Emmanuel, is just semantics over how we define exposure cycles, and in the end it isn't a big big deal.

 

True :)

 

while I can respect RED for intent, I must chide them for their marketing, yet again. Yes it sells cameras, but at the same time, it makes it increasingly difficult to get through the hype and zealots around it.

 

I'd say the easiest way to deal with that infamous (reduser.net) signal/noise issue is to mostly pay attention to what Jim and RED's team members (or a few other trustworthy people like David Mullen or Stephen Williams) have to say. The "find more posts by xxx" function really is a big time-saver, I think :)

 

I would love to begin to form an understanding of how it works, what I'm getting etc

 

A few months of patience will be necessary, I'm afraid. Oh well, that's no big deal: I have enough work to keep me busy ;)


That's not actually a rhetorical question. (And it's "your", not "you're".)

But I shall answer it anyway:

"Because, Little Grasshopper, I have absolutely no idea what the actual lighting conditions were like."

 

That was a random shot?

 

So, what were they doing with an expensive camera like that, in a barn like that? Why didn't they just point the camera out the window of RED headquarters, for example? You don't really have to look hard to find difficult lighting in your own backyard. And there would doubtless be plenty of other RED cameras lying around there.

 

I've evaluated plenty of broadcast video cameras in my time, and I never found it necessary to do anything other than point the thing out the window to get a general feel of its dynamic range. Sure, you can do quantitative measurements with calibrated light boxes and so on, but they seldom tell the whole story. Reliance on precision measurement is a clear sign of inexperience.

 

Hey, old cynical grandpa. It was rhetorical because it wasn't intended to be answered, though I knew it would be. It's not because you don't know what the lighting conditions were like. It's because you don't want to discuss potential. You choose not to be constructive, but destructive.

 

You don't know what sensors HDRx is going into, but that doesn't stop you from talking about that. You don't know how much the relevant software costs, and you talk about that. You don't know how RED is compressing footage for RED RAY, yet you still talk about that. I could go on for quite a long time here.

 

So, one thing that you do know. You know how good these test results are looking. But... you don't talk about it... haha. You can't say something bad about them because everyone on the board can see the results, and it doesn't require them to trust your "words of wisdom."


  • Premium Member

Honestly, I can't tell what the lighting conditions were, aside from the barn being darker than outside, but the question is HOW MUCH darker. How many stops? What was behind the RED camera in this case? Where was the fill light coming from? What were the camera's ASA, shutter speed and lens set at (and what lens, for that matter)? All of these things are very important if we are to arrive at any kind of evaluation of the posted examples beyond the novelty of "oh, a camera is recording an image." If outside is a 45 and the inside of the barn is a 1.4, that's a big deal; but if it's a 2.8 in the barn and an 11 outside, that really isn't as big of a deal, now is it?
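
For what it's worth, the gap I'm describing is easy to put into stops; this is plain arithmetic (exposure varies with the square of the f-number), nothing camera-specific:

from math import log2

def stops_between(f_low, f_high):
    """Exposure difference in stops between two f-numbers: exposure goes
    with the square of the f-number, and each stop doubles it."""
    return 2 * log2(f_high / f_low)

print(round(stops_between(1.4, 45), 1))    # ~10 stops: a genuinely hard scene
print(round(stops_between(2.8, 11), 1))    # ~3.9 stops: nothing special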

 

Emmanuel, as for the S/N at REDuser, I find that many of the posts from the RED staff are rather uninformative with regard to what they are doing, and lack any objective evaluation of anything, which is disheartening... It's nice to get some of the hard facts about a system (estimated costs and the like), but beyond that I hear "18 stops of dynamic range" and then don't get to see any proof in an objective way; just, of course, nice-looking flat images, good for grading, but which give me no reference to what the actual place was like or how camera X differs from camera Y. And then many posts of just... well, nothing... It is quite sad.


It's nice to get some of the hard facts about a system (estimated costs and the like), but beyond that I hear "18 stops of dynamic range" and then don't get to see any proof in an objective way...

 

As far as "hard facts" are needed, they've shown a "stouffer like" chart showing there is indeed "18 stops of dynamic range" with HDR+6 enabled, exactly like the theory would give: let's say your sensor's native DR is 13 stops (as they claim). If you merge a picture taken by this sensor with another picture taken with an "apparent shutter cycle" that is 64 times faster (2 to the power of 6), then, you will indeed, in theory, be able to add 6 stops of DR to the original image's highlights.

 

I don't see any reason to believe this theory couldn't be proven correct in practice, when the Arri Alexa's similar approach gives outstanding results. Therefore, I don't feel the need to be overly skeptical about this whole RED HDR thing.

 

Furthermore, what difference would it make if Jim had taken a light meter and given us some extra "hard facts"? Would you trust him? Wouldn't you prefer to get an independent evaluation? How is that currently possible, when the camera hasn't been released yet? I think it's best to "wait and see": sooner or later, we'll get "hard facts", verified by independent sources. In the meantime, I enjoy the idea that sooner or later I might be able to shoot a car interior at noon without being forced to gel the windows. Or a Steadicam shot where the camera moves from inside to outside the house without having to mess around with the iris ;)

Edited by Emmanuel Decarpentrie

  • Premium Member

A chart in a test is one thing, and a useful thing, if I buy the chart itself, which, honestly, I have trouble with. But just getting some idea of the exposures would be very, very useful for these in-the-wild interpretations. Personally, I am skeptical of everything until I can see some proof of it.


  • Premium Member

Sure, there were a few white papers or theoretical studies about "multiple readings of CMOS sensors", but so far, no one has ever tried to integrate that concept into a digital video camera.

 

It's multiple READINGS of the same exposure. In fact, it's very comparable to what Arri does with the Alexa's Dual Gain sensor or what Fuji does with its "SR super CCD". Fuji, Arri and Red all end up with two pictures of the same exposure.

 

What's the IP situation on this? Are there any patents on multiple reading techniques?

 

 

 

 

-- J.S.


  • Premium Member

 

It's multiple READINGS of the same exposure.

 

Would you like to explain how that works, M. Ingénieur?

 

 

The problem remains eternally the same. The analog photocells that make up the heart of all digital cameras have a very restricted dynamic range. No amount of "downstream" digital jiggery-pokery is going to change that.

 

But it doesn't really matter in my opinion, for I KNOW the results will be impressive! Why? Simply because the method they use is one of the most clever I've ever witnessed.

 

Hey, would you mind if I used that in my signature line? :lol:

 

Are you by chance related to Jan von Krogh? :rolleyes:

 

 

 

 

Edited to see if I really can now edit posts more than a few minutes old :D Yes!

