Emmanuel Decarpentrie

Basic Member
  • Posts

    67
  • Joined

  • Last visited

Profile Information

  • Occupation
    Cinematographer
  • Location
    Brussels
  • My Gear
    Red-One, Red Epic, Red Scarlet, Arri Alexa, Lumix GH2, Blackmagic Pocket

  1. Very good point indeed! I took a single line because, as far as I'm aware, that's what (most) rolling shutters work with. But it could very well be 4 or even 5 lines, although I have some doubts. It would be very easy to verify: a simple whip pan would give us enough deformation to check whether we get some sort of "staircase" effect on the verticals. If so, we just have to count how many pixels per stair, and that's it (see the quick sketch below). It would be even easier if we decreased the motion blur by increasing the shutter "speed". The reason I doubt this is what they are doing to reduce the Read-Reset is that I guess I would have noticed this staircase effect: I'm very fond of whip pans, and I've shot more than 100 hours of footage (including many RS tests) on the Red-One ;)
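      A quick back-of-the-envelope sketch of that "count the pixels per stair" idea; every number below is an assumption for illustration only, not a measured value:

      ```python
      # Toy estimate of the rolling-shutter "staircase" on a whip pan.
      # All figures are assumptions for illustration, not measurements.
      readout_time_s = 0.005        # assumed full read-reset sweep of the sensor
      lines = 2560                  # assumed active lines (2:1 mode)
      pan_speed_px_per_s = 20000.0  # assumed image motion during a fast whip pan

      line_time_s = readout_time_s / lines                 # delay between two consecutive line reads
      shear_px_per_line = pan_speed_px_per_s * line_time_s

      for lines_per_group in (1, 4, 5):
          # If N lines share the same read instant, the shear only advances every N lines,
          # so verticals break into "stairs" roughly N pixels tall and this many pixels wide:
          stair_width_px = shear_px_per_line * lines_per_group
          print(f"{lines_per_group} line(s) per read group -> stair ~{lines_per_group} px tall, "
                f"~{stair_width_px:.2f} px wide")
      ```

      Counting how many rows share the same horizontal offset would give the group size directly.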
  2. BTW, this number is pretty easy to evaluate, if we have the total time it takes for a sensor to "read-reset" itself: we just have to divide the full "Read-Reset" cycle time by the number of lines. Am I wrong? Since Jim said the "Read-Reset" cycle time of the EPIC's MX is around 5 ms and its line count is 2560 (in 2:1 mode), that would put the number we are looking for at around 2 microseconds! Still enough to get a little gap on moving objects, but would that gap ever be noticed, or be hard to fill? It's hard to say...
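      For what it's worth, here is that division spelled out (using the figures quoted above, which are Jim's numbers, not mine):

      ```python
      # Per-line read time from the figures quoted above (EPIC MX, 2:1 mode).
      full_read_reset_s = 0.005   # ~5 ms full "Read-Reset" cycle, as quoted
      lines = 2560                # number of lines in 2:1 mode

      per_line_s = full_read_reset_s / lines
      print(f"~{per_line_s * 1e6:.2f} microseconds per line")   # ~1.95 us
      ```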
  3. We don't need to wait for the whole chip to be read or reset: due to the rolling shutter, all that matters is a single pixel (or, more precisely, a line of pixels). In theory, the "reset" of a pixel could take place right after it has been "read", which would indeed put the whole "gap" in the "dozens of microseconds", perhaps even less (I don't have the exact number). However, I still have the intuition that it would be a mistake to try to combine two pictures whose exposures didn't start at the very same time. I could be wrong though. It's definitely worth a try. I hope someone from Red or Arri will test this, at least to try to obtain an effect similar to a "second curtain flash" instead of Magic Motion's "first curtain flash" effect.
  4. That is correct! However, I am convinced that moving objects would be separated by a "gap" (I can't find a better word to explain it) if one decided to use this "two consecutive exposures" approach. And filling that "gap" would be very tricky, to say the least. Results would be inferior, I have no doubt about that. Well, that is your right. I can't find a better explanation, though. Marketing considerations aside, we must differentiate RED's method of achieving HDR (which is quite unique in this industry) from regular exposure bracketing (i.e. multiple consecutive exposures), which would be very tricky to use on moving objects. The difference might be trivial to you (I agree: it's just a missing "RESET" signal), but it is far from insignificant in reality. I just made this picture to simulate the "motion" difference between classic exposure bracketing (a.k.a. "multiple exposures") and Red's "Magic Motion" as well as The Foundry's "MNMB" (More Normal Motion Blur). You may claim there is absolutely no difference between those 3 methods, that it's all marketing bull****, but I beg to differ. To me, the differences are obvious.
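      To complement that picture, here is a toy 1-D simulation of the same "gap", a minimal sketch where every number is invented for illustration: a point light panning across the frame, rendered once as two back-to-back exposures and once as two readings of a single exposure.

      ```python
      import numpy as np

      # Toy 1-D comparison of (a) classic bracketing (two consecutive exposures with a
      # reset in between) and (b) two readings of one exposure. All numbers are made up.
      width = 400                 # pixels in our single row
      v = 6000.0                  # image motion of a point light, px/s
      T_long = 1.0 / 48           # "normal" exposure
      T_short = T_long / 8        # 3 stops less
      reset_gap = 100e-6          # assumed dead time between two bracketed exposures

      def trail(t_start, t_end):
          """Accumulate the moving point's light onto the pixel row over [t_start, t_end]."""
          row = np.zeros(width)
          for t in np.linspace(t_start, t_end, 2000):
              x = int(v * t)
              if 0 <= x < width:
                  row[x] += 1.0
          return row

      def extent(row):
          idx = np.nonzero(row)[0]
          return int(idx.min()), int(idx.max())

      # (a) bracketing: the short exposure can only start after the long one is read and reset
      bracket_long = trail(0.0, T_long)
      bracket_short = trail(T_long + reset_gap, T_long + reset_gap + T_short)
      # (b) dual readout: both readings start at the same reset, no gap by construction
      dual_long = trail(0.0, T_long)
      dual_short = trail(0.0, T_short)

      print("bracketed long trail :", extent(bracket_long))
      print("bracketed short trail:", extent(bracket_short), "<- starts where the long trail ends")
      print("dual-read long trail :", extent(dual_long))
      print("dual-read short trail:", extent(dual_short), "<- sits inside the long trail")
      ```

      The bracketed short trail begins only where the long trail stops, so the two blurs never overlap; the dual-readout short trail is a subset of the long one, which is exactly why those two readings merge smoothly.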
  5. My post was directed to Georg, as you should have known by reading it, but do you even bother to read my words? I'll pass over the ad hominem arguments you seem to be fond of: they are totally irrelevant. What exactly makes my explanation "impossible", according to you? Do you have any argument other than your trivial "you're just plain wrong, no matter what you say"? Perhaps you should read "what I'm saying" a little more carefully, although it certainly isn't written as gracefully as your own prose (I'm not a native English speaker). I agree that, in the "analog film world", it would have been impossible to make two samplings of the same exposure: the shutter must close between two consecutive "readings", so to speak. But in the digital world (with CMOS technology), why would that be impossible? Unlike CCD sensors, a CMOS sensor allows us to "read" each pixel individually. Let's assume we want to make 2 readings of a standard exposure (1/48th of a second), and we want the first sampling (reading) to take place early, so as to capture 8 times (3 stops) less charge than what is accumulated during the entire cycle. What, according to you, is impossible in the following sequence: 1) RESET of a pixel at T0; start accumulating charge. 2) First reading of this very same pixel at T1 = 1/(48*8) = 1/384th of a second after T0. 3) Second reading of the pixel at T2 = 1/48th of a second after T0. 4) "Stop accumulating charge" by... a RESET. 5) Start the cycle all over again (after the required time, depending on shutter "angle"). There is a small sketch of this charge model just below. BTW, an "exposure", by definition, takes place between two consecutive resets. You got it perfectly right indeed ;) This is rather comforting... But I don't think that making two consecutive exposures would add much engineering complexity: it's only an extra "reset" signal to generate. No big deal. The real difference between the "two consecutive exposures" approach and Red's own HDR would be visible on fast moving objects, where you would get a trailing ghost image of the moving object. Even if it's just 1/8th of the exposure time, it is still enough to be quite visible on fast moving objects. That's all the difference an extra "reset" makes ;) So why would you even bother to make two consecutive exposures if it would only give inferior results?
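      The sketch: an idealised linear charge model with no noise and no clipping, just to spell the timings out.

      ```python
      import math

      # Idealised charge model of the sequence above (linear accumulation, no noise, no clipping).
      T_long = 1.0 / 48          # full exposure: 180-degree shutter at 24 fps
      T_short = 1.0 / 384        # first, early reading: 1/(48*8) s after the reset
      flux = 1.0                 # photo-current, arbitrary charge units per second

      # 1) RESET at T0 = 0, charge starts accumulating
      charge_at_T1 = flux * T_short   # 2) first (non-destructive) reading at T1
      charge_at_T2 = flux * T_long    # 3) second reading at T2
      # 4) RESET again: the exposure ends here
      # 5) wait out the rest of the frame interval, then repeat

      ratio = charge_at_T2 / charge_at_T1
      print(f"first reading captures 1/{ratio:.0f} of the full charge "
            f"(~{math.log2(ratio):.0f} stops less)")
      ```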
  6. Again: this is NOT what they are doing! Re-read what I've said before, or the paper on Provideocoalition, which explains it clearly. Allow me to repeat for the fourth time: they don't "take two exposures at different times", they sample (read) the same exposure twice. Combining those two samples is pretty straightforward, because they both come from the very same exposure! In other words, the shutter didn't (close) reset itself between those two samplings. And that is a very clever (simple) way to achieve an actual increase in DR! I'm wondering why no one did it before them... The only problem might have come from the combined motion blur, but I have to admit the naturally combined motion blur actually gives excellent results. Hence the name "Magic Motion", I guess. The "Magic Motion" truly makes it look like it was shot with a mechanical shutter, like the D20/D21. Why? I have no clue... But should you dislike this "Magic Motion" thing, you still have The Foundry's solution, which I'm pretty sure will give outstanding results (I use The Foundry's products on a regular basis).
  7. As far as "hard facts" are needed, they've shown a Stouffer-like chart showing there is indeed "18 stops of dynamic range" with HDR+6 enabled, exactly as the theory predicts: let's say your sensor's native DR is 13 stops (as they claim). If you merge a picture taken by this sensor with another picture taken with an "apparent shutter cycle" that is 64 times faster (2 to the power of 6), then you will indeed, in theory, be able to add 6 stops of DR to the original image's highlights. I don't see any reason to believe this theory couldn't hold up in practice, when the Arri Alexa's similar approach already gives outstanding results. Therefore, I don't need to be overly skeptical about this whole RED-HDR thing. Furthermore, what difference would it make if Jim had taken a light meter and given us some extra "hard facts"? Would you trust him? Wouldn't you prefer to get an independent evaluation? And how would that even be possible when the camera hasn't been released yet? I think it's best to "wait and see": sooner or later, we'll get "hard facts", verified by independent sources. In the meantime, I enjoy the idea that sooner or later, I might be able to shoot a car interior at noon without being forced to gel the windows. Or a Steadicam shot where the camera moves from inside to outside of the house without having to mess around with the iris ;)
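      The "+6" figure is nothing more than the base-2 log of the exposure ratio between the two readings; a one-line check:

      ```python
      import math

      exposure_ratio = 64                       # the short reading's apparent shutter is 64x faster
      extra_stops = math.log2(exposure_ratio)   # = 6.0
      print(f"a {exposure_ratio}x shorter apparent exposure buys log2({exposure_ratio}) "
            f"= {extra_stops:.0f} extra stops of highlight headroom")
      ```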
  8. True :) I'd say the easiest way to deal with that infamous (reduser.net) signal/noise issue is to mostly pay attention to what Jim and Red's team members (or a few other trustworthy people like David Mullen or Stephen Williams) have to say. The "find more posts by xxx" function really is a big time-saver, I think :) A few months of patience will be necessary, I'm afraid. Oh well, that's no big deal: I have enough work to keep me busy ;)
  9. Yes and no: I'd rather say that it's two different pictures coming from a single exposure, because the word exposure always refers to a single shutter cycle. If it really were two different exposures, the moving objects would never overlap, which would make it impossible to mix the two pictures. I agree! This is not a serious test! I'm pretty sure we will get more serious tests in the coming weeks/months. They keep repeating that it is far from finished yet. Graeme and The Foundry are writing tons of code as we speak. But it doesn't really matter in my opinion, for I KNOW the results will be impressive! Why? Simply because the method they use is one of the cleverest I've ever witnessed. The only possible issue they can run into is with the motion blur. And they seem to have seriously addressed that issue: the motion in the Barn video is far from ugly or awkward. And yet, this hasn't even been treated by The Foundry's MNMB algorithm. All the other potential issues (like possible wrong tone mapping, etc.) are going to be fixed sooner or later. I wouldn't say it is easy, but it can be done! The Arri Alexa is the best example I have in mind: their own "HDR" images, which, in a sense, are also made from two different readings of a single exposure, are beautiful! If you don't believe in RED's HDR potential, then you clearly must hate the Alexa's pictures, IMHO. But in my opinion, the Alexa offers the best alternative to film I've ever witnessed. Let's think "potential" here, and let's not be bothered by some quickly made, non-professional test/preview of a feature that is still at a very early stage of development. All in all, I believe this RED-made HDR thing really has big potential! And I'm so sure this thing is going to be huge that I invite the skeptics to come back to this discussion two years from now. Let's give them enough time to fix everything, for Red is far from being an example when it comes to respecting their own development deadlines. :)
  10. I don't know the English translation for the French "mauvaise foi" (roughly, "bad faith"), but, with all due respect, I'm afraid that's exactly what you're suffering from. First of all, if this is "hardly new", can you please give us a few references to illustrate your point? I have been working as an engineer in this industry for more than 20 years, but I never heard of anything remotely similar to what they are doing. Sure, there were a few white papers or theoretical studies about "multiple readings of CMOS sensors", but so far, no one had ever tried to integrate that concept into a digital video camera. Next, you seem to believe they are doing multiple exposures. Not correct! It's multiple READINGS of the same exposure. In fact, it's very comparable to what Arri does with the Alexa's Dual Gain sensor or what Fuji does with its "SR super CCD". Fuji, Arri and Red all end up with two pictures of the same exposure, and all of them have to mix those two pictures together to increase DR! The only difference is that RED's solution causes "motion blur" issues, while Arri's (or Fuji's) dual gain sensor is less efficient at artificially increasing DR, but doesn't have any "motion blur" issues to deal with. Next, you claim that the task of combining those two pictures will be very difficult and will require a software package whose "cost is anybody's guess". What do you think is so difficult about merging two readings of the same exposure? Do you need expensive software to be able to read Fuji's SR super CCD pictures? Do you need expensive software to take advantage of Arri's dual-gain inflated DR? Did you even notice the fact that RED (like Arri) does offer the option to do this operation inside the camera? There is nothing particularly complex about merging two readings of the same exposure! The trick for Red will be to work around the motion blur issues, if necessary! That's what The Foundry has been working on! There is no reason to put them down! The HDR trick they use is clever! And, at this point, there is simply no reason to believe RED's HDR will give inferior results to Arri's DG HDR. We'll soon see what it does "in the wild".
  11. Speaking of this "MNMB" thing, I've been thinking that chances are this feature might very well give consistently spectacular results. Why? Because it isn't pure interpolation. Unlike what The Foundry does with its high-end "Furnace" plugin, whose "Motion Blur" tool is indeed capable of "guesstimating" a moving object's motion blur at a higher (more open) shutter angle, in this case they DO have the correct-looking motion blur. No need to guesstimate that: the "long exposure reading" (LER) already has the correct motion blur. The only issue is that the LER's motion blur isn't quite color-correct. But it's a whole lot easier to "guesstimate" the correct-looking color of a motion blur than to guesstimate the motion blur itself... In my humble (engineer's) opinion, at least...
  12. That's exactly why they called The Foundry for help :) Basically, that's what The Foundry's "MNMB" does: it visually removes this "leading motion blur" by generating a similar-looking motion blur for both readings! Thus, when you merge the two readings with The Foundry's "MNMB", you get a regular-looking motion blur, exactly as if you hadn't used HDR...
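      Just to illustrate the general idea (this is NOT The Foundry's actual algorithm, and it assumes a known, uniform horizontal motion, which their tool certainly does not): give the nearly sharp short reading a blur similar to the long reading's before merging, so the combined frame has one consistent motion blur.

      ```python
      import numpy as np

      def horizontal_motion_blur(img, length_px):
          """Blur each row with a simple box kernel of the given length (a crude motion blur)."""
          kernel = np.ones(length_px) / length_px
          return np.apply_along_axis(lambda row: np.convolve(row, kernel, mode="same"), 1, img)

      ratio = 8.0                          # long exposure is 8x the short one (3 stops)
      blur_long_px = 32                    # assumed blur trail length in the long reading

      rng = np.random.default_rng(0)
      short_reading = rng.random((64, 64))                                  # stand-in frame
      long_reading = horizontal_motion_blur(short_reading, blur_long_px) * ratio

      # Give the short reading a matching blur before merging
      short_matched = horizontal_motion_blur(short_reading, blur_long_px)

      # Naive merge: use the (rescaled) short reading wherever the long reading clips
      clip_level = 4.0
      merged = np.where(long_reading >= clip_level, short_matched * ratio, long_reading)
      print("merged frame:", merged.shape, "peak value:", round(float(merged.max()), 2))
      ```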
  13. I really don't think so ;) The pan is made from right to left, the lights are thus moving from left to right, and the "short exposure" is indeed on the left side, which proves my point: the "short exposure's reading" is made first, when a relatively low number of photons has been captured by the sensor. Then comes the "long exposure's reading", with 8 (or more) times more photons, hence more motion blur. The trick is to do those two readings without resetting the sensor between the two. Otherwise, you won't be able to smoothly merge the two readings together: it is simply impossible to merge two consecutive exposures, unless there is nothing in motion in the frame... Adam Wilt's idea of making the "short exposure's reading" after the long one doesn't make much sense at all from an engineering perspective: in that case, unless you reset the sensor between the two readings, your "short exposure's reading" will have blown-out highlights if it comes after the long one, which is exactly what we wanted to prevent in the first place...
  14. That's not exactly how it works. Basically, Jim was right when he said they didn't combine multiple exposures. In fact, what they do is multiple readings of the same exposure. The very first reading happens very shortly (1/384th of a second if you're shooting 24 fps) after the sensor's reset. This reading provides the highlight details. Then comes the "regular" reading of the sensor's data. This is a very clever method, I believe, because you don't actually have to try to merge two different exposures, which would only give good results if nothing was moving in the frame. The only problem they have is related to the motion blur, which will obviously be very different between the two readings: the "highlight reading" will have very little motion blur while the "regular reading" will have... regular motion blur. Merging those 2 readings can therefore, in theory, produce some very "funky looking" motion blur (there's a tiny merge sketch just below). That's where you have two possibilities (in post): 1) "Magic Motion". That's what they call the "algorithm" which, in fact, doesn't do anything at all: it simply merges those two different motion blurs together without changing them. Results are "stunning" according to the happy few who have seen it. They claim the resulting motion looks more "organic", less "stuttery", much closer to what you'd get with a mechanical shutter, thereby implying what we all know: electronic shutters are inadequate at rendering motion correctly. 2) "MNMB" ("More Normal Motion Blur"). This is an algorithm developed by The Foundry (Nuke, Furnace) where they interpolate what the motion blur on the "highlight reading" should have been if it had been exposed for 1/48th of a second instead of 1/384th of a second. Chances are this might not work every time, though, because it works as an interpolation...
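      The merge itself, as a minimal per-pixel sketch under simple assumptions (linear values, an 8x exposure ratio, a hard switch to the short reading wherever the long one clips; no Magic Motion or MNMB style blur handling, and obviously not RED's actual code):

      ```python
      import numpy as np

      EXPOSURE_RATIO = 8.0   # the long reading sees 8x the light of the short one (1/48 s vs 1/384 s)
      CLIP = 0.98            # treat long-reading values above this as blown out

      def merge_hdr(short_reading, long_reading):
          """Combine the early "highlight" reading with the full-length reading."""
          short_scaled = short_reading * EXPOSURE_RATIO      # bring it onto the long reading's scale
          return np.where(long_reading >= CLIP, short_scaled, long_reading)

      # Tiny usage example with synthetic, linear data
      rng = np.random.default_rng(1)
      scene = rng.random((4, 4)) * 4.0                              # radiance, some values > 1
      long_frame = np.clip(scene, 0.0, 1.0)                         # full exposure: bright areas clip
      short_frame = np.clip(scene / EXPOSURE_RATIO, 0.0, 1.0)       # 3 stops down: highlights survive
      print(merge_hdr(short_frame, long_frame))
      ```

      Where the long frame has clipped at 1.0, the merged frame recovers the scene value from the scaled short reading; everywhere else it keeps the cleaner long reading.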
  15. What is highly "suspect", IMHO, is that most camera manufacturers keep using completely outdated video compression schemes like the ones based on DCT (Discrete Cosine Transform, as in JPEG). "Wavelet transform" (DWT) based compression, as a theory, has been with us for many, many years. Red is one of the very few who actually managed to use it (it is quite CPU intensive), even though everyone knows perfectly well that DWT is the way forward! DCT is completely outdated, but I'm sure you know that very well! The question is: why the heck do they all keep using DCT now that it has been proven (by Red) that there are CPUs powerful enough for real-time DWT encoding of 30 frames/s in 4K? HDCAM-SR, for instance, although "less compressed" stricto sensu than Redcode Raw, looks absolutely awful as soon as you push things a little too hard in the blacks: 8x8 pixel blocks everywhere. Yuk! Try to do the same thing with a DWT-compressed image like those from the RED-ONE: you'll get noisy blacks of course if you're pushing the noise floor up, but you sure won't get ANY visible compression artifacts whatsoever! If I, as an engineer, had to choose between uncompressed and ANY type of DCT-based compression (like, for instance, HDCAM, HDCAM SR, DVCPRO-HD, HDV, DV, and so on), I'd choose to work with uncompressed! No questions asked! But if I had to choose between uncompressed and a wavelet-transform-based compression scheme, I'd go with the wavelet without any hesitation! The very small difference between uncompressed and DWT-compressed is hardly visible at all, and certainly doesn't seem like a big price to pay for the huge reduction in the amount of data! Most of the commercial success of the Red-One comes from the fact that they were smart enough to find a technical solution for using a "wavelet transform" based compression scheme. You can hardly blame Red for having chosen a state-of-the-art compression algorithm... You certainly should blame the ones who didn't: Sony, Panasonic, etc.
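      For what it's worth, here is a tiny sketch of the wavelet side of the argument, using the PyWavelets library purely for illustration (REDCODE's actual scheme is proprietary and certainly far more sophisticated): throw away most of the coefficients of a multi-scale DWT and fine detail degrades gracefully, with no 8x8 block edges.

      ```python
      import numpy as np
      import pywt  # PyWavelets, used here only to illustrate the principle

      # A smooth synthetic "frame" standing in for real image data
      x = np.linspace(0.0, 1.0, 256)
      image = np.outer(np.sin(6 * np.pi * x), np.cos(4 * np.pi * x)) + x[None, :]

      coeffs = pywt.wavedec2(image, "db2", level=4)        # multi-level 2-D DWT
      arr, slices = pywt.coeffs_to_array(coeffs)           # flatten coefficients into one array

      keep = 0.05                                          # keep only the largest 5% of coefficients
      threshold = np.quantile(np.abs(arr), 1.0 - keep)
      arr_kept = np.where(np.abs(arr) >= threshold, arr, 0.0)

      coeffs_back = pywt.array_to_coeffs(arr_kept, slices, output_format="wavedec2")
      reconstructed = pywt.waverec2(coeffs_back, "db2")[: image.shape[0], : image.shape[1]]

      rms = np.sqrt(np.mean((image - reconstructed) ** 2))
      print(f"kept {np.count_nonzero(arr_kept)} of {arr.size} coefficients, RMS error {rms:.4f}")
      ```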