Everything posted by Emmanuel Decarpentrie

  1. Very good point indeed! I took a single line because, as far as I'm aware, that's how (most) rolling shutters work. But it could very well be 4 or even 5 lines, although I have some doubts. It would be very easy to verify: a simple whip pan would give us enough deformation to check whether we get some sort of "staircase" effect on the verticals. If so, we just have to count how many pixels per stair, and that's it. It would be even easier if we decreased the motion blur by increasing the shutter "speed". The reason I doubt this is how they reduce the read-reset time is that I think I would have noticed such a staircase effect by now: I'm very fond of whip pan effects, and I've shot more than 100 hours of footage (including many RS tests) on the Red-One ;)
  2. BTW, this number is pretty easy to evaluate if we have the total time it takes for the sensor to "read-reset" itself: we just have to divide the full "read-reset" cycle time by the number of lines. Am I wrong? Since Jim said the "read-reset" cycle time of EPIC's MX sensor is around 5ms and its number of lines is 2560 (in 2:1 mode), that puts the number we are looking for at roughly 2 microseconds per line (see the small sketch after the last post below). Still enough to get a little gap on moving objects, but would that gap ever be noticed, or be hard to fill? It's hard to say...
  3. We don't need to wait for the whole chip to be read or reset: due to the rolling shutter, all that matters is a single pixel (or, more precisely, a line of pixels). In theory, the "reset" of a pixel could take place right after it has been "read", which would indeed put the whole "gap" in the dozens of microseconds, perhaps even less (I don't have the exact number). However, I still have the intuition that it would be a mistake to try to combine two pictures whose exposures didn't start at the very same time. I could be wrong, though. It's definitely worth a try. I hope someone from Red or Arri will test this, at least to try to obtain an effect similar to a "second-curtain flash" instead of the "first-curtain flash" effect of Magic Motion.
  4. That is correct! However, I am convinced that moving objects would be separated by a "gap" (I can't find a better word for it) if one decided to use this "two consecutive exposures" approach, and filling that "gap" would be very tricky, to say the least. Results would be inferior, I have no doubt about that. Well, that is your right. I can't find a better explanation, though. Marketing considerations aside, we must differentiate RED's method of achieving HDR (which is quite unique in this industry) from regular exposure bracketing (i.e. multiple consecutive exposures), which would be very tricky to use on moving objects. The difference might be trivial to you (I agree: it's just a missing "RESET" signal), but it is far from insignificant in reality. I just made this picture to simulate the "motion" difference between classic exposure bracketing (a.k.a. "multiple exposures") and Red's "Magic Motion" as well as The Foundry's "MNMB" (More Normal Motion Blur); there is also a small 1-D sketch of the same idea after the last post below. You may claim there is absolutely no difference between those 3 methods, that it's all marketing bull****, but I beg to differ. To me, the differences are obvious.
  5. My post was directed to Georg, as you should have known by reading it, but do you even bother to read my words? I'll pass over the ad hominem arguments you seem to be fond of: they are totally irrelevant. What exactly makes my explanation "impossible", according to you? Do you have any argument other than your trivial "you're just plain wrong, no matter what you say"? Perhaps you should read "what I'm saying" a little more carefully, although it certainly isn't written as gracefully as your own prose (I'm not a native English speaker)?
I agree that, in the "analog film world", it would have been impossible to make two samplings of the same exposure: the shutter must close between two consecutive "readings", so to speak. But in the digital world (with CMOS technology), why would that be impossible? Unlike CCD sensors, a CMOS sensor allows us to "read" each pixel individually. Let's assume we want to make 2 readings of a standard exposure (1/48th of a second), and we want the first sampling (reading) to take place early enough to capture 8 times (3 stops) less charge than what is accumulated during the entire cycle. What, according to you, is impossible in the following sequence (also sketched in code after the last post below)? 1) RESET of a pixel at T0; start accumulating charge. 2) First reading of this very same pixel at T1 = 1/(48*8) = 1/384th of a second after T0. 3) Second reading of the pixel at T2 = 1/48th of a second after T0. 4) "Stop accumulating charge" by... a RESET. 5) Start the cycle all over again (after the required time, depending on shutter "angle"). BTW, an "exposure", by definition, takes place between two consecutive resets.
You got it perfectly right indeed ;) This is rather comforting... But I don't think that making two consecutive exposures would make such a big difference in engineering complexity. It's only an extra "reset" signal to generate. No big deal. The real difference between the "two consecutive exposures" approach and Red's own HDR would be visible on fast moving objects, where you would get a trailing ghost image of the moving object. Even if it's just 1/8th of the exposure time, it is still enough to be quite visible on fast moving objects. That's all the difference an extra "reset" makes ;) So why would you even bother to make two consecutive exposures if it would only give inferior results?
  6. Again: this is NOT what they are doing! Re-read what I've said before, or the paper on Provideocoalition, which explains it clearly. Allow me to repeat for the fourth time: they don't "take two exposures at different times", they sample (read) the same exposure twice. Combining those two samples is pretty straightforward, because they both come from the very same exposure! In other words, the shutter didn't close (reset itself) between those two samplings. And that is a very clever (simple) way to achieve an actual increase in DR! I'm wondering why no one did it before them... The only problem might have come from the combined motion blur, but I have to admit the naturally combined motion blur actually gives excellent results. Hence the name "Magic Motion", I guess. "Magic Motion" truly makes it look as if it was shot with a mechanical shutter, like the D20-D21. Why? I have no clue... But should you dislike this "Magic Motion" thing, you still have The Foundry's solution, which I'm pretty sure will give outstanding results (I use The Foundry's products on a regular basis).
  7. As far as "hard facts" go, they've shown a Stouffer-like chart showing there is indeed "18 stops of dynamic range" with HDR+6 enabled, exactly as the theory would predict: say your sensor's native DR is 13 stops (as they claim). If you merge a picture taken by this sensor with another picture taken with an "apparent shutter cycle" that is 64 times faster (2 to the power of 6), then you will indeed, in theory, be able to add 6 stops of DR to the original image's highlights (the arithmetic is sketched after the last post below). I don't see any reason to believe this theory couldn't be proven correct in practice when Arri Alexa's similar approach gives outstanding results, so I don't need to be overly skeptical about this whole RED-HDR thing. Furthermore, what difference would it make if Jim had taken a light meter and given us some extra "hard facts"? Would you trust him? Wouldn't you prefer an independent evaluation? And how would that currently be possible, since the camera hasn't been released yet? I think it's best to wait and see: sooner or later, we'll get "hard facts", verified by independent sources. In the meantime, I enjoy the idea that sooner or later I might be able to shoot a car interior at noon without being forced to gel the windows, or a Steadicam shot where the camera moves from inside to outside of the house without having to mess around with the iris ;)
  8. True :) I'd say the easiest way to deal with that infamous (reduser.net) signal/noise issue is to mostly pay attention to what Jim and Red's team members (or a few other trustworthy people like David Mullen or Stephen Williams) have to say. The "find more posts by xxx" function really is a big time-saver, I think :) A few months of patience will be necessary, I'm afraid. Oh well, that's no big deal: I have enough work to keep me busy ;)
  9. Yes and no: I'd rather say that it's two different pictures coming from a single exposure, because the word "exposure" always refers to a single shutter cycle. If it really were two different exposures, the moving objects would never overlap, which would make it impossible to mix the two pictures. I agree! This is not a serious test! I'm pretty sure we will get more serious tests in the coming weeks/months. They keep repeating that it is far from finished yet: Graeme and The Foundry are writing tons of code as we speak. But it doesn't really matter in my opinion, for I KNOW the results will be impressive. Why? Simply because the method they use is one of the cleverest I've ever witnessed. The only possible issue they can run into is with the motion blur, and they seem to have seriously addressed that issue: the motion in the Barn video is far from being ugly or awkward, and yet it hasn't even been treated by The Foundry's MNMB algorithm. All the other potential issues (like possible wrong tone mapping, etc.) are going to be fixed sooner or later. I wouldn't say it is easy, but it can be done! The Arri Alexa is the best example I have in mind: their own "HDR" images, which, in a sense, are also made from two different readings of a single exposure, are beautiful! If you don't believe in RED's HDR potential, then you clearly must hate the Alexa's pictures, IMHO. But in my opinion, the Alexa offers the best alternative to film I've ever witnessed. Let's think "potential" here, and let's not be bothered by some quickly made, non-professional test/preview of a feature that is still at a very early stage of development. All in all, I believe this RED-made HDR thing really has big potential! And I'm so sure this thing is going to be huge that I invite the skeptics to come back to this discussion two years from now. Let's give them enough time to fix everything, for Red is far from being an example when it comes to respecting their own development deadlines. :)
  10. I don't know the English translation for the French "mauvaise foi" ("bad faith", roughly), but, with all due respect, I'm afraid that's exactly what you're suffering from. First of all, if this is "hardly new", can you please give us a few references to illustrate your point? I have been working as an engineer in this industry for more than 20 years, but I never heard anything remotely similar to what they are doing. Sure, there were a few white papers or theoretical studies about "multiple readings of CMOS sensors", but so far, no one has tried to integrate that concept into a digital video camera.
Next, you seem to believe they are doing multiple exposures. Not correct! It's multiple READINGS of the same exposure. In fact, it's very comparable to what Arri does with the Alexa's Dual Gain sensor or what Fuji does with its "SR super CCD". Fuji, Arri and Red all end up with two pictures of the same exposure, and all of them have to mix those two pictures together to increase DR! The only difference is that RED's solution causes "motion blur" issues, while Arri's (or Fuji's) dual gain sensor is less efficient at artificially increasing DR but doesn't have any "motion blur" issues to deal with.
Next, you claim that the task of combining those two pictures will be very difficult and will require a software package whose "cost is anybody's guess". What do you think is so difficult about merging two readings of the same exposure (there is a naive sketch of such a merge after the last post below)? Do you need expensive software to be able to read Fuji's SR super CCD pictures? Do you need expensive software to take advantage of Arri's dual gain inflated DR? Did you even notice the fact that RED (like Arri) does offer the option to do this operation inside the camera? There is nothing particularly complex about merging two readings of the same exposure! The trick for Red will be to work around the motion blur issues, if necessary! That's what The Foundry has been working on! There is no reason to put them down! The HDR trick they use is clever! And, at this point, there is simply no reason to believe RED's HDR will give inferior results to Arri's DG HDR. We'll soon see what it does "in the wild".
  11. Speaking of this "MNMB" thing, I've been thinking that chances are this feature might very well consistently give spectacular results. Why? Because it isn't pure interpolation. Unlike what The Foundry does with its high-end "Furnace" plugin, whose "Motion Blur" tool is indeed capable of "guesstimating" a moving object's motion blur at a higher (more open) shutter angle, in this case they DO have the correct-looking motion blur. No need to guesstimate that: the "long exposure reading" (LER)'s motion blur is correct. The only issue is that the LER's motion blur isn't quite color-correct. But it's a whole lot easier to "guesstimate" the correct-looking color of a motion blur than to guesstimate the motion blur itself... In my humble (engineer's) opinion, at least...
  12. That's exactly why they called The Foundry for help :) Basically, that's what The Foundry's "MNMB" does: it visually removes this "leading motion blur" by generating a similar-looking motion blur for both readings! Thus, when you merge the two readings with The Foundry's "MNMB", you get a regular-looking motion blur, exactly as if you hadn't used HDR...
  13. I really don't think so ;) The panning is made from right to left, the lights are thus moving from left to right, and the "short exposure" is indeed on the left side, which proves my point: the "short exposure reading" is made first, when a relatively low number of photons have been captured by the sensor. Then comes the "long exposure reading", with 8 (or more) times more photons, hence more motion blur. The trick is to do those two readings without resetting the sensor between the two; otherwise, you won't be able to smoothly merge the two readings together. It is simply impossible to merge two consecutive exposures together, unless nothing in the frame is in motion... Adam Wilt's idea of making the "short exposure reading" after the long one doesn't make much sense at all from an engineering perspective: in that case, unless you reset the sensor between the two readings, your "short exposure reading" will have blown-out highlights if it comes after the long one, which is exactly what we wanted to prevent in the first place...
  14. That's not exactly how it works. Basically, Jim was right when he said they didn't combine multiple exposures. In fact, what they do is multiple readings of the same exposure. The very first reading happens very shortly (1/384th of a second if you're shooting 24fps) after the sensor's reset; this reading provides the highlight details. Then comes the "regular" reading of the sensor's data. This is a very clever method, I believe, because you don't actually have to try to merge two different exposures, which would only give good results if nothing was moving in the frame. The only problem they have is related to the motion blur, which will obviously be very different between the two readings: the "highlight reading" will have very little motion blur while the "regular reading" will have... regular motion blur. Merging those 2 readings can therefore, in theory, produce some very "funky looking" motion blur. That's where you have two possibilities (in post): 1) "Magic Motion". That's what they call the "algorithm" which, in fact, doesn't do anything at all: it simply merges those two different motion blurs together without changing them. Results are "stunning" according to the happy few who have seen it. They claim the resulting motion looks more "organic", less "stuttery", much closer to what you'd get with a mechanical shutter, thereby implying what we all know: electronic shutters are inadequate at rendering motion correctly. 2) "MNMB" ("More Normal Motion Blur"). This is an algorithm developed by The Foundry (Nuke, Furnace) which interpolates what the motion blur on the "highlight reading" would have been if it had been exposed for 1/48th of a second instead of 1/384th of a second. Chances are this might not work all the time though, because it works as an interpolation...
  15. What is highly "suspect", IMHO, is that most camera manufacturers keep using completely outdated video compression schemes like the ones based on DCT (Discrete Cosine Transform - JPEG). Wavelet-transform (DWT) based compression, as a theory, has been with us for many, many years. Red is one of the very few who actually managed to use it (it is quite CPU intensive), even though everyone knows perfectly well that DWT is the way forward! DCT is completely outdated, but I'm sure you know that very well. The question is: why the heck do they all keep using DCT now that it has been proven (by Red) that there are CPUs powerful enough for real-time DWT encoding of 30 frames/s in 4K? (There is a toy sketch of the wavelet idea after the last post below.)
HDCAM-SR, for instance, although "less compressed" stricto sensu than Redcode Raw, looks absolutely awful as soon as you push things a little too hard in the blacks: 8x8-pixel blocks everywhere. Yuck! Try to do the same thing with a DWT-compressed image like those from the RED-ONE: you'll get noisy blacks, of course, if you're pushing the noise floor up, but you sure won't get ANY visible compression artifacts whatsoever!
If I, as an engineer, had to choose between uncompressed and ANY type of DCT-based compression (like, for instance, HDCAM, HDCAM SR, DVCPRO-HD, HDV, DV, and so on), I'd choose to work with uncompressed, no questions asked! But if I had to choose between uncompressed and wavelet-transform based compression schemes, I'd go with the wavelet without any hesitation. The very small difference between uncompressed and DWT-compressed is hardly visible at all, and certainly doesn't seem like a big price to pay for the huge reduction in the amount of data! Most of the commercial success of the Red-One comes from the fact that they were smart enough to find a technical solution for using a wavelet-transform based compression scheme. You can hardly blame Red for having chosen a state-of-the-art compression algorithm... You certainly should blame the ones who didn't: Sony, Panasonic, etc.
  16. Hello Stephen, I believe everyone was skeptical up to a certain point, and quite rightly so! The word "scam" probably doesn't reflect exactly what was said at that time, but at the very least, most people were saying: "I'll believe it when I see it", exactly what Keith said when he started this discussion. The difference between "This is a scam!" and "I'll believe it when I see it! But in the meantime, I'd be a fool if I was ready to give them a thousand dollars" is very thin, in fact... Both are expressing some sort of high level of disbelief or distrust.
You are perfectly right in the sense that the development of this camera wasn't exactly as "easy" as Red (seemed to have) anticipated. I think Red (Jim) made the typical mistake of underestimating the challenge they were facing. To be honest, who doesn't make that kind of mistake at one point or another? When you started your career, did you truly expect your first (short or feature) film to be as challenging as it truly was? I certainly didn't! I hope Epic will work reliably from day 1 as well.
But regarding the RedRay, I was skeptical up to the point where they made their demonstration in front of hundreds of professionals, a week ago. All these people seem to agree that RedRay is impressive. So, even though I personally worked on compression algorithms (and am thus very surprised by Red's claims regarding RedRay), who am I to challenge them? It would be ridiculous, at this point, for me to claim that all the people who were lucky enough to see that demonstration were wrong in their assessment simply because they were "blinded by their so-called fanboyism". :) I really don't consider myself to be a Red fanboy in the sense that I would be blinded in my judgment! All I want is to be honest with myself and everyone else: I was skeptical, but I have been impressed by the Red-One, even though it wasn't (and still isn't) perfect. I now stand ready to be positively surprised by their future products (RedRay) as well... If that kind of honesty makes me a fanboy, according to Keith, then let me say I'd rather be called a "fanboy" than a liar or a manipulator.
  17. Well... it's rather simple, in fact: when you said that, I believe you must have been joking! Or you've never read this forum before, now have you?! I hardly remember ANYONE over here who honestly and openly believed (at least in the beginning) that this wasn't some sort of scam! You seem to focus solely on the technological feasibility. Of course, theoretically, everything is feasible, but at what price? And by whom? So far, no one has come even close to offering something truly comparable to the Red-One. Forget about the D20, the Genesis, the Dalsa, the Phantom. These are all super-expensive devices whose compression technology is nowhere near as efficient as the one these "industry newbies" (Red) implemented in their first product. I can't believe you honestly never doubted that these guys, who came out of nowhere, with no industry background whatsoever, could make something like the Red-One (along with its compression) and sell it at such a low price point! Because, no matter what you think or what you claim ("misleading price", "lots of defects", etc.), the Red-One works great, makes great pictures and is very "affordable"... compared to its competition, at least. I honestly maintain that I was wrong in my assumption that they couldn't do it! I'm not going to make that same mistake again. These guys (i.e. the Red team) are true "industry changers", whether you (and I) like it or not.
  18. "Argumentum ad hominem"... Please... The fact of the matter is that I challenge you to find any other example of a "4K" camera which could compress and record actual 4K footage onto a compact-flash card (with data rates of around 30MB/s thus indeed requiring specialized compact flash cards) or an "affordable" hard-drive, while keeping excellent visual quality. Those kind of compressions were (and still are) unheard of in the acquisition world... Unless I am totally mistaken, but if so, please give me any other example... The Arri D20 you mention is shooting HD, and recording it as HDCAM-SR, currently the most expensive and least compressed tape recording format. It is neither capable of recording 4K, nor is it capable to record 10 minutes of footage to something as affordable as a 300$ compact flash... So, this is certainly NOT a good example to prove that I was wrong to be a Red-skeptic two years ago. What else? Genesis? Nope! Dalsa? You've got to be kidding me... Two years ago, this (huge)... "thing" had to be hooked to a "small refrigerator" sized stack of hard-drives to record footage... REDCODE-RAW: that's what really made the Red system "affordable". What's the point of building an "affordable" US$ 17.5K "4K" camera if you have to hook it up to a >US$ 100.000 (tape) recorder? Could you really call such a camera "affordable"? You can claim you never doubted this could be done, both from a technological and financial point-of-view, but I find it very hard to believe. Especially when you realize this came from an unknown company. Sounds like you are a "monday morning quarterback" to me... Let's be honest, please! I, for one, really was a skeptic for a long time... And I know I wasn't the only one over here, on this forum! In fact, I hardly remember anyone, except a few hardcore "fanboys" which I don't think you are part of, who claimed, 2 years ago, he didn't have any single doubt this Red-One camera could be done at all, let alone with half the specs they were claiming at that time. The Red-One was a big surprise for me, and challenged my own perception of what could be achieved real-time in terms of spatial compression, especially on a 4K camera. I thus wouldn't bet against them this time over... but if you do, I'll be glad to bet against you :)
  19. RedRay is based on Redcode (obviously enhanced with some sort of very strong/smart temporal compression), but you can apparently (the contrary would have been quite a disappointment) compress whatever you like in the RedRay format. Graeme indirectly answered that question a week ago by saying the "showreel" file they compressed to RedRay was 10-bit RGB footage: http://reduser.net/forum/showthread.php?p=409083#post409083 I know this (RedRay) thing sounds very hard to believe, but so was the idea of an affordable "4K" digital movie camera about 2 years ago...
  20. With all due respect Max, did you even read either of those two aforementioned (and very interesting) links? Especially the second one? While I do agree with you that Red certainly hasn't shown any interest in those technologies so far, I wouldn't jump to conclusions either (like "Bigger sensor = less DOF")... And the prospect of being able to forget about focus during the shoot and set/change focus in post-production is certainly one I'd be interested in...
  21. As a RED-ONE owner (call me a fanboy if you want), why would I be upset by this fantastic news? All my accessories will work fine with the inexpensive and appealing SCARLET, and should I want to upgrade to an EPIC, I can trade in my RED-ONE body for an EPIC at a trade-in value higher than what it cost me (thanks to the RED-ONE early adopters' $2,500 rebate)! I'm really happy with that kind of customer care! I've never had anything comparable in my lifetime! Ever! I wonder what the RED skeptics will find to complain about this time around...
  22. Tss! Tss! Just remember those famous words: Some things in life are bad They can really make you mad Other things just make you swear and curse. When you're chewing on life's gristle Don't grumble, give a whistle And this'll help things turn out for the best... And...always look on the bright side of life... Always look on the light side of life... (Monty Python) ;)
  23. Your concerns are quite valid, but I think life is a matter of choices, and with many choices comes compromise. You may desire the maximum quality (4K + overcrank), but right now this type of quality is still expensive. This is very likely going to change a few years from now, but right now there are only a few solutions capable of recording 4K RAW. So, either you have the budget and can afford this sort of quality, or you don't... But there are multiple workarounds. 1) Shoot in Redcode Raw, 2K scaled, except for the few times you truly need overcranking, in which case you'd compromise with 2K windowed, which might actually be pretty good... 2) Minimize the number of days you need to rent a recorder. After all, even if you need such a recorder for an entire week on a feature, you won't pay 7x1000 dollars in total, more like $5,000 for the whole week. And what is five thousand dollars compared to the typical cost of film (if you include everything) for a feature, not even mentioning the total cost of the movie? 3) Should you need more than 60 frames/s, I'm afraid you're going to have to shoot the sequences that really must be overcranked on good old 35mm film anyway. But I'm sure Red footage will intercut almost perfectly with film, as long as entire scenes are shot on the same medium... Bottom line: why should we worry? Why should we try to find reasons to complain about this camera? Don't you think what Red is already offering us is almost unbelievable?
  24. I agree that shooting on film generally tends to show the filmmakers are "more resourceful", and that can generally be considered a good point. However, I think this is mostly the case with DV/HDV formats, because they are obviously cheaper than film. Things are different, in my opinion, when one chooses to shoot on HDCAM-SR: why should he be ashamed of shooting digital and not film? Is he not "resourceful" enough to afford something that isn't significantly cheaper than film (especially when you include all the costs involved with such a high-end medium)?
On the other hand, don't you think what matters most, in the end, is the content and NOT the format it was shot with? For instance, I just saw "Jesus Camp" yesterday evening and, in my opinion, it was shot on a Panasonic DVX100 (IMDB only says it was shot on DV). Yet I didn't see anyone in the audience getting upset because it was shot on a "cheap DV format"... I think the bottom line is: if your story-cast-crew is good enough, chances are producers will finance your project and you might afford to shoot it on film because, no question there, film IS the GOLD STANDARD of picture production right now and for the foreseeable future (yet Red might get you close enough to that "gold standard" for many indies to choose to spend money on something other than film). But, on the other hand, shooting digital allows you to do certain things (and "looks") that are simply NOT possible with film: for instance, I don't think "Jesus Camp" would have been possible if it weren't for the small (non-obtrusive) size of its DV cameras. So if you truly have the choice and yet you deliberately choose to shoot digital, why should you be ashamed of your choice? If you don't have the choice, chances are indeed that your project isn't going to be interesting, but that is a "general rule" and, like all "rules", it has its exceptions... so again, why should you be ashamed of that? On the other hand, I've seen many "indie movies", shot... on film, that were boring as hell and didn't recover half their cost because no one went to see them, not even most of the critics... So tell me, why exactly should their directors be "proud of having shot on film"? In my opinion, the medium you're shooting on is no reason to be ashamed or proud. If I were a director, there is one thing that would make me proud of my movie: if (at least some) people were moved by it. And, in my opinion, that doesn't relate whatsoever to the medium it was shot with.
Good one :) But that would be like vinyl records trying to emulate the digital CD sound. What would be the point of film trying "to achieve the video look"? Again, film is the "gold standard", no question there. The thing is: people are used to watching film. They are used to its look, to its qualities and shortcomings (like grain, or even... shallow DOF or 24p). They even fell in love with all those aspects; even, and that is sometimes hard to understand for an engineer like me, they fell in love with... the shortcomings of film. When video (and digital video) appeared, people complained they couldn't find all the film properties in the video format. So engineers worked hard to improve video, to make it look more and more like film. Eventually, we might even get to a point where most professionals will say: digital video is now "good enough for me", because it is more convenient, cheaper, and it provides better colors, more resolution, more dynamic range and less noise...
We aren't at that point yet, and that's a fact! But I can't believe what happened in the "music" and "audio" industries is never going to happen in the "movie" and "pictures" industry. Does that mean film is going to disappear? Nope: vinyl is still here with us and always will be, in my opinion, because there will always be a (niche) market for it!
  25. Now I understand what you actually meant when we were discussing "how to light for digital" :) And guess what? I fully agree with you! I actually was talking about something I believe one has to do precisely if one wants to find "ways of making digital match the quality of film better", but there are times when I, too, am trying to "explore the digital image for the ways in which it is uniquely different from film". And, BTW, that's precisely what I'm doing right now for a (pretty low-budget) TV show, shot with Panasonic HVX200s, that is going to be aired this summer on the Belgian "La Une" TV channel... Bottom line: no one should be ashamed of shooting digital (or proud of shooting on film). Those are just tools we need to take advantage of and feel comfortable with, because we're going to have to live with BOTH of them for quite a while. :)
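A minimal sketch of the per-line arithmetic from post 2, using the figures quoted there (a ~5 ms full read-reset cycle over 2560 lines); the numbers come from the post and are not independently verified:

```python
# Per-line read/reset time = full cycle time / number of lines (figures from post 2).
full_read_reset_s = 0.005   # ~5 ms for the whole sensor, as quoted in the post
lines = 2560                # vertical resolution in 2:1 mode, as quoted in the post

per_line_s = full_read_reset_s / lines
print(f"{per_line_s * 1e6:.2f} microseconds per line")   # -> about 1.95 us, i.e. the "2 microsecond" figure
```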
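The "gap" argument from post 4 can be illustrated with a toy 1-D model: a point object moves at constant speed, and each reading is represented by the streak of positions it sweeps while charge is accumulating. The speed and times below are arbitrary illustrative values, not measurements:

```python
# Toy 1-D illustration of post 4: consecutive exposures vs two readings of one exposure.
speed = 1000.0          # arbitrary units per second
long_exp = 1 / 48       # normal exposure time
short_exp = 1 / 384     # 3-stop shorter exposure/reading

def streak(t_start, t_end, v=speed):
    """Positions swept by the moving object between t_start and t_end (its blur streak)."""
    return (v * t_start, v * t_end)

# Two readings of the SAME exposure (both start at t = 0): the short streak is
# nested inside the long one, so the two pictures line up.
same_exp_short = streak(0.0, short_exp)
same_exp_long  = streak(0.0, long_exp)

# Two CONSECUTIVE exposures: the second one can only start after the first ends
# (read/reset time ignored here), so its streak sits apart from the first.
bracket_long  = streak(0.0, long_exp)
bracket_short = streak(long_exp, long_exp + short_exp)

print("same exposure :", same_exp_short, "nested in", same_exp_long)
print("bracketed     :", bracket_short, "offset from", bracket_long)
```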
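The read-twice sequence listed in post 5 can be written out as a few lines of pseudo-timing. The constant photon rate is an arbitrary assumption made only to show the 8:1 charge ratio between the two samples:

```python
# Sketch of the reset/read/read/reset sequence from post 5, assuming constant illumination.
shutter = 1 / 48          # 180-degree shutter at 24 fps: exposure lasts 1/48 s
ratio = 8                 # early reading at 1/8 of the exposure -> 3 stops less charge
photon_rate = 1000.0      # arbitrary charge per second, purely illustrative

t0 = 0.0                  # 1) RESET: charge starts accumulating
t1 = shutter / ratio      # 2) first (non-destructive) reading, 1/384 s after t0
t2 = shutter              # 3) second reading at the end of the exposure
                          # 4) RESET ends the exposure; 5) the cycle then repeats

read_1 = photon_rate * (t1 - t0)   # short sample
read_2 = photon_rate * (t2 - t0)   # full sample
print(read_2 / read_1)             # -> 8.0, the 3-stop offset between the two readings
```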
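The "HDR+6" arithmetic in post 7 is just a base-2 logarithm of the exposure ratio between the two samples; a one-liner confirms the 6-stop figure:

```python
import math

exposure_ratio = 64                        # "HDR+6": the short sample sees 64x less light
extra_stops = math.log2(exposure_ratio)    # extra highlight headroom gained by the merge
print(extra_stops)                         # -> 6.0 stops
```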
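Posts 10 and 14 argue that merging two readings of one exposure is conceptually simple. The sketch below shows only the naive idea: wherever the long reading clips, substitute the short reading scaled by the exposure ratio. Real implementations (RED's, Arri's, Fuji's) obviously use more careful weighting and tone mapping; the function and numbers here are purely illustrative:

```python
import numpy as np

def merge_readings(long_read, short_read, ratio=8.0, clip=1.0):
    """Naive HDR merge of two readings of the same exposure (linear-light values).
    ratio: how much shorter the early (highlight) reading was."""
    long_read = np.asarray(long_read, dtype=float)
    short_read = np.asarray(short_read, dtype=float)
    # Keep the long reading where it is valid; where it clipped, recover the
    # highlight from the short reading, scaled up to the same exposure.
    return np.where(long_read >= clip, short_read * ratio, long_read)

# Toy example: the second pixel clipped in the long reading but is recovered
# from the short one, which saw 8x less light and still held detail.
print(merge_readings([0.2, 1.0], [0.025, 0.4]))   # -> [0.2, 3.2]
```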
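Finally, a toy single-level 1-D Haar transform to illustrate what "wavelet-based" means in post 15: the signal is split into local averages and local details, and compression comes from quantizing or discarding the small detail coefficients. This is only the textbook idea, not RED's actual codec:

```python
import numpy as np

def haar_1d(signal):
    """One level of the 1-D Haar wavelet transform (even-length input)."""
    s = np.asarray(signal, dtype=float)
    averages = (s[0::2] + s[1::2]) / 2.0   # low-pass half: coarse signal
    details  = (s[0::2] - s[1::2]) / 2.0   # high-pass half: local differences
    return averages, details

def inverse_haar_1d(averages, details):
    out = np.empty(2 * len(averages))
    out[0::2] = averages + details
    out[1::2] = averages - details
    return out

row = [10, 12, 11, 13, 80, 82, 81, 79]   # one toy image row
avg, det = haar_1d(row)
det[np.abs(det) < 1.5] = 0               # crude "compression": drop tiny details
print(inverse_haar_1d(avg, det))         # close to the original, and no 8x8 block grid
```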