
Deanan DaSilva


Everything posted by Deanan DaSilva

  1. Email me at (deanan -a t- red). If you can send us the card, we can try to recover the footage. The cause is most likely a corrupt filesystem, which can happen on the computer or in the camera when the media is not safely unmounted (pulled before ejecting, computer crashing, etc.).
  2. A filmout and a print is a reasonable way to compare how the final output will look in today's theaters. Having seen filmouts from the Origin (and other digcin cameras), I would agree with your statement. Deanan
  3. The DVI is the equivalent of our tap; it's not the output. The output is DPX files over a pure data link to data storage, and it never goes over a video signal. Film and video aren't the only two words we are limited to for describing cameras. What's so wrong with "digital cinema camera" or "digital motion picture camera"? Deanan
  4. From my experience, I would prefer not to look at mapped images to judge exposure. If you map to Rec.709, then you are deliberately compressing your highlights (a quick sketch of this follows below), and you're doing the cinematographer a disservice by making it harder to judge his exposure. It is the equivalent of trying to evaluate your negative by looking at the print. Now, if you provide mappings that let you see a very flat image, or map the shadows and highlights separately, the DP can then make a more informed decision. Ultimately though, once you get used to shooting RAW on a particular camera, you really only need a light meter. Only in more complicated setups, or when you're starting out with a camera, does having good exposure tools really help a lot. Shooting raw is a lot more like exposing film than it is like shooting video, and the tendency to make it work like video isn't necessarily advantageous (unless you just want things to work the way you know them to, which a lot of people do). Deanan
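     A toy illustration of that highlight problem, using the standard Rec.709 OETF with made-up scene values (not any particular camera's pipeline): stops that a wide-latitude raw file may still distinguish all land on the same clipped code value once mapped for a Rec.709 display.

        # Standard Rec.709 OETF; the scene values below are illustrative only.
        def rec709_oetf(L):
            return 4.5 * L if L < 0.018 else 1.099 * L ** 0.45 - 0.099

        for stops in range(7):
            L = 0.18 * 2 ** stops              # mid-grey pushed up N stops
            v = rec709_oetf(min(L, 1.0))       # the display path clips at 1.0
            print(f"+{stops} stops over grey: linear {L:6.2f} -> Rec.709 {v:.3f}")

        # +3 through +6 stops all print 1.000: highlight detail the raw file
        # may still hold reads as identical white on the mapped monitor image.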
  5. I think people are already putting you under the microscope, and while the jpegs may say "don't put me under a microscope," people are already putting the jpegs through torture and making assumptions based on an 8-bit image. Cheers, Deanan
  6. I don't know if this is what Alan is trying to say, but one very important thing about shooting raw is that you need to evaluate the raw signal independently of the monitoring signal. It's very easy to be misled by a signal that has been converted into a completely constrictive color space; doing so adds additional layers of interpretation to the raw data that could easily lead you to make the wrong decisions. Even things like waveform monitors are risky, because they were not designed to be used with linear (to light) or log images, and you have to add a layer of interpretation to put the image into a video space so the tool works as expected. Of course, monitoring for talent and everyone else is a wholly different issue. Deanan
  7. Hi Jim, A 10- or 12-bit TIFF or DPX would be more appropriate, no? Just wondering: if you're rating your camera at 100 or 160 ASA, wouldn't you expect a noise-free image from any decent sensor? I would guess that 160 ASA is a fairly conservative rating in favor of shadow detail and less noise. It would be interesting to know what range you think the sensor would be useful for (i.e. how far can you push it before the noise becomes too apparent and starts popping the patterns). You can sort of guess by gaining up the jpeg, but it would be a very crude guess (and obviously it's too early to be meaningful, because a lot of things will be different once it's actually a shipping product). BTW, have you shot with the MasterPrimes yet? They lean more towards the Cooke's warmth than the UltraPrimes, and they hold sharpness really well out to the edges. I'm a bit surprised by the falloff at T1.3 though (although I have to say, as a personal preference, I really like the falloff at f/1 on the Noctilux). Cheers, Deanan
  8. Sorry, I think there was a little confusion about my post. The daylight balance is for the Dalsa Origin, and the 29 sq. um pixel area is for the Red (5.4 um x 5.4 um, as Greg said). Deanan
  9. Generally you're mapping your exposure to a nominal output via a LUT or transfer function that is specific to your sensor's characteristics. This gives you the ability to control both the exposure and the tonality derived from the raw (a minimal sketch follows below). Deanan
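     A minimal sketch of applying such a 1D LUT, with a generic log curve standing in for the real sensor-specific characterization (the curve is an assumption for illustration only):

        import numpy as np

        # Hypothetical transfer function: a generic log curve standing in for
        # whatever the real camera characterization would be.
        lut_in  = np.linspace(0.0, 1.0, 1024)               # raw linear domain
        lut_out = np.log2(1.0 + 1023.0 * lut_in) / 10.0     # nominal log output

        def apply_lut(raw):
            """Map raw linear values through the 1D LUT with interpolation."""
            return np.interp(raw, lut_in, lut_out)

        print(apply_lut(np.array([0.01, 0.18, 0.50, 0.95])))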
  10. Camera ISO rating is one of those highly subjective things, and people like to have their own ratings (I do it for every film stock I shoot). As such, we recommend that people test the camera and make their own rating choices based on their own criteria. That said, we generally recommend a starting point of 320 for daylight and 250 for tungsten. However, since the camera has a lot of latitude (another rating that everyone has their own way of coming up with figures for), you can certainly rate the camera differently depending on the scene content and how much you want to hold in the shadows vs. the highlights. The sensor is natively daylight balanced, so you can optionally use varying degrees of blue filters in front of the lens to balance, but it's not necessary most of the time (again a subjective decision that also impacts the amount of light needed). I believe the D20 is rated at 300, but some people I've talked to say they generally rate it one or two stops slower. Deanan
  11. Thanks. So, who designed the chip? Just joking... :)
  12. It lies in who designed it :) (although the fab is important for other reasons)
  13. Reconstruction code for Bayer RAW is generally very camera specific, as it has to take into account camera/sensor-specific qualities. There is no DNG equivalent in the motion world where you can embed the general sensor characteristics for a generic raw codec. In the long run it would certainly be very nice to have a standardized raw format (possibly put into DPX v3). The downside is that there are certain camera/sensor characteristics that are not easily generalized that can improve the quality of the reconstruction. From our experience, doing realtime final-quality 4K reconstruction is not really necessary. For editing purposes, you can cut with a proxy-quality reconstruction and only generate the final-quality 4K in non-realtime once you have your final edit. However, that's not to say that a good quality reconstruction cannot be done in realtime (with some tradeoffs in algorithm complexity). A prototype version I'm working on is currently doing the upload/reconstruction/LUT/matrix on the GPU in 15 or so ms per frame. (A generic baseline reconstruction is sketched below for reference.) Deanan
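     For reference, a naive bilinear reconstruction of an RGGB mosaic. This is the generic textbook baseline, not a camera-tuned algorithm like those described above; real pipelines fold in sensor-specific spectral response, crossover points, defect maps, and so on.

        import numpy as np
        from scipy.signal import convolve2d

        def bilinear_demosaic(raw):
            """Naive bilinear reconstruction of an RGGB Bayer mosaic."""
            raw = raw.astype(np.float32)
            h, w = raw.shape
            rgb = np.zeros((h, w, 3), np.float32)   # sparse color planes
            mask = np.zeros((h, w, 3), np.float32)  # where each plane has samples
            for (dy, dx), c in {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 2}.items():
                rgb[dy::2, dx::2, c] = raw[dy::2, dx::2]
                mask[dy::2, dx::2, c] = 1.0
            # Fill the holes in each plane by averaging the available same-color
            # samples in the surrounding 3x3 neighborhood (every 2x2 Bayer block
            # contains all four phases, so the denominator is never zero).
            k = np.ones((3, 3), np.float32)
            for c in range(3):
                num = convolve2d(rgb[..., c], k, mode="same")
                den = convolve2d(mask[..., c], k, mode="same")
                rgb[..., c] = num / den
            return rgb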
  14. Sheesh. You have no friggin' clue what goes into camera, sensor, and algorithm design. Every time you have a sensor rev., filter rev., etc., the algorithm needs to be worked on. It's a continual process and takes a lot of time. You are incorrect about the heavy use of a low-pass filter on the Origin. Making up arbitrary and incorrect assumptions is pointless, especially if you're judging images from a camera you have never shot with and don't know the conditions the images were created under.

     I'm sure Canon came up with a great algorithm in one day, just like you did. Personally, I don't care too much for Canon's reconstruction, or Nikon's, and even less the Adobe one. There are some third-party ones that are better in some respects but worse in others. It's highly subjective, and just because you like one method doesn't mean everyone else will, or that everyone else is looking at the same things you are. The exact same analogy goes for b/w film development: in my opinion, some developers are absolutely rubbish while others suit my taste more.

     Significantly better. The field is relatively new, and there's probably another ten years or more before it starts to mature. We've only started to see some very promising results in optical flow reconstruction algorithms, and we haven't even scratched the surface of AI and other types of techniques. Yes, there are limits to what you can reconstruct, but we're not at a dead end as you seem to be implying.

     Then why bring up 3CCDs? My point is still that film is great, but it's not perfect, as everyone who wants to poo-poo digital seems to want to gloss over. A camera like the Origin is being developed as an alternative, not a replacement, because it has advantages that film doesn't have. Likewise, Foveon sensors, 3CCD systems, etc. all have their advantages and disadvantages. There's nothing pleasing about a cyan or mushy and grainy edge when you're trying to pull a key. Or with having to stabilize, dust-bust, or remove scratches (or soften things with the automatic processes). Again, advantages here, disadvantages there.

     I've shot tons of both, primarily for projection, and still lament their passing :( (yes, Velvia is not the same as it was 10 years ago; their improvements made me start to dislike it). More specifically, from neg to print, you can't keep the saturation. Sure, you can scan or DI to get any colors you want, but in the end you're back at a print that can't hold the saturation. I was assuming we were talking about a purely film process versus a digital process all the way through. If we're not, then forget about this whole discussion, because you won't see any edge artifacts once you're back out to a film stock.

     You'll hopefully be near 2K, but most likely you'll be at 1.5K. Wasn't it you who said 4x the neg size = 4x the pixels? That sounds like more of the same megapixel myth (except if you're scanning more pixels to avoid grain aliasing). Deanan
  15. Hardly. We've been working on the algorithms for about four years now, with image science PhDs dedicated specifically to the algorithms. Our algorithms take into account very specific crossover points and spectral contributions of each biased photosite. Every pixel contributes to the edge quality, not just the green, as you're implying. I can't say I know which images you've seen, but there has been continual improvement, and some of the older images are not close to where they could have been.

     If you shoot something completely red, you're implying that only the red sites are picking up red objects, when in fact each site picks up a broader spectrum than just the color of that pixel. However, in some cases where the exposure is quite low, the contribution in the other pixels becomes not as useful, and then the red resolution starts to suffer. However, this does not mean that this happens all the time and therefore the technology is useless. Likewise, you also get edge artifacts on film because of the layer depths (i.e. the crappy and soft cyan edges on filmed greenscreen elements). (I think David might have seen some of the comparisons.) The process by which we can create a repeatable pixel spectrum response is one of the very useful patents we got when we purchased the Philips division that designed the sensor for the Viper and the Spirit.

     High-frequency desaturation tends to happen more often when you try to overcorrect for desaturation. Likewise, in film your high frequencies can get muddy and blocked up because not all layers are on the same focal plane. Then you also have the color crosstalk problems, which also alter saturation. If you want to talk about desaturation, let's talk about the limited gamut of film, where you can never get the nice saturated reds, yellows, etc. no matter how hard you try.

     I honestly would like to see the results of your algorithm, as we're constantly looking for anything that might work better for certain cases. If you're working on your own algorithms, then you certainly know that the state of the algorithms only continues to improve. And yes, I am working on reconstruction algorithms also (currently a realtime one on the GPU, and next week I'll be training on a pixel-processing coprocessor to do the same there, along with a few other people here).

     It's a combination of things, not just the Bayer algorithm, that causes edge artifacts. It has to do with the color temperature of the sensor vs. the color temperature of the scene, the exposure level, the lens design, and the low-pass/IR filter.

     The 80mm and 150mm for the Mamiya 7 are two of the sharpest MF lenses I've used, and overall considered to be some of the sharpest MF lenses. However, they don't resolve nearly as close to the grain as, say, the Summicron 50. I would guesstimate (highly accurate :) that it's about a 50-60% gain in resolution instead of the expected 100% gain (i.e. a 3K dpi scan vs. a 4K dpi scan shows little improvement in resolution: 8.2K across 7cm vs. 11K across 7cm, worked out below). Deanan
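     The pixel counts above, worked out (7cm being the long-side width of a 6x7 frame):

        # 7 cm of film width scanned at 3000 vs 4000 dpi
        width_in = 7 / 2.54
        for dpi in (3000, 4000):
            print(f"{dpi} dpi -> {width_in * dpi:.0f} pixels across")
        # ~8268 vs ~11024 pixels: a 33% pixel gain, but little visible
        # resolution gain once the lens, not the scanner, is the limit.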
  16. May I ask which Bayer reconstruction software you know uses this algorithm? I would not assume that everyone uses such a simplistic algorithm for their reconstruction. But not necessarily 4x the resolution, as even many good medium format lenses don't generally resolve to the same level as a good 35mm lens. I shoot 6x7 with some great Mamiya lenses and scan on a Nikon 9000, and only one lens resolves somewhat close to most of my Leica lenses. Not all pixels are created equal, but that doesn't mean they're not pixels :) Deanan
  17. Very well said, David. On the other hand, if you look at the professional stills market, in many ways film is better than digital, and digital is now more commonplace because it's better than film in many ways also. Aesthetically, the advantages of either can be argued till the cows come home, but that doesn't mean that how you shoot is any less important. From the time I started shooting with a Nikon FE at nine years old, up through undergrad and graduate school (in photography and film), the one thing I was taught over and over and over was that how you shoot (left and right brains) is more important than what you're shooting with (be it an Instamatic or an F5). In the professional stills world, the practical side (which in many ways affects the aesthetic side) has been deeply affected (or infected) by the digital workflow advantages. But as you said, digital doesn't do everything for us, and it shouldn't, and film should still be a viable option as long as people want to shoot with it. Just as with color and b/w. Now, I'm sure people will argue in response that I'm saying that how you shoot is more important than the aesthetic results. That's not what I'm saying... For example, for all my stills photography I shoot mainly 50-year-old, fully manual, no-built-in-light-meter Leica M2s and M3s with very specific lenses (both newer and older). The practical operation of the camera is as important to me as the aesthetic quality, because it influences the aesthetic results. On the other hand, there are many situations where a nice full-frame digital would be much more suitable and more conducive to a better aesthetic end result. Horses for courses... Deanan
  18. Even if the quality is better than film, there are still going to be the same people who claim film is better. In this case (marketing aside), the rationale for the digital advantages is the OTHER reasons why people want to shoot digital. Just matching film quality alone is no reason to stop shooting film; matching it AND offering a better workflow is necessary to offer a viable alternative. Saying that digital can at present be better than film is highly subjective, and we hear conflicting opinions all the time depending on what features of film you like or dislike. I get the impression from reading this forum over the last year or so that anyone involved with digital camera companies is not welcome here. Somehow we're evil for standing up for our work, while Kodak's spin guy goes continuously unquestioned. Deanan
  19. The immediacy that the MTV generation can't live without. :) Short attention spans that can't wait till the next day to see what they got... Lots of external pressure to make sure you got the shot right then and there... Lots of pressure to keep rolling and keep the set moving... Pressure to get as much coverage as possible... There's a whole new group of people that doesn't dig the magic of waiting and being pleasantly surprised in dailies. On the other hand, there's the theory that digital projection will push out film projection, which will in turn force the labs to raise the price of neg processing, which will push more people to digital, which will push the price of stock up. Digital cameras are tiny in the big picture compared to digital projection. Deanan
  20. This isn't a Dalsa-specific question. It applies to any digital origination, be it CG elements, CG features, animated TV series, HDCAM, accounting data, etc. Obviously for our material a data archive works best for the raw material, but you have to cycle those archives every few years. This part of the industry is in its infancy, and the solutions are just starting to coagulate. For long-term archives, the best solution for any material (including color neg) is 3-strip separation b/w. I'm presuming you think we hate film, but actually we still love film. I still shoot only film for stills (mostly Fuji or Ilford, and very little Kodak, because I don't like the direction their stocks have gone in), and I know all four of us on the special projects team regularly shoot film. We're interested in making the best digital motion camera we can that provides an alternative to film, not a replacement. (You might hear fluffy digital statements from the marketing dept., but they don't listen to us...) Those we've met who are excited about shooting with a camera like ours (or the Viper or D20) aren't excited because they're looking for the digital equivalent of film; they're looking at it because it offers them a different look and a workflow they're more comfortable with (you know, the whippersnappers growing up with non-linear editing). We're happy that there are options, and we want to provide more options, as I'm sure Arri, Red, etc. are also trying to do. I for one would not want to be stuck with one mega film manufacturer and a decreasing number of stocks. If anything, we'd like to see more options in film stocks, not fewer. I can't say the same for film vs. digital projection: watching a great 2K Christie every day makes it really painful to watch a jumpy, dirty, gamut-limited-stock, 1K+ resolution print at the local cineplex. Deanan
  21. Wholeheartedly agree. Which is why we're building custom lens sets based on Leica and Nikon glass (two separate versions, one for each glass). So far we're very pleased with the quality of the Leica conversions we have, and they do cover down to the wides. For situations where you're using a set of lenses where the wides don't cover, we provide a ground glass that is a common extraction based on the coverage of the widest lens. Right now you do trade off some resolution (depending on the aspect ratio). Everyone we've worked with so far hasn't had a problem with this and understands that they're not getting the full sensor resolution. It's analogous to shooting Academy with a 1.85:1 extraction... you're capturing more than your final output format. (Here come the "it's not 4K" replies, doh!) Keep in mind that this is just the first version of the camera, and the decision was made for the larger sensor in order to favor more dynamic range. I can't directly comment about future cameras, but obviously we'd be stupid not to improve in many areas. Deanan
  22. If we're specifically talking about 4K Bayer vs. 4K scanned film, or 4K 3CCD, or 12K RGB stripe to 2K HDCAM SR, I fail to see how it's misleading. You know what a 4K Bayer sensor is and you know what a 4K RGB scan is. There's nothing misleading about a 4K Bayer sensor. It simply has 4K of biased sensor sites, and there's nothing magical about it that misleads you into thinking it's 4K with three full samples per pixel. Likewise, there's nothing about "4K" that says it intrinsically means a three-sample film scan. A 4K CG element rendered in Maya is still a 4K CG element, even though it has a higher MTF than a 4K film scan.

     We're not trying to flog anything. We work day in, day out, 20-hour days and 7 days a week, trying to make a successful product. We are on the technical side, not sales and marketing. It is very tiring to constantly hear what we feel is misinformation. "4K Bayer = 2K RGB film scan" is misleading, and you are consistently perpetuating misleading information in an effort to correct our (and other manufacturers') marketing spew (marketing guys will always be marketing guys and won't always get it across correctly). Granted, the PV guys have done a bang-up job of misinformation, but that does not mean it's acceptable for everyone else (ARRI, RED, you, me, and everyone else) to make claims without having actually tested or used the equipment in question. It's irresponsible to be misleading, whether it's coming from a manufacturer or from someone like you.

     1. The six sites in their sensor (AFAIK) are not individually addressable, so they could only get 3 individual samples per stripe. It also is not amenable to the same types of reconstruction gains that you get with a Bayer pattern. 2. It's pretty much a Sony camera with a PV mount, and Sony has a vested interest in promoting a tape workflow to continue to get an ROI on the boatloads of money they're investing in developing the best tape head technology in the world. HDCAM SR kicks ass, but it has its limitations (frame rate, bandwidth, etc.). 3. All the signal processing is done in camera, and yes, the algorithms have to be much simpler and have more constraints than a software-based approach. We essentially trade away doing the processing in camera so we can use more complicated algorithms. It also means that we aren't throwing away data to squeeze it into the bandwidth of a tape deck. Both are obviously valid approaches with their own advantages and disadvantages. Three stripes makes sense to them, and Bayer makes sense to us (at least until we come out with something better).

     My point was that it's not a perfect 4K sample, which is what seems to be the argument you guys are making. Every different imaging technology has its trade-offs, and simply poo-pooing one because you don't understand the implications doesn't mean it's automatically bad. Film has its own set of problems (color crosstalk, varying focal depths for color layers, gate sway/weave, etc.), and Bayer, RGB stripe, and 3CCD all have their own advantages and trade-offs.

     Um, no. You don't lose any dynamic range. You sample from the surrounding pixels to reconstruct the bad pixel (sketched below). What secrets? These very same discussions have been occurring in the stills world for ages.
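     A minimal sketch of that kind of defect concealment, assuming one known bad photosite on a Bayer mosaic; real pipelines use maintained defect maps and more careful, edge-aware estimates.

        import numpy as np

        def fix_dead_photosite(raw, y, x):
            """Replace a known-bad photosite with the median of its nearest
            same-color neighbors, which on any Bayer layout sit two pixels
            away horizontally and vertically."""
            h, w = raw.shape
            neighbors = [raw[y + dy, x + dx]
                         for dy, dx in ((-2, 0), (2, 0), (0, -2), (0, 2))
                         if 0 <= y + dy < h and 0 <= x + dx < w]
            raw[y, x] = np.median(neighbors)
            return raw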
  23. FYI, it's next to impossible to get a perfect sensor. There will always be dead pixels, hot pixels, or weak pixels that have to be compensated for (within reasonable limits, of course). Don't forget, your true 4K camera is now recording 3x as much data as the 4K Bayer camera, and 3x the data on your servers, and 3x the data in your archives. In our case, we go from ~400MB/s at 24fps to ~1200MB/s (16-bit Bayer vs. 16-bit RGB); the arithmetic is worked out below. If we're lucky, we'll see a true RGB 4K sensor in 5 years or so, and by then shooting 1200MB/s will be a piece of cake :) Deanan Dalsa
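     Those rates, worked out, assuming the 4096x2048 (2:1) raster implied by the 3789x2048 1.85:1 extraction mentioned in the next post:

        width, height, fps, bytes_per_sample = 4096, 2048, 24, 2   # 16-bit
        bayer = width * height * bytes_per_sample * fps  # one sample per pixel
        rgb = 3 * bayer                                  # three samples per pixel
        print(f"Bayer: {bayer / 1e6:.0f} MB/s, full RGB: {rgb / 1e6:.0f} MB/s")
        # ~403 MB/s vs ~1208 MB/s, matching the ~400 vs ~1200 figures above.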
  24. Since this is one of those continuing troll-type questions, I'll take the bait. The lenses that tend not to cover are the wides at about 24mm and below (say, on the Ultra Primes). Likewise, with a larger sensor the FOV is different, so a 20mm at ~55 deg in Academy 1.85:1 is about equivalent to using a 32mm on the Origin (52.8 deg for a 1.85:1 extraction out of the 2:1 sensor; the math is sketched below). With the 1.85:1 extraction you get 3789x2048; you cut off the sides, which means you can then go down to a 20mm lens (or ~76 deg FOV, which is something like a 16mm Academy FOV). The MasterPrimes have even better coverage, but we've been waiting for our sets for 8 months now, so we haven't been able to fully test the coverage. Of course, if you're shooting for a 2.35:1 extraction, then the vignetting is even less of an issue because you're chopping off even more of the top/bottom. The increased size of the sensor pixels also improves the dynamic range, which in my opinion is far more important than meeting arbitrary pixel counts, or even resolution for that matter. I'd take a 4K Bayer sensor with more dynamic range any day over a 2K cosited (a la Foveon) sensor with less dynamic range and clipped highlights.

     For those who want to shoot the full frame, we're building two different custom lens sets based on still camera lenses that have full coverage. Without mentioning names, you probably already know who else has been building most of their prized lenses out of stills glass for many, many years.

     Just curious, Jim: do you have a 4K Bayer image and a 4K film scan that show that 4K Bayer looks like 2K? Alternatively, how about a 4K Bayer image next to a 2K film scan or a ~2K 3CCD Viper image that shows the 4K Bayer image to look the same as the 2K image? Am I on crazy pills, or is it not obvious that a 4K "Bayer" sensor is just that, and that's why it's called a 4K sensor? It's not called a 4K-with-three-samples-per-pixel sensor, is it? And what about the importance of luminance information versus chroma information, or a good Bayer reconstruction algorithm, or the broad spectral response of each biased pixel, which is so easy to ignore when saying 4K Bayer = 2K RGB? Gonna have to update my crazy pills prescription... BTW, I've always wondered why no one mentions the lenses in most "4K" scanners adding yet another layer of lens degradation when talking about "true 4K". Deanan Dalsa
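     The focal-length equivalence above, sketched. The ~21mm Academy 1.85:1 width is the standard figure; the Origin extraction width (~31.8mm) is back-computed from the 52.8-degree number quoted above, so it's an assumption, not a published spec.

        import math

        def hfov_deg(focal_mm, width_mm):
            """Horizontal field of view for a given capture-area width."""
            return math.degrees(2 * math.atan(width_mm / (2 * focal_mm)))

        print(f"20mm on Academy 1.85:1 (21.0mm wide): {hfov_deg(20, 21.0):.1f} deg")
        print(f"32mm on Origin 1.85:1 (31.8mm wide):  {hfov_deg(32, 31.8):.1f} deg")
        # ~55.4 deg vs ~52.9 deg: roughly the same framing, as stated above.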