Deanan DaSilva

Basic Member
  • Posts

    27
  • Joined

  • Last visited

Profile Information

  • Occupation
    Digital Image Technician
  1. A little over two years now.
  2. Email me at (deanan -a t- red). If you can send us the card, we can try to recover the footage. The cause is most likely a corrupt filesystem, which can happen on the computer or in the camera when the media is not safely unmounted (pulled before ejecting, computer crashing, etc.).
  3. A filmout and a print are a reasonable way to compare how the final output will look in today's theaters. Having seen filmouts from the Origin (and other digcin cameras), I would agree with your statement. Deanan
  4. The DVI is the equivalent of our tap; it's not the output. The output is dpx files over a pure data link to data storage, and it never goes over a video signal. "Film" and "video" aren't the only two words we are limited to for describing cameras. What's so wrong with "digital cinema camera" or "digital motion picture camera"? Deanan
  5. From my experience, I would prefer not to look at mapped images to judge exposure. If you map to rec709, then you are deliberately compressing your highlights, and you're doing the cinematographer a disservice by making it harder to judge his exposure. It is the equivalent of trying to evaluate your negative by looking at the print. Now, if you provide mappings that let you see a very flat image, or map the shadows and highlights separately, the DP can then make a more informed decision. Ultimately, though, once you get used to shooting RAW on a particular camera, you really only need a light meter. Only in more complicated setups, or when you're starting out with a camera, does having good exposure tools really help a lot. Shooting raw is a lot more like exposing film than it is like shooting video, and the tendency to make it work like video isn't necessarily advantageous (unless you just want things to work the way you know them to, which a lot of people do). Deanan
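     (A minimal sketch of the highlight-compression point, with illustrative curves only, not any camera's actual transforms: the Rec.709 OETF clips everything above display white, so the top stops collapse together, while a generic log-style view keeps each stop as a distinct step.)

         import numpy as np

         # Linear scene values, in stops above middle grey (0.18).
         stops = np.arange(5)
         linear = 0.18 * 2.0 ** stops        # 0.18, 0.36, 0.72, 1.44, 2.88

         def rec709_oetf(x):
             # Rec.709 transfer function; anything above 1.0 clips to display white.
             x = np.clip(x, 0.0, 1.0)
             return np.where(x < 0.018, 4.5 * x, 1.099 * x ** 0.45 - 0.099)

         def log_view(x):
             # A generic log curve (hypothetical parameters) that keeps
             # over-range values separated instead of clipping them.
             return np.log2(x / 0.18) / 8.0 + 0.5

         print(rec709_oetf(linear))   # top two stops both read ~1.0
         print(log_view(linear))      # every stop stays a distinct step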
  6. I think people are already putting you under the microscope, and while the jpegs may say "don't put me under a microscope," people are already putting the jpegs through torture and making assumptions based on an 8-bit image. Cheers, Deanan
  7. I don't know if this is what Alan is trying to say, but one very important thing about shooting raw is that you need to evaluate the raw signal independently of the monitoring signal. It's very easy to be misled by a signal that has been converted into a completely constrictive color space. Doing so adds additional layers of interpretation to the raw data that could easily lead you to make the wrong decisions. Even things like waveform monitors are risky, because they were not designed to be used with linear (to light) or log images, and you have to add a layer of interpretation to put the image into a video space so the tool works as expected. Of course, monitoring for talent and everyone else is a wholly different issue. Deanan
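     (To make the waveform point concrete, a tiny sketch assuming a Rec.709 encode as the video interpretation: the same middle-grey raw value reads at a very different scope level depending on which signal you feed the tool.)

         def rec709_oetf(x):
             # Rec.709 OETF for a scalar in [0, 1]; over-range values clip.
             x = min(max(x, 0.0), 1.0)
             return 4.5 * x if x < 0.018 else 1.099 * x ** 0.45 - 0.099

         grey = 0.18   # middle grey as a fraction of linear sensor clip
         print(f"linear scope level:  {100 * grey:.0f}%")               # ~18%
         print(f"Rec.709 scope level: {100 * rec709_oetf(grey):.0f}%")  # ~41%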
  8. Hi Jim, A 10 or 12 bit tiff or dpx would be more appropriate, no? Just wondering: if you're rating your camera at 100 or 160 asa, wouldn't you expect a noise-free image from any decent sensor? I would guess that 160 asa is a fairly conservative rating in favor of shadow detail and less noise. It would be interesting to know what range you think the sensor is useful over (i.e. how far you can push it before the noise becomes too apparent and starts popping the patterns). You can sort of guess by gaining up the jpeg, but it would be a very crude guess. (And obviously it's too early to be meaningful, because a lot of things will be different once it's actually a shipping product.) BTW, have you shot with the MasterPrimes yet? They lean more towards the Cookes' warmth than the UltraPrimes do, and they hold sharpness really well out to the edges. I'm a bit surprised by the falloff at T1.3, though (although I have to say, as a personal preference, I really like the falloff at f/1 on the Noctilux). Cheers, Deanan
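     (For what it's worth, a crude version of that "gaining up the jpeg" guess, as a hypothetical sketch: multiplying display-referred 8-bit values only roughly mimics pushing the raw, since the quantization steps widen and anything over 255 clips.)

         import numpy as np

         def push_8bit(img, stops):
             # Crudely "push" an 8-bit image by a number of stops.
             gained = img.astype(np.float32) * (2.0 ** stops)
             return np.clip(gained, 0, 255).astype(np.uint8)

         # Synthetic noisy shadow patch standing in for a real jpeg crop.
         rng = np.random.default_rng(0)
         shadows = rng.normal(12, 2, (4, 4)).clip(0, 255).astype(np.uint8)
         print(push_8bit(shadows, 2))   # noise gains up right along with the signal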
  9. Sorry, I think there was a little confusion about my post. The daylight balance is for the Dalsa Origin, and the 29 sq. µm pixel size is for the Red (5.4 µm × 5.4 µm, as Greg said). Deanan
  10. Generally you're mapping your exposure to a nominal output via a LUT or transfer function that is specific to your sensor's characteristics. This gives you the ability to control both the exposure and the tonality derived from the raw. Deanan
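      (A minimal sketch of that idea with a hypothetical 1D LUT; a real LUT would have 1024+ entries derived from the sensor's characterization, not four made-up sample points.)

          import numpy as np

          # Hypothetical sample points for a sensor-specific transfer function.
          lut_in  = np.array([0.0, 0.18, 0.5, 1.0])   # linear sensor values
          lut_out = np.array([0.0, 0.41, 0.68, 1.0])  # nominal output values

          def apply_lut(x):
              # Piecewise-linear interpolation between the LUT sample points.
              return np.interp(x, lut_in, lut_out)

          print(apply_lut(np.array([0.09, 0.18, 0.36])))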
  11. Camera ISO rating is one of those highly subjective things, and people like to have their own ratings (I do it for every film stock I shoot). As such, we recommend that people test the camera and make their own rating choices based on their own criteria. That said, we generally recommend a starting point of 320 for daylight and 250 for tungsten. However, since the camera has a lot of latitude (another rating that everyone has their own way of coming up with figures for), you can certainly rate the camera differently depending on the scene content and how much you want to hold in the shadows vs. the highlights. The sensor is natively daylight balanced, so you can optionally use varying degrees of blue filters in front of the lens to balance, but it's not necessary most of the time (again, a subjective decision that also impacts the amount of light needed). I believe the D20 is rated at 300, but some people I've talked to say they generally rate it one or two stops slower. Deanan
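      (The stop arithmetic behind re-rating, as a quick sketch: each full stop doubles or halves the rating.)

          from math import log2

          def stops_between(rating_a, rating_b):
              # Signed stop difference between two ISO ratings.
              return log2(rating_b / rating_a)

          print(stops_between(320, 250))  # ~ -0.36: tungsten rated about 1/3 stop slower
          print(stops_between(300, 75))   # -2.0: "two stops slower" than a 300 rating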
  12. Thanks. So, who designed the chip? Just joking... :)
  13. It lies in who designed it :) (although the fab is important for other reasons)
  14. Reconstruction code for Bayer RAW is generally very camera specific, as it has to take into account camera/sensor specific qualities. There is no DNG equivalent in the motion world where you can embed the general sensor characteristics for a generic raw codec. In the long run it would certainly be very nice to have a standardized raw format (possibly put into dpx v3). The downside is that there are certain camera/sensor characteristics that are not easily generalized that can improve the quality of the reconstruction. From our experience, doing realtime final quality 4k reconstruction is not really necessary. For editing purposes, you can cut with a proxy quality reconstruction and only generate the final quality 4k in non-realtime once you have your final edit. However, that's not to say that a good quality reconstruction cannot be done in realtime (with some tradeoffs in algorithm complexity). A prototype version I'm working on is currently doing the upload/reconstruction/LUT/matrix on the GPU in 15 or so ms per frame. Deanan
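      (To give a feel for what reconstruction involves, a deliberately naive bilinear demosaic of an assumed RGGB mosaic; real camera-specific pipelines are far more involved than this, and a shipping version would live on the GPU as described above.)

          import numpy as np
          from scipy.ndimage import convolve

          def bilinear_demosaic(raw):
              # Naive bilinear reconstruction of an RGGB Bayer mosaic:
              # fill in each missing color sample from its nearest neighbors.
              h, w = raw.shape
              r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
              b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
              g_mask = 1 - r_mask - b_mask

              # Averaging kernels for the sparse red/blue and denser green grids.
              k_rb = np.array([[.25, .5, .25], [.5, 1, .5], [.25, .5, .25]])
              k_g  = np.array([[0, .25, 0], [.25, 1, .25], [0, .25, 0]])

              r = convolve(raw * r_mask, k_rb)
              g = convolve(raw * g_mask, k_g)
              b = convolve(raw * b_mask, k_rb)
              return np.dstack([r, g, b])

          mosaic = np.random.default_rng(0).random((8, 8)).astype(np.float32)
          print(bilinear_demosaic(mosaic).shape)   # (8, 8, 3)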
  15. Sheesh. You have no friggin clue what goes into camera, sensor, and algorithm design. Every time you have a sensor rev., filter rev., etc., the algorithm needs to be worked on. It's a continual process and takes a lot of time. You are incorrect about the heavy use of a low pass filter on the Origin. Making up arbitrary and incorrect assumptions is pointless, especially if you're judging images from a camera you have never shot with and don't know the conditions the images were created under. I'm sure Canon came up with a great algorithm in one day, just like you did.

      Personally, I don't care too much for Canon's reconstruction, or Nikon's, and even less Adobe's. There are some third party ones that are better in some respects but worse in others. It's highly subjective, and just because you like one method doesn't mean everyone will, or that everyone else is looking at the same things you are. The exact same analogy goes for b/w film development: in my opinion, some developers are absolutely rubbish while others suit my taste more.

      Significantly better. The field is relatively new, and there's probably another ten years or more before it starts to mature. We've only started to see some very promising results in optical flow reconstruction algorithms, and haven't even scratched the surface of AI and other types of techniques. Yes, there are limits to what you can reconstruct, but we're not at a dead end as you seem to be implying.

      Then why bring up 3CCDs? My point is still that film is great but it's not perfect, as everyone who wants to poo-poo digital seems to want to gloss over. A camera like the Origin is being developed as an alternative, not a replacement, because it has advantages that film doesn't have. Likewise, Foveon sensors, 3CCD systems, etc. all have their advantages and disadvantages. There's nothing pleasing about a cyan or mushy and grainy edge when you're trying to pull a key. Or about having to stabilize, dust bust, or remove scratches (or soften things with the automatic processes). Again, advantages here, disadvantages there.

      Shot tons of both, primarily for projection, and still lament their passing :( (Yes, Velvia is not the same as it was 10 years ago; their improvements made me start to dislike it.) More specifically, from neg to print, you can't keep the saturation. Sure, you can scan or DI to get any colors you want, but in the end you're back at a print that can't hold the saturation. I was assuming we were talking about a purely film process versus a digital process all the way through. If we're not, then forget about this whole discussion, because you won't see any edge artifacts once you're back out to a film stock.

      You'll hopefully be near 2K, but most likely you'll be at 1.5K. Wasn't it you who said 4x the neg size = 4x the pixels? That sounds like more of the same megapixel myth (except if you're scanning more pixels to avoid grain aliasing). Deanan