
Graeme Nattress


Everything posted by Graeme Nattress

  1. If you're in a 29.97fps timeline, use Film Effects, or if you need to go to a 23.98fps timeline, the Standards Converter will do the job. Either way, download the demo and drop me an email if you need any help. Graeme
  2. David, you're a voice of reason as always. Graeme
  3. Banding is sensor noise. All sensors have some kind of inherent noise like this. Normally, this gets calibrated out using the data from the black pixels that surround the frame. We will be doing this in camera, but we're not currently doing it in the software prototype. What you're seeing is a raw, totally uncorrected image, converted to RGB. What we're showing is the raw quality we're starting with BEFORE all these pixel corrections are done. Nobody normally shows you this as, quite frankly, most sensors look really crap before they've been corrected. Graeme
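The black-pixel calibration described above can be sketched roughly as follows. This is a minimal illustration only, assuming a strip of optically black columns at the frame edge; the function name, the black-pixel layout, and the per-row mean approach are my assumptions, not RED's actual calibration pipeline.

```python
import numpy as np

def correct_row_banding(frame, black_cols=16):
    """Hypothetical sketch: estimate each row's offset from the optically
    black columns at the left edge of the frame, then subtract it to
    remove horizontal banding (and the black level) from the whole row."""
    # Per-row offset estimated from the masked (optically black) pixels
    row_offset = frame[:, :black_cols].mean(axis=1, keepdims=True)
    return frame - row_offset
```

The idea is simply that the black pixels see the same row-wise electronic offset as the image pixels but no light, so averaging them gives a per-row correction term.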
  4. We've had the full sensor running up to its rated 60fps in the lab for testing, but we don't have any images from that. Since we're still recording uncompressed while the codec boards are being worked on, it gets tricky to record at that fps :-) However, we're totally confident that the sensor will work at that speed. Graeme
  5. There's nothing wrong with it Deanan. But if it isn't film, it must be video, right, Phil? Graeme
  6. I think I'm going out to take some pictures on my Canon D20 D-SLR "video camera", because, well, it takes images digitally, and if you hold down the shutter in burst mode, you can record motion at 5fps! It must be a video camera, I tell you!
  7. Got to agree with Alan here. The Dalsa is most certainly a "Digital Cinema" camera of very high quality. Video, to me, implies many things that are not "Digital Cinema" - interlace, "sharpness", blown highlights, over-processed images, over-compressed tape formats etc. etc. Digital Cinema is about getting the best quality images you can via digital / electronic capture. It's a quality aesthetic that separates it from video; it's a way of thinking that differs from video. A Digital Cinema camera is much closer to a high end D-SLR than any video camera, as Alan points out. Even if the Dalsa had HD-SDI on it, that wouldn't make it a video camera, any more than a D-SLR becomes a point and shoot because it can make JPEGs. It's more of a philosophy thing than a technology thing. Graeme
  8. Although, at some point, you do have to convert RAW into something more viewable; as you point out, monitoring RAW directly is going to be very misleading. But again, as you point out, converting RAW to REC709 is also misleading, as there's so much more data in the RAW than 709 can allow for. At some point, the skill of the cinematographer is to "see through" these constraints to "see" what the finished result will be. We can all supply tools to do this, but in the end, that's the skill. Graeme
  9. Yes, we do have HD-SDI for monitoring or recording purposes, and yes, a full image processing pipeline to drive that. We also have a DVI port for cheap monitoring: just plug it into an LCD. Why? You've got to see what you're shooting. We need the image processing pipeline for the EVF, so again, you can accurately see exactly what you're shooting. Meanwhile you can be recording RAW, or, if you don't want that, RGB. I don't see why you should buy an expensive LUT box when it's cheaper / easier to do it in camera, and the metadata from those settings can travel along with the RAW data. Graeme
  10. 4096x2304x(12/8)x24/1024/1024 = 324MB/s - so that's for 12-bit packed linear sensor raw data at 4k. There's not much point outputting reconstructed data, as you're just inflating the file size by a factor of 3 for no good reason. Best to reconstruct at the point of viewing. Graeme
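The arithmetic behind that figure can be checked with a short snippet (the function name is mine; this is just the same multiplication wrapped up):

```python
def raw_data_rate_mib_s(width, height, bits_per_pixel, fps):
    """Data rate of packed raw sensor output, in MiB/s."""
    bytes_per_frame = width * height * bits_per_pixel / 8
    return bytes_per_frame * fps / 1024 / 1024

# 4k sensor raw, 12-bit packed, 24 fps
print(raw_data_rate_mib_s(4096, 2304, 12, 24))      # 324.0

# Reconstructed (demosaicked) 12-bit RGB is three channels instead of
# one value per photosite, hence the factor-of-3 inflation:
print(raw_data_rate_mib_s(4096, 2304, 3 * 12, 24))  # 972.0
```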
  11. David, I agree. Red images are digital cinema camera images. They don't inherently look like film or video. They just look like high quality digitally produced images. I think, especially shooting raw, they give a fantastic canvas to work on in post to produce whatever look you wish to create. And yes, digital cinema images have their own unique aesthetic that I think people will begin to enjoy for its own merits.
  12. I'm sure we said the price of the drive was < $1000, the "<" being the key part. Graeme
  13. The JVC method sends the image in black and white, and "peaks" in colour, with a user-defined colour, around the in-focus object. What we're doing is rather different.
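For readers unfamiliar with peaking, the general idea is to highlight pixels with strong local gradients, since in-focus edges are the sharpest. The sketch below is a generic illustration of that concept only; it is not JVC's actual algorithm and, as the post says, not what RED is doing either.

```python
import numpy as np

def peaking_mask(gray, threshold=10.0):
    """Illustrative focus-peaking sketch: flag pixels whose gradient
    magnitude exceeds a threshold. A peaking display would tint these
    pixels a user-defined colour over a monochrome image."""
    gy, gx = np.gradient(gray.astype(float))  # per-axis gradients
    edge_strength = np.hypot(gx, gy)
    return edge_strength > threshold
```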
  14. The 4k cinema demo was fully uncompressed until it hit the Quvis server. The demo on a Viewsonic "4k" medical LCD in the tent was showing a REDCODE prototype. The finished REDCODE will be superior, but I don't think anyone saw any nasties in the demo. I'd reckon that the Quvis codec is probably more brutal to the image than REDCODE, so I'd suggest that what you saw in the cinema would be pretty representative of final quality in that regard. Obviously, we've got a fair way to go to get the sensor fully dialed in, but we're off to a good start.
  15. Traditional HD focus assists are "peaking" or "1:1 zoom", and neither is what we're talking about here with the RED Focus Assist. It's a new concept, and we're rather proud of how its prototype is functioning, but because it's new, we can't say any more about it until it's released.
  16. Come on! It's quite pointless really, because no matter what we say or do, you're going to say it was faked. Red does not claim any particular dynamic range in stops. Some viewers of the footage have claimed some rather extraordinary figures for how it looked, but all we have said is > 66 dB SNR. And for those of you who think it was 3D generated - why would we put in dead pixels and a couple of small pieces of dirt on the image if that was the case? Looks to me that it's utterly pointless to post here if that's what is going to be suggested. Quite frankly, that the footage looks so good that Rodrigo thinks it was done in Maya is incredibly flattering, but really, it's beyond belief.
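For context on what a > 66 dB SNR figure implies, here is a back-of-envelope conversion from dB to stops. Treating usable dynamic range as equal to SNR is a simplification (and, as the post says, RED makes no stops claim), so take this as a rough reference only:

```python
import math

def snr_db_to_stops(snr_db):
    """Rough conversion: 20 dB per decade of signal amplitude,
    one photographic stop per factor of 2."""
    amplitude_ratio = 10 ** (snr_db / 20)
    return math.log2(amplitude_ratio)

print(round(snr_db_to_stops(66), 1))  # 11.0, i.e. roughly 11 stops
```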
  17. Development started 8 months ago. At NAB we'd just got test slices of the sensor. A short while ago we got full working sensors back from fab. I don't know how you can claim that something is "impossible". We are most certainly not using a pre-existing sensor. And anyway, no pre-existing sensor does 4900x2580 @ 60fps with > 66 dB SNR. You don't want to see the camera that took the images. It was a quickly assembled Frankenstein machine, with a large refrigerator-sized drive array to record the uncompressed nearly-5k images.
  18. Please point us at an existing codec that understands raw and has existing hardware (chips) and software support. Thanks.
  19. 1k? I don't think that's correct at all. If you'd said 2k, I could agree with an over-simplistic understanding of the Bayer pattern CFA, how the interpolation works, and what algorithms can do with the construction of the RGB. However, the normally quoted conservative figure is 70%, which gives around 3k measured resolution from a 4k sensor - but again, that's for a simple demosaic, and that's also comparing a Bayer sensor with an RGB sensor, which doesn't achieve 100% either without oversampling. How does keying work at 4k? No problem at all. With a good demosaic algorithm, you're fine. But really, the proof of the pudding is in the eating, and I think we'll let the people who saw the footage speak on that.
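The 70% rule of thumb above works out like this (a one-line calculation; the function name and the fixed 0.7 factor are just this post's rule of thumb, not a general law):

```python
def bayer_luma_resolution(photosites, efficiency=0.7):
    """Rule of thumb from the post: a Bayer CFA sensor resolves roughly
    70% of its horizontal photosite count after a simple demosaic."""
    return photosites * efficiency

print(round(bayer_luma_resolution(4096)))  # 2867, i.e. roughly "3k"
```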