Showing results for tags 'adc'.
Found 3 results
Digital cameras can do some amazing things nowadays considering where they were just five years ago. One thing I sometimes struggle to understand is how these newer cameras with 13+ stops of dynamic range are actually quantizing that information in the camera body.

One thing we know from linear A-to-D quantization is that dynamic range is a function of the number of bits in the converter chip: a 14-bit ADC can store, at best (and ignoring noise for the moment), 14 stops of dynamic range. However, once we introduce noise (sensor, charge transfer, ADC, etc.) and linearity errors, there really aren't 14 meaningful stops. I did a lot of research on pipeline ADCs (which I believe are the type used) and the best one I could find, as judged by measured ENOB (effective number of bits), was the 16-bit ADS5560 from Texas Instruments; it measured an impressive 13.5 bits.

If most modern cameras, the Alexa especially, are using 14-bit ADCs, how are they deriving 14 stops of dynamic range? I read that the Alexa has some dual-gain architecture, but how do you simultaneously apply different gain settings to an incoming voltage without distorting the signal? (A rough guess at how that might work is sketched below.) A pretty good read on this technology can be found in this Andor Technology Learning Academy article. Call me a little skeptical if you will. Not to pick on RED, but for the longest time they advertised the Mysterium-X sensor as having 13.5 stops (by their own testing). Of course, many of the first sensors were used in RED One bodies, which only have 12-bit ADCs. Given that fact, how were they measuring 13.5 stops in the real world?

Now, with respect to linear-to-log coding, some cameras opt for this type of conversion before storing the data on memory cards; the Alexa and cameras that use CineForm RAW come to mind. If logarithmic coding is understood to mean that each stop gets an equal number of code values, aren't the camera processors (FPGA/ASIC) merely interpolating data like crazy in the low end? Let's compare a 14-stop camera that stores data linearly with one that stores data logarithmically. In a 14-bit ADC camera, the brightest stop is represented by 8192 code values (8192 to 16383), the next brightest by 4096 code values (4096 to 8191), and so on and so forth. The darkest stop (13 below the top) is represented by only 2 values (0 and 1). That's not a lot of information to work with. Meanwhile, on the other camera, each of the 14 stops would get ~73 code values (2^10 = 1024, divided equally among 14 stops), assuming a 14-bit to 10-bit linear-to-log transform. As you can see, the brighter stops are more efficiently coded, because we don't need ~8000 values to see a difference, but the low end gets an excess of code values where there weren't very many to begin with. (The per-stop arithmetic is sketched below as well.)

So I guess my question is: is it better to do straight linear A-to-D coding off the sensor and apply logarithmic operations later, or is it better to do the logarithmic conversion in camera to save bandwidth when recording to memory cards? Panavision's solution, Panalog, shows the relationship between linear light values and logarithmic values after conversion, as shown in the attached graph.

On a slightly related note, why do digital camera ADCs have a linear response in the first place? Why can't someone engineer one with a logarithmic response to light, like film? The closest thing I've read about is the hybrid LINLOG technology from Photon Focus, which seems like a rather hackneyed approach.
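To make that dual-gain question concrete, here is a rough sketch (in Python, with made-up gain numbers) of how I imagine two readouts at different analog gains could be merged: the same pixel is amplified through a high-gain and a low-gain path, each digitized by its own 14-bit ADC, and the high-gain sample is used until it approaches clipping, at which point the rescaled low-gain sample takes over. I have no idea whether this resembles what ARRI actually does; it's just a guess to frame the question.

# Hypothetical dual-gain merge (NOT any real camera's pipeline, just a sketch).
# Assumes the same pixel charge is read through two analog gains and two 14-bit ADCs.
HIGH_GAIN = 16.0                # assumed gain of the "shadow" path
LOW_GAIN = 1.0                  # assumed gain of the "highlight" path
ADC_MAX = 2**14 - 1             # 14-bit converter full scale (16383)
CLIP_THRESHOLD = 0.9 * ADC_MAX  # hand off before the high-gain path hard-clips

def merge_dual_gain(high_sample, low_sample):
    """Combine two ADC reads of one pixel into a single linear value.

    high_sample: code from the high-gain path (clean shadows, clips early)
    low_sample:  code from the low-gain path (noisy shadows, holds highlights)
    Returns a linear value expressed on the low-gain scale.
    """
    if high_sample < CLIP_THRESHOLD:
        # High-gain path is still linear; rescale it onto the low-gain scale.
        return high_sample * (LOW_GAIN / HIGH_GAIN)
    # High-gain path has clipped; fall back to the low-gain sample.
    return float(low_sample)

# A bright pixel clips the high-gain ADC, so the low-gain sample is used;
# a dark pixel stays on the quieter high-gain path and gets rescaled.
print(merge_dual_gain(16383, 3000))  # 3000.0
print(merge_dual_gain(800, 50))      # 50.0

If it does work anything like that, the extra stops at the bottom would come from the gain ratio between the two paths (16x here) rather than from extra ADC bits, which is part of what I'm trying to understand.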
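And here is the per-stop arithmetic behind the linear-versus-log comparison above, again in Python, assuming an idealized 14-bit linear encoding and a hypothetical 10-bit pure-log encoding that spreads 14 stops evenly (these are just the numbers I quoted, not any real camera's curve):

# Code values per stop: idealized 14-bit linear vs. 10-bit equal-per-stop log.
LINEAR_BITS = 14
LOG_BITS = 10
STOPS = 14

linear_codes = 2**LINEAR_BITS        # 16384 linear code values in total
log_per_stop = 2**LOG_BITS / STOPS   # ~73 values per stop if spread evenly

for stop in range(STOPS):            # stop 0 is the brightest
    top = linear_codes >> stop       # upper bound of this stop's code range
    bottom = top >> 1                # one stop down is half the signal level
    count = top - bottom             # 8192, 4096, 2048, ... per stop
    if stop == STOPS - 1:
        count += bottom              # darkest stop keeps everything down to code 0
    print(f"stop {stop:2d} below clipping: linear {count:5d} codes, log ~{log_per_stop:.0f} codes")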
If any engineers want to hop in here, I'd be much obliged--or if your name is Alan Lasky, Phil Rhodes, or John Sprung; that is, anyone with a history of technical knowledge on display here. Thanks.
Good afternoon, I am currently studying Film Production Technology (BSc) and am writing a media technology report on sensors and camera systems, specifically looking at how full-frame sensors/camera systems perform better than cropped sensors in minimal light. The idea I was wondering whether people had some views on is this: "camera manufactures say their cameras produce 14-bits, however when you look at their own application notes, performance is shown to have a dynamic range of 1320:1" (Motion Video Productions, 2009). That seems odd, because in reality even an 11-bit camera should produce a dynamic range of 2048:1. Has anyone experienced this with cameras they have worked with before, and would a 14-bit ADC actually produce 14-bit images, given that some of those bits may be given over to the lower parts of the image, for instance the shadows? Many thanks, Auberon
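P.S. For anyone checking my numbers, converting a contrast ratio into stops is just a base-2 logarithm. A quick Python check of the figures quoted above (the 1320:1 value comes from the application note cited, not my own measurement):

import math

# Stops of dynamic range implied by a contrast ratio: stops = log2(ratio).
print(math.log2(1320))   # ~10.4 stops, the application-note figure quoted above
print(math.log2(2048))   # 11.0 stops, the ideal range of an 11-bit linear encoding
print(math.log2(2**14))  # 14.0 stops, the ideal ceiling of a 14-bit ADC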
Hello. I am recording sound on a Nagra 4.2 for a film. It records on 1/4" tape and is crystal-synced at 60 Hz. Since I need to transfer the tape to a digital format and need it to sync up with my 16mm film, shot at 24 fps and converted to 23.976 fps for digital editing, I need the analog tapes to be resolved so that there is lip sync. This requires that the transfer house have a resolver capable of this sync rate (59.94 Hz, slowed down from 60 Hz). Does anybody happen to know any labs that do this in the US? I appreciate your time.
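P.S. In case it helps whoever quotes on the transfer, the slow-down I'm describing is the standard 0.1% NTSC-style pulldown (a 1000/1001 factor), which is where the 23.976 fps and 59.94 Hz figures come from. A quick check in Python:

# Standard 0.1% pulldown: everything runs at 1000/1001 of nominal speed.
PULLDOWN = 1000 / 1001

print(24 * PULLDOWN)  # ~23.976 fps: film speed after pulldown for digital editing
print(60 * PULLDOWN)  # ~59.94 Hz: the resolve reference slowed down from 60 Hz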