Search the Community
Showing results for tags 'digital camera'.
Found 4 results
Hi guys! After some research, I made a little chart on every rich man's favorite camera, the Alexa, which digs into how it really works. So I thought I might as well share it. The content should hold true for every Alexa, I believe, since they all have pretty much the same sensors and algorithms (even the Alexa 65, I assume, whose sensor consists of three Alexa sensors stitched together). On the right of the table are the only officially available curves to my knowledge, the LogC ones. The other curves aren't published, but should be, so here they are. Where did I find this stuff? A combination of gathering bits of information here and there, and the ability to think like an Arri R&D engineer, I guess (they always seem to make the most logical choices in sensor architecture). CLICK the chart to zoom, and enjoy.
Digital cameras can do some amazing things nowadays considering where they were even five years ago. One thing I sometimes struggle to understand is how these newer cameras with 13+ stops of dynamic range actually quantize that information in the camera body.

One thing we know from linear A-to-D quantization is that dynamic range is a function of the number of bits of the converter chip. A 14-bit ADC can store, at best (ignoring noise for the moment), 14 stops of dynamic range. However, once we introduce noise (sensor, charge transfer, ADC, etc.) and linearity errors, there really aren't 14 meaningful stops. I did a lot of research on pipeline ADCs (which I believe are the type used) and the best one I could find, as measured by ENOB (effective number of bits), was the 16-bit ADS5560 from Texas Instruments; it measured an impressive 13.5 bits.

So if most modern cameras, the Alexa especially, are using 14-bit ADCs, how are they deriving 14 stops of dynamic range? I read that the Alexa has a dual gain architecture, but how do you simultaneously apply different gain settings to an incoming voltage without distorting the signal? A pretty good read on this technology can be found at this Andor Technology Learning Academy article. Call me a little skeptical if you will. Not to pick on RED, but for the longest time they advertised the Mysterium-X sensor as having 13.5 stops (by their own testing). Of course, many of the first sensors went into RED One bodies, which only have 12-bit ADCs. Given that fact, how were they measuring 13.5 stops in the real world?

Now, with respect to linear-to-log coding, some cameras opt for this type of conversion before storing the data on memory cards; the Alexa and cameras that use Cineform RAW come to mind.
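For what it's worth, my understanding of how a dual-gain readout can avoid distorting the signal is that the pixel's charge is sampled through two amplifier paths at once, rather than one path being switched mid-read. Here's a minimal sketch of the merge step; the 16x gain ratio, 14-bit converters, and 90% clipping threshold are illustrative assumptions on my part, not Arri's actual design:

```python
# Illustrative sketch of a dual-gain readout merge -- the general idea,
# not any manufacturer's actual implementation. Each pixel is sampled
# through two amplifier paths simultaneously: a high-gain path that
# resolves shadow detail and a low-gain path that keeps highlights
# from clipping.

GAIN_RATIO = 16          # assumed gain difference between the two paths
ADC_MAX = 2 ** 14 - 1    # assume both paths feed 14-bit converters

def merge_reads(high_gain_code, low_gain_code, threshold=0.9):
    """Combine two simultaneous ADC reads of the same pixel into one
    linear value, expressed in low-gain units."""
    if high_gain_code < threshold * ADC_MAX:
        # High-gain path is well below clipping: its read has finer
        # shadow resolution, so scale it down and use it.
        return high_gain_code / GAIN_RATIO
    # High-gain path is at or near clipping: fall back to the low-gain
    # read, which still holds valid highlight information.
    return float(low_gain_code)

# A dark pixel: the scaled high-gain read is used (800 / 16 = 50.0).
print(merge_reads(800, 50))
# A bright pixel: the high-gain path clipped, so the low-gain read wins.
print(merge_reads(16383, 12000))
```

Because both paths see the same charge at the same instant, nothing is "switched" on the incoming voltage; the blend happens downstream on the two digitized results.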
If logarithmic coding is understood to mean that each stop gets an equal number of code values, aren't the camera processors (FPGA/ASIC) merely interpolating data like crazy in the low end? Let's compare a 14-stop camera that stores data linearly with one that stores it logarithmically.

In a 14-bit linear camera, the brightest stop is represented by 8192 code values (8192-16383), the next brightest by 4096 code values (4096-8191), and so on. The darkest stop (13 below the brightest) is represented by only 2 values (1 or 0). That's not a lot of information to work with. Meanwhile, on our other camera, assuming a 14-bit to 10-bit linear-to-log transform, each of the 14 stops would get ~73 code values (2^10 = 1024, divided equally among 14 stops). As you can see, the brighter stops are coded more efficiently, since we don't need ~8000 values to see a difference, but the low end is assigned an excess of code values when there weren't very many to begin with.

So I guess my question is: is it better to do straight linear A-to-D coding off the sensor and apply logarithmic operations later, or to do the logarithmic conversion in camera to save bandwidth when recording to memory cards? Panavision's solution, Panalog, illustrates the relationship between linear light values and logarithmic values after conversion.

On a slightly related note, why do digital camera ADCs have a linear response in the first place? Why can't someone engineer one with a logarithmic response to light, like film? The closest thing I've read about is the hybrid LINLOG technology from Photon Focus, which seems like a rather hacky approach. If any engineers want to hop in here, I'd be much obliged; likewise if your name is Alan Lasky, Phil Rhodes, or John Sprung, or anyone else with a history of technical knowledge on display here. Thanks.
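To make the code-value bookkeeping above concrete, here's a quick sketch using the same illustrative numbers (14 stops, a 14-bit linear ADC, and an idealized 10-bit log encoding that splits the range evenly per stop):

```python
# Code values per stop: linear 14-bit coding vs. an idealized 10-bit
# log encoding that spreads values evenly across 14 stops.

LINEAR_BITS = 14
LOG_BITS = 10
STOPS = 14

# Linear: each stop down from clipping halves the number of codes.
# Stops 1-13 get 8192, 4096, ..., 2 codes; the darkest stop is left
# with just the two remaining codes (0 and 1).
linear = [2 ** (LINEAR_BITS - s) for s in range(1, STOPS)]
linear.append(2 ** LINEAR_BITS - sum(linear))

print(linear[0])    # 8192 codes in the brightest stop
print(linear[-1])   # 2 codes in the darkest stop

# Log: every stop gets an equal share of the 10-bit code range.
per_stop = 2 ** LOG_BITS / STOPS
print(round(per_stop, 1))   # ~73.1 codes per stop
```

The asymmetry is the whole argument: the log encoding hands ~73 codes to a stop that the linear ADC only ever described with 2, so those extra shadow codes can't contain new information, only interpolation of what the converter delivered.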
(WOODLAND HILLS, CA) - Panavision, the company behind many of the industry's most respected cinema lenses for the last 60 years, has introduced a new line of Primo lenses, the Primo V series, specifically designed to work with today's high-resolution 35mm digital cameras.

"Panavision's unmatched optical expertise and high-quality manufacturing capabilities have now been brought to bear on lenses adapted for digital cameras," says Kim Snyder, Panavision's Chief Executive Officer. "We're focused on providing cinematographers with the best tools to tell their stories with vision and creativity. With the industry's ongoing transition to digital capture, we want our customers to know they can continue to trust Panavision to bring innovative, world-class solutions to the marketplace."

The Primo V lenses are designed to bring the look and feel of Panavision Primos to digital cinematography, using the lens elements from existing Primo lenses, long an industry standard for top cinematographers. Primo V lenses take advantage of specific design adaptations to work in harmony with digital cameras, maximizing image quality while delivering Primo quality and character.

"Cinematographers tell us that the hyper-sharp sensors in today's digital cameras can result in images that are harsh and lack personality," says Panavision's VP of Optical Engineering Dan Sasaki. "That's one reason why there's so much emphasis on glass these days. The Primo V lenses bring the smooth, organic flavor of Primo lenses to the high-fidelity digital image. Our philosophy is to take what cinematographers love about the Primos, and update them for the digital world."

Digital cameras require additional optical elements, including low-pass and IR filters, that increase off-axis aberrations; ND filters are sometimes part of the chain as well. Primo V lenses have been re-engineered to correct for this.
Patent-pending modifications eliminate the coma, astigmatism, and other aberrations introduced by the additional glass between the lens and the sensor, while preserving the desirable imaging characteristics of the Primo optics. The resulting image appears more balanced center-to-edge.

The Primo V lenses are compatible with any digital camera equipped with PL or Panavision 35 mount systems; they cannot be used on film cameras. The internal transports and mechanics of the Primo V lenses retain the familiar Primo feel. Since the Primo V lenses retain the essential Primo character, imagery from Primo V and standard Primo lenses will intercut well. A set of Primo V primes will include 14.5, 17.5, 21, 27, 35, 40, 50, 75, and 100mm focal lengths.

"Filmmakers have embraced Panavision Primo lenses since their introduction 25 years ago," notes Snyder. "Now the classic Primo look has been refined and optimized for use with the latest generation of 35mm-sensor digital cameras."

# # #

About Panavision

Panavision Inc. is a leading designer and manufacturer of high-precision camera systems, including both film and digital cameras, lenses, and accessories for the motion picture and television industries. Renowned for its worldwide service and support, Panavision systems are rented through its domestically and internationally owned and operated facilities and distributor network. Panavision also supplies lighting, grip, and crane equipment for use by motion picture and television productions.

Media Contacts:
ignite strategic communications
Sally Christgau (415.238.2254 / email@example.com)
Lisa Muldowney (760.212.4130 / firstname.lastname@example.org)