
Otis Grapsas

Basic Member
  • Posts

    44
  • Joined

  • Last visited

Everything posted by Otis Grapsas

  1. Not following the web much, been very busy.
  2. http://www.cinematography.com/index.php?showforum=107 Also on the main page of the forum, forums on all cinematography related subjects, it really IS a cinematography forum :) http://www.cinematography.com/index.php?showforum=35
  3. You haven't googled "gopro cigarette lighter cable" right? :)
  4. And here is how it appears to the user in preview mode, on touchscreen or laptop screen. The main screen allows single-click changes to film emulation model, white balance, saturation mode, shutter, ISO and frame rate. It also provides exposure aid, pixel zoom control and full transport. Other panels get into detailed setup, building presets, video and proxy rendering, etc. It's designed so that the user will rarely have to leave the main panel.
  5. This is what a camera head of this type looks like. http://www.alliedvisiontec.com/us/products/cameras/gigabit-ethernet/prosilica-gx/gx1910.html Notice the GigE ports, 1 or 2 depending on the camera. 1 is required for 1920x1080 24/25p 12-bit, 2 for slow motion. The companies make a tripod mounting plate and the cameras usually include additional M2/M3 threads for mounting equipment or for creative camera mounting/installation. This specific company also makes a serially controlled Canon-EF lens mount. I haven't used one yet, but it should be easy to control from the computer for aperture changes. It can also drive focus electronically. Due to the 2x crop factor of the 2/3" Kodak, a 70-200 lens makes a 140-400 academy 35 equivalent. The 2/3" GigE Vision cameras always come with a standard C-mount, so they need adapters for PL mount: http://www.c-mount.passion.pro/article/Adapter+pl+mount+to+c+mount.html For C-mount, Fujinon makes some very affordable large aperture (f1.4 and f1.8) 2/3" lenses (about 200-250 euro each): http://www.fujinon.de/en/optical-products/cctv-and-machine-vision/products/fixed-focal-length-lenses/23-5-megapixel/ It's a standard C-mount so it will also adapt to any lens that allows mechanical aperture control, Nikon F-mount etc.
  6. With the latest tests, we can also skip the 2.5" RAID0. A single HDD is finally fast enough. A single 2.5" HDD properly configured will safely do 90 minutes of 1920x1080 24p 12-bit raw (400GB) for 50 euro. The equivalent 400GB, 90-minute 2.5" SSD costs 800 euro. On a laptop, the single HDD/SSD can be installed in a HDD/optical bay caddy. Every laptop has one these days.
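These figures follow directly from the raw data rate. Here is a quick sketch of the arithmetic, assuming pure sensor data with no file-container overhead (which a real recorder would add on top):

```python
def raw_datarate_bits(width, height, fps, bit_depth):
    """Uncompressed single-sensor (bayer) raw data rate in bits per second."""
    return width * height * fps * bit_depth

def storage_gb(width, height, fps, bit_depth, minutes):
    """Storage for a recording of the given length, in GB (10^9 bytes)."""
    total_bits = raw_datarate_bits(width, height, fps, bit_depth) * minutes * 60
    return total_bits / 8 / 1e9

rate = raw_datarate_bits(1920, 1080, 24, 12)   # ~597 Mbit/s
space = storage_gb(1920, 1080, 24, 12, 90)     # ~403 GB, matching the ~400GB quoted
```

At roughly 75 MB/s sustained, this sits within reach of a properly configured single 2.5" HDD, which is exactly the point being made.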
  7. You mean a one-piece camcorder? Yes, that's the standard Drama. The tethered system setup is a way to reduce cost. A tethered Drama will be about 5,000 if one already owns a good laptop. A Drama camcorder will be about 8,000 euro with 3 hours of recording. The Ikonoskop dII that uses one of the Kodak sensors I use is 8,000 euro with C-mount and a card reader but also requires 2,000 euro per hour of recording. 14,000 euro for a camera with 3 hours of recording. You know how body pricing is for Alexa, Red etc and how expensive the storage is. You mean HDMI for monitoring? HDMI can be easily added, but it will not be a signal that can be recorded for production output. VGA is more affordable in large cable runs and allows more affordable monitoring than SDI or even HDMI. It can also achieve the native 4:3 format Drama uses for monitoring with 1:1 pixel precision, which is critical for focus and for image and user interface clarity. The Drama monitor output is just for preview and includes camera settings and user interface. A problem with raw cameras is that they require very intensive calculations for production quality output. I don't know of any RAW camera that produces a good output in real time. Shooting a test chart reveals that it's a fast debayer with fast downsampling. My prototypes use 800x600 monitor outputs with 800x450 video preview on the top and no overlays. All info and user interface is isolated on a 800x150 strip below the video image. Screen space is very valuable on a small touchscreen. Being able to use the full image width allows eliminating zoom modes that make the image smaller when the user interface is visible.
  8. About tethered operation: The camera head can be used at a very large distance from the laptop or other system recording the RAW video. 20m is practical with a high quality ethernet cable. The 8" touchscreen can also be used at reasonable distances. If it's for monitoring only, with a 12v battery you can take it to 20 meters with good VGA cables. In this case, someone will have to start and stop the recording from the laptop. With a USB touchscreen, 4m is the practical limit due to the USB limitation. With special USB cables, this distance can also be increased to 20 meters. The weight of a camera head is a maximum of 300g and an 8" VGA touchscreen monitor is about 700g. On most laptops, the external monitor can duplicate the on-screen image, so the laptop can be used as a second monitor. Both monitors will show the full camera settings and video preview. Great for avoiding mistakes.
  9. You might have heard of Drama, an uncompressed digital cinema camcorder I have prototyped. In order to take the project further I decided to expand its scope. The user interface and image development software used for my Drama prototype digital camera system is now fully compatible with any GigE Vision compatible camera head. These cameras are high quality CCD and CMOS heads that connect to a computer using a gigabit ethernet cable and stream fully uncompressed 12bit raw video. They use 12v power and typically support a very wide voltage range. They come with a C-mount lens mount. These heads come in: Kodak 2/3" 1080p CCD, Sony 2/3" 1080p CCD, CMOSIS 2/3" 1080p global shutter CMOS, Sony 1/3" 720p CCD, and a multitude of other sensors for special applications, 4:3 formats for scanning film, very high resolutions, medium format for stills, etc. System requirements for 1080p 12bit raw capture: 2.8GHz dual-core Windows XP or later laptop with gigabit ethernet port (every laptop has it these days). High quality SSD second drive (not the OS one), or a good 2x2.5" RAID0 HDD or single SSD drive connected to an existing eSATA port or through ExpressCard or an ExpressCard SATA adapter. Of course, one can use any SATA system, including 10TB RAID towers. Any video card will work, Intel embedded or anything else. Any audio input will work but a good USB sound card with preamps is recommended for high quality. The sound is recorded in sync and embedded in the video files. 200 euro will get top quality phantom powered preamps and full monitoring and routing options in this competitive market. USB mixers are also compatible. The cost of such a laptop including 1000GB eSATA HDD RAID0 storage for hours of 1920x1080/24p/12bit RAW recording is about 1,000 euro. The datarate for 1080p/24 12bit RAW is 600 Mbit/s. Monitoring is provided on the laptop screen in a similar fashion to a tethered SI-2K mini. A small external VGA monitor can be used of course and it can also be touchscreen.
The user interface is identical to that of my digital cinema camera prototypes. Fast, friendly and cinematography oriented with full camera control, transport and powerful non-destructive metadata manipulation. It is designed for touchscreen operation but a mouse can also be used. A good 8" LED monitor with USB touchscreen is about 200 euro. It's more practical and the laptop can have its lid closed. The estimated cost for a Drama software and camera head package is from 2,000 to 5,000 euro depending on the camera head chosen. That includes OLPF installation to the head. An upgrade to the Drama portable system enclosure, a small rugged PC with very low noise, internal HDD/SSD storage, multiple monitoring outputs, Anton Bauer battery support, aux power outputs, industrial quality connectors etc., will be available for approximately 3,000 euro. Everything about the system is open, upgradable and available on the open market from multiple vendors. Even the camera heads. No vendor specific media, monitoring, interfacing etc. An upgrade to a future sensor will cost approximately 1,000 to 3,000 euro and will be a drop-in replacement. There is a lot of development effort from Sony, CMOSIS, Kodak and others, especially in the CMOS department. The old camera head can be sold to the astronomy enthusiast community to further reduce upgrade cost, traded in or used with a new license of Drama software to form a second camera system. The system includes full functionality of the Drama Digital Cinema software. Image and video proxy or full quality batch output, the Drama color processing and dynamic look up tables, metadata, film emulation, color control, real time color preview etc. It massively improves the image quality of any machine vision head, because it includes 6 key calibration technologies unique to the Drama software and targeted to the specifics of each individual sensor. These technologies improve the RAW level performance of the camera heads.
The camera heads will be individually calibrated using Drama calibration software for optimum performance after the OLPF filter is added. Even though the machine vision camera heads are high quality designs, the vendor processing quality varies depending on their intellectual property investment and skill. Drama solves this problem by complementing or replacing it with processing provided by the Drama software. 2/3" C-mount prime lenses with a 5-megapixel spec, low distortion and large apertures are 200 to 300 euro each. So, it is: 1,000 for a laptop including 1,000GB eSATA storage. 2,000 to 5,000 for Drama software, a camera head, OLPF installation and calibration. A maximum of 6,000 euro for a complete laptop based system including lots of storage. Optionally, a 3,000 euro upgrade for a Drama deck if you decide to upgrade from the laptop to a full system. DIY systems are fully compatible of course, as long as they use Windows and the specs are right. For about 1,500 euro one can put together a system that is smaller and more practical than a laptop, has internal storage and uses Anton Bauer batteries.
  10. Yes, we did have resolution discussions even when we were only watching 600 lines PPH projections, when we were lucky, in cinemas. The first Sony HD features looked pretty bad but it wasn't resolution that bothered me. It was the color and overexposure. The audience didn't seem to care though. It looked as sharp as a 35mm projection and the audience didn't complain. That's the definition of good enough, isn't it? In the still camera market it's beyond the point of ridiculous, they have 1um pixels now and even their basic ISO contains a lot of noise reduction and produces a very low dynamic range. At least Red chose a large sensor for a large resolution. I hope we don't get to the point where the megapixel number will be as important as in the digital still camera market. I believe we have come to a point where almost all commercial camcorders provide acceptable sharpness (ignoring compression). The agenda has to shift to aesthetics at some point. Fewer people complain about the F35 compared to Red One when it comes to color response and there is discussion on the subject. That's a very good sign. When a technology establishes itself and the discussion shifts from specs to aesthetics, it's a sign of maturity.
  11. Brian, John obviously doesn't like my point of view on 3D and tells me to check my eyes (and brain in the amblyopia case!). You don't have to join in with your mom's advice even though it's solid advice :D:rolleyes: Even though I'm still discovering the English terminology I have studied vision in depth. What I'm describing is well documented. I don't need any advice on the subject of health and providing it used to be bad etiquette. Markshaw, I agree. One could potentially damage his vision with 3D viewing of action films. Anything that doesn't feel natural and requires recovery is potentially harmful. I wouldn't want my kids to watch 3D 12 hours a day. We know the typical action videogame is not the best thing, let's not make it worse.
  12. Did all these resolution discussions come up before Red came into the digital cinema camcorder market? Resolution is the Red agenda, and we are happily chewing on it :)
  13. I know what it is but I don't have it, and I haven't even met anyone who had it. It can't be very common. If you read my post a little more carefully you will see that my point is that 3D cinema is not realistic. Humans learn to ignore the limited 3D effect of their vision, it's how the brain functions. Cinema exaggerates it so much that viewers require a period of recovery after watching the large screen 3D extravaganza. I have talked with people who had their vision drastically improved after surgery or optical correction. It's hard to notice the differences while recovering from surgery but when it's corrected by lenses, many can see an interesting difference. They notice a 3D effect after recovery, they see the world as separate layers in depth, it is very pronounced. After a couple of days the effect wears off because the brain adjusts to ignore it. They are back to normal vision, where one eye dominates the perception on the focus point. You can't argue with survival mechanisms :)
  14. The limit is obviously the lens when pixels get this small. I have done many tests with 5.5um pixels using 16mm lenses and some lenses are very soft even stopped down. The lenses are not really designed for this resolution. Going to 2um pixels wouldn't change the MTF. More chroma resolution but not sharper, and a great loss in dynamic range and noise. In real shooting with DOF and motion, I doubt even 7um would look softer to the viewer. We can show film and digital to be very sharp with tests, but is the lens setting and setup typical in filmmaking? Is the ideal aperture typical? Is a flat chart a typical subject? Are subjects as static as a test chart? If you put a highpass filter on a digital release you will rarely see anything pass the filter, unimportant detail on an occasional wide angle setup perhaps for a couple of seconds. Real world vs theory. It would only be noticeable on a direct comparison, switching A/B images one on top of the other, probably. We sometimes forget that cinema is not static technical imaging, it's storytelling in motion. (storyshowing for the writers in the forum!). Advancing the technology is good, but the limits of perfectly acceptable are lower than 720p IMHO and the audience does not really care if it's 4K or 8K. What they do care about is the aesthetic of the image, the saturation abilities, the skin tone, the tonality. They do perceive those, even if it's downsampled to a DVD or to internet video and heavily compressed. You can touch the audience with low or high saturation and you can involve them with a good skin tone. The character will come to life, it will become something the audience can relate to. If digital video wants to mature, it needs an aesthetic approach, not more pixels. The extra pixels always come at a cost in pixel quality, workflow, processing and archival anyway, nothing is free.
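The lens-is-the-limit argument can be sketched with the usual cascade model, in which system MTF is the product of the component MTFs. This is a first-order illustration only: the sensor term is the standard sampling-aperture sinc, and the flat lens MTF value is a hypothetical stand-in for measured lens data:

```python
import math

def sensor_mtf(freq_lp_mm, pixel_um):
    """Sampling-aperture MTF of a square pixel: |sinc| of frequency times pitch."""
    x = freq_lp_mm * pixel_um / 1000.0   # cycles per pixel pitch
    if x == 0:
        return 1.0
    return abs(math.sin(math.pi * x) / (math.pi * x))

def system_mtf(freq_lp_mm, pixel_um, lens_mtf):
    """Cascade model: component MTFs multiply."""
    return sensor_mtf(freq_lp_mm, pixel_um) * lens_mtf

# With a soft lens at 0.2 MTF at 100 lp/mm, the lens caps the system response:
# no pixel, however small, can push the product above 0.2.
mtf_55 = system_mtf(100, 5.5, 0.2)   # 5.5um pixel
mtf_20 = system_mtf(100, 2.0, 0.2)   # 2um pixel, still lens-limited
```

The smaller pixel raises the product a little, but the ceiling set by a soft lens remains, which is why the shrink buys chroma resolution more than visible sharpness.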
If digital developers do not understand why a cinematographer might choose different stocks for different tonal results, film will always have a market. I'm quite young, 35 at the moment, and my generation has shot very little film in motion. Even when we have, we never had enough time and exposure to the medium to experiment with different stocks. Many are shocked when they notice the tonal advantages of film but some are so immersed in generic digital imaging from their camcorder background that they accept many of the digital artifacts and drawbacks as normal or even advantageous. Most of us are not aware of the tonal differences of different stocks. This is a cinematography forum. Where are the stock comparison threads in subjects that involve skin tone? All this supports the market for high resolution digital camcorders and establishes resolution as a main direction for future improvement. What my generation generally uses to compare filmmaking equipment is resolution and noise, there is no aesthetic decision making whatsoever. I have seen countless comparisons of A (wonderful skin tone with a little noise) vs B (terrible skin tone with a flat, clean and sharp computer graphics look) and 90% of the time, the second image is what the majority prefer. I'm not talking about the audience, the audience would choose the better skin tone, I'm talking about us, the half educated videographers that read technical articles, do technical comparisons and write Director of Photography on our business cards. I don't believe this will change for many years. The market is shaped by the majority and going after aesthetics in digital imaging is a very small niche the majority does not care about. Not large enough to support a business really. It's like trying to operate a business on high end tube power amplifiers in a world of integrated video receivers with multichannel amplifiers and a great selection of surround decoders and DSP.
Another problem I see is that with RAW bayer, digital image development becomes very generic. We have already seen this in still photography. While the manufacturers pack some good aesthetic engineering into in-camera production of JPEG and their own RAW development apps, Adobe DNG will make sure the differences will be eliminated at some point. Everything will happily look the same with generic processing designed to match the generic medium quality still camera and hide its flaws. With Cinema DNG we will see the same phenomenon in the digital cinema market.
  15. By googling I found the English word for it, Amblyopia. Is that what you are talking about John? Amblyopia is an extreme form of eye dominance, very rare. If you have it, chances are you are not discussing 3D because 3D projection methods do not even work for you. One eye has to either provide limited feedback while your brain is developing or be very inferior to the other eye in order to be effectively discarded. The normal eye dominance is a survival mechanism. The brain learns to select one eye when the focus is on a distant object to help aim when throwing things at predators and other humans :) It also allows using limited brain resources to build skill in one eye and one hand that will handle the job more effectively. In its most effective form it's combined with hand dominance: right-handed and right-eyed, or left-handed and left-eyed. This allows aiming and also keeping both eyes open when aiming. If you focus on a distant object and point to it with your finger without thinking about it and without switching focus to the finger, you will see two images of the out of focus finger and one image of the subject, and your finger will have automatically landed so that the dominant eye view of the finger is in front of the subject. Humans without this mechanism would get a more confusing view of the world and would have problems aiming without closing one eye. They would enjoy 3D cinema fine though, even better probably. They wouldn't be so annoyed by the exaggeration of difference between the two eyes. IMHO this is why we get tired of 3D easily. Exaggeration itself, and how exaggeration discards eye dominance. The very existence of the eye dominance mechanism supports the view that binocular vision is not very important to humans. We have to live with binocular vision that provides a wider angle of view of the environment but we learn to ignore it when parallax becomes a problem in important survival skills. What's the point of the feature then?
Enjoying flowers in their 3D glory? :) Perhaps binocular vision was planted there in case we get so advanced that we can totally isolate the eyes and see Avatar... I believe that in a few years, studies will show that watching a lot of 3D material with its exaggerated parallax is damaging to vision. They will recommend against allowing kids to watch more than 1 hour a day, that sort of thing. Artificial 3D is certainly not a normal way to see the world. Mark Pesce, one of the early pioneers working for Sega, Silicon Graphics and others on 3D technology, wrote this: "I helped Sega develop a head-mounted display (fancy VR headgear) that could be plugged into the Sega Genesis (known as the Mega Drive in Australia). Everything was going swimmingly, until we sent our prototype units out for testing." "Your brain is likely to become so confused about depth cues that you'll be suffering from a persistent form of binocular dysphoria. That's what the testers told Sega, and that's why the Sega VR system - which had been announced with great fanfare - never made it to market." "Either 3D television will quickly and quietly disappear from the market, from product announcements, and from broadcast plans, or we’ll soon see the biggest class-action lawsuit in the planet’s history, as millions of children around the world realize that their televisions permanently ruined their depth perception. Let’s hope 3D in the home dies a quiet death." http://www.abc.net.au/unleashed/32814.html
  16. I hope that was humor, John :) If you see differently, science would gain a lot by studying your vision. I don't know anyone without a dominant eye. I know a few people with cross dominance but that's still one dominant eye (and a viewfinder problem).
  17. I can see 1080 luma lines PPH on a synthetic 1080p source debayered with a good algorithm. The problem is real sensors with real lenses. We are dealing with 100 lp/mm today. In any case, I wouldn't say a 1080p source is 540 lines PPH, because I typically see more than 900 lines. No difference from a 3CCD/3CMOS source really in luma, because 3CCD/3CMOS is also limited by sensor MTF and lens MTF, the difference is that the debayer losses are substituted by the beam splitting prism losses. The 3CCD typically allows luma to alias instead of using an antialias filter, but OLPF only changes midrange MTF and can be compensated using digital post filtering. You do not need 8000 lines to resolve 4000 lines (2000 line pairs). By sampling theory, you need half of that. You need 100Hz sampling to resolve 50Hz and 100 lines to resolve 50 line pairs = 100 lines. You do need 2x for resolving lines of chroma, but we are less sensitive to chroma and chroma is typically distributed with 4:2:0 formats so the demands are 1/2 of luma. We usually agree that 1920x1080 HD with 5um pixels is sharper than 16mm film. Some HD networks do not even consider 16mm sharp enough for HD programming. Why don't we also agree that a 5um s35 academy/s35 sensor is sharper than 35mm academy/s35 film? From a lp/mm point of view it's exactly the same. But the digital issues have nothing to do with resolution, the problem is the tonal response, the digital look. I don't see many people complaining HD is not as sharp as film. It's sharper than a 2K scan which is adequately sharp in my book. Reason for editing: I had a few words disappear in random places in the post?!
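The lines versus line pairs arithmetic above can be written out explicitly. This is the idealised sampling-theorem model, ignoring MTF roll-off and debayer losses:

```python
def line_pairs_resolved(samples):
    """Sampling theorem: N samples resolve at most N/2 cycles (line pairs)."""
    return samples // 2

def lines_resolved(samples):
    """Each line pair is two lines (one dark, one light), so N samples -> N lines."""
    return 2 * line_pairs_resolved(samples)

# 1080 vertical samples -> 540 line pairs -> 1080 lines in the ideal case;
# real sensor and lens MTF bring this down toward the ~900 lines seen in practice.
ideal = lines_resolved(1080)
```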
  18. Many camcorders have adequate color out of the camera, some Panasonic models and the higher end Sonys are in this category. They are generally ok in the midrange but when you approach overexposure they do produce false color and unnatural digital overexposure, especially the Panasonics. The Sonys have better gamma curves. Overexposure aside, the Canon SLRs are pretty good also, there is evidently lots of design experience in these designs, which makes sense considering the experience of the manufacturers and the years they have in the digital market. The problem with RAW cameras is that RAW power is usually a substitute for spending more time in imaging design. The RAW signal itself is just photon counting on the sensor's 3 types of pixel, all cameras are equally good if we ignore noise level and the infrared problems some manufacturers allow in order to get higher sensitivity. The sensor is basically irrelevant to the color quality of the output or overexposure characteristics. The only thing it affects is noise levels. The TONE itself is in the matrix and gamma curves, it's where manufacturers need to put more work. The sensors are technically wonderful these days. The following images are all from the same Kodak sensor. The exposure level is slightly different but it does demonstrate what gamma is capable of: The 1st and 3rd images are using a very aggressive stack of "vivid negative"+"film print" model. The 2nd and 4th images are using a spec Rec709 gamma. It would be easy to blame the sensor for the differences, claiming one cannot achieve saturation and has bad overexposure, but it's the same sensor. Blame the designer instead :)
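To make the gamma comparison concrete, here is a minimal sketch contrasting the spec Rec.709 transfer curve with a film-style highlight shoulder. The Rec.709 OETF is the standard published formula; `film_shoulder` is a hypothetical stand-in for illustration only, not the actual Drama gamma models:

```python
import math

def rec709_oetf(linear):
    """Spec Rec.709 opto-electronic transfer function (scene linear 0..1 in)."""
    if linear < 0.018:
        return 4.5 * linear
    return 1.099 * linear ** 0.45 - 0.099

def film_shoulder(linear, toe=0.6):
    """Hypothetical film-style curve: highlights roll off smoothly toward
    white instead of clipping hard. NOT the actual Drama film emulation."""
    return 1.0 - math.exp(-linear / toe)

# Near clipping, the spec curve runs straight into 1.0 while the shoulder
# compresses the same values gently, which is what reads as filmic highlights.
```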
  19. You mean this? Instead of this? It's the camera but also the processing and most importantly aggressive color correction applied on poor input. RAW cameras take much of the control from the cinematographer and give it to the color corrector. The DP that uses a RAW camera cannot really rely on a look provided by film stock or even a standard set of matrix/gamma provided by the camera manufacturer. While good old timing provided limited control, digital RAW provides way too much power. Add the camera source problems that need to be corrected and you have a real problem. I'm surprised we don't get many fist fights over color correction with the DP's control of the creative intent lost to such a degree. I guess a DP could learn the digital image development and color correction tools and handle the color correction directly, but what are the chances of that happening? The tools typically work in the wrong domain and distort the color space further because they attempt to fix images using the wrong approach (3-way to fix color space issues and so on).
  20. I have designed a film look process that can do the job. It processes linear digital images in 16bit format without matrix or gamma curves applied. One can get from Red footage to this requirement. The process operates ideally on uncompressed linear raw. Compressed raw is OK for Red although it has already lost some texture. Any processing applied by the camera software is damaging if you want to emulate film. The process skips all Red processing, it operates on the photon count of what the R,G,B bayer pixels actually captured (but compression loss is still there of course). The entire look is custom processing. Optical filtering is critical for a good film look. The Red footage is polluted by infrared so many lighting situations will suffer without well designed optical filtering. Even if you don't see obvious problems, the infrared has crept into the skin tones, making them cold and their tonality very compressed and flat. Once the infrared light is in, it cannot be removed and the film emulation will be compromised in terms of skin tone and saturation. If you use complex optical filtering for working around Red issues, the process takes these into account as long as you shoot the tests used to build the film stock emulation presets using the same filtering. The processing is customised to the optical chain, even lens color issues will be corrected. I can emulate a number of film stocks and looks. The ideal way to do it is to shoot a ColorChecker chart with the film stock you want to emulate and provide the image as reference. Based on the reference and a series of portrait or other subject shots, I can provide sample processed images similar to those of a color timing test and a decision has to be made only once for each type of light. Decisions can be discussed on contrast and saturation.
Tonality can also be changed for getting the required skin tone for the project using a custom system that only manipulates color without changing white or grey balance. If you like the result we can make a deal and I can send all the footage through the presets we decided on. White balance is critical. It must be adjusted in the linear domain before the processing is applied, shot by shot. Exposure level deviations can be adjusted before the process to achieve a uniform tonal quality in all shots shot in the same light. This is done according to the zone system guidelines and in the actual film model, so highlights are compressed to a smooth whiteout and not lost. Large exposure deviations will sacrifice tonality and add digital noise but the tones will still match. Mixed light is OK, as long as it is consistent. The process takes care of it. The process does not produce false color or digital overexposure artifacts, even if the output contrast is decided to be very high, similar to that of a film print. A low ISO setting is also critical. Even if the image looks clean, the tonality loss is severe. 1/8th the tonality for every 2x ISO. If you are after a rich filmic image try not to use the digital gain Red provides in post. It doesn't take much ISO boost to get the damage into the domain of an 8bit YCbCr distribution. I do not emulate grain. The purpose of the process is filmic color, rich skin tones, filmic highlights and shadows. The look of a film print for digital video or digital projection release. If going for a film print with video gamma images is not acceptable, LOG can be used without losses, but depending on how you handle the LOG in color correction it could be inferior. Video gamma provides total predictability for digital distribution and digital projection. An advantage of this process is that you can practically skip corrective color correction, because the color will already be where you need it and exposure will be almost ideal.
You can get ideal scene matching with this method. You will only need color correction to deviate entire scenes creatively, not to correct or match between shots. The process does not sacrifice any shadows or highlights, the information is still there if you want to pull it back in post. The process was designed for my digital cinema camera project, but since it only requires linear RAW input it can be used by any camera that stores bayer images and all will produce identical results. If you are not interested in going for custom film look processing, still note these technical issues. You will save time in the color correction stage. Good color takes some planning, digital is not as easy as film.
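The white balance point above, adjusting in the linear domain before any curve is applied, can be sketched in a few lines. The grey-patch values and gains here are hypothetical, purely for illustration:

```python
def white_balance_linear(rgb, gains):
    """Apply per-channel gains to LINEAR sensor data. Applying the same gains
    after a gamma curve would shift hue and tonality, because the curve is
    nonlinear and the channels would no longer scale together."""
    return tuple(channel * gain for channel, gain in zip(rgb, gains))

# A grey patch recorded under warm light (hypothetical linear values),
# with gains chosen so all channels equalise to the green level.
grey_raw = (0.30, 0.20, 0.10)
gains = (0.20 / 0.30, 1.0, 0.20 / 0.10)
balanced = white_balance_linear(grey_raw, gains)   # all channels ~0.20
```

This is also why the order of operations matters in the pipeline described: balance and exposure trims happen on linear data, and the film model's curves come last.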
  21. Consider the actual DOF reality, sensor size and lens availability today. On 1.85:1 projects the difference is only 2x vs 2/3", Red One f2.8 70mm is identical to Red 2/3" Scarlet f1.4 35mm in angle of view and depth of field. Do you have fast lenses for the 2/3" camera? Then it's very different from what you might expect based on slow lenses and zooms. 2/3" lenses are easier to make fast and in a wide variety of focal lengths. If the camera uses a single sensor you can use very wide apertures, f1.2, f0.95 etc. If it has a beam splitting prism you are limited to f1.4 and the fast prime lenses will be extremely expensive, being 3CCD designs. What are the actual limits of the s35 camera lenses from an aperture point of view? Fast lenses might not be available. Many people are using f2.8 zooms with Red One and you can match that with f1.4 on 2/3" as I mentioned. How shallow do you need to go? Most standard setups in cinematography are not f1.4, they are f2.8 to f5.6, which you can match with f1.4 to f2.8 on 2/3". You also get a two stop advantage on the 2/3" when you do match the DOF. ISO1600 on s35, ISO400 on 2/3" with the same lighting. That could be very important from a production point of view. Larger sensors sometimes have larger pixels, so ISO1600 might be as usable as ISO400 on 2/3", but it's something to consider. If you have used a Canon 5dII you know that sometimes you need ISO3200 where a 2/3" camcorder would work at a zero gain ISO200, just because shallow DOF becomes a limitation and you have to step the lens down. The bottom line is that if s35 f2.8 DOF is shallow enough for you, you can get identical DOF results and a cheaper production using the 2/3" camera.
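The equivalences quoted above all follow from the crop factor: divide both focal length and f-number by the crop factor to match the larger format's angle of view and depth of field on the smaller sensor. A quick sketch, ignoring the slight aspect-ratio differences between formats:

```python
import math

def equivalent_lens(focal_mm, f_number, crop_factor):
    """Lens on the smaller sensor matching the larger format's angle of view
    and depth of field: scale both focal length and aperture down."""
    return focal_mm / crop_factor, f_number / crop_factor

def stops_gained(crop_factor):
    """Opening up to match DOF gains this much light, in stops."""
    return 2 * math.log2(crop_factor)

# s35 70mm f/2.8 maps to 2/3" (2x crop) 35mm f/1.4, a two-stop advantage:
# ISO1600 on s35 becomes ISO400 on 2/3" under the same lighting.
focal, fnum = equivalent_lens(70, 2.8, 2.0)
```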
  22. Dutch angle 2D to 3D John? Let's hope the industry does not find out! Theory aside, in my vision, covering one eye makes no real difference. I see no handicap other than the angle of view loss which will be important in things like driving and sports. I don't see any other loss really. I don't even need a second of adjustment to the new vision paradigm. My perception does not appear to rely much on binocular vision. What my left eye sees appears to be ignored in most cases. The right eye provides the actual viewpoint I see when I use both eyes, the left eye only takes over if the right eye is covered. If the left eye provides something, it does not appear to be important for our lifestyle. If we were hunting and trying to grab our prey using our jaws, when milliseconds were important and we only got one chance, the difference could be important to survival, but such distances are very rare in modern lifestyle. We don't follow spoons coming to our mouth, right? And by savoir-vivre and practicality, we don't dive into food, we bring the food to our mouth :) Binocular vision is nothing like the 3D movie exaggerated gimmick. The only usefulness I see in binocular vision is a blurry extra coverage that allows me to notice something hiding behind an object even though the right eye does not have it in its viewpoint. Try hiding a LED light behind the edge of an object so your right eye does not see it but the left eye does. The LED will be visible but it's nothing like a 3D movie effect, it's like two 2D images added one on top of the other. Exploiting binocular vision so that it will be noticeably 3D for filmmaking purposes requires massive exaggeration, an unrealistic difference between left and right eye images. This is why it gets tiring after a while and why it will never be a standard way to shoot. 2D is far more realistic.
  23. When I was in university I saw some old stereoscopic viewers in which you could use two photos taken from a plane at an interval of many meters. When seeing the normal 2D photo with both your eyes, you couldn't really see much information on the ground morphology. When you used the stereoscopic viewer, perspective opened up and you could spot even tiny details, you had a great 3D view of the mountain or whatever was the subject. Very useful for the application but not realistic. If you were on the plane next to the camera, you would see a flat 2D image of the ground, just like when seeing a 2D photo with both eyes. The stereoscopic viewer placed your eyes 100 meters apart and that's not how we see the world. If 3D cinema actually went for realism, there wouldn't be much 3D effect left in the shots.
  24. I don't really see much in 3D. Everything beyond 100cm is practically 2D in my vision, very similar to what a single eye sees. I would only care for 3D if I was a juggler working very close to my eyes. The 3D effect in cinemas is way beyond realistic vision. Very exaggerated. Since those who have lost vision in one eye usually complain about the dramatic angle of view loss and not 3D effect or positional cue loss or anything similar, I would say the only reason for us having two eyes is that with two eyes we could avoid danger more effectively by having a wider angle of view, and because two eyes increase the odds of vision surviving trauma.