
Chris Bowman

Basic Member
Profile Information

  • Occupation
    Camera Operator

  1. Quantum Dot film for your CMOS. They are pitching this for cellphone cameras, but it seems to me this could be a great thing for serious photography as well. I don't think I know anyone who would complain about four times the light sensitivity and double the dynamic range . . . From what I read, though, it is limited to use on CMOS, so rolling shutter is still an issue.
  2. To me, the most offensive use of shaky cam was 007: Quantum of Solace. The film had the budget, the staging, and the skill available to do anything. Instead, the director chose to use shaky hand-held camera work to exaggerate the fight sequences. I felt ill and had to close my eyes (something that's never happened to me on ANY roller coaster). The "artistic" decision to use shaky cam grossly cheapened the film. I avoid watching films in theaters when I know or suspect they will make heavy use of shaky cam. Films which use cheap gimmicks deserve to be watched on the cheap. I wait until they are available from Redbox, and watch them on a small screen. That way I don't feel sick, and I don't reward poor production values. I'm not saying all hand-held camera work is bad (I loved the camera work on Saving Private Ryan). I do think, though, that the camera work should be appropriate to the film. Bond, for example, ought to be classy and elegant, a perfect big-screen experience (yes, even in the action and violence). Shaky cam, alas, simply is not any of those things.
  3. Paul, I did some lazy man's research on Wikipedia, and noticed two possible restrictions on your server-farm USB 3.0 over CAT5 bliss. First, SuperSpeed requires a grounded shield, which CAT5/6 doesn't have, so runs of more than a very short length will probably suffer from serious interference and signal degradation. Second, although it's not an official limitation, it has been estimated that the real-world max length of a run is about 3 meters (say about 10 ft.).
  4. If I've got all of my facts straight (with bits gathered hither, thither, and yonder), it was a limitation of 32-bit XP that no individual instance of a program could address more than 3GB (the default was 2GB, but it could be raised by altering BOOT.INI). Vista added the ability to address blocks of 4GB or more per instance, but addressing more than 4GB caused memory leak issues, so most programmers capped it at 4GB. Windows 7 has supposedly fixed the memory leak issue, but no one has coded for it yet as far as I'm aware. Also remember that you can't have your NLE using all the RAM; you've gotta leave 1GB or so for Windows itself, plus extra for all the other background stuff, like anti-virus, Photoshop, and whatever else you may be using on a given day. So, lots of RAM is still a good thing. ;)
  5. For the record: I'm not saying don't get 16GB+ of RAM, only that you shouldn't sacrifice a decent CPU to do it.
  6. I hate to disagree with Adrian, but I think the CPU is far more important than RAM. This is mostly because, to the best of my knowledge, all of the Windows NLEs are only capable of addressing a maximum of 4GB of RAM (in 64-bit; only 3GB in 32-bit). AE can run multiple background instances, with each instance addressing up to 4GB, but this is useless unless you also have a dedicated CPU core for each instance. Also, this limitation includes the memory on the GPU. Supposedly, this will be increased in CS5 and Vegas 10. (But don't they always say it'll be ready in the next version?) A decent quad-core CPU is a very cost-effective way to improve performance. The Intel Core i7 has a pretty substantial edge currently. Everything else that has been said, I agree with, but DON'T buy a dual-core CPU and 16GB of RAM and expect great performance. It's better to get the quad core and 8GB of RAM than a dual core and 16GB. Also, FYI, Windows 7 can address up to 128GB of RAM, according to Microsoft. Finding 16GB sticks to load up an 8-slot server board might be a challenge, though.
  7. Mr. Miller is correct that 24p video looks more like film because it has the same exposure rate as film, but he didn't really address the "why" of the question. Film exposes the entire picture to the recording medium simultaneously, a natural effect of exposing a chemical coating to light. Exposure is almost universally done using approximately a 180-degree shutter, giving a 1/48-second exposure. This results in a specific amount of motion blur, which nearly every person who watches films has by now been psychologically conditioned to equate with "cinematic" movie experiences. A camera running at 30p also records the entire picture at once, just like film, but has an exposure of 1/30th of a second, or possibly 1/60th. A 1/30th-second exposure gives more motion blur than film, creating a slightly unpleasant blurriness in any motion. A 1/60th-second exposure is significantly shorter than film's 1/48th-second exposure and results in images whose motion areas look too sharp for a "cinematic" experience. A camera running at 60i is effectively shooting half the resolution but at twice the 30p frame rate. It is exposing every other line of the image, rather than the entire image at once. This requires an exposure of 1/60th of a second or less, resulting in much less motion blur than film. The cinematic feel and the film look are inextricably linked and embedded in the public psychology at a subconscious level. Most people couldn't tell you why it looks cinematic, but if it isn't exposed like film, they will notice it isn't cinematic.
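     To put rough numbers on the blur difference, here's a quick Python sketch; the object speed is a made-up figure, purely for illustration:

         # Per-frame exposure time and the motion blur it produces (illustrative).
         def exposure_s(fps, shutter_angle_deg):
             """A spinning shutter exposes for (angle/360) of each frame period."""
             return (shutter_angle_deg / 360.0) / fps

         speed_px_per_s = 960.0  # hypothetical object crossing half an HD frame per second

         for label, t in [("film, 24fps @ 180 degrees", exposure_s(24, 180)),  # 1/48 s
                          ("30p at full frame time", 1 / 30.0),
                          ("60i field / fast 30p shutter", 1 / 60.0)]:
             print(f"{label}: 1/{1 / t:.0f} s -> {speed_px_per_s * t:.0f} px of blur")

     Film's 20 px of blur sits right between the 32 px of a 1/30 exposure and the 16 px of a 1/60 exposure, and that gap is what viewers pick up on.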
  8. Probably most of the "home movie feel" can be explained by three things: frame rate, depth of field, and lighting. Most every professional film made in the last century has been shot on film being pulled through the camera at 24 frames per second. Cheap video cameras will almost always shoot in 60 interlaced fields per second or 30 progressive frames (50i/25p in PAL land). 60i/30p gives a very distinct home movie feel. I once had a woman who knew no more about movies than that she likes to go see them ask me why I shot a lecture on film (and isn't that expensive?). I had shot it in 24-frame progressive mode on a digital camera. It's that big a difference. The second issue in "the home movie feel" is depth of field. Most consumer camcorders have very small sensor chips in them to detect light. While these chips are often very high resolution, it is the physical size of the recording medium (CCD, CMOS, film, etc.) that largely determines how thick a slice of the distance in front of the camera can be in focus at any given setting (the size of the aperture is also an important factor). 35mm film has a fairly large area compared to a 1/6-inch CCD, which results in a much narrower depth of field (a thinner slice of space) that can be held in focus. Most everybody considers a shallower depth of field to be more "cinematic." The third, and arguably the most important, issue in the "home movie feel" (and the avoiding thereof) is properly lighting the scene. Most home movies are shot using whatever lighting happens to be present, or bring in lights just to make it bright enough to see. Cinematographers light things intentionally. They aren't concerned with whether there is enough light to see so much as with how the light manipulates what we do and don't see on screen. Lighting is as much a part of the art of filmmaking as acting, directing, scenery, and soundtrack. Just using what's there almost never cuts it.
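     To make the sensor-size point concrete, here's a rough Python sketch using the standard thin-lens depth-of-field approximation; the focal lengths, subject distance, and the width/1500 circle-of-confusion convention are my own illustrative assumptions, not measured specs:

         # Approximate total depth of field: 2*N*c*u^2 / f^2 (valid when u >> f).
         def dof_mm(f_mm, N, u_mm, coc_mm):
             return 2 * N * coc_mm * u_mm ** 2 / f_mm ** 2

         u = 3000.0  # subject at 3 m
         N = 2.8     # f-stop
         # Focal lengths chosen so both formats frame the same field of view;
         # circle of confusion taken as sensor width / 1500 (a rough convention).
         for name, width_mm, f_mm in [("35mm film", 24.9, 35.0), ("1/6-inch CCD", 2.4, 3.4)]:
             print(name, round(dof_mm(f_mm, N, u, width_mm / 1500)), "mm in focus")

     The tiny chip comes out with roughly ten times the depth of field of the film frame, which is exactly the "everything in focus" camcorder look.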
  9. If I recall correctly, that option is only available when shooting in SD 24f DV mode, NOT HDV mode. HDV 24f mode on the XL-H1 records at 24fps and uses 20% less compression than the 30f/60i modes. I believe that all of the NLEs now support this Canon codec, so you shouldn't have much of a problem cutting. The reason you are given this option in DV mode is that the DV specification only allows 60i encoding; the camera must conform its 24fps from the CCDs to the 60i of the tape. It records progressive scan by recording 2 fields (every other line of the picture) from the same frame, rather than recording each field from a separate image as in interlace. 2:3:3:2 (also called Advanced pulldown 24P) is the superior choice if you ever want the footage actually displayed at 24fps (filmout/Blu-ray/web distro/etc.). This is because, in conforming any 24fps footage to 60i, there must be repeats of half frames. 2:3:3:2 puts the 2 half repeats right next to each other (the 3:3 part), where any half-decent NLE will throw them out to achieve true 24fps. Frames that contain fields from 2 different film frames are often referred to as "dirty frames." These are undesirable in true 24fps material because they mean that there are actually interlaced frames mixed in. They are acceptable in interlaced outputs. 2:3:3:2 lets you throw away all of the dirty frames in post for true 24P, but it looks strange before post processing. 2:3 pulldown is appropriate when the final product is going to be exclusively interlaced. 2:3 is the method all of the big studios use when converting 24fps material to 60Hz interlaced. The actual cadence repeats like this: 2:3:2:3:2:3.
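     Here's a tiny Python sketch of the two cadences, just to make it visible where the dirty frames land; the letters stand for the four film frames in each repeating cycle:

         # Map four 24p film frames onto ten 60i fields (five video frames).
         def pulldown(frames, cadence):
             fields = []
             for frame, repeats in zip(frames, cadence):
                 fields += [frame] * repeats
             # Pair consecutive fields into interlaced video frames.
             return [fields[i] + fields[i + 1] for i in range(0, len(fields), 2)]

         print(pulldown("ABCD", (2, 3, 2, 3)))  # 2:3     -> ['AA', 'BB', 'BC', 'CD', 'DD']
         print(pulldown("ABCD", (2, 3, 3, 2)))  # 2:3:3:2 -> ['AA', 'BB', 'BC', 'CC', 'DD']

     Standard 2:3 leaves two mixed frames (BC and CD) that can't be removed without splitting fields, while 2:3:3:2 leaves a single mixed frame (the middle BC) that the NLE can drop whole to recover a clean A, B, C, D.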
  10. Getting a signal out of an analog camera and into a computer requires that you have a capture card that supports analog formats. These cards have analog-to-digital converters which allow the video from the camera to be understood by the computer. Cards with this capability are manufactured by Pinnacle, Blackmagic, and Matrox (and probably more than a few others I can't think of right now). Many studio cameras only output through a multi-function cable which is designed to plug into a Camera Control Unit (CCU). The controls on the camera can be operated remotely by this unit, and it is also the only way to get a standard output to send to your recording deck/capturing device. As to the advantages of CCDs over tubes . . . they are numerous. CCDs use much less power, they weigh grams rather than tons (maybe I'm exaggerating a little), they don't go soft over time . . . about their only disadvantage is their fixed grid pattern (which isn't much of a disadvantage, since most displays also have this drawback).
  11. Hi Marcus. Building your own hard drive recorder is an interesting challenge, one which I wanted to take on myself until I realized just how challenging it would really be. It's not that the hardware part of it is very complex (slap a FireWire hard drive in an enclosure with a battery and presto!); the real challenge is in the software interface. Digital cameras and hard drives are designed with complex control interfaces that have to be able to communicate with each other in order to work together. Without these interface protocols, and an accompanying controller chip to act as an intermediary between the camera's controller and the hard drive's, there is no way to attach the camera directly to the hard drive in any meaningful way. The camera will output information to the drive, but the hard drive will just sit there because it never got the proper command to perform a write. All that is assuming a FireWire connection from camera to hard drive. Once we get into analog cameras, Serial Digital Interface (SDI), and HDMI (High-Definition Multimedia Interface), things start to get really messy. Analog connections need special analog-to-digital converters (A/D, D/A, or sometimes DAC are common abbreviations) to convert the signal into something recordable on the hard drive. SDI and HDMI require special controllers with enormous bandwidth in order to process the vast amounts of uncompressed data coming off the camera. There are, of course, computers which can do this, and the controller cards are available for both Mac and PC from vendors such as Blackmagic, but fitting all of this into a unit portable enough for fieldwork is a real trick. That's why, when you see equipment like hard disk recorders for sale, they are so expensive. It takes a lot of effort, capital, and engineering to make something like that possible. Even more to make it work seamlessly with the camera.
  12. Marcus, wow, you sure do have a lot of questions! It's great that you want to learn, but very few members of this forum have the time to give comprehensive answers to all of these questions. That is why, when you ask something that is fairly common knowledge, you will often get a reply telling you to research it yourself. While nearly everyone here is willing to help with a sincere question that you can't find an answer for, none of us really has time to create a comprehensive curriculum on this website for the totally uninitiated. In doing research, Wikipedia is your best friend. I know a lot of universities don't accept it as a credible academic reference, but most of the information on there (especially on topics of cinematography) is written by experts in the field who have given a lot of their valuable time to provide comprehensive, searchable, and cross-referenced explanations of these complex subjects. Also, Wiki has pictures and animations that can be invaluable for actually understanding how these things work. I would highly recommend that if you come across a term or a concept that you don't understand, Wiki it first. Even if you don't understand the concept entirely after reading the wiki, it will probably give you at least a partial understanding that will allow you to put a more specific and concise question to the professionals here. You will also learn about other aspects of filmmaking that probably hadn't even occurred to you as you read the articles. As for industry acronyms, there is a fairly substantial glossary here, so take a look. I'll continue to read your posts and answer questions when I have the time and knowledge to do so, but doing your own research really is an essential part of learning. It will vastly improve your knowledge, and it will leave time for the many experts and enthusiasts here to answer the more complex questions that don't have answers that are easy to find elsewhere with a quick Google or wiki search. I have never had a single class of film school in my life, so almost everything I know about film and video I have learned through my own research, or through blundering experience. I know it can sometimes be difficult to find the answers you are looking for, but you will also find many times that the information you need can be had with the first or second hit on Google. It would be nice if a complete primer on all things video were available on cinematography.com, but the reality is that it is a forum of busy people who see a 30-part question and say "There's no way I have time to answer that!" If anyone had time to write out all of this information, he would probably send it to a publisher and charge $45 for a copy. (See the recommended reading thread for more info. :P )
  13. Mostly right, but I would like to clarify a few points. Your time-lapse explanation seems pretty sound. Slow motion is NOT available on most cameras in the consumer and semi-pro range. It is a feature that is largely reserved for professional cameras, like the VariCam. Some consumer and semi-pro cameras can achieve limited slow-mo performance, but most are extremely limited. Super slow motion can be shot at frame rates up to 1,000,000 frames per second, but this is entirely reserved for the realm of specialty cameras. If you need to shoot super slow motion, you would almost certainly rent the camera from a specialty equipment facility, since purchasing one outright is not cost effective unless your job is analyzing bullet impacts or car crash tests. As a point of interest, the NFL shoots its games at 120 frames per second in order to achieve the slow motion effect so commonly seen in its productions. It does this with both its video cameras and its 16mm film cameras (that's a lot of film!). Once the footage is captured, it is loaded into an editing program. Most editing programs have an "interpret footage" feature which allows you to tell the system the frame rate that you want the footage to play back at. Once this is set to the normal playback speed for the project (say 60 frames per second for NTSC), the effect is achieved.
  14. It should also be noted that not every camera has the capability to shoot time lapse or slow motion. Time lapse can be cheated by starting and stopping the camera every few minutes, then extracting just one frame from each of your many recording sessions for your final product. (This is, of course, very tedious.) Slow-mo is another story. If you are planning to show your product at 30 frames per second (progressive scan) and your camera tops out at 60 frames per second (progressive scan), then the best you can do is a 1/2-speed slowdown, as in the sketch below. If you can't use progressive scan, your slow-mo will have very poor quality, because slowing down interlaced footage results in a perceived 50% decrease in resolution.
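     The arithmetic behind that 1/2-speed ceiling is simple; here's a quick Python sketch (the numbers are just examples):

         # Factor by which on-screen motion is slowed when footage shot at
         # capture_fps is conformed to play back at playback_fps.
         def slowmo_factor(capture_fps, playback_fps):
             return capture_fps / playback_fps

         print(slowmo_factor(60, 30))   # 60p camera, 30p delivery -> 2.0 (the 1/2-speed ceiling)
         print(slowmo_factor(120, 60))  # NFL-style 120fps at 60fps -> also 2.0 (half speed)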
  15. What you are talking about here is really 2 different things. A flower blooming or a sun setting is usually time lapse (greatly sped up), while a bee's wings would have to be extreme slow motion. These are actually opposite effects. Time lapse is accomplished by recording at a very slow rate, perhaps one frame per second, or maybe only one frame per minute. The CCD will still be set for a very short exposure (say 1/60 for video or 1/48 for film) in order to prevent motion blur. The recording is then played back at normal speed, showing an event that happened over the course of hours in a few seconds. Slow motion works exactly the other way around. Images are recorded at very high frame rates (up to 1000 frames per second) and then played back at normal speed, so that something that took only a second or two can be shown very slowly over a minute or more. Higher-speed exposure leaves less time for the image sensor or film to gather light, which means that very intense lighting is necessary in order to produce a bright enough image at very high recording speeds. What you seem to be talking about in your post is shutter speed. Shutter speed, whether on a CCD or film, controls the amount of light gathered for each image, and the amount of motion blur. Longer exposures allow more light, but also increase blur, because more motion can occur while the exposure is happening. Shutter speed is actually independent of the frame rate, especially in video. Film usually has a frame rate of 24 frames per second, and a 180-degree shutter (motion picture cameras have circular spinning shutters). This gives the camera time to advance the film to the next frame before exposing it and prevents unwatchable motion blurring. It also effectively gives normal-speed filming a 1/48-second shutter speed. Video has no mechanical linkages to worry about, allowing the shutter to run at any speed you want. My Canon XH-A1 will let me shoot at shutter speeds of 1/4 second (very bright, blurry, surreal). It can also expose each frame for as little as 1/2000 second (basically no motion blur, but requires huge amounts of light). So basically, in order to shoot time lapse, set the camera to a very slow frame rate and a normal shutter speed. In order to shoot slow motion, set the camera to a high frame rate and a fast shutter speed.
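     And to see how dramatic the time-lapse speedup gets, one more quick Python sketch (the intervals are just examples):

         # Speedup when frames captured every capture_interval_s are played
         # back at a normal rate (illustrative numbers).
         def timelapse_speedup(capture_interval_s, playback_fps=24.0):
             return capture_interval_s * playback_fps

         print(timelapse_speedup(1.0))   # one frame per second -> 24x faster
         print(timelapse_speedup(60.0))  # one frame per minute -> 1440x: an hour plays in 2.5 s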