Everything posted by Chris Bowman

  1. Quantum Dot film for your CMOS sensor. They are pitching this for cellphone cameras, but it seems to me this could be a great thing for serious photography as well. I don't think I know anyone who would complain about four times the light sensitivity and double the dynamic range. From what I read, though, it is limited to use on CMOS, so rolling shutter is still an issue.
  2. To me, the most offensive use of shaky cam was Quantum of Solace. The film had the budget, the staging, and the skill available to do anything. Instead, the director chose shaky handheld camera work to exaggerate the fight sequences. I felt ill and had to close my eyes (something that has never happened to me on ANY roller coaster). The "artistic" decision to use shaky cam grossly cheapened the film. I avoid watching films in theaters which I know or suspect will make heavy use of shaky cam. Films which use cheap gimmicks deserve to be watched on the cheap: I wait until they are available from Redbox, and watch them on a small screen. That way I don't feel sick, and I don't reward poor production values. I'm not saying all handheld camera work is bad (I loved the camera work on Saving Private Ryan). I do think, though, that the camera work should be appropriate to the film. Bond, for example, ought to be classy and elegant, a perfect big-screen experience (yes, even in the action and violence). Shaky cam, alas, simply is not any of those things.
  3. Paul, I did some lazy man's research on Wikipedia, and noticed two possible restrictions on your server farm USB 3.0-over-CAT5 bliss. First, SuperSpeed requires a grounded shield, which CAT5/6 doesn't have, so runs of more than a very short length will probably suffer from serious interference and signal degradation. Second, although not an official limitation, the real-world maximum length of a run has been estimated at about 3 meters (roughly 10 ft.).
  4. If I've got all of my facts straight (with bits gathered hither, thither, and yonder), it was a limitation of 32-bit XP that no individual instance of a program could address more than 3GB (the default was 2GB, but it could be raised by altering BOOT.INI). Vista added the ability to address blocks of 4GB or more per instance, but addressing more than 4GB caused memory leak issues, so most programmers capped it at 4GB. Windows 7 has supposedly fixed the memory leak issue, but no one has coded for it yet as far as I'm aware. Also remember that you can't have your NLE using all the RAM; gotta leave 1GB or so for Windows itself, and extra for all the other background stuff, like anti-virus, Photoshop, and whatever else you may be using on a given day. So, lots of RAM is still a good thing. ;)
  5. For the record, I'm not saying don't get 16GB+ of RAM, only don't sacrifice a decent CPU to do it.
  6. I hate to disagree with Adrian, but I think the CPU is far more important than RAM. This is mostly because, to the best of my knowledge, all of the Windows NLEs are only capable of addressing a maximum of 4GB of RAM (in 64-bit; only 3GB in 32-bit). AE can run multiple background instances, with each instance addressing up to 4GB, but this is useless unless you also have a dedicated CPU core for each instance. Also, this limitation includes the memory on the GPU. Supposedly, this will be increased in CS5 and Vegas 10 (but don't they always say it'll be ready in the next version?). A decent quad-core CPU is a very cost-effective way to improve performance, and the Intel Core i7 currently has a pretty substantial edge. Everything else that has been said, I agree with, but DON'T buy a dual-core CPU and 16GB of RAM and expect great performance. It's better to get the quad core and 8GB of RAM than a dual core and 16GB. Also, FYI, Windows 7 can address up to 128GB of RAM, according to Microsoft. Finding 16GB sticks to load up an 8-slot server board might be a challenge, though.
  7. Mr. Miller is correct that 24p video looks more like film because it shares film's frame rate and exposure time, but he didn't really address the "why" of the question. Film exposes the entire picture to the recording medium simultaneously, a natural effect of exposing a chemical coating to light. Exposure is almost universally done using approximately a 180-degree shutter, giving a 1/48-second exposure. This results in a specific amount of motion blur, which nearly every person who watches films has by now been psychologically conditioned to equate with a "cinematic" movie experience. A camera running at 30p also records the entire picture at once, just like film, but has an exposure of 1/30th of a second, or possibly 1/60th. A 1/30th-second exposure gives more motion blur than film, creating a slightly unpleasant blurriness in any motion. A 1/60th-second exposure is significantly shorter than film's 1/48th and results in images that seem too sharp in motion areas for a "cinematic" experience. A camera running at 60i is effectively shooting half the resolution, but at twice the 30p frame rate. It is exposing every other line of the image, rather than the entire image at once. This requires an exposure of 1/60th of a second or less, resulting in much less motion blur than film. The cinematic feel and the film look are inextricably linked and embedded in the public psychology at a subconscious level. Most people couldn't tell you why something looks cinematic, but if it isn't exposed like film, they will notice it isn't.
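The shutter-angle arithmetic above can be sketched in a few lines (a minimal illustration; the function name is mine, not from any camera SDK):

```python
def exposure_time(frame_rate, shutter_angle=180.0):
    """Exposure time in seconds for a rotary shutter.

    A 180-degree shutter exposes the frame for half of each frame
    interval, so film at 24 fps gives 1/48 of a second.
    """
    return (shutter_angle / 360.0) / frame_rate

print(exposure_time(24))        # film, 180-degree shutter -> 1/48 s
print(exposure_time(30, 360))   # 30p with a full-frame-time exposure -> 1/30 s
```

This is why 24p with a 1/48-second shutter matches film's motion blur, while 30p or 60i exposures land on either side of it.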
  8. Probably most of the "home movie feel" can be explained by three things: frame rate, depth of field, and lighting. Nearly every professional film made in the last century has been shot on film pulled through the camera at 24 frames per second. Cheap video cameras will almost always shoot 60 interlaced fields per second or 30 progressive frames (50i/25p in PAL land), and 60i/30p give a very distinct home movie feel. I once had a woman who knew no more about movies than that she likes to go see them ask me why I shot a lecture on film (and isn't that expensive?). I had shot it in 24-frame progressive mode on a digital camera. It's that big a difference. The second issue in the "home movie feel" is depth of field. Most consumer camcorders have very small sensor chips in them to detect light. While these chips are often very high resolution, it is the physical size of the recording medium (CCD, CMOS, film, etc.) that largely determines how thick a slice of the distance in front of the camera can be in focus at any given setting (the size of the aperture is also an important factor). 35mm film has a fairly large area compared to a 1/6-inch CCD, which results in a much narrower depth of field (a thinner slice of space) that can be held in focus. Most everybody considers a shallower depth of field to be more "cinematic." The third, and arguably the most important, issue in the "home movie feel" (and the avoiding thereof) is properly lighting the scene. Most home movies are shot using whatever lighting happens to be present, or with lights brought in just to make it bright enough to see. Cinematographers light things intentionally. They aren't concerned that there is enough light to see so much as with how the light manipulates what we do and don't see on screen. Lighting is as much a part of the art of filmmaking as acting, directing, scenery, and soundtrack. Just using what's there already almost never cuts it.
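To put rough numbers on the sensor-size effect, here is a sketch using the hyperfocal distance and the common "sensor diagonal divided by 1500" rule of thumb for the circle of confusion. The sensor sizes, focal length, and f-stop are illustrative assumptions, not measurements of any particular camera:

```python
def hyperfocal_mm(focal_mm, f_number, coc_mm):
    # Distance beyond which everything is acceptably sharp when
    # focused at infinity; a CLOSER hyperfocal distance means a
    # DEEPER depth of field.
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

# Circle of confusion via the "diagonal / 1500" rule of thumb.
full_frame_coc = 43.3 / 1500   # ~0.029 mm for a 35mm-sized frame
small_chip_coc = 3.0 / 1500    # ~0.002 mm for a ~1/6" camcorder chip

# For the same field of view, the small chip needs a much shorter lens.
crop = 43.3 / 3.0

print(hyperfocal_mm(50, 2.8, full_frame_coc) / 1000)        # roughly 31 m
print(hyperfocal_mm(50 / crop, 2.8, small_chip_coc) / 1000) # roughly 2 m
```

With the tiny chip, everything from about a meter to infinity is sharp, which is exactly the "everything in focus" home-video look described above.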
  9. If I recall correctly, that option is only available when shooting in SD 24F DV mode, NOT HDV mode. HDV 24F mode on the XL-H1 records at 24fps and uses 20% less compression than the 30F/60i modes. I believe that all of the NLEs now support this Canon codec, so you shouldn't have much of a problem cutting. The reason you are given this option in DV mode is that the DV specification only allows 60i encoding, so the camera must conform its 24fps from the CCDs to the 60i of the tape. It records progressive scan by recording 2 fields (every other line of the picture) from the same frame, rather than recording each field from a separate image as in interlace. 2:3:3:2 (also called advanced 24p pulldown) is the superior choice if you ever want the footage actually displayed at 24fps (filmout/Blu-ray/web distro/etc.). This is because, in conforming any 24fps footage to 60i, some fields must be repeated. 2:3:3:2 puts the 2 repeated fields right next to each other (the 3:3 part), where any half-decent NLE will throw them out to achieve true 24fps. Interlaced frames built from fields of 2 different source frames are often referred to as "dirty frames." These are undesirable in true 24fps material because they mean there are actually interlaced frames mixed in; they are acceptable in interlaced outputs. Removing 2:3:3:2 pulldown in post throws away all of the dirty frames for true 24p, but the footage looks strange before post-processing. 2:3 pulldown is appropriate when the final product is going to be exclusively interlaced; it is the method all of the big studios use when converting 24fps material to 60Hz interlaced, with fields interleaved in a repeating 2:3:2:3:2:3 cadence.
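The two cadences can be compared with a tiny sketch (the function is a made-up illustration, not any NLE's API):

```python
def pulldown(frames, cadence):
    """Expand 24p source frames into 60i fields.

    cadence gives the number of fields each source frame contributes:
    (2, 3, 3, 2) for advanced pulldown, (2, 3, 2, 3) for standard
    2:3 pulldown. Four film frames become ten fields either way.
    """
    fields = []
    for frame, count in zip(frames, cadence):
        fields.extend([frame] * count)
    return fields

print(pulldown("ABCD", (2, 3, 3, 2)))  # A A B B B C C C D D
print(pulldown("ABCD", (2, 3, 2, 3)))  # A A B B B C C D D D
```

Pairing those fields two at a time into interlaced frames gives AA BB BC CC DD for 2:3:3:2 (one dirty BC frame, which the NLE simply drops to recover AA BB CC DD at true 24p) versus AA BB BC CD DD for 2:3 (two dirty frames, which is fine for interlaced delivery but messy to reverse).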
  10. Getting signal out of an analog camera and into a computer requires a capture card that supports analog formats. These cards have analog-to-digital converters which allow the video from the camera to be understood by the computer. Cards with this capability are manufactured by Pinnacle, Blackmagic, and Matrox (and probably more than a few others I can't think of right now). Many studio cameras only output through a multi-function cable which is designed to plug into a Camera Control Unit (CCU). The controls on the camera can be operated remotely by this unit, and it is also the only way to get a standard output to send to your recording deck/capturing device. As to the advantages of CCDs over tubes . . . they are numerous. CCDs use much less power, they weigh grams rather than tons (maybe I'm exaggerating a little), and they don't go soft over time. About their only disadvantage is their fixed grid pattern (which isn't much of a disadvantage, since most displays also have this drawback).
  11. Hi Marcus. Building your own hard drive recorder is an interesting challenge, one which I wanted to take on myself until I realized just how challenging it would really be. It's not that the hardware part of it is very complex (slap a FireWire hard drive in an enclosure with a battery and presto!); the real challenge is in the software interface. Digital cameras and hard drives are designed with complex control interfaces that have to communicate with each other in order to work together. Without these interface protocols, and an accompanying controller chip to act as an intermediary between the camera's controller and the hard drive's, there is no way to attach the camera directly to the hard drive in any meaningful way. The camera will output information to the drive, but the hard drive will just sit there, because it never got the proper command to perform a write. All that assumes a FireWire connection from camera to hard drive. Once we get into analog cameras, Serial Digital Interface (SDI), and HDMI (High-Definition Multimedia Interface), things start to get really messy. Analog connections need special analog-to-digital converters (A/D, D/A, or sometimes DAC are common abbreviations) to convert the signal into something recordable on the hard drive. SDI and HDMI require special controllers with enormous bandwidth in order to process the vast amounts of uncompressed data coming off the camera. There are, of course, computers which can do this, and the controller cards are available for both Mac and PC from vendors such as Blackmagic, but fitting all of this into a unit portable enough for fieldwork is a real trick. That's why, when you see equipment like hard disk recorders for sale, they are so expensive. It takes a lot of effort, capital, and engineering to make something like that possible, and even more to make it work seamlessly with the camera.
  12. Marcus, wow, you sure do have a lot of questions! It's great that you want to learn, but very few members of this forum have the time to give comprehensive answers to all of them. That is why, when you ask something that is fairly common knowledge, you will often get a reply telling you to research it yourself. While nearly everyone here is willing to help with a sincere question that you can't find an answer for, none of us really has time to create a comprehensive curriculum on this website for the totally uninitiated. In doing research, Wikipedia is your best friend. I know a lot of universities don't accept it as a credible academic reference, but most of the information on there (especially on topics of cinematography) is written by people in the field who have given a lot of their valuable time to provide comprehensive, searchable, and cross-referenced explanations of these complex subjects. Wikipedia also has pictures and animations that can be invaluable for actually understanding how these things work. I would highly recommend that if you come across a term or a concept that you don't understand, you look it up there first. Even if you don't understand the concept entirely after reading the article, it will probably give you at least a partial understanding that will allow you to ask a more specific and concise question for the professionals here to answer. You will also learn about other aspects of filmmaking that probably hadn't even occurred to you as you read the articles. As for industry acronyms, there is a fairly substantial glossary here, so take a look. I'll continue to read your posts and answer questions when I have the time and knowledge to do so, but doing your own research really is an essential part of learning. It will vastly improve your knowledge, and it will leave time for the many experts and enthusiasts here to answer the more complex questions that don't have answers that are easy to find with a quick Google or Wikipedia search.
I have never had a single class of film school in my life, so almost everything I know about film and video I have learned through my own research, or through blundering experience. I know it can sometimes be difficult to find the answers you are looking for, but you will also find, many times, that the information you need can be had with the first or second hit on Google. It would be nice if a complete primer on all things video were available on cinematography.com, but the reality is that it is a forum of busy people who see a 30-part question and say "There's no way I have time to answer that!" If anyone had time to write out all of this information, he would probably send it to a publisher and charge $45 for a copy. (See the recommended reading thread for more info. :P )
  13. Mostly right, but I would like to clarify a few points. Your time-lapse explanation seems pretty sound. Slow motion is NOT available on most cameras in the consumer and semi-pro range; it is a feature largely reserved for professional cameras, like the VariCam. Some consumer and semi-pro cameras can achieve a limited slow-mo performance, but most are extremely limited. Super slow motion can be shot at frame rates up to 1,000,000 frames per second, but this is entirely the realm of specialty cameras. If you need to shoot super slow motion, you would almost certainly rent the camera from a specialty equipment facility, since purchasing one outright is not cost effective unless your job is analyzing bullet impacts or car crash tests. As a point of interest, the NFL shoots its games at 120 frames per second in order to achieve the slow motion effect so commonly seen in its productions. It does this with both its video cameras and its 16mm film cameras (that's a lot of film!). Once the footage is captured, it is loaded into an editing program. Most editing programs have an "interpret footage" feature which allows you to tell the system the frame rate that you want the footage to play back at. Once this is set to the normal playback speed for the project (say 60 frames per second for NTSC), the effect is achieved.
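The "interpret footage" arithmetic is simple enough to sketch (a toy illustration, not any editor's actual API):

```python
def slowmo(capture_fps, playback_fps, clip_seconds):
    """Slowdown factor and resulting screen time when overcranked
    footage is reinterpreted at the project frame rate."""
    factor = capture_fps / playback_fps
    return factor, clip_seconds * factor

# Two seconds of 120 fps material dropped into a 60 fps timeline:
factor, duration = slowmo(120, 60, 2.0)
print(factor, duration)  # 2.0x slower, plays for 4.0 seconds
```

The same two seconds interpreted at 24 fps would give a 5x slowdown, which is why higher capture rates buy you more dramatic slow motion.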
  14. It should also be noted that not every camera has the capability to shoot time lapse or slow motion. Time lapse can be cheated by starting and stopping the camera every few minutes, then extracting just one frame from each of your many recording sessions for your final product. (This is, of course, very tedious.) Slow-mo is another story. If you are planning to show your product at 30 frames per second (progressive scan) and your camera tops out at 60 frames per second (progressive scan), then the best you can do is a 1/2-speed slowdown. If you can't use progressive scan, your slow-mo will have very poor quality, because slowing down interlaced footage results in a perceived 50% decrease in resolution.
  15. What you are talking about here is really 2 different things. A flower blooming or a sun setting is usually time lapse (greatly sped up), while a bee's wings would have to be extreme slow motion. These are actually opposite effects. Time lapse is accomplished by recording at a very slow rate, perhaps one frame per second, or maybe only one frame per minute. The CCD will still be set for a very short exposure (say 1/60 for video or 1/48 for film) in order to prevent motion blur. The recording is then played back at normal speed, showing an event that happened over the course of hours in a few seconds. Slow motion works exactly the other way around. Images are recorded at very high frame rates (up to 1000 frames per second) and then played back at normal speed, so that something that took only a second or two can be shown very slowly over a minute or more. A higher-speed exposure leaves less time for the image sensor or film to gather light, which means that very intense lighting is necessary in order to produce a bright enough image at very high recording speeds. What you seem to be talking about in your post is shutter speed. Shutter speed, whether on a CCD or film, controls the amount of light gathered for each image, and the amount of motion blur. Longer exposures allow more light, but also increase blur, because more motion can occur while the exposure is happening. Shutter speed is actually independent of the frame rate, especially in video. Film usually has a frame rate of 24 frames per second and a 180-degree shutter (motion picture cameras have circular spinning shutters). This gives the camera time to advance the film to the next frame before exposing it and prevents unwatchable motion blurring. It also effectively gives normal-speed filming a 1/48-second shutter speed. Video has no mechanical linkages to worry about, allowing the shutter to run at any speed you want.
My Canon XH-A1 will allow me to shoot at shutter speeds as slow as 1/4 second (very bright, blurry, surreal). It can also expose each frame for as little as 1/2000 second (basically no motion blur, but requires huge amounts of light). So basically, in order to shoot time lapse, set the camera to a very slow frame rate and a normal shutter speed. In order to shoot slow motion, set the camera to a high frame rate and a fast shutter speed.
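The time-lapse side of the trade-off can be sketched the same way (again a toy calculation, not a camera feature):

```python
def timelapse_runtime(event_seconds, interval_seconds, playback_fps):
    """Screen time produced by capturing one frame every
    interval_seconds over an event lasting event_seconds."""
    frames_captured = event_seconds / interval_seconds
    return frames_captured / playback_fps

# A 2-hour sunset shot at one frame per minute, played back at 24 fps:
print(timelapse_runtime(2 * 3600, 60, 24))  # 5.0 seconds of screen time
```

Pick the capture interval first from how long you want the final shot to run, then set a normal shutter speed for each of those frames.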
  16. While I wouldn't claim to be an expert on CCD technology, I think I can answer a few of your questions. 1) CCD technology in broadcast video cameras showed up sometime in the early 80's; you can probably find a more exact answer on Wikipedia. 2) CCDs replaced cathode ray tubes for image capture because of their much lighter weight, lower power consumption, sharper image that didn't soften over time, and a number of other reasons. 3) The size of the CCD chip has a number of effects on the image. First, a camera's sensitivity to light is directly proportional to the size of the chip's photosites (pixels). Small chips with high resolutions have tiny photosites which require more light or more gain (which causes "noise"). Larger photosites gather more light, and therefore tend to require less gain, and are therefore less noisy. The size of the CCD also affects the depth of field for focus. Larger image sensors produce a shallower area that can be held in focus. Since we have all grown accustomed to the fairly shallow depth of field that 35mm film's relatively large image size produces, many people find larger sensors produce a more pleasing depth of field. There are even cameras now that have image sensors the same size as a frame of 35mm film so that they will have exactly the same depth of field (and can mount the same lenses). The size of the CCD also affects the length of the lens that is required to achieve a certain magnification (zoom). The smaller the image sensor, the shorter (also lighter and more compact) the lens must be to achieve a desired level of magnification. Three-chip CCD arrays split the light passing through the lens into the three primary colors with a prism, and expose each of the three chips with the entire picture in only one primary color. This allows the camera to capture each primary color at the full resolution of the chips. Single-CCD designs must place a color filter over the CCD in a Bayer pattern.
This means that the camera gathers each of the primary colors at only part of the resolution of the chip. 4) Cameras, unlike the human brain, record exactly what is there. Unfortunately, that means that where there are fluorescent lights there is a huge amount more green than any of the other colors, unless the light was designed very carefully not to do that (these tend to be expensive). Our brains usually filter out this green spike, but cameras don't. Many of them have a "fluorescent" white balance setting which is designed to compensate for this, but that's simply the way these lights are. There is a control on video cameras called the white balance, which allows you to control how the camera responds to different light sources. Incandescent household lights typically output light in the 2800 Kelvin range, which is actually very red, while studio incandescents are typically 3200 Kelvin. Natural sunlight at noon is probably somewhere close to 5600 Kelvin. The camera can be adjusted to display white as white under all of these varying lighting conditions, and once white is white, all of the other colors tend to follow. As far as how to light for a specific purpose . . . this really depends on what you are trying to accomplish. A news crew on site will probably just use whatever lighting is available, since it is inconvenient to carry, set up, and power light kits on the run. Just about everyone else doing professional film and video will adjust lighting as much as they can to make everything "just right." Lighting is really what makes photography of any kind an art. For a kitchen in a movie, it wouldn't be at all uncommon for there to be 20 or more very, very carefully positioned light sources. Making the lighting perfect is probably the biggest part of a Director of Photography's job.
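The single-chip resolution penalty is easy to quantify for the standard RGGB Bayer layout (a sketch with made-up function names; half the photosites sample green, a quarter each sample red and blue):

```python
def bayer_samples(width, height):
    """Per-channel sample counts for an RGGB Bayer mosaic."""
    total = width * height
    return {"R": total // 4, "G": total // 2, "B": total // 4}

# A single-chip 1920x1080 sensor samples each color only partially,
# unlike a 3-CCD prism block, which gives full resolution per color.
print(bayer_samples(1920, 1080))
```

The missing color values at each photosite are interpolated (demosaiced) from neighbors, which is why single-chip cameras can show color resolution and aliasing artifacts that three-chip designs avoid.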
  17. 4) Film was the original standard. The vast majority of film has been shot at 24 frames per second (24fps) over the last century or so. 1) When video came into existence, the engineers who designed broadcast television and video systems decided, for a variety of technical reasons, to work at a different rate than film. Here in the US, the standard was to refresh (scan a new picture) at a rate of 60Hz. However, the equipment of the day could not produce an entire frame 60 times a second, so broadcast equipment was made to interlace the picture. Interlacing means that rather than scanning the entire picture every refresh, only half of the lines are scanned; the other half of the lines are scanned on the next refresh. These separate sets of lines are called fields, and when one field is being scanned the other is sitting idle. This means that if your camera or TV is in 480i @ 60Hz, what you are really getting is alternating fields of 240 lines of resolution, with each field scanned 30 times per second. Progressive scan is an attempt to make video higher quality and more film-like. Instead of having 2 fields, progressive scan will scan the entire image in one pass, so that if your camera or TV is in 480p @ 60Hz mode, it is scanning 480 lines of resolution 60 times per second. 2&3) Progressive mode will affect a shot in several ways. Most cameras cannot shoot as high a frame rate in progressive mode as in interlaced mode, because progressive literally doubles the amount of data being processed. This means that you may be limited to shooting at only 30 frames per second in progressive mode, when interlaced would let you shoot 60 fields per second. Although the progressive frames are higher resolution than the interlaced fields, the interlaced fields will likely make fast motion look smoother because of the greater number of image samples. Progressive is the ONLY way to go if you are going to be rotoscoping or doing a final output to film.
Otherwise you are just shooting yourself in the foot. Progressive is also much better for computer applications and web video. This is because computers natively work in progressive mode, and because interlacing makes the compression that most computer formats (and especially web formats) use less efficient. 4 (again)) Film is almost always shot at 24fps, and is always shown in theaters at this speed. 5) DVDs can be recorded in either 480i@60Hz, 480p@30Hz, or 480p@24Hz. When a DVD is made from a film source, it must use a method called pulldown to make the frame rate work on a TV that can only display 60i. Basically, each frame is split into fields, and in order to fill all 60 fields from only 24 frames, some fields must be displayed twice, pairing up into interlaced frames something like this: AA BB BC CD DD. A DVD may be recorded with this pulldown already added, or may be recorded as 480p@24Hz, causing the DVD player to add the pulldown on the fly. 6) The local news is almost always shot at 60Hz. In these days of HDTV it is often shot in progressive scan now, but for many decades the standard was 480i@60Hz. Hope this clarifies things.
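The field/frame relationship described above can be shown concretely (a toy model treating a frame as a list of scan lines; names are mine):

```python
def split_fields(frame_lines):
    """Split a progressive frame (a list of scan lines) into its two
    interlaced fields: even-numbered lines in one field, odd in the other."""
    upper = frame_lines[0::2]  # lines 0, 2, 4, ...
    lower = frame_lines[1::2]  # lines 1, 3, 5, ...
    return upper, lower

# A 480-line progressive frame yields two 240-line fields,
# which is why 480i is really alternating 240-line samples.
lines = [f"line{i}" for i in range(480)]
upper, lower = split_fields(lines)
print(len(upper), len(lower))  # 240 240
```

Each field is scanned at a different instant, which is where interlacing gets both its smoother motion (more temporal samples) and its combing artifacts on fast movement.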
  18. My church recently shot a lecture by a guest speaker which was originally intended for distribution on its website. Several weeks later, I was asked to broadcast it on the local community access channel in place of our regular service broadcast. The problem I have is that it was shot in 24F HDV on a Canon XH-A1, rather than the usual 60i DV that I use for broadcasts. The guys up at the access channel tell me that their broadcast server will only support 60i DV and DVCAM (NTSC), so I need to down-convert and add pulldown. When I tried exporting the footage as a 29.97fps DV AVI from Premiere, the results were absolutely terrible. The image was tearing, blurring, and incredibly soft. Is there a better way to do this downscale and pulldown? The only programs I have access to right now are Adobe Premiere and After Effects (I have almost no experience with AE). Would After Effects give me a better result than Premiere? This is all being done on a non-existent budget with volunteer work, and I've never really had to bother with frame rate conversion for TV broadcast before.
  19. The government basically decided that if you could afford to be an early adopter and buy an HD ready TV, you could also afford a tuner. Of course, that's before the economy tanked and put several hundred thousand people out of work. Since the voucher program began, the price of the tuners has essentially been fixed at around $49.99 plus taxes. The manufacturers and retailers know that there is a $40 subsidy, and consumers won't complain about $10 or so. These base models would probably sell for around $35 if there was no subsidy, and the cost of full resolution tuners would most likely follow accordingly. That said, until I can find a better paying job, all I have to watch is a 27" Zenith that is a year older than I am. Ten dollars to have TV again doesn't seem such a bad idea, even if it is only 480i.
  20. All too true, Phil. I was mostly referring to people who live deep in the woods of Ohio and western Pennsylvania when I referred to "the sticks." It's one of those regions with enough hills, valleys, and trees that receiving any radio transmission is difficult. I can't tell you how many cell phones I've had to return from these same people because they can't get a signal anywhere near their home, despite the fact that there are dozens of towers in the area. As I mentioned before with the train example, ATSC doesn't hold up all that well to multi-path. It gets confused very easily and falls apart. It also makes using directional antennas more difficult, since a misalignment of only 2-3 degrees can be the difference between perfect reception and no picture at all. Analog signals were much more forgiving, even if their pictures were not as clear and crisp.
  21. I work in an electronics department part time, and the biggest misconception that we run into is people insisting that they need a "digital antenna." It is sometimes very hard to convince people that what is important is the tuner/converter, and that the massive powered rotating antenna they already have mounted on the roof of their 3-story house really will work better than the tiny set-top antenna that some salesman working on commission at another store told them to buy for $90. We also get a lot of complaints from people who live out in the sticks who lose a channel or two that they (barely) got on the analog broadcast. Lots and lots of complaints from people who don't know how to hook up anything that doesn't have 300-ohm leads (it's all color coded; can it possibly be THAT hard?), and people who don't know how to work anything electronic. I've had several older people (including my grandmother!) insist that the new TVs are no good because they don't have a dial. (It took grandma about an hour to decide the remote was her new best friend, though!) There are also a lot of people who think that if the picture falls apart because of bad reception, it means the TV is broken, and assume it's my fault (ah, retail!). One of my friends lives near train tracks. Every time the train goes past, the image completely falls apart and freezes until the train is gone, while the sound remains unaffected. The analog signal used to go a little snowy, but would generally remain watchable.
  22. Phil's comment brought up another thing which would be important to know, what country are you in? Also, is the band's primary place of business in the same country? My knowledge of contract law is limited to the US legal system, and while there may be legal parallels in another country, it is equally likely that there are not. Remember also that you really don't want to go to court if you can help it. Find out what your rights are, and inform the customer of them, but try to maintain a positive business relationship if at all possible. They may only need reassurance that you can get the job done. There may be something about the way their other production house does things that they like and which you normally don't do, but could if you knew they wanted it that way. They may have been offered a lower price for the work which you could match (it's usually better to take a one time hit if it means you keep a customer's repeat business). Find out what they want, and let them know that you can accommodate them. If they absolutely want to give the DVD project to the other production house, tell them you'll release the footage if they will pay you for the work already done. If you went to court you could probably be compensated for the entire job (assuming US contract jurisdiction) but the costs, damage to your relationship to the band, and damage to your company's reputation from their word of mouth would almost certainly not be worth it. Make it clear that you don't want to give them a hard time, but that you deserve to be paid for the work they asked you to do.
  23. This all depends very much on what your written contract with the band (if one exists) actually says, and I am not a lawyer, but here's my take. You were told both to film the night and to edit the DVD, and since this is the same arrangement you have had in previous dealings with this band, you have a reasonable expectation of being compensated under the same terms as the previous DVD. You have already performed considerable work in the expectation of being compensated as you were for previous work. This would seem to meet all of the criteria for an implied contract-in-fact, meaning you were in fact contracted to use this footage to produce a DVD with all of the editing and post-production that entails. Although not quite as solid a defense as a written contract, the law takes these contracts very seriously. You have every right to place a lien against the band by withholding delivery of the film until you receive compensation for the hours worked, and any usual fees associated with the editing and production process. They may still give both the footage and the DVD project to the other production house, but only after they have paid for the work you have done with a reasonable expectation of compensation. Unless your written contract states otherwise, here's what I think should be done. Tell them in writing, as plainly and pleasantly as possible, that you were instructed by them both to film the show and to produce the DVD, and that production of that DVD from filming to final finished delivery is an exclusive right which belongs to you. Inform them that they do not have copyright to the footage until you have been paid for the DVD (or, at the very minimum, the work done). The payment which they have made to you was for the services of your camera crew and rental of the equipment, and until you have received payment in full (under the terms by which you were compensated for the previous DVD) you are not willing to release the footage or transfer the copyright.
  24. You're right, I should have specified. I assumed Saul was talking about DLP rainbow, since it is similar to what he described, and he said he wasn't sure what kind of projectors do it. Too often I've heard people use LCD as a generic term for any video projector, regardless of the actual technology used. Sometimes I forget that there are some people besides me who actually know and care about the difference. :blink:
  25. As far as I understand it, there is no way to attach the camera directly to that monitor. The XH-A1 has the following outputs: 1) Composite (3.5mm to yellow, red, and white RCA cables*) 2) Composite over coaxial (BNC connector) 3) Component (D-shell to red, green, and blue RCA cables*) 4) FireWire. (* Indicates an adapter that comes with the camera.) If you want to monitor in HD, your only options are analog component 1080i or FireWire (labeled "DV" on the camera). The monitor does not support either of these formats. If you have Canon's Console software, you can use that to monitor the camera through the computer over FireWire, but the software costs ~$500 USD and has a lot of bugs. You could try to get a component-to-VGA signal converter, but a quick search of B&H Photo only came up with one that would do that, and it is $1,700 USD. I'm sure that there are cheaper ones without all the bells and whistles, but I couldn't tell you where to find them.