
David Ghast

Basic Member
  • Posts

    33
  • Joined

  • Last visited

Profile Information

  • Occupation
    Director
  1. Soft and warm, yes, that describes it beautifully. I love that look, as opposed to the sterile digital look that comes from modern lenses. If I were shooting multicam, what are the chances of a set of Angie retro lenses producing the same-looking image (assuming they weren't scratched or anything)?
  2. Don't do it. That's all I have to say. You've been warned.
  3. I think you're all missing the point. Premiere is about as good as it gets if you're a small production or a lone editor, but it doesn't work in large productions with dozens of editors because there is no centralized media management. Avid, on the other hand, is built for reliability and support, which would explain why its interface is from the '90s and nothing besides format support has been updated in it for quite some time. FCP is marginally better than Premiere, but you'd be missing the forest for the trees if you thought working on a proprietary, locked-down platform was in any way a wise choice. If they ported FCP to the PC, I'd use it over Premiere any day, but never again will I go through the problems the "once you go Mac, you can never go back" Apple philosophy caused when I found there was no way to do what I wanted on a Mac and no way to port my files to a PC. Maybe you can get away with working on a Mac, but you're making a deal with the devil, and sooner or later it'll come back to bite you.
  4. You have to understand that an editor is in fact a collection of different programs. Some processes use the GPU, others the CPU, and sometimes more than one core. I believe AE supports multicore by spawning separate instances of itself, which is an example of how these programs, which weren't made with multicore in mind, are rigged to adapt (see the multiprocessing sketch after this list for the general idea). Some functions of MC probably do work with multicore and you just didn't notice, but then again, we're talking about Avid, and, well, it is what it is.
  5. You might want to use a Lanczos resize instead of relying on your editor to do it. Never rely on your editor for image processing of that nature; they're all terrible at it (see the resize sketch after this list). Edit: when I say editor, I mean the program, not the person.
  6. They tracked the actor's face and then pinned static pictures of Arnold to it. Then they created a depth map to enable a faux parallax so it wouldn't look weird when the perspective changed. Then they did some simple warping to make him blink or sing. Then they hired some crappy voice actors to somehow make the audio track more obnoxious than the video.
  7. Lol, how the hell do you export a single field? I imagine that would look strange. You probably just let your editor deinterlace it and made your footage look like crap. If you want to deinterlace it, use VirtualDub or an AviSynth script (there's a command-line alternative sketched after this list).
  8. This effect is produced by shooting directly at a light source with a 3-chip CCD camera, though I forget whether the effect is only vertical or not.
  9. Looks like he shot the whole thing on a Bolex with some dramatic directional lighting. Gives the same effect as shooting something deep underwater. Meh.
  10. How do you light up a church? A few cans of gasoline and some matches should work.
  11. Lol, OK, I'll give you credit because you're new at this, so I'll explain what you did wrong: MPEG is a final-delivery compression format. It uses interframe prediction (feel free to Google that) to compress to roughly 10% of the original file size, typically. That interframe prediction is bad for editing, because the program has to reconstruct the source frames from the inter-predicted frames. A delivery compression format shouldn't be used unless you A) know what you're doing or B ) are done with post. The general rule when doing post is to A) keep the footage in its original compressed format, if your original format is compressed (that requires knowing your camera and its compression, but it's often the smallest file size), B ) use an uncompressed format (the largest file size), or C) use a lossless codec like Lagarith, which gives file sizes in between A and B. As for your framing issue, you didn't set up your project to reflect the dimensions of your source footage. Look at your footage in the footage window (or whatever it's called), right-click it, and select Interpret; it should tell you most of what you need to know. Write that down and create a new project to match, and be careful when you export that all these settings line up with your video. Edit: you can't write "B" and ")" next to each other without invoking those god damn emoticons. Not everyone likes to express emotion through cheesy graphics from the '90s. Edit 2: forgot to mention that you're working with HDV files, which are essentially a modified MPEG-2, which is still bad for editing. Your best bet is to re-encode them to Lagarith (see the re-encode sketch after this list).
  12. There's no reason to become obsessed or confused by all this information, although it is important to understand. A general rule of thumb: 1. If you can record progressive, do it, and keep it progressive. If you plan on doing any rotoscoping or local effects, you'll understand why. 2. If you record interlaced, don't deinterlace it unless you're doing rotoscoping. 3. Deinterlacing is a very lossy process; interlacing is not. And for the love of god, don't let the telecine house talk you into doing a hard pulldown on your film material (which will interlace it); you can easily do that in post, even though there's no reason to (see the pulldown sketch after this list).
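
Re: post 4 and the "spawning separate instances" trick: a minimal Python sketch of that general idea, with a single-threaded per-frame job farmed out to several worker processes. render_frame() is a made-up placeholder, not anything from AE or Media Composer.

    from multiprocessing import Pool

    def render_frame(n):
        # placeholder for an expensive, single-threaded per-frame render
        return sum(i * i for i in range(100_000)) + n

    if __name__ == "__main__":
        # four worker processes, roughly "four instances" of the renderer
        with Pool(processes=4) as pool:
            frames = pool.map(render_frame, range(240))   # 240 dummy frames
        print(f"rendered {len(frames)} frames")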
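
Re: post 5, a minimal sketch of a Lanczos resize done outside the NLE, using the Pillow library. The filenames and target size are just examples.

    from PIL import Image

    frame = Image.open("frame_0001.png")                 # example input frame
    half_size = (frame.width // 2, frame.height // 2)
    resized = frame.resize(half_size, Image.LANCZOS)     # Lanczos resampling
    resized.save("frame_0001_half.png")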
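
Re: post 7, the post recommends VirtualDub or an AviSynth script; as a command-line alternative, this sketch calls ffmpeg's yadif deinterlacer from Python and writes a lossless FFV1 file. Filenames are examples.

    import subprocess

    subprocess.run([
        "ffmpeg", "-i", "interlaced_source.mov",   # example interlaced input
        "-vf", "yadif",                            # deinterlace
        "-c:v", "ffv1",                            # lossless intermediate
        "deinterlaced.mkv",
    ], check=True)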
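
Re: post 11, option C (a lossless intermediate): a sketch that re-encodes an HDV capture with ffmpeg. As far as I know, ffmpeg decodes Lagarith but does not encode it (Lagarith encoding is usually done through VirtualDub on Windows), so FFV1 stands in as the lossless codec here. Filenames are examples.

    import subprocess

    subprocess.run([
        "ffmpeg", "-i", "tape_capture.m2t",   # example HDV capture (MPEG-2-like, interframe)
        "-c:v", "ffv1",                       # lossless, every frame stands alone
        "-c:a", "pcm_s16le",                  # uncompressed audio
        "intermediate.mkv",
    ], check=True)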
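
Re: post 12 and doing the pulldown in post: a sketch that keeps the master progressive and applies 2:3 pulldown only for an interlaced deliverable, using ffmpeg's telecine filter. Filenames and bitrate are examples.

    import subprocess

    subprocess.run([
        "ffmpeg", "-i", "master_23976p.mov",   # example progressive master
        "-vf", "telecine=pattern=23",          # 2:3 pulldown for 29.97 delivery
        "-c:v", "mpeg2video", "-b:v", "25M",
        "delivery.mpg",
    ], check=True)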