Everything posted by Tyler Purcell

  1. All matrixed sound tracks go through Dolby Labs for licensing and encoding; regular labs can't make them. So if you're OK with mono or stereo, then it's $0.40 - $0.60 a foot depending on the lab.
  2. This is the problem with 35mm sound today. Back in the day, you'd send your mag tracks to the audio house, they would record all the elements onto film, run those films through multiple mag machines in sync with the print and mix it. That all stopped in the late 80's / early 90's, when most of the mag facilities threw their equipment away and went digital. This is good, however, because mag mixing was a real pain and, unlike film editing, which delivers an arguably better image than a DI, analog mag audio mixing doesn't. Digital mixing is far superior and A LOT cheaper in the long run. The best way is to digitize the entire film (after neg cut) and very carefully cut in all the audio from the original source audio. Then export an OMF file for your audio guy; they will love you forever. This way, they can work with all the original stems and fine tune your audio directly from your source, rather than from a dirty mag track. It's a time consuming process on your part, but it's well worth it in the long run. I do all my audio digitally on set and edit in Avid. I love bench editing, but it's too costly today and getting good quality audio out the back end is a pain. So once you have the audio mixed, then comes the hard part. Unlike DCP (digital cinema package), which is pretty much open source, film soundtracks are licensed and the licensing is expensive. Dolby Stereo (A) optical tracks are the cheapest, but they still cost a considerable amount of money. Dolby Digital tracks are roughly $15k for licensing and then you've gotta deal with making the prints. The total cost of a DD soundtrack on 35mm can be upwards of $30k. Dolby makes the only system capable of making the sound track, so they charge a lot for it. DTS is another option, but they're around the same price and you've still gotta make a timecode track. There really is no benefit to printing film today unless you have a way to project it. So cut the negative, transfer the cut negative, throw it into Avid, do your temp audio mix and send it on its way to the audio house. When you get the 5.1 back, send it and your picture to get a DCP made and you're done.
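To put rough numbers on the two posts above, here's a quick back-of-the-envelope sketch in Python. The per-foot optical rate, the $15k Dolby Digital license and the ~$30k all-in figure come from the posts; the runtime and the 90 ft/min rate are just illustrative assumptions.

```python
# A rough cost sketch using the ballpark figures quoted in the two posts above.
# The runtime and the 90 ft/min rate are illustrative assumptions, not quotes.

FEET_PER_MINUTE = 90            # 35mm at 24 fps: 16 frames per foot -> 90 ft/min
runtime_min = 100               # hypothetical feature length in minutes

footage_ft = runtime_min * FEET_PER_MINUTE

# Mono/stereo optical track at a regular lab: roughly $0.40 - $0.60 per foot
optical_low = footage_ft * 0.40
optical_high = footage_ft * 0.60

# Dolby Digital: ~$15k licensing, upwards of ~$30k all-in per the post above
dd_license = 15_000
dd_total_estimate = 30_000

print(f"Footage: {footage_ft} ft")
print(f"Mono/stereo optical track: ${optical_low:,.0f} - ${optical_high:,.0f}")
print(f"Dolby Digital: ${dd_license:,} license, ~${dd_total_estimate:,} total")
```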
  3. Yep, I see a lot of Canon C series and XDCAM/XAVC cameras, mostly because the cameras are cheap, have XLR inputs, a built-in filter wheel and zoom lenses (auto iris), and the workflow in post is pretty easy since the cameras shoot a decent REC709 color space. These are the basics that are necessary for shooting reality. Reality needs to post very fast, so they don't have the time to deal with coloring before watching. So they stick to standard broadcast formats in camera: 1080p, 29.97, REC709, etc. This gives the editor everything they need within each clip, so they can throw it together quickly. They need this speed because the story is made in post, so they need to edit and report back to set with the next plot point for the following day's shooting. They move very fast, almost at newsroom speed, so they've gotta deal with standard media. I've done a lot of reality shooting with my Blackmagic cameras and honestly, they suck for it. It's the little things that are missing... the tilt viewfinder, the motorized zoom lens, the auto iris, built-in filters, XLR inputs without poop sticking out the side of your camera, the long battery life, long card life. You don't think about these things until you shoot reality and then you're like, poop... it's the wrong camera.
  4. Thelma is kinda old school, she uses it because she likes the bench editor interface, which exists for all systems.
  5. We're analog beings and light is an analog system; so is film. Digital cameras take photons of light and convert them to ones and zeros. However, there are limits to how many ones and zeros you can fit into a given stream of data. An example of this would be slightly over exposing: the imager and data output don't allow you to distinguish between 99.5% full brightness and 100% full brightness. It's the same with blacks; it's hard to differentiate between full black and 1% brightness. This is where bit depth comes into play. The higher the bit depth, the more differentiation there is in the image. Yes, there are other details like the quality of the imager, but MOST of the latitude/dynamic range is limited by the bit depth, which in computer terms is called word size. In contrast, film is an analog format; it doesn't have a bit depth, there is no translation. When film is scanned to digital, the imager on the scanner itself is very slow in order to write a high bit depth image. The reason why we don't have high bit depth digital cinema cameras today is quite simple. You have two choices for getting high bit depth with digital formats: one is to have a super fast processor, the other is to have a slow capture. It's hard to have that fast a processor in a camera; the heat sink necessary to cool it would be huge. Film scanners hold the frame in place for a lot longer, allowing a slower processor to take its time and scan the image. All of the CMOS based digital cinema cameras on the market are hybrids. They've found a happy medium between processor speed and bit depth. The very best ones can capture a 14 bit RAW image. In contrast, modern color negative contains upwards of 32 bits worth of information. So until small/low wattage processors become more powerful, we will continue to see slow increases in bit depth, with even larger increases in data file size as a consequence. It will be decades before 24 bit digital cinema cameras are the mainstay and personally, I don't think it will ever get better than that.
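As a rough illustration of the bit-depth point above (not any particular camera's behavior), here's a quick Python sketch of how many code values each word size gives you and how fine the smallest brightness step becomes. Real cameras encode light non-linearly, so treat this purely as a quantization example.

```python
# Pure quantization illustration: code values per word size and the smallest
# representable brightness step, ignoring how real cameras map light to code.

for bits in (8, 10, 12, 14, 16):
    levels = 2 ** bits
    step_pct = 100.0 / (levels - 1)   # smallest step as a % of full scale
    print(f"{bits:2d}-bit: {levels:>6} code values, "
          f"smallest step ~{step_pct:.4f}% of full scale")

# At 8 bits the step is ~0.39%, so 99.5% vs 100% brightness is only one or two
# code values apart; at 14 bits that same half-percent spans roughly 80 values.
```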
  6. Interesting. Yeah, back when it was originally released, I was editing film and tape-to-tape linear. I didn't start NLE editing until Media 100 came around in '96, because broadcasters started investing. I used Avid a bunch in the late 90's as I worked for a post house that used it. But for my own projects, I stuck with the then-developing Final Cut Pro, because it allowed raw editing of DV material without transcoding. Or maybe it's just a UK thing.
  7. Strange... As a professional editor, Lightworks is so far off my radar, I thought it was a rental house until I did a Google search. I think every software has its idiosyncrasies and it's really down to learning them. Then you can either embrace it or push it away. As a "creative", I've found that once the tools are learned, sticking with them is a lot easier. People give up too soon and resort to using tools that nobody else uses because they're simply too lazy or cheap. Yes, you may be able to edit a tiny bit quicker, but that's only due to lack of education and not due to a software glitch.
  8. Really? The industry has moved to Premiere.
  9. FCP 7 is a 32-bit program; it's very slow and limited. It barely works on Yosemite, but it does function. I moved to Avid and it's way better once you get used to it. The nice thing is the 64-bit processing, so it's way faster and has far fewer issues in terms of real-time rendering. Premiere is another option because it's open to different codecs and media formats.
  10. I haven't shot much with it, but I have done a lot of editing and finishing (coloring) on RED shows. In terms of the camera, the "modular" body design just isn't me. I come from the ENG days, so cameras like the Arri SR, Aaton Penelope, Blackmagic URSA Mini, these are what I'm used to shooting with. To me, the camera form factor is critical. I also don't like the fact that RED bodies have open ports and fans; that's a complete fail. Even though the URSA Mini has open ports, they're only there for heat to radiate from the heat sinks and aren't direct access to the boards like the RED. Since I mostly work in post production, I can explain my borderline hatred of what they've done. First: their color science is still whack. The camera's natural tendency is to steer towards magenta. Yes, you can easily build a LUT in DaVinci to resolve this, but the information still isn't there. As a colorist, you're always fighting to get back that lost information. Second: noisy blacks and the contrast ratio in under-exposed dark areas in general are a huge problem. I've worked with Epic and Dragon footage recently, shot by top cinematographers, and been dismayed when coloring shots with dark areas. When cinematographers don't light perfectly and some stuff is left maybe 6 stops under, bringing those areas up so they aren't completely crushed does leave artifacts. In a lot of cases there is no data there at all, which is surprising because cameras like the 5D MkIII (the B camera on that particular shoot) didn't have any issues. We wound up renting another 5D MkIII with Magic Lantern and shooting all the night stuff with those cameras instead of the RED. Third: I've found pulling good keys from RED cameras can be really challenging for no good reason. I was working on a children's TV series last year and it was shot entirely on green screen. We took the raw R3D files and handed them off to our artists. Within hours of the hand-off, they called us after checking one of the shots and said they'd have to roto everything by hand. For some reason, due to either the magenta hazing or a bit depth issue of the RED Dragon, there simply wasn't appropriate data to pull a key. I was shocked, because so many VFX films shoot with RED, so I asked around. Sure enough, it only took two calls and my issue was confirmed. We wound up having to turn the project over to a special effects company who had to hand-roto all of the material. Now, it could be the compression settings (they were high), but the cinematographer was a top guy, maybe even a member of this site. The shot was perfectly lit, perfectly exposed, so yea... huge problem. Finally, and in my opinion most important... the codec. For shooters, it doesn't matter. They shoot, they hand the cards off to a DIT and that's the last they see of that format. However, for post production, the format is a train wreck. We're back to the days of labs having to take your dailies, process them and hand back lower-res files to work with. For people who can't afford labs, you're having to convert everything. Even with fast computers, the conversion process is overwhelmingly slow and tedious. Plus, it's always a two step process: edit offline and then go to online. Wanna turn something out quick? Good luck! For HUGE shows, this doesn't matter. For little stuff, it's a HUGE problem. Outside of those issues, there are many others, including IR contamination, over-exposure blooms and proprietary media. When you add everything up, you can see why I dislike the camera.
Sure, as a cinematographer you can create beautiful shots. However, when 35mm (and Alexa) don't have any of those issues, why would anyone in their right mind shoot with RED? Cost savings? Yea, sorry... it doesn't add up. In my eyes, nobody should pay more money for less quality; it just doesn't make sense.
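For the green screen story above, here's a minimal sketch (purely illustrative, not anything from the actual show) of the kind of sanity check a VFX house might run on a plate before committing to a key: measuring how far the green channel actually separates from red and blue. The file name and the libraries used are my assumptions.

```python
# Minimal keyability check on a green screen plate: how far the green channel
# separates from red/blue. The file name is hypothetical; assumes an 8-bit
# image readable by imageio. This is a sketch, not a production QC tool.

import numpy as np
import imageio.v3 as iio

frame = iio.imread("greenscreen_plate.png").astype(np.float32) / 255.0

r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]

# Simple keyer-style measure: green minus the stronger of red and blue.
separation = g - np.maximum(r, b)

print(f"mean separation:   {separation.mean():.3f}")
print(f"95th percentile:   {np.percentile(separation, 95):.3f}")

# Values hovering near zero across the backing suggest the plate will be hard
# to key cleanly and may end up needing hand roto, as in the post above.
```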
  11. They've still got a long road ahead of them. It's really too bad they've hit so many road blocks, I just hope they can recoup and turn out a good product! :)
  12. I just think the whole thing is hilarious. RED digital cinema cameras don't even function without accessories. So sure, the body is $6k, but once you turn it into a working camera, it's $12 - $14k. So it's really not a good deal, it's just cheaper than the other crap... whoops... sorry, "stuff" RED makes. What bothers me even more is Arri's reluctance to release a true 4k cinema camera. If they did that, they'd end this whole debate and RED wouldn't have a leg to stand on. However, because the Alexas are all under 4k (even if only slightly), it makes those idiots who 'require' 4k for "future proofing" look towards RED as a solution. Since nobody takes Blackmagic Design seriously, the URSA Mini is a waste of... wait a sec... the URSA Mini is a 4.6k, CinemaDNG RAW, 12-bit 444 camera that with ALL the accessories comes out to $8k. Hmm... maybe people will start taking Blackmagic Design more seriously? RED = Fail. URSA Mini = Maybe good? Only time will tell! Go Aussies! :)
  13. (Super 16) If you check out Pav's Vimeo page, all of his films have a similar softness to them. My guess is there's a common thread in there somewhere. I do agree there are some stock issues as well, but even with bad stock, the grain should be crisp.
  14. If all you need is an MOS camera, just call up Slow Motion Inc; they have at least one working 5-perf 65mm Fries camera. They have great movements and should work fine. Best of all, I don't think anyone knows they exist! :) http://www.slowmotioninc.com/#!rentals/c1alk
  15. Beaumont VistaVision cameras can't be used for sound either; they're way too noisy. You've gotta get the camera noise into the 40 dB range or less for sync sound to be useful. Most sync sound cameras produce around 20 dB. Plus, the VistaVision workflow doesn't really exist anymore. 65mm, in contrast, is pretty regular business at FotoKem and they've got a great workflow and competitive pricing that makes sense.
  16. Usually you don't go to companies for work as a DP/Operator. Generally most work will be adverts that you reply to with your demo reel and resume. Once on a shoot, it's very easy to get more work from the crew you're working with. Building connections is how you get more work. All it takes is landing on a good show and you could be set for a while. But getting on that good show can take a decade or more. That's why a lot of people have resorted to working freelance, owning their own equipment and making their own projects. So you need to separate yourself from being a DP/Operator if you wish to be a director. Those are two completely different jobs, unfortunately. If your goal is to direct, you need to make your own stuff and go to festivals with it, praying someone notices. It's a lot of work and most people work outside of the industry, shooting their own stuff on the weekends whilst paying the bills. In my view, you need your own equipment, you need a lot of talent and, most importantly, a lot of time to make it all happen. Gotta stay focused on producing an excellent quality product and simply make it happen. BTW, what's the "big city"?
  17. (Super 16) The transfer is odd; it looks like a 25fps telecine. I think that's really why the sharpness is missing.
  18. The only cameras on the west coast are the Panavision ones. Arri Rental on the east coast has the 765. There are also a few Fries 65mm cameras out there, but nothing for sound. The only sound cameras are the 765 and the Panavisions. Arri is going to stop renting them as well, so you may wanna call and plead with them.
  19. The 2012 Retinas have a faulty graphics chip, so the video stops working. Apple will warranty it forever as a consequence; just bring it into your local Apple store and they will replace the board once more. The Retinas are the only Mac laptops that have enough power. I wouldn't touch a Windows computer because you're limited on external storage options: USB 3 isn't fast enough, and there's no FireWire or Thunderbolt on a Windows machine. So good luck dealing with high-res files on a Windows laptop. Plus, it's got all the right features... The 2014+ models are a lot better.
  20. I just see a scanning issue. Just because the flatbed scanner says it can scan at X resolution doesn't mean its imager and glass are capable of doing that. The only reason why your 35mm material looks acceptable is due to its physical size compared to the smaller gauge. If you were to take that S8 material and drop it off at a lab for scanning, I bet you'd find it looks fine.
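To put rough numbers behind that size argument, here's a small Python sketch (the dpi figure and frame widths are approximate assumptions, not from the post): at the same claimed scanning resolution, a 35mm frame gets several times more pixels across it than a Super 8 frame, so any real-world softness in the scanner's optics hits the small gauge far harder.

```python
# Pixels across a frame at a given claimed flatbed resolution. The dpi value
# and frame widths are approximate assumptions for illustration.

DPI = 3200
MM_PER_INCH = 25.4

frames = {
    "Super 8 (~5.8 mm wide frame)": 5.8,
    "35mm (~22 mm wide frame)": 22.0,
}

for name, width_mm in frames.items():
    px = width_mm / MM_PER_INCH * DPI
    print(f"{name}: about {px:.0f} px across at a claimed {DPI} dpi")

# The scanner's real optical resolution is usually well below the claimed dpi,
# and that shortfall eats a ~730 px wide Super 8 scan far faster than a
# ~2770 px wide 35mm scan.
```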
  21. It really breaks down to how glass is made. All measurements and testing of glass is done from the center outwards. So the center of the glass will always be the sharpest, no matter what glass you're using (unless physically damaged). This means if you're only using the center of the glass to create an image, you're using the "meat" of the element. Cheaper lenses will have more out of focus edges and sometimes even artifacts. When you crop the glass, so only the center portion is used, those focus and artifact issues are less relevant. So on a full-frame sensor camera, you can't use cheap Rokinon glass without seeing these issues. However, on a cropped sensor, it's not a problem. This means you can OWN glass vs rent glass for your shoots, which is a big deal.
  22. The center of the glass is always the best. One of the biggest issues with large sensor cameras is that you need higher quality glass, because the imperfections in lower quality glass are more visible, especially around the edges with the aperture all the way open.
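As a rough sketch of the "use the center of the glass" point in the two posts above, here's a quick Python calculation of how much of a full-frame lens's coverage a smaller sensor actually records, and the equivalent framing. The sensor widths are nominal values I'm assuming for illustration; the ~2.88x figure matches the crop quoted for the Pocket camera in the next post.

```python
# Horizontal crop factor: how much of a full-frame lens's width a smaller
# sensor records, and what the equivalent focal length works out to.
# Widths are nominal/approximate values assumed for illustration.

FULL_FRAME_WIDTH_MM = 36.0

sensors = {
    "Super 35 (approx.)": 24.9,
    "Blackmagic Pocket (approx.)": 12.48,
}

for name, width_mm in sensors.items():
    crop = FULL_FRAME_WIDTH_MM / width_mm
    print(f"{name}: records the central {100 / crop:.0f}% of the lens width, "
          f"crop factor ~{crop:.2f}x "
          f"(a 25 mm lens frames like a ~{25 * crop:.0f} mm on full frame)")
```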
  23. I bought both of my Blackmagic Pocket cameras, lenses, mics, tripods, a shoulder mount kit, and all the batteries and cards you could ever want for around $3500 - $4000. All I shoot is documentaries today: multi-cam interviews, run and gun, lots of setups per day, etc., and I wouldn't dream of having anything other than the Pocket camera. You get 10-bit ProRes HQ or 12-bit CinemaDNG raw in 1920x1080 on standard, run-of-the-mill SD cards. The "cinema" look without the expense. You get 2.88 times the focal length on lenses, which means you can buy much cheaper glass and the imperfections won't show. Plus you can buy any glass you want: PL, Nikon, Canon, Zeiss, even C mount. The cameras are small, so you can go places most video cameras can't. Plus, and this is a big one, they're easy to operate. No fancy menus you've gotta flip through constantly to change settings; they're so easy. In terms of post production, I shoot everything in the raw color space as 10-bit 4:2:2 ProRes HQ, AMA link in Avid (no transcoding) and edit my show. When I'm done, I use the free DaVinci software to color my sequence and export for delivery. The workflow actually works well, and on multiple platforms as well: Mac or PC, it doesn't matter. With documentaries, it really doesn't matter what resolution you deliver. You'll be building a DCP out of DaVinci and it will automatically scale to 2k because that's the minimum size, and it's only 120 or so pixels different anyway. The last two documentaries I produced both sold to relatively big distributors without anyone even flinching when we told them our source was an HDCAM SR tape (1920x1080). So I wouldn't worry unless, as David points out, you're doing nature cinematography or something of that nature. I'd save the money, get 2 cameras for the price of one and stick with 1920x1080. I have lots of samples of this camera on my website (link below).
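On the "only 120 or so pixels" point, here's the arithmetic spelled out in a quick Python sketch. The container sizes are the standard DCI 2K values; the rest is just illustration.

```python
# How far a 1920x1080 master is from the 2K DCP containers.

hd = (1920, 1080)
dcp_2k_full = (2048, 1080)   # full 2K DCI container
dcp_2k_flat = (1998, 1080)   # 1.85:1 "flat" container

print(f"2K full vs HD width: {dcp_2k_full[0] - hd[0]} px wider")      # 128 px
print(f"2K flat vs HD width: {dcp_2k_flat[0] - hd[0]} px wider")      # 78 px
print(f"Upscale to flat: {100 * (dcp_2k_flat[0] / hd[0] - 1):.1f}%")  # ~4.1%
```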
  24. Cool! That would be a lot of fun! I have some camera negative that I use for testing cameras; I can spool some off if you want. However, the camera rental place will absolutely have some, so I'd just ask 'em when you pick up the camera. They need it for testing the cameras and training; it's very common. You'll have to grab a good meter. I suggest one with spot capability because that's how video cameras work, and people who are used to shooting video may understand that type of metering more. In terms of actually buying stock to shoot, http://www.filmemporium.com has stock for pretty good deals. It's short ends and out-of-date stuff, but it will work fine for you guys. I highly suggest shooting positive because it would be awesome to project what you shoot. I'm sure some others will chime in on this topic as well. ;) Glad you're taking the initiative; I think everyone who goes to film school should learn on film. Digital is easy, and if you learn film, you will find it very easy to make the transition to digital.