Everything posted by Tyler Purcell

  1. Anyone with a laptop? We had the last-generation MacBook Retina on set for the feature I just finished cutting. It was a 2.8GHz quad-core i7 with 16GB of memory, and we had very fast Thunderbolt storage. It was transcoding at around 6fps from XAVC 410Mbps 4k to Pro Res HQ 175Mbps 1080p. It would have taken weeks to do what my tower did in a few days. Honestly, with modern software like Final Cut X and Premiere, the LUT is applied within the software automatically, so you don't need to transcode anything, which saves tremendously on time. Again, on bigger shows with a lot of moving parts, time isn't a problem because you can fight it with manpower and hardware. It is a problem on smaller shows where you don't have the money and you've gotta turn around product in a week or two WITH client notes. Let's face it, most shows these days are small.

I rarely get to work on bigger shows and honestly, I don't like it because I get frustrated being pigeonholed into one job. I like being a cinematographer/operator and editor/colorist. It's fun to have so much creative input and work with the director on multiple levels, on set and off. You're a true collaborator when, as a cinematographer, you're thinking about how something will cut together. It's one of the reasons I like working on smaller, more intimate shows with directors who are open to having one main collaborator.

I shoot most of my personal projects in LOG with my Pocket Camera. I edit the whole thing in LOG directly from the camera originals. I then push the project to DaVinci, apply a LUT to everything, do some basic correction and spit the project out of DaVinci. Yea, it doesn't look pretty whilst cutting, but I know it will look fine in DaVinci, so I'm not concerned and there are no clients over my shoulder. 
On bigger shows with clients over your shoulder, I understand the necessity of having good color for them. It's a huge problem we have today in this industry: people need to see what the final is going to look like even before it hits coloring. I waste so much time doing pre-color and cleanup shot replacements whilst I edit because someone on set screwed up and the shots aren't even close to matching.

Each codec has a specified dynamic range. Nobody really discusses this because everyone shoots LOG today, so the dynamic range is compressed into the codec. However, I've been doing lots of tests and have studied the codecs very closely. There is absolutely a pattern: higher bit rate, higher bit depth, wider color space codecs render more of the imager's dynamic range than their lower bit rate, lower bit depth, narrower color space alternatives. E.g., 10 bit 4:2:2 XAVC iFrame 410Mbps vs standard 8 bit 4:2:0 AVC Long GOP 50Mbps, using the same camera/imager. It's hard to do these tests because not many cameras support both a high-quality and a super low-quality codec, but some, like the Canon C300 MkII, do. Next time I get my hands on one, I will record a test to show what I'm talking about. Until then, there is no reason to bicker.
  2. Well, on the last two shows I worked on, we were delivering scenes during production and had quite a bit to watch the day after the show was done. This is the new way to make low-budget shows: you literally cut as you're shooting. In order to make that workflow possible, you've gotta work with camera original formats that work natively with fast editors like Premiere and Final Cut X. With those editors, we can bang out scenes so fast, it'll make your head spin if you're watching from the sidelines.

On the last industrial show we just wrapped, it was 5 x 20 minute shows, wall-to-wall dialog, 2 cameras with one AC between the two of us. I was the cinematographer and A camera operator, and the AC was the DIT. I'd bring the drives home at night and transcode right off the A drive set to my raid. It would just about finish before I had to leave each morning. So 24hrs after the shoot was over, I'd have everything in Avid and ready to edit. I had first cuts of all 5 shows by the end of the first week. That's how fast we do things today, because sometimes client notes take a week to turn around and we've gotta get moving onto the next project.

So here is the math... It was 10TB of total footage = 3300 minutes (A and B camera). The transcode engine (with LUT applied) was rendering at 14fps, so one second of footage took around 1.5 seconds to process. That means it took around 4500 minutes to process everything, so 3 days +/- a few hours to transcode it all. Then we had to sync all of the audio. I used Pluraleyes for the bulk, but we had wind noise on the camera tracks and no timecode, thanks to the cinematographer and audio guy not thinking it was important since we had a timecode slate... dumbos. So I had to sync around half of the project by hand, which is very labor intensive. That's why it took 9 days: 3 days to transcode and 6 days to get the audio synced so we could start editing.

The show we just wrapped, I was the DP, so we had audio fed to the camera via wireless and the timecode matched on all the equipment, including the slate. Obviously with digital that's the only way to do it, and it's annoying when people choose not to.
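For anyone wanting to sanity-check that transcode math, here's a rough sketch in Python. The frame rate and decimal units are my assumptions, not stated in the post; at 23.98fps the implied runtime lands right around the 3,300 minutes quoted, while the transcode wall clock comes out a bit over 3 days, depending on rounding.

```python
# Back-of-the-envelope check of the transcode math above.
# Assumptions (mine, not from the post): footage at 23.98 fps,
# decimal units (1 TB = 10^12 bytes), XAVC 4k at a constant 410 Mbps.

FOOTAGE_TB = 10          # total camera originals
XAVC_MBPS = 410          # megabits per second
SHOOT_FPS = 23.98        # assumed shooting frame rate
RENDER_FPS = 14          # transcode speed reported above

# Runtime implied by 10 TB of 410 Mbps material
bytes_per_sec = XAVC_MBPS / 8 * 1e6             # ~51.25 MB/s
runtime_sec = FOOTAGE_TB * 1e12 / bytes_per_sec
runtime_min = runtime_sec / 60                  # ~3,252 min, close to the 3,300 quoted

# Transcode wall clock: total frames divided by render speed
transcode_sec = runtime_sec * SHOOT_FPS / RENDER_FPS
transcode_days = transcode_sec / 86400          # ~3.9 days at 14 fps

print(f"runtime: {runtime_min:.0f} min, transcode: {transcode_days:.1f} days")
```

At 14fps the cost per second of 24fps footage is really closer to 1.7 seconds than 1.5, which is why this estimate lands closer to 4 days than 3; either way, the order of magnitude matches.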
  3. I'm discussing playback, which is a "decoding" process. Playing back 1:1 R3D files in real time requires special hardware. My system will make LUT-applied Pro Res XQ files from 4k RedCode at around 30 - 35fps. Most computers without the special hardware are around 5 - 9fps.

When I say "master", this refers to the final edit and color correct with embedded multi-track audio from the final mix. This will be the file used for the entire life of the product. Nobody goes back to the camera originals unless the "master" has been lost somehow - which has happened many times.

For television, they want the master to be the same format as broadcast, so again Transport Stream 50Mbps 59.94 1080i. We'll also do DNX115 29.97 or 59.94 1080i deliverables for syndicated shows, as this is the codec MOST shows finish in, so many distributors will accept it today. Once one of these files is handed off, we will make an archive Pro Res master and textless master with splits. We'll also do a decompose/consolidate and store it on LTO.

For commercial work, almost all of it is done in Pro Res, with the exception of a few Avid editors who refuse to switch from DNX because they have older systems. We'll make final deliverables based on the requirements; 9 times out of 10 it's Pro Res HQ, with a few companies asking for lower-res DNX files as well (generally DNX115, like broadcast). Again, once finished we'll make textless masters with splits and decompose/consolidate. The entire workflow is Pro Res though, and the masters are 100% Pro Res.

For industrial work, it's slightly different because most industrial clients don't know what they're talking about, so they don't have specs. We generally shoot and edit in 1080p 29.97 and deliver Pro Res masters with .h264 QuickTimes at varying resolutions, with textless masters and splits. Again, the masters are always Pro Res. For feature, documentary and narrative, the workflow is pretty much the same. 
The only difference is making a DCP, which is not required for any workflow but cinema. The DCPs are generally sent to a different department than the master, as we have different Aspera logins. The master hand-off is 9 times out of 10 Pro Res, with some clients using DNxHR as a backup in case the post house is using PCs. Most content distributors - not ma and pa clients, but the big boys - pay for a 3rd party to make those files during the archiving process. All we do is upload the files to Aspera and the rest is done through the archiving agency. They also QC the files to ensure there aren't any technical issues, and they'll securely upload the necessary files to the different delivery locations.

- A "master" is the final finished, edited and color timed show.
- "Camera originals" are the original camera files.
- "Camera transcodes" are the files an editor will use to cut with.
- "Mastering" is the process of taking the camera originals and conforming them to the edit.
- "Deliverable" is a file a client requests, based on the "master" in most cases.

I think you'd be surprised how many shows use Pro Res camera originals and/or transcode from camera original to Pro Res and never look back.
  4. Yea, we just did a film with brand-new Vision3 7203, exposed perfectly, and the print had actual grain, but the 2k scan was practically grain-free. I know the scan was RAW and colored by some bloke over at FotoKem, so maybe they applied some NR? I can usually see that, but I didn't see any signs of noise reduction. We also shot '13 and '19 on the same show, and all of it looked pretty much the same when printed. The '19 stood out on the digital scan because it was slightly underexposed, but you couldn't tell the difference between the '03 and '13; they were nearly identical on the scan. Probably thanks to proper exposure... something a lot of people these days don't understand. I've got some 35mm stuff, but no way to project it. My flatbed is down for the count and I don't have a projector locally I can throw stuff up on. I will at some point soon though; I'm looking high and low for a portable 35mm projector for my school.
  5. I've done quite a few work prints from 16mm 7213 and 7219 sources. Honestly, the '13 is finer grained, but the '19 looks totally fine if you expose properly. It falls apart when you try to push the stock too much in how you light and expose. I've pushed '19 two stops before and it wasn't nearly as bad as I expected it to be. At the same time, I've been working quite a bit with 7203 recently and I'm not enamored with the difference between it and the '19. The prints appear to be just as grainy; it's only when you digitize that things look crisper on the '03. Point being... the added grain from the print really changes everything. I used to get work prints from my 35mm commercial work, but we'd use a flatbed to view them, so it's hard to know what they'd look like on a big screen.
  6. Yea, I saw my mistake. I misread the article and assumed based on what I read that Deakins uses Pro Res whilst shooting. In reality, the entire workflow is Pro Res until final color, where they go back to the Arri Raw and color that. Sorry about the confusion.

Most features are mastered in Pro Res XQ. It's the "common delivery" format today, with DNX being the "backup" deliverable. For television, Transport Stream 50Mbps 1080i 59.94 is what most people will request. Most studios also request DCPs, but that's ONLY for digital cinema projection; it can't be used as a master like what you'd use for marketing and to make Blu-ray/web versions from. That's why they request a more mainstream codec.

Shooting is a different story; there really is no good reason to shoot Pro Res vs Cinema DNG, RedCode and Arri Raw. The file formats aren't much bigger for RAW, and obviously there is a huge advantage to RAW being frame based, which means less likelihood of corruption. With that said, we're seeing a steady trend of more and more shows either being shot in Pro Res or being transcoded and never going back to the camera originals. These aren't necessarily features, though there have been a bunch of well-documented examples of this workflow on major features like 'Focus', whose workflow I'm intimately aware of because I had meetings at Light Iron to help them develop it years before they shot the movie. The only mistake I made was with the Deakins quote, and you couldn't even spell his name right.

Actually, the JPEG2000 codec that RED uses requires special hardware to decode at full resolution. Most high-end GPUs can handle it without too much fuss, but there is a pretty heavy cost associated with getting it playing back 1:1. Pro Res 4k is a multi-threaded codec that decodes using the CPU. So if you have a crappy processor, it won't decode smoothly. The more cores and threads you have available, the better it decodes. 

I don't have a crazy computer, but it decodes every codec flawlessly outside of XAVC 410Mbps 4k. That's the only one I've ever found to simply not work properly.
  7. Umm, yes, that's what I'm talking about. I know it sounds odd, but it's a VERY common thing to do today. Computers are very much up to the task, and on very fast turnaround shows, where you literally can't afford the transcode time, you need to edit material right away. Also, when you need to deliver a cut 24hrs after the shoot finishes, you simply don't have the time to futz around with transcoding. All of your offline editing duties should be syncing, logging and doing an initial cut, not waiting 4 days for everything to transcode. That sounds like an exaggeration, but it's really not. As I said before, it took 9 days to turn around the material from XAVC 410Mbps to Pro Res 1080p HQ, with a very fast Mac Pro chewing on it 24/7.
  8. That's exactly the workflow, never said anything else BUT that. Everyone who uses XAVC transcodes and it works fine if you do that. Not trying to say anything, XAVC 410Mbps iFrame 4k has plenty of dynamic range.
  9. Yea, they change cards all the time. I've been working on features and television shows in the camera department for years. I actually developed one of the original real-time replay systems for early digital cinema. Everyone swaps cards; there is still a "loader" on digital sets, it's quite amazing. I haven't seen that CFast-to-SSD box before; that thing looks brand new. I know a lot of top DITs who work on huge movies and they've told me that even on bigger 4k shows, they rarely go past 100TB. Heck, the show I'm cutting now is a 4k feature shot with 2 cameras and the bulk media is only 10TB. The biggest show I've personally worked on was a bit less than 50TB of camera raw. We condensed it down to 4TB after transcoding from 4k to 1080p for editing. So yea, I don't think it's a problem... just don't shoot a 100:1 ratio! LOL :P
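As a rough illustration of why shooting ratio dominates storage, here's a hedged estimate in Python. The 90-minute runtime and 10:1 ratio are hypothetical round numbers, not from any specific show mentioned above:

```python
# Hypothetical storage estimate for camera originals.
# Assumptions: constant-bitrate codec, decimal units (1 TB = 10^12 bytes).

def bulk_media_tb(runtime_min, shoot_ratio, mbps):
    """TB of camera originals for a show of a given finished runtime,
    shooting ratio and codec bitrate (megabits per second)."""
    footage_sec = runtime_min * shoot_ratio * 60
    return footage_sec * mbps / 8 * 1e6 / 1e12

# A 90-minute feature in XAVC 4k 410 Mbps:
print(round(bulk_media_tb(90, 10, 410), 1))    # ~2.8 TB at a 10:1 ratio
print(round(bulk_media_tb(90, 100, 410), 1))   # ~27.7 TB at 100:1
```

Same codec, tenfold difference in storage, purely from discipline on set.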
  10. Well, you're right, if all you do is TV work in 100Mbps XAVC iFrame 1080p, most editors can use that media natively with a plugin from Sony. However, in this thread we weren't talking about 100Mbps XAVC iFrame, we were talking about 4k, which is 410Mbps, which is quite a different beast altogether. While it's absolutely true that editing software accepts XAVC 4k, ALL of them need to make transcodes of some kind or another. This is because XAVC does not have a multithreaded playback engine. Since there is zero native support within Windows or Mac OS, the integration comes down to code development. Apple has spent millions developing a multithreaded iFrame codec. Avid has spent quite a bit on DNX, its own intraframe codec, which decodes on multithreaded systems perfectly. Yet Sony and Canon haven't bothered, because for some reason they are the only two companies pushing XAVC. The plugins simply aren't a priority for them, and from looking at how they process the material, it seems as if they're 32 bit as well.

I already produced a video about how poor the XAVC Long GOP codec is. I'll gladly produce another video to show you how poorly XAVC 410Mbps 4k performs without transcoding. It's a bit harder to do because it eats up all the system resources, leaving not enough to do high-quality screen grabs. That's basically the only reason I haven't bothered doing it in the past, since this argument comes up all the time between us. You are an avid XAVC guy, but that's because you have XAVC cameras. I shoot with everything - RED, Alexa, Blackmagic and Sony - so I get to see which codecs are better and what works in post, since I generally edit what I shoot. This is how I come to my opinion, which is not tainted by the cameras I own. Ohh, and pretty much every feature made in the last 5 years was mastered in Pro Res XQ. Even the great Roger Deakins shoots everything in XQ, not bothering to deal with Arri Raw. 

I do about 2 - 3 feature deliverables a year, on top of my regular work, which is industrials and commercials. I have yet to deliver anything that wasn't Pro Res as a master. Sure, clients will ask for "low quality" MPEG files as well, but the masters are always Pro Res. It's also true that television shows are MPEG from start to finish, but that's because they're 1080p and XAVC works great for those clients. If your product is going to be shown once as a premiere and then maybe on syndication a dozen more times, the requirements are a lot more lax. At that point, it's more about speed than anything else. If those shows were forced to deliver 4k, their world would explode. The last TV series I worked on, we delivered on tape, so yea, about that. :)
  11. You could always go with more of a film noir look, a classic '50s studio look? I think the under-lit look is overdone these days.
  12. I'm so excited about this new film stock. I can't wait for you guys to have the lab up and producing stock. Will you be selling direct like Kodak does, or will you have dealers? Also, when will you have test samples available? I run a film school here in LA and we would love to get our hands on something new and cool like this. We shoot a lot of 16mm film and can push quite a bit of great content your way, shot by some fantastic students, to help promote the stock.
  13. Yea, they transcode everything. Which on a big project, takes manpower, CPU power and lots of drive space because you're basically duplicating media. For smaller productions that need instant results, it's a problem. Otherwise 410Mbps XAVC iFrame does look fine for most situations. The amount of dynamic range is about equal to Pro Res HQ.
  14. Yep, LTO is currently the best method of long-term storage once you're finished with the project.
  15. I wonder if HBO requires 4k for all shows done internally and that's why they couldn't use S16... Amazon, Netflix and Cinemax do.
  16. Yea, the cameras can write to both cards mirrored, like a RAID 1, so both cards get the same data. You can't connect anything to a camera to record CFast that I know of.
  17. With professional shoots, the footage is ingested on set as they are shooting. This is the job of the DIT, and using CFast 2.0 cards with a Thunderbolt reader into Thunderbolt storage, a 512GB card takes around 15 - 20 minutes to download! It's super fast. They'll use two identical raid solutions and software like ShotPut Pro, which distributes the files between two drives at once. They'll also use DaVinci to do the low-res transcode on the fly. Thunderbolt has been the "industry standard" for on-set DIT work since it came out a few years ago. As a consequence, most DITs use Macs because there is no comparable portable substitute for PC/Windows machines. USB 3 is too slow, eSATA is too slow, FireWire is WAY too slow, and almost all of the other solutions, like 8Gb fiber and a multitude of direct-attached solutions, aren't designed for portable use.

At home my computer has 6 x 6Gbps SATA ports on the motherboard. So I can run a boot drive, an optical drive and 4 storage drives as a raid 0. I can put 4 x 8TB drives in there if I want and get 32TB of storage without the cost or lack of speed that's common with external storage solutions. My current system has 3 x 4TB drives, which raided together is plenty of room for a 4k Pro Res XQ feature. It's around 550MBps throughput, and I have a super fast USB 3.1 card in my machine, so 512GB cards download in around 40 minutes.

The way I work is to use those little 2.5" portable drives on set. I'll download the cards of whatever camera original we use, through ShotPut Pro, onto two 2.5" drives. I'll store the A set in my safe and the B set will go with the director. Every night after shooting, I'll run a batch transcode overnight onto my clean internal raid. It's risky to have the media out overnight, but it's a low risk compared to the alternatives. I rinse and repeat every day on set, until the drives are all full. This last show we had 10 2TB drives: 5 for the A set and 5 for the B set.

By the time the show is finished, everything is already on the editing bay ready to roll, and the A set will then permanently live in my safe. Since everything has already been transcoded for editing, I will then duplicate the transcoded media and put that in my safe as well. This way if anything goes wrong, it's all good. I will pull it out every once in a while as assets are added, to ensure the backup and the master are identical to one another. On this last show, the director wanted a "portable" version of the transcode, so I bought an inexpensive RAID 5 eSATA/USB3 array, so he could be working on stuff whilst I was. During finishing, we simply plug in all the drives with the camera originals and re-link them in DaVinci. Then we duplicate JUST that media onto the working raid and color the film. Final output comes out of DaVinci after correction.

Unfortunately, there isn't any inexpensive automatic download system. This is because you still need some operating system to make storage work properly. So why not use a cheap computer? 80% of the time I use standard SD, SxS and XQD media. CFast is a bit more rare; it's only on a few cameras. The other 20% of the time, it's RED media, which is expensive and kinda slow. RED has newer cards for their newer cameras, but for the cameras I use, the cards are pretty slow compared to CFast. Hope that helps.
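The offload times above follow directly from sustained throughput. A small sketch; the USB 3.1 reader speed is my assumption (backed out from the 40-minute figure), not a measured number:

```python
# Card offload time from sustained copy speed.
# Assumes decimal units (1 GB = 1000 MB) and a steady transfer rate.

def offload_minutes(card_gb, mb_per_sec):
    """Minutes to copy a full card at a sustained MB/s rate."""
    return card_gb * 1000 / mb_per_sec / 60

print(round(offload_minutes(512, 550)))  # Thunderbolt raid at 550 MB/s: ~16 min
print(round(offload_minutes(512, 215)))  # assumed USB 3.1 reader speed: ~40 min
```

Real cards rarely hold their rated speed for a whole dump, so the 15 - 20 minute range quoted above is about right for Thunderbolt.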
  18. Remember, if you're looking for the S16 look, big S35mm-sized imagers in 4k are the polar opposite. The field of view will look like 35mm, not 16mm. If you use longer lenses, nobody will notice the difference, but it's a huge difference on the wider side of things.

The other thing to think about is the workflow. The XAVC and Sony RAW workflows are atrocious; they need transcoding in order to work well with any editing program. I've been cutting with XAVC material since its inception and I refuse to use that codec after the headaches I've had over the years. The Sony Raw codec is no different. You can't use programs like Pluraleyes to do automatic audio syncing with XAVC iFrame or Sony Raw either. At least companies like RED offer solutions. I have a Red Rocket card in my tower that allows full real-time integration of up to 5k files with most editors, without any transcoding. Blackmagic shoots Pro Res, which is native for every editing program on the market today, including Avid. This is why my workflow is 100% Pro Res today. The files work perfectly with all the programs and with Pluraleyes for automatic audio syncing. Premiere has auto sync as well, but it's nowhere near as good.

These are critical things to think about when choosing a camera, because post production can either be smooth and simple, or a complete nightmare. I spend most of my time editing and I'd rather have less of a camera that's better integrated into post than a camera that needs conversion. Remember, every time you convert, you lose quality, because each codec puts its own spin on the files. I like to work with camera originals if I can, but it's hard when you're dealing with codecs that require a lot of horsepower to decode. Those are just a few things to think about. It's why when I shoot 4k, I work with Red, Arri or Blackmagic cameras; they have a simpler workflow that's designed for post production. As a side note, the A7SMKII is an 8 bit 4:2:0 Long GOP MPEG camera, no raw. 
It doesn't get close to matching the F55.
  19. How about the Blackmagic Pocket camera and ol' 16mm cine primes? They work great on my Pocket and you can absolutely achieve the look you're after, without spending the money on bigger kits. The Pocket has a real Super 16-sized imager, shoots RAW and pretty much takes any lenses you want without fuss. I put Arri B mount S16 glass on mine and it looks awesome. Kick the gain up to 800 and it has a pretty filmic look. Worth checking out for sure. Here is a little sample of some BTS material I recently shot with that package, a Super 16 Zeiss 12-120 zoom and the Pocket. This is shot Pro Res HQ 1080p 23.98, 200ISO and 45 deg shutter, with a .6 IRND and 1.0 grad. https://www.dropbox.com/s/xonfgpua4u2kqyn/Blackmagic%20Pocket%20with%20Zeiss%2012-120.mov?dl=0
  20. At least in space, the weight of the RED isn't as noticeable. LOL
  21. Even on short subject pieces, we break down the script and go into great detail. It starts with a location scout, lots of stills and a detailed analysis of the shots to be performed and where. Then it's down to designing the lighting rigs based on the notes and storyboards. Most of the time you'll do this in your head and just list the equipment you need. For more complex shoots, you'd draw a simple diagram of a lighting rig to understand it in greater detail. There is also previs software designed to help build lighting rigs, but for normal non-VFX narratives, I doubt it's used much. Prior to the shoot, your gaffing team will be brought up to speed on your preproduction notes, giving their feedback based on what they see and putting together solutions to solve issues. It's so critical to have that team on board prior to the shoot date because generally, they are the ones ensuring you have the right equipment on the truck. If you've done your homework, shooting should be very straightforward: a brief morning meeting prior to setup to remind people, based on your notes, and the gaffing/electrical team will get to work. Obviously that's the best-case scenario. Reality can be much different, depending on budget and lead time.
  22. Looked good, shows what lots of color correction, a gimbal and slow-mo can do. :) The jitter was probably just a side effect of the mpeg encode in post. Sometimes it can't deal with those things very well.
  23. The variable ND still puts another element in front of the imager to shoot through. I've had nothing but problems with the FS7 and dust collecting on the imager. You have to dismantle the whole front of the camera to access it and blow off the imager. It's a shitty design in my view, and the variable filter puts an even bigger roadblock between the back of the lens and the imager. As a cinematographer, I don't want the camera manufacturer to put a permanent object between the imager and the back of the lens. Removable, like a RED OLPF? No problem. It would be nice to have a variable ND that screws into the camera body, so it's easy to remove.
  24. I think they're doing proxy files online, Perry; I heard something about that, and flash storage delivery of final files with your film. I mean, I'm constantly uploading and downloading 20 - 50GB files; it's just the norm in today's world. We use FTP services and it works fine with standard ol' cable modems. I get around 3 - 5MBps on my modem and it's nothing special.
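For a sense of scale on those uploads, a quick sketch; the 4 MB/s figure is just the middle of the 3 - 5 MBps range quoted, not a measured number:

```python
# Upload time for large deliverables over a cable modem.
# Assumes decimal units (1 GB = 1000 MB) and a steady rate.

def upload_hours(size_gb, mb_per_sec):
    """Hours to move a file at a sustained MB/s rate."""
    return size_gb * 1000 / mb_per_sec / 3600

print(round(upload_hours(20, 4), 1))  # 20 GB file: ~1.4 hours
print(round(upload_hours(50, 4), 1))  # 50 GB file: ~3.5 hours
```

So a full set of deliverables is an overnight job on an ordinary connection, which is why FTP hand-offs are workable.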
  25. My contacts at Kodak say the Super 8 files will be 2k. Remember, one roll isn't very long at 24fps, so the files won't be very big. I don't think it will affect YOUR business at all, but I hope it affects those people charging $250/hr for uncolored 1080p telecine, like Pro 8 here in Los Angeles. They've cornered the Super 8 market and they're very expensive as a consequence. They claim to have the only Super 8 telecine machine around and I'm like, dude, everyone has one today. I've done some archival 16mm work with them and not only were the results not very good, but they charged $1500 to transfer an already-timed print that was only 22 minutes long.