Tyler Purcell

Posts posted by Tyler Purcell

  1. 12 hours ago, Dom Jaeger said:

    He made six cameras in total. They were not quiet, designed mainly for plates and effects. They were used for certain shots in Babe 1 and 2, among other films. He showed me a photo from the Babe shoot, where they had the camera mounted sideways for a vertical shot that would allow panning down in post. He had to design a special mag that could run on its side for that shot.

    Yea, it's slightly more than vaporware, which is unfortunate because it seemed like a great idea. He checked nearly all of the boxes, but clearly never got the cameras quiet enough for actual sync-sound production, which is a shame. I'm all for people making something new, but it needs to be better than what existed before, and this evidently just didn't fit the bill. My gross assumption is that it came down to cost and good ol' Hollywood not needing MORE VistaVision solutions at a time when there were PLENTY of MOS VV cameras at rental houses. It seems like someone did wind up using it in Europe, but who knows if any actually sold. 

    Thanks for the pictures; these are the only ones I've ever seen after researching this camera for years and coming to my own conclusions, since nobody had ever seen one. 

  2. 9 hours ago, Dirk DeJonghe said:

    We had problems with Aaton XTR with 7222, it shows up as vertical stripes on panning shots over a grey wall for example; this can happen if you overexpose. The light striking the emulsion is travelling through the emulsion and getting reflected off the shiny parts of the film pressure plate and exposing the emulsion from the back a second time.  Never a problem with regular rem-jet color negatives.

    I've shot a bit of 7222 and never noticed it; I guess it depends on exposure. Also, 7222 has a different backing. Kodak has already said they're using a silver backing on this new stock to prevent the added halation, so hopefully it works? It looks more like the Ektachrome backing, and having shot a bunch of that, I've never seen it there either. 

    The Aaton 35mm cameras have chromed pressure plates, so that would be a problem. 

    I need to first see the results and get feedback from people using chromed gates. If there are no real issues, then we'll all just move on. If there are issues and people have cash to deal with them, I can easily have new pressure plates made for the Aaton cameras, no big deal. I have all the specs; it would be a cinch to model them and get them made with a DLC coating. I'm pretty sure there won't be any issues, because the stock is evidently already being used on a wide array of productions without modified cameras. 

  3. 2 hours ago, Dom Jaeger said:

    Don't Aaton XTR mag pressure plates have two chrome strips running down the middle? LTR mags have four chrome strips if I recall.

    That evidently doesn't affect anything. 

    2 hours ago, Dom Jaeger said:

    If memory serves Clairmont Camera had a bunch of 35-3, 435 and Arricam pressure plates blackened for use with B&W film. I wonder what happened to them.

    Yea, well Andree didn't get them; he's trying to get some made. So who knows. 

  4. On 4/22/2025 at 5:12 PM, John Rizzo said:

    Tomorrow myself and some local DPs are going to TCS and will be testing the non-remjet 7219, 7207, 7213 and 50D on an Arri 416. These DPs are concerned because most 16mm cameras have chrome backplates. 

    Aaton 16mm cameras don't have chrome backplates; they were specifically designed to deal with black and white film, which has no remjet. Arri cameras have chromed backplates across the board. I know some people did make modified pressure plates which were black, but there aren't enough to go around. My main concerns are the Arri 2Cs, Arri 35-3, 435, 235, Arricam, Moviecam, 416, SR, etc. We have some ideas on how to solve this problem, but without tests I don't know if it's worth it. Kodak currently has the new stock available, but they won't give me a roll to test. 

  5. 10 hours ago, Dan Baxter said:

    Well I wouldn't call it a "trap", that's how most scanners worked and it's why upgrading or changing the optical module costs $90K+ in some of them. I'm sure some of it was existing Cintel International IP too.

    Well yea, you can't really upgrade line scanners that easily. 

    10 hours ago, Dan Baxter said:

    You should look up the costs involved to replace one of the logic boards in a Spirit or a Scanity before declaring it the superior design. There are advantages and disadvantages, and if you're using an 8K CCD line sensor for 1080p, 2K and 4K then of course you want to stack it in the scanner and downsample - but that design is obsolete. All the new scanners use area sensors, and they are all able to make use of the full resolution of their cameras. As you say, the only limiting factor there is the bandwidth for the raw data flow.

    Line scanners, though, are kind of a whole other world. The Cintel II is a single module and imager, so it's really not the end of the world. The only thing different from doing an imager upgrade in a Lasergraphics is the added magic box. 

    10 hours ago, Dan Baxter said:

    The FPGAs/logic boards in video cameras are generalisable. In other words, you can use the same (or nearly the same) components in a heap of different products. The scanner-specific tasks like perf detection and stabilisation to the perfs are not of any use to any other product. The pixel-shift stabilisation isn't done in hardware anyway; the calculation for it is done in hardware, but the result is stored in the .CRI metadata and then Resolve does the actual stabilisation.

    Well... they now have full AI auto tracking/AF, so that tech can easily be used to stabilize to the perfs. Canon and Nikon use the same tech to de-warp lenses; it could be used to de-warp film. So I do think there is merit to the example I used, it's just not implemented by anyone. Scanner companies do not think like restoration technicians, and they need to. The first company to release a scanner that you feed film into and get a fully restored file out the back end (within reason, obviously) will win this battle. While it's true scanner companies have attempted to sell concepts to help hide dirt/scratches, today with AI tools there is no reason why a lot of the cleanup can't be done on the fly. I'm not saying it would necessarily be done in hardware, but you get my point. 

    I was told multiple times by several people at BMD, over multiple years and dozens of industry events, that the entirety of the process is done ON SCANNER. They proved it by going into the real-time logs and showing that Resolve is literally doing nothing when scanning. I also asked them why their stabilizer tool doesn't work anything like the scanner tools, to which they said "because it's all done on scanner." Now, obviously playing back the file is different; it does use a tiny bit of resources, because the CRI files are compressed. But I know the CRIs are not stabilizing on playback. 
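    For what it's worth, the per-frame pixel-shift idea itself is mechanically simple, whichever box ends up doing the work. Here's a toy numpy sketch; the offsets and frame data are invented for illustration, not pulled from any actual .CRI metadata:

```python
import numpy as np

def apply_pixel_shift(frame: np.ndarray, dx: int, dy: int) -> np.ndarray:
    """Shift a frame by whole pixels, padding the exposed edges with zeros."""
    shifted = np.zeros_like(frame)
    h, w = frame.shape[:2]
    # Work out the overlapping source/destination windows for a (dx, dy) shift.
    src_x = slice(max(0, -dx), min(w, w - dx))
    src_y = slice(max(0, -dy), min(h, h - dy))
    dst_x = slice(max(0, dx), min(w, w + dx))
    dst_y = slice(max(0, dy), min(h, h + dy))
    shifted[dst_y, dst_x] = frame[src_y, src_x]
    return shifted

# One tiny frame plus a hypothetical per-frame offset from scanner metadata.
frame = np.arange(16, dtype=np.uint8).reshape(4, 4)
stabilized = apply_pixel_shift(frame, dx=1, dy=0)
```

The point is just that applying the shift is cheap; the expensive part is computing the offsets from the perfs in the first place.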

    10 hours ago, Dan Baxter said:

    You're also more dependent on the manufacturer with the logic boards. If they leave the market, or don't provide support, then you need access to another machine to copy the "software" off the FPGA to repair your own one.

    The only reason Blackmagic has a custom CMOSIS 4K camera is because they wanted a 4K CMOS camera for their machine in 2014, they wanted it to do 24fps, and they wanted to sell it for $30K retail. Those goals are the limiting factor.

    They've never sold the machine-vision camera retail for other applications. I imagine the next one will be exactly the same - they may use a better sensor, but they won't sell the camera itself retail. That's not a clever design when they can use an off-the-shelf product instead. And they're not going to sell it as a product because they don't compete in the machine-vision camera space and that market is already very competitive.

    I was told that camera is identical to the original 4K Blackmagic camera. Not sure how true that is, but they did not develop the imager at all. 

    Yes, they should have updated the black box by now. Imagine how many 2K scanners were thrown in the trash for the same reason? 

    10 hours ago, Dan Baxter said:

    I don't know how you can say that the noisy CMOSIS cameras sold in a brand new scanner in 2025 is acceptable. LaserGraphics moved on from the 5K CMOSIS that they were using almost six years ago. Kinetta and DCS support what the customer wants and Filmfabriek uses Sony Pregius S. Blackmagic are quite literally the only scanner manufacturer that is still using an optical module from 10 years ago in brand new scanners.

    Yea, I mean it's a big problem, no argument. They make it work by doing real-time HDR scanning, which hides the issues with the noise floor. I'm not gonna sit here and say their imager is good, because it's not, even in HDR mode, but I have been pretty happy with the results over the years, if you don't actually care about getting 4K out of it and you understand its limitations. 

    10 hours ago, Dan Baxter said:

    Whoever told you that was probably making assumptions. How do you do "most of the debayer" inside or outside? You either do it or you don't. The CRIs are 12bit encoded using JPEG-SOF3 (lossless), not debayered, and not pre-stabilised either - they contain metadata for the frame-by-frame pixel shift to be done in software. So they're really not doing as much as you think they are inside the scanner.

    Partial debayer is when you deal with de-noise, profiling and edge reconstruction in the imager. This saves considerably on data AND, most importantly, means your computer does not need to do those tasks, which lightens the load. 

    10 hours ago, Dan Baxter said:

    In saying this, part of the core issue is that basically nothing supports .CRI other than Resolve. If Blackmagic delivered .DNG instead they'd solve a lot of problems, but they chose to use their own proprietary/not-well-documented format.

    Yes, it's very proprietary. 

  6. They've been working on it for years. 

    Remjet is a big problem because it means MP and still film are basically two entirely different products. Without remjet, you can process both ECN-2 and C-41 without much consequence. This would mean one line to make all films. It would also mean still-only emulsions could be used in MP cameras (with proper perforating), which means more cross-pollination between 35mm stocks. 

    I see this as someone on their MP team wanting to keep the product alive and negotiating with corporate. I know the guys in CA have done a shit ton of work to keep MP prices down and to keep it existing at all. Corporate wants to raise MP prices to still-film prices per foot, which would basically end MP as we know it today. Possibly the way they determined to fix this is to simply lower the cost of the emulsion, and a great way to do that is to keep all the emulsions the same. 

    The new anti-halation layer supposedly doesn't work well; in initial testing, people are complaining about lots of halation, especially on the lower-speed stocks where there is so much light bouncing off the pressure plate. One of my friends is doing a test this week; let's see what he says. 

    It would be easy to fix cameras: the pressure plates would need to be DLC coated. It would not be difficult OR very expensive. The thickness is the only real problem, as we would not want to upset that. DLC can be made black AND non-reflective if necessary. It's also very slippery, so it would work great as an anti-friction coating as well. 16mm cameras may not be as affected, as many already have black pressure plates. While I'm worried this MAY actually raise prices AND cause people to back away from film, I see it as a means to an end for Kodak, at a time when the cost of goods is going to skyrocket. I have seen the writing on the wall for a few years now, so I've personally been divesting from film and buying digital equipment, so I can keep making films when Kodak finally raises prices to a point where I can no longer participate. 

  7. Got it: turn on the classic stabilizer, turn the cloud tracker on, set it to "rotate" and not zoom, then hit the left/right button on the left, hit stabilize and BAM, perfect. 

    The horizon on this, with the rotation of the camera, really screws with the built-in stabilizer. I noticed it's a 4K source; was this done with a Pocket 4K? It looks like it doesn't have the metadata for the auto stabilizer. 

    https://www.dropbox.com/scl/fi/2l7m83fw3ufjeusw8fu0z/Test-stablizer.mov?rlkey=cfu37n7b8lv8kqp9wkd05n4l0&dl=0

  8. 6 hours ago, Aapo Lettinen said:

    It is entirely possible to make a simple pulldown system from scratch. The simplest cameras, like Krasnogorsks, use this style of very simple movement, which should be pretty "easy" to make work OK.

    Yea, your drawings are basically similar to what I would do, even if it was 3D printed. One COULD theoretically just yank that assembly out of a K3, maybe cut the front of the K3 off and just power it with a motor; that may be the easiest way to go about it. The nice big flange the movement is mounted to in the ACL, that's just grand. I really like that design, and it's why I'm gonna be using it for my prototype. We still have to get the metal machines in, but with our current commander and fascist, there is probably no way we'll ever be able to afford them OR the raw materials anymore. We ordered a new style of 3D printer (at great expense) two months ago, but seeing as it's coming from China, I doubt we will see it in the next month or two. Theoretically, though, it can print more accurate parts for prototyping, which would allow us to model the thing properly in 3D (with a copied movement) and then send it out for metal manufacturing. In the US, however, there is very little metal prototyping going on; shops only want contracts for big jobs. So we would have to send it over to Asia, but now that we're cut off from them, I have no idea what to do. The entire project is sidelined. 

  9. 5 hours ago, Perry Paolantonio said:

    You're making my point here. Our ScanStation is now 12 years old. It was the first one Lasergraphics shipped - so old it doesn't even have their logo branding on it, because I told them I didn't want to wait for them to get that and apply it. It was delivered as a 2K, 8mm/16mm-only scanner. And precisely because it's modular and based on a PC, we were able to continually upgrade it so that it can now scan 12 different film gauges/formats, from 8mm through 35mm, on the same basic chassis we bought in 2013. All of these upgrades were done in-house, and all of them took about an hour. It wasn't hard.

    When you did the 6.5K imager swap, I'm certain it was not cheap. I've asked them before: going from an older 4K imager to the 6.5K, with the required support contract, was practically the same cost as a whole new BMD Cintel II. Plus, the bandwidth you need to deal with the larger files requires a whole new discussion about storage, which is grossly expensive. While I do like the modular, more open design, had BMD offered an upgrade for their black box, it probably would have been 1/4 the price of the scanner. Plus, there would be little to no storage changes, as the Thunderbolt Macs already have enough storage bandwidth integrated into them, unlike some old Windows box. 

    6 hours ago, Perry Paolantonio said:

    The vast majority of computers in business are Windows. Is it as nice to use as a mac? not really, but it's not exactly exotic or an unknown quantity. 

    Having high-speed connectivity integrated into the bus is game-changing. There is no integrated high-speed connectivity on x86 systems; they basically expect you to buy a Threadripper/Xeon with lots of lanes and use a NAS solution, which of course is not client-friendly by any stretch of the imagination. Sure, capturing to ProRes? No problem, you can use slow drives. However, when working with DPX/TIFF/Targa or even .CRI files from the Cintel, high-speed storage is a must. This is why Macs are such a killer offering for creatives: even with older TB4, the 40Gb interface was just fast enough to get the job done. Today with TB5 it's a game changer, and working with high-res files, like the 12K off my BMD Ursa Cine, is a piece of cake. That's impossible to do on an x86 system without extremely high-speed storage, which is, again, very costly. So in the end, you wind up paying 3-4x more in the x86 world to deal with problems you literally never have to deal with in the Mac world. Mind you, no other scanner uses Macs, so it's hard to quantify how much the savings would be with something like a ScanStation, but having worked on both platforms for years, it's probably considerable. 
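    To put rough numbers on why uncompressed scans demand fast storage, here's a back-of-envelope calculation. The resolution used is just an illustrative 35mm-ish frame, not any particular scanner's output:

```python
def dpx_bandwidth_mb_s(width, height, bits_per_channel=10, channels=3, fps=24):
    """Rough uncompressed bandwidth in MB/s. 10-bit RGB DPX packs the three
    channels into a 32-bit word, i.e. 4 bytes per pixel."""
    if bits_per_channel == 10 and channels == 3:
        bytes_per_pixel = 4  # 10:10:10 packed into 32 bits
    else:
        bytes_per_pixel = bits_per_channel * channels / 8
    frame_bytes = width * height * bytes_per_pixel
    return frame_bytes * fps / 1e6

# A hypothetical ~4K 10-bit RGB scan played back at 24 fps:
rate = dpx_bandwidth_mb_s(4096, 3112)  # roughly 1.2 GB/s sustained
```

Sustained reads above a gigabyte per second rule out single spinning disks and most gigabit networking, which is the whole argument about storage cost.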

    6 hours ago, Perry Paolantonio said:

    This is a red herring that has nothing to do with the FPGA. The BMD scanner is fast because the scanner is moving the raw, un-debayered image over thunderbolt, and it's being debayered afterwards ...in the PC. It's not the same in terms of bandwidth, as moving that much uncompressed (DPX, EXR, etc) data over the same pipe. Different thing entirely. 

    I was told most of the debayer happens in the magic black box on the scanner, similar to how BRAW works. It's why there is little to no way you can make adjustments AFTER you've scanned: the finished CRI file has your look baked in, and there is no magical headroom or changes you can make post-scan like you can with any other raw file type. Heck, even BRAW has "some" leeway, but very little compared to Red or Arri raw codecs. I also know that playing back the CRI files that come out of the Cintel II is very easy for a potato computer; I edited an entire UHD short film from a lowly Thunderbolt SSD on a 6-year-old Intel Mac laptop (shit computer) without ANY issues. It barely used any system resources to work with those files, unlike any other camera raw system, which would be debayering on playback. Also, when you're in preview mode and scanning, there is a delay when you make adjustments to the color, because the changes you make are being done in real time on the FPGA in the scanner. 

  10. 13 hours ago, Henry kidman said:

    Hey Tyler, 

    Thanks so much for your input. Yeah, the timing of the shutter with the film movement mechanism was a little bit of a headache, but it now works in such a way that, whilst they aren't mechanically linked (each has its own stepper motor), the film advance sprocket waits until the motor has completed a specific amount of its rotation before it will move. This way it's not relying on perfect timekeeping or complex mechanical links. 
     

    Using electronics to join multiple motors together is fine, but with 16mm cameras especially, you can't use a sprocket intermittent to pull the film down; you will need some sort of pulldown claw. A sprocket won't work. The reason cameras use pulldown claws is that they can hold very tight tolerances. A sprocket has very bad tolerances, which would cause lots of up-and-down movement in the image. Even if you were to reduce that with a spring-loaded side rail, it would still be an extremely unstable, unusable image. I know it runs through the camera OK, but those micro-movements can't be seen by the naked eye. 

    As Aapo suggested, the best thing to do is start with a movement from another camera, so that all your main components are already made of metal. I really like the ACL movement (minus the mirror), as it's mounted to a nice thick piece of metal that you could easily find a front housing for, perhaps even an ACL one. To save even more money, re-working a K3 movement is probably pretty straightforward. 

    13 hours ago, Henry kidman said:

    Yes, I did fear the FFD would be the part of this that stumps me. With that being said, I have ordered a depth micrometer, so I'm just going to try to get it dialled in as best I can. If you were servicing a camera and the FFD was out, how would you adjust it? I'm sure that varies from camera to camera, but is it a shimming setup, or are you adjusting set screws? 

    Shims, yea, that's the way most cameras work. Arri has a few cameras where the entire movement can move back and forth, but for the most part on 16mm cameras it's done with shims between the lens mount and the body. 

    13 hours ago, Henry kidman said:

    And as for the pressure plate, it's interesting you say it's not meant to apply any pressure on the film itself. So should I be making a tiny shelf that the pressure plate pushes against, that is 0.15mm above the height of the gate, so the film can just fit through the gap? 

    The kit to measure it would include a flat piece of metal, the width of the 16mm gate, for measuring; an FFD gauge, which usually has a flange that you push onto the lens mount; and a tester tool, so you can zero the FFD gauge before putting it onto the camera. FFD tools aren't horribly expensive and are available for sale, but the 16mm-width flat piece of metal may be harder to find. I had to buy one from another tech. 

    13 hours ago, Henry kidman said:

    One final question, if I were to purchase a piece of ground glass and place it on the film gate to be able to visually check focus while I’m adjusting the ffd, would the result be indicative of the result when using film as far as focus goes? 

    You can't use the ground-glass method because it adds too much depth. A very thin piece of smoked paper can work on 35mm cameras to see if you're in the ballpark, because the image is so much bigger, but on 16mm cameras, with that small an image, it's just impossible without proper FFD tools. No matter what, to "dial it in" you will 100% need the tools; there is nothing you can do about that. A collimator could theoretically be used, but they're more expensive. 

  11. 5 hours ago, Perry Paolantonio said:

    FPGAs are used in cameras because you can't fit a computer in a camera. An FPGA is a chip, not a full blown system, which is why they're used in embedded systems.

    Cameras have full-blown AI/ML engines in them today, which use what, maybe 5 watts? You talk about frame registration, when FPGAs can literally dewarp in real time these days. Canon has that tech built into the original, five-year-old R5; today, with the Nikon Z8 and Z9, a single FPGA can automatically remove lens distortion at 60fps in 8K. Again, with a chip that uses barely any energy. So for someone doing restoration, being able to have the AI engine on board, dealing with things like examining the outside of the frame line, looking for warping issues and automatically de-warping as it scans, these are all huge advancements that nobody has done yet, and all the tech is available right now as programmable FPGAs. 
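    The de-warping itself is plain resampling math. Here's a minimal numpy sketch of undoing simple radial distortion with a nearest-neighbour remap; the distortion model and coefficient are invented for illustration, and real cameras do this in silicon, not Python:

```python
import numpy as np

def undistort_radial(img, k1, cx=None, cy=None):
    """Nearest-neighbour remap undoing a one-term radial distortion
    model r' = r * (1 + k1 * r^2), centred on (cx, cy)."""
    h, w = img.shape[:2]
    cx = w / 2 if cx is None else cx
    cy = h / 2 if cy is None else cy
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    # Normalised coordinates relative to the optical centre.
    xn, yn = (xs - cx) / w, (ys - cy) / w
    r2 = xn ** 2 + yn ** 2
    # For each output pixel, sample the input at its distorted position.
    sx = np.clip(np.round(cx + xn * (1 + k1 * r2) * w), 0, w - 1).astype(int)
    sy = np.clip(np.round(cy + yn * (1 + k1 * r2) * w), 0, h - 1).astype(int)
    return img[sy, sx]

# A synthetic frame stands in for a scan; k1 is a made-up coefficient.
frame = (np.indices((480, 640)).sum(axis=0) % 256).astype(np.uint8)
corrected = undistort_radial(frame, k1=-0.2)
```

A hardware pipeline does exactly this kind of lookup per pixel, which is why it's such a natural fit for an FPGA.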

    5 hours ago, Perry Paolantonio said:

    IMO, using an FPGA in the scanner is a dumb design decision, not a smart one. It's inherently limiting, and doesn't allow you to adapt to change as easily.

    I mean, BMD built their FPGA-based scanner a decade ago; if you're saying a decade-old scanner is going to deliver images comparable to a modern one, that's up to you. But generally the imagers are the issue, and if you want a better imager, you need a whole new system anyway: computer, software, imager and back end (network) to really "update" anything. So the fact that you can't update an FPGA easily is irrelevant when everyone is buying all of those bits every decade anyway. 

    5 hours ago, Perry Paolantonio said:

    With a PC-based system using off the shelf parts, yes, you might need 2 GPUs but I mean, who cares? You're connecting the scanner to a computer anyway and you have built-in flexibility to upgrade capabilities at any time. With an embedded processor you do not. What does it matter if some of the work is done in-scanner or external? the end result is the same.

    Yea, and you need a Windows specialist to keep a system running when you are updating components all the time. It just isn't something the average consumer running a business can deal with. Not only that, but the computer needs to be a powerhouse, at a time when good GPUs are $4K each and Threadripper/Xeon are the only chipsets capable of having the lanes, and those are grossly expensive in and of themselves. This isn't 2018; building a modern x86 desktop system for this work today costs more than a Mac Studio Ultra, and in the end all you get is something that will be out of date in 2 years.

    The great thing about a scanner that's plug and play is that your computer can literally be a toaster. You don't even need network storage. The compressed CRI files the BMD scanner delivers are relatively small, and a day of scanning can easily fit onto a decent-sized Thunderbolt NVMe device. Then you can unplug it and carry it over to your finishing workstation. You save, oh gosh, tens of thousands of dollars as a business owner doing that.

    We scan DPX and TIFF/Targa, which range from 10-bit to 14-bit. I have a lot of raw data to move around, so I'm stuck with high-speed network drives, unfortunately. But when I've used BMD scanners in the past, the Thunderbolt workflow is very good and saves a lot of money. 

    5 hours ago, Perry Paolantonio said:

    I don't think you understand how easy it is to do stuff like machine vision perf registration. Using open source tools like OpenCV, on modest hardware, you can do two or three image transforms to locate and reposition the frame in a matter of milliseconds, on images that are 14k in size. The PC running our scanner was built in 2018 and can do this on 14k images in under 100ms. And that's on the CPU, by the way, no GPU involved in our setup.  It'll be faster with a newer machine, which will happen when we're done coding and know what bottlenecks we need to address.

    It may be simple, but you've clearly been working on it for a while, and the majority of people who are trying to pay their rent don't have that kind of downtime. It's disingenuous to think people running restoration businesses are going to be writing their own code. 
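    For a sense of scale, the basic locate-and-reposition step Perry describes really is only a few lines. This is a toy numpy version with a synthetic frame and an invented threshold, not anyone's production pipeline (a real one would use OpenCV, sub-pixel fits and sanity checks):

```python
import numpy as np

def find_perf_centroid(frame, thresh=200):
    """Locate the bright perforation hole in an overscan strip by taking
    the centroid of all pixels above a brightness threshold."""
    ys, xs = np.nonzero(frame > thresh)
    return ys.mean(), xs.mean()

def register_to_reference(frame, ref_y, ref_x):
    """Shift the frame so its perf centroid lands on the reference position.
    np.roll wraps at the edges, which is fine while the perf is far from them."""
    cy, cx = find_perf_centroid(frame)
    dy, dx = int(round(ref_y - cy)), int(round(ref_x - cx))
    return np.roll(np.roll(frame, dy, axis=0), dx, axis=1)

# Synthetic overscan strip: dark film base with one bright perf hole.
strip = np.zeros((100, 40), dtype=np.uint8)
strip[30:40, 10:20] = 255  # the perf in this frame
registered = register_to_reference(strip, ref_y=54.5, ref_x=14.5)
```

The hard part of a real scanner isn't this math; it's doing it reliably on shrunken, torn or warped perfs at speed.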

    5 hours ago, Perry Paolantonio said:

    Most frame grabber PCIe boards offer FPGA equipped versions of their hardware, which have the ability to do some image analysis/machine vision stuff on board.  So yes of course this is all possible to do inside the scanner as well using the same chips. But to do that requires substantial programming resources, and you are limited by the capabilities of the FPGA you choose. 

    This is why the scanner manufacturers do it. This isn't some BMD-only thing either; MOST scanners work this way, they deliver a finished image to the computer, you know this. They may use different hardware, but in the end it's even MORE locked-in than an FPGA. Good luck updating a Spirit, Scanity, Arriscan or any of those line-scanning systems to anything modern; not gonna happen. Your "open source" scanner concept is great, I absolutely understand it, but the majority of people will not make that kind of investment. They need something reliable and capable of recouping their investment immediately, not in a decade. 

  12. So 3D-printed material, no matter what it is, will not hold the tolerances for proper FFD; it's never going to happen. Manufacturers spend an ungodly amount of time engineering the gate-to-flange distance and making it perfect. The gate and lens mount really need to be made of metal and somehow connected. 

    The tolerances are based directly on the film channel/float. Some cameras have a pretty large film channel, which is the gap between the pressure plate and the aperture plate. Metal gates are used because you can polish the side rails so the film sits at the same depth as the physical aperture. The pressure plate shouldn't actually be putting pressure on the film itself; it should be very smooth, but held firmly in place laterally by the gate to prevent wobble. 

    Yea, you will need gauges that go down to .01mm, as the acceptable FFD range is .00-.03mm, depending on how much float the film has. Some cameras can be run at .03 and it's not a big deal; others need to be spot on at .00, and that's the tricky part. Engineering the pulldown, the consistency of the FFD across the frame, the film channel, a spring-loaded side-rail gate, the timing with the shutter: all of these things are very challenging to get right, and I'm afraid there's no way a 3D-printed camera would actually create proper images. Someone tried it with a 2-perf 35mm 3D-printed camera and it was basically unusable. The tolerances are incredible; .01mm off and you go from working to not working. As someone who services film cameras for a living, it's a miracle any motion picture camera works at all. 
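    The tolerance arithmetic itself is trivial to write down. Here's a throwaway check using the .00-.03mm window quoted above; the nominal FFD value is just a hypothetical C-mount figure, and a real tech would measure at several points across the gate:

```python
def ffd_in_spec(measured_mm: float, nominal_mm: float,
                max_deviation_mm: float = 0.03) -> bool:
    """Depth-gauge check: the measured flange focal distance must sit
    within 0.00-0.03 mm of nominal (tighter for cameras with less float)."""
    deviation = measured_mm - nominal_mm
    return 0.0 <= deviation <= max_deviation_mm

# Hypothetical camera with a C-mount-style nominal FFD of 17.526 mm.
print(ffd_in_spec(17.546, 17.526))  # True: 0.02 mm over nominal
print(ffd_in_spec(17.506, 17.526))  # False: under nominal
```

The hard part isn't the check, it's machining and shimming the camera so every point in the gate passes it.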

    I commend the work tho, I'm glad to see you playing around with it. I'd love to see if any images come out. 

     

  13. Good movies can (but don't always) have a lot of magic behind them: moments that weren't necessarily scripted a certain way, which just fell into place and worked. For the audience to know everything about the production, including how those magical moments came to be, may undermine the filmmaker's skillset. It may make them seem less like a genius and more like someone who struck oil accidentally and accepted the revenue simply because they touched the ground. 

    On the other hand, some productions are so tedious and difficult, when you hear about what the filmmakers went through, it kinda opens your eyes a bit. I almost prefer the horror stories, because you can learn what NOT to do. 

    I generally don't purposely seek out production information, but I will read the trades if there is an interesting story. I haven't let those stories affect my enjoyment of a film. 

     

  14. 15 hours ago, Dan Baxter said:

    Well of course there's optical calibration involved when the optics are changed, but are you saying they don't have a simple piece of software to recalibrate the machine? What happens if your scanner can't focus correctly at the moment and requires recalibration - surely the user can contact support and be talked through how to recalibrate their machine's optics?

    The Cintel II uses manual focus on the outside. I assume when you change the optics, something about that focusing system changes and it's more complicated to set up right. I haven't been under the hood, but they said it's much more complicated. 

    15 hours ago, Dan Baxter said:

    That's my understanding too, but "in hardware" is still software in a chip on a logic board that can be updated when necessary.

    It's an FPGA-based system, so the bandwidth of the system is limited; that's why they can't just change the way it works. They would need to change the FPGA entirely, which means updating everything, including software. This is the trap they got themselves into when developing it in the first place. They're still using TB2 for transferring data, which is extremely slow compared to the modern TB5 120Gb/s protocol, which they would probably move to with any new hardware. 

    15 hours ago, Dan Baxter said:

    I do not think they can support any other camera with the existing hardware, if they change the camera they change more than just the software on the logic boards which means you're buying a new scanner.

    The "black box" is evidently fully replaceable. I assume their goal would be to offer an upgrade for older systems, like they did with the lamp replacement, updated gates and updated drive system; they've been very good about updating older systems. It would be a camera and black-box swap, which is basically the entire backbone of the scanner, yes. 

    15 hours ago, Dan Baxter said:

    I wouldn't say that approach is "simple", I would say that bypassing a stack of logic boards and having a beefy host PC take over most of the computational tasks simplifies a lot of the design. The only thing the hardware needs to do is protect the film and its own hardware if the host computer crashes or does something unexpected. The preference now with most of the other machines is to do the complicated computational tasks in the PC.

    Nah, PCs are horrible at this work because they're just using raw power to chew through processes. FPGAs are night-and-day better; it's why everyone uses them for cameras. You can buy specific FPGAs built for tasks like processing the Bayer imager and encoding the raw file. Those same tasks done in software require MASSIVE power. Have you worked with Red Raw 8K before? It'll gobble up half your GPU just playing back at full res because there is NO optimization. Scanners like the ScanStation have/had (not sure if they've updated this or not) two dedicated GPUs to do what a basic modern cinema camera does in real time at 80fps. It's just a cost-reduction method. Blackmagic's concept is light-years better, but they're using decade-old tech; that's the problem. I'm not sure how the Director works, but the Spirit, Scanity, Imagica and ArriScan do "nearly" everything on the scanner.  
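    To give a sense of why software debayering eats so much compute: even the crudest demosaic has to do neighborhood arithmetic for every output pixel, per channel, per frame. Here's a minimal bilinear sketch in Python/NumPy — purely illustrative, assuming an RGGB layout and using zero as a "no sample" marker; it has nothing to do with how any actual camera or scanner FPGA implements it:

    ```python
    import numpy as np

    def demosaic_bilinear(raw):
        """Naive bilinear demosaic of an RGGB Bayer mosaic.

        raw: 2D float array with even dimensions and positive values
        (zero is used internally as a 'hole' marker). Returns HxWx3 RGB.
        """
        h, w = raw.shape
        rgb = np.zeros((h, w, 3), dtype=raw.dtype)
        # Scatter the known samples into their channel planes (RGGB layout).
        rgb[0::2, 0::2, 0] = raw[0::2, 0::2]   # R
        rgb[0::2, 1::2, 1] = raw[0::2, 1::2]   # G
        rgb[1::2, 0::2, 1] = raw[1::2, 0::2]   # G
        rgb[1::2, 1::2, 2] = raw[1::2, 1::2]   # B
        # Fill each plane's holes with the average of its sampled neighbors.
        for c in range(3):
            plane = rgb[:, :, c]
            mask = (plane != 0).astype(raw.dtype)
            padded = np.pad(plane, 1)
            pmask = np.pad(mask, 1)
            acc = np.zeros_like(plane)
            cnt = np.zeros_like(plane)
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    acc += padded[1 + dy:h + 1 + dy, 1 + dx:w + 1 + dx]
                    cnt += pmask[1 + dy:h + 1 + dy, 1 + dx:w + 1 + dx]
            fill = acc / np.maximum(cnt, 1)
            rgb[:, :, c] = np.where(mask > 0, plane, fill)
        return rgb
    ```

    Even this toy version is nine multiply-accumulates per pixel per channel; real demosaic filters are far heavier, which is exactly the kind of fixed, repetitive arithmetic an FPGA pipeline chews through at line rate while a CPU burns cycles on it.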

     

  15. On 4/12/2025 at 4:01 PM, Owen A. Davies said:

    Would you say there's any significant gap in quality between the 4k Director and the 10/13k Director? I always thought pumping a gratuitous amount of extra resolution into scans to be unnecessary and counterproductive. In my experience, all it really does at 10k is serve to sharpen and exemplify the film's grain structure without really adding any increase to the resolvable detail in the image. 

    As Dan said, they're entirely different imagers, so yea, there is a significant gap. 

    Nobody is scanning vertical 35mm in 10k anyway, the point of the high resolution scanner is for VistaVision and 65mm formats like 5 perf and IMAX. 

  16. 11 hours ago, Dan Baxter said:

    There's obviously a bit more to it than simply changing the optics as the 8/16 model drops support for 35mm entirely. If it was as simple as offering a second optics module to put into an existing scanner then they wouldn't need to sell it as a separate product.

    Actually, it's literally just switching the optics. The reason they can't have users do the swap is that there's calibration involved, and supposedly it's "buried" in the scanner, so they don't think it's something users can handle. I had a lengthy talk with their engineer at NAB 2024 about this, as it was already done and working at that point. I saw it running at NAB 2024. 

    11 hours ago, Dan Baxter said:

    Also they're not actually using the "full imager" for 16mm either. If they were then 16mm would work the way that 35mm works on the 35mm Cintels where only half the perfs are visible:

    Yea, they're using the full imager; they're just shooting the perforations, so the usable image area is of course less. The scanner works no differently in S8 mode than in 35mm mode: it uses ML to stabilize using the imager's data. It does this IN HARDWARE, not on the GPU of the client system like many scanners do. This way, they send a pre-stabilized image to the client system, which then dumps it to a drive. All of the corrections and adjustments the user makes are done in the scanner's hardware in real time; the stream off the scanner is fixed. This way of doing things, while brilliantly simple, also leads to major issues when you want to upgrade or change anything. It's the reason BMD have been delayed on their new-imager scanner. I have been told YET AGAIN that the new imager is on its way, but not to expect it anytime soon. They're now expecting a two-year lead time. 
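    For anyone curious what perf-based stabilization means in principle: you find the perforation in the overscanned edge of each frame (it reads as a blown-out bright region), measure how far it drifted from a reference frame, and shift the image to compensate. A back-of-the-napkin sketch, vertical axis only — this is hypothetical, not BMD's actual (reportedly ML-based) algorithm:

    ```python
    import numpy as np

    def perf_offset(frame, perf_cols=(0, 8), thresh=0.9):
        """Estimate the vertical perf position: the perforation shows up
        as a bright region in the overscanned edge columns of the frame."""
        band = frame[:, perf_cols[0]:perf_cols[1]]
        rows = np.where(band.mean(axis=1) > thresh)[0]
        return int(rows.mean()) if rows.size else 0

    def stabilize(frames, perf_cols=(0, 8)):
        """Shift each frame vertically so its perf lines up with frame 0's."""
        ref = perf_offset(frames[0], perf_cols)
        out = []
        for f in frames:
            dy = ref - perf_offset(f, perf_cols)
            out.append(np.roll(f, dy, axis=0))
        return out
    ```

    The real machines register on sub-pixel perf edges in two axes and resample rather than roll, but the principle — measure the perf, correct the frame, send a fixed stream downstream — is the same.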

  17. 3 hours ago, Jon O'Brien said:

    And anyway, I am totally happy with film. I'm just going to keep using it.

    Here in SoCal the "film" scene is dying fast. Not long ago I had commercial agencies knocking on my door non-stop for shoots, even double-booking sometimes by accident. Now all they want is digital, and they want it fast/cheap. It's really unfortunate. I actually had to buy a real digital cinema camera because otherwise I would get no work. It's really sad, but I hope to keep shooting film. I got Dehancer and a few other tools to help make the digital camera look more "filmic", and it's working, but inside I know it's not film.  

    • Upvote 1
  18. Damn, yea I totally feel ya. This is a very common issue with creatives: not being able to prove their skills. I work with people all the time who are in the same boat; in fact, one of them just wrapped their first narrative production on 4-perf 35mm. Very excited for them, because I know that feeling, and just completing something you worked hard to make is a great feeling. Here in LA it's almost comical how many people are doing "spec" shoots like this, just friends getting together with some actors to make something quick and dirty for a demo reel. If that industry stopped, I'd lose quite a bit of business, because we scan a lot of those films. 

    With that said, I've been involved in the local community here in LA for a long time, so finding scripts is so easy it's almost like the streets are paved with them. So I understand how frustrating it can be if you can't find something to shoot. While I do write a lot of scripts, I've found my personal work to be overly complex to make, which is the main reason I haven't personally invested in any of them beyond multiple drafts before shelving them. With SUPER short films like you're talking about, I think other writers won't take you seriously. Sure, you can probably find something on one of the multitude of websites, but conversing with people directly for help may be more challenging. 

    So if I were in your shoes, I'd take my time and write something. The main reason is simply your experiences and access to locations/people. As the filmmaker, your personal experiences should be in the story, and you should frame it around what you have available resource-wise. You may spin your wheels for weeks trying to find something shootable for the budget, crew, cast and locations you have available. It's far easier to make a list of things you know, places you can use and story concepts that work, and mix it all together, especially when you're talking about 5-7 pages. Funny enough, I'm writing a story right now that has literally no on-screen audible dialog, partially because I'm going to be working with non-actors and partially because of shooting speed: you can do one take and move on when it's just physical actions happening on camera. 

    I do like super-short subject films. I think it's the best way to get your feet wet, hold an audience's attention AND spread your money over multiple subjects rather than one big film. I have made the mistake of making "epic" short films way too many times; they just don't play, because they're impossible to book at festivals and people's attention spans are too narrow these days. So staying under the 12-13 minute cut-off is smart, and I would try to find a script with little to no dialog, so you don't need to worry as much about actor quality OR about flubbed dialog lines. 

    Anyway, that's my two cents. 

     

     

    • Thanks 1
  19. The Cintel has not been updated for NAB 2025. 

    They made a slight alteration to the magnifying optic, which allows them to use the full imager for S16mm formats. They then made a slight change to the software which allows a crop-in for 8mm formats. It's not a new scanner at all, zero major changes made. 

    The diffuse HDR light source is not a recent addition; it's been around for at least 3 years. The 8mm gates debuted last year, FYI. They did this to increase scanning speed with a SINGLE-pass HDR mode. Having to re-scan for HDR was one of the biggest slowdowns in the older-generation scanners. 
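    The basic idea behind single-pass HDR is getting two effective exposures out of one transport pass instead of re-running the reel. A toy illustration of one way that can work, assuming something like a dual-gain readout — this is a hypothetical sketch, not any vendor's actual implementation, and the `gain_ratio` and `clip` values are made up:

    ```python
    import numpy as np

    def merge_dual_gain(high, low, gain_ratio=16.0, clip=0.98):
        """Merge two simultaneous readouts of the same frame.

        'high' is the high-gain read (clean shadows, clipped highlights),
        'low' is the low-gain read that holds the highlights. Both are
        normalized to [0, 1]. Where the high-gain read clips, substitute
        the low-gain read scaled back onto the same linear scale.
        """
        lifted = low * gain_ratio  # put low-gain values on the high-gain scale
        return np.where(high < clip, high, lifted)
    ```

    The merged result has usable values above 1.0 where the high-gain read had clipped — extra highlight range captured without a second pass of the film.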

    With AI tools, scratch removal is easy; PFClean does it without human intervention. The updated versions of DRS Nova do as well. PFClean has the benefit of running natively on Apple Silicon and utilizing its massive AI potential to render in real time. This is a breakthrough for people who don't want to invest in huge workstations that rely on sheer horsepower to crack this nut. The lower-cost solutions are getting substantially better, and PFClean is a very cost-effective monthly license for people who would actually use it. 

     

    • Like 1
  20. 6 hours ago, Owen A. Davies said:

    It is definitely not the date in which they were shot. These are just three examples of shorts from the past five years that I found on YouTube. All of which were shot on 16mm with Kodak 7222. I think that one's personal preference and subjective opinions play an important role, but these three clips are prime examples of footage that has what I would consider to be a distracting level of grain. 

    Yea, the grain structure is hard to judge on YouTube, especially when you don't know the exposure or post process. I work with 7222 all the time on my own scanner and system, so I have a better understanding of how it compares to other stocks. I find it pretty much in line with 250D in grain structure, but 50D is absolutely less grainy. However, you will have LESS black detail with 50D; it's a lost cause trying to get that detail, it just won't be there no matter what you do on set. Blacks are a big problem with these finer-grain stocks: you have to light everything, you can't let anything just roll off, or it will be unrecoverable. Fine for a film noir, but not for anything else.
