Premium Member Tyler Purcell Posted April 19

15 hours ago, Dan Baxter said: Well of course there's optical calibration involved when the optics are changed, but are you saying they don't have a simple piece of software to recalibrate the machine? What happens if your scanner can't focus correctly at the moment and requires recalibration - surely the user can contact support and be talked through how to recalibrate their machine's optics?

The Cintel II uses manual focus on the outside. I assume that when you change the optic, something about that focusing system changes and it's more complicated to set up correctly. I haven't been under the hood, but they said it's much more complicated.

15 hours ago, Dan Baxter said: That's my understanding too, but "in hardware" is still software in a chip on a logic board that can be updated when necessary.

It's an FPGA-based system, so the bandwidth of the system is limited; that's why they can't just change the way it works. They would need to change the FPGA entirely, which means updating everything, including the software. This is the trap they got themselves into when developing it in the first place. They're still using TB2 for transferring data, which is extremely slow compared to the modern TB5 protocol (80Gb/s, up to 120Gb/s in boost mode) that they would probably move to with any new hardware.

15 hours ago, Dan Baxter said: I do not think they can support any other camera with the existing hardware, if they change the camera they change more than just the software on the logic boards which means you're buying a new scanner.

The "black box" is fully replaceable, evidently. I assume their goal would be to offer an upgrade for older systems, like they did with the lamp replacement, updated gates and updated drive system. They've been very good about updating older systems. It would be a camera and black-box swap, which is basically the entire backbone of the scanner, yes.

15 hours ago, Dan Baxter said: I wouldn't say that approach is "simple", I would say that bypassing a stack of logic boards and having a beefy host PC take over most of the computational tasks simplifies a lot of the design. The only thing the hardware needs to do is protect the film and its own hardware if the host computer crashes or does something unexpected. The preference now with most of the other machines is to do the complicated computational tasks in the PC.

Nah, PCs are horrible at this work because they're just using raw power to chew through processes. FPGAs are night-and-day better; it's why everyone uses them in cameras. You can buy specific FPGAs built for tasks like processing the Bayer image and encoding the raw file. Those tasks, done in software, require MASSIVE power. Have you worked with RED RAW 8K before? It'll gobble up half your GPU just playing back at full res because there is NO optimization. Scanners like the ScanStation have/had (not sure if they've updated this or not) two dedicated GPUs to do what a basic modern cinema camera does in real time at 80fps. It's just a cost-reduction method. Blackmagic's concept is light-years better, but they are using decade-old tech; that's the problem. I'm not sure how the Director works, but the Spirit, Scanity, Imagica and ARRISCAN do "nearly" everything on the scanner.
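For a rough sense of the bandwidth numbers in play here, a back-of-envelope sketch (assuming a 4096 x 3072 Bayer sensor at 12 bits per photosite - illustrative figures, not a published spec for the Cintel's on-wire format):

```python
# Back-of-envelope link-bandwidth sketch. Assumptions (not published specs):
# 4096 x 3072 Bayer photosites, 12 bits each, no packing/protocol overhead.
width, height, bits_per_photosite = 4096, 3072, 12
bits_per_frame = width * height * bits_per_photosite   # ~151 Mbit per raw frame

for fps in (24, 30):
    gbps = bits_per_frame * fps / 1e9
    print(f"raw Bayer at {fps} fps: ~{gbps:.1f} Gb/s")
# -> ~3.6 Gb/s at 24 fps, ~4.5 Gb/s at 30 fps

# Thunderbolt 2 carries ~20 Gb/s; Thunderbolt 5 carries 80 Gb/s (120 Gb/s
# one-way with Bandwidth Boost). A raw or compressed Bayer stream fits TB2
# with room to spare, while debayered RGB at the same bit depth is roughly
# 3x the data and starts to crowd the TB2 ceiling at higher frame rates.
```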
Site Sponsor Perry Paolantonio Posted April 21

On 4/19/2025 at 4:26 PM, Tyler Purcell said: FPGA's are night and day better, it's why everyone uses them for cameras

FPGAs are used in cameras because you can't fit a computer in a camera. An FPGA is a chip, not a full-blown system, which is why they're used in embedded systems. IMO, using an FPGA in the scanner is a dumb design decision, not a smart one. It's inherently limiting and doesn't allow you to adapt to change as easily. Everything about an FPGA is custom and proprietary. With a PC-based system using off-the-shelf parts, yes, you might need 2 GPUs, but I mean, who cares? You're connecting the scanner to a computer anyway and you have built-in flexibility to upgrade capabilities at any time. With an embedded processor you do not. What does it matter if some of the work is done in-scanner or externally? The end result is the same.

I don't think you understand how easy it is to do stuff like machine-vision perf registration. Using open-source tools like OpenCV, on modest hardware, you can do two or three image transforms to locate and reposition the frame in a matter of milliseconds, on images that are 14k in size. The PC running our scanner was built in 2018 and can do this on 14k images in under 100ms. And that's on the CPU, by the way; no GPU is involved in our setup. It'll be faster with a newer machine, which will happen when we're done coding and know what bottlenecks we need to address.

Most frame grabber PCIe boards offer FPGA-equipped versions of their hardware, which can do some image analysis/machine-vision work on board. So yes, of course this is all possible to do inside the scanner as well, using the same chips. But doing that requires substantial programming resources, and you are limited by the capabilities of the FPGA you choose. The cards we are using in our 70mm scanner can do this. I looked into it, and honestly it wasn't worth the effort to program the FPGA, because that locks you into using that card manufacturer's API. Instead, we are using generic camera communication protocols (which are built into the card's drivers), which let us swap out the camera and frame grabber for any other one that uses GenICam - which is to say, a lot of machine-vision cameras. It doesn't limit us to a particular interface, a particular brand of frame grabber, or a particular brand of camera. Right now we use CoaXPress. We could swap that camera and frame grabber for a 25GbE setup and not have to change a single line of code.
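To make the "two or three image transforms" concrete, here is a minimal OpenCV sketch of perf-based frame registration on a single frame. The file names, the search window, and the expected perf position are illustrative assumptions, not anyone's production code:

```python
# Minimal perf-registration sketch with OpenCV. Assumes a pre-cropped reference
# image of one perforation ("perf_template.png") and a scanned frame
# ("frame_0001.png"); the search window and expected position are placeholders.
import cv2
import numpy as np

frame = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("perf_template.png", cv2.IMREAD_GRAYSCALE)

# Transform 1: locate the perf by normalized cross-correlation, searching only
# the strip at the left edge of the frame where the perforation should sit.
search = frame[:, : frame.shape[1] // 6]
result = cv2.matchTemplate(search, template, cv2.TM_CCOEFF_NORMED)
_, _, _, max_loc = cv2.minMaxLoc(result)           # (x, y) of the best match

# Offset from where the perf would sit in a perfectly registered frame
# (a calibration value; the numbers here are made up).
expected_x, expected_y = 120, 480
dx, dy = expected_x - max_loc[0], expected_y - max_loc[1]

# Transform 2: reposition the frame with a single translation.
M = np.float32([[1, 0, dx], [0, 1, dy]])
registered = cv2.warpAffine(frame, M, (frame.shape[1], frame.shape[0]))
cv2.imwrite("frame_0001_registered.png", registered)
```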
Premium Member Tyler Purcell Posted April 21

5 hours ago, Perry Paolantonio said: FPGAs are used in cameras because you can't fit a computer in a camera. An FPGA is a chip, not a full blown system, which is why they're used in embedded systems.

Cameras today have full-blown AI/ML engines in them that use what, maybe 5 watts? You talk about frame registration when FPGAs can literally dewarp in real time these days. Canon built that tech into the original R5 five years ago, and today in the Nikon Z8 and Z9 a single FPGA can automatically remove lens distortion at 60fps in 8K. Again, with a chip that uses barely any energy. So for someone doing restoration, having the AI engine on board to examine the area outside the frame line, look for warping issues and automatically de-warp as it scans - these are all huge advancements that nobody has done yet, and all the tech is available right now as programmable FPGAs. (A rough sketch of what that de-warp could look like in code is at the end of this post.)

5 hours ago, Perry Paolantonio said: IMO, using an FPGA in the scanner is a dumb design decision, not a smart one. It's inherently limiting, and doesn't allow you to adapt to change as easily.

I mean, BMD built their FPGA-based scanner a decade ago; if you're saying a decade-old scanner is going to deliver images comparable to a modern one, that's up to you. But generally the imagers are the issue, and if you want a better imager you need a whole new system - computer, software, imager and back end (network) - to really "update" anything anyway. So the fact that you can't update an FPGA easily is irrelevant when everyone is buying all of those bits every decade anyway.

5 hours ago, Perry Paolantonio said: With a PC-based system using off the shelf parts, yes, you might need 2 GPUs but I mean, who cares? You're connecting the scanner to a computer anyway and you have built-in flexibility to upgrade capabilities at any time. With an embedded processor you do not. What does it matter if some of the work is done in-scanner or external? the end result is the same.

Yeah, and you need a Windows specialist to keep a system running when you are updating components all the time. It just isn't something the average consumer running a business can deal with. Not only that, but the computer needs to be a powerhouse, at a time when good GPUs are $4k each and Threadripper/Xeon are the only platforms with enough PCIe lanes, and those are grossly expensive in and of themselves. This isn't 2018; building a modern x86 desktop system for this work today costs more than a Mac Studio Ultra, and in the end all you get is something that will be out of date in two years.

The great thing about a scanner that's plug-and-play is that your computer can literally be a toaster. You don't even need network storage. The compressed CRI files the BMD scanner delivers are relatively small, and a day of scanning can easily be put onto a decent-sized Thunderbolt NVMe drive. Then you can unplug it and carry it over to your finishing workstation. You save, oh gosh, tens of thousands of dollars as a business owner doing that.

We scan DPX and TIFF/TARGA, which range from 10-bit to 14-bit. I have a lot of raw data to move around, so I'm stuck with high-speed network drives, unfortunately. But when I've used BMD scanners in the past, the Thunderbolt workflow is very good and saves a lot of money.

5 hours ago, Perry Paolantonio said: I don't think you understand how easy it is to do stuff like machine vision perf registration.
Using open source tools like OpenCV, on modest hardware, you can do two or three image transforms to locate and reposition the frame in a matter of milliseconds, on images that are 14k in size. The PC running our scanner was built in 2018 and can do this on 14k images in under 100ms. And that's on the CPU, by the way, no GPU involved in our setup. It'll be faster with a newer machine, which will happen when we're done coding and know what bottlenecks we need to address.

It may be simple, but you've clearly been working on it for a while, and the majority of people who are trying to pay their rent don't have that kind of downtime. It's disingenuous to think people running restoration businesses are going to be writing their own code.

5 hours ago, Perry Paolantonio said: Most frame grabber PCIe boards offer FPGA equipped versions of their hardware, which have the ability to do some image analysis/machine vision stuff on board. So yes of course this is all possible to do inside the scanner as well using the same chips. But to do that requires substantial programming resources, and you are limited by the capabilities of the FPGA you choose.

This is why the scanner manufacturers do it. This isn't some BMD-only thing either; MOST scanners work this way - they deliver a finished image to the computer, you know this. They may use different hardware, but in the end it's even MORE locked in than an FPGA. Good luck updating a Spirit, Scanity, ARRISCAN or any of those line-scanning systems to anything modern; not gonna happen. Your "open source" scanner concept is great, I absolutely understand it, but the majority of people will not make that kind of investment. They need something reliable and capable of recouping their investment immediately, not in a decade.
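The on-the-fly de-warp floated above could, in principle, be prototyped on the host side with a remap. A minimal sketch, assuming the vertical warp profile has already been measured along the frame line (the measurement is faked here with a gentle bow; nothing below reflects how any shipping scanner actually does this):

```python
# Illustrative de-warp sketch using OpenCV's remap. The displacement profile is
# a stand-in for a real measurement taken along the frame line / perf edge.
import cv2
import numpy as np

frame = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)
h, w = frame.shape

# Hypothetical per-column vertical displacement (in pixels) of the frame line,
# faked as a gentle bow purely for illustration.
x = np.arange(w, dtype=np.float32)
dy = 3.0 * np.sin(np.pi * x / w)

# Build remap grids: shift every column up or down by its measured displacement.
map_x = np.tile(x, (h, 1))
map_y = np.arange(h, dtype=np.float32)[:, None] + dy[None, :]

dewarped = cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_REPLICATE)
cv2.imwrite("frame_0001_dewarped.png", dewarped)
```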
Site Sponsor Perry Paolantonio Posted April 21 (edited)

1 hour ago, Tyler Purcell said: I mean, BMD built their FPGA based scanner a decade ago, if you're saying a decade old scanner is going to deliver images comparable to a modern one, that's up to you.

You're making my point here. Our ScanStation is now 12 years old. It was the first one Lasergraphics shipped - so old it doesn't even have their logo branding on it, because I told them I didn't want to wait for them to get that and apply it. It was delivered as a 2K, 8mm/16mm-only scanner. And precisely because it's modular and based on a PC, we were able to continually upgrade it so that it can now scan 12 different film gauges/formats, from 8mm through 35mm, on the same basic chassis we bought in 2013. All of these upgrades were done in-house, and all of them took about an hour. It wasn't hard.

1 hour ago, Tyler Purcell said: Yea and you need a windows specialist to keep a system running when you are updating components all the time.

What on earth is a Windows specialist? Our ScanStation was upgraded to 6.5k as soon as that camera came out, and it came with a turnkey PC. We haven't touched Windows on that machine since then and we use it daily. I think that was, what, 4 or 5 years ago? Windows is a pig, but it works just fine and isn't that complicated or hard to use.

1 hour ago, Tyler Purcell said: It just isn't something the average consumer running a business can deal with.

The vast majority of computers in business are Windows. Is it as nice to use as a Mac? Not really, but it's not exactly exotic or an unknown quantity.

1 hour ago, Tyler Purcell said: Not only that, but the computer needs to be a powerhouse, at a time when good GPU's are $4k each and threadripper/Xeon are the only chipsets capable of having the lanes, which are grossly expensive in of themselves. This isn't 2018, building a modern X86 desktop system today for this work is more than a Mac Studio Ultra and in the end, all you get is something that will be out of date in 2 years.

No. First, the ScanStation is running a $400 ASUS gaming motherboard and a Core i7 CPU, I think. The dual GPUs in it weren't even that high end when it came out - GTX 1060s, if I'm not mistaken. Upper mid-range, at best. Secondly, any PC you buy - Mac or Windows - is going to be superseded by new tech in two years. But if you want to talk about high-end Windows PCs: the machine we run Phoenix on cost about $5,000 to build from scratch, and it took me a couple of hours to put together. It's a 32-core Threadripper on a Supermicro server motherboard. I built that 2 years ago and it's still in daily use restoring 6.5k scans.

1 hour ago, Tyler Purcell said: We scan DPX and TIFF/TARGA, which range from 10 bit to 14 bit. I have a lot of raw data to move around so I'm stuck to high speed network drives unfortunately. But when I've used BMD scanners in the past, the thunderbolt workflow is very good and saves a lot of money.

This is a red herring that has nothing to do with the FPGA. The BMD scanner is fast because the scanner is moving the raw, un-debayered image over Thunderbolt, and it's being debayered afterwards... in the PC. It's not the same, in terms of bandwidth, as moving that much uncompressed (DPX, EXR, etc.) data over the same pipe. Different thing entirely.

1 hour ago, Tyler Purcell said: It may be simple, but you've clearly been working on it for a while, majority of people who are trying to pay their rent, don't have that kind of down time.
It's disingenuous to think people running restoration businesses, are going to be writing their own code.

I never said anyone should or would roll their own. I'm using my *actual* experience with this stuff to try to explain to you how it works. But twist my words if you must. As for the timeframe, yes, it's taken a long time - that's because I've been running a business and doing this stuff when I have time, which there isn't a lot of. My actual labor hours over the past few years on the coding part of the process are probably less than 200. And that's for the whole application, which includes the user-facing front end, all the image processing, all the communication with the frame grabber, the motion controller, and the lighting setup. That's hardly any time at all; it's just spread out over a longer period for external reasons.

1 hour ago, Tyler Purcell said: This isn't some BMD only thing either, MOST scanners work this way, they deliver a finished image to the computer, you know this.

Um, actually they don't, and for certain BMD is not. I am not familiar enough with the inner workings of the Arri and the Scanity to say. The Spirit hasn't been made for years, so it really shouldn't factor in. My understanding of the Arriscan XT is that much more is done on the computer than in the original version of the scanner, which had a lot more proprietary image-processing hardware. Nobody wants to have to make that stuff if they don't have to - it's expensive and takes time and resources, so why reinvent the wheel when a software-based solution using more generic hardware is readily available? Lasergraphics and Kinetta both send raw data to the PC, where it is processed - not a finished image at all. In fact, both are sending the raw data from the camera, which is why they're able to do it over relatively low-bandwidth connections. BMD is NOT sending a finished DPX or TIFF image over Thunderbolt from the scanner - it's sending camera raw and debayering it in the computer, and that's how they're working at high speed.

Edited April 21 by Perry Paolantonio
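A minimal sketch of the host-side debayer step described above, assuming a single-plane raw mosaic saved as a 16-bit TIFF and an RGGB pattern (both assumptions for illustration; the Cintel's actual raw container and pattern aren't specified here):

```python
# Host-side debayer sketch: one plane in, three planes out. Illustrates why a
# raw mosaic over the link is roughly a third of the bandwidth of RGB frames.
import cv2

# One value per photosite, e.g. a 16-bit single-channel TIFF (hypothetical file).
raw = cv2.imread("raw_mosaic_0001.tiff", cv2.IMREAD_UNCHANGED)

# Debayer on the host CPU; the pattern choice (RGGB here) depends on the sensor.
rgb = cv2.cvtColor(raw, cv2.COLOR_BayerRG2RGB)

print(raw.shape, raw.dtype)   # e.g. (3072, 4096) uint16    -- one plane
print(rgb.shape, rgb.dtype)   # e.g. (3072, 4096, 3) uint16 -- three planes, ~3x the data
```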
Site Sponsor Robert Houllahan Posted April 22

The Arriscan's ALEV (1?) sensor goes into a frame grabber card in the Sun Opteron workstation, which does the assembly of the frames from the piezo and HDR multi-flash passes into the final DPX and/or TIFF full-res and proxy-res files. I don't see any proprietary image-processing hardware unless it's on the frame grabber.
Premium Member Tyler Purcell Posted April 22

5 hours ago, Perry Paolantonio said: You're making my point here. Our ScanStation is now 12 years old. It was the first one Lasergraphics shipped - So old it doesn't even have their logo branding on it because I told them I didn't want to wait for them to get that and apply it. I was delivered as a 2k, 8mm/16mm only scanner. And precisely because it's modular and based on a PC we were able to continually upgrade it so that it can now scan 12 different film gauges/formats, on the same basic chassis we bought in 2013, from 8mm through 35mm. All of these upgrades were done in-house, and all of them took about an hour. It wasn't hard.

When you did the 6.5K imager swap, I'm certain it was not cheap. I've asked them before: going from an older 4K imager to the 6.5K, with the required support contract, was practically the same cost as a whole new BMD Cintel II. Plus, the bandwidth you need to deal with the larger files requires a whole new discussion about storage, which is grossly expensive. While I do like the modular, more open design, had BMD offered an upgrade for their black box it probably would have been a quarter of the price of the scanner. Plus, there would be little to no storage changes, as the Thunderbolt Macs already have enough storage bandwidth integrated into them, unlike some old Windows box.

6 hours ago, Perry Paolantonio said: The vast majority of computers in business are Windows. Is it as nice to use as a mac? not really, but it's not exactly exotic or an unknown quantity.

Having high-speed connectivity integrated into the bus is game-changing. There is no integrated high-speed connectivity on x86 systems; they basically expect you to buy a Threadripper/Xeon with lots of lanes and use a NAS solution, which of course is not client-friendly by any stretch of the imagination. Sure, capturing to ProRes? No problem, you can use slow drives. However, when working with DPX/TIFF/TARGA or even .CRI files from the Cintel, high-speed storage is a must. This is why Macs are such a killer offering for creatives; even with the older TB4, the 40Gb/s interface was just fast enough to get the job done. Today with TB5 it's a game changer, and working with high-res files like the 12K off my BMD URSA Cine is a piece of cake. That's impossible to do on an x86 system without extremely high-speed storage, which is, again, very costly. So in the end you wind up paying 3-4x more in the x86 world to deal with problems you literally never have to deal with in the Mac world. Mind you, no other scanner uses Macs, so it's hard to quantify how much the savings would be with something like a ScanStation, but having worked on both platforms for years, it's probably considerable.

6 hours ago, Perry Paolantonio said: This is a red herring that has nothing to do with the FPGA. The BMD scanner is fast because the scanner is moving the raw, un-debayered image over thunderbolt, and it's being debayered afterwards ...in the PC. It's not the same in terms of bandwidth, as moving that much uncompressed (DPX, EXR, etc) data over the same pipe. Different thing entirely.

I was told most of the debayer happens in the magic black box on the scanner. It's similar to how BRAW works. It's why there is little to no way you can make adjustments AFTER you've scanned. The finished CRI file has your look baked in; there is no magical headroom, and no changes you can make post-scan like you can with any other raw file type.
Heck, even BRAW has "some" leeway, but very little compared to RED or ARRI raw codecs. I also know that playing back the CRI files that come out of the Cintel II is very easy for a potato computer: I edited an entire UHD short film from a lowly Thunderbolt SSD on a 6-year-old Intel Mac laptop (a shit computer) without ANY issues. It barely used any system resources to work with those files, unlike any other camera-raw system, which would be debayering on playback. Also, when you're in preview mode and scanning, there is a delay when you make adjustments to the color, because the changes you make are being done in real time on the FPGA in the scanner.
Dan Baxter Posted April 23

On 4/20/2025 at 6:26 AM, Tyler Purcell said: It's an FPGA based system, so the bandwidth of the system is limited, that's why they can't just change the way it works. They would need to change the FPGA entirely, which means updating everything including software. This is the trap they got themselves into when developing it in the first place. They're still using TB2 for transferring data, which is extremely slow compared to the modern TB5 120Gb/s protocol which they would probably move over to with any new hardware.

Well, I wouldn't call it a "trap"; that's how most scanners worked, and it's why upgrading or changing the optical module costs $90K+ in some of them. I'm sure some of it was existing Cintel International IP too.

On 4/20/2025 at 6:26 AM, Tyler Purcell said: Na, PC's are horrible at this work because they're just using raw power to chew through processes. FPGA's are night and day better, it's why everyone uses them for cameras. You can buy specific FPGA's built for tasks like processing the bayer imager and encoding the raw file. Those tasks in of themselves done via software, require MASSIVE power. Have you worked with Red Raw 8k before? It'll gobble up half your GPU just playing back at full res because there is NO optimization. Scanners like the Scan Station have/had (not sure if they've updated this or not) two dedicated GPU's, to do what a basic modern cinema camera does in real time at 80fps. It's just a cost reduction method. Blackmagic's concept is lightyears better, but they are using decade old tech, that's the problem. I'm not sure how the Director works, but the spirit, scanity, imagica and arri scan, do "nearly" everything on the scanner.

You should look up the costs involved in replacing one of the logic boards in a Spirit or a Scanity before declaring it the superior design. There are advantages and disadvantages, and if you're using an 8K CCD line sensor for 1080p, 2K and 4K then of course you want to stack it in the scanner and downsample - but that design is obsolete. All the new scanners use area sensors, and they are all able to make use of the full resolution of their cameras. As you say, the only limiting factor there is the bandwidth for the raw data flow.

The FPGAs/logic boards in video cameras are generalisable. In other words, you can use the same (or nearly the same) components in a heap of different products. Scanner-specific tasks like perf detection and stabilisation to the perfs are not of any use to any other product. The pixel-shift stabilisation isn't done in hardware anyway; the calculation for it is done in hardware, but the result is stored in the .CRI metadata and Resolve then does the actual stabilisation.

You're also more dependent on the manufacturer with the logic boards. If they leave the market, or don't provide support, then you need access to another machine to copy the "software" off the FPGA to repair your own one.

14 hours ago, Tyler Purcell said: I mean, BMD built their FPGA based scanner a decade ago, if you're saying a decade old scanner is going to deliver images comparable to a modern one, that's up to you. But generally the imagers are the issues and if you want a better imager, you need a whole new system; computer, software, imager and back end (network) to really "update" anything anyway. So the fact you can't update an FPGA easily, is irrelevant when everyone is buying all of those bits every decade anyway.
The only reason Blackmagic has a custom CMOSIS 4K camera is because they wanted a 4K CMOS camera for their machine in 2014, they wanted it to do 24fps, and they wanted to sell it for $30K retail. Those goals are the limiting factor. They've never sold the machine-vision camera retail for other applications. I imagine the next one will be exactly the same - they may use a better sensor, but they won't sell the camera itself retail. That's not a clever design when they could use an off-the-shelf product instead. And they're not going to sell it as a product, because they don't compete in the machine-vision camera space and that market is already very competitive.

I don't know how you can say that a noisy CMOSIS camera sold in a brand-new scanner in 2025 is acceptable. Lasergraphics moved on from the 5K CMOSIS they were using almost six years ago. Kinetta and DCS support whatever the customer wants, and Filmfabriek uses Sony Pregius S. Blackmagic are quite literally the only scanner manufacturer still using an optical module from 10 years ago in brand-new scanners.

7 hours ago, Tyler Purcell said: I was told most of the debayer happens in the magic black box on scanner. It's similar to how BRAW works. It's why there is little to no way you can make adjustments AFTER you've scanned. The finished CRI file has your look baked in, there is no magical headroom or changes you can make post scan, like you can with any other raw file type.

Whoever told you that was probably making assumptions. How do you do "most of the debayer" inside or outside? You either debayer or you don't. The CRIs are 12-bit, encoded using JPEG SOF3 (lossless), not debayered, and not pre-stabilised either - they contain metadata for the frame-by-frame pixel shift to be done in software. So they're really not doing as much as you think they are inside the scanner.

In saying this, part of the core issue is that basically nothing supports .CRI other than Resolve. If Blackmagic delivered .DNG instead, they'd solve a lot of problems, but they chose to use their own proprietary, not-well-documented format.
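A minimal sketch of what metadata-driven pixel-shift stabilisation in software can look like. The .CRI metadata layout isn't publicly documented, so read_shift_metadata() and its values below are hypothetical; only the final warp reflects what a host application would plausibly do with a per-frame offset:

```python
# Illustrative metadata-driven stabilisation: read a per-frame (dx, dy) offset
# and apply it as a sub-pixel translation. The metadata reader is a stand-in.
import cv2
import numpy as np

def read_shift_metadata(frame_index):
    """Stand-in for reading a per-frame (dx, dy) offset from file metadata."""
    shifts = {0: (0.00, 0.00), 1: (0.42, -1.10), 2: (-0.15, 0.30)}  # fabricated values
    return shifts.get(frame_index, (0.0, 0.0))

def stabilise(frame, frame_index):
    dx, dy = read_shift_metadata(frame_index)
    h, w = frame.shape[:2]
    # A single sub-pixel translation per frame is cheap for the host, which is
    # presumably why only the offsets need to travel with the file.
    M = np.float32([[1, 0, dx], [0, 1, dy]])
    return cv2.warpAffine(frame, M, (w, h), flags=cv2.INTER_LINEAR)

frame = cv2.imread("decoded_frame_0001.png")   # placeholder for a decoded frame
stabilised = stabilise(frame, frame_index=1)
cv2.imwrite("stabilised_0001.png", stabilised)
```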
Premium Member Tyler Purcell Posted April 23

10 hours ago, Dan Baxter said: Well I wouldn't call it a "trap", that's how most scanners worked and it's why upgrading or changing the optical module costs $90K+ in some of them. I'm sure some of it was existing Cintel International IP too.

Well yeah, you can't really upgrade line scanners that easily.

10 hours ago, Dan Baxter said: You should look up the costs involved to replace one of the logic boards in a Spirit or a Scanity before declaring it the superior design. There are advantages and disadvantages, and if you're using an 8K CCD line sensor for 1080p, 2K and 4K then of course you want to stack it in the scanner and down sample - but that design is obsolete. All the new scanners use area sensors, and they are all able to make use the full resolution of their cameras. As you say the only limiting factor there is the bandwidth for the raw data flow.

Line scanners, though, are kind of a whole other world. The Cintel II is a single module and imager; it's really not the end of the world. The only thing different from doing an imager upgrade in a Lasergraphics is the added magic box.

10 hours ago, Dan Baxter said: The FPGAs/logic boards in video cameras are generalisable. In other words, you can use the same (or nearly the same) components in a heap of different products. The scanner-specific tasks like perf detection and stabilisation to the perfs are not of any use to any other product. The pixel-shift stabilisation isn't done in hardware anyway, the calculation for it is done in hardware but that is stored in the .CRI metadata and then Resolve does the actual stabilisation.

Well... they now have full AI auto-tracking/AF, so that tech can easily be used to stabilize to the perfs. Canon and Nikon use the same tech to de-warp lenses; this could be used to de-warp film. So I do think there is merit to the example I used, it's just not implemented by anyone. Scanner companies do not think like restoration technicians, and they need to. The first person to release a scanner where you feed film in and a fully restored file (within reason, obviously) comes out the back end will win this battle. While it's true scanner companies have attempted to sell concepts to help hide dirt and scratches, today with AI tools there is no reason why a lot of the cleanup can't be done on the fly. I'm not saying it would necessarily be done in hardware, but you get my point.

I was told multiple times by several people at BMD, over multiple years of going to dozens of events related to this industry, that the entirety of the process is done ON SCANNER. They proved it by going into the real-time logs and showing that Resolve is literally doing nothing while scanning. I also asked them why their stabilizer tool doesn't work anything like the scanner tools, to which they said "because it's all done on scanner". Now, obviously playing back the file is different; it does use a tiny bit of resources, because, well, the CRI files are compressed. I know the CRIs are not stabilizing during playback.

10 hours ago, Dan Baxter said: You're also more dependent on the manufacturer with the logic boards. If they leave the market, or don't provide support, then you need access to another machine to copy the "software" off the FPGA to repair your own one. The only reason Blackmagic has a custom CMOSIS 4K camera is because they wanted a 4K CMOS camera for their machine in 2014, they wanted it to do 24fps, and they wanted to sell it for $30K retail.
Those goals are the limiting factor. They've never sold the machine-vision camera retail for other applications. I imagine the next one will be exactly the same - they may use a better sensor, but they won't sell the camera itself retail. That's not a clever design when they can use an off-the-shelf product instead. And they're not going to sell it as a product because they don't compete in the machine-vision camera space and that market is already very competitive.

I was told that camera is identical to the original 4K Blackmagic camera. Not sure how true that is, but they did not develop the imager at all. Yes, they should have updated the black box by now. Imagine how many 2K scanners were thrown in the trash for the same reason?

10 hours ago, Dan Baxter said: I don't know how you can say that the noisy CMOSIS cameras sold in a brand new scanner in 2025 is acceptable. LaserGraphics moved on from the 5K CMOSIS that they were using almost six years ago. Kinetta and DCS support what the customer wants and Filmfabriek uses Sony Pregius S. Blackmagic are quite literally the only scanner manufacturer that is still using an optical module from 10 years ago in brand new scanners.

Yeah, I mean it's a big problem, no argument. They make it work by doing real-time HDR scanning, which hides the issues with the noise floor. I'm not gonna sit here and say their imager is good, because it's not, even in HDR mode, but I have been pretty happy with the results over the years if you don't actually care about getting 4K out of it and understand its limitations.

10 hours ago, Dan Baxter said: Whoever told you that was probably making assumptions. How do you do "most of the debayer" inside or outside? You either do it or you don't. The CRIs are 12bit encoded using JPEG-SOF3 (lossless), not debayered, and not pre-stabilised either - they contain metadata for the frame-by-frame pixel shift to be done in software. So they're really not doing as much as you think they are inside the scanner.

Partial debayer is when you handle de-noise, profiling and edge reconstruction in the imager; this saves considerably on data and, most importantly, means your computer does not need to do those tasks, which lightens the load.

10 hours ago, Dan Baxter said: In saying this, part of the core issue is that basically nothing supports .CRI other than Resolve. If Blackmagic delivered .DNG instead they'd solve a lot of problems, but they chose to use their own propriety/not-well-documented format.

Yes, it's very proprietary.