Jason Rodriguez
Basic Member

Posts: 175
Joined / Last visited

Profile Information
Occupation: Cinematographer

Recent Profile Visitors
2,025 profile views
-
According to Anthony Dod Mantle, the shooting ratio on "Slumdog" was 60/40, SI-2K vs. film. Step-framed sequences and timelapse were shot with the Canon 1D Mk III. Thanks, Jason
-
Keep in mind the Sony cameras have in-camera sharpening, whereas we do not (it would be counterproductive to sharpen RAW, even during the RAW decode, since sharpening is irreversible). If you want to replicate that in-camera sharpening effect, you will have to sharpen in post. From my experience with cameras like the F900, HDC-1500, etc., you can get a similar "softness" out of the Sony cameras that let you turn the sharpness all the way off.
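If you do need to add that look back in post, in-camera "detail" circuits behave essentially like an unsharp mask, so that is a reasonable starting point. A minimal sketch (the radius and amount values are placeholders, not a match for any particular Sony detail setting):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img, radius=1.0, amount=0.5):
    """Sharpen a float RGB image in [0, 1] by adding back high frequencies."""
    # Blur spatially only (sigma 0 on the channel axis of an HxWx3 array).
    blurred = gaussian_filter(img, sigma=(radius, radius, 0))
    return np.clip(img + amount * (img - blurred), 0.0, 1.0)
```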
-
Also make sure you are not using the DPX, JPEG, or BMP files from the "SAVE IMAGE" frame-grabs. Only use the DNG for resolution tests. The others do not go through a high-quality debayer algorithm, and so they will appear soft.
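To illustrate why the debayer step matters so much for apparent resolution, here is a sketch of the simplest possible (bilinear) debayer, assuming an RGGB mosaic purely for illustration. It fills in missing samples by averaging neighbors, which is exactly the kind of interpolation that smears fine detail compared with an edge-aware debayer:

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(raw):
    """Naive bilinear debayer of a float mosaic, assuming RGGB layout."""
    h, w = raw.shape
    r = np.zeros((h, w)); r[0::2, 0::2] = 1   # red photosites
    g = np.zeros((h, w)); g[0::2, 1::2] = 1; g[1::2, 0::2] = 1
    b = np.zeros((h, w)); b[1::2, 1::2] = 1   # blue photosites
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    return np.dstack([convolve(raw * r, k_rb),   # missing R/B come from
                      convolve(raw * g, k_g),    # neighbor averages
                      convolve(raw * b, k_rb)])
```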
-
SI-2K IBC Theatre Presentation
Jason Rodriguez replied to Jason Rodriguez's topic in Silicon Imaging
Yes, the SI-2K will be displayed in the P+S Technik booth at Cinec as well, although I'm not sure about any large-venue screenings there. You would have to contact P+S Technik for more information on any other events outside of Cinec like that. Thanks, Jason -
Hello Everyone, For those who are visiting IBC this year, there will be a nice treat for you, and a great chance to see footage shot with the SI-2K camera system on the big-screen via 2K digital projection. This will be a wonderful opportunity to see what a "cinema-experience" with SI-2K footage can look like, so if you're going to IBC, and you have questions about how the footage will look when projected, this is an excellent opportunity and venue for that level of evaluation. The event will be Sept. 13th at 13:30. You can contact us for more information. We will also be demonstrating the SI-2K at the P+S Technik booth, located in Hall 11 at 11.E28. Thanks, Jason
-
Hi, Yes, this would be a mistake on the site. Can you send me a link? Thanks, Jason
-
How does the camera compare to Red?
Jason Rodriguez replied to Adam Smith's topic in Silicon Imaging
Hi Nate, Sorry if this is confusing, but we're not using the TCP/IP stack of Windows, which means we also wouldn't be using the TCP/IP stack in Linux . . . our cameras involve custom hardware that uses a standard gigabit ethernet signal pathway, but rather than TCP/IP, they use a custom protocol via UDP/IP on a custom transport-layer stack to transmit the signal, with DMA support for less than 1% CPU usage. As for the ARB_Fragment support, yes, Intel has been increasing their Linux support lately, and by extension their OpenGL support in their display drivers, but 2-3 years ago when we started (back then we had the GMA900), none of this support existed . . . and even now the latest stuff is in "beta" . . . Suffice it to say the Windows kernel is giving us very good stability, and we have the software flexibility to add features and support some very innovative developments. For instance, is there another camera system on the market that provides full 64-point internal user-definable 3D LUT support and film-stock emulation? For 3D shooting, is there an integrated camera system that can take two streams and combine them into a single live 3D stereo image, like the one we previewed at NAB this year? Also, how about a camera system that natively shoots to QT and AVI and can be natively ingested (i.e., no proxies or conversions) into two of the most popular NLEs on the market, as well as a number of other media-related apps? Add on top of that only 3.5:1 compression and 4:4:4 decoding, basically the equivalent of an HDCAM-SR deck. There are a lot of great development platforms out there . . . hopefully people can look at the features we're able to offer and see the benefit of the development platform we've chosen.
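To give a flavor of what a 3D LUT engine does per pixel (on the camera this runs in pixel shaders, and the exact node layout of our LUTs isn't spelled out here, so take the cube size as an assumption), here is a minimal CPU-side trilinear-lookup sketch:

```python
import numpy as np

def apply_3d_lut(rgb, lut):
    """Trilinear interpolation of float RGB (values in [0, 1], shape (..., 3))
    through an n*n*n*3 lookup cube."""
    n = lut.shape[0]
    s = np.clip(rgb, 0.0, 1.0) * (n - 1)
    i0 = np.floor(s).astype(int)
    i1 = np.minimum(i0 + 1, n - 1)
    f = s - i0                                 # fractional position in the cell
    out = np.zeros_like(rgb, dtype=float)
    for c in range(8):                         # blend the 8 surrounding nodes
        dr, dg, db = c & 1, (c >> 1) & 1, (c >> 2) & 1
        ir = i1[..., 0] if dr else i0[..., 0]
        ig = i1[..., 1] if dg else i0[..., 1]
        ib = i1[..., 2] if db else i0[..., 2]
        w = ((f[..., 0] if dr else 1 - f[..., 0]) *
             (f[..., 1] if dg else 1 - f[..., 1]) *
             (f[..., 2] if db else 1 - f[..., 2]))
        out += w[..., None] * lut[ir, ig, ib]
    return out
``` -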
How does the camera compare to Red?
Jason Rodriguez replied to Adam Smith's topic in Silicon Imaging
The proprietary UDP/IP ethernet protocol we use is there to make sure you can get 100MB/s across the gigabit ethernet line with no drop-outs, and with DMA (so very low CPU usage). Secondly, the Intel GMA950 doesn't support the ARB_Fragment shader extensions for OpenGL like we need, so we have to use DirectX. A dedicated GPU (i.e., Nvidia) in the SI-2K would consume way too much power.
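As a back-of-the-envelope check on that figure (assuming the 2048x1152 active area at 12 bits per pixel and 24fps; the actual transport framing adds some overhead on top):

```python
# Gigabit ethernet line rate: 1 Gb/s = 125 MB/s, of which roughly 100 MB/s
# is realistically usable once packet/protocol overhead is paid.
pixels_per_frame = 2048 * 1152
bytes_per_frame  = pixels_per_frame * 12 // 8      # 12-bit packed RAW
sensor_mb_s      = bytes_per_frame * 24 / 1e6      # ~85 MB/s at 24 fps
print(f"{sensor_mb_s:.0f} MB/s of sensor data vs ~100 MB/s usable GigE")
```

Thanks, Jason -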
How does the camera compare to Red?
Jason Rodriguez replied to Adam Smith's topic in Silicon Imaging
The source code for the drivers we're using is not available. Also, since the drivers were designed for Windows, simply having the source code would not make porting as easy as a recompile. Linux is not being used simply due to the lack of drivers for our required hardware. And this is not a "simple" problem. Even for video drivers, Nvidia Detonator drivers are reportedly 15 million lines of code . . . we're talking about some very complex stuff here that large companies spend lots of money making sure is stable on a specific OS platform. Having to deal with bad drivers is much worse than the fairly petty problems we've encountered with threading on the Windows kernel. For instance, Linux can have kernel panics, which are the same thing as a Windows blue-screen . . . since we don't have blue-screen issues, I don't see why we would want to trade away that underlying OS/hardware stability for the perception of a "stable" platform with Linux that, due to bad or buggy drivers, would be even more prone to actual crashing at the kernel level. We haven't counted out Linux, and if that platform gains the hardware support we need in the future, then we will definitely entertain development for it. But for right now, it's not feasible. Thanks, Jason -
How does the camera compare to Red?
Jason Rodriguez replied to Adam Smith's topic in Silicon Imaging
Hi Evangelos, The imaging specs on the Altasens sensor are very good, and it does produce some stunningly good image quality with great dynamic range, color fidelity, and low noise. As you've noted, the 35mm DOF issue is one downside of using 2/3" sensors, but compared to anything else on the market, the Altasens sensors are far and away the best CMOS sensors that money can currently buy for digital cinema production without starting from complete scratch, with all the uncertainties and cost of a new custom sensor design. Obviously, as demonstrated by Arri, RED, and others, 35mm-sized sensors can be made, but those are all custom designs. With the SI-2K that was not an option for us. We actually do have some 35mm-sized sensors in camera heads that we have made, but they are not digital cinema quality. Thanks, Jason -
How does the camera compare to Red?
Jason Rodriguez replied to Adam Smith's topic in Silicon Imaging
There is a 12-bit uncompressed mode that gives you the full linear dynamic range from the sensor, pixel-for-pixel as it was transferred from the A/D converter . . . it doesn't get much better than that. It's a custom file format called .SIV, but IRIDAS supports it in their SpeedGrade and FrameCycler product lines, so you can use any of those products for batch conversion to DPX files. We also have a DNG converter for the .SIV files in the camera software itself that re-wraps the RAW data from the SIV and puts a DNG header on it so that it can be opened directly in After Effects, Photoshop, or any other RAW converter that supports DNG files. Note this is the standard DNG file format from Adobe (which they have now submitted as an ISO standard), not the "CinemaDNG" format they announced this past NAB, which is still forthcoming. The point is we can deliver a great workflow using compressed wavelet (CineForm), which BTW is very light compression at only 3.5:1 right now, or we can give you full uncompressed 12-bit linear. Your choice. Finally, yes, you are right, a well-tuned Linux kernel can beat the Windows XPe kernel for speed; the only problem is that we would have to do the tuning ourselves, and if you "hand-tune" it wrong, you create instability issues . . . then when there's a "bug", you're not sure if it's at the OS level, or in your software, or what, so development/support issues get compounded and everything gets exponentially more complex. It's much easier to know you're using a stable kernel and then isolate any issues to the software only, rather than having to fish around for problems at the OS level. If you don't tune the Linux kernel and just use the vanilla distributions, then the differences between XPe and Linux become more subtle, especially when you come back to the issue of custom hardware driver support and the stability of those drivers.
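To put rough numbers on the uncompressed-vs-CineForm trade-off (same 2048x1152 / 24fps assumption as above, and treating 3.5:1 as a fixed ratio even though the wavelet codec is really variable-bitrate):

```python
# One hour of footage at 24 fps.
frames = 24 * 3600
siv_frame = 2048 * 1152 * 12 // 8          # bytes per 12-bit packed .SIV frame
cf_frame  = (2048 * 1152 * 10 // 8) / 3.5  # 10-bit log source compressed 3.5:1
print(f"uncompressed .SIV: ~{siv_frame * frames / 1e9:.0f} GB/hr")  # ~306
print(f"CineForm RAW:      ~{cf_frame * frames / 1e9:.0f} GB/hr")   # ~73
```

Thanks, Jason -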
How does the camera compare to Red?
Jason Rodriguez replied to Adam Smith's topic in Silicon Imaging
We take the 12-bit linear RAW from the sensor head and apply a 10-bit LOG curve to the data in order to preserve the dynamic range of the information being encoded to CineForm. David Newman has a great explanation of why you want LOG vs. linear encoding for compressed material on his blog: http://cineform.blogspot.com/2007/09/10-bi...bit-linear.html On decode, the compressed RAW data comes back as 10-bit 4:4:4 RGB, not 4:2:2 YUV. For FCP, the codec supports 32-bit float YUV encoding so that it can do seamless round-tripping between RGB and FCP's native YUV color space.
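The idea behind the 12-bit-linear-to-10-bit-log step is to spend code values per stop of exposure rather than per linear step, so the shadows aren't starved by the compressor. The exact curve we use isn't published in this thread, so take this as an illustrative log mapping, not our actual transfer function:

```python
import numpy as np

def lin12_to_log10(x12):
    """Map 12-bit linear sensor values (0..4095) to 10-bit log (0..1023).
    Illustrative curve only -- roughly equal code values per stop."""
    x = np.clip(np.asarray(x12, dtype=float) / 4095.0, 0.0, 1.0)
    y = np.log2(1.0 + 1023.0 * x) / 10.0       # log2(1024) == 10, so y in [0, 1]
    return np.round(1023.0 * y).astype(np.uint16)
``` -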
How does the camera compare to Red?
Jason Rodriguez replied to Adam Smith's topic in Silicon Imaging
The choice of OS was really pretty easy . . . it all comes down to drivers and the ability of the OS to interact with custom hardware. We need good video drivers for all the real-time pixel shaders we're running in the main interface, including the 64-point 3D LUT engine, and gigabit ethernet drivers for transmitting low-latency information from the camera head to the capture host computer, whether it's a laptop or the SI-2K. CineForm is very important as well, with the ability to natively encode to either QT or AVI, and the benefit of developing with the QT API and the DirectShow API, both of which are absent on Linux. While technically CineForm could be adapted for Linux, the amount of work would be quite high, and then we'd still be stuck on the driver front. The threading "issues" that are mentioned here are not really issues, especially with the speed of today's processors . . . I'm sure if processors were half as fast as they are now, this would be a very large issue, but modern Core 2 Duo processors make any overhead Windows might have compared to Linux minuscule. Lastly, we're running Windows XP Embedded, not standard XP 32-bit. This allows us to customize the OS quite a bit, removing all the "fluff" from the commercial package, and make a very stable and controlled environment on the SI-2K. I think there is some misinformation out there when people say "Linux is more stable than Windows" . . . the fact is that there is a lot of junk out there that people can mess up their Windows installs with, but at the base, core functionality of the OSes, you will find that the key to stability when integrating custom hardware is to have very stable drivers. We found that Windows was able to provide better offerings and more mature choices than the counterparts available on Linux. The nice thing about Linux is that it does provide a lot of extensibility, and its kernel is very stable. With some custom development, I'm sure it would make a great OS for what we're doing. We found the XPe kernel to be equally stable, though, and again, there is a lot more support in the development community for the type of hardware we're using to make the camera system possible. And that's basically where the decision to use XPe over Linux came from. -
Hello Everyone, Just wanted to give you all a heads-up on the new announcements we have for NAB this year. First off, we will be in a joint booth with CineForm, Wafian, and IRIDAS, at SL10608. We will be demonstrating some exciting new 3D technology where two cameras can plug directly into a single computer and copy of SiliconDVR and show live anaglyph, and other non-anaglyph 3D visualization features, while you shoot (i.e., you won't have to sit around and try to rig up polarized monitors, etc., to see your shots in 3D . . . it will all be right there on the preview screen just as if you were shooting normal 2D). Our cameras will be mounted on an exciting new professional 3D rig from P+S Technik. We will also be showing a new remote interface between SpeedGrade OnSet and the SI-2K, allowing users to pass reference image frames and .look files over ethernet (or WiFi), so that one user can color-correct the camera directly from SpeedGrade OnSet without tying the camera up (and do it in a very nice GUI). So there's no more need for the camera operator to save a file out, etc.; everything can be done by the SpeedGrade remote user, in the SpeedGrade interface itself. Basically, imagine the most souped-up paint-box you could imagine for a camera, and this is it. There will also be some other great items, such as select clips from Dark Country playing back in the booth on a Samsung 3D monitor, and plenty of demos of the entire workflow from shooting to finishing. For more information and additional details and announcements, you can read our press releases here: http://www.siliconimaging.com/DigitalCinem..._08_08_NAB.html http://www.siliconimaging.com/DigitalCinem...arkCountry.html
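For those wondering what "live anaglyph" amounts to under the hood: the classic red/cyan mix takes the red channel from the left eye and green/blue from the right, which is cheap enough to run on every preview frame. A sketch (SiliconDVR's actual mixing may be more sophisticated; this is just the textbook version):

```python
import numpy as np

def red_cyan_anaglyph(left, right):
    """Combine two same-sized RGB frames into a red/cyan anaglyph preview."""
    out = np.empty_like(left)
    out[..., 0] = left[..., 0]     # red channel from the left eye
    out[..., 1] = right[..., 1]    # green from the right eye
    out[..., 2] = right[..., 2]    # blue from the right eye
    return out
```

Thanks again and hope to see you there! Jason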
-
First time with the SI-2K...
Jason Rodriguez replied to Jonathan Bowerbank's topic in Silicon Imaging
We're still at around 45 seconds. It won't be getting any faster anytime soon unfortunately. Thanks, Jason