
Gabriel Devereux

Basic Member
  • Posts

    106
  • Joined

  • Last visited

Profile Information

  • Occupation
    Digital Image Technician
  • Location
    Australia / United Kingdom

Contact Methods

  • Website URL
    gabrieldevereux.com/imaging

Recent Profile Visitors

3,443 profile views
  1. Just to confirm: you are changing the base ISO? Jump between 4000 and 800 for a test.
  2. As a side note: while a camera's CFA and initial spectral primaries impact colour rendition, it is essential to note that any tristimulus observer can resolve any colour. It is also important to note that almost all cameras do not satisfy the Luther-Ives condition; by that, I mean being a single linear transform away from the XYZ colour-matching functions (a single linear transform away from our photopic vision's cone functions). So you have your initial spectral primaries (the RGB value of each primary: R, G, B), and you apply gain to each primary to manipulate your spectral primaries and, therefore, your gamut. That is how a 3x3 linear matrix works in tandem with an OETF (opto-electronic transfer function). So most, if not all, cameras (apart from scientific use cases) resolve primaries that can only be transformed to our XYZ/photopic vision with error. CFA densities and primaries that perform well on the Luther-Ives front perform poorly on the SNR and dynamic-range front. So all cameras at the sensor level resolve something wacky, and all manufacturers apply a 3x3 matrix to transform to their acquisition gamut, or as close to 709 as possible, etc. Note that the limitation here is the 3x3 linear matrix. Think about an RGB cube: you have three axes, and your 3x3 linear matrix applies a transform along those three axes only. That is Steve Yedlin's whole display-prep schtick: he gives control to the secondaries as well as the primaries, dividing the cube into tetrahedra and interpolating values depending on the gain applied to each primary and secondary. A minimal sketch of the 3x3 limitation follows below.
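     A minimal Python sketch of the point above, assuming an illustrative matrix (the coefficients are made up for the example, not taken from any real camera):

        import numpy as np

        # Illustrative 3x3 camera-RGB -> XYZ matrix. Real matrices are fit
        # against measured patches, and because the CFA is not a linear
        # combination of the XYZ CMFs (the Luther-Ives condition), some
        # residual error always remains.
        M = np.array([
            [0.6369, 0.1446, 0.1689],
            [0.2627, 0.6780, 0.0593],
            [0.0000, 0.0281, 1.0610],
        ])

        camera_rgb = np.array([0.18, 0.18, 0.18])  # a linear camera RGB triple
        xyz = M @ camera_rgb                       # one transform for the whole cube

        # The limitation: changing any coefficient of M moves every colour that
        # contains that primary. You cannot shift yellow (R+G) without also
        # shifting red and green, which is why tetrahedral approaches split the
        # RGB cube and give the secondaries their own control.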
  3. Generally, the Codex Compact with the TB3 reader is advertised at 10 Gbps (roughly 1.25 GB/s). On average, with checksum and a NAS (RAID), it sustains around 800 MB/s, or 6.4 Gbps (occasionally higher, but I calculate off that average). Note you must double all your transfer times to account for an xxHash or MD5 checksum. So a 1 TB Codex Compact drive takes about 42 minutes to ingest, with checksum; the arithmetic is sketched below. You must always carry enough media to support each camera for 24 hours of predicted shooting time (for example, on average you shoot 8 hours a day).
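     A minimal Python sketch of that arithmetic (the 800 MB/s sustained rate and the doubled checksum pass are the assumptions from this post):

        def ingest_minutes(capacity_gb: float, rate_mb_s: float = 800.0,
                           checksum: bool = True) -> float:
            """Minutes to offload a mag; doubled for an xxHash/MD5 verify pass."""
            seconds = capacity_gb * 1000 / rate_mb_s
            if checksum:
                seconds *= 2  # the verify read takes roughly as long as the copy
            return seconds / 60

        print(f"1 TB Codex Compact: {ingest_minutes(1000):.0f} min")  # ~42 min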
  4. Hi all, quick question: has anyone ever tried a multicam setup with Teradek Serv Pros or, more importantly, Teradek Cubes with HEVC compression? I've had numerous people talk about the Servs; most couldn't go over four units on a single AP, which makes sense given the fundamental limitations of the 802.11ax radio. However, nobody I know has tried the above with Cubes. I'm Australia-based, so I'm hoping someone across the pond has given it a go! Interested to hear how many cameras you managed to view, whether latency stayed constant, etc.
  5. Can you elaborate on the zero-sum-game aspect of a larger sensor? I've always followed the simple logic that the larger the sensor, the more sensitive it inherently is, so hearing an alternative would be great. The simplistic logic above does apply, but the potential loss from, say, the larger column lines does interest me.
  6. And one thing to learn is an understanding of, and respect for, others who don't speak English as their first language!
  7. I have a few!

     SMPTE 2110 and 2022-6: complete video-over-IP solutions utilising Micron, EVS and so forth. With Micron, I can live-grade hundreds of cameras (realistically 6-7, but it scales). This also allows remote recording, with local media kept just for redundancy.

     Colour science, in the broad and the local sense. Cameras are merely tristimulus observers, and Yedlin's whole thing back in 2019 is entirely accurate and poignant. Looking at a camera as a photon counter, and a poor one at that, allows a better understanding of the entire pipeline. Note: I always thought colour science was essential to understanding colour from a camera, but it's essentially the opposite; it's about understanding colour as a concept. Since colour in our world is inherently three-dimensional, that means understanding how to manipulate values effectively using tools such as tetrahedral warping or multi-dimensional tetrahedral warping.

     Radiometry: light is simply electromagnetic radiation, and all the same laws apply. As we move into larger volume-based productions, the time has come to do the math to ensure everything works.

     COFDM is more niche, but coded orthogonal frequency-division multiplexing has been the standard for communications for decades. Its throughput, however, didn't allow for FHD-UHD video streams, let alone 3G-SDI/12G-SDI streams. As COFDM technology advances and we can use higher constellations on the same orthogonal multi-channel carriers, we can send high-bitrate signals with significant redundancy (forward error correction); a rough throughput sketch follows at the end of this post. To loop back to the beginning: cameras will become more like sound devices, and we can relinquish the cable for robust remote recording over RF.

     In summary, my forward-thinking advancements are: people will become more comfortable with colour science and build a greater understanding of cameras, allowing for superior look development and less camera loyalty; remote recording (which is already happening by cable and SMPTE 2110) will start happening over RF; and extensive lighting exercises will move away from the sole know-how of gaffers with decades of experience and work harmoniously with contemporary physics and technology.
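     A rough Python sketch of the COFDM throughput math; every parameter below is an assumption for illustration (the example figures are loosely based on DVB-T's 2k mode), not the spec of any real wireless video link:

        import math

        def cofdm_bitrate_mbps(carriers: int, constellation: int,
                               code_rate: float, symbol_us: float) -> float:
            """Raw bitrate: carriers * bits/symbol * FEC rate / symbol time."""
            bits_per_symbol = math.log2(constellation)  # e.g. 64-QAM -> 6 bits
            return carriers * bits_per_symbol * code_rate / symbol_us

        # 1705 carriers, 64-QAM, 3/4 FEC, 280 us symbols (incl. guard interval)
        print(f"{cofdm_bitrate_mbps(1705, 64, 0.75, 280):.1f} Mbit/s")  # ~27 Mbit/s
        # Raising the constellation to 256-QAM lifts throughput by a third,
        # at the cost of a higher required SNR.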
  8. https://gabrieldevereux.com/calculator/ Here's how to calculate large near-Lambertian (matte) source propagation. A simplistic view of it, explained crudely for the umpteenth time:

     One, two, or however many light sources can be illuminating the reflector. It is assumed these light sources are at an acute angle to the normal of the reflector and that the surface is near-Lambertian. The reflectance value of muslin is approximately 95% (0.95). The algorithm models the reflecting soft surface as being composed of many point sources, m*n (the number of point sources scales with the size of the reflector), whose total light output equals the total luminous flux of the original light sources (minus losses). If the total luminous flux of the sheet is L, then the luminous flux of a single point source is L/(m*n). These point sources emit light isotropically into a solid angle of 2*pi steradians (only in front of the surface). The algorithm computes the illuminance at some point z on the axis perpendicular to the reflector (its normal) passing through its centre. To this end, it simply finds the illuminance produced at that point by each point source, applies certain laws such as Lambert's cosine law, and sums these values to get the total. Note that the number of point sources, m*n, can become arbitrarily large; however, for m*n >= 50 the approximation is almost perfect (there are approximately 7*7 = 49 point sources per square metre, so the algorithm is accurate for sources larger than about 3' x 3'). The rest is basic geometry; a sketch of the summation follows at the end of this post.

     Fall-off and contrast are not related. Fall-off is a term, I believe, taken from radiometry, a very poignant topic for cinematography. Contrast in our world is artistically relative. How you interpret the two is up to you, but it's probably easiest to view them separately. Also, if you have any questions about the above calculator, please do ask. It's all reasonably simplistic math.
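     A minimal Python sketch of one plausible reading of the algorithm described above; the fixture flux, reflector size and distance in the example are assumptions for illustration:

        import math

        def axial_illuminance(total_lumens: float, width_m: float, height_m: float,
                              z_m: float, reflectance: float = 0.95,
                              per_m2: int = 49) -> float:
            """Illuminance (lux) on axis at distance z from a matte reflector,
            modelled as an m*n grid of point sources (~7x7 per square metre)."""
            m = max(2, round(math.sqrt(per_m2) * width_m))
            n = max(2, round(math.sqrt(per_m2) * height_m))
            flux_each = total_lumens * reflectance / (m * n)
            intensity = flux_each / (2 * math.pi)  # isotropic over front hemisphere
            total = 0.0
            for i in range(m):
                for j in range(n):
                    x = (i + 0.5) / m * width_m - width_m / 2
                    y = (j + 0.5) / n * height_m - height_m / 2
                    d = math.sqrt(x * x + y * y + z_m * z_m)
                    total += intensity * (z_m / d) / (d * d)  # cosine * inverse square
            return total

        # 10,000 lm bounced into a 2 m x 2 m muslin, measured 3 m off the centre:
        print(f"{axial_illuminance(10_000, 2, 2, 3):.0f} lux")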
  9. By this, I mean the ability to sample spatial frequency detail. I should also emphasise this is not the case in the Ursa Mini Pro 12k, just the example CFA given in that link.
  10. To follow up on what Joshua said above (all of which, to my knowledge, is correct): the article you linked has greatly misconceived the supposed negatives of the 12K sensor, and it is very wrong. To an absurd degree...

     "you get better DR, cause noise gets so small by down sampling that it almost looks like grain, so you can lift your shadows to a level, which would have not be possible before (or would have needed massive NR - with the well known side effects)"

     Nope. It's a 2.2-micron photosite. Downsampling takes the mean of neighbouring pixels, decreasing deviation from the mean (improving SNR), but the initial readout of a large photosite already has a high SNR due to its size. A photodiode works by its electrons absorbing quanta of energy from photons, building up charge. That charge sits on the gate of the source follower, which gates VDD at the local photosite level (it's a local amplifier). So the small charge the photodiode accumulates modulates a much greater voltage, and any discrepancy (deviation from the mean) reads out as noise. A larger photodiode holds a more significant charge, and a larger photodiode requires a larger photosite! TL;DR: big photosites improve SNR, and downsampling improves SNR; however, a larger photosite is preferred, as the sensitivity scales better. By that I mean the large photosite's depletion region and the multitude of smaller depletion regions would, if scaled perfectly, have the same sensitivity. But they don't scale perfectly! Transistor size does not shrink as well as the diode and, more importantly, its depletion region; a smaller photosite has a proportionally smaller diode than a larger photosite. Even with micro-optics, this doesn't scale the same. (A toy SNR model follows at the end of this post.)

     "Aliasing and Moire are vanishing without the need of a heavy OLPF, preserving important fine image information without artifacts."

     After my longer-than-anticipated explanation above, I'll keep this one short: Nyquist sampling makes that statement incorrect. Put simply, the Nyquist limit can't be higher than half the sampling frequency, and the article's CFA diagram is very wrong, as there is no alternating white photosite... that would mean the horizontal sampling limit requires 6 pixels, which would make the horizontal resolution 2000 px.

     "Color fidelity (especially in the higher frequencies) is way better - giving you better perceived sharpness and clarity - while still maintaining buttery skin, without the need of diffusion filters."

     Nope, any tristimulus observer can resolve ANY COLOUR. Yes, that means your Canon T3i can determine the same amount of colours as the Arri Alexa 35 Pro Max Super Speed Digital Colour 3000, or whatever their new naming convention is. A wider gamut of colours to work with after acquisition is more a linear-matrix deal than a sensor-level one. See the Nyquist sampling limit above on why you don't achieve better 'colour fidelity' at higher spatial frequencies.

     "Better SN ratio."

     I like bold statements with no proof as well...

     "Better chroma resolution due to the full RGB color readout."

     Nope, any tristimulus observer can resolve ANY COLOUR.

     "Less artifacts"

     No. Nyquist sampling limit, see above... I assume the article is just talking about moire. I will admit it: the CFA does have a better sampling limit than a 4K Bayer CFA. Not a better SNR, though!

     "Great skin tones"

     Nope, any tristimulus observer can resolve ANY COLOUR. That comes down to the linear matrix they choose to use... which is all overdetermined and error-prone.

     "The richness, smoothness, and - in lack of a better word - fatness of the images, that the sensor delivers is amazing."

     Sure.
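     A toy Python model of the shot-noise argument above; the electron counts, read noise and fill-factor loss are illustrative assumptions, not measurements of any real sensor:

        import math

        def snr_db(signal_e: float, noise_e: float) -> float:
            return 20 * math.log10(signal_e / noise_e)

        big_signal = 10_000   # electrons collected by one large photosite
        read_noise = 3.0      # electrons RMS per readout

        n = 9                 # nine small photosites covering the same area
        fill = 0.8            # transistors don't shrink with the diode, so each
                              # small site loses relative photosensitive area
        small_signal = big_signal * fill / n

        # Shot noise is sqrt(signal); read noise adds in quadrature.
        big_noise = math.sqrt(big_signal + read_noise ** 2)
        # Averaging n sites keeps the mean signal but shrinks noise by sqrt(n).
        binned_noise = math.sqrt(small_signal + read_noise ** 2) / math.sqrt(n)

        print(f"big photosite : {snr_db(big_signal, big_noise):.1f} dB")       # ~40 dB
        print(f"binned small  : {snr_db(small_signal, binned_noise):.1f} dB")  # ~39 dB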
  11. Not to say what they're doing isn't highly skilful. But in terms of making a camera, to state the obvious, a digital camera is far more finicky and takes far more expensive machinery to fabricate. I'm not exactly saying anything revolutionary here... try to make a MOSFET gate that's half a micron across and you'll get the gist. I believe the correct answer is that building a film camera wouldn't be profitable. Which we all know and accept.
  12. To a certain degree! The $500 Vaxis system you mention is an H.264 encoder feeding a WiFi radio that can do P2P but can also go to an iPad and so forth (I believe that's the product?). That's a different kind of video-over-IP product that, with WiFi, has restream capability, channel jumping and so forth; it also occupies a narrower carrier than an Amimon-based product, so it can 'fit' into saturated WiFi channels. The downside of a video-over-IP product is its inherent latency: Amimon is less than two scan lines (0.06 ms, I believe?), whereas with video over IP you're looking at a 2-frame-plus delay (100-300 ms), with cumulative latency if it's all on the same network. The Vaxis Storm and Boxx Atom units have the same Amimon chipset as the old Teradek Bolt, but they don't comply with DFS; Vaxis doesn't stick to the WiFi channel bands (it strays into 'illegal' spectrum), which arguably makes a better product: you can operate outside the legal range, with a lower noise floor (no competition from WiFi) and no DFS scan (no waiting a minute at boot, as 802.11ax-compliant WiFi must).
  13. Thanks Stephen. On a side note, Boxx, Teradek, and Vaxis use the same Amimon chips (in the FHD models, not the 4K models), so the compression, RF transmission and reconstruction are the same. The only difference is that Teradek was more compliant with RF regulations, which makes it easier to market but a worse product.
  14. Note my 3:1 compression ratio was for HD streams; no clue about UHD. Teradek and Amimon are very hush-hush about the internal workings.