
Gabriel Devereux

Basic Member
  • Posts

    110
  • Joined

  • Last visited

Profile Information

  • Occupation
    Other
  • Location
    Australia / United Kingdom
  • Specialties
    Technical Engineer. Cameras, Wireless Video Transportation (COFDM), LED.

Contact Methods

  • Website URL
    https://infinityvision.tv

Recent Profile Visitors

5,060 profile views
  1. Production is down significantly in the UK and Australia. A lot less money is being spent, and a lot less money is going into traditional media.
  2. Here's a theory: we often say 'muslin has a loss of 2 stops of light' or 'grid cloth has a loss of 1 stop of light'. Why is this the case? It's important to note that muslin/grid cloth itself causes very little true light loss: the absorption coefficient is very small and the fabric is not a conductive medium, so it does not dissipate the radiation to any meaningful degree. In our context, essentially all light hitting the diffuser is reflected or dispersed (a little generates heat). The diffuser itself does not absorb enough energy to matter at the scale we measure in photographic stops. A denser diffuser, such as grid cloth versus opal, scatters light more than a lighter one. When diffusing light, you're essentially re-scattering it from a specific point: you take a planar wave (directional light) and scatter it in all directions. In simplified terms, the diffuser recreates many new light sources across its surface. Each point scatters light much as light from a bulb propagates in all directions, with energy falling off with the expanding sphere's surface area; the same principle applies when light propagates from a diffuser. I wrote a Python script to quantify light propagation from a diffuser; below is an example of me applying the calculator. Code to download: https://github.com/GabrielJDevereux/soft_light/blob/277450eb2c3ef219c393cd897f7b7a462544ffb2/reflectorcalc.py Gabriel Infinityvision.tv
  3. This calculation isn't too complex if we make some assumptions about the variables involved. When diffusing light, we start by considering the light at the boundary before the diffuser, assuming it's in the optical far field and behaving as a planar wave. As this light passes through the diffuser, whether it's muslin, grid cloth, or another material, it's scattered randomly and uniformly. For simplicity, let's assume this scattering occurs in an isotropic manner. From the perspective of the inverse square law, as light radiates from a point source, we can recalibrate this from the point of diffusion: the light emitted from the initial source in the optical far field, just before hitting the diffuser, is re-scattered at the diffuser's surface. Assuming the diffuser behaves as a Lambertian surface, which is a reasonable approximation for most heavy diffusers, we can calculate the diffuser's luminous flux. This involves dividing the flux across a grid of points on the xy-plane, summing the contributions from each point, and applying the inverse square law to account for the distance from these points. Lambert's cosine law also comes into play here, helping to accurately estimate how the light is distributed. This approach provided a good enough estimation for my purposes. In fact, I created a simple JavaScript tool on my old website, https://gabrieldevereux.com/calculator/, that performs all these calculations for you. As a side note, while it's true that different diffusers can have varying levels of light absorption, this absorption is usually minimal and has a negligible effect on the perceived photographic output. The more significant factor is how the diffuser scatters light. When you take a planar wave, a relatively focused ray of light, and scatter it, the light spreads out over a larger area. Because the energy of the light decreases with the square of the distance (as per the inverse square law), the more you scatter the light, the greater the perceived loss of light intensity. There is a reflectance coefficient in the tool; you should set it to 99 or 100.
  4. I have often found online education to be a great resource. Ultimately, everything we do is quantifiable; otherwise, the fabrication of cameras and sensors would not be possible. While studying the literature may not directly make you a better artist, it helps you understand the tools you're using and the fundamentals of light.
  5. Just to confirm, you are changing the base ISO. Jump between 4000 and 800 for a test.
  6. As a side note, while a camera's CFA and initial spectral primaries impact colour rendition, it is essential to note that any tristimulus observer can resolve any colour. It is also important to note that almost all cameras do not conform to the Luther-Ives condition; by that, I mean being a single linear transform away from the XYZ CMFs (a single linear transform away from our photopic vision's cone fundamentals). So you have your initial spectral primaries (the RGB value of each primary: R, G, B), and you apply gain to each primary to manipulate your spectral primaries and, therefore, your gamut. That is how a 3x3 linear matrix works alongside an OETF (opto-electronic transfer function); a minimal sketch of such a matrix is below. So most, if not all, cameras (apart from scientific use cases) resolve primaries that can only be transformed to our XYZ/photopic vision with error. CFA densities and primaries that satisfy Luther-Ives perform well on that front but poorly on an SNR and dynamic range front. So all cameras at a sensor level resolve something wacky, and all manufacturers apply a 3x3 matrix to transform to their acquisition gamut, or as close to 709 as possible, etc. Note that the limitation here is the 3x3 linear matrix. Think about an RGB cube: you have three axes, and your 3x3 linear matrix applies a transform on those three axes; you can only transform linearly. That is Steve Yedlin's whole display-prep schtick. He gives control to the secondaries as well as the primaries: he divides the cube into tetrahedra around each primary and secondary, applying interpolation across values dependent on the gain applied to each primary and secondary RGB value.
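To make the 3x3 limitation above concrete, here is a minimal Python sketch; the matrix coefficients are made-up placeholders, not any manufacturer's actual calibration values.

```python
import numpy as np

# Minimal sketch: a 3x3 linear matrix re-mixes camera-native RGB into a
# target colour space. The coefficients below are made-up placeholders,
# not any real camera's calibration matrix.
CAMERA_TO_TARGET = np.array([
    [ 1.20, -0.15, -0.05],
    [-0.10,  1.10,  0.00],
    [ 0.02, -0.22,  1.20],
])

def apply_matrix(rgb, matrix=CAMERA_TO_TARGET):
    """Apply a 3x3 linear transform to an (..., 3) array of linear RGB."""
    rgb = np.asarray(rgb, dtype=float)
    return rgb @ matrix.T

# A pure camera-native "red" sample lands on the first column of the matrix:
print(apply_matrix([1.0, 0.0, 0.0]))      # approx [ 1.2  -0.1   0.02]
# Every pixel gets exactly the same three-axis mix; the matrix can scale and
# shear the RGB cube, but it cannot move a secondary independently of the
# primaries on either side of it.
```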
  7. Generally, a Codex Compact Drive with the TB3 reader is advertised at 10 Gbps, roughly 1 GB/s. In practice, writing to a NAS (RAID), it averages about 800 MB/s, or 6.4 Gbps (occasionally higher, but I calculate off that average). Note that you must double all your transfer times to account for an xxHash or MD5 checksum. So a 1 TB Compact Drive takes about 42 minutes to ingest, checksum included (a quick sketch of the arithmetic is below). You must always carry enough media to support each camera for 24 hours of predicted shooting time (for example, at an average of 8 hours of shooting a day, that's three days of media).
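A quick sketch of that arithmetic, assuming the 800 MB/s sustained figure and a verification pass that doubles the effective time:

```python
# Quick sketch of the ingest-time arithmetic above. Assumes a sustained
# transfer rate of 800 MB/s and that an xxHash/MD5 verification pass
# doubles the effective time, as described in the post.

def ingest_minutes(capacity_tb, rate_mb_s=800, checksum=True):
    capacity_mb = capacity_tb * 1_000_000          # 1 TB = 1,000,000 MB (decimal)
    seconds = capacity_mb / rate_mb_s
    if checksum:
        seconds *= 2                               # verification pass doubles the time
    return seconds / 60

print(round(ingest_minutes(1)))   # ~42 minutes for a 1 TB drive, checksum included
```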
  8. Hi all, quick question: has anyone ever tried a multicam setup with Teradek Serv Pros or, more importantly, Teradek Cubes with HEVC compression? I've had numerous people talk about the Servs; most couldn't go over four units on a single AP, which makes sense given the fundamental limitations of the 802.11ax radio. However, nobody I know has tried the above with Cubes. I'm Australia-based, so I'm hoping someone across the pond has given it a go! Interested to hear how many cameras you managed to view, whether latency stayed constant, etc.
  9. Can you elaborate on the zero-sum-game aspect of a larger sensor? I've always followed the simple logic that the larger the sensor, the inherently more sensitive it is, so hearing an alternative would be great. The simplistic logic above does apply, but the potential losses from, say, the larger column lines do interest me.
  10. And one thing to learn is an understanding and respect for others that don’t speak English as their first language!
  11. I have a few! SMPTE 2110 and 2022-6: complete video-over-IP solutions utilising Micron, EVS and so forth. With Micron, I can live-grade hundreds of cameras (realistically 6-7, but it scales). This also allows remote recording, with local media kept only for redundancy. Colour science, in the broad and the local sense: understanding that cameras are merely tristimulus observers, and that Yedlin's whole demonstration back in 2019 is entirely accurate and pertinent. Looking at a camera as a photon counter, and a poor one at that, allows a better understanding of the entire pipeline. Note: I always thought colour science was about understanding colour from a camera, but it's essentially the opposite; it's about understanding colour as a concept. Colour in our world is inherently three-dimensional, so it comes down to manipulating values effectively using tools such as tetrahedral warping or multi-dimensional tetrahedral warping (a minimal sketch of tetrahedral interpolation is below). Radiometry: light is simply electromagnetic radiation, and the same laws apply. As we move into larger volume-based productions, the time has come to do the math up front to ensure everything works. COFDM is more niche, but coded orthogonal frequency-division multiplexing has been the standard for communications for decades. However, its throughput didn't allow for FHD-UHD video streams, let alone 3G-SDI/12G-SDI streams. As COFDM technology advances and we can use higher constellations on the same orthogonal multi-carrier channels, we can send high-bitrate signals with significant redundancy (forward error correction). To loop back to the beginning of this post, cameras will become more like sound devices: we can relinquish the cable in favour of robust remote recording over RF. In summary, my forward-looking predictions are: people will become more comfortable with colour science and build a greater understanding of cameras, allowing superior look development and less camera loyalty; remote recording (which is already happening over cable and SMPTE 2110) will start happening over RF; and extensive lighting exercises will move away from relying solely on the know-how of gaffers with decades of experience and instead work harmoniously with contemporary physics and technology.
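Since tetrahedral warping comes up here (and in the matrix discussion above), here is a minimal sketch of plain tetrahedral interpolation through a 3D LUT. It shows only the standard cube-to-tetrahedra split; it is not Yedlin's actual pipeline, and the 17-point identity LUT is just a placeholder to demonstrate the mechanics.

```python
import numpy as np

def tetrahedral_interp(rgb, lut):
    """Standard tetrahedral interpolation of one RGB triplet through a LUT of
    shape (N, N, N, 3), indexed [r][g][b], with input and values in 0..1."""
    n = lut.shape[0]
    pos = np.clip(np.asarray(rgb, float), 0.0, 1.0) * (n - 1)
    base = np.minimum(pos.astype(int), n - 2)        # lower corner of the cell
    r, g, b = pos - base                             # fractional position in the cell
    i, j, k = base

    c000 = lut[i,     j,     k    ]
    c100 = lut[i + 1, j,     k    ]
    c010 = lut[i,     j + 1, k    ]
    c001 = lut[i,     j,     k + 1]
    c110 = lut[i + 1, j + 1, k    ]
    c101 = lut[i + 1, j,     k + 1]
    c011 = lut[i,     j + 1, k + 1]
    c111 = lut[i + 1, j + 1, k + 1]

    # Pick one of the six tetrahedra the cube splits into, based on the
    # ordering of the fractional coordinates, then blend along its edges.
    if r > g:
        if g > b:        # r >= g >= b
            return c000 + r*(c100-c000) + g*(c110-c100) + b*(c111-c110)
        elif r > b:      # r >= b >= g
            return c000 + r*(c100-c000) + b*(c101-c100) + g*(c111-c101)
        else:            # b >= r >= g
            return c000 + b*(c001-c000) + r*(c101-c001) + g*(c111-c101)
    else:
        if b > g:        # b >= g >= r
            return c000 + b*(c001-c000) + g*(c011-c001) + r*(c111-c011)
        elif b > r:      # g >= b >= r
            return c000 + g*(c010-c000) + b*(c011-c010) + r*(c111-c011)
        else:            # g >= r >= b
            return c000 + g*(c010-c000) + r*(c110-c010) + b*(c111-c110)

# Identity LUT: interpolating through it should return the input unchanged.
n = 17
grid = np.linspace(0, 1, n)
identity = np.stack(np.meshgrid(grid, grid, grid, indexing="ij"), axis=-1)
print(tetrahedral_interp([0.3, 0.62, 0.1], identity))   # ~[0.3, 0.62, 0.1]
```

In a full grading tool, the LUT nodes around each primary and secondary can be warped independently, which is exactly the control a single 3x3 matrix cannot give.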
  12. https://gabrieldevereux.com/calculator/ Here's how to calculate propagation from a large, near-Lambertian (matte) source. A simplistic view of it, explained crudely for the umpteenth time: 1, 2 or however many light sources can be illuminating the reflector. It is assumed these light sources are at an acute angle to the normal of the reflector and that the surface is near-Lambertian. The reflectance value of muslin is approximately 95% (0.95). The algorithm then models the reflecting soft surface as being composed of many point sources, m*n (the number of point sources scales with the size of the reflector), whose total light output is the same as the total luminous flux of the original light sources (minus losses). If the total luminous flux of the sheet is L, then the luminous flux of a single point source is L/(m*n). These point sources emit light isotropically into a solid angle of 2*pi steradians (only in front of the surface). The algorithm computes the illuminance at some point z on the axis perpendicular to the reflector (the normal) passing through its centre. To this end, it simply needs to find the illuminance produced at this point by each point source, apply certain laws such as Lambert's cosine law, and then sum these values to get the total. Note that the number of point sources, m*n, can become arbitrarily large; however, for m*n >= 50 the approximation is almost perfect (note there are approximately 7*7 (49) point sources per square metre, so this algorithm is accurate for sources larger than about 3' x 3'). The rest is basic geometry; a minimal sketch is below. Fall-off and contrast are not related. Fall-off is a term, I believe, taken from radiometry, a very pertinent topic for cinematography. Contrast in our world is artistically relative. How you interpret the two is up to you, but it's probably easiest to view them separately. Also, if you have any questions about the above calculator, please do ask. It's all reasonably simple math.
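A minimal Python sketch of that point-source grid, under the stated assumptions: a rectangular near-Lambertian sheet, reflected flux shared equally across an m*n grid, cosine-weighted (Lambertian) emission into the front hemisphere, and an on-axis measurement point. It is not the exact reflectorcalc.py script, and the example numbers are illustrative only.

```python
import math

def on_axis_illuminance(total_flux_lm, width_m, height_m, z_m,
                        reflectance=0.95, m=20, n=20):
    """Approximate on-axis illuminance (lux) from a rectangular near-Lambertian
    source, modelled as m*n point sources sharing the reflected flux equally."""
    flux = total_flux_lm * reflectance          # losses at the fabric
    per_point = flux / (m * n)                  # flux of each point source
    total = 0.0
    for i in range(m):
        for j in range(n):
            # centre of each grid patch, relative to the sheet centre
            x = (i + 0.5) / m * width_m - width_m / 2
            y = (j + 0.5) / n * height_m - height_m / 2
            r2 = x * x + y * y + z_m * z_m
            cos_theta = z_m / math.sqrt(r2)
            # Lambertian emitter: intensity I(theta) = (flux / pi) * cos(theta);
            # illuminance on a plane facing the sheet adds another cos(theta) / r^2.
            total += (per_point / math.pi) * cos_theta * cos_theta / r2
    return total

# Example: 10,000 lm bounced off a 1.8 m x 1.8 m muslin, measured 3 m away on axis.
print(round(on_axis_illuminance(10_000, 1.8, 1.8, 3.0)), "lux (approx.)")
```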
  13. By this, I mean the ability to sample spatial frequency detail. I should also emphasise this is not the case in the Ursa Mini Pro 12k, just the example CFA given in that link.
  14. To follow up on what Joshua said above (all of which, to my knowledge, is correct): the article you linked has greatly misconceived the 12K sensor, and it is very wrong, to an absurd degree. Taking its claims in turn:
"you get better DR, cause noise gets so small by down sampling that it almost looks like grain, so you can lift your shadows to a level, which would have not be possible before (or would have needed massive NR – with the well known side effects)"
Nope. It's a 2.2-micron photosite. Downsampling takes the mean of adjacent pixels, which reduces deviation from the mean (improving SNR); however, the initial readout from a 2.2-micron photosite has a comparatively poor SNR because of its size. A photodiode works by its electrons absorbing quanta of energy from photons, allowing charge to accumulate. That charge sits on the gate of the source follower, which controls how much of VDD passes at the local photosite level (it's a local amplifier). So the small charge the photodiode gains gates a much larger voltage, and any discrepancy (deviation from the mean) appears as noise. A larger photodiode holds a more significant charge, and a larger photodiode requires a larger photosite. TLDR: big photosites reduce noise, and downsampling reduces noise; however, a larger photosite is preferred, as the sensitivity scales better. By that I mean that, if everything scaled perfectly, a large photosite's depletion region and a multitude of smaller depletion regions would have the same sensitivity. But they don't: the size of the transistors does not scale as well as the size of the diode and, more importantly, its depletion region, so a smaller photosite has a proportionally much smaller diode than a larger photosite. Even with micro-optics, this doesn't scale the same. (A toy noise model follows at the end of this post.)
"Aliasing and Moire are vanishing without the need of a heavy OLPF, preserving important fine image information without artifacts."
After my longer-than-anticipated explanation above, I'll keep this one short: Nyquist sampling makes that statement incorrect. To put it simply, the Nyquist limit can't be higher than half the sampling frequency. The CFA shown in your article is also very wrong, as there is no alternating white photosite; that layout would mean the horizontal sampling period spans 6 pixels, which would make the horizontal resolution roughly 2,000 px.
"Color fidelity (especially in the higher frequencies) is way better – giving you better perceived sharpness and clarity – while still maintaining buttery skin, without the need of diffusion filters."
Nope, any tristimulus observer can resolve ANY COLOUR. Yes, that means your Canon T3i can determine the same amount of colours as the Arri Alexa 35 Pro Max Super Speed Digital Colour 3000, or whatever their new naming convention is. A wider gamut of colours to work with after acquisition is more a linear-matrix deal than a sensor-level one. See the Nyquist sampling limit above for why you don't achieve better 'colour fidelity' at higher spatial frequencies.
"Better SN ratio."
I like bold statements with no proof as well...
"Better chroma resolution due to the full RGB color readout."
Nope, any tristimulus observer can resolve ANY COLOUR.
"Less artifacts"
No; see the Nyquist sampling limit above. I assume the article is just talking about moire. I will admit it: it does have a better sampling limit than a 4K Bayer CFA. Not a lower SNR, though!
"Great skin tones"
Nope, any tristimulus observer can resolve ANY COLOUR. That comes down to the linear matrix they choose to use... which is all overdetermined and error-prone.
"The richness, smoothness, and – in lack of a better word – fatness of the images, that the sensor delivers is amazing."
Sure.
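As a rough illustration of the photosite-size argument above, here is a toy noise model: photon shot noise plus a per-readout read-noise floor that does not shrink with photosite area, and a crude fill-factor penalty for the smaller photosites. All of the numbers are assumptions for illustration, not measured sensor data.

```python
import math

# Toy model of the photosite-size argument: photon shot noise plus a fixed
# per-readout read-noise floor, and a crude fill-factor penalty for small
# photosites (the transistors don't shrink in proportion to the photodiode).
# Every number here is an illustrative assumption, not measured sensor data.

def snr(signal_e, read_noise_e):
    """SNR for a mean signal (electrons) with shot noise plus read noise (electrons RMS)."""
    return signal_e / math.sqrt(signal_e + read_noise_e ** 2)

photon_density = 20.0        # electrons generated per unit area (deep-shadow exposure)
read_noise = 3.0             # electrons RMS per readout, roughly area-independent

area_large, fill_large = 4.0, 0.60   # one large photosite
area_small, fill_small = 1.0, 0.45   # each of four small photosites covering the same area

large_signal = photon_density * area_large * fill_large
small_signal = photon_density * area_small * fill_small

snr_large = snr(large_signal, read_noise)
# Binning four small photosites: signals add, read noise adds in quadrature (x2).
snr_binned = snr(4 * small_signal, 2 * read_noise)

print(f"one large photosite:          SNR {snr_large:.1f}")
print(f"four small photosites binned: SNR {snr_binned:.1f}")
```

With shot noise alone the two layouts would come out even; the read-noise and fill-factor terms are the part that doesn't scale, which is the point the post is making.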