Everything posted by Gabriel Devereux
-
Can anyone point to whether Arri is actually struggling financially?
Gabriel Devereux replied to Edith blazek's topic in ARRI
Production is down significantly in the UK and Australia. A lot less money is being spent, and a lot less money is going into traditional media.
-
Here's a theory. We often say "muslin has a loss of 2 stops of light" or "grid cloth has a loss of 1 stop of light". Why is this the case?

It's important to note that muslin and grid cloth have very little intrinsic light loss: the absorption coefficient is tiny, so the material doesn't soak up the radiation to any degree that matters. In our context, all light hitting the diffuser is reflected or dispersed (a little becomes heat). The diffuser itself does not absorb enough energy to register in photographic stops.

A denser diffuser, such as grid cloth versus opal, scatters light more than a lower-density one. When diffusing light, you're essentially re-scattering it from a specific point: you're taking a planar wave (directional light) and scattering it in all directions. In simplified terms, the diffuser recreates many new light sources across its surface. Each point scatters light much like light from a bulb propagating in all directions, with its energy falling off according to the expanding sphere's surface area; the same principle applies when light propagates from a diffuser.

I wrote a Python script to quantify light propagation from a diffuser; below is an example of me applying the calculator. Code to download: https://github.com/GabrielJDevereux/soft_light/blob/277450eb2c3ef219c393cd897f7b7a462544ffb2/reflectorcalc.py

Gabriel
Infinityvision.tv
-
Estimating output from large overhead soft boxes
Gabriel Devereux replied to Gerrit Fahr's topic in Lighting for Film & Video
This calculation isn't too complex if we make some assumptions about the variables involved.

When diffusing light, we start by considering the light at the boundary before the diffuser, assuming it's in the optical far field and behaving as a planar wave. As this light passes through the diffuser—whether it's muslin, grid cloth, or another material—it's scattered randomly and uniformly. For simplicity, let's assume this scattering is isotropic.

From the perspective of the inverse square law, as light radiates from a point source, we can recalibrate this from the point of diffusion. The light emitted from the initial source in the optical far field, just before hitting the diffuser, is re-scattered at the diffuser's surface. Assuming the diffuser behaves as a Lambertian surface, which is a reasonable approximation for most heavy diffusers, we can calculate the diffuser's luminous flux. This involves dividing the flux across a grid of points on the xy-plane, summing the contributions from each point, and applying the inverse square law to account for the distance from those points. Lambert's cosine law also comes into play here, helping to estimate how the light is distributed.

This approach provided a good enough estimation for my purposes. In fact, I created a simple JavaScript tool on my old website, https://gabrieldevereux.com/calculator/, that performs all these calculations for you.

As a side note, while it's true that different diffusers can have varying levels of light absorption, this absorption is usually minimal and has a negligible effect on the perceived photographic output. The more significant factor is how the diffuser scatters light. When you take a planar wave—a relatively focused ray of light—and scatter it, the light spreads out over a larger area. Because the energy of the light decreases with the square of the distance (as per the inverse square law), the more you scatter the light, the greater the perceived loss of intensity.

There is a reflectance coefficient in there; you should set it to 99 or 100.
-
What’s the best advice you’ve received?
Gabriel Devereux replied to Ryan Ivy's topic in General Discussion
I've often found online education to be a great resource. Ultimately, everything we do is quantifiable; otherwise, the fabrication of cameras and sensors would not be possible. While learning the literature may not directly make you a better artist, it helps you understand the tools you're using and the fundamentals of light.
-
As a side note: while a camera's CFA and initial spectral primaries impact colour rendition, it is essential to note that any tristimulus observer can resolve any colour. It is also important to note that almost all cameras do not conform to the Luther-Ives condition; by that, I mean being a single linear transform away from the XYZ CMFs (a single linear transform away from our photopic vision's cone fundamentals).

So you have your initial spectral primaries (the RGB value of each primary: R, G, B), and you apply gain to each primary to manipulate your spectral primaries and, therefore, your gamut. That is how a 3x3 linear matrix works alongside an OETF (opto-electronic transfer function). Most, if not all, cameras (apart from scientific use cases) resolve primaries that can only be transformed to our XYZ/photopic vision linearly with error. CFA densities and primaries that do well on the Luther-Ives front do poorly on the SNR and dynamic-range front. So all cameras on a sensor level resolve something wacky, and all manufacturers apply a 3x3 matrix to transform to their acquisition gamut, or as close to 709 as possible, etc.

Note that the limitation here is the 3x3 linear matrix. Think about an RGB cube: you have three axes, and your 3x3 linear matrix applies a transform on those three axes; you can only transform linearly. That is Steve Yedlin's whole display-prep schtick. He gives control to the secondaries as well as the primaries: he divides the space around each primary and secondary into tetrahedra, applying interpolation across values so that the gain applied to each primary and secondary RGB value can be controlled independently.
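For anyone curious what that tetrahedral approach actually looks like in code, here is a minimal sketch of tetrahedral interpolation through a 3D LUT. It is not Yedlin's actual pipeline; the LUT shape, the normalisation to [0, 1], and the identity-LUT check at the end are my own assumptions, purely for illustration.

```python
import numpy as np

def tetrahedral_lookup(rgb, lut):
    """Tetrahedral interpolation of a 3D LUT.
    rgb: length-3 values in [0, 1]; lut: (N, N, N, 3) array indexed [r][g][b]."""
    n = lut.shape[0]
    pos = np.clip(rgb, 0.0, 1.0) * (n - 1)
    idx = np.minimum(pos.astype(int), n - 2)   # lower lattice corner of the cell
    f = pos - idx                               # fractional position inside the cell
    fr, fg, fb = f
    i, j, k = idx

    def c(di, dj, dk):                          # fetch one corner of the cell
        return lut[i + di, j + dj, k + dk]

    # Pick one of six tetrahedra based on the ordering of the fractions,
    # then blend its four corners (standard tetrahedral weights).
    if fr >= fg:
        if fg >= fb:    # fr >= fg >= fb
            out = (1-fr)*c(0,0,0) + (fr-fg)*c(1,0,0) + (fg-fb)*c(1,1,0) + fb*c(1,1,1)
        elif fr >= fb:  # fr >= fb > fg
            out = (1-fr)*c(0,0,0) + (fr-fb)*c(1,0,0) + (fb-fg)*c(1,0,1) + fg*c(1,1,1)
        else:           # fb > fr >= fg
            out = (1-fb)*c(0,0,0) + (fb-fr)*c(0,0,1) + (fr-fg)*c(1,0,1) + fg*c(1,1,1)
    else:
        if fb >= fg:    # fb >= fg > fr
            out = (1-fb)*c(0,0,0) + (fb-fg)*c(0,0,1) + (fg-fr)*c(0,1,1) + fr*c(1,1,1)
        elif fb >= fr:  # fg > fb >= fr
            out = (1-fg)*c(0,0,0) + (fg-fb)*c(0,1,0) + (fb-fr)*c(0,1,1) + fr*c(1,1,1)
        else:           # fg > fr > fb
            out = (1-fg)*c(0,0,0) + (fg-fr)*c(0,1,0) + (fr-fb)*c(1,1,0) + fb*c(1,1,1)
    return out

# Quick sanity check with an identity LUT: output should equal input.
N = 17
grid = np.linspace(0.0, 1.0, N)
identity = np.stack(np.meshgrid(grid, grid, grid, indexing="ij"), axis=-1)
print(tetrahedral_lookup(np.array([0.30, 0.62, 0.11]), identity))  # ~[0.30, 0.62, 0.11]
```

The point of the tetrahedral split is that each primary and secondary sits on its own lattice corner, so warping one of them doesn't drag the others along the way a single 3x3 matrix inevitably does.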
-
Generally, a Codex Compact Drive with the TB3 reader is advertised at 10 Gbps, roughly 1 GB/s. With checksumming and a NAS (RAID) it averages around 800 MB/s, or 6.4 Gbps (occasionally higher, but I calculate off that average). Note you must double all your transfer times to account for an xxHash or MD5 checksum. So a 1 TB Codex Compact Drive takes about 42 minutes to ingest, checksum included. You must always carry enough media to support each camera for 24 hours of predicted shooting time (for example, on average you shoot 8 hours a day).
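Purely as a back-of-envelope check of that 42-minute figure (assuming 1 TB = 1,000,000 MB and the 800 MB/s average quoted above):

```python
# Rough ingest-time estimate; the rate and the checksum doubling are the
# assumptions stated in the post above, not measured values.
capacity_mb = 1_000_000           # 1 TB of OCF, in MB
rate_mb_s = 800                   # sustained TB3 -> NAS throughput
copy_s = capacity_mb / rate_mb_s  # plain copy: 1250 s
total_s = copy_s * 2              # doubled for an xxHash/MD5 verification pass
print(f"{total_s / 60:.0f} minutes")  # ~42 minutes
```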
-
Teradek Cube and Serve Pro Multicam
Gabriel Devereux posted a topic in Camera Assistant / DIT & Gear
Hi all, quick question: has anyone ever tried a multicam setup with Teradek Serve Pros or, more importantly, Teradek Cubes with HEVC compression? I've had numerous people talk about the Serves; most couldn't go over 4 units on a single AP, which makes sense given the fundamental limitations of the 802.11ax radio. However, nobody I know has tried the above with Cubes. I'm Australia-based, so I'm hoping someone across the pond has given it a go! Interested to hear how many cameras you managed to view, what the latency was like, etc.
-
Should I buy a new Arri Alexa 35 or a used mini LF?
Gabriel Devereux replied to Edith blazek's topic in ARRI
Can you elaborate on the zero-sum-game aspect of a larger sensor? I've always followed the simple logic that the larger the sensor, the more sensitive it inherently is, so hearing an alternative would be great. The simplistic logic above does apply, but the potential loss from, say, the larger column lines does interest me.
-
Future of Cinematography! What’s next?
Gabriel Devereux replied to Saikat Chattopadhyay's topic in General Discussion
And one thing to learn is an understanding of, and respect for, others who don't speak English as their first language!
-
Future of Cinematography! What’s next?
Gabriel Devereux replied to Saikat Chattopadhyay's topic in General Discussion
I have a few!

SMPTE ST 2110 and 2022-6: complete video-over-IP solutions utilising Micron, EVS and so forth. With Micron, I can live-grade hundreds of cameras (realistically 6-7, but it scales). This also allows remote recording, with the goal of local media being kept just for redundancy.

Colour science, in both the broad and the local sense. Understanding that cameras are merely tristimulus observers, and that Yedlin's whole thing back in 2019 is entirely accurate and poignant. Looking at a camera as a photon counter, and a poor one at that, allows a better understanding of the entire pipeline. Note: I always thought colour science was essential to understanding colour from a camera, but it's essentially the opposite; it's about understanding colour as a concept. Noting that in our world colour is inherently three-dimensional, it's about understanding how to manipulate values effectively using tools such as tetrahedral warping or multi-dimensional tetrahedral warping.

Radiometry: light is simply electromagnetic radiation, and all the same laws apply. As we move into larger volume-based productions, the time has come to do the math up front to ensure everything works.

COFDM is more niche, but coded orthogonal frequency-division multiplexing has been the standard for communications for decades. However, its throughput didn't allow for FHD-UHD video streams, let alone 3G-SDI/12G-SDI streams. As COFDM technology advances and we can use higher constellations on the same orthogonal multi-channel carriers, we can send high-bitrate signals with significant redundancy (forward error correction). To loop back to the beginning of this post, cameras will become more like sound devices: we can relinquish the cable for robust remote recording over RF.

In summary, my forward-thinking advancements are: people will become more comfortable with colour science and build a greater understanding of cameras, allowing for superior look development and less camera loyalty; remote recording (which is already happening over cable and SMPTE 2110) will start happening over RF; and extensive lighting exercises will move away from relying solely on the know-how of gaffers with decades of experience and instead work harmoniously with contemporary physics tools.
-
https://gabrieldevereux.com/calculator/

Here's how to calculate propagation from a large, near-Lambertian (matte) source. A simplistic view of it, explained crudely for the umpteenth time.

One, two, or however many light sources can be illuminating the reflector. It is assumed these light sources are at an acute angle to the normal of the reflector and that the surface is near-Lambertian. The reflectance value of muslin is approximately 95% (0.95).

The calculator then models the reflecting soft surface as being composed of many point sources, m*n (the number of point sources scales with the size of the reflector), whose total light output is the same as the total luminous flux of the original light sources (minus losses). If the total luminous flux of the sheet is L, then the luminous flux of a single point source is L/(m*n). These point sources emit light isotropically into a solid angle of 2π steradians (only in front of the surface).

The algorithm computes the illuminance at some point z on the axis perpendicular to the reflector (the normal) passing through its centre. To this end, it simply finds the illuminance produced at this point by each point source, applies laws such as Lambert's cosine, and then sums these values to get the total. Note that the number of point sources, m*n, can become arbitrarily large; however, for m*n ≥ 50 the approximation is almost perfect (there are approximately 7*7 = 49 point sources per m², so the algorithm is accurate for sources larger than 3' x 3'). The rest is basic geometry.

Fall-off and contrast are not related. Fall-off is a term, I believe, taken from radiometry, a very poignant topic for cinematography. Contrast in our world is artistically relative. How you interpret the two is up to you, but it's probably easiest to view them separately.

Also, if you have any questions about the above calculator, please do ask. It's all reasonably simplistic math.
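To make the above concrete, here is a minimal Python sketch of the same idea. It is not the code behind the calculator linked above; the dimensions, flux, and points-per-metre are example values, and each point source is treated exactly as described: isotropic into the front hemisphere, with Lambert's cosine term and inverse-square fall-off.

```python
import math

def on_axis_illuminance(flux_lm, width_m, height_m, z_m,
                        reflectance=0.95, points_per_m=7):
    """Illuminance (lux) at distance z on the normal through the centre of a
    rectangular soft source modelled as an m*n grid of point sources."""
    m = max(2, round(width_m * points_per_m))
    n = max(2, round(height_m * points_per_m))
    flux_per_point = flux_lm * reflectance / (m * n)     # L / (m*n), minus losses

    total = 0.0
    for i in range(m):
        for j in range(n):
            # Position of this point source on the sheet, centred on the origin.
            x = (i + 0.5) / m * width_m - width_m / 2
            y = (j + 0.5) / n * height_m - height_m / 2
            r2 = x * x + y * y + z_m * z_m               # squared distance to the target
            cos_theta = z_m / math.sqrt(r2)              # angle off the sheet's normal
            # Isotropic into 2*pi steradians, inverse-square fall-off,
            # Lambert's cosine term for the oblique contribution.
            total += flux_per_point / (2 * math.pi) * cos_theta / r2
    return total

# Example: a 3.6 m x 3.6 m (~12' x 12') muslin bounce fed 100,000 lm, read 3 m away.
print(round(on_axis_illuminance(100_000, 3.6, 3.6, 3.0)), "lux")
```

Doubling z_m shows the point quickly: close to a large sheet the fall-off is far gentler than inverse-square from a single point, and only at distances much larger than the sheet does it converge back to it.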
-
To follow up on what Joshua said above (all of which, to my knowledge, is correct): the article you linked has greatly misconceived the 12K sensor, and is very wrong, to an absurd degree...

"you get better DR, cause noise gets so small by down sampling that it almost looks like grain, so you can lift your shadows to a level, which would have not be possible before (or would have needed massive NR – with the well known side effects)"

Nope. It's a 2.2-micron photosite; downsampling takes the mean of neighbouring pixels, decreasing each pixel's deviation from the mean (improving SNR). However, the initial readout from a 2.2-micron photosite has a very low SNR because of its size. A photodiode works by its electrons absorbing quanta of energy from photons, accumulating charge. That charge sits on the gate (the G) of the source follower that passes VDD at the local photosite level (it's a local amplifier). So the small charge the photodiode gains controls a much larger voltage passing through, and any discrepancy (deviation from the mean) shows up as noise. A larger photodiode holds a more significant amount of charge, and a larger photodiode requires a larger photosite!

TL;DR: big photosites improve SNR and downsampling improves SNR; however, a larger photosite is preferred, as the sensitivity scales. By that I mean the large photosite's depletion region and a multitude of smaller depletion regions would, if scaled perfectly, have the same sensitivity. But they don't! Transistor size does not scale as well as the diode and, more importantly, its depletion region; a smaller photosite has a proportionally far smaller diode than a larger photosite. Even with micro-optics, this doesn't scale the same.

"Aliasing and Moire are vanishing without the need of a heavy OLPF, preserving important fine image information without artifacts."

After my longer-than-anticipated explanation above, I'll keep this one short: Nyquist sampling makes the above statement incorrect. To put it simply, the sampling limit can't be higher than half the sampling frequency; the article's CFA diagram is also very wrong, as there is no alternating white photosite... that would mean one horizontal sample requires 6 pixels, which would make the horizontal resolution about 2000 px.

"Color fidelity (especially in the higher frequencies) is way better – giving you better perceived sharpness and clarity – while still maintaining buttery skin, without the need of diffusion filters."

Nope, any tristimulus observer can resolve ANY COLOUR. Yes, that means your Canon T3i can resolve the same number of colours as the Arri Alexa 35 Pro Max Super Speed Digital Colour 3000, or whatever their new naming convention is. A wider gamut of colours to work with after acquisition is more a linear-matrix deal than a sensor-level one. See the Nyquist sampling limit above for why you don't achieve better 'colour fidelity' at higher spatial frequencies.

"Better SN ratio."

I like bold statements with no proof as well...

"Better chroma resolution due to the full RGB color readout."

Nope, any tristimulus observer can resolve ANY COLOUR.

"Less artifacts"

No. Nyquist sampling limit, see above... I assume the article is just talking about moiré. I will admit it does have a better sampling limit than a 4K Bayer CFA. Not a better SNR, though!

"Great skin tones"

Nope, any tristimulus observer can resolve ANY COLOUR. That's down to the linear matrix they choose to use... which is all overdetermined and error-prone.

"The richness, smoothness, and – in lack of a better word – fatness of the images, that the sensor delivers is amazing."

Sure.
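As a quick back-of-envelope for what photosite pitch alone means at the Nyquist limit (the 2.2 µm figure is the one quoted above; everything else is an assumption for illustration):

```python
# Nyquist limit from photosite pitch alone; pitch is the 2.2-micron figure above.
pitch_mm = 2.2 / 1000.0                    # 2.2 um expressed in mm
luma_limit = 1.0 / (2.0 * pitch_mm)        # one line pair needs two samples
print(f"Monochrome sampling limit: {luma_limit:.0f} lp/mm")     # ~227 lp/mm

# On a Bayer CFA, red and blue sit on every other photosite in both axes,
# so their per-channel sampling limit is half that.
chroma_limit = 1.0 / (2.0 * 2 * pitch_mm)
print(f"Red/blue per-channel limit: {chroma_limit:.0f} lp/mm")  # ~114 lp/mm
```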
-
Not to say what they're doing isn't highly skilful. But in terms of making a camera, to state the obvious, a digital camera is far more finicky and takes far more expensive machinery to fabricate. I'm not exactly saying anything revolutionary here... try to make a MOSFET gate that's half a micron wide and you'll get the gist. I believe the correct answer is that building a film camera wouldn't be profitable, which we all know and accept.
-
Teradek Bolt Old vs. New?
Gabriel Devereux replied to Stephen Sanchez's topic in Camera Assistant / DIT & Gear
To a certain degree! The $500 Vaxis system you mention is an H.264 encoder feeding a WiFi radio that can do P2P but can also go to an iPad and so forth (I believe that's the product?). That's really a different video-over-IP product that, with WiFi, has restream capability, channel jumping and so forth; it also has a narrower carrier than an Amimon-based product, so with saturated WiFi channels it can still 'fit'. The downside of a video-over-IP product is inherent latency: Amimon is less than two scan lines (0.06 ms, I believe?), whereas with video over IP you're looking at a 2-frame+ delay (100-300 ms), with cumulative latency if it's all on the same network. The Vaxis Storm and Boxx Atom units have the same Amimon chipset as the old Teradek Bolt, but they don't comply with DFS, and Vaxis doesn't stick to the WiFi channel bands (it goes well into 'illegal' spectrum). That arguably makes it the better product: you can operate outside the legal range, with a lower noise floor (no competition from WiFi) and no DFS (no scanning for a minute on boot, which comes with 802.11ax-style WiFi compliance).
-
Teradek Bolt Old vs. New?
Gabriel Devereux replied to Stephen Sanchez's topic in Camera Assistant / DIT & Gear
Thanks Stephen. On a side note, Boxx, Teradek, and Vaxis use the same Amimon chips (the FHD models, not the 4K models), so the compression, RF transmission, and reconstruction are the same. The only difference is that Teradek was more compliant with RF regulations (which makes it easier to market, but a worse product).
-
Teradek Bolt Old vs. New?
Gabriel Devereux replied to Stephen Sanchez's topic in Camera Assistant / DIT & Gear
Note my 3:1 compression ratio was for HD streams. No clue about UHD; Teradek and Amimon are very hush-hush about the internal workings.
-
Teradek Bolt Old vs. New?
Gabriel Devereux replied to Stephen Sanchez's topic in Camera Assistant / DIT & Gear
Same chip manufacturer. As Phil said, the greater overhead theoretically allows higher reliability at HD. However, this is internal encoding and decoding into the Amimon 'language', as Teradeks (all Amimon chips) go through a 3:1 compression ratio to fit within the 40 MHz carrier. The actual RF reliability itself: no difference. As said above, there is more headroom for encoding, decoding, and rebuilding the signal. But at the end of the day it's a 2-way MIMO TX to a 4-way MIMO RX with the same constellations and so forth. We found the Boxx and Vaxis transmission systems just as reliable, non-DFS-compliant, and with no pairing (which is better for high-density RX setups).
-
I believe the ideology was that, with such pixel density, few lenses would exceed the Nyquist sampling limit of the camera, and therefore an OLPF wasn't necessary. Re the 12K: it's dumb. Whoever thought interpolating from an RGBW CFA was a good idea was a great salesman. The diagonal sampling limit is poor. And a 2.2 µm photosite with an exceptionally small diode...
-
Teradek Mounting
Gabriel Devereux replied to Dustin Supencheck's topic in Camera Assistant / DIT & Gear
As an update on this, and to correct some poor terminology: you block down to IF (intermediate frequency), which can run along coax, specifically RG1697, for 500+ feet. You should nearly always deploy two antennas, and an RF matrix can serve an absurd number of RXs. Most use H.264 as the codec (realistically you have 50 Mbps of bandwidth, which is plenty, so higher-density encoding like HEVC is unnecessary). COFDM is an RF modulation scheme. Amimon was initially developed to push a signal from your living room to your bedroom; COFDM was developed for high-intensity, high-risk use (think special-forces-type deployments) where reliability is life or death. The original use case, I think, speaks volumes.
-
Question: I'm not familiar with CinemaDNG file types. This 'forward' matrix, if it's baked into the metadata... are you sure it isn't a standard XYZ-to-709 linear transform? If it's in the metadata and applied on ingest in a 32-bit floating-point engine, then a linear transform from 709 to AP0/AP1 or XYZ would be lossless, I think. The reason I say this is that there is no single standard 709 transform: a colour correction matrix from RGB primaries is, of course, partially dictated by the RGB primaries and partially dictated by the XYZ/709 RGB values of the inherent illuminant (I assume D50 in your use case? Not sure why it wouldn't be D65?). Could you post the forward matrix? The idea of reviewing a matrix and assuming it's compressing X to Y is convoluted, at least in my head.

I'm unfamiliar with scanning film completely. But let's, for a minute, negate the primaries of film and assume the scanner is using a standard D65 illuminant with an adequate SPD. Let's also assume the scanner is using a Bayer filter that doesn't abide by the Luther-Ives condition (I would've thought a scanner would, as the downsides of abiding by a CIE 1931 CMF colour-target filtration scheme are noise and latitude; I would've thought that scanning a negative itself would partially negate that, and you could just throw a shit ton of light at it, but ah well, maybe they couldn't be bothered designing a new sensor? #FilmIsDead, I'm joking... maybe).

So with this, we have the RGB primaries of the Bayer CFA in the scanner, our standard illuminant (D65), and our output primaries, XYZ/709. This matrix shouldn't match any other matrix. It shouldn't look like any matrix you've seen (unless you're a colour scientist used to calibrating cameras, in which case I apologise), as it's partially, even primarily, dictated by the RGB primaries of the scanner. I've spent some time calibrating phone cameras to XYZ; even though the final spectral locus/primaries are defined, the matrix values are never the same due to the variance in input RGB primaries, and they're usually drastically different!

Also, if the debayer has taken place prior to ingest, I believe (taking reference from SMPTE RDD 31:2014) that the colour-correction matrix from the tristimulus RGB primaries of the Bayer CFA would already have been applied.
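For intuition on why such a matrix never 'looks like' any standard one: in practice a camera- or scanner-to-XYZ matrix is usually fit by least squares from measured chart patches, and the fit is overdetermined. Here is a minimal sketch of that idea; the patch values are made up purely for illustration, not taken from any real scanner or chart.

```python
import numpy as np

# Hypothetical example: fit a 3x3 colour-correction matrix M so that
# M @ camera_rgb ≈ XYZ for a set of measured chart patches.
camera_rgb = np.array([[0.42, 0.31, 0.20],
                       [0.18, 0.30, 0.33],
                       [0.55, 0.52, 0.49],
                       [0.10, 0.07, 0.05],
                       [0.33, 0.22, 0.41],
                       [0.61, 0.58, 0.23]])   # N x 3, camera/scanner-native linear RGB
xyz_ref    = np.array([[0.40, 0.36, 0.20],
                       [0.20, 0.28, 0.35],
                       [0.52, 0.55, 0.50],
                       [0.09, 0.09, 0.06],
                       [0.30, 0.25, 0.42],
                       [0.55, 0.60, 0.22]])   # N x 3, reference XYZ under the chosen illuminant

# Overdetermined least-squares fit: more patches than unknowns, so there is
# always some residual error left over.
M, residuals, rank, _ = np.linalg.lstsq(camera_rgb, xyz_ref, rcond=None)
M = M.T                                       # so that xyz ≈ M @ rgb for a single pixel
print(np.round(M, 3))
```

With more patches than unknowns, the system can't be satisfied exactly; that residual is the error a 3x3 matrix can never remove, which is the "overdetermined and error-prone" point made above.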
-
Hi all, I've been doing a series of experiments on matching cameras in Nuke/Fusion/Resolve utilising 3x 1D LUTs for tone maps plus tetrahedral interpolation (akin to Yedlin's display prep). My math for matching Sony S-Log3/S-Gamut3.Cine and the RED IPP2 pipeline to LogC3/AWG3 seems to be working and delivers pleasing results (I've been using it in place of a CST node in Resolve for a while now). To further my understanding, I'd like to acquire some footage of 5219, 5213, 5207 and 5203 shooting a colour checker chart (exposed correctly). Preferably 12-bit/10-bit log DPX files, and only 1-2 seconds of each. If you have footage of any of the stocks above, I'll be more than happy to purchase it and/or share my findings during and at the end of the pipeline, including the math and code. If possible, email gabjol@me.com. Thanks a bunch! G
-
Does anyone else prefer the look of 2K to 4K?
Gabriel Devereux replied to M Joel W's topic in General Discussion
Maybe try utilising Codex HDE on ingest if a hefty codec is a problem.