Gabriel Devereux

Everything posted by Gabriel Devereux

  1. As a side note, while a camera's CFA and initial spectral primaries impact colour rendition, it is essential to note that any tristimulus observer can resolve any colour. It is also important to note that almost all cameras do not conform to the Luther-Ives condition; by that, I mean being a single linear transform away from the XYZ CMFs (a single linear transform away from the cone functions of our photopic vision). So you have your initial spectral primaries (the RGB value of each primary), and you apply gain to each to manipulate your spectral primaries and, therefore, your gamut. That is how a 3x3 linear matrix works alongside an OETF (optoelectronic transfer function). So most, if not all, cameras (apart from scientific use cases) resolve primaries that can only be transformed to XYZ/photopic vision with error. CFA densities and primaries that perform well on the Luther-Ives front perform poorly on the SNR and dynamic-range front. So every sensor resolves something wacky, and every manufacturer applies a 3x3 matrix to transform to their acquisition gamut, or as close to 709 as possible, etc. Note that the limitation here is the 3x3 linear matrix. Think about an RGB cube: you have three axes, and a 3x3 matrix can only transform linearly along those axes. That is Steve Yedlin's whole display-prep schtick: he gives control to the secondaries as well as the primaries, dividing the cube into tetrahedra and interpolating across values depending on the gain applied to each primary and secondary RGB value. A sketch of the matrix limitation is below.
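To make the cube picture concrete, here is a minimal Python sketch of that single 3x3 transform. The matrix values are invented for illustration, not any real camera's calibration; a real matrix would come from a fit against the XYZ CMFs.

```python
import numpy as np

# Hypothetical camera-RGB -> XYZ matrix (illustrative values only).
M = np.array([
    [0.6, 0.3, 0.1],
    [0.2, 0.7, 0.1],
    [0.0, 0.1, 0.9],
])

def apply_3x3(rgb_linear):
    """Apply the same matrix to every linear RGB triplet (or Nx3 array).
    Only the three axes of the cube can be steered; secondaries follow."""
    return np.asarray(rgb_linear) @ M.T

print(apply_3x3([1.0, 0.0, 0.0]))  # where the camera's red primary lands
```

Tetrahedral interpolation (sketched under item 18 below) is what lifts that all-axes-at-once restriction.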
  2. Generally, the Codex Compact with the TB3 reader is advertised at 10Gbps, or 1GB/s. In practice, copying to a NAS (RAID), it averages about 800MB/s (6.4Gbps); occasionally higher, but I calculate off that average. Note you must double all transfer times to account for an xxHash or MD5 checksum pass. So a 1TB Compact drive takes about 42 minutes to ingest, checksum included, as per the arithmetic below. You must always carry enough media to support each camera for 24 hours of predicted shooting time (where, for example, you shoot 8 hours per day on average).
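For anyone who wants to rerun the numbers, a minimal sketch (decimal units assumed, 1TB = 1,000,000MB):

```python
card_mb = 1_000_000      # 1 TB Codex Compact drive
rate_mb_s = 800          # sustained average copy rate to a RAID NAS
copy_s = card_mb / rate_mb_s
total_s = copy_s * 2     # doubled for the xxHash/MD5 checksum pass
print(f"{total_s / 60:.0f} minutes with checksum")  # ~42 minutes
```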
  3. Hi All, Quick question: has anyone ever tried a multicam setup with Teradek Serv Pros or, more importantly, Teradek Cubes with HEVC compression? I've had numerous people talk about the Servs. Most couldn't go over four units on a single AP, which makes sense given the fundamental limitations of the 802.11ax radio. However, nobody I know has tried the above with Cubes. I'm Australia-based, so I'm hoping someone across the pond has given it a go! Interested to hear how many cameras you managed to view, constant latency, etc.
  4. Can you elaborate on the zero-sum-game aspect of a larger sensor? I've always followed the simple logic that the larger the sensor, the inherently more sensitive it is, so hearing an alternative would be great. The simplistic logic above does apply, but the potential loss from, say, the larger column lines does interest me.
  5. And one thing to learn is an understanding of, and respect for, others who don't speak English as their first language!
  6. I have a few! SMPTE 2110 and 2022-6: complete video-over-IP solutions utilising Micron, EVS and so forth. With Micron, I can live-grade hundreds of cameras (realistically 6-7, but it scales). This also allows remote recording, with local media kept just for redundancy. Colour science, in the broad and local sense: understanding that cameras are merely tristimulus observers, and that Yedlin's whole thing back in 2019 is entirely accurate and poignant. Looking at a camera as a photon counter, and a poor one at that, allows a better understanding of the entire pipeline. Note: I always thought colour science was essential to understanding colour from a camera, but it's essentially the opposite; it's understanding colour as a concept. Noting that in our world colour is inherently three-dimensional, it's about understanding how to manipulate values effectively using tools such as tetrahedral warping or multi-dimensional tetrahedral warping. Radiometry: light is simply EMF, and all the same laws apply. As we move into larger volume-based productions, the time is right to do the math to ensure everything works. COFDM is more niche, but coded orthogonal frequency-division multiplexing has been the standard for communications for decades. However, its throughput didn't allow for FHD-UHD video streams, let alone 3G-SDI/12G-SDI streams. As COFDM technology advances and we can use higher constellations on the same orthogonal multi-channel carriers, we can send high-bitrate signals with significant redundancy (forward error correction). To loop back to the beginning of this post: cameras will become more like sound devices, in that we can relinquish a cable for robust remote recording over RF. In summary, my forward-thinking advancements are: people will become more comfortable with colour science and build a greater understanding of cameras, allowing for superior look development and less camera loyalty; remote recording (which is already happening by cable and SMPTE 2110) will start happening over RF; and extensive lighting exercises will move away from the sole know-how of gaffers with decades of experience and instead work harmoniously with contemporary physics technology.
  7. https://gabrieldevereux.com/calculator/ Here's how to calculate large near-Lambertian (matte) source propagation. A simplistic view of it, explained crudely for the umpteenth time: 1, 2 or however many light sources can be illuminating the reflector. It is assumed these light sources are at an acute angle to the normal of the reflector and that the surface is near-Lambertian. The reflectance value of muslin is approximately 95% (0.95). The algorithm then models the reflecting soft surface as being composed of many point sources, m*n (the number of point sources scales with the size of the reflector), whose total light output is the same as the total luminous flux of the original light sources (minus losses). If the total luminous flux of the sheet is L, then the luminous flux of a single point source is L/(m*n). These point sources emit light isotropically into a solid angle of 2*pi steradians (only in front of the surface). The algorithm computes the illuminance at some point z on the axis perpendicular to the reflector (the normal) passing through its centre. To this end, it simply finds the illuminance produced at this point by each point source, applies certain laws such as Lambert's cosine law, and then sums these values to get the total. Note that the number of point sources, m*n, can become arbitrarily large. However, for m*n >= 50 the approximation is almost perfect (note there are approximately 7*7 (49) point sources per m², so this algorithm is accurate for sources larger than 3' x 3'). The rest is basic geometry; a sketch follows below. Fall-off and contrast are not related. Fall-off is a term, I believe, taken from radiometry, a very poignant topic for cinematography. Contrast in our world is artistically relative. How you interpret the two is up to you, but it's probably easiest to view the two separately. Also, if you have any questions about the above calculator, please do ask. It's all reasonably simplistic math.
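A minimal Python sketch of that summation, under the assumptions above; the grid density and the simple cosine weighting are my guesses at the calculator's internals, not its actual code.

```python
import math

def axial_illuminance(total_flux_lm, width_m, height_m, z_m, n=10):
    """Illuminance (lux) at distance z on the normal through the centre of a
    rectangular near-Lambertian reflector, modelled as an n*n grid of point
    sources as described above."""
    flux_per_point = total_flux_lm / (n * n)
    intensity = flux_per_point / (2 * math.pi)    # emission into the front hemisphere
    total = 0.0
    for i in range(n):
        for j in range(n):
            # point-source position on the reflector plane, centred on the origin
            x = (i + 0.5) / n * width_m - width_m / 2
            y = (j + 0.5) / n * height_m - height_m / 2
            d2 = x * x + y * y + z_m * z_m
            cos_theta = z_m / math.sqrt(d2)       # Lambert's cosine term
            total += intensity * cos_theta / d2   # inverse-square fall-off
    return total

# 10,000 lm bounced off 2m x 2m muslin (95% reflectance), read at 3m:
print(f"{axial_illuminance(10_000 * 0.95, 2, 2, 3):.0f} lux")
```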
  8. By this, I mean the ability to sample spatial frequency detail. I should also emphasise this is not the case in the Ursa Mini Pro 12k, just the example CFA given in that link.
  9. To follow up on what Joshua said above (all of which, to my knowledge, is correct): the article you linked has greatly misconceived the absolute negatives of the 12K sensor, and is very wrong! To an absurd degree... "You get better DR, cause noise gets so small by downsampling that it almost looks like grain, so you can lift your shadows to a level which would not have been possible before (or would have needed massive NR, with the well-known side effects)." Nope. It's a 2.2-micron photosite; downsampling takes the mean of adjoining pixels, decreasing deviation from the mean (improving SNR). However, a large photosite already has a very high SNR at the initial readout due to its size. A photodiode works by its electrons absorbing the quanta of energy from photons, accumulating charge. That charge sits on the gate of the source follower that passes VDD at the local photosite level (it's a local amplifier). So the small charge the photodiode gains controls a much greater voltage passing through, and any discrepancy (deviation from the mean) shows up as noise. A larger photodiode holds a more significant amount of charge, and a larger photodiode requires a larger photosite! TL;DR: big photosites improve SNR, and downsampling improves SNR; however, the larger photosite is preferred, as sensitivity scales with it. By that I mean that, if everything scaled perfectly, the large photosite's depletion region and the multitude of smaller depletion regions would have the same sensitivity. But it doesn't scale! The size of the transistors does not scale as well as the size of the diode and, more importantly, its depletion region; a smaller photosite has a proportionally much smaller diode than a larger photosite. Even with micro-optics, this doesn't scale the same. (See the shot-noise sketch below.) "Aliasing and moire are vanishing without the need of a heavy OLPF, preserving important fine image information without artifacts." After my longer-than-anticipated explanation above, I'll keep this answer short: Nyquist sampling makes the above statement incorrect. To put it simply, the Nyquist limit can't be higher than half the sampling frequency; your article's CFA diagram is very wrong, as there is no alternating white photosite... that would mean the horizontal sampling limit requires 6 pixels, which would make the horizontal resolution 2000 px. "Color fidelity (especially in the higher frequencies) is way better, giving you better perceived sharpness and clarity, while still maintaining buttery skin, without the need of diffusion filters." Nope, any tristimulus observer can resolve ANY COLOUR. Yes, that means your Canon T3i can determine the same amount of colours as the Arri Alexa 35 Pro Max Super Speed Digital Colour 3000, or whatever their new naming convention is. A wider gamut of colours to work with after acquisition is more a linear-matrix deal than a sensor-level one. See the Nyquist sampling limit above on why you don't achieve better 'colour fidelity' at higher spatial frequencies. "Better SN ratio." I like bold statements with no proof as well... "Better chroma resolution due to the full RGB color readout." Nope, any tristimulus observer can resolve ANY COLOUR. "Less artifacts." No; Nyquist sampling limit, see above... I assume the article is just talking about moire. I will admit it! It does have a better sampling limit than a 4K Bayer CFA. Not a lower SNR, though! "Great skin tones." Nope, any tristimulus observer can resolve ANY COLOUR. That's the linear matrix they choose to use... which is all overdetermined and error-prone.
"The richness, smoothness, and (in lack of a better word) fatness of the images that the sensor delivers is amazing." Sure.
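The shot-noise side of that argument in a minimal sketch; the photon counts and the fill-factor penalty are assumed numbers, purely to illustrate why one big photosite beats an averaged cluster of small ones once fill factor stops scaling.

```python
import math

# Shot-noise-limited SNR scales with the square root of collected photons.
photons_big = 10_000                             # one large photosite
fill_penalty = 0.7                               # small pixels lose proportionally more area
photons_small = photons_big / 4 * fill_penalty   # each of four small photosites

snr_big = photons_big / math.sqrt(photons_big)   # = sqrt(photons_big)
# Averaging four small sites: signal adds, shot noise adds in quadrature.
snr_binned = (4 * photons_small) / math.sqrt(4 * photons_small)
print(f"one large photosite:   SNR {snr_big:.0f}")     # 100
print(f"four binned small ones: SNR {snr_binned:.0f}") # ~84
```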
  10. Not to say what they're doing isn't highly skilful. But in terms of making a camera, to state the obvious: a digital camera is far more finicky and takes far more expensive machinery to fabricate. I'm not exactly saying anything revolutionary here... try to make a MOSFET gate that's half a micron big and you'll get the gist. I believe the correct answer is that building a film camera wouldn't be profitable, which we all know and accept.
  11. To a certain degree! The $500 Vaxis system you talk about is an H.264 encoder feeding a Wi-Fi radio that can do P2P but can also go to an iPad and so forth (I believe that's the product?). That's more of a video-over-IP product that, with Wi-Fi, has restream capability, channel jumping and so forth; it also has a smaller carrier than an Amimon-based product, so it can 'fit' into saturated Wi-Fi channels. The downside of a video-over-IP product is that it has inherent latency. Amimon is less than two scan lines (0.06ms, I believe?), whereas with video-over-IP you're looking at a 2+ frame delay (100-300ms), with cumulative latency if it's all on the same network (rough numbers below). The Vaxis Storm and Boxx Atom units have the same Amimon chipset as the old Teradek Bolt, but they don't comply with DFS; Vaxis doesn't stick to the Wi-Fi channel bands (it goes crazy into 'illegal' spectrum), which makes for a better product: you can operate outside the legal range with a lower noise floor (no competition from Wi-Fi) and no DFS scan (the one-minute scan on boot comes from 802.11ax compliance).
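Where those latency figures roughly come from, assuming 1080p30 timing with 1125 total lines per frame (SMPTE 274M):

```python
frame_ms = 1000 / 30                  # one frame at 30 fps
lines_per_frame = 1125                # total lines incl. blanking (SMPTE 274M)
two_scanlines_ms = 2 * frame_ms / lines_per_frame
print(f"Amimon-style: {two_scanlines_ms:.2f} ms")          # ~0.06 ms
print(f"Video-over-IP, 2+ frames: {2 * frame_ms:.0f}-300 ms")
```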
  12. Thanks Stephen. On a side note, Boxx, Teradek and Vaxis use the same Amimon chips (the FHD models, not the 4K models), so the compression, RF transmission and reconstruction are the same. The only difference is that Teradek was more compliant with RF regulations (which makes it easier to market, but a worse product).
  13. Note my 3:1 compression ratio was for HD streams. No clue about UHD; Teradek and Amimon are very hush-hush about the internal workings.
  14. Same chip manufacturer. As Phil said, the greater overhead theoretically allows higher reliability at HD. However, this is internal encoding and decoding into Amimon 'language', as Teradek units (all Amimon chips) go through a 3:1 compression ratio to fit within the 40MHz carrier. The actual RF reliability itself: no difference. As said above, the greater bandwidth for encoding, decoding and rebuilding the signal helps. But at the end of the day it's a 2-way MIMO TX to a 4-way MIMO RX with the same constellations and so forth. We found the Boxx and Vaxis transmission systems just as reliable, non-DFS-compliant and pairing-free (which is better for high-density RX setups).
  15. I believe the ideology was that with such pixel density few lenses would exceed the Nyquist sampling limit of the camera, and therefore it wasn't necessary. Re the 12K: it's dumb. Whoever thought interpolating from an RGBW CFA was a good idea was a great salesman. The diagonal sampling limit is poor. And a 2.2µm photosite with an exceptionally small diode...
  16. As an update on this, and correcting poor terminology: you block down to IF (intermediate frequency), which can run along coax, specifically RG1697, for 500+ feet. You should nearly always deploy two antennas, and an RF matrix can serve an absurd number of RXs. Most use H.264 as a compressor (realistically you have 50Mbps of bandwidth, which is excessive, so high-density encoding like HEVC is unnecessary). COFDM is an RF transmission scheme. Amimon was initially developed to push signal from your living room to your bedroom; COFDM was developed for high-intensity, high-risk use (think special-forces-type deployments) where reliability is life or death. The initial development, I think, speaks volumes.
  17. Question: I'm not familiar with CinemaDNG file types. This 'forward' matrix, if it's baked into the metadata... are you sure this isn't a standard XYZ-to-709 linear transform? If it's in the metadata and applied on ingest in a 32-bit floating-point engine, then a linear transform from 709 to AP0/1 or XYZ would be lossless, I think. The reason I say this is that there is no standard 709 transform. A colour-correction matrix from RGB primaries is, of course, partially dictated by the RGB primaries and partially dictated by the XYZ/709 RGB values of the inherent illuminant (I assume D50 in your use case? Not sure why it wouldn't be D65?). Could you post the forward matrix? The ideology of reviewing a matrix and assuming it's compressing X to Y is convoluted, at least in my head. I'm unfamiliar with scanning film completely. But let's, for a minute, negate the primaries of film, and let's assume the scanner is using a standard D65 illuminant with an adequate SPD. Let's also assume the scanner is using a Bayer filter that doesn't abide by the Luther-Ives condition (I would've thought a scanner would, as the downsides of abiding by the CIE 1931 CMF colour-target filtration scheme are noise and latitude; I would've thought scanning a negative itself would partially negate that, and then you just throw a shit ton of light at it, but ah well, maybe they couldn't be bothered designing a new sensor? #FilmIsDead, I'm joking... maybe). So with this we have the RGB primaries of the Bayer CFA in the scanner, our standard illuminant (D65) and our output primaries, XYZ/709. This matrix shouldn't match any other matrix. It shouldn't look like any matrix you've seen (unless you're a colour scientist used to calibrating cameras, for which I apologise), as it's partially (primarily) dictated by the RGB primaries of the scanner. I've spent some time calibrating phone cameras to XYZ; even though the final spectral locus/primaries are defined, due to the variance in input RGB primaries the values are never the same, usually drastically different! (A sketch of how such a matrix gets fitted is below.) Also, if the debayer has taken place prior to ingest, I believe (taking reference from SMPTE RDD 31:2014) that the colour-correction matrix from the tristimulus RGB primaries of the Bayer CFA would've already been applied.
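For reference, roughly how such a colour-correction matrix is fitted, as a minimal sketch with invented stand-in patch data; a real calibration would use measured chart readings and reference XYZ values under the chosen illuminant.

```python
import numpy as np

# Stand-in data: 24 chart patches as device RGB, and their "reference" XYZ
# generated from an invented matrix so the fit recovers something known.
rng = np.random.default_rng(0)
device_rgb = rng.random((24, 3))
true_matrix = np.array([[0.50, 0.30, 0.20],
                        [0.25, 0.60, 0.15],
                        [0.05, 0.10, 0.85]])
target_xyz = device_rgb @ true_matrix.T

# Overdetermined least-squares fit of the 3x3 matrix. With a real
# (non-Luther-Ives) sensor, the residual of this fit is exactly where the
# per-device error lives.
M, *_ = np.linalg.lstsq(device_rgb, target_xyz, rcond=None)
print(M.T)   # the fitted colour-correction matrix
```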
  18. Hi All, I've been doing a series of experiments on matching cameras in Nuke/Fusion/Resolve utilising 3x 1D LUTs for tone maps and tetrahedral interpolation (akin to Yedlin's display prep; the tetrahedral step is sketched below). My math for matching Sony SLog3/SGamut3.Cine and the RED IPP2 pipeline to LogC3/AWG3 seems to be working successfully and delivers pleasing results (I've been using it in place of a CST node in Resolve for a while now). To further my understanding, I'd like to acquire some footage of 5219, 5213, 5207 and 5203 shooting a colour checker chart (exposed correctly). I'd prefer 12-bit/10-bit log DPX files, only 1-2 seconds of each. If you have any footage of any of the stocks above, I'll be more than happy to either purchase it and/or share my findings during and at the end of the pipeline, including the math and code. If possible, email gabjol@me.com. Thanks a bunch! G
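For anyone curious what the tetrahedral part looks like in code, a minimal sketch of the standard six-tetrahedron decomposition of a LUT cube (my own illustration, not the actual node math):

```python
import numpy as np

def tetra_interp(lut, rgb):
    """Tetrahedral interpolation of one RGB triplet through a cubic 3D LUT.
    lut: (N, N, N, 3) array indexed [r, g, b]; rgb: values in [0, 1]."""
    n = lut.shape[0] - 1
    pos = np.clip(np.asarray(rgb, dtype=float), 0, 1) * n
    i = np.minimum(pos.astype(int), n - 1)   # lower corner of the enclosing cube
    r, g, b = pos - i                        # fractional position inside it
    c = lambda dr, dg, db: lut[i[0] + dr, i[1] + dg, i[2] + db]
    # Pick the tetrahedron containing (r, g, b) and blend its four corners.
    if r >= g >= b:
        return (1-r)*c(0,0,0) + (r-g)*c(1,0,0) + (g-b)*c(1,1,0) + b*c(1,1,1)
    if r >= b >= g:
        return (1-r)*c(0,0,0) + (r-b)*c(1,0,0) + (b-g)*c(1,0,1) + g*c(1,1,1)
    if b >= r >= g:
        return (1-b)*c(0,0,0) + (b-r)*c(0,0,1) + (r-g)*c(1,0,1) + g*c(1,1,1)
    if g >= r >= b:
        return (1-g)*c(0,0,0) + (g-r)*c(0,1,0) + (r-b)*c(1,1,0) + b*c(1,1,1)
    if g >= b >= r:
        return (1-g)*c(0,0,0) + (g-b)*c(0,1,0) + (b-r)*c(0,1,1) + r*c(1,1,1)
    return (1-b)*c(0,0,0) + (b-g)*c(0,0,1) + (g-r)*c(0,1,1) + r*c(1,1,1)

# Sanity check with an identity LUT: output should reproduce the input.
N = 17
grid = np.linspace(0, 1, N)
identity = np.stack(np.meshgrid(grid, grid, grid, indexing="ij"), axis=-1)
print(tetra_interp(identity, [0.2, 0.7, 0.4]))   # ~[0.2, 0.7, 0.4]
```

Unlike a single 3x3 matrix, each tetrahedron gets its own blend of corner values, which is what gives independent control over the secondaries.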
  19. Maybe try utilising Codex HDE on ingest if a hefty codec is a problem.
  20. To play devil's advocate, I will add that the point of Steve Yedlin's demo (for the most part) was that one shouldn't apply a look or visual preference to an acquisition format... "More doesn't always look better" is true, but it's also essential to keep in mind that larger photosites and lower-pixel-count sensors are more susceptible to moiré (the Nyquist sampling limit in Bayer CFAs ain't great due to the spacing of the colour samples; rough numbers below). Also, more pixels, more information... Now don't get me wrong! I'm not an advocate of high-density arrays. A 12K S35 sensor is dumb (and I'll say that until I'm blue in the face, and I'll explain my lengthy reasoning for it). But the balancing act between larger photosites being inherently more sensitive, having a higher fill factor etc., and smaller photosites mitigating moiré and having a higher spatial frequency limit is something to keep in mind from the acquisition side of things. One could blur/soften the image in post, or with cool filtration/lenses. On a side note, though, I read an article on subwavelength Bayer RGB colour routers with "perfect optical efficiency". This simultaneously makes pixels more sensitive and increases the sampling limit of a Bayer CFA. So who knows what's to come...
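Rough numbers for that Bayer sampling point, as a sketch (square grid assumed, limits in cycles/mm; the 2.2-micron pitch is just an example figure):

```python
pitch_mm = 2.2e-3                 # e.g. a 2.2-micron photosite pitch
f_grid = 1 / (2 * pitch_mm)       # full-grid Nyquist limit
f_red_blue = f_grid / 2           # R and B sit on every second photosite
print(f"grid: {f_grid:.0f} cy/mm, red/blue: {f_red_blue:.0f} cy/mm")
```

The factor of two on the chroma samples is the gap an OLPF (or a softer lens) has to paper over.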
  21. Re your LED situation: it's very 'murky', so to speak. From a level of surety, you're dealing with an unknown variable (the spectral content of the fixture), and any app or meter you use will most likely have a different spectral response from your film stock (however, film is apparently slightly more forgiving), so you're really dealing with two unknown variables. I've gelled back bi-colour LEDs that have shifted off the Planckian locus, because the two 'white' phosphors have the spectral content to allow subtractive colour mixing. The idea of cutting more spectrum from these already (most likely spiky) LEDs, keeping in mind most older LEDs for architectural use weren't designed with substantial spectral content in mind, is worrying. On digital, I've had DPs gel back LED torches and I've finished it off in the grade, but everything does (in my opinion) look artificial, which I imagine is due to the lack of gamut being resolved. I understand film is more forgiving. So, tl;dr: if it were me, I'd replace the bulbs/dummy with fixtures. Even the worse of the film-oriented LED fixtures at least attempt greater spectral content.
  22. I should note this isn't advised. Generally you want to deploy two antennas for diversity, close to each other (so they're in the same phase) for MRC (maximum-ratio combining).
  23. Yeah, Amimon is QAM. Initially it was a single chip, almost a bare die, then they built a circuit around it: a 2-way MIMO transmitter and a 5-way MIMO receiver. It isn't bad, but it utilises 40MHz of spectrum, whereas with the initial die you could use a 4-way MIMO on the transmitter and only utilise 20MHz of spectrum. Note that while Amimon's QAM approach was clever, it's becoming somewhat dated... as is Teradek...
With COFDM (coded orthogonal frequency-division multiplexing) there are a few different 'stages'. Cobham in the UK are the big boys that one uses in the OB world. Domo builds a lot of RF transmission for the US, British and a few other militaries but has also delved into broadcast. However, the one I've seen used the most is Wave Central RF, which is more targeted towards filmmaking. The downside of all COFDM systems is that they inherently carry an HEVC/x265 (or similar) level of compression, which introduces delay. The Amimon chip has effectively 0ms of delay, or works in a realm of about two scan lines, otherwise near unquantifiable for our purposes. You can get COFDM transmission with 'zero delay' (18ms, less than a frame; look at the Wave Central CineVue), but you're looking at 70k per transmission package.
The significant benefit of COFDM transmission is less spectrum. You can shove a full-HD transmission down 5MHz of spectrum or even less... keep in mind we can really only operate in licensed Wi-Fi channels. On a recent job I had a total of 23 Amimon-based transmission systems; only 10 could be operational at a time (that's just limited by the amount of spectrum between 5.2 and 5.8GHz), and when competing with Wi-Fi APs only 9 were ever stable. With each COFDM channel being 5MHz (keep in mind COFDM is itself multiplexing across RF), you can fit a crap ton more channels into the same frequency band (arithmetic below). Note: if you have Amimon-based kit with more than 9-10 channels... congratulations, you're operating in 'illegal' spectrum. Which is fine, as it's typically a temporary deployment... but I wouldn't go near a place that cares about RF with such a tool. The other benefit of utilising less spectrum is that you spend less power pushing a broad spectrum over x amount of distance. With that, you inherently get more range from the same system power and antenna dBi, if that makes sense. So with a COFDM system you can go the distance relative to Amimon.
Last but not least, the Amimon system works on a 5-way MIMO for receivers. It likes to be receiving on all 5 inputs and builds a signal from a combination of all 5. So, theoretically, for the best possible input you run coax from the 5 inputs, spread each antenna about 2m apart and set them at different polarities. All COFDM systems for broadcast (as far as I'm aware) work on diversity antennas. What this means is: if I want to set up a 10-15 camera shoot, the minimum number of antennas I need to deploy is... one singular antenna. Realistically you should set up two for diversity, so the receivers get to choose which input has a better PSNR, and they like that (it helps with different polarities etc.). What's also good about COFDM is that you generally block down from 5GHz RF to an IF (a lower frequency between 300-1000MHz), which can run along coax and, more importantly, go into a DA (distribution amplifier).
Anyway, that's a poorly written summary of the contemporary use of COFDM transmission, and it's now starting to take shape in our world. I'm surprised it hasn't caught on like wildfire with everyone starting to do oners...
I heard on the Team Deakins podcast that an issue on 1917 was getting video tap across all those long shots... which surprised me! That's about a two-hour job, haha. If you'd like to know more, Phil, feel free to contact me by email (gabjol@me.com). Note... I don't work for any of these companies; I'm just a keen DIT. Thanks, G
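The raw channel arithmetic behind that, as a sketch (contiguous spectrum assumed; DFS exclusions and guard bands cut the real counts down, which is why only ~10 Amimon units fit legally):

```python
band_mhz = 5800 - 5200        # the 5.2-5.8 GHz span discussed above
print(band_mhz // 40)         # 40 MHz Amimon-style carriers -> 15 raw slots
print(band_mhz // 5)          # 5 MHz COFDM channels -> 120 raw slots
```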
  24. Even more so! A large electrical wholesaler was going to build an HDMI extender from the Amimon chip before anyone even knew about it. Ah, who would've thought our 'industry-revolutionary, Academy-Award-winning technology' was originally intended to boost your CD player to your bedroom. I believe all the Amimon chips (the full-HD versions at least) pull under 15W (nothing compared to Wave Central's 85W COFDM 4K transmitter). However, the Bolt with genlock, I believe, has nearly 4-5 frames of delay. They haven't figured out 4K yet...