
Gabriel Devereux

Basic Member
  • Posts

    106
  • Joined

  • Last visited

Posts posted by Gabriel Devereux

  1. On 8/11/2022 at 7:22 PM, Phil Rhodes said:

    Ah, Nyrius! Mine died a while ago but served well for a while.

    I have a Bolt 4K 1500 here if anyone wants me to run any tests. Can't remember how much power the transmitter pulls.

    Even more so! A large electrical wholesaler was going to build an HDMI extender from the AMIMON chip before anyone even knew about it.

    Ah, who would've thought our 'industry-revolutionary, Academy Award-winning technology' was originally intended to send your CD player's signal to your bedroom.

     

    I believe all the AMIMON chips (the full-HD versions at least) pull under 15 W (nothing compared to WaveCentral's 85 W COFDM 4K transmitter). However, the Bolt with genlock has, I believe, nearly 4-5 frames of delay.

    They haven't figured out 4K yet...

  2. 12 hours ago, Dustin Supencheck said:

    @Gabriel Devereux I'm gunna be honest a lot of that went over my head lol, but I appreciate the knowledge! I obviously have a lot to learn about wireless signals. 

    So you prefer mounting your wireless transmitter in this way on to the gold mount at the back of the camera? 

    Thank you! 

    Exactly right! 
     

    Occasionally with the Bolt and Atom I stick it to the side of the camera with various attachments. But with larger systems especially, like an 85 W COFDM link that's 50% heatsink (trust me, it makes the Bolt look like an ice cube), I mount it on the rear plate.

    It's where I seem to receive the fewest complaints!

    On a side note, a Bolt can 'overheat', though only at incredibly high temperatures, and the first thing that goes is the multi-input HDMI-to-SDI board. It just becomes wildly unstable… The rest of it can withstand fire… keep in mind that once upon a time the insides of a Teradek were meant for wide consumer use!


  3. Nowadays, with contemporary cameras getting smaller and smaller and contemporary lenses seemingly doing the opposite, I typically mount microwave on the back gold/V-lock plate.

    The Boxx Meridian, Wave Central AXIS, Domo Broadcast and Vaxis systems generally just go V-lock to V-lock.
     

    Re antennas and LOS: no difference.

    At the end of the day, Teradek uses 5-way MIMO with the AMIMON chip, so even if the operator's head were made of lead it likes reflections at different polarities.

    I think Teradek makes a higher-gain antenna (than the standard 5 dBi omnidirectional one) at two different polarities, which is hard, as the issue with going above 5 dBi is that antennas are typically no longer omnidirectional. Which, speaking from daring experience, is rather impractical, especially with the janky Teradek workflow.

    • Like 2
  4. What's your final deliverable?

    Does it have to be 12-bit? RAW? What resolution?

    A good option without breaking the bank is the FX9. 

    A 'cheap' Alexa is a heavy Alexa. I know many operators who, even though their sled, horizon stabiliser and all that kit will take the weight of an XT, generally prefer a Mini or Mini LF.

    Re colour matching: I've done a show matching a Gemini, Helium, Epic Dragon, Venice, Mini LF and FX6. Most contemporary cameras have enough latitude to match with ease. I'll note: the nuance of the Venice and Mini LF in terms of highlight roll-off etc. does show when cutting side by side with the FX6 in coverage, but whether that nuance is even visible in the final deliverable depends on the final deliverable.

  5. Hi,

    So! Those settings aren't inherently applicable to what you're after… without opening a can of worms:

    If you're using Resolve to grade, you can't use the viewer as a reference. You need to pull an output via one of the numerous external BMD devices, such as an UltraStudio Monitor 3G, to a broadcast-standard monitor.

    What you're theoretically doing (and what monitor calibration essentially is) is taking an input and transforming it (via a LUT or further) so it resolves correctly on your display.

    Colorfront mapped the input transform and displays for the new line of MacBook Pros, so you can view a 709 or P3 XYZ stream near-accurately on a 2021-22 MBP display. However, I'm not sure an equivalent tool exists for a colour NLE.

  6. Just now, Gabriel Devereux said:

    Forgive me, but I haven't read the entirety of your post; I've just seen you're going down a rabbit hole.

     

    https://gabrieldevereux.com/2022/01/15/misc_log_container/

     

    Above is a fair bit of info I wrote on my website a while ago.

     

    Here's the general gist (of how it works, briefly and in a crude fashion as always).

    Your linear signal has an offset (of 256 values for Log-C) prior to encoding to the container. A camera log container has a linear bias: in the case of 12-bit Log-C it's linear up to 1024, and then compression takes place (the gist is that each stop above 1024 is compressed into 512 codes; see the sketch at the end of this post). No interpolation occurs unless you attempt to reconstruct the 16-bit linear curve after encoding.

     

    Re more steps in the output than digitised by the ADC: that's not necessarily true. Most HDR cameras sample below the noise floor.
     

    I'm not sure where everyone is getting values from… but if you're looking in a bog-standard colour NLE such as Resolve, values aren't necessarily accurate in terms of WFM and RGB picker readouts.

     

    “Photosite values are radiometrically linear representations of the energy they receive, represented as 16-bit unsigned integers, incorporating an offset such that a photosite receiving no energy would have a 16-bit unsigned integer value of 256.”

    Just to take a direct quote from an ARRI RDD
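    To make that scheme concrete, here's a toy sketch in Python (hypothetical function names; an illustration of the encoding described above, not ARRI's published Log-C formula). Note how neatly the numbers fall out: 1024 linear codes plus six stops at 512 codes each fills the 12-bit container.

        import math

        def encode_12bit_log(x):
            # Toy encode of a 16-bit linear value (black offset of 256
            # already applied) into the 12-bit log container described
            # above: linear up to code 1024, then 512 codes per stop.
            if x <= 1024:
                return x
            return round(1024 + 512 * math.log2(x / 1024))

        def decode_12bit_log(code):
            # Inverse transform; interpolation only enters here, if you
            # try to reconstruct the 16-bit linear curve after encoding.
            if code <= 1024:
                return float(code)
            return 1024 * 2 ** ((code - 1024) / 512)

        # 16-bit full scale (65536) lands on 1024 + 6 * 512 = 4096 codes:
        # the six stops above the linear knee fill out the 12-bit range.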

  7. Forgive me, but I haven't read the entirety of your post; I've just seen you're going down a rabbit hole.

     

    https://gabrieldevereux.com/2022/01/15/misc_log_container/

     

    Above is a fair bit of info I wrote on my website a while ago.

     

    Here's the general gist (of how it works, briefly and in a crude fashion as always).

    Your linear signal has an offset (of 256 values for Log-C) prior to encoding to the container. A camera log container has a linear bias: in the case of 12-bit Log-C it's linear up to 1024, and then compression takes place (the gist is that each stop above 1024 is compressed into 512 codes). No interpolation occurs unless you attempt to reconstruct the 16-bit linear curve after encoding.

     

    Re more steps in the output than digitised by the ADC: that's not necessarily true. Most HDR cameras sample below the noise floor.
     

    I'm not sure where everyone is getting values from… but if you're looking in a bog-standard colour NLE such as Resolve, values aren't necessarily accurate in terms of WFM and RGB picker readouts.

     

  8. There is a quote (I believe from Toby Tomkins).

    We live in a world where a computer's scaling algorithm impacts colour more than a colourist's nuance...

    Is the colour shift constant throughout scaling, from 99% down to 1%? If it's constant, it's a software bug; if it shifts sporadically (i.e. not a constant) then it's potentially a signal path / hardware issue that requires software compensation.

    G

     

  9. 20 hours ago, d shea said:

    There are two pathways from the dual-gain sensor. Every photo-site's output is boosted to a target voltage level before it hits the ADC. The voltage level plus the well capacity determine the saturation point of the photo-site. The ADC matches the range of the incoming signal. In dual-gain sensors, a separate pathway with a lower than usual amount of gain is added. The lower amount of gain increases the saturation point (brings back the highlights). The difference in gain determines how many stops can be added to the top of the dynamic range of the dual-gain system. The photo-site itself has a limited dynamic range, but this engineering trick acts like you are combining the readings of two photo-sites at the same location. It also doubles the processing power needed, since you are combining two frames to get one.

    I don't mean to be obtuse but, that isn't my question.
    At what point is a sensor's capacitance reached?

    A photosite 'well' doesn't exist:
    a photodiode has a depletion region. A photosite is the area of the photodiode plus circuitry.

    In terms of managing capacitance with tiny readouts, are we talking about the actual latitude of the analogue 'wire', the photosite's 'drain' (as in the depletion region itself of the photodiode), or the readout time in relation to the shutter?

    I should also add that a photosite doesn't store photons… it's a common misconception. Instead, electrons absorb the quanta of energy from the incoming photons and generate a current.

  10. On 6/2/2022 at 8:12 AM, Phil Rhodes said:

    Yes, although at some point there has to be a path from the photosites to the amplifiers. OK, more modern stacked semiconductor manufacturing can provide more flexibility in exactly how this is done, and I would imagine Arri has paid for every modern convenience in pursuit of exactly this sort of performance, but the single biggest issue in every cinema-grade sensor is managing capacitance as these tiny signals go flying around and that isn't a problem that can be entirely worked-around.

    Now what happens when they make the LF version of this...

    As always your posts are brilliant and enlightening - 

    May I ask, what do you mean in terms of managing capacitance? Is that in terms of storing charge at the photosite itself? The MOSFET power follower? The impedance prior to the drain causing noise? Or the actual read-outs running to the column amplifiers and then to the ADC?

    Thanks

    G

  11. I guess the question becomes: as far as I'm aware, the easiest thing to scale in a 3T/5T APS photosite is the photodiode itself. Metal wiring/channels could theoretically get smaller with a smaller voltage but, in practice, their size is a constant. Which is why, past a threshold, a smaller photosite drastically impacts the diode and therefore the total sensitivity of the camera.
     

    so…?

    who knows 

  12. It helps that a Kino Flo is currently much less expensive than a rival LED of a similar nature in terms of spectral content and, arguably, build quality.

    However, LED tech is still, to a certain degree, in its infancy and soon they'll equalise.

    A more fun answer is...

    I should emphasise most/all LED fixtures are different.

    Even a full white-light LED doesn't have fully even 'RGB' spectral content, in terms of consistent coverage over the visible spectrum in relation to its apparent colour temperature.

    Generally, when trying to achieve a certain RGB colour (in most cases by mixing multiple frequency bands), the spikiness/unevenness of the spectral content is more pronounced, even with RGB... insert more diodes here... WW LEDs. An RGB LED has a discontinuous spectrum; an RGBWW LED, I believe, has all frequency bands overlapping when emitting variations of white (a mixture of warm white and cool white with minor correction to remain on the Planckian locus), however it is discontinuous when achieving certain colours.

    A fluorescent's spectral content, while spiky, is not discontinuous.

  13. 3 hours ago, Craig_Murfin said:

    Am I correct in thinking that when shooting all the different scenes required for a commercial or film I would keep my contrast ratio consistent?

    Considering a lot of DPs don't even use contrast ratios: no.

    How you wish a face/scene to be lit, in terms of key/fill, mood, and chiaroscuro in general, is personal taste.

    Generally, continuity dictates you don't dramatically change the key/fill, or the lighting of a scene in general, without just cause.

    However, like all aspects of lighting, key/fill ratios are just a guideline to help you achieve your vision. The only exception I can think of is when second unit is trying to match first; in some cases the cinematographer has to attempt to match these ratios, in which case it's a more exact science.

    • Like 2
  14. 22 hours ago, Stephen Sanchez said:

    I've guessed that it takes energy to retransmit the photon after impact.

    I'd recommend getting a spot meter. I own an older L788. It gives 3° and 1°, which has been perfect for isolating small elements from the whole. It gives a cd/m² reflectance measurement, which is on the same SI measurement scale as lux. (Hence why I use lux instead of FC.)

    It's interesting; I'm trying to find the exact law I poorly quoted above.

    The calculator currently gives an output in lux. It's written and, from my tests, working (in terms of near-accurate approximations). Shoot me an email when you're free (I've put it above) and I'll send it through; it would be great for you to have a play!

  15. 1 hour ago, Eric Eader said:

    Gabriel,  

    You had a minor "brain fade."  Footcandles are Light Incident to subject.  Spot meters read Footlamberts then translate to f-stops. (Brightness or reflected light measurement).

    The relationship of 'lamberts to 'candles was explained in the 1960's ASC manual, (when B&W was king), but because I had a minus 15 for the eight days I took Algebra,  I never quite understood it.  As I think on it now, it was probably because I didn't have a meter that read footlamberts directly.

    Spectra and Sekonic do have meters that read fl directly.  (758 Cine and above).

    You have an interesting project going.  One that raises more questions.  I see walking an arc (like a sniper's range card) measuring and notating Light Incident -- falloff-- at various distances from your reflector.  

     

     

    Thank you!

    I've been using a photon counter and some fun math, taking into account certain standards, to arrive at photographic values.

    However, foot-lamberts are perfect, if not better, in this scenario!

    With E_v being a constant in this scenario, the variable is R (which, for a reference grey card as shown below, would be a reflectivity of 0.18; I believe muslin should read about 0.735, which is the figure I'm currently playing with, but I'm unsure!)

    L_v = E_v × R, where:

    L_v is the luminance, in foot-lamberts,
    E_v is the illuminance, in foot-candles, and
    R is the reflectivity, expressed as a fractional number (for example, a grey card with 18% reflectivity would have R = 0.18).
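    As a quick worked example (the 100 fc illuminance is an arbitrary figure for illustration; the R values are the ones above):

        def luminance_fl(illuminance_fc, reflectivity):
            # L_v = E_v * R: foot-lamberts from foot-candles and fractional R.
            return illuminance_fc * reflectivity

        print(luminance_fl(100, 0.18))   # grey card: 18.0 fL
        print(luminance_fl(100, 0.735))  # muslin, per the assumption above: 73.5 fL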
     
    I've never used a spot meter with foot-lamberts (or foot-candles). In a sense, according to some law of quantum dynamics (which I've forgotten), a photon is destroyed and created on reflection; you're measuring after that destruction, the newly formed photon count is theoretically less than the original, and the ratio between the two is in a sense the reflectivity. If one doesn't have a spot meter, theoretically you could achieve the same with a light meter (that you wouldn't mind clamping to a controlled point).
     
    Thank you again!
  16. 1 minute ago, Gabriel Devereux said:

    Yes! The angle does. However, I find in most lighting scenarios DoPs try (or potentially should try) to keep the normal of a bounce perpendicular to the subject. If they don't, generally the light is sharper and the projected light is uneven (unless of course this is a desired effect... which does somewhat elude me); with this in mind, the infamous Roger Deakins cove makes a lot more sense!

    Funnily enough, Lambert's cosine law is pretty simple to calculate. In fact, in terms of what we do, a simple rule is: if your subject is more than 60 degrees off your reflector's normal (a hypothetical line projected perpendicularly from the reflector), you lose a stop, and past that point the loss increases steeply.

    In terms of specularity, I believe you're correct! I remember reading a post you wrote a while back. I'm not too au fait with the exact point at which a surface becomes 'near' Lambertian, to which this calculator is <85% accurate, but I've noted most non-specular materials such as Ultrabounce are near enough; materials such as silver stipple are another question.

    The issue as well is that, as soon as you need to start inputting the normal of your incoming source in relation to the normal of your reflector, the inputs become more complex. That's fine for people who know, but the goal of this preliminary one is to be a foolproof calculator (while slightly inaccurate, to within a quarter of a stop, which I believe should be negligible) whose only required inputs are available from a fixture's photometric chart.

    Would be great to talk more! As always, talking makes one think, which of course helps.

    My email is gabjol@me.com

    I will note, one thing I'm playing with at the moment is massive reflectors, like 40-by's.

    As, of course, if your subject is 10' away from a 40' rag (which is a bit nuts), Lambert's cosine law would need to take the outer edges and such into account. Should be easy enough to compute; a sketch is below.
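    A minimal sketch of that edge maths (hypothetical names; assumes the subject sits on the rag's centre line):

        import math

        def stops_lost(theta_deg):
            # Lambert's cosine law: intensity scales with cos(theta), so the
            # loss in stops at theta off the reflector's normal is -log2(cos).
            return -math.log2(math.cos(math.radians(theta_deg)))

        print(stops_lost(60))  # 1.0 -- the 'one stop at 60 degrees' rule above

        # Edge of a 40' rag seen from 10' away on the centre line:
        theta_edge = math.degrees(math.atan2(20, 10))  # ~63.4 degrees
        print(stops_lost(theta_edge))                  # ~1.16 stops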

  17. 5 hours ago, Stephen Sanchez said:

    I can help with this. I'll try to plot out some time soon.

    On the subject of a calculator. I've been looking for a solution as well. I don't understand all the math terms, but I can say from my tests that the angle of the surface relative to camera determines the light falloff. Or cosine.

    Specularity comes from simply how uniformly smooth the surface is. But if that surface is then wrinkled, like on Styrofoam insulation, we consider it "specular" but it is just a wrinkled smooth surface. Provided the texture is uniform, then it will meter like a hard light.

    We should talk more on the subject indeed.

    Yes! The angle does. However, I find in most lighting scenarios DoPs try (or potentially should try) to keep the normal of a bounce perpendicular to the subject. If they don't, generally the light is sharper and the projected light is uneven (unless of course this is a desired effect... which does somewhat elude me); with this in mind, the infamous Roger Deakins cove makes a lot more sense!

    Funnily enough, Lambert's cosine law is pretty simple to calculate. In fact, in terms of what we do, a simple rule is: if your subject is more than 60 degrees off your reflector's normal (a hypothetical line projected perpendicularly from the reflector), you lose a stop, and past that point the loss increases steeply.

    In terms of specularity, I believe you're correct! I remember reading a post you wrote a while back. I'm not too au fait with the exact point at which a surface becomes 'near' Lambertian, to which this calculator is <85% accurate, but I've noted most non-specular materials such as Ultrabounce are near enough; materials such as silver stipple are another question.

    The issue as well is that, as soon as you need to start inputting the normal of your incoming source in relation to the normal of your reflector, the inputs become more complex. That's fine for people who know, but the goal of this preliminary one is to be a foolproof calculator (while slightly inaccurate, to within a quarter of a stop, which I believe should be negligible) whose only required inputs are available from a fixture's photometric chart.

    Would be great to talk more! As always, talking makes one think, which of course helps.

    My email is gabjol@me.com

  18. Hi,

    I'm currently finishing off a soft-light large reflector fall-off calculator.

    The calculator, in premise, works by modelling the reflecting soft surface as being composed of many point sources, say m × n of them, whose total light output is the same as the total luminous flux of the original light sources (minus losses). If the total luminous flux of the sheet is L, then the luminous flux of a single point source is L / (m × n).

    Of course, the above is heavily simplified; the goal was to take into account the majority of the variables one can compute while not making user input absurd. The only factor not taken into account is Lambert's cosine law, in that the point sources are assumed to emit light isotropically into a solid angle of 2π steradians (only in front of the surface). The calculator computes light output along the normal of the reflector; if the calculated point is off the normal, one would need to take the appropriate light loss into account. A sketch of the model follows.
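    A minimal sketch of that model under exactly those simplifications (hypothetical names and grid resolution; each point emits isotropically into 2π sr, no cosine term, and the result is evaluated on the central normal):

        import math

        def illuminance_on_normal(flux_lm, width_ft, height_ft, dist_ft,
                                  m=64, n=64, lrv=1.0):
            # Model the rag as an m x n grid of point sources, each emitting
            # flux_lm * lrv / (m * n) lumens into the front hemisphere
            # (2*pi steradians), and sum the inverse-square contributions
            # at a point dist_ft along the central normal.
            intensity = flux_lm * lrv / (m * n) / (2 * math.pi)  # candela
            total = 0.0
            for i in range(m):
                for j in range(n):
                    x = (i + 0.5) * width_ft / m - width_ft / 2
                    y = (j + 0.5) * height_ft / n - height_ft / 2
                    r2 = x * x + y * y + dist_ft * dist_ft
                    total += intensity / r2  # lm/ft^2, i.e. foot-candles
            return total

        # e.g. 100,000 lm bounced off a 12' x 12' muslin (assumed LRV 0.735),
        # read 10' off the rag along the normal:
        print(illuminance_on_normal(100_000, 12, 12, 10, lrv=0.735))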

    In terms of reflector material, that's where I'm stuck, as I don't have a large inventory of industry-standard rags (doing this while on holiday in a very remote region). A simplified way of calculating light loss for each reflector material (Ultrabounce, muslin, grey card) is to just compute from its LRV (light reflectance value; a standard grey card, for instance, is 18%). I'm currently calculating for muslin and assuming an LRV of 73.5%, as that's the reflectance value of cotton fibre. However, if anyone has industry-standard muslin rags (with fireproofing etc.), Ultrabounce and so on at their disposal and would be willing to do a few tests for me, that'd be incredibly useful!

    There are 'proper' ways of doing it but, from my tests with non-standard material, if one has a spot meter (preferably one that gives an fc output), lights and a grey card, it can easily be found.

    My set-up works by setting up a grey card and lighting it so that the projected light is, within reason, even. I then try to have the card read 50 fc: approx. T4, 400 ASA, 180 degrees. I then set up the tested rag directly adjacent to or directly behind the grey card and take another reading. Typically (when the grey card is set up in front of the material) I'll take a reading, remove the card and then take a reading in the exact same place on the material.
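    That set-up reduces to a one-line ratio (a sketch, assuming both readings are taken under the same, even illuminance):

        def estimate_lrv(card_reading, material_reading, card_lrv=0.18):
            # Under equal illuminance, luminance scales with reflectance,
            # so the rag's LRV is the card's 18% scaled by the ratio of
            # the two spot-meter readings (any consistent unit works).
            return card_lrv * material_reading / card_reading

        print(estimate_lrv(9.0, 36.75))  # ~0.735, the muslin figure above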

    https://en.wikipedia.org/wiki/Light_reflectance_value

    If anyone has any spare time and fancies contributing some information to this project it would be fantastic and greatly appreciated!

    G

    • Like 1
  19. In terms of designing a permanent set, with the goal of it lasting years to come and being capable of adapting to multiple camera packages, store-bought commercial LEDs are probably not going to be your friend.

    Flicker, the more quantifiable issue, is more easily overcome: just find an LED fixture with a fast, consistent frequency that aligns with your frame rates. Colour shifts and spectral content, however, are a variable dependent on another variable: your camera package.

    From my understanding, only quite expensive fixtures designed for superior spectral content (in terms of RGB LEDs), or again white-light fixtures designed with spectral content in mind, would be suitable for this kind of set-up.

    I know Aputure made a bunch of now-discontinued 3200K 'white light' LEDs. I've never used or tested them but that may be an interesting avenue to explore! I imagine they wouldn't be too expensive now. (The only reason I recommend them is that logic would dictate they've taken a higher-frequency-band LED and used a strong fluorescent phosphor to emit a wide, even frequency band, as a single-phosphor LED typically has more spectral content.)

  20. Hi All,

    For an upcoming shoot I'm utilising multiple FX3s and need to record ProRes RAW for a final deliverable of ProRes 4444 (transcoded).

    I need to record from multiple FX3s in a number of hour-long increments and was hoping to record externally to a Shogun or similar with 8 TB SATA SSDs.

    I'm thinking of taking the UHD 16-bit linear RAW out (from HDMI, which is odd) > Cat3 HDMI > BMD 12G HDMI-to-SDI adapter > 12G BNC to the Shogun. I'm going off the fact that you can take a similar output from the 12G out on the FX6.

    Has anyone ever tried this?

    Also, has anyone tried the Samsung/Sony 8 TB SATA SSDs with 500 MB/s+ write speeds on the Atomos external recorders?

    Thanks

    Gabriel

  21. Potentially metamerism: the lack of spectral content from a few modern LED fixtures leads to a smaller gamut of colours being resolved (less energy/information = fewer colours).

    Note, this isn't just LED fixtures but most spiky spectra; LEDs are just potentially spiky and discontinuous, which emphasises the effect. The camera package, and more importantly how the spectral curves of the camera line up with the spectral distribution of the fixture, also has an impact: less information is emitted, and the camera may not even capture the information that is.

    • Like 1
  22. I've never heard this term before. By that I assume you mean an element of jitter/discrepancy in sampling time between frames, with the ideology that film doesn't sample each frame at a perfect 1/50th of a second (at 25 frames, 180 degrees) down to a nanosecond interval, so digital cameras should try to represent this? Or that different cameras at x frame rate with a 180-degree shutter have different sampling times?

    In terms of discrepancy between manufacturers, most sensors are designed a little differently in terms of physical circuitry between model and manufacturer. Different sensors will have different paths for read-out, and most 'HDR' cameras that boast higher latitude typically have some level of multiple or decision amplification.

    I read a Sony white paper on gain-adaptive column amplifiers that store linear analogue signals in temporary memory for a faster digital 'shutter' that exceeds the clock speed of the comparator sending signals to different column amplifiers. The point being, different signal paths may alter the read-out time of the physical photosite and the overall sensor, which would marginally impact sampling time.

    Generally, a digital shutter is the combination of power-follower and reset transistors (output and reset) in a 3T APS CMOS photosite. Some cameras do have additional circuitry per photosite; however, as each element you add reduces the total space for the photodiode(s), it's from my understanding reasonably similar from sensor to sensor. I would imagine any differentiation in sampling time would be due to different analogue circuitry prior to and during sampling... but I don't think this would really impact motion unless an incredibly inferior scheme is used.

  23. I'm confused...

    Are you wishing to offload media from the card and transcode it?

    In terms of the encoder, that's typically down to what you want to achieve. With software like Pomfort Silverstack you can 'dump' a card through a somewhat rigorous checksum verification (depending on your choice of algorithm) and from there transcode to multiple different types of deliverables.

    However, I imagine $600/year software probably isn't high on the priority list (note you can get project licences). ShotPut Pro or Yoyotta are similar in terms of checksum verification, just with fewer bells and whistles. You dump a card through the software to a folder structure you lay out, and then you can transcode through Resolve etc. Typically for high levels of compression I transcode through Resolve from OCN to an intermediary codec (usually ProRes), which allows me to adjust/grade, sync audio and so on, then go into HandBrake, as I find its H.264 encoder is superior and has more latitude in controls compared to the one in Resolve.
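    For what it's worth, the core of that verified dump is small enough to sketch (MD5 from Python's standard library stands in here for the xxHash that on-set tools commonly default to):

        import hashlib
        import shutil
        from pathlib import Path

        def offload_verified(src: Path, dst_dir: Path) -> str:
            # Hash the source, copy it, re-hash the destination, and only
            # call the offload good if the two digests match.
            def digest(path: Path) -> str:
                h = hashlib.md5()
                with open(path, "rb") as f:
                    for chunk in iter(lambda: f.read(1 << 20), b""):
                        h.update(chunk)
                return h.hexdigest()

            src_digest = digest(src)
            dst = dst_dir / src.name
            shutil.copy2(src, dst)
            if digest(dst) != src_digest:
                raise IOError(f"checksum mismatch on {src.name}")
            return src_digest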

    However, personally I utilise the whole Pomfort suite (which is pretty bloody brilliant, I'll add; the only issue I've run up against is multi-node library sharing, in terms of having multiple offload machines). It allows an easy transfer of 'looks' from Livegrade to Silverstack, from which I can then transcode directly in Silverstack's library UI while offloading.

    Makes life easy.

  24. 56 minutes ago, Gabriel Devereux said:

    it's similar to ISO in relation to it being a perceived standard in terms of SNR

    I phrased this incorrectly and poorly. It's an assumed standard, a constant, an FPN. Which it somewhat is, but the variables that 'eat up' the bottom of the signal are indeed variables, and space-dependent to a certain degree. I think this just highlights that the appropriate digital gain is entirely perceptual and, as long as it isn't destructive, a native ISO is somewhat obsolete in terms of performance.
