
Gabriel Devereux

Everything posted by Gabriel Devereux

  1. I'm confused... are you wishing to offload media from the card and transcode it? The choice of encoder typically comes down to what you want to achieve. With software like Pomfort Silverstack you can 'dump' a card through a fairly rigorous checksum verification (depending on your choice of algorithm) and from there transcode to multiple different types of deliverables. However, I imagine a $600/year software probably isn't high on the priority list (note you can get project licenses). ShotPut Pro and YoYotta are similar in terms of checksum verification, just with fewer bells and whistles: you dump a card through the software into a folder structure you lay out, then transcode through Resolve etc. Typically, for high levels of compression, I transcode through Resolve from OCN to an intermediary codec (usually ProRes), which lets me adjust/grade, sync audio and so on, then into HandBrake, as I find its H.264 encoder is superior and has more latitude in its controls compared to the one in Resolve. Personally though, I use the whole Pomfort suite (which is pretty bloody brilliant, I'll add; the only issue I've run up against is multi-node library sharing when using multiple offload machines). It allows an easy transfer of 'looks' from LiveGrade to Silverstack, and I can then transcode directly from Silverstack's library UI while offloading. Makes life easy. G
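The offload-and-verify step described above can be sketched in a few lines. This is a minimal illustration, not how Silverstack or ShotPut Pro actually work internally: hash the source file, copy it, hash the copy, and only call the offload good if the digests match. xxHash is common on set for speed; stdlib MD5 is used here to keep the sketch dependency-free.

```python
import hashlib
import shutil
from pathlib import Path

def file_digest(path: Path, algo: str = "md5", chunk: int = 1 << 20) -> str:
    """Hash a file in chunks so large camera originals never load fully into RAM."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def verified_offload(src: Path, dst: Path) -> bool:
    """Copy src to dst, then confirm the copy by comparing checksums."""
    src_sum = file_digest(src)
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dst)  # copy2 preserves timestamps/metadata
    return file_digest(dst) == src_sum
```

In practice you'd verify every clip on the card and keep a report of the digests alongside the footage, which is essentially what the offload tools automate.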
  2. I phrased this incorrectly and poorly. It's treated as an assumed standard, a constant, like an FPN. Which it somewhat is, but the variables that 'eat up' the bottom of the signal are indeed variables, and space-dependent to a certain degree. I think this just highlights that the appropriate digital gain is entirely perceptual, and as long as it isn't destructive, a 'native' ISO is somewhat obsolete in terms of performance.
  3. How meaningful is a native ISO... or the idea that every camera has one? Theoretically, isn't ISO just a standardised speed rating for a camera? The idea of one setting being native and giving 'peak' performance is, as David says, perceptual. I remember it being explained to me that 'native ISO' means the image/signal has only undergone analogue amplification. This is of course wrong, as almost all useful images that have undergone demosaicing have had a level of digital gain introduced (an example being 1.64x for the red channel at 5600K on Alexa (ALEV) sensors). As for the idea that the analogue gain applied is the mean, 'native' signal: even non-DGA Sony sensors, from my understanding, typically employ a comparator system at the end of each column (typically where substantial analogue amplification takes place prior to the ADC) that decides whether a signal is under a certain value. If, hypothetically, the signal sits at 4 and the ADC, line or whatever saturates at 10, it will amplify by an extra 6 dB or so (2x) to mitigate quantisation noise. With the amount of image processing in today's cameras, I think the idea of a native ISO is nice, but it's not really a meaningful specification, outside of one high and one low analogue gain in the dual 'native' case. The only aid I see in it is understanding that at a certain point you have a relatively even distribution of signal above and below middle grey. However, even then, as said above, the noise floor of any imaging sensor is measurable but different; it's similar to ISO in being a perceived standard in terms of SNR, but it is the outcome of numerous variables differing from design to design and even location to location, with thermal noise, dark current noise, reset noise... all of which eat away at the bottom of your signal, so you may not want to distribute it evenly. Sorry for the ramble.
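The column-comparator trick described above can be illustrated numerically. This is a toy model with hypothetical numbers, not any real sensor's architecture: with a coarse ADC step, amplifying a small signal by 2x (6 dB) before quantisation and dividing back afterwards halves the effective quantisation step for that signal.

```python
def adc(signal: float, step: float = 1.0) -> float:
    """Idealised ADC: quantise the input to the nearest step."""
    return round(signal / step) * step

def read_pixel(signal: float, threshold: float = 5.0, step: float = 1.0) -> float:
    """Toy column readout: signals under the comparator threshold get
    6 dB (2x) of analogue gain before the ADC, then are scaled back
    in the digital domain, halving their quantisation error."""
    if signal < threshold:
        return adc(signal * 2.0, step) / 2.0
    return adc(signal, step)
```

For a signal of 3.3 with a unit ADC step, a plain conversion lands on 3.0, while the pre-amplified path lands on 3.5, a smaller error for the shadow signal without touching anything above the threshold.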
  4. Likewise re the G2 and G1. I once went through three UMPs in three months. Sadly, bought in search of a 'good deal' two years ago, all of them out of warranty (by about a week). The third one, after a $3,000 AUD PCB replacement, still works today. However, I nurture it, and on bigger shoots I rent a more reliable camera.
  5. Correct me if I'm wrong: the Bayer CFA exists with twice the number of green photosites because our photopic vision is most sensitive to green (the band around 550nm). With this, let's look at sensitivity and data collection from a Bayer CFA in an incredibly simplistic sense, and at colour science and colour rendition in a similarly simplistic fashion. My eye perceives incoming light with a certain value in each RGB frequency band, an amount of energy/photons in each. Say this theoretical ray of light has a value of 20 in the red band, 20 in the blue and 20 in the green (emphasising this is an extreme hypothetical). For this example, attributes such as the density and efficiency of each filter in the CFA pattern will be ignored; each filter is even across its respective frequency band and as efficient as possible (the same for the QE of the photosites below). With this, the data collected from one 2x2 block of the imager would be R20, G20, B20, G20, and after demosaicing (I'm tempted to attempt a calculation using a relatively basic bicubic interpolation algorithm, but I think it'd be unnecessary with such simplistic numbers) the general gist is that the combined green samples carry twice the signal of the others; the hypothetical pixel created would be around R20 G40 B20. The hope is that my eye, my photopic response, would arrive at something similar given its increased sensitivity in the same frequency band. If I took the same ray of light, the same photons, and somehow got an RGB readout from my eye like a camera's, the values would be similar. Cameras are weird. The goal, from my understanding, is not to record a 'true' RGB value from incoming light but to match, or come near to, what our photopic vision would make of the same information.
With this, I do wonder what kind of nifty amplification has been applied to the green channel on the UMP 12K or other RGBW CFA cameras in an attempt to match this. I do believe that if they were to capture all RGB values at 'full resolution', sampling all three at the same sensitivity, the image produced would look a lil weird.
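The 2x2 block arithmetic above can be written out as a toy calculation. It is purely illustrative and, like the post, ignores real demosaicing algorithms, filter densities and QE: summing the raw samples per channel across one RGGB quad shows the green channel collecting twice the signal of red or blue.

```python
def quad_totals(quad: dict) -> dict:
    """Sum the raw samples per colour channel across one Bayer quad."""
    return {channel: sum(samples) for channel, samples in quad.items()}

# One 2x2 RGGB block under the hypothetical flat light: R20, G20, B20, G20.
rggb = {"R": [20.0], "G": [20.0, 20.0], "B": [20.0]}
```

Real demosaicing interpolates neighbouring samples rather than summing them, but the doubled green sampling is the reason the green channel ends up with the best-sampled, lowest-noise signal on a Bayer sensor.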
  6. To expand and question, now that we've had time to somewhat dissect the image and its pros and cons: I find the idea (potentially the wrong term) of a sensor out-resolving a lens flawed. As you say above, the photosite pitch, the total pixel count crammed onto the sensor, exceeds the resolving power of near, if not all, optical systems, especially in realistic situations, even with a hypothetical perfect lens. The main issues argued above are low-capacity photosites, easy saturation, total camera latitude, etc. What you say about noise makes logical sense. I can understand, to a certain extent, the singular white/clear photosite in each 'matrix' acting as a higher-gain channel to interpolate shadow information and improve dynamic range, but the compromises are endless (yes, all imaging and optical systems are a series of compromises). However, with the rather sudden and rigid highlight roll-off that is especially apparent with RGB clipping, the lack of workflow infrastructure around the total resolution, and the fact that downsampling is a must to even compete on image fidelity, they haven't really made a 12K camera, and the selling point seems redundant.
To return to my initial statement about a sensor out-resolving a lens: maybe I'm being a fool, but from my rather limited understanding of imaging sensor engineering, the ARRI, with its 8.25µm photosite pitch, is in near-perfect harmony with the majority of its optical systems, a perfect example being the Master Prime. I believe a lens achieving MTF(74) at 70 lp/mm resolves an Airy disk around the same size as that photosite. It's relatively balanced, so why compromise further on photosite size for pixel count?
It almost seems as if they're attempting to reproduce the qualities of multiple channels (with different gains) from each photosite, but spread out in two dimensions, which compromises, as you say above, the total area to sample from. Realistically that results in a lower-fidelity image (in terms of SNR and other aberrations), even downsampled to 8K at a 1:1 pixel ratio with its competitors.
  7. Just got off a shoot with 13 or so FX6s, an FX9 and a few A7s. I needed all cameras to output a logarithmic signal for me to monitor while the rest of video village needed to see a standard Rec.709 signal (about 5 or so monitors, 2 on switchers). I went with the switchers running through inline LUT boxes (Teradek COLRs), and the rest of the monitors were looped through BM 12G HDMI-to-SDI converters, a few on a Teranex and occasionally an UltraStudio rack. In your scenario you can load custom LUTs onto the 12G converters, and I believe the 6G and maybe the 4G converters as well (unsure); they have a small form factor and are relatively cheap. Teradek COLRs are good when partnered with a live-grading solution, but in your case I don't really believe that's necessary. I did end up using an older BM 3G I/O box; it could take multiple channels and worked with Pomfort, so in my situation it was perfect. But yeah, for something small, the newer 6G-12G converters are worth looking into. Apologies for the wording, written on my phone.
  8. As long as you're willing to pack accordingly. A few times around Australia I flew with a shot bag for my monitor stand. A friend of mine would travel with one as well while we waited for either freight or the truck arriving with the rest of our gear. I always kept it in one of the Pelicans, though. I'm not sure travelling with them loose would be the greatest idea. It'd be an interesting check-in experience.
  9. Hi, I'm currently in need of an IR-converted mirrorless camera (preferably Sony) to rent this coming weekend in Australia. I've seen them all around the US, some even CFA-stripped (which would be preferable). It would preferably be sensitive from anywhere between 700/750nm and 810-850nm for deep IR recording. The goal is to film 'invisible light' with an IR-emitting source that is far more visible to our cameras than to the eye. Thanks Gabriel
  10. As said above, everything LED is missing spectral content. Now, this is OK as long as you can control the camera systems being used under said limited frequency bands of light. I'm going to assume (purely because doing otherwise would be almost absurd) that you can't tell the studio to only shoot on one particular camera system. With this, unless you go down the route of, say, Kino Freestyle panels (which attempt to match the spectral response of numerous camera systems), you may want to give up the versatility of RGB-based LEDs (RGBWW, RGBW etc) for plain white LEDs (either CW or WW) for the sake of somewhat accurate colour. As said above, Aputure and Nanlite are good for simple single white LED fixtures (WW in this instance, if you're going to use large tungsten fixtures for sunlight), so they may be your best option. White-light LEDs, while lacking in versatility, have the most uniform spectral power distribution, emitting a single wider frequency band that will harmonise with more camera systems and allow a larger gamut of colours to be resolved, meaning fewer inorganic skin tones and serious colour shifts. At the end of the day you may want to look again at tungsten space lights. Permanently installing LED fixtures may be a little unwise considering the extreme amount of misinformation and the technology's infancy. I remember listening to an interview with John Higgins 'Biggles' on his work on 1917, where he talked about purchasing LED fixtures and how doing so at the moment is unwise as a long-term investment.
  11. To jump in, and disagree: on film, the image is not 'created' on set any more than digital. I've yet to see chemists on set altering the chemical compounds of the emulsion in an attempt to boost the magenta layer while you're trying to get a shot. If anything, the look of film is somewhat predetermined by the stock you choose; disregarding the post process the emulsion and prints will go through, it's similar to a digital pipeline, only arguably with less control. I don't see why you view a film being graded in a certain way as sad. Is choosing a certain stock for a certain look sad? Is it because the filmmaker has more latitude and can make more refined decisions? Because they can potentially fix an error? The latter two, I believe, are the main reasons the DI gets knocked, but personally I don't mind it. The number of times people throw around the phrase 'the DP's job is to *insert remark here*' I find interesting. Yet isn't our primary job, above all else, to deliver an image? The images being delivered nowadays I would partly consider more exhilarating and exciting than in previous decades, disregarding the actual creation of the image, which, as said above, is becoming easier, with more latitude and tools. With that, the bar of quality is getting higher while the required skill is getting lower. It's almost as if cinematography is becoming easy to learn, hard to master. Which is different, but I wouldn't knock it.
  12. As said above, your options are endless; there is no single way to replicate sunlight. To break it down, typically you'd have a single powerful harsh source (acting as the sun) and a surrounding softer source acting as the sky (at least that's typically how I'd look at it). So, with your goal being an outdoor scene at noon (keeping in mind I don't know if this is an exterior set, an interior set, or actually outdoors... if so, you may just want to shoot at noon), I'd imagine you'd want one harsh, powerful overhead source, as harsh and as high as possible with a wide beam spread, maybe a 20K skypan (keeping in mind that rigging and operating such a fixture requires a skilled crew)... or something of that nature. Then a surrounding softer source, a hypothetical array of space lights, potentially with very mild diffusion below. This, like everything said above, is up to personal taste.
  13. Contrast is a difference in illuminance between two points. For example, to take it out of light and/or radiology: the contrast between 'terrible' and 'wonderful'. There is no path between those two words; they simply sit at opposite ends of meaning, such as light and dark. By definition: the state of being strikingly different from something else in juxtaposition or close association. Fall-off, in theory, is about plane-wave propagation in free space. The 'modifiers' and different set-ups David is using as examples do not alter the fact that light is an electromagnetic wave; you cannot modify light into something else. If you diffuse light (scatter it) or refract it, the source you're most likely using for your example, say a Blonde or a Tweenie, is doing said 'modifications' on a much smaller scale, perhaps minute to the eye. There are a dozen great practical examples in this thread. However, light at an elemental level (which it isn't) is just electromagnetic radiation that, to put it crudely, does dilute over space, and that has no real relationship with a relatively artistic term.
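The 'dilution over space' for an effectively point-like source is just the inverse square law, and exposure differences fall out of it directly. A quick sketch with illustrative numbers:

```python
import math

def illuminance(intensity: float, distance: float) -> float:
    """Inverse square law for a point source: E = I / d^2."""
    return intensity / distance ** 2

def stops_between(near: float, far: float) -> float:
    """Exposure difference, in stops, between two distances from the source."""
    return math.log2(illuminance(1.0, near) / illuminance(1.0, far))
```

Doubling the distance from a point source always costs exactly 2 stops; soft, large sources (frames, bounces) fall off more slowly up close, which is where the practical examples in the thread come in.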
  14. Yes and no. If you want to obtain a 4K image, so you shoot at 8K (in S35, so note photosites smaller than 3-4µm) to hypothetically crop into and deliver, as said above, a 4K image: while your total pixel count may be 3840x2160, it won't have the same fidelity on an optical level, and it most likely never will. With resolution and cameras it's always hard to say '8K will never have the fidelity of 4K', as camera technology (especially digital) is constantly advancing. A higher-resolution camera will most likely always suffer from low-capacity photosites (if kept to S35), though technological advancements may eventually negate or ease that issue. However, one technology that isn't in its infancy, and whose properties we can calculate from, is optics: take a hypothetical 'perfect lens', with attributes such as elements as transparent as air (somewhat unrealistic), limited only by diffraction. It still has limitations in resolving power that modern cameras (such as the 2.2µm photosites of the UMP 12K) are already exceeding. Below is a bit taken from another post on the forum (Joshua Cadmium). This is calculated from a hypothetical perfect lens at MTF(50); it is important to note that it's only at MTF(74) that an Airy disk is similar in size to the photosite of the camera. "For the Blackmagic 12K sensor at green 550nm, that puts MTF(50) at f3.2 and for the Alexa sensors that puts it at f12.1. At red 700nm light, the Blackmagic 12K would hit MTF(50) at f2.5 Alexa would at f9.5 At violet 400nm light, the Blackmagic 12K would hit MTF(50) at f4.4 and the Alexa would at f16.7"
To quickly dive into MTF: to put it crudely, you're looking at the way light is channelled through an optical system, specifically how it renders an Airy disk (the point of light formed by light passing through the diaphragm (iris) of the optical system). You measure it in line pairs per millimetre relative to what the lens can resolve. Here is a quick insert I wrote on another post: to make the point, we need to calculate the line pairs per millimetre for a sensor. That is done by taking 1mm (1000µm) / pixel pitch (2.2µm) / 2 (line pair). The UMP 12K comes to 227.27 lp/mm; for comparison, the Alexa is 60.61 lp/mm. Now, to briefly look at a modern lens we all know and love, the Master Prime, one of the 'sharpest' lenses one can use on an S35 system: "If I remember correctly a master prime just about gets MTF (74) at 70lp/mm". Sadly I don't remember exactly which focal length that was, but the point stands. Increasing the pixel count on an S35 system to 'crop in' is flawed: one will most likely never achieve the same fidelity, unless you use a larger-format camera (increasing total pixel count without compromising photosite size).
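The numbers quoted above can be checked. Assuming an aberration-free, diffraction-limited lens, the MTF at spatial frequency nu is (2/pi)(arccos(x) - x*sqrt(1-x^2)) with x = nu/nu_c and cutoff nu_c = 1/(lambda*N). Solving MTF = 0.5 at the sensor's Nyquist frequency (the lp/mm figure computed in the post) gives the f-number at which the 'perfect lens' drops to MTF(50):

```python
import math

def lp_per_mm(pitch_um: float) -> float:
    """Sensor Nyquist limit in line pairs per mm: 1000 / pitch / 2."""
    return 1000.0 / pitch_um / 2.0

def diffraction_mtf(x: float) -> float:
    """Diffraction-limited MTF as a function of x = nu / nu_cutoff."""
    if x >= 1.0:
        return 0.0
    return (2.0 / math.pi) * (math.acos(x) - x * math.sqrt(1.0 - x * x))

def f_number_at_mtf50(pitch_um: float, wavelength_nm: float) -> float:
    """f-number at which a perfect lens hits MTF(50) at the sensor's Nyquist."""
    # Bisect for the x where the diffraction MTF equals 0.5 (x is about 0.404).
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if diffraction_mtf(mid) > 0.5:
            lo = mid
        else:
            hi = mid
    x50 = (lo + hi) / 2.0
    nyquist = lp_per_mm(pitch_um)          # cycles per mm
    wavelength_mm = wavelength_nm * 1e-6
    return x50 / (wavelength_mm * nyquist)  # from nu_c = 1 / (lambda * N)
```

With the 2.2µm (UMP 12K) and 8.25µm (Alexa) pitches, this reproduces the quoted figures: f3.2 / f12.1 at 550nm, f2.5 / f9.5 at 700nm and f4.4 / f16.7 at 400nm.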
  15. Yes, and no... mostly no. Here is a crude (and somewhat inaccurate) example: https://na.panasonic.com/us/audio-video-solutions/broadcast-cinema-pro-video/dual-native-iso-camera-technology-cinematic-low-light-video-production Different sensors have different analogue outputs, different 'native' analogue amplifications: some 200, 400, 800 and so on, and in this case, for extreme circumstances, the ability to switch to a higher analogue output. It's similar to ARRI's DGA (dual gain architecture), where each photosite has two 'native' analogue outputs, one high and one low. In this instance, however, rather than combining the two, they feed two separate outputs; hence the name 'Dual Native ISO', because each photosite has a dual 'native' analogue output. You are right, there is most likely analogue noise mitigation (I believe there is in most contemporary Sony cameras, and most cameras nowadays), and I will add that Panasonic's graph, in implying that noise only begins at digital amplification... is wrong. However, it does theoretically allow you to capture a certain amount of latitude above and below middle grey (similar to shooting at a regular camera's native, most commonly 800 ISO) at a much higher sensitivity, without any digital amplification. I think.
  16. The image fidelity is quite poor. This does bring up an interesting question about shooting wide to crop in, etc. I remember older RED advertisements saying you can 'Shoot in a wide! Crop in, multiple shots in one!', blah, blah. However, looking at the RED image fidelity and at the resolving power of contemporary optics, or even perfect optics, I doubt we'd ever be able to seriously crop into an image without impacting fidelity, unless we all started to shoot on incredibly large formats... oh wait... Already with the UMP 12K, the 2.2µm photosites are far too small for any contemporary lens, or even an 'impossible' perfect lens, to resolve (MTF(70+)) at the f-stop most likely required when framing for multiple shots. I do wonder whether, to make shooting multiple shots on one camera possible, broadcast companies will start moving towards larger formats as well.
  17. I've had relative luck with, and have faith in, the Kino Freestyle line. However, you're asking a lot from any fixture. Most, if not all, LEDs can't resolve every LEE/Rosco/whatever filter colour for every camera. Even if you accurately input the information, due to the discontinuous spectrum of all RGB LED fixtures, the frequency bands emitted may be perceived as accurate by our photopic vision, or by one camera system, yet not by another (due to different camera spectral response curves). Even if you measured with a colorimeter or such to match the two, you're matching both to one constant when in reality the 'detector', to put it crudely, is also a variable.
  18. I'm not sure a true formula or calculation would ever arise and be user-friendly/simple. I keep getting stumped (admittedly no mathematician or physicist, in fact far from it). Even once you understand how to calculate light fall-off from a large source, assuming the diffuser's or bounce's intensity is uniform (which it often isn't, given the nature of our lighting instruments), all that allows is calculating the fall-off of light from that source (bounce, diffuser). Attempting to calculate the fall-off of the bounce source with only luminance information from the fixture lighting it adds variables upon variables, and sadly not ones that can easily be dismissed for an approximation, of which there are many as well.
  19. Misinformation is a bit harsh. Disregarding minor discrepancies, of which there are MANY, the following is a tested approximation. I found that with a 6x6' frame of Ultrabounce 6ft away from the subject (distance equal to source size), and the frame evenly lit by a source (again another variable, though I found it gave only minor discrepancies) placed 3-4' in front of the frame (between the subject and the frame), the light loss is around 2 1/3 to 2 2/3 stops relative to the fixture at the combined distance from source to frame and frame to subject. Now, as you can see, even here the light loss isn't exactly the 'guesstimate' of 2.5 stops above; HOWEVER, I find it to be a good guesstimate. I will openly disclaim that there are a sheer number of variables; in my original post above I disclaimed as much and also suggested a course of action to potentially get a more accurate answer. However, having tried going down that path myself, it's a long one. The above helps me as a guesstimate, an estimate based on a mixture of guesswork and calculation.
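For anyone wanting to sanity-check readings like these against the 2.5-stop rule of thumb: stops are just a log2 ratio of illuminance (each stop is a doubling or halving). A small helper, with the post's figures as illustrative inputs:

```python
import math

def stops_lost(reference: float, measured: float) -> float:
    """Light loss in stops between a reference and a measured reading
    (any consistent unit: lux, footcandles...)."""
    return math.log2(reference / measured)

def ratio_for_stops(stops: float) -> float:
    """Illuminance ratio corresponding to a given stop loss."""
    return 2.0 ** stops
```

2 1/3 stops is a factor of about 5.0x and 2 2/3 stops about 6.3x, so the 2.5-stop guesstimate (about 5.7x) sits right in the middle of the measured range.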
  20. Emphasis on the word 'guesstimate'. Of course you're correct, but I find it helps better than nothing.
  21. As said above, there are too many variables to make an accurate calculation. However, I did once go down the path of trying to find a way to calculate light fall-off from a soft source. I asked several cinematographers and none knew an exact answer (some recommended, with Ultrabounce, just cutting 2 1/2 stops from your original output; this isn't accurate when you're measuring close to the bounce, but at a distance it is somewhat of a good guesstimate). I went ahead and asked on a physics forum and got this answer, which was about shooting light through diffusion: "If your diffuser is good enough that the light from any portion of it close to uniform from any direction, and the light from all portions is equal in intensity, then the total illumination on your point is proportional to the solid angle from the point that intercepts the light. At large distances (small angles) this will become equal to inverse square. At shorter distances, the falloff is slower. Very close to the source, the intensity becomes nearly uniform with distance. The actual solid angle formula is not one that I tend to use much, but there are some formulas and references on Example Formula's" I got this response at a busy time, sadly (the original question related to an upcoming job), so I never truly followed it up, but I plan to. I do think a somewhat accurate formula for calculating light fall-off from a large source would be worth having, especially considering: "the inverse square rule is often still a useful approximation; when the size of the light source is less than one-fifth of the distance to the subject, the calculation error is less than 1%"
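The physics-forum answer can be sketched numerically. For an idealised uniformly bright, Lambertian disc of radius R (an assumption: real frames and bounces won't be perfectly uniform), the on-axis illuminance is proportional to R^2 / (R^2 + d^2). That one expression reproduces both behaviours described: slower-than-inverse-square fall-off up close, and roughly 1% error for the inverse square approximation once the source size is a fifth of the distance.

```python
def disc_illuminance(d: float, radius: float) -> float:
    """On-axis illuminance from a uniform Lambertian disc, normalised so
    the value at d = 0 is 1.0:  E ~ R^2 / (R^2 + d^2)."""
    return radius ** 2 / (radius ** 2 + d ** 2)

def inverse_square_estimate(d: float, radius: float) -> float:
    """Point-source (inverse square) approximation of the same disc."""
    return radius ** 2 / d ** 2

def approximation_error(d: float, radius: float) -> float:
    """Relative error of the inverse square estimate; works out to (R/d)^2."""
    return inverse_square_estimate(d, radius) / disc_illuminance(d, radius) - 1.0
```

At d = 10R (source diameter 2R equal to one-fifth of the distance) the error is (R/d)^2 = 1%, matching the quoted rule; at d = R the exact value is 0.5 while inverse square predicts 1.0, i.e. the fall-off near the frame is far gentler than inverse square suggests.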
  22. Out of curiosity regarding the RGB clipping: how would one fix it? From my understanding, the white clipping stems from all RGBG photosites being fully saturated (full, unable to register more incoming light) and therefore interpreting it as white. Surely the only way to fix it would be to increase the physical size of the photosites (something camera manufacturers are doing the opposite of).
  23. If you're correcting multiple fixtures with multiple bulbs differing in age, they'll all be marginally different. Also if you're supplementing daylight, and so on.
  24. (As a younger aspiring/studying cinematographer.) On commercial and narrative work I always use my meter and spot meter, analogue and/or digital. This is for several reasons. During a pre-light/rig, the production I'm working on often can't afford to have the camera package for that day. Even if they could, it would be a waste of money, and why would I need it if I can judge exposure through other means? The same goes for scouting: if you need to know the level of light in a dark room, taking a camera package to judge exposure with is, to me, nonsensical. Not only that, but often the monitors I use to judge the image are uncalibrated and not harmonised to the camera (when a production is willing to hire someone younger, they very rarely have the budget for a knowledgeable DIT). I do think the lack of understanding of the properties of light and exposure from some (potentially due to digital technology) is an issue with contemporary film education.