Ryan Emanuel

Basic Member
  • Posts

    88
  • Joined

  • Last visited

Everything posted by Ryan Emanuel

  1. I think it's important to understand what that noise is. The biggest goal in imaging is visually seamless gradients between pixels in the representation of the scene. One of the variables those gradients depend on is bit depth. Code values per stop of DR on the capture device are not uniform; they're exponential, following 2^x where x is the stop increment. So for the stop between 4 and 5 stops of light, there are 2^5 - 2^4 code values per channel allocated to the gradient between those light levels. As you can imagine, lower stops get fewer code values and higher stops get exponentially more.

     Noise shows up where there isn't enough bit depth to make a visually seamless gradient. The camera is a machine and can only see discrete values, so when it estimates a value for a pixel each frame, it can land a step up or down from the true value simply because of rounding error. When there are so many steps that an incorrect guess is still extremely close to the right one, you'll never see it; when there aren't many steps, those errors are visible as noise. So we want more bit depth in the stops of DR that get fewer steps in the gradient and are more susceptible to noise.

     Many people never ask why we encode sensor data in a log container instead of gamma. We want to 'linearize' the relationship somewhat, to make the bits per stop more uniform. When you have a function and want to undo its effect, you use its inverse: log base 2 is the inverse of 2^x, and it restores the relationship to y = x. That's why a log function is used to reallocate the bit depth. A 'linear' signal from the sensor does not give you a linear bit-depth response per stop; it follows the 2^x allocation. The graph shows both how few code values the first couple of stops get and how fast the count explodes. You don't need 200+ steps of gradation per stop to look seamless; 30 steps are probably fine. So the log transform significantly boosts the signal on the low end before encoding and suppresses it in the highlights, which heavily reduces the variance compared to linear sensor data.

     When you bring the ISO down and re-expose for the scene, like the green curve, it reallocates more bit depth to the low end, increasing the fineness of the gradient, which reduces noise. Other approaches like ETTR accomplish the same thing. It's all the same idea: place the important detail of the scene in a portion of the log curve that captures more information per stop of DR, and the gradation will stay visually seamless even after transforms down the road in post. (See the sketch below.) SORRY, I MISLABELED THE LEGEND FOR THE GRAPH: ORANGE = LINEAR, BLUE = LOG.
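     A minimal sketch of that allocation, in Python (the 10-bit depth and 10-stop range are assumed for illustration):

         bits = 10
         max_code = 2 ** bits - 1  # 1023 code values
         stops = 10

         # Linear encoding: codes in stop s cover [2^s, 2^(s+1)) of the range,
         # so the count per stop follows the 2^x allocation.
         linear_codes = [round(max_code * (2 ** (s + 1) - 2 ** s) / 2 ** stops)
                         for s in range(stops)]

         # Log2 encoding: the code value tracks log2(scene), so every stop
         # gets an equal share of the codes.
         log_codes = [max_code // stops] * stops

         for s in range(stops):
             print(f"stop {s}: linear={linear_codes[s]:4d}, log={log_codes[s]:4d}")

     The bottom stop ends up with about 1 linear code against 102 log codes, which is exactly the gradient-fineness problem described above.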
  2. Luminance mapping is just normalization: take every value, divide by the max of the old range, and multiply by the max of the new range. Gamut mapping is the same idea. You can do this with LUTs; Resolve just doesn't let you, to keep you more dependent on the software (my opinion). That's why DaVinci isn't the best software for working with LUTs: it can really only apply them, and you can't troubleshoot, qualify, or normalize LUTs in DaVinci. Also, the Arri Rec 709 LUT is certainly different from a CST Rec 709 gamma and color space conversion. Not necessarily better, just different looks made in completely different ways.
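     A minimal sketch of that normalization (the values and ranges are assumed):

         def normalize(values, old_max, new_max):
             # Divide by the old range max, then scale by the new range max.
             return [v / old_max * new_max for v in values]

         print(normalize([0.0, 0.5, 1.0], old_max=1.0, new_max=0.8))
         # [0.0, 0.4, 0.8]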
  3. I really hope not, they are not the same thing. A CST is basically a 1,1,1 LUT for 3D and a 10 LUT for 1D, while 3D LUTs can be any resolution: you can have a 128x128x128 with about 2 million individual coordinates in the color space that can all be independently adjusted. 3D LUTs are far more powerful. A CST matches color space primaries, but just because pure red, green, and blue are matched, i.e. three edges of the cube, doesn't mean the inside of the cube is matched. The true transform that maps one camera perfectly onto another is non-linear, so a CST is ruled out because it can only do linear operations. The goal is to get your approximating transform as close to the true transform as possible. That's literally the definition of a data scientist's job; this issue is statistics, not really color grading.

     Say you take some camera samples to a data scientist and they run a sequential neural network to approximate every non-linearity in the color space transform. The trained model takes pixel coordinates from the first camera as input and outputs where each coordinate should move to match the second camera. But you don't want to pass whole images through, because millions of samples would take forever to process. So you pass an identity LUT through instead; a 33x33x33 has only about 36,000 samples. The network can also be normalized to 0-1 so nothing clips. This is going to be the future of color processing and how this problem gets solved. We just have to bring the right people to the table; a colorist won't be able to do this.
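     A minimal sketch of the identity-LUT trick (model_predict is a hypothetical stand-in for a trained network's inference call):

         import numpy as np

         N = 33
         grid = np.linspace(0.0, 1.0, N)
         # Every RGB coordinate of a 33x33x33 identity LUT, shape (35937, 3).
         identity = np.stack(
             np.meshgrid(grid, grid, grid, indexing="ij"), axis=-1
         ).reshape(-1, 3)

         def model_predict(rgb):
             # Placeholder for the trained camera-A -> camera-B mapping;
             # a real model would return the transformed coordinates.
             return rgb

         new_lut = model_predict(identity)  # write this array out as the 3D LUT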
  4. There are a few things going on here. First, any node-based VFX software works the same way: each node is a math expression applied to the footage. In a two-node system the first is nested inside the second, so whether the order of operations matters depends on whether it matters for the math. If the first node is gain, you are simply multiplying each pixel by a number; with gain set to 1.3, and x standing for each pixel value, the expression is 1.3 * x. The software stores that output in another variable, say y, and inputs it to the second node. Say that's a gamma correction whose expression is the square root of the image, z^(0.5) (z to the one-half power is the same as the square root), where z represents the pixel values at the second node. Substituting y for z gives (1.3 * x)^(0.5). Do you get the same thing if you do the operations backwards? Not really; you get 1.3 * (x^(0.5)). If you input the pixel value 0.6, you get about 0.883 the first way and about 1 the other way. So for some operations the order matters. It's the same if you multiply in one node and apply a log curve in another: 2 * log(x) is not the same as log(2 * x). I strongly recommend plugging those two expressions into desmos.com to see how vastly different the curves are. (A quick sketch is below.)

     The second thing is encoding. Images are just spreadsheets of numbers; they can be anything. The caps that happen to those numbers come from the monitor. It has a clipping function where any number past a threshold gets saved as the threshold, usually 1 once image values have been normalized to 0-1. BUT that only happens when you encode, i.e. save, the numbers. If they haven't been saved, you can have pixel values at 99,000 while the monitor can only show 0-1; as long as you don't encode those values, the clipping function won't clip the actual data. The computer still knows the original values, and you can bring them back into range with an inverse function, non-destructively.

     What many people miss is that footage encoded or shot with a log curve already has a math expression applied to it. So off the bat, none of the color grading tools really work as advertised: gain will affect the black point and lift will affect the clipping point of the footage. So in some image processing workflows you have to go back to linear, as long as the manufacturer isn't hiding the expression for how to do that, so there is no math applied to the image before you work on it. Whether to grade before or after the LUT depends on the operation you are trying to do and how complex the LUT is. You don't want to do skin tone correction before the contrast curve and matrix adjustment, since log values are hard to key; that might be better after the LUT, but scaling and offsetting are probably not good before it.
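     A quick sketch of the gain/gamma case, using the values from the example above:

         x = 0.6
         gain_then_gamma = (1.3 * x) ** 0.5   # ~0.883
         gamma_then_gain = 1.3 * (x ** 0.5)   # ~1.007
         print(gain_then_gamma, gamma_then_gain)  # not equal, so order matters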
  5. This is one of those use-the-right-variable issues. Saturation from the HSL color space is the traditional variable people change for saturation in digital, but if you want digital to behave like film, it's the wrong variable for perceptible saturation. Film saturation depends on luminance and hue changes, so anybody adding HSL saturation to the shadows of digital won't get the right results. The tools in color grading applications aren't built for that. It's one of the biggest reasons the common digital looks are different from film: the digital video engineers decide the structure of how the color grading tools work. It's definitely not impossible to get it right, but in HSL it's a square object trying to go through a circle. We'd all like to transform the square into a circle, but DaVinci won't do that, at least not easily. http://www.yedlin.net/NerdyFilmTechStuff/TwoKindsOfSatuartion.html
  6. This is what a LUT actually is, sorry for the low res. It's a vector field: each arrow represents the transform for one coordinate in the color space volume. This is the Arri Rec 709 33x LUT subsampled by a factor of 50 so you can actually see the individual vectors. It's hard to imagine a colorist capturing all those vector directions with color grading tools, even subsampled, across the entire space. I just don't know why there isn't post software that helps you visualize LUTs, to see where the color is "flowing" in different segments of the LUT. Visualizations like this (in higher res and rotatable, of course) would make the vector-field transforms more intuitive. You can also see from the perspective of the still that some of the trends in how the vectors change through the space are curves, not straight lines, so linear operations won't capture the complexity.
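     A rough sketch of that kind of visualization (the LUT array here is a random stand-in; a real one would be loaded from a .cube file and subsampled):

         import numpy as np
         import matplotlib.pyplot as plt

         # Hypothetical (N, 6) rows of [r_in, g_in, b_in, r_out, g_out, b_out].
         lut = np.random.rand(64, 6)

         fig = plt.figure()
         ax = fig.add_subplot(projection="3d")
         # One arrow per coordinate: from the input color to the output color.
         ax.quiver(lut[:, 0], lut[:, 1], lut[:, 2],
                   lut[:, 3] - lut[:, 0],
                   lut[:, 4] - lut[:, 1],
                   lut[:, 5] - lut[:, 2])
         ax.set_xlabel("R"); ax.set_ylabel("G"); ax.set_zlabel("B")
         plt.show()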
  7. I'd say there are literally zero places, whether online or in film schools (other than AFI), that teach filmmakers what a look-up table actually is. A LUT stores six columns of tens or hundreds of thousands of coordinate entries in a 2D list (hence the taxes). Three input columns are hidden in the indices, and the three output columns are the 2D list itself. Three ins and three outs is basically a vector field, so any algorithm that applies to vector fields can be used on LUTs and color spaces. Most of the tools in DaVinci are linear operations, while color matches are non-linear functions. The color-chart match tools in software are usually projection or least-squares linear regression algorithms, which are still linear. Linear will always carry significant error, since the true function for a color space transform is non-linear.

     If we want to solve some of these problems, we have to choose the right variables to work with. Take the halation thing: highlight roll-off and halation are not the same variable. Halation is the luminance artifact that occurs at very high-contrast edges; it does not depend on highlight roll-off in the color space, so LUTs can't help you there. Edges are spatially dependent, so you need to convolve, i.e. filter, the effect spatially. You transform the image into the gradient domain, where each pixel value becomes the difference in brightness between that pixel and its 8 surrounding neighbors. On high-contrast edges that difference will be close to one; for low-contrast pixels it will be close to zero. Then you run tests to see how large the halation is with respect to a point source and find a function that approximates its shape and style. If the point source has a 30-pixel diameter in the shot, is the halation diameter 60 pixels or 120 pixels, and is it a circle or a star? Once you have that, activate the halation approximation at high values in the gradient domain. (A rough sketch is below.)

     You can take this paragraph to a developer and they will build you a halation algorithm, as long as you have the budget for the film samples. You have to hand them the right variables the problem depends on, though: if you tell the developer the halation comes from the lens, any algorithm they write is going to be wrong. If a DP doesn't want to dive into these rabbit holes, that's all good; literally zero careers depend on this stuff. But if we want our craft to progress in the right direction, we have to be careful about which variables we say an effect depends on.
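     A rough sketch of the gradient-domain idea (the threshold, radius, and strength are assumed for illustration, not measured from film):

         import numpy as np
         from scipy import ndimage

         def add_halation(img, threshold=0.5, radius=15, strength=0.3):
             # Local contrast: each pixel vs. the mean of its 3x3 neighborhood,
             # standing in for "difference from the 8 surrounding neighbors".
             local_mean = ndimage.uniform_filter(img, size=3)
             edges = np.abs(img - local_mean)
             # Keep only the high-contrast pixels, then spread them with a
             # wide blur shaped like the measured halation falloff.
             mask = np.where(edges > threshold, img, 0.0)
             glow = ndimage.gaussian_filter(mask, sigma=radius)
             return np.clip(img + strength * glow, 0.0, 1.0)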
  8. His old algorithms are on GitHub. DaVinci, in its newest iteration, copied some of the ideas.
  9. It depends; if you can't do your taxes in the software, it's not the ideal place to make a look-up table.
  10. I guess my question is why people are saying it can't be done; wouldn't it benefit everybody if it could? Like I was saying, we should bring the problem to the people who can solve it. If you want unique spatial fidelity responses at different levels of detail for different texture patterns, that sounds like a convolutional neural network problem, and a deep learning data scientist can probably solve it. The hard part is matching pixel responses between sample images: if you could match the pixels even for a small region of the chip on the two formats, a well-constructed CNN would do the rest. People are going to try stuff like that; my hope is that there are cinematographers in the room. Personally, I don't know that color goes unnoticed. I think it's a pretty big component and still pretty difficult to transform. You either need to automate thousands of color samples on both formats that span the entirety of the space and interpolate, or you work with a sparse data set and extrapolate anything outside the sample hull. That part is tricky: it's one thing to linearly interpolate when you have sample targets near every possible point, but when you don't, the chances of a linear extrapolation getting close to the true values are pretty low. So you need a non-linear extrapolation algorithm that matches the generalized trend of how each color moves along its hue/saturation vector. (A sketch of one option is below.) That's a pretty iterative process; you get a lot wrong before you get anything right.
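     A minimal sketch of one non-linear fit over sparse samples (the sample arrays are random placeholders for real matched chart data):

         import numpy as np
         from scipy.interpolate import RBFInterpolator

         src = np.random.rand(200, 3)  # camera A RGB samples
         dst = np.random.rand(200, 3)  # matching camera B RGB samples

         # Radial basis functions give a smooth non-linear fit that can also
         # extrapolate outside the sample hull, unlike piecewise-linear lookup.
         fit = RBFInterpolator(src, dst, kernel="thin_plate_spline")
         print(fit(np.array([[0.5, 0.5, 0.5]])))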
  11. A lot of issues get conflated here: there's color reproduction, noise structure, halation artifacts, and a few other pertinent classifications for a look match. They all need to be discussed separately, since the challenges are very different; when they get muddied together, it's hard to make sense of anything. It sounds like what you're talking about is noise structure, not color reproduction. Having one noise structure and layering another on top won't make a match. You need to take one noise structure and transform it to approximate the other. Noise is basically error, and the distribution function that describes the variance of the errors can be transformed just like color spaces: denoising maps the variance toward zero, but the noise can also be mapped onto another distribution. If you add grain in software, it's probably just adding normally distributed noise, but film might not have a normal distribution for its errors; it might have a differently shaped distribution function. So you need someone who understands statistics, distribution functions, and signal processing to make the match. Again, a colorist can't do that, and you don't know how FilmConvert approximated the distribution; they could be right, they could be wrong. If you want to know it's right, a data scientist or a signal-processing developer can help. (A minimal sketch of one approach is below.) The democratization of all looks is coming whether we like it or not. Any look will be available on any camera that captures enough accurate data, once the right technicians are brought into the conversation.
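     A minimal sketch of remapping one noise distribution onto another by quantile matching (both sample arrays are assumed stand-ins, not measured data):

         import numpy as np

         digital_noise = np.random.normal(0.0, 1.0, 10_000)  # stand-in sensor noise
         film_noise = np.random.gamma(2.0, 0.5, 10_000)      # stand-in film grain

         # Rank each digital sample, then replace it with the film sample at
         # the same quantile, so the output follows the film distribution.
         ranks = np.argsort(np.argsort(digital_noise))
         quantiles = ranks / (len(digital_noise) - 1)
         matched = np.quantile(film_noise, quantiles)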
  12. Some art forms embrace technology, others don't. It's tricky because aesthetically we plateaued decades ago as far as the most pleasing color spaces for cinema. Most people would agree that film stocks have those color spaces, so many just shoot on film; it is the path of least resistance to the look. Nonetheless, technology is allowing further and further transformation of color spaces, but DPs in general are not spearheading that endeavor. The issue for the future is that the DP's primary collaborator for color is the colorist, who is an artist as well, not an image-processing technician; for the problem at hand, a developer is needed as a collaborator. When you say digital can't match film, you are really saying there does not exist a function that can transform a digital camera's RGB color space into film's color space, that it can't even be approximated. You are saying there is nothing in computational geometry, supervised machine learning, or vector field interpolation that can solve the problem. That's probably not true; harder problems have been solved or approximated. A data scientist would probably say this problem is a piece of cake. DPs just need to talk to the right people.
  13. 1. Yes, that was the original question.
      2. 20 A double-pole; 30 A breaker.
      3. Just one neutral for the coffee maker; the manual said it only uses the 120 V leg for the clock. For the HMI, both are on the neutral.
      4. You have to put the ground and neutral together, because the HMI will send more back on the neutral.
      5. No, and I don't know.
      6. No, it wasn't; for the light you would need a GFCI.
      7. It's single-phase, so we were getting 245 V across the two lines. If it were three-phase we would get 208 V, and that would raise the amp draw of the ballasts and potentially change the correct breaker and wire.
      8. Yes; if the power factor correction is not 1, the ballast will pull more amps, potentially higher than the rating of the wire or connections (rough numbers below).
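     As a rough worked example for point 8 (numbers assumed, not from any manual): amps = watts / (volts x power factor). A 4K HMI ballast on 240 V at a power factor of 1.0 draws about 4000 / 240 ≈ 16.7 A, but at a power factor of 0.8 it draws about 4000 / (240 x 0.8) ≈ 20.8 A, which is already past a 20 A circuit.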
  14. L14-30 is what I was talking about for changing the receptacle.
  15. The same way you would run 240 V lights off a dryer outlet. All I'm saying is convert a double duplex with two 120 V lines to one 240 V receptacle. I'm not saying keep the panel cover off and leave energized wires connected to the buses exposed; just use the duplexes already run in the house. The only rewiring you have to do is swap the receptacle and wire in a new 240 V breaker, so on the panel side you are disconnecting two wires and placing them right back in the same spot. The neutral and ground don't need to be touched because they are already connected to the bus.
  16. I was referring to subpanels, so you don't have to tie in; you can just turn off the breaker for the subpanel.
  17. Is this ever done to get 4Ks-6Ks running off house power? My sister-in-law is starting a cafe and all their coffee appliances are 220 V, but all their breakers are 120 V. An electrician quoted them $1,000. I did some research and figured out I could rewire two 20 A 120 V circuits to a double breaker and swap the receptacle for a 220 V one. It worked and only cost $20. Do electricians on productions ever just add breakers to panels and grab higher-wattage power that way instead of using generators, at least for some 4Ks and 6Ks?
  18. Gamma in this case means a power function like x squared (x^2) or the square root of x (x^(1/2)). If you don't know the difference between linear and gamma-corrected signals, it's worth researching online. Your sensor records linear data and applies a gamma correction, which is a power function: if x = pixel brightness, every pixel receives something like new pixel brightness = square root of x. Square-rooting all the pixel values between zero and one brings the brightness and contrast of the image closer to how the human eye responds. When they say multi-gamma, I think it's just a fancy way of saying contrast curve. I might be wrong, but the formulas for s-curve contrast can be considered multi-gamma, because the code usually works like this: if the pixel value is above 0.5, apply one power function, and if it's below 0.5, apply the opposite one, so there are "two" gamma curves being applied. (A rough sketch is below.) All in all it's probably just marketing and won't change anything fundamental about how you use the camera.
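     A rough sketch of that piecewise "two gamma" s-curve (the 0.5 pivot and the exponent are assumed for illustration):

         def s_curve(x, pivot=0.5, power=2.0):
             if x < pivot:
                 # Below the pivot: push values down (more shadow contrast).
                 return pivot * (x / pivot) ** power
             # Above the pivot: pull values up (rolled-off highlights).
             return 1.0 - (1.0 - pivot) * ((1.0 - x) / (1.0 - pivot)) ** power

         print([round(s_curve(v), 3) for v in (0.1, 0.25, 0.5, 0.75, 0.9)])
         # [0.02, 0.125, 0.5, 0.875, 0.98]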
  19. There are a couple of variables other than the gel that will determine the final color. Are your RGB primaries still set to wide gamut, or have they been reset to Rec 709? That might make the difference between needing a jade gel or a Cyan 60. If your primaries are not set to Rec 709, or at least close, what the monitor shows will almost certainly differ from what your eye sees on set, sometimes by a large margin. Check your image processing workflow; that will usually determine the right gel.
  20. Something like this, for a whole bunch of chip charts, cameras, exposures, and color temps? Each row is one chart patch: ALEXA R G B | BM R G B.

     EXP: f2.8, 180 shutter, xrite, 5600k
     585 592 588 | 614 622 606
     690 787 818 | 774 908 928
     855 799 683 | 935 919 725
     776 832 831 | 875 938 923
     675 683 683 | 748 757 740
     818 738 790 | 933 839 886
     813 823 726 | 932 938 781
     770 783 821 | 875 896 930
     754 759 759 | 859 868 852
     877 841 713 | 935 938 759
     685 660 713 | 752 724 778
     701 720 661 | 787 804 709
     811 814 821 | 929 936 924
     791 656 645 | 919 724 689
     823 718 729 | 933 815 801
     738 774 805 | 830 885 908
     855 858 862 | 934 935 934
     720 782 707 | 829 901 778
     691 735 811 | 758 839 927
     838 800 790 | 936 931 885
     892 895 895 | 940 941 940
     630 687 778 | 659 776 883
     842 749 661 | 935 868 704
     716 677 653 | 801 750 697

     EXP: f2.8, 90 shutter, xrite, 5600k
     520 527 526 | 522 531 515
     624 723 751 | 681 815 834
     789 734 610 | 899 822 636
     710 776 765 | 799 879 846
     606 617 617 | 656 664 647
     750 670 722 | 854 748 795
     748 764 654 | 853 869 703
     706 714 754 | 783 804 838
     687 694 693 | 776 776 758
     811 784 643 | 933 898 687
     618 596 649 | 661 632 687
     635 658 592 | 695 712 617
     747 751 752 | 853 861 845
     728 592 580 | 825 663 599
     757 652 663 | 859 718 709
     672 708 740 | 736 791 814
     792 795 796 | 917 926 909
     650 713 635 | 735 806 687
     627 671 747 | 664 746 832
     773 736 722 | 882 835 792
     826 829 830 | 934 938 923
     565 622 718 | 569 685 791
     776 686 591 | 894 766 617
     650 613 587 | 709 659 606

     EXP: 22.5 shutter, xrite, 5600k
     395 403 402 | 356 365 350
     489 596 623 | 500 632 646
     658 608 482 | 711 639 458
     578 649 633 | 614 694 662
     480 490 490 | 475 486 470
     624 545 596 | 666 586 612
     617 640 519 | 668 682 523
     575 588 627 | 597 618 646
     559 566 565 | 580 591 574
     674 653 500 | 751 713 501
     492 469 522 | 483 455 506
     508 530 468 | 516 529 438
     616 625 624 | 665 675 661
     600 464 456 | 640 453 424
     629 525 537 | 674 533 523
     542 579 610 | 552 612 629
     661 667 667 | 730 738 723
     521 593 509 | 550 622 504
     495 542 620 | 483 565 644
     643 610 591 | 693 648 609
     697 700 700 | 784 789 770
     432 495 590 | 400 501 605
     648 557 466 | 706 585 439
     523 486 462 | 522 477 430
  21. Does there exist a data set repository with camera color response data for several different cameras, all for 100+ chip chart samples, organized into a spreadsheet or dataframe? It would be so useful.
  22. If you were rating it at 400 ISO, would 2x 1200 W HMIs or 2x 4Ks make sense in the water for a wide at 2.8/4? I see there is still an upstage backlight, but the pool is doing a lot of the work.
  23. I was thinking about HydroFlex HMIs in the pool, but I haven't worked with them before.