Forgive this overlong reply, but I couldn't find any way to condense it more. And I'm writing this from memory, so I'd ask more technical users than me to please forgive any errors in terminology! But I think it covers the gist of the matter.

Let's say you have an image, and you want it brighter and higher contrast. You decide to use an incredibly simple gain function that doubles the incoming values in your image. The gain function itself would be "input value x2 equals output value". So looking at a set of incoming values 0,1,2,3,4,5,6,7 we would get the result 0,2,4,6,8,10,12,14. You've doubled each value in the image. But you've had to do live computation (actual realtime math) on each incoming number.

So we make it easier for the computer by pre-calculating all possible answers and storing them as a Look-Up Table, such as this:

0,0 1,2 2,4 3,6 4,8 5,10 6,12 7,14

Now it's really fast and easy for the computer to find the results you want, because all it has to do is "look up" the answers. It's less flexible than doing the math in realtime (what if you wanted to change the gain to x2.1 or x3? You'd need a whole new LUT), but it's way faster.

When you start talking about real-world image information, and real-world color edits, storing every value makes the table larger than we want. Even for a simple gain function like the one above, you'd need to store 1024 value pairs for 10-bit precision. So we store fewer values. In the example above, we might store just half of them:

0,0 2,4 4,8 6,12

This means the computer has to do *some* calculation to interpolate the missing values. But it only has to do that once, upon first loading the LUT (I think?!), and then it can "look up" from the full set of values in future. (This explains why different software can sometimes give slightly different results from the same LUT: they might be interpolating the missing values using different techniques.) The example above is of a "1D" LUT.
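If it helps to see it concretely, here's a minimal Python sketch of that idea (the `expand` helper and its details are mine, purely for illustration): we keep only the stored half of the "x2 gain" table, rebuild the full table once by linear interpolation, and from then on applying the LUT is just an index lookup.

```python
import bisect

# Stored half of the "x2 gain" table from the example above: input -> output.
sparse = {0: 0, 2: 4, 4: 8, 6: 12}

def expand(sparse, size=8):
    """Rebuild the full table once, linearly interpolating the missing
    entries (and extrapolating past the last stored one)."""
    keys = sorted(sparse)
    full = []
    for x in range(size):
        i = bisect.bisect_left(keys, x)
        if i < len(keys) and keys[i] == x:
            full.append(float(sparse[x]))  # this value was stored directly
            continue
        # pick the nearest stored segment and interpolate along it
        j = min(max(i, 1), len(keys) - 1)
        a, b = keys[j - 1], keys[j]
        t = (x - a) / (b - a)
        full.append(sparse[a] + t * (sparse[b] - sparse[a]))
    return full

full_table = expand(sparse)  # -> [0.0, 2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0]
brighter = full_table[5]     # applying the LUT is now just an index lookup
```

The choice of interpolation here (linear, cubic, and so on) is exactly the kind of detail that can make two programs disagree slightly on the same LUT.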
As you can see, with individual value pairs (even with pairs for each of the RGB channels) we can only do simple things to "shape" the image, altering its overall brightness and crude color balance. We CAN affect the channels separately, but (and this is important) the channels cannot interact. They only know what their piece of the look-up table is telling them.

For real-world use with color images, we need "3D" LUTs. These are far more complex, but the same principles apply. They store triplet values for RGB combinations, and map each incoming triplet to an output triplet. This means you can affect color and saturation along with brightness. A crude example:

1,1,1 might "look up" to 2,2,2
but 1,1,2 might "look up" to 1,2,3

The R and G values get a different treatment in the second pair than in the first. This is how 3D LUTs are able to store sophisticated and subtle color edits. To store all possible incoming and outgoing combinations would make for enormous LUTs, so again we store only enough for the computer to accurately work out the whole table later.

Because of their "3D" nature, these LUTs are often referred to as "cubes", and their sizes given as 17x17x17 or 33x33x33, for example. These numbers describe the amount of precision the LUT is storing, and hence how much interpolation your software will have to do. To a certain degree bigger is better, and more accurate. 65x65x65 is a common high-quality size for use in final grading. For viewing purposes, 33x33x33 is plenty good enough and a common standard for monitor calibration. Most hardware devices have a limit on the size of LUT they can handle.

Now that all the boring part is out of the way, I'll talk about the actual use of LUTs. What a LUT is effectively doing is saying "make THIS look like THAT", and they are utterly inflexible once created. They can only do the one color and brightness transform they were built to do. So the trick becomes building them specifically and accurately.
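For the curious, here's a rough Python sketch of how software applies a 3D LUT (the tiny 2-point cube and the function are my own illustration, using plain trilinear interpolation; real implementations may use fancier methods, which is another reason results can differ between programs):

```python
def apply_3d_lut(cube, rgb):
    """Look up a normalized (r, g, b) triplet, 0..1, in an N x N x N cube
    by blending the 8 surrounding grid entries (trilinear interpolation)."""
    n = len(cube)
    coords = [c * (n - 1) for c in rgb]          # position on the lattice
    base = [min(int(c), n - 2) for c in coords]  # lower corner of the cell
    frac = [c - b for c, b in zip(coords, base)] # position within the cell
    result = [0.0, 0.0, 0.0]
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                # weight each corner by how close the input sits to it
                w = ((frac[0] if dr else 1 - frac[0])
                     * (frac[1] if dg else 1 - frac[1])
                     * (frac[2] if db else 1 - frac[2]))
                corner = cube[base[0] + dr][base[1] + dg][base[2] + db]
                result = [r + w * c for r, c in zip(result, corner)]
    return result

# A 2x2x2 "do nothing" cube: each grid point maps to itself.
identity = [[[(r, g, b) for b in (0, 1)] for g in (0, 1)] for r in (0, 1)]
```

Because each stored entry is a full output triplet indexed by the full input triplet, the channels can interact here, which is exactly what the 1D case can't do. Either way, though, the cube can only reproduce the one transform baked into it, so everything still hinges on building it well.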
And for that you need data: at least two pieces of it, an input and an output, a THIS and a THAT. In the case of the popular so-called "film look" LUTs, someone has "profiled" an input image signal (let's say Arri Log-C, a flat-looking curve designed to preserve image detail throughout the range of a photographed scene) and also the characteristics of a desired output image. For the sake of argument, let's say "the look of a color negative printed onto Kodak Vision stock". If both the input and output are accurately profiled, and the software used to compare them and "build" the LUT is good, then the resulting LUT will accurately transform the incoming Log-C image to the desired appearance every time.

(Whether that's what you actually need or want is another matter. What happens if you feed a higher-contrast image into this LUT, say a Rec709 image instead of Log-C? Well, the LUT will simply do what it does exactly as before, but your result will have much higher contrast and color saturation than the creator of the LUT intended. If you want that, fine!)

Things do get more complicated when you take viewing environment into consideration. When building your "Log-C to Kodak print" LUT, you also need to know what the target viewing conditions for the resulting image will be (Rec709, for example) so that your LUT can target that known viewing condition.

A lot of the "print emulation" LUTs that were floating around the web a few years ago were originally built by labs to VERY precisely emulate their print stock, right down to the dense yellow highlights and the somewhat lifted, cyan-tinged blacks that the real photochemical release print would have when projected with the right kind of bulb, etc. They did this because THEY wanted to see on a video monitor EXACTLY what the print would look like when they ACTUALLY printed it on film stock.
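As a toy illustration of that "profile THIS, profile THAT, bake the chain" process, here's how a 1D table might be built in Python. The two curve functions are deliberately fake placeholders, NOT real Log-C or print-stock math; the point is only that every stored input value gets pushed through the full transform chain once, and those answers are what get shipped as the LUT:

```python
def fake_log_decode(v):
    # stand-in for "undo the camera's log encoding" (hypothetical curve)
    return v ** 2.0

def fake_display_encode(v):
    # stand-in for "encode toward the target viewing condition" (hypothetical)
    return v ** (1 / 2.4)

def bake_lut(size=33):
    """Run each stored input value through the whole THIS-to-THAT chain once;
    the resulting table of answers IS the LUT."""
    lut = []
    for i in range(size):
        x = i / (size - 1)  # stored input value, normalized 0..1
        lut.append(fake_display_encode(fake_log_decode(x)))
    return lut

table = bake_lut()  # 33 entries; the chain that made them is no longer needed
```

Once baked, the table has no memory of how it was made: feed it Rec709 instead of Log-C and it will happily apply the same numbers regardless, just as described above. The real print-emulation LUTs, of course, were built from far more elaborate chains than this.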
Many users today DON'T actually want that much precision: they want the film color response or "look" without the "projected through a piece of plastic" look. To profile real film (for example), and then adjust the numbers to give a "pleasing" result without the qualities you might not want, takes skill, which is why well-built LUTs are often proprietary and expensive. Although they are just one-stop color transforms once built, they took very talented people a lot of work to get right.

It's also important to note that you don't HAVE to profile some medium's existing color response to create the look you want. If you play around in color correction software and get a result you like, you can save that as a LUT and apply it freely to any similar incoming footage, without ever having to worry about "where" the created look came from.

Beyond creative uses, LUTs are used in any situation where you want to make "this" image look like "that" image. If you have a monitor, and you can profile it accurately enough to know what it ACTUALLY looks like, and you also know what you WANT it to look like, you can use a LUT to make it so. (Again, this is a little more complex in reality: you can't make a monitor display colors it physically can't display, so your LUT can only map the displayed image into a subset of the monitor's actual range. And a LUT can't magically make an unstable device more stable.)

LUTs can also be built to make one camera "look" like another, if desired. But again, you're subject to the limitations of the source data: you can't magic up new information where there was none. If your camera clips highlights sooner than the target camera, no LUT is bringing that back.

To end this rambling post, and to answer your original question: there can be no such thing as a "Rec709" LUT, only a "something to Rec709" LUT. You could have a "Log-C to Rec709" LUT or an "S-Log to Rec709" LUT.
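The clipping point is easy to see in miniature (hypothetical numbers): once two different scene values have been clipped to the same code value, they land on the same table entry, and no output mapping can ever separate them again.

```python
def clip(v, ceiling=0.8):
    # stand-in for a camera clipping highlights at some ceiling (hypothetical)
    return min(v, ceiling)

a = clip(0.9)  # -> 0.8
b = clip(1.0)  # -> 0.8
# a == b: any LUT must now send both through the SAME table entry,
# so the original difference between 0.9 and 1.0 is gone for good.
```

The same limitation applies to matching one camera to another: a LUT can only work with information both cameras actually recorded.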
And monitor LUTs are even more specific, useful only on the actual device that was profiled for their creation: you might have a "Jim's-monitor-at-75-backlight-and-all-other-settings-at-default to Rec709" LUT whose sole purpose is to make Jim's monitor display an incoming Rec709 image accurately.

LUTs are an important part of modern color workflows because they are precise, repeatable, and, once created, have low computational overhead. They allow us to make subtle and nuanced color corrections and store them for repeated use through all stages of the imaging pipeline. They can be creative or they can be technical. And in cases such as monitor calibration, they can simply be right or wrong. It all depends on the quality of the data used to create them, and on a clear understanding of the intended use.

Hope that helps!