Couple of new questions for all. So it seems that there is more data contained in a 16-bit RAW signal than in a 16-bit 4:4:4 uncompressed signal.
Questions:
1) If one were interested in using light to capture what we CAN'T see, which of the two contains more data points?
2) Would it be possible to take the information in the captured data set that we can't see and reformat it so that we CAN see it?
Kind of like what Landsat 8 does with Earth imaging, combining the RGB and multiple infrared channels (near-infrared, shortwave infrared, thermal infrared) to "see" through clouds, identify rock types in mountains, map population centers, and so on? Or like what snakes, eagles, or some deep-sea lifeforms "see" with their extended spectral sensitivity.
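For what it's worth, the Landsat-style trick is usually called a false-color composite: you take bands the eye can't see and simply assign them to the red, green, and blue display channels. Here's a minimal sketch of the idea in Python with NumPy, using synthetic arrays as stand-ins for real sensor bands (the band names and the NIR→R, SWIR→G, red→B mapping are just illustrative assumptions, not any particular satellite's convention):

```python
import numpy as np

def normalize(band):
    # Stretch one band to the 0-1 display range (a simple contrast stretch).
    lo, hi = band.min(), band.max()
    return (band - lo) / (hi - lo) if hi > lo else np.zeros_like(band)

def false_color(nir, swir, red):
    # Remap invisible bands onto visible display channels:
    # near-infrared -> R, shortwave infrared -> G, visible red -> B.
    return np.dstack([normalize(nir), normalize(swir), normalize(red)])

# Synthetic 4x4 "bands" standing in for captured sensor data.
rng = np.random.default_rng(0)
nir, swir, red = (rng.random((4, 4)) for _ in range(3))

img = false_color(nir, swir, red)
print(img.shape)  # one RGB image built from mostly-invisible channels
```

The point is that no new data is created; the invisible channels were recorded all along, and the "reformat" step is just choosing which recorded numbers drive which display primaries.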