DJ Joofa

Posts posted by DJ Joofa

  1. How much does the student version differ from the regular version?

     

    There is no difference in functionality in use. Whether an upgrade to a newer version is included is something I am not fully sure about. However, updates (not upgrades), such as newer fixes, etc., are the same as with the regular retail version and come at no additional cost.

     

    BTW, it's not just students who are in for a free ride. I think faculty and staff of colleges/universities also get a reduced price (slightly higher than the student price, but nevertheless much less than the retail price).

  2. I really don't know if I should get a Mac or a PC laptop. I think I would like to get a Mac and then, once I finish college, get FCP Studio. Can anyone with FCP Studio give me your input on the software compared to Adobe Premiere Pro and After Effects? Thanks very much!!

     

    If you are in college you can get a very good deal on Adobe Production Premium (which includes Premiere, After Effects, Photoshop Extended, Flash, Encore, Soundbooth, Illustrator, Ultra, OnLocation, and some more) for about 300-400 dollars. Final Cut Studio can be had for about 500-600 dollars.

     

    I personally think that Adobe Premiere has reached a stage where it is equal to or better than Final Cut Pro. Of course, the Dynamic Link combo of After Effects and Premiere is something that Apple does not even come close to with Final Cut Pro and Motion, or even Shake. And Shake is not part of Final Cut Studio, but it is useful software.

  3. There are fun things like filling the prefetch queue with register increments, and then overwriting the code in memory with NOPs to see how long the queue is -- that's a way of identifying which of the early Intel chips you were running on.

     

    John that sounds interesting. Care to illustrate a little more how you used to do this trick? Thanks.

  4. So yes, if you have highlight clipping, you set the "ASA" to a higher figure, and (I presume) Redcode will reduce the gain of its digital amplifier. But that will only work if you do the same to your light meter and follow its instructions.

     

    I think if you just raise the ASA then the digital gain should increase, not decrease. Red's software has no way of knowing whether corresponding metering was done to go with the higher ASA or not. Also, to go with the notion of a "faster" film stock, a higher ASA rating should mean higher gain for a digital system.
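
    A minimal numeric sketch of this point: raising the ASA/ISO rating on a digital system implies more digital gain, not less, under the usual convention that each doubling of ISO is one stop (about 6 dB) of gain. The base ISO of 320 is a hypothetical figure for illustration.

    ```python
    from math import log10

    # Hypothetical base sensitivity; each doubling of ISO above it
    # corresponds to one stop (~6 dB) of additional digital gain.
    BASE_ISO = 320

    def digital_gain_db(iso):
        """Digital gain in dB applied relative to the base ISO."""
        return 20 * log10(iso / BASE_ISO)

    print(digital_gain_db(640))   # ~6.0 dB  (one stop up)
    print(digital_gain_db(1280))  # ~12.0 dB (two stops up)
    ```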

  5. [A] Cineon LOG scan of a film negative does, it's not meant for direct TV viewing, it's meant for color-correcting and film-out work. The point of a LOG scan is to optimize the number of bits for the dynamic range of film, to preserve as much luminance information as possible.

     

     

    The scan is never log; it is always linear. It is the negative data that is already log, and there is a difference between the two. Therefore, the linear scanning is not what optimizes the number of bits; it is the response of the film to the incoming light that does that. The relationship of film data to incoming light is logarithmic in nature, and a typical DI scan just *linearly* scans that already non-linear log range.
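
    To make the distinction concrete, here is a small sketch using the commonly published Cineon constants (reference white at code 685, 0.002 density per code value, negative gamma of 0.6). The logarithm lives in the film's density-versus-exposure response; the scanner merely samples that density linearly.

    ```python
    from math import log10

    # Map linear-light relative exposure (1.0 = reference white) to a
    # 10-bit Cineon printing-density code value, using the commonly
    # published constants: white at code 685, 0.002 density per code
    # value, negative gamma 0.6.
    def cineon_code(relative_exposure):
        return 685 + (0.6 / 0.002) * log10(relative_exposure)

    for stops_under in (0, 1, 2, 3):
        exposure = 0.5 ** stops_under
        print(f"{stops_under} stops below white -> code {cineon_code(exposure):.0f}")
    # Each stop spans roughly 90 code values: the log response of the
    # negative, not the scanner, is what spreads exposure across the codes.
    ```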

     

    A good CMOS DSLR can produce very good saturation in jpeg, jpeg-tagged RAW or RAW straight out of a good RAW converter.

     

    -Sam

     

    The maximum attainable saturation (closest approach to the corresponding spectrum color) for a material having a specified dominant wavelength and visual efficiency (luminance factor) will be attained if the material has a spectrophotometric curve which is everywhere either zero or unity, and which has, at most, two transitions between these values within the region of visible radiation.

     

    Since most materials will not have this type of reflectance response (i.e., the characteristic should display only one continuous reflectance band or only one continuous absorption band in the range of visible wavelengths), the recorded saturation will always be (relatively) poorer.

     

    Differences in chromaticity are less apparent for dark colors than for light colors. Therefore, the perfection of color reproduction is less important in areas of smaller luminance values.

  6. Horizontal resolution is/was an entirely different story; you could change that easily by simply using low-pass filters, and the theoretical maximum resolution possible for most TV systems was something that was aimed for but rarely achieved. Basically, vertical resolution was only limited by the optics, horizontal resolution was limited by a number of factors.

     

    I don't think the above quote is entirely correct. Unlike a typical (Fourier) spectrum of an image, which yields both horizontal and vertical frequencies (a 2-D spectrum), the way TV scanning works for a traditional TV system (say NTSC) there is only *one* frequency axis. The same single axis represents both horizontal and vertical detail. So if you put a low-pass filter on that one-dimensional frequency spectrum, you are going to get rid of some vertical (typically oblique) detail in addition to horizontal detail.
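
    A toy sketch of this folding effect, using a hypothetical 64x64 test image: raster scanning flattens the 2-D picture into one 1-D signal, so purely horizontal and purely vertical detail both land on the same single frequency axis, just in different places.

    ```python
    import numpy as np

    # Raster scanning folds a 2-D image into one 1-D signal, so
    # horizontal and vertical detail share a single frequency axis.
    H, W = 64, 64
    x = np.arange(W)
    y = np.arange(H)[:, None]

    vertical_stripes = np.sin(2 * np.pi * 8 * x / W) * np.ones((H, 1))    # horizontal detail
    horizontal_stripes = np.sin(2 * np.pi * 8 * y / H) * np.ones((1, W))  # vertical detail

    for name, img in [("horizontal detail", vertical_stripes),
                      ("vertical detail", horizontal_stripes)]:
        scanline = img.reshape(-1)                  # row-by-row raster scan
        spectrum = np.abs(np.fft.rfft(scanline))
        peak_bin = spectrum[1:].argmax() + 1        # skip the DC bin
        print(f"{name}: dominant 1-D frequency bin = {peak_bin}")
    # Horizontal detail lands at a high 1-D frequency, vertical detail
    # near DC; oblique detail spreads in between, so a 1-D low-pass cannot
    # cut horizontal detail without touching some oblique/vertical structure.
    ```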

  7. Ultra-high-definition? There is some fixation on increasing resolution without considering its full implications for color fidelity.

     

    As colored objects are decreased in size, four things are found to happen in succession. First, blues become indistinguishable from grays of equivalent brightness and, second, yellows become indistinguishable from grays. In the size range where this happens, browns are confused (in hue but not in brightness) with crimsons, and blues with greens, but reds remain clearly distinct from blue-greens. On the whole, colors with pronounced blue lose blueness, while colors lacking in blue gain blueness; all become less saturated. Third, with still further decrease in size, reds merge with grays of equivalent brightness and, finally, blue-greens also become indistinguishable from gray. For exceedingly small objects, normal visual sensations are devoid of all color connotation, and only perception of brightness remains.

     

    Small patches cut from large colored sheets are not as well matched visually by the original sheets as they are by sheets of somewhat differently colored material. Additionally, the indication is that any color, in a small enough patch well centered in the field of vision, can be matched by mixing only two, and not three, "primary" colored lights. That is, the chromaticity diagram tends to degenerate toward a single line for these small patches, and the two primaries mixed to match the color of a tiny object may be chosen as a barely orange-red and a greenish-blue.

     

    Refs.:

     

    George H. Brown, NTSC Report.

     

    W. E. K. Middleton and M. C. Holmes, "The apparent colors of surfaces of small subtense - a preliminary report," Journal of the Optical Society of America, vol. 39, pp. 582-592, July 1949.

  8. Because that is what many of the RED fans kept proclaiming. I remember one going "It can do 10,000 ISO" over and over again.

     

    I think Red's marketing strategy is also hurting them, as it is not taking advantage of Red's inherent potential to position itself as a company that seriously understands the issues in developing a complex system such as a video camera. As somebody with simultaneous academic and industrial experience, who has worked on systems that include a lot of Red's functionality and even go much farther, I guess I am well placed to make an evaluation. I am impressed by what they have accomplished in a short span of time. However, I think the camera is still not there yet, though with the right effort it can be pulled off. I do, however, feel that Red should reduce the level of hype/mystery surrounding its products, as IMHO it is not in their best interest in the long run.

  9. I'm wondering how that could be. Does 601, for instance, look significantly better to you? In both cases, the gamut covers far more than the region of skin tones in 1931 (x,y) space. With a bit depth of 10 or more, we shouldn't be running out of codes because of the larger gamut (especially towards green) of 709. My guess is it's more likely to be specific implementations -- cameras, monitors, etc.

     

    (1) In our extensive development experience with video systems, that is a fact we have observed regarding skin tones and Rec 709.

     

    (2) The gamuts are not uniform, whether Rec 601 or Rec 709. NTSC published extensive material on the non-uniformity of gamuts as far as color fidelity is concerned, which, though not directly applicable here, is instructive. Typical white-balance/RGB coordinate transformations are defined as linear matrices, and I would tend to think that linear transformations do not capture the non-uniformity equally well everywhere; specialized non-linear transformations may be needed.

     

    Additional complications are imposed by the very non-linear nature of the human visual system. According to some evidence, vision is not even tri-chromatic everywhere; e.g., a small enough patch well centered in the field of vision can be matched by mixing only two, and not three, "primary" colors. Such facts have implications for the selection of the axes of the color space.

     

    Ref.: Willmer and Wright, "Colour Sensitivity of the Fovea Centralis," Nature, vol. 156, pp. 119-121, July 1945.

     

    On the other hand, extensive visual experiments have indeed shown a change of color axes (from one set of RGB axes to another) to be well described by linear transformations. (Recall that any linear transformation in a finite-dimensional space can be defined by a matrix.) Linear theory stipulates that any three linearly independent vectors suffice to span a 3-D space, and one can transform from one set of basis vectors to another (typically by matrix multiplication).
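
    As a concrete instance of this matrix view, here is a short sketch applying the standard Rec 709 RGB-to-CIE-XYZ matrix (D65 white); any such change of color basis is just a 3x3 matrix multiply.

    ```python
    import numpy as np

    # Standard Rec 709 linear RGB -> CIE XYZ matrix (D65 white point).
    RGB709_TO_XYZ = np.array([
        [0.4124, 0.3576, 0.1805],
        [0.2126, 0.7152, 0.0722],
        [0.0193, 0.1192, 0.9505],
    ])

    def rgb709_to_xyz(rgb_linear):
        """Change of basis from linear-light Rec 709 RGB to XYZ."""
        return RGB709_TO_XYZ @ np.asarray(rgb_linear, dtype=float)

    # White (1, 1, 1) maps to the D65 white point, approx (0.9505, 1.0, 1.089).
    print(rgb709_to_xyz([1.0, 1.0, 1.0]))
    ```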

     

    However, we have frequently seen problems that cannot be fully resolved by linear transformations. Could it be an error in our implementations? It is possible, but for many of these things we have not done anything different from what other people do.

     

    Again, as I said, I have a "feeling"; I do not have full evidence.

  10. Hi,

     

    FWIW I think the same can be said for any digital camera. I also personally prefer skin tones on Fuji film; some will call me a Kodak hater :lol:

     

    Stephen

     

    You are right; the same can be said about any digital camera. I agree.

     

    I was wondering more whether there is some reasoning, based upon the chemistry of film when exposed, in relation to skin tones.

     

    I do have a feeling that Rec 709 is not very amenable to (certain) skin tones.

  11. Latitude is the amount an image can be over- or under-exposed and still yield an acceptably "normal" looking image after color correction (and/or processing, in the case of film).

     

    Dynamic range is the range of brightness a system can capture, between solid black and solid white.

     

    Thanks for the explanation. So basically what you are saying is that latitude determines your "wiggle room" -- the amount by which you can shift your captured range up and down, just like a slider, along the full range of signal values available to you.
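
    A toy model of that slider reading, with hypothetical numbers: if the medium captures 12 stops and the scene needs 9, there are 3 stops left to slide the exposure around.

    ```python
    # Hypothetical figures: a capture medium spanning 12 stops and a scene
    # occupying 9 of them leaves 3 stops of "wiggle room" (latitude) over
    # which exposure can be shifted and still recovered in correction.
    capture_range_stops = 12.0
    scene_range_stops = 9.0

    latitude_stops = capture_range_stops - scene_range_stops
    print(f"total exposure slide room: {latitude_stops} stops")
    ```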

  12. The wavy black sharpie line will appear to be the boundary between the colors, not the straight edges of the sheets of paper.

     

    Walk in closer, and at some point the auto-correction gets overwhelmed, and you see the boundary where it really is.

     

    I think this phenomenon is similar to the famous "Mach bands" effect, in which human vision alters contrast perception at boundaries. The Mach bands effect results in seeing detail that is not there.

     

    This plus the fact that luminance depends far more on green than red and blue is why the Bayer pattern is a good idea.

     

    The actual situation is more complex than just green vs. red and blue. Kindly Google "Opponent axis" as a starting point.

  13. Anyhow, this notion of complexity diminishing with resolution is something that I believed about 10 - 15 years ago.

     

    -- J.S.

     

    Many of these concepts of smaller/bigger frame compression ratios apply to single frames. For video, the temporal relationship also enters the analysis, and perhaps the level of detail complexity of smaller/bigger frames is not as important.

     

    As I mentioned before, for regular spatio-temporal video compression we have simultaneously employed direct SD/HD-sized acquisition at various bit rates (so no resampling of data was required to get one from the other) and seen that the relationship is more complex than the phrase "higher spatial size necessarily means better compression" suggests.

  14. Within the bounds that a compression algorithm is designed to work in, delivering a larger image and then cranking the compression ratio up to keep the file size down will almost always produce a better image,

     

    Not always true. Such scenarios work only for certain ratios of higher and lower resolution image sizes and data rates.

    We have routinely seen scenarios where a compressed HD image looks poorer than a lower-data-rate, SD-sized image that has been upscaled after decompression to the same size as the HD image.

     

    a modern compression algorithm is far better at deciding which detail it can get away with discarding than is a downscaling algorithm.

     

    In *practice*, the implementation is actually kept as simple as possible. Typically, implementations of compression algorithms opt for a simple objective function for parameter optimization in order to cover the large search space.
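
    For instance, H.264-style encoders commonly make mode decisions with a simple Lagrangian cost J = D + lambda * R rather than anything perceptually elaborate. The sketch below uses hypothetical per-mode distortion/rate numbers.

    ```python
    # Sketch of a Lagrangian rate-distortion mode decision: choose the
    # coding mode minimizing J = D + lambda * R.
    # All per-mode numbers here are hypothetical.
    candidate_modes = {
        # mode: (distortion as SSD, rate in bits)
        "intra_16x16": (5200.0, 310),
        "intra_4x4":   (3900.0, 560),
        "skip":        (8100.0, 12),
    }

    lagrange_multiplier = 8.0  # tied to the quantizer in real encoders

    def cost(mode):
        distortion, rate = candidate_modes[mode]
        return distortion + lagrange_multiplier * rate

    best_mode = min(candidate_modes, key=cost)
    print("chosen mode:", best_mode)  # intra_16x16 with these numbers
    ```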

  15. I agree in that I suspect this will depend on the comfort factor of a RAW workflow. (FINE with me !)

     

    It seems for now Sony et al. are betting on matrixed output - EX-1, EX-3, etc.

     

    Although _some_ time down the road a RAW option of some kind will appear on the camcorders, I'm sure.

     

    -Sam

     

    How many people who use HD cameras understand the compression (H.264/MPEG/DCT/Hadamard Transform/Entropy encoding/.....) associated with these cameras?

     

    The fact that HD cameras are easier to use is because of the standardization of formats. If the RAW format is standardized, as Adobe appears to be making strides toward, then at the user level a RAW workflow can be made as transparent as HD formats, and people could use it without even needing to know that it is RAW, as their software applications would support it natively.

  16. And remember, given two images of the same subject at different resolutions, you can almost always get away with a higher compression ratio for the larger one. For any natural image (e.g. not something like a single-pixel checkerboard pattern over the entire image), the larger image is going to contain less detail, on average, per unit of area. The upshot is that compressing 4K to the same size as 2K doesn't require an algorithm four times as efficient; it maybe requires one twice as efficient.

     

    We have used the reverse of this many times in our calculations. In my experience, a general rule of thumb is that if an image size is reduced by a factor of 2, then the data rate required to keep more or less the same image quality is reduced by sqrt(2) (and not by a factor of 2, as would otherwise be suggested).
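
    A worked example of that rule of thumb (the 20 Mb/s starting rate is a hypothetical figure):

    ```python
    from math import sqrt

    # Rule of thumb stated above (an empirical observation, not a
    # standard): halving the image size reduces the required data rate
    # by sqrt(2), not by 2.
    hd_rate_mbps = 20.0
    size_ratio = 2.0

    naive_rate = hd_rate_mbps / size_ratio                # 10.0 Mb/s
    rule_of_thumb_rate = hd_rate_mbps / sqrt(size_ratio)  # ~14.1 Mb/s

    print(f"naive: {naive_rate:.1f} Mb/s, "
          f"rule of thumb: {rule_of_thumb_rate:.1f} Mb/s")
    ```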

  17. Since 4K video has four times as much information as standard 1080 x 1920, it would seem to follow that using the same sort of encoding, the same disc would hold eight hours of standard HD, or four hours on a single-layer disc, which means two regular sized movies!

     

    In the current implementation, Red's R3D files are linear-light data. If Red's 4K video codec acts on this data type, then it would have to be even more efficient than, say, a 4K HD-style codec, since Red's 4K data is linear light while typical HD data is gamma corrected. Gamma correction results in a perceptually better space for encoding signals; many compression algorithms, such as JPEG, even inherently assume perceptual-space signals. Therefore, Red's codec may have to work extra hard on linear-light data.
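
    A toy comparison of why gamma-corrected data quantizes more gracefully than linear-light data, assuming a simple power-law gamma of 1/2.4 and 10-bit codes for illustration:

    ```python
    import numpy as np

    # With the same 10-bit code budget, gamma encoding spends far more
    # codes on the shadows, where vision is most sensitive; linear-light
    # encoding leaves them starved. Gamma of 1/2.4 is an assumption.
    levels = 1024
    linear = np.linspace(0.0, 1.0, 100001)

    codes_linear = np.round(linear * (levels - 1))
    codes_gamma = np.round(linear ** (1 / 2.4) * (levels - 1))

    shadows = linear < 0.01  # darkest 1% of the linear scene range
    print("distinct linear codes in shadows:", np.unique(codes_linear[shadows]).size)
    print("distinct gamma codes in shadows: ", np.unique(codes_gamma[shadows]).size)
    ```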

  18. I'm guessing that this 2/3" is a single chip Bayer, not a traditional 2/3" three chip prism block. Anybody know for sure?

     

     

    -- J.S.

     

    Yes, I concur with you. I think this is all in agreement with what Graeme Nattress of Red has said several times: that he measures 78% resolution out of his debayering algorithms. Let's round it to 80%. So 3K * 0.8 = 2.4K for Scarlet, and, assuming Red Epic is also Bayer, 5K * 0.8 = 4K.

     

    It appears Red is going for cleaner 2K and 4K options, and hence can claim, in some sense, "true" 2K and 4K cameras by introducing 3K and 5K Bayer-pattern cameras.

  19. Don't want to rain on your parade, but don't you think things may have moved on a bit since 1991!!?

     

    I did mention Glenn Kennel's book, which is from 2007, and he has the same graph in his book as well; so it is quite recent, and he must have considered the validity of including the graphs from his 1991 paper in his book.

     

    In addition, you need to understand that being old is not tantamount to being invalid. It would be better to read the paper first rather than get fixated on the 1991/2007 issue.
