
DJ Joofa

Basic Member
  • Posts

    149
  • Joined

  • Last visited

Everything posted by DJ Joofa

  1. Hi Michel, if you are talking about motion-prediction schemes in realtime, then previous and sometimes "future" frames are kept in memory for processing. BTW, on the issue of realtime: just as many people on this and other forums have an incorrect perception of the utility of compression, a myth has been propagated by certain quarters regarding what can be accomplished in realtime, e.g., whether demosaicing (deBayering), color processing, etc., can be accomplished in hardware in realtime. From our own experience we have successfully dealt with these issues.
  2. Hi John, I really wish I could provide examples here. However, since they are all work product, I can't post them to a public forum or make them available publicly. But we have battled temporal noise before compression extensively, and our consensus is that it is really helpful if employed before compressing the signal. The type of compression also plays a part here, and certain schemes, e.g., those that use motion prediction, really take advantage of good temporal filtering. Turning temporal filtering on and off resulted in a noticeable visual difference, at least in the cases we have dealt with. As I said before, if a camera manufacturer does not use it, that is okay as long as they provide good justification for why not.
  3. Temporal filtering is a well-established practice in the camera industry. Since the first access point you would have to your "master source" in this case is after decompression, the corresponding original compression will be relatively poor if done without temporal noise removal. A few things are industry norms. For example, if a camera does not have an OLPF, you need to wonder what the reason was. Maybe there was a reason. Similarly, absence of temporal filtering needs strong justification. BTW, there are some powerful nonlinear ways of dealing with temporal noise that impact only the noise part and leave the signal part more or less unchanged under many circumstances (though of course not all). Perhaps your notions of noise removal are colored by spatial noise removal, where the typical linear methods employed affect both signal and noise.
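As an illustration of the motion-adaptive (nonlinear) temporal filtering described above, here is a minimal sketch: a recursive exponential averager that backs off wherever the frame difference is large. The `alpha` and `motion_thresh` values are made-up tuning numbers, and real camera pipelines use far more sophisticated motion estimation.

```python
import numpy as np

def temporal_denoise(frames, alpha=0.7, motion_thresh=12.0):
    """Motion-adaptive recursive temporal filter (illustrative sketch).

    Where a new frame differs little from the running estimate, blend
    strongly so noise averages out; where it differs a lot (real motion
    or detail), pass the new frame through nearly unchanged.
    """
    frames = [f.astype(np.float64) for f in frames]
    est = frames[0].copy()
    out = [est.copy()]
    for f in frames[1:]:
        diff = np.abs(f - est)
        # weight on the history: -> alpha where static, -> 0 where motion
        w = alpha * np.clip(1.0 - diff / motion_thresh, 0.0, 1.0)
        est = w * est + (1.0 - w) * f
        out.append(est.copy())
    return out
```

On a static scene this behaves like a plain exponential average and reduces the noise variance; on moving content the weight collapses toward zero and the filter leaves the signal alone, which is the nonlinearity the post refers to.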
  4. No techie in their right state of mind dreams so, or at least I hope not. I think for small independent types the best approach is to put money into the sound system rather than chasing the endless "K" syndrome of resolution. I would take the 2K/1080p format with a great sound system for a movie project over 4K/8K resolution with average or so-so sound. Sound is more important than apparent visual resolution; people are forgiving of relatively lower image quality, but not of lower audio quality.
  5. As John Sprung mentioned, moving images should give an impression of smearing of noise and less perceptibility. Additionally, I have the impression that Red does not do temporal filtering for noise removal before compression, and that manifests itself in low-light situations. Similarly, the lack of analog gain on Red can also increase the perception of noise in certain low-light situations.
  6. Hi Paul, 20 GBytes = 163840 Mbits, and 2 hours = 7200 seconds. Hence, the max. bitrate is 163840/7200 = 22.76 Mbits/sec, which is a reasonably good bitrate for 1280x720@30fps. For comparison: ** The Red camera compresses to 1.2 bits/sample. ** DVD at its best quality for video only (~8 Mbits/sec) is about 0.51 bits/sample on the 4:2:0 MPEG-2 data. ** 1280x720@30fps comes out to 0.55 bits/sample on 4:2:0 data. Going by just the data rate, you should be fine compressing a 2-hour movie in H.264 into 20 GBytes. However, in practice one would have to figure out which H.264 profile/level the above data rate fits into, and if one needs to drop to a lower level or perhaps profile, then do so accordingly.
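The arithmetic above can be sketched as follows (assuming the post's convention of 1 GB = 1024 MB for the file size, and decimal megabits when converting to bits per sample):

```python
def avg_bitrate_mbps(size_gbytes, duration_s):
    """Average bitrate in Mbit/s, with 1 GB = 1024 MB and 1 MB = 8 Mbit
    (the convention used in the post)."""
    return size_gbytes * 1024 * 8 / duration_s

def bits_per_sample(bitrate_mbps, width, height, fps, chroma=1.5):
    """Coded bits per sample; chroma=1.5 accounts for 4:2:0 subsampling
    (each pixel carries one luma sample plus half a chroma sample)."""
    samples_per_sec = width * height * chroma * fps
    return bitrate_mbps * 1e6 / samples_per_sec

rate = avg_bitrate_mbps(20, 2 * 3600)       # ~22.76 Mbit/s
bps = bits_per_sample(rate, 1280, 720, 30)  # ~0.55 bits/sample
```

The same two functions reproduce the DVD comparison: 8 Mbit/s at 720x480@29.97fps in 4:2:0 gives roughly 0.51 bits/sample.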
  7. I think that may be true. Unfortunately, I don't have access to a large screen. I guess I have to wait for a theatrical release.
  8. I saw the trailer. I think the image quality is good, and in my opinion even if there are cameras around that produce better results, Red has joined that group that generate acceptable cinema quality images.
  9. Hi Paul, I did not fully get your question. I was referring to the process of recovering the signal distribution (probability density) of the analog data from the quantized digital data after ADC, as it has many applications, including the estimation of noise performance, dither analysis, quantization effects, etc. I think an example will illustrate it better. Suppose the US population is given for each year: 1900, 1901, 1902, ..., 2008. Now suppose somebody accumulated the yearly stats into 5-year buckets, so that the tables are now 5-yearly: 1900, 1905, 1910, ..., 2010. Under what circumstances can the original yearly distribution be recovered? The same thing is done by a quantizer during ADC, as it brackets data into buckets. An ADC is an interesting device. Simple, but both non-linear and linear at the same time! Non-linear if you want quantized signal amplitudes, linear if you are looking at signal probability estimates, and therefore linear filtering techniques, such as the Shannon sampling theorem, can be applied to signal distributions.
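A minimal sketch of the linearity point, using NumPy (the two distributions and the bucket edges are arbitrary choices for illustration): binning is nonlinear in the sample amplitudes, but the induced map on probability densities is linear, so the binned density of a 50/50 mixture equals the average of the two populations' binned densities.

```python
import numpy as np

rng = np.random.default_rng(1)
edges = np.arange(-6.0, 6.5, 0.5)   # quantizer bucket edges

a = rng.normal(-1.0, 1.0, 200_000)  # two "analog" signal populations
b = rng.normal(2.0, 0.7, 200_000)

def binned_density(x):
    """Histogram the samples into the quantizer buckets."""
    h, _ = np.histogram(x, bins=edges, density=True)
    return h

# Quantization is nonlinear in amplitudes, but linear in densities:
# the binned density of the 50/50 mixture equals the average of the
# two populations' binned densities (up to negligible range clipping).
mix = binned_density(np.concatenate([a, b]))
avg = 0.5 * binned_density(a) + 0.5 * binned_density(b)
```

This linearity is what lets sampling-theorem-style reconstruction arguments be applied to the density rather than to the amplitudes themselves.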
  10. David, how a signal is defined depends upon which properties of the signal you are interested in, and for certain statistics nothing is lost if a few rules are followed. For example, if you are interested in pure signal amplitude values, then yes, some information has been thrown away by the digital signal because of the quantizer (ADC). However, if you are interested in the signal distribution (in probability estimates), then no information has been thrown away by digitization as long as the quantization step is sufficiently small (the bit depth is large). One can recover the original analog signal distribution from the digitized values, and this has certain useful applications.
  11. In many ways, yes. H.264 was designed for low-bitrate streaming applications. According to several studies, H.264 requires roughly half the bitrate of MPEG-2 for the same quality. Though an H.264 encoder can get very complex, the good news is that, because of the asymmetric encoding/decoding process inherent in MPEG-type schemes, the decoder (for a particular profile/level) is relatively less complex and should be fast.
  12. David, there is no reason to restrict the usage of the term Raw to Bayer. Several CMOS/CCD arrays are used without the usual Bayer-type color filter array, especially in medical imaging. I think it is okay to consider the signal direct from the sensor as Raw, without any special regard to a Bayer-type color filter array, etc., i.e., a sensor signal that has not been transformed to any standard notion of display-device specifications, color space, etc. In theory, compression applied to linear Raw should result in relatively poorer gains in comparison to a non-linear mapping that enhances certain properties of the signal that are more amenable to compression (say, a log transformation applied when the signal has good variation from dark to extremely bright). Linear data is noisy compared to data smoothed by certain non-linear transformations, and hence even some other filtering operations, such as resampling, also have to be applied more carefully.
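To illustrate the kind of non-linear mapping mentioned (a log-style transfer curve applied to linear sensor data before compression), here is a toy sketch. The black/white levels, curve shape, and bit depths are hypothetical values chosen only for illustration, not any camera's actual encoding:

```python
import numpy as np

def lin_to_log(raw, black=64, white=4095, bits=10):
    """Toy log-style encoding of linear sensor counts. Spends more of
    the output code range on shadows than a linear mapping would."""
    x = (raw.astype(np.float64) - black) / (white - black)
    x = np.clip(x, 1e-6, 1.0)
    code = np.log2(x * 4096.0 + 1.0) / np.log2(4097.0)  # normalize to 0..1
    return np.round(code * (2**bits - 1)).astype(np.uint16)
```

With this curve, the bottom tenth of the linear range maps to well over half of the output code range, which is the sort of redistribution that makes the signal friendlier to a subsequent quantizing compressor.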
  13. Though H.264 does not allow for direct usage of any wavelets, IIRC, H.264 is a special flavor of MPEG-4 (Part 10, to be precise), and MPEG-4 in general allows wavelet types in addition to DCT types, if my memory serves me right. Both Cineform and Red are actually only mildly compressed. I think both are of the order of 10:1, which is just mild, and that is one reason their output looks good: in the first place they are not compressed a whole lot. Extreme compression would be on the order of 500:1, which is done for many realtime applications. True. However, unlike JPEG and MPEG-2, IIRC, H.264 allows for prediction within intra frames also, so instead of having prediction among macroblocks from neighboring frames, you can have prediction from macroblocks within the same frame. Of course not. The best compression for a single image using linear methods would result from using SVD (singular value decomposition), and, after that, from using the KLT (Karhunen-Loève transform). As you mentioned, an advantage of wavelets is that there are no blocking artifacts. Additionally, there is another flavor of wavelets, called wavelet packets, a little different from the traditional wavelet configuration, that is sometimes also used in compression.
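The SVD idea above can be sketched in a few lines: by the Eckart-Young theorem, truncating to the k largest singular values gives the best rank-k approximation of an image in the least-squares sense.

```python
import numpy as np

def svd_compress(img, rank):
    """Best rank-k approximation in the least-squares sense: keep only
    the k largest singular values (Eckart-Young theorem)."""
    U, s, Vt = np.linalg.svd(img.astype(np.float64), full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]
```

The "compression" comes from storage: the rank-k factors of an m-by-n image cost k*(m + n + 1) numbers instead of m*n. The drawback, and why KLT/DCT won in practice, is that the basis is image-dependent and must be stored or recomputed per image.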
  14. Chris, many companies use software workflows as prime movers for their hardware. Nobody is asking for an NLE, just basic Raw processing tools that work reliably on multiple platforms (Windows, PC, Linux, etc.). The point is that Red seems to have set software solutions as a low priority, and has not used them as a catalyst.
  15. I objected to the deletion of a post of mine. See, it was rather easy :lol: But, seriously, it is immature to mistake strict enforcement of rules for good policing. For a good example of this behavior, kindly see: http://reduser.net/forum/showthread.php?t=23223
  16. Chris, that is the wrong view to take. I think Red is a great camera with potential for the future. However, you have to admit where Red made mistakes also. You can go over my posts at RedUser to get an inkling of some of the shortcomings. I can't write any more at RedUser, as the over-zealous moderators have banned me, so I shall identify a few for you here. During my tenure in the high-tech industry I have seldom seen a product go out the door without an obvious workflow, as you seem to suggest happened here. And you must be aware that this is one reason Red camera owners were so petrified: buggy software (at least when the Red One was initially released) and poor workflow options. It is encouraging to see that Red has certainly caught up with many of those issues now. However, to date many owners' concerns are not fully addressed. Remember, people have a tendency to regard software as free. That is why many hardware companies give away the software workflows that run with their products: it actually results in increased sales of the hardware, as people can see accompanying software solutions. Unfortunately, Red did not capitalize on this obvious model, which has been in place for a long time. Since the Red hardware itself seems promising, especially considering the new models announced, it will have traction for some time, but sooner or later Red must develop appropriately complete software solutions.
  17. Imaging systems use some notions that are not exactly analogous to those in the audio domain but still provide some sort of noise immunity, such as correlated double sampling.
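A toy model of correlated double sampling: each pixel is read twice, once right after reset and once after exposure, and subtracting the two reads cancels the offset and reset (kTC) noise that is common to both. The variable names and noise model below are illustrative only.

```python
import numpy as np

def cds(reset_read, signal_read):
    """Correlated double sampling: subtract the read taken right after
    pixel reset from the read taken after exposure. Offset and reset
    (kTC) noise common to both reads cancels; the photo signal remains."""
    return signal_read - reset_read
```

Because the reset noise is sampled once and appears identically in both reads, the subtraction removes it exactly in this idealized model; uncorrelated noise sources (photon shot noise, read noise of each sample) are of course not cancelled.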
  18. David, thanks for your detailed analysis. I think you are right, so I shall take back my words, especially so, since you deal with the practical aspects of handling high-profile cameras day in day out, which I don't.
  19. If John Sprung says that I will believe it. ;)
  20. David, I sincerely hope that your quote does not fall into the category of IBM T. J. Watson's ("I think there is a world market for maybe five computers", though Wikipedia says there is scant evidence that he said it), and then there is this quote whose authenticity I don't know ("Serious business machines don't need graphics"). I think you are assessing tomorrow's users with the workflows that exist today; you may have accepted how technology will shape up in the future, but perhaps you are not relenting on modifying the workflows that are in existence today.
  21. Shouldn't you then at least learn from Jim Jannard the art of how to "hypnotize" people, so much so that even when they know the product is flawed they still root for it? I think it is not easy to have that ability. What is he doing? BTW, what should the qualifications be of a person who can assess Red appropriately?
  22. Hi John, you have refreshed some good memories now forgotten. I liked the published code in the IBM technical reference manual, as it helped me understand what the computer was doing, especially during boot-up. If I remember correctly, there was no support for a third-party graphics card in those models. Then there was this company, Hercules, which came up with one of the first third-party graphics adapters, the Hercules Graphics Card (compatible with IBM's text-only MDA, the Monochrome Display Adapter, but with graphics added), which the PC had no support for. So Hercules would fool the PC into thinking that it was still in "text" mode.
  23. Didn't IBM use to publish the source code for the ROM BIOS in some early IBM Technical Reference Manuals for the PC? I used to have one for the XT, and it came with full source code.
  24. Another important consideration is the quality of the downsampling process itself. With the advent of 4K and higher resolutions and the need to downsample to SD/HD-sized footage, care must be taken in the resampling process. It appears many software and hardware solutions out there, including publications in journals and conferences, have gotten some of the understanding incorrect, and even where the understanding was correct, have made mistakes in the implementation. (Much open-source software seems to get some aspects of the implementation wrong. The results still look very good at certain downsampling ratios but not at others, with the result that the authors draw erroneous conclusions regarding which filters work best and when. An article describing a fast hardware solution for downsampling in a very prestigious IEEE publication got the implementation wrong, so that it actually boiled down to a simple moving-average process.)
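A minimal 1-D sketch of why the prefilter matters (a toy example, not any particular product's resampler): naively dropping samples aliases frequencies above the new Nyquist limit, while even a crude box prefilter removes them.

```python
import numpy as np

def decimate_naive(x, factor):
    """Drop samples with no low-pass prefilter (aliases high frequencies)."""
    return x[::factor]

def decimate_box(x, factor):
    """Area-average (box) prefilter, then subsample. A crude but valid
    anti-aliasing filter; production resamplers use better kernels."""
    n = len(x) // factor * factor
    return x[:n].reshape(-1, factor).mean(axis=1)

# A pure highest-frequency stripe pattern: +1, -1, +1, -1, ...
x = np.where(np.arange(64) % 2 == 0, 1.0, -1.0)

naive = decimate_naive(x, 2)  # every kept sample is +1: aliased to flat DC
boxed = decimate_box(x, 2)    # averages to 0: energy above Nyquist removed
```

The failure mode described in the post follows the same pattern: an implementation can look fine at ratios where the content happens to have little energy above the new Nyquist limit, and then break badly at others.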