Everything posted by Dennis Couzin

  1. Oh dear, why are you saying that the Switar RX lenses are suitable for any camera but the Bolex H16RX? That matter was sorted out 40 years ago. The two RX lenses will give lousy images without the 9.5 mm thick glass between them and the image plane. See my old studies at https://sites.google.com/site/cinetechinfo/ or see another guy's recent work on this at https://cinetinker.blogspot.com/2014/12/rx-vs-non-rx-lenses.html .
  2. 48 years ago there was much confusion concerning the intercompatibility of standard C-mount lenses on Bolex reflex cameras, quite aside from the obvious problem of some lenses smashing into the prism. Kern, Angenieux, Schneider and others made special RX-mount lenses for those cameras for an optical reason: imaging through a 9.5 mm thick glass prism can introduce profound aberration, so their special RX lenses were made with compensatory aberration. I wrote about this in 1976, 1978, and finally in 1987. I see that https://cinetinker.blogspot.com/2014/12/rx-vs-non-rx-lenses.html discussed my articles, but it links to my Google site "cinetechinfo" which has since vanished. Here's a new link to the 1987 article: https://mediafire.com/file/6rh6f6tth12l625 . In a nutshell: a C-mount lens works well on an RX camera, or an RX-mount lens works well on a C-mount camera, if and only if (1) the lens is slower than about f/2 or f/2.8, or stopped down that far; AND (2) the lens has a deep-set exit pupil, about 1½ inches or farther into its screw mount.
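A rough check of why condition (1) is about aperture: a plane-parallel glass block of thickness t and index n placed in a converging beam adds longitudinal spherical aberration of about t·u²·(n²−1)/(2n³), where u is the marginal ray angle, a standard third-order result. A minimal sketch with assumed numbers (the 9.5 mm thickness is from the post; the index 1.52 is my assumption):

```python
import math

def plate_lsa(t_mm: float, n: float, u: float) -> float:
    """Third-order longitudinal spherical aberration (mm) added by a
    plane-parallel plate of thickness t_mm and index n, for a beam
    converging at marginal ray angle u (radians)."""
    return t_mm * u**2 * (n**2 - 1) / (2 * n**3)

t = 9.5   # prism path thickness in mm, from the post
n = 1.52  # assumed glass index
for fnum in (1.4, 2.0, 2.8, 4.0, 5.6):
    u = math.atan(1 / (2 * fnum))  # marginal ray angle for this f-number
    print(f"f/{fnum}: ~{plate_lsa(t, n, u) * 1000:.0f} um of focus spread")
```

The u² dependence is why a lens slower than about f/2.8, or stopped down that far, largely escapes the problem; and the deep-set exit pupil of condition (2) matters because it keeps off-axis pencils near normal incidence on the prism, limiting the additional oblique (astigmatic) errors a thick plate introduces.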
  3. Kino Flo 3200K fluorescents aren't what Kino Flo says they are. Kino Flo publishes a spectral power distribution for its KF32 lamp at http://www.kinoflo.com/GIF%20image/Lamps/Spectral-Charts.gif . Their KF32 graph has gone through some artistic retouching. Its blue and green (mercury line) spikes are at the wrong wavelengths. (They're at nearly correct wavelengths in Kino Flo's KF55 graph.) More deviously, a third, violet spike has been made to disappear entirely. I had a KF32 lamp measured at a good laboratory; compare their graph at http://www.mediafire.com/file/1thd5hamyk224ew/kf32.gif to Kino Flo's! That violet spike at 404 nm matters since many, but not all, digital sensors are sensitive there, so it accentuates the differences between cameras (and between lenses). Better to use a UV filter (~Wratten 2A) with this lamp. Indeed the Kino Flo KF32 hardly deserves to be called "High CRI". It scores 87.1, but the CRI measure is based on just 8 specimens and is easily gamed by lamp manufacturers. The newer color fidelity index from CIE and IES is based on 99 specimens; the Kino Flo KF32 scores just 79.2 by that measure. The fidelity is most horrible for greens. Note: all these measures are for visual color rendering. DC
  4. No. No. No. You have misunderstood my comments about color dithering in recent posts #58 and #60 on a parallel strand named "What's the Attraction?". I like that Van Gogh self-portrait, but the discussion in here should stay monochrome. In here, dither is a means to display intermediate tones. We do need more tones than 8-bit can provide as shown by optimal Hecht coding. But we hardly need more colors (sans tones) than 8-bit can provide. The motivation for color dithering -- pointillism, etc. -- was not for precision but expansion. See that other strand.
  5. You missed my point of post #58. The 8-bit depth of the typical screen is already enough to display all colors (within the limitations of the chosen primaries) with nearly full visual precision (meaning human visual precision, as explained in the post). Likewise the mixing of oil paint allowed painters to display all colors, with unlimited precision, within the limits of their chosen pigments. The pointillist painters were aiming at colors unachievable by those uniform mixed paints. Likewise, the new video or film colors that might be producible with color dither would not just be colors that extra bit depth would achieve, but newer ones. They could be outside the limitations of the monitor's primaries, or outside the region of CIE color space achievable with light, or even outside CIE color space, requiring higher dimension.
  6. Carl, behave yourself! I didn't write what you say I wrote and put in quote marks, above. What I wrote, and what you copied above your post and presumably sort-of read, was this: I said that your illustration of noise producing useful dither was not applicable to our technologies because your noise had large amplitude, unlike film grain noise, and also because you are reducing the original 8-bit image to 2 bits. It was a dramatic illustration but irrelevant. You can't produce a corresponding illustration where the noise is film-grain-like and where the output is 8-bit, as all output today is. (Your input can be greater bit depth if you wish.) You've leaped over the quantitative to the qualitative, where your arguments are just amusing, not serious. Of course dither provides advantage to our technologies. It allows an 8-bit monitor to show 10-bit video without the banding that usually results from reduction to 8-bit. Apple QuickTime does this on my 8-bit monitor when it plays ProRes, but interestingly it does not do it when it plays uncompressed 10-bit. That dither is created by QuickTime; it is not in the original 10-bit video (which has no grain or noise). That is the meaning of my final sentence quoted just above. When a transform must knock 10-bit down to 8-bit, it has the option of representing the lost 10-bit levels by wee patterns of the neighboring 8-bit levels just above and just below. Example: 10-bit 500 becomes 8-bit 125. 10-bit 504 becomes 8-bit 126. So what does the player do when it encounters 10-bit 501? It makes a fine pattern of 75% 8-bit 125 and 25% 8-bit 126. It looks pretty good. People may download my banding test clips and see how they play on their systems. With such a transform you didn't need to add 10% noise to the face that was to become 2-bit. You simply used the wrong transform. Video post-production is a chain of transforms, rather than a chain of re-imagings as film post-production was. Video transforms can be quite intelligent.
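To make such a transform concrete, here is a minimal sketch of one standard way to do it: ordered dithering with a small Bayer threshold matrix. This method is my assumption for illustration; what QuickTime actually does internally is not documented in the post.

```python
import numpy as np

# 4x4 Bayer matrix, thresholds in [0, 1): it spreads round-up decisions
# evenly in space so that local averages track the 10-bit level.
BAYER4 = np.array([[ 0,  8,  2, 10],
                   [12,  4, 14,  6],
                   [ 3, 11,  1,  9],
                   [15,  7, 13,  5]]) / 16.0

def dither_10_to_8(img10: np.ndarray) -> np.ndarray:
    """Reduce 10-bit values (0..1023) to 8-bit with ordered dither."""
    h, w = img10.shape
    thresh = np.tile(BAYER4, (h // 4 + 1, w // 4 + 1))[:h, :w]
    scaled = img10 / 4.0                   # 10-bit level in 8-bit units
    base = np.floor(scaled)                # e.g. 501/4 = 125.25 -> 125
    out = base + (scaled - base > thresh)  # a fraction of sites round up
    return np.clip(out, 0, 255).astype(np.uint8)

flat = np.full((16, 16), 501)
print(dither_10_to_8(flat).mean())  # 125.25: 75% at 125, 25% at 126
```

On a flat field at 10-bit level 501 this produces exactly the 75/25 mixture of 125s and 126s described above, so the local average recovers 125.25.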
  7. If it's true it should be explainable. If we mean by colors those that fill extended areas without variation, then video is capable of the finest color precision. I think a good 10-bit monitor can display all colors within its gamut and its luminance range, with perfect precision. A good video camera can produce the file for that display. There are infinitely fine gradations possible in X,Y,Z, which are numbers, but not in what X,Y,Z measure, which are colors. For two colors to be two, our visual system must be able to distinguish them, and our visual system is not very precise. A famous illustration of the imprecision is the "MacAdam ellipses" diagram. Talk about 8-bit-per-channel video representing 16 million colors is belied by there not being 16 million colors, or at least not those 16 million colors. Consider a simple example. Make our white at x=y=0.333 and give it X=Y=Z=1. Now quantize each of X,Y,Z linearly into 1000 levels (almost 10-bit). Now, while adapted to white, look at all the billion other combinations like (0.123, 0.456, 0.789). Are there any colors missing? Use CIE L*a*b* space to measure how far apart our digitized colors are from their nearest neighbors. For example, how close is (0.123, 0.456, 0.789) to (0.123, 0.456, 0.790)? Delta E = 0.08, so those two colors can't be distinguished. But how close is (0.123, 0.456, 0.789) to (0.124, 0.456, 0.789)? Delta E = 0.67. So those two colors might be distinguished, so we need a somewhat finer net than this almost-10-bit XYZ net. XYZ space includes much more than the whole visual gamut. Typical RGB spaces include much less than the whole visual gamut, so our 10-bit RGB monitor should, as suggested, be precise enough to display all colors. It is a simple programming exercise to find out, if you trust the CIE L*a*b* metric (see the sketch below). There are two other ways to interpret what Carl says colorists know. One is that this is about extracting more colors from the film than from the video, not about there being more colors representable. Yeah, color film is a grainy mess, and a high-resolution scan will yield quite different colors at each pixel within a "grey" patch. There will be color controls, such as "saturate", that will extract new colors from this "grey" patch even though grey shouldn't respond to "saturate". These color extractions might have no relation to what was pictured, but they are remarkable. I did color grading of films, including 8 mm, in the film days, and now do color grading of videos, and I've never had the experience of "extracting" or "pulling" colors from a picture. That is because it's an experience that comes from using video color grading methods (which include, e.g., the "saturate" control) on color films. It's a medium-crossover effect. Enjoy it while you can. Another interpretation is more fundamental. It is that color might be more than uniform X,Y,Z on areas can measure. Can spatial variation of color create new colors? Does everyone with eyes, not just the colorists, see colors in super-8 films which they don't see in video? A better example is that of the pointillist painters, who claimed to be painting new colors. It's not hard to imagine color irregularity modifying color, or even pushing it to where untextured color can't go. (I'd like to read the eminent perceptual psychologist Floyd Ratliff's book on Signac.) Are the modifications or pushes along the usual three color dimensions, or do they introduce new dimensions of color?
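The "simple programming exercise" is easy to start. A minimal sketch of the two Delta E computations quoted above, using the CIE 1976 L*a*b* formulas with the white point at X=Y=Z=1 as specified in the post (plain Euclidean ΔE*ab; a just-noticeable difference is conventionally around ΔE = 1):

```python
def xyz_to_lab(X, Y, Z, Xn=1.0, Yn=1.0, Zn=1.0):
    """CIE 1976 L*a*b* from tristimulus XYZ, white point (Xn, Yn, Zn)."""
    def f(t):
        return t ** (1/3) if t > (6/29) ** 3 else t / (3 * (6/29) ** 2) + 4/29
    fx, fy, fz = f(X / Xn), f(Y / Yn), f(Z / Zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def delta_e(c1, c2):
    """Plain Euclidean CIE76 color difference between two XYZ triples."""
    L1, a1, b1 = xyz_to_lab(*c1)
    L2, a2, b2 = xyz_to_lab(*c2)
    return ((L1 - L2) ** 2 + (a1 - a2) ** 2 + (b1 - b2) ** 2) ** 0.5

print(delta_e((0.123, 0.456, 0.789), (0.123, 0.456, 0.790)))  # ~0.08
print(delta_e((0.123, 0.456, 0.789), (0.124, 0.456, 0.789)))  # ~0.67
```

Extending the comparison over all quantized neighbors (and over a finer net) answers whether any adjacent pair of codes exceeds a just-noticeable ΔE.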
  8. I can't leave this strand without saying that I am unconvinced by all Carl's arguments re film grain. 1. He emphasizes a single case where image noise provides an information advantage: when a high bit-depth image is reduced to low bit depth, whence the noise introduces dither. That advantage is not reaped in our technologies, where the film grain noise is of low modulation (as seen or optically recorded) and where there are no dramatic transforms in bit depth. Anyhow, our transform could itself introduce dither. 2. Information advantage is not at all what film grain provides to stills or to movies. Gestalt psychology (and the allied aesthetics) studies what it provides; information theory doesn't. To Greg: eye saccades are not tiny sub-sensor jitters, but movements of several degrees (as explained back in post #43). Concerning the camera mimicking the eye: the original "camera obscura" mimicked the eye in two important ways. It used a lens to make a real 2-dimensional image from the 3-dimensional scene outside. And it was enclosed to prevent the scene's light from falling on the image except via the lens. Few further developments in cameras relate to the eye. An iris was added to the lens to adjust the image brightness on the receiving medium, while incidentally modifying the depth of field. The eye is shutterless. The eye focuses differently from cameras. Color films and color video mimic the eye insofar as they incorporate sensors with three different spectral sensitivities. Video color is more eye-like when it recodes the three channels as YUV; the now-accepted opponent-process theory of color vision works similarly. Video cameras are altogether more eye-like insofar as their sensor plane issues signals to some "higher" functioning something -- a computer. Film functions dumbly by directly turning the sensor into the displayer. This is what prevents a reversal color film from making correct colors. (Maxwell figured that out in the 1850s.) Video can, in theory, make correct colors.
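For the YUV remark: a sketch of the standard BT.601 recoding, in which Y is a weighted luminance and the other two channels are blue-minus-luma and red-minus-luma differences. The resemblance to the achromatic plus two opponent channels of opponent-process theory is structural; the coefficients are television engineering, not measured human opponent responses.

```python
def rgb_to_ypbpr(r: float, g: float, b: float):
    """BT.601 luma and color-difference signals from R'G'B' in [0, 1]."""
    y = 0.299 * r + 0.587 * g + 0.114 * b  # achromatic (luminance-like) channel
    pb = 0.564 * (b - y)                   # blue-vs-luma difference
    pr = 0.713 * (r - y)                   # red-vs-luma difference
    return y, pb, pr

print(rgb_to_ypbpr(1.0, 1.0, 1.0))  # neutral white: the color differences vanish
```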
  9. Take care when choosing non-RX lenses for the Bolex RX camera. Bolex disseminated a rule of thumb: lenses up to 50 mm should be RX; lenses longer than 50 mm need not be RX. The rule of thumb was FALSE. I first challenged it in 1976-78, and returned to the matter (with more optics under my belt) in 1987. Others have confirmed my findings, most recently this by Dom Jaeger. Unfortunately, as Bolexes become collectors' items, the original Bolex literature becomes holy text. When buying old RX lenses, beware that either the focus-mount grease chosen by Kern, or some lubrication around the iris, tends over time to transfer to the two lens surfaces facing the iris. Cleaning those two surfaces requires minimal disassembly of the lens. Concerning the dim viewfinder: the diagram of the optics shows internal reflections off three 45° prism faces. Those purposely uncoated faces must be scrupulously clean to avoid reflection failure. Many years' outgassing of the camera's paint and lubricants might cloud them.
  10. Agreed that the graininess in film is entangled with the image. I described this back in post #42 as the grain noise being convolutional (as opposed to additive or multiplicative) with the image. There is no reason this convolution can't be performed ex post facto on the video image, provided the video image has sufficiently higher resolution than the sought film grain simulation (as explained back in post #36). How much higher resolution to start with, or alternatively, how much resolution must be lost, in order to obtain how good a simulation of film grain, is the engineering problem that should concern us (see the sketch below). It's the usual balancing act, in which the "how much" and the "how good" must be quantified.
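The balancing act can be probed numerically. A minimal sketch under stated assumptions (a hypothetical binary grain field of 50% coverage, box filtering as the resolution loss; nothing here is calibrated to a real stock): synthesize grain at k times the delivery resolution, downsample, and see how much grain contrast survives.

```python
import numpy as np

rng = np.random.default_rng(0)

def downsample(field: np.ndarray, k: int) -> np.ndarray:
    """Box-filter a high-resolution field down by a factor of k."""
    h, w = field.shape[0] // k, field.shape[1] // k
    return field[:h * k, :w * k].reshape(h, k, w, k).mean(axis=(1, 3))

for k in (1, 2, 4, 8):
    # binary 'grain' field simulated at k times the delivery resolution
    grains = (rng.random((1024, 1024)) < 0.5).astype(float)
    print(f"simulated at {k}x: delivery rms = {downsample(grains, k).std():.3f}")
```

Since each delivery pixel averages k² independent "grains", the rms contrast falls roughly as 0.5/k, which is one way to quantify how much resolution must be spent to hit a given granularity target.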
  11. I fail to see what this example proves. No proof is needed that film is analog. Repositioning a silver blob partly in a pixel area continuously varies the lightfall in the pixel area. Size, as well as position, of the developed film grain is analog, so size can make the continuous variation of lightfall too. These continuous variations say nothing about any kind of precision the film may have. They only say that the pixel sensor needs infinite precision to record the lightfall. But the pixel sensor needed infinite precision to record the original optical lightfall, and on good basis we are satisfied with 10- or 12-bit recording. (See my "Hecht and digital video" (2009).) What reason is there for the pixel sensor to record the film grain's affected lightfall with such precision, since the film grain does not accurately represent the optical lightfall that matters? Measurement or recording can be more or less precise. Noisy instruments are imprecise whether analog -- continuously variable -- or not. The silver blob in the example does not record the optical lightfall with any precision. Actually many blobs are required, and only then can we select an aperture, like Kodak's 48-micron aperture, to move over the collection of grains and through which to measure the circumscribed lightfall. The precision of the grainy field is determined by how little the lightfall changes while moving the aperture. Fine repositioning of individual grains within the ensemble hardly affects this. The precision is something like ±0.009 around density 1. That is a 4% range in transmissions, so an 8-bit linear sensor is sufficient, or fewer than 8 bits after gamma encoding (see the arithmetic below). I fail to see how any of Carl's bit-depth or quantisation arguments apply to digital picturing as it is now practiced. There are many special questions, like "how to fake the appearance of film grain in video?" and "what are the optical and resolution requirements for preserving film grain quality in scanning?" and "how effectively can spatially dithered 8-bit video convey 10-bit video?", that need serious attention.
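The arithmetic behind the "4% range" and the 8-bit claim, spelled out (a sketch; the ±0.009 rms at density 1 is the Kodak figure quoted above):

```python
D, dD = 1.0, 0.009                      # density 1, rms fluctuation 0.009
t_lo, t_mid, t_hi = 10**-(D + dD), 10**-D, 10**-(D - dD)
print(f"transmission spread: {t_lo:.4f} .. {t_hi:.4f}")
print(f"relative range: {(t_hi - t_lo) / t_mid * 100:.1f} %")   # ~4 %
# On a linear 8-bit scale (255 = transmission 1.0) the rms fluctuation is:
print(f"rms in code values: {(t_hi - t_lo) / 2 * 255:.2f}")     # ~0.5
```

The rms fluctuation comes to about half of one linear 8-bit code value, which is the sense in which an 8-bit linear sensor suffices to record it.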
  12. Crazy thread, but Carl understood the point of wiggling the sensor, which Simon didn't. Of course film frames should be perfectly registered. But the grains in the frames are not perfectly registered. They jump like crazy. The reason for wiggling the sensor is not to wiggle the image but only to wiggle the positions of the pixels which capture the image -- that is, to make the video pixels somewhat more like film grains. My Jumpy Pixel Experiment of 2009 involved jogging the sensor around by a random 0/10, 1/10, 2/10, ... 9/10 of a pixel horizontally and vertically, frame by frame, and then jogging the projector's DLP chip around by a compensatory amount, so the image stayed put on the screen while the pixels comprising the image jumped about. Look at one corner of each of these six pictures to understand what was done. Actually, observers of this experiment should not know what was done to the images, lest their responses be biased by their beliefs. Non-film people, including children, have been the best observers. Links to the experiment were in post #16, but I've since added an option. First read the readme here. Then download either the "None" version or the "ProRes" version.
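A software-only sketch of the same idea, for anyone who wants to see the effect without piezo hardware (my construction, not the 2009 experiment's actual code): work at a 10× grid so the 0/10 ... 9/10 pixel jogs become integer shifts, capture, then shift the display back.

```python
import numpy as np

rng = np.random.default_rng(42)

def jumpy_frame(frame: np.ndarray, k: int = 10) -> np.ndarray:
    """One frame as if the sensor were jogged a random n/k of a pixel
    (n = 0..k-1) in x and y, and the display jogged back by the same
    amount.  Works on a k-times grid so fractional jogs are integer
    shifts; edges wrap, which is good enough for a demonstration."""
    dy, dx = rng.integers(0, k, size=2)
    hi = np.kron(frame, np.ones((k, k)))                 # scene at fine grid
    hi = np.roll(hi, (dy, dx), axis=(0, 1))              # sensor jog
    h, w = frame.shape
    captured = hi.reshape(h, k, w, k).mean(axis=(1, 3))  # pixel capture
    shown = np.kron(captured, np.ones((k, k)))
    return np.roll(shown, (-dy, -dx), axis=(0, 1))       # display jogs back
```

Run frame by frame, the scene stays put while the pixel lattice wanders under it, which is the whole point of the experiment.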
  13. Carl Looper's quartet of pictures deserves our attention: original; original to 4 colours; original + 10% noise; original + 10% noise to 4 colours. He means the example to show that "the grain doesn't do anything other than mediate the transfer of tone between one bandwidth and another." I was surprised by how he exemplified reduction of bandwidth: by transforming the 8-bit images on the left to 2-bit images on the right. In our video context, where images are always 8-bit, 10-bit or 12-bit, I was expecting reduction of bandwidth to be in the spatial domain -- either by softening the image through loss of high spatial frequencies, or by resampling at lower resolution. So our first request of Carl should be for examples where 8-bit remains 8-bit. But allowing that he has reduced bandwidth by reducing bit depth, we should wonder whether the noise in Carl's example is similar enough to film grain noise to sustain the example. If his lower-left image had film grain noise the example actually wouldn't work. It works because the noise involves such large modulation that the far-apart levels in the 2-bit images are reached, achieving the dithering. Actual grainy film involves much smaller modulation. Where the 7276 (my favorite stock) data sheet says RMS granularity = 9, this means that the microdensitometer, set up to emulate human visual acuity at 12x magnification, finds the film's density-1 field varying ±0.009 rms. This puny variation cannot even register in an 8-bit image, much less traverse 2-bit levels. So reducing bandwidth by reducing bit depth was a poor example. Grand flourishes like "adding noise does not affect the image" don't get us far. There was much brilliant work done on image noise starting in the 1950s by Rose, Fellgett, Zweig, and many others. Did any of these scientists rely on grand flourishes? There's a nice picture of different models of film grain on page 75 of "Image Science" by Dainty & Shaw. (I must have been remembering that picture when I described models, but overlooked the optical softening step.) Kodak printed a small pamphlet, "Understanding Graininess and Granularity", and there's discussion in Campbell's "Photographic Theory for the Motion Picture Cameraman" easily accessible to cinematographers. We should demand more attention to the particulars of film grain. It's noise, but it's our special noise, with its special character revealed in its Wiener spectrum or by how it looks.
  14. It is easy to keep metaphysicians busy. Ask them: What is the relation of "material embodiments" to their "images"? What makes material thing A the embodiment of image B? Two different material things can be the embodiments of the same image. Can one material thing be the embodiment of two images? Can there be the "zero" embodiment? Can there be a minimal embodiment? A maximal embodiment? Images have attributes. Of what kinds? Images have attributes that may or may not be visible in the image's embodiments. "An image is not to be found in its material embodiment." Is it perhaps to be found in the totality of all its material embodiments? Is it to be found some other way, or in no way? When we "treat the image as an objective reality independent of its material embodiment", what kind of independence is this? Carl is "proud to be metaphysical". Oh-oh, being metaphysical is not the same as being a metaphysician. Maybe these questions will not keep him busy. Who has slaughtered the word "theory"? And the saddest thing is that it contributes nothing to either the art or science of cinema.
  15. The print itself is not the image? The print is what painters call the ground? Carl, here is not where to pursue the metaphysics of depiction. No real problems are solved by philosophy anymore. The grainy photographic image is akin to a half-tone, except at too small a scale to be resolved by photographic lenses or by the viewing eye, so instead of grains one sees a grainy field: variegated grey where the light image was a smooth grey, rough edges instead of smoothly falling ones. Graininess is an operation on the light image that we can model. The dot-screen model is admittedly too crude a model. It simply isn't true that "once [the image is] embodied in material form it's too late to do anything with the signal". Material form is one or another lossy encoding. What's lost is lost, but a new signal can be reconstructed from what remains. Some things that can be done with the original signal come out exactly the same when done with the reconstructed signal; some other things not. Becoming grainy is a lossy operation in itself. Cine optical printing exemplifies repeated applications of the graininess operation. An optical image of one grain pattern is impressed upon a second grain pattern. Then an optical image of that thing is impressed upon a third grain pattern. The resultant of several steps can still be called a grain pattern. So what happens when a video original undergoes the "becoming grainy" operation? Some grainy image will result. What are the limitations on the qualities of that grainy image? (In the worst case the operation could consist of modeling the out-of-focus video cast onto photographic film and reimaged in very high resolution video.)
  16. You picked a nasty (unsharp vision) portion of the retina, where it's more rods than cones, but yeah, it would be nice to make our image sensor imitate our retina. Rectilinear manufacturing, and 2-dimensional addressing and image processing, are in strong opposition to this. But if you could have that sensor, why jiggle it (and how much excursion is needed to break the pattern)? Our retina doesn't jiggle, nor do the cones shift about. There are saccadic eye movements, but these are nowhere near 24 per second, and they are of several degrees. The retina, and the whole visual system, has its own kinds of noise, but not of the shifting kind. For the dynamic grain pattern of cinema, that we both think important, what exactly is it doing, visually and aesthetically?
  17. You might be misunderstanding how grain would be "added". This is more nearly multiplicative noise, and most nearly convolutional noise. The fake grain isn't just thrown onto the video image. Remember how dot screens work in the graphic arts: the continuous-tone image is printed to an array of black dots, their sizes dependent on the local brightness of the image. Now imagine dot screens with the convolvers randomly located. Now imagine how easy this is in digital image processing (see the sketch below). The dot screen isn't quite the model I'd use to fake grain, since there are different sizes of photographic grain, each with its probability distribution for brightnesses turning it black. Also we'd need to model the 3-dimensional nature of the emulsion. Again, no biggie for digital image processing. The resolution question comes to: what video resolution is required to achieve as good an image as possible with a particular grain model? The requirement might be higher than we'd like. The fake grain must dominate the underlying pixel array while being achieved in those pixels. Conversely, for a given video resolution, how coarse must the imposed fake grain pattern be? It might be coarser than we'd like.
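A toy rendering of the randomized dot-screen idea (a sketch only: single grain size, 2-D, with a hypothetical cell size and development law; a serious model would add the grain-size distribution and emulsion depth mentioned above):

```python
import numpy as np

rng = np.random.default_rng(1)

def fake_grain(image: np.ndarray, cell: int = 4) -> np.ndarray:
    """Randomized dot screen: one randomly placed 'grain' per cell,
    developing (going black) with probability equal to the cell's local
    darkness.  image is float in [0, 1], 1 = white; returns same shape."""
    h, w = image.shape
    out = np.ones_like(image)
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            gy = y + rng.integers(cell - 1)   # random position within the
            gx = x + rng.integers(cell - 1)   # cell: the 'random convolver'
            darkness = 1.0 - image[y:y + cell, x:x + cell].mean()
            if rng.random() < darkness:       # tone sets the probability
                out[gy:gy + 2, gx:gx + 2] = 0.0  # 2x2-pixel opaque grain
    return out
```

Note that tone is carried by whether grains develop, not by noise added to pixel values: the convolutional character described in the post.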
  18. That would be a recreation of film. I was aiming at a practical method for enhancing video by incorporating a couple of piezo elements on the camera's imaging sensor and on the projector's light transducer.
  19. Steve: my advice to each young cinematographer: learn the relevant science. To master shooting fluorescent-lit scenes, learn about light spectra, and how the eye sees color, and how films (or cameras) reproduce color. My primer was the little Kodak booklet "Color as seen and photographed". The bible is still Hunt's "The Reproduction of Colour". What makes fluorescent lighting tricky is precisely that things don't photograph under it the way they look under it. Most skin looks a bit unpleasant under most fluorescent lights, but it doesn't look green! What's happening is that all color films, and most digital color cameras, are poor matches to how color is seen, and fluorescent illumination exaggerates the mismatch. Understand that there are two very different instruments that might be called a color meter. One, whose technical name is "colorimeter", is designed to yield what human color vision yields. The colorimeter measures a color by three numbers, often CIE-defined X,Y,Z or Y,x,y or L*,a*,b*, but it could also be Hue, Saturation, Brightness, etc. -- some way we characterize colors. There are many scientific color systems, and since it will be your job to shoot color pictures, you should begin to know colors according to one or another system. This colorimeter will notice nothing tricky about the fluorescent illumination. The lamp itself will measure acceptably white. The grey card will measure acceptably grey. Someone's skin will measure decently close to how it measures under non-fluorescent light sources that themselves match the whiteness of the fluorescent lamp. A colorimeter can tell you which white the fluorescent lamp produces, but not that the light is making the white in some funky way. A very different instrument that might be called a color meter, although it really shouldn't be, is the spectrometer. It measures a color, or a light, by a graph or a long list of numbers. It measures what's going on at every wavelength. This is much more than the human eye can sense from the color or light, and also much more than the film or camera can sense. Strictly speaking, the spectrometer measures more than the color. So the spectrometer can identify funky light sources like fluorescents and, if you also have detailed knowledge of how the eye and the film "see", the spectrometer can predict how they will disagree under that light. Zeiss pocket spectroscopes were once popular for this. They showed you the spectrum's highs and lows on a reticle. Though less than a spectrometer, you could identify different types of fluorescent lamps with them and maybe make appropriate filtrations. There are now portable spectrometers available for $2000. Maybe this is what Adrian S. meant by "color meter". With film there can be bad color surprises when you see the rushes. With video the color twists due to funky lights are revealed immediately, even before you shoot. But it's still useful to understand what's causing them.
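The relation between the two instruments fits in a few lines: the colorimeter's X, Y, Z are just the spectrometer's reading S(λ) integrated against the observer's color-matching functions. A minimal sketch; the Gaussian curves below are crude stand-ins for the real CIE 1931 tables (use the tabulated 5 nm CIE data for any real measurement):

```python
import numpy as np

def gauss(lam, mu, sigma):
    return np.exp(-0.5 * ((lam - mu) / sigma) ** 2)

def cmf(lam):
    """Crude Gaussian stand-ins for the CIE 1931 color-matching functions.
    Good enough to show the computation, NOT for real colorimetry."""
    x = 1.06 * gauss(lam, 596, 33) + 0.37 * gauss(lam, 447, 19)
    y = 1.01 * gauss(lam, 556, 47)
    z = 1.78 * gauss(lam, 449, 22)
    return x, y, z

def spd_to_xyz(lam, spd):
    """A colorimeter in three lines: integrate the spectrometer's reading
    S(lambda) against the observer curves."""
    x, y, z = cmf(lam)
    dlam = lam[1] - lam[0]
    return ((spd * x).sum() * dlam,
            (spd * y).sum() * dlam,
            (spd * z).sum() * dlam)

lam = np.arange(380.0, 781.0, 5.0)
print(spd_to_xyz(lam, np.ones_like(lam)))   # equal-energy 'white'
```

Two quite different spectra that integrate to the same three numbers are identical to this computation, which is exactly why a colorimeter cannot flag a funky fluorescent while a spectrometer can.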
  20. Simon, this is what I cautioned about in post #19. The characteristic curves you've posted were not measured using diffuse densitometry, as would be appropriate for cine negatives that will be contact printed. Dr. Wahl uses what he calls "realistic" densitometry, such as a condenser photographic enlarger (or a movie optical printer) uses: "the densities are determined under real, 'near-enlargement' conditions..." Since Gigabitfilm's data sheets describe the film's enlarger-printing densities as being the result of an "asymmetric Callier effect", the diffuse-density characteristic curve will not have the shape of the characteristic curve measured by Dr. Wahl. And since the data sheet's details of the asymmetric Callier effect seem confused, I have no idea what the diffuse-density characteristic curve looks like. Someone with a darkroom and a (typical, diffuse) densitometer should just measure it. Also, your praise of the extremely linear, toeless second curve is humorous, coming in this topic searching for differences between film and video. That extremely linear, toeless curve was what characterized early video, before it tried to look like film.
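For reference, the discrepancy between the two densitometries is conventionally expressed by the Callier quotient:

$$ Q(D) \;=\; \frac{D_{\mathrm{specular}}(D)}{D_{\mathrm{diffuse}}(D)} \;\geq\; 1 $$

Q is 1 for a grainless (non-scattering) medium and grows with silver-grain scattering, generally varying with density. If Gigabitfilm's Q varies "asymmetrically" as the data sheet claims, there is no fixed factor converting Dr. Wahl's curve into a diffuse-density curve, which is why it must simply be measured.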
  21. This is interesting. I'm sure that grain can be faked in video, yielding an ersatz film which must have the "film look", provided the video is of extremely high resolution. That video is then substituting for the exposing light. But I get the point that this is not possible with lower resolution video that only seems, visually, to match the exposing light, due to the high frequency rolloff of vision. Two questions arise. 1. Is this ability of the genuinely dithered image to hold up through transfers to lower bandwidth -- a Signal Processing fact -- necessary to the film look, or might a faked version look the same? 2. The analysis given, as I understand it, is for grainy photographs generally and not especially for cinema. For if the point about the difficulty of faking film grain applied only to the running cinema, but not to the single frame, then it could be disproved by using whatever method worked for the single frame and jiggling it around again and again, frame after frame. I wrote in post #16 that "the random grain of film ... functions as a dynamic ground for cinema" and then claimed that "you can fake grain and have a dynamic ground for video". The latter might not require well-faked grain at all. It's unlikely the single frame needs to perfectly simulate a grainy still photograph when it will be on view for just 1/24 second. A single grainy image looks not-all-there, but also definitely sketched-out. Carl's post #17 suggests how a Signal Processing analysis can explain that. Cinema requires an analysis of the running grain, what people describe as "boiling", "grinding", etc., notions exclusively dynamic. The 3-D Signal Processing analysis of that would involve correlations to human pattern perception data that is now unavailable.
  22. Simon, your experiments with Gigabitfilm are reminiscent of mine, and others', with Eastman 7360 Direct MP Film. (The film is discontinued; a scan of the 1982 data sheet is here.) 7360 was very different from Gigabitfilm technically. 7360 produced a positive image with negative development, by means of the Sabattier effect! Also 7360 was orthochromatic. But 7360 resembled Gigabitfilm in being virtually grainless. The pictures were as if shot and printed on holographic film. It was uncanny to watch 7360 original (shot with difficulty, as the EI was around 0.25) and see the lens's direct image of the scene, unmediated by silver halide grains. One then recognizes the role of grain in cinema. But 7360 was also different from standard cinema in its warped tonal reproduction. This is evident in the curvier-than-S characteristic curve in the data sheet. I suspect, from your picture of Frau Kopp at the editing bench, that Gigabitfilm is also providing warped tonal reproduction. This can be settled by seeing a diffuse-density characteristic curve for Gigabitfilm. (As explained earlier, that is the appropriate density since cine film will be contact printed.)
  23. Go to the Akihabara district and ask around. I've never been in a city more likely to have a special battery than Tokyo.
  24. Hi Carl, the PDF is included in the big ZIP file. The readme just warns that the experiment requires a monitor having at least 2000×1500 pixels and a system with QuickTime able to display a 1800 Mb/s video stream. I worked in the Apple codec "None" to make the experiment's videos with full control, but I was silly to release them in that codec. I didn't understand, or trust, other codecs back in 2009.
  25. Why all this muddle? Reading too much French philosophy? What else produces such a slather of questions, proposals, candidate answers, assumptions, concepts, concepts, and more concepts? There can be no science in that truthlessness.