Everything posted by Ilmari Reitmaa

  1. This is a bit off-topic as it describes a prototype post-production VFX process, but I think it's relevant as, if nothing else, an illustration of how computational methods are pushing into the traditional domain of principal photography. "Unwrap Mosaics was developed by Rav-Acha, Andrew Fitzgibbon, Pushmeet Kohli and Carsten Rother. It virtually strips the skin from a selected object in a video, producing a 2D surface that can be easily edited using photo-editing software. Rav-Acha likens that surface to a tiger-skin rug: a flat surface that once covered a 3D object. The software reverses the process after any edits are complete, wrapping the edited skin back around the 3D object. Because the edits are attached to the skin, they then move with the object throughout the movie." http://www.newscientist.com/video.ns?bctid=1743772183 http://technology.newscientist.com/article/dn14572
  2. I quite agree; while the potential for useful applications is substantial, so is the potential for abuse. I also feel that there's a broader context: algorithms are getting ever more intelligent and CPU time is getting ever cheaper, and with them an age of image manipulation is only just beginning. Digital post, as we now know it, is baby steps.
  3. This could be slightly off-topic for DI, but interesting nevertheless. An algorithm for resizing an image by recomposing it: http://www.faculty.idc.ac.il/arik/imret.pdf http://www.faculty.idc.ac.il/arik/IMRet-All.mov Both files are quite big. The video demo is rather impressive.
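The linked paper is Avidan and Shamir's seam-carving technique. A minimal sketch of the core dynamic-programming step, assuming a grayscale image as a NumPy array and using a simplified gradient energy of my own rather than the paper's exact operator:

```python
import numpy as np

def remove_vertical_seam(img):
    """Remove one lowest-energy vertical seam from a grayscale image.

    Energy = gradient magnitude; dynamic programming finds the
    connected top-to-bottom path of least total energy; deleting it
    shrinks the width by one pixel while sparing salient content.
    """
    h, w = img.shape
    # Simple energy: absolute vertical + horizontal gradients.
    energy = np.abs(np.gradient(img, axis=0)) + np.abs(np.gradient(img, axis=1))

    # Cumulative minimum energy, accumulated top to bottom.
    cost = energy.copy()
    for y in range(1, h):
        left = np.r_[np.inf, cost[y - 1, :-1]]
        up = cost[y - 1]
        right = np.r_[cost[y - 1, 1:], np.inf]
        cost[y] += np.minimum(np.minimum(left, up), right)

    # Backtrack the cheapest seam from the bottom row.
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for y in range(h - 2, -1, -1):
        x = seam[y + 1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam[y] = lo + int(np.argmin(cost[y, lo:hi]))

    # Delete one pixel per row.
    mask = np.ones((h, w), dtype=bool)
    mask[np.arange(h), seam] = False
    return img[mask].reshape(h, w - 1)
```

Each call shrinks the width one pixel at a time, and flat low-energy regions give way first, which is why the resizes in the demo video look content-aware.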
  4. Copy-pasting from a DALSA newsletter: "Tempting Hyenas," the first feature-length film shot with the DALSA Origin 4K digital cinema camera, will undergo a full 4K DI at Post Logic Studios, a leading independent post-production and digital intermediate (DI) service facility. ... Post Logic's 4K workflow, developed in cooperation with DALSA Digital Cinema, begins with the careful handling of assets at the end of each day of shooting. Data from the Origin 4K camera (up to two terabytes a day) was recorded onto a Codex unit and output as 2K ProRes files for viewing dailies and for creating an edit decision list (EDL) during online editing. The files were offloaded daily from the Codex onto a Ciprico Media Vault for transporting back to Post Logic Studios in Hollywood. Upon arrival, the data was backed up onto 400GB LTO3 tapes, with all assets and metadata meticulously catalogued in a proprietary database. This process ensured that the production could shoot continuously without skipping a beat. With the LTO3 tapes serving as the 4K image master files, Post Logic Studios will match up the time codes with the EDL to create the final 4K product. "With the new digital cameras, post-production houses are becoming involved in every stage of production. We're there during pre-production, during production and afterward in post," said Dr. Mitch Bogdanowicz, Executive VP of Imaging Science, Post Logic Studios. "Early on, we made very specific suggestions about working with the DALSA camera. The quality output of any digital camera you shoot with requires not only an understanding of what you're capturing, but also the limitations of the curves you're placing images on, so that when you get to the final product you can really optimize things." Director LeVar Burton, DP Kris Krosskove.
  5. Uh, just realized there's a VFX subforum that might be more appropriate... if somebody knows how to move this thread there, please feel free to do so.
  6. This may be slightly off-topic, but here's a link to a quite awesome demo of a bleeding edge scene reconstruction tool called Photosynth. Not a post/DI application per se (yet, at least), but imagine the possibilities of the technology (scene design, virtual scene construction, footage quality and resolution enhancement...). The cool stuff begins at 2:45. http://www.ted.com/index.php/talks/view/id/129 (There's a download link if the embedded player doesn't work.)
  7. Copy-pasting from a DALSA news release: DALSA Digital Cinema reinforced its leadership in 4K motion picture capture today with the announcement of new 4K camera models, an on-board 4K data recorder and new 4K anamorphic lenses. The new products will be on display at the upcoming National Association of Broadcasters (NAB) trade show (booth # C-9423), April 16th to 19th, 2007 in Las Vegas. "Basically, we're talking about the ability to shoot at 4K resolution, 16-bit, uncompressed, untethered, using the highest quality anamorphic lenses. I think cinematographers will be particularly thrilled that, for the first time, a digital camera will be able to capture the CinemaScope 2.40:1 aspect ratio without compromising image quality." Available "early 2008".
  8. Just for the record, I did the edit on a G5 2.1GHz iMac with 1.5GB RAM running OS 10.4 and FCP 5.0.4. I transferred the footage (some 80GB of PAL 4:2:2 8-bit uncompressed) onto the internal hard drive (SATA) and streamed it from there, ending up with a 5GB final version. Everything went just fine, no glitches whatsoever.
  9. iMac specs say the internal drive is SATA, at least in the newest models, and I'd assume all the Intel models are SATA. So would I, but the situation is such that I have to look for ad hoc alternatives. Anyway, thanks for the input.
  10. Things happen, and I find myself in need of a quick fix for a music video edit... Would an average Intel iMac running FCP 5 be enough for PAL SD 4:2:2 10-bit online editing? I'm mostly worried about the single SATA drive that would be hosting both the video material and FCP and the OS; would it be a bottleneck, transfer-rate-wise?
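To put a rough number on that worry, here is a back-of-the-envelope sketch; real capture formats such as v210 pad rows, so the true figure runs a bit higher:

```python
# Back-of-the-envelope data rate for uncompressed PAL SD 4:2:2 10-bit.
width, height, fps = 720, 576, 25
samples_per_pixel = 2     # 4:2:2 averages one luma + one chroma sample per pixel
bits_per_sample = 10

bits_per_frame = width * height * samples_per_pixel * bits_per_sample
mb_per_second = bits_per_frame * fps / 8 / 1e6
print(f"{mb_per_second:.1f} MB/s")  # 25.9 MB/s
```

A single SATA drive of that era sustains very roughly double that, so one SD stream should fit; the real risk is the OS, FCP and media all contending for the same spindle.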
  11. I would expect some difficulties with capture-device codecs, which are unlikely to be your regular video-editing codecs; conversion software for proprietary or unusual formats can be difficult or expensive to come by. Industrial cameras also come in greater variety, as industrial applications are far more varied than cinematography, where image quality in the visible range of the spectrum is at a premium; the criteria of quality are different for industrial cameras, and you may end up paying for the wrong kind of quality (a high signal-to-noise ratio but poor dynamic range, etc.) or for features you really don't need. Furthermore, opportunities to resell your investment are fewer than with a camcorder; you're probably stuck with the camera, the capture device and whatnot, even if you won't need them later. Still, I wouldn't so much doubt the feasibility of the idea as expect all kinds of funky workflow-related issues, be they digital (proprietary codecs, proprietary capture devices, driver problems, insufficient RAID...) or practical, such as having to haul a monitor around more than usual (no viewfinder), having to customize the bridge plate/matte box, etc.
  12. I came across this one just recently; it's a variable-speed HD camera from a small company in Munich (partnering with P+S Technik), and it seems to have won an award at the Cinec expo this year. Five units delivered worldwide. From the specs:
      • PL mount
      • Electronic viewfinder
      • External control unit (laptop PC)
      • FireWire/fiber-optic link
      • 1,150 fps (SD PAL)
      • 950 fps (720p)
      • 650 fps (1280 x 1024)
      Any experiences?
  13. Presented at this year's SIGGRAPH, a co-project of USC and National Taiwan University for relighting a moving human subject for immersion into a CGI set. Looks pretty high-tech and at the same time not yet entirely convincing, but nevertheless a novel concept. http://gl.ict.usc.edu/research/RHL/ http://gl.ict.usc.edu/research/RHL/LS6_EGSR_062006.avi (~70MB)
  14. Well, you've pretty much summed up my thoughts with that. They don't have a working model. They promise the moon. With as little substance as this, there's nothing much to say one way or the other. After reading the website a while back, it did come across as hyped. Who knows.
  15. From the engineering point of view, the answers are a simple no and yes, provided that such cameras exist. Storage will become more and more compact, and data transfer rates will keep rising; it is only a matter of timescale and engineering effort. Physical limits to this progress do exist, but they will likely not be encountered before digital storage systems can deliver more information than the human visual system is able to process (provided that sufficient projectors exist). It might take a while, though. For some numbers, see e.g. Kryder's Law (for hard drives) and, yes, Moore's Law (for solid-state memory). Ten years ago my PC had a 4GB hard drive and 48MB of memory. Ten years from now...
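Putting a rough number on that last sentence; this is a toy sketch, and the 1.5-year doubling time is an assumed round figure of my own, not a measured constant:

```python
# Toy extrapolation of storage capacity under an assumed doubling time
# (Kryder-style growth); 1.5 years is an illustrative round number.
def capacity_after(years, base_gb, doubling_years=1.5):
    """Project capacity assuming it doubles every doubling_years."""
    return base_gb * 2 ** (years / doubling_years)

# 4 GB ten years back, projected forward a decade at that rate,
# lands near the ~400 GB drives of this post's era.
print(round(capacity_after(10, 4)))  # 406
```

Run the same rule another decade forward and the prediction is tens of terabytes per drive, which is the point of the post: the constraint is schedule, not physics.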
  16. Hello everyone. Long time no see. I may just be adding to the whole depth-of-field confusion here, but there is an alternate, though equivalent, approach to depth of field and related concepts, described by Harold Merklinger (no affiliation) in the book Ins and Outs of Focus, particularly in chapter five. I've always found it to have a certain appeal.

      The basic idea is this: draw an imaginary line from the perimeter of the lens diaphragm to the point of focus, and beyond. At any given distance, the distance from the optical axis to that line equals the radius of the disk that a point-like detail appears as at that distance (think of a long shot at night, with an actor in the foreground in focus and blurry disks of city lights in the background). At the point of focus, a point-like detail remains point-like. Closer to the camera, a point-like detail is no longer point-like but never exceeds the size of the diaphragm. Approaching infinity, a point-like detail grows without bound.

      What gives? The radius of the disk is proportional to the size of the smallest resolvable detail at that distance (roughly half of it). Focal lengths and format sizes (and in fact even depth of field) need not be considered at all until one begins to consider how to frame the image (that is, the angle of view, or the ratio of format size to focal length). And even then, the radii of the disks at all distances remain the same; only their relative sizes in the image change, which is what depends on the angle of view.

      It's a kind of backward method, and I may not have put it down too clearly here, but there's a certain elegance to it, I think. For a better explanation, refer to Merklinger's book.
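Merklinger's line construction boils down to one similar-triangles formula: the disk a point-like detail smears into, measured in subject space, scales with the subject's distance from the plane of focus. A quick sketch, with a function name of my own; focus and subject distances go in any single unit, and the result comes out in the aperture's unit:

```python
def blur_disk_diameter(aperture, focus_dist, subject_dist):
    """Diameter of the disk that a point-like detail at subject_dist
    smears into, by Merklinger's object-space construction: a line
    from the rim of the diaphragm through the point of focus, whose
    offset from the axis grows linearly with distance from focus."""
    return aperture * abs(subject_dist - focus_dist) / focus_dist

# A 50mm lens at f/2 has a 25mm entrance pupil. Focused at 3m,
# a point-like city light 10m away renders as a disk roughly
# 58mm across in subject space; at the focus plane it is zero.
print(round(blur_disk_diameter(25, 3, 10), 1))  # 58.3
```

The limiting behaviours match the description above: zero at the focus plane, never larger than the diaphragm on the near side (the ratio is below one there), and unbounded toward infinity.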
  17. Just to throw in some numbers: night-sky photographs with stars visible usually have specs something like f/2.8, ISO 800, 4-10 seconds of exposure, and even then you only capture stars at the brighter end of the range, and only if you're located well outside any urban light pollution. The faintest stars visible to the naked eye are some ten stops below the brightest, so you'll definitely be better off doing a double exposure, as suggested.
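The arithmetic behind that ten-stop figure, since each stop is a factor of two in light:

```python
# Each stop doubles the light, so ten stops span a brightness
# ratio of 2**10, i.e. about a thousand to one.
stops = 10
ratio = 2 ** stops
print(ratio)  # 1024
```

A single exposure would have to hold that entire range to render both the brightest and the faintest naked-eye stars; hence the double-exposure suggestion.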
  18. I saw Khondji credited on a poster of the ad. I seem to recall that Khondji and LaChappelle have also worked together on something for Nokia. Can't cite an actual source for that one either, though.
  19. Oh, yes. I went to see The Hitchhiker's Guide a while back; first there was the usual ten-plus minutes of commercials, and then on top of that, just before the movie, they played the H&M ad. After three minutes I sincerely considered walking out of the theatre to wait for it to end. If it had any redeeming qualities I'm unable to recall them; it was just plain awful, corny with not a hint of irony coming through. Moreover, I find the whole concept of a five-minute "commercial with artistic values" manipulative and repulsive, and as for promoting the H&M brand, if anything it had the opposite effect on me. I did manage to enjoy The Hitchhiker's Guide nevertheless, but the ad came close to ruining the beginning for me.
  20. Both Filmfotograferna and P. Mutasen Elokuvakonepaja are well respected; you could inquire with them for AC contacts. Angel Films also hires out crew. For local student-level ACs, you can have a look at the Helsinki University of Art and Design cinematography student list (http://www.uiah.fi/eto/stud.html). I'll be happy to look up any info that might further help you.
  21. As far as I know, the body mount used on Pi was an ad hoc construction by the crew, not actually a Doggicam harness; I think they referred to it as the "Snorricam". Quite the same thingy anyway.
  22. I don't actually see the problem in saying "longer lens", since it's a relative term in any case (long in comparison to what?); the usual definition is that focal lengths sufficiently larger than the format diagonal are "long", so "long lens" does unambiguously describe the visual effect (although to a cinematographer the intuitive absolute frame of reference is probably the 35mm format). Why I nevertheless feel a little uneasy about using "telephoto" to describe the visual effect is that it makes things like "inverse telephoto" or "telephoto macro lens" sound much more mysterious than they actually are. I do realize, though, that using "telephoto" to describe the visual effect is common practice; I probably use it myself at times :D
  23. Incidentally, and I hate to nitpick here, but... "telephoto" does not refer to the angle of view of a focal-length/format combination but to a particular lens construction: a convex lens element in front of a concave element, separated by a space, resulting in a back focal distance shorter than that of a normal construction of the same focal length [see e.g. L. Stroebel: View Camera Technique 7e, Focal Press 1999]. So what Mr. Mullen stated applies regardless of the construction of the lens, which is what "telephoto" refers to. Just terminology, I know... B)
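The shortened back focus falls straight out of thin-lens algebra. A sketch using the standard two-element formulas, with illustrative focal lengths and spacing of my own choosing:

```python
def two_lens_system(f1, f2, d):
    """Effective focal length and back focal distance (rear element
    to focal plane) of two thin lenses in air, focal lengths f1
    (front) and f2 (rear), separated by a distance d."""
    efl = f1 * f2 / (f1 + f2 - d)
    bfd = f2 * (f1 - d) / (f1 + f2 - d)
    return efl, bfd

# A positive front element and a negative rear element, spaced apart:
# the telephoto construction. The system behaves like a 133mm lens
# yet focuses only about 67mm behind the rear glass.
efl, bfd = two_lens_system(100, -200, 50)
print(round(efl, 1), round(bfd, 1))  # 133.3 66.7
```

Swap the elements (negative in front, positive behind) and the back focal distance comes out longer than the focal length, which is exactly the "inverse telephoto" (retrofocus) design that clears the mirror box for wide-angle lenses on reflex cameras.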
  24. lux - http://hyperphysics.phy-astr.gsu.edu/hbase...on/areance.html http://en.wikipedia.org/wiki/Lux
  25. Steve Jobs' WWDC 2005 keynote is online at http://www.apple.com/quicktime/qtv/wwdc05/, most of which is about the x86 transition.