Everything posted by Lance Flores

  1. The camera isn't ready for prime time. This is one of the things I went round and round with them about ... They're working on their storage packs, which are 128/256/512MB flash packs that could be unloaded from the camera. I convinced them this was not good enough, and that they needed a flash-pack docking station so that it would work the same as a film workflow. The 512 would be roughly equivalent to 1000' of film; it would go to a docking station and a new flash pack would go in the camera. The data would be unloaded into a Digital-Codex on location or set, and the removable drive taken to the SAN and made ready for viewing of the dailies. Yeah, well, that's what I asked them too. I just expected to see a dual-link and was surprised that they hadn't begun working on it. The data would be a little more than 4:4:4 but could be squeezed into the dual-channel bandwidth. They were working on the flash pack first, because we decided that is the way we were going to capture, and we'd use the single HDSDI 4:2:2 2K output for the monitor. Later, when they finished the dual or quad link, we might use the glass lines to record. We've been talking to the Digital-Codex guys about this for a while. There is 16/32MB of internal RAM that stores the data in a kind of barrel-shifter arrangement that allows the data to be dumped into a recorder or flash intermediate storage. Remember I mentioned a quad glass .. fiber? That's why I opted for the flash-pack workflow. When I talked to Phil about using anamorphic to use the entire array, he was surprised, and wondered why I wouldn't just shoot it flat. That's when I got into the reasoning of optimizing the sensor for the more cinema-oriented aspect ratios, which would allow higher resolution, allowing you to shoot flat with primes. If we shoot on schedule we'll go with the Dalsa Origin.
  2. Yikes! You’d have stuck around when the nickelodeon flashing-card movies were replaced with fancy technology like the Vitascope, or worse, the Kinetoscope. How about when they made talkies or introduced color? I’m sure there’s something romantic about the out-house compared to indoor plumbing, but I appreciate the technology nonetheless – the present technology is not going to kill film. Digital technology will have to make another leap. When it does, the industry will simply evolve again. Dinosaurs will become extinct, and those unable to cope will don orange pajamas and sing mantras with the Hare Krishna folks. That’s just the way the world is ... That day hasn’t arrived; it’s just the “end of the world is upon us” sign-carrying people you're hearing on the street corners. Since I’ve had my hands on the Phantom, and Mitch has some at his place, I’ll give my take on it, including the concerns about the 65 I’ve given to Phil Jantzen and the marketing guys there. First, to answer David’s question: yes, it is 4:4:4, probably better. Detail and color are excellent. The most differentiating feature is that the cell is large. Two advantages are that the proportional control circuitry and sense-amp ratio is greater compared to CMOS devices with a smaller sense area. Because the sense-amp circuit doesn’t need to be scaled proportionally to the sense cell area, the signal efficiency, as well as the cell charge/sensitivity and the corresponding die efficiency, are dramatically improved. What this all reduces to is that the dynamic range easily reaches 11 stops. Our first cursory test said 10+, but after re-calibrating the monitor I was looking at, and with a hi-res die printer, we could see that it was easily 11 stops of range. The 12-bit linear output is quite stable and we didn’t detect any noise ... if there was noise it was negligible. The signal is internally originated as 14-bit linear, so I’m sure the lower bits are unusable.
So as you probably roughly computed in your head, there is more image information than a 4:4:4 characterization would imply. The camera is ASA 600 with a lot of control over shutter and gain, so you can stop it down to get greater DoF. I wish it had more horizontal resolution though. The biggest problem for an 8K format is the data transfer rate and storage. But hold that thought, David - Apple called a couple of weeks ago, increasing our SAN by 50% with a negligible cost increase. Eventually, technology will solve the data rate and local storage problem – just not today. If you hold the aspect ratio constant, that would be a hell of a lot more data, requiring 4-6 channels to get it out unless you used even more exotic fiber-optic transfer methods. Your 1000+ foot film-equivalent 512MB flash module would drop to about 7 min instead of 25 min. (For our application: 2.35:1, using less vertical space on the sensor and holding the horizontal 4096-pixel width.) This was one of my gripes, well, maybe not a gripe, but input to Phil. My thinking is that the sensor aspect should be optimized more toward the cinema 1.85:1 and 2.35:1 formats, moving the vertical pixels to the horizontal end (smaller cells). I think you could probably get to 5.2K or 5.8K and hold the circuit parametrics well enough to maintain the performance; maybe 6K. A lot of circuitry changes: address decoders, length of data lines, and the number of connections to the data lines, which changes capacitive and inductive reactance. Lenses are a problem. I think the investment is worth it for a production company. Eventually there will be more movement to the 65/70mm format, and resolution will move to 6K-8K. The film and digital technologies are different, and film is not dead.
After the next major technology shift, film, as well as the present semiconductor sensors, will probably go the way of vinyl records – which, as I was told as a kid, would never be replaced because their recording purity could never be duplicated.
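The storage arithmetic in the post above is easy to sanity-check. A back-of-envelope sketch (all figures below are hypothetical and chosen only for illustration; real sensor heights, bit depths, and pack capacities are whatever the vendor ships) shows why holding the 4096-pixel width and cropping the vertical for 2.35:1 stretches a fixed-size flash pack:

```python
def raw_rate_bytes_per_s(width, height, bits_per_pixel, fps):
    """Uncompressed sensor data rate: pixels/frame x bits/pixel x frames/s."""
    return width * height * bits_per_pixel * fps / 8

def record_time_s(capacity_bytes, rate_bytes_per_s):
    """Seconds of footage a fixed-capacity flash pack holds at a given rate."""
    return capacity_bytes / rate_bytes_per_s

# Hypothetical figures: 4096-wide sensor, 12-bit linear, 24 fps.
full_rate  = raw_rate_bytes_per_s(4096, 3112, 12, 24)                # full array
scope_rate = raw_rate_bytes_per_s(4096, round(4096 / 2.35), 12, 24)  # 2.35:1 crop

pack = 512 * 10**9  # an arbitrary pack capacity, in bytes
# The crop records longer by exactly the ratio of the two active heights.
print(record_time_s(pack, scope_rate) / record_time_s(pack, full_rate))
```

Because the rate is linear in every factor, the record-time gain from the crop is just full height divided by cropped height, independent of the pack size.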
  3. Maybe it's kinda like being down with the flu: buy another 10TB of storage and call me in the morning. Sorry, I'm tired; I've gotta be up late with an E.P. in London and another in L.A., and London's about to get up for a meeting in an hour or so. :wacko:
  4. Chris - I don't think it's likely you'll see an 8K CMOS Bayer sensor in the same 35mm format. Certainly the geometries are easily attainable, but the signal characteristics/parametrics of the circuit are not practically attainable in the prevailing silicon process. The difficulty lies in the cell signal, which is roughly proportional to its area. If you scale down the cell you can't equally scale the control/sense-amp circuitry without substantial degradation of signal integrity; not to mention the substantial loss in signal strength/sensitivity, further degrading the signal. What I think will improve is the tweaking of the foundry process and the optimizing of the sense amp, control circuitry, and interpolation algorithms. The performance of the RED image render is very good. If their sensor foundry can reasonably yield the die, then they will readily meet every claim and expectation they have made. Certainly the capital is there to make it happen. Those that continually rant about its technological viability are simply laymen who are completely unfamiliar with the technology. It's an established technology in a new application, using all the improvements in processing and design; mathematics and engineering are all that's left to improve the results.
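The signal-versus-cell-area point reduces to simple arithmetic. This sketch is my own illustration, not a vendor figure: if cell signal scales with photosensitive area while the noise floor stays roughly constant, halving the pixel pitch to double linear resolution inside the same format costs about two stops of S/N.

```python
import math

def snr_loss_stops(old_pitch_um, new_pitch_um):
    """Stops of S/N lost when shrinking the pixel pitch, assuming signal
    scales with photosensitive area and the noise floor stays constant."""
    area_ratio = (old_pitch_um / new_pitch_um) ** 2
    return math.log2(area_ratio)

# Doubling resolution in a fixed format halves the pitch: 4x less area.
print(snr_loss_stops(8.0, 4.0))  # 2.0 stops (~2 bits) of signal headroom lost
```

That lost headroom is exactly what the larger-format route avoids: more cells without smaller cells.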
  5. Elliot - this site was set up by professionals who have asked that all subscribers to this site use their real names. You will shortly be asked to do this by the administrator if you fail to do so. It's their home, and it would be the polite thing to do before you are asked.
  6. Yes. We're having lenses made for the 65.
  7. Mmmm. I was just gonna respond about the Polish Bros. mention the first time I noted David's work, and got caught up in the rest. :huh: and here we are.
  8. My insinuation was not that the man is glorified, but the presumed "ends" (by some) without sufficient insight into the means and other intents. This is frequently the case with biographers. This work is not of that nature. I didn't mean to give the impression that Caro's work was lacking. It's that "if you only knew" response you feel when you read about something you've experienced or know about, and there is information you know is missing. Caro is among the biographers that are honest beyond expectation. The book just brought back memories I had of LBJ, and memories of watching that "Why Vietnam" propaganda film five or six times at Camp Pendleton, the last of which was two weeks before our entire battalion was landing in Đà Nẵng. Loved the country, loved the people, wanted to forget the war. I don't expect he will pull any punches.
  9. Interesting off-topic thread. The Polish Bros. screenplays do reflect their "classical liberal" (libertarian) influence, particularly in their vision of the story. One of their stories in particular that I really liked was Northfork. Some of the critics didn't, I think perhaps because the story didn't fit the standard Hollywood script/story model. If you haven't seen Northfork, you ought to buy or rent it. Excellent cinematography that captured the visual essence of the picture. That's where I came to notice Mr. Mullen's cinematography work. In my opinion, and from my observation, some of his best work. But -- as far as The Life of Lyndon Johnson - well, I got through the first third and, like most books about politicians, found there are far too many apologies that glorify the ends, and far too little to explain real motives and character. I had the advantage of having a home room teacher, also my Texas history teacher, Clyde Jones, who was Lyndon's childhood companion. He was quite easy to entice into talking about Lyndon, during class or after school hours. The other spoiler was my familiarity with his political career; much of my opinion was formed in over-the-fence-line and small-town café conversations with John Connally, whose ranch was close to one of our family’s ranches near Floresville. It seems candid conversation and insight make for loss of the mystique and enjoyment of others’ biographies about political animals, particularly the secrets surrounding them. Many political writers describe LBJ as a complex man, but I found him to have been a simple man involved in many complex events.
  10. Lance Flores

    hd 4.4.4

    It is related to the colour space. Try: http://www.animemusicvideos.org/guides/avt...colorspace.html http://en.wikipedia.org/wiki/Chroma_subsampling then try posting in the First Time Filmmakers section for help. It's more likely that people will help students there and will be more understanding and generous with their time to explain things. Will look for you.
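For anyone following the links above, the mechanics of 4:2:2 are small enough to show in a few lines. A minimal NumPy sketch, assuming planar Y'CbCr input with an even image width (this is just an averaging illustration, not any particular codec's actual downsampling filter):

```python
import numpy as np

def to_422(y, cb, cr):
    """4:4:4 -> 4:2:2: luma kept at full resolution, each chroma plane
    averaged in horizontal pairs, halving its sample count."""
    assert y.shape == cb.shape == cr.shape and y.shape[1] % 2 == 0
    cb2 = (cb[:, 0::2] + cb[:, 1::2]) / 2.0
    cr2 = (cr[:, 0::2] + cr[:, 1::2]) / 2.0
    return y, cb2, cr2

y  = np.ones((4, 8))
cb = np.full((4, 8), 0.25)
cr = np.full((4, 8), -0.25)
y2, cb2, cr2 = to_422(y, cb, cr)
print(y2.shape, cb2.shape)  # luma stays (4, 8); chroma is halved to (4, 4)
```

4:2:0 goes one step further and also halves the chroma vertically, which is why the bandwidth saving compounds.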
  11. Yeah, everything I've seen doesn't have enough contrast anyway. The Phantom guys were suggesting the Sony HDVS C30W - looks like I'll have to do something like Dalsa's t-t-l VF. Looks like we're not going to put this all together for this film. Waiting to hear from Rob Hummell. I think we're going to have to shoot with 3 Origins. It will work with our workflow. This will give us time to have the lenses we want, rather than having to rush a set from the guys in NY and have them rebarreled later.
  12. I'm looking for a viewfinder, with an optional extension, that will accept an HDSDI connection - HD, but optimally one that would accept 2K (2048 x whatever I can get).
  13. Oh Troy .. that's only because they're waiting for About Schmidt and Lost in Translation interactive video.
  14. You are asking much of Claudio to copy raw images. Most production companies, especially if they have distribution contracts or are negotiating contracts, will not release any footage. After release, cinematographers, usually as a matter of contract, can use certain amounts of the finished footage for their personal use in a demonstration reel to present with their curriculum vitae. Likewise, they are usually required to give consent to the use of crew members' likenesses, images, conversations, etc. for "making of," promotional, documentary, and similar uses for the picture. Sometimes they will allow test shots etc. for educational purposes, like what you're asking. Producers, distributors, and studios are usually very protective of their property, and you wouldn't want a DP to jeopardize his professional relationships.
  15. If you've got some data with the noise, I'd like to compare it with the image data we have from our qualification process of the Phantom HD, which we did in December. I believe we are scheduling the Phantom 65 for next week, with some EXT and INT shots at our practical and set locations. I pushed the gain up 10dB to test the noise level in low light, because we have INT night shots with moonlight from a window and warm light from a gas heater and lava lamp in a dark room, with medium to very dark complexion actors in the scene. We didn't detect any noise problem at all. Maybe yours was an earlier prototype. When did you do the tests?
  16. I am not involved in the RED project. I have just responded to what I perceived as genuine interest from individuals with rational questions, lending my experience and background as a physicist, mathematician, and sometime engineer to the application questions within the disciplines in which I am experienced. I am intimately familiar with the issues because I have been evaluating camera technologies for use in the pictures I am working on. That's all. I just happen to have designed an 8K camera and the related electronics and math about two years ago, and I understand what the folks at RED, ARRIFLEX, DALSA, and others are doing. My interests are in the resulting performance for our application. That's it.
  17. One of the guys I know in San Antonio just finished up a picture using the 200 that may be similar to the look you are interested in achieving. You might contact him. His name is Phillip Guzman. He has a site at http://www.thelawlessmovie.com/ and he is pretty accessible.
  18. MF is not a software company. I don't know what you mean; it is a vague question. He works for the RED company, and from what I gather he is involved in the processing algorithms. I don't know him. I was just trying to clarify some of your questions. Geez .. I don't have time to go through that in detail. You'd need to start off with a lot of knowledge and experience. You first need expertise in optical physics (light theory), semiconductor technologies, digital processing, analogue electronics, and mathematics. An extensive understanding of these fields and you're set. I haven't done any assembler or machine code since '74; then I did some instruction code for the AMD 2900 slice processor, which is more akin to the process for machine processing of Bayer algorithms .. like PALs mixed with shift/barrel registers etc., but I got away from most of that when I was able to move back into the fields of physics and mathematics. I speak using relative examples which I believe you can grasp. Understand, these electronics are not like general processors. They are more in the structure of RISC processors, only even more optimized to work with other specialized machines like DSPs. The firmware, hard logic structure, and loadable instructions are customized; the instructions/data are generated on a more traditional computer using specialized programs. Since the data from the CMOS sensor lines are analog in nature, and the compensation, equilibration, and all post-sensory information have to undergo extensive processing, the solution is quite complex and beyond the scope of further explanation on this thread.
  19. Sorry, Max, for the delayed response - been busy with Exec Producers and getting actor commitments and all that #&^4^ that goes along with it. I thought I had already answered your question in another forum. We're working with Vision Research and trying to get their Phantom 65 ready for this next shoot. I think you know we were doing testing on their 2K HD, the Phantom HD, back in December. If you recall, that was when we were discussing the 70mm lenses. I followed up on your suggestions, but it eventually led us to have the 70mm lens sets made for the 65mm sensor.
  20. I'm back on our production's final development and can't go into the mathematics, but basically the algorithms are matrix mathematics and other predictive algorithms. If you go to the Arriflex site at <http://www.arri.com/news/newsletter/articles/09211103/d20.htm> it gives you a basic concept of what is being done with the raw Bayer data. For a better understanding, there are a couple of papers that provide good mathematical insight into the process: http://graphics.cs.msu.su/en/publications/.../prog2004lk.pdf and http://www.ima.umn.edu/preprints/oct2006/2139.pdf I have used algorithms similar to some of the examples in the latter. My approach was to establish the boundaries of the domains for patterns using similar algorithms, then process using some of my brain-theory based algorithms, which are applied finite inductive sequence determinants along vectors from the primary pixel cluster. I've used these in reconstruction of lost datum or data in other applications. If the Arri site explanation is all you need, then fine. Otherwise, the other papers will give you a more concise understanding. I hope this helps. I've got to get back to work.
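As a concrete baseline before the predictive methods in those papers, here is a minimal bilinear demosaic of an RGGB Bayer mosaic in NumPy. This is textbook interpolation only, not the brain-theory algorithms described in the post; the layout (R at even rows/even columns) is an assumption for the example.

```python
import numpy as np

def conv3(plane, kernel):
    """3x3 convolution with mirror padding (mirroring about the edge pixel
    preserves the 2-pixel parity of the Bayer pattern at the borders)."""
    h, w = plane.shape
    p = np.pad(plane, 1, mode="reflect")
    out = np.zeros((h, w))
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * p[dy:dy + h, dx:dx + w]
    return out

def demosaic_bilinear(mosaic):
    """Bilinear demosaic of an RGGB mosaic: scatter each colour into its own
    sparse plane, then fill the gaps by averaging the nearest samples."""
    h, w = mosaic.shape
    r = np.zeros((h, w)); g = np.zeros((h, w)); b = np.zeros((h, w))
    r[0::2, 0::2] = mosaic[0::2, 0::2]   # R on even rows, even cols
    g[0::2, 1::2] = mosaic[0::2, 1::2]   # G fills the checkerboard
    g[1::2, 0::2] = mosaic[1::2, 0::2]
    b[1::2, 1::2] = mosaic[1::2, 1::2]   # B on odd rows, odd cols
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0  # quarter-density planes
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0  # half-density plane
    return np.dstack([conv3(r, k_rb), conv3(g, k_g), conv3(b, k_rb)])

# Sanity check: a flat mid-grey scene survives the mosaic/demosaic round trip.
rgb = demosaic_bilinear(np.full((6, 6), 0.5))
```

Bilinear interpolation smears edges and produces the familiar zipper artifacts; the predictive and edge-directed methods in the linked papers exist precisely to do better along gradients.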
  21. Spending time with family. An ice storm hit, I missed the flight to NY, so I got to play hooky for a few days. Gotta get to work too. Yeah, I know. The price tag is awfully low. It's good, but I know the foundry where the die is being made and it can't be cheap, and there are a lot more production costs to consider. Seems awfully low for the projected performance. It will be something if they can hold the price. Best of the new year to everyone. Got to get back to work.
  22. Carl, don't get upset. There is so much unnecessary bickering and chauvinism over image acquisition, instead of professional constructive exchange, that it is not funny. I'm sure David is just trying to keep issues from boiling over, as they often do. You guys are lucky to have a couple of guys like David who are willing to help so many with so much. For the life of me, I don't understand why he's not always on a feature. I don't know him personally, but I've asked a number of directors and cinematographers I know about him. All have said he was easy to get along with, highly competent, and very creative. He seems to be the least provocative of all the people with the most to contribute. Anyway, David probably assumed you understood more about the technology than you did. This technology is an old, well-known process being used in a rather new marketplace, which is evolving the process in a different direction. So you gotta think of it as an emerging technology. The problem with just sticking more processing power inside the camera is cost, power consumption, heat, etc. General processors don't do efficient work on high data rates and can't keep up in a serial (or multi-parallel serial channel) real-time environment. What you need is a processor specifically optimized for this type of data, one that is integrally configured to work with, or integrated on the same die with, a digital/analog signal processor. The R&D isn't cheap, and the process won't be cheap, unless there is a commodity push that will trickle up the funds to pay for high-performance high-end cameras. The technology will improve in CMOS, with more preprocessing going on board the die. At the moment it is cost, waste-heat, and energy prohibitive. Such may not be true in the future.
  23. And a compromise it is, looking at jaw-dropping 65mm. And in the real world we are forced to live within our means - and the means of the executive producer.
  24. Exactly, David. The more economically feasible data you can get, the better; resolution is only one attribute. Cell size well noted. The problem with just scaling more cells into a 35mm format is that the inherent noise stays about the same, reducing the S/N ratio and causing a loss of a couple of bits. Some of this can be corrected using a 5T sense-amp circuit with more feedback, and other mechanisms like a sense-line equilibration technique - or rather, the data line for an optical sense/storage cell. The cost would be to sensitivity, but there are ways to compensate. You're right. Even with such enhancements and the improvements they bring, you can't reduce the cell size to get resolution with appreciable results. You have to go with a larger cell; thus, you must move to a larger format. There are other problems (challenges) you create doing this, but there are many benefits as well, like a substantially higher S/N ratio (I anticipate, but I'll know in a few hours) which will substantially improve the dynamic range.
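The S/N-to-dynamic-range link the post leans on can be made concrete. A hedged illustration (the electron counts below are invented for the example, not measurements of any real sensor): dynamic range in stops is roughly log2 of the full-well capacity over the noise floor, so a larger cell with a deeper well at a similar noise level buys stops directly.

```python
import math

def dynamic_range_stops(full_well_e, read_noise_e):
    """Dynamic range in photographic stops: log2 of the maximum signal
    (full-well electrons) over the noise floor (read-noise electrons)."""
    return math.log2(full_well_e / read_noise_e)

# Invented example numbers: a big cell with 4x the well depth and only
# slightly worse read noise gains well over a stop of dynamic range.
small_cell = dynamic_range_stops(20_000, 10)
large_cell = dynamic_range_stops(80_000, 12)
print(round(small_cell, 1), round(large_cell, 1))
```

The same formula shows why a noise floor that "stays about the same" while cells shrink eats directly into the stop count.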
  25. Ummm, yes and no .. and it depends if you swallowed the red pill or the blue pill, Neo. The more data ... the more accurately you can resolve images ... more importantly ... the more accurate the data, the more accurately you (a good algorithm) are able to predict; moreover, you can extrapolate/interpolate information and predict that which extends beyond the data you have captured. You apply heuristics and other algorithms, which are deterministic functions, to resolve density or color, and end up with (hopefully) a more accurate image than you originally acquired. It is a naturally occurring phenomenon in nature and in synthesized visual recognition. I read in another thread someone referring to film as storing "real" optical images and digital cameras creating synthetic or artificial images, or something to that effect. David noted that by that person's own definitions, what we were seeing was artificial. This was an astute observation and corollary, and a point worth reiterating here. The point is that none of what film, present solid-state sensors, and the human visual sensory system acquire is precisely "real." Because in the real world, Neo, the energy which propagates as reflections from images, or as original emission, is discrete. Human vision, digital cameras, and film essentially sense three colors (generally speaking; there are methods of sampling more) and interpolate this information, or at least portions of it, into images we deem a reasonable facsimile of the "real" image. This is done in real time, first by our optic chiasma, where the bio analog/digital data is correlated (by algorithms). The data is then pre-processed through several more algorithmic steps at different locations and arrives at the visual cortex, where the final processing and portions of the recognition process take place. This is a rather simplified explanation, but the essential point is that we don't see the discrete real world anyway.
What we see through our eyes, and what cameras see, whether a silver-salt emulsion or present solid state, all depend on some algorithmic interpretation of the real world. So, Super 8 to IMAX algorithms and processing? It's possible, but it would not end up looking like 70mm or 120mm film frames; it could be better or worse depending on your perspective of what the outcome should be. It's all an illusion, but that's what film making, for the most part, is .. illusion. Like purple and lavender .. no such colour. They're just colors we made up in our evolution that served as a benefit for the organism. And the probability that such a process would be created for Super 8 ... about the same as the Cubs matching the World Series record of the Yankees in our lifetime. Which all begs the question, "what is real," Neo?