Everything posted by Aapo Lettinen

  1. Those are physical scratches, caused either by the magazine (almost certainly the source of the problem), the camera aperture plate (not that common), or the lab processing (can happen at times if they have problems with their equipment). Static electricity looks very different: it commonly appears as bluish or white "flashes", streaks originating from the edge of the film or some part of it, and present in only one frame per flash. Static caused by winding down the film in the darkroom commonly looks like this (the streaks may be sharper-edged and brighter if there is a lot of static discharge going on and the "sparks" are happening at the edge of the film):
  2. :lol: Another way would be to use the bulkiest, most ancient, super-heavy tripod plus sandbags with the camera so that it is impossible to run with it (most junkies have no idea how to detach a camera from a tripod, so they would have no option but to try to grab the whole 150+ lb package and "run" :D you would get the camera back when they ran out of steam a couple of seconds later B) )
  3. Defending your gear may not be the best idea; junkies and other shady people may carry knives, and you will get cut if you try to fight them, even if you have some decent MMA skills. It is always much safer to run away (and to be reasonably aware of your surroundings when walking around, so that you can see potential problems beforehand and discreetly cross to the other side of the street if there seems to be trouble ahead).
  4. This varies from country to country, and I believe there are even different laws in different US states about it, but... here in Finland the absolute requirement is generally that the subjects are presented in a completely neutral context when the material is published. A warning sign does not help: it is legal to photograph/shoot footage in public places, but publishing that footage is a completely separate issue and may be complicated or forbidden without clearances, especially if it is used for commercial purposes (where the context is almost never neutral anyway, and using a person's face for marketing purposes without permission is not legal here if the person is identifiable). For non-commercial uses and documentary/news journalism it is easier to film in public places than for fiction/drama production. A city's shooting permits help a little, but for commercial fiction/drama one generally still needs clearances for most uses. Finland is a completely different country with different laws (we don't have "fair use" provisions, for example, and it is technically not legal here to photograph public art, although it is otherwise legal to photograph in public places without permission), but the neutral-context requirement is a good starting point in any country. In the US you would still need to get the clearances to ensure that you can publish the material in the end and avoid legal problems. (If you absolutely have to use guerrilla material without clearances, you could try to fake shallow depth of field in post so that only your actor is recognizable and everyone else is very blurred and unidentifiable. I believe this should not cause you that much legal trouble...)
  5. With a standard 8mm camera (a "2x8mm" camera) you have 16mm-wide film stock which has perforations on both sides and double the number of perforations per foot compared to 16mm camera film. With the 2x8mm camera you first shoot one side of the film (8mm of the total 16mm width), and when done, you remount the takeup spool on the feed side but flipped upside down, so that you now expose the other side of the film, again an 8mm-wide exposure. So you run the raw stock TWO times through the camera to expose first one side of the film and then the other, and AFTER DEVELOPING the lab slits the exposed film down the center into two separate 8mm reels. If you look at the camera's aperture plate you'll see that there is a regular-8mm-size aperture which exposes only one side of the 16mm raw stock at a time. Because the stock has double the number of perforations compared to regular double-perf 16mm film, you can't use normal 16mm film with it; you need raw stock which is specifically perforated for a 2x8mm camera (that is, it has double the number of perforation holes compared to similar 16mm camera stock, and the holes are on both sides of the film).
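The double-run arithmetic above can be sketched with some typical standard-8 figures. The numbers below (a 25 ft spool, 80 frames per foot at the standard-8 frame pitch, 16 fps) are common reference values, not from the post itself:

```python
# Rough arithmetic for a 2x8mm ("standard 8") camera load.
# Assumed typical figures: 25 ft daylight spool, 80 frames/ft at the
# standard-8 frame pitch (double the 40 frames/ft of 16mm, matching the
# doubled perforation count), shot at 16 fps.
SPOOL_FEET = 25
FRAMES_PER_FOOT = 80
FPS = 16

frames_per_pass = SPOOL_FEET * FRAMES_PER_FOOT      # one side of the film
seconds_per_pass = frames_per_pass / FPS            # running time per pass
total_seconds = 2 * seconds_per_pass                # the film runs through twice
slit_footage = 2 * SPOOL_FEET                       # after slitting: two 8mm reels

print(frames_per_pass, seconds_per_pass, total_seconds, slit_footage)
# 2000 frames per pass, 125 s per side, 250 s (~4 min) total, 50 ft of 8mm film
```

This matches the familiar rule of thumb that a 25 ft standard-8 spool gives roughly four minutes of screen time at 16 fps.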
  6. Generally for that type of work you need a bag in which the camera can be transported fully assembled and ready to shoot (not a camera bag! just a basic normal bag which does not draw attention at all and does not look like it could contain anything expensive). If your camera is small enough it can even fit in your pocket, like I sometimes do with the GH4 and a CCTV lens... Then you can scout the area like a normal person without walking around with the camera visible (showing off your equipment is always a bad idea) and can instantly hide the camera when you've got the shot. I don't know your locations at all (I've never even visited the US), but generally "hide the camera whenever it is not being used for actual shooting" and "always operate so that you can start running for safety within a couple of seconds if something goes wrong" work quite well in urban areas all around the world :)
  7. I would personally use a real trained bird rather than trying to do a realistic one with CG, but if it really has to be CG then so be it; maybe it's a good exercise at least :rolleyes: On a low budget the best option quality-wise would, I think, be to jerk the real hat to match the bird's movement, not trying to do any more elements in CG than absolutely necessary... and to try to prevent z-axis rotation of the hat while the bird is on it, so that the bird can be added with 2D or planar tracking and you don't need to 3D track ("match move") the hat rotation to give the bird a matching perspective change :ph34r:
  8. What kind of decorative elements will you add? Those hat edges themselves are relatively easy to rotoscope as long as the background is not a similar colour. It might, however, complicate things if the hat is semi-transparent (depending on how clearly you see the background and bird through the hat surfaces). If you can manage with the bird not showing through the hat when it flies behind it, the shot would be much, much easier to do. (You can, of course, also play with opacity and masks in the compositing program to make the bird appear somewhat through the hat without needing to key through it... that would work quite well for a fast fly-by.) I was asking about the decorative elements because for rotoscoping, the edge shape, roughness and motion blur are extremely critical; simply adding some feathers to the hat for decorative purposes, for example, may make the rotoscoping 10x more difficult than it would otherwise be, depending of course on the background colour and texture and the tools available. But for a fast-flying object like the bird I would say the rotoscoping approach could be the most practical route; the bird's motion blur will mask most of the imperfections of the effect. Just be very careful not to do anything which makes the rotoscoping more challenging for you (like adding hard-to-rotoscope elements to the edges of the hat, choosing hat edge colours very close to the background colour, or shooting with a very shaky handheld camera which motion-blurs all over the place so that you see a clean edge only every 20th frame or so). You may need to add some decorative elements to the hat for tracking-marker purposes near the place where the bird is landing, plus two or more markers which show how the hat is turning around its z axis.
It would be best if the actress does not turn the hat around the z axis once the bird has landed; otherwise you would need to either 3D track the hat's z rotation or try to imitate it manually in a 3D program, both of which would be very challenging with that hat design.
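The two-marker idea can be sketched very simply: the in-plane (z-axis) rotation of the hat is just the change in angle of the line through two tracked points. The marker coordinates below are made up for illustration:

```python
import math

# Hypothetical sketch: estimating the hat's z-axis (in-plane) rotation from
# two tracked 2D markers. Coordinates are invented example values.
def z_rotation_deg(p1, p2):
    """Angle of the line through two tracked points, in degrees."""
    return math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))

# frame A: markers level; frame B: the hat has turned slightly
angle_a = z_rotation_deg((100.0, 200.0), (300.0, 200.0))   # 0 degrees
angle_b = z_rotation_deg((100.0, 200.0), (298.5, 224.1))
hat_turn = angle_b - angle_a   # rotation to apply to the 2D bird layer
print(round(hat_turn, 2))      # roughly a 7-degree turn
```

This only covers rotation in the image plane; as soon as the hat tilts in 3D, a planar or full 3D track is needed instead.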
  9. Shooting the hat separately would probably require motion control systems for both the hat and the camera, plus motion capturing the live hat. But I don't know for sure with the current info.
  10. A real hat would look better on a budget. But it would help a lot to see an image of what kind of hat it would be. If absolutely necessary, one could maybe make just the top of the hat CG; that would look more real than a fully CG hat and be easier to track. Best would be if you could use a real hat and just remove the decorations which can't be rotoscoped, and choose the decorations in such a way that there are suitable details which can be used as tracking points.
  11. Of course, if the hat does not move in 3D when the pigeon lands, you can do a separate elements shoot and imitate the pigeon landing with a keyable pole or similar which moves the hat decorations. But that requires the perspective, hat position and angle to be exactly the same as in the live-action shot, so it would be very difficult to shoot and may take hours. I would recommend doing the pigeon-landing interaction at the same time as shooting the live-action element, using a small thin pole or similar to move the hat elements when the pigeon lands, or some kind of remote-controlled system. Then you would not need to worry about getting an exactly matching hat-elements plate. You will, of course, still need a clean plate of the background anyway to mask out the pole and correct the hat corners if they need rotoscoping, but as said, you may manage with only the live-action shot and a background plate if the storyboard and hat allow it.
  12. Do you have a storyboard and stills of the scene, and especially of the hat, available? With composites, the elements needed depend a lot on which element is in front of/behind which, how practical or impractical it is to key or rotoscope, and how much the perspective changes. Without seeing photos I am highly sceptical that you would get any benefit from shooting the hat separately; the actress will probably move so much with it that you can't imitate its movement in the clean plate. And if the bird flies from behind the hat (the hat is first in front of the CG element and then the CG element goes in front of the hat plate), you will need to either bluescreen the actress WITH the hat, which would probably be very impractical, OR rotoscope the corners of the hat which are in front of the bird at the start of the shot (I would recommend this approach). Are you on a fixed camera, or can you arrange a camera setup which has no parallax error when panning/tilting at the start of the shot (if the camera is not dollying in any direction)? If so, you can shoot a clean plate of the background WITHOUT the actress or any extras/actors in the path of the flying bird, then the actress with the bird hat, IF the corresponding hat corners can be rotoscoped easily when the bird flies behind them. If the decorative elements can also be rotoscoped easily, it may be easiest to just shoot those two elements and rotoscope the bird behind the decorations when it lands. But that depends on a lot of variables, and you may need to 3D track the hat's movement when the bird is on it to get the bird's perspective change to look right. I recommend shooting on a high-quality camera and format and degrading it after the VFX is done to get the cellphone-camera look. What did your VFX supervisor say about the scene?
As said, it depends on how the bird flies, how the hat moves, and especially the colours of the hat elements compared to the background and to each other (to determine whether rotoscoping is practical or whether blue/greenscreen is needed). (For example, if you have feathers or other very difficult-to-rotoscope elements on the hat and a background of almost the same colour, you may need to use chroma/greenscreen and rethink the VFX.)
  13. Peaking and focus assist can fool you sometimes, especially when focusing with small HD or sub-HD onboard monitors and wide-angle lenses. The biggest issue, of course, would be a possible inability to focus to infinity if there is something wrong with the scales, but accurate focus scales are very important in lots of situations, especially deep-focus situations like the stopped-down wide-angle scenario. Some ACs tend to mostly trust the distance markings when pulling focus.
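The stopped-down wide-angle case above is exactly where the classic hyperfocal approximation makes accurate scales matter. A minimal sketch, with assumed example values (a 12 mm lens at f/8 and a 0.015 mm circle of confusion, a commonly quoted 16mm-format figure, none of which are from the post):

```python
def hyperfocal_mm(f_mm, N, coc_mm):
    # Classic hyperfocal approximation: H = f^2 / (N * c) + f
    return f_mm ** 2 / (N * coc_mm) + f_mm

# Assumed example: 12 mm lens, f/8, CoC 0.015 mm
H = hyperfocal_mm(12, 8, 0.015)
print(round(H / 1000, 2), "m")   # ~1.21 m
```

Setting the lens to this distance by the scale holds everything from about H/2 to infinity in acceptable focus, which is why a mis-calibrated scale can silently ruin deep-focus wide-angle work that peaking would never catch.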
  14. Part of the problem, of course, is that there is no absolute "middle gray" with video cameras, because there is gain and signal processing involved; and because there isn't only one way to expose an image, there can't be one correct middle-grey level for all shooting situations (of course there could be, but then all material ever shot would look like same-lighting-setup soap-opera material without any artistic variation). Shooting a grey card and expecting it to always land automatically on a certain IRE on monitors would basically mean there is no artistic lighting involved, only some kind of idealized TV-studio setup... The easiest solution would be to shoot log material (for example Log C) with the camera's standard Rec.709 LUT applied to the waveforms and on-set monitors (if calibrated correctly), expose according to those LUT-applied waveforms and monitors, then apply the exact same LUT in post-production so that the "middle gray" ends up exactly where you decided to expose it on set, and fine-tune the look after that. Your exposure decisions may call for middle grey to be, say, 1 stop under with your lighting setup and intended look. In the log recording the IRE levels can be anything and do not matter much as long as the image is recorded technically correctly, i.e. the important parts of the image are not clipping or suffering too much compression. If you only stare at the log waveforms and expose every single shot separately by them, you will have many times more work in grading to match the shots, because every shot is "graded in-camera separately for technically perfect levels" and thus they don't match each other visually at all.
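The "log IRE levels can be anything" point is easy to show numerically. The sketch below uses the published ARRI Log C (v3, EI 800) encoding constants; they are taken from ARRI's white paper and should be verified against current documentation before relying on them:

```python
import math

# ARRI Log C (v3, EI 800) encoding parameters, per ARRI's published curve.
# 18% gray does NOT land at a Rec.709-style middle-gray level: it encodes
# to roughly 39% of full signal, which is why a log waveform alone is a
# poor guide for creative exposure decisions.
A, B, C, D = 5.555556, 0.052272, 0.247190, 0.385537
E, F, CUT = 5.367655, 0.092809, 0.010591

def logc_encode(x):
    """Scene-linear reflectance -> Log C code value (0..1)."""
    return C * math.log10(A * x + B) + D if x > CUT else E * x + F

mid_gray = logc_encode(0.18)
print(round(mid_gray, 3))   # ≈ 0.391, i.e. about 39% signal for 18% gray
```

Applying the matching display LUT is what maps that ~39% log level back to a sensible-looking middle grey on the monitor, which is the workflow described above.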
  15. Did I understand the question correctly: you are asking how to grade log/raw material fully manually, without applying the correct LUT at all? It would be 100x easier to first add the correct LUT in grading and then continue from there. If your editing software does not allow importing LUTs, you can find a plugin workaround or use another software (much easier than trying to manually create a matching S-curve and colour settings from scratch).
  16. Hackintosh systems can be extremely unstable depending on the specific system you are running. As far as I have seen, it may be partially just bad luck if your specific Hackintosh is unstable, and the origin of the problems may be untraceable; you can never know if a system is stable until you have actually built and tested it, even if someone else has got good results with an exactly similar setup using all the same components. A friend of mine, for example, has a Hackintosh which freezes dozens of times a day for no particular reason, and there's nothing one can do about it, it seems. I personally do editing mostly from either ProRes LT or HQ Full HD offlines (made usually via Resolve to save drive space, so that I can use mobile drives for editing when not at the office) or original XAVC or ProRes files (4K or UHD) if the camera can record them directly. I can usually do all the editing on a slower Mac, even an off-the-shelf one, but for conversions, larger amounts of raw material and LTO backups I need lots of fast RAID storage (I mostly use 6-drive Thunderbolt RAID arrays, the Pegasus2 R6 ones, normally two 10TB or 15TB RAIDs per computer). That said, if you really need a very fast computer so that you can natively edit whatever hard-to-decode camera format there is, you will also need lots of very fast disk space to be able to read the data stream fast enough from storage in the first place. And that kind of material tends to take up A LOT of space, so you will need very fast RAID storage AND a lot of it: THAT will be expensive, even more so than building the best CPU+GPU editing computer possible.
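Some back-of-envelope math for the storage point above. The bitrates used here are rough assumed figures for illustration, not exact codec specs:

```python
# Convert a video bitrate into storage consumed per hour of footage.
# Bitrates below are rough, assumed example figures (not exact specs):
# "edit native" UHD ProRes HQ is assumed at ~700 Mb/s, a Full HD
# offline codec at ~100 Mb/s.
def tb_per_hour(mbit_per_s):
    # Mbit/s -> MB/s -> MB/hour -> decimal TB/hour
    return mbit_per_s / 8 * 3600 / 1e6

prores_hq_uhd = tb_per_hour(700)    # ~0.315 TB per hour of footage
offline_fullhd = tb_per_hour(100)   # ~0.045 TB per hour of footage
print(round(prores_hq_uhd, 3), round(offline_fullhd, 3))
```

At these assumed rates, a feature with 40 hours of rushes would need on the order of 12 TB of fast storage in the native format versus under 2 TB as offlines, which is exactly why offline/online workflows on modest Macs remain attractive.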
  17. The 1.2 and 1.5 come from the potential difference of the cell, which is determined by the electrode materials used; they are simply a property of the cell chemistry, chosen for practical and performance reasons, and can only be changed by changing the battery technology. Traditionally, 1.2 V is for rechargeables and 1.5 V for regular batteries (NiCd or NiMH, and carbon-zinc, respectively).
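A trivial sketch of the practical consequence: the same cell count gives a different nominal pack voltage depending on chemistry, which is why rechargeable-loaded gear can run at slightly lower voltage than its rating suggests:

```python
# Nominal per-cell voltages by chemistry (standard published figures).
NOMINAL = {"NiMH": 1.2, "NiCd": 1.2, "alkaline": 1.5}

def pack_voltage(chemistry, cells):
    """Nominal voltage of a series pack of identical cells."""
    return NOMINAL[chemistry] * cells

print(pack_voltage("NiMH", 4), pack_voltage("alkaline", 4))  # 4.8 6.0
```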
  18. I personally use 300W Fresnel units very often, and also the 650s, plus lots of 2kW Fresnels (which can sometimes replace small HMIs on low-budget shoots if you have enough power to run them). If shooting with fast lenses and high ISO, it might be better to add some smaller units, and especially something which can do a very narrow spot if needed, like 100W or 150W Dedolight-type units. If you need softer tungsten light you can also consider open-face units with chimeras, or bouncing, though for bouncing, tungsten Fresnels can be a bit easier in my experience, even if they waste some light compared to open-face. Maybe another 650W Fresnel and a 300W Fresnel next?
  19. The most common practice here is to use the final TV masters also for DVD and Blu-ray creation and for making VOD files, and these masters (usually ProRes HQ or 444 here) are checked by the main colorist of the feature film. The colorists are usually reasonably precise about the colour accuracy of these TV masters compared to the original DCP grade, so any colour/gamma variations are almost always the DVD/Blu-ray authoring company's fault. If there is enough budget, there may be a separate grade for the TV masters, checked by the DP and director, so they should know if there are intentional changes in it, like making the darkest scenes a little brighter for TV release to ensure the viewer can see the important elements. Workflows can vary from country to country, but it is most likely a screw-up if there are very noticeable changes in brightness/gamma/colour in the Blu-ray/DVD release compared to the TV masters or cinema release (excluding the darkest scenes, which may be intentionally brightened a little for TV release to compensate for the viewing environment).
  20. Generally it is always best to have the longest-FFD and smallest-diameter mount possible on the lens side, so that you have as many adapting choices as possible. For example, Nikon F is much better than Canon EF because it has a longer FFD and smaller diameter, so it can very easily be adapted to an EF-mount camera (an EF-mount lens is NOT possible to adapt to a Nikon F mount camera because of its much larger diameter and shorter FFD).
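The FFD side of the rule above reduces to simple subtraction: a plain mechanical adapter is only possible when the lens mount's flange focal distance exceeds the camera mount's, and the adapter thickness is exactly the difference. A minimal sketch using the commonly published FFD values (mount diameters and keying are ignored here):

```python
# Commonly published flange focal distances, in millimetres.
FFD_MM = {"NikonF": 46.5, "CanonEF": 44.0, "M4/3": 19.25, "E-mount": 18.0}

def adapter_thickness_mm(lens_mount, camera_mount):
    """Thickness of a simple glassless adapter, or None if impossible."""
    diff = FFD_MM[lens_mount] - FFD_MM[camera_mount]
    if diff <= 0:
        return None   # lens FFD too short: no simple mechanical adapter
    return diff

print(adapter_thickness_mm("NikonF", "CanonEF"))   # 2.5 (mm)
print(adapter_thickness_mm("CanonEF", "NikonF"))   # None
```

The same arithmetic shows why short-FFD mirrorless mounts like E-mount and M4/3 accept almost everything: nearly every SLR lens mount leaves 25+ mm of room for an adapter.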
  21. The Nikon mount on lenses is among the most versatile if you want a lens set compatible with as many camera models as possible. It can be adapted very easily to, for example, Canon EF, Micro 4/3, E-mount, C-mount, etc., and you can use it natively on Nikon cameras or even certain Nikon F-adapted film movie cameras (for example an adapted Arri 2C, Eclair Cameflex or Arri 35-3). Nikon F also generally has better-quality mounts with tighter tolerances and less play than EF mount, and it is much sturdier than M4/3 or E-mount. Adapters may have slight tolerances of their own, so you may need to shim and adjust them a little to get an absolutely correct FFD with the camera so that the distance markings are correct. Samyang/Rokinon lenses themselves, in my experience, have such poor calibration and wide tolerances that you may need shimming anyway to be able to use them in any kind of production environment, especially the wide angles, where FFD is more critical.
  22. One thing which could be done would be to move (dolly) the model instead of the camera, if the lighting etc. allows it. If you are experiencing operating problems with the ubangi setup this could solve a lot of issues, though with booming up/down at the same time it might not be practical... but it would allow you to, for example, lean on a platform over the model to operate the head, and it would be a lot easier to get stable shots. Of course you could attach the model to the boss/Mitchell mount of a hydraulic dolly so that you can also boom it up and down and keep the camera fixed :lol:
  23. You will need at least 16GB of RAM to do anything with it, preferably at least 32GB. Memory is relatively cheap and you can change it yourself; it's very easy to do. I have had a lot of problems with iMac hard drives: they tend to break about every two years, possibly because of the high temperatures inside the machine when in use. An SSD would be better for this type of machine, I think. If doing editing, you will need a complete backup operating system on a separate EXTERNAL hard drive so that you can boot from there if the main operating system crashes or the internal drive fries. A Time Machine backup is not enough; you really need a full, identical, bootable copy of the system drive, so that you don't need to reinstall anything and will lose only minutes instead of hours or days if anything goes wrong, or if, for example, a software or hardware change prevents the machine from booting correctly (driver updates, for example, can sometimes do this). I personally use Carbon Copy Cloner for making this type of bootable backup system drive; I can just boot from the external and all the software works just like with the internal drive. The best thing is that if the internal drive is replaced with a new empty one, I can boot from the external, format the new internal drive from the running external copy, clone the external drive back to the internal with Carbon Copy Cloner, and then boot from the internal. No need to reinstall anything unless there are software compatibility problems.
  24. Faster wide angles than the CP.2s, yes, that is quite handy... If they had made these lenses magnesium-bodied to reduce weight, they could be quite good for certain productions.
  25. I also have the impression that they have Sigma glass inside, either rehoused or custom-made. The only RED lens I have seen in use here in recent times is the 18-85 zoom; maybe it's a slightly better design, or alternatively the range is so useful that it's worth adjusting and repairing when needed. I personally haven't shot anything with the RED primes, but I tested them briefly a while ago. The biggest disadvantage seemed to be the size and weight: in that regard they are quite similar to Master Primes, but the optical quality is not even remotely close. If, however, they were more compact and lightweight, in the same range as the Ultra Primes, they could be useful if one has the time and skill to calibrate them in-house whenever needed. Ugly flaring though ^_^