
Phil Connolly

Everything posted by Phil Connolly

  1. The Christopher Eccleston Dr Who shows were de-interlaced Digi-Beta (DVW-790), and the more recent season is HD.
  2. Forgot to add: Symphony running on a Meridien system does its calculations at a 10-bit depth, and when using the Nitris hardware calculations are done at 12-bit, so it would produce higher quality images. But it would depend on what your rushes were and how strong a grade you apply to them - whether the extra precision would make a huge difference on 8-bit material such as DV.
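
      A rough sketch of that precision point (my own Python illustration, nothing from Avid): push a smooth ramp through a heavy grade while rounding to 8-bit and then 10-bit steps and count how many distinct levels survive - fewer surviving levels means more visible banding.

          import numpy as np

          ramp = np.linspace(0.0, 1.0, 4096)          # a smooth gradient, normalised 0..1

          def levels_after_grade(signal, bits, gain=0.25):
              # apply a heavy darkening grade, quantise at the working bit depth,
              # then grade back up and quantise again - count the distinct values left
              q = 2 ** bits - 1
              down = np.round(signal * gain * q) / q
              up = np.round(np.clip(down / gain, 0, 1) * q) / q
              return len(np.unique(up))

          print(levels_after_grade(ramp, 8))    # roughly 65 levels survive
          print(levels_after_grade(ramp, 10))   # roughly 257 levels survive
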
  3. Well, Nitris hardware is faster and can do more in real time. It really depends on your project and budget. Symphony systems running on older hardware are still very powerful, and there may be no major advantage to your project in using Nitris, especially for SD stuff - it depends on what you're trying to do and for what money. But any version of Symphony (Nitris or otherwise) is going to be much better than bog-standard Media Composer (Adrenaline included) for grading. I'm pretty sure Symphony will take all the bins and projects from the Avid offline (Media Composer, Xpress etc.) and reproduce them exactly. DS Nitris takes an AAF sequence - according to my Avid text book it does (but I've not really dealt with DS). So it should be fairly painless - but I'm not an expert on the permutations of different vintages of Avid systems that may create problems when moving projects between them. On stuff I edit, I offline on Final Cut Pro, and generally when I do get into an online suite it's a blagged Avid - so I have to go through the pain of exporting Avid-compatible EDLs. That's why I have in the past laid the FCP sequence off to tape or QuickTime and graded that. It's easier, if you're finishing on Avid, to cut on Avid, as I have learnt to my cost - there may still be some pain, there always is in post production, but less. But I can't afford Avid Xpress and I'm more used to the FCP interface.
  4. Hi. It depends on the grading system you intend to use. If you were to use something linear like a Pogle and do a straight tape-to-tape grade, it would make sense to dub up to Digi-Beta. It won't give you any extra quality, but it's a more robust format and able to cope with all the tape shuttling you'd have to do. The company I used to work for had a hard disk recorder as a source deck in its Pogle suite, so the format the ungraded master came in on didn't matter as it was usually dubbed onto disk, but for SD work Digi-Beta would be the norm. For non-linear grading, you could just supply an uncompressed QuickTime of the timeline. I've done this on a few projects that were cut at home on FCP and graded on Nitris, where I didn't want to deal with the pain of getting the EDLs to work between systems. The disadvantage of this and of linear grading is that you can only grade one video track, so if there are any dissolves from one shot to another you might not get as neat a grading transition as having a separate grade on each shot before the transition effect is added. The other route is using an EDL from your offline and reconforming in an online suite from the DV tapes, using something like Symphony Nitris, which is quite powerful on the grading front. If your cut was done on another Avid this is not too painful, but if you are going from FCP, Premiere etc. it can become painful.
  5. Many people don't like the 50fps look; for promos etc. the motion at 25fps is more filmic, and many people, myself included, prefer the look of 25/24fps. The Digi-Beta 970 is a much, much better camera than the GY-HD100. I've used both and the 970 in PAL is just lovely: colour rendition is much better, latitude is better, it's more sensitive, images are less noisy, better lens choice, shallower depth of field, more image control, sharp viewfinder... It's so good I'd be very keen to see it in an A/B test with the Varicam to see if there's much between them quality-wise. I think the JVC camera is good for the money, but you can't compare it with a proper broadcast camera costing 10 times as much.
  6. 4:3 in HD

      In FCP, have your sequence set to 4:3 and the clips to 16:9. When you drag your clips onto the timeline it should automatically preserve the correct geometry and create a 16:9 letterboxed image within a 4:3 frame. You now just need to resize this letterboxed image so that the top of the image touches the top of the 4:3 frame and the edges are cropped. Another method would be to produce a 16:9 master with 4:3-safe text and use a hardware aspect ratio converter to produce a 4:3 master. Many post houses have this sort of kit and can do the job in real time as a tape-to-tape process.
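
      A quick check of the geometry involved (my own numbers, assuming a PAL 720x576 frame): a 16:9 picture letterboxed inside 4:3 only fills three quarters of the frame height, so blowing it up to full height means a 4/3 (roughly 133%) resize and losing a quarter of the picture width off the sides.

          frame_w, frame_h = 720, 576                    # PAL SD frame (anamorphic pixels)
          letterbox_h = frame_h * (4 / 3) / (16 / 9)     # active height of 16:9 inside 4:3 -> 432 lines
          scale = frame_h / letterbox_h                  # resize needed to fill the height -> 1.333
          width_kept = 1 / scale                         # fraction of the 16:9 picture kept -> 0.75
          print(letterbox_h, scale, width_kept)
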
  7. You could use a party fogger, but generally the fog is quite dense and lumpy looking. I used one on a short last year and it was OK but not very even. Maybe a combination of a fog filter and a small amount of smoke from a party fogger would give it a bit of depth. Also, wafting the smoke through a desk fan helps even it out. It's not a perfect substitute for steam, but maybe workable with the right filtration and lighting. They're not expensive - I got one for less than $20 - so you could get one and shoot tests. The attached still shows some of the 'issues' :D I had with trying to get the smoke to look even.
  8. Good luck with your film, Daniel. If you're looking for actors, put an ad on Castnet: http://www.castnet.co.uk/ and shootingpeople.org. If you're up front about it being an ultra-low-budget production and just pay expenses and feed them, you should still be able to get some decent actors willing to help out to beef up their showreels. Always aim high with your actors and try to get the best you can; get the casting right and everything else falls into place.
  9. I think part of lighting fast is just being efficient: having your lighting planned out ahead of time and effectively delegating the work between the crew. I've definitely got faster at lighting, but that's mostly due to being better able to pre-plan lighting setups ahead of time. When I first started lighting I had a real trial-and-error approach, and would often have to keep swapping lights about because I'd picked the wrong size of fixture etc. Learning as much as you can about different types of lights and support gear will really help; using the right tool for a job will speed you up, especially if you have the right stand, as opposed to having to jury-rig something. But it's important not to rush too much, and sometimes you have to stand up to the 1st AD and director if you need more time. If you're up against it time-wise, you need to be able to pick which shots are worth taking the extra time over vs the ones that can be done more quickly - basically, pick your battles.
  10. You have to be careful, as some portals are terrible for fogging film, and there's the possibility that an interdimensional hyper beast might come through and eat your crew. :D
  11. Various Digi-Beta post workflows.

      Standard off-line/on-line edit:
      1) Capture the camera tapes onto the computer using a Digi-Beta deck, at off-line resolution. This is a lower quality mode used if you don't have enough storage space for full uncompressed on-line quality, e.g. lots of footage. You may be able to skip this step and capture at on-line resolution for a short project such as a music video. You will still need an SDI capture card to capture the footage from the deck, such as a Blackmagic DeckLink.
      2) Edit your off-line footage and create an EDL (edit decision list).
      3) Re-capture your footage from Digi-Beta as full on-line uncompressed footage. You will need a fast RAID-based hard disk and an SDI card, but you only capture the bits you need from your EDL.
      4) Your edit is now re-conformed using the EDL you made in the off-line, but using the on-line footage.
      5) Do any colour correction needed.
      6) Lay back to Digi-Beta.

      You could skip the off-line stage and edit directly in on-line mode; if you have a fast machine there's no reason to use an intermediate codec, and it saves re-capturing the footage and hiring the deck twice. Direct-to-on-line works best on short projects such as music videos where there isn't too much footage to deal with; the off-line/on-line process is used on longer projects such as documentaries where you may have 100-plus hours of footage - too much to fit on a disk at full quality. The above method assumes you have an SDI card and a RAID array on your PC, but it's still quite expensive as you have to rent and insure a Digi-Beta deck twice.

      What might work out cheaper, and what I tend to do on Digi-Beta projects:
      1) Shoot Digi-Beta.
      2) Go to a facilities house and get the Digi-Beta tapes dubbed to DVCAM with matching timecode.
      3) Edit the project at home using the DV tapes, using a cheap consumer camera to capture from.
      4) Generate an EDL for the project.
      5) Go to a facilities house that does Digi-Beta grade online editing.
      6) Get them to do an online conform from the Digi-Beta camera tapes using your home-generated EDL (but test that their kit understands the EDL files your edit package at home spits out).
      7) Colour grade and have them lay off to Digi-Beta/DV/SP, whatever.

      This approach may seem expensive, as on-line edit sessions can be pricey, but you may be able to get a deal. Overall I find it to be cheaper, as your home edit kit can be cheap - no need for RAID arrays and SDI cards - and renting Digi-Beta decks is quite expensive, and the cost of insurance for the rental may stack up. Basically you need to price each method up with the cost of things in your area.
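
      For what it's worth, the EDL itself is just a text file. The sketch below (my own toy example of a CMX3600-style event line, not output from any particular system) shows the information it carries - the source reel plus source and record in/out timecodes - which is all the online suite needs to re-capture only the used sections from the camera tapes.

          import re

          # event no, reel, track, edit type, source in/out, record in/out
          EVENT = re.compile(
              r"^(\d{3})\s+(\S+)\s+(\S+)\s+(\S+)\s+"
              r"(\d{2}:\d{2}:\d{2}:\d{2})\s+(\d{2}:\d{2}:\d{2}:\d{2})\s+"
              r"(\d{2}:\d{2}:\d{2}:\d{2})\s+(\d{2}:\d{2}:\d{2}:\d{2})$"
          )

          line = "001  TAPE01  V  C  01:00:10:00 01:00:20:00 10:00:00:00 10:00:10:00"
          m = EVENT.match(line)
          if m:
              event, reel, track, edit, src_in, src_out, rec_in, rec_out = m.groups()
              print(reel, src_in, src_out)   # which tape to load and which section to re-capture
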
  12. Hi Stuart. Not done this myself, but I have worked as a QC person for broadcast and dealt with lots of bad NTSC-to-PAL conversions in my time. To get the NTSC look I would:
      1) Lift the blacks. NTSC blacks sit at 7.5 IRE setup compared to 0 for PAL. A good conversion would fix this, but I've seen plenty of bad ones that don't - mmm, lovely grey blacks.
      2) Make the images a little soft. This could be done in post by re-sizing the image to a lower resolution in After Effects or something, or maybe in camera with a filter in front of the lens.
      3) The big giveaway of NTSC converted to PAL is motion artifacts (judder on pans etc.). The less good standards converters go from 30fps to 25fps by (in the worst case) skipping frames, causing judder, or by blending frames, causing a smearing effect with double edges on fast motion. This is really noticeable on NTSC material that was shot at 24 frames and telecined with a 3:2 pulldown, e.g. earlier PAL conversions of The Simpsons. The ideal way to get this effect would be to shoot 60i/30p and do a standards conversion to 25fps - the Z1 perhaps would be a good candidate. Or shoot PAL, standards-convert the material to NTSC and then back to PAL; use a cheaper standards converter and the images will look suitably nasty.
      4) NTSC colour - easy to play with in the grade.
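
      A rough sketch of points 1 and 3 (my own illustration, assuming frames held as numpy arrays normalised 0..1): leave the NTSC setup in the blacks, and do a crude blended 30-to-25fps conversion to get the smeary, double-edged motion.

          import numpy as np

          def lift_blacks(frame, setup=0.075):
              # squeeze 0..1 into 0.075..1, mimicking an un-removed 7.5 IRE NTSC pedestal
              return setup + frame * (1.0 - setup)

          def blend_30_to_25(frames):
              # map each output frame to a fractional position in the 30fps source and mix the
              # two nearest frames - fast motion picks up double edges, like a cheap converter
              out = []
              for i in range(int(len(frames) * 25 / 30)):
                  pos = i * 30 / 25
                  a = int(pos)
                  b = min(a + 1, len(frames) - 1)
                  frac = pos - a
                  out.append((1 - frac) * frames[a] + frac * frames[b])
              return out

          clip = [np.random.rand(576, 720) for _ in range(30)]        # one second of dummy 30fps frames
          pal_look = [lift_blacks(f) for f in blend_30_to_25(clip)]   # 25 lifted, blended frames
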
  13. I've used DLTs for mastering dual-layer DVDs, but I don't think they're worth the hassle for single-layer discs. I've had a few projects glass-mastered from DVD-R (authoring discs) with no problems. DLTs are being used in broadcasting, but mostly as an archive format. Typically in server-based transmission, programmes are still delivered on Digi-Beta (D5, HDCAM, whatever) and cached to a server for transmission, then kept on the server for as long as the programme is needed. Afterwards the file is recorded to DLT for long-term archiving, as it's easier and cheaper to store than the larger Beta/D5 tapes. But I would be surprised to hear of a broadcaster requesting delivery on DLT, as there are no standards (at the present time) to define how the data is encoded, either on servers or for archives: a broadcaster may have files at one bit rate for commercials and another for programmes, and one channel may use MPEG-2 at 50Mbps while another could use 15 or 25Mbps. I'm not sure what saving delivering on DLT would bring to a channel - you would still have to copy it onto a server for broadcast. Sure, it's faster than real time, but doing it from videotape is not really labour intensive (especially with auto-loading systems), and there's less to go wrong; I'm sure the broadcaster would get loads of unplayable DLTs with files in the wrong format.
  14. Well, you can get ND filters of various strengths from www.leefilters.com. They do filter swatch books - I would get one of those so you can try lots of different filter strengths to see if it works before buying, and which strength works best.
  15. I think the D-20's a cool camera, but it's not the prettiest of beasts. It turns heads, but in a "what the hell is that?" sort of way.
  16. If you're looking for that studio TV look of presenters to camera, soft flat light on your talent seems to be pretty standard. I'd still try to get some backlight in, which can be harder. Depending on the style, exposing everything brighter can help. I did a comedy pilot that had to spoof the look of a shopping channel. The look really came together when I overexposed, compared to how bright I would normally like skin tones, just so they looked a tiny bit plasticky and unnatural (shooting on Digi-Beta - I also switched to 50i, as the rest of the prog was 25p, to help sell the difference). I still aimed not to clip the whites, but since I lit flat there were not too many hot spots to worry about. The space I had to light was quite small, so I got away with just a 1.2K HMI bounced off foam core as key, and soft fill from a couple of diffused redheads (full CTB). I got a slight double shadow on some objects but decided to go with it - it's part of the look :blink:
  17. If you fancy a trip across the pond, there is a Cinerama installation in the photography museum in Bradford, UK. Every March they have a Widescreen Weekend as part of their annual film festival and show a wide range of rare 70mm and some Cinerama titles. I went a few years ago and saw "How the West Was Won" in three-strip and also a new 70mm print of "2001" projected on the curved Cinerama screen - that was pretty immersive. They have also screened "This is Cinerama", "Cinerama Holiday" and "Windjammer" in three-strip as part of the Widescreen Weekend. This site: www.in70mm.com has more info on the Bradford Cinerama set-up and lots of other information about large-format screenings.
  18. Hi Suze. My experiences with applying to the NFTS were similar to Tim's, again applying for cinematography - it took ages to hear the bad news. Same with my flatmate: he applied for documentary direction and got in, but he didn't know for certain for even longer, as the interview and workshop process is quite long as well. Good luck - they are both great courses. Over the last 6 months I've worked with a few NFTS graduates from the sound courses (just had a promo I directed mixed by one of their sound post-production graduates - it went really well); they seem to know their stuff and were complimentary about the course, and the school's dubbing theatre is pretty nice as well. Phil
  19. http://www.digital-heaven.co.uk/fcplugins/...incarnation.php These guys do a Final Cut Pro plugin called Re-incarnation that seems to do exactly what you need. At £5.98 it won't break the bank.
  20. I would go directly to the manufacturer: http://www.leefilters.com Tel: 01264 366245. They would be able to put you on to your local distributor - there is a form to do this on their website.
  21. It may be that the film school has an in-house policy on how freelancers get paid - sometimes it's slightly different from the industry norm and you get paid as a staff member of an educational institution, with all the red tape that goes with it. I worked on a student project as a DP last year at the NFTS in the UK, which meant invoicing the school's payroll department rather than going through the students. It seems they pay freelancers at the same time as staff members, so I had to wait until the first "payday" after the shoot finished. Annoying - but to be honest it was a bonus to get paid anything on a student project. You may be able to get cash directly from the production budget, but some schools seem to have layers of paperwork, partly for insurance purposes, and this requires official contracts. In my case I also had to go on the school's vehicle insurance to drive the equipment van, which resulted in even more paperwork.
  22. I would agree with the comments on the 970. I shot a project on it last November and was very impressed with the images. It was a nice surprise to get the camera, as I'd spec'd a 790, and when I turned up at the hire company they gave me the 970 - which was nice (I had heard about the camera but didn't know it was available). I saw the footage projected on a Christie D-cinema system and it looked very good, much better than I expected from Digi-Beta. It handles highlights very well for video - 14-bit signal processing has got to be a big help. It's probably the best SD camcorder available. I wish I could use it more often, but I don't think many hire companies will invest in them, as there are so many 790s in service, and when they (eventually) get replaced it would probably be with some flavour of HD.
  23. There would be less motion blur if a shot at 24fps is sped up in post rather than just shooting at a lower frame rate. Stuff shot at 24fps and then sped up by dropping frames tends to look, well, sped up, and can look a bit juddery, rather than like an object actually moving at that speed. For the most accurate motion blur feel, the frame rate should match the speed of the motion - so if the camera drop is done at 10% of actual speed, then the frame rate should be around 2.4fps. This not only gives the correct speed but the right amount of blur to sell the move - otherwise you would be into adding blur in post, doable but extra hassle. Also, remember that the camera has to accelerate as it falls, at 9.81 metres per second per second - that's if my knowledge of physics hasn't deserted me. Phil
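
      A quick sanity check of the numbers above (the 10% speed and 2.4fps figures are from the post; the 180-degree shutter is my own assumption): for a 24fps timeline, the capture rate is 24 x the speed factor, and the longer per-frame exposure on the slowed-down action is what gives the blur a real-speed drop would have.

          playback_fps = 24.0
          speed_factor = 0.10                              # the drop is performed at 10% of real speed
          capture_fps = playback_fps * speed_factor        # -> 2.4 fps restores real-time speed on playback

          shutter_angle = 180.0                            # assumed shutter
          exposure = (shutter_angle / 360.0) / capture_fps # ~0.21 s per frame; on the slowed action this
                                                           # matches the blur of a real-speed drop at 1/48 s

          g = 9.81                                         # free-fall acceleration, m/s^2
          def fall_distance(t):
              return 0.5 * g * t ** 2                      # metres fallen after t seconds from rest

          print(capture_fps, exposure, fall_distance(1.0))
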
  24. Sorry, I should have been clearer in my post that in this case anamorphic refers to the geometry of a video signal. Generally in the UK, on many channels, anything wider than 4:3 is sent as a 16:9 anamorphic signal - meaning productions that are wider than 4:3 but less wide than 16:9 will have black bars at the sides rather than the top and bottom. E.g. 1.66:1 would be a full-height anamorphic image with thin black bars at the sides, rather than a 1.66:1 letterbox within a 4:3 frame. 1.85:1 and 2.35:1 are letterboxed within a 16:9 anamorphic frame - but very few monitors show the black bars present on a 1.85:1 master. Exceptions to this are programmes that mix aspect ratios - then a choice between 4:3 linear-type geometry or 16:9 anamorphic geometry is made based on which format is mostly used. Standard-def channels in the UK also send out a control signal with the images telling the receiver whether to switch to 4:3 or 16:9 FHA.
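
      A rough calculation of why those 1.85:1 bars are so easy to miss (my own numbers, assuming a 720x576 anamorphic 16:9 frame): the letterbox bars for 1.85:1 come to only about 11 lines each, against roughly 70 lines each for 2.35:1, and a 1.66:1 pillarbox costs around 24 pixels a side.

          frame_w, frame_h = 720, 576        # anamorphic 16:9 SD frame
          frame_ar = 16 / 9

          def letterbox_bar(picture_ar):
              # lines of black top and bottom when a wider picture sits inside the 16:9 frame
              return round((frame_h - frame_h * frame_ar / picture_ar) / 2)

          def pillarbox_bar(picture_ar):
              # pixels of black each side when a narrower picture sits full height inside 16:9
              return round((frame_w - frame_w * picture_ar / frame_ar) / 2)

          print(letterbox_bar(1.85), letterbox_bar(2.35), pillarbox_bar(1.66))   # ~11, ~70, ~24
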