
Your ranking of these hard drive possibilities


Max Field


Trying to gather as much info as possible before I get the funds for a new Windows desktop machine.

The criteria are: you have either ProRes or REDcode footage placed on the given drive. Taking into account stability, longevity, preview frame rate, and render speed, how would you rank them from best to worst?

 

We're assuming all of the other hardware in each theoretical Windows machine is identical and the storage space is 4TB.

The workload is rendering only 3GB a week. Ignore cost for the moment.

 

Options:

- 10,000rpm HDD

- Internal SSD (read/write speed of 550MB/s)

- External HDD RAID connected via USB 3.1

- Internal HDD RAID array

 

Thanks for your input, whatever it may be.


- 10,000rpm HDD

They are okay, but expensive. In reality, if a 7200rpm drive won't do the job, your best bet is to move to a RAID array of 7200rpm drives, or possibly to SSD. Many of these drives border on being smart-HDDs, which are essentially HDDs with a small SSD section for quick reads and writes. Once you get above 7200rpm in HDDs, you really should start looking at a RAID instead.

 

- Internal SSD

Best bet for the operating system, program files, and possibly cache and project files. Loads fast. The big issue with SSDs is that reading from them is fine, but writing to them eats up their lifespan quickly. They are also very expensive and offer little storage for the price. If you can afford SSDs, they are great, but an HDD is much cheaper per gigabyte and still generally more reliable.

 

- External HDD RAID <- SUGGESTED SETUP (see note below)

These are great if you're working with vast amounts of footage and need either speed (RAID 0) or redundancy (RAID 1). You can also set them up in a RAID 10 (1+0) configuration, which combines the two. This is a great, low-cost alternative to SSDs that has been used for decades in professional editing suites.

 

As for USB 3, I'd still stick with a SATA 3Gbps or 6Gbps connection for RAID. I've seen people praise USB 3, but I've never seen really great speed from it myself.

 

Note: Even if you use an external RAID, I'd still suggest one internal SSD to store your operating system, program files, and the like. Your patience will appreciate the quick load times.

- Internal HDD RAID

Same benefits as above, except that you're limited to however many drives the case will hold.

Edited by Landon D. Parks

  • Premium Member

- 10,000rpm HDD

The Western Digital VelociRaptor has been my personal drive of choice since its inception. Prior to that, I always used 10,000rpm Seagate Cheetah drives. The great thing about 10k drives is that they use more platters than their same-capacity counterparts, and the VelociRaptor is a 2.5" design, so the access time is very fast. Since there are 4 heads and 2 platters, it's very quick to find the next piece of data. Once found, it uses a standard 6Gbps SATA connection and can push upwards of 200MB/s, which is very good for a single drive.

I've installed over 100 of these drives at multiple locations in Hollywood, and I use them exclusively myself. I've had ONE failure in all those years, and it was on my own personal machine, with a drive that was 5 years old and had been on almost 24/7 all that time. I was able to recover the data no problem and buy a new drive. I'm a big fan of this drive and have nothing but glowing reviews of its consistency and speed.

 

1TB is $379 retail.

 

- Internal SSD

SSDs are a very cool technology, but it's a flawed technology that's always on the edge of some sort of failure. The problem comes down to how memory can store data long term. There is only one practical solution, and that is flash memory. The problem is, flash memory is very slow. The workaround is to take multiple pieces of flash memory, stick them on a board with a RAID controller, and make a single logical volume out of them. This requires a great deal of logic to work properly, and because it uses a RAID 0 or "striped" set, if any one of the flash circuits hangs because it's busy or stops working, the whole SSD stops working.

The other big problem is that unlike hard disks, which simply "forget" a particular block ever had data on it when you erase something, SSDs need to physically erase blocks before new data can be written there. This is fine for a camera capturing large files that are erased afterwards. It is NOT fine for an operating system or editing system where files are constantly being read and written, especially small files. All computers nowadays use virtual memory as well, which is stored on the boot volume. This is a HUGE problem with SSDs, and when you fill SSDs, they tend to stop working. They can also hang all of a sudden as they erase one or more of the flash memory circuits. So this is a big problem, and it's why I rarely recommend SSDs to anyone on a desktop computer. With laptops you don't have a choice; the SSD's benefits outweigh the detractors.

 

1TB SSD is $329 retail.

 

- External HDD RAID

Just like SSDs, external RAIDs use similar technology. They take slow 7200rpm SATA drives and stripe them together into one logical volume to get the speed up. There are three major types of RAID: 0, 1, and 5.

 

- RAID 0 stripes all the drives in your enclosure together as one logical volume, just like the SSD. This gives you maximum throughput without any redundancy. It allows you to run 2 or more drives and get some substantial speed increases.

 

- RAID 1 is a mirror. It mirrors whatever data is on one set of drives onto another set of drives, so if your first set fails, the second set serves as a backup. This is most commonly used in large server installations where the client can't afford to be down. They will take a full RAID 0 enclosure and duplicate it onto another RAID 0 enclosure, so whatever is done to one enclosure is done to the other.

 

- RAID 5 is what most RAIDs today are. Basically, RAID 5 spreads the data between the drives in such a way that if any ONE of the drives fails, you can replace it and get your data back. It's an amazing technology, but it has a lot of gotchas. One of the big problems is speed: it's not much faster than a two-drive RAID 0 in any array smaller than 5 disks. Above 5 disks the speed increases, but not dramatically until you get above 12. The second problem is downtime. If a drive fails on a RAID 5 array, generally the array is down for the count. Some arrays allow for "proxy" viewing of data, but it's very slow. It can then take upwards of 24 hours to rebuild the missing data onto the replacement drive. The final problem (which affects all RAIDs) is the simple fact that if you buy a bunch of drives at the same time from the same vendor, they all tend to go bad around the same time. So the moment one drive goes bad, you really ought to replace ALL the drives, which is hard to justify when only ONE drive has actually failed.
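To make the trade-offs between those three levels concrete, here's a minimal sketch of usable capacity and fault tolerance, assuming four identical 2TB drives; the figures are illustrative, not a recommendation:

```python
# Usable capacity and fault tolerance for n identical drives of size_tb each.
# Illustrative sketch only; real arrays also differ in write overhead
# (parity math on RAID 5) and rebuild behavior.
def raid_summary(level, n, size_tb=2.0):
    if level == 0:   # striping: full capacity, zero redundancy
        return n * size_tb, "no failures survivable"
    if level == 1:   # mirroring: half the capacity, one full copy
        return n * size_tb / 2, "one failure per mirrored set survivable"
    if level == 5:   # striping + parity: one drive's worth lost to parity
        return (n - 1) * size_tb, "any single failure survivable"
    raise ValueError(f"unhandled RAID level: {level}")

for level in (0, 1, 5):
    cap, fault = raid_summary(level, n=4)
    print(f"RAID {level} with 4x 2TB drives: {cap:g}TB usable, {fault}")
```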

 

So in the long run, any RAID solution you buy will need to be backed up anyway, and if that's the case, you might as well use RAID 0 and get the throughput.

 

Now the next big question is connectivity. FireWire 800 and USB 3 are very slow; I wouldn't even think about using them with any RAID solution. eSATA would be where I'd start, and there are some OK eSATA RAIDs on the market. Most people in the industry use Fibre Channel, SAS, or Thunderbolt for their near-line storage. For more off-line storage, network solutions are available, but the cost skyrockets.

 

eSATA enclosures are around $200 - $8000 with card + drives.

SAS enclosures are around $300 - $600 with card + drives.

8Gb Fibre Channel enclosures are around $1400 - $4000 + card + drives.
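For a rough sense of scale, here are the nominal link rates of the interconnects mentioned above; these are assumptions about raw line rate, not measurements, and real-world throughput is lower (USB of this era in particular rarely got close):

```python
# Nominal link rates for common storage interconnects, in MB/s, before
# protocol overhead. Real-world throughput is always lower.
nominal_mb_s = {
    "FireWire 800":           100,   # 800 Mb/s
    "eSATA (3 Gb/s)":         300,
    "USB 3.0 (5 Gb/s)":       500,
    "SATA / SAS (6 Gb/s)":    600,
    "Fibre Channel (8 Gb/s)": 800,
    "Thunderbolt (10 Gb/s)": 1000,
}
for name, rate in nominal_mb_s.items():
    print(f"{name:24s} ~{rate:4d} MB/s")
```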

 

The other way to go is all internal. PCI-based RAID cards are available for PCs, allowing drives to be installed inside your tower. It's a simple, inexpensive solution and one I've been using for my own computers for years. My current 3-drive internal RAID 0 is 12TB with around 500MB/s throughput, which is not bad for a bunch of drives inside your computer.

 

My advice is to put everything inside the tower. Buy a 10k RPM boot volume, which will speed up your normal everyday tasks. Then put a small RAID array inside the computer using a PCI controller. If you can do 3 or more drives (based on space in the case), that's the direction I'd head. Two drives aren't quite worth doing a RAID for; it's not quite fast enough.

 

I hope some of that makes some sense.


We use internal RAIDs in most of our machines. For reference, we mostly work with DPX sequences at 2K and 4K. Some machines have different requirements than others, but as a general rule, here's how we do it:

 

1) Inexpensive dedicated RAID card. A SATA 3Gbps PCIe card can be had for next to nothing these days, and with 8 drives on it you can easily play 2K in realtime (assuming 30fps or less), or 4K at slightly less than realtime. Again, that's DPX, which is about 10x the bandwidth of an equivalent-resolution ProRes file. This can be made faster with a 6Gbps RAID card, but that's not necessary if you're primarily dealing with compressed formats. Just about any drive you buy today is going to be 6Gbps, which will work fine with both 3 and 6Gbps cards.

 

2) The RAID is typically a RAID 5, or if there's room for additional drives, a RAID 6. RAID 5 is fine in most cases. Never RAID 0.

 

3) Cheap hard drives. Spending a lot of money on super-fast drives is a waste; there's simply no point. We usually buy whatever is on sale at the time, so we have a variety of disks across a dozen machines. Stick with the same size/brand/speed in each RAID, though over time you'll find you can't get the exact models, and putting in a rough equivalent is just fine. Lately we've been buying WD Green drives at 5400RPM. The notion that they have to be fast is false, unless you're only using one drive at a time. Once you put them in a RAID with 6-8 other drives, most of the benefit of faster RPMs goes away and the bottleneck becomes your RAID card, not the disk speed (see the rough numbers below).
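A quick sketch of that bottleneck argument; the per-drive and card figures are assumptions for illustration, not benchmarks:

```python
# Why faster spindles stop mattering in a wide array: aggregate platter
# throughput quickly exceeds what the RAID card / bus can deliver.
drives = 8
card_limit_mb_s = 800    # assumed practical ceiling of an older 3Gbps card

for rpm, per_drive in (("5400rpm", 110), ("7200rpm", 150)):
    raw = drives * per_drive                   # what the platters could supply
    delivered = min(raw, card_limit_mb_s)      # what actually gets through
    print(f"{drives}x {rpm}: {raw} MB/s raw -> ~{delivered} MB/s via the card")
```

Both spindle speeds saturate the card in this scenario, which is why the cheap drives keep up.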

 

Remember, we're talking about DPX sequences here, so for anything that's compressed, such as ProRes, the specs above are overkill. FWIW, as I type this, I'm capturing an HD film to a single 7200RPM drive as a ProRes 422 HQ file (1080p/23.98).
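To put rough numbers on the DPX-versus-ProRes gap, here's a back-of-envelope sketch. It assumes 10-bit RGB DPX (three 10-bit channels packed into 32 bits per pixel) and the commonly quoted ProRes 422 HQ bitrate at 2K/24:

```python
# Back-of-envelope check on the "DPX is ~10x ProRes" bandwidth claim.
width, height, fps = 2048, 1556, 24            # 2K full-aperture
dpx_mb_s = width * height * 4 * fps / 1e6      # 4 bytes/pixel -> ~306 MB/s

prores_hq_mbit = 220                           # ProRes 422 HQ at 2K/24, approx.
prores_mb_s = prores_hq_mbit / 8               # ~27.5 MB/s

print(f"2K DPX stream: ~{dpx_mb_s:.0f} MB/s")
print(f"ProRes 422 HQ: ~{prores_mb_s:.1f} MB/s")
print(f"Ratio:         ~{dpx_mb_s / prores_mb_s:.0f}x")
```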

 

We've been doing this for the past 16 years on probably 15-20 different machines over that time, from SCSI to IDE to SATA and SAS. You have to expect that drives *will* fail, which is why we don't use RAID 0. With RAID 5 you get most of the performance benefit of RAID 0, with some built-in redundancy. If one dies, you can pop in a replacement and rebuild, while running the RAID in degraded mode if you need to. We usually do rebuilds overnight.

 

For some machines, like where we store client projects semi-long-term, we use RAID 6, which allows two disks to fail before you lose data. We keep a stack of spare drives on the shelf just in case. Over the past year we've had to replace 3 disks. In total we have about 60 drives in various RAIDs throughout the office, so that's not bad. And the cost of those replacement drives is far less than the cost of expensive 10k drives or "enterprise" drives.

 

We don't use SSDs for RAIDs because it's not cost effective. We do use them for system drives, and they make a world of difference in OS responsiveness and boot times. For system drives, keep a clone of your system disk handy so if/when it fails you can just swap it out and keep going. Don't store anything on the system disk that wouldn't be on the clone (that is, don't put stuff in the personal documents folder inside your user account; keep it in a folder on the RAID). That way the system disk can fail and it won't matter when you pop in the clone.

 

We have 11 machines with SSD system drives in them, some of which have been running for 3+ years. We've had one fail in that time.

 

If you have a recent PC or Mac, doing the RAID in software isn't as much of a problem as it was back in the day, when those CPU cycles couldn't be spared. I personally like having a dedicated RAID card because it has other features (like sending you an email or sounding an alarm when a drive fails).

 

And if you don't believe me that those more expensive drives aren't worth it, check this out: https://www.backblaze.com/blog/enterprise-drive-reliability/

Edited by Perry Paolantonio

 

Thanks for the knowledge, guys. Is there a drastic difference in the hard drive speed required for REDcode 1080p versus ProRes 4444 at 1080p?

 

 

I edit 2K DNxHR 444 12-bit all the time on my PC, which is probably similar to that flavor of ProRes. I've never had any trouble editing 1-2 streams off a single 7200rpm drive. I've never tried more than 2 streams, though, so that might well necessitate a RAID. I have a RAID anyway, even though I don't really need it for speed.

 

REDcode I have no experience with.

Edited by Landon D. Parks

  • Premium Member

Thanks for the knowledge, guys. Is there a drastic difference in the hard drive speed required for REDcode 1080p versus ProRes 4444 at 1080p?

They're pretty close to the same data rate. Both can be tricky without more than two drives in a RAID of some kind to speed things up. Also, REDcode SHOULD be transcoded, because RED's file structure is a mess and it makes life way easier to just transcode to ProRes. Far better to just shoot ProRes 4444.

 

 

And how would any of you compare the speed of a RAID 5 array with 7200rpm drives to a lone 10kRPM drive?

No difference. RAID 5 is more taxing in general because parity data has to be computed and written across the drives. The disk spindle speed only really helps with single-drive access, like Perry said.

 

Remember, the more disks you add, the faster RAID 5 gets. But you'd have to get OVER 5 drives to make that worthwhile. Anything at 5 drives or fewer in RAID 5 is going to be A LOT slower than RAID 0. I'm not saying RAID 0 is smart, but if you back everything up religiously, it's not a big problem. I also cycle my drives every 2 years. Whenever I finish a big job, the old drives get pulled out and put into a box, and new drives are installed. This way, there is less opportunity for the drives to go bad whilst working.


The rule of thumb for RAID is that you can expect roughly 50-80% of a drive's speed for each additional drive you add. Running two drives in RAID 0 will not really equal 2x the speed of one drive. There is always a bit of fall-off, primarily due to the extra information needing to be processed by the computer. You can also clog the 'pipes', so to speak, resulting in a speed cap.
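Here's that rule of thumb as a simple model; the single-drive speed and the efficiency factor are illustrative assumptions, not measurements:

```python
# "50-80% per additional drive" sketched as a per-drive efficiency factor:
# the first drive runs at full speed, each extra drive adds a fraction.
def raid0_estimate(n_drives, single_mb_s=150, efficiency=0.7):
    return single_mb_s * (1 + efficiency * (n_drives - 1))

for n in (1, 2, 4, 8):
    print(f"{n} drive(s): ~{raid0_estimate(n):.0f} MB/s")
# Two drives land around 1.7x a single drive, not 2x.
```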

 

RAID needs to be approached carefully. It's also advisable to only use network-ready (NAS-class) drives in a RAID. Western Digital Blues and Blacks are not really appropriate for RAID, and I have had bad luck with Samsung. The only drives I use now are Hitachi server-class.

 

I have only ever owned a 10,000rpm drive once, and found it not that much faster than my 7200rpm disk. The sucker was loud, though, and ate a lot of power. Comparing a 7200rpm, a 10,000rpm, and an SSD: if the baseline is 100% on the 7200, the 10,000 might have been 120%, while the SSD is at least 200%, and probably higher. In my opinion, the high cost of 10,000rpm drives is not worth the money. They are nearly as expensive as (and in some cases more expensive than) SSDs of similar size, and nowhere near as fast.

 

I don't run a RAID 5; I run a RAID 10 (1+0). I just like the idea of the additional speed and the peace of mind that failure of up to 2 drives (as long as they aren't in the same mirrored pair) will not harm the system. It's also easier to recover data after a loss from a RAID 10 than from a RAID 5.

 

The RAID 10 has 6 Hitachi server-class 2TB drives running at 7200rpm. That is 3 main drives with 3 mirrors. I'd say my performance over the RAID in terms of speed is about 200% of a single 7200. My understanding is that RAID 0 and RAID 10 will be faster than RAID 5, though I cannot verify this. It's something to do with computing the parity data rather than simply mirroring it. That is second-hand, though, so don't take it as fact.
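On the fault-tolerance point: whether a 6-drive RAID 10 survives a second failure depends on which drive dies next. A quick sketch, assuming the second failure is equally likely to hit any remaining drive:

```python
# In a 6-drive RAID 10 (3 mirrored pairs), a second failure is fatal only
# if it hits the dead drive's mirror partner.
pairs = 3
drives = pairs * 2
survivors = drives - 1          # 5 drives left after the first failure
fatal_odds = 1 / survivors      # only 1 of them is the lost drive's partner
print(f"Chance a second failure kills the array: {fatal_odds:.0%}")  # 20%
```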

 

In today's world, there is little reason to run a RAID 5 over a RAID 10, in my opinion. Hard drives are cheaper than dirt, and almost all RAID controllers that can do 5 can also do 10.

Edited by Landon D. Parks

RAID needs to be approached carefully. It's also advisable to only use network-ready (NAS-class) drives in a RAID. Western Digital Blues and Blacks are not really appropriate for RAID, and I have had bad luck with Samsung. The only drives I use now are Hitachi server-class.

 

This is demonstrably false. See the link to Backblaze's data on consumer vs enterprise drive reliability. Spending money on expensive drives in a RAID is basically a waste *IF* that RAID has redundancy built in, like RAID 5 or 6.

 

A 2TB WD Red (NAS) Pro drive is $144

A 2TB WD Black drive is $109

(prices from MicroCenter.com)

 

The MTBF on the NAS drive is 1 million hours. I couldn't find published data on the MTBF for the Black drive, but let's say for the sake of argument that it's 20% of the NAS drive's (which I bet is well below the real figure). That still works out to an MTBF of more than 22 years for the cheap drive. You will not be using that drive for anywhere near that long, because the tech will have changed within 3-5 years.

 

So if you populate an 8-disk RAID with overpriced NAS drives, you're looking at $1152 for the disks. You can build the same RAID for nearly $300 less with cheaper drives. You would have to replace three of them before breaking even compared to the same RAID with NAS drives installed.
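The arithmetic behind those two claims, as a quick sanity check (the 20% MTBF figure is the same for-the-sake-of-argument assumption as above):

```python
# MTBF: even a pessimistic 20% of the NAS drive's rating outlives the
# 3-5 year useful life of the hardware.
mtbf_nas_hours = 1_000_000
mtbf_cheap_hours = 0.20 * mtbf_nas_hours
print(f"Cheap-drive MTBF: ~{mtbf_cheap_hours / 8760:.1f} years")   # ~22.8

# Break-even: up-front savings on an 8-disk array vs. cost of replacements.
nas_price, cheap_price, disks = 144, 109, 8
savings = disks * (nas_price - cheap_price)          # $280
replacements = savings // cheap_price + 1            # 3rd swap passes $280
print(f"Up-front savings: ${savings}")
print(f"Replacements needed to erase the savings: {replacements}")
```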

 

The main thing you get with the more expensive drives is a better warranty and tweaked firmware (that doesn't really do enough to justify the doubling in price). I can't remember the last time we got a new drive as a replacement for a warranty swap. Every manufacturer has sent us a refurb when we've tried that, and a lot of those failed as well. I gave up on warranties for drives long ago.

 

The cheaper drives also have energy saving features, which you'd want to turn off, because that can affect performance. Most RAID controllers can do this.

 

Our current longest-running RAID is a 16-drive NAS that's been in operation since 2010. There have only been a handful of failed drives in that time (maybe 3-4). All the disks in there are cheap Seagate 2TB drives at 5400RPM, in a RAID 6. Replacing a failed drive is as simple as popping it out and putting in a new one. Rebuilding happens automatically, so I'm not sure how that's more complicated than a RAID 10 (I don't have much experience with RAID 10, because we've had such good luck with RAID 5 and 6 setups over the years).

 

While the specs and firmware tweaks in those more expensive drives may in fact benefit RAID users, the benefit is marginal and can often be matched by simply adding an additional disk to the array. When you're using cheap disks, that's more cost effective than having 8 really expensive drives.

Link to comment
Share on other sites

That may well be, but I still choose to use Hitachi drives, which are professional server-class drives. I actually don't buy mine new; I get them factory refurbished from Microcenter, where a 2TB Hitachi drive is $55. I've had way too much bad luck with brands like Seagate, Samsung, and Western Digital to ever bother with them. While the RAID may well be protected and using cheaper drives is fine, I'd rather have server-class drives that last longer than spend my time tweaking consumer drives and replacing them when they go bad.

 

That is just my experience running a RAID 10, though; others' mileage may vary.


None. In fact, it's a good idea to store project files and program files on a separate drive from your media drive. It allows more bandwidth between the NLE and the media drive, since the NLE itself isn't competing for hard drive resources.

 

If you're not going with a RAID, then I'd suggest 3 drives: an SSD for program files / operating system / NLE, etc.; a second drive (7200rpm, high capacity) for media input files (like camera files); and a third drive for media out (your rendered files). You'll find your files render faster when the system isn't trying to pull all the information from the same drive it's pushing it to.

Edited by Landon D. Parks

If you don't want to set up a RAID, then the bare minimum you should do is make sure, as Landon said, that you're using separate drives for your source media and your final rendered files. And they should each be connected to the computer directly, not daisy-chained like you can do with, say, FireWire. This ensures that each drive gets access to the full bandwidth of the connection you're using, and that helps to prevent bottlenecks.

 

-perry


In all actuality, without a RAID, I'd say the best setup would be four drives.

 

(1) SSD for OS and program files.

(2) 7200rpm for media files and project files (camera files, saved projects, etc.).

(3) 7200rpm as a mirror for the above, set up in a Windows software RAID 1 array (no need for special hardware or software; you're essentially just keeping a backup of your camera and project files).

(4) 7200rpm media output drive for renders and cache.

 

I know above I mentioned keeping the project files on the SSD, but I'd actually move them to the RAID 1 media drives, so they have a backup and can be restored.

 

Adding the second drive to the media is more of a backup, though it's a good one to have. Losing your rendered files in a hard drive or PC crash is one thing; losing all your camera originals and project files is a whole other beast. And given how cheap these drives are (and that most cases can hold at least 4 hard drives), there is little reason not to Windows-RAID 1 your media drive.

 

I use refurbished Hitachi 2TB/3TB drives in my system, and they can be had at Microcenter for around $50. They are server-class drives, and even refurbished, I have never had one go out on me.

 

I don't practice what I preach here, but then again I have an external RAID that I work off of, and my internal media-in drive is solely for backup.

Edited by Landon D. Parks
