In an attempt to re-consolidate my data I recently purchased four (4) 3000GB Seagate Barracuda drives. I say 3000GB rather deliberately; that's what's written on the ST3000DM001 label, while Amazon would call it 3TB. If you're already aware of the decimal (used by marketing for storage capacities) versus binary (what computers actually use) way of counting, you won't be surprised that each disk formats to about 2794.39 GB under NTFS. These drives came preformatted in NTFS from the factory, so it only takes a few seconds to initialize the drive in Windows as GPT and then quick format it with NTFS.
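If you want to see the arithmetic behind that shrinkage, here's a quick sketch. The exact figure Windows reports depends on the drive's true byte count (usually a bit over the nominal 3×10^12) and filesystem overhead, which is why the real-world number lands around 2794 GB:

```python
# Why a "3TB" drive shows up as roughly 2794 GB in Windows.
# Marketing counts in decimal (1 GB = 10**9 bytes); Windows counts
# in binary, where 1 "GB" is really a gibibyte (2**30 bytes).
decimal_bytes = 3 * 10**12            # 3 TB as printed on the label

binary_gb = decimal_bytes / 2**30     # what Windows would report as "GB"
print(f"{binary_gb:.2f} GB")          # ~2794 GB before filesystem overhead
```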
Now why did I get these instead of Hitachi or Western Digital? They were the cheapest 3TB drives I could get ($149.91 at the time), and these are 7200rpm. They were even $5 cheaper than the Western Digital Green drives. Also, they were on Synology's compatibility list.
What about 7200rpm versus "Green" drives?
I have plenty of 5400/5900 rpm drives from several makes. These are decent enough in performance, but the baseline is really the 7200rpm drive. In terms of power consumption, the slower drives don't appear to make a big impact in the numbers I deal with. The Kill-a-Watt reads 183-210 Watts for my desktop plus four Barracudas running; with just the drives powered up it shows 45-60 Watts, and the UPS they are hooked up to accounts for 15 of those Watts. As for heat, well, I'll give you that: the 7200rpm drives do run a bit hotter. But I'm eventually going to run them with a fan in a NAS/RAID setup. Besides, the "Green" drives, particularly the WD Caviar Green, have been notorious for problems coming back from sleep that make them go missing in some RAID configurations. WD even has a disclaimer that their desktop drives are not warranted for business-critical RAID. Which is a change now that Western Digital actually sells "Red" drives that are NAS-approved; the 7200rpm models equate to the "Black" ones, and they have RE models that are enterprise-grade.
Western Digital Caviar Green: 5400 rpm
Western Digital Red: 5400 rpm (?)
Seagate Barracuda LP: 5900 rpm
Seagate Barracuda: 7200 rpm
Western Digital Caviar: 7200 rpm
Hitachi Deskstar: 7200 rpm
Are you sure these drives are reliable?
No, every drive is a gamble. What matters is whether you got a set of good drives from the batch, and whether your seller's carrier kept from throwing the box around. Every manufacturer from Hitachi to Western Digital has had their share of bad history. I still remember losing 3 IBM "Deathstars" (a line sold to Hitachi, which recently became part of Western Digital) some years ago. I've also had a mix of success with Maxtor (now Seagate) and Samsung (also now Seagate). And I'm very familiar with the RMA process for Western Digital, which is pretty good in convenience and turnaround. They're all the same to me. I just need to have good backups and make my drives as reliable as possible. So the first thing I look at after price is comments on reliability. These ST3000DM001s actually have a firmware update released: the drives came with CC4B, and the latest out there is CC4H.
Patching the firmware
How did I know these drives came with CC4B? I read the label on the drive and checked Google for hits on "<
You need an Intel-based CPU; it says so on Seagate's website, and a user who tried an AMD CPU reported that it didn't work for him. The operation runs once per boot, though you can patch multiple drives if they were all present at boot. It appeared to ignore my JMicron-based PCI e-SATA card, so I had to open up my desktop case and swap some drives around. Your Windows Disk 0 should either be empty, unformatted, or be the Windows boot device. One of my drives to be patched superseded the boot drive, and I had to unformat the 3TB drive before the Windows firmware patcher stopped complaining.
Just for the heck of it, I wanted to try setting up these four drives as a Striped Volume (Windows RAID-0). I had a pair of Thermaltake BlacX Duet (paid link) hooked up to two separate e-SATA ports on a JMicron-based PCI card. The disks were initialized as GPT to get a size >2TB.
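The GPT requirement comes down to simple arithmetic. MBR partition tables address sectors with 32-bit LBA fields, so with the usual 512-byte sectors an MBR partition tops out just under 2.2 TB:

```python
# Why GPT is needed for a 3TB disk: MBR uses 32-bit sector addresses,
# so with 512-byte sectors the largest addressable partition is:
SECTOR_BYTES = 512
mbr_max_bytes = (2**32) * SECTOR_BYTES

print(mbr_max_bytes)            # 2199023255552 bytes
print(mbr_max_bytes / 10**12)   # ~2.2 TB (decimal) -- a 3TB disk won't fit
```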
Why a Stripe and not a RAID-5? In Windows 7 you will notice that option is greyed out. The OS-based RAID-5 is only supported in Windows Server versions. Of course you can have hardware RAID or BIOS/driver-based RAID.
RAID-0 is supposed to be the fastest RAID setup for both read and write. There is no fault tolerance in this setup, which is actually what I want in this case. If any one of the drives fails, the entire array fails. I'd like to know pretty quickly whether the entire set is working.
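To make the failure mode concrete, here's a toy sketch (not how Windows implements it, just the idea) of how striping distributes data round-robin across the member disks. The stripe size and data here are made up for illustration:

```python
# Toy illustration of RAID-0 striping: consecutive stripe-sized chunks
# are dealt round-robin across the member disks. That's why throughput
# scales with disk count, and why losing any one disk loses everything.
NUM_DISKS = 4          # four ST3000DM001s in this array
STRIPE_SIZE = 64       # bytes here; real arrays use 64KB+ stripes

data = bytes(range(256))
disks = [bytearray() for _ in range(NUM_DISKS)]
for i in range(0, len(data), STRIPE_SIZE):
    disks[(i // STRIPE_SIZE) % NUM_DISKS].extend(data[i:i + STRIPE_SIZE])

# Each disk holds only 1/4 of the data.
print([len(d) for d in disks])  # [64, 64, 64, 64]
```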
I wanted to use standard and free tools to stress the disks. So I used Robocopy and CrystalDiskMark.
Above results from four ST3000DM001 SATA-III in a software RAID-0
This would represent the initial speeds I could reasonably expect from this array. When I use it in a RAID-5 NAS later on, I expect it will be slower. This also helps me determine whether I need to bother with link aggregation and a managed switch later on. The theoretical maximum of gigabit Ethernet is 125 MB/s; it should be slower than this in real use, about 111 MB/s according to a test by Tom's Hardware. So do I need the ≤43 MB/s read or ≤17 MB/s write more that this array can deliver?
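The 125 MB/s ceiling is just the line rate divided by eight; everything below that is protocol overhead:

```python
# Gigabit Ethernet's theoretical ceiling, before protocol overhead.
line_rate_bits = 1_000_000_000          # 1 Gb/s line rate
max_mb_per_s = line_rate_bits / 8 / 10**6

print(max_mb_per_s)                     # 125.0 MB/s
# Real-world transfers land around 111 MB/s (per Tom's Hardware) once
# Ethernet/IP/TCP framing overhead is paid.
```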
One ST3000DM001 SATA-III by itself.
Tom's Hardware gives it an average sequential speed of 119.8 MB/s. Storage Review tests have it in the middle of the pack.
Compared to a ST31000340AS SATA-II I've been using as a Render target
For the Robocopy test, I used a USB-3 source drive that I knew I could hook up to all my devices for comparative tests later. 85.74 MB/sec is my baseline.
Speed : 89907202 Bytes/sec.
Speed : 5144.531 MegaBytes/min.
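Robocopy reports Bytes/sec and MegaBytes/min; converting its Bytes/sec figure (using binary megabytes, 2^20 bytes, as Windows does) gives the 85.74 MB/sec baseline quoted above:

```python
# Convert Robocopy's reported throughput into MB/sec.
bytes_per_sec = 89_907_202          # the "Speed : ... Bytes/sec." line
mb_per_sec = bytes_per_sec / 2**20  # binary MB, as Windows counts them

print(f"{mb_per_sec:.2f} MB/sec")   # 85.74 MB/sec
# Cross-check: 5144.531 MegaBytes/min / 60 gives the same ~85.74 figure.
```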
Robocopy write test
The Burn In
Before feeding these drives into a hardware RAID I wanted to make sure they didn't break within the first few days. My choice of tools was HDDScan with CrystalDiskInfo. HDDScan would be left running overnight while I watched the Load/Unload Cycle Count and temperature. I wanted to know how hot the drives can get in operation. Load/Unload Cycle Count is how many times the drive had to come out of park. A large (more than double-digit) increase means the drives are too aggressive in going to sleep, which would make them unsuitable for RAID applications. Most drives are rated for a few hundred thousand cycles in their life, and constant park/unpark means a lot of wear in the drive mechanism, which leads to earlier failure. The point of RAID is extra reliability; if your base reliability is compromised, it can only do so much.
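If you prefer a command line over GUI tools, smartmontools' `smartctl -A` prints the same SMART attribute table. A minimal sketch of pulling the two values I watch out of that output; the sample lines below are made up in the typical attribute-table format, and real output varies by drive:

```python
# Pull the two SMART attributes watched during burn-in out of
# `smartctl -A`-style output. Sample rows shown; real output varies.
sample = """\
193 Load_Cycle_Count        0x0032   099   099   000    Old_age   Always       -       2310
194 Temperature_Celsius     0x0022   055   045   000    Old_age   Always       -       45
"""

def smart_raw_value(attr_name, text):
    # The last column of a smartctl attribute row is the raw value.
    for line in text.splitlines():
        if attr_name in line:
            return int(line.split()[-1])
    return None

print(smart_raw_value("Load_Cycle_Count", sample))      # 2310
print(smart_raw_value("Temperature_Celsius", sample))   # 45
```

Re-running this before and after an overnight HDDScan pass shows whether the Load/Unload count is climbing abnormally fast.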
I tested each drive individually in a JBOD setup. 55 degrees Celsius was the hottest any single drive ever got, with an electric fan running in a room at around 27 degrees Celsius (81 degrees Fahrenheit).
Next up: Mediasonic ProRaid and e-SATA without Port Multiplier