Best Hard Drives?

Filed under
Linux

I think my hard drive is going out. It's been a while since I bought one. I used to really like Maxtor, but I think they were bought out. By whom? I bought a Western Digital a couple of years ago and it didn't last too long. I had a Samsung one time that lasted for like 10 years! So, what are your opinions on the best, as in sturdy, dependable, and long-lasting, hard drives today?

I've really got my eye on this one. It's a Western Digital, but it has a five-year warranty where most others have a three- or one-year one. It has a lot of good reviews and several bad ones, but then they all have good and bad reviews.

What brand do you guys like?

Gonna have to keep it to SATA because SSDs seem quite expensive. Tongue

re: HD

We use the enterprise-class Seagate or Western Digital drives. Seagate has a no-hassle warranty procedure (we just received a warranty replacement for a drive we fumble-fingered and broke the data connector off - we fessed up in the trouble ticket and they replaced it under warranty anyway).

Currently the 10K RPM WD VelociRaptors are the fastest non-SSD drives on the market (even faster than the Seagate hybrid drive for most applications).

Luckily, SATA is so cheap these days that it's not that big of a deal if they don't make it much past three years - you do, after all, have a viable (and tested) daily backup scheme, right?
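Something as simple as the sketch below counts - it's only an illustration, the paths are made up, and it assumes rsync is installed and a second physical drive is mounted at the destination. Run it from cron once a day, and actually restore a file from the copy now and then so you know it works:

#!/usr/bin/env python3
# Minimal daily-backup sketch: mirror one directory onto another drive
# with rsync. Paths are hypothetical -- point SOURCE at what you want
# to protect and DEST at a mount on a *different* physical disk.
import subprocess

SOURCE = "/home/user/"        # hypothetical directory to back up
DEST = "/mnt/backup/home/"    # hypothetical mount point on a second drive

# -a preserves permissions and timestamps; add --delete only if you want
# the backup to mirror deletions from the source as well.
subprocess.run(["rsync", "-a", SOURCE, DEST], check=True)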

re: HD

Well, no, not usually on my desktop. I have been backing up my important stuff since my drive started acting up.

Those hybrids look interesting. I think I have it narrowed down between this one and this one.

Thanks.

re: Hard Drive

If you're looking for raw speed, then the 10K RPM WD VelociRaptor drives are faster than the Seagate hybrid.

http://www.anandtech.com/show/3734/seagates-momentus-xt-review-finally-a-good-hybrid-hdd

The 10K RPM VelociRaptor comes in 200GB, 400GB, and 600GB versions. We're updating most of our boot drives to the 200GB model and seeing an overall 10-20% speed increase in those systems.

If you want the Seagate hybrid (interesting concept, but keep in mind it's just READ caching, not READ & WRITE caching), I'd avoid TigerDirect like the plague. Newegg sells that drive for the same price.

http://www.newegg.com/Product/Product.aspx?Item=N82E16822148592&Tpk=ST93205620AS

re: Hard Drive

Naw, speed is secondary, really.

I think I'll try out that hybrid for a month or two. Like you said, hard drives have gotten so cheap these days that I can get a regular one if I don't like it.

Thanks!

Hybrid Hard Drive

srlinuxx wrote:
Naw, speed is secondary, really.

I think I'll try out that hybrid for a month or two. Like you said, hard drives have gotten so cheap these days that I can get a regular one if I don't like it.

Thanks!

I was just wondering if you ever got the hybrid drive and, if so, how you like it?

Tex

re: Hybrid HD

Yeah, I've been using it since soon after that post, but I can't tell a significant speed increase from it. In fact, testing with hdparm shows it isn't as fast as my regular SATA drive on sdb - though I did have stuff running on the hybrid at the time. So, I don't know if there is any real advantage or not.
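For anyone who wants to repeat that comparison, something along these lines is roughly what I mean by "testing with hdparm" - just a sketch, the /dev/sda and /dev/sdb device paths are placeholders rather than my actual setup, it needs root, and hdparm has to be installed:

#!/usr/bin/env python3
# Rough sketch: run hdparm's buffered-read benchmark ("hdparm -t")
# against two drives and print the raw output side by side.
# Device paths are examples only -- adjust them to match your system.
import subprocess

DRIVES = {"hybrid": "/dev/sda", "plain SATA": "/dev/sdb"}  # assumed paths

def buffered_read(device: str) -> str:
    # Times buffered disk reads; repeat a few runs for stable numbers.
    result = subprocess.run(
        ["hdparm", "-t", device],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    for label, dev in DRIVES.items():
        print(f"--- {label} ({dev}) ---")
        print(buffered_read(dev))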

I've since read of disadvantages such as numerous spin-ups and spin-downs. Some folks could actually hear their drives growling about it. Well, I don't know if mine was spinning up and down a lot, but I still updated the firmware, and I believe that was a good move despite having no real proof. Big Grin

So, in summary, yeah, it works, but I don't think it's any faster than regular hard drives. I'd advise folks not to buy one if it costs significantly more - but sure, buy one if you find it on sale and you want to.

Next up -> a real SSD.

Hard drives

I've mostly been using enterprise-level Seagate and WD drives, both at home and at the company. The price difference between consumer and enterprise is small these days, so I agree that it's better to pay the extra money.

If you look at the hybrid Seagate drive, take future plans into account. You'll get double the storage capacity if you choose a regular hard drive, and it leaves the option open for an SSD upgrade later to speed up the OS itself. A 60-80GB SSD isn't that expensive any more, and I suppose it's more than enough for most of us to run a Linux OS.

re: Hard Drive

I have had bad experiences with WD. I have had three; all were warm and noisy, and one crashed after less than 15 months. I have replaced the other two with Samsung drives of the same size and speed, and they run much cooler and quieter. So much cooler that my big tower cabinet's temperature dropped 1.5°C. It's too early to say anything about reliability, but lower temperature and less vibration can't hurt.

I can only second that. I had

I can only second that. I had several WD drives in the past and all were noisy and hot. Right now I have a 750GB Samsung Green, bought about three years ago. It works very well, I can't even hear it, and it is about 10°C (ten degrees) cooler than my 160GB Hitachi, which is in the same computer.
So, for a usual desktop system I would recommend Samsung. But if you want extreme performance, you will probably have to go with something else. I consider my Samsung's performance satisfactory and more than enough for my needs, but others may find they need a faster drive.

WD Green Drives

Stay away from the so-called "green" drives from Western Digital. Not only do they not make any difference in a desktop system (with a 500W PSU), I have lost 6 of 10 of those drives at my place of business. Of the 6 replacements, 2 died within a month and I am beginning to suspect issues with 2 others. (And no, no RAID - just straight drives, though the systems are pretty much on 24/7.)

I have also gotten some of the WD Black drives... I think 3 of them. No problems yet. The Seagate drives have been 0% defective and are all I typically buy now.
