Samsung Replaces HDD With Flash

Filed under
Hardware

The solid-state disk (SSD) uses memory chips in place of the mechanical recording system found inside hard drives, and has several advantages, including lower power consumption and higher data rates. Flash memory technology isn't new, and its advantages have been known for years, but solid-state disks have never been commercially produced before because flash has one big disadvantage over hard-drive storage: it is much more expensive.

Samsung announced basic details of the SSD on Monday but declined to provide any information about its price.

The Seoul company is planning SSDs with parallel ATA (Advanced Technology Attachment) interfaces in capacities up to 16GB. The 16GB devices will contain 16 memory chips holding 8 gigabits each, it says. Such chips sell for about $55 each on the spot memory market, according to DRAM Exchange Tech. That would put the chip cost of the 16GB SSD at almost $900.

Because Samsung is a major manufacturer of flash memory chips, it can likely source the chips internally at a lower price. Even so, it will be difficult to compete with hard drive makers on cost. Laptop drives at capacities of up to 30GB can easily be found for less than $200.

The SSD operates silently, consumes 5 percent of the power used by a hard drive, and weighs less than half as much. It can read data at up to 57MB per second and write it at up to 32MB per second.

Because SSDs don't use moving parts, they are much more resistant to harsh environmental conditions or shock and are thus suitable for industrial or military markets, says Samsung. Such users are less focused on low-cost components than the consumer market.

Full Story.

More in Tux Machines

NHS open-source Spine 2 platform to go live next week

Last year, the NHS said open source would be a key feature of the new approach to healthcare IT. It hopes embracing open source will both cut the upfront costs of implementing new IT systems and take advantage of using the best brains from different areas of healthcare to develop collaborative solutions. Meyer said the Spine switchover team has “picked up the gauntlet around open-source software”. The HSCIC and BJSS have collaborated to build the core services of Spine 2, such as electronic prescriptions and care records, “in a series of iterative developments”. Read more

What the Linux Foundation Does for Linux

Jim Zemlin, the executive director of the Linux Foundation, talks about Linux a lot. During his keynote at the LinuxCon USA event here, Zemlin noted that it's often difficult for him to come up with new material for talking about the state of Linux at this point. Every year at LinuxCon, Zemlin delivers his State of Linux address, but this time he took a different approach. Zemlin detailed what he actually does and how the Linux Foundation works to advance the state of Linux. Fundamentally, it's all about enabling the open source collaboration model for software development. "We are seeing a shift now where the majority of code in any product or service is going to be open source," Zemlin said. Zemlin added that open source is the new Pareto Principle for software development, where 80 percent of software code is open source. The nature of collaborative development itself has changed in recent years. For years, software collaboration was achieved mostly through standards organizations. Read more

Arch-based Linux distro KaOS 2014.08 is here with KDE 4.14.0

The Linux desktop community has reached a sad state. Ubuntu 14.04 was a disappointing release, and Fedora is taking far too long between releases. Hell, openSUSE is an overall disaster. It is hard to recommend any Linux-based operating system beyond Mint. Even the popular KDE Plasma environment and its associated programs are in a transition phase, moving from 4.x to 5.x. As exciting as KDE 5 may be, it is still not ready for prime time; it is recommended to stay with 4.x for now. Read more

diff -u: What's New in Kernel Development

One problem with Linux has been its implementation of system calls. As Andy Lutomirski pointed out recently, it's very messy. Even identifying which system calls were implemented for which architectures, he said, was very difficult, as was identifying the mapping between a call's name and its number, and mapping between call argument registers and system call arguments. Some user programs like strace and glibc needed to know this sort of information, but their way of gathering it together—although well accomplished—was very messy too. Read more