Building a New Computer System for Linux

Filed under: Linux, HowTos
by Gary Frankenbery, Computer Science Teacher, Grants Pass High School

Going to build a new computer soon, and outfit it with Linux? Here's the story of one such recent foray into purchasing components and assembling a new system.

Introduction

As a 55-year-old computer user and a high school computer science teacher, I'm not a typical system builder or adventurous case modder. Indeed, many would consider my using Linux the only daring thing I've ever done in the realm of computer science. Also, at the high school where I teach, I'm not the hardware teacher—I'm the software guy, as I teach computer programming, basic computer applications/literacy, and web page design.

So, my choices of equipment for my new home computer system were definitely a mix of my conservative stick-to-what-you-know tendencies coupled with a rare desire for something different. I like good performance in a computer system, but I'm not a speed demon. I'm not much of a computer game player, and the games I do play are not that demanding of computer resources. Finally, on a school teacher's salary, money is always a concern.

This new system was to be my new main home computer system, and would run only Linux. No MS-Windows or Linux/MS-Windows dual boot. In fact, none of my home computers run MS-Windows, as I converted (upgraded) my wife's machine to Linux about 6 months ago.

Processor and Cooler

When building a new computer system, the choice of CPU determines many of the other choices. I've been an AMD processor user for several years now, so while I'm not opposed to an Intel CPU (I do like Intel Corporation for their support of computer science education in my home state of Oregon, USA), I'll stick with an AMD processor. But this time I decide to go 64-bit, and I end up purchasing an AMD Athlon 64 3200+ CPU (socket 939). Yes, I wanted to go faster, but prices rise sharply in the Athlon 64 processor line as you step up in CPU clock speed. I do decide to buy an OEM version of the processor, without a fan/heat-sink or instructions, as I've built other Athlon-based systems before and I want to select a quiet, effective cooler for my processor.

So, next, a processor cooler. I've always used the stock bundled AMD fan/heat-sink before, but this time I'm buying my own. Now, I'm not a CPU overclocker, so something that cools a little better than the stock Athlon cooler is fine. Of greater importance is that the fan be quieter than a stock cooler. Another factor is that I'm definitely not going to the expense (and installation trouble) of a water-based CPU cooling system. Back to the Internet to spend some time reading CPU cooler reviews.

One of my students tells me about www.xoxide.com, which has many excellent photos of different makes and models of CPU coolers (and good prices too). One concern is that many of the coolers require attaching a mounting bracket to the underside of the motherboard. I finally go with the Arctic Cooling Freezer 64 because the reviews all say it's very quiet, it cools better than the stock Athlon cooler, it clips onto the stock mounting lugs of a socket 939 motherboard, and it's reasonably priced. While this cooler lacks the razzle-dazzle look of some of the others, it appears to be the perfect match for my needs and preferences.

Motherboard and Video Card

For now, I plan to use the video card from my previous system and upgrade it a couple of months down the road when I have more cash. For a while, my old AGP 8X Nvidia 5700 video card will do just fine. Since it's an AGP card, I decide not to purchase a mainboard with PCI-Express slots. When I do upgrade, I plan to buy an MSI Nvidia AGP 8X GeForce 6600 GT card.

I've used many different motherboards over the years, including motherboards from Tyan, Gigabyte, Albatron, MSI, Biostar, and Soyo. I've had good luck with all of these (the only problem was having to update the BIOS on one of them to get it working properly), and I'm not wedded to any particular manufacturer. However, one feature all these motherboards have in common is that they've been AMD/VIA chipset boards. While you may prefer an Nforce or SIS chipset board, my cautious nature propels me to stick with the familiar VIA chipset. So I finally purchase a Soltek SL-K8TPro-939 VIA K8T800 Pro ATX motherboard. While this is not the most feature-packed or fastest mainboard around, it is certainly a great fit with the rest of my gear, and it has excellent reviews on the Internet.

Getting Radical—the Case

I really don't like wild-looking computer cases. When some of my students show me the gaudy cases they've bought for their systems, I try to be polite and kind with my opinions, but I always look for elegance and simplicity rather than flash and splash. One minor prejudice is that I don't like cases with a hinged cover over the CD-drive bays.

My choice of computer system case surprised even me. I decided to go with a blue-tinted clear acrylic case.

Yes, I know there are many disadvantages to clear acrylic cases.

They have to be cleaned frequently, as the dust that gathers inside quickly makes them look ugly. They typically use a large number of screws (10 to 12) to fasten the side panels. They scratch and mar easily. Finally, installing drives into the drive bays can be tricky.

Yes, I know all this—but I bought one anyway. I purchased a Logisys CS888UVBL UV blue acrylic clear case.

I bought the case first, and then went back and did some more research on clear acrylic cases after the purchase. If I had it to do over again, I would still purchase a clear acrylic case, but from another manufacturer whose case mounts the motherboard on a slide-out tray instead of directly to the case side panel.

One reason that I went with a clear case is that I can use it for instruction with my students in my computer literacy class at school when we discuss computer hardware. But, bottom line, the real reason is that I just like the look of this case.

More Power

The case doesn't come with a built-in power supply like other systems I've built, so I'll have to buy one separately. Back to the Internet for more research. After reading countless power supply reviews, I finally buy an MGE XG Vigor 500 Watt power supply. It has a fan-speed adjuster knob on the back, nicely wrapped cables, an attractive chrome finish, good power stability and accurate voltage levels as assessed in reviews, and a relatively modest price. I'm very pleased with this purchase; of all the components in this system, this power supply may be the best value for the money.

RAM

I've decided on 1 gigabyte of memory. I won't buy the most expensive, but I won't stint on this either. I end up buying the Corsair XMS TWINX1024-3200C2PT 1GB (2 x 512MB) 184-pin DDR400 (PC3200) dual-channel kit. This is good-quality stuff. If you buy the cheapest RAM, you may get away with it—or you may not. With this Corsair RAM, I know that if any problems arise later, they are very unlikely to be memory-related.

CD/DVD Drives

I already have a LITEON CD burner/DVD-ROM drive that works very well for me. However, I want to do some DVD burning, so I purchase another LITEON drive with DVD-burning capability. Both optical drives are installed in my new system. But—wait a minute—both drives have beige-colored front bezels. Frankly, this won't look too good in my new acrylic case. I get some metallic silver Testors hobbyist spray paint and spray the fronts of my optical drives, as well as an old floppy drive that's going into the system. Then I get some clear mailing label stock, print new identification labels for my drives, and affix the labels to the fronts of the drives. It all looks pretty good—not perfect, but better than stock drive bezels would look with my case.
I burn a lot of CDs (mostly open source software for my students), so I install the two optical drives with an empty drive bay between them, as heat build-up from repeated CD burns is a major cause of coasters. When doing mass burning sessions, I'll alternate back and forth between the two drives.
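The alternating scheme can be sketched as a small shell loop. The device names and the burning command are assumptions to adjust for your own drives and images; here the actual burn is replaced by an echo so the logic is visible:

```shell
#!/bin/sh
# Alternate burns between two drives so each gets a chance to cool.
# /dev/sr0 and /dev/sr1 are assumed device names; wodim (shown in the
# comment below) is one common command-line burning tool among several.
i=0
for iso in disc1.iso disc2.iso disc3.iso disc4.iso; do
    if [ $((i % 2)) -eq 0 ]; then dev=/dev/sr0; else dev=/dev/sr1; fi
    echo "burning $iso on $dev"    # real burn: wodim -v dev=$dev "$iso"
    i=$((i + 1))
done
```

Even-numbered discs go to the first drive and odd-numbered discs to the second, so neither drive ever does two burns back to back.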

Hard Disk Drive

I'm going to go with a SATA drive. This is my first experience with SATA, and I end up purchasing a 160GB Maxtor drive. As I install this drive, I'm struck by how neat, small, and tidy the SATA data and power cables are; this is really the way to go. I may eventually purchase a couple more drives and try a SATA RAID configuration. But because the primary role for this machine is workstation rather than server, one SATA drive will do.
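One thing worth knowing with a first large drive: the 160 GB on the box is decimal gigabytes, so partitioning tools that report binary units will show noticeably less. A quick sketch of the arithmetic (assumes a 64-bit shell for the large intermediate value):

```shell
#!/bin/sh
# A drive marketed as 160 GB holds 160 * 10^9 bytes; divided by 2^30
# that comes to about 149 GiB, which is what many partitioning tools
# will report. This is expected, not missing capacity.
bytes=$((160 * 1000 * 1000 * 1000))
gib=$((bytes / 1024 / 1024 / 1024))
echo "${gib} GiB"    # prints: 149 GiB
```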

The Smoke Test

When you first power on a newly built computer, you experience that stressful moment of doubt, and maybe even a little panic. After all, you've spent an awful lot of time and money on this. And, if you're foolish like me, you've probably been bragging to others about this wonderful new computer system you've been building. Not only have you invested considerable money and time, you've invested major macho ego into getting this thing working. Clearly, failure is not an option.
The brain starts to whirl rapidly with increasingly wild thoughts. Have I missed anything? Will the motherboard complete the Power On Self Test? Will the processor overheat? Will the memory function? Will the motherboard melt? Will a cloud of smoke rise from the machine? Will I bring down the entire Northwestern USA power grid?

A now slightly trembling finger reaches out to press the on switch.

In fact, the system starts just fine—what a relief.

Wait a minute—there is a problem—the BIOS is not recognizing one of the optical drives.

I power down, and scratch my head for a moment. After a few seconds of thought, I realize that when installing the optical drives, I forgot to make sure that one drive was set as a master and the other as a slave. Yes, the optical drives are cabled to the same IDE port, so the master-slave arrangement matters. I take a close look at the backs of the optical drives, and sure enough that's what I've done—both are set as masters. I quickly grab another cable out of stores, and connect each optical drive to its own IDE channel. Problem fixed. With my heart rate now back to normal, it's time to install Linux.
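For reference, the kernel's IDE device names follow the cabling, which makes it easy to predict where each drive should appear once the jumpers and cables are sorted out. A small sketch of the mapping (the helper function is mine, not a standard tool):

```shell
#!/bin/sh
# 2.6-era IDE naming: primary master is hda, primary slave hdb,
# secondary master hdc, secondary slave hdd. With each burner as
# master on its own channel, they should show up as hda and hdc
# (assuming no other IDE devices are connected).
name_for() {  # $1 = channel (0 primary, 1 secondary), $2 = 0 master / 1 slave
    idx=$(( $1 * 2 + $2 ))
    # 97 is ASCII 'a'; print the idx-th letter after it
    printf "hd\\$(printf '%03o' $((97 + idx)))\n"
}
name_for 0 0    # prints: hda
name_for 1 0    # prints: hdc
```

The two-masters mistake puts both drives at the same address on one channel, which is why the BIOS could only see one of them.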

Which Linux Distribution?

I've been a Mandrake (now Mandriva) user ever since version 7.1. Though I enjoy installing and trying different distributions, I want to install a familiar distribution—this is to be my main production machine at home—and I know Mandriva inside and out. I've also been a member of the Mandriva Club for several years, so I'll install Mandriva Limited Edition 2005.

Changes (If I Had It All to do over Again)

Although more expensive, I would buy a BeanTech BT-84-B blue tinted acrylic case instead of the Logisys case. With the BeanTech case, the motherboard is mounted to a slide-out-tray. The BeanTech case also has rubber pads in the drive bays.

People tell me that the comparable Seagate SATA drive is quieter and quicker than the Maxtor I purchased. I would investigate this further, and perhaps purchase the Seagate.

Conclusion

I've now been using this system for 3 weeks. It runs quietly, and the processor stays relatively cool at 40-43 degrees Celsius. The system is extremely quick, and all my devices are recognized.
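On Linux those temperatures can be watched with the lm-sensors tools. A sketch of pulling the degree value out of a sensors-style line so it can be logged or checked against a threshold; the sample line here is an assumption standing in for real `sensors` output, whose labels vary by motherboard:

```shell
#!/bin/sh
# Extract the integer degrees from a line shaped like lm-sensors
# output. The sample line is made up; on a real system, pipe the
# output of `sensors` through the same sed expression.
line='CPU Temp:   +42.0 C  (high = +60 C)'
temp=$(printf '%s\n' "$line" | sed 's/.*+\([0-9]*\)\.[0-9]* C.*/\1/')
echo "CPU at ${temp} C"
if [ "$temp" -le 43 ]; then
    echo "within the expected 40-43 C range"
fi
```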

I haven't tried any overclocking at this point, but the cool CPU temperature, good-quality RAM, and the capabilities of the motherboard and processor should provide opportunities to experiment with this later. All in all, I'm very satisfied, and I think this system is going to serve me well for some time to come.

Original in PDF.
