
32-bit vs 64-bit Ubuntu

Filed under: Ubuntu

Recently I received an email asking me point-blank why 32-bit Linux seems to enjoy better adoption than 64-bit. Honestly, I can see why this would be confusing. With the general belief that 64-bit Linux is faster, many people find themselves drawn to it over the 32-bit option. In this article, I will explain why the rush to try 64-bit Ubuntu is misplaced. I'm not saying you should avoid it by any means, rather that choosing it over a 32-bit distro is simply unnecessary most of the time.

The speed debate

There is a common belief that by using 64-bit Ubuntu you will somehow experience a faster Linux. The thinking is that because most of us now use dual-core processors, you will see a significant speed increase. News flash – you can experience this today on 32-bit Ubuntu.
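To see that multi-core use isn't a 64-bit exclusive, you can count the cores the kernel exposes and keep one CPU-bound worker per core busy. A minimal Python sketch (standard library only) that behaves the same on a 32-bit or 64-bit install:

    import os
    from multiprocessing import Pool

    def burn(n):
        # CPU-bound loop so each worker keeps one core busy
        total = 0
        for i in range(n):
            total += i * i
        return total

    if __name__ == "__main__":
        cores = os.cpu_count() or 1
        print("cores visible to the OS:", cores)
        # One worker per core; a 32-bit SMP kernel schedules these
        # across all cores just like a 64-bit one does.
        with Pool(cores) as pool:
            pool.map(burn, [5_000_000] * cores)
        print("all", cores, "workers finished")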

Read the rest here.

I feel bad for people who read...

... content like this on the Internet and give it credence.

That author is a total blockhead. He makes it seem like there's just no reason for a 64-bit OS, when there ARE reasons it'd make a good choice even for novice users. 64-bit isn't just the future, it's the NOW.

It all boils down to this: why NOT use a 64-bit OS? I've been using 64-bit Linux for five years and have NEVER experienced a problem with 32-bit software. Oh wait, one: Adobe Flash. But today's distros make that a non-issue.

Thanks, Lockergnome, it's idiots like your editors that keep people using 32-bit OSes when we should have made the conversion to 64-bit long ago.

64-bit

So what "are" those reasons everyone should be running 64-bit NOW?

64-bit is a memory hog - unless you need to address more than 4GB of RAM, using 64-bit gains you NOTHING, and costs you in RAM usage, I/O (bigger data means more info to move back and forth), and CPU (something has to handle all that I/O).

You incur NONE of that overhead running a 32-bit OS on a >4GB system - only fools waste resources running a 64-bit OS for no other reason than "it's there".
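Most of that overhead comes down to pointer width: every pointer doubles from 4 to 8 bytes on a 64-bit build. A quick Python check (standard library only) reports which width the current process actually uses:

    import struct
    import sys

    # "P" is the native pointer type: 4 bytes on 32-bit, 8 on 64-bit
    pointer_bytes = struct.calcsize("P")
    print("native pointer size:", pointer_bytes, "bytes")
    print("running as a %d-bit process" % (pointer_bytes * 8))
    print("sys.maxsize:", sys.maxsize)  # 2**31-1 vs 2**63-1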

Re: 64-bit

If you're running an ancient machine, sure, stick to 32-bit. If you're running ANYTHING even fairly recent, your PC will handle the minor pitfalls of a 64-bit system that you mention without them ever being noticeable. I've been running Ubuntu 64-bit on my single-core CPU / 2GB RAM netbook for almost two full years and it runs great. I just don't see why 32-bit is even viable anymore. There might not be major reasons to make the switch, but why not? The sooner we transition to 64-bit, the sooner all of our software will too.

If you have the simplest needs as a computer user, then go ahead and stick with 32-bit. If you like to wring as much performance out of your PC as possible, go with 64-bit.
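Before picking either install, it's worth knowing whether the CPU can run 64-bit at all. On x86 Linux, the "lm" (long mode) flag in /proc/cpuinfo means the processor is 64-bit capable regardless of which kernel is currently booted; a small Python sketch, assuming /proc/cpuinfo is readable:

    import platform

    def cpu_supports_64bit(path="/proc/cpuinfo"):
        # "lm" (long mode) in the flags line = x86-64-capable CPU
        try:
            with open(path) as f:
                for line in f:
                    if line.startswith("flags"):
                        return "lm" in line.split()
        except OSError:
            pass
        return None  # unknown (non-Linux or unreadable)

    print("kernel reports architecture:", platform.machine())
    print("CPU 64-bit capable:", cpu_supports_64bit())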

32- and 64-bit OS

I have run both 32- and 64-bit OSes from the same distributions, and the only difference I could find is that programs load a tad faster when first launched on a 64-bit system. Other than that, programs ran at the same speed on both. The only compelling reason to run 64-bit is that it's the "in" thing to have. Of course, when you mention 32-bit vs 64-bit, everyone is suddenly a video producer who needs a 64-bit OS.
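That launch-speed difference is easy enough to test yourself instead of taking anyone's word for it. A rough Python sketch times a full process spawn-and-exit cycle; run it on matching 32-bit and 64-bit installs of the same distro and compare (it assumes /bin/true exists, as it does on standard Linux systems):

    import subprocess
    import time

    def avg_launch_time(cmd, runs=20):
        # Times a full spawn + exit cycle; crude, but comparable
        # across two installs on the same hardware.
        start = time.perf_counter()
        for _ in range(runs):
            subprocess.run(cmd, check=True)
        return (time.perf_counter() - start) / runs

    print("avg launch time: %.4f s" % avg_launch_time(["/bin/true"]))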

Re: 32- and 64-bit OS

Well, it's not ONLY video production that requires a beefy machine. I run a six-core Intel CPU (twelve threads) along with 12GB of RAM and regularly make use of all of it on my Gentoo 64-bit install (VMware helps a lot, and compiling things uses the entire CPU with ease). If you're a HEAVY multi-tasker, a 64-bit OS is a no-brainer.


More in Tux Machines

NHS open-source Spine 2 platform to go live next week

Last year, the NHS said open source would be a key feature of the new approach to healthcare IT. It hopes embracing open source will both cut the upfront costs of implementing new IT systems and take advantage of using the best brains from different areas of healthcare to develop collaborative solutions. Meyer said the Spine switchover team has “picked up the gauntlet around open-source software”. The HSCIC and BJSS have collaborated to build the core services of Spine 2, such as electronic prescriptions and care records, “in a series of iterative developments”.

Read more

What the Linux Foundation Does for Linux

Jim Zemlin, the executive director of the Linux Foundation, talks about Linux a lot. During his keynote at the LinuxCon USA event here, Zemlin noted that it's often difficult for him to come up with new material for talking about the state of Linux at this point. Every year at LinuxCon, Zemlin delivers his State of Linux address, but this time he took a different approach. Zemlin detailed what he actually does and how the Linux Foundation works to advance the state of Linux. Fundamentally, it's all about enabling the open source collaboration model for software development. "We are seeing a shift now where the majority of code in any product or service is going to be open source," Zemlin said. Zemlin added that open source is the new Pareto Principle for software development, with 80 percent of software code being open source. The nature of collaborative development itself has changed in recent years. For years, software collaboration was achieved mostly through standards organizations.

Read more

Arch-based Linux distro KaOS 2014.08 is here with KDE 4.14.0

The Linux desktop community has reached a sad state. Ubuntu 14.04 was a disappointing release and Fedora is taking way too long between releases. Hell, OpenSUSE is an overall disaster. It is hard to recommend any Linux-based operating system beyond Mint. Even the popular KDE Plasma environment and its associated programs are in a transition phase, moving from 4.x to 5.x. As exciting as KDE 5 may be, it is still not ready for prime time; it is recommended to stay with 4 for now.

Read more

diff -u: What's New in Kernel Development

One problem with Linux has been its implementation of system calls. As Andy Lutomirski pointed out recently, it's very messy. Even identifying which system calls were implemented for which architectures, he said, was very difficult, as was identifying the mapping between a call's name and its number, and mapping between call argument registers and system call arguments. Some user programs like strace and glibc needed to know this sort of information, but their way of gathering it together—although well accomplished—was very messy too.

Read more
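To get a feel for the name-to-number mapping that tools like strace and glibc have to track, the sketch below calls getpid by raw syscall number through ctypes. The number 39 is only valid on x86-64 Linux; i386 and ARM use different numbers, which is exactly the per-architecture messiness described above:

    import ctypes
    import os

    libc = ctypes.CDLL(None, use_errno=True)  # handle to the C library
    SYS_getpid = 39  # x86-64 Linux only; other architectures differ

    print("syscall(39) ->", libc.syscall(SYS_getpid))
    print("os.getpid() ->", os.getpid())  # should match on x86-64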