A test drive of Debian/etch Xen

I have been looking for a new server for quite some time now. My old server is an aging HP NetServer LC3 with dual PII 233 MHz processors that was donated to me. I use it as a general-purpose home server, and I also run a few other services off it, such as our Bugzilla and Subversion repositories. It works, but it is a little inflexible: I've had to repartition it a couple of times to make more room for my backups, and maintenance isn't friendly with so many services running off the same OS.

A few months ago swbrown posted a fantastic tutorial on the LXer forums that gave a short overview of setting up RAID with LVM, Xen and LUKS. Around that same time I noticed a decommissioned HP server at my job. I decided to buy it off my boss and see if I could reproduce swbrown's nice setup. Here is how I fared.

Installing the Xen packages is pretty easy, but you have to pick the right xen-hypervisor package. Etch has two xen-hypervisor packages, one with a -pae suffix and one without. If you want the PAE version, you can simply install the xen-linux-system package. If you want the non-PAE version, you need to install the linux-image-2.6.18-4-xen kernel, xen-utils and the xen-hypervisor separately. The kernel in question serves both the dom0 (host) and the domU (guest) operating systems. There are also -xen-vserver packages, which I found a bit confusing; those are for mixing Xen and vserver virtualization on the same machine.
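The PAE-or-not decision above can be sketched as a small helper. The package names are the etch-era ones (the exact hypervisor version, 3.0.3-1 here, is my assumption; verify with apt-cache search xen-hypervisor), and the function just prints the command to run:

```shell
# Sketch only: picks the Xen install command for Debian etch based on
# whether the CPU advertises the PAE flag. Package names should be
# double-checked against your mirror with `apt-cache search xen`.
# Typical call:
#   pick_xen_packages "$(grep -m1 '^flags' /proc/cpuinfo)"
pick_xen_packages() {
    case " $1 " in
        *" pae "*)
            # PAE CPU: the metapackage pulls in kernel, hypervisor and utils.
            echo "apt-get install xen-linux-system-2.6.18-4-xen-686"
            ;;
        *)
            # No PAE: install the non-PAE pieces separately.
            echo "apt-get install linux-image-2.6.18-4-xen-686 xen-utils xen-hypervisor-3.0.3-1-i386"
            ;;
    esac
}
```

The same kernel image then serves as both the dom0 and domU kernel, so no second flavour is needed for the guests.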

Full Story.

More in Tux Machines

KDE: KDE Applications 18.04, KDE Connect, KMyMoney 5.0.1 and Qt Quick

  • KDE Applications 18.04 branches created
    Make sure you commit anything you want to end up in the KDE Applications 18.04 release to them :)
  • KDE Connect – State of the union
We haven’t blogged about KDE Connect in a long time, but that doesn’t mean that we’ve been lazy. Some new people have joined the project and together we have implemented some exciting features. Our last post was about version 1.0, but some time ago we released version 1.8 of the Android app and 1.2.1 of the desktop component, which we did not blog about. Until now!
  • KMyMoney 5.0.1 released
    The KMyMoney development team is proud to present the first maintenance version 5.0.1 of its open source Personal Finance Manager. Although several members of the development team had been using the new version 5.0.0 in production for some time, a number of bugs and regressions slipped through testing, mainly in areas and features not used by them.
  • Qt Quick without a GPU: i.MX6 ULL
    With the introduction of the Qt Quick software renderer it became possible to use Qt Quick on devices without a GPU. We investigated how viable this option is on a lower end device, particularly the NXP i.MX6 ULL. It turns out that with some (partially not yet integrated) patches developed by KDAB and The Qt Company, the performance is very competitive. Even smooth video playback (with at least half-size VGA resolution) can be done by using the PXP engine on the i.MX6 ULL.
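For reference, the software renderer the Qt Quick item above describes can be selected at runtime via an environment variable documented for Qt 5.9 and later (the application name below is hypothetical):

```shell
# Force the Qt Quick software rasterizer instead of the OpenGL scene graph,
# e.g. on a GPU-less board such as the i.MX6 ULL. Requires Qt >= 5.9;
# "my-qml-app" stands in for your actual Qt Quick application.
QT_QUICK_BACKEND=software ./my-qml-app
```

On Qt 5.x releases before 5.9, the equivalent switch was QMLSCENE_DEVICE=softwarecontext.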

Red Hat Leftovers

Debian Leftovers

  • RcppSMC 0.2.1: A few new tricks
    A new release, now at 0.2.1, of the RcppSMC package arrived on CRAN earlier this afternoon (and once again as a very quick pretest-publish within minutes of submission).
  • sbuild-debian-developer-setup(1) (2018-03-19)
I have heard a number of times that sbuild is too hard to get started with, and hence people don’t use it. To lower the barrier to using and contributing to Debian, I wanted to make sbuild easier to set up. sbuild ≥ 0.74.0 provides a Debian package called sbuild-debian-developer-setup. Once installed, run the sbuild-debian-developer-setup(1) command to create a chroot suitable for building packages for Debian unstable.
  • control-archive 1.8.0
This is the software that maintains the archive of control messages and the newsgroups and active files. I update things in place, but it's been a while since I made a formal release, and one seemed overdue (particularly since it needed some compatibility tweaks for GnuPG v1).
  • The problem with the Code of Conduct
  • Some problems with Code of Conducts
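The sbuild item above boils down to roughly the following session (the install and setup commands are the ones the post names; the .dsc file name is a hypothetical example):

```shell
# One-time setup: install the helper (needs sbuild >= 0.74.0 in the archive)
# and let it create a chroot for building against Debian unstable.
sudo apt install sbuild-debian-developer-setup
sudo sbuild-debian-developer-setup

# Then build a source package against the freshly created unstable chroot.
# "hello_2.10-1.dsc" is a placeholder for your own package.
sbuild -d unstable ./hello_2.10-1.dsc
```

You may need to log out and back in first so that your new sbuild group membership takes effect.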

OSS Leftovers

  • Can we build a social network that serves users rather than advertisers?
    Today, open source software is far-reaching and has played a key role driving innovation in our digital economy. The world is undergoing radical change at a rapid pace. People in all parts of the world need a purpose-built, neutral, and transparent online platform to meet the challenges of our time. And open principles might just be the way to get us there. What would happen if we married digital innovation with social innovation using open-focused thinking?
  • Digital asset management for an open movie project
    A DAMS will typically provide something like a search interface combined with automatically collected metadata and user-assisted tagging. So, instead of having to remember where you put the file you need, you can find it by remembering things about it, such as when you created it, what part of the project it connects to, what's included in it, and so forth. A good DAMS for 3D assets generally will also support associations between assets, including dependencies. For example, a 3D model asset may incorporate linked 3D models, textures, or other components. A really good system can discover these automatically by examining the links inside the asset file.
  • LG Releases ‘Open Source Edition’ Of webOS Operating System
  • Private Internet Access VPN opens code-y kimono, starting with Chrome extension
VPN tunneller Private Internet Access (PIA) has begun open sourcing its software. Over the next six months, the service promises that all its client-side software will make its way into the hands of the Free and Open Source Software (FOSS) community, starting with PIA's Chrome extension. The extension turns off mics, cameras and Adobe's delightful Flash plug-in, and prevents IP discovery. It also blocks ads and tracking. Christel Dahlskjaer, director of outreach at PIA, warned that "our code may not be perfect, and we hope that the wider FOSS community will get involved."
  • Open sourcing FOSSA’s build analysis in fossa-cli
    Today, FOSSA is open sourcing our dependency analysis infrastructure on GitHub. Now, everyone can participate and have access to the best tools to get dependency data out of any codebase, no matter how complex it is.
  • syslog-ng at SCALE 2018
It is the fourth year that syslog-ng has participated in the Southern California Linux Expo, better known to many as SCALE ‒ the largest Linux event in the USA. In many ways it is similar to FOSDEM in Europe; however, SCALE also focuses on users and administrators, not just developers. It was a pretty busy four days for me.
  • Cisco's 'Hybrid Information-Centric Networking' gets a workout at Verizon
  • Verizon and Cisco ICN Trial Finds Names More Efficient Than Numbers
  • LLVM-MCA Will Analyze Your Machine Code, Help Analyze Potential Performance Issues
One of the tools merged to LLVM SVN/Git earlier this month for the LLVM 7.0 cycle is LLVM-MCA. The LLVM-MCA tool is a machine code analyzer that estimates how the given machine code would perform on a specific CPU and attempts to report possible bottlenecks. The LLVM-MCA analysis tool uses information already present within LLVM, such as a given CPU family's scheduler model, to statically measure how the machine code would perform on a particular CPU, even going as far as estimating the instructions per cycle and possible resource pressure.
  • Taking Data Further with Standards
Imagine reading a book, written by many different authors, each working apart from the others, without guidelines, and published without edits. That book is a difficult read — it's in 23 different languages, there's no consistency in character names, and the story gets lost. As a reader, you have an uphill battle to get the information to tell you one cohesive story. Data is a lot like that, and that's why data standards matter. By establishing common standards for the collection, storage, and control of data and information, data can go farther, be integrated with other data, and make "big data" research and development possible. For example, NOAA collects around 20 terabytes of data every day. Through the National Ocean Service, instruments are at work daily gathering physical data in the ocean, from current speed to the movement of schools of fish and much more. Hundreds of government agencies and programs generate this information to fulfill their missions and mandates, but without consistency from agency to agency, the benefits of that data are limited. In addition to federal agencies, there are hundreds more non-federal and academic researchers gathering data every day. Having open, available, comprehensive data standards that are widely implemented facilitates data sharing, and when data is shared, it maximizes the benefits of "big data" — integrated, multi-source data that yields a whole greater than its parts.
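The LLVM-MCA workflow in the item above is a simple pipeline: compile to assembly, then feed that assembly to the analyzer with a target CPU's scheduler model. A minimal sketch (the source file name is hypothetical; the flags are the tool's standard ones):

```shell
# Compile a hot loop to assembly without assembling it, then ask llvm-mca
# for a static performance estimate against Skylake's scheduling model.
# "dot_product.c" is a placeholder for your own source file.
clang -O2 -S -o - dot_product.c | llvm-mca -mcpu=skylake
```

The report includes the estimated instructions per cycle and per-resource pressure figures the summary mentions, letting you compare codegen variants without running them on real hardware.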