
Kubuntu 6.06

Filed under: Linux, Reviews

With all the Ubuntu excitement of the past few days, it occurred to me that, being a KDE fan more so than a GNOME fan, perhaps Kubuntu might be more my cup of tea. While perusing the downloads it also occurred to me that, hey, I have a 64-bit machine now! So I downloaded the Kubuntu 6.06 desktop amd64 ISO. Was it more appealing to a diehard KDE fan? Does 64-bit programming make much difference?

The boot is similar to Ubuntu's; in fact it's almost identical, except for the more attractive blue coloring, and instead of Ubuntu's goldish logo we have the blue Kubuntu one. Otherwise there didn't seem to be much difference until we reached the splash screen. As attractive as Ubuntu's GUI splash might be, Kubuntu's is much more so. It's clean and crisp, and I just personally prefer blue.


Once you reach the desktop, one finds a blue background that looks like large faint bubbles as a foundation for KDE 3.5.2. It is your basic KDE desktop consisting of K-apps for most popular tasks; what's not included on the ISO is installable. Included graphics applications are Kooka, Krita, KPDF, and Gwenview. For Internet we find Akregator, a Bluetooth chat client, Konqueror, Konversation, Kopete, KPPP, KRDC, Krfb, KTorrent, and a wireless LAN manager. Multimedia includes amaroK, K3b, Kaffeine, KAudioCreator, KMix, and KsCD.


OpenOffice.org 2.0.2 rounds out the office category, along with Kontact, which KDE considers an office application. There are plenty of system tools and utilities as well: utilities for software packaging, setting alarms, configuring groupware connections, managing print jobs, and calculating. System tools consist of KCron, Keep, KInfoCenter, KSysGuard, KSystemLog, Konsole, and QTParted.


On the desktop, as well as in the System menu, is an icon for Install. Installing to the hard drive is simplified compared to many Linux systems, and in this case it is very similar, if not identical, to the process found in Ubuntu. It starts with answering a few configuration questions such as language, timezone, keyboard, and user and machine names.


Next comes partitioning, if necessary, and setting the target partition and swap. Confirm the settings and press Install. All one does now is wait. It takes about 10 minutes for the installer to complete its work before asking if you'd like to reboot. That's it.


Like Ubuntu, the installer presumes you would like GRUB installed, so it doesn't bother to ask, and my first install attempt wasn't successful. The newly installed system would not boot; it just sat at the 'loading grub' screen blinking at me, in much the same manner as I encountered with the Ubuntu release candidate. After replacing GRUB with LILO, Kubuntu tried to boot, but lots of things failed, including the loading of needed modules and the start of the GUI. I booted the live CD and tried again, this time doing nothing else in the background, and achieved a bootable install. The first time, I had been taking a bunch of screenshots. I think I'm beginning to see a pattern emerge in all my installs of the Ubuntu family, and I can sum it up in a few words of advice: do not do anything else while your new Ubuntu system installs. This of course detracts from the main advantage of using a live CD as an install medium, but on the other hand, the install takes such a short span of time that it's not a major sacrifice.

The installed system affords one the opportunity to install whatever applications one might need, as well as any third-party or proprietary drivers. (K)ubuntu software is installed through an app called Adept. Not only is it a software manager, it also takes care of system and security updates. In fact, one of the first things I saw when I booted Kubuntu for the first time was an icon in the system tray for Adept; clicking on it brought up an updater. Click to fetch the list of updates, and in a few seconds it will inform you if anything needs updating. In this case there were updates to the Adept software manager and GNOME install data. One can Apply Changes or Forget Changes and Quit. I clicked Apply Changes, and the updates were downloaded and installed in seconds without issue.
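Under the hood Adept is a front end to APT, so the same update can be done from Konsole. A minimal sketch, assuming the default repositories are already configured:

    # Refresh the package lists (what Adept's "fetch list of updates" does)
    sudo apt-get update
    # Download and install any pending updates
    sudo apt-get upgrade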


In the menu is an entry for Adept, which opens a window similar to Synaptic. You can search for specific packages by keyword with tickable options, and right-click a package name to "Request Install." Then click on the Apply Changes button, and your package as well as its dependencies are downloaded and installed.
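The same search-and-install flow has a command-line equivalent for those who prefer a shell; a rough sketch, using kdegames purely as an example package:

    # Search package names and descriptions by keyword
    apt-cache search games
    # Install the chosen package along with its dependencies
    sudo apt-get install kdegames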


Clicking on "Add and Remove Programs" also brings up Adept, but in a different layout. Here one finds the applications available or installed, listed by category. Ticking the little checkbox and clicking Apply Changes will install or remove the chosen programs.


Hardware detection was good, and pretty much everything worked out of the box. Kaffeine was able to play MPEGs and the example files, but not AVIs. OpenOffice crashed and disappeared on my first attempt at using it, but functioned properly in all subsequent tests. The KDE that was included was a bit stripped down and included no games at all, but lots of choices are available through the software manager. The desktop itself was pretty, even if customized very little. Under the hood is a 2.6.15 kernel and Xorg 7.0, and gcc 4.0.3 is installable.

The performance of the system was well above average. In fact, I'll just say it: that thing flies. Applications opened before I could move my mouse. There was no artifacting or delay in redrawing windows, no delay at all in switching between windows, and no jerkiness when moving windows around. The menu popped right open without delay as well. The whole system felt light and nimble. I was quite impressed. Comparing the performance of KDE Kubuntu to GNOME Ubuntu is almost like comparing peaches to nectarines, and since I didn't test the x86 version of Kubuntu, I can't say with any authority or expertise that Kubuntu 64 outperforms the others. But I can say this is one of the, if not the, fastest full-sized systems I've tested. Yes sir, Kubuntu was quite impressive.

Kubuntu 6.06 Review

Thanks for your review.
Just a few personal comments about this new release.

FIRST, THE BAD . . .

1) Yes, I profoundly dislike NOT being able to log in as root. Granted, I don't spend much time logged-in as user root, but when I'm going to do an extended session of system configuration and maintenance, it's the fastest, most efficient way. And, of course, Linux is supposed to be about choice.

So I googled "Kubuntu root login", and got to a Kubuntu forum where some user had asked the same question. In the forum, the next person had posted in reply (I'm paraphrasing here): I know how to enable root logins for Kubuntu, but I'm not going to tell you how because this is not a good idea.

I just couldn't believe my eyes. OSS is all about freedom and the ability to control your own machine. "THERE'S NO INFORMATION HIDING IN LINUX!!!" (Imagine Tom Hanks in the movie A League of Their Own saying, "There's no crying in baseball!")

As I sat there thinking about this, I began to think that these Ubuntu/Kubuntu folk are a different kind of folk than me--alien, strange, ungracious, and infuriating. All my wondering about the popularity of Ubuntu/Kubuntu only increased. Why would anyone want to use a distro where a simple request for information was received with such an ignorant, uptight, shortsighted, narrowminded, hypocritical, and outright anal response?

Working myself up to a good simmer now, I then looked at the next post in the forum, where a user quietly and considerately explained exactly how to do it. OK, maybe these Kubuntu folk aren't jerks. There's too much rush to judgement and stereotyping going on in our world anyway.

2) I'm not too familiar with Debian, or Debian-based distros--so this one is probably my fault. I couldn't get Nvidia's accelerated drivers working properly. I have an older 17" LCD monitor that still works great, but it's very fussy about sync rates. I can usually tinker around and get things working, but no go here. I admit I was impatient, and if I'd spent more time, I could probably have gotten it to work.

3) Development compilers and libraries are not included with the basic live CD install. There is such a thing as designing a distro for beginners, but not treating them like idiots. Can't find a package for your distro that works? Then, you can get the source and compile your own. Basic tools to do this, in my opinion, should always be included with the basic install of a Linux distro.

NOW, THE GOOD.

1) I like the graphical package installer, Adept. It works well, and is well designed and integrated.

2) Despite the heavy load the Ubuntu/Kubuntu servers must have been undergoing with a new release, they were quick and responsive.

3) I agree with everything srlinux had to say in her (typically excellent) review, particularly when she says Kubuntu is very fast and responsive. I installed the amd64 version, and speed was excellent.

4) Kubuntu very courteously found all the other distros on my hard disks, and added them to the Grub boot menu. If all distros would do this, installing and trying out multiple distros would certainly be easier.

Well, that's it. I think it might be interesting to see an in-depth comparison of the latest MEPIS (ver. 6, RC4) to this new Kubuntu release.

Gary Frankenbery

re: Kubuntu 6.06 Review

gfranken wrote:

Thanks for your review.
Just a few personal comments about this new release.

FIRST, THE BAD . . .

1) Yes, I profoundly dislike NOT being able to log in as root. Granted, I don't spend much time logged-in as user root, but when I'm going to do an extended session of system configuration and maintenance, it's the fastest, most efficient way. And, of course, Linux is supposed to be about choice.

Yeah, I didn't like that one too much either, especially when the Ubuntus first hit the pipes. I'm used to it now, I guess, and it only annoys me slightly when I forget to sudo a command. I found just setting a root password will fix that, though.
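For anyone wanting the same fix, it really is a one-liner; this simply gives the locked root account a password:

    # Assign root a password so direct root logins work
    sudo passwd root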

Teehee on the Tom Hanks thing. Big Grin

gfranken wrote:

3) Development compilers and libraries are not included with the basic live CD install. There is such a thing as designing a distro for beginners, but not treating them like idiots. Can't find a package for your distro that works? Then, you can get the source and compile your own. Basic tools to do this, in my opinion, should always be included with the basic install of a Linux distro.

Yeah, that used to really irk me too. But like the sudo thing, I'm getting used to it. I'm seeing it more and more in distros these days. A comment on another site said to install build-essential to get gcc, make, and friends. It was nice seeing that comment before my installs; that way I didn't have to get all annoyed at it. Big Grin
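For reference, that's a single metapackage pull; a quick sketch of installing the toolchain and checking that it landed:

    # Install gcc, g++, make, libc headers, and friends
    sudo apt-get install build-essential
    # Confirm the compiler is present
    gcc --version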

Thanks for your kind words for me; I really appreciate that.

----
You talk the talk, but do you waddle the waddle?

It's not as bad as it looks

1) Yes, I profoundly dislike NOT being able to log in as root.

I must admit that I created a root password on the first machines where I installed Ubuntu (sudo passwd). But now I just leave the root user locked and use (sudo su) to get a shell with root rights, exiting it again after some system maintenance. It is possibly more secure, too: script kiddies cannot log in as root via ssh (not installed by default, but the first thing I usually install), even if they try really hard.
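In practice that session is just a few lines; note that sudo asks for your own password, not root's:

    sudo su    # become root; sudo prompts for your user password
    # ... do the system maintenance ...
    exit       # drop back to your normal user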

2) I couldn't get Nvidia's accelerated drivers working properly.

After installing the nvidia package (apt-get install nvidia-glx), I usually rerun the configuration of X.org with (dpkg-reconfigure xserver-xorg); this creates a new /etc/X11/xorg.conf file, including sync ranges. The tool always shows the options chosen the last time it was run, so it is easy to run multiple times to tweak things.
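If the generated ranges are still wrong for a fussy monitor, they can be set by hand in the Monitor section of /etc/X11/xorg.conf. A sketch only; the values below are placeholders, and the real ranges must come from the monitor's manual:

    Section "Monitor"
        Identifier   "LCD Monitor"
        # Example ranges only -- consult your monitor's documentation
        HorizSync    30-64
        VertRefresh  56-76
    EndSection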

3) Development compilers and libraries are not included with the basic live CD install.

It is easy to correct (apt-get install build-essential), but many people don't need those tools. The current repositories are packed with all kinds of useful software. It is very seldom that I have to resort to Debian unstable to get the sources of something and repackage it to create an Ubuntu version of the same project.

Re: It's not as bad as it looks

Quote:
After installing the nvidia package (apt-get install nvidia-glx), I usually rerun the configuration of X.org with (dpkg-reconfigure xserver-xorg); this creates a new /etc/X11/xorg.conf file, including sync ranges. The tool always shows the options chosen the last time it was run, so it is easy to run multiple times to tweak things.
Thanks, Jurgen, for taking the time to explain the nvidia thing. My weakness when configuring Debian-based distros is definitely showing here.

Also thanks for your clarification on installing the development packages.

Kubuntu is definitely on my radar. It was certainly blazingly fast.

Regards,
Gary Frankenbery
Computer Science Teacher
Grants Pass High School, Oregon, USA

