Kubuntu 6.06

Filed under
Linux
Reviews

With all the Ubuntu excitement of the past few days, it occurred to me that, being a KDE fan more so than a GNOME fan, perhaps Kubuntu might be more my cup of tea. While perusing the downloads it also occurred to me that, hey, I have a 64-bit machine now! So I downloaded the Kubuntu 6.06 desktop amd64 ISO. Was it more appealing to a diehard KDE fan? Does 64-bit programming make much difference?

The boot is similar to Ubuntu's; in fact it's almost identical, except for the more attractive blue coloring, and instead of Ubuntu's goldish logo we have the blue Kubuntu one. Otherwise there didn't seem to be much difference until we reached the splash screen. As attractive as Ubuntu's GUI splash might be, Kubuntu's is much more so. It's clean and crisp, and I just personally prefer blue.


Once you reach the desktop, one finds a blue background that looks like large faint bubbles as a foundation for KDE 3.5.2. It is your basic KDE desktop, consisting of K applications for the most popular tasks; what's not included on the ISO is installable. Included graphics applications are Kooka, Krita, KPDF, and Gwenview. For Internet we find Akregator, bluetooth chat, Konqueror, Konversation, Kopete, KPPP, KRDC, krfb, KTorrent and a wireless LAN manager. Multimedia includes amaroK, K3b, Kaffeine, KAudioCreator, KMix, and KsCD.


OpenOffice.org 2.0.2 rounds out the office category, along with Kontact, which KDE considers an office application. There are plenty of system tools and utilities as well: utilities for software packaging, setting alarms, configuring groupware connections, managing print jobs, and calculating. System tools consist of KCron, Keep, KInfoCenter, KSysGuard, KSystemLog, Konsole and QTParted.


On the desktop, as well as in the System menu, is an icon for Install. Installing to the hard drive is simplified compared to comparable Linux systems, and in this case it is very similar, if not identical, to the process found in Ubuntu. It starts with answering a few configuration questions such as language, timezone, keyboard, and user and machine name.


Next comes partitioning, if necessary, and setting the target partition and swap. Confirm the settings and press Install. All one does now is wait. It takes about 10 minutes for the installer to complete its work before asking if you'd like to reboot. That's it.


Like Ubuntu, the installer presumes you would like GRUB installed and doesn't bother to ask, and my first install attempt wasn't successful. The newly installed system would not boot; it just sat at the 'loading grub' screen blinking at me, in much the same manner as I encountered with the Ubuntu release candidate. After replacing GRUB with LILO, Kubuntu tried to boot, but lots of things failed, including the loading of needed modules and the start of the GUI. I booted the live CD and tried again, this time doing nothing else in the background, and achieved a bootable install. The first time, I had been taking a bunch of screenshots. I think I'm beginning to see a pattern emerge here in all my installs of the Ubuntu family, and I can sum it up in a few words of advice: do not do anything else while your new Ubuntu system installs. This of course detracts from the main advantage of using a live CD as an install medium, but on the other hand, it takes such a short span of time to install that it's not a major sacrifice.
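For what it's worth, a broken GRUB install like this can often be repaired from the live CD without reinstalling the whole system. A rough sketch, assuming the new root filesystem landed on /dev/hda1 (a hypothetical example; adjust both device names for your own layout):

```shell
# Repair GRUB from the live CD by chrooting into the installed system.
sudo mount /dev/hda1 /mnt           # mount the installed root filesystem
sudo mount --bind /dev /mnt/dev     # make device nodes visible in the chroot
sudo mount --bind /proc /mnt/proc
sudo chroot /mnt grub-install /dev/hda   # rewrite GRUB to the MBR
sudo umount /mnt/proc /mnt/dev /mnt
```

After a reboot the 'loading grub' stage should get past the blinking cursor, since the stage files on disk now match what was written to the MBR.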

The installed system affords one the opportunity to install whatever applications one might need, as well as any third-party or proprietary drivers. (K)ubuntu software is installed through an application called Adept. Not only is it a software manager, but it also takes care of system and security updates. In fact, one of the first things I saw when I booted Kubuntu the first time was an icon in the system tray for Adept; clicking on it brought up an updater. Click to fetch the list of updates, and in a few seconds it will inform you if anything needs updating. In this case there were updates to the Adept software manager and GNOME install data. One can Apply Updates or Forget Changes and Quit. I clicked Apply Updates, and the updates were downloaded and installed in seconds without issue.
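Under the hood, Adept is a graphical front end to APT, so the same update can be done from a Konsole if you prefer. A minimal sketch:

```shell
# Command-line equivalent of Adept's updater (Adept drives APT underneath).
sudo apt-get update    # fetch the current list of available updates
sudo apt-get upgrade   # download and install any pending updates
```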


In the menu is an entry for Adept, which opens a window similar to Synaptic. You can search for specific packages by keyword with tickable options, then right-click a package name and choose "Request Install." Click the Apply Changes button, and your package, along with its dependencies, is downloaded and installed.


Clicking on "Add and Remove Programs" also brings up Adept, but in a different layout. Here one finds the applications available or installed, listed by category. Ticking the little checkbox and clicking Apply Changes will install or remove the chosen programs.


The hardware detection was good, and pretty much everything worked out of the box. Kaffeine was able to play MPGs and the example files, but not AVIs. OpenOffice crashed and disappeared on my first attempt at using it, but functioned properly in all subsequent tests. The included KDE was a bit stripped down, with no games at all, but lots of choices are available through the software manager. The desktop itself was pretty, even if customized very little. Under the hood is a 2.6.15 kernel and Xorg 7.0, and gcc 4.0.3 is installable.

The performance of the system was well above average. In fact, I'll just say it: that thing flies. Applications opened before I could move my mouse. There was no artifacting or delay in redrawing windows, no delay at all in switching between windows, and no jerkiness when moving windows around. The menu popped right open without delay as well. The whole system felt light and nimble. I was quite impressed. Comparing the performance of KDE Kubuntu to GNOME Ubuntu is almost like comparing peaches to nectarines, and since I didn't test the x86 version of Kubuntu, I can't say with any authority or expertise that Kubuntu 64 outperforms the others. But I can say this is one of the fastest, if not the fastest, full-sized systems I've tested. Yes sir, Kubuntu was quite impressive.

Kubuntu 6.06 Review

Thanks for your review.
Just a few personal comments about this new release.

FIRST, THE BAD . . .

1) Yes, I profoundly dislike NOT being able to log in as root. Granted, I don't spend much time logged-in as user root, but when I'm going to do an extended session of system configuration and maintenance, it's the fastest, most efficient way. And, of course, Linux is supposed to be about choice.

So I googled "Kubuntu root login", and got to a Kubuntu forum where some user had asked the same question. In the forum, the next person had posted in reply (I'm paraphrasing here): I know how to enable root logins for Kubuntu, but I'm not going to tell you how because this is not a good idea.

I just couldn't believe my eyes. OSS is all about freedom and the ability to control your own machine. "THERE'S NO INFORMATION HIDING IN LINUX!!!" (Imagine Tom Hanks in the movie, A League of Their Own saying "There's no crying in baseball!")

As I sat there thinking about this, I began to think that these Ubuntu/Kubuntu folk are a different kind of folk than me--alien, strange, ungracious, and infuriating. All that wondering about the popularity of Ubuntu/Kubuntu increased. Why would anyone want to use a distro where a simple request for information was received with such an ignorant, uptight, shortsighted, narrowminded, hypocritical, and outright anal response?

Having worked myself up to a good simmer, I then looked at the next post in the forum, where a user quietly and considerately explained exactly how to do it. OK, maybe these Kubuntu folk aren't jerks. Too much rush to judgment and stereotyping going on in our world anyway.

2) I'm not too familiar with Debian, or Debian based distros--so this one is probably my fault. I couldn't get Nvidia's accelerated drivers working properly. I have an older 17" LCD monitor that still works great, but it's very fussy about sync rates. I can usually tinker around and get things working, but no go here. I admit I was impatient here, and if I'd spent more time, I could have gotten it to work.

3) Development compilers and libraries are not included with the basic live CD install. There is such a thing as designing a distro for beginners, but not treating them like idiots. Can't find a package for your distro that works? Then, you can get the source and compile your own. Basic tools to do this, in my opinion, should always be included with the basic install of a Linux distro.

NOW, THE GOOD.

1) I like the graphical package installer, Adept. It works well, and is well designed and integrated.

2) Despite the heavy load the Ubuntu/Kubuntu servers must have been undergoing with a new release, they were quick and responsive.

3) I agree with everything srlinux had to say in her (typically excellent) review, particularly when she says Kubuntu is very fast and responsive. I installed the amd64 version, and speed was excellent.

4) Kubuntu very courteously found all the other distros on my hard disks, and added them to the Grub boot menu. If all distros would do this, installing and trying out multiple distros would certainly be easier.

Well, that's it. I think it might be interesting to see an in-depth comparison of the latest MEPIS (ver. 6, RC4) to this new Kubuntu release.

Gary Frankenbery

re: Kubuntu 6.06 Review

gfranken wrote:

Thanks for your review.
Just a few personal comments about this new release.

FIRST, THE BAD . . .

1) Yes, I profoundly dislike NOT being able to log in as root. Granted, I don't spend much time logged-in as user root, but when I'm going to do an extended session of system configuration and maintenance, it's the fastest, most efficient way. And, of course, Linux is supposed to be about choice.

Yeah, I didn't like that one too much either, especially when the Ubuntus first hit the pipes. I'm used to it now, I guess, and it only annoys me slightly when I forget to sudo a command. I found just setting a root password will fix that, though.
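For the record, the root-password trick is just this (the commonly cited workaround, not anything Ubuntu officially recommends):

```shell
# Give the root account a password so direct root logins work again.
sudo passwd root

# To re-lock the account later and go back to the sudo-only setup:
# sudo passwd -l root
```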

teehee on the tom hanks thing. Big Grin

gfranken wrote:

3) Development compilers and libraries are not included with the basic live CD install. There is such a thing as designing a distro for beginners, but not treating them like idiots. Can't find a package for your distro that works? Then, you can get the source and compile your own. Basic tools to do this, in my opinion, should always be included with the basic install of a Linux distro.

Yeah, that used to really irk me too. But like the sudo thing, I'm getting used to it. I'm seeing it more and more in distros these days. A comment on another site said to install build-essential to get gcc, make, and friends. It was nice seeing that comment before my installs; that way I didn't have to get all annoyed about it. Big Grin

Thanks for your kind words for me, I really appreciate that.

----
You talk the talk, but do you waddle the waddle?

It's not as bad as it looks

1) Yes, I profoundly dislike NOT being able to log in as root.

I must admit that I created a root password on the first machines where I installed Ubuntu (sudo passwd). But now I just leave the root user locked and use (sudo su) to get a shell with root rights, exiting it again after some system maintenance. It is possibly more secure, in that script kiddies cannot log in as root via ssh (not installed by default, but the first thing I usually install) even if they try really hard.
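That workflow, spelled out (sudo -i is an equivalent alternative on recent sudo versions):

```shell
# With the root account locked, open a root shell only when needed.
sudo su -    # become root with a full login environment ('sudo -i' also works)
# ... do your system maintenance here ...
exit         # drop root rights again when finished
```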

2) I couldn't get Nvidia's accelerated drivers working properly.

After installing the nvidia package (apt-get install nvidia-glx), I usually rerun the configuration of Xorg with (dpkg-reconfigure xserver-xorg); this creates a new /etc/X11/xorg.conf file, including sync ranges. The tool always shows the options chosen the last time it was run, so it is easy to use multiple times to tweak things.
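Spelled out as a sequence (the sync ranges below are made-up examples; take the real values from your monitor's manual, which matters on a fussy LCD):

```shell
# Install NVIDIA's accelerated driver and regenerate the X configuration.
sudo apt-get install nvidia-glx
sudo dpkg-reconfigure xserver-xorg   # walks through monitor and sync questions

# Afterwards /etc/X11/xorg.conf should contain lines like these
# (example values only -- use your monitor's documented ranges):
#   HorizSync   30-81
#   VertRefresh 56-75
```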

3) Development compilers and libraries are not included with the basic live CD install.

It is easy to correct (apt-get install build-essential), but many people don't need those tools. The current repositories are packed with all kinds of useful software. It is very seldom that I revert to debian-unstable to get the sources of something and repackage it to create an Ubuntu version of the same project.
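The one-liner in question, for the record (build-essential is a metapackage that pulls in gcc, g++, make and the basic development headers):

```shell
# Install the basic compiler toolchain on Ubuntu/Kubuntu.
sudo apt-get install build-essential
gcc --version    # confirm the compiler is now on the PATH
```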

Re: It's not as bad as it looks

Quote:
After installing the nvidia package (apt-get install nvidia-glx) I usually run the reconfiguration of xorg by (dpkg-reconfigure xserver-xorg) this creates a new /etc/X11/xorg.conf file including sync ranges. This tool always shows the options chosen the last time it has run so it is easy to use multiple times to tweak things.
Thanks Jurgen, for taking the time to explain the nvidia thing. My weakness when configuring Debian based distros is definitely showing here.

Also thanks for your clarification on installing the development packages.

Kubuntu is definitely on my radar. It was certainly blazingly fast.

Regards,
Gary Frankenbery
Computer Science Teacher
Grants Pass High School, Oregon, USA

