
Beastie of an OS

Filed under: Reviews, BSD

Once a distro goes into beta 3, most of the major choices are in place, so looking at a third test release gives one a fairly good idea of what a distro might be like once it's released. The only experience I'd had with a BSD clone or derivative was my PC-BSD review some months ago. That install was as simple as 1, 2, 3... or click, click, click. I'd heard the horror stories about other BSD installs, yet I downloaded 6.0 Beta 3 with hope. Was this going to be a brain-burning, hair-pulling, data-losing proposition? What happened with my attempted FreeBSD 6.0 Beta 3 install?

As this is my first foray into FreeBSD, this isn't so much a "what's new" as it is a "what's here".

First off, the install was much easier than running it... at first. But as with many new things, once you learn how, you wonder why you were nervous to begin with. The installer was easy enough. I had read the FreeBSD docs before, and as I recall, the process sounded like a cross between LFS and Gentoo. If that was true then, it certainly isn't true now. The FreeBSD installer is a nice ASCII graphical-type installer that walks one through the install in much the same manner as Slackware's. Can you install Slackware? Then you can install FreeBSD. In fact, it even looks very much like Slackware's installer.

The most difficult step for the newcomer might be the fdisk step. I even experienced a sweaty-palm moment: the FreeBSD fdisk didn't seem to see all my partitions, or rather, it saw the extended partition as one big empty slice. I toyed with the idea of typing in the start and end block numbers to see if it would install on the correct partition, but chickened out. It was already complaining that it didn't agree with the geometry reported for the partitions it did see. I chose to put FreeBSD on the first partition of the drive - the spot formerly reserved for, if not the current home of, Windows. It is now a UNIX slice.

The rest of the install is fairly straightforward. One picks the type of install one would like - if I recall correctly, something like developer, developer + kernel, developer + kernel + X11, and so on - or, as I chose, All. It takes about 10 minutes to install the system, then it asks about packages and ports. I chose many, many packages I'd need, including KDE, GNOME, and all the other window managers available during the install. It turns out there are many more available through the package manager. This step takes some time, probably half an hour or so, but then it gets to the configuration portion. It asks some questions about your net connection preferences, the root password, setting up users & groups, and some other hardware. All of this was quite easy to follow and complete.

I chose not to install its bootloader; instead, I googled and discovered one only needs an entry in one's Linux lilo.conf very similar to the ones we used for Windows. In fact, it's almost exactly like that. Mine looks like so:
  other=/dev/hda1
    table=/dev/hda
    label=FreeBSD

Then run lilo and yippee! Upon reboot, LILO hands off to the FreeBSD bootloader and your new system boots as desired.

One is booted to a terminal for logging in. The first thing I always do is set up X. I fired up my console browsers in an attempt to download the NVIDIA drivers, but that failed because NVIDIA had changed their site since I last downloaded their drivers with a text browser. I used to think how nice it was that one could use Links/Lynx for that, but now their stupid JavaScript license agreement ruins it. So, I improvised. Since my FreeBSD still wasn't seeing anything in my extended partition, I had to make other arrangements. This was all in vain, as the install bombed out very early on. It shot an error about NOOBJ being deprecated in favor of NO_OBJ or some such, and I knew it was vesa for me. The Xorg nv driver locks my machine up fairly tight no matter what boot options I use.

However, there was no /etc/X11/xorg.conf skeleton in place, and copying one from another install wasn't an option, so I was left to run Xorg -configure. This sets up a test file in /root called xorg.conf.new, and one can test the configuration with Xorg -config xorg.conf.new. If it works well, then you can cp it to /etc/X11/xorg.conf, and I did.
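For anyone following along, the whole sequence was roughly the following, run as root (file names are as the tools produced them on my system):

    # Xorg -configure                            # probe the hardware, write /root/xorg.conf.new
    # Xorg -config /root/xorg.conf.new           # test-drive the generated configuration
    # cp /root/xorg.conf.new /etc/X11/xorg.conf  # keep it if the test looks right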

Now to start KDE, or more accurately, KDM. I wanted to be able to check out all the window managers and figured KDM was my best bet. But where the heck was it? Fortunately, as with many Linux commands, which is present in my BSD UNIX clone, and it worked quite well. I found xinit in /usr/X11R6/bin and kdm at /usr/local/bin/kdm. So, su to root and issue the command /usr/X11R6/bin/xinit /usr/local/bin/kdm, and we are in business. To expedite things in the future, I learned startkde lives at /usr/local/bin/startkde. One finds the standard and complete KDE 3.4.2 upon startup, or one of many other window managers.
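In shell terms, the hunt and the launch went something like this (paths as found on my system):

    # which kdm
    /usr/local/bin/kdm
    # which xinit
    /usr/X11R6/bin/xinit
    # /usr/X11R6/bin/xinit /usr/local/bin/kdm    # start X with KDM as the client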

Many ports get installed into /usr/local with FreeBSD, and there is no /opt directory. In fact, the directory structure may be similar in some ways to Linux, but to me it was more different than alike. Many binaries are located in /usr/libexec and /usr/X11R6/libexec. But how does one find something not in one's path? As you might recall, on Linux systems you can't use locate or slocate until you build the database and regularly update it. But which updatedb didn't turn up anything. Thank goodness for Google. To build and update that locate database, one needs to issue /usr/libexec/locate.updatedb.
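So the FreeBSD version of the familiar Linux routine looks about like this:

    # /usr/libexec/locate.updatedb     # build/refresh the locate database (Linux: updatedb)
    # locate xorg.conf                 # now searches work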

The kernel sources are located in /usr/src/sys/ (with the i386-specific bits under /usr/src/sys/i386/) and the modules reside in /boot/kernel. I didn't know which kernel I was actually running, so I checked with uname -a:

tuxmachine# uname -a
FreeBSD tuxmachine.tuxmachines.org 6.0-BETA3 FreeBSD 6.0-BETA3 #0: Mon Aug 22 22:59:46 UTC 2005 root@harlow.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC i386

I suppose I was still thinking Linux and expecting 2.6-something. I try to remember we're dealing with a horse of a different color here. Anyway, at this point, if support for something isn't in the default kernel, then I just won't use it. Maybe later.

One of those things not in the default kernel build was support for my bttv card. But sound was there, and instead of modprobe snd_emu10k1, one issues kldload snd_emu10k1. For convenience I googled again and found that /boot/loader.conf is where one sets up modules to autoload at boot (/boot/defaults/loader.conf holds the defaults and shouldn't be edited directly). Some command-line equivalents, with a worked example after the list, might be:

  • kldload = modprobe
  • kldunload = rmmod
  • kldstat = lsmod
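
Putting it together, loading my sound driver by hand and making it permanent looked roughly like so (snd_emu10k1 is the driver for my card; substitute your own):

    # kldload snd_emu10k1                                  # Linux: modprobe snd_emu10k1
    # kldstat                                              # confirm it loaded (Linux: lsmod)
    # echo 'snd_emu10k1_load="YES"' >> /boot/loader.conf   # autoload at every boot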

But what about installing other software? I always like to have MPlayer installed, and GIMP is a must-have. But what do I do? Well, google, of course. I found that the package installer for FreeBSD is pkg_add, and a lot of software is located under /usr/ports/. One can navigate to the port directory of choice and issue a make install, or one can use pkg_add <name of package>. The -r flag tells it to search remotely and get the latest available. It tries to sort out dependencies as well, but if there are issues, one might try portupgrade <package name>. MPlayer isn't available, but GIMP is, as is bash_completion.
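Both routes look something like this (graphics/gimp is my assumption of where the GIMP port lives in the tree):

    # pkg_add -r gimp                  # fetch and install the binary package remotely

    # cd /usr/ports/graphics/gimp      # or build it from the ports tree instead
    # make install clean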

There are many similarities between FreeBSD and Linux, but there are subtle differences as well. One major difference is the naming convention for devices: FreeBSD names network interfaces after their driver, so my ethX became vrX, and my hdX devices show up as acdX. As stated, the directory structure is quite a bit different, and I found that command-line flags must be typed before the filename.
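A couple of concrete renamings from my box, with the Linux counterparts in the comments (vr is the driver for my NIC; yours will be named after whatever driver claims your card):

    # ifconfig vr0                       # where Linux would say eth0
    # mount -t cd9660 /dev/acd0 /cdrom   # where Linux would mount /dev/hdX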

So, all in all, I found FreeBSD to be a capable desktop system. I've experienced a few Konqueror crashes, but no other stability problems. I think its strong point is still the server market, and I'd probably appreciate it more there. If one checks in with Netcraft, one will find that almost half of the longest-running systems by average uptime are FreeBSD.

I now recall how it feels to be the newbie stumbling around in a strange operating system. One wonderful resource where I found answers to some of my issues is the BSDWiki. There is also documentation, as well as the latest news, on the FreeBSD website. I could very easily adapt to FreeBSD if something catastrophic happened where all the Linuxes (Lini?) suddenly vanished off the face of the earth. I can't say what's new in this release since the last stable version or even the other betas, but I can state that many of the applications are the latest (stable) versions available. Try it, you might like it!

I have some additional screenshots in the gallery.
