Linux 4.1 Has Improvements For The Multi-Queue Block Layer

Filed under: Linux

The latest batch of good stuff for the Linux 4.1 kernel is the set of block core improvements, which are mostly focused on the multi-queue block layer (blk-mq).

Read more

More in Tux Machines

GhostBSD 20.11.28 Release Announcement

I am happy to announce the availability of GhostBSD 20.11.28. This release comes with a new live system that leverages ZFS, compression, and replication first introduced in FuryBSD by Joe Maloney. The 20.11.28 release contains numerous improvements, including OS fixes for linuxulator to improve Linux Steam performance, an updated kernel, and GhostBSD userland updates. Userland updates include a MATE desktop upgrade to version 1.24.1, Software Station performance improvements, and numerous application updates.

Read more

Linux 5.10-rc6

For the first part of the week, it really looked like things were
calming down quite nicely, and I mentally already went "Ahh,
Thanksgiving week, this is going to be a nice small, calm rc".

And then Friday rolled around, and everybody sent me their pull
requests for the week, and it all looks very normal again.

But at least this week isn't unusually bigger than normal - it's a
pretty normal rc6 stat-wise.  So unless we have some big surprising
left-overs coming up, I think we're in good shape.

And the diffstat looks nice and flat too, which is a sign of just
widespread small fixes, rather than some big last-minute changes. The
exception is a chunk of fixes to the new vidtv driver, but that is not
only a new driver, it's a virtual test-driver for validation and
development rather than something that would affect users.

That vidtv driver shows up very clearly in the patch stats too, but
other than that it all looks very normal: mostly driver updates (even
ignoring the vidtv ones), with the usual smattering of small fixes
elsewhere - architecture code, networking, some filesystem stuff.

So I'm feeling pretty good about 5.10, and I hope I won't be proven
wrong about that. But please do test,

                 Linus

Read more

Review: Trisquel GNU/Linux 9.0

Trisquel GNU/Linux is an entirely free (libre) distribution based on Ubuntu. Trisquel offers a variety of desktop editions, all of which are stripped of non-free software components. The project is one of the few Linux distributions endorsed by the Free Software Foundation and a rare project that attempts to both be entirely free and friendly to less experienced Linux users. The Trisquel website lists several desktop editions. The main edition (which is a 2.5GB download) features the MATE desktop environment while the Mini edition is about half the size and runs LXDE. There is also a KDE Plasma edition (called Triskel) along with Trisquel TOAST which runs the Sugar learning platform. Finally, there is a minimal net-install option for people who are comfortable building their system from the ground up using a command line interface.

The release announcement for Trisquel 9.0 is fairly brief and does not mention many features. The bulk of the information is provided in this paragraph: "The default web browser Abrowser, our freedom and privacy respecting take on Mozilla's browser, provides the latest updates from upstream for a great browsing experience. Backports provide extended hardware support." Though it does not appear to be mentioned specifically in the release announcement, Trisquel 9.0 looks to be based on Ubuntu 18.04 LTS packages, with some applications backported.

[...]

On the whole I found Trisquel to be pleasant to use, easy to set up, and pretty capable out of the box. I really like how fast it performed tasks and how uncluttered/unbusy the desktop felt. The one problem I had with Trisquel was the lack of wireless networking support. The distribution strives for software freedom (as defined by the Free Software Foundation) and this means no non-free firmware, drivers, or applications. This slightly limits its hardware support compared to most Linux distributions. It also means no easy access to applications such as Steam, Chrome, Spotify, and so on. This may make Trisquel a less practical operating system to some, but that is sort of the point: Trisquel takes a hard stance in favour of software freedom over convenience. If you are a person who does not use non-free software and doesn't need non-free wireless support, then Trisquel is probably the best experience you can have with an entirely free Linux distribution. It is painless to set up, offers several desktop flavours, and runs quickly. For free software enthusiasts I would highly recommend giving Trisquel a try.

Read more

Accurate Conclusions from Bogus Data: Methodological Issues in “Collaboration in the open-source arena: The WebKit case”

Nearly five years ago, when I was in grad school, I stumbled across the paper Collaboration in the open-source arena: The WebKit case when trying to figure out what I would do for a course project in network theory (i.e. graph theory, not computer networking; I'll use the words "graph" and "network" interchangeably). The paper evaluates collaboration networks, which are graphs where collaborators are represented by nodes and relationships between collaborators are represented by edges. Our professor had used collaboration networks as examples during lecture, so it seemed at least mildly relevant to our class, and I wound up writing a critique on this paper for the class project.

In this paper, the authors construct collaboration networks for WebKit by examining the project's changelog files to define relationships between developers. They perform "community detection" to visually group developers who work closely together into separate clusters in the graphs. Then, the authors use those graphs to arrive at various conclusions about WebKit (e.g. "[e]ven if Samsung and Apple are involved in expensive patent wars in the courts and stopped collaborating on hardware components, their contributions remained strong and central within the WebKit open source project," regarding the period from 2008 to 2013).

At the time, I contacted the authors to let them know about some serious problems I found with their work. Then I left the paper sitting in a short-term to-do pile on my desk, where it has been sitting since Obama was president, waiting for me to finally write this blog post. Unfortunately, nearly five years later, the authors' email addresses no longer work, which is not very surprising after so long — since I'm no longer a student, the email I originally used to contact them doesn't work anymore either — so I was unable to contact them again to let them know that I was finally going to publish this blog post.

Anyway, suffice it to say that the conclusions of the paper were all correct; however, the networks used to arrive at those conclusions suffered from three different mistakes, each of which was, on its own, serious enough to invalidate the entire work. So if the analysis of the networks was bogus, how did the authors arrive at correct conclusions anyway? The answer is confirmation bias. The study was performed by visually looking at networks and then coming to non-rigorous conclusions about the networks, and by researching the WebKit community to learn what is going on with the major companies involved in the project. The authors arrived at correct conclusions because they did a good job at the latter, then saw what they wanted to see in the graphs.

I don't want to be too harsh on the authors of this paper, though, because they decided to publish their raw data and methodology on the internet. They even published the Python scripts they used to convert WebKit changelogs into collaboration graphs. Had they not done so, there is no way I would have noticed the third (and most important) mistake that I'll discuss below, and I wouldn't have been able to confirm my suspicions about the second mistake. You would not be reading this right now, and likely nobody would ever have realized the problems with the paper. The authors of most scientific papers are not nearly so transparent: many researchers today consider their source code and raw data to be either proprietary secrets to be guarded, or simply not important enough to merit publication. The authors of this paper deserve to be commended, not penalized, for their openness. Mistakes are normal in research papers, and open data is by far the best way for us to be able to detect mistakes when they happen.

Read more
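To make the kind of pipeline the paper describes concrete, here is a minimal sketch in Python that turns changelog-style entries into a collaboration graph and groups developers into connected components. This is a crude stand-in for real community detection, not the authors' published scripts; all developer names and entries below are hypothetical.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical toy "changelog" data: each entry lists the developers
# who worked on the same change. Names are made up for illustration.
changelog_entries = [
    ["alice", "bob"],
    ["bob", "carol"],
    ["dave", "erin"],
]

# Build an undirected collaboration graph: nodes are developers, and an
# edge links any two developers who appear in the same changelog entry.
edges = defaultdict(int)
for entry in changelog_entries:
    for a, b in combinations(sorted(set(entry)), 2):
        edges[(a, b)] += 1  # edge weight = number of shared entries

# Adjacency sets for traversal.
adj = defaultdict(set)
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

def connected_components(adj):
    """Group developers linked by any chain of collaboration.

    Real community detection (as in the paper) would further split a
    connected graph into densely linked clusters; this is the simplest
    possible grouping.
    """
    seen, components = set(), []
    for node in adj:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        components.append(comp)
    return components

print(connected_components(adj))
# e.g. two groups: {alice, bob, carol} and {dave, erin}
```

Even on this toy input you can see why visual inspection is fragile: whether two companies' developers look "central" depends entirely on how the edges were derived from the changelog, which is exactly where the post argues the paper went wrong.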