Kernel Coverage at LWN (Outside Paywall Now)

  • XArray and the mainline

    The XArray data structure was the topic of the final filesystem track session at the 2018 Linux Storage, Filesystem, and Memory-Management Summit (LSFMM). XArray is a new API for the kernel's radix-tree data structure; the session was led by Matthew Wilcox, who created XArray. When asked by Dave Chinner if the session was intended to be a live review of the patches, Wilcox admitted with a grin that it might be "the only way to get a review on this damn patch set".

    In fact, the session was about the status of the patch set and its progress toward the mainline. Andrew Morton has taken the first eight cleanup patches, Wilcox said, which is great because there was a lot of churn there. The next set has a lot of churn as well, mostly due to renaming. The 15 patches after that actually implement XArray and apply it to the page cache. Those could be buggy, but they pass the radix-tree tests so, if they are, more tests are needed, he said.
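
    For reference, here is a minimal sketch of the XArray interface as it is documented in current kernels (xa_store(), xa_load(), xa_erase()); details in the patch set under discussion at LSFMM may have differed, and the function names below are purely illustrative:

    ```c
    /* Illustrative only: the documented XArray interface, not code from
     * the patch set discussed in the session. */
    #include <linux/xarray.h>

    static DEFINE_XARRAY(cache);            /* an empty XArray with its own lock */

    static int cache_add(unsigned long index, void *item)
    {
            /* xa_store() replaces any existing entry and returns the old one */
            void *old = xa_store(&cache, index, item, GFP_KERNEL);

            return xa_is_err(old) ? xa_err(old) : 0;
    }

    static void *cache_find(unsigned long index)
    {
            /* Lookups take no external lock; they rely on RCU internally */
            return xa_load(&cache, index);
    }

    static void cache_remove(unsigned long index)
    {
            xa_erase(&cache, index);        /* drop the entry at this index */
    }
    ```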

  • Filesystem test suites

    While the 2018 Linux Storage, Filesystem, and Memory-Management Summit (LSFMM) filesystem track session was advertised as being a filesystem test suite "bakeoff", it actually focused on how to make the existing test suites more accessible. Kent Overstreet said that he has learned over the years that various filesystem developers have their own scripts for testing using QEMU and other tools. He and Ted Ts'o put the session together to try to share some of that information (and code) more widely.

    Most of the scripts and other code have not been polished or turned into projects, Overstreet continued. Bringing new people up to speed on the tests and how they are run takes time, but developers want to know how to run the tests before they send code to the maintainer.

  • Messiness in removing directories

    In the filesystem track at the 2018 Linux Storage, Filesystem, and Memory-Management Summit (LSFMM), Al Viro discussed some problems he has recently spotted in the implementation of rmdir(). He covered some of the history of that implementation and how things got to where they are now. He also described areas that needed to be checked because the problem may be present in different places in multiple filesystems.

    The fundamental problem is a race condition where operations can end up being performed on directories that have already been removed, which can lead to some rather "unpleasant" outcomes, Viro said. One warning, however: it was a difficult session to follow, with lots of gory details from deep inside the VFS, so it is quite possible that I have some (many?) of the details wrong here. Since LSFMM there has been no real discussion of the problem and its solution on the mailing lists that I have found.

  • Handling I/O errors in the kernel

    The kernel's handling of I/O errors was the topic of a discussion led by Matthew Wilcox at the 2018 Linux Storage, Filesystem, and Memory-Management Summit (LSFMM) in a combined storage and filesystem track session. At the start, he asked: "how is our error handling and what do we plan to do about it?" That led to a discussion between the developers present on the kinds of errors that can occur and on ways to handle them.

    Jeff Layton said that one basic problem occurs when there is an error during writeback; an application can read the block where the error occurred and get the old data without any kind of error. If the error was transient, data is lost. And if it is a permanent error, different filesystems handle it differently, which he thinks is a problem. Dave Chinner said that in order to have consistent behavior across filesystems, there needs to be a definition of what that behavior should be. There is a need to distinguish between transient and permanent failures and to create a taxonomy of how to deal with each type.
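
    The writeback case Layton describes is mostly visible to applications through the return value of fsync() (or, on some filesystems, close()). A minimal userspace sketch of where such an error would surface; write_durably() is a hypothetical helper name, and exact reporting semantics vary by filesystem and kernel version:

    ```c
    /* Sketch: write() only queues data in the page cache; an I/O error
     * during the later, asynchronous writeback is reported (at best) by
     * fsync(), not by the write() call itself. */
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static int write_durably(const char *path, const void *buf, size_t len)
    {
            int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
            if (fd < 0)
                    return -errno;

            if (write(fd, buf, len) != (ssize_t)len) {  /* data only queued here */
                    int err = errno ? errno : EIO;
                    close(fd);
                    return -err;
            }

            if (fsync(fd) < 0) {                        /* writeback errors show up here */
                    int err = errno;                    /* typically EIO */
                    fprintf(stderr, "writeback failed: %s\n", strerror(err));
                    close(fd);
                    return -err;
            }

            return close(fd) < 0 ? -errno : 0;
    }
    ```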

  • 4.18 Merge window, part 1

    As of this writing, 7,515 non-merge changesets have been pulled into the mainline repository for the 4.18 merge window. Things are clearly off to a strong start. The changes pulled this time around include more than the usual number of interesting new features; read on for the details.

  • Year-2038 work in 4.18

    We now have less than 20 years to wait until the time_t value used on 32-bit systems will overflow and create time-related mayhem across the planet. The grand plan for solving this problem was posted over three years ago now; progress since then has seemed slow. But quite a bit of work has happened deep inside the kernel and, in 4.18, some of the first work that will be visible to user space has been merged. The year-2038 problem is not yet solved, but things are moving in that direction.
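
    The deadline itself is easy to compute: a signed 32-bit time_t runs out 2^31 - 1 seconds after the Unix epoch. A small demonstration, assuming it is built on a platform whose time_t is already 64 bits wide so the value itself does not wrap:

    ```c
    /* Prints the last second representable in a signed 32-bit time_t,
     * which falls on January 19, 2038 at 03:14:07 UTC. */
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
            time_t last = (time_t)INT32_MAX;    /* 2^31 - 1 seconds after the epoch */
            char buf[64];

            strftime(buf, sizeof(buf), "%Y-%m-%d %H:%M:%S UTC", gmtime(&last));
            printf("32-bit time_t overflows after %s (%" PRId64 " seconds)\n",
                   buf, (int64_t)last);
            return 0;
    }
    ```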

    If 32-bit systems are to be able to handle times after January 2038, they will need to switch to a 64-bit version of the time_t type; the kernel will obviously need to support applications using that new type. Doing so in a way that doesn't break existing applications is going to require some careful work, though. In particular, the kernel must be able to run a system where applications have been rebuilt to use a 64-bit time_t while ancient binaries stuck on a 32-bit time_t still exist; both kinds of application should continue to work (though the old binaries may still fail to handle post-2038 times correctly).

    The first step is to recognize that most architectures already have support for applications running in both 64-bit and 32-bit modes in the form of the compatibility code used to run 32-bit applications on 64-bit systems. At some point, all systems will be 64-bit systems when it comes to time handling, so it makes sense to use the compatibility calls for older applications even on 32-bit systems. To that end, with 4.18, work has been done to allow both 32-bit and 64-bit versions of the time-related system calls to be built on all architectures. The CONFIG_64BIT_TIME configuration symbol controls the building of the 64-bit versions on 32-bit systems, while CONFIG_COMPAT_32BIT_TIME controls the 32-bit versions.
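
    To get a feel for what the compat path involves, here is a much-simplified sketch; the structures are modeled loosely on 64-bit and legacy 32-bit timespec layouts rather than copied from the kernel, and the function names are made up for illustration:

    ```c
    /* Simplified illustration of the dual time ABI: the native handler
     * works in 64-bit seconds, and the 32-bit compat entry point narrows
     * the result for old binaries (which is why they stop being correct
     * after January 2038). Not actual kernel code. */
    #include <stdint.h>

    struct ts64 {                   /* modeled on a 64-bit timespec */
            int64_t tv_sec;
            int64_t tv_nsec;
    };

    struct ts32 {                   /* modeled on the legacy 32-bit timespec */
            int32_t tv_sec;
            int32_t tv_nsec;
    };

    /* Native 64-bit path: hands the full range to the caller. */
    static int get_time64(struct ts64 *ts)
    {
            /* ... fill *ts from the clock source; zeroed here as a stub ... */
            ts->tv_sec = 0;
            ts->tv_nsec = 0;
            return 0;
    }

    /* Compat 32-bit path: reuse the 64-bit code, then narrow the result. */
    static int get_time32(struct ts32 *ts32)
    {
            struct ts64 ts;
            int ret = get_time64(&ts);

            ts32->tv_sec  = (int32_t)ts.tv_sec;    /* truncates after 2038 */
            ts32->tv_nsec = (int32_t)ts.tv_nsec;
            return ret;
    }
    ```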

More in Tux Machines

digiKam 7.7.0 is released

After three months of active maintenance and another bug triage, the digiKam team is proud to present version 7.7.0 of its open source digital photo manager. See below the list of most important features coming with this release.

Dilution and Misuse of the "Linux" Brand

Samsung, Red Hat to Work on Linux Drivers for Future Tech

The metaverse is expected to uproot system design as we know it, and Samsung is one of many hardware vendors re-imagining data center infrastructure in preparation for a parallel 3D world. Samsung is working on new memory technologies that provide faster bandwidth inside hardware for data to travel between CPUs, storage and other computing resources. The company also announced it was partnering with Red Hat to ensure these technologies have Linux compatibility.

today's howtos

  • How to install go1.19beta on Ubuntu 22.04 – NextGenTips

    In this tutorial, we are going to explore how to install Go on Ubuntu 22.04. Golang is an open-source programming language that is easy to learn and use. It has built-in concurrency and a robust standard library, and it is reliable, builds quickly, and produces efficient software that scales well. Its concurrency mechanisms make it easy to write programs that get the most out of multicore and networked machines, while its novel type system enables flexible and modular program construction. Go compiles quickly to machine code and has the convenience of garbage collection and the power of run-time reflection. In this guide, we are going to learn how to install golang 1.19beta on Ubuntu 22.04. Go 1.19beta1 has not been officially released yet, and there is still a lot of work in progress, including on the documentation.

  • molecule test: failed to connect to bus in systemd container - openQA bites

    Ansible Molecule is a project that helps you test your Ansible roles. I’m using Molecule to automatically test the Ansible roles of geekoops.

  • How To Install MongoDB on AlmaLinux 9 - idroot

    In this tutorial, we will show you how to install MongoDB on AlmaLinux 9. For those of you who didn’t know, MongoDB is a high-performance, highly scalable, document-oriented NoSQL database. Unlike SQL databases, where data is stored in rows and columns inside tables, MongoDB structures data in a JSON-like format inside records referred to as documents. Being open source makes MongoDB an ideal candidate for almost any database-related project. This article assumes you have at least basic knowledge of Linux, know how to use the shell, and, most importantly, host your site on your own VPS. The installation is quite simple and assumes you are running as the root account; if not, you may need to add 'sudo' to the commands to get root privileges. I will show you the step-by-step installation of the MongoDB NoSQL database on AlmaLinux 9. You can follow the same instructions for CentOS and Rocky Linux.

  • An introduction (and how-to) to Plugin Loader for the Steam Deck. - Invidious
  • Self-host a Ghost Blog With Traefik

    Ghost is a very popular open-source content management system. It started as an alternative to WordPress and went on to become an alternative to Substack by focusing on memberships and newsletters. The creators of Ghost offer managed Pro hosting, but it may not fit everyone's budget. Alternatively, you can self-host it on your own cloud servers. On Linux Handbook, we already have a guide on deploying Ghost with Docker in a reverse proxy setup. Instead of an Nginx reverse proxy, you can also use another piece of software called Traefik with Docker. It is a popular open-source cloud-native application proxy, API gateway, edge router, and more. I use Traefik to secure my websites with SSL certificates obtained from Let's Encrypt. Once deployed, Traefik can automatically manage your certificates and their renewals. In this tutorial, I'll share the necessary steps for deploying a Ghost blog with Docker and Traefik.