Screencasts: Manjaro, Enso OS, Endless OS, Dead Cells on Ubuntu

More in Tux Machines

today's howtos and programming bits

  • How to fix trailing underscores at the end of URLs in Chrome
  • How to Install Ubuntu Alongside With Windows 10 or 8 in Dual-Boot
  • Beginner’s guide on how to git stash :- A GIT Tutorial
  • Handy snapcraft features: Remote build
  • How to build a lightweight system container cluster
  • Start a new Cryptocurrency project with Python
  • [Mozilla] Celery without a Results Backend
  • Mucking about with microframeworks

    Python does not lack for web frameworks, from all-encompassing frameworks like Django to "nanoframeworks" such as WebCore. A recent "spare time" project caused me to look into options in the middle of this range of choices, which is where the Python "microframeworks" live. In particular, I tried out the Bottle and Flask microframeworks—and learned a lot in the process.

    I have some experience working with Python for the web, starting with the Quixote framework that we use here at LWN. I have also done some playing with Django along the way. Neither of those seemed quite right for this latest toy web application. Plus I had heard some good things about Bottle and Flask at various PyCons over the last few years, so it seemed worth an investigation.

    Web applications have lots of different parts: form handling, HTML template processing, session management, database access, authentication, internationalization, and so on. Frameworks provide solutions for some or all of those parts. The nano-to-micro-to-full-blown spectrum is defined (loosely, at least) based on how much of this functionality a given framework provides or has opinions about. Most frameworks at any level will allow plugging in different parts, based on the needs of the application and its developers, but nanoframeworks provide little beyond request and response handling, while full-blown frameworks provide an entire stack by default. That stack handles most or all of what a web application requires.

    The list of web frameworks on the Python wiki is rather eye-opening. It gives a good idea of the diversity of frameworks, what they provide, what other packages they connect to or use, as well as some idea of how full-blown (or "full-stack" on the wiki page) they are. It seems clear that there is something for everyone out there—and that's just for Python. Other languages undoubtedly have their own sets of frameworks (e.g. Ruby on Rails).

Kernel: Linux 5.3, DragonFlyBSD Takes Linux Bits, LWN Paywall Expires for Recent Articles

  • Ceph updates for 5.3-rc1
    Hi Linus,
    
    The following changes since commit 0ecfebd2b52404ae0c54a878c872bb93363ada36:
    
    Linux 5.2 (2019-07-07 15:41:56 -0700)
    
    are available in the Git repository at:
    
    https://github.com/ceph/ceph-client.git tags/ceph-for-5.3-rc1
    
    for you to fetch changes up to d31d07b97a5e76f41e00eb81dcca740e84aa7782:
    
    ceph: fix end offset in truncate_inode_pages_range call (2019-07-08 14:01:45 +0200)
    
    There is a trivial conflict caused by commit 9ffbe8ac05db
    ("locking/lockdep: Rename lockdep_assert_held_exclusive() ->
    lockdep_assert_held_write()"). I included the resolution in
    for-linus-merged.
    
  • Ceph Sees "Lots Of Exciting Things" For Linux 5.3 Kernel

    Ceph for Linux 5.3 brings a change to speed up reads/discards/snap-diffs on sparse images, exposure of snapshot creation time to support features like "restore previous versions", support for security xattrs (currently limited to SELinux), a previously missing feature bit so that the kernel client's Ceph features are now reported as "luminous", better consistency with Ceph FUSE, and a change of the time granularity from 1us to 1ns. There are also bug fixes and other work as part of the Ceph code for Linux 5.3. As maintainer Ilya Dryomov put it, "Lots of exciting things this time!"

  • The NVMe Patches To Support Linux On Newer Apple Macs Are Under Review

    At the start of the month we reported on out-of-tree kernel work to support Linux on the newer Macs. Those patches were focused on supporting Apple's NVMe drive behavior in the Linux kernel driver. That work has been evolving nicely and is now under review on the kernel mailing list. Sent out on Tuesday was a set of three patches to the Linux kernel's NVMe code for dealing with the Apple hardware of the past few years. On 2018 and newer Apple systems, the I/O queue sizing/handling is odd and in other areas does not properly follow the NVMe specification. These patches take care of that while hopefully not regressing existing NVMe controller support.

  • DragonFlyBSD Pulls In The Radeon Driver Code From Linux 4.4

    While the Linux 4.4 kernel is quite old (January 2016), DragonFlyBSD has now re-based its AMD Radeon kernel graphics driver against that release. That is at least a big improvement over the previous state, in which its Radeon code was derived from Linux 3.19. DragonFlyBSD developer François Tigeot continues doing a good job herding the open-source Linux graphics driver support to this BSD. With the code that landed on Monday, DragonFlyBSD's Radeon DRM is based upon the state found in the Linux 4.4.180 LTS tree.

  • Destaging ION

    The Android system has shipped a couple of allocators for DMA buffers over the years; first came PMEM, then its replacement ION. The ION allocator has been in use since around 2012, but it remains stuck in the kernel's staging tree. The work to add ION to the mainline started in 2013; at that time, the allocator had multiple issues that made inclusion impossible. Recently, John Stultz posted a patch set introducing DMA-BUF heaps, an evolution of ION, that is designed to do exactly that — get the Android DMA-buffer allocator to the mainline Linux kernel.

    Applications interacting with devices often require a memory buffer that is shared with the device driver. Ideally, it would be memory mapped and physically contiguous, allowing direct DMA access and minimal overhead when accessing the data from both sides at the same time. ION's main goal is to support that use case; it implements a unified way of defining and sharing such memory buffers, while taking into account the constraints imposed by the devices and the platform.
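
    The user-space interface involved is small: each heap shows up as a character device under /dev/dma_heap/, and a single ioctl() hands back a dma-buf file descriptor. As an illustration, here is a minimal sketch of an allocation against that interface as it was eventually merged in <linux/dma-heap.h>; the "system" heap name and the exact structure layout are taken from the merged UAPI, not from the patch set described above, so treat the details as assumptions.

    /* Sketch: allocate a 4 KiB buffer from a DMA-BUF heap.
     * Assumes the UAPI from <linux/dma-heap.h>; heap names such as
     * "system" are platform-dependent. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <linux/dma-heap.h>

    int main(void)
    {
        int heap = open("/dev/dma_heap/system", O_RDWR);
        if (heap < 0) {
            perror("open /dev/dma_heap/system");
            return 1;
        }

        struct dma_heap_allocation_data alloc = {
            .len = 4096,                    /* buffer size in bytes */
            .fd_flags = O_RDWR | O_CLOEXEC, /* flags for the dma-buf fd */
        };
        if (ioctl(heap, DMA_HEAP_IOCTL_ALLOC, &alloc) < 0) {
            perror("DMA_HEAP_IOCTL_ALLOC");
            return 1;
        }

        /* alloc.fd is now a dma-buf: it can be mmap()ed here or passed
         * to a device driver for DMA. */
        void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED,
                       alloc.fd, 0);
        if (p != MAP_FAILED) {
            memset(p, 0, 4096);
            munmap(p, 4096);
        }
        close(alloc.fd);
        close(heap);
        return 0;
    }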

  • clone3(), fchmodat4(), and fsinfo()

    The kernel development community continues to propose new system calls at a high rate. Three ideas that are currently in circulation on the mailing lists are clone3(), fchmodat4(), and fsinfo(). In some cases, developers are just trying to make more flag bits available, but there is also some significant new functionality being discussed.

    clone3()

    The clone() system call creates a new process or thread; it is the actual machinery behind fork(). Unlike fork(), clone() accepts a flags argument to modify how it operates. Over time, quite a few flags have been added; most of these control what resources and namespaces are to be shared with the new child process. In fact, so many flags have been added that, when CLONE_PIDFD was merged for 5.2, the last available flag bit was taken. That puts an end to the extensibility of clone().
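
    To make the flag mechanism concrete, here is a small sketch of a direct clone() call through the glibc wrapper; CLONE_FILES is one of the sharing flags mentioned above, and the low byte of the flags word carries the signal (SIGCHLD) delivered to the parent when the child exits. clone3() restores extensibility by moving these flags into a 64-bit field of a size-versioned argument structure, so the sketch below shows the old, exhausted interface rather than the proposed one.

    /* Sketch: clone() with an explicit flags argument.
     * CLONE_FILES makes the child share the parent's file-descriptor
     * table; SIGCHLD makes it waitable like a fork()ed child. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static int child_fn(void *arg)
    {
        (void)arg;
        printf("child running as pid %d\n", getpid());
        return 0;
    }

    int main(void)
    {
        const size_t stack_size = 1024 * 1024;
        char *stack = malloc(stack_size);
        if (!stack)
            return 1;

        /* The stack grows down on most architectures: pass its top. */
        pid_t pid = clone(child_fn, stack + stack_size,
                          CLONE_FILES | SIGCHLD, NULL);
        if (pid < 0) {
            perror("clone");
            return 1;
        }
        waitpid(pid, NULL, 0);
        free(stack);
        return 0;
    }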

  • Soft CPU affinity

    On NUMA systems with a lot of CPUs, it is common to assign parts of the workload to different subsets of the available processors. This partitioning can improve performance while reducing the ability of jobs to interfere with each other. The partitioning mechanisms available on current kernels might just do too good a job in some situations, though, leaving some CPUs idle while others are overutilized. The soft affinity patch set from Subhra Mazumdar is an attempt to improve performance by making that partitioning more porous.

    In current kernels, a process can be restricted to a specific set of CPUs with either the sched_setaffinity() system call or the cpuset mechanism. Either way, any process so restricted will only be able to run on the specified CPUs regardless of the state of the system as a whole. Even if the other CPUs in the system are idle, they will be unavailable to any process that has been restricted not to run on them. That is normally the behavior that is wanted; a system administrator who has partitioned a system in this way probably has some other use in mind for those CPUs.

    But what if the administrator would rather relax the partitioning in cases where the fenced-off CPUs are idle and going to waste? The only alternative currently is to not partition the system at all and let processes roam across all CPUs. One problem with that approach, beyond losing the isolation between jobs, is that NUMA locality can be lost, resulting in reduced performance even with more CPUs available. In theory the AutoNUMA balancing code in the kernel should address that problem by migrating processes and their memory to the same node, but Mazumdar notes that it doesn't seem to work properly when memory is spread out across the system. Its reaction time is also said to be too slow, and the cost of the page scanning required is high.
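
    For reference, the hard partitioning described above looks like this from user space. After the call below succeeds, the process runs only on CPU 0 no matter how idle the other CPUs are; that is the behavior the soft-affinity patches aim to relax. This is a minimal sketch of the existing interface, not of the proposed one.

    /* Sketch: hard CPU affinity with sched_setaffinity().
     * Afterward the process is restricted to CPU 0 even if every
     * other CPU in the system is idle. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t set;

        CPU_ZERO(&set);
        CPU_SET(0, &set);       /* allow CPU 0 only */

        if (sched_setaffinity(0 /* this process */, sizeof(set), &set) < 0) {
            perror("sched_setaffinity");
            return 1;
        }
        printf("pinned to CPU 0\n");
        return 0;
    }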

How the Open Source Operating System Has Silently Won Over the World

The current and future potential for Linux-based systems is limitless. The system's flexibility allows the hardware that uses it to be endlessly updated, so functionality can be maintained even as the technology around the devices changes. This flexibility also means that the function of the hardware can be modified to suit an ever-changing workplace. For example, because the INSYS icom OS has been specifically designed for use in routers, it could be optimised to be lightweight and hardened to increase its security.

Multipurpose OSes have large libraries of applications for a diverse range of purposes. That is great for designing new uses, but these libraries can also be exploited by actors with malicious intent. Stripping them down to just what is necessary through a hardening process can drastically improve security by reducing the attackable surface.

Overall, Windows may have won the desktop OS battle, with only a minority of desktops running Linux. However, desktops are only a minute part of the computing world. The servers, mobile systems, and embedded technology that make up the majority predominantly run Linux. Linux has gained this position by being more adaptable, lightweight, and portable than its competitors.

Read more

Operating-System-Directed Power-Management (OSPM) Summit

  • The third Operating-System-Directed Power-Management summit

    The third edition of the Operating-System-Directed Power-Management (OSPM) summit was held May 20-22 at the ReTiS Lab of the Scuola Superiore Sant'Anna in Pisa, Italy. The summit is organized to collaborate on ways to reduce the energy consumption of Linux systems, while still meeting performance and other goals. It is attended by scheduler, power-management, and other kernel developers, as well as academics, industry representatives, and others interested in the topics.

  • The future of SCHED_DEADLINE and SCHED_RT for capacity-constrained and asymmetric-capacity systems

    The kernel's deadline scheduling class (SCHED_DEADLINE) enables realtime scheduling where every task is guaranteed to meet its deadlines. Unfortunately SCHED_DEADLINE's current view on CPU capacity is far too simple. It doesn't take dynamic voltage and frequency scaling (DVFS), simultaneous multithreading (SMT), asymmetric CPU capacity, or any kind of performance capping (e.g. due to thermal constraints) into consideration. In particular, if we consider running deadline tasks in a system with performance capping, the question is "what level of guarantee should SCHED_DEADLINE provide?"

    An interesting discussion about the pros and cons of different approaches (weak, hard, or mixed guarantees) developed during this presentation. There were many different views, but the discussion didn't really conclude and will have to be continued at the Linux Plumbers Conference later this year. The topic of guaranteed performance will become more important for mobile systems in the future, as performance capping is likely to become more common. Defining hard guarantees is almost impossible on real systems, since silicon behavior very much depends on environmental conditions. The main pushback on the existing scheme is that the guaranteed bandwidth budget might be too conservative. Hence SCHED_DEADLINE might not allow enough bandwidth to be reserved for use cases with higher bandwidth requirements that can tolerate bandwidth reservations not being honored.
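
    For context, a SCHED_DEADLINE reservation is expressed as a (runtime, deadline, period) triple handed to the sched_setattr() system call; the bandwidth the kernel guarantees is runtime/period, and that budget is exactly what performance capping can silently undermine. The sketch below follows the pattern from the sched_setattr(2) man page; since glibc provides no wrapper, the structure and the syscall are declared by hand.

    /* Sketch: request a SCHED_DEADLINE reservation of 10 ms of runtime
     * every 100 ms (10% bandwidth). Declared by hand as in the
     * sched_setattr(2) man page; needs privilege to succeed. */
    #define _GNU_SOURCE
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <linux/sched.h>    /* SCHED_DEADLINE */

    struct sched_attr {
        uint32_t size;
        uint32_t sched_policy;
        uint64_t sched_flags;
        int32_t  sched_nice;
        uint32_t sched_priority;
        uint64_t sched_runtime;     /* all three in nanoseconds */
        uint64_t sched_deadline;
        uint64_t sched_period;
    };

    int main(void)
    {
        struct sched_attr attr = {
            .size           = sizeof(attr),
            .sched_policy   = SCHED_DEADLINE,
            .sched_runtime  = 10 * 1000 * 1000,     /* 10 ms */
            .sched_deadline = 100 * 1000 * 1000,    /* 100 ms */
            .sched_period   = 100 * 1000 * 1000,    /* 100 ms */
        };

        if (syscall(SYS_sched_setattr, 0 /* this thread */, &attr, 0) < 0) {
            perror("sched_setattr");
            return 1;
        }
        /* Realtime work goes here; a periodic job would call
         * sched_yield() when each activation completes. */
        return 0;
    }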

  • Scheduler behavioral testing

    Validating scheduler behavior is a tricky affair, as multiple subsystems both compete and cooperate with each other to produce the task placement we observe. Valentin Schneider from Arm described the approach taken by his team (the folks behind energy-aware scheduling — EAS) to tackle this problem.

  • CFS wakeup path and Arm big.LITTLE/DynamIQ

    "One task per CPU" workloads, as emulated by multi-core Geekbench, can suffer on traditional two-cluster big.LITTLE systems due to the fact that tasks finish earlier on the big CPUs. Arm has introduced a more flexible DynamIQ architecture that can combine big and LITTLE CPUs into a single cluster; in this case, early products apply what's known as phantom scheduler domains (PDs). The concept of PDs is needed for DynamIQ so that the task scheduler can use the existing big.LITTLE extensions in the Completely Fair Scheduler (CFS) scheduler class. Multi-core Geekbench consists of several tests during which N CFS tasks perform an equal amount of work. The synchronization mechanism pthread_barrier_wait() (i.e. a futex) is used to wait for all tasks to finish their work in test T before starting the tasks again for test T+1. The problem for Geekbench on big.LITTLE is related to the grouping of big and LITTLE CPUs in separate scheduler (or CPU) groups of the so-called die-level scheduler domain. The two groups exists because the big CPUs share a last-level cache (LLC) and so do the LITTLE CPUs. This isn't true any more for DynamIQ, hence the use of the "phantom" notion here. The tasks of test T finish earlier on big CPUs and go to sleep at the barrier B. Load balancing then makes sure that the tasks on the LITTLE CPUs migrate to the big CPUs where they continue to run the rest of their work in T before they also go to sleep at B. At this moment, all the tasks in the wake queue have a big CPU as their previous CPU (p->prev_cpu). After the last task has entered pthread_barrier_wait() on a big CPU, all tasks on the wake queue are woken up.

  • I-MECH: realtime virtualization for industrial automation

    The typical systems used in industrial automation (e.g. for axis control) consist of a "black box" executing a commercial realtime operating system (RTOS) plus a set of control design tools meant to be run on a different desktop machine. This approach, besides imposing expensive royalties on the system integrator, often does not offer the desired degree of flexibility for testing/implementing novel solutions (e.g., running both control code and design tools on the same platform).

  • Virtual-machine scheduling and scheduling in virtual machines

    As is probably well known, a scheduler is the component of an operating system that decides which CPU the various tasks should run on and for how long they are allowed to do so. This happens when an OS runs on the bare hardware of a physical host, and it is also the case when the OS runs inside a virtual machine, the only difference being that, in the latter case, the OS scheduler marshals tasks among virtual CPUs. And what are virtual CPUs? Well, on most platforms they are also a kind of special task, and they want to run on some CPUs ... therefore we need a scheduler for that!

    This is usually called the "double-scheduling" property of systems employing virtualization because, well, there literally are two schedulers: one — let us call it the host scheduler, or the hypervisor scheduler — that schedules the virtual CPUs on the host physical CPUs; and another one — let us call it the guest scheduler — that schedules the guest OS's tasks on the guest's virtual CPUs.

    Now what are these two schedulers? That depends on the virtualization platform. They are always different, in the sense that it will never happen that, at runtime, a single scheduler has to deal with scheduling virtual CPUs and also scheduling tasks that want to run on those same virtual CPUs (well, it can happen, but then you are not doing virtualization). They can be the same in terms of code, or they can be completely different in that respect as well.

  • Rock and a hard place: How hard it is to be a CPU idle-time governor

    In the opening session of OSPM 2019, Rafael Wysocki from Intel gave a talk about potential problems faced by the designers of CPU idle-time-management governors, which was inspired by his own experience from the timer-events oriented (TEO) governor work done last year.

    In the first place, he said, it should be noted that "CPU idleness" is defined at the level of logical CPUs, which may be CPU cores or simultaneous multithreading (SMT) threads, depending on the hardware configuration of the processor. In Linux, a logical CPU is idle when there are no runnable tasks in its queue, so it falls back to executing the idle task associated with it (there is one idle task for each logical CPU in the system, but they all share the same code, which is the idle loop). Therefore "CPU idleness" is an OS (not hardware) concept and if the idle loop is entered by a CPU, there is an opportunity to save some energy with a relatively small impact on performance (or even without any impact on performance at all) — if the hardware supports that.

    The idle loop runs on each idle CPU and it only takes this particular CPU into consideration. As a rule, two code modules are invoked in every iteration of it. The first one, referred to as the CPU idle-time-management governor, is responsible for deciding whether or not to stop the scheduler tick and what to tell the hardware to do; the second one, called the CPU idle-time-management driver, passes the governor's decisions down to the hardware, usually in an architecture- or platform-specific way.

    Then, presumably, the processor enters a special state in which the CPU in question stops fetching instructions (that is, it does literally nothing at all); that may allow the processor's power draw to be reduced and some energy to be saved as a result. If that happens, the processor needs to be woken up from that state by a hardware event after spending some time, referred to as the idle duration, in it. At that point, the governor is called again so it can save the idle-duration value for future use.