
Kernel: LWN's Latest (SACK etc.) and Phoronix on Saitek R440 Force Racing Wheel Support Coming to Linux

  • The TCP SACK panic

    Selective acknowledgment (SACK) is a technique used by TCP to help alleviate congestion that can arise due to the retransmission of dropped packets. It allows the endpoints to describe which pieces of the data they have received, so that only the missing pieces need to be retransmitted. However, a bug was recently found in the Linux implementation of SACK that allows remote attackers to panic the system by sending crafted SACK information. Data sent via TCP is broken up into multiple segments based on the maximum segment size (MSS) specified by the other endpoint—or some other network hardware in the path it traversed. Those segments are transmitted to that endpoint, which acknowledges that it has received them. Originally, those acknowledgments (ACKs) could only indicate that it had received segments up to the first gap; so if one early segment was lost (e.g. dropped due to congestion), the endpoint could only ACK those up to the lost one. The originating endpoint would have to retransmit many segments that had actually been received in order to ensure the data gets there; the status of the later segments is unknown, so they have to be resent. In simplified form, sender A might send segments 20-50, with segments 23 and 37 getting dropped along the way. Receiver B can only ACK segments 20-22, so A must send 23-50 again. As might be guessed, if the link is congested such that segments are being dropped, sending a bunch of potentially redundant traffic is not going to help things.
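
    As a hedged illustration of the bookkeeping described above (a stand-alone toy, not kernel code; the segment numbers 20-50 with 23 and 37 dropped are just the example from the text), the following C sketch computes what a receiver could report with a plain cumulative ACK and what SACK blocks add:

        /* Toy model of cumulative ACK vs. SACK reporting; segment
         * numbers stand in for TCP sequence ranges. */
        #include <stdio.h>
        #include <stdbool.h>

        #define FIRST 20
        #define LAST  50

        int main(void)
        {
            bool received[LAST + 1] = { false };

            /* Receiver got segments 20-50 except 23 and 37 (dropped). */
            for (int s = FIRST; s <= LAST; s++)
                received[s] = (s != 23 && s != 37);

            /* Cumulative ACK: the highest segment with no gap before it. */
            int cum_ack = FIRST - 1;
            while (cum_ack < LAST && received[cum_ack + 1])
                cum_ack++;
            printf("cumulative ACK: %d (without SACK, resend %d-%d)\n",
                   cum_ack, cum_ack + 1, LAST);

            /* SACK blocks: contiguous runs of received data above the gap,
             * so the sender only needs to retransmit 23 and 37. */
            printf("SACK blocks:");
            for (int s = cum_ack + 1; s <= LAST; s++) {
                if (received[s] && !received[s - 1]) {
                    int end = s;
                    while (end < LAST && received[end + 1])
                        end++;
                    printf(" [%d-%d]", s, end);
                }
            }
            printf("\n");
            return 0;
        }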

  • Short waits with umwait

    If a user-space process needs to wait for some event to happen, there is a whole range of mechanisms provided by the kernel to make that easy. But calling into the kernel tends not to work well for the shortest of waits — those measured in small numbers of microseconds. For delays of this magnitude, developers often resort to busy loops, which have a much smaller potential for turning a small delay into a larger one. Needless to say, busy waiting has its own disadvantages, so Intel has come up with a set of instructions to support short delays. A patch set from Fenghua Yu to support these instructions is currently working its way through the review process. The problem with busy waiting, of course, is that it occupies the processor with work that is even more useless than cryptocoin mining. It generates heat and uses power to no useful end. On hyperthreaded CPUs, a busy-waiting process could prevent the sibling thread from running and doing something of actual value. For all of these reasons, it would be a lot nicer to ask the CPU to simply wait for a brief period until something interesting happens. To that end, Intel is providing three new instructions. umonitor provides an address and a size to the CPU, informing it that the currently running application is interested in any writes to that range of memory. A umwait instruction tells the processor to stop executing until such a write occurs; the CPU is free to go into a low-power state or switch to a hyperthreaded sibling during that time. This instruction provides a timeout value in a pair of registers; the CPU will only wait until the timestamp counter (TSC) value exceeds the given timeout value. For code that is only interested in the timeout aspect, the tpause instruction will stop execution without monitoring any addresses.
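
    To give a feel for how these instructions could be used once support lands, here is a hedged user-space sketch built on the matching compiler intrinsics (_umonitor(), _umwait(), and _tpause(), provided by recent GCC and Clang when building with -mwaitpkg for a WAITPKG-capable CPU). The function names and the control-word value are illustrative assumptions, not code from the patch set:

        #include <stdint.h>
        #include <immintrin.h>   /* _umonitor()/_umwait()/_tpause() */
        #include <x86intrin.h>   /* __rdtsc() */

        volatile uint64_t flag;  /* written by another thread; real code
                                  * would use C11 atomics */

        void wait_for_flag_or_timeout(uint64_t max_tsc_cycles)
        {
            uint64_t deadline = __rdtsc() + max_tsc_cycles;

            while (!flag && __rdtsc() < deadline) {
                /* Arm the monitor on the address range covering 'flag'. */
                _umonitor((void *)&flag);
                if (flag)            /* re-check to close the race */
                    break;
                /* Sleep until a write to that range or until the TSC passes
                 * the deadline; control word 0 lets the CPU pick the deeper
                 * low-power state. */
                _umwait(0, deadline);
            }
        }

        void short_pause(uint64_t tsc_cycles)
        {
            /* Timeout-only variant: no address to monitor, just tpause. */
            _tpause(0, __rdtsc() + tsc_cycles);
        }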

  • Dueling memory-management performance regressions

    The 2019 Linux Storage, Filesystem, and Memory-Management Summit included a detailed discussion about a memory-management fix that addressed one performance regression while causing another. That fix, which was promptly reverted, is still believed by most memory-management developers to implement the correct behavior, so a patch posted by Andrea Arcangeli in early May has relatively broad support. That patch remains unapplied as of this writing, but the discussion surrounding it has continued at a slow pace over the last month. Memory-management subsystem maintainer Andrew Morton is faced with a choice: which performance regression is more important? The behavior in question relates to the intersection of transparent huge pages and NUMA policy. Ever since this commit from Aneesh Kumar in 2015, the kernel will, for memory areas where madvise(MADV_HUGEPAGE) has been called, attempt to allocate huge pages exclusively on the current NUMA node. It turns out that the kernel will try so hard that it will go into aggressive reclaim and compaction on that node, forcing out other pages, even if free memory exists on other nodes in the system. In essence, enabling transparent huge pages for a range of memory has become an equivalent to binding that memory to a single NUMA node. The result, as observed by many, can be severe swap storms and a dramatic loss of performance. In an attempt to fix this problem, Arcangeli applied a patch in November 2018 that loosened the tight binding to the current node. But, it turned out, some workloads want that binding behavior. Local huge pages will perform better than huge pages on a remote node; even local small pages tend to be better than remote huge pages. For some tasks, the performance penalty for using remote pages is high enough that it is worth going to great lengths — even enduring a swap storm at application startup — to avoid it. No such workload has been publicly posted, but the patch was reverted by David Rientjes in December after a huge discussion.
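
    For reference, the behavior at issue is triggered by an ordinary madvise() call on an anonymous mapping; a minimal sketch (the 64 MB size is an arbitrary example) looks like this:

        #define _GNU_SOURCE
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <sys/mman.h>

        int main(void)
        {
            size_t len = 64UL << 20;   /* 64 MB, a multiple of the 2 MB huge-page size */
            void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (buf == MAP_FAILED) {
                perror("mmap");
                return EXIT_FAILURE;
            }

            /* Ask the kernel to back this range with transparent huge pages;
             * this is the call whose NUMA side effects are described above. */
            if (madvise(buf, len, MADV_HUGEPAGE))
                perror("madvise(MADV_HUGEPAGE)");

            /* Faulting the pages in is what actually triggers the huge-page
             * allocation (and, with the 2015 behavior, the node-local
             * reclaim and compaction). */
            memset(buf, 0, len);

            munmap(buf, len);
            return EXIT_SUCCESS;
        }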

  • Rebasing and merging in kernel repositories

    What follows is a kernel document I have been working on for the last month in the hope of reducing the number of subsystem maintainers who run into trouble during the merge window. If all goes according to plan, this text will show up in 5.3 as Documentation/maintainer/rebasing-and-merging.txt. On the off chance that some potentially interested readers might not be monitoring additions to the nascent kernel maintainer's handbook, I'm publishing the text here as well. Maintaining a subsystem, as a general rule, requires a familiarity with the Git source-code management system. Git is a powerful tool with a lot of features; as is often the case with such tools, there are right and wrong ways to use those features. This document looks in particular at the use of rebasing and merging. Maintainers often get in trouble when they use those tools incorrectly, but avoiding problems is not actually all that hard. One thing to be aware of in general is that, unlike many other projects, the kernel community is not scared by seeing merge commits in its development history. Indeed, given the scale of the project, avoiding merges would be nearly impossible. Some problems encountered by maintainers result from a desire to avoid merges, while others come from merging a little too often.

  • Years Late But Saitek R440 Force Racing Wheel Support Is On The Way For Linux

    If you happen to have a Saitek R440 Force Wheel, or are looking to purchase a cheap used racing wheel for enjoying the various Linux racing game ports or the many games working under Steam Play like F1 2018 and DiRT Rally 2.0, Linux support is on the way. The Saitek R440 Force Wheel can still be found from the likes of eBay for those wanting a cheap/used PC game racing wheel. Coming soon to the Linux kernel is support for this once-popular gaming wheel, which was originally released back in 2004. The Linux kernel patch adding the Saitek R440 was originally sent last year and has now been resent in a renewed bid for mainline acceptance.

More frequent Python releases?

Python has followed an 18-month release cycle for many years now; each new 3.x release comes at that frequency. It has worked well, overall, but there is interest in having a shorter cycle, which would mean that new features get into users' hands more quickly. But changing that longstanding cycle has implications in many different places, some of which have come up as part of a discussion on switching to a cycle of a different length. Łukasz Langa, who is the release manager for the upcoming 3.8 release, as well as the manager for the date-to-be-determined release of 3.9, has proposed PEP 596 ("Python 3.9 Release Schedule (doubling the release cadence)"). As its name would imply, the PEP proposes halving the current release cycle to nine months, which would make the 3.9 release happen in June 2020. As described in PEP 569 ("Python 3.8 Release Schedule"), the Python 3.8 release is slated for October of this year; it is in beta at this point, so no new features can be added. The beta release also marks the start of development for the next release, so work on 3.9 has already begun. With that overlap, a nine-month cycle would actually allow seven or eight months for feature development and four or five months for shaking out the bugs from the first beta release on.

Read more

Also: 7 Python Function Examples with Parameters, Return and Data Types

CNCF outlines its technical oversight goals

At KubeCon + CloudNativeCon Europe 2019 there was a public meeting of the Cloud Native Computing Foundation (CNCF) Technical Oversight Committee (TOC); its members outlined the current state of the CNCF and where things are headed. What emerged was a picture of how the CNCF's governance is evolving as it brings in more projects, launches a new special interest group mechanism, and contemplates what to do with projects that go dormant. The CNCF has several levels in its organizational structure, with the Governing Board handling the overall operation, budget, and finances, while the TOC handles the technical vision and direction, as well as approving new project additions. Though the TOC currently acts as a sort of gatekeeper for admitting projects into the CNCF, there is more that TOC member Joe Beda, the developer who made the first commit to Kubernetes, said can be done. "The TOC helps to decide which projects come in, but I think we could do an expanded role to actually make sure that we're serving those projects better and that we're creating a great value proposition for projects, so that it's a really great two-way street between the CNCF and the projects to really build some sustainability," he said. Jeff Brewer had a different perspective on how the TOC can help projects, based on his role as an end user of CNCF projects. He is excited about the fact that end users of Kubernetes are talking with one another and helping to bring a customer focus to the TOC. By having that focus, the TOC can help to ensure that the projects it takes in aren't just cool projects that nobody actually uses, but rather are efforts that have practical utility. "We have over 80 end-user organization members and we look for them to really help us lead the way with the technical direction of the CNCF," he said.

Read more

Servers: SUSE, Ubuntu, Red Hat, OpenStack and Raspberry Digital Signage

  • A Native Kubernetes Operator Tailored for Cloud Foundry

    At the recent Cloud Foundry Summit in Philadelphia, Troy Topnik of SUSE and Enrique Encalada of IBM discussed the progress being made on cf-operator, a project that’s part of the CF Containerization proposal. They show what the operator can do and how Cloud Foundry deployments can be managed with it. They also delve deeper, talking about implementation techniques, Kubernetes Controllers and Custom Resources. This is a great opportunity to learn about how Cloud Foundry can work flawlessly on top of Kubernetes. The Cloud Foundry Foundation has posted all recorded talks from CF Summit on YouTube. Check them out if you want to learn more about what is happening in the Cloud Foundry world! I’ll be posting more SUSE Cloud Application Platform talks here over the coming days. Watch Troy and Enrique’s talk below:

  • Ubuntu Server development summary – 26 June 2019

    The purpose of this communication is to provide a status update and highlights for any interesting subjects from the Ubuntu Server Team. If you would like to reach the server team, you can find us at the #ubuntu-server channel on Freenode. Alternatively, you can sign up and use the Ubuntu Server Team mailing list or visit the Ubuntu Server discourse hub for more discussion.

  • Redefining RHEL: Introduction to Red Hat Insights

    At Red Hat Summit we redefined what is included in a Red Hat Enterprise Linux (RHEL) subscription, and part of that is announcing that every RHEL subscription will include Red Hat Insights. The Insights team is very excited about this, and we wanted to take an opportunity to expand on what this means to you, and to share some of the basics of Red Hat Insights. We wanted to make RHEL easier than ever to adopt, and give our customers the control, confidence and freedom to help scale their environments through intelligent management. Insights is an important component in giving organizations the ability to predict, prevent, and remediate problems before they occur.

  • Red Hat Shares ― Special edition: Red Hat Summit recap
  • OpenShift Commons Briefing: OKD4 Release and Road Map Update with Clayton Coleman (Red Hat)

    In this briefing, Red Hat’s Clayton Coleman, Lead Architect, Containerized Application Infrastructure (OpenShift, Atomic, and Kubernetes), leads a discussion about the current development efforts for OKD4, Fedora CoreOS, and Kubernetes in general, as well as the philosophy guiding OKD4 development efforts. The briefing includes discussion of shared community goals for OKD4 and beyond, plus Q&A with some of the engineers currently working on OKD. The proposed goal/vision for OKD4 is to be the perfect Kubernetes distribution for those who want to be continuously on the latest Kubernetes and ecosystem components, combining an up-to-date OS, the Kubernetes control plane, and a large number of ecosystem operators into an easy-to-extend distribution of Kubernetes that is always on the latest released versions of its ecosystem tools.

  • OpenStack Foundation Joins Open Source Initiative as Affiliate Member

    The Open Source Initiative® (OSI), steward of the Open Source Definition and internationally recognized body for approving Open Source Software licenses, today announces the affiliate membership of The OpenStack Foundation (OSF). Since 2012, the OSF has been the home for the OpenStack cloud software project, working to promote the global development, distribution and adoption of open infrastructure. Today, with five active projects and more than 100,000 community members from 187 countries, the OSF is recognized across industries as both a leader in open source development and an exemplar in open source practices. The affiliate membership provides both organizations a unique opportunity to work together to identify and share resources that foster community and facilitate collaboration to support the awareness and integration of open source technologies. While Open Source Software is now embraced and often touted by organizations large and small, challenges remain both for many just engaging with the community and for some longtime participants. Community-based support and resources remain vital, ensuring those new to the ecosystem understand the norms and expectations, while those seeking to differentiate themselves remain authentically engaged. The combined efforts of the OSI and the OSF will complement one another and contribute to these efforts.

  • Raspberry Digital Signage details

    The system starts in digital signage mode with the saved settings. The admin interface is always displayed after the machine bootstrap (the interface can be password-protected in the donors’ build); if not used for a few seconds, it will auto-launch the kiosk mode. The web interface can also be used remotely. SSH remote management is available: you can log in as the pi or root user with the same password set for the admin interface, and the operating system can be completely customized by the administrator using this feature (donors version only). The screen can be rotated via the graphical admin interface: normal, inverted, left, right (donors version only).