
Kubernetes: KubeInvaders, CSI Ephemeral Inline Volumes and Reviewing 2019 in Docs

Filed under
Server
OSS
  • KubeInvaders - Gamified Chaos Engineering Tool for Kubernetes

    Some months ago, I released my latest project, called KubeInvaders. The first time I shared it with the community was during an OpenShift Commons Briefing session. KubeInvaders is a gamified chaos engineering tool for Kubernetes and OpenShift that helps you test how resilient your Kubernetes cluster is, in a fun way. (A minimal sketch of the idea follows this list.)

  • CSI Ephemeral Inline Volumes

    Typically, volumes provided by an external storage driver in Kubernetes are persistent, with a lifecycle that is completely independent of pods or (as a special case) loosely coupled to the first pod which uses a volume (late binding mode). The mechanisms for requesting and defining such volumes in Kubernetes are Persistent Volume Claim (PVC) and Persistent Volume (PV) objects. Originally, volumes backed by a Container Storage Interface (CSI) driver could only be used via this PVC/PV mechanism.

    But there are also use cases for data volumes whose content and lifecycle are tied to a pod. For example, a driver might populate a volume with dynamically created secrets that are specific to the application running in the pod. Such volumes need to be created together with a pod and can be deleted as part of pod termination (ephemeral). They get defined as part of the pod spec (inline).

    Since Kubernetes 1.15, CSI drivers can also be used for such ephemeral inline volumes. The CSIInlineVolume feature gate had to be set to enable it in 1.15 because support was still in alpha state. In 1.16, the feature reached beta state, which typically means that it is enabled in clusters by default.

    CSI drivers have to be adapted to support this because, although two existing CSI gRPC calls are used (NodePublishVolume and NodeUnpublishVolume), the way they are used is different and not covered by the CSI spec: for ephemeral volumes, only NodePublishVolume is invoked by kubelet when asking the CSI driver for a volume. All other calls (like CreateVolume, NodeStageVolume, etc.) are skipped. The volume parameters are provided in the pod spec and from there copied into the NodePublishVolumeRequest.volume_context field. There are currently no standardized parameters; even common ones like size must be provided in a format that is defined by the CSI driver. Likewise, only NodeUnpublishVolume gets called after the pod has terminated and the volume needs to be removed. (A sketch of such a pod spec follows this list.)

  • Reviewing 2019 in Docs

    Hi, folks! I’m one of the co-chairs for the Kubernetes documentation special interest group (SIG Docs). This blog post is a review of SIG Docs in 2019. Our contributors did amazing work last year, and I want to highlight their successes.

    Although I review 2019 in this post, my goal is to point forward to 2020. I observe some trends in SIG Docs–some good, others troubling. I want to raise visibility before those challenges increase in severity.
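
For a flavor of what "gamified chaos engineering" does under the hood, here is a minimal sketch of the core idea in Go using client-go: delete a random pod and watch whether the cluster heals. The namespace and kubeconfig path are illustrative assumptions; this is not KubeInvaders' actual code.

```go
// A minimal sketch of the idea behind chaos tools like KubeInvaders:
// pick a random pod in a target namespace, delete it, and observe
// whether the workload recovers. Namespace and kubeconfig path are
// illustrative assumptions.
package main

import (
	"context"
	"fmt"
	"math/rand"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load credentials from a local kubeconfig (assumed path).
	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// List the pods in the namespace under test.
	namespace := "demo" // assumed target namespace
	pods, err := clientset.CoreV1().Pods(namespace).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	if len(pods.Items) == 0 {
		panic("no pods to target")
	}

	// Delete one pod at random; a resilient deployment should replace it.
	victim := pods.Items[rand.Intn(len(pods.Items))]
	if err := clientset.CoreV1().Pods(namespace).Delete(context.TODO(), victim.Name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("deleted pod:", victim.Name)
}
```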
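
And, as promised in the CSI item above, a minimal sketch of a pod spec carrying a CSI ephemeral inline volume, written with the k8s.io/api core/v1 Go types. The driver name and volume attributes are hypothetical; as the article notes, even common parameters like size are defined by each CSI driver.

```go
// A minimal sketch of a pod spec with a CSI ephemeral inline volume,
// built from the k8s.io/api core/v1 Go types. The driver name and the
// volume attributes are hypothetical.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "inline-volume-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "app",
				Image:        "busybox",
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/data"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "scratch",
				VolumeSource: corev1.VolumeSource{
					// Inline CSI volume: defined in the pod spec, created
					// with the pod and removed at pod termination. For such
					// volumes, kubelet only calls NodePublishVolume and
					// NodeUnpublishVolume on the driver.
					CSI: &corev1.CSIVolumeSource{
						Driver: "inline.storage.example.com", // hypothetical driver
						VolumeAttributes: map[string]string{
							"size": "1Gi", // driver-defined parameter
						},
					},
				},
			}},
		},
	}
	fmt.Println("inline CSI volume driver:", pod.Spec.Volumes[0].CSI.Driver)
}
```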

What Must be Considered Before Choosing a Container Platform?

Filed under
Server
OSS

An increasing number of IT groups are adopting development tools such as containers to create cloud-native apps that run consistently across public, private, and hybrid clouds.

However, the trickiest part is finding the best container platform for the organization. It is hard to make the right decisions about container orchestration, which manages container lifecycles so that applications can function at scale and innovation can accelerate.

Containers can be Linux

Containers always run on a Linux host, so it is vital that every containerized application run well on Linux.

The tools used for managing container lifecycles likewise work best with Linux. Kubernetes, today's most popular container orchestration platform, was built on Linux concepts and makes use of Linux tooling and application programming interfaces (APIs) to manage containers.

Companies are advised to opt for a Linux distribution that they know and trust when deciding on the OS for their container platform. Red Hat Enterprise Linux (RHEL) suits this role well, providing stability and security features while still allowing developers to be agile.

Read more

Kubernetes: KubeDR, Elastic and Bug Bounty

Filed under
Server
OSS
  • Catalogic Software Announces KubeDR – Open Source Kubernetes Disaster Recovery

    Catalogic Software, a developer of innovative data protection solutions, today announced the introduction of its Catalogic open source utility, KubeDR, built to provide backup and disaster recovery for Kubernetes cluster configuration, certificates and metadata. Kubernetes is the fastest growing and most popular platform for managing containerized workloads in hybrid cloud environments. Catalogic is also launching cLabs to support new products, open source initiatives and innovations, such as KubeDR.

    Kubernetes stores cluster data in etcd, a distributed key-value store that holds configuration data for distributed systems. While there are solutions focused on protecting persistent volumes, cluster configuration data is often forgotten in existing industry solutions. There is a market need for backup of the Kubernetes cluster data stored in etcd. Catalogic’s new KubeDR is a user-friendly, secure, scalable, open source solution for backup and disaster recovery designed specifically for Kubernetes applications. (A sketch of the underlying etcd snapshot operation follows this list.)

  • Elastic Brings Observability Platform to Kubernetes

    Elastic N.V. announced this week that Elastic Cloud, a subscription instance of an observability platform based on the open source Elasticsearch engine, is generally available on Kubernetes.

    According to Anurag Gupta, principal product manager for Elastic Cloud, deploying Elastic Cloud on Kubernetes (ECK) eliminates the need to invoke an instance of the platform running outside their Kubernetes environment.

  • Kubernetes Launches Bug Bounty

    Kubernetes, the open-source container management system, has opened up its formerly private bug bounty program and is asking hackers to look for bugs not just in the core Kubernetes code, but also in the supply chain that feeds into the project.

    The new bounty program is supported by Google, which originally wrote Kubernetes, and it’s an extension of what had until now been an invitation-only program. Google has lent financial support and security expertise to other bug bounty programs for open source projects. The range of rewards is from $100 to $10,000 and the scope of what’s considered a valid target is unusual.

  • Google Partners With CNCF, HackerOne on Kubernetes Bug Bounty
  • CNCF, Google, and HackerOne launch Kubernetes bug bounty program

    Bug bounty programs motivate individuals and hacker groups to not only find flaws but disclose them properly, instead of using them maliciously or selling them to parties that will. Originally designed by Google and now run by the CNCF, Kubernetes is an open source container orchestration system for automating application deployment, scaling, and management. Given the hundreds of startups and enterprises that use Kubernetes in their tech stacks, it’s significantly cheaper to proactively plug security holes than to deal with the aftermath of breaches.
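
Setting KubeDR’s specifics aside, the core operation behind etcd-level cluster backup can be sketched with the official etcd Go client: stream a consistent snapshot to a file. The endpoint, TLS setup, and output path below are assumptions, not KubeDR’s actual code.

```go
// A rough sketch of etcd-level cluster backup: stream a point-in-time
// snapshot of the etcd keyspace to a local file with the official Go
// client. Endpoint and output path are placeholders; production
// clusters also need client TLS certificates.
package main

import (
	"context"
	"io"
	"os"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"https://127.0.0.1:2379"}, // assumed endpoint
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// Snapshot returns a reader over a consistent copy of the keyspace.
	rc, err := cli.Snapshot(context.Background())
	if err != nil {
		panic(err)
	}
	defer rc.Close()

	out, err := os.Create("etcd-backup.db")
	if err != nil {
		panic(err)
	}
	defer out.Close()

	if _, err := io.Copy(out, rc); err != nil {
		panic(err)
	}
}
```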

Enterprise Insights: Red Hat And The Public Cloud

Filed under
Red Hat
Server

Open source projects are the epicenter of technology innovation today. Docker and Kubernetes are revolutionizing cloud-native computing, along with data-focused projects like Mongo and Redis and many others. Even as open source projects drive innovation, however, sponsoring companies face a growing existential threat from hyper-scale cloud providers.

Red Hat is the recognized leader in enterprise open source support. It's a successful public company with a track record of growth, so it was somewhat puzzling why the Red Hat board decided to sell to IBM this past year.

Read more

16 Open Source Cloud Storage Software for Linux in 2020

Filed under
Server
OSS

The name "cloud" suggests something vast, spread over a large area. In technical terms, the cloud is virtual infrastructure that provides services to end users in the form of storage, application hosting, or virtualization of physical resources. Nowadays, cloud computing is used by small and large organizations alike, whether for data storage or for offering customers the advantages listed above.

Three types of services are commonly associated with the cloud: SaaS (Software as a Service), which lets users store their data on publicly available services run by large organizations, such as Gmail; PaaS (Platform as a Service), which hosts users' apps or software on another organization's public cloud, for example Google App Engine; and IaaS (Infrastructure as a Service), which virtualizes physical machines and offers them to customers with the feel of a real machine.

Read more

Edge AI server packs in a 16-core Cortex-A72 CPU plus up to 32 i.MX8M SoCs and 128 NPUs

Filed under
GNU
Linux
Server
Hardware

SolidRun’s “Janux GS31 AI Inference Server” runs Linux on its CEx7 LX2160A Type 7 module equipped with NXP’s 16-core Cortex-A72 LX2160A. The system also supplies up to 32 i.MX8M SoCs for video and up to 128 Gyrfalcon Lightspeeur 2803 NPUs via multiple “Snowball” modules.

When people talk about edge AI servers, they might be referring to some of the high-end embedded systems we regularly cover here at LinuxGizmos or perhaps something more server-like such as SolidRun’s rackmount form factor Janux GS31 AI Inference Server. The system would generally exceed the upper limits of our product coverage, but it’s a particularly intriguing beastie. The Janux GS31 is based on a SolidRun CEx7 LX2160A COM Express Type 7 module, which also powers the SolidRun HoneyComb LX2K networking board that we covered in June.

Read more

Also: Google Cloud Now Offering IBM Power Systems

Kubernetes on MIPS

Filed under
Server
Hardware
OSS

Background

MIPS (Microprocessor without Interlocked Pipelined Stages) is a reduced instruction set computer (RISC) instruction set architecture (ISA) that first appeared in 1981 and was developed by MIPS Technologies. The MIPS architecture is now widely used in many electronic products.

Kubernetes has officially supported a variety of CPU architectures, such as x86, arm/arm64, ppc64le, and s390x. However, it does not yet support MIPS, which is a pity. With the widespread use of cloud-native technology, users on the MIPS architecture have an urgent need for Kubernetes on MIPS.

Achievements

For many years, to enrich the ecosystem of the open-source community, we have been working on adapting the MIPS architecture for Kubernetes use cases. With continuous iterative optimization and the improving performance of MIPS CPUs, we have made breakthrough progress on the mips64el platform.

Over the years, we have been actively participating in the Kubernetes community, gaining rich experience in using and optimizing Kubernetes technology. Recently, we adapted Kubernetes to the MIPS architecture platform and reached a new stage on that journey. The team has completed the migration and adaptation of Kubernetes and related components, building not only a stable and highly available MIPS cluster but also passing the conformance test for Kubernetes v1.16.2.
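
Kubernetes and most of its ecosystem are written in Go, and Go's toolchain can cross-compile for little-endian 64-bit MIPS out of the box (Go calls the target mips64le; mips64el is the Debian-style name the article uses). That support is the foundation such a port builds on; a minimal illustration:

```go
// Build this probe for the MIPS target with:
//
//   GOOS=linux GOARCH=mips64le go build -o archprobe .
//
// Running it on the MIPS machine should print "linux/mips64le",
// confirming that the cross-compiled binary runs on the target.
package main

import (
	"fmt"
	"runtime"
)

func main() {
	fmt.Printf("%s/%s\n", runtime.GOOS, runtime.GOARCH)
}
```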

Read more

Kubernetes: Looking for Bugs, New Study and SUSE's Stake

Filed under
Server
OSS
  • Announcing the Kubernetes bug bounty program

    We aimed to set up this bug bounty program as transparently as possible, with an initial proposal, evaluation of vendors, and working draft of the components in scope. Once we onboarded the selected bug bounty program vendor, HackerOne, these documents were further refined based on the feedback from HackerOne, as well as what was learned in the recent Kubernetes security audit. The bug bounty program has been in a private release for several months now, with invited researchers able to submit bugs and help us test the triage process. After almost two years since the initial proposal, the program is now ready for all security researchers to contribute!

    What’s exciting is that this is rare: a bug bounty for an open-source infrastructure tool. Some open-source bug bounty programs exist, such as the Internet Bug Bounty, but that mostly covers core components that are consistently deployed across environments; most bug bounties are still for hosted web apps. In fact, with more than 100 certified distributions of Kubernetes, the bug bounty program needs to apply to the Kubernetes code that powers all of them. By far, the most time-consuming challenge here has been ensuring that the program provider (HackerOne) and the researchers who do the first-line triage are familiar with Kubernetes and able to easily test the validity of a reported bug. As part of the bootstrapping process, HackerOne had their team pass the Certified Kubernetes Administrator (CKA) exam.

  • Kubernetes: a secure, flexible and automated edge for IoT developers

    Cloud native software such as containers and Kubernetes, together with IoT/edge, is playing a prominent role in the digital transformation of enterprise organisations. They are particularly critical to DevOps teams that are focused on faster software releases and more efficient IT operations through collaboration and automation. Most cloud native software is open source, which broadens the pool of developers contributing to and customising the software. This has led to streamlined, low-footprint versions of Kubernetes that are well suited to IoT/edge workloads.

  • What’s New with SUSE CaaS Platform?

    SUSE CaaS Platform continues its steady pace of advancement, delivering new capabilities targeted at improving the Kubernetes platform operator experience. In addition to updating to Kubernetes 1.16, the SUSE CaaS Platform also now enables operators to consolidate operations across multi-cluster, multi-cloud, and multi-platform environments; to simplify cluster and application management with a web-based console; and to optimize system performance with powerful monitoring and management capabilities.

    Customer centricity was once again at the heart of feature considerations and enhancements for SUSE CaaS Platform. Over the past couple of weeks, we heard an increasing desire from our customers for key capabilities like the need for a unified management console and the need for more powerful data visualization. We listened to you, and your needs, and let that be our guide for development.

Kubernetes in the News

Filed under
Server
OSS
  • Should You Be Using Kubernetes?

    Like most people, I was only vaguely familiar with Kubernetes until my company started working with it. Since then, I’ve gained a deep appreciation for what it brings to cloud application management.

    For those unfamiliar, Kubernetes is a container-orchestration framework developed in 2014, originally as an internal project at Google. The framework automates much of the work involved in deploying, managing and scaling software. The Cloud Native Computing Foundation currently manages Kubernetes as an open source project, and it is distributed under the Apache 2.0 license.

    When we started our project, I understood only the basics of this framework. But as I dove deeper into the infrastructure and logic of Kubernetes, I discovered its distinct advantages when it came to integrating hardware, vendors and clouds onto a single platform.

  • Kubernetes Gets a Runtime Security Tool
  • 4 Ways Kubernetes Could Be Improved

    Kubernetes is good, but it could be improved. Here are some things that could be better.

    Like almost everyone else these days, I think Kubernetes is the best container orchestration solution. But that doesn’t mean Kubernetes is without its flaws. For my money, there are a number of things that Kubernetes could do better—and needs to if it is going to remain the de facto open source container orchestrator.

    Indeed, some days I think the best thing I can say about Kubernetes is that it has fewer shortcomings than its competitors (I’m looking at you, Docker Swarm) rather than that it truly stands apart for its strengths.

    Here’s a list of areas where Kubernetes can be improved.

  • VMware Eyes Storage Options for Kubernetes
  • What Do Customers Want From The Kubernetes Ecosystem In 2020
  • Why the Air Force put Kubernetes in an F-16

Server: SysAdmins, So-called 'Ops', Infrastructure-as-Code (More Buzzwords) and Kubernetes Hype

Filed under
Server
  • 5 ops hacks for sysadmins

    As a sysadmin, every day I am faced with problems I need to solve quickly because there are users and managers who expect things to run smoothly. In a large environment like the one I manage, it's nearly impossible to know all of the systems and products from end to end, so I have to use creative techniques to find the source of the problems and (hopefully) come up with solutions.

    This has been my daily experience for well over 20 years, and I love it! Coming to work each day, I never quite know what will happen. So, I have a few quick and dirty tricks that I default to when a problem lands on my lap, but I don't know where to start.

  • Are you being the right person for DevOps?

    What does it mean to be the "right" person in a DevOps environment? That's the question that Josh Atwell, senior tech advocate at Splunk, tried to answer in his Lightning Talk at All Things Open 2019.

    "Being the right person for DevOps is being more than just your ops/dev role," says Josh. "In order to be the right person for DevOps, you have to be improving yourself, and you have to be working to improve for others."

    Watch Josh's Lightning Talk, "Are you being the right person for DevOps?" to learn why you should add communication, selflessness, self-care, and celebration to your list of core DevOps traits.

  • Infrastructure-as-Code mistakes and how to avoid them

    Two industry trends point to a gap in the DevOps tooling chosen by many. Operations teams need more than an Infrastructure-as-Code approach; they need a complete model-driven operations mentality. Learn how Canonical has addressed these concerns by developing Juju, an open source DevOps tool, and used it to create multiple world-leading products.

    [...]

    Juju is simple, secure devops tooling built to manage today’s complex applications wherever you run your software. Compute, storage, networking, service discovery and health monitoring come for free and work with Kubernetes, the cloud and the laptop.

    Juju allows your software infrastructure to maintain always-optimal configuration. As your deployment changes, every application's configuration operations are dynamically adjusted by charms. Charms are software packages that are run alongside your applications. They encode business rules for adapting to environmental changes.

    Using a model-driven mentality means raising the level of abstraction. Users of Juju quickly get used to a flexible, declarative syntax that is substrate-agnostic. Juju interacts with the infrastructure provider, but operations code remains the same across substrates. Focusing on creating a software model of your product's infrastructure increases productivity and reduces complexity.

    By automating infrastructure at a low level of abstraction, DevOps has bought the industry some breathing space. But that breathing space is running out.

  • 5 Kubernetes trends to watch in 2020

    “As more and more organizations continue to expand on their usage of containerized software, Kubernetes will increasingly become the de facto deployment and orchestration target moving forward,” says Josh Komoroske, senior DevOps engineer at StackRox.

    Indeed, some of the same or similar catalysts of Kubernetes interest to this point – containerization among them – are poised to continue in 2020. The shift to microservices architecture for certain applications is another example.


More in Tux Machines

Events: LCA Talks and ChefConf 2020 CFP

  • The dark side of expertise

    Everyone has expertise in some things, which is normally seen as a good thing to have. But Dr. Sean Brady gave some examples of ways that our expertise can lead us astray, and actually cause us to make worse decisions, in a keynote at the 2020 linux.conf.au. Brady is a forensic engineer who specializes in analyzing engineering failures to try to discover the root causes behind them. The talk gave real-world examples of expertise gone wrong, as well as looking at some of the psychological research that demonstrates the problem. It was an interesting view into the ways that our brains work—and fail to work—in situations where our expertise may be sending our thoughts down the wrong path.

    Brady began his talk by going back to 1971 and a project to build a civic center arena in Hartford, Connecticut in the US. The building was meant to hold 10,000 seats; it had a large roof that was a "spiderweb of steel members". That roof would be sitting on four columns; it was to be built on the ground and then lifted into place.

  • Poker and FOSS

    He introduced poker with a definition: "Poker is a gambling game of strategy played by people for money, using cards". The order of the terms in that definition is important, he said. In online poker, though, the "people" element is weakened because you can't see and directly interact with the other people you are playing with. So, unlike real-life poker, online poker is more about sociology than psychology; serious players track the trends of the player base as a whole, rather than trying to recognize the quirks of a particular person.

    That means online poker is "really about money". In order to succeed, one has to develop some weird views of the value of money. Even in games with relatively small stakes, players can win or lose a few thousand dollars in a session; in games with "nosebleed stakes", a player could be up or down by a million dollars in an evening.

    The game is particularly popular in the US, UK, and Australia, he said; it is played online and in face-to-face games in people's homes or at casinos. Poker became mainstream in the late 1990s, largely due to the "Late Night Poker" television series in the UK. There are a lot of different kinds of poker games, but the show focused on no-limit Texas hold 'em, which is the most "high drama" of poker games, so it was well-suited to television. The show pioneered the use of a hole-card camera, so that viewers could see the two unseen cards each player was dealt. That innovation allowed viewers and commentators to analyze the choices that the players were making; without seeing the hole cards, watching other people play poker is about as interesting as "watching paint dry", Kuhn said.

    He did not go into the rules of poker much in the talk; a lot of it is not really germane to his topic. The important things to note are that it is a zero-sum, partial-information game where players are playing against each other and not the house (as they are in most other gambling games). It is a game of skill—better players win more over time—but there is a huge element of chance. In order for the house to make any money (casinos are not charities after all), a small percentage of the bets are kept by the house, which is usually called the "rake".

    All of that made poker an ideal candidate for online play. He put up a screen shot of an online poker game from 1999 and noted that all of today's poker sites have a similar look. It features a simple user interface that allows players to quickly and easily see the cards and make their bets. Most online poker players do not want sophisticated graphics and the like. So poker is relatively easy to write an online system for; there are a few "tricky bits", but in comparison to, say, an online multiplayer role-playing game, there are only minimal timing or network-delay issues to handle. It is completely turn-based and the state of the game is easily maintained on the server side. In addition, the client does not need any secret information, so the ability to cheat by extracting secrets from the data sent back and forth is eliminated—or, at least, it should be. The main problem for these systems is scaling them to accommodate as many tables as there is demand for.

    The "watershed moment" for online poker came in 2003 when Chris Moneymaker—his actual birth name, as has been documented—entered a "satellite tournament" for the World Series of Poker (WSoP). Moneymaker paid $86 to enter the tournament and ended up winning the $10,000 entry into the main WSoP event in Las Vegas; he won that tournament and received $2.5 million for doing so. That created a huge boom in online poker, Kuhn said.

  • ChefConf 2020 CFP – Make the Work Flow

    So hopefully you’ve taken the time to submit something. Lots of folks have, and thank you! Maybe you’re still not sure what you could talk about at ChefConf? Maybe you’ve got some interesting people stories from your time in the automation mines. Over the years we’ve categorized these talks as “DevOps” or “People, Processes, and Teams”, but the real guts of the discussion centers on how tooling helps people get their jobs done better, as well as how new theories in teamwork and product delivery impact technical teams. How we work together sets the stage for how we succeed together.

Graphics: Dav1d AV1 Acceleration, AMDVLK and Sway 1.4

  • Dav1d AV1 Decoder Begins Adding AVX-512 Optimizations For Intel Ice Lake

    Ahead of the forthcoming dav1d 0.6 release, this open-source AV1 video decoder has begun implementing AVX-512 optimizations targeting Intel Ice Lake processors, promising yet more speed from an already quite speedy decoder.

  • AMDVLK 2020.Q1.1 Brings Some Performance Tuning, Still On Vulkan 1.1

    Out this morning is AMDVLK 2020.Q1.1, AMD's first official open-source Vulkan driver code drop of the new year. While the Radeon Software Adrenalin Edition driver for Windows was recently updated with Vulkan 1.2 support, this AMDVLK release is still on Vulkan 1.1, though at least updated against API 1.1.130 compliance. Hopefully their next code drop will have Vulkan 1.2 support officially exposed. Meanwhile, Mesa's RADV Radeon Vulkan driver has supported Vulkan 1.2 since hours after the specification's unveiling.

  • Sway 1.4 Wayland Compositor Brings VNC Support, Initial Bits For MATE Panel Support

    Sway 1.4 is out today as the newest version of this i3-inspired Wayland compositor with a growing following. Sway 1.4 consists of nearly 200 changes from over 50 contributors, showing the significant progress of this Wayland compositor, which has been quick to pick up features over the past few years.

LWN and Oracle on Linux 5.x Kernel

  • Grabbing file descriptors with pidfd_getfd()

    In response to a growing desire for ways to control groups of processes from user space, the kernel has added a number of mechanisms that allow one process to operate on another. One piece that is currently missing, though, is the ability for a process to snatch a copy of an open file descriptor from another. That gap may soon be filled, though, if the pidfd_getfd() system-call patch set from Sargun Dhillon is merged.

    One thing that is possible in current kernels is to open a file that another process also has open; the information needed to do that is in each process's /proc directory. That does not work, though, for file descriptors referring to pipes, sockets, or other objects that do not appear in the filesystem hierarchy. Just as importantly, though, opening a new file in this way creates a new entry in the file table; it is not the entry corresponding to the file descriptor in the process of interest. That distinction matters if the objective is to modify that particular file descriptor.

    One use case mentioned in the patch series is using seccomp to intercept attempts to bind a socket to a privileged port. A privileged supervisor process could, if it so chose, grab the file descriptor for that socket from the target process and actually perform the bind — something the target process would not have the privilege to do on its own. Since the grabbed file descriptor is essentially identical to the original, the bind operation will be visible to the target process as well.

    For the sufficiently determined, it is actually possible to extract a file descriptor from another process now. The technique involves using ptrace() to attach to that process, stop it from executing, inject some code that opens a connection to the supervisor process and sends the file descriptor via an SCM_RIGHTS datagram, then running that code. This solution might justly be said to be slightly lacking in elegance. It also requires stopping the target process, which is likely to be unwelcome. (A user-space sketch of the proposed call follows this list.)

  • configfd() and shifting bind mounts

    The 5.2 kernel saw the addition of an extensive new API for the mounting (and remounting) of filesystems; this article covered an early version of that API. Since then, work in this area has mostly focused on enabling filesystems to support this API fully. James Bottomley has taken a look at this API as part of the job of redesigning his shiftfs filesystem and found it to be incomplete. What has followed is a significant set of changes that promise to simplify the mount API — though it turns out that "simple" is often in the eye of the beholder.

    The mount API work replaces the existing, complex mount() system call with a half-dozen or so new system calls. An application would call fsopen() to open a filesystem stored somewhere or fspick() to open an already mounted filesystem. Calls to fsconfig() set various parameters related to the mount; fsmount() is then called to mount a filesystem within the kernel and move_mount() to attach the result to the filesystem hierarchy somewhere. There are a couple more calls to fill in other parts of the interface as well. The intent is for this set of system calls to be able to replace mount() entirely with something that is more flexible, capable, and maintainable. (A user-space sketch of this call sequence follows this list.)

    Back in November, Bottomley discovered one significant gap with the new API: it is not possible to use it to set up a read-only bind mount. The problem is that bind mounts are special; they do not represent a filesystem directly. Instead, they can be thought of as a view of a filesystem that is mounted elsewhere. There is no superblock associated with a bind mount, which turns out to be a problem where the new API is concerned, since fsconfig() is designed to operate on superblocks. An attempt to call fsconfig() on a bind mount will end up modifying the original mount, which is almost certainly not what the caller had in mind. So there is no way to set the read-only flag for a bind mount.

    David Howells, the creator of the new mount API, responded that what is needed is yet another system call, mount_setattr(), which would change attributes of mounts. That would work for the read-only case, Bottomley said, but it falls down when it comes to more complex situations, such as his proposed UID-shifting bind mount. Instead, he said, the file-descriptor-based configuration mechanism provided by fsconfig() is well suited to this job, but it needs to be made more widely applicable. He suggested that this interface be made more generic so that it could be used in both situations (and beyond).

  • Accelerating netfilter with hardware offload, part 1

    Supporting network protocols at high speeds in pure software is getting increasingly difficult, with 25-100Gb/s interfaces available now and 200-400Gb/s starting to show up. Packet processing at 100Gb/s must happen in 200 cycles or less, which does not leave much room for processing at the operating-system level. Fortunately some operations can be performed by hardware, including checksum verification and offloading parts of the packet send and receive paths. As modern hardware adds more functionality, new options are becoming available.

    The 5.3 kernel includes a patch set from Pablo Neira Ayuso that added support for offloading some packet filtering with netfilter. This patch set not only adds the offload support, but also performs a refactoring of the existing offload paths in the generic code and the network card drivers. More work came in the following kernel releases. This seems like a good moment to review the recent advancements in offloading in the network stack.

  • Linux Kernel Developments Since 5.0: Features and Developments of Note

    Last year, I covered features in Linux kernel 5.0 that we thought were worth highlighting. Unbreakable Enterprise Kernel 6 is based on stable kernel 5.4 and was recently made available as a developer preview. So, now is as good a time as any to review developments that have occurred since 5.0. While the features below are roughly in chronological order, there is no significance to the order otherwise.

    BPF spinlock patches: BPF (Berkeley Packet Filter) spinlock patches give BPF programs increased control over concurrency. Learn more about BPF and how to use it in this seven-part series by Oracle developer Alan Maguire.

    Btrfs ZSTD compression: The Btrfs filesystem now supports the use of multiple ZSTD (Zstandard) compression levels. See this commit for some information about the feature and the performance characteristics of the various levels. (A mount-option sketch follows this list.)

    Memory compaction improvements: Memory compaction has been reworked, resulting in significant improvements in compaction success rates and CPU time required. In benchmarks that try to allocate Transparent HugePages in deliberately fragmented virtual memory, the number of pages scanned for migration was reduced by 65% and the free scanner was reduced by 97.5%.
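
As promised above, here is a sketch of the supervisor side of the proposed pidfd_getfd() call. It assumes a kernel that carries the syscall (it was ultimately merged in Linux 5.6) and uses the golang.org/x/sys/unix wrappers PidfdOpen and PidfdGetfd; the target PID and descriptor number are placeholders, and the caller needs ptrace-level privilege over the target.

```go
// A sketch of grabbing a file descriptor from another process via
// pidfd_getfd(). PID and fd number are placeholders; requires a kernel
// with the syscall and sufficient privilege (e.g. CAP_SYS_PTRACE).
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	targetPid := 1234 // placeholder: the process to grab from
	targetFd := 3     // placeholder: the descriptor number inside it

	// Get a pidfd referring to the target process.
	pidfd, err := unix.PidfdOpen(targetPid, 0)
	if err != nil {
		panic(err)
	}
	defer unix.Close(pidfd)

	// Duplicate the target's descriptor into this process. Unlike
	// opening via /proc, this also works for pipes and sockets, and the
	// copy refers to the same open file description as the original.
	fd, err := unix.PidfdGetfd(pidfd, targetFd, 0)
	if err != nil {
		panic(err)
	}
	fmt.Println("grabbed fd:", fd)
}
```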
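
The new mount API's call sequence can likewise be sketched from user space. This assumes the golang.org/x/sys/unix wrappers for fsopen(), fsconfig(), fsmount(), and move_mount(); it mounts a small tmpfs, needs CAP_SYS_ADMIN, and the target path is a placeholder.

```go
// A sketch of the new mount API: create a tmpfs with fsopen()/
// fsconfig(), instantiate it with fsmount(), attach it with
// move_mount(). Target path is a placeholder; requires CAP_SYS_ADMIN.
package main

import "golang.org/x/sys/unix"

func main() {
	// fsopen(): open a filesystem configuration context for tmpfs.
	fsfd, err := unix.Fsopen("tmpfs", unix.FSOPEN_CLOEXEC)
	if err != nil {
		panic(err)
	}
	defer unix.Close(fsfd)

	// fsconfig(): set parameters, then create the superblock. This is
	// the per-superblock step that, as the article explains, has no
	// natural analogue for bind mounts.
	if err := unix.FsconfigSetString(fsfd, "size", "16m"); err != nil {
		panic(err)
	}
	if err := unix.FsconfigCreate(fsfd); err != nil {
		panic(err)
	}

	// fsmount(): turn the configured context into a mount object.
	mfd, err := unix.Fsmount(fsfd, unix.FSMOUNT_CLOEXEC, 0)
	if err != nil {
		panic(err)
	}
	defer unix.Close(mfd)

	// move_mount(): attach the mount at a path (placeholder).
	if err := unix.MoveMount(mfd, "", unix.AT_FDCWD, "/mnt/scratch",
		unix.MOVE_MOUNT_F_EMPTY_PATH); err != nil {
		panic(err)
	}
}
```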
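
Finally, the Btrfs ZSTD compression levels mentioned in the kernel roundup are selected with a plain mount option, which the classic mount(2) wrapper can pass; the device and mount point below are placeholders.

```go
// A minimal sketch of mounting Btrfs with a ZSTD compression level via
// mount(2). Device and mount point are placeholders; requires root.
package main

import "golang.org/x/sys/unix"

func main() {
	// compress=zstd:3 asks Btrfs for ZSTD at level 3 (the default level).
	if err := unix.Mount("/dev/sdb1", "/mnt/data", "btrfs", 0, "compress=zstd:3"); err != nil {
		panic(err)
	}
}
```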

Lakka 2.3.2 with RetroArch 1.8.4

The Lakka team wishes everyone a happy new year and welcomes 2020 with a new update and a new tier-based release system! This new Lakka update, 2.3.2, contains RetroArch 1.8.4 (was 1.7.2), some new cores and a handful of core updates.

Read more