June 2019

4MLinux 30.0 BETA released.

Filed under
GNU
Linux

4MLinux 30.0 BETA is ready for testing. Basically, at this stage of development, 4MLinux BETA has the same features as 4MLinux STABLE, but it provides a huge number of updated packages.

Road map:
June 2019 -> BETA
September 2019 -> STABLE
December 2019 -> OLD STABLE
March 2020 -> EOL

Read more

4 Best Adobe Illustrator Alternatives for Linux

Filed under
GNU
Linux
Software

Adobe Illustrator is considered to be the best when it comes to illustration and design on Windows and Mac, but the app isn’t available on Linux. So, if you’ve recently switched to an open source Linux operating system, you’ll need to find a suitable alternative. Here are the best Adobe Illustrator alternatives for Linux.

Read more

GNUnet 0.11.5 released

Filed under
GNU

We are pleased to announce the release of GNUnet 0.11.5.

This is a bugfix release for 0.11.4, mostly fixing a few minor bugs and improving performance, in particular for identity management with a large number of egos. In the wake of this release, we also launched the REST API documentation. In terms of usability, users should be aware that there are still a large number of known open issues in particular with respect to ease of use, but also some critical privacy issues especially for mobile users. Also, the nascent network is tiny (about 200 peers) and thus unlikely to provide good anonymity or extensive amounts of interesting information. As a result, the 0.11.5 release is still only suitable for early adopters with some reasonable pain tolerance.

Read more

Foundations: Open Mobility Foundation, prpl Foundation, Cloud Native Computing Foundation and OpenChain (LF)

Filed under
OSS
  • Cities lead the way on open source tools for mobility

    The Open Mobility Foundation aims to evolve how cities manage transportation today and in the future, and to develop and deploy digital mobility tools.

  • Open Mobility Foundation seeks to improve transportation with open source tools

    This morning, a host of U.S. cities and organizations — including Austin, Chicago, Los Angeles, Louisville, Miami-Dade County, Miami, Minneapolis, New York City DOT, New York City Taxi and Limo Commission, Philadelphia, Portland, San Francisco, San Jose, Santa Monica, Seattle, and Washington, D.C. — announced their participation in the newly formed Open Mobility Foundation (OMF), a nonprofit coalition that seeks to improve intercity transportation infrastructure with open source software tools. Escooter startup Bird also said it’ll join as a founding member.

  • GoodFirms Publishes Best Free & Open Source Software for Various Categories [Ed: Probably another marketing firm like Gartner, 'monetising' fake recommendation and lobbying services.]

    In this competitive world, running a business is an expensive endeavor, and no entrepreneur can afford to take risks. But thinking like a small business, making critical decisions, and getting things done can be a smart move. To help in this situation, GoodFirms.co has come up with ten blogs for entrepreneurs. In these blogs, you can find various free business software tools, briefly introduced along with their features, to streamline your work and increase productivity.

  • ADTRAN Expands Participation in prpl Foundation—Open-Source Consortium Enabling the Security and Interoperability of Devices for the IoT and Smart Societies of the Future

    ADTRAN®, Inc., (ADTN), a leading provider of next-generation open networking and subscriber experience solutions, today announced it has joined the prpl Foundation—an open-source, community-driven, collaborative non-profit foundation that strives to enable the security and interoperability of embedded devices.

  • Cloud Native Computing Foundation Announces DiDi as Winner of Top End User Award

    KubeCon + CloudNativeCon + Open Source Summit China – The Cloud Native Computing Foundation® (CNCF®), which sustains and integrates open source technologies like Kubernetes® and Prometheus™, today announced that DiDi, the world's leading multi-modal transportation platform, has won the CNCF End User Award in recognition of its contributions to the cloud native ecosystem.

  • Cloud Native Computing Foundation Welcomes Ant Financial as Gold End User Member

    KubeCon + CloudNativeCon + Open Source Summit China 2019 -- The Cloud Native Computing Foundation (CNCF), which sustains and integrates open source technologies like Kubernetes® and Prometheus™, today announced that Ant Financial has joined the Foundation as a Gold Member.

  • CNCF to Expand Scope of SIGs

    The Cloud Native Computing Foundation announced at the KubeCon + Cloud Native + Open Source China Summit today that it is expanding the number of special interest groups (SIGs) surrounding Kubernetes, as part of an effort to accelerate development of critical complementary technologies.

    Dan Kohn, executive director for the CNCF, says it’s become apparent that the nine members of the technical oversight committee (TOC) for Kubernetes need to be supported with expertise in specific areas. The first two SIGs to be formed will be focused on security and storage, followed by SIGs addressing network traffic, observability, governance, application delivery, and core and applied architectures.

    [...]

    Kohn says as part of this initiative, one of the goals of the CNCF is to entice more developers to contribute to an increasing number of open source projects. Kohn estimates that well more than half the developers who leverage open source software don’t contribute to any project. Many of those developers are already creating forks of open source code every time they patch open source software on their own. Every time that software is updated—otherwise known as carrying your own patch—those developers have to reconstruct that patch. That issue would go away if the developers contributed their patches to the open source project, which would then ensure the issue is addressed as part of the life cycle of the project, he notes.

  • Wind River Becomes First to Achieve OpenChain 2.0 Conformance

    Wind River®, a leader in delivering software for critical infrastructure, today announced that it is certified on OpenChain version 2.0. It was also the first company to become OpenChain conformant.

    Hosted by the Linux Foundation, the OpenChain Project aims to build trust in open source by making open source license compliance simpler and more consistent. By working through the OpenChain Specification conformance process and curriculum, open source license compliance becomes more predictable, understandable and efficient for all participants in the software supply chain.

Latest Openwashing

Filed under
OSS
  • Salesforce open sources research to advance state of the art in AI for common sense reasoning [Ed: Openwashing by proprietary software giants. How fashionable. The open source 'movement' lets them pretend to respect users whilst actually attacking them. They just tick some box.]
  • Energy sector gets first open-source, tailor-made blockchain [Ed: Hype wave + openwashing when greenwashing of energy companies ain't sufficient]

    A public enterprise grade energy blockchain has powered up with the promise to accelerate a low-carbon, distributed electricity future. For the first time, energy sector companies are hosting validator nodes on a decentralized network as they seek to adapt to a more digitalized and decentralized energy system.

  • Visa modernises B2B global payments through open source blockchain [Ed: Same for banks]
  • Securitize DS Token Protocol goes Open Source [Ed: It's a bloody protocol. This is not "Open Source" but more like API, i.e. dependency on something opaque and centralised]

    The security token issuance platform Securitize raised eyebrows amongst the crypto community this week after releasing its DS Token code to the public. The move goes along with the crypto sector’s long-held stance of open-source projects. Now, programmers from across the globe have a chance to test and advance the platform’s core coding.

  • How SNIA is using Open Source to speed up storage standards

    Developing a storage standard has always been a long, arduous and contentious process. It is the same for most standards.

    However, with the speed that technology is changing, that approach is no longer sustainable, and not just for storage. To understand what change means for the storage industry, Enterprise Times talked with Richelle Ahlvers.

    Ahlvers is a board member at the Storage Networking Industry Association (SNIA). She is also the Chair of the Scalable Storage Management Technical Workgroup. That workgroup is responsible for the Swordfish Storage Management API. Already providing support for block and file storage, it will release support for object storage soon.

    [...]

    Another example that Ahlvers gave is the SNIA work on the CDMI (Cloud Data Management Interface). That spec is now entirely in Open Source. All the bug fixes and changes are done through the Open Source community which, Ahlvers says, makes it faster.

  • Norigin Media open-sources part of TV app technology [Ed: "Part of" means openwashing, i.e. they get to call it 'open' even though it is proprietary]

    TV technology outfit Norigin Media has open-sourced parts of its technology framework for building TV apps in an initiative the company said was aimed at increasing the quality of software across the streaming industry, and encouraging broadcasters to work together by reusing common code.

  • Norigin Media open sources parts of TV App framework
  • Norigin Media open sources TV App framework [Ed: Misleading. Only part was "opened". It's openwashing.]
  • Open Source: the secret sauce to business success [Ed: Why is it that Microsoft employees now become 'journalists' who write about FOSS (when the employer attacks FOSS)?]

    Software is at the heart of the digital revolution and, ultimately, it is what determines the success, agility and competitiveness of businesses looking to succeed in today’s fast paced, digital world.

    Open source is changing the way organisations build software, offering a strong and critical foundation for digital transformation, while bringing teams and departments together. As the approach to in-house software development evolves, organisations understand that their success is determined by the way they participate in Open Source Software (OSS). This offers a realm of opportunities that do not just benefit the IT department, but the business at large.

OSS Leftovers

Filed under
OSS
  • D-Wave Releases D-Wave Hybrid Workflow Platform to Open Source

    D-Wave Hybrid is designed to simplify and accelerate developers’ ability to build and run algorithms across classical and quantum systems, continuing D-Wave’s work to help customers with their real-world application development.

  • D-Wave’s open source platform for quantum-classical hybrid apps hits general availability

    D-Wave today announced the general availability of D-Wave Hybrid, its open source hybrid workflow platform for building and running quantum-classical hybrid applications. You can download D-Wave Hybrid, which is part of the company’s Ocean SDK, from GitHub.

  • Free, open-source virtual modular synth VCV Rack updated to v1.0

    Since its 2017 launch, VCV Rack has helped newbies step into modular synthesis, presenting a free, open-source software that simulates Eurorack on your desktop. VCV Rack has now been updated to version 1.0, which adds powerful features such as 16-voice polyphony, MIDI mapping and more.

    Important to note is that the software retains its intuitive module-patching feature, letting you add and connect both free and purchased modules creatively. What’s neat about v1.0, however, is support for polyphony of up to 16 voices, giving you the ability to produce thicker textures.

  • Healthcare Design Studio, GoInvo Celebrates 15th Anniversary with Release of Open Source Visualizations

    To celebrate 15 years in business, GoInvo, a digital health design consultancy headquartered in Arlington, Massachusetts, today announced the release of two new open source health projects, "Who Uses My Health Data?", and "Precision Medicine Timeline", both of which are available to all for use or modification, under a Creative Commons Attribution v3 license or MIT license.

  • “No Loss” Lotto Comes to Ethereum: Builders Commit to Open-Sourcing the Code

    A “no loss” lottery built atop Ethereum — PoolTogether — quickly generated buzz in cryptocurrency circles this week in being the newest DeFi project on the block.

    Yet the lotto’s hype was met with an initial wave of skepticism, too, as some cryptoverse stakeholders cautioned against using the dapp while its code remained closed-source. That caution was fair, and it got the PoolTogether team’s attention in short order.

  • Qwant Maps: open source Google Maps alternative launches

    Qwant, the French search engine that respects users' privacy, has launched a beta version of Qwant Maps, a, you guessed it, privacy-respecting mapping service.

    Qwant Maps is an open source project that anyone may contribute to. The data is hosted on GitHub and developers may run their own version by following the instructions on the project website.

    The beta version of the mapping service supports desktop and mobile access, and it works similarly to how other mapping services such as Google Maps, Bing Maps, or OpenStreetMap work.

  • Fans resurrect Super Mario Bros Royale as a free open-source project, available to play

    What this ultimately means is that there is a playable free open-source version of Super Mario Bros Royale, known as Mario Royale, available now to play.

  • DBS Bank goes big on open source

    Besides using a slew of open source software, DBS Bank is looking to contribute some of its own projects to the open source community in future

  • The financial services industry is the next great frontier for open source

    Open source software is a driver of the democratization of technology, opening doors, and leveling the playing field for many industries. However, financial services has been a rare exception: financial institutions have tended to rely on their own technology development and operation.

    In a sector that has traditionally served the few and not the many, open source could be the key to make financial services more inclusive for the 2 billion people and 200 million small businesses around the world lacking access to basic services such as banking and lending.

    In a report published by Gartner, global enterprise IT spending in the banking and securities market was estimated to have grown by 4.6% in 2018 in constant US dollars. Banking and securities firms remain steadfast as they continue to prioritize digital transformation. But it has largely been major global banks that have the resources and ability to throw their hats into the ring of technology development—smaller regional banks have tended to stay on the sidelines.

  • Should you be banking on open source analytics?

    Banks see open source as a hotbed of innovation – and a governance nightmare. Do the rewards outweigh the risks? Open source software used to be treated almost as a joke in the financial services sector.

    If you wanted to build a new system, you bought tried and tested, enterprise-grade software from a large, reputable vendor. You didn’t gamble with your customers’ trust by adopting tools written by small groups of independent programmers. Especially with no formal support contracts and no guarantees that they would continue to be maintained in the future.

    Fast-forward to today, and the received wisdom seems to have turned on its head. Why invest in expensive proprietary software when you can use an open source equivalent for free? Why wait months for the official release of a new feature when you can edit the source code and add it yourself? And why lock yourself into a vendor relationship when you can create your own version of the tool and control your own destiny?

  • Algorand, a Dapp Analytics Suite, Goes Open Source

    Algorand, a permission-less, proof-of-stake blockchain and technology company, announced that their node repository is now open source.

    Part of Algorand’s ongoing mission to develop and promote a decentralized blockchain, the company has made several of its projects open source over the past year, including a Verifiable Random Function and their Developer SDKs.

    The blockchain’s nodes are run by diverse entities — businesses, individuals, and consortiums — spread across many countries, according to the company website. The decentralized voting mechanism pools and randomly selects these users to develop a unique committee to approve every block.

  • [Old] On Usage of The Phrase "Open Source"

    It is unfortunate that for some time the Open Source Initiative deprecated Richard Stallman and Free Software, and that some people still consider Open Source and Free Software to be different things today. I never meant it to be that way. Open Source was meant to be a way of promoting the concept of Free Software to business people, who I have always hoped would thus come to appreciate Richard and his Free Software campaign. And many have. Open Source licenses and Free Software licenses are effectively the same thing.

Open Hardware/Modding: RISC-V, EDA, ACEINNA, Arduino and ESP32

Filed under
Hardware
OSS
  • Open Source Processors: Fact Or Fiction?

    Open source processors are rapidly gaining mindshare, fueled in part by early successes of RISC-V, but that interest frequently is accompanied by misinformation based on wishful thinking and a lack of understanding about what exactly open source entails.

    Nearly every recent conference has some mention of RISC-V in particular, and open source processors in general, whether in keynote speeches, technical sessions, or panels. What’s less obvious is that open ISAs are not a new phenomenon, and neither are free, open processor implementations.

  • Will Open-Source EDA Work?

    Open-source EDA is back on the semiconductor industry’s agenda, spurred by growing interest in open-source hardware. But whether the industry embraces the idea with enough enthusiasm to make it successful is not clear yet.

    One of the key sponsors of this effort is the U.S. Defense Advanced Research Projects Agency (DARPA), which is spearheading a number of programs to lower the cost of chip design, including one for advanced packaging and another for security. The idea behind all of them is to utilize knowledge extracted from millions of existing chip designs to make chip engineering more affordable and predictable.

  • Why Autonomous Vehicle Developers Are Embracing Open Source

    There's a growing trend of autonomous vehicle developers open-sourcing their software tools and hardware, even for applications outside of automotive.

  • Open-source inertial measurement unit sensor offers an affordable and rugged solution

    ACEINNA offers the new OpenIMU300RI. The device is a rugged, open-source, sealed-package, 9-DOF IMU for autonomous off-road, construction, agricultural and automotive vehicle applications. This new open-source IMU enables engineers to simply optimise an attitude, navigation or other algorithm for their vehicle/application and run it on the IMU.

    [...]

    “Different vehicle platforms have different dynamics,” explains James Fennelly, product manager at ACEINNA. “To get the best performance, the attitude, navigation or other algorithm needs to be tailored for each vehicle platform and application. The ACEINNA OpenIMU300RI open-source platform gives designers a flexible and simple-to-integrate IMU solution that can be easily optimized for a wide range of vehicles and applications.”

  • Open Source ESP32 3D Printer Board Supports Marlin 2.0 Firmware
  • The Octopus is a 5K full frame open source camera that lets you swap out sensors

    Now that digital imaging sensors are starting to become more freely available to the masses, all kinds of open source projects have been popping up that use them. Most of them are typically fairly limited to things like the Raspberry Pi or development boards like the Arduino and ESP32.

    But now, there is a new and pretty serious looking open source camera out there. It’s called the Octopus, it has interchangeable sensors that go up to 5K full frame, it’s fully programmable and runs on the open source operating system, Linux.

  • ScopeFun open source all-in-one instrumentation

    ScopeFun has launched a new project via Crowd Supply for their open source all-in-one instrumentation hardware, aptly named the ScopeFun. ScopeFun has been created to provide an affordable platform that offers the following tools: an oscilloscope, arbitrary waveform generator, spectrum analyzer, logic analyzer and digital pattern generator.

    The accompanying software runs on Windows, Linux, and Mac, and also provides a Server Mode that supports remote connections over an IP network. “A Xilinx Artix-7 FPGA and a Cypress EZ-USB FX3 controller allow the board to interface with a PC while maintaining fast data rates. Samples are buffered using 512 Megabytes of DDR3 SDRAM.”

  • Bloom Chair is open source furniture that lets you design your own piece

    Call it modular, call it DIY, call it I-have-control-over-my-interiors; the purpose of the Bloom Chair is to let you customize your chair, just the way you like it to be. It’s a collaborative effort between you and the manufacturer, where you get to download the modular design, cut it yourself and finally assemble it. While you make your piece, you have the liberty of modifying the pattern and making the end-shape define your vision. Haffun!

CMS: Acquia, Drupal and Top CMS Platforms

Filed under
Server
OSS
Drupal
  • Digital experience firm Acquia sees India as a global delivery centre

    Acquia, a US-based open source digital experience company, has announced the opening of an office in Pune, expanding its presence in the Asia-Pacific region. Taking this next step in its global growth strategy, Acquia looks to bolster its partner network and expand its global customer footprint.

  • EPAM Named An Acquia Global Select Partner, Joining Elite Group Of Partners

    EPAM Systems, Inc. (EPAM), a leading global provider of digital platform engineering and software development services, today announced that it has achieved Global Select status in Acquia's Partner Program. Acquia, an open source digital experience company, provides software and services built around Drupal. As one of only a few elite Global Select partners, EPAM leverages its Acquia and Drupal expertise to help its clients design, build and deliver engaging and intelligent customer experiences.

  • The Top 13 Free and Open Source Content Management Platforms

    This is the most complete and up-to-date directory of free and open source content management platforms available on the web.

  • 4 great Java-based CMS options

    OpenCms has been around since 1999, and it's been an open source Java CMS platform since 2001. Not only is it one of the oldest Java-based CMS platforms, it's one of the oldest CMS tools, predating the popular PHP-based WordPress, which debuted in 2003.

    From a developer's perspective, OpenCms is simple to set up and maintain. It runs as a Java servlet, which makes installation easy. It works with most major databases; whether you prefer MySQL, Microsoft SQL Server, MariaDB or another popular database, you can likely run OpenCms without much hassle.

    OpenCms probably won't win awards as the most elegant or attractive Java-based CMS. The interface was overhauled in 2019, but OpenCms doesn't exactly feel modern. It works, but it's a little clunky.

    However, OpenCms does enjoy the distinction as a truly cost-free open source Java CMS. There is no freemium pricing model for the product, and there are no licensing fees.

Linux 5.2-rc7

Filed under
Linux

It's Sunday afternoon _somewhere_ in the world right now. In
particular, in the middle of nowhere on a boat.

I didn't expect to have any internet this week, and honestly, I
haven't had much, and not fast. But enough to keep up with critical
pull requests, and enough to push out an rc.

But credit for the internet goes to Dirk Hohndel and vmware, because
I'm mooching off his phone hotspot WiFi to do this.

Anyway, it's been _fairly_ calm. Would I have hoped for even calmer
with my crappy internet? Sure. But hey, it's a lot smaller than rc6
was and I'm not really complaining.

Read more

Also: Linux 5.2-rc7 Is Quiet & Released On A Boat Somewhere

More in Tux Machines

today's howtos and programming bits

  • How to fix trailing underscores at the end of URLs in Chrome
  • How to Install Ubuntu Alongside With Windows 10 or 8 in Dual-Boot
  • Beginner’s guide on how to git stash :- A GIT Tutorial
  • Handy snapcraft features: Remote build
  • How to build a lightweight system container cluster
  • Start a new Cryptocurrency project with Python
  • [Mozilla] Celery without a Results Backend
  • Mucking about with microframeworks

    Python does not lack for web frameworks, from all-encompassing frameworks like Django to "nanoframeworks" such as WebCore. A recent "spare time" project caused me to look into options in the middle of this range of choices, which is where the Python "microframeworks" live. In particular, I tried out the Bottle and Flask microframeworks—and learned a lot in the process.

    I have some experience working with Python for the web, starting with the Quixote framework that we use here at LWN. I have also done some playing with Django along the way. Neither of those seemed quite right for this latest toy web application. Plus I had heard some good things about Bottle and Flask at various PyCons over the last few years, so it seemed worth an investigation.

    Web applications have lots of different parts: form handling, HTML template processing, session management, database access, authentication, internationalization, and so on. Frameworks provide solutions for some or all of those parts. The nano-to-micro-to-full-blown spectrum is defined (loosely, at least) based on how much of this functionality a given framework provides or has opinions about. Most frameworks at any level will allow plugging in different parts, based on the needs of the application and its developers, but nanoframeworks provide little beyond request and response handling, while full-blown frameworks provide an entire stack by default. That stack handles most or all of what a web application requires.

    The list of web frameworks on the Python wiki is rather eye-opening. It gives a good idea of the diversity of frameworks, what they provide, what other packages they connect to or use, as well as some idea of how full-blown (or "full-stack" on the wiki page) they are. It seems clear that there is something for everyone out there—and that's just for Python. Other languages undoubtedly have their own sets of frameworks (e.g. Ruby on Rails).
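
    To make the comparison concrete, here is the kind of minimal Bottle application the article experiments with (an editor's sketch; the route and port are arbitrary choices, not taken from the article):

      from bottle import Bottle, run

      app = Bottle()

      @app.route('/hello/<name>')
      def hello(name):
          # Bottle only handles routing and the request/response cycle;
          # templates, sessions, and database access are left to plugins.
          return 'Hello, {}!\n'.format(name)

      if __name__ == '__main__':
          run(app, host='localhost', port=8080)  # development server

    A Flask version looks almost identical; at the micro end of the spectrum, the framework mostly stays out of the way.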

Kernel: Linux 5.3, DragonFlyBSD Takes Linux Bits, LWN Paywall Expires for Recent Articles

  • Ceph updates for 5.3-rc1
    Hi Linus,
    
    The following changes since commit 0ecfebd2b52404ae0c54a878c872bb93363ada36:
    
    Linux 5.2 (2019-07-07 15:41:56 -0700)
    
    are available in the Git repository at:
    
    https://github.com/ceph/ceph-client.git tags/ceph-for-5.3-rc1
    
    for you to fetch changes up to d31d07b97a5e76f41e00eb81dcca740e84aa7782:
    
    ceph: fix end offset in truncate_inode_pages_range call (2019-07-08 14:01:45 +0200)
    
    There is a trivial conflict caused by commit 9ffbe8ac05db
    ("locking/lockdep: Rename lockdep_assert_held_exclusive() ->
    lockdep_assert_held_write()"). I included the resolution in
    for-linus-merged.
    
  • Ceph Sees "Lots Of Exciting Things" For Linux 5.3 Kernel

    Ceph for Linux 5.3 is bringing an addition to speed-up reads/discards/snap-diffs on sparse images, snapshot creation time is now exposed to support features like "restore previous versions", support for security xattrs (currently limited to SELinux), addressing a missing feature bit so the kernel client's Ceph features are now "luminous", better consistency with Ceph FUSE, and changing the time granularity from 1us to 1ns. There are also bug fixes and other work as part of the Ceph code for Linux 5.3. As maintainer Ilya Dryomov put it, "Lots of exciting things this time!"

  • The NVMe Patches To Support Linux On Newer Apple Macs Are Under Review

    At the start of the month we reported on out-of-tree kernel work to support Linux on the newer Macs. Those patches were focused on supporting Apple's NVMe drive behavior by the Linux kernel driver. That work has been evolving nicely and is now under review on the kernel mailing list. Volleyed on Tuesday were a set of three patches to the Linux kernel's NVMe code for dealing with the Apple hardware of the past few years in order for Linux to deal with these drives. On Apple 2018 systems and newer, their I/O queue sizing/handling is odd and in other areas not properly following NVMe specifications. These patches take care of that while hopefully not regressing existing NVMe controller support.

  • DragonFlyBSD Pulls In The Radeon Driver Code From Linux 4.4

    While the Linux 4.4 kernel is quite old (January 2016), DragonFlyBSD has now re-based its AMD Radeon kernel graphics driver against that release. It is at least a big improvement compared to its Radeon code having been derived previously from Linux 3.19. DragonFlyBSD developer François Tigeot continues doing a good job herding the open-source Linux graphics driver support to this BSD. With the code that landed on Monday, DragonFlyBSD's Radeon DRM is based upon the state found in the Linux 4.4.180 LTS tree.

  • Destaging ION

    The Android system has shipped a couple of allocators for DMA buffers over the years; first came PMEM, then its replacement ION. The ION allocator has been in use since around 2012, but it remains stuck in the kernel's staging tree. The work to add ION to the mainline started in 2013; at that time, the allocator had multiple issues that made inclusion impossible. Recently, John Stultz posted a patch set introducing DMA-BUF heaps, an evolution of ION, that is designed to do exactly that — get the Android DMA-buffer allocator to the mainline Linux kernel.

    Applications interacting with devices often require a memory buffer that is shared with the device driver. Ideally, it would be memory mapped and physically contiguous, allowing direct DMA access and minimal overhead when accessing the data from both sides at the same time. ION's main goal is to support that use case; it implements a unified way of defining and sharing such memory buffers, while taking into account the constraints imposed by the devices and the platform.

  • clone3(), fchmodat4(), and fsinfo()

    The kernel development community continues to propose new system calls at a high rate. Three ideas that are currently in circulation on the mailing lists are clone3(), fchmodat4(), and fsinfo(). In some cases, developers are just trying to make more flag bits available, but there is also some significant new functionality being discussed.

    clone3()

    The clone() system call creates a new process or thread; it is the actual machinery behind fork(). Unlike fork(), clone() accepts a flags argument to modify how it operates. Over time, quite a few flags have been added; most of these control what resources and namespaces are to be shared with the new child process. In fact, so many flags have been added that, when CLONE_PIDFD was merged for 5.2, the last available flag bit was taken. That puts an end to the extensibility of clone().
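
    Python does not wrap clone() or the proposed clone3(), but the functionality behind CLONE_PIDFD can be sketched with os.pidfd_open(), a Python 3.9 addition backed by a Linux 5.3 system call. This is an editor's illustration of the pidfd idea, not code from the article:

      import os
      import select

      pid = os.fork()
      if pid == 0:
          os._exit(0)          # child: exit immediately

      # Obtain a pidfd for the child (Linux 5.3+, Python 3.9+).
      pidfd = os.pidfd_open(pid)

      # A pidfd polls as readable once the process has exited, so it
      # composes with select()/poll()-based event loops.
      select.select([pidfd], [], [])
      print('child', pid, 'reaped:', os.waitpid(pid, 0))
      os.close(pidfd)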

  • Soft CPU affinity

    On NUMA systems with a lot of CPUs, it is common to assign parts of the workload to different subsets of the available processors. This partitioning can improve performance while reducing the ability of jobs to interfere with each other. The partitioning mechanisms available on current kernels might just do too good a job in some situations, though, leaving some CPUs idle while others are overutilized. The soft affinity patch set from Subhra Mazumdar is an attempt to improve performance by making that partitioning more porous.

    In current kernels, a process can be restricted to a specific set of CPUs with either the sched_setaffinity() system call or the cpuset mechanism. Either way, any process so restricted will only be able to run on the specified CPUs regardless of the state of the system as a whole. Even if the other CPUs in the system are idle, they will be unavailable to any process that has been restricted not to run on them. That is normally the behavior that is wanted; a system administrator who has partitioned a system in this way probably has some other use in mind for those CPUs.

    But what if the administrator would rather relax the partitioning in cases where the fenced-off CPUs are idle and going to waste? The only alternative currently is to not partition the system at all and let processes roam across all CPUs. One problem with that approach, beyond losing the isolation between jobs, is that NUMA locality can be lost, resulting in reduced performance even with more CPUs available. In theory the AutoNUMA balancing code in the kernel should address that problem by migrating processes and their memory to the same node, but Mazumdar notes that it doesn't seem to work properly when memory is spread out across the system. Its reaction time is also said to be too slow, and the cost of the page scanning required is high.
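
    The hard partitioning being discussed is visible from user space; Python wraps sched_setaffinity() directly. A minimal sketch, assuming the machine has CPUs 0 and 1:

      import os

      # Restrict the calling process (pid 0) to CPUs 0 and 1. Under
      # mainline semantics it will never run elsewhere, even if every
      # other CPU is idle; soft affinity would treat this set as
      # "preferred" rather than absolute.
      os.sched_setaffinity(0, {0, 1})

      print('allowed CPUs:', sorted(os.sched_getaffinity(0)))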

How the Open Source Operating System Has Silently Won Over the World

The current and future potential for Linux based systems is limitless. The system’s flexibility allows for the hardware that uses it to be endlessly updated. Functionality can, therefore, be maintained even as the technology around the devices changes. This flexibility also means that the function of the hardware can be modified to suit an ever-changing workplace. For example, because the INSYS icom OS has been specifically designed for use in routers, this has allowed it to be optimised to be lightweight and hardened to increase its security.

Multipurpose OS have large libraries of applications for a diverse range of purposes. Great for designing new uses, but these libraries can also be exploited by actors with malicious intent. Stripping down these libraries to just what is necessary through a hardening process can drastically improve security by reducing the attackable surfaces.

Overall, Windows may have won the desktop OS battle, with only a minority of desktops using Linux. However, desktops are only a minute part of the computing world. Servers, mobile systems and embedded technology that make up the majority are predominately running Linux. Linux has gained this position by being more adaptable, lightweight and portable than its competitors.

Read more

Operating-System-Directed Power-Management (OSPM) Summit

  • The third Operating-System-Directed Power-Management summit

    The third edition of the Operating-System-Directed Power-Management (OSPM) summit was held May 20-22 at the ReTiS Lab of the Scuola Superiore Sant'Anna in Pisa, Italy. The summit is organized to collaborate on ways to reduce the energy consumption of Linux systems, while still meeting performance and other goals. It is attended by scheduler, power-management, and other kernel developers, as well as academics, industry representatives, and others interested in the topics.

  • The future of SCHED_DEADLINE and SCHED_RT for capacity-constrained and asymmetric-capacity systems

    The kernel's deadline scheduling class (SCHED_DEADLINE) enables realtime scheduling where every task is guaranteed to meet its deadlines. Unfortunately SCHED_DEADLINE's current view on CPU capacity is far too simple. It doesn't take dynamic voltage and frequency scaling (DVFS), simultaneous multithreading (SMT), asymmetric CPU capacity, or any kind of performance capping (e.g. due to thermal constraints) into consideration. In particular, if we consider running deadline tasks in a system with performance capping, the question is "what level of guarantee should SCHED_DEADLINE provide?".

    An interesting discussion about the pros and cons of different approaches (weak, hard, or mixed guarantees) developed during this presentation. There were many different views but the discussion didn't really conclude and will have to be continued at the Linux Plumbers Conference later this year.

    The topic of guaranteed performance will become more important for mobile systems in the future as performance capping is likely to become more common. Defining hard guarantees is almost impossible on real systems since silicon behavior very much depends on environmental conditions. The main pushback on the existing scheme is that the guaranteed bandwidth budget might be too conservative. Hence SCHED_DEADLINE might not allow enough bandwidth to be reserved for use cases with higher bandwidth requirements that can tolerate bandwidth reservations not being honored.
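
    For reference, a SCHED_DEADLINE reservation is a (runtime, deadline, period) triple passed to the sched_setattr() system call, which Python does not wrap. A minimal ctypes sketch follows; note the assumptions: the syscall number 314 is x86_64-specific, the policy requires root (or CAP_SYS_NICE), and all times are in nanoseconds.

      import ctypes, os

      class SchedAttr(ctypes.Structure):
          # Field layout per sched_setattr(2).
          _fields_ = [('size', ctypes.c_uint32),
                      ('sched_policy', ctypes.c_uint32),
                      ('sched_flags', ctypes.c_uint64),
                      ('sched_nice', ctypes.c_int32),
                      ('sched_priority', ctypes.c_uint32),
                      ('sched_runtime', ctypes.c_uint64),
                      ('sched_deadline', ctypes.c_uint64),
                      ('sched_period', ctypes.c_uint64)]

      SCHED_DEADLINE = 6
      NR_SCHED_SETATTR = 314       # x86_64; other architectures differ

      libc = ctypes.CDLL(None, use_errno=True)
      attr = SchedAttr(size=ctypes.sizeof(SchedAttr),
                       sched_policy=SCHED_DEADLINE,
                       sched_runtime=10_000_000,   # 10 ms of CPU time
                       sched_deadline=30_000_000,  # due within 30 ms
                       sched_period=30_000_000)    # every 30 ms
      if libc.syscall(NR_SCHED_SETATTR, 0, ctypes.byref(attr), 0) != 0:
          err = ctypes.get_errno()
          raise OSError(err, os.strerror(err))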

  • Scheduler behavioral testing

    Validating scheduler behavior is a tricky affair, as multiple subsystems both compete and cooperate with each other to produce the task placement we observe. Valentin Schneider from Arm described the approach taken by his team (the folks behind energy-aware scheduling — EAS) to tackle this problem.

  • CFS wakeup path and Arm big.LITTLE/DynamIQ

    "One task per CPU" workloads, as emulated by multi-core Geekbench, can suffer on traditional two-cluster big.LITTLE systems due to the fact that tasks finish earlier on the big CPUs. Arm has introduced a more flexible DynamIQ architecture that can combine big and LITTLE CPUs into a single cluster; in this case, early products apply what's known as phantom scheduler domains (PDs). The concept of PDs is needed for DynamIQ so that the task scheduler can use the existing big.LITTLE extensions in the Completely Fair Scheduler (CFS) scheduler class. Multi-core Geekbench consists of several tests during which N CFS tasks perform an equal amount of work. The synchronization mechanism pthread_barrier_wait() (i.e. a futex) is used to wait for all tasks to finish their work in test T before starting the tasks again for test T+1. The problem for Geekbench on big.LITTLE is related to the grouping of big and LITTLE CPUs in separate scheduler (or CPU) groups of the so-called die-level scheduler domain. The two groups exists because the big CPUs share a last-level cache (LLC) and so do the LITTLE CPUs. This isn't true any more for DynamIQ, hence the use of the "phantom" notion here. The tasks of test T finish earlier on big CPUs and go to sleep at the barrier B. Load balancing then makes sure that the tasks on the LITTLE CPUs migrate to the big CPUs where they continue to run the rest of their work in T before they also go to sleep at B. At this moment, all the tasks in the wake queue have a big CPU as their previous CPU (p->prev_cpu). After the last task has entered pthread_barrier_wait() on a big CPU, all tasks on the wake queue are woken up.

  • I-MECH: realtime virtualization for industrial automation

    The typical systems used in industrial automation (e.g. for axis control) consist of a "black box" executing a commercial realtime operating system (RTOS) plus a set of control design tools meant to be run on a different desktop machine. This approach, besides imposing expensive royalties on the system integrator, often does not offer the desired degree of flexibility for testing/implementing novel solutions (e.g., running both control code and design tools on the same platform).

  • Virtual-machine scheduling and scheduling in virtual machines

    As is probably well known, a scheduler is the component of an operating system that decides which CPU the various tasks should run on and for how long they are allowed to do so. This happens when an OS runs on the bare hardware of a physical host and it is also the case when the OS runs inside a virtual machine. The only difference being that, in the latter case, the OS scheduler marshals tasks among virtual CPUs.

    And what are virtual CPUs? Well, in most platforms they are also a kind of special task and they want to run on some CPUs ... therefore we need a scheduler for that! This is usually called the "double-scheduling" property of systems employing virtualization because, well, there literally are two schedulers: one — let us call it the host scheduler, or the hypervisor scheduler — that schedules the virtual CPUs on the host physical CPUs; and another one — let us call it the guest scheduler — that schedules the guest OS's tasks on the guest's virtual CPUs.

    Now what are these two schedulers? That depends on the virtualization platform. They are always different, in the sense that it will never happen that, at runtime, a scheduler has to deal with scheduling virtual CPUs and also scheduling tasks that want to run on those same virtual CPUs (well, it can happen, but then you are not doing virtualization). They can be the same, in terms of code, or they can be completely different in that respect as well.

  • Rock and a hard place: How hard it is to be a CPU idle-time governor

    In the opening session of OSPM 2019, Rafael Wysocki from Intel gave a talk about potential problems faced by the designers of CPU idle-time-management governors, which was inspired by his own experience from the timer-events oriented (TEO) governor work done last year. In the first place, he said, it should be noted that "CPU idleness" is defined at the level of logical CPUs, which may be CPU cores or simultaneous multithreading (SMT) threads, depending on the hardware configuration of the processor. In Linux, a logical CPU is idle when there are no runnable tasks in its queue, so it falls back to executing the idle task associated with it (there is one idle task for each logical CPU in the system, but they all share the same code, which is the idle loop). Therefore "CPU idleness" is an OS (not hardware) concept and if the idle loop is entered by a CPU, there is an opportunity to save some energy with a relatively small impact on performance (or even without any impact on performance at all) — if the hardware supports that. The idle loop runs on each idle CPU and it only takes this particular CPU into consideration. As a rule, two code modules are invoked in every iteration of it. The first one, referred to as the CPU idle-time-management governor, is responsible for deciding whether or not to stop the scheduler tick and what to tell the hardware to do; the second one, called the CPU idle-time-management driver, passes the governor's decisions down to the hardware, usually in an architecture- or platform-specific way. Then, presumably, the processor enters a special state in which the CPU in question stops fetching instructions (that is, it does literally nothing at all); that may allow the processor's power draw to be reduced and some energy to be saved as a result. If that happens, the processor needs to be woken up from that state by a hardware event after spending some time, referred to as the idle duration, in it. At that point, the governor is called again so it can save the idle-duration value for future use.