Linux.com

News For Open Source Professionals

Exploring ARM64 runtime patching alternatives

Tuesday 1st of June 2021 10:00:00 PM

An overview of utilizing the Linux Alternatives Framework to perform runtime kernel patching.
Click to Read More at Oracle Linux Kernel Development

Learn About Magma, the Open Source Project Bringing High Speed Internet to Remote Areas, in This Free Course

Tuesday 1st of June 2021 09:00:11 PM

Magma is an open source project supporting diverse radio technologies, including LTE, 5G and WiFi, which can help extend network access into remote, sparsely populated areas. It helps connect the world to a faster network by providing operators an open, flexible, and extendable mobile core network solution. Its operational simplicity and lower cost structure also empower innovators to build fixed and mobile wireless networks never previously imagined.

Magma has already been deployed in production environments. Muralnet, for example, is using Magma to extend network access to Native American communities, while Brisanet has similarly deployed it into remote areas of Brazil. With high speed internet access having huge impacts on regions’ economic fortunes, Magma has the potential to be a game changer around the world.

However, as a relatively new technology, there are not enough individuals with expertise in Magma at present. That’s why Linux Foundation Training & Certification and the Magma Core Foundation have partnered to develop a free, online training course to help technology strategists and decision makers at telcos – as well as rural ISP operators and systems integrators – learn the fundamentals of Magma.

Introduction to Magma: Cloud Native Wireless Networking is designed to provide an understanding of the overall Magma architecture and how it fits into the bigger picture of cellular network architectures, particularly 4G/LTE and 5G. Participants will learn to recognize and understand the main functions of a mobile wireless network, understand the key use cases and value proposition of Magma, the overall architecture of Magma at a functional block level, and the functions performed by each of the main Magma components (Access Gateway, Federation Gateway, and Orchestrator). The course will also provide resources to learn to deploy Magma on standard hardware.

The course was developed by Bruce Davie and Larry Peterson. Davie is a computer scientist noted for his contributions to the field of networking who has served in senior roles at VMware, Software Defined Networking (SDN) startup Nicira, and Cisco. He has over 30 years of networking industry experience. Peterson is the Robert E. Kahn Professor of Computer Science, Emeritus at Princeton University, where he served as Chair from 2003-2009. His research focuses on the design, implementation, and operation of Internet-scale distributed systems, including the widely used PlanetLab and MeasurementLab platforms. 

With Magma adoption still in relatively early stages, now is the time for telco and networking professionals to begin learning about this exciting technology. Enroll today!

Build and Deploy Hyperledger Fabric on Azure Cloud Platform – Part 1

Wednesday 26th of May 2021 09:00:11 PM

By Matt Zand and Abhik Banerjee

Here is an outline of topics covered in this article series:

1. Azure Cloud for Blockchain Applications
2. Fabric Marketplace Template versus Manual Configurations
3. Deploy Orderer and Peer Organizations
4. Setting Up the Development Environment
5. Setting Up Configurations for Orderer and Peer
6. Setting Up Pods and Volume Mounts for Development
7. Create a Channel
8. Adding Peer to Network, Channel and Nodes
9. Deploying and Operating Chaincode

This series is divided into 3 parts: 

In the first part, we cover items 1, 2, and 3 of the outline. In the second part, we will cover items 4, 5, and 6, and in the last part we will cover the remaining items (7, 8, and 9).

Introduction

By finishing this article series, you will gain Hyperledger Fabric skills and be able to put them into practice by creating and deploying Fabric applications on the Azure cloud platform. As such, this article covers highly practical steps for those interested in moving their Fabric application from the pilot stage to production. We start off by reviewing the Azure cloud platform and its features, and follow with hands-on steps for building and deploying Fabric applications on Azure.

While Azure's managed blockchain solutions were previously limited to an enterprise Ethereum variant called Quorum, Hyperledger Fabric is now also supported. However, at the time of writing, Azure Blockchain Service only offers Quorum in General Availability; support for Hyperledger Fabric is part of the Blockchain-as-a-Service (BaaS) offerings on the Azure platform.

It should be noted that while this article is intended to be beginner friendly, it would be helpful to have prior familiarity with Azure, Kubernetes APIs, and Azure Kubernetes Service (AKS). Also, a good knowledge of Hyperledger Fabric and its components is required to follow the steps discussed in this article. If you have no experience with Fabric, The Linux Foundation’s free Introduction to Hyperledger Blockchain Technologies course is a good place to start. The topics and steps covered in this article are very useful for those interested in doing blockchain consulting and development.

1- Azure Cloud for Blockchain Applications

Before we dive head first into building a blockchain network on Azure, it would be beneficial to look at the three main blockchain options provided by Microsoft Azure. These are:

Azure Blockchain Service (ABS) 

Azure Blockchain Service is a managed service from Microsoft Azure that aims to help organizations get started quickly with their blockchain solutions. It manages the deployment of Validator and Transaction Nodes in the network and can be easily used from the Azure Portal itself. 

Azure Blockchain Workbench (ABW)

Azure Blockchain Workbench takes the whole “managed” paradigm to another level. It introduces managed identity elements in the network. The participants in the network can have their addresses managed with the Azure Active Directory (AD). ABW also offers easy integration with Azure Services like Cosmos Database for storing off-chain data for analytics. However, like ABS, this too only supports Quorum at this time. Azure Blockchain Workbench is in “public preview” and can be a great way to test out Proof of Concepts quickly.

Azure Resource Manager Templates (ARM Templates)

Azure Resource Manager (ARM) can be thought of as a creator and a shepherd for your Microsoft Azure resources. ARM Templates are basically JSON files that describe the infrastructure you want to deploy (say, a VM with 3 vCores, 100 GB of SSD storage, and CentOS). Using them is as simple as handing that JSON object to ARM, which takes in the template and then provisions the resources as you described them. In our case, Azure Marketplace is where one can find the ARM Template for deploying Hyperledger Fabric on Azure: it provides the option to deploy Hyperledger Fabric nodes on AKS straight from the Azure Portal. The deployed nodes exist in an AKS cluster and can be managed from the portal, the Azure Cloud Shell, or even your local PC through kubectl (the Kubernetes control CLI). This ARM Template option is the one we will be using in this article.
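As a rough sketch of what “handing a JSON object to ARM” looks like in practice, a template can also be deployed from the Azure CLI; the resource group and file names below are illustrative, not part of the Marketplace flow we follow later:

# Hand a hypothetical template and its parameters file to ARM
az deployment group create \
  --resource-group my-fabric-rg \
  --template-file azuredeploy.json \
  --parameters azuredeploy.parameters.json

ARM then provisions everything the template describes on your behalf. For the Marketplace template, though, we will simply deploy from the portal.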

In the following sections we briefly compare the marketplace template with AKS using manual configurations and then dive into a hands-on Hyperledger Fabric network deployment on Azure, while playing around with a custom chaincode on the deployed solution.

2- Fabric Marketplace Template versus Manual Configurations

Azure Marketplace is a great place to find pre-built solutions. In our case, we are looking for a particular solution named “Hyperledger Fabric on Azure Kubernetes Service”. Before we discuss using it, let’s explore why someone may prefer it over manually provisioning their own resources. 

Manual provisioning can be helpful when one cannot find the proper base on which to build their solution. However, it requires in-depth knowledge of the platform – and this is true not only for Azure but for any platform. If there is a need for manual configuration, the architect needs a thorough understanding of networking, security and compliance on Azure. By this we mean that the person who configures the network manually needs to know how to put up VMs and link them in a network without leaving resources like SSH keys or storage open to attack.

For this reason, it is wise to keep Murphy’s Law in mind – “if something can go wrong, it will.” Given that, wouldn’t it be better to let those aspects of the network be handled by Azure itself? This also helps the blockchain developer/operator focus on their use case: the chaincode, Access Control Lists, Fabric network architecture, and peer policies. It does not close off the path of managing nodes in the network manually either, as an outside node, regardless of the cloud platform it is running on, can be a participant in the Hyperledger Fabric network deployed on Azure.

3- Deploy Orderer and Peer Organizations

To begin, open your Azure Portal and then click on “Create a Resource” (the big + sign) which will take you to Azure Marketplace. From the list on the left find the option titled “Blockchain”. Once you click on it, you will see options like “Azure Blockchain Service”, “Azure Blockchain Workbench”, “Ethereum Studio” and more.  Click on “Hyperledger Fabric on Azure Kubernetes Service”.

At this point you might want to have a pre-created Azure Service Principal. Service Principals are needed for deploying Kubernetes (K8s) clusters on Azure. The logic behind this is – you cannot do anything in Azure without permissions! But instead of attaching permissions to your person, you have service principals. The permissions to create a K8s node in Azure, for example, gets attached to this service principal along with any other required permissions. Every time you want to deploy an AKS cluster you get to use this service principal. 

Hint

Service principals are complex and important, with many best practices to consider. Since this is outside the scope of this article, we will focus on Fabric on AKS.

Someone who uses Azure regularly might ask, “Doesn’t AKS automatically create a service principal for me?” Normally it does, but this is an exception. You can easily create a service principal if you don’t have one already by running the following command in the Azure CLI:

az ad sp create-for-rbac --skip-assignment --name {Name of the Service Principal}

Once you run this, the output will give you an App ID and a Password, among other things. These are the values you will need here.
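For reference, the command prints a small JSON object roughly like the following sketch (the values here are placeholders, and the exact fields can vary with the CLI version):

{
  "appId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
  "displayName": "my-hlf-sp",
  "password": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
  "tenant": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
}

The appId and password values are the App ID and Password referred to above.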

Now let’s deploy an orderer node using the portal. In the first screen you’ll see the following options:

Subscription 

Select the subscription you would like to use at this time. 

Resource Group 

An Azure Resource Group is a handy way to keep multiple resources in the same “basket”, so to speak. These can be categorized based on resource type (say, one RG for compute resources like VMs, another for storage) or by solution/use case. In our case, we will create two resource groups – one for the Orderer and another for Org01 resources. Click on “Create New” and enter your resource group name (here we go with RG-HLF-AzureOrderer).
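If you prefer the command line, an equivalent resource group can be created from the Azure CLI – a minimal sketch, assuming you are already logged in:

# Create the orderer resource group in the Central US region
az group create --name RG-HLF-AzureOrderer --location centralus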

Region 

The Azure Region you want to deploy your resources to. Here we go with the default Central US.

Resource Prefix 

This is a string of at most 6 characters that will be prepended to the names of the resources provisioned by/for this AKS cluster. We use the prefix “order” here to denote that the resources belong to the Orderer Service.

Click on “Next : Fabric settings >” to set your Hyperledger Fabric related information. Here you will be asked to fill in the following details:

Organization Name 

This will be the name of the organization trying to join the network. As you may know by now, a Hyperledger Fabric network is always brought up with an Orderer first. Following suit, we shall name it “OrdererOrg”.

Fabric network component: 

Next you select the type of service. This is a drop-down menu where you get to select from “Ordering Service” and “Peer Node”. We shall select it as “Ordering Service”.

Number of nodes 

If you selected “Ordering Service” above, you need to choose between having 3, 5, or 7 nodes. Since this is a demo, we are going with 3. (You may be wondering why these are the only choices: an odd number of ordering nodes preserves a clear majority quorum for the crash fault tolerant consensus protocol – a cluster of 2n+1 nodes can tolerate n failed orderers, so 3 nodes tolerate 1 failure, 5 tolerate 2, and 7 tolerate 3.)

Fabric CA username 

The Certificate Authority (CA) is one of the central parts of the network. Without a CA, you cannot enroll new members or sign transactions that would be recognized as coming from a valid identity. Here we select the username “ordca01”.

Fabric CA password 

Now you need to select a secure password to go with your CA username. Confirm this in the next option.

Certificates 

You get a choice between self-signed certificates, which are managed automatically, or uploading your own custom ones to act for the Fabric Certificate Authority. Here we choose the managed option – “Fabric CA self-signed certificates”.

After deciding on your HF network specific parameters for the Ordering Service, you need to select the settings for the AKS cluster to which it will be deployed. We leave most of the options on this screen at their defaults. The ones which might require attention are Node Size (the type of Azure VMs to use in your cluster), Node Count, the service principal fields (Service principal client ID and Service principal client secret), and Azure Monitor.

Keep in mind the following:

The Node type you select at this stage will directly impact the cost of sustaining your HF Network.
Do not change the Node Count here. It has to be selected in the previous screen and it plays an important part in the consensus.
If you already have a service principal, you may use it to deploy this cluster. Otherwise refer to the beginning of this section on how to create one. Put in the service principal’s app ID and password in the respective fields.
You may choose to enable cluster wide monitoring with Azure Monitor (in fact it’s recommended that you do). This will help you analyze metrics like transaction count and rate of memory growth.

Deploy your Orderer Service cluster on AKS. It should take about 10-15 minutes for the cluster to deploy. Meanwhile we can complete another important step – creating a cluster for our organization, which will be a part of this network. Most of the steps are the same as above. We will be starting again from the “Hyperledger Fabric on Azure Kubernetes Service” template in Azure Marketplace and then going over the same steps. The values which differ in this case are listed in the table below.

Field                            Value
Resource Group                   RG-HLF-AzureOrg (creating another new RG for the peer)
Resource prefix                  org01 (for identifying all resources belonging to peer Org01)
Organization Name                Org01
Fabric network component         Peer nodes
Number of nodes                  1 (can vary depending on your budget)
Peer node world state database   CouchDB
Fabric CA username               orgca01

For the peer node world state database, we get an option to select between CouchDB and LevelDB, the two databases natively offered by Hyperledger Fabric at this time. We go with CouchDB. Feel free to read up on the difference between the two if you are interested.

Now we have our clusters for the Ordering Service and a sample organization, Org01, deploying, and they will be up and running in a few moments. We have covered some ground, so let’s reflect on our future course of action: we have clusters for the Orderer and the Organization, and now we need to create a network out of the two. We also need to create a channel where OrdererOrg will function as the ordering service, have Org01 join it first, and then record the nodes owned by Org01 (basically the nodes in that cluster). While doing all this, we also need to set up a development environment that provides common ground for developing from within the respective clusters as well as from outside them. We will do all of this in the next articles in the series.
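While the deployments finish, it is worth sketching how we will reach the clusters. A minimal sketch, assuming the Azure CLI and kubectl are installed; the AKS cluster name is a placeholder you can copy from the portal once deployment completes:

# Merge the orderer cluster's credentials into your local kubeconfig
az aks get-credentials --resource-group RG-HLF-AzureOrderer --name <orderer-aks-cluster-name>

# List the pods that make up the deployed Fabric components
kubectl get pods --all-namespaces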

Summary

We finished the first part of our article series, where we discussed the Azure cloud platform’s offering for blockchain development as well as the differences between the Azure Marketplace template and manual configuration. We also took our first step toward building Hyperledger Fabric blockchain applications on Azure by deploying the Orderer and peer Organization.

In the next article in this series, we will cover how to set up the development environment, configure the Orderer and peer, and set up pods and volume mounts for our Fabric application.

Resources

Free Training Courses from The Linux Foundation & Hyperledger

Blockchain: Understanding Its Uses and Implications (LFS170)
Introduction to Hyperledger Blockchain Technologies (LFS171)
Introduction to Hyperledger Sovereign Identity Blockchain Solutions: Indy, Aries & Ursa (LFS172)
Becoming a Hyperledger Aries Developer (LFS173)
Hyperledger Sawtooth for Application Developers (LFS174)

eLearning Courses from The Linux Foundation & Hyperledger

Hyperledger Fabric Administration (LFS272)
Hyperledger Fabric for Developers (LFD272)

Certification Exams from The Linux Foundation & Hyperledger

Certified Hyperledger Fabric Administrator (CHFA)
Certified Hyperledger Fabric Developer (CHFD)

Review of Five popular Hyperledger DLTs- Fabric, Besu, Sawtooth, Iroha and Indy
Review of three Hyperledger Tools- Caliper, Cello and Avalon
Review of Four Hyperledger Libraries- Aries, Quilt, Ursa, and Transact

Hands-On Smart Contract Development with Hyperledger Fabric V2 Book by Matt Zand and others.
Essential Hyperledger Sawtooth Features for Enterprise Blockchain Developers
Blockchain Developer Guide- How to Install Hyperledger Fabric on AWS
Blockchain Developer Guide- How to Install and work with Hyperledger Sawtooth
Intro to Blockchain Cybersecurity (Coding Bootcamps)
Intro to Hyperledger Sawtooth for System Admins (Coding Bootcamps)
Blockchain Developer Guide- How to Install Hyperledger Iroha on AWS
Blockchain Developer Guide- How to Install Hyperledger Indy and Indy CLI on AWS
Blockchain Developer Guide- How to Configure Hyperledger Sawtooth Validator and REST API on AWS
Intro blockchain development with Hyperledger Fabric (Coding Bootcamps)
How to build DApps with Hyperledger Fabric
Blockchain Developer Guide- How to Build Transaction Processor as a Service and Python Egg for Hyperledger Sawtooth
Blockchain Developer Guide- How to Create Cryptocurrency Using Hyperledger Iroha CLI
Blockchain Developer Guide- How to Explore Hyperledger Indy Command Line Interface
Blockchain Developer Guide- Comprehensive Blockchain Hyperledger Developer Guide from Beginner to Advance Level
Blockchain Management in Hyperledger for System Admins
Hyperledger Fabric for Developers (Coding Bootcamps)
Free White Papers from Hyperledger
Free Webinars from Hyperledger
Hyperledger Wiki

About the Authors

Matt Zand is a serial entrepreneur and the founder of four tech startups: DC Web Makers, Hash Flow, Coding Bootcamps and High School Technology Services. He is the lead author of the Hands-On Smart Contract Development with Hyperledger Fabric book from O’Reilly Media. He has written more than 100 technical articles and tutorials on blockchain development for the Hyperledger, Ethereum and Corda R3 platforms at sites such as IBM, SAP, Alibaba Cloud, Hyperledger, The Linux Foundation, and more. At Hash Flow, he leads a team of blockchain experts for consulting and deploying enterprise decentralized applications. As chief architect, he has designed and developed blockchain courses and training programs for Coding Bootcamps. He has a master’s degree in business management from the University of Maryland. Prior to blockchain development and consulting, he worked as a senior web and mobile app developer and consultant, and as an investor and business advisor for a few startup companies. You can connect with him on LinkedIn: https://www.linkedin.com/in/matt-zand-64047871

Abhik Banerjee is a researcher, an avid reader and an anime fan. In his free time you can find him reading whitepapers and building hobby projects ranging from DLT to cloud infrastructure. He has multiple publications in international conferences and book titles, along with a couple of patents in blockchain. His interests include blockchain, quantum information processing and bioinformatics. You can connect with him on LinkedIn: https://in.linkedin.com/in/abhik-banerjee-591081164

New container feature: Volatile overlay mounts

Wednesday 26th of May 2021 09:33:53 AM

With containers, we don’t always care about data being retained after a crash. See how volatile overlay mounts can help increase performance in these situations.
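For context, this container feature builds on the overlayfs “volatile” mount option introduced in Linux 5.10, which skips syncing the writable upper layer to disk. A minimal sketch at the plain mount level, with illustrative paths:

# Overlay mount whose upper (writable) layer is never synced; its
# contents must be discarded after a crash, which is acceptable for
# disposable container storage
mount -t overlay overlay -o lowerdir=/lower,upperdir=/upper,workdir=/work,volatile /merged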
Read More at Enable Sysadmin

The Linux Foundation joins Accenture, GitHub, Microsoft, and ThoughtWorks to Launch the Green Software Foundation to put sustainability at the core of software engineering

Wednesday 26th of May 2021 03:44:33 AM

As we think about the future of the software industry, we believe we have a responsibility to help build a better future – a more sustainable future – both internally at our organizations and in partnership with industry leaders around the globe. With data centers around the world accounting for 1% of global electricity demand, and projected to consume 3-8% in the next decade, it’s imperative we address this as an industry.

To help in that endeavor, we’re excited to announce the formation of The Green Software Foundation – a nonprofit founded by Accenture, GitHub, Microsoft, and ThoughtWorks established with the Linux Foundation and the Joint Development Foundation Projects LLC to build a trusted ecosystem of people, standards, tooling, and leading practices for building green software.

Read more at The Microsoft Blog

SPDX: It’s Already in Use for Global Software Bill of Materials (SBOM) and Supply Chain Security

Wednesday 26th of May 2021 03:28:03 AM

Author: Kate Stewart, VP of Dependable Systems, The Linux Foundation

In a previous Linux Foundation blog, David A. Wheeler, director of LF Supply Chain Security, discussed how capabilities built by Linux Foundation communities can be used to address the software supply chain security requirements set by the US Executive Order on Cybersecurity. 

One of those capabilities, SPDX, completely addresses the Executive Order’s 4(e), 4(f), and 10(j) requirements for a Software Bill of Materials (SBOM). The SPDX specification is implemented as a file format that identifies the software components within a larger piece of computer software, along with metadata such as the licenses of those components.

SPDX is an open standard for communicating software bill of material (SBOM) information, including components, licenses, copyrights, and security references. It has a rich ecosystem of existing tools that provides a common format for companies and communities to share important data to streamline and improve the identification and monitoring of software.
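For a concrete sense of the format, here is a minimal, illustrative SPDX document in the human-readable tag-value style (the package details are hypothetical, and a real SBOM would carry more fields):

SPDXVersion: SPDX-2.2
DataLicense: CC0-1.0
SPDXID: SPDXRef-DOCUMENT
DocumentName: example-product-sbom
DocumentNamespace: https://example.com/spdx/example-product-1.0
Creator: Tool: example-sbom-generator
Created: 2021-05-26T00:00:00Z

PackageName: openssl
SPDXID: SPDXRef-Package-openssl
PackageVersion: 1.1.1k
PackageDownloadLocation: https://www.openssl.org/source/openssl-1.1.1k.tar.gz
PackageLicenseConcluded: OpenSSL
PackageLicenseDeclared: OpenSSL
PackageCopyrightText: NOASSERTION

The same information can also be exchanged in other serializations supported by SPDX tooling, such as RDF and JSON.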

SBOMs have numerous use cases. They have frequently been used in areas such as license compliance but are equally useful in security, export control, and broader processes such as mergers and acquisitions (M&A) or venture capital investments. SPDX maintains an active community to support various uses, modeling its governance and activity on the same format that has successfully supported open source software projects over the past three decades.

The LF has been developing and refining SPDX for over ten years and has seen extensive uptake by companies and projects in the software industry.  Notable recent examples are the contributions by companies such as Hitachi, Fujitsu, and Toshiba in furthering the standard via optional profiles like “SPDX Lite” in the SPDX 2.2 specification release and in support of the SPDX SBOMs in proprietary and open source automation solutions. 

This de facto standard has been submitted to ISO via the Joint Development Foundation using the PAS Transposition process of Joint Technical Committee 1 (JTC1). It is currently in the enquiry phase of the process and can be reviewed on the ISO website as ISO/IEC DIS 5962.

There is a wide range of open source tooling, as well as emerging commercial tool options, available today. Companies such as FOSSID and Synopsys have been working with the SPDX format for several years. Open source tools like FOSSology (source code analysis), OSS Review Toolkit (generation from CI and build infrastructure), Tern (container content analysis), Quartermaster (build extensions), and ScanCode (source code analysis), in addition to the SPDX-tools project, have standardized on SPDX for interchange and also participate in the Automated Compliance Tooling (ACT) project umbrella. ACT has been discussed as a community-driven solution for software supply chain security remediation as part of our synopsis of the findings in the Vulnerabilities in the Core study, which was published by the Linux Foundation and Harvard University LISH in February of 2020.

One thing is clear: A software bill of materials that can be shared without friction between different teams and companies will be a core part of software development and deployment in this coming decade. The sharing of software metadata will take different forms, including manual and automated reviews, but the core structures will remain the same. 

Standardization in this field, as in others, is the key to success. This domain has an advantage in that we are benefiting from an entire decade of prior work in SPDX. Therefore the process becomes the implementation of this standard to the various domains rather than the creation, expansion, or additional refinement of new or budding approaches to the matter.

Start using the SPDX specification here: https://spdx.github.io/spdx-spec/. Development of the next revision is underway, so if there’s a use case you can’t represent with the current specification, open an issue; this is the right window for input.

To learn more about the many facets of the SPDX project see: https://spdx.dev/

Oracle Ampere A1 Compute tuning for advanced users

Tuesday 25th of May 2021 10:45:00 PM

Advanced tuning techniques for Oracle Ampere A1 instances
Click to Read More at Oracle Linux Kernel Development

Free Course Explores WebAssembly Modules from the Cloud to the Edge

Wednesday 19th of May 2021 09:00:09 PM

With our world being increasingly driven by apps and the microservices that support them, adoption of WebAssembly (Wasm) continues to accelerate. WebAssembly is a stack-based virtual machine that can greatly improve the performance and capabilities of websites and, despite the name, nearly any other kind of non-web platform you can imagine.

Besides making browsers much more powerful, this technology may extend beyond the scope of mere websites. It isn’t just for browsers; Wasm is currently being used in cloud, mobile, low-level networking, and edge-based environments.

This is why The Linux Foundation is today releasing a new, free, online training course, WebAssembly Actors: From Cloud to Edge (LFD134x). The course explores the portability, efficiency, and security of WebAssembly modules and how to leverage a number of open source frameworks to create distributed and seamlessly connected actors that can be deployed in a browser, on a laptop, in the cloud, on a Raspberry Pi, or practically anywhere.

This course is designed for developers who have built or are building microservices and have experienced a high degree of friction in cloud native application development. Developers looking to embrace the simplicity of Functions as a Service (FaaS) without the overhead of cloud providers or sacrificing the ability to experiment and test locally and in any other environment will gain significant value from this course.

Kevin Hoffman, the author of “Programming WebAssembly with Rust”, “Cloud Native Go”, and over a dozen books on various aspects of the .NET Framework, created this course. He has presented at a number of conferences and events over the past 2 years on WebAssembly, and at dozens of previous conferences on everything from .NET to Spring Boot to Redis and even at Apple’s WWDC.

The course is free to audit on edX.org for seven weeks, or a verified certificate of completion is available for a fee, which includes a full year of course access. Enroll today and start improving your cloud native application development with Wasm!

Please Participate In Hyperledger’s 2021 Blockchain Brand Survey

Wednesday 19th of May 2021 05:20:59 PM

Together with Linux Foundation Research, Hyperledger is conducting a survey to measure the market awareness and perceptions of Hyperledger and its projects relative to other blockchain platforms used in the technology industry, specifically identifying myths and misperceptions. Additionally, the survey seeks to help Hyperledger articulate the perceived time to production readiness for products and understand motivations for developers that both use and contribute to Hyperledger technologies.

  • Participants who complete the survey will receive a 50 percent discount on attendance to Hyperledger Global Forum, June 8-10, 2021
  • Please participate now; we intend to close the survey in early June. 
  • Privacy and confidentiality are important to us. Neither participant names, nor their company names, will be displayed in the final results. 
  • This survey should take no more than 20 minutes of your time.

Click here to access the Brand Survey

Enroll in Instructor-Led Training and You’ll Now Receive a Free Gift

Tuesday 18th of May 2021 09:28:08 PM

We’ve heard from enrollees in our instructor-led training that they miss the Chromebook we previously provided with these course enrollments. So, effective immediately, we are offering you the chance to select a free gift when you enroll in one of our instructor-led training courses!

For those in the United States

Individuals in the USA who enroll in instructor-led training will be able to select from a variety of Linux-powered gifts from Best Buy. The options will change with time, but you should expect to be able to choose from things like:

Chromebooks
Android tablets
Fitness trackers
Smart watches
Smart speakers
And more!

After enrolling in your course, you will receive an email with a link where you’ll see all your options and can select the one that suits you best. 

For those outside the United States

Those outside the USA will receive a $300 refund on their course fees, applied to the payment card used to purchase the training course. After much research, we determined that allowing our customers to use these funds to purchase something in their local market was more practical than trying to ship from the US and dealing with long lead times, export restrictions and customs charges.

Click here to learn more about this great new benefit!

How LF communities enable security measures required by the US Executive Order on Cybersecurity

Friday 14th of May 2021 07:30:00 AM

Our communities take security seriously and have been instrumental in creating the tools and standards that every organization needs to comply with the recent US Executive Order

Overview

The US White House recently released its Executive Order (EO) on Improving the Nation’s Cybersecurity (along with a press call) to counter “persistent and increasingly sophisticated malicious cyber campaigns that threaten the public sector, the private sector, and ultimately the American people’s security and privacy.”

In this post, we’ll show what the Linux Foundation’s communities have already built that support this EO and note some other ways to assist in the future. But first, let’s put things in context.

The Linux Foundation’s Open Source Security Initiatives In Context

We deeply care about security, including supply chain (SC) security. The Linux Foundation is home to some of the most important and widely-used OSS, including the Linux kernel and Kubernetes. The LF’s previous Core Infrastructure Initiative (CII) and its current Open Source Security Foundation (OpenSSF) have been working to secure OSS, both in general and in widely-used components. The OpenSSF, in particular, is a broad industry coalition “collaborating to secure the open source ecosystem.”

The Software Package Data Exchange (SPDX) project has been working for the last ten years to enable software transparency and the exchange of software bill of materials (SBOM) data necessary for security analysis. SPDX is in the final stages of review to be an ISO standard, is supported by global companies with massive supply chains, and has a large open and closed source tooling support ecosystem. SPDX already meets the requirements of the executive order for SBOMs.

Finally, several LF foundations have focused on the security of various verticals. For example, LF Public Health and LF Energy have worked on security in their respective sectors. Our cloud computing community, collaborating within CNCF, has also produced a guide for supporting software supply chain best practices for cloud systems and applications.

Given that context, let’s look at some of the EO statements (in the order they are written) and how our communities have invested years in open collaboration to address these challenges.

Best Practices

EO 4(b) and 4(c) say that

The “Secretary of Commerce [acting through NIST] shall solicit input from the Federal Government, private sector, academia, and other appropriate actors to identify existing or develop new standards, tools, and best practices for complying with the standards, procedures, or criteria [including] criteria that can be used to evaluate software security, include criteria to evaluate the security practices of the developers and suppliers themselves, and identify innovative tools or methods to demonstrate conformance with secure practices [and guidelines] for enhancing software supply chain security.” Later in EO 4(e)(ix) it discusses “attesting to conformity with secure software development practices.”

The OpenSSF’s CII Best Practices badge project specifically identifies best practices for OSS, focusing on security and including criteria to evaluate the security practices of developers and suppliers (it has over 3,800 participating projects). LF is also working with SLSA (currently in development) as potential additional guidance focused on addressing supply chain issues further.

Best practices are only useful if developers understand them, yet most software developers have never received education or training in developing secure software. The LF has developed and released its Secure Software Development Fundamentals set of courses available on edX to anyone at no cost. The OpenSSF Best Practices Working Group (WG) actively works to identify and promulgate best practices. We also provide a number of specific standards, tools, and best practices, as discussed below.

Encryption and Data Confidentiality

The EO 3(d) requires agencies to adopt “encryption for data at rest and in transit.” Encryption in transit is implemented on the web using the TLS (“https://”) protocol, and Let’s Encrypt is the world’s largest certificate authority for TLS certificates.

In addition, the LF Confidential Computing Consortium is dedicated to defining and accelerating the adoption of confidential computing. Confidential computing protects data in use (not just at rest and in transit) by performing computation in a hardware-based Trusted Execution Environment. These secure and isolated environments prevent unauthorized access or modification of applications and data while in use.

Supply Chain Integrity

The EO 4(e)(iii) states a requirement for

 “employing automated tools, or comparable processes, to maintain trusted source code supply chains, thereby ensuring the integrity of the code.” 

The LF has many projects that support SC integrity, in particular:

in-toto is a framework specifically designed to secure the integrity of software supply chains.

The Update Framework (TUF) helps developers maintain the security of software update systems, and is used in production by various tech companies and open source organizations.

Uptane is a variant of TUF; it’s an open and secure software update system design which protects software delivered over-the-air to the computerized units of automobiles.

sigstore is a project to provide a public good / non-profit service to improve the open source software supply chain by easing the adoption of cryptographic software signing (of artifacts such as release files and container images) backed by transparency log technologies (which provide a tamper-resistant public log).

We are also funding focused work on tools that ease signing and verifying origins; for example, we’re working to extend git to enable pluggable support for signatures, and the patatt tool provides an easy way to add end-to-end cryptographic attestation to patches sent via email.

OpenChain (ISO 5230) is the International Standard for open source license compliance. Application of OpenChain requires identification of OSS components. While OpenChain by itself focuses more on licenses, that identification is easily reused to analyze other aspects of those components once they’re identified (for example, to look for known vulnerabilities).

Software Bill of Materials (SBOMs) support supply chain integrity; our SBOM work is so extensive that we’ll discuss that separately.

Software Bill of Materials (SBOMs)

Many cyber risks come from using components with known vulnerabilities. Known vulnerabilities are especially concerning in key infrastructure industries, such as national fuel pipelines, telecommunications networks, utilities, and energy grids. The exploitation of those vulnerabilities could lead to interruption of supply lines and service, and in some cases, loss of life due to a cyberattack.

One-time reviews don’t help since these vulnerabilities are typically found after the component has been developed and incorporated. Instead, what is needed is visibility into the components of the software environments that run these key infrastructure systems, similar to how food ingredients are made visible.

A Software Bill of Materials (SBOM) is a nested inventory or a list of ingredients that make up the software components used in creating a device or system. This is especially critical as it relates to a national digital infrastructure used within government agencies and in key industries that present national security risks if penetrated. Use of SBOMs would improve understanding of the operational and cyber risks of those software components from their originating supply chain.

The EO has extensive text about requiring a software bill of materials (SBOM) and tasks that depend on SBOMs:

EO 4(e) requires providing a purchaser an SBOM “for each product directly or by publishing it on a public website” and “ensuring and attesting… the integrity and provenance of open source software used within any portion of a product.” It also requires tasks that typically require SBOMs, e.g., “employing automated tools, or comparable processes, that check for known and potential vulnerabilities and remediate them, which shall operate regularly….” and “maintaining accurate and up-to-date data, provenance (i.e., origin) of software code or components, and controls on internal and third-party software components, tools, and services present in software development processes, and performing audits and enforcement of these controls on a recurring basis.” EO 4(f) requires publishing “minimum elements for an SBOM,” and EO 10(j) formally defines an SBOM as a “formal record containing the details and supply chain relationships of various components used in building software…  The SBOM enumerates [assembled] components in a product… analogous to a list of ingredients on food packaging.”

The LF has been developing and refining SPDX for over ten years; SPDX is used worldwide and is in the process of being approved as ISO/IEC Draft International Standard (DIS) 5962. SPDX is a file format that identifies the software components within a larger piece of computer software and metadata such as the licenses of those components. SPDX 2.2 already supports the current guidance from the National Telecommunications and Information Administration (NTIA) for minimum SBOM elements. Some ecosystems have ecosystem-specific conventions for SBOM information, but SPDX can provide information across all arbitrary ecosystems.

SPDX is real and in use today, with increased adoption expected in the future. For example:

An NTIA “plugfest” demonstrated ten different producers generating SPDX.
SPDX supports acquiring data from different sources (e.g., source code analysis, executables from producers, and analysis from third parties).
A corpus of some LF projects with SPDX source SBOMs is available.
Various LF projects are working to generate binary SBOMs as part of their builds, including yocto and Zephyr.
To assist with further SPDX adoption, the LF is paying to write SPDX plugins for major package managers.

Vulnerability Disclosure

No matter what, some vulnerabilities will be found later and need to be fixed. EO 4(e)(viii) requires “participating in a vulnerability disclosure program that includes a reporting and disclosure process.” That way, vulnerabilities that are found can be reported to the organizations that can fix them.

The CII Best Practices badge passing criteria requires that OSS projects specifically identify how to report vulnerabilities to them. More broadly, the OpenSSF Vulnerability Disclosures Working Group is working to help “mature and advocate well-managed vulnerability reporting and communication” for OSS. Most widely-used Linux distributions have a robust security response team, but the Alpine Linux distribution (widely used in container-based systems) did not. The Linux Foundation and Google funded various improvements to Alpine Linux, including a security response team.

We hope that the US will update its Vulnerabilities Equities Process (VEP) to work more cooperatively with commercial organizations, including OSS projects, to share more vulnerability information. Every vulnerability that the US fails to disclose is a vulnerability that can be found and exploited by attackers. We would welcome such discussions.

Critical Software

It’s especially important to focus on critical software — but what is critical software? EO 4(g) requires the executive branch to define “critical software,” and 4(h) requires the executive branch to “identify and make available to agencies a list of categories of software and software products… meeting the definition of critical software.”

Linux Foundation and the Laboratory for Innovation Science at Harvard (LISH) developed the report ‘Vulnerabilities in the Core,’ a Preliminary Report and Census II of Open Source Software, which analyzed the use of OSS to help identify critical software. The LF and LISH are in the process of updating that report. The CII identified many important projects and assisted them, including OpenSSL (after Heartbleed), OpenSSH, GnuPG, Frama-C, and the OWASP Zed Attack Proxy (ZAP). The OpenSSF Securing Critical Projects Working Group has been working to better identify critical OSS projects and to focus resources on critical OSS projects that need help. There is already a first-cut list of such projects, along with efforts to fund such aid.

Internet of Things (IoT)

Unfortunately, internet-of-things (IoT) devices often have notoriously bad security. It’s often been said that “the S in IoT stands for security.”

EO 4(s) initiates a pilot program to “educate the public on the security capabilities of Internet-of-Things (IoT) devices and software development practices [based on existing consumer product labeling programs], and shall consider ways to incentivize manufacturers and developers to participate in these programs.” EO 4(t) states that such “IoT cybersecurity criteria” shall “reflect increasingly comprehensive levels of testing and assessment.”

The Linux Foundation develops and is home to many of the key components of IoT systems. These include:

The Linux kernel, used by many IoT devices.
The yocto project, which creates custom Linux-based systems for IoT and embedded systems and supports fully reproducible builds.
EdgeX Foundry, a flexible OSS framework that facilitates interoperability between devices and applications at the IoT edge, which has been downloaded millions of times.
The Zephyr project, which provides a real-time operating system (RTOS) used by many resource-constrained IoT devices and is able to generate SBOMs automatically during builds. Zephyr is one of the few open source projects that is a CVE Numbering Authority.
The seL4 microkernel, the most assured operating system kernel in the world, notable for its comprehensive formal verification.

Security Labeling

EO 4(u) focuses on identifying:

“secure software development practices or criteria for a consumer software labeling program [that reflects] a baseline level of secure practices, and if practicable, shall reflect increasingly comprehensive levels of testing and assessment that a product may have undergone [and] identify, modify, or develop a recommended label or, if practicable, a tiered software security rating system.”

The OpenSSF’s CII Best Practices badge project (noted earlier) specifically identifies best practices for OSS development, and is already tiered (passing, silver, and gold). Over 3,800 projects currently participate.

There are also a number of projects that relate to measuring security and/or broader quality:

Community Health Analytics Open Source Software (CHAOSS) focuses on creating analytics and metrics to help define community health and identify risk.
The OpenSSF Security Metrics Project, which is in the process of development, was created to collect, aggregate, analyze, and communicate relevant security data about open source projects.
The OpenSSF Security Reviews initiative provides a collection of security reviews of open source software.
The OpenSSF Security Scorecards provide a set of automated pass/fail checks for a quick review of arbitrary OSS.

Conclusion

The Linux Foundation (LF) has long been working to help improve the security of open source software (OSS), which powers systems worldwide. We couldn’t do this without the many contributions of time, money, and other resources from numerous companies and individuals; we gratefully thank them all.  We are always delighted to work with anyone to improve the development and deployment of open source software, which is important to us.

David A. Wheeler, Director of Open Source Supply Chain Security at the Linux Foundation

How WASI Makes Containerization More Efficient

Thursday 13th of May 2021 09:00:22 PM

By Marco Fioretti

WebAssembly, or Wasm for brevity, is a standardized binary format that allows software written in any language to run without customizations on any platform, inside sandboxes or runtimes – that is, virtual machines – at near native speed. Since those runtimes are isolated from their host environment, a WebAssembly System Interface (WASI) gives developers – who adopt Wasm exactly to be free to write software once, without worrying about where it will run – a single, standard way to call the low-level functions that are present on any platform.

The previous article in this series describes the goals, design principles and architecture of WASI. This time, we present real-world, usable projects and services based on WASI that also clarify its role in the big picture: to facilitate the containerization of virtually any application, much more efficiently than bulkier containers like Docker can.

Coding with WASI is only half the job

Programmers can already write and compile code, for example in C or Rust, to create .wasm modules usable in any WASI-compliant environment. The problem is, do we already have runtimes that can actually execute those modules “outside web browsers”? The answer is yes, and more than one. One general-purpose solution is Wasmtime, from the Bytecode Alliance. This project develops a WASI-compliant runtime for Wasm modules that may be used standalone, as a command line tool, or be embedded into other applications, as a library: at the moment, besides plain Bash, Wasmtime is usable from Rust, C, Python, .NET and Go.
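As a minimal sketch of that workflow, assuming the Rust toolchain and the Wasmtime CLI are installed (the file name is illustrative):

# Add the WASI compilation target to the Rust toolchain
rustup target add wasm32-wasi

# Compile an ordinary Rust program into a WASI-compliant module
rustc --target wasm32-wasi hello.rs -o hello.wasm

# Execute the module in the standalone Wasmtime runtime
wasmtime hello.wasm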

Other WASI runtimes are more or less optimized for particular use cases, or programming communities. The following examples give an idea of what is possible, without pretense at completeness.

WASI on servers, or REPLACING some servers

Wasmer is an open source Wasm runtime written in Rust, whose 1.0 version was released in January 2021. Wasmer is specifically designed to run – on generic servers – .wasm modules that use WASI methods to interact with native functions of the host operating system.

Besides a standalone runtime that may run Wasm binaries on any platform and chipset, Wasmer is designed, like Wasmtime, to allow the use of Wasm modules from many other languages, starting from C/C++, Rust, Python, Go, PHP and Ruby.

To prove its capabilities, the developers of Wasmer have compiled as a .wasm module – and then actually run – an unmodified version of the NGINX web server, obviously using WASI calls to interact with the host system.

Wasmer is also the first Wasm runtime to fully support both WASI and high performance programming with the Single Instruction, Multiple Data technique (SIMD): in 2019, the two technologies were used together, with very interesting results, to emulate particle physics. Wasmer developers also participate in work to run Wasm modules on the Linux kernel to execute securely, via WASI, tasks that would otherwise need more checks and more context switching – that is, performance hits.

Artificial Intelligence, faster than Docker and simpler than Node.js

Second State has developed another virtual machine to run server-side applications “safer and 10x faster than Docker”, called SSVM. What is particularly interesting in the SSVM runtime is why and how it added and optimized support for WebAssembly and WASI: direct access to hardware to provide Artificial Intelligence and machine learning “as a service in Node.js, written in Rust, over the Web”. Typical applications, running up to 25 times faster than equivalent Python code, include recognition of images and other patterns.

The SSVM toolchain can be used also to create Wasm modules for Deno. This is a Rust runtime for JavaScript and TypeScript created to address the “10 things the creator of Node.js regrets about it”, and supports WASI for Wasm modules that need to access system resources.

WASI gaming and more, right at the cloud edge

Fastly, an edge cloud platform provider, has developed and then released as Open Source its own WebAssembly compiler and runtime, called Lucet. Fastly created this tool specifically to support faster and safer execution of the code that its customers write in any language, for the several use cases of the Fastly platform. To show the capabilities of Wasm and WASI in edge computing, a Fastly engineer recently announced that he has ported the Doom first-person shooter game to run on Fastly’s edge cloud.

WebAssembly and containers? What’s the difference?

Using WASI and the already mentioned Wasmtime, it is possible both to run Wasm modules from .NET Core applications and to generate modules in the same format from .NET’s Roslyn compiler. Even more interesting are Microsoft’s Krustlets, that is, “Kubernetes Rust kubelets”. These are a way to orchestrate and run WebAssembly “workloads” alongside standard containers, with Kubernetes. In other words, Wasm and WASI can already enable the orchestration, with standard systems like Kubernetes, of thousands of generic applications, each isolated at least as much as with traditional containers – and side by side with them if needed – but with much smaller overhead.

A WASI-driven Internet of Things

The possibility to execute the same binary format on extremely efficient virtual machines that run on many different platforms means even more than it may seem at first sight, because:

“a WASI-enabled JavaScript runtime and simple firmware may keep a device’s software in sync with a cloud-hosted or locally hosted repository”.

In case you haven’t noticed, procedures like that may make automatic testing and deployment of new firmware or software for IoT, or any remote device, really, much easier and reliable than they are today. If a remote device can run WebAssembly bytecode, any developer may reliably write and test new software for it, simply using “basic simulators with digital twins” of that device, as discussed here. Isn’t WASI… interesting?

Hyperledger Announces 2021 Brand Study

Wednesday 12th of May 2021 08:48:48 PM

The debate is no longer about deploying blockchain technology, but rather about building production networks that will scale and interoperate. In 2020, the focus shifted from proving the value of blockchain to scaling, governance, and managing blockchain networks. COVID-19 has given the digitization of trust-based processes a new urgency, driving more profound interest in identity, interoperability, and supply chain use cases. 

Together with Linux Foundation Research, Hyperledger is conducting a survey to measure the market awareness and perceptions of Hyperledger and its projects, specifically identifying myths and misperceptions. Additionally, the survey seeks to help Hyperledger articulate the perceived time to production readiness for products and understand motivations for developers that both use and contribute to Hyperledger technologies.

Hyperledger is an open source collaborative effort created to advance cross-industry blockchain technologies. It is a global collaboration including participation from leaders in finance, banking, healthcare, supply chains, manufacturing, and technology. 

Please participate now; we intend to close the survey in early June. 

Privacy and confidentiality are important to us. Neither participant names nor their company names will be displayed in the final results.

This survey should take no more than 20 minutes of your time.

To take the 2021 Hyperledger Market Survey, click the button below:

Take Survey (EN) Take Survey (調査) Take Survey (民意调查)

Thanks to our survey partner Linux Foundation Japan.

SURVEY GOALS

Thank you for taking the time to participate in this survey conducted by Hyperledger, an open source project at the Linux Foundation focused on developing a suite of stable frameworks, tools, and libraries for enterprise-grade blockchain deployments.

Hyperledger and its affiliated projects are hosted by the Linux Foundation.

This survey will provide insights into the challenges, familiarity, and misconceptions about Hyperledger and its suite of technologies. We hope these insights will help guide us in the growth and expansion of marketing and recruitment efforts to help grow projects and our community.

This survey will provide insights into:

What is the awareness, familiarity, and understanding of Hyperledger, overall and by project?
What are the myths and misperceptions about Hyperledger (e.g., around what it seeks to achieve, the number of projects, who is involved, and who the competitors are)?
How likely are respondents to purchase or adopt blockchain technology?
What is the appeal of joining the Hyperledger community?
What are the perceptions of business blockchain technology?
What is the perceived time to production readiness?
What are developers’ motivations for contributing to or using Hyperledger?

PRIVACY

Your name and company name will not be displayed. Reviews are attributed to your role, company size, and industry. Responses will be subject to the Linux Foundation’s Privacy Policy, available at https://linuxfoundation.org/privacy. Please note that members of the Hyperledger survey committee who are not LF employees will review the survey results. 

VISIBILITY

We will summarize the survey data and share the findings during the Hyperledger Member Summit later in the year. The summary report will be published on the Hyperledger and Linux Foundation websites. In addition, we will be producing an in-depth report of the survey which will be shared with Hyperledger membership.

QUESTIONS

If you have questions regarding this survey, please email us at survey@hyperledger.org.

Sign up for the Hyperledger Newsletter at https://hyperledger.org 

The post Hyperledger Announces 2021 Brand Study appeared first on Linux Foundation.

The post Hyperledger Announces 2021 Brand Study appeared first on Linux.com.

Recursive Vim macros: One step further into automating repetitive tasks

Wednesday 12th of May 2021 08:34:31 PM

Take Vim to the limit with recursive macros.
Read More at Enable Sysadmin

The post Recursive Vim macros: One step further into automating repetitive tasks appeared first on Linux.com.

Open Source API Gateway KrakenD Becomes Linux Foundation Project

Tuesday 11th of May 2021 10:00:00 PM

KrakenD framework becomes the Lura Project and gets a home at the Linux Foundation, where it will be the only enterprise-grade API Gateway hosted in a neutral, open forum

SAN FRANCISCO, May 11, 2021 – The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced it is hosting the Lura Project, formerly the KrakenD open source project. Lura is a framework for building Application Programming Interface (API) Gateways that goes beyond a simple reverse proxy, functioning as an aggregator for many microservices, and it is a declarative tool for creating endpoints.

Partners include 99P Labs (backed by Ohio State University), Ardan Studios, Hepsiburada, Openroom, Postman, Skalena and Stayforlong. 

“By being hosted at the Linux Foundation, the Lura Project will extend the legacy of the KrakenD open source framework and be better poised to support its massive adoption among more than one million servers every month,” said Albert Lombarte, CEO, KrakenD. “The Foundation’s open governance model will accelerate development and community support for this amazing success.”

API Gateways have become even more valuable as the necessary fabric for connecting cloud applications and services in hybrid environments. KrakenD was created five years ago as a library for engineers to create fast and reliable API Gateways. It has been in production among some of the world’s largest Internet businesses since 2016. As the Lura Project, it is a stateless, distributed, high-performance API Gateway that enables microservices adoption. 

“The Lura Project is essential connective tissue for applications and services across open source cloud projects, and so it’s a natural decision to host it at the Linux Foundation,” said Mike Dolan, senior vice president and general manager of Projects at the Linux Foundation. “We’re looking forward to providing the open governance structure to support Lura Project’s massive growth.” 

For more information about the Lura Project, please visit: https://www.luraproject.org

Supporting Comments

Ardan Studios

“I’m excited to hear that KrakenD API Gateway is being brought into the family of open source projects managed by the Linux Foundation. I believe this shows the global community the commitment KrakenD has to keeping their technology open source and free to use. With the adoption that already exists, and this new promise towards the future, I expect amazing things for the product and the community around it,” said William Kennedy, Managing Partner at Ardan Studios.

Hepsiburada

“At Hepsiburada we have a massive amount of traffic and a complex ecosystem of around 500 microservices and different datacenters. Adding KrakenD to our Kubernetes clusters has helped us reduce the technical and organizational challenges of dealing with a vast amount of resources securely and easily. We have over 800 containers running with KrakenD and are looking forward to having more,” said Alper Hankendi, Engineering Director, Hepsiburada.

Openroom

“KrakenD allowed us to focus on our backend and deploy a secure and performant system in a few days. After more than two years of use in production with zero crashes or malfunctions, it has also proven its robustness,” said Jonathan Muller, CTO, Openroom Inc.

Postman

“KrakenD represents a renaissance of innovation and investment in the API gateway and management space by challenging the established players with a more lightweight, high performance, and modern gateway for API publishers to put to work across their API operations, while also continuing to establish the Linux Foundation as the home for open API specifications and tooling that are continuing to touch and shape almost every business sector today,” said Kin Lane, chief evangelist, Postman.

Stayforlong

“KrakenD makes it easier for us to manage authentication, filter bots, and integrate our apps. It has proved to be stable and reliable since day one. It is wonderful!” said Raúl M. Sillero, CTO Stayforlong.com.

Skalena

“The open source model has always been a great proof of innovation and is nowadays a synonym for high-quality products and incredible attention to the real needs of the market (customer experience). The Linux Foundation is one of the catalysts of incredible solutions, and KrakenD, now Lura, could not have a better place to be. With this move, I am sure that it is the start of a new era for this incredible solution in the API Gateway space; the market will be astonished by a lot of good things to come,” said Edgar Silva, founder and partner at Skalena. 

About The Linux Foundation

Founded in 2000, The Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. The Linux Foundation’s projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, and more. The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

###

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page:  https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

Media Contact

Jennifer Cloer
for the Linux Foundation
503-867-2304
jennifer@storychangesculture.com

The post Open Source API Gateway KrakenD Becomes Linux Foundation Project appeared first on Linux Foundation.

The post Open Source API Gateway KrakenD Becomes Linux Foundation Project appeared first on Linux.com.

Save up to 50% on Cloud Training Bundles and Bootcamps!

Tuesday 11th of May 2021 09:00:04 PM

We probably don’t need to tell you how in demand cloud skills are right now, and how big of a shortage there is of qualified professionals. Just read these articles from TechHQ, CRN, TechRepublic, or our own 2020 Open Source Jobs Report which found hiring managers are more influenced by knowledge of cloud technologies than any other skill. If you are looking for a career change or to advance in your current IT career, cloud is the best place to start, and now is the time.

To make it easier to get started, Linux Foundation Training & Certification is offering 40% off our cloud training plus certification bundles, and 50% off our cloud engineer bootcamps through May 18! These offerings provide the knowledge you need to be successful in an entry-level cloud position, and the industry-leading certifications to prove it. 

Bundles, which include a training course and certification exam, are discounted by 40%:

Kubernetes Fundamentals (LFS258) + CKA Exam Bundle

This course will teach you how to use the container management platform used by companies like Google to manage their application infrastructure. It prepares you for the CKA exam, which demonstrates the ability to install, configure and manage production-grade Kubernetes clusters, in addition to your understanding of key concepts such as Kubernetes networking, storage, security, maintenance, logging and monitoring, application lifecycle, troubleshooting, API object primitives and the ability to establish basic use-cases for end users.

Kubernetes for Developers (LFD259) + CKAD Exam Bundle

This course will teach you how to containerize, host, deploy, and configure an application in a multi-node cluster. It prepares you for the CKAD exam, which demonstrates the ability to design, build, configure and expose cloud native applications for Kubernetes, define application resources and use core primitives to build, monitor, and troubleshoot scalable applications and tools in Kubernetes.

Kubernetes Security Essentials (LFS260) + CKS Exam Bundle

This course exposes you to the knowledge and skills needed to maintain security in dynamic, multi-project environments. It prepares you for the CKS exam, which demonstrates the requisite abilities to secure container-based applications and Kubernetes platforms during build, deployment and runtime, and qualifies you to perform these tasks in a professional setting.

The above bundles are reduced from $499 to $299 with coupon code CLOUD21.

Bootcamps, which are self-paced programs presented in a structured format with a dedicated mentor and access to live online video office hours with instructors, are discounted 50%:

Cloud Engineer Bootcamp

This program will prepare an absolute beginner to learn the most in-demand cloud computing skills in as little as 6 months. Components of the bootcamp include:

Essentials of Linux System Administration (LFS201) – This course will teach you how to administer, configure and upgrade Linux systems, which serve as the foundation of modern cloud infrastructures.
Linux Networking and Administration (LFS211) – Learn how to design, deploy and maintain a network running under Linux, administer network services and securely configure the network interfaces.
Linux Foundation Certified System Administrator Exam (LFCS) – Take some time to study and redo labs from the previous courses to improve your speed before taking your first certification exam. The performance-based LFCS certification will demonstrate your Linux skills to future employers.
Containers Fundamentals (LFS253) – In our app-driven world, containers and microservices are the perfect home for an application. Containers bundle an application with all its dependencies and deploy it on the platform of our choice. This course will help you build a solid foundation on container technologies.
DevOps and SRE Fundamentals (LFS261) – The DevOps movement is changing the way applications are built, tested, and deployed. This course will teach you the skills to deploy software with confidence, agility and high reliability using modern practices such as Continuous Integration and Continuous Delivery, which are essential to modern cloud administration.
Kubernetes Fundamentals (LFS258) – This course will teach you how to use Kubernetes, the container management platform used by companies like Google to manage their application infrastructure. This includes learning how to install and configure a production-grade Kubernetes cluster, from network configuration to upgrades to making deployments available via services.
Certified Kubernetes Administrator Exam (CKA) – Revisit the labs from LFS253 and LFS258 before sitting for your final exam of the bootcamp. Earning your CKA will demonstrate you have the skills, knowledge, and competency to perform the responsibilities of a Kubernetes administrator and cloud engineer.

Advanced Cloud Engineer Bootcamp

This program is designed for existing IT professionals who want to transition into a cloud administrator or engineer role. It assumes you already have basic knowledge of Linux, networking and related technologies. Components of this bootcamp include:

Containers Fundamentals (LFS253) – In our app-driven world, containers and microservices are the perfect home for an application. This course will help you build a solid foundation for container technologies.
Kubernetes Fundamentals (LFS258) – This course will teach you how to install and configure a production-grade Kubernetes cluster, from network configuration to upgrades to making deployments available via services.
Certified Kubernetes Administrator Exam (CKA) – Earning your CKA will demonstrate you have the skills, knowledge, and competency to perform the responsibilities of a Kubernetes administrator and cloud engineer.
Service Mesh Fundamentals (LFS243) – With the growth of microservices and Kubernetes, production environments need to have tools to monitor and manage network traffic. This course explores the use of Envoy Proxy and Istio to take control of network access.
Monitoring Systems and Services with Prometheus (LFS241) – Prometheus is a monitoring system and time series database that is especially well suited for monitoring dynamic cloud environments. This course walks through installation and deployment, many of its major features, best practices, and use cases.
Cloud Native Logging with Fluentd (LFS242) – Known as the “unified logging layer”, Fluentd provides fast and efficient log transformation and enrichment, as well as aggregation and forwarding. This course provides a technical introduction to the Fluentd log forwarding and aggregation tool for use in cloud native logging.
Managing Kubernetes Applications with Helm (LFS244) – Deploying complex and interrelated microservices can be challenging. The course explains how to use Helm to package, install, and verify Kubernetes components in a production cluster.

The following benefits are included with both Bootcamps:

Daily, Live Instructor Office Hours
Access to a Dedicated Mentor
Dedicated Discussion Forum
And More…

Bootcamps are regularly $999 but currently discounted to $499 with coupon code BOOTCAMP21.

Keep in mind that standard pricing on both the bundles and bootcamps will be increasing on July 1, so by enrolling now you’re saving even more.

Visit the promotion page for more information and to start your journey to a new cloud career!

The post Save up to 50% on Cloud Training Bundles and Bootcamps! appeared first on Linux Foundation – Training.

The post Save up to 50% on Cloud Training Bundles and Bootcamps! appeared first on Linux.com.

The Linux Foundation and NGMN Collaborate on End-to-End 5G and Beyond

Monday 10th of May 2021 11:00:00 PM

SAN FRANCISCO, Calif. and FRANKFURT, GERMANY – May 10, 2021 – The Linux Foundation and the Next Generation Mobile Networks (NGMN) Alliance today announced the signing of a Memorandum of Understanding (MoU) for formal collaboration on end-to-end 5G and beyond. 

NGMN’s mission is to provide impactful industry guidance to achieve innovative and affordable mobile telecommunication services for the end user, placing a particular focus on Mastering the Route to Disaggregation, Sustainability and Green Future Networks, as well as on 6G and the continuous support of 5G’s full implementation.

Creating and providing open, scalable building blocks for operators and service providers is critical to the industry adoption of 5G and beyond. Therefore, the collaboration between NGMN and the Linux Foundation will focus on end-to-end 5G architecture and beyond 5G. Specific areas of alignment may include sustainability, network automation and network autonomy based on Artificial Intelligence, security, edge cloud, virtualization, disaggregation, cloud native, and service-based architecture, to name a few. 

“We very much look forward to a mutually inspiring and beneficial collaboration with The Linux Foundation. Open Source is gaining increasing relevance for the strategic topics of our Work Programmes such as Mastering the Route to Disaggregation, Green Future Networks and 6G. We are delighted to partner with The Linux Foundation to jointly drive our mission for the benefit of the global ecosystem”, said Anita Doehler, CEO, NGMN Alliance.

“We are thrilled to be aligning with such an innovative, industry-leading organization,” said Arpit Joshipura, General Manager, Networking, Edge and IoT, the Linux Foundation. “Integrating NGMN’s expertise across pivotal areas like Disaggregation, Green Future Networks, cloud native, automation, and early work on 6G into LF Networking’s 5G Super Blueprint initiative is a natural next step for the industry.”

The Linux Foundation’s vision of harmonizing open source software with open standards has been in effect for several years, including collaborations with ETSI, TMF, MEF, GSMA, the O-RAN Alliance, and more. NGMN also maintains longstanding co-operations with all of these organisations. The alignment between The Linux Foundation and NGMN represents the latest in a long-standing effort to integrate open source and open standards across the industry. 

About NGMN Alliance (www.ngmn.org)

The NGMN Alliance (Next Generation Mobile Networks Alliance) is a forum founded by world-leading Mobile Network Operators and open to all partners in the mobile industry. Its goal is to ensure that next generation network infrastructure, service platforms and devices will meet the requirements of operators and, ultimately, will satisfy end user demand and expectations. The vision of the NGMN Alliance is to provide impactful industry guidance to achieve innovative and affordable mobile telecommunication services for the end user with a particular focus on supporting 5G’s full implementation, Mastering the Route to Disaggregation, Sustainability and Green Networks, and work on 6G.

NGMN seeks to incorporate the views of all interested stakeholders in the telecommunications industry and is open to three categories of participants (NGMN Partners): Mobile Network Operators (Members), vendors, software companies and other industry players (Contributors), as well as research institutes (Advisors).

About the Linux Foundation

The Linux Foundation is the organization of choice for the world’s top developers and companies to build ecosystems that accelerate open technology development and commercial adoption. Together with the worldwide open source community, it is solving the hardest technology problems by creating the largest shared technology investment in history. Founded in 2000, The Linux Foundation today provides tools, training and events to scale any open source project, which together deliver an economic impact not achievable by any one company. More information can be found at www.linuxfoundation.org.

# # #

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

The post The Linux Foundation and NGMN Collaborate on End-to-End 5G and Beyond appeared first on Linux Foundation.

The post The Linux Foundation and NGMN Collaborate on End-to-End 5G and Beyond appeared first on Linux.com.

Btrfs: Advantages of upgrading from UEK5 to UEK6

Monday 10th of May 2021 10:00:00 PM

The advantages in btrfs you will receive when you upgrade from UEK5 to UEK6
Click to Read More at Oracle Linux Kernel Development

The post Btrfs: Advantages of upgrading from UEK5 to UEK6 appeared first on Linux.com.

OpenPOWER Foundation announces LibreBMC, a POWER-based, fully open-source BMC

Monday 10th of May 2021 05:56:02 PM

News from the OpenPOWER Blog:

Baseboard management controllers (BMCs) are a mainstay in data centers. They enable remote monitoring and access to servers, and they’re responsible for the rise of “lights out management.” But from a hardware perspective, there has been little innovation in this space for years. BMC processors are built on legacy architectures that are proprietary and closed.

The OpenPOWER Foundation is announcing a new workgroup to develop LibreBMC, the first ever baseboard management controller with completely open-source software and hardware. The processor will be based on the POWER ISA, which was open-sourced by IBM at OpenPOWER Summit North America in August 2019.

Read more at OpenPOWER

The post OpenPOWER Foundation announces LibreBMC, a POWER-based, fully open-source BMC appeared first on Linux.com.

Interview with Masato Endo, OpenChain Project Japan

Monday 10th of May 2021 07:37:06 AM

Linux Foundation Editorial Director Jason Perlow had a chance to speak with Masato Endo, OpenChain Project Automotive Chair and Leader of the OpenChain Project Japan Work Group Promotion Sub Group, about the Japan Ministry of Economy, Trade and Industry’s (METI) recent study on open source software management.

JP: Greetings, Endo-san! It is my pleasure to speak with you today. Can you tell me a bit about yourself and how you got involved with the Japan Ministry of Economy, Trade, and Industry?

ME: Hi, Jason-san! Thank you for such a precious opportunity. I’m a manager and scrum master in the planning and development department for new services at a Japanese automotive company. We also worked on building the company’s OSS governance structure, including obtaining OpenChain certification.

As an open source community member, I participated in the OpenChain project and was involved in establishing the OpenChain Japan Working Group and Automotive Working Group. Recently, as a leader of the Promotion SG of the Japan Working Group, I am focusing on promoting OSS license compliance in Japan.

In this project, I contribute as a bridge between the Ministry of Economy, Trade, and Industry and the members of OSS community projects such as OpenChain.

For example, I recently gave a presentation on OpenChain at a task force meeting and introduced companies that cooperated with the case study.

JP: What does the Ministry of Economy, Trade, and Industry (METI) do?

ME: METI has jurisdiction over the administration of the Japanese economy and industry. This case study was conducted by a task force of the Commerce and Information Policy Bureau’s Cyber Security Division that examines software management methods for ensuring cyber-physical security.

JP: Why did METI commission a study on the management of open source program offices and open source software management at Japanese companies?

ME: METI itself conducted this survey. The Task Force has been considering appropriate software management methods, vulnerability countermeasures, license countermeasures, and so on.

Meanwhile, as the importance of OSS utilization has increased in recent years, the task force concluded that sharing each company’s knowledge of OSS management methods helps solve the problems every company faces.

JP: How do Japanese corporations differ from western counterparts in open source culture?

ME: Like Western companies, Japanese companies also use OSS in various technical fields, and OSS has become indispensable. In addition, more than 80 companies have participated in the Japan Working Group of the OpenChain project. As a result, the momentum to promote the utilization of OSS is increasing in Japan.

On the other hand, some survey results show that Japanese companies’ contribution processes and support systems lag behind those of Western companies, so it is necessary to further promote community activities in Japan.

JP: As a result of the study, what challenges did the open source community and METI identify that Japanese companies face when adopting open source software within their organizations?

ME: In this case study, many companies mentioned license compliance. It was found that each company has established a company-wide system and rules to comply with licenses and provides education to engineers. The best approach depends on the industry and the size of the company, but I believe the information from this case study is very useful for companies all over the world.

In addition, it was confirmed that the Software Bill of Materials (SBOM) is becoming more critical for companies, from the viewpoint of both vulnerability response and license compliance. Regardless of whether companies are using OSS internally or exchanging software with an external partner, it is important to clarify which OSS they are using. I recognize that this issue is a hot topic in Western companies as well, under the name “software transparency”.

In this case study, several companies also mentioned OSS supply chain management. In addition to clarifying rules between companies, they are working to raise the level of the entire supply chain through community activities such as OpenChain.

Challenge 1: License compliance

When developing software using OSS, it is necessary to comply with the license declared by each OSS. If companies don’t conduct in-house licensing education and management appropriately, OSS license violations will occur.

Challenge 2: Long term support

Since the development lifetime of an OSS project depends on the community’s activity, the support period may in some cases be shorter than the product life cycle.

Challenge 3: OSS supply chain management

Recently, software supply chains have expanded in scale, and OSS is frequently included in deliveries from suppliers. Sharing OSS information within the supply chain has become important for implementing appropriate vulnerability and license countermeasures.

JP: What are the benefits of Japanese companies adopting standards such as OpenChain and SPDX?

ME: Companies need to do a wide range of things to ensure proper OSS license compliance, so some guidance is needed. The OpenChain Specification, which has become an ISO standard, is particularly useful as such a guideline. In fact, several companies that responded to this survey have built their OSS license compliance processes based on the OpenChain Specification.

Also, from the perspective of supply chain management, if each supply chain company obtains OpenChain certification, software transparency will increase and appropriate OSS utilization will be promoted.

In addition, by participating in OpenChain’s Japan Working Group, companies can share best practices and work together to solve problems.

Since SPDX is a leading international standard for SBOMs, using it when exchanging information about OSS in the supply chain is very useful from the viewpoint of compatibility.

Japanese companies use the SPDX standard and actively contribute to the formulation of SPDX specifications like SPDX Lite.

JP: Thank you, Endo-san! It has been great speaking with you today.

The post Interview with Masato Endo, OpenChain Project Japan appeared first on Linux Foundation.

The post Interview with Masato Endo, OpenChain Project Japan appeared first on Linux.com.

More in Tux Machines

today's leftovers

  • Fedora Community Blog: Friday’s Fedora Facts: 2021-29

    Here’s your weekly Fedora report. Read what happened this week and what’s coming up. Your contributions are welcome (see the end of the post)!

  • Nostalgia and efficiency - MATE Desktop Tour

    It's time we started taking a look at MATE, the last major desktop environment I have never used. All I know about MATE is that it's basically a continuation of the GNOME 2 desktop, which I used for a long time back when I started with Linux in 2006 on Ubuntu Dapper Drake. Let's see if that is true, and if GNOME 2, or MATE, is still up to the challenge in 2021.

  • Full Circle Weekly News #219
  • System76: Laptops, Servers, and PCs Optimized for Linux and Open-Source Solutions

    Despite a lineage that predates Microsoft Windows and Apple macOS, the Linux operating system has struggled to gain traction in the mass commercial market. That challenge extends not only to the software but also to the dedicated hardware optimized to maximize the benefits of Linux on desktops and laptops. Linux was initially popular with tech enthusiasts, but the commercial PC industry skewed toward Windows and Intel consumer hardware. Part of the challenge for Linux related to its early lack of dedicated hardware solutions. The founders of System76 set out to make the Linux ecosystem more inviting by integrating the hardware and software components to provide consumers with easy access to desktops and laptops.

  • Jon McDonald: How System76 paves the way for Linux hardware adoption

    System76 has found its footing in an industry largely geared towards Windows users. Jon McDonald, Contributing Editor for web hosting company HostingAdvice, took to the company’s blog to share a deep dive on System76’s success in the world of Linux hardware. He’s joined by Sam Mondlick, VP of Sales at System76.

  • Space Cowboy, Guardians of Cleveland, and Tony Award winner Ellen Barkin considers a Substack – here is this week’s Top Shelf.

    At Mozilla, we believe part of making the internet we want is celebrating the best of the internet, and that can be as simple as sharing a tweet that made us pause in our feed. Twitter isn’t perfect, but there are individual tweets that come pretty close. Each week in Top Shelf, we share the tweets that made us laugh, think, Pocket them for later, text our friends, and want to continue the internet revolution.

Programming Leftovers

  • with Statement – Linux Hint

    The Python with statement is a very advanced feature that helps implement the context management protocol. When programmers start coding, they often use try/except/finally to manage resources, but there is another way to do this automatically, called the ‘with’ statement. So, in this article, we will discuss how we can use the ‘with‘ statement. We can understand this with a very simple example. Whenever we write code to read or write a file, the first thing we have to do is open the file; then we perform the read or write operations on it and, at last, we close the file so that the resources are not kept busy. It means that we have to release the resource after we complete our work.
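
    As a quick illustration (our example, not from the tutorial), a minimal sketch of the statement and the try/finally bookkeeping it replaces:

      # 'with' closes the file even if an exception occurs inside the block
      with open("notes.txt", "w") as f:
          f.write("hello\n")

      # the equivalent bookkeeping done by hand
      f = open("notes.txt", "a")
      try:
          f.write("world\n")
      finally:
          f.close()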

  • Assembly of Python External C++ procedure returning the value of string type

    In the C++ procedure below, we compute the final answer as a C++ string; then, via a sequence of operations that converts the string to a pointer (say, c) to const char, we finally return the required value to the Python runtime as the PyObject provided by PyUnicode_FromString(c).
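
    As a hedged sketch of that pattern (our own minimal example; the function and module names are hypothetical), a C++ function that hands a string back to Python:

      // build as a Python extension module, e.g. with setuptools
      #include <Python.h>
      #include <string>

      static PyObject* make_greeting(PyObject* self, PyObject* args) {
          std::string s = "hello from C++";   // the final answer as a C++ string
          const char* c = s.c_str();          // convert to a pointer to const char
          return PyUnicode_FromString(c);     // hand a Python str back to the runtime
      }

      static PyMethodDef methods[] = {
          {"make_greeting", make_greeting, METH_NOARGS, "Return a greeting string."},
          {NULL, NULL, 0, NULL}
      };

      static struct PyModuleDef moddef = {
          PyModuleDef_HEAD_INIT, "demo", NULL, -1, methods
      };

      PyMODINIT_FUNC PyInit_demo(void) { return PyModule_Create(&moddef); }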

  • How to split string in C++ – Linux Hint

    Working with string data is an essential part of any programming language. Sometimes we need to split string data for programming purposes. A split() function exists in many programming languages to divide a string into multiple parts. There is no built-in split() function in C++, but several ways exist to do the same task, such as using the getline() function, the strtok() function, or the find() and erase() functions. The uses of these functions to split strings in C++ are explained in this tutorial.
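
    For instance, a minimal sketch (ours, not from the tutorial) of the getline() approach:

      // split a comma-separated string with std::getline on a stringstream
      #include <iostream>
      #include <sstream>
      #include <string>
      #include <vector>

      int main() {
          std::string line = "red,green,blue";
          std::vector<std::string> parts;
          std::stringstream ss(line);
          std::string item;
          while (std::getline(ss, item, ',')) {   // read up to each comma
              parts.push_back(item);
          }
          for (const auto& p : parts) std::cout << p << '\n';
      }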

  • Do while in c – Linux Hint

    Loops in C are divided into two parts: one is the loop body, and the other is the control statement. Each loop is unique in its own way. The do-while loop is similar to the while loop in some respects. In this loop, all the statements inside the body are executed first; if the condition is true, the loop is executed again, until the condition becomes false. In this guide, we will shed some light on examples of do-while loops.
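
    A minimal sketch (our example) of the defining property, namely that the body runs once before the condition is first tested:

      #include <stdio.h>

      int main(void) {
          int n = 0;
          do {
              printf("n = %d\n", n);   /* executes at least once */
              n++;
          } while (n < 3);             /* repeats while the condition is true */
          return 0;
      }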

  • C++ class constructors – Linux Hint

    Constructors are like functions. They are used to initialize the values and the objects of a class, and they are invoked when an object of the class is created. A constructor does not directly return any value; since it has no return type, we need to define a separate function to get values out of the class. A constructor differs from a simple function in several ways: it is created when an object is generated, and it is defined in the public segment of the class. In this article, we will deliberate on all these types of constructors with examples.
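
    As a quick illustration (ours, not from the tutorial), a minimal sketch of a default and a parameterized constructor:

      #include <iostream>

      class Point {
      public:
          Point() : x_(0), y_(0) {}               // default constructor
          Point(int x, int y) : x_(x), y_(y) {}   // parameterized constructor
          void print() const { std::cout << x_ << "," << y_ << '\n'; }
      private:
          int x_, y_;
      };

      int main() {
          Point a;         // runs the default constructor
          Point b(3, 4);   // runs the parameterized constructor
          a.print();       // 0,0
          b.print();       // 3,4
      }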

  • Comparing Strings in Java – Linux Hint

    It is easier to understand the comparison of characters before learning the comparison of string literals. A comparison of strings is given below this introduction. With Java, characters are represented in the computer by integers (whole numbers). Comparing characters means comparing their corresponding numbers. With Java, uppercase ‘A’ to uppercase ‘Z’ are the integers from 65 to 90: ‘A’ is 65, ‘B’ is 66, ‘C’ is 67, up to ‘Z’, which is 90. Lowercase ‘a’ to lowercase ‘z’ are the integers from 97 to 122: ‘a’ is 97, ‘b’ is 98, ‘c’ is 99, up to ‘z’, which is 122. Decimal digits are the integers 48 to 57: ‘0’ is 48, ‘1’ is 49, ‘2’ is 50, up to ‘9’, which is 57. So, in this order, digits come first, then uppercase letters, then lowercase letters. Before the digits, there is the bell, which is a sounding and not a printable character; its number is 7. There is the tab character of the keyboard, whose number is 9; the newline character (pressing the Enter key), whose number is 10; the space character (pressing the space bar), whose number is 32; the exclamation character, whose number is 33; and the forward-slash character, whose number is 47. ‘(’ has the number 40 and ‘)’ has the number 41.
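
    A minimal sketch (ours) of characters comparing by their code points, with string comparison for contrast:

      public class CompareDemo {
          public static void main(String[] args) {
              System.out.println((int) 'A');                    // 65
              System.out.println('a' > 'Z');                    // true: 97 > 90
              System.out.println("apple".compareTo("banana"));  // negative: "apple" sorts first
              System.out.println("apple".equals("Apple"));      // false: case matters
          }
      }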

  • How to use HashMap in Java – Linux Hint

    The column on the left has the keys, and the column on the right has the corresponding values. Note that the fruits kiwi and avocado have the same color, green. Also, the fruits grapes and figs have the same color, purple. At the end of the list, three locations are waiting for their own colors. These locations have no corresponding fruits; in other words, these three locations have no corresponding keys.
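
    As a quick sketch (ours, not from the tutorial) of the fruit-to-color table described above:

      import java.util.HashMap;

      public class FruitColors {
          public static void main(String[] args) {
              HashMap<String, String> colors = new HashMap<>();
              colors.put("kiwi", "green");      // two keys may share a value
              colors.put("avocado", "green");
              colors.put("grapes", "purple");
              colors.put("figs", "purple");
              System.out.println(colors.get("kiwi"));                      // green
              System.out.println(colors.getOrDefault("pear", "no color")); // missing key
          }
      }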

Computer scientist showcases world's first RISC-V-based Linux PC coupled with an AMD RX 6700 XT GPU

Back when Nvidia announced its intention to buy ARM and many industry analysts immediately expressed concern that the ARM architecture might not remain open for too long, SiFive came out with a big push for its RISC-V CPU architecture as a true open source alternative. Similar to the Windows-on-ARM initiative, SiFive promised to deliver a general-use PC platform that would allow software developers to adapt Windows and Linux-based code for RISC-V processors. It only took SiFive a few months to launch its first PC motherboard, called the HiFive Unmatched, which is based on the U7 SoC. However, since the RISC-V community is not that big, development on the PC platform is not exactly fast.

Interestingly enough, Nvidia recently managed to enable RTX 3000 support for ARM-based laptops, and, almost at the same time, a RISC-V enthusiast managed to make an AMD RX 6700 XT work on a Linux-based HiFive Unmatched system. This is essentially a double milestone for the RISC-V community. Hackster.io reports that computer scientist René Rebe first managed to make the HiFive Unmatched run Linux, and then added support for the Radeon RX 6700 XT GPU through the Mesa Gallium 21.1.5 driver. Apparently, the U7 SoC is not properly supported in Linux, but Rebe was able to work his magic and patch the Linux kernel to support both the RISC-V architecture and the RDNA2 GPU in around 10 hours.

The GPU is not fully functional as of yet. It can display the GUI, render 3D graphics in accelerated mode, and decode hi-res videos, but it cannot run games. Nevertheless, this is still an impressive achievement, and one not facilitated by the SiFive team itself. Read more

today's howtos

  • Evgeni Golov: It's not *always* DNS

    Two weeks ago, I had the pleasure of playing with Foreman's Kerberos integration and ironing out a few long-standing kinks. It all started with a user reminding us that Kerberos authentication is broken when Foreman is deployed on CentOS 8, as mod_auth_kerb is no longer available there. Given that mod_auth_kerb hasn't seen a release since 2013, this is quite understandable. Thankfully, there is a replacement available: mod_auth_gssapi. Even better, it's available in CentOS 7 and 8, and in Debian and Ubuntu too! So I quickly whipped up a PR to completely replace mod_auth_kerb with mod_auth_gssapi in our installer and successfully tested that it still works in CentOS 7 (even when upgrading from a mod_auth_kerb installation) and CentOS 8.
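
    For readers unfamiliar with the module, a rough sketch of an Apache location protected by mod_auth_gssapi; the path and keytab location are assumptions, not taken from the post:

      <Location /foreman>
          AuthType GSSAPI
          AuthName "Kerberos Login"
          GssapiCredStore keytab:/etc/httpd/conf/http.keytab
          Require valid-user
      </Location>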

  • [Older] How To Install MariaDB 10.5 on Ubuntu 20.04

    MariaDB is one of the most popular open-source databases, next to its originator MySQL. The original creators of MySQL developed MariaDB in response to fears that MySQL would suddenly become a paid service after Oracle acquired it in 2010. Given Oracle's history of similar tactics, the developers behind MariaDB have promised to keep it open source and free from the fate that befell MySQL.
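
    As a minimal sketch (ours) of the installation itself on Ubuntu 20.04; note that the distribution's default package may be older than 10.5, which is why the guide adds MariaDB's own repository first:

      sudo apt update
      sudo apt install mariadb-server
      sudo systemctl enable --now mariadb
      sudo mysql_secure_installation   # optional interactive hardening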

  • Save a dict to a file – Linux Hint

    The dictionary is a very well-known object in Python. It is a collection of keys and values. The key of a dict must be immutable; it can be an integer, float, or string, but neither a list nor a dict itself can be a key. Sometimes we need to save dict objects into a file, so we are going to look at different methods to save a dict object in a file.
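
    As one quick example (ours, not from the tutorial), saving and restoring a dict as JSON:

      import json

      scores = {"alice": 10, "bob": 7}
      with open("scores.json", "w") as f:
          json.dump(scores, f)        # write the dict to the file
      with open("scores.json") as f:
          restored = json.load(f)     # read it back as a dict
      print(restored == scores)       # True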

  • Introduction to RPM/YUM Package Management – Linux Hint

    Red Hat Package Manager (RPM) is the default open-source package management utility, built under the General Public License (GPL). The package management system is used by all Red Hat-based Linux derivatives, like Fedora, RHEL, and CentOS. RPM facilitates system administrators with the five basic modes of package management operations: installing, updating, removing, querying, and verifying packages. Moreover, Yellowdog Updater Modified (YUM) is to RPM what the APT package management tool is to the dpkg utility in the Debian packaging system: it resolves the package dependency issues of RPM. In this guide, we will briefly introduce YUM, and then provide an in-depth introduction and background to the RPM packaging system for Red Hat Linux distributions.
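
    As a quick reference (our sketch, not from the guide), the five basic RPM modes plus YUM's dependency-resolving install:

      rpm -ivh package.rpm    # install
      rpm -Uvh package.rpm    # update
      rpm -e package          # remove (erase)
      rpm -q package          # query
      rpm -V package          # verify
      yum install package     # install with automatic dependency resolution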

  • What is ngrep and How to Use It? – Linux Hint

    While tshark and tcpdump are the most popular packet sniffing tools, digging down to the level of bits and bytes of the traffic, ngrep is another command-line *nix utility that analyzes network packets and searches them for a given regex pattern. The utility uses the pcap and GNU regex libraries to perform its string searches. ngrep stands for Network grep, and it is similar to the regular grep utility; the only difference is that ngrep parses text in network packets by using regular or hexadecimal expressions. In this article, we learn about a command-line, feature-rich utility known as ngrep that is handy for quick PCAP analysis and packet dumping.
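
    As a small illustration (ours, not from the article), matching HTTP GET requests on port 80; run as root:

      ngrep -d any -q 'GET .* HTTP' tcp port 80
      # -d any: listen on all interfaces; -q: suppress the '#' progress output
      # the trailing 'tcp port 80' is an ordinary BPF filter, as with tcpdump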

  • Kubectl Port Forward – Linux Hint

    Forwarding a port using kubectl is relatively easy, although it only operates with individual pods and not with services. Port forwarding is a valuable tool for debugging different applications and deployments in a Kubernetes cluster. For illustration, if one of your pods is acting strangely, you may need to connect to it directly. As this is a microservice setting, you can also utilize port forwarding to communicate with a back-end service that would otherwise be hidden. The kubelet delivers all data entered into the stream to the destination pod and port. When designing Kubernetes applications, it is common to want immediate access to a service from the surrounding environment without exposing it via a load balancer or an ingress resource.

    We can use kubectl to create a proxy that forwards all traffic from a local port to a port on our chosen pod. The kubectl port-forward instruction accomplishes this by sending a request to the Kubernetes API; that implies the machine that runs it requires access to the API server, and all communication is tunneled through a single HTTP connection. By forwarding one (or more) local ports to a pod, we can access container content with this command, which performs effectively when you need to debug a malfunctioning pod. We are going to walk through a step-by-step method to try out port forwarding using kubectl.
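
    As a minimal sketch (ours; the pod name is a placeholder), forwarding local port 8080 to port 80 in a pod:

      kubectl port-forward pod/my-app-pod 8080:80
      # in another terminal, requests to localhost are tunneled to the pod
      curl http://localhost:8080/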

  • Kubectl Get Events To Sort By Time – Linux Hint

    Kubernetes events are generated automatically when other resources have changes, errors, or other notifications that should be broadcast to the system. There is not much documentation on events, but they are a great help when troubleshooting problems in your Kubernetes cluster. Compared to many other Kubernetes objects, events see a lot of activity: they have a one-hour lifetime by default, and a distinct etcd cluster is advised for scalability. On their own, given the limited ability to filter or aggregate them, events may not be particularly valuable unless they are transferred to external systems.

    Kubernetes events are entities that tell you what is going on inside a cluster, such as the scheduler's decisions and why some pods were evicted from a node. The API server allows all key components and extensions (operators) to generate events. When something is not operating as planned, the first place to look is events and network operations. If a failure is the outcome of earlier events, or when performing post-mortem analysis, keeping them for a longer duration is critical. Kubernetes generates events every time any of the resources it manages changes; the entity that initiated the event, the kind of event, and the reason are generally included. To sort events by time, follow the steps described in this tutorial.
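
    For instance (our sketch, not from the tutorial), sorting by creation time or by the time each event last occurred:

      kubectl get events --sort-by='.metadata.creationTimestamp'
      kubectl get events --sort-by='.lastTimestamp'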

  • Introduction to Manjaro Package Manager Pacman – Linux Hint

    Linux distribution package management systems have come a long way. The timely practice of software management, by creating independent repositories, application packages, and installation tools, made software accessible across environments. Like other Arch-based distributions, Manjaro uses Pacman, the default package manager of Arch Linux. In this article, we learn to use the command-line package manager Pacman to add, remove, and update software packages from the distribution or user build repositories. The tutorial also covers how to query details of installed packages on the system.
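
    As a quick reference (our sketch, not from the article), the everyday Pacman operations the tutorial covers:

      sudo pacman -Syu     # sync the repositories and upgrade the whole system
      sudo pacman -S vim   # install a package
      sudo pacman -R vim   # remove a package
      pacman -Ss editor    # search the repositories
      pacman -Qi vim       # query details of an installed package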