News For Open Source Professionals

Role of Training and Certification at the Linux Foundation

Friday 23rd of October 2020 04:59:34 PM

Open source allows anyone to dip their toes in the code, read up on the documentation, and learn everything on their own. That’s how most of us did it, but that’s just the first step. Those who want successful careers building, maintaining, and managing companies’ IT infrastructure need more structured, hands-on learning with real-life experience. That’s where the Linux Foundation’s Training and Certification unit enters the picture. It helps not only greenhorn developers but also members of the ecosystem who seek highly trained and certified engineers to manage their infrastructure. Swapnil Bhartiya sat down with Clyde Seepersad, SVP and GM of Training and Certification at the Linux Foundation, to learn more about the Foundation’s efforts to create a generation of qualified professionals.

Swapnil Bhartiya: Can you tell us a bit about the primary goal of Training and Certification at the Linux Foundation?

Clyde Seepersad: If you look at the history of open source, the first wave of folks was very DIY. They would jump in, they would read the docs, and they would get on IRC channels. That was the classic way you got into open source: by figuring it out yourself. But of course, that’s never been true for software in general. People have always gotten trained on commercial products. Companies have whole market enablement arms.

One of the things we identified a few years ago is that there was a gap between amazing-quality open source products that were changing how computing gets done and the talent development side of things. How do we create on-ramps for talent in an age of open source, where you don’t necessarily have the same go-to-market commercial organizations making sure that people get trained and up to speed technically on the software products? That’s the piece we’re really trying to fill: the entry-level talent gap.

Swapnil Bhartiya: I have seen that there is no shortage of training options, but mostly they are specific to a vendor and its product. When it comes to vendor-neutral core technologies, there is a huge void. I think that’s the void the LF is trying to fill?

Clyde Seepersad: Correct. It’s for the core technologies. We always say you need a starting point that’s most useful to most people, and that is a vendor-neutral understanding of the core technologies. We really try to focus on entry-level talent because we recognize that the commercial ecosystems are really valuable. When you get up into the intermediate and advanced layers, by definition you’re working with specific tool sets, and it’s appropriate to move into that more specific type of training. But when you’re getting started, you really need the broadest possible foundation, because you don’t know if you’re going to be working in an Azure shop or a GCP shop, and you don’t know which distro you’re going to be using. The broader the footprint we can give people to start with, the better. That’s where we focus, entry-level and vendor-neutral, so people are best prepared for the maximum number of career opportunities.

Swapnil Bhartiya: The way we learned was by teaching ourselves everything: find it on the Internet, read a lot of books, download stuff, and experiment. But in today’s world, where everybody’s connected, what kind of demand is there for this very basic, entry-level training, especially in the open source space?

Clyde Seepersad: Yeah. I’ll give you a good example, Swapnil. Just this past week, we announced the 1,000,000th enrollment in our free Intro to Linux course on edX, which kind of blows my mind. We were able to get out on the internet and find one million people from 222 different countries who wanted to learn the fundamentals of Linux: what it is and what it can do. I think that shines a good light on just how broad the base is and, more importantly, how global the base is, right? There are a lot of on-ramps into a technology career if you’re in North America or Western Europe, as those are more mature ecosystems with different entry points. When you look globally, there are a lot fewer of those. So part of our mission is saying, “Hey, this is not an isolated technical challenge for the US or for France or Germany. This is a global technical challenge.” And we’re seeing that demand.

The second-highest number of enrollments in the free Linux courses is from India, so there’s a broad, deep move toward this. The example I’ve taken to giving recently is that with the pandemic of 2020, my favorite local Chinese restaurant, a small mom-and-pop operation, shut down. When they came back online, they came back with a website and an online ordering system. I asked, “How did you get that set up?” And she said, “Oh, we had to go hire somebody. We had to figure out how to make a mom-and-pop strip-mall Chinese food business into a web-enabled business.” It’s an example of something that is increasingly true: every business is now a technology business.

Swapnil Bhartiya: One more thing people do not give enough credit for is how much open source technologies have democratized things, even in these tough times. Building your own stack is so hard and so expensive, and if you want to start a business, you don’t want to run your own data centers; cloud and open source changed that. You gave an example, and it’s the same with my local Indian store: because of social distancing we did not want to go out, and they had never done this before, but suddenly everything was available online. You can just place the order and get it delivered to your home. What enabled them to move quickly was all this democratization, and at the same time, there is a talent pool, which you help create, that can actually handle that kind of work. Because of that, suddenly, there is a surge. But when we look at these technologies, we hear buzzwords like Kubernetes, and they are intimidating. For somebody who is new and wants to get in, but has no experience in any of these technologies or industries, how should they get started?

Clyde Seepersad: That’s a great point, Swapnil. Actually, that exact challenge is why we recently announced the creation of a new entry-level exam for what we call “IT associates”. It’s a recognition that for those of us in tech, getting started seems like a fairly obvious thing, right? You learn the basic operating system, you get familiar with the cloud technologies, and you start thinking about the problems of stability, scale, and security. But if you’re on the outside looking in, you have never learned this stuff, and you don’t know anybody in your community or your family who does it, it is a very tall ask to say, “Hey, go start by getting certified in Linux” or “Go start by getting an Azure certification or an AWS certification.” It’s just too much to ask of folks. You need some intermediate step to help people build confidence that this is something they can do, even if they don’t have a support system, a network, and a set of role models around them.

So, we developed this program to see if we can create a pre-professional certification exam that demonstrates that somebody has understood the fundamental concepts of the new cloud infrastructure, the microservice infrastructure, the cloud native infrastructure, without forcing them to get to the finish line of “Hey, I’m a competent cloud administrator,” right? It’s too much to ask folks to get there in one go. It’s too much in terms of the time, and too much in terms of the level of effort, without giving them some midpoint to see, “Okay, I feel confident that I can do this. I have the aptitude. I’ve been able to demonstrate that I can learn some of the basics.” And that really is the audience we’re targeting. These are folks coming from the outside, new to IT, who understand the potential and can see themselves doing it, but we have to give them somewhere to hang their hat, to see, “Okay, it’s going to be fine. It’s a lot to learn, but I’ve shown that I can do it. I’ve shown competence. I’ve shown aptitude. And potentially, I’ve shown enough to start getting a look from a potential employer, or for a potential internship. It’s an entry rung on the ladder.” That’s really what we’re going after: the recognition that it can be a daunting task to get somebody all the way up to technical competence. A pre-professional stepping stone could really help make IT seem like a more realistic career option for a lot of folks.

Swapnil Bhartiya: If you look at open source, we all know that a lot of core developers and maintainers have no formal training. Somebody was a doctor and suddenly became the maintainer of a major open source project. But when we look at this whole “serving the enterprise space”, why do we need formal training when you can just go online and learn everything on your own?

Clyde Seepersad: That’s true. It reminds me of the last time I went to the doctor: he had a cartoon printed out on the wall that said, “Your Google search is not as good as my medical degree.” This is not just a technology problem. The explosion of information on the Internet has made it possible to access a lot of knowledge and a lot of information. What it doesn’t do is make that access easy and structured. So there are always going to be folks, just as there have been historically, who can go between the documentation, the discussion boards, and the YouTube videos and figure it out for themselves. Our perspective is: that’s great. Those people probably don’t need our help, but they’re probably in the single digits as a percentage of people.

Most folks need more structure. They need more guidance. They need labs they can work through, with a solution available so that if they get stuck, they can look and say, “Oh, that’s it, I forgot to open that port.” It’s not that training brings any dramatic new content to the table. What it does is create a structured path through a structured set of exercises, with help available if you get stuck; we have discussion boards and different forums for providing that help. It’s not that you couldn’t do it by scouring the web. It’s that for the vast majority of people, that approach takes an already daunting topic and makes it just impossible, right? We’ve got to put out the breadcrumbs to help people find the way. That’s where we focus. This information exists, but it doesn’t exist in a way that most people can digest, wrap their heads around, and stay committed to a path of getting from here to there. That’s what the training program does: it helps people find the path to where they want to go without having to invent the path by themselves.

Swapnil Bhartiya: Right. Also, the reason you need this structured training is that you’re going to serve a particular industry; you’re not just learning something. There’s a big difference between learning about something and serving a specific industry. There are a lot of challenges and a lot of sets of procedures. So yes, it does play a very big role. You can learn everything yourself, but you should go through that specific training to prepare yourself for the job. Now, the Linux Foundation does a lot of work in this training space. Can you give a few examples of the work you’re doing to help close that talent gap? The Linux Foundation also publishes a report every year where we see such a huge gap between supply and demand of talent.

Clyde Seepersad: Correct. And we’re actually going to publish the newest version of that report, the 2020 Open Source Jobs Report, shortly. I’ll give you a sneak preview. Even with the pandemic going on, more than 50% of the respondents said that they’re going to be hiring entry-level talent. And it’s really because there are only so many times you can go to LinkedIn and try to poach somebody, right? Companies have realized that it’s a zero-sum game. You’re going to have to build and grow talent in-house, especially if you’re taking legacy workloads and trying to make them cloud-native and move them into the cloud, right? Getting brand-new people is not necessarily going to be the best way to make that happen. At the LF, what we’re doing is saying, “You need a portfolio of solutions to try to help fill that gap in the market.”

So, we do things like the Intro to Linux course I was talking about, which is available for free on the web; anybody can sign up without paying a dime. We have new exams like this entry-level certification exam. We’ve got instructor-led training for folks who want that. We’ve got affordable e-learning options. We recently put together some tech bootcamp programs that add an extra layer of instructor support. We recognize that there is no one silver bullet. It’s a portfolio of different offerings to meet different people who are in different places: how do we create solutions that help them find a path to where they want to go, with the right level of intensity, the right level of support, and, importantly, the right availability and affordability? Because affordability, in reality, is a barrier for a lot of folks. Not everybody can drop $10,000 on a coding bootcamp.

Swapnil Bhartiya: That also made me think: how do you help individuals meet their own educational goals? As you said, sometimes you need so many resources.

Clyde Seepersad: Yeah. The structured training programs help because they let folks see that there is a sequence in which they can learn and grow. It’s also helpful for them to get into the discussion boards we provide and engage, not just with the instructors, but with the other people in the programs, and figure out, “These are the challenges we’re all facing; I’m not alone in this. Other people are stuck in similar places.” Just as with the new certified IT associate exam, folks see that they’re not alone, and making it easy for them to access help is an important part of making this accessible. Ultimately, what we want is to create a pathway where people can succeed and where the barriers to entry come down.

A lot of that is around building the community, the affordability, the accessibility, and coming from a place where we are fortunate in the Foundation that we’re a nonprofit. Folks get that we’re not trying to appease shareholders. We really are a mission-driven organization and I think that also helps give people the confidence that the agenda here really is to expand the talent pool. It really is to try to help folks. I think the mantra for my team has been, “Great code alone can’t change the world.” You still need people in there implementing systems, implementing solutions, providing support. So, the open source revolution does need a talent revolution to help sustain it.

Swapnil Bhartiya: Now, we did touch upon this point at different points, but do you need to have a specific qualification, be in a specific location, or be of a certain age to join these training programs?

Clyde Seepersad: No. As my training director likes to joke, we try to go down to the “what is a file” level. If you look at our Intro to Linux course, for instance, it really starts by saying, “What’s an OS? What’s a file? How do you install it?” And the beauty of doing this as self-paced learning is that it allows people to skip ahead. Usually, you can look at the outline and figure out, “Oh, okay, Chapter 7 is where my journey needs to start.” So, it allows people to opt into a training program and find their level, but it also allows people who truly are new to this to find an accessible path in.

Swapnil Bhartiya: Awesome. Clyde, thank you so much for taking time out today to talk about training and certification. I look forward to talking to you again. Thank you.

Clyde Seepersad: Same here. I really appreciate you having me, Swapnil.





New Training Course Provides a Deep Dive Into Node.js Services Development

Tuesday 20th of October 2020 05:29:31 PM

The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced the availability of a new training course, LFW212 – Node.js Services Development.

LFW212, developed in conjunction with the OpenJS Foundation, is geared toward developers on their way to senior level who wish to master and demonstrate their Node.js knowledge and skills, in particular how to use Node with frameworks to rapidly and securely compose servers and services. This course provides a deep dive into Node core HTTP clients and servers, web servers, RESTful services, and web security essentials.

Source: Linux Foundation Training


Goldman Sachs Open Sources its Data Modeling Platform through FINOS

Monday 19th of October 2020 10:36:51 PM

The Linux Foundation has announced that FINOS has started a new open source project, Legend, a data management and governance platform contributed by Goldman Sachs:

The Fintech Open Source Foundation (“FINOS“), together with platinum member Goldman Sachs (GS), today announced the launch of Legend, Goldman’s flagship data management and data governance platform. Developed internally and used by engineers and non-engineers alike across all divisions of the bank, the source code for five of the platform’s modules has today been made available as open source within FINOS.

Today’s launch comes on the heels of the completion of a six-month pilot in which other leading investment banks, such as Deutsche Bank, Morgan Stanley and RBC Capital Markets, used a shared version of Legend, hosted on FINOS infrastructure in the public cloud, to prototype interbank collaborative data modeling and standardization, in particular to build extensions to the Common Domain Model (CDM), developed by the International Swaps and Derivatives Association (ISDA). This shared environment is, starting today, generally available for industry participants to use and build models collaboratively. With the Legend code now available as open source, organizations may also launch and operate their own instances. The components open-sourced today allow any individual and organization across any industry to harness the power of Goldman Sachs’ internal data platform for their own data management and governance needs, as well as contribute to the open code base.

“Legend provides both engineers and non-engineers a single platform that allows everyone at Goldman Sachs to develop data-centric applications and data-driven insights,” said Atte Lahtiranta, chief technology officer at Goldman Sachs. “The platform allows us to serve our clients better, automate some of the most difficult data governance challenges, as well as provide self-service tools to democratize data and analytics. We anticipate that the broad adoption of Legend will bring real, tangible value for our clients as well as greater standardization and efficiency across the entire financial services ecosystem.”

Read more at the Linux Foundation


Introducing the Open Governance Network Model

Thursday 15th of October 2020 01:00:34 PM

The Linux Foundation has long served as the home for many of the world’s most important open source software projects. We act as the vendor-neutral steward of the collaborative processes that developers engage in to create high-quality and trustworthy code. We also work to build the developer and commercial communities around that code to support each project’s members. We’ve learned that finding ways for all sorts of companies to benefit from using and contributing back to open source software development is key to a project’s sustainability.

Over the last few years, we have also added a series of projects focused on lightweight open standards efforts — recognizing the critical complementary role that standards play in building the open technology landscape. Linux would not have been relevant if not for POSIX, nor would the Apache HTTPD server have mattered were it not for the HTTP specification. And just as with our open source software projects, commercial participants’ involvement has been critical to driving adoption and sustainability.

On the horizon, we envision another category of collaboration, one which does not have a well-established term to define it, but which we are today calling “Open Governance Networks.” Before describing it, let’s talk about an example.

Consider ICANN, the agency that arose from demands to move the global domain name system (DNS) out of its single-vendor control by Network Solutions. With ICANN, DNS became more vendor-neutral, international, and accountable to the Internet community. It evolved to develop and manage the “root” of the domain name system, independent from any company or nation. ICANN’s control over the DNS comes primarily through its establishment of an operating agreement among domain name registrars that establishes rules for registrations, guarantees your domain names are portable, and provides a uniform dispute resolution policy (the UDRP) for times when a domain name conflicts with an established trademark or causes other issues.

ICANN is not a standards body; they happily use the standards for DNS developed at the IETF. Nor do they create software, other than software incidental to their mission; perhaps they also fund some DNS software development, but that’s not their core. ICANN is not where all DNS requests go to get resolved to IP addresses, nor even where everyone goes to register their domain name — that is all pushed to registrars and distributed name servers. In this way, ICANN is not fully decentralized but practices something you might call “minimum viable centralization.” Its management of the DNS has not been without critics, but by pushing as much of the hard work to the edge and focusing on being a neutral core, they’ve helped the DNS and the Internet achieve a degree of consistency, operational success, and trust that would have been hard to imagine building any other way.

There are similar organizations that interface with open standards and software but perform governance functions. A prime example is the CA/Browser Forum, which governs the root certificates underpinning the SSL/TLS web security infrastructure.

Do we need such organizations? Can’t we go completely decentralized? While some cryptocurrency networks claim not to need formal human governance, it’s clear that there are governance roles performed by individuals and organizations within those communities. Quite a bit of governance can be automated via smart contracts, but repairing damage from exploits of them, promoting the platform’s adoption to new users, onboarding new organizations, or even coordinating hard fork upgrades still require humans in the mix. And this is especially important in environments where competitors need to participate in the network to succeed, but do not trust any one competitor to make the decisions.

Network governance is not a solved problem

Network governance is not just an issue for the technical layers. As one moves up the stack into more domain-specific applications, it turns out that there are network governance challenges up here as well, which look very familiar.

Consider a typical distributed application pattern: supply chain traceability, where participants in the network can view, on a distributed database or ledger, the history of the movement of an object from source to destination, and update the network when they receive or send an object. You might be a raw materials supplier, a manufacturer, a distributor, or a retailer. In any case, you have a vested interest not only in being able to trust this distributed ledger as an accurate and faithful representation of the truth. You also want the version you see to be the same ledger everyone else sees, to be able to write to it fairly, and to understand what happens if things go wrong. Achieving all of these desired characteristics requires network governance!

You may be thinking that none of this is strictly needed if only everyone agreed to use one organization’s centralized database as the system of record. Perhaps that is a company like eBay, Amazon, Airbnb, or Uber. Or perhaps a non-profit charity or government agency could run this database for us. There are some great examples of shared databases managed by non-profits, such as Wikipedia, run by the Wikimedia Foundation. That model might work for a distributed, crowdsourced encyclopedia, but would it work for a supply chain?

This participation model requires everyone engaging in the application ecosystem to trust that singular institution to perform a very critical role, and not be hacked, corrupted, or otherwise use that position of power to unfair ends. There is also the need to trust that the entity will not become insolvent or otherwise unable to meet the community’s needs. How many Wikipedia entries have been hijacked or subject to “edit wars” that go on forever? Could a company trust such an approach for its supply chain? Probably not.

Over the last ten years, we’ve seen the development of new tools that allow us to build better-distributed data networks without that critical need for a centralized database or institution holding all the keys and trust. Most of these new tools use distributed ledger technology (“DLT”, or “blockchain”) to build a single source of truth across a network of cooperating peers, and embed programmatic functionality as “smart contracts” or “chaincode” across the network.
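The “smart contract” idea can be sketched roughly as follows, with invented names and rules rather than any real chaincode API: the business logic is a pure, deterministic function, so honest peers that replay the same transaction log from the same starting state converge on the same ledger without a central database.

```javascript
// A toy contract: transfer units between accounts, rejecting invalid moves.
// Deterministic and side-effect free, so every peer computes the same result.
function transfer(state, tx) {
  const balances = { ...state }; // never mutate the prior state
  const from = balances[tx.from] ?? 0;
  if (tx.amount <= 0 || from < tx.amount) {
    throw new Error(`rejected: ${tx.from} cannot send ${tx.amount}`);
  }
  balances[tx.from] = from - tx.amount;
  balances[tx.to] = (balances[tx.to] ?? 0) + tx.amount;
  return balances;
}

// Two independent "peers" replay the same ordered transaction log...
const log = [
  { from: 'mint', to: 'alice', amount: 10 },
  { from: 'alice', to: 'bob', amount: 4 },
];
const genesis = { mint: 100 };
const peerA = log.reduce(transfer, genesis);
const peerB = log.reduce(transfer, genesis);

// ...and arrive at identical state: that shared agreement is the
// "single source of truth", once consensus has ordered the log.
console.log(JSON.stringify(peerA) === JSON.stringify(peerB), peerA);
```

What the sketch leaves out is exactly what needs governing: who may submit transactions, how the log’s order is agreed, and how the contract code itself gets upgraded.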

The Linux Foundation has been very active in DLT, first with the launch of Hyperledger in December of 2015. The Trust Over IP Foundation, launched earlier this year, focused on the application of self-sovereign identity, in many cases using a DLT as the underlying utility network.

Because these efforts have focused on software, they left the development, deployment, and management of these DLT networks to others. Hundreds of such networks built on top of Hyperledger’s family of protocol frameworks have launched, some of which (like the Food Trust Network) have grown to hundreds of participating organizations. Many of these networks were never intended to extend beyond an initial set of stakeholders, and those are seeing very successful outcomes.

However, many of these networks need a critical mass of industry participants and have faced difficulty achieving their goal. A frequently cited reason is the lack of clear or vendor-neutral governance of the network. No business wants to place its data, or the data it depends upon, in the hands of a competitor; and many are wary even of non-competitors if it locks down competition or creates a dependency on a market participant. For example, what if the company doesn’t do well and decides to exit this business segment? And at the same time, for most applications, you need a large percentage of any given market to make it worthwhile, so addressing these kinds of business, risk, or political objections to the network structure is just as important as ensuring the software works as advertised.

In many ways, this resembles the evolution of successful open source projects, where developers working at a particular company realize that just posting their source code to a public repository isn’t sufficient. Nor even is putting their development processes online and saying “patches welcome.”

To take an open source project to the point where it becomes the reference solution for the problem being solved and can be trusted for mission-critical purposes, you need to show how its governance and sustainability are not dependent upon a single vendor, corporate largess, or charity. That usually means a project looks for a neutral home at a place like the Linux Foundation, to provide not just that neutrality, but also competent stewarding of the community and commercial ecosystem.

Announcing LF Open Governance Networks

To address this need, today, we are announcing that the Linux Foundation is adding “Open Governance Networks” to the types of projects we host. We have several such projects in development that will be announced before the end of the year. These projects will operate very similarly to the Linux Foundation’s open source software projects, but with some additional key functions. Their core activities will include:

  • Hosting a technical steering committee to specify the software and standards used to build the network, monitor the network’s health, and coordinate upgrades, configurations, and critical bug fixes
  • Hosting a policy and legal committee to specify a network operating agreement that organizations must agree to before connecting their nodes to the network
  • Running an identity system for the network, so participants can trust that other participants are who they say they are, monitoring the network for health, and taking corrective action if required
  • Building out a set of vendors who can be hired to deploy peers-as-a-service on behalf of members, in addition to allowing members’ technical staff to run their own if preferred
  • Convening a Governing Board composed of sponsoring members who oversee the budget and priorities
  • Advocating for the network’s adoption by the relevant industry, including engaging relevant regulators and secondary users who don’t run their own peers
  • Potentially managing an open “app store” approach to offering vetted, re-usable, deployable smart contracts and add-on apps for network users

These projects will be sustained through membership dues set by the Governing Board on each project, which will be kept to what’s needed for self-sufficiency. Some may also choose to establish transaction fees to compensate operators of peers if usage patterns suggest that would be beneficial. Projects will have complete autonomy regarding technical and software choices – there are no requirements to use other Linux Foundation technologies.

To ensure that these efforts live up to the word “open” and the Linux Foundation’s pedigree, the vast majority of technical activity on these projects, and development of all required code and configurations to run the software that is core to the network will be done publicly. The source code and documentation will be published under suitable open source licenses, allowing for public engagement in the development process, leading to better long-term trust among participants, code quality, and successful outcomes. Hopefully, this will also result in less “bike-shedding” and thrash, better visibility into progress and activity, and an exit strategy should the cooperation efforts hit a snag.

Depending on the industry it serves, the ledger itself might or might not be public. It may contain information authorized for sharing only between the parties involved on the network, or need to account for GDPR or other regulatory compliance. However, we will certainly encourage long-term approaches that do not treat the ledger data as sensitive. Also, an organization must be a member of the network to run peers on the network, to see the ledger, and especially to write to it or participate in consensus.

Across these Open Governance Network projects, there will be a shared operational, project management, marketing, and other logistical support provided by Linux Foundation personnel who will be well-versed in the platform issues and the unique legal and operational issues that arise, no matter which specific technology is chosen.

These networks will create substantial commercial opportunity:

  • For software companies building DLT-based applications, this will help you focus on the truly value-delivering apps on top of such a shared network, rather than the mechanics of forming these networks.
  • For systems integrators, DLT integration with back-office databases and ERP is expected to grow to be billions of dollars in annual activity.
  • For end-user organizations, the benefits of automating thankless, non-differentiating, perhaps even regulatorily-required functions could result in huge cost savings and resource optimization.

For those organizations acting as governing bodies on such networks today, we can help you evolve those projects to reach an even wider audience while taking off your hands the low margin, often politically challenging, grunt work of managing such networks.

And for those developers concerned before about whether such “private” permissioned networks would lead to dead cul-de-sacs of software and wasted effort or lost opportunity, having the Linux Foundation’s bedrock of open source principles and collaboration techniques behind the development of these networks should help ensure success.

We also recognize that not all networks should be under this model. We expect a diversity of approaches that will be long term sustainable, and encourage these networks to find a model that works for them. Let’s talk to see if it would be appropriate.

LF Governance Networks will enable our communities to establish their own Open Governance Network and have an entity to process agreements and collect transaction fees. This new entity is a Delaware nonprofit nonstock corporation that will maximize utility, not profit. Through agreements with the Linux Foundation, LF Governance Networks will be available to Open Governance Networks hosted at the Linux Foundation.

If you’re interested in learning more about hosting an Open Governance Network at the Linux Foundation, please contact us at



The post Introducing the Open Governance Network Model appeared first on The Linux Foundation.


Why Congress should invest in open-source software (Brookings)

Wednesday 14th of October 2020 04:30:36 PM

Frank Nagle at Brookings writes:

As the pandemic has highlighted, our economy is increasingly reliant on digital infrastructure. As more and more in-person interactions have moved online, products like Zoom have become critical infrastructure supporting business meetings, classroom education, and even congressional hearings. Such communication technologies build on FOSS and rely on the FOSS that is deeply ingrained in the core of the internet. Even grocery shopping, one of the strongholds of brick and mortar retail, has seen an increased reliance on digital technology that allows higher-risk shoppers to pay someone to shop for them via apps like InstaCart (which itself relies on, and contributes to, FOSS).


Read more at Brookings


Sysadmin careers: the correlation between mentors and success

Tuesday 13th of October 2020 01:30:00 PM

Click to Read More at Enable Sysadmin


Open Source Processes Driving Software-Defined Everything (LinuxInsider)

Monday 12th of October 2020 10:13:07 PM

Jack Germain writes at LinuxInsider:

The Linux Foundation (LF) has been quietly nudging an industrial revolution. It is instigating a unique change towards software-defined everything that represents a fundamental shift for vertical industries.

LF on Sept. 24 published an extensive report on how software-defined everything and open-source software is digitally transforming essential vertical industries worldwide.

“Software-defined vertical industries: transformation through open source” delves into the major vertical industry initiatives served by the Linux Foundation. It highlights the most notable open-source projects and why the foundation believes these key industry verticals, some over 100 years old, have transformed themselves using open source software.

Digital transformation refers to a process that turns all businesses into tech businesses driven by software. This change towards software-defined everything is a fundamental shift for vertical industry organizations, many of which typically have small software development teams relative to most software vendors.

Read more at LinuxInsider


Linux interface analytics on-demand with iftop

Saturday 10th of October 2020 07:13:56 AM

Click to Read More at Enable Sysadmin


Deconstructing an Ansible playbook

Saturday 10th of October 2020 06:20:58 AM

Click to Read More at Enable Sysadmin


Kubernetes basics for sysadmins

Friday 9th of October 2020 05:19:32 AM

Click to Read More at Enable Sysadmin


Amundsen: one year later (Lyft Engineering)

Thursday 8th of October 2020 08:09:55 PM

On October 30, 2019, we officially open sourced Amundsen, our solution to metadata catalog and data discovery challenges. Ten months later, Amundsen joined Linux Foundation AI (LF AI) as an incubation project.

In almost every modern data-driven company, each interaction with the platform is powered by data. As data resources are constantly growing, it becomes increasingly difficult to understand what data resources exist, how to access them, and what information is available in those sources without tribal knowledge. Poor understanding of data leads to bad data quality, low productivity, duplication of work, and most importantly, a lack of trust in the data. The complexity of managing a fragmented data landscape is not just a problem unique to Lyft, but a common one that exists throughout the industry.

In a nutshell, Amundsen is a data discovery and metadata platform for improving the productivity of data analysts, data scientists, and engineers when interacting with data. By indexing data resources (tables, dashboards, users, etc.) and powering a page-rank-style search based on usage patterns (e.g., highly queried tables show up earlier than less-queried tables), users are able to address their data needs faster.
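Amundsen's actual ranking pipeline is more sophisticated, but the core idea of usage-weighted search can be sketched in a few lines of Python (the table names and query log below are made up for illustration, not Amundsen code):

```python
from collections import Counter

# Hypothetical query log: one entry per table access.
query_log = [
    "core.rides", "core.rides", "core.rides",
    "core.drivers", "core.drivers",
    "tmp.scratch",
]

def rank_tables(log):
    """Order tables by query frequency so that highly queried
    tables surface before rarely queried ones."""
    return [table for table, _ in Counter(log).most_common()]

print(rank_tables(query_log))  # most-queried table first
```

In a real deployment this usage signal would be blended with text relevance and freshness rather than used on its own.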

Read more at Lyft Engineering


How to install and set up SeedDMS

Wednesday 7th of October 2020 08:19:45 PM

Click to Read More at Enable Sysadmin


Telcos Move from Black boxes to Open Source

Wednesday 7th of October 2020 02:24:06 PM

Linux Foundation Networking (LFN) organized its first virtual event last week and we sat down with Arpit Joshipura, the General Manager of Networking, IoT and Edge at the Linux Foundation, to talk about the key points of the event and how LFN is leading the adoption of open source within the telco space. 

Swapnil Bhartiya: Today, we have with us Arpit Joshipura, General Manager of Networking, IoT and Edge, at the Linux Foundation. Arpit, what were some of the highlights of this event? Some big announcements that you can talk about?

Arpit Joshipura: This was a global event with more than 80 sessions, attended by people from over 75 countries. The sessions were very diverse. A lot of them were end-user driven and operator driven, as well as from our vendors and partners. If you take LF Networking and LF Edge as the two umbrellas leading the Networking and Edge implementations here, we had some very significant announcements. I would group them into 5 main things:

Number one, we released a white paper at the Linux Foundation level where we had a bunch of vertical industries transformed using open source. These are over 100-year-old industries like telecom, automotive, finance, energy, healthcare, etc. So, that’s kind of one big announcement where vertical industries have taken advantage of open source.

The second announcement was easy enough: Google Cloud joins Linux Foundation Networking as a partner. That announcement comes on the basis of the telecom market and the cloud market converging and building on each other.

The third major announcement was a project under LF Networking. If you remember, two years ago, a project collaboration with GSMA was started. It was called CNTT, which really defined and narrowed the scope of interoperability and compliance. And we have OPNFV under LFN. What we announced at Open Networking and Edge Summit is that the two projects are going to come together. This is fantastic for a global community of operators who are simplifying the deployment and interoperability of NFVI, VNFs, and CNFs.

The next announcement was around a research study that we released on the open source code created by Linux Foundation Networking, using LFN analytics and COCOMO estimation. We're talking about $7.2 billion worth of IP investment, right? This is the power of shared technology.
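The study's exact methodology isn't detailed here, but COCOMO-style estimation works roughly like this: effort grows slightly super-linearly with code size, and a dollar figure follows from an assumed labor rate. A minimal sketch using the classic basic-COCOMO "organic mode" constants (the KLOC figure and labor rate below are illustrative assumptions, not numbers from the study):

```python
def cocomo_effort(kloc, a=2.4, b=1.05):
    """Basic COCOMO effort estimate in person-months
    (a and b are the classic organic-mode constants)."""
    return a * kloc ** b

# Illustrative only: a 500 KLOC codebase at an assumed
# $15,000 fully loaded cost per person-month.
effort_pm = cocomo_effort(500)
cost = effort_pm * 15_000
print(f"{effort_pm:.0f} person-months, ~${cost / 1e6:.1f}M")
```

Scaling the same arithmetic across the tens of millions of lines in the LFN projects is how studies like this arrive at multi-billion-dollar valuations.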

And finally, we released a survey of the Edge community asking, “Why are you contributing to open source?” And the answers were fascinating. They were all around innovation, speed to deployment, and market creation. Yes, cost was important, but not initially.

So those were the 5 big highlights of the show from an LFN and LF Edge perspective.

Swapnil Bhartiya: There are two things that I’m interested in. One is the consolidation that you talked about, and the second is the survey. The fact is that everybody is using open source. There is no doubt about it. But since everybody is using it, there seems to be a gap in awareness of how to be a good open source citizen as well. What have you seen in the telco space?

Arpit Joshipura: First of all, 5 years ago, they were all using black-box and proprietary technologies. Then, we launched a project called OpenDaylight. OpenDaylight announced its 13th release today, roughly on its 6-year anniversary, and the journey has gone from proprietary technology to one of the more active projects today, ONAP. Telcos are 4 of the top 10 contributors of source code in open source, right? Who would have imagined that AT&T, Verizon, Amdocs, DT, Vodafone, China Mobile, China Telecom, you name it, are all actively contributing code? So that’s a paradigm shift in terms of not only consuming it, but also contributing towards it.

Swapnil Bhartiya: And since you mentioned ONAP, if I’m not wrong, I think AT&T released its own work as ECOMP. And then the projects within the Foundation were merged to create ONAP. And you also mentioned CNTT. So, what I want to understand from you is how many projects are there within the Foundation? The Linux Foundation and all those other foundations are open, so they are a very good place for those projects to come in. It’s obvious that there will be some projects that overlap. So what is the situation right now? Where do you see some overlap happening and, at the same time, are there still gaps that you need to fill?

Arpit Joshipura: So that’s a question of the philosophies of a foundation, right? I’ll start off with the most loose situation, which is GitHub. Millions and millions of projects on GitHub. Any PhD student can throw his code on GitHub and say that’s open source and at the end of the day, if there’s no community around it, that project is dead. Okay. That’s the most extreme scenario. Then, there are foundations like CNCF who have a process of accepting projects that could have competing solutions. May the best project win.

From an LF Networking and LF Edge perspective, the process is a little more restrictive: there is a formal project life cycle document and a process available on the wiki that looks at the complementary nature of the project, at the ecosystem, and at how it will enable and foster innovation. Then, based on that, the governing board and the neutral governance that we have set up under the Linux Foundation would approve it.

Overall, it depends on the philosophy for LFN and LF Edge. We have 8 projects under each umbrella, and most of these projects are quite complementary when it comes to solving different use cases in different parts of the network.

Swapnil Bhartiya: Awesome. Now, I want to talk about 5G a bit. I did not hear any announcements, but can you talk a bit about what work is going on to help the further deployment of 5G technologies?

Arpit Joshipura: Yeah. I’m happy and sad to say that 5G is old news, right? The reality is all of the infrastructure work on 5G was already released earlier this year. The ONAP Frankfurt release, for example, has a blueprint on 5G slicing, right? All the work has been done, with lots of blueprints in Akraino using 5G and MEC. So, that work is done. The cities are getting lit up by the carriers. You see announcements from global carriers on 5G deployments. I think there are 2 missing pieces of work remaining for 5G.

One is obviously the O-RAN support, right? The O-RAN Software Community, which we host at the Linux Foundation, is also coming up with its second release, and all the support for 5G is in there.

The second part of 5G is really the compliance and verification testing. A lot of work is going into CNTT and OPNFV. Remember that merged project we talked about? There, 5G is in the context of not just OpenStack, but also Kubernetes. So the cloud-native aspects of 5G are all being worked on this year. I think we’ll see a lot more cloud-native 5G deployments next year, primarily because cloud-native projects like ONAP integrate with platforms like Anthos or Azure Stack and things like that.

Swapnil Bhartiya: What are some of the biggest challenges that the telco industry is facing? Virtualization and all those things were already there, and the foundations have solved those problems, but what rough edges are still there that you’re trying to resolve for them?

Arpit Joshipura: Yeah. I think the recent pandemic caused a significant change in the telcos’ thinking, right? Fortunately, because they had already started on a virtualization and open-source route, you heard from operators like Deutsche Telekom and others that they were able to handle the change in network traffic, the change in traffic direction, SLA workloads, etc., right? All because of the softwarization, as we call it, of the network.

Given the pandemic, I think the first challenge for them was: can the network hold up? And the answer is yes, right? All the work from home, all this video conferencing and hanging out on the web, and the network held up. That was number one.

Number two is it’s good to hold up the network, but did I end up spending millions and millions of dollars for operational expenditures? And the answer to that is no, especially for the telcos who have embraced an open-source ecosystem, right? So people who have deployed projects like SDN or ONAP or automation and orchestration or closed-loop controls, they automatically configure and reconfigure based on workloads and services and traffic, right? And that does not require manual labor, right? Tremendous amounts of costs were saved from an opex perspective, right?
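The closed-loop controls Arpit describes are far richer in practice (ONAP’s policy-driven loops span monitoring, analytics, and orchestration), but the core scaling rule behind many such loops can be sketched simply. This is an illustrative proportional rule in the spirit of common autoscalers, not ONAP code:

```python
import math

def desired_replicas(current, load_per_replica, target_load):
    """Proportional closed-loop rule: pick a replica count that
    brings the per-replica load back toward the target."""
    return max(1, math.ceil(current * load_per_replica / target_load))

# Traffic doubles: per-replica load hits 100 against a target of 50,
# so the loop scales 4 replicas up to 8 with no manual intervention.
print(desired_replicas(current=4, load_per_replica=100, target_load=50))  # → 8
```

Running a rule like this continuously is what replaces the manual reconfiguration, and the opex, that operators with older stacks still pay for.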

Operators who are still in the old mindset have significantly increased their opex, and that has caused a real strain on their budget sheets.

So those were the 2 big things that we felt were challenges, but have been solved. Going forward, now it’s just a quick rollout/build-out of 5G, expanding 5G to Edge, and then partnering with the public cloud providers, at least, here in the US to bring the cloud-native solutions to market.

Swapnil Bhartiya: Awesome. Now, Arpit, if I’m not wrong, LF Edge is, I think, going to celebrate its second anniversary in January. What do you feel the project has achieved so far? What are its accomplishments? And what are some challenges that the project still has to tackle?

Arpit Joshipura: Let me start off with the most important accomplishment as a community and that is terminology. We have a project called State of the Edge and we just issued a white paper, which outlines terminology, terms and definitions of what Edge is because, historically, people use terms like thin edge, thick edge, cloud edge, far edge, near edge and blah, blah, blah. They’re all relative terms. Okay. It’s an edge in relation to who I am.

Instead of that, the paper now defines absolute terms. If I give you a quick example, there are really 2 kinds of edges. There’s a device edge, and then there is a service provider edge. A device edge is really controlled by the operator, by the end user, I should say. Service provider edge is really shared as a service and the last mile typically separates them.

Now, if you double-click on each of these categories, you have several incarnations of an edge. You can have an extremely constrained edge, microcontrollers, etc., mostly manufacturing, IIoT type. You could have a smart device edge like gateways, etc. Or you could have an on-prem server-type device edge. Either way, an end user controls that edge, versus the other edge, whether it’s at the radio base stations or in a smart central office, which the operator controls. So that’s kind of the first accomplishment, right? Standardizing on terminology.

The second big Edge accomplishment is around 2 projects: Akraino and EdgeX Foundry. These are stage 3 mature projects. They have come out with significant [results]. Akraino, for example, has come out with 20 plus blueprints. These are blueprints that actually can be deployed today, right? Just to refresh, a blueprint is a declarative configuration that has everything from end to end to solve a particular use case. So things like connected classrooms, AR/VR, connected cars, right? Network cloud, smart factories, smart cities, etc. So all these are available today.
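To make “declarative configuration” concrete, here is a toy blueprint modeled as a Python dictionary. The field names and values are invented for illustration; Akraino blueprints use their own schemas and tooling:

```python
# Toy end-to-end blueprint for one use case (illustrative schema only).
blueprint = {
    "name": "connected-classroom",
    "use_case": "AR/VR classroom at the device edge",
    "infrastructure": {"os": "ubuntu-18.04", "kubernetes": "1.18"},
    "workloads": ["media-server", "latency-monitor"],
    "validation": ["lint", "security-scan", "end-to-end-tests"],
}

def is_deployable(bp):
    """A blueprint is only deployment-ready if it declares the
    full end-to-end picture, not just a list of components."""
    required = {"name", "infrastructure", "workloads", "validation"}
    return required.issubset(bp)

print(is_deployable(blueprint))  # → True
```

The point of the declarative form is that the same description can be validated, versioned, and deployed repeatedly, which is what makes a blueprint reusable across sites.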

EdgeX is the IoT framework for an industrial setup, and it is kind of the most downloaded. Those 2 projects, along with Fledge, EVE, Baetyl, Home Edge, Open Horizon, Secure Device Onboard, NSoT, right? Very, very strong growth, over 200% growth in terms of contributions. Huge growth in membership, huge growth in new projects and the community overall. We’re seeing that Edge is really picking up. Remember, I told you Edge is 4 times the size of the cloud. So, everybody is in it.

Swapnil Bhartiya: Now, the second part of the question was also some of the challenges that are still there. You talked about accomplishment. What are the problems that you see that you still think that the project has to solve for the industry and the community?

Arpit Joshipura: The fundamental challenge that remains is that we’re still working as a community in different markets. I think the vendor ecosystem is trying to figure out who is the customer and who is the provider, right? Think of it this way: a carrier, for example AT&T, could be a provider to a manufacturing factory, which could in turn consume something from another provider and then ship it to an end user. So there’s a value shift, if you will, in the business world, on who gets the cut. That’s still a challenge. I think people who are quick to define, solve, and implement solutions using open technology will probably turn out to be winners.

People who just do analysis paralysis will be left behind, like in any other industry. I think that is fundamentally number one. And number two is the speed at which we want to solve things. The pandemic has just accelerated the need for Edge and 5G. People are eager to get gaming with low latency, predictive maintenance in manufacturing with low latency, home surveillance with low latency, connected cars, autonomous driving, all the classroom use cases. They would have been done next year, but because of the pandemic, it all just got accelerated.


New Training Course from Continuous Delivery Foundation Helps Gain Expertise with Jenkins CI/CD

Tuesday 6th of October 2020 04:19:58 PM

The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced the availability of a new training course, LFS267 – Jenkins Essentials.

LFS267, developed in conjunction with the Continuous Delivery Foundation, is designed for DevOps engineers, Quality Assurance personnel, SREs as well as software developers and architects who want to gain expertise with Jenkins for their continuous integration (CI) and continuous delivery (CD) activities.

Source: Linux Foundation Training


Quantum networks: The next generation of secure computing

Tuesday 6th of October 2020 12:43:08 PM

Click to Read More at Enable Sysadmin


Setting up a webserver to use HTTPS

Saturday 3rd of October 2020 10:12:21 PM

Click to Read More at Enable Sysadmin


Top five Vim plugins for sysadmins

Saturday 3rd of October 2020 07:34:00 PM

Click to Read More at Enable Sysadmin


September 2020 rewind

Friday 2nd of October 2020 07:26:42 AM

Click to Read More at Enable Sysadmin


Akraino: An Open Source Project for the Edge

Thursday 1st of October 2020 07:33:44 PM

Akraino is an open-source project designed for the Edge community to easily integrate open source components into their stack. It’s a set of open infrastructures and application blueprints spanning a broad variety of use cases, including 5G, AI, Edge IaaS/PaaS, IoT, for both provider and enterprise Edge domains. We sat down with Tina Tsou, TSC Co-Chair of the Akraino project to learn more about it and its community.

Here is a lightly edited transcript of the interview:

Swapnil Bhartiya: Today, we have with us Tina Tsou, TSC Co-Chair of the Akraino project. Tell us a bit about the Akraino project.

Tina Tsou: Yeah, Akraino is an Edge Stack project under LF Edge. Before Akraino, developers had to go to the upstream communities to download the upstream software components, then integrate and test them themselves. With the blueprint idea and concept, developers can go directly from a use case to a blueprint, do all the integration, and have it ready for end-to-end Edge deployment.

Swapnil Bhartiya: The blueprints are the critical piece of it. What are these blueprints and how do they integrate with the whole framework?

Tina Tsou: Based on a certain use case, we do the community CI/CD (continuous integration and continuous deployment). We also have proven security requirements. We do the community lab and the life cycle management. And then we get to production quality, which is deployment-ready.

Swapnil Bhartiya: Can you explain what the Edge computing framework looks like?

Tina Tsou: We have four segments: Cloud, Telco, IoT, and Enterprise. We have a framework for Edge compute in general, but for each segment it is slightly different. At the lower level, you have the network, the gateways, and the switches. Above that, you have all kinds of FPGAs and then the data plane. Then you have the controllers and orchestration, like the Kubernetes stack, and all kinds of applications running on bare metal, virtual machines, or containers. By the way, we also have the orchestration on the site.

Swapnil Bhartiya: And how many blueprints are there? Can you talk about it more specifically?

Tina Tsou: I think we have around 20-ish blueprints, but they are converged into blueprint families. We have a blueprint family for telco appliances, including Radio Edge Cloud and SEBA, which enables broadband access. We also have blueprints for Network Cloud, Integrated Edge Cloud, and Edge Lite IoT. Different blueprints in the same blueprint family can share the same software framework, which saves a lot of time and means we can deploy at a large scale.

Swapnil Bhartiya: The software components, which you already talked about in each blueprint, are they all in the Edge project or there are some components from external projects as well?

Tina Tsou: We have the philosophy of upstream first. If we can find it from the upstream community, we just directly take it from the upstream community and install and integrate it. If we find something that we need, we go to the upstream community to see whether it can be changed or updated there.

Swapnil Bhartiya: How challenging or easy is it to integrate these components together to build the stack?

Tina Tsou: It depends on which group and family we are talking about. I think most of them are at a middle level, not too easy, not too complex. But for the reference installation, like the YAML configuration files and building the ISO images, some parts may be more complex, and some parts are easy to download and integrate.

Swapnil Bhartiya: We have talked about the project. I want to talk about the community. So first of all, tell us what is the role of TSC?

Tina Tsou: We have a whole bunch of documentation on how the TSC runs if you want to read it. I think the role of the TSC is mostly technical steering. We have a chair and co-chair, and there are 6-7 subcommittees for specific topics like security, the technical community, CI, and the documentation process.

Swapnil Bhartiya: What kind of community is there around the Akraino project?

Tina Tsou: I think we have a pretty diverse community. We have the end users like the telcos and the hyperscalers, the internet companies, and also enterprise companies. Then we have the OEM/ODM vendors and the chip or SoC makers. Then we have the IP companies and even some universities.

Swapnil Bhartiya: Tina, thank you so much for taking the time today to explain the Akraino project and also about the blueprints, the community, and the roadmap for the project. I look forward to seeing you again to get more updates about the project.

Tina Tsou: Thank you for your time. I appreciate it.


More in Tux Machines

My Open Source meltdown, and the rise of a star

There comes a time when you feel that you don’t fit anywhere, where your ideas, principles, motivation, and struggles simply don’t align with anyone else’s. For years, I felt part of something that was larger than myself, had the motivation to use a huge part of my free time to contribute to projects and, in several cases, make personal sacrifices to help others, and even envisioned a future for myself in places where I thought it was impossible. It’s that struggle to find our place in this huge Open Source world that usually ends in personal meltdown and professional burnout. It’s no secret that as fast as technologies evolve, the faster we become obsolete, unless we dedicate most of our time to keeping up to date on every breakthrough. I’m not the exception to this, and after being an active contributor for almost 15 years, and then taking my “time off” to be a full-time mom and employee, what happened in the projects I used to contribute to left me feeling far outside my comfort zone. I’m grateful that most of the places where I’ve contributed have been because people ask for my help, and even after a long absence it was no different from before. Read more

What Linux needs to make it a better mobile desktop

I have a bit of a confession to make. Although Linux is my operating system of choice on the desktop, I tend to skip over my open source-powered laptop in favor of either a MacBook Pro or Chromebook when I’m working beyond my desk. I know...blasphemy, right? I’ve reached a point in my career and life where I need my tools to get my jobs done as efficiently as possible and without frustration or headache. To be absolutely fair, the primary reasons I overlook my one Linux laptop are that it’s too big and the keyboard is absolutely terrible. Given that I am a writer by profession, a bad keyboard can be a deal-breaker. Once again, in favor of honesty, the 2016 MacBook Pro keyboard isn’t much better. The “butterfly” keys are loud and way too prone to sticking. My 2015 Pixel was, at one point, an absolute dream machine, but the battery life is waning, and sometimes ChromeOS can be a bit flaky with the trackpad. Read more

Linux and open-source jobs are hotter than ever

The Linux Foundation and edX, the leading online course company, released the 2020 Open Source Jobs Report on October 26. Once again, despite the COVID-19 pandemic, the demand for open-source technology skills is growing. 37% of hiring managers say they will hire more IT professionals in the next six months. Specifically, 81% of hiring managers say hiring open source talent is a priority going forward, and 56% plan to increase their hiring of open source pros in the next six months. Why? The answer is simple. As a recent Red Hat survey found, 86% of IT leaders said the most innovative companies are using open-source software, citing higher quality solutions, lower cost of ownership, improved security, and cloud-native capabilities as the top reasons for usage. So, even in these bad times, the demand for open-source savvy is higher than ever. Read more

Vote for the Debian GNU/Linux 11 “Bullseye” Desktop Artwork Now

Open for submissions since early August, the artwork proposals for Debian GNU/Linux 11 “Bullseye,” the next major release of the popular Debian GNU/Linux operating system, reached their deadline last week on October 15th, and now the community can vote for the winner. Jonathan Carter announced today that it’s time for the Debian community to choose the desktop artwork to be used in Debian GNU/Linux 11 “Bullseye.” The review period for the final proposals runs from today, October 26th, until November 9th, and winners will be unveiled in mid-November. Read more