News For Open Source Professionals

TODO Group: Why Open Source matters to your enterprise

Tuesday 8th of September 2020 05:00:55 PM

There are many business reasons to use open source software. Many of today’s most significant business breakthroughs, including big data, machine learning, cloud computing, the Internet of Things, and streaming analytics, sprang from open source software innovations. Open source software often comes into an organization as the backbone of many essential devices, programs, platforms, and tools, such as robotics, sensors, IoT devices, automotive telematics, autonomous driving, edge computing, and big data computing. Open source code runs on many smartphones, laptops, servers, databases, and cloud infrastructures and services. Developers build most applications by leveraging frameworks like Node.js or pulling in libraries that have been tested and proven in many production use cases. To use almost any of these things is to use open source software in one form or another, and often in combination.

By using open source software, companies also avoid building everything from the ground up, saving time, money, and effort while getting more innovation from the investment. Open source software is also generally more secure than its proprietary commercial counterparts, due in large part to the collaborative nature of open source projects. A common phrase among open source developers and advocates is that “given enough eyeballs, all bugs are shallow.” That holds so long as there are “enough eyeballs,” which, given open source software’s adoption rate, may be challenging to have across all projects. Drawbacks do exist, as no software is perfect, not even open source software. However, for most organizations, the good far outweighs the bad. The codebase’s open nature also means bugs are easier to report and fix than under alternative models.

While open source software offers many reliable and provable business advantages, sometimes those advantages remain obscure to those who have not looked deeply into the topic, including many high-level decision-makers. This paper, published by the European Chapter of the TODO Group, aims to provide a balanced and quick overview of the business pros and cons of using open source software.

To download Why Open Source Matters to Your Enterprise, click the Download Whitepaper button below.

The post TODO Group: Why Open Source matters to your enterprise appeared first on The Linux Foundation.


TODO Group: Why Open Source Matters to Your Enterprise

Tuesday 8th of September 2020 10:00:29 AM

The TODO Group writes at the Linux Foundation blog:

While open source software offers many reliable and provable business advantages, sometimes those advantages remain obscure to those who have not looked deeply into the topic, including many high-level decision-makers. This paper, published by the European Chapter of the TODO Group, aims to provide a balanced and quick overview of the business pros and cons of using open source software.

Click here to read more at the Linux Foundation


Setting up port redirects in Linux with ncat

Tuesday 1st of September 2020 05:21:55 PM

Click to Read More at Enable Sysadmin


Developing an email alert system using a surveillance camera with Node-RED and TensorFlow.js

Tuesday 1st of September 2020 05:18:45 PM

In a previous article, we introduced a procedure for developing an image recognition flow using Node-RED and TensorFlow.js. Now, let’s apply what we learned and develop an email alert system that uses a surveillance camera together with image recognition. As shown in the following image, we will create a flow that automatically sends an email alert when a suspicious person is captured within a surveillance camera frame.

Objective: Develop the flow

In this flow, the image of the surveillance camera is periodically acquired from the webserver, and the image is displayed under the “Original image” node in the lower left. After that, the image is recognized using the TensorFlow.js node. The recognition result and the image with recognition results are displayed under the debug tab and the “image with annotation” node, respectively.

If a person is detected by image recognition, an alert email with the image file attached will be sent using the SendGrid node. Since it is difficult to set up a real surveillance camera, we will use a sample image from a surveillance camera in Kanagawa Prefecture, Japan, that is used to check the water level of a river.

We will explain the procedure for creating this flow in the following sections. For the Node-RED environment, use your local PC, a Raspberry Pi, or a cloud-based deployment.

Install the required nodes

Click the hamburger menu on the top right of the Node-RED flow editor, go to “Manage palette” -> “Palette” tab -> “Install” tab, and install the nodes used in this flow: the image preview node, the cocossd (TensorFlow.js) node, and the SendGrid node.

Create a flow of acquiring image data

First, create a flow that acquires the image binary data from the webserver. As in the flow below, place an inject node (the name will be changed to “timestamp” when placed in the workspace), http request node, and image preview node, and connect them with wires in the user interface.

Then double-click the http request node to change the node property settings.

Adjust http request node property settings


Paste the URL of the surveillance camera image to the URL on the property setting screen of the http request node. (In Google Chrome, when you right-click on the image and select “Copy image address” from the menu, the URL of the image is copied to the clipboard.) Also, select “a binary buffer” as the output format.

Execute the flow to acquire image data

Click the Deploy button at the top right of the flow editor, then click the button to the inject node’s left. Then, the message is sent from the inject node to the http request node through the wire, and the image is acquired from the web server that provides the image of the surveillance camera. After receiving the image data, a message containing the data in binary format is sent to the image preview node, and the image is displayed under the image preview node.

 An image of the river taken by the surveillance camera is displayed in the lower right.

Create a flow for image recognition of the acquired image data

Next, create a flow that analyzes what is in the acquired image. Place a cocossd node, a debug node (the name will be changed to msg.payload when you place it), and a second image preview node.

Then, connect the output terminal on the right side of the http request node to the input terminal on the left side of the cocossd node.

Next, connect the output terminal on the right side of the cocossd node to the debug node, and also connect the cocossd node’s output terminal to the input terminal on the left side of the image preview node.

Through the wire, the binary data of the surveillance camera image is sent to the cocossd node; after image recognition is performed using TensorFlow.js, the object name is displayed in the debug node and the image with the recognition result is displayed in the image preview node.

The cocossd node is designed to store the object name in the variable msg.payload, and the binary data of the image with the annotation in the variable msg.annotatedInput.
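As a concrete sketch, the message leaving the cocossd node can be pictured as an object with those two fields (the values below are placeholders for illustration, not real node output):

```javascript
// Illustrative shape of the message emitted by the cocossd node.
// The field names msg.payload and msg.annotatedInput are as described
// above; the values here are stand-ins, not actual recognition output.
const msg = {
    payload: "person",                              // detected object name
    annotatedInput: Buffer.from("image bytes here") // annotated image (binary)
};

// Downstream nodes read one field or the other, which is why the
// image preview node must be re-pointed from the default msg.payload
// to msg.annotatedInput.
console.log(Object.keys(msg)); // [ 'payload', 'annotatedInput' ]
```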

To make this flow work as intended, you need to double-click the image preview node used to display the image and change the node property settings.

Adjust image preview node property settings

By default, the image preview node displays the image data stored in the variable msg.payload. Here, change this default variable to msg.annotatedInput.

Adjust inject node property settings

Since the flow should run regularly, once every minute, the inject node’s properties need to be changed. In the Repeat pull-down menu, select “interval” and set “1 minute” as the time interval. Also, since we want the periodic run to start immediately after pressing the Deploy button, select the checkbox to the left of “inject once after 0.1 seconds”.

Run the flow for image recognition

The flow process will be run immediately after pressing the Deploy button. When the person (author) is shown on the surveillance camera, the image recognition result “person” is displayed in the debug tab on the right. Also, below the image preview node, you will see the image annotated with an orange square.

Create a flow for sending an email when a person is caught on the surveillance camera

Finally, create a flow to send the annotated image by email when the object name in the image recognition result is “person”. As a subsequent node of the cocossd node, place a switch node that performs condition determination, a change node that assigns values, and a sendgrid node that sends an email, and connect each node with a wire.

Then, change the property settings for each node, as detailed in the sections below.

Adjust the switch node property settings

Set the rule to execute the subsequent flow only if msg.payload contains the string “person”.

To set that rule, enter “person” in the comparison string for the condition “==” (on the right side of the “az” UX element in the property settings dialog for the switch node).

Adjust the change node property settings

To attach the annotated image to the email, copy the image data stored in the variable msg.annotatedInput into the variable msg.payload. First, open the pull-down menu of “az” on the right side of the UX element of “Target value” and select “msg.”. Then enter “annotatedInput” in the text area on the right.

If you forget to change to “msg.” in the pull-down menu that appears when you click “az”, the flow often does not work well, so check again to be sure that it is set to “msg.”.
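For reference, the combined behavior of the switch and change nodes can be sketched as a single hypothetical function node (the function name is illustrative, and it assumes the msg.payload and msg.annotatedInput fields described above):

```javascript
// Sketch of the switch + change logic in one function: pass the
// message on only when a person was detected, and move the annotated
// image into msg.payload so the SendGrid node attaches it to the email.
function filterAndAttach(msg) {
    // Only continue when the recognition result contains "person".
    if (!String(msg.payload).includes("person")) {
        return null; // no person detected: drop the message, no email
    }
    // Substitute the annotated image data into msg.payload for SendGrid.
    msg.payload = msg.annotatedInput;
    return msg;
}

// A "person" result is passed through with the image in msg.payload:
console.log(filterAndAttach({ payload: "person", annotatedInput: "<image>" }) !== null); // true
// Anything else is dropped:
console.log(filterAndAttach({ payload: "car", annotatedInput: "<image>" })); // null
```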

Adjust the sendgrid node property settings

Set the API key from the SendGrid management screen, and then enter the sender email address and recipient email address.

Finally, to make it easier to see what each node is doing, open each node’s node properties, and set the appropriate name.

Validate the operation of the flow to send an email when the surveillance camera captures a person in frame

When a person is captured in the surveillance camera image, the image recognition result is displayed in the debug tab, just as in the earlier confirmation flow, and an orange frame is displayed in the image under the “Image with annotation” image preview node. You can see that the person is recognized correctly.

After that, if the judgment process, the substitution process, and the email transmission process work as designed, you will receive an email on your smartphone with the annotated image file attached, as follows:


By using the flow created in this article, you can also build a simple security system for your own garden using a camera connected to a Raspberry Pi. At a larger scale, image recognition can also be run on image data acquired using network cameras that support protocols such as ONVIF.

About the author: Kazuhito Yokoi is an Engineer at Hitachi’s OSS Solution Center, located in Yokohama, Japan. 


Open Source Project For Earthquake Warning Systems

Tuesday 1st of September 2020 03:07:26 AM

Earthquakes, or rather the shaking they cause, don’t kill people; buildings do. If we can get people out of buildings in time, we can save lives. Grillo has founded OpenEEW in partnership with IBM and the Linux Foundation to allow anyone to build their own earthquake early-warning system. Swapnil Bhartiya, the founder of TFiR, talked to the founder of Grillo on behalf of The Linux Foundation to learn more about the project.

Here is the transcript of the interview:

Swapnil Bhartiya: If you look at these natural phenomena like earthquakes, there’s no way to fight with nature. We have to learn to coexist with them. Early warnings are the best thing to do. And we have all these technologies – IoTs and AI/ML. All those things are there, but we still don’t know much about these phenomena. So, what I do want to understand is if you look at an earthquake, we’ll see that in some countries the damage is much more than some other places. What is the reason for that?

Andres Meira: Earthquakes disproportionately affect countries that don’t have great construction. And so, if you look at places like Mexico, the Caribbean, much of Latin America, Nepal, even some parts of India in the North and the Himalayas, you find that earthquakes can cause more damage than, say, in California or in Tokyo. The reason is that it is buildings that ultimately kill people, not the shaking itself. So, if you can find a way to get people out of buildings before the shaking, that’s really the solution here. There are many things that we don’t know about earthquakes. It’s obviously a whole field of study, but we can’t tell you, for example, whether an earthquake will happen in five years or 10 years. We can give you some probabilities, but not enough for you to act on.

What we can say is that an earthquake is happening right now. These technologies are all about reducing the latency so that when we know an earthquake is happening in milliseconds we can be telling people who will be affected by that event.

Swapnil Bhartiya: What kind of work is going on to better understand earthquakes themselves?

Andres Meira: I have a very narrow focus. I’m not a seismologist; my focus is detecting earthquakes and alerting people. I think in the world of seismology, there are a lot of efforts to understand the tectonic movement, but I would say there are a few interesting things happening that I know of. For example, undersea cables. People in Chile and other places are looking at undersea telecommunications cables and the effect that any sort of seismic movement has on the signals. They can actually use that as a detection system. But when you talk about some of the really deep earthquakes, 60-100 miles beneath the surface, man has not yet created holes deep enough for us to place sensors. So we’re very limited as to actually detecting earthquakes at a great depth. We have to wait for them to affect us near the surface.

Swapnil Bhartiya: So then how do these earthquake early warning systems work? I want to understand from a couple of points: What does the device itself look like? What do those sensors look like? What does the software look like? And how do you kind of share data and interact with each other?

Andres Meira: The sensors that we use, we’ve developed several iterations over the last couple of years, and effectively they are a small microcontroller with an accelerometer, which is the core component, plus some other components. What the device does is record accelerations. So, it looks at the X, Y, and Z axes and just records accelerations from the ground, which is why we are very fussy about how we install our sensors. Anybody can install one in their home through this OpenEEW initiative that we’re doing.

The sensors themselves record shaking accelerations, and we send all of those accelerations in quite large messages using MQTT. We send them every second from every sensor, and all of this data is collected in the cloud, where in real time we run algorithms. We want to know that the shaking the accelerometer is picking up is not a passing truck but actually an earthquake.

So we’ve developed the algorithms that can tell those things apart. And of course, we wait for one or two sensors to confirm the same event so that we don’t get any false positives because you can still get some errors. Once we have that confirmation in the cloud we can send a message to all of the client devices. If you have an app, you will be receiving a message saying, there’s an earthquake at this location, and your device will then be calculating how long it will take to reach it. Therefore, how much energy will be lost and therefore, what shaking you’re going to be expecting very soon.
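The confirmation step described above can be sketched as a toy check (the function name, message shape, and threshold here are illustrative, not OpenEEW’s actual API):

```javascript
// Toy sketch of the confirmation rule: an event is only treated as an
// earthquake once at least two distinct sensors have reported it,
// which filters out single-sensor noise like a passing truck.
function confirmEvent(reports, minSensors = 2) {
    // Count how many distinct sensors reported the event.
    const sensors = new Set(reports.map(r => r.sensorId));
    return sensors.size >= minSensors;
}

// Two different sensors agree: confirmed.
console.log(confirmEvent([{ sensorId: "A" }, { sensorId: "B" }])); // true
// The same sensor twice is not independent confirmation.
console.log(confirmEvent([{ sensorId: "A" }, { sensorId: "A" }])); // false
```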

Swapnil Bhartiya: Where are these devices installed?

Andres Meira: They are installed at the moment in several countries – Mexico, Chile, Costa Rica, and Puerto Rico. We are very fussy about how people install them, and in fact, on the OpenEEW website, we have a guide for this. We really require that they’re installed on the ground floor because the higher up you go, the more the frequencies of the building’s movement differ, which affects the recordings. We need it to be fixed to a solid structural element. So this could be a column or a reinforced wall, something which is rigid, and it needs to be away from noise. So it wouldn’t be great if it’s near a door that was constantly opening and closing, although we can handle that to some extent. As long as you are within the parameters, and ideally we look for good internet connections, although we have cellular versions as well, then that’s all we need.

The real name of the game here is quantity more than quality. If you can have a lot of sensors, it doesn’t matter if one is out. It doesn’t matter if the quality is down, because we’re waiting for confirmation from other ones, and redundancy is how you achieve a stable network.

Swapnil Bhartiya: What is the latency between the time when sensors detect an earthquake and the warning is sent out? Does it also mean that the further you are from the epicenter, the more time you will get to leave a building?

Andres Meira: The time that a user gets, in terms of what we call the window of opportunity for them to actually act on the information, is a variable, and it depends on where the earthquake is relative to the user. So, I’ll give you an example. Right now, I’m in Mexico City. If we are detecting an earthquake in Acapulco, then you might get 60 seconds of advance warning, because an earthquake travels at more or less a fixed velocity, which is known, and so the distance and the velocity give you the time that you’re going to be getting.

If that earthquake was in the South of Mexico in Oaxaca, we might get two minutes. Now, this is a variable. So of course, if you are in Istanbul, you might be very near the fault line or Kathmandu. You might be near the fault line. If the distance is less than what I just described, the time goes down. But even if you only have five seconds or 10 seconds, which might happen in the Bay area, for example, that’s still okay. You can still ask children in a school to get underneath the furniture. You can still ask surgeons in a hospital to stop doing the surgery. There’s many things you can do and there are also automated things. You can shut off elevators or turn off gas pipes. So anytime is good, but the actual time itself is a variable.
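The arithmetic behind that window of opportunity can be sketched as follows (the ~3.5 km/s wave speed and the 300 km distance are illustrative assumptions, not figures from the interview):

```javascript
// Rough sketch of the "window of opportunity": seismic waves travel
// at roughly a fixed speed (~3.5 km/s is used here as an illustrative
// value), while the alert itself arrives near-instantly over the
// network, so warning time grows with distance from the epicenter.
function warningSeconds(distanceKm, waveSpeedKmPerS = 3.5) {
    return distanceKm / waveSpeedKmPerS;
}

// An epicenter ~300 km away gives on the order of a minute of warning:
console.log(Math.round(warningSeconds(300))); // 86
// Very close to the fault, the window shrinks to a handful of seconds:
console.log(warningSeconds(35)); // 10
```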

Swapnil Bhartiya: The most interesting thing that you are doing is that you are also open sourcing some of these technologies. Talk about what components you have open source and why.

Andres Meira: Open sourcing was a tough decision for us. It wasn’t something we felt comfortable with initially because we spent several years developing these tools, and we’re obviously very proud. I think that there came a point where we realized why are we doing this? Are we doing this to develop cool technologies to make some money or to save lives? All of us live in Mexico, all of us have seen the devastation of these things. We realized that open source was the only way to really accelerate what we’re doing.

If we want to reach people in these countries that I’ve mentioned, if we really want people to work on our technology as well and make it better, which means better alert times and fewer false positives, if we want to really take this to the next level, then we can’t do it on our own. It would take a long time and we may never get there.

So that was the idea for the open source. And then we thought about what we could do with open source. We identified three of our core technologies and by that I mean the sensors, the detection system, which lives in the cloud, but could also live on a Raspberry Pi, and then the way we alert people. The last part is really quite open. It depends on the context. It could be a radio station. It could be a mobile app, which we’ve got on the website, on the GitHub. It could be many things. Loudspeakers. So those three core components, we now have published in our repo, which is OpenEEW on GitHub. And from there, people can pick and choose.

It might be that some people are data scientists, so they might go just for the data, because we also publish over a terabyte of accelerometer data from our networks. So people might be developing new detection systems using machine learning, and we’ve got instructions for that and we would very much welcome it. Then we have something for the people who do front-end development, so they might be helping us with the applications. We also have something for the makers and the hardware guys, who might be interested in working on the sensors and the firmware. There’s really a whole suite of technologies that we’ve published.

Swapnil Bhartiya: There are other earthquake warning systems. How is OpenEEW different?

Andres Meira: I would divide the other systems into two categories. I would look at the national systems, say the Japanese system, or the California and West coast system called ShakeAlert. Those are systems with significant public funding that have taken decades to develop. I would put those into one category, and in another category I would look at some applications that people have developed: MyShake or SkyAlert, or there’s many of them.

If you look at the first category, I would say that the main difference is that we understand the limitations of those systems, because an earthquake in Northern Mexico is going to affect California and vice versa. An earthquake in Guatemala is going to affect Mexico and vice versa. An earthquake in the Dominican Republic is going to affect Puerto Rico. The point is that earthquakes don’t respect geography or political boundaries, and so we think national systems are limited, so far, by their borders. So, that was the first thing.

In terms of the technology, actually in many ways, the MEMS accelerometers that we use now are streets ahead of where we were a couple of years ago. And it really allows us to detect earthquakes hundreds of kilometers away. And actually, we can perform as well as these national systems. We’ve studied our system versus the Mexican national system called SASMEX, and more often than not, we are faster and more accurate. It’s on our website. So there’s no reason to say that our technology is worse. In fact, having cheaper sensors means you can have huge networks and these arrays are what make all the difference.

In terms of the private ones, the problem with those is that sometimes they don’t have the investment to really do wide coverage. So open source is our strength there, because we can rely on many people to add to the project.

Swapnil Bhartiya: What kind of roadmap do you have for the project? How do you see the evolution of the project itself?

Andres Meira: So this has been a new area for me; I’ve had to learn. The governance of OpenEEW as of today, like you mentioned, is now under the umbrella of the Linux Foundation. So this is now a Linux Foundation project and they have certain prerequisites. So we had to form a technical committee. This committee makes the steering decisions and creates the roadmap you mentioned. So, the roadmap is now published on the GitHub, and it’s a work in progress, but effectively we’re looking 12 months ahead and we’ve identified some areas that really need priority. Machine learning, as you mentioned, is definitely something that will be a huge change in this world because if we can detect earthquakes, potentially with just a single station with a much higher degree of certainty, then we can create networks that are less dense. So you can have something in Northern India and in Nepal, in Ecuador, with just a handful of sensors. So that’s a real Holy grail for us.

We also are asking on the roadmap for people to work with us in lots of other areas. In terms of the sensors themselves, we want to do more detection on the edge. We feel that edge computing with the sensors is obviously a much better solution than what we do now, which has a lot of cloud detection. But if we can move a lot of that work to the actual devices, then I think we’re going to have much smarter networks and less telemetry, which opens up new connectivity options. So, the sensors as well are another area of priority on the road map.

Swapnil Bhartiya: What kind of people would you like to get involved with and how can they get involved?

Andres Meira: So as of today, we’re formally announcing the initiative, and I would really invite people to go to the OpenEEW website, which outlines some areas that people can get involved with. We’ve tried to consider what type of people would join the project. So you’re going to get seismologists. We have seismologists from Harvard University and from other areas. They’re most interested in the data, from what we’ve seen so far. They’re going to be looking at the data sets that we’ve offered, and some of them are already looking at machine learning. So there’s many things that they might be looking at. Of course, anyone involved with Python and machine learning, data scientists in general, might also do similar things. Ultimately, you can be agnostic about seismology. It shouldn’t put you off, because we’ve tried to abstract it away. We’ve got to the point where this is really just data.

Then we’ve also identified the engineers and the makers, and we’ve tried to guide them towards the relevant repos, like the sensor repos. We are asking them to help us with the firmware and the hardware. And then, for your more typical full-stack or front-end developer, we’ve got some other repos that deal with the actual applications. How does the user get the data? How does the user get the alerts? There’s a lot of work we can be doing there as well.

So, different people might have different interests. Someone might just want to take it all. Maybe someone might want to start a network in the community, but isn’t technical and that’s fine. We have a Slack channel where people can join and people can say, “Hey, I’m in this part of the world and I’m looking for people to help me with the sensors. I can do this part.” Maybe an entrepreneur might want to join and look for the technical people.

So, we’re just open to anybody who is keen on the mission, and they’re welcome to join.


Making Zephyr More Secure

Tuesday 1st of September 2020 12:42:31 AM

Zephyr is gaining momentum, with more and more companies embracing this open source project for their embedded devices. However, security is becoming a huge concern for these connected devices. The NCC Group recently conducted an evaluation and security assessment of the project to help harden it against attacks. In this interview, Kate Stewart, Senior Director of Strategic Programs at the Linux Foundation, talks about the assessment and the evolution of the project.

Here is a quick transcript of the interview:

Swapnil Bhartiya: The NCC Group recently evaluated Zephyr for security. Can you talk about the outcome of that evaluation?
Kate Stewart: We’re very thankful to the NCC Group for the work that they did in helping us get Zephyr hardened further. In some senses, when it first hit us, it was like, “Okay, they’re taking us seriously now. Awesome.” And the reason they’re doing this is that their customers are asking for it. They’ve got people who are very interested in Zephyr, so they decided to invest the time doing the research to see what they could find. And the fact that we’re good enough to critique now is a nice positive for the project, no question.

Up till this point, we had been getting some vulnerabilities that researchers had noticed in certain areas and told us about. We’d issued CVEs, so we had a process down, but suddenly being hit with a whole bulk of them like that was like, “Okay, time to up our game, guys.” What we found was that we didn’t have a good way of letting people who have products based on Zephyr know about our vulnerabilities. We want to make it clear that people who have products out in the market can find out when there’s a vulnerability. We just added a new webpage so they know how to register, and they can let us know to contact them.

The challenge of embedded is that you don’t quite know where the software is. We’ve got a lot of people downloading Zephyr, we’ve got a lot of people using Zephyr. We’re seeing people upstreaming things all the time, but we don’t know where the products are; it’s all word of mouth to a large extent. There are no tracers or anything else; you don’t want that in an embedded IoT space, where battery life is important. And so, it’s pretty key to figure out how we let people who want to be notified know.

We’d registered as a CNA with Mitre several years ago now, and we can assign CVE numbers in the project. But what we didn’t have was a good way of reaching out to people beyond our membership under embargo, so that we can give them time to remediate any issues that we’re fixing. By changing our policies, it’s gone from a 60-day embargo window to a 90-day embargo window. In the first 30 days, we’re working internally to get the team to fix the issues, and then we’ve got a 60-day window for people who do products to remediate in the field if necessary. So, making ourselves useful for product makers was one of the big focuses this year.

Swapnil Bhartiya: Since Zephyr’s LTS release was made last year, can you talk about the new releases, especially from the security perspective because I think the latest version is 2.3.0?
Kate Stewart: Yeah, 2.3.0, and then we also have 1.14.2; 1.14 is our LTS-1, as we say. We’ve put an update out to it with the security fixes. A long-term stable release, like the Linux kernel’s, has security fixes and bug fixes backported into it so that people can build products on it and keep it active over time, without as much change in the interfaces and everything else as in the mainline development tree and what we’ve just done with 2.3.

2.3 has a lot of new features in it, and we’ve got all these vulnerabilities remediated. There’s a lot more coming down the road, and the community is working on it right now. We’ve adopted a new set of coding guidelines for the project and will be working on them so we can get ourselves ready for going after safety certifications next year. So there’s a lot of code in motion right now, but there’s a lot of new features being added every day. It’s great.

Swapnil Bhartiya: I also want to talk a bit about the community side of it. Can you talk about how the community is growing new use cases?
Kate Stewart: We’ve just added two new members to Zephyr: Teenage Engineering and Laird Connectivity have just joined us, and it’s really cool to start seeing these products coming out. There are some rather interesting technologies and products showing up, so I’m really looking forward to being able to have blog posts about them.

Laird Connectivity has a small device running Zephyr that you can use for monitoring distance without recording other information. In days of COVID, we need to start figuring out technology assists to help us keep the risk down, and Laird Connectivity has devices for that.

So we’re seeing a lot of innovation happening very quickly in Zephyr, and that’s really Zephyr’s strength: it’s got a very solid code base that lets people add their innovation on top.

Swapnil Bhartiya: What role do you think Zephyr is going to play in the post-COVID-19 world?
Kate Stewart: Well, I think these technologies offer us interesting opportunities. Some of the technologies being looked at for monitoring, for instance – distance monitoring, contact tracing, and things like that. We can either do it very manually, or we can start to take advantage of the technology infrastructures to do so. But people may not want to have a device effectively monitoring them all the time. They may just want to know exactly, position-wise, where they are. So that offers potentially some degree of control over what’s being sent into the tracing and tracking.

These sorts of technologies, I think, will help us improve things over time. There’s a lot of knowledge we’re gaining from them, and ways we can optimize the information; the RTOS and the sensors provide discrete functionality and are improving how we look at things.

Swapnil Bhartiya: There are so many people using Zephyr, but since it is open source, we may not even be aware of them. Whether or not someone is an official member of the project, how do you ensure that if they are running Zephyr, their devices are secure?
Kate Stewart: We do a lot of testing with Zephyr; there’s a tremendous amount of test infrastructure and a whole regression infrastructure. We work to various thresholds of quality levels, we’ve got a lot of expertise, and we have publicly documented all of our best practices. The security team is a top-notch group of people, and I’m really proud to be able to work with them. They do a really good job of caring about the issues as well as finding them, debugging them, and making sure anything that comes up gets solved. So in that sense, there are a lot of really great people working on Zephyr, and it makes it a really fun community to work with, no question. In fact, it’s growing fast.

Swapnil Bhartiya: Kate, thank you so much for taking time out and talking to me today about these projects.

The post Making Zephyr More Secure appeared first on

Download the 2020 Linux Kernel History Report

Wednesday 26th of August 2020 09:10:59 PM

Over the last few decades, we’ve seen Linux steadily grow and become the most widely used operating system kernel. From sensors to supercomputers, we see it used in spacecraft, automobiles, smartphones, watches, and many more devices in our everyday lives. Since the Linux Foundation started publishing the Linux Kernel Development Reports in 2008, we’ve observed progress between points in time.

Since that original 1991 release, Linux has become one of the most successful collaborations in history, with over 20,000 contributors. Given the recent announcement of version 5.8 as one of the largest yet, there’s no sign of it slowing down, with the latest release showing a new record of over ten commits per hour.

In this report, we look at Linux’s entire history. Our analysis is based on the early releases and the developer community’s commits in BitKeeper and git, from the first kernel release on September 17, 1991, through August 2, 2020. With the 5.8 release tagged on August 2, 2020, and with the merge window for 5.9 now complete, over a million commits of recorded Linux kernel history are available to analyze from the last 29 years.

This report looks back through the history of the Linux kernel and the impact of some of the best practices and tooling infrastructure that has emerged to enable one of the most significant software collaborations known.

Download the 2020 Linux Kernel History Report


Linux Kernel Training Helps Security Engineer Move into Full Time Kernel Engineering

Wednesday 26th of August 2020 02:43:46 PM

In 2017, Mohamed Al Samman was working on the Linux kernel, doing analysis, debugging, and compiling. He had also built an open source Linux firewall, and a kernel module to monitor power supply electrical current status (AC/DC) by using the Linux kernel notifier. He hoped to become a full-time kernel developer, and expand the kernel community in Egypt, which led him to apply for, and be awarded, a Linux Foundation Training (LiFT) Scholarship in the Linux Kernel Guru category.

We followed up with Mohamed recently to hear what he’s been up to since completing his Linux Foundation training.

Source: Linux Foundation Training


SD Times Open-Source Project of the Week: OpenEEW

Monday 24th of August 2020 01:56:23 PM

Jenna Sargeant writes at SD Times:

The project was recently accepted into the Linux Foundation, which, in collaboration with IBM, will work to accelerate the standardization and deployment of earthquake early-warning (EEW) systems to make communities better prepared for earthquakes.

The project was developed as a way to reduce the costs of EEW systems, accelerate deployments around the world, and save lives. 

Click to read more at SD Times


How Open Source Is Transforming The Energy Industry

Thursday 20th of August 2020 02:45:39 AM

In this interview Swapnil Bhartiya, creator of TFiR, sat down with Shuli Goodman, Executive Director of LF Energy to discuss the role open source and the foundation is playing in helping the energy sector to embark on its own digital transformation and cloud-native journey.

Here is a lightly edited transcript of the interview:

Swapnil Bhartiya: Shuli, first of all, welcome to the show once again. When we look at the energy sector, we see power lines and grids. It creates the image of an ancient system to move electrons and protons from one place to another. Are we still talking about the same power lines and grids, or are we also talking about a modern infrastructure?

Shuli Goodman: Well, we’re definitely talking about modern infrastructure. One of the defining features of the grid that we’re moving away from is that you have centralized energy generation being pushed out over high voltage to distribution systems. We lose nearly 60% of the electrons. There’s a tremendous opportunity for optimization in being able to reduce the amount of electron loss.

The digitalization of energy, in terms of the metadata and the data, enables system operators to be able to work much more effectively. It’s going to be critical in ensuring that we actually are able to balance supply and demand in a different way than we’ve been balancing supply and demand for the last 150 years.

Swapnil Bhartiya: What role is LF Energy playing in helping address these problems?
Shuli Goodman: We’re at the beginning of a period of accelerated innovation that will address these issues. The Digital Substation project, for example, is addressing the ability to manage torrents of data from the edge, to provide grid intelligence out at the edge, and to have a mechanism for bringing that in and then orchestrating, choreographing, and even having control or shared-control mechanisms that enable us to manage the grid.

What we’re working on now is blocking and tackling at a very fundamental level. You have utilities who have always thought of themselves as hardware guys – dealing with power lines. It’s been a very manual, highly intensive industry.

We are moving toward a network-operator, almost carrier-like approach – kind of an amalgamation of electricity, telecommunications, and the internet. This whole new process of being able to orchestrate energy and digitalization is essential in that paradigm; it could even be up to 50% of it. And then there’s other work happening at both the chip level and the hardware level that is going to enable that intelligence at the edge and the ability to choreograph it through market signals.

What we’re doing is shifting to a price-based grid coordination model. In other words, price signals that shift and change based on the amount of sun, the amount of wind, or the availability of energy will actually begin getting pushed out to the edge and enable coordination between assets at the edge.

Swapnil Bhartiya: You mentioned the Digital Substation Project. Tell us more about it.

Shuli Goodman: So, for those of you watching who’ve been along the journey with LF Networking or have seen what’s happened with 5G, the revolution of 5G was virtualization and disaggregation – the shift from purely hardware-centric to roughly 75% virtualization.

The Digital Substation project, DSAS, is an umbrella of four different projects addressing digitalization at the substation. The substation is the critical infrastructure that separates high, medium, and low voltage between generation and distribution, stepping power down before it goes out into your house. I refer to them as edge node routers, which may or may not be exactly the right term, but we’re moving into a territory where we’re inventing things.

I think of them as edge nodes, and the DSAS project is really about virtualizing hardware – abstracting the complexity of hardware and software so that we begin to have truly software-defined environments. Perhaps in the future, we’ll have increasingly software-defined substations and transformers. All kinds of things that we consider to be the de facto standard today may in fact move more and more toward software-defined, and the DSAS project is really the start of that.

Swapnil Bhartiya: What kind of collaboration is there around DSAS?

Shuli Goodman: It’s a great project. It really started with RTE. Last summer we had a series of meetings, and we opened it up to all of the OEMs, vendors, and suppliers – such as GE, ABB, and Schneider Electric – and all the network operators and utilities around the world that wanted to participate.

We have a core group, from RTE, France, and then we have Alliander and TenneT, which are the distribution and the transmission system operators in the Netherlands. TenneT also operates in Germany. We have General Electric, which is driving it from a vendor, OEM perspective. And then Schneider Electric is also participating, and we hope that others will join us.

Swapnil Bhartiya: You also have something called CoMPAS or Configuration Modules for Power Industry Automation System. What is that?

Shuli Goodman: CoMPAS is the first of the four projects under the DSAS umbrella. It’s essentially a configuration model. One of the problems that end users – the utilities – have is that when they think about their portfolio of hardware and software, there are tremendous interoperability challenges. IEC 61850 is a standard that was created precisely to facilitate interoperability. The CoMPAS project leverages 61850 to enable interoperability between different vendors so that we can have a more heterogeneous environment for things like a substation. There are millions of substations on the planet, so any single player at the transmission level could be managing thousands of these, and at the distribution level, many times that.

So if you don’t have that interoperability, then you have vendor lock-in. And if you have vendor lock-in, it’s not just that it’s bad for the utility, it’s also really bad for the OEM, because it slows innovation. It keeps the vendor and the supplier sort of focused on a portfolio as opposed to really looking ahead. Right now, solving this interoperability problem is ground zero, and that’s where CoMPAS comes in.

Swapnil Bhartiya: How has open source made it easier not only to convince these stakeholders and players to collaborate with each other, but also to innovate at a much faster rate than traditional companies could in a proprietary manner?

Shuli Goodman: So, just for a moment, imagine a pie in your mind, where each wedge represents a part of the stack you need to build and support in order to go to market. What open source does is allow us to identify the commodity parts of that pie and agree to work on those together. That frees up engineers and resources and facilitates interoperability. It does really great things to accelerate innovation because instead of, let’s say, Siemens, GE, ABB, or Schneider Electric putting 30% of their resources into supporting 61850 integration, or something like that, they can put in a quarter of those engineers and reallocate the rest somewhere else. The same is true for the utilities because, for the most part, utilities have given over responsibility for their network operations at the digital level.

Vendors need to become Digital Native and Cloud Native, because to get to where we need to go, it is going to be so digitally intensive, perhaps 50% of the problem is going to be digital. So, we need to really build that capacity.


My first real experience with Open Source

Wednesday 19th of August 2020 04:21:17 PM

Arthur Silva Sens wrote the following on Medium:

I just graduated from my internship at Linux Foundation’s Community Bridge program, and I’d like to share my experience and explain why you should also consider applying if you are new to open source or the cloud-native world.

I already had some experience with cloud-native projects; I’ve been using cloud-native tools at my workplace for a couple of years. It is thanks to them that I got my first full-time job as an Infrastructure Analyst and, later on, as a Cloud Architect. And if you are reading this blog post, I assume that you have at least some knowledge of what CNCF does and know some of its projects, like Kubernetes and Prometheus.

Click to read more about Arthur’s experience with the Linux Foundation’s community tools:


Why Linux’s biggest ever kernel release is really no big deal

Monday 17th of August 2020 09:56:20 PM

When the Linux 5.8 Release Candidate opened for testing recently, the big news wasn’t so much what was in it, but its size. As Linus Torvalds himself noted, “despite not really having any single thing that stands out … 5.8 looks to be one of our biggest releases of all time.”

True enough, RC 5.8 features over 14,000 non-merge commits, some 800,000 new lines of code, and around a hundred new contributors. It might have gotten that large simply because few people have been traveling due to COVID-19, and we’ve all been able to get more work done in a release window than usual. But from the perspective of this seasoned Linux kernel contributor and maintainer, what is particularly striking about the 5.8 RC release is that its unprecedented size just was not an issue for those maintaining it. That, I’d argue, is because Linux has the best workflow process of any software project in the world.

What does it mean to have the best workflow process? For me, it comes down to a set of basic rules that Linux kernel developers have established over time to allow them to produce relentlessly steady and reliable progress on a massive scale.

One key factor is git

It’s worth starting with a little Linux history. In the project’s early days (1991–2002), people simply sent patches directly to Linus. Then he began pulling in patches from sub-maintainers, and those people would be taking patches from others. It quickly became apparent that this couldn’t scale. Everything was too hard to keep track of, and the project was at constant risk of merging incompatible code.

That led Linus to explore various change management systems including BitKeeper, which took an unusually decentralized approach. Whereas other change management systems used a check-out/modify/check-in protocol, BitKeeper gave everyone a copy of the whole repo and allowed developers to send their changes up to be merged. Linux briefly adopted BitKeeper in 2002, but its status as a proprietary solution proved incompatible with the community’s belief in open source development, and the relationship ended in 2005. In response, Linus disappeared for a while and came back with git, which took decentralized change management in a powerful new direction and was the first significant instantiation of the management process that makes Linux development work so well today.

Here are seven best practices — or fundamental tenets — that are key to the Linux kernel workflow:

Each commit must do only one thing

A central tenet of Linux development is that all changes must be broken up into small steps. Every commit you submit should do only one thing. That doesn’t mean every commit has to be small in size: a simple change to the API of a function used in a thousand files can make the change massive, but it’s still acceptable because it is all part of performing one task. By always obeying this single injunction, you make it much easier to identify and isolate any change that turns out to be problematic. It also means the patch reviewer only needs to worry about the single task the patch accomplishes.

Commits cannot break the build

Not only should all changes be broken into the smallest possible increments, but they also can’t break the kernel. Every step needs to be fully functional and not cause regressions. This is why a change to a function’s prototype must also update every file that calls it, to prevent the build from breaking. So every step has to work as a standalone change, which brings us to the next point:

All code is bisectable

If a bug is discovered at some point, you need to know which change caused the problem. Essentially, a bisect is an operation that allows you to find the exact point in time where everything went wrong.

You do that by checking out the commit midway between the last known working commit and the first commit known to be broken, and testing the code at that point. If it works, you move the midpoint forward; if it doesn’t, you move it back in the other direction. In this way, you can find the commit that breaks the code out of tens of thousands of candidates in just a dozen or so compiles and tests. Git even automates this process with its git bisect functionality.

Importantly, this only works well if you abide by the previous rule that each commit does just one thing. Otherwise, you would not know which of the many changes in the problem commit caused the issue. If a commit breaks the build or does not boot, and the bisect lands on that commit, you will not know which direction of the bisect to take. This means that you should never write a commit that depends on a future commit, such as calling a function that doesn’t exist yet, or changing the parameters of a global function without changing all its callers in the same commit.

Never rebase a public repository

The Linux workflow process won’t allow you to rebase any public branch used by others. Once you rebase, the rebased commits no longer match the same commits in the repositories based on that tree. A public tree that is not a leaf in the hierarchy of trees must not be rebased; otherwise, it will break the trees lower in the hierarchy. When a git repository is based on another tree, it builds on top of a commit in that tree, and a rebase replaces commits, possibly removing a commit that other trees are based on.

Git gets merging right

Getting merging right is far from a given. In other change management systems, merging code from different branches is a nightmare: it often ends in hard-to-resolve conflicts and takes a huge amount of manual work. Git was structured to do the job effortlessly, and Linux benefits directly as a result. It’s a huge part of why the size of the 5.8 release wasn’t really a big deal. The 5.8-rc1 release averaged 200 commits a day, with 880 total merges from 5.7. Some maintainers noticed a bit more of a workload, but nothing too stressful or likely to cause burnout.

Keep well-defined commit logs

Unfortunately, this may be one of the most essential best practices, and it is skipped over by many other projects. Every commit needs to stand alone, and that includes its commit log. Everything required to understand the change must be explained in the change’s commit log. I found that some of my most lengthy and descriptive changelogs were for single-line commits, because a single-line change may fix a very subtle bug, and that fix should be thoroughly described in the changelog.

A couple of years after a change goes in, it is highly unlikely that anyone will remember why it was made. A git blame can show which commits changed the code of a file, and some of those commits may be very old. Perhaps you need to remove a lock, or change some code and do not precisely know why it exists. A well-written changelog for that code can help determine whether it can be removed or how it can be modified. More than once, I was glad I had written a detailed changelog, because when I had to remove code, the changelog’s description told me my changes were fine to make.
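As a sketch of what such a log might look like (the subsystem prefix, summary, hash, and author below are invented for illustration, not taken from the kernel tree):

```
sched/fair: Prevent division by zero in load average update

Even a one-line diff deserves a full explanation: what the problem
is, how it manifests in practice, why this change is the right fix,
and any alternatives that were considered and rejected. A reader
running git blame years from now should be able to decide from this
text alone whether the surrounding code is still needed.

Fixes: 1234567890ab ("sched/fair: Rework load tracking")
Signed-off-by: Jane Developer <jane@example.com>
```

The first line names the subsystem and summarizes the single task the commit performs; the body carries the reasoning; the trailers tie the fix to the commit that introduced the bug and record who stands behind the change.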

Run continuous testing and integration

Finally, an essential practice is running continuous testing and continuous integration. I test every one of my pull requests before sending them upstream. We also have a repo called linux-next that pulls in all the changes maintainers have on specific branches of their repositories and tests them to ensure they integrate correctly. Effectively, linux-next runs a testable branch of the entire kernel destined for the next release. It is a public repo, so anyone can test it, which happens pretty often – people now even file bug reports on code that’s in linux-next. But the upshot is that code that has been in linux-next for a couple of weeks is very likely to be good to go into mainline.

Best practices exemplified

All of these practices enable the Linux community to release incredibly reliable code on a regular 9-week schedule at such a massive scale (average of 10,000 commits per release, and over 14,000 for the last release).  

I’d point to one more factor that’s been key to our success: culture. There’s a culture of continuous improvement within the kernel community that led us to adopt these practices in the first place. But we also have a culture of trust. We have a clear pathway via which people can make contributions and demonstrate over time that they are both willing and able to move the project forward. That builds a web of trusted relationships that have been key to the project’s long term success.

At the kernel layer, we have no choice but to follow these practices. All other applications run on top of the kernel. Any performance problem or bug in the kernel becomes a performance problem or bug for the applications on top. All error paths must exit peacefully; otherwise, the entire system will be compromised. We care about every error because the stakes are so high, but this mindset will serve any software project well.

Applications can have the luxury of merely crashing due to a bug. It will annoy users, but the stakes are not as high. Still, quality software should not take bugs lightly, which is why the Linux development workflow is considered the gold standard to follow.

About the author: Steven Rostedt (@srostedt) is a Linux kernel contributor and an Open Source Engineer at VMware. You can learn more about Steven’s work at or @VMWopensource on Twitter


Linux Foundation: Open Source Collaboration is a Global Endeavor

Thursday 13th of August 2020 12:47:48 AM

The Linux Foundation would like to reiterate its statements and analysis of the application of US Export Control regulations to public, open collaboration projects (e.g. open source software, open standards, open hardware, and open data) and the importance of open collaboration in the successful, global development of the world’s most important technologies. At this time, we have no information to believe recent Executive Orders regarding WeChat and TikTok will impact our analysis for open source collaboration. Our members and other participants in our project communities, which span many countries, are clear that they desire to continue collaborating with their peers around the world.

As a reminder, we would like to point anyone with questions to our prior blog post on US export regulations, which also links to our more detailed analysis of the topic. Both are available in English and Simplified Chinese for the convenience of our audiences.


Participate in the 2020 Open Source Jobs Report!

Tuesday 11th of August 2020 10:00:30 AM

The Linux Foundation has partnered with edX to update the Open Source Jobs Report, which was last produced in 2018. The report examines the latest trends in open source careers, which skills are in demand, what motivates open source job seekers, and how employers can attract and retain top talent. In the age of COVID-19, this data will be especially insightful both for companies looking to hire more open source talent, as well as individuals looking to advance or change careers.

The report is anchored by two surveys, one of which explores what hiring managers are looking for in employees, and one focused on what motivates open source professionals. Ten respondents to each survey will be randomly selected to receive a US$100 gift card to a leading online retailer as a thank you for participating!

All those working with open source technology, or hiring folks who do, are encouraged to share your thoughts and experiences. The surveys take around 10 minutes to complete, and all data is collected anonymously. Links to the surveys are at the top and bottom of this post.

Take the open source professionals survey

Take the hiring managers survey


New Hyperledger Fabric Training Course Prepares Developers to Create Enterprise Blockchain Applications

Thursday 6th of August 2020 07:00:54 PM

The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced the availability of a new training course, LFD272 – Hyperledger Fabric for Developers.

LFD272, developed in conjunction with Hyperledger, is designed for developers who want to master Hyperledger Fabric chaincode – Fabric’s smart contracts – and application development.

Source: Linux Foundation Training


TARS: Contributing to an open source microservices ecosystem

Thursday 6th of August 2020 05:20:21 PM

Linux Foundation Executive Director Jim Zemlin recently spoke at Cloud Native + Open Source Virtual Summit China 2020. We’d now like to republish his opening comments and a guide on how to get involved with the TARS project, the open source microservices framework.

The pandemic has thrown our global society into a health and economic crisis. It seems like there are conflicts every day from all over the world. Today, I want to remind you that open source is one of the great movements where collaboration, working together, and getting along is the essence of what we do. 

Open source is not a zero-sum game; it has had an incredible net-positive impact on all of us. I like to remind everyone that open source is a public good, freely available to everyone worldwide, no matter what winds of political or economic change blow. The LF is dedicated to all of that.

Today, we are working hard to help folks during hard times, expanding our mentorship programs with over a quarter of a million dollars in new donations to allow people to come in and train themselves on new skills during this tough time. We have had a wonderful set of virtual events with thousands of people from hundreds of companies in countries worldwide working together.

We want to bring the power of open source to help during these times and have several new initiatives that we are working on. Most notably, our recently launched LFPH initiative, which has started with seven members: Cisco,, Geometer, IBM, NearForm, Tencent, and VMware, and it’s hosting exposure notification projects such as Covid-Shield and Covid-Green, which are currently being deployed in Canada, Ireland, and several U.S. states to help find and reduce the spread of COVID-19. 

We are also working on a considerable number of new initiatives, which I will talk about. Still, I’d like to turn to what we are here to talk about: cloud computing, and how much it has impacted all of us. Microservices are an essential part of that, and in China we are seeing the TARS project, the microservices framework, really taking off.

Two years ago, TARS joined the Linux Foundation, and ever since, its community has been growing, with new projects and contributors coming in. The TARS project provides a mature, high-performance microservices framework that supports multiple programming languages. We will talk more about the TARS Foundation in a little bit, but the microservices ecosystem has been growing and quickly turning ideas into applications at scale.

In addition to TARS, we have seen amazing work going on in the open source community. It begins with things such as the Software Package Data Exchange specification (SPDX), which was recently contributed as an international specification to ISO/IEC JTC 1 for approval. This will help us track the usage of open source software across a complex global supply chain and reaffirms our commitment to the global movement.

We also see growth and projects with recent releases, such as our networking project, the Open network automation platform Frankfurt release, which is being used to automate the networks and edge computing service for telecommunication providers, cloud providers, and enterprises. 

We’ve seen new projects join our organization. One good example is MLflow, which was contributed by Databricks. This project has an impressive community with over 200 contributors and has been downloaded more than 2 million times. MLflow is part of the LF AI initiative, which will provide a neutral home and open governance model to broaden the adoption of, and contribution to, projects like MLflow. We have also seen new projects such as the FinOps Foundation, a consortium of financial companies, come to our organization. We are working together to grow the use of open source throughout the global financial system.

It’s impressive to see all the different projects that have been coming in. And today, I’d like to formally introduce the TARS Foundation. TARS has been an amazing project, and in just the last few years, I’ve noticed that developers here in China are, for the first time, incubating and sharing new open-source projects with China and the rest of the world.

And the rest of the world is watching the progress of open source projects and seeing fantastic work. We are so proud of the work that is coming out of TARS. 

You know, just as the Linux Foundation is about more than Linux, the TARS Foundation is about more than just TARS. It’s a microservices ecosystem. 

Unfortunately, because of the COVID-19 pandemic, we had to cancel the Linux Foundation Member Summit this Spring, and we were unable to announce the TARS Foundation at that time. 

But today, the Linux Foundation is proud to announce that the TARS project has become the TARS Foundation, an open-source microservices foundation within the overall framework of the Linux Foundation. The outcome has been rapid growth for both the TARS project and the projects associated with TARS. TARS has really taken off, and it’s just amazing to see the amount of development. 

We hope the TARS Foundation will create a neutral home for additional projects that solve significant problems surrounding microservices, including but not limited to:

agile development, DevOps best practices, and comprehensive governance that enable multi-language, high-performance, scalable solutions.

It is my pleasure to present what the TARS Foundation has achieved in the open source community. 

There are many companies whose contributions are instrumental in establishing TARS’ microservices ecosystem. The TARS Foundation is proof of that. Currently, the TARS Foundation has Arm and Tencent as premier members and five general members: AfterShip, Ampere, API7, Kong, and Zenlayer. 

In terms of applications, TARS serves more than 100 companies from different industries, including edge computing, e-sports, fintech, streaming, e-commerce, entertainment, telecommunications, education, and more.

Furthermore, the TARS Foundation is striving to expand its microservices ecosystem and is incorporating more functions such as testing, gateways, and edge computing, to name a few. So far, the TARS Foundation has more than 30 projects.

Developers around the world are starting to recognize how useful the TARS project is and are contributing accordingly. There are 12,000 developers actively using TARS, and 150 developers contribute code to TARS projects, from companies like Arm, Tencent, Google, Microsoft, VMware, WeBank, TAL, China Literature, iFlytek, Longtu Game, and many more.

An overview of the TARS framework and how you can contribute to the open source microservices community

What is TARS? 

TARS is a new-generation distributed microservice application framework that was created in 2008. It provides developers and enterprises with a complete set of solutions to build, release, deploy, and maintain stable, reliable applications that run at scale.

In June 2018, TARS joined the Linux Foundation umbrella and became one of its projects. On March 10th, 2020, it was announced that the TARS Project would transition into the TARS Foundation:

“a neutral home for open source microservices projects that empower any industry to quickly turn ideas into applications at scale.”

The TARS Foundation’s goal is to address the most common problems related to microservices applications, including solving multi-programming-language interoperability issues, mitigating data transfer issues, maintaining data storage consistency, and ensuring high performance while supporting a growing number of requests.

Many companies from diverse industries, such as fintech, esports, edge computing, online streaming, e-commerce, and education, have successfully used the TARS framework.

Here is a complete timeline of the TARS Foundation’s development:

The TARS Foundation’s contributor ecosystem

Initially developed by Tencent, the world’s largest online gaming company, the TARS project has created an open source microservices platform that lets modern enterprises realize innovative ideas quickly with the user-friendly technology of the TARS framework. 

In March 2020, the TARS project transitioned into the TARS Foundation under the Linux Foundation umbrella, aiming to support microservices development through DevOps best practices, comprehensive service governance, high-performance data transfer, storage scalability with massive data requests, and built-in cross-language interoperability. TARS has a mission to support the rapid growth of contributions and membership for a community focused on building a robust microservices platform.

The TARS Foundation provides a great platform for developers who are interested in contributing to an open source project. The organization offers developers different ways to contribute to open source projects, as well as opportunities to take on leadership roles and make major contributions to the broader open source community. 

There are Contributor, Committer, Maintainer, and Ambassador roles in their open source ecosystem, each having different requirements and responsibilities. 

How to become a Contributor

To get involved with TARS open source projects, you can first become a Contributor by participating in software construction and having at least one pull request merged into the source code. 

There are several ways for software developers to engage with the TARS community and become contributors:

    • Help other users and answer questions.
    • Submit meaningful issues.
    • Use TARS projects in production to increase testing scenarios.
    • Improve technical documentation.
    • Publish articles on applications and case studies related to TARS projects.
    • Report or repair the bugs found in TARS software.
    • Write source code analyses or annotations. 
    • Submit your first pull request.

Here are the steps to submit your pull request:

    • Fork the project from the TARS repository to your GitHub account.
    • Git clone the repository to your local machine.
    • Create a sub-branch.
    • Make changes to the code and test it on your local machine.
    • Commit those changes.
    • Push the committed code to GitHub.
    • Open a new pull request to submit your changes for review.
    • Your changes will be merged into the master branch if accepted.
    • That’s it! You’ve become a TARS Contributor, and you will receive a Contributor t-shirt! 
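The steps above can be sketched at the command line. The following is a minimal local simulation, assuming git is installed: a temporary bare repository stands in for your GitHub fork, and the repository and branch names are invented for illustration, so the fork and pull-request steps that happen on github.com appear only as comments.

```shell
# Minimal local simulation of the PR flow; a bare repo stands in for your
# GitHub fork of the TARS repository (step 1, forking, happens on github.com).
set -e
demo=$(mktemp -d)
git init -q --bare "$demo/fork.git"             # stand-in for your forked repo
git clone -q "$demo/fork.git" "$demo/work"      # step 2: clone to your machine
cd "$demo/work"
git checkout -q -b docs/fix-typo                # step 3: create a sub-branch
echo "Fixed a typo." > CHANGES.md               # step 4: make (and test) a change
git add CHANGES.md
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "Fix typo in docs"             # step 5: commit the change
git push -q origin docs/fix-typo                # step 6: push to your fork
git branch -r                                   # the branch is now ready for a PR
```

In a real contribution, the final steps happen on GitHub: open a pull request from the pushed branch, and a maintainer merges it into the master branch if accepted.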
How to become a Committer

A Committer is a contributor who has made distinct contributions to the TARS repositories and has completed at least one essential construction project or repaired critical bugs. Committers can also take on some leadership opportunities.

The Committer is expected to:

    • Display excellent ability to make technical decisions.
    • Have successfully submitted and merged five pull requests.
    • Have contributed to the improvement of project code quality and performance.
    • Have implemented significant features or fixed major bugs.

After meeting the above requirements, you can submit a Committer request:

    • STEP 1: Provide proof that you meet the above criteria in a repository issue.
    • STEP 2: Submit your pull request after you receive a response with instructions.
    • STEP 3: Once your application is accepted, you will become a TARS Committer!

As a Committer, you are able to:

    • Control the code quality as a whole.
    • Respond to the pull requests submitted by the community.
    • Mentor contributors to promote collaborations in the open source community.
    • Attend regular meetings for committers. 
    • Know about project updates and trends in advance.
How to become a Maintainer

Maintainers are responsible for guiding the subprojects in the TARS community. They take the lead in making decisions associated with project development while holding the power to merge branches. They should demonstrate excellent judgment and a sense of responsibility for the subprojects’ well-being, as they need to define or approve design strategies suitable for developing the subprojects. 

The Maintainer is expected to:

    • Have a firm grasp of TARS technology.
    • Be proactive in organizing technical seminars and put forward construction projects.
    • Be able to handle more complicated problems in coding.
    • Be unanimously approved by the Technical Support Committee (TSC).

As a Maintainer, you have the right to:

    • Devise and decide the top-level technical design of subprojects.
    • Define the technical direction and priority of sub-projects.
    • Participate in version releases and ensure code quality.
    • Guide Contributors and Committers to promote collaborations in the open source community.
How to become an Ambassador

Passionate about open source technology and community, Ambassadors promote and support the extensive use of TARS technology among a wider audience of software developers. Ambassadors’ expertise and involvement in TARS projects will also gain greater recognition in the community. 

The Ambassador can:

    • Become a general member of the TARS Foundation.
    • Participate in TARS Foundation’s projects as a contributor, lecturer, or blogger.
    • Engage with developers by presenting at community events or sharing technology articles on online media platforms.
Looking forward

Ultimately, the TARS Foundation encourages contributors to become members of the governing board and the Technical Support Committee (TSC). At this level, you will focus on the organization’s strategic direction and decision-making as a whole.

If you are interested in learning more, you can check out the TARS Foundation’s websites.


Contributing to open source projects has many benefits. It strengthens your development skills, and your code is reviewed by other developers who can offer a new perspective. You also make new connections, and even lifelong friendships, with like-minded developers in the process of contributing. This is the open source model that has built many of the tech innovations all of us enjoy today. Its sustainability depends on a free exchange of ideas and technology in our global community. Open source value and innovation are carried by developers like you, who take on development challenges and share insights with the broader community. 

The post TARS: Contributing to an open source microservices ecosystem appeared first on

Uniting for better open-source security: The Open Source Security Foundation (ZDNet)

Monday 3rd of August 2020 07:03:53 PM

Steven Vaughn-Nichols writes at ZDNet:

Eric S. Raymond, one of open-source’s founders, famously said, “Given enough eyeballs, all bugs are shallow,” which he called “Linus’s Law.” That’s true. It’s one of the reasons why open-source has become the way almost everyone develops software today. That said, it doesn’t go far enough. You need expert eyes hunting and fixing bugs and you need coordination to make sure you’re not duplicating work. 
So, it is more than past time that The Linux Foundation started the Open Source Security Foundation (OpenSSF). This cross-industry group brings together open-source leaders by building a broader security community. It combines efforts from the Core Infrastructure Initiative (CII), GitHub’s Open Source Security Coalition, and other open-source security-savvy companies such as GitHub, GitLab, Google, IBM, Microsoft, NCC Group, OWASP Foundation, Red Hat, and VMware.

Read more at ZDNet

The post Uniting for better open-source security: The Open Source Security Foundation (ZDNet) appeared first on

Role Of SPDX In Open Source Software Supply Chain

Thursday 30th of July 2020 04:47:43 PM

Kate Stewart is a Senior Director of Strategic Programs, responsible for the Open Compliance program at the Linux Foundation, encompassing SPDX, OpenChain, and related Automating Compliance Tooling projects. In this interview, we talk about the latest release and the role it’s playing in the open source software supply chain.

Here is a transcript of our interview. 

Swapnil Bhartiya: Hi, this is Swapnil Bhartiya, and today we have with us, once again, Kate Stewart, Senior Director of Strategic Programs at Linux Foundation. So let’s start with SPDX. Tell us, what’s new going on in there in this specification?

Kate Stewart: Well, the SPDX specification released version 2.2 just a month ago, and what we’ve been doing with that is adding in a lot more features that people have been wanting for their use cases, more relationships, and then we’ve been working with the Japanese automotive people, who’ve been wanting to have a lite version. So there’s lots of really new technology sitting in the SPDX 2.2 spec. And I think we’re at a stage right now where it’s good enough, and there are enough people using it, that we want to take it to ISO. So we’ve been re-formatting the document and we’ll be starting to submit it into ISO so it can become an international specification. And that’s happening.

Swapnil Bhartiya: Can you talk a bit about anything additional that was added to the 2.2 specification? I would also like to talk about some of the use cases, since you mentioned the automakers. But before that, I just want to talk about anything new in the specification itself.

Kate Stewart: So in the 2.2 specification, we’ve got a lot more relationships. People wanted to be able to handle some of the use cases that have come up from containers now, and so they wanted to be able to start to express and specify that. We’ve also been working with the NTIA. Basically, they have software bill of materials (SBoM) working groups, and SPDX is one of the formats that’s been adopted. And their framing group has wanted to see certain features so that we can specify known unknowns. So that’s been added into the specification as well.

And then there’s how you can actually capture notices, since that’s something that people want to use. The licenses call for it, and we didn’t have a clean way of doing it, so some of our tool vendors basically asked for this. Not just vendors, I guess; there are partners, there are open source projects that wanted to be able to capture this stuff. And so we needed to give them a way to help.

We’re very much focused right now on making sure that SPDX can be useful in tools and that we can get the automation happening in the whole ecosystem. You know, be it when you build a binary to ship to someone or to test, you want to have your SBoM. When you’ve downloaded something from the internet, you want to have your SBoM. When you ship it out to your customer, you want to be able to be very explicit and clear about what’s there because you need to have that level of detail so that you can track any vulnerabilities.

Because right now, I think there was a stat from earlier in the year from one of the surveys, and I can dig it up for you if you’d like, but I think 99% of all the code that was scanned by Synopsys last year had open source in it, and 70% of that whole bill of materials was open source. Open source is everywhere. And what we need to do is be able to work with it and adhere to the licenses. Transparency on the licenses is important, as is being able to actually know what you have, so you can remediate any vulnerabilities.

Swapnil Bhartiya: You mentioned a couple of things there. One was tooling. So I’m kind of curious, what sort of tooling is already there, whether open source or commercial, that works with SPDX documents?

Kate Stewart: Actually, I’ve got a document that basically lists all of these tools that we’ve been able to find, and more are popping up as the day goes by. We’ve got common tools. Some of the Linux Foundation projects are certainly working with it. FOSSology, for instance, is able to both consume and generate SPDX. So if you’ve got an SPDX document and you want to pull it in and cross-check it against your sources to make sure it’s matching and no one’s tampered with it, the FOSSology tool can let you do that pretty easily.

Free Software Foundation Europe has a lint tool in their REUSE project that will basically generate an SPDX document if you’re using the IDs. I guess there are actually a whole bunch more. So like I say, I’ve got a document with a list of about 30 to 40, and obviously the SPDX tools are there. We’ve got a free online validator. So if someone gives you an SPDX document, you can paste it into this validator, and it’ll tell you whether it’s a valid SPDX document or not. And we’re continuing to improve it.
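For readers who have not seen one, an SPDX document in the tag-value format looks roughly like the sketch below. Every value here is illustrative (the document name, namespace URL, and package fields are made up), and a real document produced by these tools would carry many more fields; this is the kind of file the online validator checks.

```
SPDXVersion: SPDX-2.2
DataLicense: CC0-1.0
SPDXID: SPDXRef-DOCUMENT
DocumentName: example-1.0
DocumentNamespace: https://example.com/spdxdocs/example-1.0
Creator: Tool: example-sbom-generator
Created: 2020-08-01T12:00:00Z

PackageName: example
SPDXID: SPDXRef-Package-example
PackageVersion: 1.0
PackageDownloadLocation: NOASSERTION
FilesAnalyzed: false
PackageLicenseConcluded: MIT
PackageLicenseDeclared: MIT
PackageCopyrightText: NOASSERTION

Relationship: SPDXRef-DOCUMENT DESCRIBES SPDXRef-Package-example
```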

I’m also finding some tools that are emerging, one of which is decodering, which we’ll be bringing into the ACT umbrella soon, and which is looking at transforming between SPDX and SWID tags, another format that’s commonly in use. And so we have tooling emerging, and we’re making sure that what we’ve got with SPDX is usable for tool developers; we’ve got libraries for SPDX right now to help them in Java, Python and Go. So hopefully we’ll see more tools come in, and they’ll be generating SPDX documents, and people will be able to share this stuff and make it automatic, which is what we need.

Another good tool, I can’t forget this one, is Tern. What Tern does is sit there and decompose a container, and it will let you know the bill of materials inside that container. And another one that’s emerging, which we’ll hopefully see more of soon, is something called OSS Review Toolkit, which goes into your build flow. So it goes in when you work with it in your system, and then as you’re doing builds, you’re generating your SBoMs and you’re having accurate information recorded as you go.

As I said, all of this sort of thing should be in the background; it should not be a manual, time-intensive effort. When we started this project 10 years ago, it was, and we wanted to get it automated. And I think we’re finally getting to the stage where there’s enough tooling out there and there’s enough of an ecosystem building that we’ll get this automation to happen.

This is why getting the specification to ISO matters: it’ll make it easier for people in procurement to specify that they want to see an SPDX document to complement the product that they’re being given, so that they can ingest it, manage it and so forth. Being able to say it’s an ISO standard makes things a lot easier in procurement departments.

OpenChain recognized that we needed to do this, and so they went through it. OpenChain is actually the first specification we’re taking through to ISO. But we’re taking SPDX through as well, because once you say people need to follow the process, you also need a format. And so it’s very logical to make it easy for people to work with this information.

Swapnil Bhartiya: As you’ve worked with different players across the ecosystem, what are some of the pressing needs? Improved automation is one of those. What are some of the other pressing needs that you think the community has to work on?

Kate Stewart: Some of the other pressing needs we need to be working on are more playbooks, more instructions, showing people how they can do things. You know, we figured it out: okay, here’s how we can model it, here’s how you can represent all these cases. This is all sort of known in certain people’s heads, but we have not done a good job of expressing it to people so that it’s approachable for them and they can do it.

One of the things that’s kind of exciting right now is that the NTIA is having this working group on software bills of materials. It’s coming from the security side, but there are various proofs of concept going on with it, one of which is a healthcare proof of concept. And so there’s a group of about five to six medical device manufacturers that are generating SBoMs in SPDX and then handing them to hospitals to make sure they can ingest them.

And bringing people up to this level, where they feel like they can do these things, has been really eye-opening to me: how much we need to improve our handholding and improve the infrastructure to make it approachable. This obviously motivates more people to get involved, from the vendor and commercial side as well as the open source side. But it wouldn’t have happened, I think, to a large extent for SPDX without this open source community and without the projects that have adopted it already.

Swapnil Bhartiya: Now, just from the educational awareness point of view, if there’s an open source project, how can they easily create SBoM documents that use the SPDX specification with their releases and keep them synced?

Kate Stewart: That’s exactly what we’d love to see. We’d love to see the upstream projects basically generate SPDX documents as they’re going forward. So the first step is to use the SPDX license identifiers to make sure you understand what the licensing should be in each file, and ideally you can document with the tags. But then there are three or four tools out there that will actually scan them and generate an SPDX document for you.

If you’re working at the command line, the REUSE lint tool that I was mentioning from Free Software Foundation Europe will work very fast and quickly with what you’ve got. And it’ll also help you make sure you’ve got all your files tagged properly.

If you haven’t done all the tagging exercise and you wonder [inaudible 00:09:40] what you’ve got, ScanCode works at the command line, and it’ll give you that information as well. And then if you want to start working in a larger system where you store results and look at things over time, with some state behind it all, so there’ll be different versions of things over time, FOSSology will remember from one version to another and will help you create these [inaudible 00:10:01] of bills of materials.
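As a rough sketch of that first step, the short-form tags are just one comment line per source file, and even plain grep can flag files that still lack one. The file names below are invented for the demo; in a real project you would use the REUSE tool or ScanCode, as Kate describes, rather than grep.

```shell
# Tag source files with SPDX short-form license identifiers, then use grep
# to list files that are still missing a tag (a crude stand-in for a real
# lint tool; file names are invented for the demo).
demo=$(mktemp -d)
printf '// SPDX-License-Identifier: MIT\nint main(void) { return 0; }\n' \
    > "$demo/tagged.c"
printf 'int main(void) { return 0; }\n' > "$demo/untagged.c"
# grep -L prints the names of files that do NOT match, i.e. untagged files.
grep -rL 'SPDX-License-Identifier' --include='*.c' "$demo"
```

Once every file carries an identifier, the SPDX-generating tools can derive the per-file licensing information directly from the source tree.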

Swapnil Bhartiya: Can you talk about some of the new use cases that you’re seeing now, which maybe you did not expect earlier and which also shows how the whole community is actually growing?

Kate Stewart: Oh yeah. Well, when we started the project 10 years ago, we didn’t understand containers. They weren’t even on people’s minds. And there’s a lot of information sitting in containers. We’ve had some really good talks over the last couple of years that illustrate the problems. There was a report put out from the Linux Foundation by Armijn Hemel that goes into the details of what’s going on in containers and some of the concerns.

So being able to get on top of automating what’s going on inside a container, what you’re shipping, and knowing you’re not shipping more than you need to, and figuring out how we can improve these sorts of things, is certainly an area that was not initially thought about.

We’ve also seen a tremendous interest in what’s going on in the IoT space, where you need to really understand what’s going on in your devices when they’re deployed in the field, and to know whether, effectively, a vulnerability is going to break them, or whether you can recover. Things like that. Over the last 10 years we’ve seen a tremendous spectrum of things we just didn’t anticipate. And the nice thing about SPDX is, if you’ve got a use case that we’re not able to represent, or we can’t tell you how to do it, just open an issue, and we’ll start trying to figure it out, and figure out whether we need to add fields for you or things like that.

Swapnil Bhartiya: Kate, thank you so much for taking the time to talk to me today about this project. 

The post Role Of SPDX In Open Source Software Supply Chain appeared first on

More in Tux Machines

F(x)tec Pro1-X Announced – with physical keyboard, Lineage OS and Ubuntu Touch support but dated Snapdragon 835

Today, F(x)tec has re-launched their Pro1 smartphone, renamed the Pro1-X, running LineageOS out of the box combined with compatibility with Ubuntu Touch OS. The phone has been developed in partnership with XDA, hence the name. The hardware remains the same, including the dated Qualcomm Snapdragon 835 chipset; however, this phone isn't about raw power. It is a productivity tool with a strong focus on privacy. It combines the chipset with 8GB of RAM, a 5.99-inch FHD+ AMOLED display, an 8MP front-facing camera, and a 12MP camera at the rear. Read more

Announcing NetBSD 9.1

The NetBSD Project is pleased to announce NetBSD 9.1, the first update of the NetBSD 9 release branch. It represents a selected subset of fixes deemed important for security or stability reasons, as well as new features and enhancements. Read more

Also: NetBSD 9.1 Released With Parallelized Disk Encryption, Better ZFS, X11 Improvements

today's howtos

  • Btrfs on CentOS: Living with Loopback | Linux Journal

    The btrfs filesystem has taunted the Linux community for years, offering a stunning array of features and capability, but never earning universal acclaim. Btrfs is perhaps more deserving of patience, as its promised capabilities dwarf all peers, earning it vocal proponents with great influence. Still, none can deny that btrfs is unfinished: many features are very new, and stability concerns remain for common functions.

    Most of the intended goals of btrfs have been met. However, Red Hat famously cut continued btrfs support from their 7.4 release, and has allowed the code to stagnate in their backported kernel since that time. In a seeming juxtaposition, the Fedora project announced their intention to adopt btrfs as the default filesystem for variants of their distribution. SUSE has maintained btrfs support for their own distribution and the greater community for many years.

    For users, the most desirable features of btrfs are transparent compression and snapshots; these features are stable, and relatively easy to add as a veneer to stock CentOS (and its peers). Administrators are further compelled by adjustable checksums, scrubs, and the ability to enlarge as well as (surprisingly) shrink filesystem images, while some advanced btrfs topics (i.e., deduplication, RAID, ext4 conversion) aren't really germane for minimal loopback usage. The systemd init package also has dependencies upon btrfs, among them machinectl and systemd-nspawn.

    Despite these features, there are many usage patterns that are not directly appropriate for use with btrfs. It is hostile to most databases and many other programs with incompatible I/O, and should be approached with some care.

  • How To List Filesystems In Linux Using Lfs - OSTechNix

    Lfs is a command-line tool used to list filesystems on a Linux system. Lfs is a slightly better alternative to the "df -H" command.

  • How to Install Debian Linux 10.5 with MATE Desktop + VMware Tools on VMware Workstation - SysAdmin

    This video tutorial shows how to install Debian Linux 10.5 with MATE Desktop on VMware Workstation step by step.

  • How to Install Mageia Linux 7.1 + VMware Tools on VMware Workstation - SysAdmin

    This video tutorial shows how to install Mageia Linux 7.1 on VMware Workstation step by step.

  • How to install Krita 4.3.0 on Deepin 20 - YouTube

    In this video, we are looking at how to install Krita 4.3.0 on Deepin 20.

  • How to install PHP 7.4 in Ubuntu 20.04? | LibreByte

    PHP-FPM is used together with a web server like Apache or NGINX: PHP-FPM serves dynamic content, while the web server serves static content.

  • How to install the Blizzard on a Chromebook

    Today we are looking at how to install the Blizzard on a Chromebook. Please follow the video/audio guide as a tutorial where we explain the process step by step and use the commands below.

  • How to install the MGT GTK theme on Linux

    MGT is a modern theme that is based on the Materia GTK theme. It comes in 4 different colors (Grey, Semi-Dark, Light, and Dark) and brings the Google Material Design look that many Linux users love. In this guide, we’ll show you how to install the MGT GTK theme on Linux.

  • How to install the RavenDB NoSQL database on Ubuntu 20.04 - TechRepublic

    If you're looking to deploy a powerful NoSQL database on Linux, let Jack Wallen walk you through the process of installing RavenDB.

  • Implementing a self-signed certificate on an Ubuntu Server > Tux-Techie

    In this tutorial, we will show you how to create a self-signed certificate with OpenSSL on an Ubuntu 20.04 server and discuss its use cases.