
Uniting for better open-source security: The Open Source Security Foundation (ZDNet)

Monday 3rd of August 2020 07:03:53 PM

Steven J. Vaughan-Nichols writes at ZDNet:

Eric S. Raymond, one of open-source’s founders, famously said, “Given enough eyeballs, all bugs are shallow,” which he called “Linus’s Law.” That’s true. It’s one of the reasons why open-source has become the way almost everyone develops software today. That said, it doesn’t go far enough. You need expert eyes hunting and fixing bugs and you need coordination to make sure you’re not duplicating work. 
So, it is more than past time that The Linux Foundation started the Open Source Security Foundation (OpenSSF). This cross-industry group brings together open-source leaders by building a broader security community. It combines efforts from the Core Infrastructure Initiative (CII), GitHub’s Open Source Security Coalition, and other open-source security-savvy companies such as GitHub, GitLab, Google, IBM, Microsoft, NCC Group, OWASP Foundation, Red Hat, and VMware.

Read more at ZDNet

The post Uniting for better open-source security: The Open Source Security Foundation (ZDNet) appeared first on

Role Of SPDX In Open Source Software Supply Chain

Thursday 30th of July 2020 04:47:43 PM

Kate Stewart is a Senior Director of Strategic Programs, responsible for the Open Compliance program at the Linux Foundation encompassing SPDX, OpenChain, and Automating Compliance Tooling related projects. In this interview, we talk about the latest SPDX release and the role it’s playing in the open source software supply chain.

Here is a transcript of our interview. 

Swapnil Bhartiya: Hi, this is Swapnil Bhartiya, and today we have with us, once again, Kate Stewart, Senior Director of Strategic Programs at Linux Foundation. So let’s start with SPDX. Tell us, what’s new going on in there in this specification?

Kate Stewart: Well, the SPDX specification released version 2.2 just a month ago, and what we’ve been doing with that is adding in a lot more features that people have been wanting for their use cases, more relationships, and then we’ve been working with the Japanese automotive industry folks, who’ve been wanting to have a lite version. So there’s lots of really new technology sitting in the SPDX 2.2 spec. And I think we’re at a stage right now where it’s good enough, and there are enough people using it, that we want to take it to ISO. So we’ve been re-formatting the document and we’ll be starting to submit it into ISO so it can become an international specification. And that’s happening.

Swapnil Bhartiya: Can you talk a bit about what additional things were added to the 2.2 specification? I would also like to talk about some of the use cases, since you mentioned the automakers. But before that, I just want to talk about anything new in the specification itself.

Kate Stewart: So in the 2.2 specification, we’ve got a lot more relationships. People wanted to be able to handle some of the use cases that have come up from containers now, and so they wanted to be able to start to express that and specify it. We’ve also been working with the NTIA. Basically they have a software bill of materials, or SBoM, working group, and SPDX is one of the formats that’s been adopted. And their framing group has wanted to see certain features so that we can specify known unknowns. So that’s been added into the specification as well.

And then there’s how you can actually capture notices, since that’s something that people want to use. The licenses call for it and we didn’t have a clean way of doing it, and so some of our tool vendors basically asked for this. Not just the vendors, I guess they’re partners; there are open source projects that wanted to be able to capture this stuff. And so we needed to give them a way to help.

We’re very much focused right now on making sure that SPDX can be useful in tools and that we can get the automation happening in the whole ecosystem. You know, be it when you build a binary to ship to someone or to test, you want to have your SBoM. When you’ve downloaded something from the internet, you want to have your SBoM. When you ship it out to your customer, you want to be able to be very explicit and clear about what’s there because you need to have that level of detail so that you can track any vulnerabilities.

Because right now... I think there was a stat from earlier in the year from one of the surveys, and I can dig it up for you if you’d like, but I think 99% of all the code that was scanned by Synopsys last year had open source in it, and 70% of that whole bill of materials was open source. Open source is everywhere. And what we need to do is be able to work with it and be able to adhere to the licenses; transparency on the licenses is important, as is being able to actually know what you have, so you can remediate any vulnerabilities.

Swapnil Bhartiya: You mentioned a couple of things there. One was tooling. So I’m kind of curious: what sort of tooling is already there, be it open source or commercial, that works with SPDX documents?

Kate Stewart: Actually, I’ve got a document that basically lists all of these tools that we’ve been able to find, and more are popping up as the day goes by. We’ve got common tools. Some of the Linux Foundation projects are certainly working with it. FOSSology, for instance, is able to both consume and generate SPDX. So if you’ve got an SPDX document and you want to pull it in and cross-check it against your sources to make sure it’s matching and no one’s tampered with it, the FOSSology tool can let you do that pretty easily, and there’s code out there that can generate SPDX.

Free Software Foundation Europe has a lint tool in their REUSE project that will basically generate an SPDX document if you’re using the SPDX IDs. I guess there’s actually a whole bunch more. So like I say, I’ve got a document with a list of about 30 to 40, and obviously the SPDX tools are there. We’ve got a free online validator. So if someone gives you an SPDX document, you can paste it into this validator, and it’ll tell you whether it’s a valid SPDX document or not. And we’re looking to improve it.

I’m also finding some tools that are emerging, one of which is decodering, which we’ll be bringing into the ACT umbrella soon, and which is looking at transforming between SPDX and SWID tags, another format that’s commonly in use. So we have tooling emerging, and we’re making sure that what we’ve got with SPDX is usable for tool developers; we’ve got libraries for SPDX to help them in Java, Python and Go. So hopefully we’ll see more tools come in, and they’ll be generating SPDX documents, and people will be able to share this stuff and make it automatic, which is what we need.
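To show how small the tag-value format is, here is a minimal Python sketch that emits a single-package SPDX 2.2 document using only the standard library. This is an illustration written for this article, not one of the official SPDX libraries mentioned above, and the package details are invented:

```python
from datetime import datetime, timezone

def minimal_spdx(name: str, version: str, license_id: str, namespace: str) -> str:
    """Emit a minimal single-package SPDX 2.2 document in tag-value format."""
    created = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    lines = [
        # Document-level fields required by the 2.2 spec.
        "SPDXVersion: SPDX-2.2",
        "DataLicense: CC0-1.0",
        "SPDXID: SPDXRef-DOCUMENT",
        f"DocumentName: {name}-{version}",
        f"DocumentNamespace: {namespace}",
        "Creator: Tool: minimal-sbom-sketch",
        f"Created: {created}",
        "",
        # One package entry describing what is being shipped.
        f"PackageName: {name}",
        f"SPDXID: SPDXRef-Package-{name}",
        f"PackageVersion: {version}",
        "PackageDownloadLocation: NOASSERTION",
        "FilesAnalyzed: false",
        f"PackageLicenseConcluded: {license_id}",
        f"PackageLicenseDeclared: {license_id}",
        "PackageCopyrightText: NOASSERTION",
    ]
    return "\n".join(lines)

doc = minimal_spdx("hello", "1.0", "MIT", "https://example.com/spdx/hello-1.0")
print(doc.splitlines()[0])  # prints: SPDXVersion: SPDX-2.2
```

In a real build pipeline this document would be generated from scan results, for example via the Java, Python or Go libraries Stewart mentions, rather than assembled by hand.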

Another good tool, I can’t forget this one, is Tern. What Tern does is it will decompose a container and let you know the bill of materials inside that container. And another one that’s emerging, which we’ll hopefully see more of soon, is something called OSS Review Toolkit, which goes into your build flow. And so as you’re doing builds, you’re generating your SBoMs and you’re having accurate information recorded as you go.

As I said, all of this sort of thing should be in the background, it should not be a manual time-intensive effort. When we started this project 10 years ago, it was, and we wanted to get it automated. And I think we’re finally getting to the stage where it’s going to be… There’s enough tooling out there and there’s enough of an ecosystem building that we’ll get this automation to happen.

This is why getting the specification to ISO means it’ll be easier for people in procurement to specify that they want to see an SPDX document to complement the product they’re being given, so that they can ingest it, manage it and so forth. By being able to say it’s an ISO standard, it makes things a lot easier in the procurement departments.

OpenChain recognized that we needed to do this, and so they went through and... OpenChain is actually the first specification we’re taking through to ISO. But we’re taking SPDX through as well, because once they say you need to follow the process, you also need a format. And so it’s very logical to make it easy for people to work with this information.

Swapnil Bhartiya: And as you’ve worked with different players in different parts of the ecosystem, what are some of the pressing needs? Improved automation is one of those. What are some of the other pressing needs that you think the community has to work on?

Kate Stewart: Some of the other pressing needs that we need to be working on are more playbooks and more instructions showing people how they can do things. You know, we figured it out: okay, here’s how we can model it, here’s how you can represent all these cases. This is all sort of known in certain people’s heads, but we have not done a good job of expressing it to people so that it’s approachable for them and they can do it.

One of the things that’s kind of exciting right now is that the NTIA is having this working group on software bills of materials. It’s coming from the security side, but there are various proofs of concept going on with it, one of which is a healthcare proof of concept. And so there’s a group of about five to six medical device manufacturers that are generating SBoMs in SPDX, and then they are handing them to hospitals to make sure the hospitals can ingest them.

And bringing people up to the level where they feel like they can do these things has been really eye-opening to me: how much we need to improve our handholding and improve the infrastructure to make it approachable. And this obviously motivates more people to get involved, from the vendor and commercial side as well as the open source side. But I don’t think it would have happened, to a large extent, for SPDX without open source and without the projects that have adopted it already.

Swapnil Bhartiya: Now, just from the educational awareness point of view, if there’s an open source project, how can they easily create SBoM documents that use the SPDX specification with their releases and keep them synced?

Kate Stewart: That’s exactly what we’d love to see. We’d love to see the upstream projects basically generate SPDX documents as they’re going forward. So the first step is to use the SPDX license identifiers to make sure you understand what the licensing should be in each file, and ideally you can document it with the tags. But then there are three or four tools out there that will actually scan them and generate an SPDX document for you.

If you’re working at the command line, the REUSE lint tool that I was mentioning from Free Software Foundation Europe will work very fast and quickly with what you’ve got. And it’ll also help you make sure you’ve got all your files tagged properly.
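For reference, the per-file tagging that the REUSE lint tool checks for is just a short comment header at the top of each source file; the copyright holder and license below are placeholders:

```python
# SPDX-FileCopyrightText: 2020 Jane Developer <jane@example.com>
#
# SPDX-License-Identifier: MIT
```

Files carrying these tags can then be picked up by the scanning tools Stewart describes and rolled into a project-level SPDX document.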

If you haven’t done all the tagging exercise and you wonder [inaudible 00:09:40] what you’ve got, ScanCode works at the command line, and it’ll give you that information as well. And then if you want to start working in a larger system, where you want to store results and look at things over time, and have some state behind it all, so there’ll be different versions of things over time, FOSSology will remember from one version to another and will help you create these [inaudible 00:10:01] off of bills of materials.

Swapnil Bhartiya: Can you talk about some of the new use cases that you’re seeing now, which maybe you did not expect earlier and which also shows how the whole community is actually growing?

Kate Stewart: Oh yeah. Well, when we started the project 10 years ago, we didn’t understand containers. They weren’t even on people’s radar. And there’s a lot of information sitting in containers. We’ve had some really good talks over the last couple of years that illustrate the problems. There was a report put out from the Linux Foundation by Armijn Hemel that goes into the details of what’s going on in containers and some of the concerns.

So being able to get on top of automating what’s going on inside a container, knowing what you’re shipping and that you’re not shipping more than you need to, and figuring out how we can improve these sorts of things, is certainly an area that was not initially thought about.

We’ve also seen a tremendous interest in what’s going on in the IoT space, where you need to really understand what’s going on in your devices when they’re deployed in the field, and to know whether a vulnerability is effectively going to break them, or whether you can recover. Things like that. Over the last 10 years we’ve seen a tremendous spectrum of things we just didn’t anticipate. And the nice thing about SPDX is, if you’ve got a use case that we’re not able to represent, and we can’t tell you how to do it, just open an issue, and we’ll start trying to figure it out, and whether we need to add fields in for you, or things like that.

Swapnil Bhartiya: Kate, thank you so much for taking the time out and talking to me today about this project.


SODA Foundation: Autonomous data management framework for data mobility

Thursday 30th of July 2020 04:42:34 PM

SODA Foundation is an open source project under the Linux Foundation that aims to establish an open, unified, and autonomous data management framework for data mobility from edge to core to cloud. We talked to Steven Tan, SODA Foundation Chair, to learn more about the project.

Here is a transcript of the interview:

Swapnil Bhartiya: Hi, this is Swapnil Bhartiya, and today we have with us Steven Tan, chair of the SODA foundation. First of all, welcome to the show.
Steven Tan: Thank you.

Swapnil Bhartiya: Tell us a bit about what is SODA?
Steven Tan: The foundation is actually a collaboration among vendors and users to focus on, how do you call it, autonomous data management. And the point of this whole thing is: how do we serve the users? Because a lot of our users are facing a lot of data challenges, and that’s what this foundation is for: to get users and vendors together to help address these data challenges.

Swapnil Bhartiya: What kind of data are we talking about?
Steven Tan: The data that we’re talking about is referring to anything like data protection, data governance, data replication, data copy management and stuff like that. And also data integration, how to connect the different data silos and stuff.

Swapnil Bhartiya: Right. But are we talking about enterprise data or consumer data? There is a lot of data with Facebook, Google, and Gmail, and then there is a lot of enterprise data, which companies... Sorry, as an enterprise, I might put something on this cloud, or I can put it on that cloud. So can you please clarify what data we are talking about?
Steven Tan: Actually, the data that we’re talking about... it depends on the users. There are all kinds of data. For example, in the keynote that I gave two days ago, the example I gave was from Toyota. So Toyota’s use case is actually car data, which refers to things like car sensor data, videos, map data and so on. And then we have users like China Unicom; they have enterprise companies going to the cloud and so on, so they have all kinds of enterprise data over there. And then we also have other users like Yahoo Japan, who have a website, so the data we’re talking about there is web data, consumer data and stuff like that. So it’s across the board.

Swapnil Bhartiya: Oh, so it’s not specific to an industry or any space or sector, okay. But why do you need it? What is the problem that you saw in the market and in the current sphere that made you say, hey, we should create something like this?
Steven Tan: The reason why all these companies came together is that they are building data centers, from small to big, but a lot of the challenges they have are hard for a single project to address. It’s not like a business where we have one specific problem that needs to be solved and so on; it’s not like that. A lot of it is: how do you connect the different pieces in the data center together?
So there’s no organization like that that can help them solve this kind of problem. How do you address things like taking care of data protection and data privacy at the same time? And at the same time, you want to make sure that this data can be governed properly. There isn’t any single organization that can help take care of this kind of stuff, so we’re helping these users understand their problems, and then they come together, and we plan projects and roadmaps based on their problems and try to address them through these projects in the SODA Foundation.

Swapnil Bhartiya: You gave an example of data from cars and all these things. Does that also mean that open source has helped solve a lot of problems by breaking down silos, so that there’s a lot of interaction between different silos which were earlier separated and isolated? Today, as you mentioned, we are living in a data-driven world, no matter what we do, all the way from the Ring, to what we are doing right now talking to each other, to the product that we’ll create in the end. But most of this data is living in its own silos. There may be a lot of value in that data which cannot be extracted because, one, it is locked into the silos. The second problem is that these days data is kind of becoming the next oil: these companies are trying to capture all the data, irrespective of what value they see in that data today, because by leveraging machine learning and deep learning they may extract it in the future. So how do you look at that, and how is the SODA Foundation going to break those silos without compromising on our privacy, yet allow companies... Because the fact is, as much as I prefer my privacy, I also want Google Maps to tell me the fastest route to where I want to go.
Steven Tan: Right. So I think there are different levels of privacy that we’re going to take care of. First of all, in different countries or different states or provinces, there are different kinds of regulations and so on. So first of all, the data silos you talk about: yes, that’s one of the key problems that we’re trying to solve. How to connect all the different data silos so as to reduce fragmentation, and then try to minimize the so-called dark data that you’re talking about, and extract all the value over there. That’s one of the things that we try to get at here. We try to connect all the different pieces; the data may be sitting at the edge, in the data center or different data centers, and in the cloud. We try to connect all these pieces together.

I mean, that’s one of the first things that we tried to do. And then we tried to have data policies. I think this is a critical piece of things that a lot of the solutions out there don’t address. You have data policies, but it may be the data policies just for a single vendor solution. But once the data gets out, that solution then is out of control. So what we’re trying to do here is say, how do you have data policies across different solutions, so no matter where the data is it’s governed the same way, consistently? That’s the key. So then you can talk about how can you really protect the data in terms of privacy or govern the data or control the data? And in terms of the, I mentioned about the regions, right? So you know where the data is, and you know what kind of regulations that need to be taken care of and you apply it right there. That’s how it should work.

Swapnil Bhartiya: When we look at the kind of scenario you talked about, I see it as two-fold: one is a technology problem, and the second is a people problem. So is the SODA Foundation going to deal with both, or are you going to deal with just the technology aspect of it?
Steven Tan: The technology part that we talk about, we try to define in terms of the API and so on to all the data policies and so on, and try to get as many companies to support this as possible. And then the next thing that we try to do is actually try to work with standards organizations to try to make this into a standard. I mean, that’s what we’re trying to do here.

And then on the governance aspects, there are certain organizations that we are talking to. There’s CESI, the China Electronics Standardization Institute, which we’re talking to about trying to work things into their... Actually, I’m not sure about China, because we don’t know about their sphere of influence within CESI and so on. And then for the industry standards, there’s [inaudible 00:09:05] and so on; we’re trying to work with them and trying to get it to work.

Swapnil Bhartiya: Can we talk about the ecosystem that you’re trying to build around SODA foundation? One would be the participants who are actually contributing either the code or the vision, and then the users community who would actually be benefiting from it?
Steven Tan: So the ecosystem that we are trying to build, that’s the core part, which is actually the framework. So the framework, I mean, this part will be more of the data vendors or the storage vendors that will be involved in trying to build this ecosystem. And then the outer part, what I call the outer part of the ecosystem will be things like the platforms. Things like Kubernetes, VMware, all these different vendors, and then networking kind of stuff that you need to take care of like the big data analytics and stuff.

And then for the users, if you look at the SODA end-user advisory committee, that’s where most of our users are participating in the communication. Most of these users are from different regions, different countries and different industries. Whichever participant is interested, they can participate in this thing. But the main thing is that, even though they may be from different industries, most of the issues that they have are actually the same. So there are some commonalities among all these users.

Swapnil Bhartiya: We are in the middle of 2020; because of COVID-19 everything has slowed down and things have changed. What do your roadmap and your plans look like? The structure, the governance and the plan for ’21, or the end of the year?
Steven Tan: We are a very, how do you call it, community-driven or community-focused kind of organization. We hold a lot of meetups and events and so on, where we get together the users and the vendors and the community in general. So with this COVID-19 thing, a lot of the plans have been upset; it’s in chaos right now. So, like everybody else, we’re moving things online. We are having some webinars and such; even right now as we are talking, we are having a mini summit going on with the Open Source Summit North America.

So for the rest of this year, most of our events will be online; we’re going to have some webinars and some meetups, and you can find them on our website. The other plan we have is around releases: we just released the SODA federal release, which is the 1.0 release. And through the end of this year, we’re going to have two more releases, the G release and the H release. The G release is going to be in September, and H is at the end of the year. And we’re trying to engage our users with things like POC testing for the federal release, because for each release that we have, we try to get them to do the testing; that’s their way of providing feedback to us, whether it works for them or how we can improve to make the code work for what they need.

Swapnil Bhartiya: Awesome. Thank you so much for taking the time out and explaining more about the SODA Foundation. I look forward to talking to you again, because I can see that you have a very exciting pipeline ahead. So thank you.
Steven Tan: Thank you, thank you very much.


Linux System Administration Training and Certification Leads to New Career

Wednesday 29th of July 2020 06:06:22 PM

Fabian Pichardo has worked with multiple hardware platforms such as Nvidia, Xilinx, Microchip, and National Instruments, and is skilled in languages such as C++, Python, Matlab, and Julia. During university, Fabian created the Mechatronic Student Society to offer programming training for newbies and demonstrate new technology trends.

In 2018 he applied for and was awarded a Linux Foundation Training (LiFT) Scholarship in the Open Source Newbies category to increase his experience with open source technologies.

Source: Linux Foundation Training


Developer Velocity: How software excellence fuels business performance (McKinsey)

Tuesday 28th of July 2020 10:30:55 PM

McKinsey and Co writes:

With technology powering everything from how a business runs to the products and services it sells, companies in industries ranging from retail to manufacturing to banking are having to develop a range of new skill sets and capabilities. In addition to mastering the nuances of their industry, they need to excel first and foremost at developing software.

It’s a big leap for many, yet a large number of businesses are working hard to make it. At the Goldman Sachs Group, for instance, computer engineers make up about one-quarter of the total workforce. Within retail, software development is the fastest-growing job category. Indeed, of the 20 million software engineers worldwide, more than half are estimated to be working outside the technology industry, and that percentage is growing.

Read more at McKinsey


Participate in the 2020 Open Source Jobs Report!

Tuesday 28th of July 2020 10:26:30 PM

The Linux Foundation has partnered with edX to update the Open Source Jobs Report, which was last produced in 2018. The report examines the latest trends in open source careers, which skills are in demand, what motivates open source job seekers, and how employers can attract and retain top talent. In the age of COVID-19, this data will be especially insightful both for companies looking to hire more open source talent, as well as individuals looking to advance or change careers.

The report is anchored by two surveys, one of which explores what hiring managers are looking for in employees, and one focused on what motivates open source professionals. Ten respondents to each survey will be randomly selected to receive a US$100 gift card to a leading online retailer as a thank you for participating!

All those working with open source technology, or hiring folks who do, are encouraged to share their thoughts and experiences. The surveys take around 10 minutes to complete, and all data is collected anonymously. Links to the surveys are at the top and bottom of this post.

Take the open source professionals survey

Take the hiring managers survey


Welcome Antmicro to the OpenPOWER Foundation

Monday 27th of July 2020 04:39:08 PM

OpenPOWER Foundation Executive Director James Kulina writes:

This May, Antmicro announced support for the POWER ISA in Renode, its open source, multi-architecture, heterogeneous multi-core capable simulator for software development and software-hardware co-development.

It’s an exciting development, as developers can now test applications based on the POWER ISA before running them on actual hardware. It’s an important step in achieving the vision of the OpenPOWER Foundation – to make POWER the easiest architecture on which to go from an idea to a silicon chip.

I recently caught up with Michael Gielda, VP of business development, to discuss Antmicro, its role in the OpenPOWER Foundation ecosystem and its beliefs on open source hardware in general.

Read more at OpenPOWER Foundation


Meet the new GM of CNCF – Priyanka Sharma

Thursday 23rd of July 2020 08:57:48 PM

CNCF, a Linux Foundation project, recently appointed Priyanka Sharma as its new GM. As a long-time expert in cloud native technologies, Sharma brings unique vision and insights to the organization. On behalf of the Linux Foundation, Swapnil Bhartiya, founder and producer at TFiR, talked to Sharma to better understand the vision she has for CNCF and the goals she has set for herself and the foundation.

Here is the transcript of our interview.

Swapnil Bhartiya: Hi, this is Swapnil Bhartiya. Today, we have with us Priyanka Sharma. Now she’s in the role of general manager of CNCF. Priyanka, first of all, welcome to the show in your new role.

Priyanka Sharma: Thank you so much for having me, Swapnil.

Swapnil Bhartiya: What exactly is the role of GM at CNCF, and how different is it from the role of executive director that Dan used to have there?

Priyanka Sharma: No difference at all, actually. I am stepping into the role Dan had. Across the LF, various projects and some foundations have different titles for the leadership, and me being a GM is really giving a nod to trying to consolidate everything as one title. So that’s really where it comes from; it’s the same job.

Swapnil Bhartiya: If you look at CNCF now, it has played a very critical role in creating a home for cloud native technologies like Kubernetes, and now there are so many... I mean, the landscape is so huge you cannot even see it, which also means that a lot of consolidation within CNCF has to happen, from the point of view that a lot of projects are overlapping and a lot of projects have gaps. What are your thoughts about that?

Priyanka Sharma: Yes. Absolutely. I actually think it’s a great thing. By charter, the CNCF does not intend to be a kingmaker. We are very different, I guess, from any other foundations in that we really focus on spreading the wave of cloud native for helping the ecosystem build better software quicker and more resiliently. For that, there are multiple tools people can use. They may use option A for telemetry versus option B for reasons that are specific to their system. And we don’t want to be getting into the middle of that. We want to support every solid, good project out there with a neutral IP space, open governance, best practices, support with marketing education, etc. It’s actually a good thing for the end users to have choice, and we enable that.

Swapnil Bhartiya: Right. If you look at CNCF, it’s been four or five years since the organization has been around, and a lot of projects under the foundation have kind of matured. The ecosystem itself has matured. There are a lot of companies who are doing... and things are moving from testing to production. And there is a very healthy ecosystem there. What role is CNCF playing today for the ecosystem, and how do you see the evolution of CNCF itself?

Priyanka Sharma: Great question. A few things. First off, yes, we've made great progress. The first wave of cloud native has gone exceptionally well. In 2016, when I joined this ecosystem as a project contributor to OpenTracing, we were still talking about what microservices are and why you need cloud computing, very basic, right? And since then, a lot has changed, which is awesome. However, with new maturity comes new complexity, and that's why you see we are still accepting new projects, right, to support the entire development cycle.

In addition, there's the crossing of the chasm, as they say, for various technologies and projects. Kubernetes is definitely crossing the chasm right now, but we have not just 1 but 10 graduated projects, including Kubernetes. We are supporting all of those projects to also cross the chasm. We need to also make Kubernetes more widespread. If you notice, at most KubeCons, which are our flagship events, at least 25% of the audience each year is brand-new first-timers.

We actually were having a conversation just a few hours back today about not underestimating the importance of, and need for, consistent cloud native 101 nurturing. The job is far from done. We need to go deeper with developer engagement. We need to go deeper with end user engagement now that we have made some headway. The second wave of cloud native is just starting.

Swapnil Bhartiya: Excellent. Now, when we look at the second wave: so far the ride has been kind of an easy breeze, but what are the challenges you see that you want to tackle as you move into the second wave? Or what kind of challenges are you setting for yourself, ones which are not easy, but where you see there is a demand, so you have to do it?

Priyanka Sharma: I had various thoughts and ideas around this stuff, and when I was going to join the organization, I was going to take a few months to do a complete listening tour. Of course, you know what they say about the best-laid plans of mice and men: the pandemic hit. The world scenario has completely changed. There have been shelter-in-place orders in various places. People are suffering with illness in many places. There is the COVID illness, and then there are other things that come up when you're stuck at home for so long, so it's not an easy time. It's not a normal time. It's not a usual time. And that reflects on the cloud native community as well. As an example, we've hosted the KubeCons, our flagship events, in person with great fanfare, with lots of support, love, and excitement from the community.

Now, we have to pivot completely and do it all online, in a world where the online solutions are still catching up to be able to support large-scale events like ours. So joining in, there are challenges that have been thrown my way just by the timing, right? In addition to the events, which we're working very hard on as a team, the community also has different needs. Some people may want to switch jobs or are looking for jobs. That's one element that we need to think about. Some people may need the support that they otherwise got by going to meetups, by being more in touch with people around them on cloud native. There are others whose businesses actually might be growing exponentially just because everything's going online, and we're supporting them with the technology. There are various elements to this new, strange time that we find ourselves in. So that is a big challenge.

In addition, I would say Dan and Chris have built an amazing, massively impactful organization. I intend to keep this momentum going, to keep building on what they have created. We all stand on the shoulders of giants here. I think the next big thing, once we get through the pandemic, is to double down on the end user ecosystem. The end users have grown and become consistently more sophisticated and technical over the last four years I've been involved. We need to support that and enable greater adoption, better insights, and safe spaces to discuss and communicate with each other, so that's coming.

And then finally, as I said, developer education and engagement has to go deeper and wider. That’s what I set for myself.

Swapnil Bhartiya: When you look at CNCF, what vision do you have? Because you yourself have been in the community, in the industry for so long, but you were also on the outside. You are not inside Linux Foundation. You have been working with private companies, so you have an outsider’s view. What unique vision did you bring to the CNCF? Because sometimes when we work within an organization for so long, we have our own myopic view. Can you talk about that?

Priyanka Sharma: You're absolutely right that I have worn multiple hats and seen CNCF through different lenses, and I can bring that perspective to this foundation. I'd say one thing that's been a somewhat disturbing trend I noticed was this othering of different parts of the community. It's like CNCF staff versus end user versus project creator versus GB versus this. You can have so many different categories. But the reality is, I really don't think that's the way the ecosystem truly functions well. I don't think there's that much meat in that way of thinking. And we need to change and go back to what we're good at, which is being builders and doers and being team cloud native, all of us together.

If we infight, then we don't stand strong and build upon our work; we just dissipate energy. And I've seen that trend happen in cloud native. I cannot speculate on the reasons for it, but I make a call to each and every one of you: just know we're in it together. I have worn multiple hats in this industry. I have been a project contributor. I have been an educator, a marketer. I have been a developer advocate. I have been a governing board member. I have done many things. And now, I'm the GM. Let me tell you, we are all in it together no matter what hat we wear, and we need to make an extra effort to remember that. And that is something I think will be a big change if we can achieve it.

Swapnil Bhartiya: You can have as many GitHub repositories as you want for tech issues, but this is a different kind of problem than a technological one. What realistic efforts can we expect from CNCF to achieve the kind of vision you are bringing?

Priyanka Sharma: I hear that. I think that a lot of it starts with the leadership. I have been put in this position and my number one goal is to always keep my door open, these days virtually. I live by an open calendar. Anybody can book time with me, talk to me, tell me what you think, and reach out to me. And I mean it. I have serious blocks open. Of course, they’re starting to get booked up really quickly, which is nice because that means people are taking me up on this offer, that let’s engage. Let’s talk it out. Let’s see where we are disagreeing, and either agree to disagree, which is a totally fair thing to do, or come closer together in some form of consensus.

I think conversation is the first step. We all get so busy with the day-to-day work that it falls by the wayside. And when that happens, miscommunication just develops and deepens. So number one is an open door policy. Let's talk. Whenever there's confusion, let's do that.

The other is bringing greater transparency. It's a habit I picked up at GitLab working under Sid: being all remote, it's important to document everything. So most of my meetings have a document where we write down agenda notes, etc., and we share that with the people we talk to so everyone's actually on the same page: we wrote this down; this is what we're doing. Little things like that can, I think, go a really long way in making sure people are moving in lockstep together. All of this, by the way, is an ongoing effort that you cannot let up on. You have to keep being transparent. You have to keep your door open. This is not a one-time thing. It's an ongoing effort that I will not stop or let up on, and I think it will make a difference.

I’m actually proud to report that I’m already seeing, having taken the time to talk to a lot of people, we really are on the same team. Everyone wants us to just build better software together, and I’m very confident that the cultural change is happening as we speak.

Swapnil Bhartiya: Awesome. Before we dive into this last question: we are going through a crisis, a very serious crisis, and we don't see any end in sight right now. It has impacted all of us. For example, we were supposed to be in person at an open source event, but everything is moving to online events. How does this impact the industry in general? Because a lot of these events do bring people together: not only the hallway track, where people touch base with colleagues, but also a lot of partnerships that are actually forged there. What impact do you see, and how do you see CNCF responding to that, or is it already responding?

Priyanka Sharma: Absolutely. Events play a great role in the community and ecosystem, and that's evidenced by the awesomeness of KubeCons. Being at every KubeCon that I could attend opened doors for me and connected me to people who were happy to mentor, guide, and talk to me. We cannot lose that, right? We are all waiting for things to change, for the pandemic to go away one day so we can meet in person again. While we wait for that, here on the CNCF team we are working to make the virtual KubeCon EU in August as awesome an experience as possible. There are lots of ideas that we have. We sometimes have technology limitations in terms of the platforms that are available, and we're trying to work through that.

My sense is that we'll have a bunch of ideas to experiment with at KubeCon EU in August, and by the time KubeCon North America rolls around (it was going to be in Boston, but just today it was announced that it will be virtual as well), I think we'll have a lot more cool engagement and innovation possible.

I did a small event a few weeks before joining CNCF, just for fun. I just wanted to see other community folks. And the reality is that it was cool, because we were able to livestream, and we'd expected 200 people, but 2,000 showed up; actually, 7,000 at maximum views. It was crazy, crazy numbers. And that's the equalizer that comes with online events. It's nice to be able to reach more people. We have to figure out the engagement: have more fun games and trivia prizes, ways to connect a maintainer to someone who has a question, ways to connect a student to someone who will tell them how to contribute. These are the things we need to work on, and it's actively underway.

Swapnil Bhartiya: Awesome. Thank you, Priyanka, so much for taking the time to talk to me today, and I look forward to talking to you again. Thank you.

Priyanka Sharma: Same here. Thank you, Swapnil.


The post Meet the new GM of CNCF – Priyanka Sharma appeared first on

Student Linux club refurbishes computers to support distance learning (

Thursday 23rd of July 2020 03:13:58 PM

Cam Citrowske writes:

It was March 17, 2020, and I was in my classroom at Aspen Academy. The clock was ticking. This was to be the last day of school before we, along with every other public school in Minnesota, would close due to the outbreak of the new coronavirus. I had students in my room during lunch, advisory periods, and my elective classes all doing the same thing—installing Linux onto old computers so we could give them to students who would use them for school at home during the shelter in place order. I was only going to have the kids’ help until dismissal time, but in the end, we had 17 computers ready to go. It was a start.


The post Student Linux club refurbishes computers to support distance learning ( appeared first on

Solving technical debt with open source

Wednesday 22nd of July 2020 06:11:48 PM

Ibrahim Haddad and Cedric Bail at the Linux Foundation have published a new whitepaper on solving technical debt with open source:

Technical debt, a term used in software development, refers to the cost of maintaining source code that was caused by a deviation from the main branch where joint development happens. 

A broader interpretation of what constitutes technical debt is proprietary code by itself:

    • A single organization has developed it.
    • It is source code that the organization alone needs to carry and maintain.
    • In some cases, the organization depends on a partner’s ability to maintain the code and carry said debt.

The following symptoms can identify technical debt:

    • Slower release cadence: the time between deliveries of new features increases.
    • Increased onboarding time for new developers: onboarding becomes highly involved due to code complexity, where only insider developers are familiar with the codebase. A second manifestation of this symptom is difficulty retaining developers or hiring new ones.
    • Increased security issues: at a minimum, experiencing more security issues than the main upstream branch.
    • Increased effort to maintain the code base: maintenance tasks become more time-consuming as the body of code to maintain grows larger and more complex.
    • Misalignment with the upstream development cycle: an inability to keep pace with, and stay aligned with, the upstream development and release cycles.

Click here to read the abstract and download the new whitepaper

The post Solving technical debt with open source appeared first on

How open source development provides a roadmap for digital trust, security, safety, and virtual work

Wednesday 22nd of July 2020 02:47:13 PM

Mike Dolan writes on the Linux Foundation blog:

We’re seeing a shift to virtual events, remote work cultures, virtual “happy hours,” and other means of productively working together, virtually. Many of these practices will stick with us post-pandemic. Our organization is already exploring how to use virtual events to augment future physical events (yes, they will exist again). 

Virtual conferences may be a great path to offering more inclusive events where those of us unable to travel to an event physically can still find a way to participate at some level. We’re seeing the impact of virtual training and certifying professionals in freely available open source technologies — and it has a real impact on job prospects and employment. Virtual testing proctors have become an effective way to certify professionals. Similarly, virtual platforms can help facilitate mentorship and enable less experienced developers to find and connect with more skilled developers willing to lend a hand.

The coronavirus has opened the world’s eyes to the needs of systems and plans for pandemic situations. This year we will likely see technology communities and organizations adapt and develop the “playbook” for how the world does business in the face of a pandemic. But many of those practices will likely stay with us long after we defeat COVID-19. 

Read more at The Linux Foundation

The post How open source development provides a roadmap for digital trust, security, safety, and virtual work appeared first on

New Training Course Teaches Kubernetes Application Management with Helm

Friday 17th of July 2020 02:00:38 PM
The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced the availability of a new training course, LFS244 – Managing Kubernetes Applications with Helm. LFS244 was developed in conjunction with the Cloud Native Computing Foundation® (CNCF®), which builds sustainable ecosystems for cloud native software and hosts both the Kubernetes and Helm open source projects. The course is designed for system administrators, DevOps engineers, site reliability engineers, software engineers, and others who wish to enhance their operational experience running containerized workloads on the Kubernetes platform.

Read more at Linux Foundation Training

The post New Training Course Teaches Kubernetes Application Management with Helm appeared first on

New Kubernetes Security Specialist Certification to Help Professionals Demonstrate Expertise in Securing Container-Based Applications

Thursday 16th of July 2020 07:00:20 PM
The Linux Foundation, the nonprofit organization enabling mass innovation through open source, and the Cloud Native Computing Foundation® (CNCF®), which builds sustainable ecosystems for cloud native software, today announced that a new certification, the Certified Kubernetes Security Specialist (CKS), is in development. The certification is expected to be generally available before the KubeCon North America event this November.

Read more at Linux Foundation Training

The post New Kubernetes Security Specialist Certification to Help Professionals Demonstrate Expertise in Securing Container-Based Applications appeared first on

The Linux Foundation’s First-Ever Virtual Open Source Summit (TechNewsWorld)

Wednesday 15th of July 2020 05:47:43 PM

Jack M. Germain writes on Tech News World:

The success of The Linux Foundation’s first virtual summit may well have set the standard for new levels of open source participation.

Summit masters closed the virtual doors of the four-day joint gathering on July 2. The event hosted the Open Source Summit + Embedded Linux Conference North America 2020 and ended with more than 4,000 registrants from 109 countries.

The online platform InXpo enabled participants to be part of a truly immersive technical gathering. They can also view on-demand content from sponsor resources and conference sessions for one year.

The InXpo platform enabled attendees to:

    • View 250+ informative educational sessions and tutorials, across 14 different technology tracks, and participate in live Q&A;
    • Join the ‘hallway track’ and collaborate via topic-based networking lounges in group chats, and connect with attendees in 1:1 chats;
    • Visit the 3D virtual sponsor showcase and booths to speak directly with company representatives, view demos, download resources, view job openings and share contact info.

The summit’s virtual format also provided attendees the chance to “gamify” their event experience by earning points and winning prizes for attending sessions, visiting sponsor booths, and answering trivia questions.

Read more at Tech News World

The post The Linux Foundation’s First-Ever Virtual Open Source Summit (TechNewsWorld) appeared first on

Device Drivers Training Helps Advance an Embedded Linux Career

Monday 13th of July 2020 06:30:41 PM
In 2018, Anna-Lena Marx was preparing to begin the final thesis for her master’s degree. She was also working for a German company developing kernel drivers and fixing bugs in the Linux kernel and Android internal system.

Anna-Lena wanted to improve her Linux kernel development skills, so she applied for and was awarded a Linux Foundation Training (LiFT) Scholarship in the Kernel Guru category.

Read more at Linux Foundation Training

The post Device Drivers Training Helps Advance an Embedded Linux Career appeared first on

Open Source Communities and Trademarks: A Reprise

Monday 13th of July 2020 02:26:57 PM

The Linux Foundation has published a new blog about the use of Trademarks in open source communities:

A trademark is a word, phrase or design that denotes a “brand” that distinguishes one source of product or solution from another. The USPTO describes the usage of trademarks “to identify and distinguish the goods/services of one seller or provider from those of others, and to indicate the source of the goods/services.” Under US trademark law you are not able to effectively separate ownership of a project mark from control of the underlying open source project. While some may create elaborate structures around this, at the end of the day an important principle to follow is that the project community should be in control of what happens to their brand, the trademark they collectively built up as their brand in parallel with building up the functionality of their code. 

For this reason, in communities that deem their brand important, we also file registrations for trademark protection to reserve the rights in the mark for the project, commonly in the United States, China, European Union, Japan, and other countries around the world. Registered marks will often have a ® symbol. This is different from a common law trademark right, where you often see a ™ symbol with the mark. Having a registered trademark is often important because it enables us to better protect the community against misrepresentation, misuse, and confusion in the ecosystem between what is actually the community-built project, and what is not. This is often based on specific benefits that arise from the registration, which may vary from country to country.

Click to read more at the Linux Foundation

The post Open Source Communities and Trademarks: A Reprise appeared first on

Driving Compatibility with Code and Specifications through Conformance Trademark Programs

Monday 13th of July 2020 02:17:01 PM

Scott Nicholas writes at the Linux Foundation blog:

A key goal of some open collaboration efforts — whether source code or specification oriented — is to prevent technical ‘drift’ away from a core set of functions or interfaces. Projects seek a means to communicate — and know — that if a downstream product or open source project is held out as compatible with the project’s deliverable, that product or component is, in fact, compatible. Such compatibility strengthens the ecosystem by providing end-users with confidence that data and solutions from one environment can work in another conformant environment with minimal friction. It also provides product and solution providers a stable set of known interfaces they can depend on for their commercially supported offerings. 

A trademark conformance program, which is one supporting program that the LF offers its projects, can be used to encourage conformance with the project’s code base or interfaces. Anyone can use the open source project code however they want — subject to the applicable open source license — but if a downstream solution wants to describe itself as conformant using the project’s conformance trademark, it must meet the project’s definition of “conformant.” Some communities choose to use words other than “conformant” including “certified”, “ready”, or “powered by” in association with commercial uses of the open source codebase. This is the approach that some Linux Foundation projects take to maintain compatibility and reduce fragmentation of code and interfaces. 

Click to read at the Linux Foundation blog

The post Driving Compatibility with Code and Specifications through Conformance Trademark Programs appeared first on

Understanding US export controls with open source projects

Monday 13th of July 2020 02:07:48 PM

The Linux Foundation has produced a new whitepaper, in English and Chinese about export controls and open source and has summarized its findings on its blog:

The primary source of United States federal government restrictions on exports are the Export Administration Regulations or EAR. The EAR is published and updated regularly by the Bureau of Industry and Security (BIS) within the US Department of Commerce. The EAR applies to all items “subject to the EAR,” and may control the export, re-export, or transfer (in-country) of such items.

Under the EAR, the term “export” has a broad meaning. Exports can include not only the transfer of a physical product from inside the US to an external location but also other actions. The simple act of releasing technology to someone other than a US citizen or lawful permanent resident within the United States is deemed to be an export, as is making available software for electronic transmission that can be received by individuals outside the US. 

This may seem alarming for open source communities, but the good news is open source technologies that are published and made publicly available to the world are not subject to the EAR. Therefore, open source remains one of the most accessible models for global collaboration.

Click here to read the Linux Foundation blog

The post Understanding US export controls with open source projects appeared first on

All About CLAs and DCOs

Tuesday 7th of July 2020 09:00:17 PM

Of the fundamental structural questions that drive discussions within the open source community, two that continually spur fervent debate are (a) whether software code should be contributed under a Contributor License Agreement (“CLA”) or a Developer Certificate of Origin (“DCO”), and (b) whether code developed by an employee or independent contractor should be contributed under a CLA signed by the developer as an individual or by her employer under a corporate CLA.
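On the DCO side of this debate, the mechanics are lightweight: contributors certify each commit with git's built-in `-s`/`--signoff` flag, which appends a `Signed-off-by` trailer to the commit message. A minimal sketch, using a throwaway repository and a hypothetical contributor identity:

```shell
# Sketch of the DCO sign-off workflow (hypothetical contributor details).
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q demo
cd demo
git config user.name "Jane Developer"      # hypothetical contributor
git config user.email "jane@example.com"
echo "hello" > README
git add README
# -s appends the Signed-off-by trailer that DCO-based projects check for
git commit -q -s -m "docs: add README"
# Show the full commit message, which now ends with the trailer:
git log -1 --format=%B
# last line: Signed-off-by: Jane Developer <jane@example.com>
```

Projects that use the DCO typically reject pull requests whose commits lack this trailer, whereas CLA-based projects gate contributions on a signed agreement instead.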

Read More at The Standards Blog

The post All About CLAs and DCOs appeared first on

More in Tux Machines

Hardware Freedom: 3D Printing, RasPi and RPi CM3 Module

  • Can 3D Printing Really Solve PPE Shortage in COVID-19 Crisis? The Myth, and The Facts!

    Amid the COVID-19 crisis, we see a severe shortage of Personal Protective Equipment (PPE) worldwide, to the point that a strict organization like the FDA is making exceptions for PPE usage, and there are volunteer efforts, like GetUsPPE, to try to alleviate this shortage. Also, the Centers for Disease Control and Prevention (CDC) provides an Excel spreadsheet file to help calculate the PPE burn rate. There are many blog posts, video tutorials, and guides that teach people how to print their own face shields and masks.

  • Raspberry Pi won’t let your watched pot boil
  • Growing fresh veggies with Rpi and Mender

    Some time ago my wife and I decided to teach our kids how to grow plants. We both have experience, as we were raised in small towns where it was common to own a piece of land where you could grow home-grown fresh veggies. The upbringing of our kids is very different compared to ours, and we realized we had never shown them how to grow our own veggies. We wanted them to learn and to understand that “the vegetables do not grow on the shop shelf”, and that there is work (and fun) involved in growing them. The fact that we are gone for most of the summer meant that starting our own garden just to see it die when we returned seemed pointless. This was a challenge. Luckily, being a hands-on engineer, I promised my wife to take care of it. There were two options: we could buy something that would water our plants while we were gone, or I could do it myself (with a little help from our kids). Obviously I chose the more fun solution…

  • Comfile Launches 15-inch Industrial Raspberry Pi Touch Panel PC Powered by RPi CM3 Module

    Three years ago, we noted that Comfile had made 7-inch and 10.2-inch touch panel PCs powered by the Raspberry Pi 3 Compute Module. The company has recently introduced a new model with a very similar design, except for a larger 15-inch touchscreen display with 1024×768 resolution. The ComfilePi CPi-A150WR 15-inch industrial Raspberry Pi touch panel PC still features the CM3 module and the same ports, including Ethernet, USB, RS232, RS485, and I2C interfaces accessible via terminal blocks, and a 40-pin I/O header.

Programming: Vala, Perl and Python

  • Excellent Free Tutorials to Learn Vala

    Vala is an object-oriented programming language with a self-hosting compiler that generates C code and uses the GObject system. Vala combines the high-level build-time performance of scripting languages with the run-time performance of low-level programming languages. Vala is syntactically similar to C# and includes notable features such as anonymous functions, signals, properties, generics, assisted memory management, exception handling, type inference, and foreach statements. Its developers, Jürg Billeter and Raffaele Sandrini, wanted to bring these features to the plain C runtime with little overhead and no special runtime support by targeting the GObject object system. Rather than compiling directly to machine code or assembly language, it compiles to a lower-level intermediate language. It source-to-source compiles to C, which is then compiled with a C compiler for a given platform, such as GCC. Did you always want to write GTK+ or GNOME programs, but hate C with a passion? Learn Vala with these free tutorials! Vala is published under the GNU Lesser General Public License v2.1+.

  • Supporting Perl-related creators via Patreon

    Yesterday I posted about this in the Perl Weekly newsletter, and both Mohammad and I got 10 new supporters. This is awesome. There are not many ways to express the fact that you really value the work of someone. You can send them postcards or thank-you notes, but when was the last time you remembered to do that? Right, I also keep forgetting to thank the people who create all the free and awesome stuff I use. Giving money as a way to express your thanks is frowned upon by many people, but trust me, the people who open an account on Patreon to make it easy to donate money to them will appreciate it. In any case, it is way better than not saying anything.

  • 2020.31 TwentyTwenty

    JJ Merelo kicked off the special 20-day Advent Blog cycle in honour of the publication of the first RFC that would lay the foundation for the Raku Programming Language as we now know it. After that, three blog posts have already been published:

  • Supporting The Full Lifecycle Of Machine Learning Projects With Metaflow

    Netflix uses machine learning to power every aspect of their business. To do this effectively they have had to build extensive expertise and tooling to support their engineers. In this episode Savin Goyal discusses the work that he and his team are doing on the open source machine learning operations platform Metaflow. He shares the inspiration for building an opinionated framework for the full lifecycle of machine learning projects, how it is implemented, and how they have designed it to be extensible to allow for easy adoption by users inside and outside of Netflix. This was a great conversation about the challenges of building machine learning projects and the work being done to make it more achievable.

  • Django 3.1 Released

    The Django team is happy to announce the release of Django 3.1.

  • Awesome Python Applications: buku

    buku: Browser-independent bookmark manager with CLI and web server frontends, with integrations for browsers, cloud-based bookmark managers, and emacs.

  • PSF GSoC students blogs: Week 9 Check-in

DRM and Proprietary Software Leftovers

  • Some Photoshop users can try Adobe’s anti-misinformation system later this year

    Adobe pitched the CAI last year as a general anti-misinformation and pro-attribution tool, but many details remained in flux. A newly released white paper makes its scope clearer. The CAI is primarily a more persistent, verifiable type of image metadata. It’s similar to the standard EXIF tags that show the location or date of a photograph, but with cryptographic signatures that let you verify the tags haven’t been changed or falsely applied to a manipulated photo.

    People can still download and edit the image, take a screenshot of it, or interact with it the way they would with any picture. Any CAI metadata tags will show that the image was manipulated, however. Adobe is basically encouraging adding valuable context and viewing any untagged photos with suspicion, rather than trying to literally stop plagiarism or fakery. “There will always be bad actors,” says Adobe community products VP Will Allen. “What we want to do is provide consumers a way to go a layer deeper — to actually see what happened to that asset, who it came from, where it came from, and what happened to it.”

    The white paper makes clear that Adobe will need lots of hardware and software support for the system to work effectively. CAI-enabled cameras (including both basic smartphones and high-end professional cameras) would need to securely add tags for dates, locations, and other details. Photo editing tools would record how an image has been altered — showing that a journalist adjusted the light balance but didn’t erase or add any details. And social networks or other sites would need to display the information and explain why users should care about it.
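The core idea of cryptographically signed metadata can be sketched in a few lines. The following is a deliberately simplified illustration of the general concept only, not the CAI's actual format or algorithms (which rely on asymmetric signatures and certificate chains rather than a shared secret): tags are serialized deterministically and signed, so any later change to the tags invalidates the signature.

```python
import hashlib
import hmac
import json

# Hypothetical shared key for this toy example; a real provenance system
# would use asymmetric keys so verifiers cannot forge signatures.
SECRET_KEY = b"demo-signing-key"

def sign_tags(tags: dict) -> dict:
    """Attach an HMAC over the canonically serialized tags."""
    payload = json.dumps(tags, sort_keys=True).encode()
    sig = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"tags": tags, "signature": sig}

def verify_tags(signed: dict) -> bool:
    """Recompute the HMAC and compare in constant time."""
    payload = json.dumps(signed["tags"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["signature"])

record = sign_tags({"date": "2020-08-03", "edits": ["light balance"]})
assert verify_tags(record)           # untouched tags verify
record["tags"]["edits"] = []         # tampering with the edit history...
assert not verify_tags(record)       # ...breaks verification
```

This is what makes such tags more persistent than plain EXIF data: EXIF fields can be rewritten freely, while signed tags can only be replaced wholesale, which itself is detectable when signatures are anchored to a trusted signer.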

  • EFF and ACLU Tell Federal Court that Forensic Software Source Code Must Be Disclosed

    Can secret software be used to generate key evidence against a criminal defendant? In an amicus filed ten days ago with the United States District Court of the Western District of Pennsylvania, EFF and the ACLU of Pennsylvania explain that secret forensic technology is inconsistent with criminal defendants’ constitutional rights and the public’s right to oversee the criminal trial process. Our amicus in the case of United States v. Ellis also explains why source code, and other aspects of forensic software programs used in a criminal prosecution, must be disclosed in order to ensure that innocent people do not end up behind bars, or worse—on death row.


    The Constitution guarantees anyone accused of a crime due process and a fair trial. Embedded in those foundational ideals is the Sixth Amendment right to confront the evidence used against you. As the Supreme Court has recognized, the Confrontation Clause’s central purpose was to ensure that evidence of a crime was reliable by subjecting it to rigorous testing and challenges. This means that defendants must be given enough information to allow them to examine and challenge the accuracy of evidence relied on by the government.

  • Powershell Bot with Multiple C2 Protocols

    I spotted another interesting Powershell script. It's a bot and is delivered through a VBA macro that spawns an instance of msbuild.exe. This Windows tool is often used to compile/execute malicious code on the fly (I already wrote a diary about this technique[1]). I don't have the original document, but based on a technique used in the macro, it is part of a Word document. It calls Document_ContentControlOnEnter[2]: [...]
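
    A defender hunting for this delivery technique might flag Office applications spawning msbuild.exe. The sketch below runs that check over a couple of hand-written, Sysmon-style process-creation records; the field names and sample paths are illustrative assumptions, not an actual Sysmon schema.

```python
# Hypothetical, simplified process-creation records (in the spirit of
# Sysmon Event ID 1); field names here are illustrative only.
events = [
    {"parent": r"C:\Program Files\Microsoft Office\WINWORD.EXE",
     "image": r"C:\Windows\Microsoft.NET\Framework\v4.0.30319\msbuild.exe"},
    {"parent": r"C:\Windows\explorer.exe",
     "image": r"C:\Windows\System32\notepad.exe"},
]

# Office binaries that rarely have a legitimate reason to launch msbuild.exe.
OFFICE_APPS = ("winword.exe", "excel.exe", "powerpnt.exe")

def suspicious(event: dict) -> bool:
    parent = event["parent"].lower().rsplit("\\", 1)[-1]
    image = event["image"].lower().rsplit("\\", 1)[-1]
    return parent in OFFICE_APPS and image == "msbuild.exe"

hits = [e for e in events if suspicious(e)]
# hits contains only the WINWORD.EXE -> msbuild.exe record
```

    The same parent/child heuristic works for other living-off-the-land binaries (mshta.exe, regsvr32.exe, and so on) spawned from document readers.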

  • FBI Used Information From An Online Forum Hacking To Track Down One Of The Hackers Behind The Massive Twitter Attack

    As Mike reported last week, the DOJ rounded up three alleged participants in the massive Twitter hack that saw dozens of verified accounts start tweeting out promises to double the bitcoin holdings of anyone who sent bitcoin to a certain account.

  • Twitter Expects to Pay 9-Figure Fine for Violating FTC Agreement

    That means that the complaint is not related to last month’s high-profile [cr]ack of prominent accounts on the service. That security incident saw accounts from the likes of Joe Biden and Elon Musk ask followers to send them bitcoin. A suspect was arrested in the incident last month.

  • Twitter Expects to Pay Up to $250 Million in FTC Fine Over Alleged Privacy Violations

    Twitter disclosed that it anticipates being forced to pay an FTC fine of $150 million to $250 million related to alleged violations over the social network’s use of private data for advertising.


    The company revealed the expected scope of the fine in a 10-Q filing with the SEC. Twitter said that on July 28 it received a draft complaint from the Federal Trade Commission alleging the company violated a 2011 consent order, which required Twitter to establish an information-security program designed to “protect non-public consumer information.”


    “The allegations relate to the Company’s use of phone number and/or email address data provided for safety and security purposes for targeted advertising during periods between 2013 and 2019,” Twitter said in the filing.

  • Apple removes more than 26,000 games from China app store

    Apple pulled 29,800 apps from its China app store on Saturday, including more than 26,000 games, according to Qimai Research Institute.


    The removals are in response to Beijing's crackdown on unlicensed games, which started in June and intensified in July, Bloomberg reported. This brings an end to the unofficial practice of letting games be published while awaiting approval from Chinese censors.

  • Intuit Agrees to Buy Singapore Inventory Software Maker

    Intuit will pay more than $80 million for TradeGecko, according to people familiar with the matter, marking one of the biggest exits in Singapore since the Covid-19 pandemic. TradeGecko has raised more than $20 million to date from investors including Wavemaker Partners, Openspace Ventures and Jungle Ventures.

  • Justice Department Is Scrutinizing Takeover of Credit Karma by Intuit, Maker of TurboTax

    The probe comes after ProPublica first reported in February that antitrust experts viewed the deal as concerning because it could allow a dominant firm to eliminate a competitor with an innovative business model. Intuit already dominates online tax preparation, with a 67% market share last year. The article sparked letters from Sen. Ron Wyden, D-Ore., and Rep. David Cicilline, D-R.I., urging the DOJ to investigate further. Cicilline is chair of the House Judiciary Committee’s antitrust subcommittee.

Security Leftovers

  • DNS configuration recommendations for IPFire users

    If you are familiar with IPFire, you might have noticed that DNSSEC validation is mandatory, since it defeats entire classes of attacks. We receive questions like "where is the switch to turn off DNSSEC" on a regular basis, so to say it once and for all: there is none, and there will never be one. If you are running IPFire, you will be validating DNSSEC. Period.

    Another frequently asked question is why IPFire does not support filtering DNS replies for certain FQDNs, commonly referred to as a Response Policy Zone (RPZ). This is because an RPZ does exactly what DNSSEC attempts to secure users against: tampering with DNS responses. From the perspective of a DNSSEC-validating system, an RPZ will just look like an attacker (if the queried FQDN is DNSSEC-signed, which is what we strive for with as many of them as possible), thus creating a considerable amount of background noise. Obviously, this makes detecting ongoing attacks very hard, often even impossible - the haystack to search simply becomes too big. Further, an RPZ does not cover direct connections to hardcoded IP addresses, which is what some devices and attackers usually use, since that does not rely on DNS being operational and does not leave any traces. Using an RPZ will not make your network more secure; it just attempts to cover up the fact that certain devices within it cannot be trusted.

    Back to DNSSEC: if the queried FQDNs are signed, forged DNS replies are detected because they do not match the RRSIG records retrieved for that domain. Instead of being transparently redirected to a fraudulent web server, the client will only display an error message to its user, indicating a DNS lookup failure. Large-scale attacks returning forged DNS replies are frequently observed in the wild (the DNSChanger trojan is a well-known example), which is why you want to benefit from DNSSEC validation and from more and more domains being signed.
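
    The tamper-detection argument can be illustrated with a toy validator. The snippet below is a grossly simplified stand-in for DNSSEC (a keyed hash over invented records instead of an RSA/ECDSA RRSIG over a canonical RRset), but it shows why an RPZ-rewritten answer fails validation exactly the way an attacker's forgery would.

```python
import hashlib

def rrsig_stub(rrset: list, key: bytes) -> str:
    # Grossly simplified stand-in for an RRSIG: real DNSSEC signs the
    # canonical wire-format RRset with a zone's private key, not a hash.
    data = b"\n".join(r.encode() for r in sorted(rrset))
    return hashlib.sha256(key + data).hexdigest()

zone_key = b"example-zone-key"  # hypothetical zone signing key material
signed_answer = ["www.example.org. A 93.184.216.34"]
signature = rrsig_stub(signed_answer, zone_key)

# A middlebox (or RPZ) that rewrites the answer cannot produce a
# matching signature, so a validating client rejects the response.
rewritten = ["www.example.org. A 10.0.0.1"]
assert rrsig_stub(signed_answer, zone_key) == signature
assert rrsig_stub(rewritten, zone_key) != signature
```

    This is precisely why a validating resolver cannot distinguish a "benign" RPZ rewrite from a hostile one: both break the signature.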

  • Security updates for Tuesday

    Security updates have been issued by Debian (libx11, webkit2gtk, and zabbix), Fedora (webkit2gtk3), openSUSE (claws-mail, ghostscript, and targetcli-fb), Red Hat (dbus, kpatch-patch, postgresql-jdbc, and python-pillow), Scientific Linux (libvncserver and postgresql-jdbc), SUSE (kernel and python-rtslib-fb), and Ubuntu (ghostscript, sqlite3, squid3, and webkit2gtk). 

  • Official 1Password Linux App is Available for Testing

    An official 1Password Linux app is on the way, and brave testers are invited to try an early development preview. 1Password is a user-friendly (and rather popular) cross-platform password manager. It provides mobile apps and browser extensions for Windows, macOS, Android, iOS, Google Chrome, Edge, Firefox — and now a dedicated desktop app for Linux, too.

  • FBI Warns of Increased DDoS Attacks

    The Federal Bureau of Investigation warned in a “private industry notification” last week that attackers are increasingly using amplification techniques in distributed denial-of-service attacks. There has been an uptick in attack attempts since February, the agency’s Cyber Division said in the alert. An amplification attack occurs when attackers send a small number of requests to a server and the server replies with much larger responses. The attackers spoof the IP address to make it look like the requests are coming from a specific victim, and the resulting responses overwhelm the victim’s network. “Cyber actors have exploited built-in network protocols, designed to reduce computation overhead of day-to-day system and operational functions to conduct larger and more destructive distributed denial-of-service amplification attacks against US networks,” the FBI alert said. Copies of the alert were posted online by several recipients, including threat intelligence company Bad Packets.
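
    The arithmetic behind amplification is straightforward: the bandwidth amplification factor is the ratio of response bytes to request bytes, and spoofing the source address points all of that response traffic at the victim. The byte counts and bandwidth figures below are illustrative assumptions, not measurements; real factors vary widely by protocol and server configuration.

```python
# Bandwidth amplification factor = response size / request size.
def amplification_factor(request_bytes: int, response_bytes: int) -> float:
    return response_bytes / request_bytes

# Illustrative numbers: a ~60-byte spoofed DNS query can elicit a
# multi-kilobyte response from a misconfigured open resolver.
factor = amplification_factor(60, 3000)

# With the source address spoofed to the victim, a modest request
# stream from the attacker multiplies into far more traffic:
attacker_bps = 10_000_000           # 10 Mbit/s of spoofed requests
victim_bps = attacker_bps * factor  # traffic actually hitting the victim
```

    This is why the standard mitigations target the multiplier itself: disabling unneeded UDP services, rate-limiting responses, and deploying source-address validation (BCP 38) so spoofed requests never leave the originating network.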

  • NSA issues BootHole mitigation guidance

    Following the disclosure of a widespread buffer overflow vulnerability that could affect potentially billions of Linux and Windows-based devices, the National Security Agency issued a follow-up cybersecurity advisory highlighting the bug and offering steps for mitigation. The vulnerability -- dubbed BootHole -- impacts devices and operating systems that use signed versions of the open-source GRUB2 bootloader software found in most Linux systems. It also affects any system or device using Secure Boot -- a root firmware interface responsible for validating the booting process -- with Microsoft's standard third party certificate authority. The vulnerability enables attackers to bypass Secure Boot to allow arbitrary code execution and “could be used to install persistent and stealthy bootkits,” NSA said in a press statement.