News For Open Source Professionals

WASI, Bringing WebAssembly Way Beyond Browsers

Friday 16th of April 2021 01:00:19 PM

By Marco Fioretti

WebAssembly (Wasm) is a binary software format that all browsers can run directly, safely and at near-native speeds, on any operating system (OS). Its biggest promise, however, is to eventually work in the same way everywhere, from IoT devices and edge servers, to mobile devices and traditional desktops. This post introduces the main interface that should make this happen. The next post in this series will describe some of the already available, real-world implementations and applications of the same interface.

What is portability, again?

To be safe and portable, software code needs, as a minimum: 

  1. guarantees that users and programs can do only what they actually have the right to do, and only in ways that do not create problems for other programs or users
  2. standard, platform-independent methods to declare and apply those guarantees

Traditionally, these services are provided by libraries of “system calls” for each language, that is, functions with which a software program can ask its host OS to perform some low-level or sensitive task. When those libraries follow standards like POSIX, any compiler can automatically combine them with the source code to produce a binary file that can run on some combination of OSes and processors.

The next level: BINARY compatibility

System calls only make source code portable across platforms. As useful as they are, they still force developers to generate platform-specific executable files, all too often from more or less different combinations of source code.

WebAssembly instead aims to get to the next level: use any language you want, then compile it once, to produce one binary file that will just run, securely, in any environment that recognizes WebAssembly. 

What Wasm does not need to work outside browsers

Since WebAssembly already “compiles once” for all major browsers, the easiest way to expand its reach may seem to be to create, for every target environment, a full virtual machine (runtime) that provides everything a Wasm module expects from Firefox or Chrome.

Work like that, however, would be really complex and, above all, simply unnecessary, if not impossible, in many cases (e.g., on IoT devices). Besides, there are better ways to secure Wasm modules than dumping them in one-size-fits-all sandboxes as browsers do today.

The solution? A virtual operating system and runtime

Fully portable Wasm modules cannot happen as long as, to give one practical example, access to webcams or websites can be written only with system calls that generate platform-dependent machine code.

Consequently, the most practical way to have such modules, from any programming language, seems to be that of the WebAssembly System Interface (WASI) project: write and compile code for only one, obviously virtual, but complete operating system.

On one hand WASI gives to all the developers of Wasm runtimes one single OS to emulate. On the other, WASI gives to all programming languages one set of system calls to talk to that same OS.

In this way, even if you loaded it on ten different platforms, a binary Wasm module calling a certain WASI function would still get – from the runtime that launched it – a different binary object every time. But since all those objects would interact with that single Wasm module in exactly the same way, it would not matter!

This approach would also work in the first use case of WebAssembly, that is, with the JavaScript virtual machines inside web browsers. To run Wasm modules that use WASI calls, those machines need only load the JavaScript versions of the corresponding libraries.

This OS-level emulation is also more secure than simple sandboxing. With WASI, any runtime can implement different versions of each system call – with different security privileges – as long as they all follow the specification. Then that runtime could place every instance of every Wasm module it launches into a separate sandbox, containing only the smallest, and least privileged combination of functions that that specific instance really needs.

This “principle of least privilege”, or “capability-based security model”, is everywhere in WASI. A WASI runtime can pass into a sandbox an instance of the “open” system call that is only capable of opening the specific files, or folders, that were pre-selected by the runtime itself. This is a more robust, much more granular control on what programs can do than would be possible with traditional file permissions, or even with chroot systems.
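
To make this concrete, here is a minimal Rust sketch of what that looks like from inside a module. It assumes the code is compiled for the wasm32-wasi target and launched by a WASI runtime (Wasmtime is one example) that was asked to pre-open a single directory; the directory name, file name, and runtime flag are illustrative, not something mandated by WASI itself.

// A sketch of capability-based file access under WASI (paths are hypothetical).
// Build: cargo build --target wasm32-wasi
// Run:   wasmtime --dir=./data demo.wasm   (the flag syntax varies by runtime)
use std::fs;

fn main() {
    // Succeeds only because the runtime pre-opened ./data for this instance.
    match fs::read_to_string("data/fortunes.txt") {
        Ok(text) => println!("first line: {}", text.lines().next().unwrap_or("")),
        Err(e) => println!("could not read granted file: {}", e),
    }

    // No capability was granted for this path, so the runtime refuses the call,
    // no matter what the file permissions on the host say.
    if let Err(e) = fs::File::open("/etc/passwd") {
        println!("denied as expected: {}", e);
    }
}

The module never gets to decide which host directories it may touch; that decision stays with the runtime, which is exactly the least-privilege behavior described above.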

Coding-wise, functions for things like basic management of files, folders, network connections or time are needed by almost any program. Therefore the corresponding WASI interfaces are designed to be as similar as possible to their POSIX equivalents, and are all packaged into one “wasi-core” module that every WASI-compliant runtime must contain.
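
As a rough illustration, the Rust sketch below uses nothing but ordinary standard-library calls; when it is compiled for the wasm32-wasi target, each of them is served by a wasi-core system call provided by the runtime. The syscall names in the comments refer to the WASI snapshot and are given for orientation only.

use std::time::{SystemTime, UNIX_EPOCH};

fn main() {
    // SystemTime::now() is answered by the runtime's clock_time_get.
    let now = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("clock went backwards");

    // println! writes to the stdout descriptor through fd_write.
    println!("seconds since the epoch: {}", now.as_secs());

    // Command-line arguments arrive through args_sizes_get / args_get.
    for arg in std::env::args() {
        println!("arg: {}", arg);
    }

    // Environment variables arrive through environ_sizes_get / environ_get.
    if let Ok(home) = std::env::var("HOME") {
        println!("HOME={}", home);
    }
}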

A version of the libc standard C library, rewritten using wasi-core functions, is already available and, according to its developers, already “sufficiently stable and usable for many purposes”.

All the other virtual interfaces that WASI includes, or will include over time, are standardized and packaged as separate modules,  without forcing any runtime to support all of them. In the next article we will see how some of these WASI components are already used today.

The post WASI, Bringing WebAssembly Way Beyond Browsers appeared first on Linux Foundation – Training.


What we learned from our survey about returning to in-person events

Friday 16th of April 2021 01:00:00 PM

Recently, the Linux Foundation Events team sent out a survey to past attendees of all events from 2018 through 2021 to get their feedback on how they feel about virtual events and gauge their thoughts on returning to in-person events. We sent the survey to 69,000 people and received 972 responses. 

A PDF document summarizing the results of that survey is embedded in the original post.


Ultimately the good news here is that a healthy number of people feel comfortable traveling this year for events, especially domestically in the US. The results also show that about a quarter of respondents like virtual events, the vast majority of them people who told us they had attended in-person events before, which is another reason to keep a hybrid format moving forward.

The post What we learned from our survey about returning to in-person events appeared first on Linux Foundation.


How to resize a logical volume with 5 simple LVM commands

Friday 16th of April 2021 01:55:26 AM

It’s easy to add capacity to logical volumes with a few simple commands.
Read More at Enable Sysadmin

The post How to resize a logical volume with 5 simple LVM commands appeared first on

Charting the Path to a Successful IT Career

Thursday 15th of April 2021 09:00:56 PM

So, you’ve chosen to pursue a career in computer science and information technology – congratulations! Technology careers not only continue to be some of the fastest growing today, but also some of the most lucrative. Unlike many traditional careers, there are multiple paths to becoming a successful IT professional. 

What credentials do I need to start an IT career?

While certain technology careers, such as research and academia, require a computer science degree, most do not. Employers in the tech industry are typically more concerned with ensuring you have the required skills to carry out the responsibilities of a given role. 

What you need is a credential that demonstrates that you possess the practical skills to be successful; independently verifiable certifications are the best way to accomplish this. This is especially true when you are just starting out and do not have prior work experience. 

We recommend the Linux Foundation Certified IT Associate (LFCA) as a starting point. This respected certification demonstrates expertise and skills in fundamental information technology functions, especially in cloud computing, which is something that has not traditionally been included in entry-level certifications, but has become an essential skill regardless of what further specialization you may pursue.

How do I prepare for the LFCA?

The LFCA tests basic knowledge of fundamental IT concepts. It’s good to keep in mind which topics will be covered on the exam so you know how to prepare. The domains tested on the LFCA, and their scoring weight on the exam, are:

  • Linux Fundamentals – 20%
  • System Administration Fundamentals – 20%
  • Cloud Computing Fundamentals – 20%
  • Security Fundamentals – 16%
  • DevOps Fundamentals – 16%
  • Supporting Applications and Developers – 8%

Of course if you are completely new to the industry, no one expects you to be able to pass this exam without spending some time preparing. Linux Foundation Training & Certification offers a range of free resources that can help. These include free online courses covering the topics on the exam, guides, the exam handbook and more. We recommend taking advantage of these and the countless tutorials, video lessons, how-to guides, forums and more available across the internet to build your entry-level IT knowledge. 

I’ve passed the LFCA exam, now what?

Generally, LFCA alone should be sufficient to qualify for many entry-level jobs in the technology industry, such as a junior system administrator, IT support engineer, junior DevOps engineer, and more. It’s not a bad idea to try to jump into the industry at this point and get some experience.

If you’ve already been working in IT for a while, or you want to aim for a higher level position right off the bat, you will want to consider more advanced certifications to help you move up the ladder. Our 2020 Open Source Jobs Report found the majority of hiring managers prioritize candidates with relevant certifications, and 74% are even paying for their own employees to take certification exams, up from 55% only two years earlier, showing how essential these credentials are. 

We’ve developed a roadmap that shows how coupling an LFCA with more advanced certifications can lead to some of the hottest jobs in technology today. Once you have determined your career goal (if you aren’t sure, take our career quiz for inspiration!), this roadmap shows which certifications from across various providers can help you achieve it. 


How many certifications do I really need?

This is a difficult question to answer and really varies depending on the specific job and its roles and responsibilities. No one needs every certification on this roadmap, but you may benefit from holding two or three depending on your goals. Look at job listings, talk to colleagues and others in the industry with more experience, read forums, etc. to learn as much as you can about what has worked for others and what specific jobs or companies may require. 

The most important thing is to set a goal, learn, gain experience, and find ways to demonstrate your abilities. Certifications are one piece of the puzzle and can have a positive impact on your career success when viewed as a component of overall learning and upskilling. 

Want to learn more? See our full certification catalog to dig into what is involved in each Linux Foundation certification, and suggested learning paths to get started!

The post Charting the Path to a Successful IT Career appeared first on Linux Foundation – Training.


SODA Foundation Announces 2021 Data & Storage Trends Survey

Thursday 15th of April 2021 08:00:00 PM

Data and storage technologies are evolving. The SODA Foundation is conducting a survey to identify the current challenges, gaps, and trends for data and storage in the era of cloud-native, edge, AI, and 5G. Through new insights generated from the data and storage community at large, end-users will be better equipped to make decisions, vendors can improve their products, and the SODA Foundation can establish new technical directions — and beyond!

The SODA Foundation is an open source project under Linux Foundation that aims to foster an ecosystem of open source data management and storage software for data autonomy. SODA Foundation offers a neutral forum for cross-project collaboration and integration and provides end-users quality end-to-end solutions. We intend to use this survey data to help guide the SODA Foundation and its surrounding ecosystem on important issues.

Please participate now; we intend to close the survey in late May.

Privacy and confidentiality are important to us. Neither participant names, nor their company names, will be displayed in the final results.

The first 50 survey respondents will each receive a $25 (USD) Amazon gift card. Some conditions apply.

This survey should take no more than 15 minutes of your time.

To take the 2021 SODA Foundation Data & Storage Trends Survey, click the button below:

Take Survey (English) | Take Survey (Japanese) | Take Survey (Chinese)


Thanks to our survey partners: Cloud Native Computing Foundation (CNCF), Storage Networking Industry Association (SNIA), Japan Data Storage Forum (JDSF), China Open Source Cloud League (COSCL), Open Infrastructure Foundation (OIF), and the Mulan Open Source Community.


Thank you for taking the time to participate in this survey conducted by SODA Foundation, an open source project at the Linux Foundation focusing on data management and storage.

This survey will provide insights into the challenges, gaps, and trends for data and storage in the era of cloud-native, edge, AI, and 5G. We hope these insights will help end-users make better decisions, enable vendors to improve their products and serve as a guide to the technical direction of SODA and the surrounding ecosystem.

This survey will provide insights into:

  • What are the data & storage challenges faced by end-users?
  • Which features and capabilities do end users look for in data and storage solutions?
  • What are the key trends shaping the data & storage industry?
  • Which open source data & storage projects are users interested in?
  • What cloud strategies are businesses adopting?

Your name and company name will not be displayed; responses are attributed to your role, company size, and industry. Responses will be subject to the Linux Foundation’s Privacy Policy. Please note that members of the SODA Foundation survey committee who are not LF employees will review the survey results and coordinate the gift card giveaways. If you do not want them to have access to your name or email address in connection with this, please do not provide your name or email address, and you will not be included in the giveaway.


We will summarize the survey data and share the learnings during SODACON Global 2021 – Virtual on Jul 13-14. The summary report will be published on the SODA website. In addition, we will be producing an in-depth report of the survey which will be shared with all survey participants.


Interested in attending or speaking at SODACON Global? Details for the event can be found at


If you have questions regarding this survey, please email us at or ask us on Slack at

Sign up for the SODA Newsletter at

The post SODA Foundation Announces 2021 Data & Storage Trends Survey appeared first on Linux Foundation.


Interview with Hilary Carter, VP of Linux Foundation Research

Wednesday 14th of April 2021 04:01:00 PM

Jason Perlow, Director of Project Insights and Editorial Content at the Linux Foundation, spoke with Hilary Carter about Linux Foundation Research and how it will create better awareness of the work being done by open source projects and their communities.

JP: It’s great to have you here today, and also, welcome to the Linux Foundation. First, can you tell me a bit about yourself, where do you live, what your interests are outside work?

HC: Thank you! I’m a Toronto native, but I now live in a little suburban town called Aurora, just north of the city. Mike Myers — a fellow Canadian — chose “Aurora, IL” for his setting of Wayne’s World, but he really named the town after Aurora, ON. I also spend a lot of time about 3 hours north of Aurora in the Haliburton Highlands, a region noted for its beautiful landscape of rocks, trees, and lakes — and it’s here where my husband and I have a log cabin. We ski, hike, and paddle with our kids, depending on the season. It’s an interesting location because we’re just a few kilometers north of the 45th parallel — and at the spring and fall equinox, the sun sets precisely in the west right off of our dock. At the winter and summer solstice, it’s 45 degrees to the south and north, respectively. It’s neat. As much as I have always been a bit obsessed with geolocation, I had never realized we were smack in the middle of the northern hemisphere until our kids’ use of Snapchat location filters brought it to our attention. Thank you, mobile apps! 

JP: And what organization are you joining us from?

HC: My previous role was Managing Director at the Blockchain Research Institute, where I helped launch and administer their research program in 2017. Over nearly four years, we produced more than 100 research projects that explored how blockchain technology — as the so-called Internet of value — was transforming all facets of society — at the government and enterprise-level as well as at the peer-to-peer level. We also explored how blockchain converged with other technologies like IoT, AI, additive manufacturing and how these developments would change traditional business models. It’s a program that is as broad as it is deep into a particular subject matter without being overly technical, and it was an absolutely fascinating and rewarding experience to be part of building that.

JP: Tell me a bit more about your academic background; what disciplines do you feel most influence your research approach? 

HC: I was a Political Studies major as an undergrad, which set the stage for my ongoing interest in geopolitical issues and how they influence the economy and society. I loved studying global political systems, international political economy, and supranational organizations and looking at the frameworks built for global collaboration to enable international peace and security under the Bretton Woods system. That program made me feel incredibly fortunate to have been born into a time of relative peace and prosperity, unlike generations before me.

I did my graduate studies in Management at the London School of Economics (LSE), and it was here that I came to learn about the role of technology in business. The technologies we were studying at the time were those that enabled real-time inventory. Advanced manufacturing was “the” hot technology of the mid-1990s, or so it seemed in class. I find it so interesting that the curriculum at the time did not quite reflect the technology that would profoundly and most immediately shape our world, and of course, that was the Web. In fairness, the digital economy was emerging slowly, then. Tasks like loading web pages still took a lot of time, so in a way, it’s understandable that the full extent of the web’s power did not make it into many of my academic lectures and texts. I believe academia is different today — and I’m thrilled to see the LSE at the forefront of new technology research, including blockchain, AI, robotics, big data, preparing students for a digital world.

JP: I did do some stalking of your LinkedIn profile; I see that you also have quite a bit of journalistic experience as well.

HC: I wish I could have had more! I was humbled when my first piece was published in Canada’s national newspaper. I had no formal training or portfolio of past writing to lend credibility to my authorship. Still, fortunately, after much persistence, the editor gave me a shot, and I’m forever grateful to her for that. I was inspired to write opinion pieces on the value of digital tools because I saw a gap that needed filling — and I was really determined to fill it. And the subject that inspired me was leadership around new technologies. I try to be a good storyteller and create something that educates and inspires all in one go. I suppose I come by a bit of that naturally. My father was an award-winning author in Canada, but his day job was Chief of Surgery at a hospital in downtown Toronto. He had a gift to take complex subject matter about diseases, such as cancer, and humanize the content by making it personal. I think that’s what makes writing about complex concepts “sticky.” When you believe that the author is, at some level, personally committed to their work and successful in setting the context for their subject matter to the world at large and do so in a way that creates action or additional thinking, then they’ve done a successful job. 

JP: Let’s try a tough existential question. Why do you feel that the Linux Foundation now needs a dedicated research and publications division? Is it an organizational maturity issue? Has open source gotten so widespread and pervasive that we need better metrics to understand these projects’ overall impact?

HC: Well, let me start by saying that I’m delighted that the LF has prioritized research as a new business unit. In my past role at the Blockchain Research Institute, it was clear that there was and still is a huge demand for research — the program kept growing because technologies continued to evolve, and there was no shortage of issues to cover. So I think the LF is tapping into a deep need for knowledge in the market at large and specific insights on open source ecosystems, in particular, to create greater awareness of incredible open source projects and inspire greater participation in them. There are also threats that we as a society — as human beings — need to deal with urgently. So the timing couldn’t be better to broaden the understanding of what is happening in open source communities, new tools to share knowledge, and encourage greater collaboration levels in open source projects. If we accomplish one thing, it will be to illustrate the global context for open source software development and why getting involved in these activities can create positive global change on so many levels. We want more brains in the game.

JP: So let’s dive right into the research itself. You mentioned your blockchain background and your previous role — I take it that this will have some influence on upcoming surveys and analysis? What is coming down the pike on that front?

HC: Blockchain as a technology has undoubtedly influenced my thinking about systems architecture and how research is conducted — both technological frameworks and the human communities that organize around them. Decentralization. Coordination. Transparency. Immutability. Privacy. These are all issues that have been front and center for me these past many years. Part of what I have learned about what makes good blockchain systems work comes from the right combination of great dependability and security with leadership, governance, and high mass collaboration levels. I believe those values transfer over readily to the work of the Linux Foundation and its community. I’m very much looking forward to learning about the many technology ecosystems beyond blockchain currently under the LF umbrella. I’m excited to discover what I imagine will be a new suite of technologies that are not yet part of our consciousness.

JP: What other LF projects and initiatives do you feel need to have deeper dives in understanding their impact besides blockchain? Last year, we published a contributor survey with Harvard. It reached many interesting conclusions about overall motivations for participation and potential areas for remediation or improvement in various organizations. Where do we go further in understanding supply chain security issues — are you working with the Harvard team on any of those things?

HC: The FOSS Contributor Survey was amazing, and there are more good things to come through our collaboration with the Laboratory of Innovation Science at Harvard. Security is a high-priority research issue, and yes, ongoing contributions to this effort from that team will be critical. You can definitely expect a project that dives deep into security issues in software supply chains in the wake of SolarWinds.

I’ve had excellent preliminary discussions with some executive team members about their wish-lists for projects that could become part of the LF Research program in terms of other content. We’ll hope to be as inclusive as we can, based on what our capacity allows. We look forward to exploring topics along industry verticals and technology horizontals, as well as looking at issues that don’t fall neatly into this framework, such as strategies to increase diversity in open source communities, or the role of governance and leadership as a factor in successful adoption of open source projects.

Ultimately, LF Research will have an agenda shaped not only from feedback from within the LF community but by the LF Research Advisory Board, a committee of LF members and other stakeholders who will help shape the agenda and provide support and feedback throughout the program. Through this collaborative effort, I’m confident that LF Research will add new value to our ecosystem and serve as a valuable resource for anyone wanting to learn more about open source software and the communities building it and help them make decisions accordingly. I’m looking forward to our first publications, which we expect out by mid-summer. And I’m most excited to lean on, learn from, and work with such an incredible team as I have found within the LF. Let’s do this!!!

JP: Awesome, Hilary. It was great having you for this talk, and I look forward to the first publications you have in store for us.

The post Interview with Hilary Carter, VP of Linux Foundation Research appeared first on Linux Foundation.


Using Web Assembly Written in Rust on the Server-Side

Wednesday 14th of April 2021 04:00:21 PM

By Bob Reselman

This article was originally published at TheNewStack

WebAssembly allows you to write code in a low-level programming language such as Rust, that gets compiled into a transportable binary. That binary can then be run on the client-side in the WebAssembly virtual machine that is standard in today’s web browsers. Or, the binary can be used on the server-side, as a component consumed by another programming framework — such as Node.js or Deno.

WebAssembly combines the efficiency inherent in low-level code programming with the ease of component transportability typically found in Linux containers. The result is a development paradigm specifically geared toward doing computationally intensive work at scale — for example, artificial intelligence and complex machine learning tasks.

As Solomon Hykes, the creator of Docker, tweeted on March 27, 2019: “If WASM+WASI existed in 2008, we wouldn’t have needed to have created Docker. That’s how important it is. WebAssembly on the server is the future of computing.”

WebAssembly is a compelling approach to software development. However, in order to get a true appreciation for the technology, you need to see it in action.

In this article, I am going to show you how to program a WebAssembly binary in Rust and use it in a TypeScript-powered web server running under Deno. I’ll show you how to install Rust and prep the runtime environment. We’ll compile the source code into a Rust binary. Then, once the binary is created, I’ll demonstrate how to run it on the server-side under Deno. Deno is a TypeScript-based programming framework that was started by Ryan Dahl, the creator of Node.js.

Understanding the Demonstration Project

The demonstration project that accompanies this article is called Wise Sayings. The project stores a collection of “wise sayings” in a text file named wisesayings.txt. Each line in the text file is a wise saying, for example, “A friend in need is a friend indeed.”

The Rust code publishes a single function, get_wise_saying(). That function gets a random line from the text file, wisesayings.txt, and returns the random line to the caller. (See Figure 1, below)

Figure 1: The demonstration project compiles data in a text file directly into the WebAssembly binary

Both the code and text file are compiled into a single WebAssembly binary file, named wisesayings.wasm. Then another layer of processing is performed to make the WebAssembly binary consumable by the Deno web server code. The Deno code calls the function get_wise_saying() in the WebAssembly binary, to produce a random wise saying. (See Figure 2.)

Figure 2: WebAssembly binaries can be consumed by a server-side programming framework such as Deno.

You can get the source code for the Wise Sayings demonstration project used in this article on GitHub. All the steps described in this article are listed on the repository’s main Readme document.

Prepping the Development Environment

The first thing we need to do to get the code up and running is to make sure that Rust is installed in the development environment. The following steps describe the process.

Step 1: Make sure Rust is installed on your machine by typing:

rustc --version

You’ll get output similar to the following:

rustc 1.50.0 (cb75ad5db 2021-02-10)

If the call to rustc --version fails, you don’t have Rust installed. Follow the instructions below and make sure you do all the tasks presented by the given installation method.

To install Rust on Linux/Mac, run the rustup installer:

curl --proto '=https' --tlsv1.2 -sSf | sh

To install it on Windows, download and run rustup-init.exe.

Step 2: Modify your system’s PATH

export PATH="$HOME/.cargo/bin:$PATH"

Step 3: If you’re working in a Linux environment do the following steps to install the required additional Linux components.

sudo apt-get update -y
sudo apt-get install -y libssl-dev
apt install pkg-config

Developer’s Note: The optimal development environment in which to run this code is one that uses the Linux operating system.

Step 4: Get the CLI tool that you’ll use for generating the TypeScript/JavaScript adapter files. These adapter files (a.k.a. shims) do the work of exposing the function get_wise_saying() in the WebAssembly binary to the Deno web server that will be hosting the binary. Execute the following command at the command line to install the tool, wasm-bindgen-cli.

cargo install wasm-bindgen-cli

The development environment now has Rust installed, along with the necessary ancillary libraries. Now we need to get the Wise Saying source code.

Working with the Project Files

The Wise Saying source code is hosted in a GitHub repository. Take the following steps to clone the source code from GitHub onto the local development environment.

Step 1: Execute the following command to clone the Wise Sayings source code from GitHub

git clone

Step 2: Go to the working directory

cd wisesayingswasm/

Listing 1, below lists the files that make up the source code cloned from the GitHub repository.

 1  .
 2  ├── Cargo.toml
 3  ├── cheatsheet.txt
 4  ├── LICENSE
 5  ├── lldbconfig
 6  ├── package-lock.json
 7  ├── 
 8  ├── server
 9  │   ├── main.ts
10  │   └── package-lock.json
11  └── src
12      ├── fortunes.txt
13      ├── 
14      └── 

Listing 1: The files for the source code for the Wise Sayings demonstration project hosted in the GitHub repository

Let’s take a moment to describe the source code files listed above in Listing 1. The particular files of interest for creating the WebAssembly binary are the files in the directory named src, at Line 11, and the file Cargo.toml, at Line 2.

Let’s discuss Cargo.toml first. The content of Cargo.toml is shown in Listing 2, below.

 1  [package]
 2  name = "wise-sayings-wasm"
 3  version = "0.1.0"
 4  authors = ["Bob Reselman <>"]
 5  edition = "2018"
 6  
 7  [dependencies]
 8  rand = "0.8.3"
 9  getrandom = { version = "0.2", features = ["js"] }
10  wasm-bindgen = "0.2.70"
11  
12  [lib]
13  name = "wisesayings"
14  crate-type =["cdylib", "lib"]

Listing 2: The content of Cargo.toml for the demonstration project Wise Sayings

Cargo.toml is the manifest file that describes various aspects of the Rust project under development. The Cargo.toml file for the Wise Saying project is organized into three sections: package, dependencies, and lib. The section names are defined in the Cargo manifest specification, which you can read in the official Cargo documentation.

Understanding the Package Section of Cargo.toml

The package section indicates the name of the package (wise-sayings-wasm), the developer assigned version (0.1.0), the authors (Bob Reselman <>) and the edition of Rust (2018) that is used to program the binary.

Understanding the Dependencies Section of Cargo.toml

The dependencies section lists the dependencies that the WebAssembly project needs to do its work. As you can see in Listing 2, above at Line 8, the Cargo.toml lists the rand library as a dependency. The rand library provides the capability to generate a random number which is used to get a random line of wise saying text from the file, wisesayings.txt.

The reference to getrandom at Line 9 in Listing 2 above indicates that the WebAssembly binary’s getrandom will be running under JavaScript and that the JavaScript interface should be used. This condition is very particular to running a WebAssembly binary under JavaScript. The long and short of it is that if the line getrandom = { version = "0.2", features = ["js"] } is not included in the Cargo.toml, the WebAssembly binary will not be able to create a random number.

The entry at Line 10 declares the wasm-bindgen library as a dependency. The wasm-bindgen library provides the capability for wasm modules to talk to JavaScript and JavaScript to talk to wasm modules.
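
Outside of the Wise Sayings code, the basic wasm-bindgen pattern looks roughly like the sketch below: an extern block imports a JavaScript function into Rust, and an exported Rust function becomes callable from JavaScript. The greet() function is a made-up example, not part of the demonstration project.

use wasm_bindgen::prelude::*;

// Import a JavaScript function (console.log) so that Rust code can call it.
#[wasm_bindgen]
extern "C" {
    #[wasm_bindgen(js_namespace = console)]
    fn log(msg: &str);
}

// Export a Rust function so that JavaScript code can call it.
#[wasm_bindgen]
pub fn greet(name: &str) -> String {
    log("greet() was called from JavaScript");
    format!("Hello, {}!", name)
}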

Understanding the Lib Section of Cargo.toml

The entry crate-type =["cdylib", "lib"] at Line 14 in the lib section of the Cargo.toml file tells the Rust compiler to create a wasm binary without a start function. Typically when cdylib is indicated, the compiler will create a dynamic library with the extension .dll on Windows, .so on Linux, or .dylib on macOS. In this case, because the deployment unit is a WebAssembly binary, the compiler will create a file with the extension .wasm. The name of the wasm file will be wisesayings.wasm, as indicated at Line 13 above in Listing 2.

The important thing to understand about Cargo.toml is that it provides both the design and runtime information needed to get your Rust code up and running. If the Cargo.toml file is not present, the Rust compiler doesn’t know what to do and the build will fail.

Understanding the Core Function, get_wise_saying()

The actual work of getting a random line that contains a Wise Saying from the text file wisesayings.txt is done by the function get_wise_saying(). The code for get_wise_saying() is in the Rust library file, ./src/ The Rust code is shown below in Listing 3.

 1  use rand::seq::IteratorRandom;
 2  use wasm_bindgen::prelude::*;
 3  
 4  #[wasm_bindgen]
 5  pub fn get_wise_saying() -> String {
 6      let str = include_str!("fortunes.txt");
 7      let mut lines = str.lines();
 8  
 9      let line = lines
10          .choose(&mut rand::thread_rng())
11          .expect("File had no lines");
12      return line.to_string();
13  }

Listing 3: The library source file, which contains the function get_wise_saying().

The important things to know about the source are that it’s tagged at Line 4 with the attribute #[wasm_bindgen], which lets the Rust compiler know that the source code is targeted as a WebAssembly binary, and that the code publishes one function, get_wise_saying(), at Line 5. The wise sayings text file is loaded into memory using the Rust macro include_str!. This macro reads the file from disk at compile time and embeds its contents in the binary as a string, and the function str.lines() then splits that string into its individual lines. (Line 7.)

The rand::thread_rng() call at Line 10 supplies a random number generator that the .choose() method, also at Line 10, uses to pick one line at random. The result is a string containing the wise saying returned by the function.
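
Because crate-type in Listing 2 also includes "lib", the crate can be built for the host machine as well, which makes it easy to sanity-check get_wise_saying() with an ordinary cargo test before producing any WebAssembly at all. The test below is a minimal sketch; the test name and assertion are illustrative and not part of the project’s source.

// Appended to the library source file; run with cargo test on the host.
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn returns_a_non_empty_saying() {
        let saying = get_wise_saying();
        assert!(!saying.trim().is_empty(), "expected a wise saying, got an empty string");
    }
}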

Creating the WebAssembly Binary

Let’s move on to compiling the code into a WebAssembly binary.

Step 1: Compile the source code into a WebAssembly binary using the command shown below.

cargo build --lib --target wasm32-unknown-unknown


cargo build is the command and subcommand to invoke the Rust compiler using the settings in the Cargo.toml file.

--lib is the option indicating that you’re going to build the library target defined in Cargo.toml.

--target wasm32-unknown-unknown indicates that Rust will compile for the wasm32-unknown-unknown target and will store the build artifacts, as well as the WebAssembly binary, in directories under the target directory, wasm32-unknown-unknown.

Understanding the Rust Target Triple Naming Convention

Rust has a naming convention for targets. The term used for the convention is a target triple. A target triple uses the following format: ARCH-VENDOR-SYS-ABI.


ARCH describes the intended target architecture, for example wasm32 for WebAssembly, or i686 for current-generation Intel chips.

VENDOR describes the vendor publishing the target; for example, Apple or Nvidia.

SYS describes the operating system; for example, Windows or Linux.

ABI describes how the process starts up; for example, eabi is used for bare metal, while gnu is used for glibc.

Thus, the name i686-unknown-linux-gnu means that the Rust binary is targeted to an i686 architecture, the vendor is defined as unknown, the targeted operating system is Linux, and ABI is gnu.

In the case of wasm32-unknown-unknown, the target is WebAssembly, the operating system is unknown and the ABI is unknown. The informal inference of the name is “it’s a WebAssembly binary.”

There is a standard set of built-in targets defined by Rust, which you can find in the rustc platform support documentation.

If you find the naming convention to be confusing because there are optional fields and sometimes there are four sections to the name, while other times there will be three sections, you are not alone.

Deploying the Binary Server-Side Using Deno

After we build the base WebAssembly binary, we need to create the adapter (a.k.a. shim) files and a special version of the WebAssembly binary — all of which can be run from within JavaScript. We’ll create these artifacts using the wasm-bindgen tool.

Step 1: We create these new artifacts using the command shown below.

wasm-bindgen --target deno ./target/wasm32-unknown-unknown/debug/wisesayings.wasm --out-dir ./server


wasm-bindgen is the command for creating the adapter files and the special WebAssembly binary.

--target deno is the option that indicates the adapter files will be targeted for Deno. The path that follows it, ./target/wasm32-unknown-unknown/debug/wisesayings.wasm, denotes the location of the original WebAssembly wasm binary that is the basis for the artifact generation process.

--out-dir ./server is the option that declares the location where the created adapter files will be stored on disk; in this case, ./server.

The result of running wasm-bindgen is the server directory shown in Listing 4 below.

 1  .
 2  ├── Cargo.toml
 3  ├── cheatsheet.txt
 4  ├── LICENSE
 5  ├── lldbconfig
 6  ├── package-lock.json
 7  ├── 
 8  ├── server
 9  │   ├── main.ts
10  │   ├── package-lock.json
11  │   ├── wisesayings_bg.wasm
12  │   ├── wisesayings_bg.wasm.d.ts
13  │   ├── wisesayings.d.ts
14  │   └── wisesayings.js
15  └── src
16      ├── fortunes.txt
17      ├── 
18      └── 

Listing 4: The server directory contains the results of running wasm-bindgen

Notice that the contents of the server directory, shown above in Listing 4, now has some added JavaScript (js) and TypeScript (ts) files. Also, the server directory has the special version of the WebAssembly binary, named wisesayings_bg.wasm. This version of the WebAssembly binary is a stripped-down version of the wasm file originally created by the initial compilation, done when invoking cargo build earlier. You can think of this new wasm file as a JavaScript-friendly version of the original WebAssembly binary. The suffix, _bg, is an abbreviation for bindgen.

Running the Deno Server

Once all the artifacts for running WebAssembly have been generated into the server directory, we’re ready to invoke the Deno web server. Listing 5 below shows content of main.ts, which is the source code for the Deno web server.

 1  import { serve } from "";
 2  import { get_wise_saying } from "./wisesayings.js";
 3  
 4  const env = Deno.env.toObject();
 5  
 6  let port = 4040;
 7  
 8  if(env.WISESAYING_PORT){
 9    port = Number(env.WISESAYING_PORT);
10  };
11  
12  const server = serve({ hostname: "", port});
13  console.log(`HTTP webserver running at ${new Date()}.  Access it at:  http://localhost:${port}/`);
14  
15  for await (const request of server) {
16      const saying = get_wise_saying();
17      request.respond({ status: 200, body: saying });
18    }

Listing 5: main.ts is the Deno webserver code that uses the WebAssembly binary

You’ll notice that the WebAssembly wasm binary is not imported directly. This is because the work of representing the WebAssembly binary is done by the JavaScript and TypeScript adapter (a.k.a. shim) files generated earlier. The WebAssembly/Rust function, get_wise_saying(), is exposed in the auto-generated JavaScript file, wisesayings.js.

The function get_wise_saying is imported into the webserver code at Line 2 above. The function is used at Line 16 to get a wise saying that will be returned as an HTTP response by the webserver.

To get the Deno web server up and running, execute the following command in a terminal window.

Step 1:

deno run --allow-read --allow-net --allow-env ./main.ts


deno run is the command set to invoke the webserver.

--allow-read is the option that allows the Deno webserver code to have permission to read files from disk.

--allow-net is the option that allows the Deno webserver code to have access to the network.

--allow-env is the option that allows the Deno webserver code to read environment variables.

./main.ts is the TypeScript file that Deno is to run. In this case, it’s the webserver code.

When the webserver is up and running, you’ll get output similar to the following:

HTTP webserver running at Thu Mar 11 2021 17:57:32 GMT+0000 (Coordinated Universal Time). Access it at: http://localhost:4040/

Step 2:

Run the following command in a terminal on your computer to exercise the Deno/WebAssembly code

curl localhost:4040

You’ll get a wise saying, for example:

True beauty lies within.

Congratulations! You’ve created and run a server-side WebAssembly binary.

Putting It All Together

In this article, I’ve shown you everything you need to know to create and use a WebAssembly binary in a Deno web server. Yet for as detailed as the information presented in this article is, there is still a lot more to learn about what’s under the covers. Remember, Rust is a low-level programming language. It’s meant to go right up against the processor and memory directly. That’s where its power really is. The real benefit of WebAssembly is using the technology to do computationally intensive work from within a browser. Applications that are well suited to WebAssembly are visually intensive games and activities that require complex machine learning capabilities — for example, real-time voice recognition and language translation. WebAssembly allows you to do computation on the client-side that previously was only possible on the server-side. As Solomon Hykes said, WebAssembly is the future of computing. He might very well be right.

The important thing to understand is that WebAssembly provides enormous opportunities for those wanting to explore cutting-edge approaches to modern distributed computing. Hopefully, the information presented in this piece will motivate you to explore those opportunities.

The post Using Web Assembly Written in Rust on the Server-Side appeared first on Linux Foundation – Training.


The Linux Foundation launches research division to explore open source ecosystems and impact

Wednesday 14th of April 2021 04:00:00 PM

Linux Foundation Research will provide objective, decision-useful insights into the scope of open source collaboration

SAN FRANCISCO, Calif. – April 14, 2021 – The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced Linux Foundation Research, a new division that will broaden the understanding of open source projects, ecosystem dynamics, and impact, with never before seen insights on the efficacy of open source collaboration as a means to solve many of the world’s pressing problems. Through a series of research projects and related content, Linux Foundation Research will leverage the Linux Foundation’s vast repository of data, tools, and communities across industry verticals and technology horizontals. The methodology will apply quantitative and qualitative techniques to create an unprecedented knowledge network to benefit the global open source community, academia, and industry.

“As we continue in our mission to collectively build the world’s most critical open infrastructure, we can provide a first-of-its-kind research program that leverages the Linux Foundation’s experience, brings our communities together, and can help inform how open source evolves for decades to come,” said Jim Zemlin, executive director at the Linux Foundation. “As we have seen in our previous studies on supply chain security and FOSS contribution, research is an important way to measure the progress of both open source ecosystems and contributor trends. With a dedicated research organization, the Linux Foundation will be better equipped to draw out insights, trends, and context that will inform discussions and decisions around open collaboration.”

As part of the launch, the Linux Foundation is pleased to welcome Hilary Carter, VP Research, to lead this initiative. Hilary most recently led the development and publication of more than 100 enterprise-focused technology research projects for the Blockchain Research Institute. In addition to research project management, Hilary has authored, co-authored, and contributed to reports on blockchain in pandemics, government, enterprise, sustainability, and supply chains.

“The opportunity to measure, analyze, and describe the impact of open source collaborations in a more fulsome way through Linux Foundation Research is inspiring,” says Carter. “Whether we’re exploring the security of digital supply chains or new initiatives to better report on climate risk, the goal of LF Research is to enhance decision-making and encourage collaboration in a vast array of open source projects. It’s not enough to simply describe what’s taking place. It’s about getting to the heart of why open source community initiatives matter to all facets of our society, as a means to get more people — and more organizations — actively involved.”

Critical to the research initiative will be establishing the Linux Foundation Research Advisory Board, a rotating committee of community leaders and subject matter experts, who will collectively influence the program agenda and provide strategic input, oversight, and ongoing support on next-generation issues.

About the Linux Foundation

Founded in 2000, The Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation projects are critical to the world’s infrastructure, including Linux, Kubernetes, Node.js, and more.  The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users, and solution providers to create sustainable models for open collaboration. For more information, please visit us at

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: Linux is a registered trademark of Linus Torvalds.

The post The Linux Foundation launches research division to explore open source ecosystems and impact appeared first on Linux Foundation.


Save 30% Sitewide Through April 20!

Tuesday 13th of April 2021 01:00:19 PM

Spring has arrived in the northern hemisphere, and to celebrate we are offering a 30% discount sitewide on all training courses, certifications, bundles and bootcamps! All you have to do is use code TUX30 when you check out. 

This is a great opportunity to learn a new in-demand skill like cloud, which has the biggest impact on hiring decisions according to the 2020 Open Source Jobs Report, or blockchain, which LinkedIn named the most in-demand skill of 2020. If you aren’t sure which path is right for you, start with our career quiz, then explore recommended learning paths. You can even start with one of our free courses before taking the plunge into intermediate and advanced training.

View more details on the sale, and start learning today!

The post Save 30% Sitewide Through April 20! appeared first on Linux Foundation – Training.


Minimizing struct page overhead

Monday 12th of April 2021 10:00:00 PM

Discussion on how to improve Linux memory management efficiency.
Click to Read More at Oracle Linux Kernel Development

The post Minimizing struct page overhead appeared first on

Linux Foundation Hosts Collaboration Among World’s Largest Insurance Companies

Monday 12th of April 2021 08:00:00 PM

openIDL platform provides a standardized data repository streamlining regulatory reporting and enabling the delivery of next-gen risk and insurance applications

San Francisco, Calif., April 12, 2021 – The Linux Foundation, the nonprofit organization enabling mass innovation through open source, and the American Association of Insurance Services (AAIS), today are announcing the launch of OpenIDL, the Open Insurance Data Link platform and project. The platform will reduce the cost of regulatory reporting for insurance carriers, provide a standardized data repository for analytics and a connection point for third parties to deliver new applications to members.

openIDL brings together some of the world’s largest insurance companies, including The Hanover and Selective Insurance Group, along with technology and service providers Chainyard, KatRisk and MOBI to advance a common distributed ledger platform for sharing information and business processes across the insurance ecosystem.

The first use case for the openIDL network is regulatory reporting in the Property and Casualty (P&C) insurance industry. Initially built with guidance from AAIS, a leading insurance advisory organization and statistical reporting agent, openIDL leverages the trust and integrity inherent in distributed ledger networks. The secure platform guarantees to regulators and other insurance industry participants that data is accurate and complete, implemented by a “P&C Reporting Working Group” within the openIDL network.

“From the very beginning, we recognized the enormous transformative potential for openIDL and distributed ledger technology,” said AAIS CEO Ed Kelly. “We are happy to work with the Linux Foundation to help affect meaningful, positive change for the insurance ecosystem.”

Insurance sectors beyond P&C are expected to be supported by openIDL in the coming months, and use cases will expand beyond regulatory. A “Flood Working Group” has already been assembled to develop use case catastrophe modeling in support of insurers and regulators. openIDL is also collaborating on joint software development activities, building upon Hyperledger Fabric, Hadoop, Node.js, MongoDB and other open technologies to implement a “harmonized data store,” enabling data privacy and accountable operations.

The combined packaging of this software is called an “openIDL Node,” approved and certified by developers working on this project, and every member of the network will be running that software in order to participate in the openIDL network. Additional joint software development for analytics and reporting are also included in the openIDL Linux Foundation network.

“We’re delighted to join openIDL with AAIS and the Linux Foundation. It is strategically important for Selective to be part of industry efforts to innovate our regulatory reporting and use distributed ledgers,” said Michael H. Lanza, executive vice president, general counsel & chief compliance officer of Selective Insurance Group, Inc.

openIDL is a Linux Foundation “Open Governance Network.” These networks comprise nodes run by many different organizations, bound by a shared distributed ledger that provides an industry utility platform for recording transactions and automating business processes. It leverages open source code and community governance for objective transparency and accountability among participants. The network and the node software are built using open source development practices and principles managed by the Linux Foundation in a manner that enterprises can trust.

“AAIS, and the insurance industry in general, are trailblazers in their contribution and collaboration to these technologies,” said Mike Dolan, senior vice president and general manager of Projects at the Linux Foundation. “Open governance networks like openIDL can now accelerate innovation and development of new product and service offerings for insurance providers and their customers. We’re excited to host this work.”

As an open source project, all software source code developed will be licensed under an OSI-approved open source license, and all interface specifications developed will be published under an open specification license. And all technical discussions between participants will take place publicly, further enhancing the ability to expand the network to include other participants. As with an openly accessible network, organizations can develop their own proprietary applications and infrastructure integrations.

Additional Members & Partner Statements


“Chainyard is pleased to join the OpenIDL initiative as an infrastructure member,” said Isaac Kunkel, Chainyard SVP Consulting Services. “Blockchain is a team sport and with the openIDL platform, companies, regulators and vendors are forming an ecosystem to collaborate on common issues for the betterment of the insurance industry. The entire industry will benefit through more accurate data and better decision making.”


“The openIDL platform will serve to increase access to state of the art catastrophe modelling data from KatRisk and others, serving to reduce the friction required to house and run said models. KatRisk expects all parties, from direct insurance entities to regulators, to see an increase in data quality, reliability and ease of access as catastrophe modelling output is effectively streamed across OpenIDL nodes to generate automated reports and add to or create internal business intelligence databases. If catastrophe models are about owning your own risk, then the OpenIDL platform is an effective tool to better understand and manage that risk,” said Brandon Katz, executive vice president, member, KatRisk.


“The Mobility Open Blockchain Initiative (MOBI) is delighted to join with the Linux Foundation, AAIS, and insurance industry leaders in founding OpenIDL.  Data sharing and digital collaboration in business ecosystems via industry consortium ledgers like OpenIDL will drive competitive advantage for many years to come,” said Chris Ballinger, founder and CEO, MOBI.

For more information, please visit

About the Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation’s projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, and more. The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at


About AAIS

Established in 1936, AAIS serves the property casualty insurance industry as the modern, Member-based advisory organization. AAIS delivers custom advisory solutions, including best-in-class forms, rating information and data management capabilities for commercial lines, inland marine, farm & agriculture, commercial auto, personal auto, and homeowners insurers. Its consultative approach, unrivaled customer service and modern technical capabilities underscore a focused commitment to the success of its Members. For more information about AAIS, please visit


The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: Linux is a registered trademark of Linus Torvalds.

Media Contact
Jennifer Cloer for Linux Foundation

The post Linux Foundation Hosts Collaboration Among World’s Largest Insurance Companies appeared first on Linux Foundation.


6 advanced tcpdump formatting options

Tuesday 6th of April 2021 10:57:38 PM

The final article in this three-part tcpdump series covers six more options for formatting tcpdump packet captures.
Read More at Enable Sysadmin


6 options for tcpdump you need to know

Tuesday 6th of April 2021 10:09:29 PM

Six more tcpdump command options to simplify and filter your packet captures.
Read More at Enable Sysadmin


6 tcpdump network traffic filter options

Tuesday 6th of April 2021 08:58:59 PM

The first six of eighteen common tcpdump options that you should use for network troubleshooting and analysis.
Kedar Vijay Kulkarni

The tcpdump utility is used to capture and analyze network traffic. Sysadmins can use it to view real-time traffic or save the output to a file and analyze it later. In this three-part article, I demonstrate several common options you might want to use in your day-to-day operations with tcpdump.

Read More at Enable Sysadmin


Scaling Microservices on Kubernetes

Monday 5th of April 2021 09:00:02 PM

By Ashley Davis

This article was originally published at The New Stack.

Applications built on microservices can be scaled in multiple ways. We can scale them to support larger development teams, and we can also scale them up for better performance. Our application can then have a higher capacity and handle a larger workload.

Using microservices gives us granular control over the performance of our application. We can easily measure the performance of our microservices to find the ones that are performing poorly, are overworked, or are overloaded at times of peak demand. Figure 1 shows how we might use the Kubernetes dashboard to understand CPU and memory usage for our microservices.

Figure 1: Viewing CPU and memory usage for microservices in the Kubernetes dashboard

If we were using a monolith, however, we would have limited control over performance. We could vertically scale the monolith, but that’s basically it.

Horizontally scaling a monolith is much more difficult, and we simply can’t independently scale any of the “parts” of a monolith. This isn’t ideal, because it might only be a small part of the monolith causing the performance problem, yet we would have to vertically scale the entire monolith to fix it. Vertically scaling a large monolith can be an expensive proposition.

Instead, with microservices, we have numerous options for scaling. For instance, we can independently fine-tune the performance of small parts of our system to eliminate bottlenecks and achieve the right mix of performance outcomes.

There are also many advanced ways we could tackle performance issues, but in this post, we’ll overview a handful of relatively simple techniques for scaling our microservices using Kubernetes:

  1. Vertically scaling the entire cluster
  2. Horizontally scaling the entire cluster
  3. Horizontally scaling individual microservices
  4. Elastically scaling the entire cluster
  5. Elastically scaling individual microservices

Scaling often requires risky configuration changes to our cluster. For this reason, you shouldn’t try to make any of these changes directly to a production cluster that your customers or staff are depending on.

Instead, I would suggest that you create a new cluster and use blue-green deployment, or a similar deployment strategy, to buffer your users from risky changes to your infrastructure.

Vertically Scaling the Cluster

As we grow our application, we might come to a point where our cluster generally doesn’t have enough compute, memory or storage to run our application. As we add new microservices (or replicate existing microservices for redundancy), we will eventually max out the nodes in our cluster. (We can monitor this through our cloud vendor or the Kubernetes dashboard.)

At this point, we must increase the total amount of resources available to our cluster. When scaling microservices on a Kubernetes cluster, we can just as easily make use of either vertical or horizontal scaling. Figure 2 shows what vertical scaling looks like for Kubernetes.

Figure 2: Vertically scaling your cluster by increasing the size of the virtual machines (VMs)

We scale up our cluster by increasing the size of the virtual machines (VMs) in the node pool. In this example, we increased the size of three small-sized VMs so that we now have three large-sized VMs. We haven’t changed the number of VMs; we’ve just increased their size — scaling our VMs vertically.

Listing 1 is an extract from the Terraform code that provisions a cluster on Azure; we change the vm_size field from Standard_B2ms to Standard_B4ms. This upgrades the size of each VM in our Kubernetes node pool: instead of two CPUs, each VM now has four, and its memory and hard drive capacity increase as well. If you are deploying to AWS or GCP, you can use the same technique to scale vertically, though those cloud platforms offer their own ranges of VM sizes.

In this particular Terraform example we still have only a single VM in our cluster (node_count is 1), but we have increased that VM’s size. Scaling our cluster is as simple as a code change. This is the power of infrastructure-as-code: the technique where we store our infrastructure configuration as code and make changes to our infrastructure by committing code changes that trigger our continuous delivery (CD) pipeline.

Listing 1: Vertically scaling the cluster with Terraform (an extract)
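
The full extract is not shown in this excerpt. As a rough sketch of what it describes — an AKS cluster defined with the azurerm Terraform provider, using illustrative resource names, region, and credentials handling rather than the book’s actual values — it might look something like this:

  resource "azurerm_kubernetes_cluster" "cluster" {
    name                = "my-cluster"        # hypothetical cluster name
    location            = "westus2"           # hypothetical Azure region
    resource_group_name = "my-resource-group" # hypothetical resource group
    dns_prefix          = "my-cluster"

    default_node_pool {
      name       = "default"
      node_count = 1
      vm_size    = "Standard_B4ms" # changed from Standard_B2ms to scale vertically
    }

    identity {
      type = "SystemAssigned"
    }
  }

Only the vm_size value changes between the two versions of the code; committing that change is what triggers the CD pipeline described above.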

Horizontally Scaling the Cluster

In addition to vertically scaling our cluster, we can also scale it horizontally. Our VMs can remain the same size, but we simply add more VMs.

By adding more VMs to our cluster, we spread the load of our application across more computers. Figure 3 illustrates how we can take our cluster from three VMs up to six. The size of each VM remains the same, but we gain more computing power by having more VMs.

Figure 3: Horizontally scaling your cluster by increasing the number of VMs

Listing 2 shows an extract of Terraform code that adds more VMs to our node pool. Back in listing 1, we had node_count set to 1, but here we have changed it to 6. Note that we reverted the vm_size field to the smaller size of Standard_B2ms. In this example, we increase the number of VMs but not their size, although there is nothing stopping us from increasing both the number and the size of our VMs.

Generally, though, we might prefer horizontal scaling because it is less expensive than vertical scaling. That’s because using many smaller VMs is cheaper than using fewer but bigger and higher-priced VMs.

Listing 2: Horizontally scaling the cluster with Terraform (an extract)
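
Again, the extract itself is not shown here; sketched with the same illustrative names as above, the relevant part of the node pool definition would simply change the count:

  default_node_pool {
    name       = "default"
    node_count = 6               # was 1; more VMs means horizontal scaling
    vm_size    = "Standard_B2ms" # reverted to the smaller VM size
  }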

Horizontally Scaling an Individual Microservice

Assuming our cluster is scaled to an adequate size to host all the microservices with good performance, what do we do when individual microservices become overloaded? (This can be monitored in the Kubernetes dashboard.)

Whenever a microservice becomes a performance bottleneck, we can horizontally scale it to distribute its load over multiple instances. This is shown in figure 4.

Figure 4: Horizontally scaling a microservice by replicating it

We are effectively giving more compute, memory and storage to this particular microservice so that it can handle a bigger workload.

Again, we can use code to make this change. We can do this by setting the replicas field in the specification for our Kubernetes deployment or pod as shown in listing 3.

Listing 3: Horizontally scaling a microservice with Terraform (an extract)
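
The listing is not included in this excerpt. As a sketch, a deployment managed through the Terraform kubernetes provider might set the replicas field like this; the microservice name and container image are hypothetical:

  resource "kubernetes_deployment" "gateway" {
    metadata {
      name = "gateway" # hypothetical microservice
    }

    spec {
      replicas = 3 # run three instances of this microservice

      selector {
        match_labels = {
          pod = "gateway"
        }
      }

      template {
        metadata {
          labels = {
            pod = "gateway"
          }
        }

        spec {
          container {
            name  = "gateway"
            image = "myregistry.azurecr.io/gateway:1" # hypothetical image
          }
        }
      }
    }
  }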

Not only can we scale individual microservices for performance, we can also horizontally scale our microservices for redundancy, creating a more fault-tolerant application. With multiple instances running, others are available to pick up the load whenever any single instance fails, which gives the failed instance time to restart and begin working again.

Elastic Scaling for the Cluster

Moving into more advanced territory, we can now think about elastic scaling. This is a technique where we automatically and dynamically scale our cluster to meet varying levels of demand.

Whenever demand is low, Kubernetes can automatically deallocate resources that aren’t needed. During high-demand periods, new resources are allocated to meet the increased workload. This generates substantial cost savings because, at any given moment, we only pay for the resources necessary to handle our application’s workload at that time.

We can use elastic scaling at the cluster level to automatically grow a cluster that is nearing its resource limits. Yet again, when using Terraform, this is just a code change. Listing 4 shows how we can enable the Kubernetes autoscaler and set the minimum and maximum size of our node pool.

Elastic scaling for the cluster works by default, but there are also many ways we can customize it. Search for “auto_scaler_profile” in the Terraform documentation to learn more.

Listing 4: Enabling elastic scaling for the cluster with Terraform (an extract)
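
The extract is not reproduced here. In a sketch based on the azurerm provider, enabling the cluster autoscaler amounts to a few extra fields on the node pool; the bounds below are illustrative, not the book’s values:

  default_node_pool {
    name                = "default"
    vm_size             = "Standard_B2ms"
    enable_auto_scaling = true # let Kubernetes add and remove VMs as needed
    min_count           = 3    # never shrink below three VMs
    max_count           = 20   # never grow beyond twenty VMs
  }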

Elastic Scaling for an Individual Microservice

We can also enable elastic scaling at the level of an individual microservice.

Listing 5 is a sample of Terraform code that gives microservices a “burstable” capability. The number of replicas for the microservice is expanded and contracted dynamically to meet the varying workload for the microservice (bursts of activity).

The scaling works by default, but can be customized to use other metrics. See the Terraform documentation to learn more. To learn more about pod auto-scaling in Kubernetes, see the Kubernetes docs.

Listing 5: Enabling elastic scaling for a microservice with Terraform
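
The sample itself is not shown in this excerpt. A sketch of a horizontal pod autoscaler declared through the Terraform kubernetes provider, targeting the hypothetical deployment from the earlier sketch, might look like this:

  resource "kubernetes_horizontal_pod_autoscaler" "gateway" {
    metadata {
      name = "gateway"
    }

    spec {
      min_replicas = 1  # contract to a single instance when idle
      max_replicas = 10 # burst up to ten instances under load

      scale_target_ref {
        api_version = "apps/v1"
        kind        = "Deployment"
        name        = "gateway" # the deployment to scale
      }

      target_cpu_utilization_percentage = 50 # add replicas when average CPU exceeds 50%
    }
  }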

About the Book: Bootstrapping Microservices

You can learn more about building applications with microservices in Bootstrapping Microservices.

Bootstrapping Microservices is a practical, project-based guide. It takes you from building a single microservice all the way to running a microservices application in production on Kubernetes, complete with an automated continuous delivery pipeline and infrastructure-as-code to push updates into production.

Other Kubernetes Resources

This post is an extract from Bootstrapping Microservices and has been a short overview of the ways we can scale microservices when running them on Kubernetes.

We specify the configuration for our infrastructure using Terraform. Creating and updating our infrastructure through code in this way is known as infrastructure-as-code, a technique that turns working with infrastructure into a coding task and paved the way for the DevOps revolution.

To learn more about Kubernetes, please see the Kubernetes documentation and the free Introduction to Kubernetes training course.

To learn more about working with Kubernetes using Terraform, please see the Terraform documentation.

About the Author

Ashley Davis is a software craftsman, entrepreneur, and author with over 20 years of experience in software development, from coding to managing teams to founding companies. He is the CTO of Sortal, a product that automatically sorts digital assets through the magic of machine learning.

The post Scaling Microservices on Kubernetes appeared first on Linux Foundation – Training.


Top Enable Sysadmin content of March 2021

Friday 2nd of April 2021 08:42:57 PM

Check out our top articles from a record-breaking month.
Read More at Enable Sysadmin


Linux Foundation Training Scholarships Are Back! Apply by April 30

Thursday 1st of April 2021 09:00:22 PM

Linux Foundation Training (LiFT) Scholarships are back! Since 2011, The Linux Foundation has awarded over 600 scholarships for more than a million dollars in training and certification to deserving individuals around the world who would otherwise be unable to afford it. This is part of our mission to grow the open source community by lowering the barrier to entry and making quality training options accessible to those who want them.

Applications are being accepted through April 30 in 10 different categories:

  • Open Source Newbies
  • Teens-in-Training
  • Women in Open Source
  • Software Developer Do-Gooder
  • SysAdmin Super Star
  • Blockchain Blockbuster
  • Cloud Captain
  • Linux Kernel Guru
  • Networking Notable
  • Web Development Wiz

Whether you are just starting in your open source career, or you are a veteran developer or sysadmin who is looking to gain new skills, if you feel you can benefit from training and/or certification but cannot afford it, you should apply. 

Recipients will receive a Linux Foundation training course and certification exam. All our certification exams, and most training courses, are offered remotely, meaning they can be completed from anywhere. 

Winners will be announced early summer.

Apply today!

The post Linux Foundation Training Scholarships Are Back! Apply by April 30 appeared first on Linux Foundation – Training.


Announcing the Unbreakable Enterprise Kernel Release 6 Update 2 for Oracle Linux

Thursday 1st of April 2021 04:16:57 AM

The Unbreakable Enterprise Kernel (UEK) for Oracle Linux provides the latest open source innovations, key optimizations, and security to cloud and on-premises workloads. It is the Linux kernel that powers Oracle Cloud and Oracle Engineered Systems such as Oracle Exadata Database Machine and Oracle Linux on Intel/AMD as well as Arm platforms. What’s New? The Unbreakable Enterprise Kernel Release 6 Update 2 (UEK R6U2) for Oracle Linux is based on…
Click to Read More at Oracle Linux Kernel Development


The Linux Foundation Hosts Project to Decentralize and Accelerate Drug Development for Rare Genetic Diseases

Wednesday 31st of March 2021 10:00:00 PM

OpenTreatments and RareCamp creator Sanath Kumar Ramesh built the project to address his son’s rare disease; now that work will be available to all in an effort to accelerate treatments

SAN FRANCISCO, Calif., March 31, 2021 – The Linux Foundation, the nonprofit organization enabling mass innovation through open source, and the OpenTreatments Foundation, which enables treatments for rare genetic diseases regardless of rarity and geography, today announced the RareCamp software project will be hosted at the Linux Foundation. The Project will provide the source code and open governance for the OpenTreatments software platform to enable patients to create gene therapies for rare genetic diseases.

The project is supported by individual contributors, as well as collaborations from companies that include Baylor College of Medicine, Castle IRB, Charles River, Columbus Children’s Foundation, GlobalGenes, Odylia Therapeutics, RARE-X and

“OpenTreatments and RareCamp decentralize drug development and empower patients, families and other motivated individuals to create treatments for diseases they care about. We will enable the hand off of these therapies to commercial, governmental and philanthropic entities to ensure patients around the world get access to the therapies for the years to come,” said Sanath Kumar Ramesh, founder of the OpenTreatments Foundation and creator of RareCamp.

There are 400 million patients worldwide affected by more than 7,000 rare diseases, yet treatments for rare genetic diseases are an underserved area. More than 95 percent of rare diseases do not have an approved treatment, and new treatments are estimated to cost more than $1 billion.

“If it’s not yet commercially viable to create treatments for rare diseases, we will take this work into our own hands with open source software, and community collaboration is the way we can do it,” said Ramesh.

The RareCamp open source project provides open governance for the software and scientific community to collaborate and create the software tools to aid in the creation of treatments for rare diseases. The community includes software engineers, UX designers, content writers and scientists who are collaborating now to build the software that will power the OpenTreatments platform. The project uses the open source Javascript framework NextJS for frontend and the Amazon Web Services (AWS) Serverless stack – including AWS Lambda, Amazon API Gateway, and Amazon DynamoDB – to power the backend. The project uses the open source toolchain Serverless Framework to develop and deploy the software. The project is licensed under Apache 2.0 and available for anyone to use.

“OpenTreatments and RareCamp really demonstrate how technology and collaboration can have an impact on human life,” said Brett Andrews, RareCamp contributor and software engineer at Vendia. “Sanath’s vision is fueled with love for his son, technical savvy and the desire to share what he’s learning with others who can benefit. Contributing to this project was an easy decision.”

“OpenTreatments Foundation and RareCamp really represent exactly why open source and collaboration are so powerful – because they allow all of us to do more together than any one of us,” said Mike Dolan, executive vice president and GM of Projects at the Linux Foundation. “We’re honored to be able to support this community and are both confident and inspired about its impact on human lives.”

For more information and to contribute, please visit:

About OpenTreatments Foundation

OpenTreatments Foundation’s mission is to enable treatments for all genetic diseases regardless of rarity and geography. Through the OpenTreatments software platform, patient-led organizations get access to a robust roadmap, people, and infrastructure necessary to build a gene therapy program. The software platform offers project management capabilities to manage the program while reducing time and money necessary for the development. For more information, please visit:

About the Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation’s projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, and more.  The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at


The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: Linux is a registered trademark of Linus Torvalds.

Media Contact

Jennifer Cloer
for the OpenTreatments Foundation
and Linux Foundation

The post The Linux Foundation Hosts Project to Decentralize and Accelerate Drug Development for Rare Genetic Diseases appeared first on Linux Foundation.


More in Tux Machines

Excellent Utilities: duf – disk usage utility

This is a series highlighting best-of-breed utilities. We cover a wide range of utilities, including tools that boost your productivity, help you manage your workflow, and lots more besides. There’s a complete list of the tools in this series in the Summary section. The Command Line Interface (CLI) is a way of interacting with your computer. To harness all the power of Linux, mastering the interface is highly recommended. It’s true the CLI is often perceived as a barrier for users migrating to Linux, particularly if they’ve grown up using GUI software exclusively. While Linux rarely forces anyone to use the CLI, some tasks are better suited to this method of interaction, offering inducements like superior scripting opportunities, remote access, and being far more frugal with a computer’s resources. duf is a simple disk usage utility that offers a more attractive representation than the classic df utility. It’s written in Go. Read more

Sway 1.6.1 Wayland Compositor Released With WLROOTS 0.14

Simon Ser has released Sway 1.6.1 as the newest version of this popular i3-inspired Wayland compositor. Sway 1.6 came back in April with better Flatpak/Snap application integration, smoother move/resize operations, X11 clipboard handling improvements, and many other improvements for this popular "indie" Wayland compositor. Read more

today's howtos

  • Kali Linux Man in the Middle Attack Tutorial for Beginners 2021

    A man-in-the-middle attack is one of the most popular and dangerous attacks on a local area network. With the help of this attack, a hacker can capture data, including usernames and passwords, traveling over the network; he or she can not only capture that data but alter it as well. For example, if you send a letter to your friend, the attacker can intercept it before it reaches its destination, edit it, and then forward the modified letter to your friend. The good news is that this attack can only be performed on a local area network, which means one of the victims must be on the same network as the attacker. You may have heard that using a public Wi-Fi network is not as secure as your home network; a man-in-the-middle attack is the main reason why.

  • How to Install chrome in Ubuntu 20.04 complete Guide

    Google Chrome is the most widely used web browser in the world. It is a fast, simple, easy-to-use, and secure browser built for the modern web. Google Chrome neither comes with Ubuntu by default nor is included in the Ubuntu repositories. Chromium, another open-source web browser, is available in the default Ubuntu repositories instead; but if you don’t want to install Chromium and are looking only for Chrome, this article will help you.

  • How to Install and Use Tilix Terminal Emulator in Linux

    Tilix is an open-source advanced Linux terminal emulator that uses GTK+ 3 and offers a lot of features that are not part of the default terminal that ships with Linux distributions.

  • How to Install NetBeans IDE 12 on Fedora 34/33 – TecAdmin

    NetBeans is an open-source integrated development environment for application development on Windows, Mac, Linux, and Solaris operating systems. It offers excellent debugging capabilities, coding assistance, plugins, and extensions, with multiple out-of-the-box features. NetBeans is widely used by PHP and Java application developers. The official team provides a shell script for easier installation of NetBeans on Linux systems; however, we can also use the Snap package to install the latest NetBeans IDE on Fedora quickly. This tutorial will help you install NetBeans IDE on a Fedora system using the Snap package manager.

  • How to Fix 504 Gateway Timeout in Nginx Server

    I use NGINX a lot. I recently deployed a Node.js web application with NGINX as a reverse proxy server for it. One of the key features of the application is support for data imports using Excel templates. However, it didn’t take long before users uploading bulky files started getting a 504 Gateway Timeout error from NGINX.

  • How To Install Next.js on Ubuntu 20.04 LTS - idroot

    In this tutorial, we will show you how to install Next.js on Ubuntu 20.04 LTS. For those of you who didn’t know, Next.js is a Javascript framework built on React.js, which allows developers to build static and dynamic websites and web applications. This article assumes you have at least basic knowledge of Linux, know how to use the shell, and most importantly, you host your site on your own VPS. The installation is quite simple and assumes you are running in the root account, if not you may need to add ‘sudo‘ to the commands to get root privileges. I will show you the step-by-step installation of the Next.js open-source Javascript framework on Ubuntu 20.04 (Focal Fossa). You can follow the same instructions for Ubuntu 18.04, 16.04, and any other Debian-based distribution like Linux Mint.

  • How To Install AlmaLinux Desktop

    This tutorial explains the installation of AlmaLinux Desktop on a computer. It begins with where to get the OS itself, how to make a bootable medium from it, and how to boot the computer with it, then walks through the installation and partitioning until finished. The final result is a fully functional computer running AlmaLinux with GNOME.

  • Generate Rainbow Tables and Crack Hashes in Kali Linux Complete Guide

    Rcracki_mt is a tool used to crack hashes, and it is found in Kali Linux by default. It uses rainbow tables to crack passwords; other tools are used to generate the rainbow tables themselves. You can download ready-made rainbow tables or, if you prefer, create your own using winrtgen on Windows and rtgen on Kali Linux.

AMD SFH Linux Driver Updated For "Next Gen" Ryzen Laptops

There’s a new chapter in the unfortunately rather sad state of AMD Sensor Fusion Hub (SFH) driver support under Linux. Since 2018, AMD Ryzen laptops have shipped with the Sensor Fusion Hub for various accelerometer/gyroscopic sensor functionality, among other uses and akin to Intel’s Sensor Hub. It wasn’t until January 2020, though, that AMD published their SFH driver for Linux. Read more