
6 tcpdump network traffic filter options

Tuesday 6th of April 2021 08:58:59 PM

The first six of eighteen common tcpdump options that you should use for network troubleshooting and analysis.
By Kedar Vijay Kulkarni

The tcpdump utility is used to capture and analyze network traffic. Sysadmins can use it to view real-time traffic or save the output to a file and analyze it later. In this three-part article, I demonstrate several common options you might want to use in your day-to-day operations with tcpdump.

Topics: Linux, Linux Administration, Command line utilities
Read More at Enable Sysadmin


Scaling Microservices on Kubernetes

Monday 5th of April 2021 09:00:02 PM

By Ashley Davis

This article was originally published at TheNewStack.

Applications built on microservices can be scaled in multiple ways. We can scale them to support development by larger teams, and we can also scale them up for better performance, giving our application higher capacity so that it can handle a larger workload.

Using microservices gives us granular control over the performance of our application. We can easily measure the performance of our microservices to find the ones that are performing poorly, are overworked, or are overloaded at times of peak demand. Figure 1 shows how we might use the Kubernetes dashboard to understand CPU and memory usage for our microservices.

Figure 1: Viewing CPU and memory usage for microservices in the Kubernetes dashboard

If we were using a monolith, however, we would have limited control over performance. We could vertically scale the monolith, but that’s basically it.

Horizontally scaling a monolith is much more difficult, and we simply can’t independently scale any of the “parts” of a monolith. This isn’t ideal, because it might be only a small part of the monolith that causes the performance problem; yet we would have to vertically scale the entire monolith to fix it. Vertically scaling a large monolith can be an expensive proposition.

Instead, with microservices, we have numerous options for scaling. For instance, we can independently fine-tune the performance of small parts of our system to eliminate bottlenecks and achieve the right mix of performance outcomes.

There are also many advanced ways we could tackle performance issues, but in this post we’ll cover a handful of relatively simple techniques for scaling our microservices using Kubernetes:

  1. Vertically scaling the entire cluster
  2. Horizontally scaling the entire cluster
  3. Horizontally scaling individual microservices
  4. Elastically scaling the entire cluster
  5. Elastically scaling individual microservices

Scaling often requires risky configuration changes to our cluster. For this reason, you shouldn’t try to make any of these changes directly to a production cluster that your customers or staff are depending on.

Instead, I would suggest that you create a new cluster and use blue-green deployment, or a similar deployment strategy, to buffer your users from risky changes to your infrastructure.

Vertically Scaling the Cluster

As we grow our application, we might come to a point where our cluster generally doesn’t have enough compute, memory or storage to run our application. As we add new microservices (or replicate existing microservices for redundancy), we will eventually max out the nodes in our cluster. (We can monitor this through our cloud vendor or the Kubernetes dashboard.)

At this point, we must increase the total amount of resources available to our cluster. When scaling microservices on a Kubernetes cluster, we can just as easily make use of either vertical or horizontal scaling. Figure 2 shows what vertical scaling looks like for Kubernetes.

Figure 2: Vertically scaling your cluster by increasing the size of the virtual machines (VMs)

We scale up our cluster by increasing the size of the virtual machines (VMs) in the node pool. In this example, we increased the size of three small-sized VMs so that we now have three large-sized VMs. We haven’t changed the number of VMs; we’ve just increased their size — scaling our VMs vertically.

Listing 1 is an extract from the Terraform code that provisions a cluster on Azure; we change the vm_size field from Standard_B2ms to Standard_B4ms. This upgrades the size of each VM in our Kubernetes node pool: instead of two CPUs per VM, we now have four. As part of this change, the memory and hard drive capacity of each VM also increase. If you are deploying to AWS or GCP, you can use the same technique to vertically scale, though those cloud platforms offer different VM size options.

We still have only a single VM in our cluster, but we have increased its size. In this example, scaling our cluster is as simple as a code change. This is the power of infrastructure-as-code: the technique where we store our infrastructure configuration as code and make changes to our infrastructure by committing code changes that trigger our continuous delivery (CD) pipeline.

Listing 1: Vertically scaling the cluster with Terraform (an extract)
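
The extract itself is not reproduced here, so below is a minimal sketch of what it might look like using the azurerm Terraform provider; the resource name, cluster name, resource group, and region are illustrative placeholders, not the book's actual values:

    # Sketch: an AKS cluster definition whose node pool VM size has been
    # scaled up; only vm_size changes relative to the original configuration.
    resource "azurerm_kubernetes_cluster" "cluster" {
      name                = "example-cluster"   # hypothetical cluster name
      location            = "westus2"           # hypothetical Azure region
      resource_group_name = "example-rg"        # hypothetical resource group
      dns_prefix          = "example"

      default_node_pool {
        name       = "default"
        node_count = 1                 # still a single VM in the node pool
        vm_size    = "Standard_B4ms"   # upgraded from Standard_B2ms: 4 vCPUs per VM
      }

      identity {
        type = "SystemAssigned"
      }
    }

Committing a change like this and letting the CD pipeline run terraform apply is all it takes to resize the VMs, though the node pool VMs may be replaced in the process.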

Horizontally Scaling the Cluster

In addition to vertically scaling our cluster, we can also scale it horizontally. Our VMs can remain the same size; we simply add more of them.

By adding more VMs to our cluster, we spread the load of our application across more computers. Figure 3 illustrates how we can take our cluster from three VMs up to six. The size of each VM remains the same, but we gain more computing power by having more VMs.

Figure 3: Horizontally scaling your cluster by increasing the number of VMs

Listing 2 shows an extract of Terraform code to add more VMs to our node pool. Back in listing 1, we had node_count set to 1, but here we have changed it to 6. Note that we reverted the vm_size field to the smaller size of Standard_B2ms. In this example, we increase the number of VMs, but not their size; although there is nothing stopping us from increasing both the number and the size of our VMs.

Generally, though, we might prefer horizontal scaling because it is less expensive than vertical scaling. That’s because using many smaller VMs is cheaper than using fewer but bigger and higher-priced VMs.

Listing 2: Horizontally scaling the cluster with Terraform (an extract)
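
Again as a hedged sketch rather than the book's exact code, only the default_node_pool block of the cluster definition above needs to change:

    # Sketch: horizontally scaling the node pool from one VM to six,
    # while reverting each VM to the smaller size.
    default_node_pool {
      name       = "default"
      node_count = 6                 # six VMs instead of one
      vm_size    = "Standard_B2ms"   # back to the smaller 2-vCPU size
    }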

Horizontally Scaling an Individual Microservice

Assuming our cluster is scaled to an adequate size to host all the microservices with good performance, what do we do when individual microservices become overloaded? (This can be monitored in the Kubernetes dashboard.)

Whenever a microservice becomes a performance bottleneck, we can horizontally scale it to distribute its load over multiple instances. This is shown in figure 4.

Figure 4: Horizontally scaling a microservice by replicating it

We are effectively giving more compute, memory and storage to this particular microservice so that it can handle a bigger workload.

Again, we can use code to make this change. We can do this by setting the replicas field in the specification for our Kubernetes deployment or pod as shown in listing 3.

Listing 3: Horizontally scaling a microservice with Terraform (an extract)
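
A sketch of how the replicas field might look when the deployment is managed through Terraform's kubernetes provider; the microservice name and container image are hypothetical stand-ins:

    # Sketch: running three replicas of a single microservice.
    resource "kubernetes_deployment" "gateway" {
      metadata {
        name = "gateway"   # hypothetical microservice name
      }

      spec {
        replicas = 3   # three instances share this microservice's load

        selector {
          match_labels = {
            pod = "gateway"
          }
        }

        template {
          metadata {
            labels = {
              pod = "gateway"
            }
          }

          spec {
            container {
              name  = "gateway"
              image = "example.azurecr.io/gateway:1"   # hypothetical image
            }
          }
        }
      }
    }

If you manage deployments with plain Kubernetes YAML instead, the equivalent change is the spec.replicas field of the Deployment resource.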

Not only can we scale individual microservices for performance, we can also horizontally scale our microservices for redundancy, creating a more fault-tolerant application. With multiple instances running, others are available to pick up the load whenever any single instance fails. This allows the failed instance of a microservice to restart and begin working again.

Elastic Scaling for the Cluster

Moving into more advanced territory, we can now think about elastic scaling. This is a technique where we automatically and dynamically scale our cluster to meet varying levels of demand.

Whenever demand is low, Kubernetes can automatically deallocate resources that aren’t needed. During high-demand periods, new resources are allocated to meet the increased workload. This generates substantial cost savings because, at any given moment, we pay only for the resources necessary to handle our application’s workload at that time.

We can use elastic scaling at the cluster level to automatically grow a cluster that is nearing its resource limits. Yet again, when using Terraform, this is just a code change. Listing 4 shows how we can enable the Kubernetes autoscaler and set the minimum and maximum size of our node pool.

Elastic scaling for the cluster works by default, but there are also many ways we can customize it. Search for “auto_scaler_profile” in the Terraform documentation to learn more.

Listing 4: Enabling elastic scaling for the cluster with Terraform (an extract)
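
As a sketch under the same assumptions as the earlier listings, enabling the autoscaler in the azurerm provider comes down to three fields on the node pool; the bounds chosen here are illustrative:

    # Sketch: letting Kubernetes grow and shrink the node pool on demand.
    default_node_pool {
      name                = "default"
      vm_size             = "Standard_B2ms"
      enable_auto_scaling = true   # turn on the cluster autoscaler
      min_count           = 3      # never shrink below three VMs
      max_count           = 20     # never grow beyond twenty VMs
    }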

Elastic Scaling for an Individual Microservice

We can also enable elastic scaling at the level of an individual microservice.

Listing 5 is a sample of Terraform code that gives microservices a “burstable” capability. The number of replicas for the microservice is expanded and contracted dynamically to meet the varying workload for the microservice (bursts of activity).

The scaling works by default, but can be customized to use other metrics. See the Terraform documentation to learn more. To learn more about pod auto-scaling in Kubernetes, see the Kubernetes docs.

Listing 5: Enabling elastic scaling for a microservice with Terraform
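
A sketch of what such a listing might contain, using the kubernetes provider's horizontal pod autoscaler resource; the target name, replica bounds, and CPU threshold are illustrative assumptions, not the book's exact values:

    # Sketch: bursting a microservice between 1 and 10 replicas based on CPU load.
    resource "kubernetes_horizontal_pod_autoscaler" "gateway" {
      metadata {
        name = "gateway"
      }

      spec {
        min_replicas = 1    # contract to a single instance when idle
        max_replicas = 10   # expand up to ten instances under load

        scale_target_ref {
          api_version = "apps/v1"
          kind        = "Deployment"
          name        = "gateway"   # hypothetical deployment from listing 3
        }

        target_cpu_utilization_percentage = 50   # add replicas above 50% average CPU
      }
    }

Average CPU utilization is the default trigger here; as the text notes, other metrics can be configured.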

About the Book: Bootstrapping Microservices

You can learn more about building applications with microservices in Bootstrapping Microservices.

Bootstrapping Microservices is a practical, project-based guide to building applications with microservices. It takes you from building a single microservice all the way to running a microservices application in production on Kubernetes, ending with an automated continuous delivery pipeline and infrastructure-as-code to push updates into production.

Other Kubernetes Resources

This post is an extract from Bootstrapping Microservices and has been a short overview of the ways we can scale microservices when running them on Kubernetes.

We specify the configuration for our infrastructure using Terraform. Creating and updating our infrastructure through code in this way is known as infrastructure-as-code, a technique that turns working with infrastructure into a coding task and that paved the way for the DevOps revolution.

To learn more about Kubernetes, please see the Kubernetes documentation and the free Introduction to Kubernetes training course.

To learn more about working with Kubernetes using Terraform, please see the Terraform documentation.

About the Author: Ashley Davis is a software craftsman, entrepreneur, and author with over 20 years of experience in software development, from coding to managing teams to founding companies. He is the CTO of Sortal, a product that automatically sorts digital assets through the magic of machine learning.


More in Tux Machines

today's leftovers

  • Fedora Community Blog: Friday’s Fedora Facts: 2021-29

    Here’s your weekly Fedora report. Read what happened this week and what’s coming up. Your contributions are welcome (see the end of the post)!

  • Nostalgia and efficiency - MATE Desktop Tour

    It's time we started taking a look at MATE, the last major desktop environment I have never used. All I know about MATE is that it's basically a continuation of the GNOME 2 desktop, which I used for a long time, back when I started using Linux in 2006 on Ubuntu Dapper Drake. Let's see if that is true, and if GNOME 2, or MATE, is still up to the challenge in 2021.

  • Full Circle Weekly News #219
  • System76: Laptops, Servers, and PCs Optimized for Linux and Open-Source Solutions

    Despite a lineage that predates Microsoft Windows and Apple macOS, the Linux operating system has struggled to gain traction in the mass commercial market. That challenge extends not only to the software but also to the dedicated hardware optimized to maximize the benefits of Linux on desktops and laptops. Linux was initially popular with tech enthusiasts, but the commercial PC industry skewed toward Windows and Intel consumer hardware. Part of the challenge for Linux was its early lack of dedicated hardware solutions. The founders of System76 set out to make the Linux ecosystem more inviting by integrating the hardware and software components to provide consumers with easy access to desktops and laptops.

  • Jon McDonald: How System76 paves the way for Linux hardware adoption

    System76 has found its footing in an industry largely geared towards Windows users. Jon McDonald, Contributing Editor for web hosting company HostingAdvice, took to the company’s blog to share a deep dive on System76’s success in the world of Linux hardware. He’s joined by Sam Mondlick, VP of Sales at System76.

  • Space Cowboy, Guardians of Cleveland, and Tony Award winner Ellen Barkin considers a Substack – here is this week’s Top Shelf.

    At Mozilla, we believe part of making the internet we want is celebrating the best of the internet, and that can be as simple as sharing a tweet that made us pause in our feed. Twitter isn’t perfect, but there are individual tweets that come pretty close. Each week in Top Shelf, we share the tweets that made us laugh, think, Pocket them for later, text our friends, and want to continue the internet revolution.

Programming Leftovers

  • with Statement – Linux Hint

    The Python with statement is an advanced feature that helps to implement the context management protocol. When programmers start coding, they often use try/except/finally to manage resources. But there is another way to do this automatically, called the ‘with’ statement. In this article, we will discuss how we can use the ‘with’ statement. We can understand it with a very simple example: whenever we code something to read or write a file, the first thing we have to do is open the file; then we perform the read or write operations on it; and, at last, we close the file so that its resources are not kept busy. In other words, we have to release the resource after we complete our work.

  • Assembly of Python External C++ procedure returning the value of string type

    In the C++ procedure below, we get the final answer as a C++ string; then, via a sequence of operations, we convert the string to a pointer (say c) to const char and finally return the required value, via the pointer to PyObject provided by PyUnicode_FromString(c), to the Python runtime module.

  • How to split string in C++ – Linux Hint

    Working with string data is an essential part of any programming language. Sometimes we need to split string data for programming purposes. A split() function exists in many programming languages to divide a string into multiple parts. There is no built-in split() function in C++ for splitting a string, but multiple ways exist in C++ to do the same task, such as using the getline() function, the strtok() function, or the find() and erase() functions. The uses of these functions to split strings in C++ are explained in this tutorial.

  • Do while in c – Linux Hint

    Loops in C are divided into two parts: the loop body and the control statement. Each loop is unique in its own way. The do-while loop is similar to a while loop in some respects. In this loop, all the statements inside the body are executed first; then, if the condition is true, the loop executes again, until the condition becomes false. In this guide, we will shed some light on examples of do-while loops.

  • C++ class constructors – Linux Hint

    Constructors are like functions. They are used to initialize the values and the objects of a class, and they are invoked when an object of the class is created. A constructor does not directly return any value; to get a value out of a constructor, we need to define a separate function, since a constructor doesn’t have a return type. A constructor differs from a simple function in several ways: it is created when the object is generated, and it is defined in the public segment of the class. In this article, we will discuss all these kinds of constructors with examples.

  • Comparing Strings in Java – Linux Hint

    It is easier to understand the comparison of characters before learning the comparison of string literals. A comparison of strings is given below this introduction. With Java, characters are represented in the computer by integers (whole numbers), so comparing characters means comparing their corresponding numbers. With Java, uppercase A to uppercase Z are the integers 65 to 90: A is 65, B is 66, C is 67, and so on until Z, which is 90. Lowercase ‘a’ to lowercase ‘z’ are the integers 97 to 122: ‘a’ is 97, ‘b’ is 98, ‘c’ is 99, and so on until ‘z’, which is 122. Decimal digits are the integers 48 to 57; that is, ‘0’ is 48, ‘1’ is 49, ‘2’ is 50, and so on until ‘9’, which is 57. So, in this order, digits come first, then uppercase letters, then lowercase letters. Before the digits there is the bell, which is a sounding and not a printable character; its number is 7. There is the tab character of the keyboard, whose number is 9; the newline character (pressing the Enter key), whose number is 10; the space character (pressing the space bar), whose number is 32; the exclamation character, whose number is 33; and the forward-slash character, whose number is 47. ‘(’ has the number 40 and ‘)’ has the number 41.

  • How to use HashMap in Java – Linux Hint

    The column on the left has the keys, and the column on the right has the corresponding values. Note that the fruits kiwi and avocado have the same color, green. Also, the fruits grapes and figs have the same color, purple. At the end of the list, three locations are waiting for their own colors. These locations have no corresponding fruits; in other words, these three locations have no corresponding keys.

Computer scientist showcases world's first RISC-V-based Linux PC coupled with an AMD RX 6700 XT GPU

Back when Nvidia announced its intention to buy Arm, and many industry analysts immediately expressed concern that the ARM architecture might not remain open for long, SiFive came out with a big push for its RISC-V CPU architecture as a truly open source alternative. Similar to the Windows-on-ARM initiative, SiFive promised to deliver a general-use PC platform that would allow software developers to adapt Windows and Linux-based code for RISC-V processors. It took SiFive only a few months to launch its first PC motherboard, called the HiFive Unmatched, which is based on the U7 SoC. However, since the RISC-V community is not that big, development on the PC platform is not exactly fast.

Interestingly enough, Nvidia recently managed to enable RTX 3000 support for ARM-based laptops, and, almost at the same time, a RISC-V enthusiast managed to make an AMD RX 6700 XT work on a Linux-based HiFive Unmatched system. This is essentially a double milestone for the RISC-V community. Hackster.io reports that computer scientist René Rebe first managed to make the HiFive Unmatched run Linux, and then added support for the Radeon RX 6700 XT GPU through the Mesa Gallium 21.1.5 driver. Apparently, the U7 SoC is not properly supported in Linux, but Rebe was able to work his magic and patched the Linux kernel to support both the RISC-V architecture and the RDNA2 GPU in around 10 hours.

The GPU is not fully functional as of yet: it can display the GUI, render 3D graphics in accelerated mode, and decode hi-res videos, but it cannot run games. Nevertheless, this is still an impressive achievement, especially as it was not facilitated by the SiFive team itself.

Read more

today's howtos

  • Evgeni Golov: It's not *always* DNS

    Two weeks ago, I had the pleasure of playing with Foreman's Kerberos integration and ironing out a few long-standing kinks. It all started with a user reminding us that Kerberos authentication is broken when Foreman is deployed on CentOS 8, as mod_auth_kerb is no longer available. Given that mod_auth_kerb hasn't seen a release since 2013, this is quite understandable. Thankfully, there is a replacement available: mod_auth_gssapi. Even better, it's available in CentOS 7 and 8, and in Debian and Ubuntu too! So I quickly whipped up a PR to completely replace mod_auth_kerb with mod_auth_gssapi in our installer and successfully tested that it still works on CentOS 7 (even when upgrading from a mod_auth_kerb installation) and CentOS 8.

  • [Older] How To Install MariaDB 10.5 on Ubuntu 20.04

    MariaDB is one of the most popular open-source databases, next to its originator MySQL. The original creators of MySQL developed MariaDB in response to fears that MySQL would suddenly become a paid service after Oracle acquired it in 2010. Given Oracle's history of similar tactics, the developers behind MariaDB have promised to keep it open source and free from the fears of what happened to MySQL.

  • Save a dict to a file – Linux Hint

    The dictionary is a very well-known object in Python. It is a collection of keys and values. The key of a dict must be immutable; it can be an integer, float, or string, but neither a list nor a dict itself can be a key. Sometimes we need to save dict objects to a file, so we are going to look at different methods to save a dict object in a file.

  • Introduction to RPM/YUM Package Management – Linux Hint

    Red Hat Package Manager (RPM) is the default open-source package management utility, built under the General Public License (GPL). The package management system is used by all Red Hat-based Linux derivatives, such as Fedora, RHEL, and CentOS. RPM provides system administrators with the five basic modes of package management operations: installing, updating, removing, querying, and verifying packages. Moreover, Yellowdog Updater, Modified (YUM) is to RPM what the APT package management tool is to the dpkg utility in the Debian packaging system: it resolves RPM's package dependency issues. In this guide, we will briefly introduce YUM and then give an in-depth introduction and background to the RPM packaging system for Red Hat Linux distributions.

  • What is ngrep and How to Use It? – Linux Hint

    Although tshark and tcpdump are the most popular packet-sniffing tools, digging down to the level of the bits and bytes of the traffic, ngrep is another command-line *nix utility that analyzes network packets and searches them for a given regex pattern. The utility uses the pcap and GNU libraries to perform regex string searches. ngrep stands for "network grep" and is similar to the regular grep utility; the only difference is that ngrep parses text in network packets by using regular or hexadecimal expressions. In this article, we learn about a command-line, feature-rich utility known as ngrep that is handy for quick PCAP analysis and packet dumping.

  • Kubectl Port Forward – Linux Hint

    Forwarding a port using kubectl is relatively easy, although it operates only on individual pods and not on services. Port forwarding is a valuable tool for debugging different applications and deployments in a Kubernetes cluster. For example, if one of your pods is acting strangely, you will need to connect to it directly; and in a microservices setting, you can utilize port forwarding to communicate with a back-end service that would otherwise be hidden. The kubelet delivers all information entered into the stream to the destination pod and port.

    When designing Kubernetes applications, it's common to want immediate access to a service from the surrounding environment without exposing it via a load balancer or an ingress resource. We can use kubectl to create a proxy that forwards all traffic from a local port to a port linked to our chosen pod. The kubectl port-forward instruction can be utilized to accomplish this. kubectl port-forward sends a request to the Kubernetes API, which implies that the machine running it requires access to the API server, and all communication is tunneled through a single HTTP connection. By forwarding one (or more) local ports to a pod, we can access container content with this command. This command performs effectively when you are required to debug a malfunctioning pod. We are going to talk through a step-by-step method to check port forwarding using kubectl.

  • Kubectl Get Events To Sort By Time – Linux Hint

    Kubernetes events are generated automatically whenever other resources have changes, errors, or other notifications that should be broadcast to the system. There is not much documentation on events, but they are a great help when troubleshooting problems in your Kubernetes cluster. When compared to many other Kubernetes objects, events see a lot of activity. Events have a one-hour lifetime by default, and a distinct etcd cluster is advised for scalability. On their own, combined with the inability to filter or aggregate them, events may not be particularly valuable unless they are transferred to external systems.

    Kubernetes events are entities that tell you what's going on inside a cluster, such as the scheduler's decisions and why some pods were evicted from a node. The API server allows all key components and extensions (operators) to generate events. When something is not operating as planned, the first place to look is events and network operations. If the failure is the outcome of earlier events, or when performing post-mortem analysis, keeping them for a longer duration is critical. Kubernetes generates events every time any of the resources it manages changes; the entity that initiated the event, the kind of event, and the reason are generally included. Now, to sort events by time, follow the steps described in this tutorial.

  • Introduction to Manjaro Package Manager Pacman – Linux Hint

    The package management systems of Linux distributions have come a long way. The timely practice of software management, by creating independent repositories, application packages, and installation tools, made software accessible across environments. Like all other Linux distributions, Manjaro has a default package manager; as an Arch Linux derivative, it uses Arch's. In this article, we learn to use the command-line package manager Pacman to add, remove, and update software packages from the distribution or user-built repositories. The tutorial also covers how to query details of installed packages on the system.