Fedora Magazine

Guides, information, and news about the Fedora operating system for users, developers, system administrators, and community members.

Web of Trust, Part 2: Tutorial

Monday 19th of October 2020 08:00:00 AM

The previous article looked at how the Web of Trust works in concept, and how the Web of Trust is implemented at Fedora. In this article, you’ll learn how to do it yourself. The power of this system lies in everybody being able to validate the actions of others—if you know how to validate somebody’s work, you’re contributing to the strength of our shared security.

Choosing a project

Remmina is a remote desktop client written in GTK+. It aims to be useful for system administrators and travelers who need to work with lots of remote computers in front of either large monitors or tiny netbooks. In the current age, where many people must work remotely or at least manage remote servers, the security of a program like Remmina is critical. Even if you do not use it yourself, you can contribute to the Web of Trust by checking it for others.

The question is: how do you know that a given version of Remmina is good, and that the original developer—or distribution server—has not been compromised?

For this tutorial, you’ll use Flatpak and the Flathub repository. Flatpak is intentionally well-suited for making verifiable rebuilds, which is one of the tenets of the Web of Trust. It’s easier to work with since it doesn’t require users to download independent development packages. Flatpak also uses techniques to prevent in‑flight tampering, using hashes to validate its read‑only state. As far as the Web of Trust is concerned, Flatpak is the future.

This guide uses Remmina, but the steps generally apply to any application you use. They are also not exclusive to Flatpak, and the general approach applies to Fedora’s repositories as well. In fact, if you’re currently reading this article on Debian or Arch, you can still follow the instructions. If you want to follow along using traditional RPM repositories, make sure to check out this article.

Installing and checking

To install Remmina, use the Software Center or run the following from a terminal:

flatpak install flathub org.remmina.Remmina -y

After installation, you’ll find the files in:

/var/lib/flatpak/app/org.remmina.Remmina/current/active/files/

Open a terminal here and find the following directories using ls -la:

total 44
drwxr-xr-x.  2 root root  4096 Jan  1  1970 bin
drwxr-xr-x.  3 root root  4096 Jan  1  1970 etc
drwxr-xr-x.  8 root root  4096 Jan  1  1970 lib
drwxr-xr-x.  2 root root  4096 Jan  1  1970 libexec
-rw-r--r--.  2 root root 18644 Aug 25 14:37 manifest.json
drwxr-xr-x.  2 root root  4096 Jan  1  1970 sbin
drwxr-xr-x. 15 root root  4096 Jan  1  1970 share

Getting the hashes

In the bin directory you will find the main binaries of the application, and in lib you will find all of the dependencies that Remmina uses. Now calculate hashes for everything in ./bin/:

sha256sum ./bin/*

This will give you a list of numbers: checksums. Copy them to a temporary file, as this is the current version of Remmina that Flathub is distributing. These numbers have something special: only an exact copy of Remmina can give you the same numbers. Any change in the code—no matter how minor—will produce different numbers.
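For example, a minimal way to save them for later comparison (the file name here is just a suggestion, not something this guide prescribes):

sha256sum ./bin/* > ~/flathub-remmina-hashes.txt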

Like Fedora’s Koji and Bodhi build and update services, Flathub has all its build servers in plain view. In the case of Flathub, look at Buildbot to see who is responsible for the official binaries of a package. Here you will find all of the logs, including all the failed builds and their paper trail.

Getting the source

The main Flathub project is hosted on GitHub, where the exact compile instructions (“manifest” in Flatpak terms) are visible for all to see. Open a new terminal in your Home folder. Clone the instructions, and possible submodules, using one command:

git clone --recurse-submodules https://github.com/flathub/org.remmina.Remmina

Developer tools

Start off by installing the Flatpak Builder:

sudo dnf install flatpak-builder

After that, you’ll need to get the right SDK to rebuild Remmina. The manifest shows which SDK the current build uses:

"runtime": "org.gnome.Platform", "runtime-version": "3.38", "sdk": "org.gnome.Sdk", "command": "remmina",

This indicates that you need the GNOME SDK, which you can install with:

flatpak install org.gnome.Sdk//3.38

This provides the latest versions of the Free Desktop and GNOME SDKs. There are additional SDKs for other options, but those are beyond the scope of this tutorial.

Generating your own hashes

Now that everything is set up, compile your version of Remmina by running:

flatpak-builder build-dir org.remmina.Remmina.json --force-clean

After this, your terminal will print a lot of text, your fans will start spinning, and you’re compiling Remmina. If things do not go so smoothly, refer to the Flatpak Documentation; troubleshooting is beyond the scope of this tutorial.

Once complete, you should have the directory ./build-dir/files/, which should contain the same layout as above. Now the moment of truth: it’s time to generate the hashes for the built project:

sha256sum ./bin/*

You should get exactly the same numbers. This proves that the version on Flathub is indeed the version that the Remmina developers and maintainers intended for you to run. This is great, because this shows that Flathub has not been compromised. The web of trust is strong, and you just made it a bit better.
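If you saved the Flathub checksums to a file earlier (assumed here to be ~/flathub-remmina-hashes.txt), sha256sum can do the comparison for you; run it from inside ./build-dir/files/ and any line reported as FAILED indicates a mismatch:

sha256sum -c ~/flathub-remmina-hashes.txt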

Going deeper

But what about the ./lib/ directory? And what version of Remmina did you actually compile? This is where the Web of Trust starts to branch. First, you can also double-check the hashes of the ./lib/ directory. Repeat the sha256sum command using a different directory.
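Because ./lib/ contains subdirectories, a find-based variant is convenient. As a sketch, the following hashes every file under ./lib/ and sorts the output by file name, so that runs in the Flathub directory and in your build directory can be compared line by line:

find ./lib -type f -exec sha256sum {} + | sort -k 2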

But what version of Remmina did you compile? Well, that’s in the Manifest. In the text file you’ll find (usually at the bottom) the git repository and branch that you just used. At the time of this writing, that is:

"type": "git", "url": "https://gitlab.com/Remmina/Remmina.git", "tag": "v1.4.8", "commit": "7ebc497062de66881b71bbe7f54dabfda0129ac2"

Here, you can decide to look at the Remmina code itself:

git clone --recurse-submodules https://gitlab.com/Remmina/Remmina.git
cd ./Remmina
git checkout tags/v1.4.8

The last two commands are important, since they ensure that you are looking at the right version of Remmina. Make sure you use the tag that corresponds to the one in the manifest file. Now you can see everything that you just built.

What if…?

The question on some minds is: what if the hashes don’t match? Quoting a famous novel: “Don’t Panic.” There are multiple legitimate reasons as to why the hashes do not match.

It might be that you are not looking at the same version. If you followed this guide to a T, it should give matching results, but minor errors will cause vastly different results. Repeat the process, and ask for help if you’re unsure if you’re making errors. Perhaps Remmina is in the process of updating.

But if that still doesn’t justify the mismatch in hashes, go to the maintainers of Remmina on Flathub and open an issue. Assume good intentions, but you might be onto something that isn’t totally right.

The most obvious upstream issue is that Remmina does not properly support reproducible builds yet. The code needs to be written in such a way that repeating the same build twice gives the same result. For developers, there is an entire guide on how to do that. If this is the case, there should be an issue on the upstream bug tracker; if there is not, make sure you create one, explaining your steps and the impact.

If all else fails, and you’ve informed upstream about the discrepancies and they too don’t know what is happening, then it’s time to send an email to the administrators of Flathub and the developer in question.

Conclusion

At this point, you’ve gone through the entire process of validating a single piece of a bigger picture. Here, you can branch off in different directions:

  • Try another Flatpak application you like or use regularly
  • Try the RPM version of Remmina
  • Do a deep dive into the C code of Remmina
  • Relax for a day, knowing that the Web of Trust is a collective effort

In the grand scheme of things, we can all carry a small part of responsibility in the Web of Trust. By taking free/libre open source software (FLOSS) concepts and applying them in the real world, you can protect yourself and others. Last but not least, by understanding how the Web of Trust works you can see how FLOSS software provides unique protections.

systemd-resolved: introduction to split DNS

Friday 16th of October 2020 08:00:00 AM

Fedora 33 switches the default DNS resolver to systemd-resolved. In simple terms, this means that systemd-resolved will run as a daemon. All programs wanting to translate domain names to network addresses will talk to it. This replaces the current default lookup mechanism where each program individually talks to remote servers and there is no shared cache.

If necessary, systemd-resolved will contact remote DNS servers. systemd-resolved is a “stub resolver”—it doesn’t resolve all names itself (by starting at the root of the DNS hierarchy and going down label by label), but forwards the queries to a remote server.

A single daemon handling name lookups provides significant benefits. The daemon caches answers, which speeds answers for frequently used names. The daemon remembers which servers are non-responsive, while previously each program would have to figure this out on its own after a timeout. Individual programs only talk to the daemon over a local transport and are more isolated from the network. The daemon supports fancy rules which specify which name servers should be used for which domain names—in fact, the rest of this article is about those rules.

Split DNS

Consider the scenario of a machine that is connected to two semi-trusted networks (wifi and ethernet), and also has a VPN connection to your employer. Each of those three connections has its own network interface in the kernel. And there are multiple name servers: one from a DHCP lease from the wifi hotspot, two specified by the VPN and controlled by your employer, plus some additional manually-configured name servers. Routing is the process of deciding which servers to ask for a given domain name. Do not confuse this with the process of deciding where to send network packets, which is also called routing.

The network interface is king in systemd-resolved. systemd-resolved first picks one or more interfaces which are appropriate for a given name, and then queries one of the name servers attached to that interface. This is known as “split DNS”.

There are two flavors of domains attached to a network interface: routing domains and search domains. They both specify that the given domain and any subdomains are appropriate for that interface. Search domains have the additional function that single-label names are suffixed with that search domain before being resolved. For example, a lookup for “server” is treated as a lookup for “server.example.com” if the search domain is “example.com.” In systemd-resolved config files, routing domains are prefixed with the tilde (~) character.

Specific example

Now consider a specific example: your VPN interface tun0 has a search domain private.company.com and a routing domain ~company.com. If you ask for mail.private.company.com, it is matched by both domains, so this name would be routed to tun0.

A request for www.company.com is matched by the second domain and would also go to tun0. If you ask for www, (in other words, if you specify a single-label name without any dots), the difference between routing and search domains comes into play. systemd-resolved attempts to combine the single-label name with the search domain and tries to resolve www.private.company.com on tun0.

If you have multiple interfaces with search domains, single-label names are suffixed with all search domains and resolved in parallel. For multi-label names, no suffixing is done; search and routing domains are used to route the name to the appropriate interface. The longest match wins. When there are multiple matches of the same length on different interfaces, they are resolved in parallel.

A special case is when an interface has a routing domain ~. (a tilde for a routing domain and a dot for the root DNS label). Such an interface always matches any names, but with the shortest possible length. Any interface with a matching search or routing domain has higher priority, but the interface with ~. is used for all other names. Finally, if no routing or search domains matched, the name is routed to all interfaces that have at least one name server attached.

Lookup routing in systemd-resolved Domain routing

This seems fairly complex, partially because of the historic names which are confusing. In actual practice it’s not as complicated as it seems.

To introspect a running system, use the resolvectl domain command. For example:

$ resolvectl domain
Global:
Link 4 (wlp4s0): ~.
Link 18 (hub0):
Link 26 (tun0): redhat.com

You can see that www would resolve as www.redhat.com. over tun0. Anything ending with redhat.com resolves over tun0. Everything else would resolve over wlp4s0 (the wireless interface). In particular, a multi-label name like www.foobar would resolve over wlp4s0, and most likely fail because there is no foobar top-level domain (yet).
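You can also check how a specific name is routed on your own system by resolving it with resolvectl, which prints the answer along with the link it was resolved on (the name below is just an example):

resolvectl query www.redhat.com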

Server routing

Now that you know which interface or interfaces should be queried, the server or servers to query are easy to determine. Each interface has one or more name servers configured. systemd-resolved will send queries to the first of those. If the server is offline and the request times out or if the server sends a syntactically-invalid answer (which shouldn’t happen with “normal” queries, but often becomes an issue when DNSSEC is enabled), systemd-resolved switches to the next server on the list. It will use that second server as long as it keeps responding. All servers are used in a round-robin rotation.

To introspect a running system, use the resolvectl dns command:

$ resolvectl dns
Global:
Link 4 (wlp4s0): 192.168.1.1 8.8.4.4 8.8.8.8
Link 18 (hub0):
Link 26 (tun0): 10.45.248.15 10.38.5.26

When combined with the previous listing, you know that for www.redhat.com, systemd-resolved will query 10.45.248.15, and—if it doesn’t respond—10.38.5.26. For www.google.com, systemd-resolved will query 192.168.1.1 or the two Google servers 8.8.4.4 and 8.8.8.8.

Differences from nss-dns

Before going into further detail, you may ask how this differs from the previous default implementation (nss-dns). With nss-dns there is just one global list of up to three name servers and a global list of search domains (specified as nameserver and search in /etc/resolv.conf).
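For reference, a traditional /etc/resolv.conf of that kind might look like this (the addresses and domain are only illustrative):

nameserver 192.168.1.1
nameserver 8.8.8.8
search example.com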

Each name to query is sent to the first name server. If it doesn’t respond, the same query is sent to the second name server, and so on. systemd-resolved implements split-DNS and remembers which servers are currently considered active.

For single-label names, the query is performed with each of the search domains suffixed. This is the same with systemd-resolved. For multi-label names, a query for the unsuffixed name is performed first, and if that fails, a query for the name suffixed by each of the search domains in turn is performed. systemd-resolved doesn’t do that last step; it only suffixes single-label names.

A second difference is that with nss-dns, this module is loaded into each process. The process itself communicates with remote servers and implements the full DNS stack internally. With systemd-resolved, the nss-resolve module is loaded into the process, but it only forwards the query to systemd-resolved over a local transport (D-Bus) and doesn’t do any work itself. The systemd-resolved process is heavily sandboxed using systemd service features.

The third difference is that with systemd-resolved all state is dynamic and can be queried and updated using D-Bus calls. This allows very strong integration with other daemons or graphical interfaces.

Configuring systemd-resolved

So far, this article talked about servers and the routing of domains without explaining how to configure them. systemd-resolved has a configuration file (/etc/systemd/resolved.conf) where you specify name servers with DNS= and routing or search domains with Domains= (routing domains with ~, search domains without). This corresponds to the Global: lists in the two listings above.

In this article’s examples, both lists are empty. Most of the time configuration is attached to specific interfaces, and “global” configuration is not very useful. Interfaces come and go and it isn’t terribly smart to contact servers on an interface which is down. As soon as you create a VPN connection, you want to use the servers configured for that connection to resolve names, and as soon as the connection goes down, you want to stop.
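That said, if you do want a global fallback, a minimal /etc/systemd/resolved.conf might look like the following (the server addresses and domain here are placeholders, not settings used in this article’s listings):

[Resolve]
DNS=192.168.1.1 8.8.8.8
Domains=~example.com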

How does then systemd-resolved acquire the configuration for each interface? This happens dynamically, with the network management service pushing this configuration over D-Bus into systemd-resolved. The default in Fedora is NetworkManager and it has very good integration with systemd-resolved. Alternatives like systemd’s own systemd-networkd implement similar functionality. But the interface is open and other programs can do the appropriate D-Bus calls.

Alternatively, resolvectl can be used for this (it is just a wrapper around the D-Bus API). Finally, resolvconf provides similar functionality in a form compatible with a tool in Debian with the same name.
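As a sketch of what that looks like when done by hand (normally NetworkManager pushes these values for you; the interface and values below echo this article’s earlier examples), resolvectl can set per-link servers and a routing domain directly:

sudo resolvectl dns tun0 10.45.248.15 10.38.5.26
sudo resolvectl domain tun0 '~company.com'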

Scenario: Local connection more trusted than VPN

The important thing is that, in the common scenario, systemd-resolved follows the configuration specified by other tools, in particular NetworkManager. So to understand how systemd-resolved routes names, you need to see what NetworkManager tells it to do. Normally NetworkManager will tell systemd-resolved to use the name servers and search domains received in a DHCP lease on some interface. For example, consider the source of the configuration for the two listings shown above:

There are two connections: “Parkinson” wifi and “Brno (BRQ)” VPN. In the first panel DNS:Automatic is enabled, which means that the DNS server received as part of the DHCP lease (192.168.1.1) is passed to systemd-resolved. Additionally, 8.8.4.4 and 8.8.8.8 are listed as alternative name servers. This configuration is useful if you want to resolve the names of other machines in the local network, which 192.168.1.1 provides. Unfortunately the hotspot DNS server occasionally gets stuck, and the other two servers provide backup when that happens.

The second panel is similar, but doesn’t provide any special configuration. NetworkManager combines routing domains for a given connection from DHCP, SLAAC RDNSS, VPN, and manual configuration, and forwards this to systemd-resolved. This is the source of the search domain redhat.com in the listing above.

There is an important difference between the two interfaces though: in the second panel, “Use this connection only for resources on its network” is checked. This tells NetworkManager to tell systemd-resolved to only use this interface for names under the search domain received as part of the lease (Link 26 (tun0): redhat.com in the first listing above). In the first panel, this checkbox is unchecked, and NetworkManager tells systemd-resolved to use this interface for all other names (Link 4 (wlp4s0): ~.). This effectively means that the wireless connection is more trusted.

Scenario: VPN more trusted than local network

In a different scenario, a VPN would be more trusted than the local network and the domain routing configuration reversed. If a VPN without “Use this connection only for resources on its network” is active, NetworkManager tells systemd-resolved to attach the default routing domain to this interface. After unchecking the checkbox and restarting the VPN connection:

$ resolvectl domain
Global:
Link 4 (wlp4s0):
Link 18 (hub0):
Link 28 (tun0): ~. redhat.com

$ resolvectl dns
Global:
Link 4 (wlp4s0):
Link 18 (hub0):
Link 28 (tun0): 10.45.248.15 10.38.5.26

Now all domain names are routed to the VPN. The network management daemon controls systemd-resolved and the user controls the network management daemon.

Additional systemd-resolved functionality

As mentioned before, systemd-resolved provides a common name lookup mechanism for all programs running on the machine. Right now the effect is limited: shared resolver and cache and split DNS (the lookup routing logic described above). systemd-resolved provides additional resolution mechanisms beyond the traditional unicast DNS. These are the local resolution protocols MulticastDNS and LLMNR, and an additional remote transport DNS-over-TLS.

Fedora 33 does not enable MulticastDNS and DNS-over-TLS in systemd-resolved. MulticastDNS is implemented by nss-mdns4_minimal and Avahi. Future Fedora releases may enable these as the upstream project improves support.

Implementing this all in a single daemon which has runtime state allows smart behaviour: DNS-over-TLS may be enabled in opportunistic mode, with automatic fallback to classic DNS if the remote server does not support it. Without the daemon which can contain complex logic and runtime state this would be much harder. When enabled, those additional features will apply to all programs on the system.
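If you want to experiment with this yourself, opportunistic DNS-over-TLS is a single setting in /etc/systemd/resolved.conf (an optional tweak on your part, not a Fedora 33 default):

[Resolve]
DNSOverTLS=opportunistic

Then restart the service with sudo systemctl restart systemd-resolved for the change to take effect.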

There is more to systemd-resolved: in particular LLMNR and DNSSEC, which only received brief mention here. A future article will explore those subjects.

Web of Trust, Part 1: Concept

Wednesday 14th of October 2020 08:00:00 AM

Every day we rely on technologies that nobody can fully understand. Since well before the industrial revolution, complex and challenging tasks have required an approach that breaks the work into smaller tasks, each resulting in specialized knowledge used in some parts of our lives while leaving other parts to trust in skills that others have learned. This shared knowledge approach also applies to software. Even the most avid readers of this magazine will likely not compile and validate every piece of code they run. This is simply because the world of computers is itself too big for one person to grasp.

Still, even though it is nearly impossible to understand everything that happens within your PC when you are using it, that does not leave you blind and unprotected. FLOSS software shares trust, giving protection to all users, even if individual users can’t grasp all parts in the system. This multi-part article will discuss how this ‘Web of Trust’ works and how you can get involved.

But first we’ll have to take a step back and discuss the basic concepts, before we can delve into the details and the web. Also, a note before we start, security is not just about viruses and malware. Security also includes your privacy, your economic stability and your technological independence.

One-Way System

By their design, computers can only work and function at the most rudimentary level of logic: true or false, AND or OR. This boolean logic is not readily accessible to humans, so we do something special: we write applications in code that we can (reasonably) comprehend (human-readable code). Once completed, we turn this human-readable code into a form that the computer can comprehend (machine code).

The step of conversion is called compilation and/or building, and it’s a one-way process. Compiled code (machine code) is not really understandable by humans, and it takes special tools to study in detail. You can understand small chunks, but on the whole, an entire application becomes a black box.

This subtle difference shifts power; power, in this case, being the influence of one person over another. The person who has written the human-readable version of the application and then releases it as compiled code for others to use knows everything about what the code does, while the end user knows only a very limited scope. When using software in compiled form, it is impossible to know for certain what an application is intended to do, unless the original human-readable code can be viewed.

The Nature of Power

Spearheaded by Richard Stallman, this shift of power became a point of concern. This discussion started in the 1980s, for this was the time that computers left the world of academia and research, and entered the world of commerce and consumers. Suddenly, that power became a source of control and exploitation.

One way to combat this imbalance of power was the concept of FLOSS software. FLOSS software is built on the 4-Freedoms, which give you a wide array of other ‘affiliated’ rights and guarantees. In essence, FLOSS software uses copyright licensing as a form of moral contract that forbids software developers from leveraging this one-way power against their users. The principal way of doing this is with the GNU General Public Licenses, which Richard Stallman created and has since been promoting.

One of those guarantees is that you can see the code that should be running on your device. When you get a device using FLOSS software, the manufacturer should provide you the code that the device is using, as well as all the instructions you need to compile that code yourself. Then you can replace the code on the device with the version you compiled yourself. Even better, if you compare the version you built with the version on the device, you can see whether the device manufacturer tried to cheat you or other customers.

This is where the Web of Trust comes back into the picture. The Web of Trust implies that even if the vast majority of people can’t validate the workings of a device, others can do so on their behalf. Journalists, security analysts, and hobbyists can do the work that others might be unable to do. And if they find something, they have the power to share their findings.

Security by Blind Trust

This is, of course, only the case if the application and all components underneath it are FLOSS. Proprietary software, or even software which is merely Open Source, has compiled versions that nobody can recreate and validate. Thus, you can never truly know if that software is secure. It might have a backdoor, it might sell your personal data, or it might be pushing a closed ecosystem to create vendor lock-in. With closed-source software, your security is only as good as the trustworthiness of the company making it.

For companies and developers, this actually creates another snare. While you might still care about your users and their security, you become a liability: if a criminal can get to your official builds or supply chain, then there is no way for anybody to discover that afterwards. An increasing number of attacks do not target users directly, but instead try to get in by exploiting the trust that companies and developers have carefully grown.

You should also not underestimate pressure from outside: governments can ask you to ignore a vulnerability, or they might even demand cooperation. Investment firms or shareholders may also insist that you create vendor lock-in for future use. The blind trust that you demand of your users can be used against you.

Security by a Web of Trust

If you are a user, FLOSS software is good because others can warn you when they find suspicious elements. You can use any FLOSS device with minimal economic risk, and there are many FLOSS developers who care for your privacy. Even if the details are beyond you, there are rules in place to facilitate trust.

If you are a tinkerer, FLOSS is good because with a little extra work, you can check the promises of others. You can warn people when something goes wrong, and you can validate the warnings of others. You’re also able to check individual parts in a larger picture. The libraries used by FLOSS applications are also open for review: it’s “trust all the way down”.

For companies and developers, FLOSS is also a great reassurance that the trust placed in you can’t be easily subverted. If malicious actors wish to attack your users, then any irregularity can quickly be spotted. Last but not least, since you also stand to defend your customers’ economic well-being and privacy, you can use that as an important selling point to customers who care about their own security.

Fedora’s case

Fedora embraces the concept of FLOSS and stands strong to defend it. There are comprehensive legal guidelines, and Fedora’s principles directly reference the 4-Freedoms: Freedom, Friends, Features, and First.

To this end, entire systems have been set up to facilitate this kind of security. Fedora works completely in the open, and any user can check the official servers. Koji is the name of the Fedora build system, and you can see every application and its build logs there. For added security, there is also Bodhi, which orchestrates the deployment of an application. Multiple people must approve it before the application can become available.

This creates the Web of Trust on which you can rely. Every package in the repository goes through the same process, and at every point somebody can intervene. There are also escalation systems in place to report issues, so that problems can be tackled quickly when they occur. Individual contributors also know that they can be reviewed at any time, which by itself is already enough of a precaution to dissuade mischievous thoughts.

You don’t have to trust Fedora implicitly; you get something better: trust in users like you.

Recover your files from Btrfs snapshots

Monday 5th of October 2020 08:00:00 AM

As you have seen in a previous article, Btrfs snapshots are a convenient and fast way to make backups. Please note that these articles do not suggest that you avoid backup software or well-tested backup plans. Their goals are to show a great feature of this file system, snapshots, and to inspire curiosity and invite you to explore, experiment and deepen the subject. Read on for more about how to recover your files from Btrfs snapshots.

A subvolume for your project

Let’s assume that you want to save the documents related to a project inside the directory $HOME/Documents/myproject.

As you have seen, a Btrfs subvolume, as well as a snapshot, looks like a normal directory. Why not use a Btrfs subvolume for your project, in order to take advantage of snapshots? To create the subvolume, use this command:

btrfs subvolume create $HOME/Documents/myproject

You can create a hidden directory where to arrange your snapshots:

mkdir $HOME/.snapshots

As you can see, in this case there’s no need to use sudo. However, sudo is still needed to list the subvolumes, and to use the send and receive commands.
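For example, to confirm the new subvolume exists (assuming your home directory lives on a Btrfs filesystem mounted at /home):

sudo btrfs subvolume list /home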

Now you can start writing your documents. Each day (or each hour, or even minute) you can take a snapshot just before you start to work:

btrfs subvolume snapshot -r $HOME/Documents/myproject $HOME/.snapshots/myproject-day1

For better security and consistency, and especially if you need to send the snapshot to an external drive as shown in the previous article, remember to make the snapshot read-only using the -r flag.

Note that in this case, a snapshot of the /home subvolume will not snapshot the $HOME/Documents/myproject subvolume.

How to recover a file or a directory

In this example let’s assume a classic error: you deleted a file by mistake. You can recover it from the most recent snapshot, or recover an older version of the file from an older snapshot. Do you remember that a snapshot appears like a regular directory? You can simply use the cp command to restore the deleted file:

cp $HOME/.snapshots/myproject-day1/filename.odt $HOME/Documents/myproject

Or restore an entire directory:

cp -r $HOME/.snapshots/myproject-day1/directory $HOME/Documents/myproject

What if you delete the entire $HOME/Documents/myproject directory (actually, the subvolume)? You can recreate the subvolume as seen before, and again, you can simply use the cp command to restore the entire content from the snapshot:

btrfs subvolume create $HOME/Documents/myproject
cp -rT $HOME/.snapshots/myproject-day1 $HOME/Documents/myproject

Or you could restore the subvolume by using the btrfs snapshot command (yes, a snapshot of a snapshot):

btrfs subvolume snapshot $HOME/.snapshots/myproject-day1 $HOME/Documents/myproject

How to recover btrfs snapshots from an external drive

You can use the cp command even if the snapshot resides on an external drive. For instance:

cp /run/media/user/mydisk/bk/myproject-day1/filename.odt $HOME/Documents/myproject

You can restore an entire snapshot as well. In this case, since you will use the send and receive commands, you must use sudo. In addition, consider that the restored subvolume will be created as read only. Therefore you need to also set the read only property to false:

sudo btrfs send /run/media/user/mydisk/bk/myproject-day1 | sudo btrfs receive $HOME/Documents/
mv Documents/myproject-day1 Documents/myproject
btrfs property set Documents/myproject ro false

Here’s an extra explanation. The command btrfs subvolume snapshot will create an exact copy of a subvolume in the same device. The destination has to reside in the same btrfs device. You can’t use another device as the destination of the snapshot. In that case you need to take a snapshot and use the send and receive commands.

For more information, refer to some of the online documentation:

man btrfs-subvolume
man btrfs-send
man btrfs-receive

Use dnsmasq to provide DNS & DHCP services

Wednesday 30th of September 2020 08:00:00 AM

Many tech enthusiasts find the ability to control their host name resolution important. Setting up servers and services usually requires some form of fixed address, and sometimes also requires special forms of resolution such as defining Kerberos or LDAP servers, mail servers, etc. All of this can be achieved with dnsmasq.

dnsmasq is a lightweight and simple program which enables issuing DHCP addresses on your network and registering the hostname & IP address in DNS. This configuration also allows external resolution, so your whole network will be able to speak to itself and find external sites too.

This article covers installing and configuring dnsmasq on either a virtual machine or small physical machine like a Raspberry Pi so it can provide these services in your home network or lab. If you have an existing setup and just need to adjust the settings for your local workstation, read the previous article which covers configuring the dnsmasq plugin in NetworkManager.

Install dnsmasq

First, install the dnsmasq package:

sudo dnf install dnsmasq

Next, enable and start the dnsmasq service:

sudo systemctl enable --now dnsmasq

Configure dnsmasq

First, make a backup copy of the dnsmasq.conf file:

sudo cp /etc/dnsmasq.conf /etc/dnsmasq.conf.orig

Next, edit the file and make changes to the following to reflect your network. In this example, mydomain.org is the domain name, 192.168.1.10 is the IP address of the dnsmasq server and 192.168.1.1 is the default gateway.

sudo vi /etc/dnsmasq.conf

Insert the following contents:

domain-needed
bogus-priv
no-resolv
server=8.8.8.8
server=8.8.4.4
local=/mydomain.org/
listen-address=::1,127.0.0.1,192.168.1.10
expand-hosts
domain=mydomain.org
dhcp-range=192.168.1.100,192.168.1.200,24h
dhcp-option=option:router,192.168.1.1
dhcp-authoritative
dhcp-leasefile=/var/lib/dnsmasq/dnsmasq.leases

Test the config to check for typos and syntax errors:

$ sudo dnsmasq --test
dnsmasq: syntax check OK.

Now edit the hosts file, which can contain both statically- and dynamically-allocated hosts. Static addresses should lie outside the DHCP range you specified earlier. Hosts using DHCP but which need a fixed address should be entered here with an address within the DHCP range.

sudo vi /etc/hosts

The first two lines should be there already. Add the remaining lines to configure the router, the dnsmasq server, and two additional servers.

127.0.0.1      localhost localhost.localdomain
::1            localhost localhost.localdomain
192.168.1.1    router
192.168.1.10   dnsmasq
192.168.1.20   server1
192.168.1.30   server2

Restart the dnsmasq service:

sudo systemctl restart dnsmasq

Next add the services to the firewall to allow the clients to connect:

sudo firewall-cmd --add-service={dns,dhcp}
sudo firewall-cmd --runtime-to-permanent

Test name resolution

First, install bind-utils to get the nslookup and dig tools. These allow you to perform both forward and reverse lookups. You could use ping if you’d rather not install extra packages, but these tools are worth installing for the additional troubleshooting functionality they provide.

sudo dnf install bind-utils

Now test the resolution. First, test the forward (hostname to IP address) resolution:

$ nslookup server1
Server:     127.0.0.1
Address:    127.0.0.1#53

Name:       server1.mydomain.org
Address:    192.168.1.20

Next, test the reverse (IP address to hostname) resolution:

$ nslookup 192.168.1.20
20.1.168.192.in-addr.arpa    name = server1.mydomain.org.
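The dig tool installed with bind-utils can run the same checks; for example, this queries the dnsmasq server directly and prints just the address:

dig @127.0.0.1 server1.mydomain.org +short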

Finally, test resolving hostnames outside of your network:

$ nslookup fedoramagazine.org
Server:     127.0.0.1
Address:    127.0.0.1#53

Non-authoritative answer:
Name:       fedoramagazine.org
Address:    35.196.109.67

Test DHCP leases

To test DHCP leases, you need to boot a machine which uses DHCP to obtain an IP address. Any Fedora variant will do that by default. Once you have booted the client machine, check that it has an address and that it corresponds to the lease file for dnsmasq.

From the machine running dnsmasq:

$ sudo cat /var/lib/dnsmasq/dnsmasq.leases
1598023942 52:54:00:8e:d5:db 192.168.1.100 server3 01:52:54:00:8e:d5:db
1598019169 52:54:00:9c:5a:bb 192.168.1.101 server4 01:52:54:00:9c:5a:bb

Extending functionality

You can assign a host a fixed IP address via DHCP by adding it to your hosts file with the address you want (within your DHCP range), and then adding the following line to the dnsmasq.conf file, which assigns the listed IP to any host with that name:

dhcp-host=myhost

Alternatively, you can specify a MAC address which should always be given a fixed IP address:

dhcp-host=11:22:33:44:55:66,192.168.1.123

You can specify a PXE boot server if you need to automate machine builds:

tftp-root=/tftpboot
dhcp-boot=/tftpboot/pxelinux.0,boothost,192.168.1.240

This should point to the actual URL of your TFTP server.

If you need to specify SRV or TXT records, for example for LDAP, Kerberos or similar, you can add these:

srv-host=_ldap._tcp.mydomain.org,ldap-server.mydomain.org,389
srv-host=_kerberos._udp.mydomain.org,krb-server.mydomain.org,88
srv-host=_kerberos._tcp.mydomain.org,krb-server.mydomain.org,88
srv-host=_kerberos-master._udp.mydomain.org,krb-server.mydomain.org,88
srv-host=_kerberos-adm._tcp.mydomain.org,krb-server.mydomain.org,749
srv-host=_kpasswd._udp.mydomain.org,krb-server.mydomain.org,464
txt-record=_kerberos.mydomain.org,KRB-SERVER.MYDOMAIN.ORG

There are many other options in dnsmasq. The comments in the original config file describe most of them. For full details, read the man page, either locally or online.

Announcing the release of Fedora 33 Beta

Tuesday 29th of September 2020 02:27:52 PM

The Fedora Project is pleased to announce the immediate availability of Fedora 33 Beta, the next step towards our planned Fedora 33 release at the end of October.

Download the prerelease from our Get Fedora site:

Or, check out one of our popular variants, including KDE Plasma, Xfce, and other desktop environments, as well as images for ARM devices like the Raspberry Pi 2 and 3:

Beta Release Highlights

BTRFS by default

All of the desktop variants of Fedora 33 Beta – including Fedora Workstation, Fedora KDE, and others – will use BTRFS as the default filesystem. This is a big shift: we’ve been using ext filesystems since Fedora Core 1. BTRFS offers some really compelling features for users, including transparent compression and copy-on-write. For Fedora 33, we’re only defaulting to the basic features of BTRFS, but we’ll build out the default feature set to include more goodies in future releases.

Fedora Workstation

Fedora 33 Workstation Beta includes GNOME 3.38, the newest release of the GNOME desktop environment. It is full of performance enhancements and improvements. GNOME 3.38 now includes a welcome tour after installation to help users learn about all of the great features this desktop environment offers. It also improves screen recording and multi-monitor support. For a full list of GNOME 3.38 highlights, see the release notes.

Fedora 33 Workstation Beta also provides better thermal management and peak performance on Intel CPUs by including thermald in the default install. And because your desktop should be fun to look at as well as easy to use, Fedora 33 Workstation Beta includes animated backgrounds (a time-of-day slideshow with hue changes) by default.

Fedora IoT

With Fedora 33 Beta, Fedora IoT is now an official Fedora Edition. Fedora IoT is geared toward edge devices on a wide variety of hardware platforms. It is based on ostree technology for safe update and rollback. It includes the Platform AbstRaction for SECurity (PARSEC), an open-source initiative to provide a common API to hardware security and cryptographic services in a platform-agnostic way.

Other updates

Fedora 33 Beta defaults to using nano as the editor. nano is a more approachable editor that is more welcoming to new users. Of course, those who want to use vim, emacs, or any other editor are still able to.

Fedora 33 KDE Beta enables earlyOOM by default, as Fedora Workstation did in the previous release. This helps improve system responsiveness on systems that are running out of memory. 

Fedora 33 Beta includes updated versions of many popular packages like Ruby, Python, and Perl. .NET Core will now be available on Fedora on aarch64, in addition to x86_64. We’re also dropping a few older versions: Python 2.6 and Python 3.4 are retired. The httpd module mod_php is also dropped, as php-fpm is a more performant and more secure PHP module.

Testing needed

Since this is a Beta release, we expect that you may encounter bugs or missing features. To report issues encountered during testing, contact the Fedora QA team via the mailing list or in the #fedora-qa channel on Freenode IRC. As testing progresses, common issues are tracked on the Common F33 Bugs page.

For tips on reporting a bug effectively, read how to file a bug.

What is the Beta Release?

A Beta release is code-complete and bears a very strong resemblance to the final release. If you take the time to download and try out the Beta, you can check and make sure the things that are important to you are working. Every bug you find and report doesn’t just help you, it improves the experience of millions of Fedora users worldwide! Together, we can make Fedora rock-solid. We have a culture of coordinating new features and pushing fixes upstream as much as we can. Your feedback improves not only Fedora, but Linux and free software as a whole.

More information

For more detailed information about what’s new on Fedora 33 Beta release, you can consult the Fedora 33 Change set. It contains more technical information about the new packages and improvements shipped with this release.

Now available: Fedora on Lenovo laptops!

Friday 25th of September 2020 08:00:00 AM

We’ve been teasing this for a while, but today it’s finally true—Fedora Workstation is now available preinstalled on the Lenovo ThinkPad X1 Carbon Gen 8, ThinkPad P53, and ThinkPad P1 Gen 2 laptops. The ThinkPad X1 Carbon is available today for direct consumer purchase from Lenovo’s online store. The Lenovo ThinkPad P1 Gen 2 and ThinkPad P53 will be available next week via the “Contact Us” icon on Lenovo.com. What’s more, the successor models are in the works for pre-load and online ordering as well!

Lenovo has been a great partner in bringing this to market. Like the Fedora community, they are operating on an “upstream first” model. That’s part of why the only thing you’ll see on the laptop that doesn’t come from an official Fedora repository is a set of PDFs providing documentation and legal notices. Lenovo engineers have been contributing to the Linux kernel, including a patch to enable the “lap mode” sensor, which is already accepted. They have also worked with their vendors to improve Linux support in devices like the fingerprint scanner.

Of course, you already know that open source is about more than just the technology; the community is what makes it great. Lenovo is a member of Fedora and other communities. In addition to participating in the usual Fedora places (like the devel mailing list), they were also a gold-level sponsor of our Nest With Fedora conference. And they have a dedicated Fedora section on their community forum. Mark Pearson, Senior Linux Developer, said “doing open source the right way is important to us” at his Nest With Fedora Q&A session.

This will be a global program, but it will take some time to roll out country-by-country. If it doesn’t appear on the website in your country, call the local sales number for your country to place a phone order. I’m excited to have Lenovo offer Fedora Workstation as a supported choice on their laptops. This is a great opportunity to grow our community.

Installing and running Vagrant using qemu-kvm

Monday 21st of September 2020 08:00:00 AM

Vagrant is a brilliant tool, used by DevOps professionals, coders, sysadmins and regular geeks to stand up repeatable infrastructure for development and testing. From their website:

Vagrant is a tool for building and managing virtual machine environments in a single workflow. With an easy-to-use workflow and focus on automation, Vagrant lowers development environment setup time, increases production parity, and makes the “works on my machine” excuse a relic of the past.

If you are already familiar with the basics of Vagrant, the documentation provides a better reference for all available features and internals.

Vagrant provides easy to configure, reproducible, and portable work environments built on top of industry-standard technology and controlled by a single consistent workflow to help maximize the productivity and flexibility of you and your team.

https://www.vagrantup.com/intro

This guide will walk through the steps necessary to get Vagrant working on a Fedora-based machine.

I started with a minimal install of Fedora Server as this reduces the memory footprint of the host OS, but if you already have a working Fedora machine, either Server or Workstation, then this should still work.

Check that the machine supports virtualisation:

$ sudo lscpu | grep Virtualization
Virtualization:                  VT-x
Virtualization type:             full

Install qemu-kvm:

sudo dnf install qemu-kvm libvirt libguestfs-tools virt-install rsync

Enable and start the libvirt daemon:

sudo systemctl enable --now libvirtd

Install Vagrant:

sudo dnf install vagrant

Install the Vagrant libvirt plugin:

sudo vagrant plugin install vagrant-libvirt

Add a box

vagrant box add fedora/32-cloud-base --provider=libvirt

Create a minimal Vagrantfile to test

$ mkdir vagrant-test
$ cd vagrant-test
$ vi Vagrantfile

Vagrant.configure("2") do |config| config.vm.box = "fedora/32-cloud-base" end

Note the capitalisation of the file name and in the file itself.

Check the file:

vagrant status

Current machine states:

default                   not created (libvirt)

The Libvirt domain is not created. Run 'vagrant up' to create it.

Start the box:

vagrant up

Connect to your new machine:

vagrant ssh

That’s it – you now have Vagrant working on your Fedora machine.

To stop the machine, use vagrant halt. This simply halts the machine but leaves the VM and disk in place.
To shut it down and delete it use vagrant destroy. This will remove the whole machine and any changes you’ve made in it.

Next steps

You don’t need to download boxes before issuing the vagrant up command – you can specify the box and the provider in the Vagrantfile directly and Vagrant will download it if it’s not already there. Below is an example which also sets the amount of memory and number of CPUs:

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.box = "fedora/32-cloud-base"
  config.vm.provider :libvirt do |libvirt|
    libvirt.cpus = 1
    libvirt.memory = 1024
  end
end

For more information on using Vagrant, creating your own machines and using different boxes, see the official documentation at https://www.vagrantup.com/docs

There is a huge repository of boxes ready to download and use, and the official location for these is Vagrant Cloud – https://app.vagrantup.com/boxes/search. Some are basic operating systems and some offer complete functionality such as databases, web servers etc.

Incremental backups with Btrfs snapshots

Monday 14th of September 2020 01:39:33 PM

Snapshots are an interesting feature of Btrfs. A snapshot is a copy of a subvolume. Taking a snapshot is immediate. However, taking a snapshot is not like performing a rsync or a cp, and a snapshot doesn’t occupy space as soon as it is created.

Editors note: From the BTRFS Wiki – A snapshot is simply a subvolume that shares its data (and metadata) with some other subvolume, using Btrfs’s COW capabilities.

Occupied space will increase alongside the data changes in the original subvolume or in the snapshot itself, if it is writeable. Added/modified files, and deleted files in the subvolume still reside in the snapshots. This is a convenient way to perform backups.

Using snapshots for backups

A snapshot resides on the same disk where the subvolume is located. You can browse it like a regular directory and recover a copy of a file as it was when the snapshot was taken. Note that a snapshot on the same disk as the snapshotted subvolume is not an ideal backup strategy: if the hard disk breaks, the snapshots will be lost as well. An interesting feature of snapshots is the ability to send them to another location. The snapshot can be sent to an external hard drive or to a remote system via SSH (the destination filesystem needs to be formatted as Btrfs as well). To do this, the commands btrfs send and btrfs receive are used.

Taking a snapshot

In order to use the send and the receive commands, it is important to create the snapshot as read-only; snapshots are writeable by default.

The following command will take a snapshot of the /home subvolume. Note the -r flag for readonly.

sudo btrfs subvolume snapshot -r /home /.snapshots/home-day1

Instead of day1, the snapshot name can be the current date, like home-$(date +%Y%m%d). Snapshots look like regular subdirectories. You can place them wherever you like. The directory /.snapshots could be a good choice to keep them neat and to avoid confusion.

Editors note: Snapshots will not take recursive snapshots of themselves. If you create a snapshot of a subvolume, every subvolume or snapshot that the subvolume contains is mapped to an empty directory of the same name inside the snapshot.

Backup using btrfs send

In this example the destination Btrfs volume in the USB drive is mounted as /run/media/user/mydisk/bk. The command to send the snapshot to the destination is:

sudo btrfs send /.snapshots/home-day1 | sudo btrfs receive /run/media/user/mydisk/bk

This is called initial bootstrapping, and it corresponds to a full backup. This task will take some time, depending on the size of the /home directory. Obviously, subsequent incremental sends will take a shorter time.

Incremental backup

Another useful feature of snapshots is the ability to perform the send task in an incremental way. Let’s take another snapshot.

sudo btrfs subvolume snapshot -r /home /.snapshots/home-day2

In order to perform the send task incrementally, you need to specify the previous snapshot as a base, and this snapshot has to exist in the source and in the destination. Please note the -p option.

sudo btrfs send -p /.snapshots/home-day1 /.snapshots/home-day2 | sudo btrfs receive /run/media/user/mydisk/bk

And again (the day after):

sudo btrfs subvolume snapshot -r /home /.snapshots/home-day3
sudo btrfs send -p /.snapshots/home-day2 /.snapshots/home-day3 | sudo btrfs receive /run/media/user/mydisk/bk
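As a rough sketch of how these daily steps could be automated (the paths, mount point, and date-based naming are assumptions carried over from this article’s examples, so adjust them for your setup), a small script might look like this:

#!/bin/bash
# Take today's read-only snapshot of /home, then send it incrementally
# to the USB drive. Assumes yesterday's snapshot still exists on both
# the source and the destination.
today="home-$(date +%Y%m%d)"
yesterday="home-$(date -d yesterday +%Y%m%d)"
sudo btrfs subvolume snapshot -r /home "/.snapshots/$today"
sudo btrfs send -p "/.snapshots/$yesterday" "/.snapshots/$today" | \
  sudo btrfs receive /run/media/user/mydisk/bk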

Cleanup

Once the operation is complete, you can keep the snapshot. But if you perform these operations on a daily basis, you could end up with a lot of them. This could lead to confusion and potentially a lot of used space on your disks. So it is good advice to delete some snapshots if you think you don’t need them anymore.

Keep in mind that in order to perform an incremental send you need at least the last snapshot. This snapshot must be present in the source and in the destination.

sudo btrfs subvolume delete /.snapshots/home-day1
sudo btrfs subvolume delete /.snapshots/home-day2
sudo btrfs subvolume delete /run/media/user/mydisk/bk/home-day1
sudo btrfs subvolume delete /run/media/user/mydisk/bk/home-day2

Note: the day 3 snapshot was preserved in the source and in the destination. In this way, tomorrow (day 4), you can perform a new incremental btrfs send.

As some final advice, if the USB drive has a bunch of space, you could consider maintaining multiple snapshots in the destination, while in the source disk you would keep only the last one.

Ankur Sinha: How do you Fedora?

Friday 11th of September 2020 08:00:00 AM

We recently interviewed Ankur Sinha on how he uses Fedora. This is part of a series on the Fedora Magazine. The series profiles Fedora users and how they use Fedora to get things done. Contact us on the feedback form to express your interest in becoming an interviewee.

Who is Ankur Sinha?

Ankur is a computational neuroscientist who has just started his first post-doctoral fellowship at University College London, and a FLOSS enthusiast trying to spread the message of FOSS and evidence-based science. Ankur started using Linux a decade ago, when he was introduced to it at a LUG install fest during his undergraduate degree.

Ankur loves reading:

“I read a lot and tend to get attached to characters from books quite easily. Holmes, Poirot (I’m a detective fiction fan), Francisco D’Anconia (fan of the book Atlas Shrugged, but not so much Ayn Rand’s philosophy), lots of random characters from books I’d read. I also read lots of Hindi comics as a child—Doga, Super commando Dhruv, Naagraj, and Chacha Chaudhary—loved them all!”.

As far as all-time favorite movies go, Swades comes to mind. His favorite genre is science fiction thrillers (think “The Prestige” and “Predestination”). When not busy working or chatting with people on IRC channels, he enjoys listening to podcasts and classic rock.

Ankur’s favorite food is his mother’s Chhole Bhature. Otherwise, if he’s away from home, his go-tos are Butter chicken, Butter Naan, and Chilli Chicken from North Indian restaurants.

The Fedora Community

Ankur found out about Fedora after a distro-hopping phase in 2008, and he has been a Fedora user ever since. His first memory of the Fedora community is an IRC workshop on packaging fonts that the Fedora India community had organised back in 2008. Talking to and meeting other community members has been one of the most exciting parts of the Fedora community for him. “I found this great bunch of people to hang out and geek out with! It was so much fun, and extremely educational both in terms of technical knowledge and the social/philosophical side of FOSS and life in general.”

When asked what he would change in the Fedora Project if he could change one thing, he said that he prefers “Smaller tweaks” since “Smaller tweaks also allow work to be spread out, and that really helps”. Specifically, he would like to see more discussion on the philosophy and nuances of FOSS in the community.

"Perhaps we all know it so well that we take it for granted and focus on the work that needs to be done. It’s so easy to get bogged down in the work, though, that I worry that we forget the bigger picture sometimes. The end for us is to promote FOSS, and everything we do is the means to this end. So, I worry that the means sometimes becomes the end for us — that we focus so much on producing deliverables that we forget why we produce them."

Since he works in academia and science, Ankur would like the Fedora community (and FOSS in general) to get more involved with academic/scientific communities. “I think we have an excellent platform to enable education and research. NeuroFedora is a start in this direction.”

He wishes that other people knew that the Fedora community are not just OS developers, but a global community, and he’d like folks to just hang out and communicate even if they’re not contributing in the traditional sense of the word.

Ankur tries to help wherever he can, especially if newbies are involved. Nowadays, he tries to focus more on NeuroFedora, as it fits well with his day job and there’s so much to do in this field and in open science.

Ankur learned most of what he knows from his more than ten years of experience in Fedora and FOSS. He had learned the theory of software development as an undergraduate, but experienced practical implementations through his colleagues in the community. He is a firm believer that “no question is a stupid question”. He adds that Fedora is perfect because it gets better as you start working with it.

His advice for anyone thinking of getting involved in Fedora is to just go ahead and start. One doesn’t need to know anything in advance; all of it can be learned over time. Secondly, don’t focus only on tasks. Yes, that’s a good way of learning, but it is far more important to get to know the people of Fedora! As one meets more people, one learns more about how Fedora works and has way more fun working and learning!

Just like a lot of our community members, Ankur struggles with time constraints. His new challenge is to find more time to work on FOSS and Fedora. During his college years, it was to learn more and more.

One of the challenges Ankur faces when promoting open source is explaining to non-FOSS people that Windows and macOS aren’t the only operating systems available. He thinks that having Fedora ship on Lenovo systems will give the community a good start, since it makes Fedora and FOSS feel more “official”.

What Hardware?

Ankur has three machines and runs Fedora 32 on each of them:

Ankur’s Desk
  • Thinkpad E490 laptop
  • a custom workstation that university IT set up for research work
  • a headless MacPro5,1
  • 2x Microsoft Sculpt Ergonomic keyboard/mouse/numpad
  • Netgear wifi extender
  • TP-Link TL-PA8033PKIT AV1300 3-Port Gigabit Passthrough Powerline Adapters
  • Moto g7 phone with Android 10
What Software?

Fedora 32 workstation, and server on the MacPro.

  • Workstation/Gnome3 with a few extensions: caffeine, pomodoro, syncthing
  • byobu with tmux: multiple sessions: default, work, fedora
  • taskwarrior, vit, timewarrior, gnome-pomodoro, gnome-calendar/evolution for calendars
  • neomutt with msmtp + offlineimap + notmuch for e-mail
  • vim for *everything* possible – vimrc link
  • qutebrowser, weechat, zathura, vimiv
  • syncthing + dropbox + git for syncing/version control

For research work:

  • NEST + lots of python and Gnuplot for analysis, LaTeX for writing (stuff from NeuroFedora!)
  • inkscape + gimp + dia + freemind for figures/mind mapping
  • jabref for bibliography management

Other bits:

  • occasional gamer: 0ad + Endless Sky + OpenTTD!

Tune up your sound with PulseEffects: Microphones

Monday 31st of August 2020 08:00:00 AM

The PulseEffects app is a full-featured set of modular effects you can use to adjust sound devices. In a previous article, you learned how you can use PulseEffects to correct or enhance output devices like speakers. However, that’s not where its features stop. You can also enhance sound input devices such as microphones. This can help when recording sound for podcasts, videos, or the like.

This article assumes you’ve already installed PulseEffects as shown in the previous article. It will not cover advanced topics like recording musical instruments, but it will show you how to do better voice or spoken-word recordings.

A word on microphones

Microphones come in a variety of forms. The one almost every laptop user has at hand is the condenser microphone built into the hardware. These microphones are limited in terms of producing quality sound. They’re built to provide basic sound, and they will pick up a lot of environmental noise due to how they work. If you want better results for a voice recording, there are many choices available based on budget.

  • USB headset with built-in condenser microphone: Generally budget-friendly and almost always gives better results than a laptop’s built-in mic. The resulting sound can be somewhat harsh and tinny, but this can be corrected. Manufacturers such as Logitech make units that are plug-and-play ready for Linux. They show up as USB sound devices (both input and output).
  • Handheld dynamic microphone: You’ll see the singer in a live band using one of these. You have to be close to them (and maintain that distance steadily) for best results, but they sound full and well-defined. These are typically a little more expensive than a USB headset.
  • Large diaphragm condenser microphone: You’ll see this type used by a singer or speaker in a broadcast or recording studio. Like other condensers they pick up a lot of the surrounding environment. By being fairly close to the mic you can essentially “turn down” the rest of the room. You can find budget friendly, good quality large condensers starting at the same price as a good dynamic mic. Prices go up from there to astronomical levels!

Most dynamic and large diaphragm condenser mics need to be plugged into a digital audio interface, using a microphone cable. This converts the signal from the mic into digital audio for the computer to use. However, you can find specialty mics made for direct connection via USB. These may be advertised as “podcaster mics,” and you can save some money using one of these, versus buying both a mic and an interface.

Making the mic sound better

Effects help you improve the recorded sound of your microphone. Whether you know it or not, you hear these effects all the time in recorded sound — in music, in TV shows and movies, on professional podcasts, and via commercial and satellite radio. Engineers apply these effects using either hardware units, or via software.

PulseEffects provides these effects in a software form, before your recording is saved on disk. Here is a list, in the order they are usually applied:

  • A gate reduces or entirely mutes the microphone when sound falls below a certain level. With proper settings, when you start speaking, the gate quickly opens, unmuting the mic. When you finish, the gate closes and other environmental sound will be either silenced or much quieter.
  • A compressor reduces the dynamic range of the input. Louder sounds are caught by the compressor and squashed down. You then turn the entire signal up slightly to compensate. This way, quieter and louder sounds become closer in volume, making the sound more even and less “peaky.” This results in a more professional, polished sound that’s much more enjoyable for listeners.
  • An equalizer (EQ) tunes up the sound of the voice. Use it to mitigate tones in your voice that you find unflattering. In addition, when you speak close to a mic, the bass frequencies in the voice are unnaturally emphasized. Sound engineers call this the proximity effect. By using an EQ to roll off the low end frequencies, you can reduce this effect and create a more pleasant sound.
  • A limiter is often the last step in a signal chain. This effect puts an absolute limit on the volume of a sound, so that unexpectedly hard sounds (such as p or b sounds, called plosives) that aren’t caught by compression don’t distort and ruin your recording.
Dive into PulseEffects

Open up the PulseEffects app. In the top left corner, choose the microphone selector icon. This lets you set up the effects chain you want for the mic as an input device. As with output devices (speakers), you can save your effects chain as well.

Use recording software that registers as a PulseAudio client to see your effects at work. The PulseCaster app is one such app, but there are many others you can choose.
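If you prefer a quick command-line check, PulseAudio’s own recording utility also registers as a client. This is a minimal sketch, assuming your effected microphone is reachable as a PulseAudio source; the source name below is a placeholder, so list your sources first and substitute the right one.

# List available input sources (the PulseEffects source appears here when its mic chain is active)
pactl list short sources

# Record a short test clip from a chosen source, then stop with Ctrl+C
parecord --file-format=wav -d <source-name> test.wav

# Listen back to judge the effect chain
paplay test.wav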

Tips from a mix engineer

These guidelines may help you find the optimal sound. Remember that no two sound situations are ever the same. Use your ears, and do some test recordings, to figure out what’s best for your situation.

  • When you apply the gate, use a fast response of 5-10ms. The human voice has a significant “startup time,” so this speed makes the gate unnoticeable. Give the gate some time to close, though, so you don’t cut off the end of speech. Typically 100-200ms sounds fairly natural. A gain reduction of -12 or -18dB suffices to reduce environmental noise, and sounds more natural than more extreme values.
  • If you find a module is overloading when you speak, either reduce the output of the effects module before it, or the input of the module itself.
  • If you like the sound of your recorded voice without an EQ, use the Filter module instead to simply apply a high pass filter. For male voices, use a roll-off frequency of 80-100 Hz. For female voices, use a higher value. If you set the filter too high, the recording may sound weak or nasal.
  • Use a compressor ratio between 3 and 4 (this is actually 3:1 – 4:1) which works well with a human voice. An attack of 20ms and a release of 100-200ms is typical.
  • You may want to try the Deesser module as well, to reduce the “sizzling” of s, z, t, and f sounds. Because voices vary so widely, you’ll need to tune this to taste. A split of 6kHz and a threshold of -18dB is a good place to start.
  • A limiter setting of -1 to -3dB usually works well. Much lower settings result in a very “squashed” sounding track. In some cases that may be useful; in others it will sound unnatural.

Refer to the previous article to save your effects chain. Remember, you can store multiple chains, and then select the one you want for your particular needs.

Photo by Jacek Dylag on Unsplash.

Contribute at the Fedora Test Week for Btrfs

Wednesday 26th of August 2020 08:00:00 AM

The Fedora Project is changing the default file system for desktop variants, including Fedora Workstation, Fedora KDE, and more, for the first time since Fedora 11. Btrfs will replace ext4 as the default filesystem in Fedora 33. The Change is code complete, and has been testable in Rawhide as the default file system since early July. The Fedora Workstation working group and QA team have organized a test week from Monday, Aug 31, 2020 through Monday, Sep 07, 2020. Refer to the wiki page for links to the test images you’ll need to participate. Read below for details.

How does a test week work?

A test week is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to do the following things:

  • Download test materials, which include some large files
  • Read and follow directions step by step

The wiki page for the btrfs test week has a lot of good information on what and how to test. After you’ve done some testing, you can log your results in the test day web application. If you’re available on or around the day of the event, please do some testing and report your results.

Happy testing, and we hope to see you during the test week!

Btrfs Coming to Fedora 33

Monday 24th of August 2020 08:00:00 AM

by Chris Murphy and Langdon White

User data is the most important thing on a computer. Whether it’s source code for the next big release, family pictures, a music library, or anything else, you want it to be safe. Changing the default file system is not a change to make casually. The Fedora Project is changing the default file system for desktop variants (Fedora Workstation, Fedora KDE, etc), for the first time since Fedora 11. Btrfs will replace ext4 as the default filesystem in Fedora 33.

What does this mean for me?

Btrfs is a stable and mature file system with modern features: data integrity, optimizations for SSDs, compression, cheap writable snapshots, multiple device support, and more.

The switch to Btrfs will use a single-partition disk layout, and Btrfs’ built-in volume management. The previous default layout placed constraints on disk usage that can be a difficult adjustment for novice users. Btrfs solves this problem by avoiding it.

As a techie, you may have heard of bit rot, and memory bit flips. Data can be corrupted by a multitude of physical factors, even cosmic rays from the sun! Before an SSD fails outright, often it will return either zeros or garbage, instead of your data. Btrfs safeguards your data with checksums, and performs verification on every read. Corrupt data is never given to your programs, and it won’t replicate into your backups to be discovered another day (or year).
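If you’re curious, you can ask Btrfs to run this verification across the whole filesystem yourself once you’re running it; a scrub re-reads every block and compares it against its checksum. A minimal sketch, on a system where / is Btrfs:

# Start a scrub of the filesystem mounted at / (it runs in the background)
$ sudo btrfs scrub start /

# Check progress and any checksum errors found so far
$ sudo btrfs scrub status /

# Per-device error counters
$ sudo btrfs device stats /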

Btrfs uses a “copy-on-write” model: your data and the file system itself are never overwritten. This enhances crash safety. When copying a file, Btrfs does not write new data until you actually change the old data, saving space.

In fact, users will save more space when using Btrfs’ transparent compression. Compressing data reduces total writes, saves space, and extends flash drive life. In many cases, it can also improve performance. Compression can be enabled on an entire file system, or per subvolume, directory, and even per file. You will be able to opt-in to using compression in Fedora 33. And it’s one of the features we’re looking forward to taking advantage of by default in future Fedora releases.
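As a rough idea of what that opt-in can look like, here is a sketch of the common ways compression is enabled on Btrfs; the device, mount point, and directory names are only examples, and the property command needs a reasonably recent kernel and btrfs-progs.

# Enable zstd compression for everything written to a mounted filesystem
$ sudo mount -o compress=zstd:1 /dev/sdb1 /mnt/data

# Or enable it only for a particular directory or subvolume
$ sudo btrfs property set /mnt/data/projects compression zstd

# Check what is currently set
$ sudo btrfs property get /mnt/data/projects compression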

Trusted

Facebook uses Btrfs on millions of machines in production. They compare its stability to ext4 and XFS (another file system available in Fedora). In fact, they use Btrfs to “improve” the quality of the consumer storage hardware that they use in production. Btrfs detects problems before the hardware fails.

(open)SUSE has been using Btrfs for many years now, including in SUSE Linux Enterprise Server (SLES). You can’t imagine a company that provides support to customers shipping software it doesn’t completely trust.

What’s next?

The Change is code complete, and has been testable in Rawhide as the default file system since early July. Btrfs has been explicitly supported in Fedora since 2012. This is expected to be a transparent change for most users; however, it is still significant. Fedora will ensure we deliver the dependable and reliable experience Fedora users have come to expect.

Special thanks to: Ben Cotton, Michael Catanzaro, and the Fedora Workstation Working Group for contributing to this article.

Configure Fedora to practice and compose music

Friday 21st of August 2020 09:45:00 AM
Introduction

Using Fedora and Linux to produce and play music is now easy. Not that long ago, it was a nightmare: configuration was a complicated task, you needed to compile some applications yourself, and compatibility with electronic devices was a real problem. But those days are mostly behind us. Playing music under Linux with Fedora is becoming user friendly.

Configuration

Fedora has long been usable for playing music because of the CCRMA repository. Moreover, there is also a dedicated Fedora Spin: Fedora Jam. And today, you also have a COPR repository (which I manage) with a lot of stuff in it.

To install the Fedora CCRMA repository:

rpm -Uvh http://ccrma.stanford.edu/planetccrma/mirror/fedora/linux/planetccrma/$(rpm -E %fedora)/x86_64/planetccrma-repo-1.1-3.fc$(rpm -E %fedora).ccrma.noarch.rpm
dnf install https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm
dnf install https://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm

To install the LinuxMAO Fedora COPR repository:

dnf copr enable ycollet/linuxmao

There are still some minimal steps to follow before being able to efficiently use a musical application. First, you will need to install the Jack audio connection kit and the qjackctl user interface:

dnf install jack-audio-connection-kit qjackctl

Then, using sudo, you will need to add yourself to the jackuser group:

sudo usermod -a -G jackuser <my_user_id>

To apply the change, log out of and back in to your session, or reboot your machine if you prefer.
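After logging back in, a quick way to confirm the group change took effect is to check your group list:

# The output should include jackuser
id -nG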

Using basic applications

Now, you can add some applications to play with like LMMS or MuseScore.

An LMMS session. A MuseScore session.

You can also record your voice using Audacity.

All of these applications are available in the main Fedora repository:

dnf install lmms mscore audacity

Fedora and your instrument, in real time

Configuration

Editor’s note: A real-time kernel is necessary for audio recording on your PC, especially when doing multi-track recording.

If you want to use your instrument (like an electric guitar) and use the sound of your instrument in some Fedora application, you will need to use Jack Audio Connection Kit with a real time kernel.

With the CCRMA repository, to install the real time kernel, use the following command as a root user:

dnf install kernel-rt

With the LinuxMAO Fedora COPR repository, use the following command:

dnf install kernel-rt-mao

The RT kernel from the CCRMA repository corresponds to a vanilla RT kernel with some Fedora patches applied, whereas the one from the LinuxMAO repository is pure vanilla (a clean RT kernel without any patches).

Once this is done, we still need to tune qjackctl so the audio latency becomes negligible.

The main QJackCtl interface.

Click on the “Setup” button and set the following values:

  • Sample rate: 48000 or 44100 (this is the sampling frequency; these values are supported by most commercially available sound cards)
  • Frames / period: 256
  • Periods / Buffer: 2
  • MIDI driver: seq (this value is required if you want to use a MIDI device)

With these parameters, you can easily achieve an audio latency of around 10 ms. This value is close to the limit of what the human ear can notice; you can reach lower latency at the cost of increased CPU load.
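If you want to see where the roughly 10 ms figure comes from, the latency follows directly from those settings: frames per period × periods per buffer ÷ sample rate. A quick check in the shell:

# 256 frames/period * 2 periods / 48000 Hz, converted to milliseconds
echo "scale=2; 256 * 2 * 1000 / 48000" | bc
# prints 10.66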

Using Guitarix

To add some effects to your instrument, we will use a rack of effects: Guitarix, the virtual guitar amplifier.

dnf install guitarix

Now, you have to connect your instrument to the audio card (the internal one or a USB adapter). Editor’s note: This normally requires an interface between the electric guitar and the audio line in of the audio card. There are also guitar-to-USB adapters. Once your instrument is connected, with qjackctl, we will connect:

  • the audio input to guitarix
  • the guitarix mono rack to the guitarix stereo rack
  • the guitarix stereo rack to the stereo audio output of your audio card
Connecting Guitarix using the QJackCtl graph window.

You do that by clicking on the Graph button of QJackCtl. Inside the Graph window, you just have to connect wires to the various elements. Each block represents an application. Guitarix is split into two blocks (preamp and rack). The preamp is where you select the amplifier characteristics, and the rack is where you apply mono and stereo effects. There are two other blocks with the system label for the audio input (the one on the left in the above figure) and the audio outputs (the one on the right).

Your instrument should be connected to the first audio input. Test that your guitar is connected and can be heard when played. Most of the time, we use the first two slots of audio output, but this will depend on your audio card.
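The same wiring can also be done from a terminal with the JACK command-line tools, which is handy over SSH or in a startup script. The Guitarix port names below are only illustrative and can differ between versions, so list the real ports on your system first and adapt the commands.

# List every JACK port currently available
jack_lsp

# Illustrative wiring, based on typical Guitarix port names:
jack_connect system:capture_1 gx_head_amp:in_0
jack_connect gx_head_amp:out_0 gx_head_fx:in_0
jack_connect gx_head_fx:out_0 system:playback_1
jack_connect gx_head_fx:out_1 system:playback_2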

Editor’s note: The actual configuration of inputs and outputs depends upon the type of hardware chosen. The stereo speakers of the PC were chosen as the output in the example shown.

If the MIDI interface of the sound card is chosen, there are also two red blocks which are dedicated to MIDI inputs / outputs. These would then be setup as the input from the instrument and the output from the rack.

Guitarix is an amp plus a rack of effects for your instrument. It is mostly dedicated to guitar, but you can use it with synthesizers too.

The Guitarix rack effects.

Adding some backing tracks

Better than just playing guitar on your own, you can play guitar with a group. To do this, we will install TuxGuitar.

dnf install tuxguitar

The TuxGuitar main interface.

TuxGuitar will play GuitarPro files. These files contain scores for several instruments and can be played back in real time. You just have to download a GuitarPro file from this website and open it with TuxGuitar.

Start TuxGuitar and click on Tools -> Plugins and check the fluidsynth plugin. Then, once fluidsynth is checked, click on Configure. Click on the Audio tab and select Jack as Audio Driver. In the Synthetizer tab, choose the same sampling frequency you chose for QjackCtl above (48000 or 44100 Hz).

In the soundfonts tab, you can add your own SF2 or SF3 file to improve the audio rendering. You can now close the Plugins window. Click on Tools -> Settings -> Sound. Here, you can select the kind of sound used to render the score. If you have several SF2 / SF3 files, this is where you select the one to use for audio rendering. Restart TuxGuitar after you’re satisfied with your selections. After restarting TuxGuitar, a new block will appear in the Graph window of QJackCtl.

QJackCtl with Guitarix and TuxGuitar.

You will just have to connect the block tagged ‘fluidsynth’ to the audio output like you have done with Guitarix.

Using MIDI devices

Using MIDI devices in real time is as easy as with audio. We will connect a virtual MIDI keyboard, vkeybd (but the same procedure applies with a real MIDI device), to a MIDI synthesizer: amsynth.

dnf install amsynth vkeybd

The main interface of AmSynth. The virtual MIDI keyboard VKeyBD.

Once you have started amsynth and vkeybd, you will see new connections on the QJackCtl’s Graph window.

Amsynth and VKeyBD in QJackCtl’s graph window.

In this window, the red slots correspond to the Jack Audio MIDI connections whereas the purple ones correspond to the ALSA MIDI connections. Jack MIDI connections talk only to Jack MIDI connections, and the same goes for ALSA. If you want to connect a Jack MIDI connection to an ALSA MIDI connection, you will need to use a MIDI gateway: a2jmidid. You can read more information in the Ardour manual.
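a2jmidid is packaged in Fedora, and a minimal sketch of bridging ALSA MIDI into JACK looks like this; the -e flag also exports hardware MIDI ports so external devices show up in the Graph window.

dnf install a2jmidid

# Start the ALSA-to-JACK MIDI bridge (run as your normal user, with Jack running)
a2jmidid -e &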

We have now covered some main topics of the audio under Fedora Linux. But there are a lot more things you can do.

Other possibilities

You can do multitrack recording with ardour, qtractor or zrythm.

QTractor for multitrack live recording.

You can do live coding using SuperCollider or SonicPi.

SonicPi in action.

You can use a block-based visual language to build many things: PureData

An audio / video block language: PureData

There is also a great audio looper available: SooperLooper.

SooperLooper, a great tool to build audio loops.

You can do live rehearsal through the internet: Jamulus.

Against the COVID side effects: Jamulus for live internet rehearsal.

Want to become the next famous DJ? Have a look at Mixxx.

Mixxx for DJing.

Webography

Some links now:

Here is a YouTube video where I play guitar through Guitarix and use TuxGuitar to play the backing tracks in real time. Both TuxGuitar and Guitarix are sent through non-mixer, a small mixing application. To record the audio of the session “on the fly”, I also use timemachine. And to avoid reconnecting everything each time I want to play guitar, I use Ray Session to start every application and connect all the Jack Audio connections.

I also made a small demonstration of using Jamulus for a live rehearsal. In this YouTube video, I mainly use Jamulus, QJackCtl, and Guitarix. The second Guitarix player is 30 km away. The latency was around 15 ms, which is quite small and hardly noticeable.

In this YouTube video, I compare various SF2 / SF3 soundfont files. I used a GuitarPro file for the Opeth song “Epilogue”.

On this YouTube video, I use MuseScore to play a GuitarPro file and I play along while my guitar sound is processed by Guitarix.

Here is a live performance with a dancer: TuxGuitar + Non Session Manager + Non Mixer + Guitarix. I have always used this kind of combination and Linux has never hung… Fingers crossed!

Some compositions made with LMMS on Fedora 25 to 32, using some really nice plugins like Surge and NoiseMaker from the DISTRHO package, among others. All these compositions are libre music and are hosted on Jamendo.

If you need some help:

  • LinuxMusicians: a great place with skilled people willing to help
  • LinuxMAO: if you speak French, this is the place to be. A lot of resources related to various software.
  • LinuxAudio: another great website with various resources to help.

Tune up your sound with PulseEffects: Speakers

Wednesday 19th of August 2020 08:00:00 AM

Audio components for your computer don’t always produce the quality of sound you want. For instance your laptop speakers may be a bit “tinny” sounding, or a set of speakers for your desktop may be too boomy for your room. Or if you use a desk or headset microphone, you may find that recordings you make are not as high quality as you’d like. Enter PulseEffects!

PulseAudio and Gstreamer

The PulseAudio sound server comes with Fedora Workstation by default. It’s highly flexible and easily modified. PulseAudio can deal with many different inputs and outputs. For instance, it lets you switch between different inputs known as sources (such as microphones, or sound files) or outputs known as sinks (such as speakers or headphones).

By default, PulseAudio manages sound as streams, digitally sampled at a specific rate and bit depth with a defined number of channels — two for most stereo streams. It handles different sample rates for you, so you don’t have to know the details of the stream. PulseAudio simply deals with moving sound from one point to another.

The Gstreamer multimedia framework, on the other hand, provides myriad ways to modify audio and video data on their way through a pipeline. Gstreamer comes with plugins that allow it to attach to PulseAudio. This means that you can use Gstreamer to make a pipeline between your inputs and outputs to change audio streams.

PulseEffects manages this process with a nice, graphical front end. It lets you select and order different effects for your sound.

Installing PulseEffects

To install PulseEffects, use the Software tool and type pulseeffects to find the package. Fedora carries this software in its official repositories. So if you need to, you can switch the source from the Flatpak version to the Fedora one. Then click Install.

If you’re using a command line, you can use the sudo command with dnf to do the same thing:

$ sudo dnf install pulseeffects

Start playing some audio. This can come from your Videos or Rhythmbox media player, a website such as YouTube, a music streaming app such as Spotify, or something else. The best source to use is a full-fidelity digital audio source, like a CD or a FLAC file. (MP3 and online digital streams cut out some frequency information to reduce data size.) Ideally, it should be music that you are used to listening to in many places, and know well, like an album or playlist. Put it on repeat so you can use it while you tune your sound.

Then launch the program using either the Software tool’s Launch control, or your desktop’s application launcher. On Fedora Workstation, go to the Activities hotspot, use the Show Applications control to locate PulseEffects in the list, and click to launch. You should see your application in the list of active sound streams, with color bars that show an average frequency response:

PulseEffects initial screen with one sound stream from Videos

Notice all the sound modules available on the left. None are running by default when you first start PulseEffects. Any enabled modules are applied to your sound in order from top to bottom. You can use the up/down controls next to the module names to alter the order.

What does “better” mean?

Before we get started, realize that what constitutes “better” will usually be different based on many factors:

  • Specific hardware like the model of speakers or microphone
  • The environment the sound device is in (your room)
  • What your ears prefer

There is no magic cure for bad sound that works universally everywhere. So the examples you’ll see are based on some common problems. But you will need to use your ears to determine what’s best for your hardware, in the place you’re using it.

Making desktop speakers sound better

Often desktop speaker sets consist of a subwoofer and small satellite speakers. These tend to be both excessively “boomy,” meaning too much very low frequency sound, and “honky” or “boxy,” meaning too much of some middle (or “mid”) frequencies. To fix frequencies that are over- or under-represented, we can use an equalizer.

By default, all the effects are off in PulseEffects. Since you want to modify a sound output — your speakers — make sure the “speaker” icon at the upper left is selected. Locate the Equalizer control in PulseEffects and select it. Select the toggle switch at the top of the equalizer controls to turn it on.

The default equalizer appears as a 30-band graphic EQ. Older readers might be familiar with seeing physical equipment like this. Each band alters not just that specific frequency, but a fairly narrow band of frequencies around it. Think of it like a “dip” or “bump” in the frequency graph, depending on whether you lower or raise the slider.

PulseEffects default EQ, with a roll-off of extreme low frequencies and some reduction of unpleasant “boxiness” around 450Hz

If you’re not sure how to alter frequencies to get better sound, click the “tools” icon under the on/off toggle. Under Presets you can select different EQ settings to find something closest to what you like. Then you can modify those settings as you like. If things get out of control, under Settings use the Flat response control to zero out all the EQ.

Using the “tools” icon, you can also choose a different number of bands to simplify your choices. Using the “gear” icon above each band, you can choose different types of EQ, as well as the width and Q. Additional filter types like a low/high pass or low/high shelf are also available. Feel free to play with the EQ to see how it works, but be careful not to increase EQ levels too high if your speakers are above a moderate volume, because you can damage speakers that way.

Tips from a mix engineer

These guidelines may help you find the optimal sound for your situation.

  • It’s almost always better to reduce a problem frequency than to boost other things. If you boost too much, your music can start to distort.
  • To fix excessive boominess, apply a high pass filter somewhere between 30 and 50 Hz. You may also want to try a bell EQ reduction somewhere between 40 and 100 Hz.
  • If you want to fix a boxy sound (reminds you of a cardboard box), try a bell EQ to reduce some frequencies between 300 and 500 Hz.
  • To fix a honky or nasal sound, try reducing some frequencies between 650 and 900 Hz.
  • If guitar/keyboard solos or vocals seem a bit muffled, try a gentle boost centered somewhere between 1 and 2 kHz to make them a little more present.
  • If your speakers sound overly tinny, apply a high shelf reduction starting somewhere between 4 and 8 kHz — start at a high frequency and dial back to where it’s helpful. To fix a dull sound, apply a high shelf boost using the same approach.

Remember that a little EQ goes a long way. Try keeping your bell boosts or cuts between +4 and -4 on the sliders. The goal is not to make the music sound extreme, but to make slight corrections. Otherwise your ears will get tired more quickly, or in extreme cases you may even get headaches.

Watch the Input and Output meters at the bottom of every module. If you see a lot of green on one or both, the sound module is overloading at that stage. You’ll often find MP3 files, especially of modern music, have this issue. You may also see “warning” icons flashing over the check mark on enabled modules.

One way to cure this is to use the Limiter module at the beginning of the chain, and simply turn the input gain on the chain down about -3dB, leaving the limit at 0dB. This simply lowers the overall signal level without any attenuation. Then you can run other modules without worrying quite as much about distortion or overload in later stages.

Making laptop speakers sound better

While the above guidelines might be good for bigger speakers, laptops have the additional burden of being very small. Typically they lack bass response, because more and/or larger speakers, and more powerful magnets, are needed to produce those frequencies well.

However, you can correct this using the Bass Enhancer module in PulseEffects. You may want to move this module downward in the stack after your EQ for best results. Rather than turning the amount up excessively, try a modest change of +3 or +4dB, and then move the Scope frequency around until you find where you start to notice good results. Don’t be tempted to amplify too much because again, if it’s too high you could start to damage your laptop speakers over time.

Storing your work

First, set PulseEffects to run whenever you login. Use the “hamburger” tool at the top right to open up the General settings. Set Start Service at Login to enabled, and also enable the option to Process All Outputs. This does not mean all devices will get the same settings. Instead, it means that PulseEffects will run a chain for any sound output device you have connected. You can apply different chains to different devices.

Next, select the Presets button, and in the text box, type a name for your preset. One recommendation is to use the name of the device for which you’ve created a chain. Then click the “+” icon to add the preset. If you make changes, you can either use the “save” icon to save the changes to the selected preset, or click Apply to throw them away and re-apply the saved preset.

Finally, you can click the “cycle” icon if you want the preset to be applied every time the currently used sound output is detected. This is almost always a good idea. If you want to set up different presets for other outputs, first connect the output. Then make a new preset as described above, and select that to be auto-applied.

One final note: When you close the PulseEffects application, your active chain of effects does not stop. It will stay running unless you reset or stop the service. PulseEffects will consume a few percent of CPU time (depending on processor speed). On all but the oldest systems the load should not be noticeable. However, if you are sensitive to power use such as on a laptop, you may want to stop the service using this command:

$ pulseeffects -q

Conclusion

Remember that every environment and person’s hearing is different, so beware of the overly dogmatic. Finally, you can’t make terrible speakers into great ones. But you can usually make them sound not so terrible — and if you have decent speakers, you usually can make them sound quite good!

The PulseEffects author also has both a LiberaPay donation site and a Patreon account, so if you find the software useful, you might want to consider contributing.

In the next installment, you’ll learn how to set up better sound on a desktop or headset microphone, to improve your teleconference meetings or make better audio or video spoken content. Until then, enjoy your new sound possibilities.

Photo by Paul Esch-Laurent on Unsplash.

Contribute at the Fedora Kernel and GNOME test days

Tuesday 18th of August 2020 08:00:00 AM

Fedora test days are events where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed to Fedora before, this is a perfect way to get started.

There are two test events in the upcoming week. The first, running from Monday 17 August through Monday 24 August, tests kernel 5.8. The second, on Wednesday, August 19, focuses on testing GNOME. Come and test with us to make the upcoming Fedora 33 even better. Read more below on how to do it.

Kernel test week

The kernel team is working on final integration for kernel 5.8. This version was just recently released and will arrive soon in Fedora. As a result, the Fedora kernel and QA teams have organized a test week for Monday, August 17 through Monday, August 24. Refer to the wiki page for links to the test images you’ll need to participate. This document clearly outlines the steps.

GNOME test day

GNOME is the default desktop environment for Fedora Workstation and thus for many Fedora users. As part of a planned change, the GNOME megaupdate will land in Fedora and ship with Fedora 33. To ensure that everything works well, the Workstation working group and QA team will hold this test day on Wednesday, August 19. Refer to the wiki page for links and resources for the GNOME test day.

How do test days work?

A test day is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to download test materials (which include some large files) and then read and follow directions step by step.

Detailed information about both test days is on the wiki pages above. If you’re available on or around the days of the events, please do some testing and report your results.

Rendering Music notation with ABC

Monday 17th of August 2020 08:00:00 AM

ABC is a human-readable ASCII representation of music notation.  John Chambers posted a brief history of ABC notation and there is a newer history by Chris Walshaw.

Unlike MusicXML, which is designed for exchanging music between score editing and performance applications, ABC is designed to be edited directly by humans. I can type a score in ABC with vim much faster than by fiddling with a mouse in a GUI score editor. Unlike other formats, ABC works well with version control such as git. As with LaTeX, what you see is not what you get. But the notation is intuitive, the learning curve is pretty short, and the benefits are awesome.

While often touted as the standard for folk music, it works perfectly for Jazz lead sheets and does full scores as well. Some GUI score editors can import/export ABC notation. I used noteedit for a GUI editor until it was abandoned upstream, but was able to export as ABC. The result is reasonably human readable – letting me continue to edit in ABC notation, and serving as an example of a more complete score. When you are done with this lesson, you’ll be able to turn it into a PDF!

Ancient 4th Century Hymn

For a folk tune example, we will do a simple arrangement of a 4th century hymn, with a medieval era tune. “O Lux Beata Trinitas” is one of the twelve hymns which the Benedictine editors regarded as undoubtedly the work of St. Ambrose. It is cited as by St. Ambrose by Hincmar of Rheims in his treatise De unâ et non trinâ Deitate, 857. Hymnary.org

The hymn was still popular in the Gregorian Chant era. Here you can see the medieval notation and hear a performance as historically accurate as you will find today.

20th Century Interpretations

The medieval notation was updated in hymnals to more modern notation; this is our starting point. As reflected in those hymnals, there were no measures or bar lines in the medieval period.

Here are the basics of ABC:

  • Comments are lines beginning with ‘%’.
  • Notes beginning with “middle C” are entered as CDEFGABcdefgab.
  • Following a note with a number multiplies the timevalue by that number.
  • Notes that are next to each other are joined together whenever possible. This is the only way spaces are significant.
  • Parentheses are used to tie or slur notes together.
  • The tune begins with X: 1, where 1 is the tune number. There can be multiple tunes in a file. Folk tunes are often collected into a file. For instance, we could collect all 12 known works of St. Ambrose into a single file named ‘ambrose.abc’.
  • The title is given with T:
  • The composer or source is given with C:
  • The key (default is C major) is given with K:

Here is a simple transcription of our tune into ABC:

X: 1
T: O lux beata Trinitas
C: Plainsong, Mode VIII
K: D
% Fedora Magazine example
(AB) (AGFG) EFG (AB) (BA) A4
(AB) (AGFG) EFG (AB) (BA) A4
(AB) d (cd) B (AG) (AB) (AGF) F4
(GA) (AGFG) EFG (AB) (BA) A4

We are going to do our work in a terminal emulator. Enter the above with your favorite text editor (bonus points if that is cat) into a file named ‘lux.abc’.
Or download lux.abc with all the tunes for this lesson. To format and view this, we need ghostscript, xreader, and abcm2ps. You probably have ghostscript and xreader (or other PDF viewer) already installed on a desktop, but it doesn’t hurt to ask again.

$ sudo dnf install ghostscript xreader abcm2ps make

I had you install make so a simple makefile can simplify rendering:

.SUFFIXES: .abc .ps .pdf .mid

.abc:
	abcm2ps $*

.abc.pdf:
	abcm2ps $*
	ps2pdf Out.ps $*.pdf

.abc.mid:
	abc2midi $*.abc -o $*.mid

Enter or download that as a file named ‘Makefile’. Now format and view our tune:

$ make lux.pdf
$ xreader lux.pdf &

I had you run xreader in the background, so you can switch back to your terminal. Xreader will update the view whenever lux.pdf is updated. If you are just reading this article, you can also view the output here.

Adding Lyrics

Lyrics are entered with ‘w:’ under the tune line they go with. Words are hyphenated to show how the syllables go with the notes. Use ‘*’ to extend the last syllable across additional notes.

Append tune 2 to the lux.abc file; it is the same tune with lyrics:

X: 2
T: O lux beata Trinitas
C: Words: St. Ambrose 4th century
C: Plainsong, Mode VIII
K: D
% Fedora Magazine example
(AB) (AGFG) EFG (AB) (BA) A4
w: O* lux*** be-a-ta trin-* ni-* tas,
(AB) (AGFG) EFG (AB) (BA) A4
w: et* prin-*** ci-pa-lis U-* ni-* tas,
(AB) d (cd) B (AG) (AB) (AGF) F4
w: i-* am sol* re-ce-* dit* i-*gne-us,
(GA) (AGFG) EFG (AB) (BA) A4
w: in-* fun-*** de lu-men cor-*di-* bus.

Now ‘make lux.pdf’ and see the results in your xreader window. Both tunes are rendered to the PDF.

Adding Measures

So, historical authenticity is all very fine, but I want to make a modern version. The first step for modern ears is to divide the tune into equal sized measures. My ear says that 7/8 is an excellent time signature for this tune.

  • M: 7/8 specifies a default meter of 7/8
  • L: 1/8 specifies a default note length of 1/8 of a whole note. This was already the default, but now it is documented.
  • Q: 1/4=80 specifies a suggested speed: 80 quarter notes per minute.
  • Measures are separated by bar lines represented by ‘|’.
  • There will be multiple verses, so ‘:|’ adds a repeat bar line.
  • A final bar line is ‘||’, but we don’t use it for this example.
  • It is good practice for debugging to divide the lyrics into measures as well, and not rely on automatic distribution.
  • Note that additional spaces can be added for readability.
  • Lining up the bar lines is not required, but can make it more readable.

Here is tune 3 with bar lines (appended to lux.abc):

X: 3
T: O lux beata Trinitas (3)
C: Words: St. Ambrose 4th century
C: Plainsong, Mode VIII
M: 7/8
L: 1/8
Q: 1/4=80
K: D
% Fedora Magazine example
z(AB) (AGFG) | EFG (AB) (BA) | A3-A4 |
w: O* lux***|be-a-ta trin-* ni-*| tas, |
z(AB) (AGFG) | EFG (AB) (BA)| A3-A4 |
w: et* prin-***|ci-pa-lis U-* ni-*| tas, |
(AB) d (cd) B (A |G) (AB) (AGF) F-| F7|
w:i-* am sol* re-ce-|* dit* i-*gne-us,| * |
z(GA) (AGFG) | EFG (AB) (BA) | A3-A4 :|
w:in-* fun-***| de lu-men cor-*di-*|bus. |

Bass Line, Chords, and Verses

Now we begin the real departure in our interpretation. First, chords are added to assist in improvising from a “lead sheet”. Then we add a suggested bass line.

  • V:1 and V:2 switch between voices.
  • Chords are entered in double quotes in the tune line, and are rendered above the following note.
  • Each comma after a note lowers it by an octave.
  • C: can also be used to document arranger and license.
  • Additional verses are added as additional lyric lines under a tune line.
  • Verse numbers can be added by using ‘~’ to join them to the next word with a non-break space. Otherwise they would be counted as words.
  • %%MIDI lines are magic comments that are used in the next section!

Here is our final tune for this lesson:

X: 4
T: O lux beata Trinitas (4)
C: Words: St. Ambrose 4th century
C: Plainsong, Mode VIII
C: Arranged: Stuart D. Gathman
C: Copyright 2012: Creative Commons Attribution-ShareAlike 2.0
M: 7/8
L: 1/8
Q: 1/4=80
K: D
%%MIDI gchord c3c4
%%MIDI program 75
V:1
"D"z(AB) (AGFG) | "A7"EFG (AB) (BA) | "Dsus"A3-"D"A4 |
w:i.~O* lux*** |be-a-ta trin-* ni-*| tas, |
w:ii.~Te* ma-***|ne lau-dum car-*mi-*| ne, |
w:iii.~De-* o***|Pa-tri sit glo-*ri-*| a, |
V:2
D,3 A,2 A,2 | E,3 A,2 A,2 | D,3 A,2 A,2 |
V:1
"D"z(AB) (AGFG) | "A7"EFG (AB) (BA) | "Dsus"A3-"D"A4 |
w: et* prin-***|ci-pa-lis U-* ni-*| tas, |
w: te* de-*** |pre-ce-mur ves-*pe-*|re: |
w: ei-* us-*** |que so-li Fi-*li-*| o, |
V:2
D,3 A,2 A,2 | E,3 A,2 A,2 | D,3 A,2 A,2 |
V:1
"G"(AB) d (cd) B (A |"Em"G) (AB) (AGF) F-|"D"F7 |
w: i-* am sol* re-ce-|* dit* i-*gne-us,| * |
w: te* nos-tra* sup-plex|* glo-*ri-**a | * |
w: cum* Spi-ri-*tu Pa-|* ra-*cli-**to, | * |
V:2
G,3 B,2 B,2 | D,3 B,2 B,2 | D,3 A,2 A,2 |
V:1
"A7"z(GA) (AGFG) | "A7"EFG (AB) (BA) | "Dsus"A3-"D"A4 :|
w: in-* fun-*** | de lu-men cor-*di-*| bus. |
w: per* cunc-*** | ta lau-det sae-*cu-*| la. |
w: et* nunc,*** | et in per-pe-*tu-* | um |
V:2
E,3 A,2 A,2 | C,3 A,2 A,2 | D,3 A,2 A,2 :|

Rendering to MIDI

Rendering that makes a nice lead sheet! What does it sound like? You will need the abc to MIDI translator and a MIDI renderer. Fedora comes with a number of MIDI synthesizer and rendering options, but we will use timidity – a simple command line utility that can render to audio files or play on your speakers.

Install abcMIDI and timidity:

$ sudo dnf install abcMIDI timidity++

If you have been following the examples, you have 4 tunes in lux.abc. Render them to midi with the abc2midi utility:

$ abc2midi lux.abc

This creates four midi files, one for each tune: lux1.mid .. lux4.mid. Use timidity to play each file to your speakers:

$ timidity lux1.mid
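Timidity can also render straight to an audio file instead of playing through the speakers, which is handy for sharing a result; a small sketch (Ogg output assumes your timidity build includes Vorbis support):

$ timidity -Ow -o lux4.wav lux4.mid
$ timidity -Ov -o lux4.ogg lux4.mid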

When you play ‘lux4.mid’, you will hear what the ‘%%MIDI’ directives did. You can read more about abc2midi and its directives here. You can also hear me singing and playing piano from the lead sheet and totally butchering the Latin.

There is a lot more to ABC, but this has hopefully been a fun introduction!  There are more examples in /usr/share/doc/abcm2ps/examples, and check out folk tunes from many cultures.

Come test a new release of pipenv, the Python development tool

Friday 14th of August 2020 08:00:00 AM

Pipenv is a tool that helps Python developers maintain isolated virtual environments with a specifically defined set of dependencies to achieve reproducible development and deployment environments. It is similar to tools for other programming languages, such as bundler, composer, npm, cargo, yarn, etc.

A new version of pipenv, 2020.6.2, has been released recently. It is now available in Fedora 33 and Rawhide. For older Fedoras, the maintainers decided to package it in COPR to be tested first. So come try it out before they push it into stable Fedora versions. The new version doesn’t bring any fancy new features, but after two years of development it fixes a lot of problems and does many things differently under the hood. What worked for you previously should continue to work, but might behave slightly differently.

How to get it

If you are already running Fedora 33 or rawhide, run $ sudo dnf upgrade pipenv or $ sudo dnf install pipenv and you’ll get the new version.

On Fedora 31 or Fedora 32, you’ll need to use a COPR repository until the tested package is updated in the official repositories. To enable the repository, run:

$ sudo dnf copr enable @python/pipenv

Then to upgrade pipenv to the new version, run:

$ sudo dnf upgrade pipenv

Or, if you haven’t installed it yet, install it via:

$ sudo dnf install pipenv

In case you ever need to roll back to the officially maintained version, you can run:

$ sudo dnf copr disable @python/pipenv
$ sudo dnf distro-sync pipenv

COPR is not officially supported by Fedora infrastructure. Use packages at your own risk.

How to use it

If you already have projects managed by the older version of pipenv, you should be able to use the new version in its place without issues. Let us know if something breaks.

If you are not yet familiar with pipenv or want to start a new project, here is a quick guide:

Create a working directory:

$ mkdir new-project && cd new-project

Initialize pipenv with Python 3:

$ pipenv --three

Install the packages you want, e.g.:

$ pipenv install six

Generate a Pipfile.lock file:

$ pipenv lock

Now you can commit the created Pipfile and Pipfile.lock files into your version control system (e.g. git) and others can use this command in the cloned repository to get the same environment:

$ pipenv install
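Once the environment exists, commands run inside it with pipenv run, or you can activate it as a shell; for example, using the six package installed above:

$ pipenv run python -c "import six; print(six.__version__)"

$ pipenv shell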

See pipenv’s documentation for more examples.

How to report problems

If you encounter any problems with the new pipenv version, please report any issues in Fedora’s Bugzilla. The maintainers of the pipenv package in official Fedora repositories and in the copr repository are the same. Please indicate in the text that the report is regarding this new version.

Create a wifi hotspot with Raspberry Pi 3 and Fedora

Wednesday 12th of August 2020 08:00:00 AM

If you’re already running Fedora on your Pi, you’re already most of the way to a wifi hotspot. A Raspberry Pi has a wifi interface that’s usually set up to join an existing wifi network. This interface can be reconfigured to provide a new wifi network. If a room has a good network cable and a bad wifi signal (a brick wall, foil-backed plasterboard, and even a window with a metal oxide coating are all obstacles), fix it with your Pi.

This article describes the procedure for setting up the hotspot. It was tested on third generation Pis – a Model B v1.2, and a Model B+ (the older 2 and the new 4 weren’t tested). These are the credit-card size Pis that have been around a few years.

This article also delves a little way into the network concepts behind the scenes. For instance, “hotspot” is the term that’s caught on in public places around the world, but it’s more accurate to use the term WLAN AP (Wireless Local Area Network Access Point). In fact, if you want to annoy your friendly neighborhood network administrator, call a hotspot a “wifi router”. The inaccuracy will make their eyes cross.

A few nmcli commands configure the Raspberry Pi as a wifi AP. The nmcli command-line tool controls the NetworkManager daemon. It’s not the only network configuration system available. More complex solutions are available for the adventurous. Check out the hostapd RPM package and the OpenWRT distro. Have a look at Internet connection sharing with NetworkManager for more ideas.

A dive into network administration

The hotspot is a routed AP (Access Point). It sits between two networks, the current wired network and its new wireless network, and takes care of the post-office-style forwarding of IP packets between them.

Routing and interfaces

The wireless interface on the Raspberry Pi is named wlan0 and the wired one is eth0. The new wireless network uses one range of IP addresses and the current wired network uses another. In this example, the current network range is 192.168.0.0/24 and the new network range is 10.42.0.0/24. If these numbers make no sense, that’s OK. You can carry on without getting to grips with IP subnets and netmasks. The Raspberry Pi’s two interfaces have IP addresses from these ranges.

Packets are sent to local computers or remote destinations based on their IP addresses. This is routing work, and it’s where the routed part of routed AP name comes from. If you’d like to build a more complex router with DHCP and DNS, pick up some tips from the article How to use Fedora Server to create a router / gateway.
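Once the hotspot described later in this article is up, you can see both of these subnets directly on the Pi with the standard ip tools; the addresses shown will of course match your own networks rather than the examples above.

$ ip -4 addr show dev eth0
$ ip -4 addr show dev wlan0

# The routing table lists both directly connected subnets
$ ip route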

It’s not a bridged AP

Network bridging is another way of extending a network, but it’s not how this Pi is set up. This routed AP is not a bridged AP. To understand the difference between routing and bridging, you have to know a little about the networking layers of the OSI network model. A good place to start is the beginner’s guide to network troubleshooting in Linux. Here’s the short answer.

  • layer 3, network ← Yes, our routed AP is here.
  • layer 2, data link ← No, it’s not a bridged AP.
  • layer 1, physical ← Radio transmission is covered here.

A bridge works at a lower layer of the network stack – it uses ethernet MAC addresses to send data. If this was a bridged AP, it wouldn’t have two sets of IP addresses; the new wireless network and the current wired network would use the same IP subnet.

IP masquerading

You won’t find an IP address starting with 10. anywhere on the Internet. It’s a private address, not a public address. To get an IP packet routed out of the wifi network and back in again, packet addresses have to be changed. IP masquerading is a way of making this routing work. The masquerade name is used because the packets’ real addresses are hidden: the wired network doesn’t see any addresses from the wireless network.

IP masquerading is set up automatically by NetworkManager. NetworkManager adds nftables rules to handle IP masquerading.
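If you want to peek at those rules once the hotspot connection is active, nft can dump the whole ruleset; this is only for inspection, and nothing here needs to be edited by hand.

$ sudo nft list ruleset | grep -i -B3 -A3 masquerade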

The Pi’s network stack

A stack of network hardware and software makes wifi work.

  • Network hardware
  • Kernel space software
  • User space software

You can see the network hardware. The Raspberry Pi has two main hardware components – a tiny antenna and Broadcom wifi chip. MagPi magazine has some great photos.

Kernel software provides the plumbing. There’s no need to work on these directly – it’s all good to go in the Fedora distribution.

User space software customizes the system. It’s full of utilities that either help the user, talk to the kernel, or connect other utilities together. For instance, the firewall-cmd tool talks to the firewalld service, firewalld talks to the nftables tool, and nftables talks to the netfilter framework in the kernel. The nmcli commands talk to NetworkManager. And NetworkManager talks to pretty much everything.

Create the AP

That’s enough theory — let’s get practical. Fire up your Raspberry Pi running Fedora and run these commands.

Install software

Nearly all the required software is included with the Fedora Minimal image. The only thing missing is the dnsmasq package. This handles the DHCP and IP address part of the new wifi network, automatically. Run this command using sudo:

$ sudo dnf install dnsmasq

Create a new NetworkManager connection

NetworkManager sets up one network connection automatically, Wired connection 1. Use the nmcli tool to tell NetworkManager how to add a wifi connection. NetworkManager saves these settings, and a bunch more, in a new config file.

The new configuration file is created in the directory /etc/sysconfig/network-scripts/. At first, it’s empty; the image has no configuration files for network interfaces. If you want to find out more about how NetworkManager uses the network-scripts directory, the gory details are in the nm-settings-ifcfg-rh man page.

[nick@raspi ~]$ ls /etc/sysconfig/network-scripts/
[nick@raspi ~]$

The first nmcli command, to create a network connection, looks like this. There’s more to do — the Pi won’t work as a hotspot after running this.

nmcli con add \
    type wifi \
    ifname wlan0 \
    con-name 'raspi hotspot' \
    autoconnect yes \
    ssid 'raspi wifi'

The following commands complete several more steps:

  • Create a new connection.
  • List the connections.
  • Take another look at the network-scripts folder. NetworkManager added a config file.
  • List available APs to connect to.

This requires running several commands as root using sudo:

$ sudo nmcli con add type wifi ifname wlan0 con-name 'raspi hotspot' autoconnect yes ssid 'raspi wifi'
Connection 'raspi wifi' (13ea67a7-a8e6-480c-8a46-3171d9f96554) successfully added.

$ sudo nmcli connection show
NAME                UUID                                  TYPE      DEVICE
Wired connection 1  59b7f1b5-04e1-3ad8-bde8-386a97e5195d  ethernet  eth0
raspi wifi          13ea67a7-a8e6-480c-8a46-3171d9f96554  wifi      wlan0

$ ls /etc/sysconfig/network-scripts/
ifcfg-raspi_wifi

$ sudo nmcli device wifi list
IN-USE  BSSID              SSID          MODE   CHAN  RATE         SIGNAL  BARS  SECURITY
        01:0B:03:04:C6:50  APrivateAP    Infra  6     195 Mbit/s   52      ▂▄__  WPA2
        02:B3:54:05:C8:51  SomePublicAP  Infra  6     195 Mbit/s   52      ▂▄__  --

You can remove the new config and start again with this command:

$ sudo nmcli con delete 'raspi hotspot'

Change the connection mode

A NetworkManager connection has many configuration settings. You can see these with the command nmcli con show ‘raspi hotspot’. Some of these settings start with the label 802-11-wireless. This is to do with industry standards that make wifi work – the IEEE organization specified many protocols for wifi, named 802.11. This new wifi connection is in infrastructure mode, ready to connect to a wifi access point. The Pi isn’t supposed to connect to another AP; it’s supposed to be the AP that others connect to.

The next command changes the mode from infrastructure to AP. It also sets a few other wireless properties. The bg value tells NetworkManager to follow two older IEEE standards, 802.11b and 802.11g; in practice it configures the radio to use the 2.4 GHz frequency band, not the 5 GHz band. Setting ipv4.method to shared means this connection will be shared with others.

  • Change the connection to a hotspot by changing the mode to ap.
sudo nmcli connection \
    modify "raspi hotspot" \
    802-11-wireless.mode ap \
    802-11-wireless.band bg \
    ipv4.method shared
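
To double-check that the modify command took effect, you can ask nmcli for just those three properties. The field names below follow nmcli’s setting.property convention; the exact output formatting may differ slightly between versions:

$ nmcli -f 802-11-wireless.mode,802-11-wireless.band,ipv4.method con show 'raspi hotspot'
802-11-wireless.mode:                   ap
802-11-wireless.band:                   bg
ipv4.method:                            shared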

The connection starts automatically. NetworkManager gives the wlan0 interface the IP address 10.42.0.1, and dnsmasq hands out addresses on that network to clients. The manual commands to start and stop the hotspot are:
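
You can verify the shared address once the connection is up; 10.42.0.1/24 is NetworkManager’s usual choice for the first shared connection, so your (trimmed) output should look roughly like this:

$ ip -4 addr show dev wlan0
3: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ...
    inet 10.42.0.1/24 brd 10.42.0.255 scope global noprefixroute wlan0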

$ sudo nmcli con up "raspi hotspot"
$ sudo nmcli con down "raspi hotspot"

Connect a device

The next steps are to:

  • Watch the log.
  • Connect a smartphone.
  • When you’ve seen enough, type ^C ([control][c]) to stop watching the log.
$ journalctl --follow
-- Logs begin at Wed 2020-04-01 18:23:45 BST. --
...

Use a wifi-enabled device, like your phone. The phone can find the new raspi wifi network.

Messages about an associating client appear in the activity log:

Jun 10 18:08:05 raspi wpa_supplicant[662]: wlan0: AP-STA-CONNECTED 94:b0:1f:2e:d2:bd
Jun 10 18:08:05 raspi wpa_supplicant[662]: wlan0: CTRL-EVENT-SUBNET-STATUS-UPDATE status=0
Jun 10 18:08:05 raspi dnsmasq-dhcp[713]: DHCPREQUEST(wlan0) 10.42.0.125 94:b0:1f:2e:d2:bd
Jun 10 18:08:05 raspi dnsmasq-dhcp[713]: DHCPACK(wlan0) 10.42.0.125 94:b0:1f:2e:d2:bd nick

Examine the firewall

A new security zone named nm-shared has appeared, and it is blocking some traffic from the wifi clients.

$ sudo firewall-cmd --get-active-zones
[sudo] password for nick:
nm-shared
  interfaces: wlan0
public
  interfaces: eth0

The new zone is set up to accept everything because the target is ACCEPT. Clients are able to use web, mail and SSH to get to the Internet.

$ sudo firewall-cmd --zone=nm-shared --list-all
nm-shared (active)
  target: ACCEPT
  icmp-block-inversion: no
  interfaces: wlan0
  sources:
  services: dhcp dns ssh
  ports:
  protocols: icmp ipv6-icmp
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
        rule priority="32767" reject

This big list of config settings takes a little examination.

The first line, the innocent-until-proven-guilty option target: ACCEPT, says all traffic is allowed through unless a rule says otherwise. It’s the same as saying these types of traffic are all OK:

  • inbound packets – requests sent from wifi clients to the Raspberry Pi
  • forwarded packets – requests from wifi clients to the Internet
  • outbound packets – requests sent by the Pi to wifi clients

However, there’s a hidden gotcha: requests from wifi clients (like your workstation) to the Raspberry Pi may be rejected. The final line, the mysterious entry in the rich rules section, is a firewalld rich rule: a catch-all that rejects whatever the rest of the zone hasn’t explicitly allowed. That rule stops you from connecting from your workstation to your Pi with a command like ssh 10.42.0.1. It only affects traffic sent to the Raspberry Pi, not traffic forwarded to the Internet, so browsing the web still works fine.

If an inbound packet matches something in the services and protocols lists, it’s allowed through. NetworkManager automatically adds ICMP, DHCP and DNS (Internet infrastructure services and protocols). An SSH packet doesn’t match, gets as far as the post-processing stage, and is rejected: priority="32767" translates as “do this after all the other processing is done.”
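
You don’t have to read the whole zone dump to find that rule; firewalld can list just the rich rules for the zone:

$ sudo firewall-cmd --zone=nm-shared --list-rich-rules
rule priority="32767" reject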

If you want to know what’s happening behind the scenes, that rich rule creates an nftables rule. The nftables rule looks like this.

$ sudo nft list chain inet firewalld filter_IN_nm-shared_post
table inet firewalld {
        chain filter_IN_nm-shared_post {
                reject
        }
}

Fix SSH login

Connect from your workstation to the Raspberry Pi using SSH. This won’t work because of the rich rule: a protocol that’s not on the list gets instantly rejected.

Check that SSH is blocked:

$ ssh 10.42.0.1
ssh: connect to host 10.42.0.1 port 22: Connection refused

Next, add SSH to the list of allowed services. If you don’t remember which services are defined, list them all with firewall-cmd --get-services. For SSH, use the option --add-service ssh (or --remove-service ssh to take it out again). Don’t forget to make the change permanent.

$ sudo firewall-cmd --add-service ssh --permanent --zone=nm-shared
success
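
One detail worth adding here: a rule added with --permanent is written to the on-disk configuration but does not change the running firewall by itself. Either repeat the command without --permanent, or reload so the permanent configuration becomes the running one:

$ sudo firewall-cmd --reload
success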

Now test with SSH again.

$ ssh 10.42.0.1
The authenticity of host '10.42.0.1 (10.42.0.1)' can't be established.
ECDSA key fingerprint is SHA256:dDdgJpDSMNKR5h0cnpiegyFGAwGD24Dgjg82/NUC3Bc.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '10.42.0.1' (ECDSA) to the list of known hosts.
Last login: Tue Jun 9 18:58:36 2020 from 10.0.1.35
nick@10.42.0.1's password:

SSH access is no longer blocked.

Test as a headless computer

The Raspberry Pi runs fine as a headless computer. From here on, you can use SSH to work on your Pi.

  • Power off.
  • Remove keyboard and video monitor.
  • Power on.
  • Wait a couple of minutes.
  • Connect from your workstation to the Raspberry Pi using SSH. Use either the wired interface or the wireless one; both work.
Increase security with WPA-PSK

The WPA-PSK (Wi-Fi Protected Access with Pre-Shared Key) system is designed for home users and small offices. It protects the network with a password. Use nmcli again to add WPA-PSK:

$ sudo nmcli con modify "raspi hotspot" wifi-sec.key-mgmt wpa-psk
$ sudo nmcli con modify "raspi hotspot" wifi-sec.psk "hotspot-password"
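
Note that nmcli con modify only changes the stored profile; a hotspot that is already running keeps its old settings until the connection is cycled. Reusing the commands from earlier:

$ sudo nmcli con down "raspi hotspot"
$ sudo nmcli con up "raspi hotspot"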

Troubleshooting

The bad news is that there is no complete troubleshooting guide here: so many things can go wrong that there’s no way to cover them all. Here are a couple of general observations instead.

Troubleshooting a network stack is tricky. If one component goes wrong, it may all go wrong. And making changes like reloading firewall rules can upset services like NetworkManager and sshd. You know you’re in the weeds when you find yourself running nftables commands like nft list ruleset and firewalld commands like firewall-cmd --set-log-denied=all.

Play with your new platform

Add value to your new AP. Since you’re running a Pi, there are many hardware add-ons. Since it’s running Fedora, you have thousands of packages available. Try turning it into a mini-NAS, or adding battery back-up, or perhaps a music player.

Photo by Uriel SC on Unsplash.

TCP window scaling, timestamps and SACK

Tuesday 11th of August 2020 10:00:00 AM

The Linux TCP stack has a myriad of sysctl knobs that allow its behavior to be changed. This includes the amount of memory that can be used for receive or transmit operations, the maximum number of sockets, and optional features and protocol extensions.

There are multiple articles that recommend disabling TCP extensions such as timestamps or selective acknowledgments (SACK) for various “performance tuning” or “security” reasons.

This article provides background on what these extensions do, why they are enabled by default, how they relate to one another and why it is normally a bad idea to turn them off.
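
Before touching anything, you can check how these extensions are currently configured on your own system. These are the standard sysctl names on Linux; a value of 1 means the extension is enabled (tcp_timestamps also accepts 2, as discussed later):

$ sysctl net.ipv4.tcp_window_scaling net.ipv4.tcp_timestamps net.ipv4.tcp_sack
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_sack = 1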

TCP Window scaling

The data transmission rate that TCP can sustain is limited by several factors. Some of these are:

  • Round trip time (RTT).  This is the time it takes for a packet to get to the destination and a reply to come back. Lower is better.
  • Lowest link speed of the network paths involved.
  • Frequency of packet loss.
  • The speed at which new data can be made available for transmission.
    For example, the CPU needs to be able to pass data to the network adapter fast enough. If the CPU needs to encrypt the data first, the adapter might have to wait for new data. In similar fashion, disk storage can be a bottleneck if it can’t read the data fast enough.
  • The maximum possible size of the TCP receive window. The receive window determines how much data (in bytes) TCP can transmit before it has to wait for the receiver to report reception of that data. This is announced by the receiver. The receiver constantly updates this value as it reads and acknowledges reception of the incoming data. The receive window’s current value is contained in the TCP header that is part of every segment sent by TCP. The sender is thus aware of the current receive window whenever it receives an acknowledgment from the peer. This means that the higher the round-trip time, the longer it takes for the sender to get receive window updates.

Classic TCP is limited to at most 64 kilobytes of unacknowledged (in-flight) data. This is not even close to what is needed to sustain a decent data rate in most networking scenarios. Let us look at some examples.

Theoretical data rate

With a round-trip-time of 100 milliseconds, TCP can transfer at most 640 kilobytes per second. With a 1 second delay, the maximum theoretical data rate drops down to only 64 kilobytes per second.

This is because of the receive window. Once 64 kilobytes of data have been sent, the receive window is already full. The sender must wait until the peer informs it that at least some of the data has been read by the application.

The first segment sent reduces the TCP window by the size of that segment, and it takes one round trip before an updated receive window value becomes available. When updates arrive with a 1 second delay, this limits the connection to 64 kilobytes per second even if the link has plenty of bandwidth available.
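
The arithmetic behind those numbers is simply the window size divided by the round-trip time. Shell arithmetic is enough to reproduce them; here the window is 64 KiB and the results are bytes per second:

$ echo $(( 64 * 1024 * 10 ))    # 64 KiB window, 0.1 s RTT: ten windows per second
655360
$ echo $(( 64 * 1024 ))         # 64 KiB window, 1 s RTT: one window per second
65536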

In order to fully utilize a fast network with several milliseconds of delay, a window size larger than what classic TCP supports is a must. The ’64 kilobyte limit’ is an artifact of the protocol’s specification: the TCP header reserves only 16 bits for the receive window size. This allows receive windows of up to 64 kilobytes. When the TCP protocol was originally designed, this size was not seen as a limit.

Unfortunately, it’s not possible to just change the TCP header to support a larger maximum window value. Doing so would mean all implementations of TCP would have to be updated simultaneously or they wouldn’t understand one another anymore. To solve this, the interpretation of the receive window value is changed instead.

The ‘window scaling option’ makes this possible while keeping compatibility with existing implementations.

TCP Options: Backwards-compatible protocol extensions

TCP supports optional extensions. They make it possible to enhance the protocol with new features without the need to update all implementations at once. When a TCP initiator connects to the peer, it also sends a list of supported extensions. All extensions follow the same format: a unique option number followed by the length of the option and the option data itself.

The TCP responder checks all the option numbers contained in the connection request. If it does not understand an option number, it skips ‘length’ bytes of data and checks the next option number. The responder omits the options it did not understand from the reply. This allows both the sender and receiver to learn the common set of supported options.

With window scaling, the option data always consists of a single number.

The window scaling option

    Window Scale option (WSopt): Kind: 3, Length: 3
    +---------+---------+---------+
    | Kind=3  |Length=3 |shift.cnt|
    +---------+---------+---------+
         1         1         1

The window scaling option tells the peer that the receive window value found in the TCP header should be scaled by the given number to get the real size.

For example, a TCP initiator that announces a window scaling factor of 7 instructs the responder that any future packet carrying a receive window value of 512 really announces a window of 65536 bytes. This is an increase by a factor of 128, and it allows a maximum TCP window of 8 megabytes.
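
The scale factor is applied as a plain bit shift, so the factor-of-128 claim is easy to check:

$ echo $(( 512 << 7 ))    # receive window value 512, shift count 7
65536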

A TCP responder that does not understand this option ignores it. The TCP packet sent in reply to the connection request (the syn-ack) then does not contain the window scale option. In this case both sides can only use a 64k window size. Fortunately, almost every TCP stack supports and enables this option by default, including Linux.

The responder includes its own desired scaling factor; the two peers can use different numbers. It’s also legitimate to announce a scaling factor of 0. This means the peer should treat the receive window value it receives verbatim, but it still allows scaled values in the reply direction, so the recipient can then use a larger receive window.

Unlike SACK or TCP timestamps, the window scaling option only appears in the first two packets of a TCP connection; it cannot be changed afterwards. It is also not possible to determine the scaling factor by looking at a packet capture of a connection that does not contain the initial three-way handshake.

The largest supported scaling factor is 14. This allows TCP window sizes of up to one gigabyte.

Window scaling downsides

Window scaling can cause data corruption in very special cases. Before you disable the option, know that such corruption is impossible under normal circumstances, and that there is also a mechanism in place that prevents it. Unfortunately, some people disable that mechanism without realizing its relationship with window scaling. First, let’s have a look at the actual problem that needs to be addressed. Imagine the following sequence of events:

  1. The sender transmits segments: s_1, s_2, s_3, … s_n
  2. The receiver sees: s_1, s_3, … s_n and sends an acknowledgment for s_1.
  3. The sender considers s_2 lost and sends it a second time. It also sends new data contained in segment s_n+1.
  4. The receiver then sees: s_2, s_n+1, s_2: the packet s_2 is received twice.

This can happen for example when a sender triggers re-transmission too early. Such erroneous re-transmits are never a problem in normal cases, even with window scaling. The receiver will just discard the duplicate.

Old data to new data

The TCP sequence number counts bytes and wraps after at most 4 gigabytes: once it grows beyond that, it wraps back to 0 and then increases again. This is not a problem in itself, but if it occurs fast enough, the above scenario can create an ambiguity.

If a wrap-around occurs at the right moment, the sequence number s_2 (the re-transmitted packet) can already be larger than s_n+1. Thus, in the last step (4), the receiver may interpret this as: s_2, s_n+1, s_n+m, i.e. it could view the ‘old’ packet s_2 as containing new data.

Normally, this won’t happen because a ‘wrap around’ occurs only every couple of seconds or minutes, even on high-bandwidth links. The interval between the original and an unneeded re-transmit will be a lot smaller.

For example, with a transmit speed of 50 megabytes per second, a duplicate needs to arrive more than one minute late for this to become a problem. The sequence numbers do not wrap fast enough for small delays to induce this problem.
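
The one-minute figure follows directly from the size of the 32-bit sequence number space divided by the transfer rate:

$ echo $(( (1 << 32) / (50 * 1024 * 1024) ))    # seconds of data before the sequence number wraps at 50 MiB/s
81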

Once TCP approaches ‘gigabyte per second’ throughput rates, the sequence numbers can wrap so fast that even a delay of only a few milliseconds can create duplicates that TCP cannot detect anymore. By solving the problem of the too-small receive window, TCP can now be used for network speeds that were impossible before, and that creates a new, albeit rare, problem. To safely use gigabyte-per-second speeds in environments with very low RTT, receivers must be able to detect such old duplicates without relying on the sequence number alone.

TCP time stamps – a best-before date

In the most simple terms, TCP timestamps just add a time stamp to the packets to resolve the ambiguity caused by very fast sequence number wrap around. If a segment appears to contain new data, but its timestamp is older than the last in-window packet, then the sequence number has wrapped and the “new” packet is actually an older duplicate. This resolves the ambiguity of re-transmits even for extreme corner cases.

But this extension allows for more than just detection of old packets. The other major feature made possible by TCP timestamps is more precise round-trip time measurement (RTTm).

A need for precise round-trip-time estimation

When both peers support timestamps,  every TCP segment carries two additional numbers: a timestamp value and a timestamp echo.

TCP Timestamp option (TSopt): Kind: 8, Length: 10
+-------+----+----------------+-----------------+
|Kind=8 | 10 |TS Value (TSval)|EchoReply (TSecr)|
+-------+----+----------------+-----------------+
    1      1         4                4

An accurate RTT estimate is crucial for TCP performance. TCP automatically re-sends data that was not acknowledged. Re-transmission is triggered by a timer: If it expires, TCP considers one or more packets that it has not yet received an acknowledgment for to be lost. They are then sent again.

But “has not been acknowledged” does not mean the segment was lost. It is also possible that the receiver has not sent an acknowledgment yet or that the acknowledgment is still in flight. This creates a dilemma: TCP must wait long enough for such slight delays not to matter, but it can’t wait too long either.

Low versus high network delay

In networks with a high delay, if the timer fires too fast, TCP frequently wastes time and bandwidth with unneeded re-sends.

In networks with a low delay, however, waiting too long causes reduced throughput when a real packet loss occurs. Therefore, the timer should expire sooner in low-delay networks than in those with a high delay. The TCP retransmit timeout therefore cannot be a fixed constant value. It needs to adapt based on the delay the connection experiences in the network.

Round-trip time measurement

TCP picks a retransmit timeout that is based on the expected round-trip time (RTT). The RTT is not known in advance. RTT is estimated by measuring the delta between the time a segment is sent and the time TCP receives an acknowledgment for the data carried by that segment.

This is complicated by several factors.

  • For performance reasons, TCP does not generate a new acknowledgment for every packet it receives. It waits for a very small amount of time: if more segments arrive, their reception can be acknowledged with a single ACK packet. This is called a “cumulative ACK”.
  • The round-trip time is not constant. This is because of a myriad of factors. For example, a client might be a mobile phone switching to different base stations as it’s moved around. It’s also possible that packet switching takes longer when link or CPU utilization increases.
  • A packet that had to be re-sent must be ignored during the computation. This is because the sender cannot tell if the ACK for the re-transmitted segment is acknowledging the original transmission (that arrived after all) or the re-transmission.

This last point is significant: when TCP is busy recovering from a loss, it may only receive ACKs for re-transmitted segments. It then can’t measure (update) the RTT during this recovery phase. As a consequence it can’t adjust the re-transmission timeout, which then keeps growing exponentially. That’s a pretty specific case (it assumes that other mechanisms such as fast retransmit or SACK did not help). Nevertheless, with TCP timestamps, RTT evaluation is done even in this case.

If the extension is used, the peer reads the timestamp value from the TCP segment’s extension space and stores it locally. It then places this value in all the segments it sends back as the “timestamp echo”.

Therefore the option carries two timestamps: the sender’s own timestamp and the most recent timestamp it received from the peer. The “echo timestamp” is used by the original sender to compute the RTT: it’s the delta between its current timestamp clock and the value reflected in the “timestamp echo”.

Other timestamp uses

TCP timestamps even have other uses beyond PAWS (Protection Against Wrapped Sequence numbers, the duplicate detection described above) and RTT measurements. For example, it becomes possible to detect whether a retransmission was unnecessary: if the acknowledgment carries an older timestamp echo, the acknowledgment was for the initial packet, not the re-transmitted one.

Another, more obscure use case for TCP timestamps is related to the TCP syn cookie feature.

TCP connection establishment on server side

When connection requests arrive faster than a server application can accept them, the connection backlog will eventually reach its limit. This can occur because of a mis-configuration of the system or a bug in the application. It also happens when one or more clients send connection requests without reacting to the ‘syn ack’ response. This fills the connection queue with incomplete connections. It takes several seconds for these entries to time out. This is called a “syn flood attack”.

TCP timestamps and TCP syn cookies

Some TCP stacks make it possible to accept new connections even if the queue is full. When this happens, the Linux kernel will print a prominent message to the system log:

Possible SYN flooding on port P. Sending Cookies. Check SNMP counters.

This mechanism bypasses the connection queue entirely. The information that is normally stored in the connection queue is encoded into the TCP sequence number of the SYN/ACK response. When the ACK comes back, the queue entry can be rebuilt from the sequence number.

The sequence number only has limited space to store information. Connections established using the ‘TCP syn cookie’ mechanism can not support TCP options for this reason.

The TCP options that are common to both peers can be stored in the timestamp, however. The ACK packet reflects the value back in the timestamp echo field, which makes it possible to recover the agreed-upon TCP options as well. Otherwise, cookie connections are restricted to the standard 64 kilobyte receive window.
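
On Linux, the syn cookie mechanism has its own sysctl. The default of 1 means cookies are only used once the queue overflows; it is not a setting you normally need to change:

$ sysctl net.ipv4.tcp_syncookies
net.ipv4.tcp_syncookies = 1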

Common myths – timestamps are bad for performance

Unfortunately some guides recommend disabling TCP timestamps to reduce the number of times the kernel needs to access the timestamp clock to get the current time. This is not correct. As explained before, RTT estimation is a necessary part of TCP. For this reason, the kernel always takes a microsecond-resolution time stamp when a packet is received/sent.

Linux re-uses the clock timestamp taken for the RTT estimation for the remainder of the packet processing step. This also avoids the extra clock access to add a timestamp to an outgoing TCP packet.

The entire timestamp option only requires 10 bytes of TCP option space in each packet, which is not a significant decrease in the space available for packet payload.

Common myths – timestamps are a security problem

Some security audit tools and (older) blog posts recommend disabling TCP timestamps because they allegedly leak system uptime: this would then make it possible to estimate the patch level of the system or kernel. This was true in the past: the timestamp clock is based on a constantly increasing value that starts at a fixed value on each system boot. A timestamp value would give an estimate as to how long the machine has been running (uptime).

As of Linux 4.12 TCP timestamps do not reveal the uptime anymore. All timestamp values sent use a peer-specific offset. Timestamp values also wrap every 49 days.

In other words, connections from or to address “A” see a different timestamp than connections to the remote address “B”.

Run sysctl net.ipv4.tcp_timestamps=2 to disable the randomization offset. This makes analyzing packet traces recorded by tools like wireshark or tcpdump easier – packets sent from the host then all have the same clock base in their TCP option timestamp.  For normal operation the default setting should be left as-is.

Selective Acknowledgments

TCP has problems if several packets in the same window of data are lost. This is because TCP acknowledgments are cumulative, but only for packets that arrived in-sequence. Example:

  • Sender transmits segments s_1, s_2, s_3, … s_n
  • Sender receives ACK for s_2
  • This means that both s_1 and s_2 were received and the
    sender no longer needs to keep these segments around.
  • Should s_3 be re-transmitted? What about s_4? s_n?

The sender waits for a “retransmission timeout” or ‘duplicate ACKs’ for s_2 to arrive. If a retransmit timeout occurs or several duplicate ACKs for s_2 arrive, the sender transmits s_3 again.

If the sender receives an acknowledgment for s_n, s_3 was the only missing packet. This is the ideal case. Only the single lost packet was re-sent.

If the sender receives an acknowledgment for a segment smaller than s_n, for example s_4, that means that more than one packet was lost. The sender needs to re-transmit the next segment as well.

Re-transmit strategies

It’s possible to just repeat the same sequence: re-send the next packet until the receiver indicates it has processed all packets up to s_n. The problem with this approach is that it requires one RTT before the sender knows which packet it has to re-send next. While such a strategy avoids unnecessary re-transmissions, it can take several seconds or more until TCP has re-sent the entire window of data.

The alternative is to re-send several packets at once. This approach allows TCP to recover more quickly when several packets have been lost. In the above example TCP re-sends s_3, s_4, s_5, and so on, while it can only be sure that s_3 has been lost.

From a latency point of view, neither strategy is optimal. The first strategy is fast if only a single packet has to be re-sent, but takes too long when multiple packets were lost.

The second one is fast even if multiple packets have to be re-sent, but at the cost of wasting bandwidth. In addition, such a TCP sender could have transmitted new data already while it was doing the unneeded re-transmissions.

With the available information TCP cannot know which packets were lost. This is where TCP Selective Acknowledgments (SACK) come in. Just like window scaling and timestamps, it is another optional, yet very useful TCP feature.

The SACK option

   TCP Sack-Permitted Option: Kind: 4, Length 2
   +---------+---------+
   | Kind=4  | Length=2|
   +---------+---------+

A sender that supports this extension includes the “Sack Permitted” option in the connection request. If both endpoints support the extension, then a peer that detects a packet is missing in the data stream can inform the sender about this.

   TCP SACK Option: Kind: 5, Length: Variable
                     +--------+--------+
                     | Kind=5 | Length |
   +--------+--------+--------+--------+
   |      Left Edge of 1st Block       |
   +--------+--------+--------+--------+
   |      Right Edge of 1st Block      |
   +--------+--------+--------+--------+
   |                                   |
   /            . . .                  /
   |                                   |
   +--------+--------+--------+--------+
   |      Left Edge of nth Block       |
   +--------+--------+--------+--------+
   |      Right Edge of nth Block      |
   +--------+--------+--------+--------+

A receiver that encounters segment s_2 followed by s_5…s_n will include a SACK block when it sends the acknowledgment for s_2:

                +--------+-------+
                | Kind=5 |   10  |
+--------+------+--------+-------+
| Left edge: s_5                 |
+--------+--------+-------+------+
| Right edge: s_n                |
+--------+-------+-------+-------+

This tells the sender that segments up to s_2 arrived in-sequence, but it also lets the sender know that the segments s_5 to s_n were received as well. The sender can then re-transmit just the two missing segments, s_3 and s_4, and proceed to send new data.

The mythical lossless network

In theory SACK provides no advantage if the connection cannot experience packet loss, or if the connection has such a low latency that even waiting one full RTT does not matter.

In practice lossless behavior is virtually impossible to ensure. Even if the network and all its switches and routers have ample bandwidth and buffer space, packets can still be lost:

  • The host operating system might be under memory pressure and drop
    packets. Remember that a host might be handling tens of thousands of packet streams simultaneously.
  • The CPU might not be able to drain incoming packets from the network interface fast enough. This causes packet drops in the network adapter itself.
  • If TCP timestamps are not available even a connection with a very small RTT can stall momentarily during loss recovery.

Use of SACK does not increase the size of TCP packets unless a connection experiences packet loss. Because of this, there is hardly a reason to disable this feature. Almost all TCP stacks support SACK; it is typically only absent on low-power IoT-like devices that do not do TCP bulk data transfers.

When a Linux system accepts a connection from such a device, TCP automatically disables SACK for the affected connection.

Summary

The three TCP extensions examined in this post are all related to TCP performance and should best be left to the default setting: enabled.

The TCP handshake ensures that only extensions that are understood by both parties are used, so there is never a need to disable an extension globally just because a peer might not support it.

Turning these extensions off results in severe performance penalties, especially in the case of TCP window scaling and SACK. TCP timestamps can be disabled without an immediate disadvantage, but there is no compelling reason to do so anymore. Keeping them enabled also makes it possible to support TCP options even when SYN cookies come into effect.
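
If you are curious what a live connection actually negotiated, ss can print per-connection TCP details. The lines below are an illustrative, trimmed example rather than literal output; look for the ts, sack and wscale fields:

$ ss -ti
ESTAB  0  0  192.0.2.10:22  192.0.2.20:51234
     ts sack cubic wscale:7,7 rto:204 rtt:1.5/0.75 mss:1448 ...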

More in Tux Machines

How anyone can contribute to open source software in their job

Imagine a world where your software works perfectly for you. It meets your needs, does things your way, and is the ideal tool to achieve great things toward your goals. Open source software stems from these roots. Many projects are built by engineers that have a problem and build a solution to solve it. Then they openly share their solution with others to use and improve. Unfortunately, building software is hard. Not everyone has the expertise to build software that works perfectly for their needs. And if the software developers building applications don't fully understand users' needs and how they do their job, the solutions they build may not meet the users' needs and may accidentally create a lot of gaps. Read more

5 open source tools I can't live without

Some time ago, I engaged with a Twitter thread that went viral among techies. The challenge? Pick only five tools that you cannot live without. I started to think about this in relation to my everyday life, and picking just five tools was not easy. I use many tools that I consider essential, such as my IRC client to connect with my colleagues and friends (yes, I still use IRC), a good text editor to hack on things, a calendar app to keep organized, and a videoconferencing platform when more direct interaction is needed. So let me put a twist on this challenge: Pick just five open source tools that boost your productivity. Here's my list; please share yours in the comments. Read more

How to Install Microsoft Edge Browser in Ubuntu and Other Linux

This guide explains the steps required to install Microsoft Edge Browser in Ubuntu and Other Linux. We explain both graphical and command-line methods. Read more

today's leftovers

  • A Quick Look At Ubuntu 20.04 LTS vs. 20.10 With The Core i9 10900K - Phoronix

    With Ubuntu 20.10 due for release this week I have begun testing near-final Ubuntu 20.10 builds on many more systems in the lab. Larger than our normal distribution/OS comparisons, here is the culmination of running hundreds of benchmarks (366 tests to be exact) under both Ubuntu 20.04 LTS with all available updates and then again on the Ubuntu 20.10 development state while testing on Intel Comet Lake. Aside from specific improvements for bleeding-edge hardware like Intel Tiger Lake performing better on Ubuntu 20.10 or when looking at cases like the Intel and Radeon graphics performance being better on Ubuntu 20.10 due to the newer Linux kernel and Mesa, for general CPU/system workloads the performance has largely been found to be similar to that of Ubuntu 20.04 LTS. The other caveat is for workloads being built from source, Ubuntu 20.10 now ships with GCC 10 rather than GCC 9. GCC 10 doesn't normally yield any night-and-day differences in performance but in some cases for newer CPU microarchitectures there has been some improvements there or with features like LTO.

  • TSDgeos' blog: Make sure KDE software is usable in your language, join KDE translations!

    Translations are a vital part of software. More technical people often overlook it because they understand English well enough to use the software untranslated, but only 15% of the World understands English, so it's clear we need good translations to make our software more useful to the rest of the world. Translations are a place that [almost] always needs help, so I would encourage you to contact me (aacid@kde.org) if you are interested in helping. Sadly, some of our teams are not very active, so you may find yourself alone; it can be a bit daunting at the beginning, but the rest of us in kde-i18n-doc will help you along the way :)

  • News – WordPress 5.6 Beta 1 – WordPress.org

    WordPress 5.6 Beta 1 is now available for testing! This software is still in development, so we recommend that you run this version on a test site. [...] The current target for final release is December 8, 2020. This is just seven weeks away, so your help is needed to ensure this release is tested properly.

  • Google Patches Bug Used in Active Attacks Against Chrome

    Google has discovered and patched a serious vulnerability in Chrome that attackers are actively exploiting at the moment. The bug is a high-severity heap buffer overflow in FreeType, a free font-rendering engine that Chrome, among many other projects, uses. A member of Google’s Project Zero vulnerability research team discovered the vulnerability and subsequently found that attackers were already exploiting it. Google patched the flaw in Chrome 86.0.4240.111 for desktop browsers and the maintainers of the FreeType Project pushed out an emergency release of the library to fix it, as well. “I've just fixed a heap buffer overflow that can happen for some malformed .ttf files with PNG sbit glyphs. It seems that this vulnerability gets already actively used in the wild, so I ask all users to apply the corresponding commit as soon as possible,” Werner Lemberg, one of the original authors of the FreeType, said in an email to the FreeType announcement mailing list.

  • FreeType 2.10.4 Rushed Out As Emergency Security Release

    The FreeType text rendering library is out with version 2.10.4 today as an important security update.

  • Intel: replace thermal compound “every few years”

    Thermal compound (sometimes called thermal paste or grease) is applied to fill minuscule gaps in the materials in the heat spreader (the metal covering on top of the processor) and the heatsink. Eliminating these gaps is essential to ensuring efficient heat transfer into the heatsink. The thermal compound that is used in your computer generally won’t go bad or degrade in its useful lifespan. It will get displaced over time, however. You’d need higher temperatures than what you’ll typically find in a computer for other failure modes to come into effect. The displacement is caused by thermal cycling that results in an effect known as “thermally induced pump-out.” As the components heat up and cool down, the processors’ heat spreader (its metal top) and the heatsink will expand and contract. This effect will, over time, pump the thermal compound out from in between the two metal plates. You can find illustrations and a more technical explanation in the source links below.