Fedora Magazine

Guides, information, and news about the Fedora operating system for users, developers, system administrators, and community members.

Packit – auto-package your projects into Fedora


What is packit?

Packit is a CLI tool that helps you auto-package your upstream projects into the Fedora operating system. But what does that really mean?

As a developer, you might want to add or update your package in Fedora. If you’ve done it in the past, you know it’s no easy task. If you haven’t, let me reiterate: it’s no easy task.

And this is exactly where packit can help: with just one configuration file in your upstream repository, packit will automatically package your software into Fedora and update it when you update your source code upstream.

Furthermore, packit can synchronize downstream changes to a SPEC file back into the upstream repository. This could be useful if the SPEC file of your package is changed in Fedora repositories and you would like to synchronize it into your upstream project.

Packit also provides a way to build an SRPM package based on an upstream repository checkout, which can be used for building RPM packages in COPR.

Last but not least, packit provides a status command. This command provides information about the upstream and downstream repositories, such as pull requests, releases, and more.

Packit also provides two other commands: build and create-update.

The packit build command performs a production build of your project in the Fedora build system, koji. You can select the Fedora version you want to build against using the --dist-git-branch option. The packit create-update command creates a Bodhi update for a specific branch, again using the --dist-git-branch option.
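
For example, a sketch of both commands (the f30 branch name is only an illustration):

$ packit build --dist-git-branch f30
$ packit create-update --dist-git-branch f30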


You can install packit on Fedora using dnf:

sudo dnf install -y packit

Configuration

For this demonstration, I have selected the upstream repository of colin. Colin is a tool to check generic rules and best practices for containers, dockerfiles, and container images.

First of all, clone colin git repository:

$ git clone
$ cd colin

Packit expects to run in the root of your git repository.

Packit needs information about your project, which has to be stored in the upstream repository in the .packit.yaml file.

See colin’s packit configuration file:

$ cat .packit.yaml
specfile_path: colin.spec
synced_files:
  - colin.spec
upstream_project_name: colin
downstream_package_name: colin

What do the values mean?

  • specfile_path – a relative path to a spec file within the upstream repository (mandatory)
  • synced_files – a list of relative paths to files in the upstream repo which are meant to be copied to dist-git during an update
  • upstream_project_name – name of the upstream repository (e.g. in PyPI); this is used in the %prep section
  • downstream_package_name – name of the package in Fedora (mandatory)

For more information, see the packit configuration documentation.

What can packit do?

A prerequisite for using packit is that you are in the working directory of a git checkout of your upstream project.

Before running any packit command, you need to perform several actions. These actions are mandatory for filing a PR into the upstream or downstream repositories and for having access to the Fedora dist-git repositories.

Export a GitHub token, taken from your GitHub token settings:
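
For example, a sketch assuming packit reads the token from an environment variable matching the github_token configuration key shown below:

$ export GITHUB_TOKEN=<GITHUB_TOKEN>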


Obtain your Kerberos ticket needed for the Fedora Account System (FAS):

$ kinit <yourname>@FEDORAPROJECT.ORG

Export your Pagure API keys, taken from your Pagure settings:
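
Again as a sketch, assuming the environment variable name matches the pagure_user_token configuration key shown below:

$ export PAGURE_USER_TOKEN=<PAGURE_USER_TOKEN>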


Packit also needs a fork token to create a pull request. This token is taken from the settings of your fork on Pagure.

Do it by running:
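
As a sketch, assuming the environment variable name matches the pagure_fork_token configuration key shown below:

$ export PAGURE_FORK_TOKEN=<PAGURE_FORK_TOKEN>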


Or store these tokens in the ~/.config/packit.yaml file:

$ cat ~/.config/packit.yaml

github_token: <GITHUB_TOKEN>
pagure_user_token: <PAGURE_USER_TOKEN>
pagure_fork_token: <PAGURE_FORK_TOKEN>
Propose a new upstream release in Fedora

The command for this first use case is called propose-update. The command creates a new pull request in the Fedora dist-git repository using a selected or the latest upstream release.

$ packit propose-update

INFO: Running 'anitya' versioneer
Version in upstream registries is '0.3.1'.
Version in spec file is '0.3.0'.
WARNING  Version in spec file is outdated
Picking version of the latest release from the upstream registry.
Checking out upstream version 0.3.1
Using 'master' dist-git branch
Copying /home/vagrant/colin/colin.spec to /tmp/tmptfwr123c/colin.spec.
Archive colin-0.3.0.tar.gz found in lookaside cache (skipping upload).
INFO: Downloading file from URL
100%[=============================>]     3.18M  eta 00:00:00
Downloaded archive: '/tmp/tmptfwr123c/colin-0.3.0.tar.gz'
About to upload to lookaside cache
won't be doing kinit, no credentials provided
PR created:

Once the command finishes, you can see a PR in the Fedora Pagure instance based on the latest upstream release. After you review it, it can be merged.

Sync downstream changes back to the upstream repository

Another use case is to sync downstream changes into the upstream project repository.

The command for this purpose is called sync-from-downstream. Files synced into the upstream repository are listed in the .packit.yaml configuration file under the synced_files value.

$ packit sync-from-downstream

upstream active branch master
using "master" dist-git branch
Copying /tmp/tmplvxqtvbb/colin.spec to /home/vagrant/colin/colin.spec.
Creating remote fork-ssh with URL
Pushing to remote fork-ssh using branch master-downstream-sync.
PR created:

As soon as packit finishes, you can see the latest changes taken from the Fedora dist-git repository in the upstream repository. This can be useful, for example, when Release Engineering performs mass rebuilds and updates your SPEC file in the Fedora dist-git repository.

Get the status of your upstream project

If you are a developer, you may want to get all the information about the latest releases, tags, pull requests, etc. from the upstream and the downstream repository. Packit provides the status command for this purpose.

$ packit status
Downstream PRs:
 ID   Title                             URL
----  --------------------------------  ---------------------------------------------------------
 14   Update to upstream release 0.3.1
 12   Upstream pr: 226
 11   Upstream pr: 226
  8   Upstream pr: 226

Dist-git versions:
f27: 0.2.0
f28: 0.2.0
f29: 0.2.0
f30: 0.2.0
master: 0.2.0

GitHub upstream releases:

Latest builds:
f27: colin-0.2.0-1.fc27
f28: colin-0.3.1-1.fc28
f29: colin-0.3.1-1.fc29
f30: colin-0.3.1-2.fc30

Latest bodhi updates:
Update                Karma  status
------------------  ------- --------
colin-0.3.1-1.fc29        1  stable
colin-0.3.1-1.fc28        1  stable
colin-0.3.0-2.fc28        0  obsolete

Create an SRPM

The last packit use case is to generate an SRPM package based on a git checkout of your upstream project. The packit command for SRPM generation is srpm.

$ packit srpm
Version in spec file is ''.
SRPM: /home/phracek/work/colin/colin-

Packit as a service

This summer, the people behind packit would like to introduce packit as a service. In this case, the packit GitHub application will be installed into the upstream repository, and packit will perform all the actions automatically, based on the events it receives from GitHub or fedmsg.

Securing telnet connections with stunnel

Wednesday 22nd of May 2019 08:00:51 AM

Telnet is a client-server protocol that connects to a remote server through TCP over port 23. Telnet does not encrypt data and is considered insecure, and passwords can easily be sniffed because data is sent in the clear. However, there are still legacy systems that need to use it. This is where stunnel comes to the rescue.

Stunnel is designed to add SSL encryption to programs that have insecure connection protocols. This article shows you how to use it, with telnet as an example.

Server Installation

Install stunnel along with the telnet server and client using sudo:

sudo dnf -y install stunnel telnet-server telnet

Add a firewall rule, entering your password when prompted:

sudo firewall-cmd --add-service=telnet --perm
sudo firewall-cmd --reload

Next, generate an RSA private key and an SSL certificate:

openssl genrsa 2048 > stunnel.key
openssl req -new -key stunnel.key -x509 -days 90 -out stunnel.crt

You will be prompted for the following information one line at a time. When asked for Common Name you must enter the correct host name or IP address, but everything else you can skip through by hitting the Enter key.

You are about to be asked to enter information that will be
incorporated into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
Country Name (2 letter code) [XX]:
State or Province Name (full name) []:
Locality Name (eg, city) [Default City]:
Organization Name (eg, company) [Default Company Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (eg, your name or your server's hostname) []:
Email Address []:

Merge the RSA key and SSL certificate into a single .pem file, and copy that to the SSL certificate directory:

cat stunnel.crt stunnel.key > stunnel.pem
sudo cp stunnel.pem /etc/pki/tls/certs/

Now it’s time to define the service and the ports to use for encrypting your connection. Choose a port that is not already in use. This example uses port 450 for tunneling telnet. Edit or create the /etc/stunnel/telnet.conf file:

cert = /etc/pki/tls/certs/stunnel.pem
sslVersion = TLSv1
chroot = /var/run/stunnel
setuid = nobody
setgid = nobody
pid = /
socket = l:TCP_NODELAY=1
socket = r:TCP_NODELAY=1
accept = 450
connect = 23

The accept option is the port the server will listen to for incoming telnet requests. The connect option is the internal port the telnet server listens to.

Next, make a copy of the systemd unit file that allows you to override the packaged version:

sudo cp /usr/lib/systemd/system/stunnel.service /etc/systemd/system

Edit the /etc/systemd/system/stunnel.service file to add two lines. These lines create a chroot jail for the service when it starts.

Description=TLS tunnel for network daemons

ExecStartPre=-/usr/bin/mkdir /var/run/stunnel
ExecStartPre=/usr/bin/chown -R nobody:nobody /var/run/stunnel


Next, configure SELinux to listen to telnet on the new port you just specified:

sudo semanage port -a -t telnetd_port_t -p tcp 450

Finally, add a new firewall rule:

sudo firewall-cmd --add-port=450/tcp --perm
sudo firewall-cmd --reload

Now you can enable and start telnet and stunnel.

sudo systemctl enable telnet.socket stunnel@telnet.service --now

A note on the systemctl command is in order. Systemd and the stunnel package provide an additional template unit file by default. The template lets you drop multiple configuration files for stunnel into /etc/stunnel, and use the filename to start the service. For instance, if you had a foobar.conf file, you could start that instance of stunnel with systemctl start stunnel@foobar.service, without having to write any unit files yourself.
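
For instance, a quick sketch of that pattern (foobar.conf is hypothetical, as above):

$ sudo vi /etc/stunnel/foobar.conf     # define a second tunnel
$ sudo systemctl start stunnel@foobar.service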

If you want, you can set this stunnel template service to start on boot:

sudo systemctl enable stunnel@telnet.service

Client Installation

This part of the article assumes you are logged in as a normal user (with sudo privileges) on the client system. Install stunnel and the telnet client:

sudo dnf -y install stunnel telnet

Copy the stunnel.pem file from the remote server to your client /etc/pki/tls/certs directory. In this example, the IP address of the remote telnet server is

sudo scp myuser@

Create the /etc/stunnel/telnet.conf file:

cert = /etc/pki/tls/certs/stunnel.pem

The accept option is the port that will be used for telnet sessions. The connect option is the IP address of your remote server and the port it’s listening on.
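
Based on that description, the rest of the file would look something like the following sketch, with a placeholder for the remote server address:

client = yes
accept = 450
connect = <remote-server-ip>:450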

Next, enable and start stunnel:

sudo systemctl enable stunnel@telnet.service --now

Test your connection. Since you have a connection established, you will telnet to localhost instead of the hostname or IP address of the remote telnet server:

[user@client ~]$ telnet localhost 450
Trying ::1...
telnet: connect to address ::1: Connection refused
Connected to localhost.
Escape character is '^]'.

Kernel 5.0.9-301.fc30.x86_64 on an x86_64 (0)
server login: myuser
Password: XXXXXXX
Last login: Sun May  5 14:28:22 from localhost
[myuser@server ~]$

Getting set up with Fedora Project services

Monday 20th of May 2019 08:00:04 AM

In addition to providing an operating system, the Fedora Project provides numerous services for users and developers. Services such as Ask Fedora, the Fedora Project Wiki and the Fedora Project Mailing Lists provide users with valuable resources for learning how to best take advantage of Fedora. For developers of Fedora, there are many other services such as dist-git, Pagure, Bodhi, COPR and Bugzilla that are involved with the packaging and release process.

These services are available for use with a free account from the Fedora Accounts System (FAS). This account is the passport to all things Fedora! This article covers how to get set up with an account and configure Fedora Workstation for browser single sign-on.

Signing up for a Fedora account

To create a FAS account, browse to the account creation page. Here, you will fill out your basic identity data:

Account creation page

Once you enter your data, an email will be sent to the email address provided, with a temporary password. Pick a strong password and use it.

Password reset page

Next, the account details page appears. If you intend to become a contributor to the Fedora Project, you should complete the Contributor Agreement now. Otherwise, you are done and your account can now be used to log into the various Fedora services.

Account details page

Configuring Fedora Workstation for single sign-on

Now that you have your account, you can sign into any of the Fedora Project services. Most of these services support single sign-on (SSO), allowing you to sign in without re-entering your username and password.

Fedora Workstation provides an easy workflow to add SSO credentials. The GNOME Online Accounts tool helps you quickly set up your system to access many popular services. To access it, go to the Settings menu.

GNOME Online Accounts

Click on the ⋮ button and select Enterprise Login (Kerberos), which provides a single text prompt for a principal. Enter fasname@FEDORAPROJECT.ORG (being sure to capitalize FEDORAPROJECT.ORG) and click Connect.

Kerberos principal dialog

GNOME prompts you to enter your FAS password and gives you the option to save it. If you choose to save it, it is stored in GNOME Keyring and unlocked automatically at login. If you choose not to save it, you will need to open GNOME Online Accounts and enter your password each time you want to enable single sign-on.

Single sign-on with a web browser

Today, Fedora Workstation supports single sign-on to the Fedora Project services “out of the box” in two web browsers: Mozilla Firefox and Google Chrome. Due to a bug in Chromium, single sign-on does not currently work properly in many cases. As a result, this has not been enabled for Chromium in Fedora.

To sign on to a service, browse to it and select the “login” option for that service. For most Fedora services, this is the only thing you need to do and the browser handles the rest. Some services such as the Fedora Mailing Lists and Bugzilla support multiple login types. For them, you need to select the “Fedora” or “Fedora Account System” login type.

That’s it! You can now log into any of the Fedora Project services without re-entering your password.

Special consideration for Google Chrome

In order to enable single sign-on out of the box for Google Chrome, Fedora needed to take advantage of certain features in Chrome that are intended for use in “managed” environments. A managed environment is traditionally a corporate or other organization that sets certain security and/or monitoring requirements on the browser.

Recently, Google Chrome changed its behavior and it now reports “Managed by your organization” under the ⋮ menu in Google Chrome. That link leads to a page that states “If your Chrome browser is managed, your administrator can set up or restrict certain features, install extensions, monitor activity, and control how you use Chrome.” Fedora will never monitor your browser activity or restrict your actions.

Enter chrome://policy in the address bar to see exactly what settings Fedora has enabled in the browser. The AuthNegotiateDelegateWhitelist and AuthServerWhitelist options will be set to * These are the only changes Fedora makes.

Building Smaller Container Images

Thursday 16th of May 2019 08:00:35 AM

Linux containers have become a popular topic, and making sure that a container image is not bigger than it needs to be is considered good practice. This article gives some tips on how to create smaller Fedora container images.


Fedora’s DNF is written in Python and is designed to be extensible through its wide range of plugins. But Fedora has an alternative base container image which uses a smaller package manager called microdnf, written in C. To use this minimal image in a Dockerfile, the FROM line should look like this:
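
As a sketch, assuming the Fedora minimal base image published in the Fedora registry (the tag is illustrative):

FROM registry.fedoraproject.org/fedora-minimal:30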


This is an important saving if your image does not need typical DNF dependencies like Python. For example, if you are making a NodeJS image.

Install and Clean up in one layer

To save space, it’s important to remove repository metadata using dnf clean all or its microdnf equivalent, microdnf clean all. But you should not do this in two steps, because that would store those files in one container image layer and then mark them for deletion in another layer. To do it properly, do the installation and the cleanup in one step, like this:

RUN microdnf install nodejs && microdnf clean all

Modularity with microdnf

Modularity is a way to offer different versions of a stack to choose from. For example, you might want the non-LTS NodeJS version 11 for one project, the old LTS NodeJS version 8 for another, and the latest LTS NodeJS version 10 for yet another. You can specify which stream to use with a colon:

# dnf module list
# dnf module install nodejs:8

The dnf module install command implies two commands: one that enables the stream and one that installs nodejs from it.

# dnf module enable nodejs:8
# dnf install nodejs

Although microdnf does not offer any command related to modularity, it is possible to enable a module with a configuration file, and libdnf (which microdnf uses) seems to support modularity streams. The file looks like this:
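
Reconstructing it from the echo command in the Dockerfile below, /etc/dnf/modules.d/nodejs.module contains:

[nodejs]
name=nodejs
stream=8
profiles=
state=enabled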


A full Dockerfile using modularity with microdnf looks like this:

RUN echo -e "[nodejs]\nname=nodejs\nstream=8\nprofiles=\nstate=enabled\n" > /etc/dnf/modules.d/nodejs.module && \
    microdnf install nodejs zopfli findutils busybox && \
    microdnf clean all

Multi-staged builds

In many cases you might have tons of build-time dependencies that are not needed to run the software, for example when building a Go binary, which statically links its dependencies. Multi-stage builds are an efficient way to separate the application build from the application runtime.

For example, the Dockerfile below builds confd, a Go application.

# building container
FROM AS build
RUN mkdir /go && microdnf install golang && microdnf clean all
RUN export GOPATH=/go; CGO_ENABLED=0 go get

COPY --from=build /go/bin/confd /usr/local/bin
CMD ["confd"]

The multi-stage build is done by adding AS <name> after the FROM instruction, having another FROM from a base container image, and then using the COPY --from=<name> instruction to copy content from the build container to the second container.

This Dockerfile can then be built and run using podman:

$ podman build -t myconfd .
$ podman run -it myconfd

Manage business documents with OpenAS2 on Fedora

Monday 13th of May 2019 08:00:09 AM

Business documents often require special handling. Enter Electronic Document Interchange, or EDI. EDI is more than simply transferring files using email or http (or ftp), because these are documents like orders and invoices. When you send an invoice, you want to be sure that:

1. It goes to the right destination, and is not intercepted by competitors.
2. Your invoice cannot be forged by a 3rd party.
3. Your customer can’t claim in court that they never got the invoice.

The first two goals can be accomplished by HTTPS or email with S/MIME, and in some situations, a simple HTTPS POST to a web API is sufficient. What EDI adds is the last part.

This article does not cover the messy topic of formats for the files exchanged. Even when using a standardized format like ANSI or EDIFACT, it is ultimately up to the business partners. It is not uncommon for business partners to use an ad-hoc CSV file format. This article shows you how to configure Fedora to send and receive in an EDI setup.

Centralized EDI

The traditional solution is to use a Value Added Network, or VAN. The VAN is a central hub that transfers documents between its customers. Most importantly, it keeps a secure record of the documents exchanged that can be used as evidence in disputes. The VAN can use different transfer protocols for each of its customers.

AS Protocols and MDN

The AS protocols are a specification for adding a digital signature with optional encryption to an electronic document. What it adds over HTTPS or S/MIME is the Message Disposition Notification, or MDN. The MDN is a signed and dated response that says, in essence, “We got your invoice.” It uses a secure hash to identify the specific document received. This addresses point #3 without involving a third party.

The AS2 protocol uses HTTP or HTTPS for transport. Other AS protocols target FTP and SMTP. AS2 is used by companies big and small to avoid depending on (and paying) a VAN.


OpenAS2 is an open source Java implementation of the AS2 protocol. It has been available in Fedora since Fedora 28, and is installed with:

$ sudo dnf install openas2
$ cd /etc/openas2

Configuration is done with a text editor, and the config files are in XML. The first order of business before starting OpenAS2 is to change the factory passwords.

Edit /etc/openas2/config.xml and search for ChangeMe. Change those passwords. The default password on the certificate store is testas2, but that doesn’t matter much as anyone who can read the certificate store can read config.xml and get the password.

What to share with AS2 partners

There are 3 things you will exchange with an AS2 peer.


AS2 ID

Don’t bother looking up the official AS2 standard for legal AS2 IDs. While OpenAS2 implements the standard, your partners will likely be using a proprietary product which doesn’t. While AS2 allows much longer IDs, many implementations break with more than 16 characters. Using otherwise legal AS2 ID characters like ‘:’, which can appear as path separators on a proprietary OS, is also a problem. Restrict your AS2 ID to upper and lower case alpha, digits, and ‘_’ with no more than 16 characters.

SSL certificate

For real use, you will want to generate a certificate with SHA256 and RSA. OpenAS2 ships with two factory certs to play with. Don’t use these for anything real, obviously. The certificate file is in PKCS12 format. Java ships with keytool which can maintain your PKCS12 “keystore,” as Java calls it. This article skips using openssl to generate keys and certificates. Simply note that sudo keytool -list -keystore as2_certs.p12 will list the two factory practice certs.


AS2 URL

This is an HTTP URL that will access your OpenAS2 instance. HTTPS is also supported, but is redundant. To use it you have to uncomment the https module configuration in config.xml, and supply a certificate signed by a public CA. This requires another article and is entirely unnecessary here.

By default, OpenAS2 listens on 10080 for HTTP and 10443 for HTTPS. OpenAS2 can talk to itself, so it ships with two partnerships using http://localhost:10080 as the AS2 URL. If you don’t find this a convincing demo, and can install a second instance (on a VM, for instance), you can use private IPs for the AS2 URLs. Or install Cjdns to get IPv6 mesh addresses that can be used anywhere, resulting in AS2 URLs like http://[fcbf:fc54:e597:7354:8250:2b2e:95e6:d6ba]:10080.

Most businesses will also want a list of IPs to add to their firewall. This is actually bad practice. An AS2 server has the same security risk as a web server, meaning you should isolate it in a VM or container. Also, the difficulty of keeping mutual lists of IPs up to date grows with the list of partners. The AS2 server rejects requests not signed by a configured partner.

OpenAS2 Partners

With that in mind, open partnerships.xml in your editor. At the top is a list of “partners.” Each partner has a name (referenced by the partnerships below as “sender” or “receiver”), AS2 ID, certificate, and email. You need a partner definition for yourself and those you exchange documents with. You can define multiple partners for yourself. OpenAS2 ships with two partners, OpenAS2A and OpenAS2B, which you’ll use to send a test document.

OpenAS2 Partnerships

Next is a list of “partnerships,” one for each direction. Each partnership configuration includes the sender, receiver, and the AS2 URL used to send the documents. By default, partnerships use synchronous MDN. The MDN is returned on the same HTTP transaction. You could uncomment the as2_receipt_option for asynchronous MDN, which is sent some time later. Use synchronous MDN whenever possible, as tracking pending MDNs adds complexity to your application.
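
As a rough sketch of what such an entry looks like (abridged; consult the factory partnerships.xml shipped with OpenAS2 for the exact attribute set):

<partnership name="OpenAS2A-to-OpenAS2B">
  <sender name="OpenAS2A"/>
  <receiver name="OpenAS2B"/>
  <attribute name="protocol" value="as2"/>
  <attribute name="as2_url" value="http://localhost:10080"/>
  <!-- an as2_receipt_option attribute here would select asynchronous MDN -->
</partnership>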

The other partnership options select encryption, signature hash, and other protocol options. A fully implemented AS2 receiver can handle any combination of options, but AS2 partners may have incomplete implementations or policy requirements. For example, DES3 is a comparatively weak encryption algorithm, and may not be acceptable. It is the default because it is almost universally implemented.

If you went to the trouble to set up a second physical or virtual machine for this test, designate one as OpenAS2A and the other as OpenAS2B. Modify the as2_url on the OpenAS2A-to-OpenAS2B partnership to use the IP (or hostname) of OpenAS2B, and vice versa for the OpenAS2B-to-OpenAS2A partnership. Unless they are using the FedoraWorkstation firewall profile, on both machines you’ll need:

# sudo firewall-cmd --zone=public --add-port=10080/tcp

Now start the openas2 service (on both machines if needed):

# sudo systemctl start openas2

Resetting the MDN password

This initializes the MDN log database with the factory password, not the one you changed it to. This is a packaging bug to be fixed in the next release. To avoid frustration, here’s how to change the h2 database password:

$ sudo systemctl stop openas2
$ cat >h2passwd <<'DONE'
#!/bin/sh
# AS2DIR is assumed to be the OpenAS2 state directory; adjust if yours differs
AS2DIR=/var/lib/openas2
java -cp "$AS2DIR"/lib/h2* org.h2.tools.Shell \
 -url jdbc:h2:"$AS2DIR"/db/openas2 \
 -user sa -password "$1" <<EOF
alter user sa set password '$2';
EOF
DONE
$ sudo sh h2passwd ChangeMe yournewpasswordsetabove
$ sudo systemctl start openas2

Testing the setup

With that out of the way, let’s send a document. Assuming you are on OpenAS2A machine:

$ cat >testdoc <<'DONE'
This is not a real EDI format, but is nevertheless a document.
DONE
$ sudo chown openas2 testdoc
$ sudo mv testdoc /var/spool/openas2/toOpenAS2B
$ sudo journalctl -f -u openas2
... log output of sending file, Control-C to stop following log

OpenAS2 does not send a document until it is writable by the openas2 user or group. As a consequence, your actual business application will copy, or generate in place, the document. Then it changes the group or permissions to send it on its way, to avoid sending a partial document.

Now, on the OpenAS2B machine, /var/spool/openas2/OpenAS2A_OID-OpenAS2B_OID/inbox shows the message received. That should get you started!

Photo by Beatriz Pérez Moya on Unsplash.

Contribute at the Fedora Test Week for kernel 5.1

Sunday 12th of May 2019 05:29:06 PM

The kernel team is working on final integration for kernel 5.1. This version was just recently released, and will arrive soon in Fedora. It includes many security fixes. As a result, the Fedora kernel and QA teams have organized a test week from Monday, May 13, 2019 through Saturday, May 18, 2019. Refer to the wiki page for links to the test images you’ll need to participate. Read below for details.

How does a test week work?

A test day/week is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to do the following things:

  • Download test materials, which include some large files
  • Read and follow directions step by step

The wiki page for the kernel test day has a lot of good information on what and how to test. After you’ve done some testing, you can log your results in the test day web application. If you’re available on or around the day of the event, please do some testing and report your results.

Happy testing, and we hope to see you on test day.

Check storage performance with dd

Friday 10th of May 2019 08:00:54 AM

This article includes some example commands to show you how to get a rough estimate of hard drive and RAID array performance using the dd command. Accurate measurements would have to take into account things like write amplification and system call overhead, which this guide does not. For a tool that might give more accurate results, you might want to consider using hdparm.

To factor out performance issues related to the file system, these examples show how to test the performance of your drives and arrays at the block level by reading and writing directly to/from their block devices. WARNING: The write tests will destroy any data on the block devices against which they are run. Do not run them against any device that contains data you want to keep!

Four tests

Below are four example dd commands that can be used to test the performance of a block device:

  1. One process reading from $MY_DISK:
     # dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache
  2. One process writing to $MY_DISK:
     # dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct
  3. Two processes reading concurrently from $MY_DISK:
     # (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache &); (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache skip=200 &)
  4. Two processes writing concurrently to $MY_DISK:
     # (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct &); (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct skip=200 &)

– The iflag=nocache and oflag=direct parameters are important when performing the read and write tests (respectively) because without them the dd command will sometimes show the resulting speed of transferring the data to/from RAM rather than the hard drive.

– The values for the bs and count parameters are somewhat arbitrary and what I have chosen should be large enough to provide a decent average in most cases for current hardware.

– The null and zero devices are used for the destination and source (respectively) in the read and write tests because they are fast enough that they will not be the limiting factor in the performance tests.

– The skip=200 parameter on the second dd command in the concurrent read and write tests is to ensure that the two copies of dd are operating on different areas of the hard drive.

16 examples

Below are demonstrations showing the results of running each of the above four tests against each of the following four block devices:

  1. MY_DISK=/dev/sda2 (used in examples 1-X)
  2. MY_DISK=/dev/sdb2 (used in examples 2-X)
  3. MY_DISK=/dev/md/stripped (used in examples 3-X)
  4. MY_DISK=/dev/md/mirrored (used in examples 4-X)

A video demonstration of these tests being run on a PC is provided at the end of this guide.

Begin by putting your computer into rescue mode to reduce the chances that disk I/O from background services might randomly affect your test results. WARNING: This will shut down all non-essential programs and services. Be sure to save your work before running these commands. You will need to know your root password to get into rescue mode. The passwd command, when run as the root user, will prompt you to (re)set your root account password.

$ sudo -i
# passwd
# setenforce 0
# systemctl rescue

You might also want to temporarily disable logging to disk:

# sed -r -i.bak 's/^#?Storage=.*/Storage=none/' /etc/systemd/journald.conf
# systemctl restart systemd-journald.service

If you have a swap device, it can be temporarily disabled and used to perform the following tests:

# swapoff -a
# MY_DEVS=$(mdadm --detail /dev/md/swap | grep active | grep -o "/dev/sd.*")
# mdadm --stop /dev/md/swap
# mdadm --zero-superblock $MY_DEVS

Example 1-1 (reading from sda)

# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 1)
# dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.7003 s, 123 MB/s

Example 1-2 (writing to sda)

# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 1)
# dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.67117 s, 125 MB/s

Example 1-3 (reading concurrently from sda)

# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 1)
# (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache &); (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache skip=200 &)
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.42875 s, 61.2 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.52614 s, 59.5 MB/s

Example 1-4 (writing concurrently to sda)

# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 1)
# (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct &); (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct skip=200 &)
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.2435 s, 64.7 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.60872 s, 58.1 MB/s

Example 2-1 (reading from sdb)

# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 2)
# dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.67285 s, 125 MB/s

Example 2-2 (writing to sdb)

# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 2)
# dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.67198 s, 125 MB/s

Example 2-3 (reading concurrently from sdb)

# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 2)
# (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache &); (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache skip=200 &)
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.52808 s, 59.4 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.57736 s, 58.6 MB/s

Example 2-4 (writing concurrently to sdb)

# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 2)
# (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct &); (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct skip=200 &)
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.7841 s, 55.4 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.81475 s, 55.0 MB/s

Example 3-1 (reading from RAID0)

# mdadm --create /dev/md/stripped --homehost=any --metadata=1.0 --level=0 --raid-devices=2 $MY_DEVS
# MY_DISK=/dev/md/stripped
# dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 0.837419 s, 250 MB/s

Example 3-2 (writing to RAID0)

# MY_DISK=/dev/md/stripped
# dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 0.823648 s, 255 MB/s

Example 3-3 (reading concurrently from RAID0)

# MY_DISK=/dev/md/stripped
# (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache &); (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache skip=200 &)
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.31025 s, 160 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.80016 s, 116 MB/s

Example 3-4 (writing concurrently to RAID0)

# MY_DISK=/dev/md/stripped
# (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct &); (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct skip=200 &)
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.65026 s, 127 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.81323 s, 116 MB/s

Example 4-1 (reading from RAID1)

# mdadm --stop /dev/md/stripped
# mdadm --create /dev/md/mirrored --homehost=any --metadata=1.0 --level=1 --raid-devices=2 --assume-clean $MY_DEVS
# MY_DISK=/dev/md/mirrored
# dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.74963 s, 120 MB/s

Example 4-2 (writing to RAID1)

# MY_DISK=/dev/md/mirrored
# dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.74625 s, 120 MB/s

Example 4-3 (reading concurrently from RAID1)

# MY_DISK=/dev/md/mirrored
# (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache &); (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache skip=200 &)
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.67171 s, 125 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.67685 s, 125 MB/s

Example 4-4 (writing concurrently to RAID1)

# MY_DISK=/dev/md/mirrored
# (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct &); (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct skip=200 &)
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 4.09666 s, 51.2 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 4.1067 s, 51.1 MB/s

Restore your swap device and journald configuration

# mdadm --stop /dev/md/stripped /dev/md/mirrored
# mdadm --create /dev/md/swap --homehost=any --metadata=1.0 --level=1 --raid-devices=2 $MY_DEVS
# mkswap /dev/md/swap
# swapon -a
# mv /etc/systemd/journald.conf.bak /etc/systemd/journald.conf
# systemctl restart systemd-journald.service
# reboot

Interpreting the results

Examples 1-1, 1-2, 2-1, and 2-2 show that each of my drives read and write at about 125 MB/s.

Examples 1-3, 1-4, 2-3, and 2-4 show that when two reads or two writes are done in parallel on the same drive, each process gets about half of the drive’s bandwidth (roughly 60 MB/s).

The 3-x examples show the performance benefit of putting the two drives together in a RAID0 (data striping) array. The numbers, in all cases, show that the RAID0 array performs about twice as fast as either drive is able to perform on its own. The trade-off is that you are twice as likely to lose everything, because each drive only contains half the data. A three-drive array would perform three times as fast as a single drive (all drives being equal), but it would be three times as likely to suffer a catastrophic failure.
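
To put a rough number on that trade-off: if each drive independently fails within some period with probability p, an N-drive RAID0 array loses data unless every drive survives, so P(array failure) = 1 - (1 - p)^N, which is approximately N*p for small p. For example, with p = 5% and N = 2, that is 1 - 0.95^2 = 9.75%, roughly double the single-drive risk.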

The 4-x examples show that the performance of the RAID1 (data mirroring) array is similar to that of a single disk, except for the case where multiple processes are concurrently reading (example 4-3). In the case of multiple processes reading, the performance of the RAID1 array is similar to that of the RAID0 array. This means that you will see a performance benefit with RAID1, but only when processes are reading concurrently; for example, when a process tries to access a large number of files in the background while you are trying to use a web browser or email client in the foreground. The main benefit of RAID1 is that your data is unlikely to be lost if a drive fails.

Video demo

Testing storage throughput using dd

Troubleshooting

If the above tests aren’t performing as you expect, you might have a bad or failing drive. Most modern hard drives have built-in Self-Monitoring, Analysis and Reporting Technology (SMART). If your drive supports it, the smartctl command can be used to query your hard drive for its internal statistics:

# smartctl --health /dev/sda
# smartctl --log=error /dev/sda
# smartctl -x /dev/sda

Another way that you might be able to tune your PC for better performance is by changing your I/O scheduler. Linux systems support several I/O schedulers and the current default for Fedora systems is the multiqueue variant of the deadline scheduler. The default performs very well overall and scales extremely well for large servers with many processors and large disk arrays. There are, however, a few more specialized schedulers that might perform better under certain conditions.

To view which I/O scheduler your drives are using, issue the following command:

$ for i in /sys/block/sd?/queue/scheduler; do echo "$i: $(<$i)"; done

You can change the scheduler for a drive by writing the name of the desired scheduler to the /sys/block/<device name>/queue/scheduler file:

# echo bfq > /sys/block/sda/queue/scheduler

You can make your changes permanent by creating a udev rule for your drive. The following example shows how to create a udev rule that will set all rotational drives to use the BFQ I/O scheduler:

# cat << END > /etc/udev/rules.d/60-ioscheduler-rotational.rules
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="bfq"
END

Here is another example that sets all solid-state drives to use the NOOP I/O scheduler:

# cat << END > /etc/udev/rules.d/60-ioscheduler-solid-state.rules
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="none"
END

Changing your I/O scheduler won’t affect the raw throughput of your devices, but it might make your PC seem more responsive by prioritizing the bandwidth for the foreground tasks over the background tasks or by eliminating unnecessary block reordering.

Photo by James Donovan on Unsplash.

Check out the new AskFedora

Wednesday 8th of May 2019 08:00:58 AM

If you’ve been reading the Community blog, you’ll already know: AskFedora has moved to Discourse! Read on for more information about this exciting platform.

Discourse? Why Discourse?

The new AskFedora is a Discourse instance hosted by Discourse, similar to the Fedora community's discussion site. However, where that site is meant for development discussion within the community, AskFedora is meant for end-user troubleshooting.

The Discourse platform focuses on conversations. Not only can you ask questions and receive answers, you can have complete dialogues with others. This is especially fitting since troubleshooting includes lots of bits that are neither questions nor answers. Instead, there are lots of suggestions, ideas, thoughts, comments, musings, none of which necessarily are the one true answer, but all of which are required steps that together lead us to the solution.

Apart from this fresh take on discussions, Discourse comes with a full set of features that make interacting with each other very easy.

Login using your Fedora account

User accounts on the new AskFedora are managed by the Fedora Account System only. A Fedora account gives you access to all of the infrastructure used by the Fedora community. This includes:

This decision was made mainly to combat the spam and security issues previously encountered with the various social media login services.

So, unlike the current Askbot setup, where you could log in using different social media services, you will need to create a Fedora account to use the new Discourse-based instance. Luckily, creating a Fedora account is very easy!

  1. Go to
  2. Choose a username, then enter your name, a valid e-mail address, and a security question.
  3. Do the “captcha” to confirm that you are indeed a human, and confirm that you are older than 13 years of age.

That’s it! You now have a Fedora account.

Get started!

If you are using the platform for the first time, you should start with the “New users! Start here!” category. Here, we’ve put short summaries on how to use the platform effectively. This includes information on how to use Discourse, its many features that make it a great platform, notes on how to ask and respond to queries, subscribing and unsubscribing from categories, and lots more.

For the convenience of the global Fedora community, these summaries are available in all the languages that the community supports. So, please do take a minute to go over these introductory posts.

Discuss, learn, teach, have fun!

Please login, ask and discuss your queries and help each other out. As always, suggestions and feedback are always welcome. You can post these in the “Site feedback” category.

As a last note, please do remember to “be excellent to each other.” The Fedora Code of Conduct applies to all of us!


The Fedora community does everything together, so many volunteers joined forces and gave their resources to make this possible. We are most grateful to the Askbot developers who have hosted AskFedora until now, the Discourse team for hosting it now, all the community members who helped set it up, and everyone who helps keep the Fedora community ticking along!

Use udica to build SELinux policy for containers

Monday 6th of May 2019 08:00:45 AM

While modern IT environments move towards Linux containers, the need to secure these environments is as relevant as ever. Containers are a process isolation technology. While containers can be a defense mechanism, they only excel when combined with SELinux.

Fedora SELinux engineering built a new standalone tool, udica, to generate SELinux policy profiles for containers by automatically inspecting them. This article focuses on why udica is needed in the container world, and how it makes SELinux and containers work better together. You’ll find examples of SELinux separation for containers that let you avoid turning protection off because the generic SELinux type container_t is too tight. With udica you can easily customize the policy with limited SELinux policy writing skills.

SELinux technology

SELinux is a security technology that brings proactive security to Linux systems. It’s a labeling system that assigns a label to all subjects (processes and users) and objects (files, directories, sockets, etc.). These labels are then used in a security policy that controls access throughout the system. It’s important to mention that what’s not allowed in an SELinux security policy is denied by default. The policy rules are enforced by the kernel. This security technology has been in use on Fedora for several years. A real example of such a rule is:

allow httpd_t httpd_log_t: file { append create getattr ioctl lock open read setattr };

The rule allows any process labeled as httpd_t to create, append, read and lock files labeled as httpd_log_t. Using the ps command, you can list all processes with their labels:

$ ps -efZ | grep httpd
system_u:system_r:httpd_t:s0 root 13911 1 0 Apr14 ? 00:05:14 /usr/sbin/httpd -DFOREGROUND

To see which objects are labeled as httpd_log_t, use semanage:

# semanage fcontext -l | grep httpd_log_t
/var/log/httpd(/.*)? all files system_u:object_r:httpd_log_t:s0
/var/log/nginx(/.*)? all files system_u:object_r:httpd_log_t:s0

The SELinux security policy for Fedora is shipped in the selinux-policy RPM package.

SELinux vs. containers

In Fedora, the container-selinux RPM package provides a generic SELinux policy for all containers started by engines like podman or docker. Its main purposes are to protect the host system against a container process, and to separate containers from each other. For instance, containers confined by SELinux with the process type container_t can only read/execute files in /usr and write to files with the container_file_t type on the host file system. To prevent attacks by containers on each other, Multi-Category Security (MCS) is used.

Using only one generic policy for containers is problematic, because of the huge variety of container usage. On one hand, the default container type (container_t) is often too strict. For example:

  • Fedora SilverBlue needs containers to read/write a user’s home directory
  • Fluentd project needs containers to be able to read logs in the /var/log directory

On the other hand, the default container type could be too loose for certain use cases:

  • It has no SELinux network controls — all container processes can bind to any network port
  • It has no SELinux control on Linux capabilities — all container processes can use all capabilities

There is one solution to handle both use cases: write a custom SELinux security policy for the container. This can be tricky, because SELinux expertise is required. For this purpose, the udica tool was created.

Introducing udica

Udica generates SELinux security profiles for containers. Its concept is based on the “block inheritance” feature inside the common intermediate language (CIL) supported by SELinux userspace. The tool creates a policy that combines:

  • Rules inherited from specified CIL blocks (templates), and
  • Rules discovered by inspection of container JSON file, which contains mountpoints and ports definitions

You can load the final policy immediately, or move it to another system to load into the kernel. Here’s an example, using a container that:

  • Mounts /home as read only
  • Mounts /var/spool as read/write
  • Exposes port tcp/21

The container starts with this command:

# podman run -v /home:/home:ro -v /var/spool:/var/spool:rw -p 21:21 -it fedora bash

The default container type (container_t) doesn’t allow any of these three actions. To prove it, you can use the sesearch tool to check whether the allow rules are present on the system:

# sesearch -A -s container_t -t home_root_t -c dir -p read

There’s no allow rule present that lets a process labeled as container_t access a directory labeled home_root_t (like the /home directory). The same situation occurs with /var/spool, which is labeled var_spool_t:

# sesearch -A -s container_t -t var_spool_t -c dir -p read

On the other hand, the default policy completely allows network access.

# sesearch -A -s container_t -t port_type -c tcp_socket
allow container_net_domain port_type:tcp_socket { name_bind name_connect recv_msg send_msg };
allow sandbox_net_domain port_type:tcp_socket { name_bind name_connect recv_msg send_msg };

Securing the container

It would be great to restrict this access and allow the container to bind just to TCP port 21 or to ports with the same label. Imagine you find an example container using podman ps whose ID is 37a3635afb8f:

# podman ps -q

You can now inspect the container and pass the inspection file to the udica tool. The name for the new policy is my_container.

# podman inspect 37a3635afb8f > container.json
# udica -j container.json my_container
Policy my_container with container id 37a3635afb8f created!

Please load these modules using:
# semodule -i my_container.cil /usr/share/udica/templates/{base_container.cil,net_container.cil,home_container.cil}

Restart the container with: "--security-opt label=type:my_container.process" parameter

That’s it! You just created a custom SELinux security policy for the example container. Now you can load this policy into the kernel and make it active. The udica output above even tells you the command to use:

# semodule -i my_container.cil /usr/share/udica/templates/{base_container.cil,net_container.cil,home_container.cil}

Now you must restart the container to allow the container engine to use the new custom policy:

# podman run --security-opt label=type:my_container.process -v /home:/home:ro -v /var/spool:/var/spool:rw -p 21:21 -it fedora bash

The example container is now running in the newly created my_container.process SELinux process type:

# ps -efZ | grep my_container.process
unconfined_u:system_r:container_runtime_t:s0-s0:c0.c1023 root 2275 434 1 13:49 pts/1 00:00:00 podman run --security-opt label=type:my_container.process -v /home:/home:ro -v /var/spool:/var/spool:rw -p 21:21 -it fedora bash
system_u:system_r:my_container.process:s0:c270,c963 root 2317 2305 0 13:49 pts/0 00:00:00 bash

Seeing the results

The command sesearch now shows allow rules for accessing /home and /var/spool:

# sesearch -A -s my_container.process -t home_root_t -c dir -p read
allow my_container.process home_root_t:dir { getattr ioctl lock open read search };
# sesearch -A -s my_container.process -t var_spool_t -c dir -p read
allow my_container.process var_spool_t:dir { add_name getattr ioctl lock open read remove_name search write }

The new custom SELinux policy also allows my_container.process to bind only to TCP/UDP ports labeled the same as TCP port 21:

# semanage port -l | grep 21 | grep ftp
ftp_port_t tcp 21, 989, 990
# sesearch -A -s my_container.process -c tcp_socket -p name_bind
allow my_container.process ftp_port_t:tcp_socket name_bind;

Conclusion

The udica tool helps you create SELinux policies for containers based on an inspection file without any SELinux expertise required. Now you can increase the security of containerized environments. Sources are available on GitHub, and an RPM package is available in Fedora repositories for Fedora 28 and later.

Photo by Samuel Zeller on Unsplash.

Mirror your System Drive using Software RAID

Friday 3rd of May 2019 08:00:57 AM

Nothing lasts forever. When it comes to the hardware in your PC, most of it can easily be replaced. There is, however, one special-case hardware component in your PC that is not as easy to replace as the rest — your hard disk drive.

Drive Mirroring

Your hard drive stores your personal data. Some of your data can be backed up automatically by scheduled backup jobs. But those jobs scan the files to be backed up for changes, and trying to scan an entire drive would be very resource intensive. Also, anything that you’ve changed since your last backup will be lost if your drive fails. Drive mirroring is a better way to maintain a secondary copy of your entire hard drive. With drive mirroring, a secondary copy of all the data on your hard drive is maintained in real time.

An added benefit of live mirroring your hard drive to a secondary hard drive is that it can increase your computer’s performance. Because disk I/O is one of your computer’s main performance bottlenecks, the performance improvement can be quite significant.

Note that a mirror is not a backup. It only protects your data from being lost if one of your physical drives fails. Types of failures that drive mirroring, by itself, does not protect against include:

Some of the above can be addressed by other file system features that can be used in conjunction with drive mirroring. File system features that address the above types of failures include:

This guide will demonstrate one method of mirroring your system drive using the Multiple Disk and Device Administration (mdadm) toolset. Just for fun, this guide will show how to do the conversion without using any extra boot media (CDs, USB drives, etc). For more about the concepts and terminology related to the multiple device driver, you can skim the md man page:

$ man md

The Procedure
  1. Use sgdisk to (re)partition the extra drive that you have added to your computer:
    $ sudo -i
    # MY_DISK_1=/dev/sdb
    # sgdisk --zap-all $MY_DISK_1
    # test -d /sys/firmware/efi/efivars || sgdisk -n 0:0:+1MiB -t 0:ef02 -c 0:grub_1 $MY_DISK_1
    # sgdisk -n 0:0:+1GiB -t 0:ea00 -c 0:boot_1 $MY_DISK_1
    # sgdisk -n 0:0:+4GiB -t 0:fd00 -c 0:swap_1 $MY_DISK_1
    # sgdisk -n 0:0:0 -t 0:fd00 -c 0:root_1 $MY_DISK_1

    – If the drive that you will be using for the second half of the mirror in step 12 is smaller than this drive, then you will need to adjust down the size of the last partition so that the total size of all the partitions is not greater than the size of your second drive.
    – A few of the commands in this guide are prefixed with a test for the existence of an efivars directory. This is necessary because those commands are slightly different depending on whether your computer is BIOS-based or UEFI-based.

  2. Use mdadm to create RAID devices that use the new partitions to store their data:
    # mdadm --create /dev/md/boot --homehost=any --metadata=1.0 --level=1 --raid-devices=2 /dev/disk/by-partlabel/boot_1 missing
    # mdadm --create /dev/md/swap --homehost=any --metadata=1.0 --level=1 --raid-devices=2 /dev/disk/by-partlabel/swap_1 missing
    # mdadm --create /dev/md/root --homehost=any --metadata=1.0 --level=1 --raid-devices=2 /dev/disk/by-partlabel/root_1 missing
    # cat << END > /etc/mdadm.conf
MAILADDR root
AUTO +all
DEVICE partitions
END
    # mdadm --detail --scan >> /etc/mdadm.conf

    – The missing parameter tells mdadm to create an array with a missing member. You will add the other half of the mirror in step 14.
    – You should configure sendmail so you will be notified if a drive fails.
    – You can configure Evolution to monitor a local mail spool.

  3. Use dracut to update the initramfs:
    # dracut -f --add mdraid --add-drivers xfs

    – Dracut will include the /etc/mdadm.conf file you created in the previous section in your initramfs unless you build your initramfs with the hostonly option set to no. If you build your initramfs with the hostonly option set to no, then you should either manually include the /etc/mdadm.conf file, manually specify the UUIDs of the RAID arrays to assemble at boot time with the rd.md.uuid kernel parameter, or specify the rd.auto kernel parameter to have all RAID arrays automatically assembled and started at boot time. This guide will demonstrate the rd.auto option since it is the most generic.
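    – For example (a sketch, not part of the original procedure, and assuming your kernel entries are managed by grubby), you could build a generic initramfs that still carries the mdadm configuration, and add the rd.auto kernel parameter to every boot entry:

    # dracut -f --no-hostonly --install /etc/mdadm.conf
    # grubby --update-kernel=ALL --args="rd.auto"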

  4. Format the RAID devices:
    # mkfs -t vfat /dev/md/boot
    # mkswap /dev/md/swap
    # mkfs -t xfs /dev/md/root

    – The new Boot Loader Specification states “if the OS is installed on a disk with GPT disk label, and no ESP partition exists yet, a new suitably sized (let’s say 500MB) ESP should be created and should be used as $BOOT” and “$BOOT must be a VFAT (16 or 32) file system”.

  5. Reboot and set the rd.auto, rd.break and single kernel parameters:
    # reboot

    – You may need to set your root password before rebooting so that you can get into single-user mode in step 7.
    – See “Making Temporary Changes to a GRUB 2 Menu” for directions on how to set kernel parameters on computers that use the GRUB 2 boot loader.

  6. Use the dracut shell to copy the root file system:
    # mkdir /newroot
    # mount /dev/md/root /newroot
    # shopt -s dotglob
    # cp -ax /sysroot/* /newroot
    # rm -rf /newroot/boot/*
    # umount /newroot
    # exit

    – The dotglob flag is set for this bash session so that the * wildcard character will also match hidden files.
    – Files are removed from the boot directory because they will be copied to a separate partition in the next step.
    – This copy operation is being done from the dracut shell to ensure that no processes are accessing the files while they are being copied.

  7. Use single-user mode to copy the non-root file systems:
    # mkdir /newroot
    # mount /dev/md/root /newroot
    # mount /dev/md/boot /newroot/boot
    # shopt -s dotglob
    # cp -Lr /boot/* /newroot/boot
    # test -d /newroot/boot/efi/EFI && mv /newroot/boot/efi/EFI/* /newroot/boot/efi && rmdir /newroot/boot/efi/EFI
    # test -d /sys/firmware/efi/efivars && ln -sfr /newroot/boot/efi/fedora/grub.cfg /newroot/etc/grub2-efi.cfg
    # cp -ax /home/* /newroot/home
    # exit

    – It is OK to run these commands in the dracut shell shown in the previous section instead of doing it from single-user mode. I’ve demonstrated using single-user mode to avoid having to explain how to mount the non-root partitions from the dracut shell.
    – The parameters being passed to the cp command for the boot directory are a little different because the VFAT file system doesn’t support symbolic links or Unix-style file permissions.
    – In rare cases, the rd.break parameter is known to cause LVM to fail to assemble due to a race condition. If you see errors about your swap or home partition failing to mount when entering single-user mode, simply try again by repeating step 5 but omitting the rd.break parameter so that you will go directly to single-user mode.

  8. Update fstab on the new drive:
    # cat << END > /newroot/etc/fstab
    /dev/md/root / xfs defaults 0 0
    /dev/md/boot /boot vfat defaults 0 0
    /dev/md/swap swap swap defaults 0 0
    END
  9. Configure the boot loader on the new drive:
    # NEW_GRUB_CMDLINE_LINUX=$(cat /etc/default/grub | sed -n 's/^GRUB_CMDLINE_LINUX="\(.*\)"/\1/ p')
    # NEW_GRUB_CMDLINE_LINUX=${NEW_GRUB_CMDLINE_LINUX//rd.lvm.*([^ ])}
    # NEW_GRUB_CMDLINE_LINUX=${NEW_GRUB_CMDLINE_LINUX//resume=*([^ ])}
    # NEW_GRUB_CMDLINE_LINUX+=" selinux=0"
    # sed -i "/^GRUB_CMDLINE_LINUX=/s/=.*/=\"$NEW_GRUB_CMDLINE_LINUX\"/" /newroot/etc/default/grub

    – You can re-enable SELinux after this procedure is complete, but you will have to relabel your file system first.
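    – One common way to trigger the relabel on the next boot (a general Fedora technique; the article itself doesn’t prescribe one) is to create the /.autorelabel flag file after removing selinux=0 from your kernel parameters:

    # touch /.autorelabel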

  10. Install the boot loader on the new drive:
    # sed -i '/^GRUB_DISABLE_OS_PROBER=.*/d' /newroot/etc/default/grub
    # echo "GRUB_DISABLE_OS_PROBER=true" >> /newroot/etc/default/grub
    # MY_DISK_1=$(mdadm --detail /dev/md/boot | grep active | grep -m 1 -o "/dev/sd.")
    # for i in dev dev/pts proc sys run; do mount -o bind /$i /newroot/$i; done
    # chroot /newroot env MY_DISK_1=$MY_DISK_1 bash --login
    # test -d /sys/firmware/efi/efivars || MY_GRUB_DIR=/boot/grub2
    # test -d /sys/firmware/efi/efivars && MY_GRUB_DIR=$(find /boot/efi -type d -name 'fedora' -print -quit)
    # test -e /usr/sbin/grub2-switch-to-blscfg && grub2-switch-to-blscfg --grub-directory=$MY_GRUB_DIR
    # grub2-mkconfig -o $MY_GRUB_DIR/grub.cfg
    # test -d /sys/firmware/efi/efivars && test /boot/grub2/grubenv -nt $MY_GRUB_DIR/grubenv && cp /boot/grub2/grubenv $MY_GRUB_DIR/grubenv
    # test -d /sys/firmware/efi/efivars || grub2-install "$MY_DISK_1"
    # logout
    # for i in run sys proc dev/pts dev; do umount /newroot/$i; done
    # test -d /sys/firmware/efi/efivars && efibootmgr -c -d "$MY_DISK_1" -p 1 -l "$(find /newroot/boot -name shimx64.efi -printf '/%P\n' -quit | sed 's!/!\\!g')" -L "Fedora RAID Disk 1"

    – The grub2-switch-to-blscfg command is optional. It is only supported on Fedora 29+.
    – The cp command above should not be necessary, but there appears to be a bug in the current version of grub which causes it to write to $BOOT/grub2/grubenv instead of $BOOT/efi/fedora/grubenv on UEFI systems.
    – You can use the following command to verify the contents of the grub.cfg file right after running the grub2-mkconfig command above:

    # sed -n '/BEGIN .*10_linux/,/END .*10_linux/ p' $MY_GRUB_DIR/grub.cfg

    – You should see references to mdraid and mduuid in the output from the above command if the RAID array was detected properly.

  11. Boot off of the new drive:
    # reboot

    – How to select the new drive is system-dependent. It usually requires pressing one of the F12, F10, Esc or Del keys when you hear the System OK BIOS beep code.
    – On UEFI systems the boot loader on the new drive should be labeled “Fedora RAID Disk 1”.

  12. Remove all the volume groups and partitions from your old drive:
    # MY_DISK_2=/dev/sda
    # MY_VOLUMES=$(pvs | grep $MY_DISK_2 | awk '{print $2}' | tr "\n" " ")
    # test -n "$MY_VOLUMES" && vgremove $MY_VOLUMES
    # sgdisk --zap-all $MY_DISK_2

    WARNING: You want to make certain that everything is working properly on your new drive before you do this. A good way to verify that your old drive is no longer being used is to try booting your computer once without the old drive connected.
    – You can add another new drive to your computer instead of erasing your old one if you prefer.
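    – Before wiping, you can also sanity-check that the root file system is mounted from the RAID device and that the arrays are healthy (an optional check, not part of the original steps):

    # findmnt /
    # cat /proc/mdstat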

  13. Create new partitions on your old drive to match the ones on your new drive:
    # test -d /sys/firmware/efi/efivars || sgdisk -n 0:0:+1MiB -t 0:ef02 -c 0:grub_2 $MY_DISK_2
    # sgdisk -n 0:0:+1GiB -t 0:ea00 -c 0:boot_2 $MY_DISK_2
    # sgdisk -n 0:0:+4GiB -t 0:fd00 -c 0:swap_2 $MY_DISK_2
    # sgdisk -n 0:0:0 -t 0:fd00 -c 0:root_2 $MY_DISK_2

    – It is important that the partitions match in size and type. I prefer to use the parted command to display the partition table because it supports setting the display unit:

    # parted /dev/sda unit MiB print
    # parted /dev/sdb unit MiB print
  14. Use mdadm to add the new partitions to the RAID devices:
    # mdadm --manage /dev/md/boot --add /dev/disk/by-partlabel/boot_2
    # mdadm --manage /dev/md/swap --add /dev/disk/by-partlabel/swap_2
    # mdadm --manage /dev/md/root --add /dev/disk/by-partlabel/root_2
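    – The arrays begin synchronizing as soon as the partitions are added. You can watch the rebuild progress if you like (an optional check):

    # watch -n 5 cat /proc/mdstat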
  15. Install the boot loader on your old drive:
    # test -d /sys/firmware/efi/efivars || grub2-install "$MY_DISK_2"
    # test -d /sys/firmware/efi/efivars && efibootmgr -c -d "$MY_DISK_2" -p 1 -l "$(find /boot -name shimx64.efi -printf "/%P\n" -quit | sed 's!/!\\!g')" -L "Fedora RAID Disk 2"
  16. Use mdadm to test that email notifications are working:
    # mdadm --monitor --scan --oneshot --test

As soon as your drives have finished synchronizing, you should be able to select either drive when restarting your computer and you will receive the same live-mirrored operating system. If either drive fails, mdmonitor will send an email notification. Recovering from a drive failure is now simply a matter of swapping out the bad drive with a new one and running a few sgdisk and mdadm commands to re-create the mirrors (steps 13 through 15). You will no longer have to worry about losing any data if a drive fails!
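If you want to rehearse a recovery before a real failure, you can mark one half of a mirror as failed, remove it, and re-add it (an optional exercise, not part of the article’s procedure; the device name below is illustrative):

# mdadm --manage /dev/md/root --fail /dev/disk/by-partlabel/root_2
# mdadm --manage /dev/md/root --remove /dev/disk/by-partlabel/root_2
# mdadm --manage /dev/md/root --add /dev/disk/by-partlabel/root_2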

Video Demonstrations

  • Converting a UEFI PC to RAID1
  • Converting a BIOS PC to RAID1
  • TIP: Set the quality to 720p on the above videos for best viewing.

3 apps to manage personal finances in Fedora

Wednesday 1st of May 2019 08:00:18 AM

There are numerous services available on the web for managing your personal finances. Although they may be convenient, they also often mean leaving your most valuable personal data with a company you can’t monitor. Some people are comfortable with this level of trust.

Whether you are or not, you might be interested in an app you can maintain on your own system. This means your data never has to leave your own computer if you don’t want it to. One of these three apps might be what you’re looking for.


HomeBank

HomeBank is a fully featured way to manage multiple accounts. It’s easy to set up and keep updated. It has multiple ways to categorize and graph income and liabilities so you can see where your money goes. It’s available through the official Fedora repositories.

A simple account set up in HomeBank with a few transactions.

To install HomeBank, open the Software app, search for HomeBank, and select the app. Then click Install to add it to your system. HomeBank is also available via a Flatpak.


KMyMoney

KMyMoney is a mature app that has been around for a long while. It has a robust set of features to help you manage multiple accounts, including assets, liabilities, taxes, and more. KMyMoney includes a full set of tools for managing investments and making forecasts. It also sports a huge set of reports for seeing how your money is doing.

A subset of the many reports available in KMyMoney.

To install, use a software center app, or use the command line:

$ sudo dnf install kmymoney

GnuCash

One of the most venerable free GUI apps for personal finance is GnuCash. GnuCash is not just for personal finances. It also has functions for managing income, assets, and liabilities for a business. That doesn’t mean you can’t use it for managing just your own accounts. Check out the online tutorial and guide to get started.

Checking account records shown in GnuCash.

Open the Software app, search for GnuCash, and select the app. Then click Install to add it to your system. Or use dnf install as above to install the gnucash package.

It’s now available via Flathub, which makes installation easy. If you don’t have Flathub support, check out this article on the Fedora Magazine for how to use it. Then you can also install GnuCash with the flatpak command in a terminal.
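For example, assuming Flathub is already configured, this should install it (org.gnucash.GnuCash is the application ID GnuCash uses on Flathub):

$ flatpak install flathub org.gnucash.GnuCash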

Photo by Fabian Blank on Unsplash.

Upgrading Fedora 29 to Fedora 30

Tuesday 30th of April 2019 08:25:55 PM

Fedora 30 is available now. You’ll likely want to upgrade your system to the latest version of Fedora. Fedora Workstation has a graphical upgrade method. Alternatively, Fedora offers a command-line method for upgrading Fedora 29 to Fedora 30.

Upgrading Fedora 29 Workstation to Fedora 30

Soon after release time, a notification appears to tell you an upgrade is available. You can click the notification to launch the GNOME Software app. Or you can choose Software from GNOME Shell.

Choose the Updates tab in GNOME Software and you should see a screen informing you that Fedora 30 is Now Available.

If you don’t see anything on this screen, try using the reload button at the top left. It may take some time after release for all systems to be able to see an upgrade available.

Choose Download to fetch the upgrade packages. You can continue working until you reach a stopping point, and the download is complete. Then use GNOME Software to restart your system and apply the upgrade. Upgrading takes time, so you may want to grab a coffee and come back to the system later.

Using the command line

If you’ve upgraded from past Fedora releases, you are likely familiar with the dnf upgrade plugin. This method is the recommended and supported way to upgrade from Fedora 29 to Fedora 30. Using this plugin will make your upgrade to Fedora 30 simple and easy.

1. Update software and back up your system

Before you do anything, you will want to make sure you have the latest software for Fedora 29 before beginning the upgrade process. To update your software, use GNOME Software or enter the following command in a terminal.

sudo dnf upgrade --refresh

Additionally, make sure you back up your system before proceeding. For help with taking a backup, see the backup series on the Fedora Magazine.

2. Install the DNF plugin

Next, open a terminal and type the following command to install the plugin:

sudo dnf install dnf-plugin-system-upgrade

3. Start the update with DNF

Now that your system is up-to-date, backed up, and you have the DNF plugin installed, you can begin the upgrade by using the following command in a terminal:

sudo dnf system-upgrade download --releasever=30

This command will begin downloading all of the upgrades for your machine locally to prepare for the upgrade. If you have issues when upgrading because of packages without updates, broken dependencies, or retired packages, add the --allowerasing flag when typing the above command. This will allow DNF to remove packages that may be blocking your system upgrade.
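For example:

sudo dnf system-upgrade download --releasever=30 --allowerasing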

4. Reboot and upgrade

Once the previous command finishes downloading all of the upgrades, your system will be ready for rebooting. To boot your system into the upgrade process, type the following command in a terminal:

sudo dnf system-upgrade reboot

Your system will restart after this. Many releases ago, the fedup tool would create a new option on the kernel selection / boot screen. With the dnf-plugin-system-upgrade package, your system reboots into the current kernel installed for Fedora 29; this is normal. Shortly after the kernel selection screen, your system begins the upgrade process.

Now might be a good time for a coffee break! Once it finishes, your system will restart and you’ll be able to log in to your newly upgraded Fedora 30 system.

Resolving upgrade problems

On occasion, there may be unexpected issues when you upgrade your system. If you experience any issues, please visit the DNF system upgrade wiki page for more information on troubleshooting in the event of a problem.

If you are having issues upgrading and have third-party repositories installed on your system, you may need to disable these repositories while you are upgrading. For support with repositories not provided by Fedora, please contact the providers of the repositories.

What’s new in Fedora 30 Workstation

Tuesday 30th of April 2019 02:05:50 PM

Fedora 30 Workstation is the latest groundbreaking release of our free, leading-edge operating system. You can download it from the official website here right now. There are several new and noteworthy changes in Fedora Workstation. Read more details below.

GNOME 3.32

Fedora 30 Workstation includes the latest release of this simple, beautiful desktop environment for users of all types. There are numerous improvements throughout GNOME 3.32, including:

  • A refreshed visual style with buttons and switches that are easier to identify and use
  • Completely refreshed icons for applications
  • Consistent user icons across the desktop
  • Snappier performance thanks to fixes and enhancements in the core GNOME libraries
  • An Applications panel that controls permissions, to make use of Flatpak apps easier
  • …and much more!

Do you want the full details of everything in GNOME 3.32? Visit the release notes for even more community provided goodness.


Silverblue

You can also try Fedora Silverblue — it’s all the features of Workstation combined with the rpm-ostree features of Fedora Atomic. Worry-free upgrades (with backouts) are just one of the benefits of this technology. You can also install your favorite Flatpak or RPM packaged apps on top.

Silverblue continues to develop now and in future releases. Learn how you can contribute by visiting the Silverblue team’s website.

Announcing the release of Fedora 30

Monday 29th of April 2019 01:36:02 PM

It seems like it was just six months ago that we announced Fedora 29, and here we are again. Today, we announce our next operating system release. Even though it went so quickly, a lot has happened in the last half year, and you’ll see the results in Fedora 30.

If you’re impatient, go to https://getfedora.org/ now. For details, read on.

Variants and more

Fedora Editions are targeted outputs geared toward specific “showcase” uses. Since we first started using this concept in the Fedora 21 release, the needs of the community have continued to evolve. As part of Fedora 30, we’re combining cloud and server into the Fedora Server edition. We’re bringing in Fedora CoreOS to replace Fedora Atomic Host as our container-focused deliverable in the Fedora 30 timeframe — stay tuned for that. The Fedora Workstation edition continues to focus on delivering the latest in open source desktop tools.

Of course, we produce more than just the editions. Fedora Spins and Labs target a variety of audiences and use cases, including the Internet of Things. And, we haven’t forgotten our alternate architectures, ARM AArch64, Power, and S390x.

Fedora Workstation features GNOME 3.32 — the latest release of this popular desktop environment. GNOME 3.32 features an updated visual style, including the user interface, the icons, and the desktop itself. New to Fedora Server are Linux System Roles — a collection of roles and modules executed by Ansible to assist Linux admins in the configuration of common GNU/Linux subsystems.

No matter what variant of Fedora you use, you’re getting the latest the open source world has to offer. GCC 9, Bash 5.0, and PHP 7.3 are among the many updated packages in Fedora 30. We’re excited for you to try it out. So go to https://getfedora.org/ and download it now. Or if you’re already running a Fedora release, follow the easy upgrade instructions.

Along with the release of Fedora 30, we’re moving our “Ask Fedora” support forum to the Discourse platform. Log in to Ask Fedora to try it out and watch for a Fedora Magazine article about it soon.

As always, thanks to the thousands of people who contributed in some way to the Fedora Project in this release cycle, and to the Fedora heroes who helped get this release out on schedule even with so much else going on. If you’re in Boston for Red Hat Summit next week, whether you are one of these contributors, would like to be one in the future, or just a friend, make sure to visit the Fedora booth in Community Central!

Awk utility in Fedora

Monday 29th of April 2019 08:00:09 AM

Fedora provides awk as part of its default installation, across all of its editions, including immutable ones like Silverblue. But you may be asking, what is awk and why would you need it?

Awk is a data driven programming language that acts when it matches a pattern. On Fedora, and most other distributions, GNU awk or gawk is used. Read on for more about this language and how to use it.

A brief history of awk

Awk began at Bell Labs in 1977. Its name is an acronym from the initials of the designers: Alfred V. Aho, Peter J. Weinberger, and Brian W. Kernighan.

The specification for awk in the POSIX Command Language and Utilities standard further clarified the language. Both the gawk designers and the original awk designers at Bell Laboratories provided feedback for the POSIX specification.

From The GNU Awk User’s Guide

For a more in-depth look at how awk/gawk ended up being as powerful and useful as it is, follow the link above. Numerous individuals have contributed to the current state of gawk. Among those are:

  • Arnold Robbins and David Trueman, the creators of gawk
  • Michael Brennan, the creator of mawk, which later was merged with gawk
  • Jurgen Kahrs, who added networking capabilities to gawk in 1997
  • John Hague, who rewrote the gawk internals and added an awk-level debugger in 2011
Using awk

The following sections show various ways of using awk in Fedora.

At the command line

The simplest way to invoke awk is at the command line. You can search a text file for a particular pattern, and if found, print out the line(s) of the file that match the pattern anywhere. As an example, use cat to take a look at the command history file in your home directory:

$ cat ~/.bash_history

There are probably many lines scrolling by right now.

Awk helps with this type of file quite easily. Instead of printing the entire file out to the terminal like cat, you can use awk to find something of specific interest. For this example, type the following at the command line if you’re running a standard Fedora edition:

$ awk '/dnf/' ~/.bash_history

If you’re running Silverblue, try this instead:

$ awk '/rpm-ostree/' ~/.bash_history

In both cases, more data likely appears than what you really want. That’s no problem for awk since it can accept regular expressions. Using the previous example, you can change the pattern to more closely match search requirements of wanting to know about installs only. Try changing the search pattern to one of these:

$ awk '/rpm-ostree install/' ~/.bash_history
$ awk '/dnf install/' ~/.bash_history

All the entries of your bash command line history that contain the specified pattern anywhere in the line will appear. Awk works on one line of a data file at a time. It matches a pattern, performs an action, then moves to the next line, until the end of file (EOF) is reached.

From an awk program

Using awk at the command line as above is not much different than piping output to grep, like this:

$ cat .bash_history | grep 'dnf install'

The end result of printing to standard output (stdout) is the same with both methods.

Awk is a programming language, and the command awk is an interpreter of that language. The real power and flexibility of awk is that you can make programs with it, and combine them with shell scripts to create even more powerful programs. For more feature-rich development with awk, you can also incorporate C or C++ code using Dynamic-Extensions.

Next, to show the power of awk, let’s make a couple of program files to print the header and draw five numbers for the first row of a bingo card. To do this we’ll create two awk program files.

The first file prints out the header of the bingo card. For this example it is called bingo-title.awk. Use your favorite editor to save this text as that file name:

    print "B\tI\tN\tG\tO"

Now the title program is ready. You could try it out with this command:

$ awk -f bingo-title.awk

The program prints the word BINGO, with a tab space (\t) between the characters. For the number selection, let’s use one of awk’s builtin numeric functions, rand(), together with the for control statement.

The title of the second awk program is bingo-num.awk. Enter the following into your favorite editor and save with that file name:

@include "bingo-title.awk"
    for (i = 1; i < = 5; i++) {
    b = int(rand() * 15) + (15*(i-1))
    printf "%s\t", b

The @include statement in the file tells the interpreter to process the included file first. In this case the interpreter processes the bingo-title.awk file, so the title prints out first.

Running the test program

Now enter the command to pick a row of bingo numbers:

$ awk -f bingo-num.awk

Output appears similar to the following. Note that the rand() function in awk is not ideal for truly random numbers. It’s used here for example purposes only.

$ awk -f bingo-num.awk
B   I   N   G   O
13  23  34  53  71

In the example, we created two programs with only BEGIN sections that used actions to manipulate data generated from within the awk program. In order to satisfy the rules of Bingo, more work is needed to achieve the desired results. The reader is encouraged to fix the programs so they can reliably pick bingo numbers; the awk function srand() is a good place to look for how that could be done.
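For instance, this one-liner (an illustration, not one of the article’s programs) uses srand() to seed the generator from the current time, so separate runs generally produce different numbers:

$ awk 'BEGIN { srand(); print int(rand() * 15) + 1 }'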

Final examples

Awk can be useful even for mundane daily search tasks that you encounter, like listing all the flatpaks from org.gnome on the Flathub repository (provided you have the Flathub repository set up). The command to do that would be:

$ flatpak remote-ls flathub --system | awk /org.gnome/

A listing appears that shows all output from remote-ls that matches the org.gnome pattern. To see flatpaks already installed from org.gnome, enter this command:

$ flatpak list --system | awk /org.gnome/

Awk is a powerful and flexible programming language that fills a niche with text file manipulation exceedingly well.

Automate backups with restic and systemd

Thursday 25th of April 2019 08:00:36 AM

Timely backups are important. So much so that backing up software is a common topic of discussion, even here on the Fedora Magazine. This article demonstrates how to automate backups with restic using only systemd unit files.

For an introduction to restic, be sure to check out our article Use restic on Fedora for encrypted backups. Then read on for more details.

Two systemd services are required in order to automate taking snapshots and keeping data pruned. The first service runs the backup command at a regular frequency. The second service takes care of data pruning.

If you’re not familiar with systemd at all, there’s never been a better time to learn. Check out the series on systemd here at the Magazine, starting with this primer on unit files:

systemd unit file basics

If you haven’t installed restic already, note it’s in the official Fedora repositories. To install it, use this command with sudo:

$ sudo dnf install restic

Backup

First, create the ~/.config/systemd/user/restic-backup.service file. Copy and paste the text below into the file for best results.

[Unit]
Description=Restic backup service

[Service]
Type=oneshot
ExecStart=restic backup --verbose --one-file-system --tag systemd.timer $BACKUP_EXCLUDES $BACKUP_PATHS
ExecStartPost=restic forget --verbose --tag systemd.timer --group-by "paths,tags" --keep-daily $RETENTION_DAYS --keep-weekly $RETENTION_WEEKS --keep-monthly $RETENTION_MONTHS --keep-yearly $RETENTION_YEARS
EnvironmentFile=%h/.config/restic-backup.conf

This service references an environment file in order to load secrets (such as RESTIC_PASSWORD). Create the ~/.config/restic-backup.conf file. Copy and paste the content below for best results. This example uses BackBlaze B2 buckets. Adjust the ID, key, repository, and password values accordingly.

BACKUP_EXCLUDES="--exclude-file /home/rupert/.restic_excludes --exclude-if-present .exclude_from_backup"

Now that the service is installed, reload systemd: systemctl --user daemon-reload. Try running the service manually to create a backup: systemctl --user start restic-backup.
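To verify that snapshots are being created, you can list them (a quick check; the set -a/set +a pair simply exports the variables from the environment file created above):

$ set -a; source ~/.config/restic-backup.conf; set +a
$ restic snapshots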

Because the service is a oneshot, it will run once and exit. After verifying that the service runs and creates snapshots as desired, set up a timer to run this service regularly. For example, to run the restic-backup.service daily, create ~/.config/systemd/user/restic-backup.timer as follows. Again, copy and paste this text:

[Unit]
Description=Backup with restic daily

[Timer]
OnCalendar=daily
# Persistent=true runs a missed backup at the next opportunity (optional)
Persistent=true

[Install]
WantedBy=timers.target

Enable it by running this command:

$ systemctl --user enable --now restic-backup.timer

Prune

While the main service runs the forget command to only keep snapshots within the keep policy, the data is not actually removed from the restic repository. The prune command inspects the repository and current snapshots, and deletes any data not associated with a snapshot. Because prune can be a time-consuming process, it is not necessary to run every time a backup is run. This is the perfect scenario for a second service and timer. First, create the file ~/.config/systemd/user/restic-prune.service by copying and pasting this text:

[Unit]
Description=Restic backup service (data pruning)

[Service]
Type=oneshot
ExecStart=restic prune
EnvironmentFile=%h/.config/restic-backup.conf

Similarly to the main restic-backup.service, restic-prune is a oneshot service and can be run manually. Once the service has been set up, create and enable a corresponding timer at ~/.config/systemd/user/restic-prune.timer:

[Unit]
Description=Prune data from the restic repository monthly

[Timer]
OnCalendar=monthly
Persistent=true

[Install]
WantedBy=timers.target
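Enable it just as you enabled the backup timer:

$ systemctl --user enable --now restic-prune.timer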

That’s it! Restic will now run daily and prune data monthly.

Photo by Samuel Zeller on Unsplash.

2 new apps for music tweakers on Fedora Workstation

Monday 22nd of April 2019 08:00:06 AM

Linux operating systems are great for making unique customizations and tweaks to make your computer work better for you. For example, the i3 window manager encourages users to think about the different components and pieces that make up the modern Linux desktop.

Fedora has two new packages of interest for music tweakers: mpris-scrobbler and playerctl. mpris-scrobbler tracks your music listening history on a music-tracking service like Last.fm and/or ListenBrainz. playerctl is a command-line music player controller.

mpris-scrobbler records your music listening trends

mpris-scrobbler is a CLI application to submit the play history of your music to a service like Last.fm, Libre.fm, or ListenBrainz. It listens on the MPRIS D-Bus interface to detect what’s playing. It connects with several different music clients like spotify-client, vlc, audacious, bmp, cmus, and others.

A “last week in music” report, generated from user-submitted listening history.

Install and configure mpris-scrobbler

mpris-scrobbler is available for Fedora 28 or later, as well as the EPEL 7 repositories. Run the following command in a terminal to install it:

sudo dnf install mpris-scrobbler

Once it is installed, use systemctl to start and enable the service. The following command starts mpris-scrobbler and always starts it after a system reboot:

systemctl --user enable --now mpris-scrobbler.service

Submit plays to ListenBrainz

This article explains how to link mpris-scrobbler with a ListenBrainz account. To use Last.fm or Libre.fm, see the upstream documentation.

To submit plays to a ListenBrainz server, you need a ListenBrainz API token. If you have an account, get the token from your profile settings page. When you have a token, run this command to authenticate with your ListenBrainz API token:

$ mpris-scrobbler-signon token listenbrainz
Token for

Finally, test it by playing a song in your preferred music client on Fedora. The songs you play appear on your ListenBrainz profile.

Basic statistics and play history from a user profile on ListenBrainz. The current track is playing on a Fedora Workstation laptop with mpris-scrobbler.

playerctl controls your music playback

playerctl is a CLI tool to control any music player implementing the MPRIS D-Bus interface. You can easily bind it to keyboard shortcuts or media hotkeys. Here’s how to install it, use it in the command line, and create key bindings for the i3 window manager.

Install and use playerctl

playerctl is available for Fedora 28 or later. Run the following command in a terminal to install it:

sudo dnf install playerctl

Now that it’s installed, you can use it right away. Open your preferred music player on Fedora. Next, try the following commands to control playback from a terminal.

To play or pause the currently playing track:

playerctl play-pause

If you want to skip to the next track:

playerctl next

For a list of all running players:

playerctl -l

To play or pause what’s currently playing, only on the spotify-client app:

playerctl -p spotify play-pause

Create playerctl key bindings in i3wm

Do you use a window manager like the i3 window manager? Try using playerctl for key bindings. You can bind different commands to different key shortcuts, like the play/pause buttons on your keyboard. Look at the following i3wm config excerpt to see how:

# Media player controls
bindsym XF86AudioPlay exec "playerctl play-pause"
bindsym XF86AudioNext exec "playerctl next"
bindsym XF86AudioPrev exec "playerctl previous"

Try it out with your favorite music players

Need to know more about customizing the music listening experience on Fedora? The Fedora Magazine has you covered. Check out these five cool music players on Fedora:

5 cool music player apps

Bring order to your music library chaos by sorting and organizing it with MusicBrainz Picard:

Picard brings order to your music library

Photo by Frank Septillion on Unsplash.

4 cool new projects to try in COPR for April 2019

Friday 19th of April 2019 09:00:45 AM

COPR is a collection of personal repositories for software that isn’t carried in Fedora. Some software doesn’t conform to standards that allow easy packaging. Or it may not meet other Fedora standards, despite being free and open source. COPR can offer these projects outside the Fedora set of packages. Software in COPR isn’t supported by Fedora infrastructure or signed by the project. However, it can be a neat way to try new or experimental software.

Here’s a set of new and interesting projects in COPR.


Joplin

Joplin is a note-taking and to-do app. Notes are written in the Markdown format, and organized by sorting them into various notebooks and using tags.
Joplin can import notes from any Markdown source, as well as notes exported from Evernote. In addition to the desktop app, there’s an Android version with the ability to synchronize notes between them — using Nextcloud, Dropbox or other cloud services. Finally, there’s a browser extension for Chrome and Firefox to save web pages and screenshots.

Installation instructions

The repo currently provides Joplin for Fedora 29 and 30, and for EPEL 7. To install Joplin, use these commands with sudo:

sudo dnf copr enable taw/joplin
sudo dnf install joplin

Fzy

Fzy is a command-line utility for fuzzy string searching. It reads from standard input, sorts the lines based on what is most likely the sought-after text, and then prints the selected line. In addition to the command line, fzy can also be used within vim. You can try fzy in this online demo.

Installation instructions

The repo currently provides fzy for Fedora 29, 30, and Rawhide, and other distributions. To install fzy, use these commands:

sudo dnf copr enable lehrenfried/fzy
sudo dnf install fzy

Fondo

Fondo is a program for browsing many photographs from the Unsplash website. It has a simple interface that allows you to look for pictures of one of several themes, or all of them at once. You can then set a found picture as a wallpaper with a single click, or share it.

Installation instructions

The repo currently provides Fondo for Fedora 29, 30, and Rawhide. To install Fondo, use these commands:

sudo dnf copr enable atim/fondo
sudo dnf install fondo

YACReader

YACReader is a digital comic book reader that supports many comics and image formats, such as cbz, cbr, pdf and others. YACReader keeps track of reading progress, and can download comics’ information from Comic Vine. It also comes with a YACReader Library for organizing and browsing your comic book collection.

Installation instructions

The repo currently provides YACReader for Fedora 29, 30, and Rawhide. To install YACReader, use these commands:

sudo dnf copr enable atim/yacreader
sudo dnf install yacreader

Managing RAID arrays with mdadm

Wednesday 17th of April 2019 08:00:25 AM

Mdadm stands for Multiple Disk and Device Administration. It is a command line tool that can be used to manage software RAID arrays on your Linux PC. This article outlines the basics you need to get started with it.

The following five commands allow you to make use of mdadm’s most basic features:

  1. Create a RAID array:
    # mdadm --create /dev/md/test --homehost=any --metadata=1.0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  2. Assemble (and start) a RAID array:
    # mdadm --assemble /dev/md/test /dev/sda1 /dev/sdb1
  3. Stop a RAID array:
    # mdadm --stop /dev/md/test
  4. Delete a RAID array:
    # mdadm --zero-superblock /dev/sda1 /dev/sdb1
  5. Check the status of all assembled RAID arrays:
    # cat /proc/mdstat
Notes on features

mdadm --create

The create command shown above includes the following four parameters in addition to the create parameter itself and the device names:

  1. --homehost:
    By default, mdadm stores your computer’s name as an attribute of the RAID array. If your computer name does not match the stored name, the array will not automatically assemble. This feature is useful in server clusters that share hard drives because file system corruption usually occurs if multiple servers attempt to access the same drive at the same time. The name any is reserved and disables the homehost restriction.
  2. --metadata:
    mdadm reserves a small portion of each RAID device to store information about the RAID array itself. The metadata parameter specifies the format and location of the information. The value 1.0 indicates to use version-1 formatting and store the metadata at the end of the device.
  3. --level:
    The level parameter specifies how the data should be distributed among the underlying devices. Level 1 indicates each device should contain a complete copy of all the data. This level is also known as disk mirroring.
  4. --raid-devices:
    The raid-devices parameter specifies the number of devices that will be used to create the RAID array.

By using level=1 (mirroring) in combination with metadata=1.0 (store the metadata at the end of the device), you create a RAID1 array whose underlying devices appear normal if accessed without the aid of the mdadm driver. This is useful in the case of disaster recovery, because you can access the device even if the new system doesn’t support mdadm arrays. It’s also useful in case a program needs read-only access to the underlying device before mdadm is available. For example, the UEFI firmware in a computer may need to read the bootloader from the ESP before mdadm is started.

mdadm --assemble

The assemble command above fails if a member device is missing or corrupt. To force the RAID array to assemble and start when one of its members is missing, use the following command:

# mdadm --assemble --run /dev/md/test /dev/sda1

Other important notes

Avoid writing directly to any devices that underlie an mdadm RAID1 array. That causes the devices to become out-of-sync, and mdadm won’t know that they are out-of-sync. If you access a RAID1 array with a device that’s been modified out-of-band, you can cause file system corruption. If you modify a RAID1 device out-of-band and need to force the array to re-synchronize, delete the mdadm metadata from the device to be overwritten and then re-add it to the array as demonstrated below:

# mdadm --zero-superblock /dev/sdb1
# mdadm --assemble --run /dev/md/test /dev/sda1
# mdadm /dev/md/test --add /dev/sdb1

These commands completely overwrite the contents of sdb1 with the contents of sda1.

To specify any RAID arrays to automatically activate when your computer starts, create an /etc/mdadm.conf configuration file.
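For example, you can generate a working configuration from the currently assembled arrays (one possible approach):

# mdadm --detail --scan >> /etc/mdadm.conf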

For the most up-to-date and detailed information, check the man pages:

$ man mdadm
$ man mdadm.conf

The next article in this series will show a step-by-step guide on how to convert an existing single-disk Linux installation to a mirrored-disk installation that will continue running even if one of its hard drives suddenly stops working!

Kubernetes on Fedora IoT with k3s

Monday 15th of April 2019 08:00:14 AM

Fedora IoT is an upcoming Fedora edition targeted at the Internet of Things. It was introduced last year on Fedora Magazine in the article How to turn on an LED with Fedora IoT. Since then, it has continued to improve together with Fedora Silverblue to provide an immutable base operating system aimed at container-focused workflows.

Kubernetes is an immensely popular container orchestration system. It is perhaps most commonly used on powerful hardware handling huge workloads. However, it can also be used on lightweight devices such as the Raspberry Pi 3. Read on to find out how.

Why Kubernetes?

While Kubernetes is all the rage in the cloud, it may not be immediately obvious why you would run it on a small single board computer. But there are certainly reasons for doing it. First of all, it is a great way to learn and get familiar with Kubernetes without the need for expensive hardware. Second, because of its popularity, there are tons of applications that come pre-packaged for running in Kubernetes clusters. Not to mention the large community to provide help if you ever get stuck.

Last but not least, container orchestration may actually make things easier, even at the small scale of a home lab. This may not be apparent when tackling the learning curve, but these skills will help when dealing with any cluster in the future. It doesn’t matter if it’s a single node Raspberry Pi cluster or a large scale machine learning farm.

K3s – a lightweight Kubernetes

A “normal” installation of Kubernetes (if such a thing can be said to exist) is a bit on the heavy side for IoT. The recommendation is a minimum of 2 GB RAM per machine! However, there are plenty of alternatives, and one of the newcomers is k3s – a lightweight Kubernetes distribution.

K3s is quite special in that it has replaced etcd with SQLite for its key-value storage needs. Another thing to note is that k3s ships as a single binary instead of one per component. This diminishes the memory footprint and simplifies the installation. Thanks to the above, k3s should be able to run with just 512 MB of RAM, perfect for a small single board computer!

What you will need
  1. Fedora IoT in a virtual machine or on a physical device. See the excellent getting started guide here. One machine is enough but two will allow you to test adding more nodes to the cluster.
  2. Configure the firewall to allow traffic on ports 6443 and 8472. Or simply disable it for this experiment by running “systemctl stop firewalld”.
Install k3s

Installing k3s is very easy. Simply run the installation script:

curl -sfL https://get.k3s.io | sh -

This will download, install and start up k3s. After installation, get a list of nodes from the server by running the following command:

kubectl get nodes

Note that there are several options that can be passed to the installation script through environment variables. These can be found in the documentation. And of course, there is nothing stopping you from installing k3s manually by downloading the binary directly.

While great for experimenting and learning, a single node cluster is not much of a cluster. Luckily, adding another node is no harder than setting up the first one. Just pass two environment variables to the installation script to make it find the first node and avoid running the server part of k3s.

curl -sfL https://get.k3s.io | K3S_URL=https://example-url:6443 \
K3S_TOKEN=XXX sh -
The example-url above should be replaced by the IP address or fully qualified domain name of the first node. On that node the token (represented by XXX) is found in the file /var/lib/rancher/k3s/server/node-token.

Deploy some containers

Now that we have a Kubernetes cluster, what can we actually do with it? Let’s start by deploying a simple web server.

kubectl create deployment my-server --image nginx

This will create a Deployment named “my-server” from the container image “nginx” (defaulting to docker hub as registry and the latest tag). You can see the Pod created by running the following command.

kubectl get pods

In order to access the nginx server running in the pod, first expose the Deployment through a Service. The following command will create a Service with the same name as the deployment.

kubectl expose deployment my-server --port 80

The Service works as a kind of load balancer and DNS record for the Pods. For instance, when running a second Pod, we will be able to curl the nginx server just by specifying my-server (the name of the Service). See the example below for how to do this.

# Start a pod and run bash interactively in it
kubectl run debug --generator=run-pod/v1 --image=fedora -it -- bash
# Wait for the bash prompt to appear
curl my-server
# You should get the "Welcome to nginx!" page as output

Ingress controller and external IP

By default, a Service only gets a ClusterIP (only accessible inside the cluster), but you can also request an external IP for the service by setting its type to LoadBalancer. However, not all applications require their own IP address. Instead, it is often possible to share one IP address among many services by routing requests based on the host header or path. You can accomplish this in Kubernetes with an Ingress, and this is what we will do. Ingresses also provide additional features such as TLS encryption of the traffic without having to modify your application.
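For comparison, requesting an external IP directly would look something like this (a sketch only; my-server-lb is a hypothetical name, and this article uses an Ingress instead):

kubectl expose deployment my-server --port 80 --type LoadBalancer --name my-server-lb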

Kubernetes needs an ingress controller to make the Ingress resources work and k3s includes Traefik for this purpose. It also includes a simple service load balancer that makes it possible to get an external IP for a Service in the cluster. The documentation describes the service like this:

k3s includes a basic service load balancer that uses available host ports. If you try to create a load balancer that listens on port 80, for example, it will try to find a free host in the cluster for port 80. If no port is available the load balancer will stay in Pending.


The ingress controller is already exposed with this load balancer service. You can find the IP address that it is using with the following command.

$ kubectl get svc --all-namespaces
NAMESPACE     NAME         TYPE           PORT(S)                      AGE
default       kubernetes   ClusterIP      443/TCP                      33d
default       my-server    ClusterIP      80/TCP                       30m
kube-system   kube-dns     ClusterIP      53/UDP,53/TCP,9153/TCP       33d
kube-system   traefik      LoadBalancer   80:31596/TCP,443:31539/TCP   33d

Look for the Service named traefik. In your own output, the EXTERNAL-IP field of that line shows the address we are interested in.

Route incoming requests

Let’s create an Ingress that routes requests to our web server based on the host header. This example uses a wildcard DNS service to avoid having to set up DNS records: such services encode an IP address into a hostname, so any hostname that embeds the ingress controller’s IP resolves to that IP and can be used to reach the ingress controller in the cluster. You can try this right now with your own ingress IP. Without an Ingress in place, you should reach the “default backend”, which is just a page showing “404 page not found”.

We can tell the ingress controller to route requests to our web server Service with the following Ingress. Replace the host value (the placeholder my-server.example.com below) with a hostname that resolves to your ingress controller’s IP.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-server
spec:
  rules:
  - host: my-server.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-server
          servicePort: 80

Save the above snippet in a file named my-ingress.yaml and add it to the cluster by running this command:

kubectl apply -f my-ingress.yaml

You should now be able to reach the default nginx welcoming page on the fully qualified domain name you chose. The ingress controller is routing the requests based on the information in the Ingress. A request to that host will be routed to the Service and port defined as the backend in the Ingress (my-server and 80 in this case).

What about IoT then?

Imagine the following scenario. You have dozens of devices spread out around your home or farm. It is a heterogeneous collection of IoT devices with various hardware capabilities, sensors and actuators. Maybe some of them have cameras, weather or light sensors. Others may be hooked up to control the ventilation, lights, blinds or blink LEDs.

In this scenario, you want to gather data from all the sensors, maybe process and analyze it before you finally use it to make decisions and control the actuators. In addition to this, you may want to visualize what’s going on by setting up a dashboard. So how can Kubernetes help us manage something like this? How can we make sure that Pods run on suitable devices?

The simple answer is labels. You can label the nodes according to capabilities, like this:

kubectl label nodes <node-name> <label-key>=<label-value>
# Example
kubectl label nodes node2 camera=available

Once they are labeled, it is easy to select suitable nodes for your workload with nodeSelectors. The final piece to the puzzle, if you want to run your Pods on all suitable nodes is to use DaemonSets instead of Deployments. In other words, create one DaemonSet for each data collecting application that uses some unique sensor and use nodeSelectors to make sure they only run on nodes with the proper hardware.
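To double-check which nodes carry a given label, something like this works (an optional verification, reusing the example label from above):

kubectl get nodes -l camera=available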

The service discovery feature that allows Pods to find each other simply by Service name makes it quite easy to handle these kinds of distributed systems. You don’t need to know or configure IP addresses or custom ports for the applications. Instead, they can easily find each other through named Services in the cluster.

Utilize spare resources

With the cluster up and running, collecting data, and controlling your lights and climate control, you may feel that you are finished. However, there are still plenty of compute resources in the cluster that could be used for other projects. This is where Kubernetes really shines.

You shouldn’t have to worry about where exactly those resources are or calculate if there is enough memory to fit an extra application here or there. This is exactly what orchestration solves! You can easily deploy more applications in the cluster and let Kubernetes figure out where (or if) they will fit.

Why not run your own NextCloud instance? Or maybe gitea? You could also set up a CI/CD pipeline for all those IoT containers. After all, why would you build and cross compile them on your main computer if you can do it natively in the cluster?

The point here is that Kubernetes makes it easier to make use of the “hidden” resources that you often end up with otherwise. Kubernetes handles scheduling of Pods in the cluster based on available resources and fault tolerance so that you don’t have to. However, in order to help Kubernetes make reasonable decisions you should definitely add resource requests to your workloads.


While Kubernetes, or container orchestration in general, may not usually be associated with IoT, it certainly makes a lot of sense to have an orchestrator when you are dealing with distributed systems. Not only does it allow you to handle a diverse and heterogeneous fleet of devices in a unified way, but it also simplifies communication between them. In addition, Kubernetes makes it easier to utilize spare resources.

Container technology made it possible to build applications that could “run anywhere”. Now Kubernetes makes it easier to manage the “anywhere” part. And as an immutable base to build it all on, we have Fedora IoT.
