Fedora Magazine

Guides, information, and news about the Fedora operating system for users, developers, system administrators, and community members.

Getting Started with Go on Fedora

Wednesday 21st of August 2019 08:00:34 AM

The Go programming language was first publicly announced in 2009, and since then it has become widely adopted. In particular, Go has become a cornerstone of cloud infrastructure, powering major projects such as Kubernetes, OpenShift, and Terraform.

Some of the main reasons for Go’s increasing popularity are its performance, the ease of writing fast concurrent applications, the simplicity of the language, and its fast compilation times. So let’s see how to get started with Go on Fedora.

Install Go in Fedora

Fedora provides an easy way to install the Go programming language via the official repository.

$ sudo dnf install -y golang
$ go version
go version go1.12.7 linux/amd64

Now that Go is installed, let’s write a simple program, compile it and execute it.

First program in Go

Let’s write the traditional “Hello, World!” program in Go. First create a main.go file and type or copy the following.

package main

import "fmt"

func main() {
    fmt.Println("Hello, World!")
}

Running this program is quite simple.

$ go run main.go
Hello, World!

This will build a binary from main.go in a temporary directory, execute the binary, then delete the temporary directory. This command is great for quickly running the program during development, and it also highlights the speed of Go’s compilation.

Building an executable of the program is as simple as running it.

$ go build main.go
$ ./main
Hello, World!

Using Go modules

Go 1.11 and 1.12 introduce preliminary support for modules. Modules are a solution for managing application dependencies. They are based on two files, go.mod and go.sum, which explicitly define the versions of the dependencies.

To show how to use modules, let’s add a dependency to the hello world program.

Before changing the code, the module needs to be initialized.

$ go mod init helloworld
go: creating new go.mod: module helloworld
$ ls
go.mod main main.go

Next, modify the main.go file as follows.

package main

import "github.com/fatih/color"

func main() {
    color.Blue("Hello, World!")
}

In the modified main.go, instead of using the standard library fmt to print “Hello, World!”, the application uses an external library which makes it easy to print text in color.

Let’s run this version of the application.

$ go run main.go
Hello, World!

Now that the application depends on the github.com/fatih/color library, Go needs to download all the dependencies before compiling it. The list of dependencies is then added to go.mod, and the exact versions and cryptographic hashes of these dependencies are recorded in go.sum.
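
If you’re curious, you can inspect the go.mod file after the build. A rough sketch of its contents is shown below; the color library version (v1.7.0 here) is only illustrative, and additional // indirect entries for transitive dependencies may also appear depending on the Go version:

$ cat go.mod
module helloworld

go 1.12

require github.com/fatih/color v1.7.0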

Command line quick tips: Searching with grep

Monday 19th of August 2019 08:00:38 AM

If you use your Fedora system for more than just browsing the web, you have probably needed to search for text in your files. For instance, you might be a developer that can’t remember where you left some code snippet. Or you might be looking for a setting stored in your system configuration files. Whatever the reason, there are plenty of ways to search for text on your Fedora system. This article will show you how, including using the built-in utility grep.

Introducing grep

The grep utility allows you to search for text, or more specifically text patterns, on your file system. The name grep comes from global regular expression print. Yikes, what a mouthful! This is because a regular expression (or regex) is a way of defining text patterns.

The grep utility lets you find and print out matches on these patterns — thus the name. It’s a powerful system, and you can even find it in modern code editors like Visual Studio Code or Atom.

Regular expressions

Harnessing all the power of regular expressions is a topic bigger than this article, for sure. The simplest kind of regex can be just a word, or a portion of a word. That pattern is simply “the following characters, in the same order.” The pattern is searched line by line. For example:

  • pciutil – matches any time the 7 characters pciutil appear together — including pciutil, pciutils, pciutil123, and foopciutil.
  • ^pciutil – matches any time the 7 characters pciutil appear together immediately at the beginning of a line (that’s what the ^ stands for)
  • pciutil$ – matches any time the 7 characters pciutil appear together immediately before the end of a line (that’s what the $ stands for)

More complicated expressions are also possible. Special characters are used in a regex as wildcards, or to change the way the regex works. If you want to match on one of these characters, use a \ (backslash) before the character.

For instance, the . (period or full stop) is a wildcard that matches any single character. If you use it in the expression pci.til, it matches pciutil, pci4til, or pci!til, but does not match pcitil. There must be a character to match the . in the regular expression.

The ? is a marker in a regex that marks the previous element as optional. So if you built on the previous example, the expression pci.?til would also match on pcitil because there need not be a character between i and t for a valid match.

The + and * are markers that stand for repetition. While + stands for one or more of the previous element, * stands for zero or more. So the regex pci.+til would match any of these: pciutil, pci4til, pci!til, pciuuuuuutil, pci423til. However, it wouldn’t match pcitil — but the regex pci.*til would.
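
You can try these patterns directly in a terminal by piping sample text into grep. A quick sketch, using grep -E so the ? and + markers don’t need backslash escaping:

$ echo "pciuuuuuutil" | grep -E 'pci.+til'
pciuuuuuutil
$ echo "pcitil" | grep -E 'pci.+til'
$ echo "pcitil" | grep -E 'pci.*til'
pcitil

The second command prints nothing because pci.+til requires at least one character between pci and til.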

Examples of grep

Now that you know a little about regex, let’s put it to work. Imagine that you’re trying to find a configuration file that mentions a user account jpublic. You tried a bunch of files already, but none were the correct one, and you’re sure it’s there. So, try searching the /etc folder (using sudo because some subfolders are not readable outside the root account):

$ sudo grep -r jpublic /etc/

The -r switch searches the folder recursively. The utility prints a list of matching files, and the line where the hit occurred. In most modern terminal environments, the hit is color highlighted for better readability.

Imagine you have a much larger selection of files in /home/shared and you need to establish which ones mention the name MacNulty. However, you’re not sure whether the capitalization will be consistent, and you’re just looking for names of files, not the context. Also, you believe someone may have misspelled the name as McNulty in some places.

Use the -l switch to output only filenames with a match, a ? marker to make the a in the name optional, and -i to make the search case-insensitive:

$ sudo grep -irl 'ma\?cnulty' /home/shared

This command will match on strings like Macnulty, McNulty, Mcnulty, and macNulty with no problem. You’ll get a simple list of filenames where the match was found in the contents.

These are only the simplest ways to use grep and regular expressions. You can learn a lot more about both using the info grep command.

But wait, there’s more…

The grep command is venerable but in some situations may not be as efficient as newer search utilities. For instance, the ripgrep utility is engineered to be a fast search utility that can take the place of grep. We covered ripgrep as part of an article on Rust and Rust applications previously in the Magazine:

Oxidizing Fedora: Try Rust and its applications today

It’s important to note that ripgrep has its own command line switches and syntax. For example, it has simple switches to print only filename matches, invert searches, and many other useful functions. It can also ignore based on .rgignore files placed in any subdirectories. (It’s also noteworthy that the -r switch is used differently for ripgrep, because it is automatically recursive.)

To install, use this command:

$ sudo dnf install ripgrep

To explore the options, use the manual page (man rg). You’ll find that many, but not all, options are the same as grep.
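
As a rough equivalent of the earlier case-insensitive search, here is a sketch of the same query using ripgrep. Note that the ? needs no backslash, since ripgrep uses Rust’s regex syntax, and no -r is needed because the search is recursive by default:

$ sudo rg -il 'ma?cnulty' /home/shared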

Have fun searching!

Cockpit and the evolution of the Web User Interface

Friday 16th of August 2019 08:00:20 AM

Over three years ago, Fedora Magazine published an article entitled Cockpit: an overview. Since then, the interface has seen some eye-catching changes. Today’s Cockpit is cleaner, and the larger fonts make better use of screen real estate.

This article will go over some of the changes made to the UI. It will also explore some of the general tools available in the web interface to simplify those monotonous sysadmin tasks.

Cockpit installation

Cockpit can be installed using the dnf install cockpit command. This provides a minimal setup with the basic tools required to use the interface.

Another option is to install the Headless Management group. This will install additional packages used to extend the usability of Cockpit. It includes extensions for NetworkManager, software packages, disk, and SELinux management.
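
For example, the group can also be installed with dnf (assuming the group name matches what dnf group list reports on your release):

$ sudo dnf group install "Headless Management"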

Run the following commands to enable the web service on boot and open the firewall port:

$ sudo systemctl enable --now cockpit.socket
Created symlink /etc/systemd/system/sockets.target.wants/cockpit.socket -> /usr/lib/systemd/system/cockpit.socket
$ sudo firewall-cmd --permanent --add-service cockpit
success
$ sudo firewall-cmd --reload
success

Logging into the web interface

To access the web interface, open your favourite browser and enter the server’s domain name or IP in the address bar followed by the service port (9090). Because Cockpit uses HTTPS, the installation will create a self-signed certificate to encrypt passwords and other sensitive data. You can safely accept this certificate, or request a CA certificate from your sysadmin or a trusted source.

Once the certificate is accepted, the new and improved login screen will appear. Long-time users will notice the username and password fields have been moved to the top. In addition, the white background behind the credential fields immediately grabs the user’s attention.

A feature added to the login screen since the previous article is logging in with sudo privileges — if your account is a member of the wheel group. Check the box beside Reuse my password for privileged tasks to elevate your rights.

Another addition to the login screen is the option to connect to remote servers also running the Cockpit web service. Click Other Options and enter the host name or IP address of the remote machine to manage it from your local browser.

Home view

Right off the bat we get a basic overview of common system information. This includes the make and model of the machine, the operating system, if the system is up-to-date, and more.

Clicking the make/model of the system displays hardware information such as the BIOS/Firmware. It also includes details about the components as seen with lspci.

Clicking on any of the options to the right will display the details of that device. For example, the % of CPU cores option reveals details on how much is used by the user and the kernel. In addition, the Memory & Swap graph displays how much of the system’s memory is used, how much is cached, and how much of the swap partition is active. The Disk I/O and Network Traffic graphs are linked to the Storage and Networking sections of Cockpit. These topics will be revisited in an upcoming article that explores the system tools in detail.

Secure Shell Keys and authentication

Because security is a key factor for sysadmins, Cockpit now has the option to view the machine’s MD5 and SHA256 key fingerprints. Clicking the Show fingerprints option reveals the server’s ECDSA, ED25519, and RSA key fingerprints.

You can also add your own keys by clicking on your username in the top-right corner and selecting Authentication. Click on Add keys to validate the machine on other systems. You can also revoke your privileges in the Cockpit web service by clicking on the X button to the right.

Changing the host name and joining a domain

Changing the host name is a one-click solution from the home page. Click the host name currently displayed, and enter the new name in the Change Host Name box. One of the latest features is the option to provide a Pretty name.

Another feature added to Cockpit is the ability to connect to a directory server. Click Join a domain and a pop-up will appear requesting the domain address or name, organization unit (optional), and the domain admin’s credentials. The Domain Membership group provides all the packages required to join an LDAP server including FreeIPA, and the popular Active Directory.

To opt out, click on the domain name followed by Leave Domain. A warning will appear explaining the changes that will occur once the system is no longer on the domain. To confirm, click the red Leave Domain button.

Configuring NTP and system date and time

Using the command-line and editing config files definitely takes the cake when it comes to maximum tweaking. However, there are times when something more straightforward would suffice. With Cockpit, you have the option to set the system’s date and time manually or automatically using NTP. Once synchronized, the information icon on the right turns from red to blue. The icon will disappear if you manually set the date and time.

To change the timezone, type the continent and a list of cities will populate beneath.

Shutting down and restarting

You can easily shut down and restart the server right from the home screen in Cockpit. You can also delay the shutdown/reboot and send a message to warn users.

Configuring the performance profile

If the tuned and tuned-utils packages are installed, performance profiles can be changed from the main screen. By default it is set to a recommended profile. However, if the purpose of the server requires more performance, we can change the profile from Cockpit to suit those needs.

Terminal web console

A Linux sysadmin’s toolbox would be useless without access to a terminal. This allows admins to fine-tune the server beyond what’s available in Cockpit. With the addition of themes, admins can quickly adjust the text and background colours to suit their preference.

Also, if you type exit by mistake, click the Reset button in the top-right corner. This will provide a fresh screen with a flashing cursor.

Adding a remote server and the Dashboard overlay

The Headless Management group includes the Dashboard module (cockpit-dashboard). This provides an overview of the CPU, memory, network, and disk performance in a real-time graph. Remote servers can also be added and managed through the same interface.

For example, to add a remote computer in Dashboard, click the + button. Enter the name or IP address of the server and select the colour of your choice. This helps to differentiate the stats of the servers in the graph. To switch between servers, click on the host name (as seen in the screen-cast below). To remove a server from the list, click the check-mark icon, then click the red trash icon. The example below demonstrates how Cockpit manages a remote machine named server02.local.lan.

Documentation and finding help

As always, the man pages are a great place to find documentation. A simple search at the command line turns up pages pertaining to different aspects of using and configuring the web service.

$ man -k cockpit
cockpit (1) - Cockpit
cockpit-bridge (1) - Cockpit Host Bridge
cockpit-desktop (1) - Cockpit Desktop integration
cockpit-ws (8) - Cockpit web service
cockpit.conf (5) - Cockpit configuration file

The Fedora repository also has a package called cockpit-doc. The package’s description explains it best:

The Cockpit Deployment and Developer Guide shows sysadmins how to deploy Cockpit on their machines as well as helps developers who want to embed or extend Cockpit.

For more documentation visit https://cockpit-project.org/external/source/HACKING

Conclusion

This article only touches upon some of the main functions available in Cockpit. Managing storage devices, networking, user accounts, and software will be covered in an upcoming article, along with optional extensions such as the 389 directory service and the cockpit-ostree module used to handle packages in Fedora Silverblue.

The options continue to grow as more users adopt Cockpit. The interface is ideal for admins who want a light-weight interface to control their server(s).

What do you think about Cockpit? Share your experience and ideas in the comments below.

Taz Brown: How Do You Fedora?

Wednesday 14th of August 2019 08:00:45 AM

We recently interviewed Taz Brown on how she uses Fedora. This is part of a series on the Fedora Magazine. The series profiles Fedora users and how they use Fedora to get things done. Contact us on the feedback form to express your interest in becoming an interviewee.

Taz Brown is a seasoned IT professional with over 15 years of experience. “I have worked as a systems administrator, senior Linux administrator, DevOps engineer and I now work as a senior Ansible automation consultant at Red Hat with the Automation Practice Team.” Taz originally started with Ubuntu, but began using CentOS, Red Hat Enterprise Linux and Fedora as a Linux administrator in the IT industry.

Taz is relatively new to contributing to open source, but she found that code was not the only way to contribute. “I prefer to contribute through documentation as I am not a software developer or engineer. I found that there was more than one way to contribute to open source than just through code.”

All about Taz

Her childhood hero is Wonder Woman. Her favorite movie is Hackers. “My favorite scene is the beginning of the movie,” Taz tells the Magazine. “The movie starts with a group of special agents breaking into a house to catch the infamous hacker, Zero Cool. We soon discover that Zero Cool is actually 11-year-old Dade Murphy, who managed to crash 1,507 computer systems in one day. He is charged for his crimes and his family is fined $45,000. Additionally, he is banned from using computers or touch-tone telephones until he is 18.”

Her favorite character in the movie is Paul Cook. “Paul Cook, Lord Nikon, played by Laurence Mason was my favorite character. One of the main reasons is that I never really saw a hacker movie that had characters that looked like me so I was fascinated by his portrayal. He was enigmatic. It was refreshing to see and it made me real proud that I was passionate about IT and that I was a geek of sorts.”

Taz is an amateur photographer and uses a Nikon D3500. “I definitely like vintage things so I am looking to add a new one to my collection soon.” She also enjoys 3D printing, and drawing. “I use open source tools in my hobbies such as Wekan, which is an open-source kanban utility.”

The Fedora community

Taz first started using Linux about 8 years ago. “I started using Ubuntu and then graduated to Fedora and its community and I was hooked. I have been using Fedora now for about 5 years.”

When she became a Linux Administrator, Linux turned into a passion. “I was trying to find my way in terms of contributing to open source. I didn’t know where to go so I wondered if I could truly be an open source enthusiast and influencer because the community is so vast, but once I found a few people who embraced my interests and could show me the way, I was able to open up and ask questions and learn from the community.”

Taz first became involved with the Fedora community through her work as a Linux systems engineer while working at Mastercard. “My first impressions of the Fedora community was one of true collaboration, respect and sharing.”

When Brown talked about the Fedora Project she gave an excellent analogy. “America is a melting pot and that’s how I see open source projects like the Fedora Project. There is plenty of room for diverse contributions to the Fedora Project. There are so many ways in which to get and stay involved and there is also room for new ideas.”

When we asked Brown about what she would like to see improved in the Fedora community, she commented on making others more aware of the opportunities. “I wish those who are typically underrepresented in tech were more aware of the amazing commitment that the Fedora Project has to diversity and inclusion in open source and in the Fedora community.”

Next Taz had some advice for people looking to join the Fedora Community. “It’s a great decision and one that you likely will not regret joining. Fedora is a project with a very large supportive community and if you’re new to open source, it’s definitely a great place to start. There is a lot of cool stuff in Fedora. I believe there are limitless opportunities for The Fedora Project.”

What hardware?

Taz uses a Lenovo ThinkServer TS140 with 64 GB of RAM, four 1 TB SSDs and a 1 TB HD for data storage. The server is currently running Fedora 30. She also has a Synology NAS with 164 TB of storage using a RAID 5 configuration. Taz also has a Logitech MX Master and MX Master 2S. “For my keyboard, I use a Kinesis Advantage 2.” She also uses two 38 inch LG ultrawide curved monitors and a single 34 inch LG ultrawide monitor.

She owns a System76 laptop. “I use the 16.1-inch Oryx Pro by System76 with IPS Display with i7 processor with 6 cores and 12 threads.” It has an RTX 2060 with 6 GB of GDDR6 memory and 1920 CUDA cores, as well as 64 GB of DDR4 RAM and a total of 4 TB of SSD storage. “I love the way Fedora handles my peripherals and like my mouse and keyboard. Everything works seamlessly. Plug and play works as it should and performance never suffers.”

What software?

Brown is currently running Fedora 30. She has a variety of software in her everyday work flow. “I use Wekan, which is an open-source kanban, which I use to manage my engagements and projects. My favorite editor is Atom, though I used to use Sublime at one point in time.”

And as for terminals? “I use Terminator as my go-to terminal because of its grid arrangement as well as its many keyboard shortcuts and its tab formation.” Taz continues, “I love using neofetch which comes up with a nifty fedora logo and system information every time I log in to the terminal. I also have my terminal pimped out using powerline and powerlevel9k and vim-powerline as well.”

Use a drop-down terminal for fast commands in Fedora

Friday 9th of August 2019 08:00:38 AM

A drop-down terminal lets you tap a key and quickly enter any command on your desktop. Often it creates a terminal in a smooth way, sometimes with effects. This article demonstrates how it helps to improve and speed up daily tasks, using drop-down terminals like Yakuake, Tilda, Guake and a GNOME extension.

Yakuake

Yakuake is a drop-down terminal emulator based on KDE Konsole technology. It is distributed under the terms of the GNU GPL Version 2. It includes features such as:

  • Smoothly rolls down from the top of your screen
  • Tabbed interface
  • Configurable dimensions and animation speed
  • Skinnable
  • Sophisticated D-Bus interface

To install Yakuake, use the following command:

$ sudo dnf install -y yakuake

Startup and configuration

If you’re running KDE, open the System Settings and go to Startup and Shutdown. Add yakuake to the list of programs under Autostart, like this:

It’s easy to configure Yakuake while running the app. To begin, launch the program at the command line:

$ yakuake &

The following welcome dialog appears. You can set a new keyboard shortcut if the standard one conflicts with another keystroke you already use:

Now click the menu button, and the following help menu appears. Next, select Configure Yakuake… to access the configuration options.

You can customize the options for appearance, such as opacity; behavior, such as focusing terminals when the mouse pointer is moved over them; and window, such as size and animation. In the window options you’ll find one of the most useful options if you use two or more monitors: Open on screen: At mouse location.

Using Yakuake

The main shortcuts are:

  • F12 = Open/Retract Yakuake
  • Ctrl+F11 = Full Screen Mode
  • Ctrl+) = Split Top/Bottom
  • Ctrl+( = Split Left/Right
  • Ctrl+Shift+T = New Session
  • Shift+Right = Next Session
  • Shift+Left = Previous Session
  • Ctrl+Alt+S = Rename Session

Below is an example of Yakuake being used to split the session like a terminal multiplexer. Using this feature, you can run several shells in one session.

Tilda

Tilda is a drop-down terminal that compares with other popular terminal emulators such as GNOME Terminal, KDE’s Konsole, xterm, and many others.

It features a highly configurable interface. You can even change options such as the terminal size and animation speed. Tilda also lets you enable hotkeys you can bind to commands and operations.

To install Tilda, run this command:

$ sudo dnf install -y tilda

Startup and configuration

Most users prefer to have a drop-down terminal available behind the scenes when they login. To set this option, first go to the app launcher in your desktop, search for Tilda, and open it.

Next, open up the Tilda Config window. Select Start Tilda hidden, which means it will not display a terminal immediately when started.

Next, you’ll set your desktop to start Tilda automatically. If you’re using KDE, go to System Settings > Startup and Shutdown > Autostart and use Add a Program.

If you’re using GNOME, you can run this command in a terminal:

$ ln -s /usr/share/applications/tilda.desktop ~/.config/autostart/

When you run Tilda for the first time, a wizard shows up to set your preferences. If you need to change something, right click and go to Preferences in the menu.

You can also create multiple configuration files, and bind other keys to open new terminals at different places on the screen. To do that, run this command:

$ tilda -C

Every time you use the above command, Tilda creates a new config file located in the ~/.config/tilda/ folder called config_0, config_1, and so on. You can then map a key combination to open a new Tilda terminal with a specific set of options.

Using Tilda

The main shortcuts are:

  • F1 = Pull Down Terminal Tilda (Note: If you have more than one config file, the shortcuts are the same, with a different open/retract shortcut like F1, F2, F3, and so on)
  • F11 = Full Screen Mode
  • F12 = Toggle Transparency
  • Ctrl+Shift+T = Add Tab
  • Ctrl+Page Up = Go to Next Tab
  • Ctrl+Page Down = Go to Previous Tab

GNOME Extension

The Drop-down Terminal GNOME Extension lets you use this useful tool in your GNOME Shell. It is easy to install and configure, and gives you fast access to a terminal session.

Installation

Open a browser and go to the site for this GNOME extension. Switch the extension setting to On, as shown here:

Then select Install to install the extension on your system.

Once you do this, there’s no reason to set any autostart options. The extension will automatically run whenever you log in to GNOME!

Configuration

After install, the Drop Down Terminal configuration window opens to set your preferences. For example, you can set the size of the terminal, animation, transparency, and scrollbar use.

If you need to change some preferences in the future, run the gnome-shell-extension-prefs command and choose Drop Down Terminal.

Using the extension

The shortcuts are simple:

  • ` (usually the key above Tab) = Open/Retract Terminal
  • F12 (customize as you prefer) = Open/Retract Terminal

Trace code in Fedora with bpftrace

Wednesday 7th of August 2019 08:00:11 AM

bpftrace is a new eBPF-based tracing tool that was first included in Fedora 28. It was developed by Brendan Gregg, Alastair Robertson and Matheus Marchini with the help of a loosely-knit team of hackers across the Net. A tracing tool lets you analyze what a system is doing behind the curtain. It tells you which functions in code are being called, with which arguments, how many times, and so on.

This article covers some basics about bpftrace, and how it works. Read on for more information and some useful examples.

eBPF (extended Berkeley Packet Filter)

eBPF is a tiny virtual machine, or a virtual CPU to be more precise, in the Linux Kernel. It can load and run small programs in a safe and controlled way in kernel space. This makes it safer to use, even in production systems. This virtual machine has its own instruction set architecture (ISA) resembling a subset of modern processor architectures. The ISA makes it easy to translate those programs to the real hardware. The kernel performs just-in-time translation to native code for the main architectures to improve performance.

The eBPF virtual machine allows the kernel to be extended programmatically. Nowadays several kernel subsystems take advantage of this new powerful Linux Kernel capability. Examples include networking, seccomp, tracing, and more. The main idea is to attach eBPF programs into specific code points, and thereby extend the original kernel behavior.

eBPF machine language is very powerful. But writing code directly in it is extremely painful, because it’s a low-level language. This is where bpftrace comes in. It provides a high-level language to write eBPF tracing scripts. The tool then translates these scripts to eBPF with the help of the clang/LLVM libraries, and attaches them to the specified code points.

Installation and quick start

To install bpftrace, run the following command in a terminal using sudo:

$ sudo dnf install bpftrace

Try it out with a “hello world” example:

$ sudo bpftrace -e 'BEGIN { printf("hello world\n"); }'

Note that you must run bpftrace as root due to the privileges required. Use the -e option to specify a program, and to construct the so-called “one-liners.” This example only prints hello world, and then waits for you to press Ctrl+C.

BEGIN is a special probe name that fires only once at the beginning of execution. Every action inside the curly braces { } fires whenever the probe is hit — in this case, it’s just a printf.

Let’s jump now to a more useful example:

$ sudo bpftrace -e 't:syscalls:sys_enter_execve { printf("%s called %s\n", comm, str(args->filename)); }'

This example prints the parent process name (comm) and the name of every new process being created in the system. t:syscalls:sys_enter_execve is a kernel tracepoint. It’s a shorthand for tracepoint:syscalls:sys_enter_execve, but both forms can be used. The next section shows you how to list all available tracepoints.

comm is a bpftrace builtin that represents the process name. filename is a field of the t:syscalls:sys_enter_execve tracepoint. You can access these fields through the args builtin.

All available fields of the tracepoint can be listed with this command:

bpftrace -lv "t:syscalls:sys_enter_execve"

Example usage

Listing probes

A central concept in bpftrace is the probe point. Probe points are instrumentation points in code (kernel or userspace) where eBPF programs can be attached. They fit into the following categories:

  • kprobe – kernel function start
  • kretprobe – kernel function return
  • uprobe – user-level function start
  • uretprobe – user-level function return
  • tracepoint – kernel static tracepoints
  • usdt – user-level static tracepoints
  • profile – timed sampling
  • interval – timed output
  • software – kernel software events
  • hardware – processor-level events

All available kprobe/kretprobe, tracepoints, software and hardware probes can be listed with this command:

$ sudo bpftrace -l

The uprobe/uretprobe and usdt probes are userspace probes specific to a given executable. To use them, use the special syntax shown later in this article.

The profile and interval probes fire at fixed time intervals. Fixed time intervals are not covered in this article.

Counting system calls

Maps are special BPF data types that store counts, statistics, and histograms. You can use maps to summarize how many times each syscall is being called:

$ sudo bpftrace -e 't:syscalls:sys_enter_* { @[probe] = count(); }'

Some probe types allow wildcards to match multiple probes. You can also specify multiple attach points for an action block using a comma separated list. In this example, the action block attaches to all tracepoints whose name starts with t:syscalls:sys_enter_, which means all available syscalls.

The bpftrace builtin function count() counts the number of times this function is called. @[] represents a map (an associative array). The key of this map is probe, which is another bpftrace builtin that represents the full probe name.

Here, the same action block is attached to every syscall. Each time a syscall is called, the map is updated and the entry for that syscall is incremented. When the program terminates, it automatically prints out all declared maps.

This example counts syscalls globally. It’s also possible to filter for a specific process by PID using the bpftrace filter syntax:

$ sudo bpftrace -e 't:syscalls:sys_enter_* / pid == 1234 / { @[probe] = count(); }'

Write bytes by process

Using these concepts, let’s analyze how many bytes each process is writing:

$ sudo bpftrace -e 't:syscalls:sys_exit_write /args->ret > 0/ { @[comm] = sum(args->ret); }'

bpftrace attaches the action block to the write syscall return probe (t:syscalls:sys_exit_write). Then, it uses a filter to discard the negative values, which are error codes (/args->ret > 0/).

The map key comm represents the process name that called the syscall. The sum() builtin function accumulates the number of bytes written for each map entry or process. args is a bpftrace builtin to access tracepoint’s arguments and return values. Finally, if successful, the write syscall returns the number of written bytes. args->ret provides access to the bytes.

Read size distribution by process (histogram):

bpftrace supports the creation of histograms. Let’s analyze one example that creates a histogram of the read size distribution by process:

$ sudo bpftrace -e 't:syscalls:sys_exit_read { @[comm] = hist(args->ret); }'

Histograms are BPF maps, so they must always be attributed to a map (@). In this example, the map key is comm.

The example makes bpftrace generate one histogram for every process that calls the read syscall. To generate just one global histogram, attribute the hist() function just to ‘@’ (without any key).

bpftrace automatically prints out declared histograms when the program terminates. The value used as base for the histogram creation is the number of read bytes, found through args->ret.

Tracing userspace programs

You can also trace userspace programs with uprobes/uretprobes and USDT (User-level Statically Defined Tracing). The next example uses a uretprobe, which probes the return of a user-level function. It gets the command lines issued in every bash process running on the system:

$ sudo bpftrace -e 'uretprobe:/bin/bash:readline { printf("readline: \"%s\"\n", str(retval)); }'

To list all available uprobes/uretprobes of the bash executable, run this command:

$ sudo bpftrace -l "uprobe:/bin/bash"

uprobe instruments the beginning of a user-level function’s execution, and uretprobe instruments the end (its return). readline() is a function of /bin/bash, and it returns the typed command line. retval is the return value for the instrumented function, and can only be accessed on uretprobe.

When using uprobes, you can access arguments with arg0..argN. A str() call is necessary to turn the char * pointer into a string.
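
As a sketch of arg0 in action, the following one-liner attaches a uprobe to glibc’s fopen(), whose first argument is the path being opened. The library path /lib64/libc.so.6 is an assumption that matches a typical Fedora x86_64 install; adjust it for your system:

$ sudo bpftrace -e 'uprobe:/lib64/libc.so.6:fopen { printf("%s opened %s\n", comm, str(arg0)); }'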

Shipped Scripts

There are many useful scripts shipped with the bpftrace package. You can find them in the /usr/share/bpftrace/tools/ directory.

Among them, you can find:

  • killsnoop.bt – Trace signals issued by the kill() syscall.
  • tcpconnect.bt – Trace all TCP network connections.
  • pidpersec.bt – Count new processes (via fork) per second.
  • opensnoop.bt – Trace open() syscalls.
  • vfsstat.bt – Count some VFS calls, with per-second summaries.

You can directly use the scripts. For example:

$ sudo /usr/share/bpftrace/tools/killsnoop.bt

You can also study these scripts as you create new tools.


Photo by Roman Romashov on Unsplash.

4 cool new projects to try in COPR for August 2019

Monday 5th of August 2019 08:00:32 AM

COPR is a collection of personal repositories for software that isn’t carried in Fedora. Some software doesn’t conform to standards that allow easy packaging. Or it may not meet other Fedora standards, despite being free and open source. COPR can offer these projects outside the Fedora set of packages. Software in COPR isn’t supported by Fedora infrastructure or signed by the project. However, it can be a neat way to try new or experimental software.

Here’s a set of new and interesting projects in COPR.

Duc

Duc is a collection of tools for disk usage inspection and visualization. Duc uses an indexed database to store the sizes of files on your system. Once the indexing is done, you can quickly get an overview of your disk usage through either its command-line interface or the GUI.

Installation instructions

The repo currently provides duc for EPEL 7, Fedora 29 and 30. To install duc, use these commands:

sudo dnf copr enable terrywang/duc
sudo dnf install duc

MuseScore

MuseScore is software for working with music notation. With MuseScore, you can create sheet music using a mouse, a virtual keyboard, or a MIDI controller. MuseScore can then play the created music or export it as a PDF, MIDI or MusicXML. Additionally, there’s an extensive database of sheet music created by MuseScore users.

Installation instructions

The repo currently provides MuseScore for Fedora 29 and 30. To install MuseScore, use these commands:

sudo dnf copr enable jjames/MuseScore
sudo dnf install musescore

Dynamic Wallpaper Editor

Dynamic Wallpaper Editor is a tool for creating and editing a collection of wallpapers in GNOME that change over time. This can be done by hand using XML files; however, Dynamic Wallpaper Editor makes it easy with its graphical interface, where you can simply add pictures, arrange them, and set the duration of each picture and the transitions between them.

Installation instructions

The repo currently provides dynamic-wallpaper-editor for Fedora 30 and Rawhide. To install dynamic-wallpaper-editor, use these commands:

sudo dnf copr enable atim/dynamic-wallpaper-editor
sudo dnf install dynamic-wallpaper-editor

Manuskript

Manuskript is a tool for writers, aimed at making large writing projects easier to create. It serves as an editor for writing the text itself, as well as a tool for organizing notes about the story, its characters, and the individual plots.

Installation instructions

The repo currently provides Manuskript for Fedora 29, 30 and Rawhide. To install Manuskript, use these commands:

sudo dnf copr enable notsag/manuskript
sudo dnf install manuskript

Use Postfix to get email from your Fedora system

Friday 2nd of August 2019 08:00:22 AM

Communication is key. Your computer might be trying to tell you something important. But if your mail transport agent (MTA) isn’t properly configured, you might not be getting the notifications. Postfix is an MTA that’s easy to configure and known for a strong security record. Follow these steps to ensure that email notifications sent from local services will get routed to your internet email account through the Postfix MTA.

Install packages

Use dnf to install the required packages (you configured sudo, right?):

$ sudo -i
# dnf install postfix mailx

If you previously had a different MTA configured, you may need to set Postfix to be the system default. Use the alternatives command to set your system default MTA:

$ sudo alternatives --config mta
There are 2 programs which provide 'mta'.
  Selection    Command
*+ 1           /usr/sbin/sendmail.sendmail
   2           /usr/sbin/sendmail.postfix
Enter to keep the current selection[+], or type selection number: 2

Create a password_maps file

You will need to create a Postfix lookup table entry containing the email address and password of the account that you want to use for sending email:

# MY_EMAIL_ADDRESS=glb@gmail.com
# MY_EMAIL_PASSWORD=abcdefghijklmnop
# MY_SMTP_SERVER=smtp.gmail.com
# MY_SMTP_SERVER_PORT=587
# echo "[$MY_SMTP_SERVER]:$MY_SMTP_SERVER_PORT $MY_EMAIL_ADDRESS:$MY_EMAIL_PASSWORD" >> /etc/postfix/password_maps
# chmod 600 /etc/postfix/password_maps
# unset MY_EMAIL_PASSWORD
# history -c

If you are using a Gmail account, you’ll need to configure an “app password” for Postfix, rather than using your gmail password. See “Sign in using App Passwords” for instructions on configuring an app password.

Next, you must run the postmap command against the Postfix lookup table to create or update the hashed version of the file that Postfix actually uses:

# postmap /etc/postfix/password_maps

The hashed version will have the same file name but it will be suffixed with .db.
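
As an optional sanity check, you can query the hashed map with postmap to confirm the entry is present. This sketch uses the example values from above; the expected output is the address and password stored for that relay:

# postmap -q "[smtp.gmail.com]:587" hash:/etc/postfix/password_maps
glb@gmail.com:abcdefghijklmnop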

Update the main.cf file

Update Postfix’s main.cf configuration file to reference the Postfix lookup table you just created. Edit the file and add these lines.

relayhost = [smtp.gmail.com]:587
smtp_tls_security_level = verify
smtp_tls_mandatory_ciphers = high
smtp_tls_verify_cert_match = hostname
smtp_sasl_auth_enable = yes
smtp_sasl_security_options = noanonymous
smtp_sasl_password_maps = hash:/etc/postfix/password_maps

The example assumes you’re using Gmail for the relayhost setting, but you can substitute the correct hostname and port for the mail host to which your system should hand off mail for sending.
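
For instance, if your provider’s mail submission host were mail.example.com (a hypothetical host used here only for illustration), the first line would instead read:

relayhost = [mail.example.com]:587

Keep in mind that the key you stored in /etc/postfix/password_maps must match this relayhost value, or Postfix will not find the credentials.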

For the most up-to-date details about the above configuration options, see the man page:

$ man postconf.5

Enable, start, and test Postfix

After you have updated the main.cf file, enable and start the Postfix service:

# systemctl enable --now postfix.service

You can then exit your sudo session as root using the exit command or Ctrl+D. You should now be able to test your configuration with the mail command:

$ echo 'It worked!' | mail -s "Test: $(date)" glb@gmail.com

Update services

If you have services like logwatch, mdadm, fail2ban, apcupsd or certwatch installed, you can now update their configurations so that their email notifications will go to your internet email address.

Optionally, you may want to configure all email that is sent to your local system’s root account to go to your internet email address. Add this line to the /etc/aliases file on your system (you’ll need to use sudo to edit this file, or switch to the root account first):

root: glb+root@gmail.com

Now run this command to re-read the aliases:

# newaliases
  • TIP: If you are using Gmail, you can add an alpha-numeric mark between your username and the @ symbol as demonstrated above to make it easier to identify and filter the email that you will receive from your computer(s).

Troubleshooting

View the mail queue:

$ mailq

Clear all email from the queues:

# postsuper -d ALL

Filter the configuration settings for interesting values:

$ postconf | grep "^relayhost\|^smtp_"

View the postfix/smtp logs:

$ journalctl --no-pager -t postfix/smtp

Reload postfix after making configuration changes:

$ systemctl reload postfix

Photo by Sharon McCutcheon on Unsplash.

Multi-monitor wallpapers with Hydrapaper

Wednesday 31st of July 2019 08:00:55 AM

Using multiple monitors by default means that your desktop wallpaper is duplicated across all of your screens. However, with all that screen real estate that a multiple monitor setup delivers, having a different wallpaper for each monitor is a nice way to brighten up your workspace even more.

One workaround for getting different wallpapers on multiple monitors is to manually create a single combined image using something like GIMP, cropping and positioning your backgrounds by hand. There is, however, a neat wallpaper manager called Hydrapaper that makes setting multiple wallpapers a breeze.

Hydrapaper

Hydrapaper is a simple GNOME application that auto-detects your monitors, and allows you to choose different wallpapers for each display. In the background, it achieves this by simply composing a new background image from your choices that fits your displays, and sets that as your new wallpaper. All with a single click.

Hydrapaper lets the user define multiple source directories to choose wallpapers from, and also has an option to select random wallpapers from the source directories. Finally, it also allows you to specify your favourite images, and provides an additional category for favourites. This is especially useful for users that have a lot of wallpapers and change them frequently.

Installing Hydrapaper on Fedora Workstation

Hydrapaper is available to install from the third-party Flathub repository. If you have never installed an application from Flathub before, set it up using the following guide:

Install Flathub apps on Fedora

After correctly setting up Flathub as a software source, you will be able to search for and install Hydrapaper via GNOME Software.
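
If you prefer the command line, a sketch of the equivalent flatpak command is below (assuming the application ID on Flathub is org.gabmus.hydrapaper):

$ flatpak install flathub org.gabmus.hydrapaper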

Command line quick tips: More about permissions

Monday 29th of July 2019 08:00:34 AM

A previous article covered some basics about file permissions on your Fedora system. This installment shows you additional ways to use permissions to manage file access and sharing. It also builds on the knowledge and examples in the previous article, so if you haven’t read that one, do check it out.

Symbolic and octal

In the previous article you saw that there are three distinct permission sets for a file. The user that owns the file has a set, members of the group that owns the file have a set, and a final set covers everyone else. These permissions are expressed on screen in a long listing (ls -l) using symbolic mode.

Each set has r, w, and x entries for whether a particular user (owner, group member, or other) can read, write, or execute that file. But there’s another way to express these permissions: in octal mode.

You’re used to the decimal numbering system, which has ten distinct values (0 through 9). The octal system, on the other hand, has eight distinct values (0 through 7). In the case of permissions, octal is used as a shorthand to show the value of the r, w, and x fields. Think of each field as having a value:

  • r = 4
  • w = 2
  • x = 1

Now you can express any combination with a single octal value. For instance, read and write permission, but no execute permission, would have a value of 6. Read and execute permission only would have a value of 5. A file’s rwxr-xr-x symbolic permission has an octal value of 755.
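
You can see both notations side by side with the stat command. A quick sketch, using /etc/hostname only as an example of a file that is typically mode 644:

$ stat -c '%A -> %a' /etc/hostname
-rw-r--r-- -> 644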

You can use octal values to set file permissions with the chmod command similarly to symbolic values. The following two commands set the same permissions on a file:

chmod u=rw,g=r,o=r myfile1
chmod 644 myfile1

Special permission bits

There are several special permission bits also available on a file. These are called setuid (or suid), setgid (or sgid), and the sticky bit (or delete inhibit). Think of this as yet another set of octal values:

  • setuid = 4
  • setgid = 2
  • sticky = 1

The setuid bit is ignored unless the file is executable. If that’s the case, the file (presumably an app or a script) runs as if it were launched by the user who owns the file. A good example of setuid is the /bin/passwd utility, which allows a user to set or change passwords. This utility must be able to write to files no user should be allowed to change. Therefore it is carefully written, owned by the root user, and has a setuid bit so it can alter the password related files.

The setgid bit works similarly for executable files. The file will run with the permissions of the group that owns it. However, setgid also has an additional use for directories. If a file is created in a directory with setgid permission, the group owner for the file will be set to the group owner of the directory.

Finally, the sticky bit, while ignored for files, is useful for directories. The sticky bit set on a directory will prevent a user from deleting files in that directory owned by other users.

The way to set these bits with chmod in octal mode is to add a value prefix, such as 4755 to add setuid to an executable file. In symbolic mode, the u and g can be used to set or remove setuid and setgid, such as u+s,g+s. The sticky bit is set using o+t. (Other combinations, like o+s or u+t, are meaningless and ignored.)
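
You can see the setuid bit in action on the passwd utility mentioned earlier; the s in the owner’s execute position marks it:

$ stat -c '%A %U %G %n' /usr/bin/passwd
-rwsr-xr-x root root /usr/bin/passwd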

Sharing and special permissions

Recall the example from the previous article concerning a finance team that needs to share files. As you can imagine, the special permission bits help to solve their problem even more effectively. The original solution simply made a directory the whole group could write to:

drwxrwx---. 2 root finance 4096 Jul 6 15:35 finance

One problem with this directory is that users dwayne and jill, who are both members of the finance group, can delete each other’s files. That’s not optimal for a shared space. It might be useful in some situations, but probably not when dealing with financial records!

Another problem is that files in this directory may not be truly shared, because they will be owned by the default groups of dwayne and jill — most likely the user private groups also named dwayne and jill.

A better way to solve this is to set both setgid and the sticky bit on the folder. This will do two things — cause files created in the folder to be owned by the finance group automatically, and prevent dwayne and jill from deleting each other’s files. Either of these commands will work:

sudo chmod 3770 finance
sudo chmod u+rwx,g+rwxs,o+t finance

The long listing for the file now shows the new special permissions applied. The sticky bit appears as T and not t because the folder is not searchable for users outside the finance group.

drwxrws--T. 2 root finance 4096 Jul 6 15:35 finance

Manage your passwords with Bitwarden and Podman

Friday 26th of July 2019 08:00:21 AM

You might have encountered a few advertisements the past year trying to sell you a password manager. Some examples are LastPass, 1Password, or Dashlane. A password manager removes the burden of remembering the passwords for all your websites. No longer do you need to re-use passwords or use easy-to-remember passwords. Instead, you only need to remember one single password that can unlock all your other passwords for you.

This can make you more secure by having one strong password instead of many weak passwords. You can also sync your passwords across devices if you have a cloud-based password manager like LastPass, 1Password, or Dashlane. Unfortunately, none of these products are open source. Luckily there are open source alternatives available.

Open source password managers

These alternatives include Bitwarden, LessPass, or KeePass. Bitwarden is an open source password manager that stores all your passwords encrypted on the server, which works the same way as LastPass, 1Password, or Dashlane. LessPass is a bit different as it focuses on being a stateless password manager. This means it derives passwords based on a master password, the website, and your username rather than storing the passwords encrypted. On the other side of the spectrum there’s KeePass, a file-based password manager with a lot of flexibility with its plugins and applications.

Each of these three apps has its own downsides. Bitwarden stores everything in one place and is exposed to the web through its API and website interface. LessPass can’t store custom passwords since it’s stateless, so you need to use their derived passwords. KeePass, a file-based password manager, can’t easily sync between devices. You can utilize a cloud-storage provider together with WebDAV to get around this, but a lot of clients do not support it and you might get file conflicts if devices do not sync correctly.

This article focuses on Bitwarden.

Running an unofficial Bitwarden implementation

There is a community implementation of the server and its API called bitwarden_rs. This implementation is fully open source as it can use SQLite or MariaDB/MySQL, instead of the proprietary Microsoft SQL Server that the official server uses.

It’s important to recognize some differences exist between the official and the unofficial version. For instance, the official server has been audited by a third-party, whereas the unofficial one hasn’t. When it comes to implementations, the unofficial version lacks email confirmation and support for two-factor authentication using Duo or email codes.

Let’s get started running the server with SELinux in mind. Following the documentation for bitwarden_rs you can construct a Podman command as follows:

$ podman run -d \
--userns=keep-id \
--name bitwarden \
-e SIGNUPS_ALLOWED=false \
-e ROCKET_PORT=8080 \
-v /home/egustavs/Bitwarden/bw-data/:/data/:Z \
-p 8080:8080 \
bitwardenrs/server:latest

This downloads the bitwarden_rs image and runs it in a user container under the user’s namespace. It uses a port above 1024 so that non-root users can bind to it. It also changes the volume’s SELinux context with :Z to prevent permission issues with read-write on /data.

If you host this under a domain, it’s recommended to put this server behind a reverse proxy with Apache or Nginx. That way you can use ports 80 and 443, which point to the container’s port 8080, without running the container as root.

Running under systemd

With Bitwarden now running, you probably want to keep it that way. Next, create a unit file that keeps the container running, automatically restarts if it doesn’t respond, and starts running after a system restart. Create this file as /etc/systemd/system/bitwarden.service:

[Unit]
Description=Bitwarden Podman container
Wants=syslog.service

[Service]
User=egustavs
Group=egustavs
TimeoutStartSec=0
# Start the existing 'bitwarden' container and attach, so systemd can track it
ExecStart=/usr/bin/podman start -a 'bitwarden'
ExecStop=-/usr/bin/podman stop -t 10 'bitwarden'
Restart=always
RestartSec=30s
KillMode=none

[Install]
WantedBy=multi-user.target

Now, enable and start it using sudo:

$ sudo systemctl enable bitwarden.service && sudo systemctl start bitwarden.service
$ systemctl status bitwarden.service
bitwarden.service - Bitwarden Podman container
Loaded: loaded (/etc/systemd/system/bitwarden.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2019-07-09 20:23:16 UTC; 1 day 14h ago
Main PID: 14861 (podman)
Tasks: 44 (limit: 4696)
Memory: 463.4M

Success! Bitwarden is now running under systemd and will keep running.

Adding LetsEncrypt

It’s strongly recommended to run your Bitwarden instance through an encrypted channel with something like LetsEncrypt if you have a domain. Certbot is a bot that creates LetsEncrypt certificates for us, and they have a guide for doing this through Fedora.

After you generate a certificate, you can follow the bitwarden_rs guide about HTTPS. Just remember to append :Z to the LetsEncrypt volume to handle permissions while not changing the port.

Photo by CMDR Shane on Unsplash.

Introducing Fedora CoreOS

Wednesday 24th of July 2019 08:00:58 AM

The Fedora CoreOS team is excited to announce the first preview release of Fedora CoreOS, a new Fedora edition built specifically for running containerized workloads securely and at scale. It’s the successor to both Fedora Atomic Host and CoreOS Container Linux. Fedora CoreOS combines the provisioning tools, automatic update model, and philosophy of Container Linux with the packaging technology, OCI support, and SELinux security of Atomic Host.

Read on for more details about this exciting new release.

Why Fedora CoreOS?

Containers allow workloads to be reproducibly deployed to production and automatically scaled to meet demand. The isolation provided by a container means that the host OS can be small. It only needs a Linux kernel, systemd, a container runtime, and a few additional services such as an SSH server.

While containers can be run on a full-sized server OS, an operating system built specifically for containers can provide functionality that a general purpose OS cannot. Since the required software is minimal and uniform, the entire OS can be deployed as a unit with little customization. And, since containers are deployed across multiple nodes for redundancy, the OS can update itself automatically and then reboot without interrupting workloads.

Fedora CoreOS is built to be the secure and reliable host for your compute clusters. It’s designed specifically for running containerized workloads without regular maintenance, automatically updating itself with the latest OS improvements, bug fixes, and security updates. It provisions itself with Ignition, runs containers with Podman and Moby, and updates itself atomically and automatically with rpm-ostree.

Provisioning immutable infrastructure

Whether you run in the cloud, virtualized, or on bare metal, a Fedora CoreOS machine always begins from the same place: a generic OS image. Then, during the first boot, Fedora CoreOS uses Ignition to provision the system. Ignition reads an Ignition config from cloud user data or a remote URL, and uses it to create disk partitions and file systems, users, files and systemd units.

To provision a machine:

  1. Write a Fedora CoreOS Config (FCC), a YAML document that specifies the desired configuration of a machine. FCCs support all Ignition functionality, and also provide additional syntax (“sugar”) that makes it easier to specify typical configuration changes. (A minimal FCC sketch follows this list.)
  2. Use the Fedora CoreOS Config Transpiler to validate your FCC and convert it to an Ignition config.
  3. Launch a Fedora CoreOS machine and pass it the Ignition config. If the machine boots successfully, provisioning has completed without errors.
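
To make the first step concrete, here is a minimal FCC sketch. It only sets an SSH key for the default core user; the key shown is a placeholder, and the variant and version values may change in later releases, so treat this as an illustration rather than a reference:

variant: fcos
version: 1.0.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-rsa AAAA...your-public-key-here...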

Fedora CoreOS is designed to be managed as immutable infrastructure. After a machine is provisioned, you should not modify /etc or otherwise reconfigure the machine. Instead, modify the FCC and use it to provision a replacement machine.

This is similar to how you’d manage a container: container images are not updated in place, but rebuilt from scratch and redeployed. This approach makes it easy to scale out when load increases. Simply use the same Ignition config to launch additional machines.

Automatic updates

By default, Fedora CoreOS automatically downloads new OS releases, atomically installs them, and reboots into them. Releases roll out gradually over time. We can even stop a rollout if we discover a problem in a new release. Upgrades between Fedora releases are treated as any other update, and are automatically applied without user intervention.
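
On a running machine, a quick way to see which deployment is currently booted, and whether a new one has been staged for the next boot, is rpm-ostree itself:

$ rpm-ostree status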

The Linux ecosystem evolves quickly, and software updates can bring undesired behavior changes. However, for automatic updates to be trustworthy, they cannot break existing machines. To avoid this, Fedora CoreOS takes a two-pronged approach. First, we automatically test each change to the OS. However, automatic testing can’t catch all regressions, so Fedora CoreOS also ships multiple independent release streams:

  • The testing stream is a regular snapshot of the current Fedora release, plus updates.
  • After a testing release has been available for two weeks, it is sent to the stable stream. Bugs discovered in testing will be fixed before a release is sent to stable.
  • The next stream is a regular snapshot of the upcoming Fedora release, allowing additional time for testing larger changes.

All three streams receive security updates and critical bugfixes, and are intended to be safe for production use. Most machines should run the stable stream, since that receives the most testing. However, users should run a few percent of their nodes on the next and testing streams, and report problems to the issue tracker. This helps ensure that bugs that only affect certain workloads or certain hardware are fixed before they reach stable.

Telemetry

To help direct our development efforts, Fedora CoreOS will perform some telemetry by default. A service called fedora-coreos-pinger will periodically collect non-identifying information about the machine, such as the OS version, cloud platform, and instance type, and report it to servers controlled by the Fedora project.

No unique identifiers will be reported or collected, and the data will only be used in aggregate to answer questions about how Fedora CoreOS is being used. We will prominently document that this collection is occurring and how to disable it. We will also tell you how to help the project by reporting additional detail, including information that might identify the machine.

Current status of Fedora CoreOS

Fedora CoreOS is still under active development, and some planned functionality is not available in the first preview release:

  • Only the testing stream currently exists; the next and stable streams are not yet available.
  • Several cloud and virtualization platforms are not yet available. Only x86_64 is currently supported.
  • Booting a live Fedora CoreOS system via network (PXE) or CD is not yet supported.
  • We are actively discussing plans for closer integration with Kubernetes distributions, including OKD.
  • Fedora CoreOS Config Transpiler will gain more sugar over time.
  • Telemetry is not yet active.
  • Documentation is still under development.

While Fedora CoreOS is intended for production use, preview releases should not be used in production. Fedora CoreOS may change in incompatible ways during the preview period. There is no guarantee that a preview release will successfully update to a later preview release, or to a stable release.

The future

We expect the preview period to continue for about six months. At the end of the preview, we will declare Fedora CoreOS stable and encourage its use in production.

CoreOS Container Linux will be maintained until about six months after Fedora CoreOS is declared stable. We’ll announce the exact timing later this year. During the preview period, we’ll publish tools and documentation to help Container Linux users migrate to Fedora CoreOS.

Fedora Atomic Host will be maintained until the end of life of Fedora 29, expected in late November. Before then, Fedora Atomic Host users should migrate to Fedora CoreOS.

Getting involved in Fedora CoreOS

To try out the new release, head over to the download page to get OS images or cloud image IDs. Then use the quick start guide to get a machine running quickly. Finally, get involved! You can report bugs and missing features to the issue tracker. You can also discuss Fedora CoreOS in Fedora Discourse, the development mailing list, or in #fedora-coreos on Freenode.

Welcome to Fedora CoreOS, and let us know what you think!

How to run virtual machines with virt-manager

Monday 22nd of July 2019 08:00:18 AM

In the beginning there was dual boot: it was the only way to have more than one operating system on the same laptop. At the time, it was difficult for these operating systems to run simultaneously or interact with each other. Many years passed before it was possible, on common PCs, to run an operating system inside another through virtualization.

Recent PCs or laptops, including moderately-priced ones, have the hardware features to run virtual machines with performance close to the physical host machine.

Virtualization has therefore become commonplace: to test operating systems, as a playground for learning new techniques, to create your own home cloud or test environment, and much more. This article walks you through using virt-manager on Fedora to set up virtual machines.

Introducing QEMU/KVM and Libvirt

Fedora, like all other Linux systems, comes with native support for virtualization extensions. This support is provided by KVM (Kernel-based Virtual Machine), currently available as a kernel module.

QEMU is a complete system emulator that works together with KVM and allows you to create virtual machines with hardware and peripherals.

Finally, libvirt is the API layer that allows you to administer the infrastructure, i.e. create and run virtual machines.

The set of these three technologies, all open source, is what we’re going to install on our Fedora Workstation.

Installation

Step 1: install packages

Installation is a fairly simple operation. The Fedora repository provides the “virtualization” package group that contains everything you need.

sudo dnf install @virtualization

Step 2: edit the libvirtd configuration

By default, system administration is limited to the root user. If you want to enable a regular user, proceed as follows.

Open the /etc/libvirt/libvirtd.conf file for editing

sudo vi /etc/libvirt/libvirtd.conf

Set the domain socket group ownership to libvirt

unix_sock_group = "libvirt"

Adjust the UNIX socket permissions for the R/W socket

unix_sock_rw_perms = "0770"

Step 3: start and enable the libvirtd service

sudo systemctl start libvirtd
sudo systemctl enable libvirtd

Step 4: add user to group

In order to administer libvirt as a regular user, you must add that user to the libvirt group; otherwise, every time you start virt-manager you will be asked for the sudo password.

sudo usermod -a -G libvirt $(whoami)

This adds the current user to the group. You must log out and log in to apply the changes.
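
Once you are logged back in, you can confirm the membership took effect (or use newgrp libvirt to pick up the new group in the current shell only):

$ id -nG | grep libvirt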

Getting started with virt-manager

The libvirt system can be managed either from the command line (virsh) or via the virt-manager graphical interface. The command line can be very useful if you want to do automated provisioning of virtual machines, for example with Ansible, but in this article we will concentrate on the user-friendly graphical interface.
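
If you do want a peek at the command line side, a couple of read-only virsh commands show the same system connection that virt-manager manages (this assumes your user is in the libvirt group as set up above):

$ virsh -c qemu:///system list --all
$ virsh -c qemu:///system net-list --all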

The virt-manager interface is simple. The main form shows the list of connections including the local system connection.

The connection settings include virtual network and storage definitions. It is possible to define multiple virtual networks, and these networks can be used for communication between guest systems and between the guest systems and the host.

Creating your first virtual machine

To start creating a new virtual machine, press the button at the top left of the main form:

The first step of the wizard requires the installation mode. You can choose between a local installation media, network boot / installation or an existing virtual disk import:

Choosing the local installation media the next step will require the ISO image path:


The subsequent two steps will allow you to size the CPU, memory and disk of the new virtual machine. The last step will ask you to choose network preferences: choose the default network if you want the virtual machine to be separated from the outside world by a NAT, or bridged if you want it to be reachable from the outside. Note that if you choose bridged the virtual machine cannot communicate with the host machine.

Check “Customize configuration before install” if you want to review or change the configuration before starting the setup:

The virtual machine configuration form allows you to review and modify the hardware configuration. You can add disks, network interfaces, change boot options and so on. Press “Begin installation” when satisfied:

At this point you will be redirected to the console, where you can proceed with the installation of the operating system. Once the operation is complete, you will have a working virtual machine that you can access from the console:

The virtual machine just created will appear in the list of the main form, where you will also see a graph of its CPU and memory usage:

libvirt and virt-manager are powerful tools that allow extensive customization of your virtual machines, with enterprise-level management. If something even simpler is desired, note that Fedora Workstation comes with GNOME Boxes pre-installed, which can be sufficient for basic virtualization needs.

Contribute at the Fedora Test Week for kernel 5.2

Monday 22nd of July 2019 06:51:00 AM

The kernel team is working on final integration for kernel 5.2. This version was recently released and will arrive soon in Fedora. It includes many security fixes. As a result, the Fedora kernel and QA teams have organized a test week from Monday, Jul 22, 2019 through Monday, Jul 29, 2019. Refer to the wiki page for links to the test images you’ll need to participate. Read below for details.

How does a test week work?

A test day/week is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to do the following things:

  • Download test materials, which include some large files
  • Read and follow directions step by step

The wiki page for the kernel test day has a lot of good information on what and how to test. After you’ve done some testing, you can log your results in the test day web application. If you’re available on or around the day of the event, please do some testing and report your results.

Happy testing, and we hope to see you on test day.

Modifying Windows local accounts with Fedora and chntpw

Friday 19th of July 2019 08:00:08 AM

I recently encountered a problem at work where a client’s Windows 10 PC lost its trust relationship with the domain. The user is an executive, and any downtime on his computer can affect real-time, mission-critical tasks. He gave me 30 minutes to resolve the issue while he attended a meeting.

Needless to say, I’ve encountered this issue many times in my career. It’s an easy fix using the Windows 7/8/10 installation media to reset the Administrator password, remove the PC from the domain, and rejoin it. Unfortunately it didn’t work this time. After 20 minutes of scouring the net and scanning through the Microsoft Docs with no success, I turned to my development machine running Fedora with hopes of finding a solution.

With dnf search I found a utility called chntpw:

$ dnf search windows | grep password

According to the summary, chntpw will “change passwords in Windows SAM files.”

Little did I know at the time there was more to this utility than explained in the summary. Hence, this article will go through the steps I used to successfully reset a Windows local user password using chntpw and a Fedora Workstation Live boot USB. The article will also cover some of the features of chntpw used for basic user administration.

Installation and setup

If the PC can connect to the internet after booting the live media, install chntpw from the official Fedora repository with:

$ sudo dnf install chntpw

If you’re unable to access the internet, no sweat! Fedora Workstation Live boot media has all the dependencies installed out-of-the-box, so all we need is the package. You can find the builds for your Fedora version from the Fedora Project’s Koji site. You can use another computer to download the utility and use a USB thumb drive, or other form of media to copy the package.
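
If you went the offline route, the package installs directly from the downloaded file. The exact file name depends on the build you grabbed from Koji, so adjust accordingly:

$ sudo dnf install ./chntpw-*.rpm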

First and foremost we need to create the Fedora Live USB stick. If you need instructions, the article on How to make a Fedora USB stick is a great reference.

Once the key is created, shut down the Windows PC, insert the thumb drive if the USB key was created on another computer, and turn on the PC — be sure to boot from the USB drive. Once the live media boots, select “Try Fedora” and open the Terminal application.

Next, we need to mount the Windows drive to access its files. Enter the following command to view all drive partitions with an NTFS filesystem:

$ sudo blkid | grep ntfs

Most hard drives are assigned to /dev/sdaX where X is the partition number — virtual drives may be assigned to /dev/vdX, and some newer drives (like SSDs) use /dev/nvmeX. For this example the Windows C drive is assigned to /dev/sda2. To mount the drive enter:

$ sudo mount /dev/sda2 /mnt

Fedora Workstation contains the ntfs-3g and ntfsprogs packages out-of-the-box. If you’re using a spin that does not have NTFS working out of the box, you can install these two packages from the official Fedora repository with:

$ sudo dnf install ntfs-3g ntfsprogs

Once the drive is mounted, navigate to the location of the SAM file and verify that it’s there:

$ cd /mnt/Windows/System32/config
$ ls | grep SAM
SAM
SAM.LOG1
SAM.LOG2

Clearing or resetting a password

Now it’s time to get to work. The help flag -h provides everything we need to know about this utility and how to use it:

$ chntpw -h
chntpw: change password of a user in a Windows SAM file,
or invoke registry editor. Should handle both 32 and 64 bit windows and
all version from NT3.x to Win8.1
chntpw [OPTIONS] [systemfile] [securityfile] [otherreghive] […]
-h This message
-u <user> Username or RID (0x3e9 for example) to interactively edit
-l list all users in SAM file and exit
-i Interactive Menu system
-e Registry editor. Now with full write support!
-d Enter buffer debugger instead (hex editor),
-v Be a little more verbose (for debuging)
-L For scripts, write names of changed files to /tmp/changed
-N No allocation mode. Only same length overwrites possible (very safe mode)
-E No expand mode, do not expand hive file (safe mode)

Usernames can be given as name or RID (in hex with 0x first)
See readme file on how to get to the registry files, and what they are.
Source/binary freely distributable under GPL v2 license. See README for details.
NOTE: This program is somewhat hackish! You are on your own!

Use the -l parameter to display a list of users it reads from the SAM file:

$ sudo chntpw -l SAM
chntpw version 1.00 140201, (c) Petter N Hagen
Hive name (from header): <\SystemRoot\System32\Config\SAM>
ROOT KEY at offset: 0x001020 * Subkey indexing type is: 686c
File size 65536 [10000] bytes, containing 7 pages (+ 1 headerpage)
Used for data: 346/37816 blocks/bytes, unused: 23/7016 blocks/bytes.

| RID -|---------- Username ------------| Admin? |- Lock? --|
| 01f4 | Administrator | ADMIN | dis/lock |
| 01f7 | DefaultAccount | | dis/lock |
| 03e8 | defaultuser0 | | dis/lock |
| 01f5 | Guest | | dis/lock |
| 03ea | sysadm | ADMIN | |
| 01f8 | WDAGUtilityAccount | | dis/lock |
| 03e9 | WinUser | | |

Now that we have a list of Windows users we can edit the account. Use the -u parameter followed by the username and the name of the SAM file. For this example, edit the sysadm account:

$ sudo chntpw -u sysadm SAM
chntpw version 1.00 140201, (c) Petter N Hagen
Hive name (from header): <\SystemRoot\System32\Config\SAM>
ROOT KEY at offset: 0x001020 * Subkey indexing type is: 686c
File size 65536 [10000] bytes, containing 7 pages (+ 1 headerpage)
Used for data: 346/37816 blocks/bytes, unused: 23/7016 blocks/bytes.

================= USER EDIT ====================

RID : 1002 [03ea]
Username: sysadm
fullname: SysADM
comment :
homedir :

00000220 = Administrators (which has 2 members)

Account bits: 0x0010 =
[ ] Disabled | [ ] Homedir req. | [ ] Passwd not req. |
[ ] Temp. duplicate | [X] Normal account | [ ] NMS account |
[ ] Domain trust ac | [ ] Wks trust act. | [ ] Srv trust act |
[ ] Pwd don't expir | [ ] Auto lockout | [ ] (unknown 0x08) |
[ ] (unknown 0x10) | [ ] (unknown 0x20) | [ ] (unknown 0x40) |

Failed login count: 0, while max tries is: 0
Total login count: 0

- - - User Edit Menu:
1 - Clear (blank) user password
(2 - Unlock and enable user account) [seems unlocked already]
3 - Promote user (make user an administrator)
4 - Add user to a group
5 - Remove user from a group
q - Quit editing user, back to user select
Select: [q] >

To clear the password press 1 and ENTER. If successful you will see the following message:

...
Select: [q] > 1
Password cleared!
================= USER EDIT ====================

RID : 1002 [03ea]
Username: sysadm
fullname: SysADM
comment :
homedir :

00000220 = Administrators (which has 2 members)

Account bits: 0x0010 =
[ ] Disabled | [ ] Homedir req. | [ ] Passwd not req. |
[ ] Temp. duplicate | [X] Normal account | [ ] NMS account |
[ ] Domain trust ac | [ ] Wks trust act. | [ ] Srv trust act |
[ ] Pwd don't expir | [ ] Auto lockout | [ ] (unknown 0x08) |
[ ] (unknown 0x10) | [ ] (unknown 0x20) | [ ] (unknown 0x40) |

Failed login count: 0, while max tries is: 0
Total login count: 0
** No NT MD4 hash found. This user probably has a BLANK password!
** No LANMAN hash found either. Try login with no password!
...

Verify the change by repeating:

$ sudo chntpw -l SAM
chntpw version 1.00 140201, (c) Petter N Hagen
Hive name (from header): <\SystemRoot\System32\Config\SAM>
ROOT KEY at offset: 0x001020 * Subkey indexing type is: 686c
File size 65536 [10000] bytes, containing 7 pages (+ 1 headerpage)
Used for data: 346/37816 blocks/bytes, unused: 23/7016 blocks/bytes.

| RID -|---------- Username ------------| Admin? |- Lock? --|
| 01f4 | Administrator | ADMIN | dis/lock |
| 01f7 | DefaultAccount | | dis/lock |
| 03e8 | defaultuser0 | | dis/lock |
| 01f5 | Guest | | dis/lock |
| 03ea | sysadm | ADMIN | *BLANK* |
| 01f8 | WDAGUtilityAccount | | dis/lock |
| 03e9 | WinUser | | |

...

The “Lock?” column now shows BLANK for the sysadm user. Type q to exit and y to write the changes to the SAM file. Reboot the machine into Windows and log in using the account (in this case sysadm) without a password.
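
Before rebooting, it’s a good idea to make sure the changes are flushed to disk by unmounting the Windows partition (assuming it was mounted on /mnt as shown earlier):

$ cd /
$ sudo umount /mnt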

Features

Furthermore, chntpw can perform basic Windows user administrative tasks. It has the ability to promote the user to the administrators group, unlock accounts, view and modify group memberships, and edit the registry.

The interactive menu

chntpw has an easy-to-use interactive menu to guide you through the process. Use the -i parameter to launch the interactive menu:

$ chntpw -i SAM
chntpw version 1.00 140201, (c) Petter N Hagen
Hive name (from header): <\SystemRoot\System32\Config\SAM>
ROOT KEY at offset: 0x001020 * Subkey indexing type is: 686c
File size 65536 [10000] bytes, containing 7 pages (+ 1 headerpage)
Used for data: 346/37816 blocks/bytes, unused: 23/7016 blocks/bytes.

<>========<> chntpw Main Interactive Menu <>========<>
Loaded hives:
1 - Edit user data and passwords
2 - List groups
- - -
9 - Registry editor, now with full write support!
q - Quit (you will be asked if there is something to save)

Groups and account membership

To display a list of groups and view its members, select option 2 from the interactive menu:

...
What to do? [1] -> 2
Also list group members? [n] y
=== Group # 220 : Administrators
0 | 01f4 | Administrator |
1 | 03ea | sysadm |
=== Group # 221 : Users
0 | 0004 | NT AUTHORITY\INTERACTIVE |
1 | 000b | NT AUTHORITY\Authenticated Users |
2 | 03e8 | defaultuser0 |
3 | 03e9 | WinUser |
=== Group # 222 : Guests
0 | 01f5 | Guest |
=== Group # 223 : Power Users
...
=== Group # 247 : Device Owners

Adding the user to the administrators group

To elevate the user with administrative privileges press 1 to edit the account, then 3 to promote the user:

...
Select: [q] > 3

=== PROMOTE USER
Will add the user to the administrator group (0x220)
and to the users group (0x221). That should usually be
what is needed to log in and get administrator rights.
Also, remove the user from the guest group (0x222), since
it may forbid logins.

(To add or remove user from other groups, please other menu selections)

Note: You may get some errors if the user is already member of some
of these groups, but that is no problem.

Do it? (y/n) [n] : y

Adding to 0x220 (Administrators) …
sam_put_user_grpids: success exit
Adding to 0x221 (Users) …
sam_put_user_grpids: success exit
Removing from 0x222 (Guests) …
remove_user_from_grp: NOTE: group not in users list of groups, may mean user not member at all. Safe. Continuing.
remove_user_from_grp: NOTE: user not in groups list of users, may mean user was not member at all. Does not matter, continuing.
sam_put_user_grpids: success exit

Promotion DONE!

Editing the Windows registry

Certainly the most noteworthy, as well as the most powerful, feature of chntpw is the ability to edit the registry and write to it. Select 9 from the interactive menu:

...
What to do? [1] -> 9
Simple registry editor. ? for help.

> ?
Simple registry editor:
hive [<n>] - list loaded hives or switch to hive number <n>
cd <key> - change current key
ls | dir [<key>] - show subkeys & values,
cat | type <value> - show key value
dpi <value> - show decoded DigitalProductId value
hex <value> - hexdump of value data
ck [<keyname>] - Show keys class data, if it has any
nk <keyname> - add key
dk <keyname> - delete key (must be empty)
ed <value> - Edit value
nv <type> <valuename> - Add value
dv <valuename> - Delete value
delallv - Delete all values in current key
rdel <keyname> - Recursively delete key & subkeys
ek <keyname> <filename> - export key to <filename> (Windows .reg file format)
debug - enter buffer hexeditor
st [<hexaddr>] - debug function: show struct info
q - quit

Finding help

As we saw earlier, the -h parameter allows us to quickly access a reference guide to the options available with chntpw. The man page contains detailed information and can be accessed with:

$ man chntpw

Also, if you’re interested in a more hands-on approach, spin up a virtual machine. Windows Server 2019 has an evaluation period of 180 days, and Windows Hyper-V Server 2019 is unlimited. Creating a Windows guest VM will provide the basics to modify the Administrator account for testing and learning. For help with quickly creating a guest VM refer to the article Getting started with virtualization in Gnome Boxes.

Conclusion

chntpw is a hidden gem for Linux administrators and IT professionals alike. While it’s a nifty tool for quickly resetting Windows account passwords, it can also be used to troubleshoot and modify local Windows accounts with a no-nonsense feel. It is perhaps only one tool for solving this problem, though. If you’ve experienced this issue and have an alternative solution, feel free to put it in the comments below.

This tool, like many other “hacking” tools, holds with it an ethical responsibility. Even chntpw states:

NOTE: This program is somewhat hackish! You are on your own!

When using such programs, we should remember the three edicts outlined in the message displayed when running sudo for the first time:

  1. Respect the privacy of others.
  2. Think before you type.
  3. With great power comes great responsibility.

Photo by Silas Köhler on Unsplash.

Bond WiFi and Ethernet for easier networking mobility

Wednesday 17th of July 2019 08:00:57 AM

Sometimes one network interface isn’t enough. Network bonding allows multiple network connections to act together with a single logical interface. You might do this because you want more bandwidth than a single connection can handle. Or maybe you want to switch back and forth between your wired and wireless networks without losing your network connection.

The latter applies to me. One of the benefits of working from home is that when the weather is nice, it’s enjoyable to work from a sunny deck instead of inside. But every time I did that, I lost my network connections. IRC, SSH, VPN — everything goes away, at least for a moment while some clients reconnect. This article describes how I set up network bonding on my Fedora 30 laptop to seamlessly move from the wired connection on my laptop dock to a WiFi connection.

In Linux, interface bonding is handled by the bonding kernel module. Fedora does not ship with this enabled by default, but it is included in the kernel-core package. This means that enabling interface bonding is only a command away:

sudo modprobe bonding

Note that this only lasts until you reboot. To permanently enable interface bonding, create a file called bonding.conf in the /etc/modules-load.d directory that contains only the word “bonding”.
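
One way to create that file in a single command:

$ echo "bonding" | sudo tee /etc/modules-load.d/bonding.conf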

Now that you have bonding enabled, it’s time to create the bonded interface. First, you must get the names of the interfaces you want to bond. To list the available interfaces, run:

sudo nmcli device status

You will see output that looks like this:

DEVICE          TYPE      STATE         CONNECTION         
enp12s0u1       ethernet  connected     Wired connection 1
tun0            tun       connected     tun0               
virbr0          bridge    connected     virbr0             
wlp2s0          wifi      disconnected  --      
p2p-dev-wlp2s0  wifi-p2p disconnected  --      
enp0s31f6       ethernet  unavailable   --      
lo              loopback  unmanaged     --                 
virbr0-nic      tun       unmanaged     --       

In this case, there are two (wired) Ethernet interfaces available. enp12s0u1 is on a laptop docking station, and you can tell that it’s connected from the STATE column. The other, enp0s31f6, is the built-in port in the laptop. There is also a WiFi connection called wlp2s0. enp12s0u1 and wlp2s0 are the two interfaces we’re interested in here. (Note that it’s not necessary for this exercise to understand how network devices are named, but if you’re interested you can see the systemd.net-naming-scheme man page.)

The first step is to create the bonded interface:

sudo nmcli connection add type bond ifname bond0 con-name bond0

In this example, the bonded interface is named bond0. The “con-name bond0” sets the connection name to bond0; leaving this off would result in a connection named bond-bond0. You can also set the connection name to something more human-friendly, like “Docking station bond” or “Ben”.

The next step is to add the interfaces to the bonded interface:

sudo nmcli connection add type ethernet ifname enp12s0u1 master bond0 con-name bond-ethernet
sudo nmcli connection add type wifi ifname wlp2s0 master bond0 ssid Cotton con-name bond-wifi

As above, the connection name is specified to be more descriptive. Be sure to replace enp12s0u1 and wlp2s0 with the appropriate interface names on your system. For the WiFi interface, use your own network name (SSID) where I use “Cotton”. If your WiFi connection has a password (and of course it does!), you’ll need to add that to the configuration, too. The following assumes you’re using WPA2-PSK authentication:

sudo nmcli connection modify bond-wifi wifi-sec.key-mgmt wpa-psk
sudo nmcli connection edit bond-wifi

The second command will bring you into the interactive editor where you can enter your password without it being logged in your shell history. Enter the following, replacing password with your actual password:

set wifi-sec.psk password
save
quit

Now you’re ready to start your bonded interface and the secondary interfaces you created:

sudo nmcli connection up bond0
sudo nmcli connection up bond-ethernet
sudo nmcli connection up bond-wifi

You should now be able to disconnect your wired or wireless connections without losing your network connections.
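
If you want to check which interfaces are part of the bond and which one is currently carrying traffic, the kernel exposes that information while the bond is up (the file name matches the ifname chosen above):

$ cat /proc/net/bonding/bond0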

A caveat: using other WiFi networks

This configuration works well when moving around on the specified WiFi network, but when away from this network, the SSID used in the bond is not available. Theoretically, one could add an interface to the bond for every WiFi connection used, but that doesn’t seem reasonable. Instead, you can disable the bonded interface:

sudo nmcli connection down bond0

When back on the defined WiFi network, simply start the bonded interface as above.

Fine-tuning your bond

By default, the bonded interface uses the “load balancing (round-robin)” mode. This spreads the load equally across the interfaces. But if you have a wired and a wireless connection, you may want to prefer the wired connection. The “active-backup” mode enables this. You can specify the mode and primary interface when you are creating the interface, or afterward using this command (the bonded interface should be down):

sudo nmcli connection modify bond0 +bond.options "mode=active-backup,primary=enp12s0u1"

The kernel documentation has much more information about bonding options.

What is Silverblue?

Friday 12th of July 2019 08:00:38 AM

Fedora Silverblue is becoming more and more popular inside and outside the Fedora world. So, based on feedback from the community, here are answers to some interesting questions about the project. If you have any other Silverblue-related questions, please leave them in the comments section and we will try to answer them in a future article.

What is Silverblue?

Silverblue is a codename for the new generation of the desktop operating system, previously known as Atomic Workstation. The operating system is delivered in images that are created by utilizing the rpm-ostree project. The main benefits of the system are speed, security, atomic updates and immutability.

What does “Silverblue” actually mean?

“Team Silverblue”, or “Silverblue” for short, doesn’t have any hidden meaning. It was chosen after roughly two months, when the project previously known as Atomic Workstation was rebranded. There were over 150 words or word combinations reviewed in the process. In the end Silverblue was chosen because it had an available domain as well as the social network accounts. One could think of it as a new take on Fedora’s blue branding, and it can be used in phrases like “Go, Team Silverblue!” or “Want to join the team and improve Silverblue?”.

What is ostree?

OSTree or libostree is a project that combines a “git-like” model for committing and downloading bootable filesystem trees with a layer to deploy them and manage the bootloader configuration. OSTree is used by rpm-ostree, a hybrid package/image based system that Silverblue uses. It atomically replicates a base OS and allows the user to “layer” traditional RPM packages on top of the base OS if needed.

Why use Silverblue?

Because it allows you to concentrate on your work and not on the operating system you’re running. It’s more robust as the updates of the system are atomic. The only thing you need to do is to restart into the new image. Also, if there’s anything wrong with the currently booted image, you can easily reboot/rollback to the previous working one, if available. If it isn’t, you can download and boot any other image that was generated in the past, using the ostree command.
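
A minimal sketch of what that looks like in practice (rpm-ostree is the usual front end for these operations):

$ rpm-ostree status
$ sudo rpm-ostree rollback
$ systemctl reboot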

Another advantage is the possibility of an easy switch between branches (or, in an older context, Fedora releases). You can easily try the Rawhide or updates-testing branch and then return to the one that contains the current stable release. Also, you should consider Silverblue if you want to try something new and unusual.

What are the benefits of an immutable OS?

Having the root filesystem mounted read-only by default increases resilience against accidental damage as well as some types of malicious attack. The primary tool to upgrade or change the root filesystem is rpm-ostree.

Another benefit is robustness. It’s nearly impossible for a regular user to get the OS into a state where it doesn’t boot or doesn’t work properly after accidentally or unintentionally removing a system library. Try to think about these kinds of experiences from your past, and imagine how Silverblue could help you there.

How does one manage applications and packages in Silverblue?

For graphical user interface applications, Flatpak is recommended, if the application is available as a flatpak. Users can choose between Flatpaks from Fedora, built from Fedora packages in Fedora-owned infrastructure, or from Flathub, which currently has a wider offering. Users can install them easily through GNOME Software, which already supports Fedora Silverblue.

One of the first things users find out is that there is no dnf preinstalled in the OS. The main reason is that it wouldn’t work on Silverblue; part of its functionality is replaced by the rpm-ostree command. Users can overlay traditional packages by using rpm-ostree install PACKAGE. But it should only be used when there is no other way, because every time a new base image is pulled from the repository, the local system image must be rebuilt to accommodate the layered packages, as well as any packages that were removed from the base OS or replaced with a different version.
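
For example, layering a single extra package and rebooting into the newly composed image might look like this (htop is just an illustrative package):

$ rpm-ostree install htop
$ systemctl reboot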

Fedora Silverblue comes with the default set of GUI applications that are part of the base OS. The team is working on porting them to Flatpaks so they can be distributed that way. As a benefit, the base OS will become smaller and easier to maintain and test, and users can modify their default installation more easily. If you want to look at how it’s done or help, take a look at the official documentation.

What is Toolbox?

Toolbox is a project to make containers easily consumable for regular users. It does that by using podman’s rootless containers. Toolbox lets you easily and quickly create a container with a regular Fedora installation that you can play with or develop on, separated from your OS.
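
For example, creating and entering a containerized Fedora environment takes just two commands:

$ toolbox create
$ toolbox enter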

Is there any Silverblue roadmap?

Formally there isn’t any, as we’re focusing on problems we discover during our testing and from community feedback. We’re currently using Fedora’s Taiga to do our planning.

What’s the release life cycle of Silverblue?

It’s the same as regular Fedora Workstation. A new release comes every 6 months and is supported for 13 months. The team plans to release updates for the OS bi-weekly (or longer) instead of daily as they currently do. That way the updates can be more thoroughly tested by QA and community volunteers before they are sent to the rest of the users.

What is the future of the immutable OS?

From our point of view the future of the desktop involves the immutable OS. It’s safest for the user, and Android, ChromeOS, and the latest macOS Catalina all use this method under the hood. For the Linux desktop there are still problems with some third party software that expects to write to the OS. HP printer drivers are a good example.

Another issue is how parts of the system are distributed and installed. Fonts are a good example. Currently in Fedora they’re distributed in RPM packages. If you want to use them, you have to overlay them and then restart to the newly created image that contains them.

What is the future of standard Workstation?

There is a possibility that Silverblue will replace the regular Workstation. But there’s still a long way to go for Silverblue to provide the same functionality and user experience as the Workstation. In the meantime both desktop offerings will be delivered at the same time.

How does Atomic Workstation or Fedora CoreOS relate to any of this?

Atomic Workstation was the name of the project before it was renamed to Fedora Silverblue.

Fedora CoreOS is a different, but similar project. It shares some fundamental technologies with Silverblue, such as rpm-ostree, toolbox and others. Nevertheless, CoreOS is a more minimal, container-focused and automatically updating OS.

Firefox 68 available now in Fedora

Friday 12th of July 2019 01:33:53 AM

Earlier this week, Mozilla released version 68 of the Firefox web browser. Firefox is the default web browser in Fedora, and this update is now available in the official Fedora repositories.

This Firefox release provides a range of bug fixes and enhancements, including:

  • Better handling when using dark GTK themes (like Adwaita Dark). Previously, running a dark theme could cause issues where user interface elements on a rendered webpage (like forms) were rendered in the dark theme on a white background. Firefox 68 resolves these issues. Refer to these two Mozilla bugzilla tickets for more information.
  • The about:addons special page has two new features to keep you safer when installing extensions and themes in Firefox. First is the ability to report security and stability issues with addons directly in the about:addons page. Additionally, about:addons now has a list of secure and stable extensions and themes that have been vetted by the Recommended Extensions program.

Updating Firefox in Fedora

Firefox 68 has already been pushed to the stable Fedora repositories. The update will be applied to your system the next time you update. You can also update only the firefox package by running the following command:

$ sudo dnf update --refresh firefox

This command requires you to have sudo set up on your system. Additionally, note that not every Fedora mirror syncs at the same rate. Community sites graciously donate space and bandwidth to these mirrors to carry Fedora content. You may need to try again later if your selected mirror is still awaiting the latest update.

Fedora job opening: Fedora Community Action and Impact Coordinator (FCAIC)

Wednesday 10th of July 2019 02:50:34 PM

I’ve decided to move on from my role as the Fedora Community Action and Impact Coordinator (FCAIC).  This was not an easy decision to make. I am proud of the work I have done in Fedora over the last three years and I think I have helped the community move past many challenges.  I could NEVER have done all of this without the support and assistance of the community!

As some of you know, I have been covering for some other roles in Red Hat for almost the last year.  Some of these tasks have led to some opportunities to take my career in a different direction. I am going to remain at Red Hat and on the same team with the same manager, but with a slightly expanded scope of duties.  I will no longer be day-to-day on Fedora and will instead be in a consultative role as a Community Architect at Large. This is a fancy way of saying that I will be tackling helping lots of projects with various issues while also working on some specific strategic objectives.

I think this is a great opportunity for the Fedora community.  The Fedora I became FCAIC in three years ago is a very different place from the Fedora of today.  While I could easily continue to help shape and grow this community, I think that I can do more by letting some new ideas come in.  The new person will hopefully be able to approach challenges differently. I’ll also be here to offer my advice and feedback as others who have moved on in the past have done.  Additionally, I will work with Matthew Miller and Red Hat to help hire and onboard the new Fedora Community and Impact Coordinator. During this time I will continue as FCAIC.

This means that we are looking for a new FCAIC. Love Fedora? Want to work with Fedora full-time to help support and grow the Fedora community? This is the core of what the FCAIC does. The job description (also below) has a list of some of the primary job responsibilities and required skills, but that’s just a sample of the duties required and of the day-to-day life of working full-time with the Fedora community.

Day to day work includes working with Mindshare, managing the Fedora Budget, and being part of many other teams, including the Fedora Council.  You should be ready to write frequently about Fedora’s achievements, policies and decisions, and to draft and generate ideas and strategies. And, of course, planning Flock and Fedora’s presence at other events. It’s hard work, but also a great deal of fun.

Are you good at setting long-term priorities and hacking away at problems with the big picture in mind? Do you enjoy working with people all around the world, with a variety of skills and interests, to build not just a successful Linux distribution, but a healthy project? Can you set priorities, follow through, and know when to say “no” in order to focus on the most important tasks for success? Is Fedora’s mission deeply important to you?

If you said “yes” to those questions, you might be a great candidate for the FCAIC role. If you think you’re a great fit apply online, or contact Matthew Miller, Brian Exelbierd, or Stormy Peters.


Fedora Community Action and Impact Coordinator

Location: CZ-Remote – prefer Europe but can be North America

Company Description

At Red Hat, we connect an innovative community of customers, partners, and contributors to deliver an open source stack of trusted, high-performing solutions. We offer cloud, Linux, middleware, storage, and virtualization technologies, together with award-winning global customer support, consulting, and implementation services. Red Hat is a rapidly growing company supporting more than 90% of Fortune 500 companies.

Job summary

Red Hat’s Open Source Programs Office (OSPO) team is looking for the next Fedora Community Action and Impact Lead. In this role, you will join the Fedora Council and guide initiatives to grow the Fedora user and developer communities, as well as make Red Hat and Fedora interactions even more transparent and positive. The Council is responsible for stewardship of the Fedora Project as a whole, and supports the health and growth of the Fedora community.

As the Fedora Community Action and Impact Lead, you’ll facilitate decision making on how to best focus the Fedora community budget to meet our collective objectives, work with other council members to identify the short, medium, and long-term goals of the Fedora community, and organize and enable the project.

You will also help make decisions about trademark use, project structure, community disputes or complaints, and other issues. You’ll hold a full council membership, not an auxiliary or advisory role.

Primary job responsibilities
  • Identify opportunities to engage new contributors and community members; align project around supporting those opportunities.
  • Improve on-boarding materials and processes for new contributors.
  • Participate in user and developer discussions and identify barriers to success for contributors and users.
  • Use metrics to evaluate the success of open source initiatives.
  • Regularly report on community metrics and developments, both internally and externally.  
  • Represent Red Hat’s stake in the Fedora community’s success.
  • Work with internal stakeholders to understand their goals and develop strategies for working effectively with the community.
  • Improve onboarding materials and presentation of Fedora to new hires; develop standardized materials on Fedora that can be used globally at Red Hat.
  • Work with the Fedora Council to determine the annual Fedora budget.
  • Assist in planning and organizing Fedora’s flagship events each year.
  • Create and carry out community promotion strategies; create media content like blog posts, podcasts, and videos and facilitate the creation of media by other members of the community
Required skills
  • Extensive experience with the Fedora Project or a comparable open source community.
  • Exceptional writing and speaking skills
  • Experience with software development and open source developer communities; understanding of development processes.
  • Outstanding organizational skills; ability to prioritize tasks matching short and long-term goals and focus on the tasks of high priority
  • Ability to manage a project budget.
  • Ability to lead teams and participate in multiple cross-organizational teams that span the globe.
  • Experience motivating volunteers and staff across departments and companies

Red Hat is proud to be an equal opportunity workplace and an affirmative action employer. We review applications for employment without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, citizenship, age, veteran status, genetic information, physical or mental disability, medical condition, marital status, or any other basis prohibited by law.

Red Hat does not seek or accept unsolicited resumes or CVs from recruitment agencies. We are not responsible for, and will not pay, any fees, commissions, or any other payment related to unsolicited resumes or CVs except as required in a written contract between Red Hat and the recruitment agency or party requesting payment of a fee.

Photo by Deva Williamson on Unsplash.

Red Hat, IBM, and Fedora

Tuesday 9th of July 2019 12:51:40 PM

Today marks a new day in the 26-year history of Red Hat. IBM has finalized its acquisition of Red Hat, which will operate as a distinct unit within IBM.

What does this mean for Red Hat’s participation in the Fedora Project?

In short, nothing.

Red Hat will continue to be a champion for open source, just as it always has, and valued projects like Fedora will continue to play a role in driving innovation in open source technology. IBM is committed to Red Hat’s independence and role in open source software communities. We will continue this work and, as always, we will continue to help upstream projects be successful and contribute to welcoming new members and maintaining the project.

In Fedora, our mission, governance, and objectives remain the same. Red Hat associates will continue to contribute to the upstream in the same ways they have been.

We will do this together, with the community, as we always have.

If you have questions or would like to learn more about today’s news, I encourage you to review the materials below. For any questions not answered here, please feel free to contact us. Red Hat CTO Chris Wright will host an online Q&A session in the coming days where you can ask questions you may have about what the acquisition means for Red Hat and our involvement in open source communities. Details will be announced on the Red Hat blog.

Regards,

Matthew Miller, Fedora Project Leader
Brian Exelbierd, Fedora Community Action and Impact Coordinator