Fedora Magazine

Guides, information, and news about the Fedora operating system for users, developers, system administrators, and community members.

Network address translation part 3 – the conntrack event framework

Friday 26th of March 2021 08:00:00 AM

This is the third post in a series about network address translation (NAT). The first article introduced how to use the iptables/nftables packet tracing feature to find the source of NAT-related connectivity problems. Part 2 introduced the “conntrack” command. This part gives an introduction to the “conntrack” event framework.

Introduction

NAT configured via iptables or nftables builds on top of netfilter’s connection tracking framework. conntrack’s event facility allows real-time monitoring of incoming and outgoing flows. This event framework is useful for debugging or logging flow information, for instance with ulog and its IPFIX output plugin.

Conntrack events

Run the following command to see a real-time conntrack event log:

# conntrack -E
NEW tcp 120 SYN_SENT src=10.1.0.114 dst=10.7.43.52 sport=4123 dport=22 [UNREPLIED] src=10.7.43.52 dst=10.1.0.114 sport=22 dport=4123
UPDATE tcp 60 SYN_RECV src=10.1.0.114 dst=10.7.43.52 sport=4123 dport=22 src=10.7.43.52 dst=10.1.0.114 sport=22 dport=4123
UPDATE tcp 432000 ESTABLISHED src=10.1.0.114 dst=10.7.43.52 sport=4123 dport=22 src=10.7.43.52 dst=10.1.0.114 sport=22 dport=4123 [ASSURED]
UPDATE tcp 120 FIN_WAIT src=10.1.0.114 dst=10.7.43.52 sport=4123 dport=22 src=10.7.43.52 dst=10.1.0.114 sport=22 dport=4123 [ASSURED]
UPDATE tcp 30 LAST_ACK src=10.1.0.114 dst=10.7.43.52 sport=4123 dport=22 src=10.7.43.52 dst=10.1.0.114 sport=22 dport=4123 [ASSURED]
UPDATE tcp 120 TIME_WAIT src=10.1.0.114 dst=10.7.43.52 sport=4123 dport=22 src=10.7.43.52 dst=10.1.0.114 sport=22 dport=4123 [ASSURED]

This prints a continuous stream of events:

  • new connections
  • removal of connections
  • changes in a connection's state.

Hit Ctrl+C to quit.

The conntrack tool offers a number of options to limit the output. For example, it's possible to show only DESTROY events. The NEW event is generated after the iptables/nftables rule set accepts the corresponding packet.
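A couple of sketches of such filtered invocations, using standard conntrack options (the address below is a placeholder):

# show only DESTROY events, i.e. connections being removed from the table
sudo conntrack -E -e DESTROY

# show NEW and DESTROY events for a single host and protocol
sudo conntrack -E -e NEW,DESTROY -p tcp --src 10.1.0.114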

Conntrack expectations

Some legacy protocols require multiple connections to work, such as FTP, SIP or H.323. To make these work in NAT environments, conntrack uses “connection tracking helpers”: kernel modules that can parse the specific higher-level protocol such as ftp.

The nf_conntrack_ftp module parses the ftp command connection and extracts the TCP port number that will be used for the file transfer. The helper module then inserts an “expectation” that consists of the extracted port number and the address of the ftp client. When a new data connection arrives, conntrack searches the expectation table for a match. An incoming connection that matches such an entry is flagged RELATED rather than NEW. This allows you to craft iptables and nftables rulesets that reject incoming connection requests unless they were requested by an existing connection. If the original connection is subject to NAT, the related data connection inherits this as well. This means that helpers can expose ports on internal hosts that are otherwise unreachable from the wider internet. The next section explains this expectation mechanism in more detail.

The expectation table

Use conntrack -L expect to list all active expectations. In most cases this table appears to be empty, even if a helper module is active. This is because expectation table entries are short-lived. Use conntrack -E expect to monitor the system for changes in the expectation table instead.

Use this to determine if a helper is working as intended or to log conntrack actions taken by the helper. Here is an example output of a file download via ftp:

# conntrack -E expect
NEW 300 proto=6 src=10.2.1.1 dst=10.8.4.12 sport=0 dport=46767 mask-src=255.255.255.255 mask-dst=255.255.255.255 sport=0 dport=65535 master-src=10.2.1.1 master-dst=10.8.4.12 sport=34526 dport=21 class=0 helper=ftp
DESTROY 299 proto=6 src=10.2.1.1 dst=10.8.4.12 sport=0 dport=46767 mask-src=255.255.255.255 mask-dst=255.255.255.255 sport=0 dport=65535 master-src=10.2.1.1 master-dst=10.8.4.12 sport=34526 dport=21 class=0 helper=ftp

The expectation entry describes the criteria that an incoming connection request must meet in order to be recognized as a RELATED connection. In this example, the connection may come from any source port but must go to port 46767 (the port the ftp server expects to receive the data connection request on). Furthermore, the source and destination addresses must match the addresses of the ftp client and server.

Events also include the connection that created the expectation and the name of the protocol helper (ftp). The helper has full control over the expectation: it can request full matching (IP addresses of the incoming connection must match), it can restrict to a subnet or even allow the request to come from any address. Check the “mask-dst” and “mask-src” parameters to see what parts of the addresses need to match.

Caveats

You can configure some helpers to allow wildcard expectations. Such wildcard expectations can result in requests from an unrelated third-party host being flagged as RELATED. This can open internal servers to the wider internet (“NAT slipstreaming”).

This is the reason helper modules require explicit configuration from the nftables/iptables ruleset. See this article for more information about helpers and how to configure them. It includes a table that describes the various helpers and the types of expectations (such as wildcard forwarding) they can create. The nftables wiki has a nft ftp example.

An nftables rule like 'ct state related ct helper "ftp"' matches connections that were detected as a result of an expectation created by the ftp helper.

In iptables, use "-m conntrack --ctstate RELATED -m helper --helper ftp". Always restrict helpers to only allow communication to and from the expected server addresses. This prevents accidental exposure of other, unrelated hosts.
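As a rough sketch (not a drop-in ruleset), the following nftables snippet enables the ftp helper only for control connections to a known server and then accepts the expected data connections. The table and chain names and the 10.8.4.12 address are examples:

table inet filter {
    ct helper ftp-std {
        type "ftp" protocol tcp
    }
    chain prerouting {
        type filter hook prerouting priority 0;
        # assign the helper only for ftp control connections to the known server
        ip daddr 10.8.4.12 tcp dport 21 ct helper set "ftp-std"
    }
    chain forward {
        type filter hook forward priority 0; policy drop;
        ct state established accept
        # data connections expected by the ftp helper
        ct state related ct helper "ftp" accept
    }
}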

Summary

This article introduced the conntrack event facility and gave examples of how to inspect the expectation table. The next part of the series will describe low-level debug knobs of conntrack.

Announcing the release of Fedora Linux 34 Beta

Tuesday 23rd of March 2021 02:01:55 PM

The Fedora Project is pleased to announce the immediate availability of Fedora Linux 34 Beta, the next step towards our planned Fedora Linux 34 release at the end of April.

Download the prerelease from our Get Fedora site:

Or, check out one of our popular variants, including KDE Plasma, Xfce, and other desktop environments, as well as images for ARM devices like the Raspberry Pi 2 and 3:

Beta Release Highlights

BTRFS transparent compression

After Fedora Linux 33 made BTRFS the default filesystem for desktop variants, Fedora Linux 34 Beta enables transparent compression by default to save disk space. Compression also increases the lifespan of flash-based media by reducing write amplification on solid-state disks, and it can improve read and write performance for larger files. This provides a foundation for further enhancements in future versions.

Replacing PulseAudio with PipeWire

Desktop audio will transition from PulseAudio to PipeWire for mixing and managing audio streams. PipeWire supports the low-latency requirements of pro audio use cases and is better designed to meet the needs of containers and applications shipped as Flatpaks. The integration of PipeWire also creates the space for a single audio infrastructure to serve both desktop and professional mixing use cases.
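Once a system has switched over, one quick sanity check (assuming PipeWire is providing the PulseAudio-compatible service) is to ask the PulseAudio client tools which server they are talking to:

# if PipeWire provides the PulseAudio service, the reported server name mentions PipeWire
$ pactl info | grep "Server Name"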

Fedora Workstation

Fedora 34 Workstation Beta includes GNOME 40, the newest release of the GNOME desktop environment. GNOME 40 represents a significant rewrite and brings user experience enhancements to the GNOME shell overview. It changes features like search, windows, workspaces and applications to be more spatially coherent. GNOME shell will also start in the overview after login, and the GNOME welcome tour that was introduced in Fedora Linux 33 will be adapted to the new design for an integrated, cohesive look for the desktop.

Other updates

Fedora Linux 34 Beta provides a better experience in out-of-memory (OOM) situations by enabling systemd-oomd by default. Actions taken by systemd-oomd operate on a per-cgroup level, aligning well with the life cycle of systemd units.
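A couple of quick checks with the standard systemd tools (output omitted here) show the new out-of-memory handling at work:

# confirm the userspace OOM killer is running
$ systemctl status systemd-oomd

# dump the cgroups that systemd-oomd is currently monitoring
$ oomctl dump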

The KDE Plasma desktop now uses the Wayland display server by default. We are also shipping the Fedora KDE Plasma Desktop Spin on the aarch64 architecture for the first time.

A new i3 Spin introduced in Fedora Linux 34 Beta is our first desktop spin to feature a tiling window manager.

Testing needed

Since this is a Beta release, we expect that you may encounter bugs or missing features. To report issues encountered during testing, contact the Fedora QA team via the mailing list or in the #fedora-qa channel on Freenode IRC. As testing progresses, common issues are tracked on the Common F34 Bugs page.

For tips on reporting a bug effectively, read how to file a bug.

What is the Beta Release?

A Beta release is code-complete and bears a very strong resemblance to the final release. If you take the time to download and try out the Beta, you can check and make sure the things that are important to you are working. Every bug you find and report doesn’t just help you, it improves the experience of millions of Fedora Linux users worldwide! Together, we can make Fedora rock-solid. We have a culture of coordinating new features and pushing fixes upstream as much as we can. Your feedback improves not only Fedora Linux, but the Linux ecosystem and free software as a whole.

More information

For more detailed information about what's new in the Fedora Linux 34 Beta release, you can consult the Fedora Linux 34 Change set. It contains more technical information about the new packages and improvements shipped with this release.

Productivity with Ulauncher

Monday 22nd of March 2021 08:00:00 AM

Application launchers are a category of productivity software that not everyone is familiar with, and yet most people use the basic concepts without realizing it. As the name implies, this type of software launches applications, but it can also have other capabilities.

Examples of dedicated Linux launchers include dmenu, Synapse, and Albert. On MacOS, some examples are Quicksilver and Alfred. Many modern desktops include basic versions as well. On Fedora Linux, the Gnome 3 activities overview uses search to open applications and more, while MacOS has the built-in launcher Spotlight.

While these applications have great feature sets, this article focuses on productivity with Ulauncher.

What is Ulauncher?

Ulauncher is a new application launcher written in Python, with the first Fedora package available in March 2020 for Fedora Linux 32. The core focuses on basic functionality with a nice interface for extensions. Like most application launchers, the key idea in Ulauncher is search. Search is a powerful productivity boost, especially for repetitive tasks.

Typical menu-driven interfaces work great for discovery when you aren't sure what options are available. However, when the same action needs to happen repeatedly, it is a real time sink to navigate into three nested sub-menus over and over again. On the other hand, hotkeys give immediate access to specific actions, but they can be difficult to remember, especially after exhausting all the obvious mnemonics. Is Control+C “copy”, or is it “cancel”? Search is a middle ground: it gets you to a specific command quickly, while supporting discovery by typing only a remembered word or fragment. Exploring by search works especially well if tags and descriptions are available. Ulauncher supplies the search framework that extensions can use to build all manner of productivity-enhancing actions.

Getting started

Getting the core functionality of Ulauncher on any Fedora OS is trivial; install using dnf:

sudo dnf install ulauncher

Once installed, use any standard desktop launching method for the first start up of Ulauncher. A basic dialog should pop up, but if not try launching it again to toggle the input box on. Click the gear icon on the right side to open the preferences dialog.

Ulauncher input box

A number of options are available, but the most important when starting out are Launch at login and the hotkey. The default hotkey is Control+space, but it can be changed. Running in Wayland needs additional configuration for consistent operation; see the Ulauncher wiki for details. Users of “Focus on Hover” or “Sloppy Focus” should also enable the “Don’t hide after losing mouse focus” option. Otherwise, Ulauncher disappears while typing in some cases.

Ulauncher basics

The idea of any application launcher, like Ulauncher, is fast access at any time. Press the hotkey and the input box shows up on top of the current application. Type out and execute the desired command and the dialog hides until the next use. Unsurprisingly, the most basic operation is launching applications. This is similar to most modern desktop environments. Hit the hotkey to bring up the dialog and start typing, for example te, and a list of matches comes up. Keep typing to further refine the search, or navigate to the entry using the arrow keys. For even faster access, use Alt+# to directly choose a result.

Ulauncher dialog searching for keywords with “te”

Ulauncher can also do quick calculations and navigate the file-system. To calculate, hit the hotkey and type a math expression. The result list dynamically updates with the result, and hitting Enter copies the value to the clipboard. Start file-system navigation by typing / to start at the root directory or ~/ to start in the home directory. Selecting a directory lists that directory’s contents and typing another argument filters the displayed list. Locate the right file by repeatedly descending directories. Selecting a file opens it, while Alt+Enter opens the folder containing the file.

Ulauncher shortcuts

The first bit of customization comes in the form of shortcuts. The Shortcuts tab in the preferences dialog lists all the current shortcuts. Shortcuts can be direct commands, URL aliases, URLs with argument substitution, or small scripts. Basic shortcuts for Wikipedia, StackOverflow, and Google come pre-configured, but custom shortcuts are easy to add.

Ulauncher shortcuts preferences tab

For instance, to create a duckduckgo search shortcut, click Add Shortcut in the Shortcuts preferences tab and add the name and keyword duck with the query https://duckduckgo.com/?q=%s. Any argument given to the duck keyword replaces %s in the query, and the resulting URL opens in the default browser. Now, typing duck fedora will bring up a duckduckgo search using the supplied terms, in this case fedora.

A more complex shortcut is a script to convert UTC time to local time. Once again click Add Shortcut and this time use the keyword utc. In the Query or Script text box, include the following script:

#!/bin/bash
tzdate=$(date -d "$1 UTC")
zenity --info --no-wrap --text="$tzdate"

This script takes the first argument (given as $1) and uses the standard date utility to convert the given UTC time into the computer's local timezone. Then zenity pops up a simple dialog with the result. To test this, open Ulauncher and type utc 11:00. While this is a good example of what's possible with shortcuts, see the ultz extension for a more complete time zone converter.

Introducing extensions

While the built-in functionality is great, installing extensions really accelerates productivity with Ulauncher. Extensions can go far beyond what is possible with custom shortcuts, most obviously by providing suggestions as arguments are typed. Extensions are Python modules which use the Ulauncher extension interface and can either be personally-developed local code or shared with others using GitHub. A collection of community developed extensions is available at https://ext.ulauncher.io/. There are basic standalone extensions for quick conversions and dynamic interfaces to online resources such as dictionaries. Other extensions integrate with external applications, like password managers, browsers, and VPN providers. These effectively give external applications a Ulauncher interface. By keeping the core code small and relying on extensions to add advanced functionality, Ulauncher ensures that each user only installs the functionality they need.

Ulauncher extension configuration

Installing a new extension is easy, though it could be a more integrated experience. After finding an interesting extension, either on the Ulauncher extensions website or anywhere on GitHub, navigate to the Extensions tab in the preferences window. Click Add Extension and paste in the GitHub URL. This loads the extension and shows a preferences page for any available options. A nice hint is that while browsing the extensions website, clicking on the Github star button opens the extension’s GitHub page. Often this GitHub repository has more details about the extension than the summary provided on the community extensions website.

Firefox bookmarks search

One useful extension is Ulauncher Firefox Bookmarks, which gives fuzzy search access to the current user's Firefox bookmarks. While this is similar to typing *<search-term> in Firefox's omnibar, the difference is that Ulauncher gives quick access to the bookmarks from anywhere, without needing to open Firefox first. Also, since this method uses search to locate bookmarks, no folder organization is really needed. This means pages can be “starred” quickly in Firefox without hunting for an appropriate folder to put them in.

Firefox Ulauncher extension searching for fedora

Clipboard search

Using a clipboard manager is a productivity boost on its own. These managers maintain a history of clipboard contents, which makes it easy to retrieve earlier copied snippets. Knowing there is a history of copied data allows the user to copy text without concern of overwriting the current contents. Adding in the Ulauncher clipboard extension gives quick, searchable access to the clipboard history without having to remember another unique hotkey combination. The extension integrates with different clipboard managers: GPaste, clipster, or CopyQ. Invoking Ulauncher and typing the c keyword brings up a list of recently copied snippets. Typing out an argument narrows the list of options, eventually showing the sought-after text. Selecting the item copies it to the clipboard, ready to paste into another application.

Ulauncher clipboard extension listing latest clipboard contents

Google search

The last extension to highlight is Google Search. While a Google search shortcut is available as a default shortcut, using an extension allows for more dynamic behavior. With the extension, Google supplies suggestions as the search term is typed. The experience is similar to what is available on Google’s homepage, or in the search box in Firefox. Again, the key benefit of using the extension for Google search is immediate access while doing anything else on the computer.

Google search Ulauncher extension listing suggestions for fedora

Being productive

Productivity on a computer means customizing the environment for each particular usage. A little configuration streamlines common tasks. Dedicated hotkeys work really well for the most frequent actions, but it doesn't take long before it gets hard to remember them all. Using fuzzy search to find half-remembered keywords strikes a good balance between discoverability and direct access. The key to productivity with Ulauncher is identifying frequent actions and installing an extension, or adding a shortcut, to make them faster. Building a habit of searching in Ulauncher first means a quick and consistent interface is always just a keystroke away.

4 cool new projects to try in Copr for March 2021

Friday 19th of March 2021 08:00:00 AM

Copr is a collection of personal repositories for software that isn’t carried in Fedora Linux. Some software doesn’t conform to standards that allow easy packaging. Or it may not meet other Fedora standards, despite being free and open-source. Copr can offer these projects outside the Fedora set of packages. Software in Copr isn’t supported by Fedora infrastructure or signed by the project. However, it can be a neat way to try new or experimental software.

This article presents a few new and interesting projects in Copr. If you’re new to using Copr, see the Copr User Documentation for how to get started.

Ytfzf

Ytfzf is a simple command-line tool for searching and watching YouTube videos. It provides a fast and intuitive interface built around the fuzzy-finder utility fzf. It uses youtube-dl to download selected videos and opens an external video player to watch them. Because of this approach, ytfzf is significantly less resource-heavy than a web browser with YouTube. It supports thumbnails (via ueberzug), history saving, queueing multiple videos or downloading them for later, channel subscriptions, and other handy features. Thanks to tools like dmenu or rofi, it can even be used outside the terminal.
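As a quick illustration of that workflow once the tool is installed (the query is made up and the flag names may differ between versions; check ytfzf -h for the current options):

# interactively pick and play a video matching a search term
ytfzf "fedora linux 34"

# -t shows thumbnails, -d downloads the selection instead of playing it
ytfzf -t -d "fedora linux 34"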

Installation instructions

The repo currently provides Ytfzf for Fedora 33 and 34. To install it, use these commands:

sudo dnf copr enable bhoman/ytfzf
sudo dnf install ytfzf

Gemini clients

Have you ever wondered what your internet browsing experience would be if the World Wide Web went an entirely different route and didn't adopt CSS and client-side scripting? Gemini is a modern alternative to the HTTPS protocol, although it doesn't intend to replace it. The stenstorp/gemini Copr project provides various clients for browsing Gemini websites, namely Castor, Dragonstone, Kristall, and Lagrange.

The Gemini site provides a list of some hosts that use this protocol. Using Castor to visit this site is shown here:

Installation instructions

The repo currently provides Gemini clients for Fedora 32, 33, 34, and Fedora Rawhide. Also available for EPEL 7 and 8, and CentOS Stream. To install a browser, choose from the install commands shown here:

sudo dnf copr enable stenstorp/gemini
sudo dnf install castor
sudo dnf install dragonstone
sudo dnf install kristall
sudo dnf install lagrange

Ly

Ly is a lightweight login manager for Linux and BSD. It features an ncurses-like text-based user interface. Theoretically, it should support all X desktop environments and window managers (many of them have been tested). Ly also provides basic Wayland support (Sway works very well). Somewhere in the configuration there is an easter-egg option to enable the famous PSX DOOM fire animation in the background, which, on its own, is worth checking out.

Installation instructions

The repo currently provides Ly for Fedora 32, 33, and Fedora Rawhide. To install it, use these commands:

sudo dnf copr enable dhalucario/ly
sudo dnf install ly

Before setting up Ly to be your system login screen, run the ly command in a terminal to make sure it works properly. Then proceed with disabling your current login manager and enabling Ly instead.

sudo systemctl disable gdm
sudo systemctl enable ly

Finally, restart your computer for the changes to take effect.

AWS CLI v2

AWS CLI v2 brings a steady and methodical evolution based on community feedback, rather than a massive redesign of the original client. It introduces new mechanisms for configuring credentials and now allows the user to import credentials from the .csv files generated in the AWS Console. It also provides support for AWS SSO. Other big improvements are server-side auto-completion and interactive parameter generation. A fresh new feature is interactive wizards, which provide a higher level of abstraction and combine multiple AWS API calls to create, update, or delete AWS resources.

Installation instructions

The repo currently provides AWS CLI v2 for Fedora Linux 32, 33, 34, and Fedora Rawhide. To install it, use these commands:

sudo dnf copr enable spot/aws-cli-2
sudo dnf install aws-cli-2

Naturally, access to an AWS account is necessary.
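With the package installed, a couple of the v2 features mentioned above can be tried like this (the CSV path and profile name are placeholders):

# import credentials from a .csv file downloaded from the AWS Console
aws configure import --csv file://credentials.csv

# configure AWS SSO for a named profile, then use that profile
aws configure sso --profile my-sso-profile
aws s3 ls --profile my-sso-profile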

Fedora Workstation 34 Feature Focus: Updated Activities Overview

Tuesday 16th of March 2021 08:00:00 AM

Fedora Workstation 34 is scheduled for release towards the end of April. Among the various changes that it will contain is the soon-to-be-released GNOME 40. This comes with a number of improvements and new features, most notably an updated Activities Overview design. Read on to hear the background behind those changes, and what to expect from the upcoming release!

What’s in a number?

In case anyone is wondering: yes, GNOME is changing its version schema for the upcoming release. The previous GNOME version schema gave stable releases even point numbers, like 3.32, 3.34 and 3.36. The last GNOME stable release is 3.38.

As the number of releases in the GNOME 3.x series increased, this approach has become unwieldy. It is therefore being replaced with a simpler schema where stable releases have their own major number. 40 will be followed by 41, 42, 43, and so on.

The new schema is starting at 40 because the next release will be GNOME’s 40th. This means that, going forward, the GNOME version number will indicate the number of stable releases that GNOME has produced.

Incidentally, the GNOME 40 version number is unrelated to the overview design changes described here; the timing is just a coincidence.

Why change the overview?

Okay, back to the Activities Overview! Before I talk about the changes themselves, I wanted to briefly touch on the motivations behind them.

One of these is that the overview hasn't received significant design updates since its introduction in 2011. Other aspects of the desktop have evolved (notifications, system status, unlock and login, to name some examples), but the overview has largely stayed the same.

Not only did the overview need a refresh, but a number of limitations in its design had become apparent over the years. The GNOME design and development team wanted to resolve these.

These limitations included the somewhat unhelpful blank boot state, the lack of coherent touchpad gestures, a sub-par app browsing experience, and the ambiguous nature of some overview elements, in particular the workspace switcher.

The goals for the upcoming release were, therefore, to give the overview a welcome refresh and address some longstanding issues, while keeping the basic design and essential features intact.

Design and development process

The new designs have been in the works for some time, with original conversations going back to around 2017. More recently, as the design has taken shape, a good deal of user research has taken place. These included exploratory interviews, a survey, user testing, and a diary study. These research exercises have fed into ongoing design work, helping to ensure that the design is attractive to both new and existing users.

There has also been an intensive testing program over the past month. VM images and a Copr repository were made available so that people could try the new design for themselves.

Those who are interested in this work can check out the research post on the GNOME Shell development blog.

Introducing the new overview

So what are the design changes? Most aspects of the GNOME desktop remain the same in Fedora Workstation 34. The top bar, search, notifications, shortcuts, system status are all largely unchanged. What has changed is the layout of the activities overview. As part of this, workspaces have switched orientation from vertical to horizontal.

This is what the initial overview looks like in the upcoming Fedora Workstation 34 beta:

As before, search is central and at the top. Next there are the workspaces. These are arranged as a horizontal strip which can be panned left and right. Finally, there’s the dash at the bottom, which contains launchers for favorite and running apps.

When more workspaces are in use, a small “navigator” appears above the main workspace strip. This allows switching between workspaces (by clicking the small thumbnail). It is also a drop target when dragging windows out of the main workspace view.

The second major part of the overview is the app grid. As before, access is via the grid button.

This is fairly similar to the previous version, though there are a few differences. First, workspace thumbnails appear above the app grid. These make it possible to see which workspace you are currently on, which is useful when launching apps. The second difference is improvements around app grid customization, including better drag and drop.

Check it out

The updated overview design will provide a refreshed look and feel to the desktop, some general improvements, and some new features. The new layout is better organized and easier to understand, the app grid is more customizable, and new gestures make it easy to navigate the system with a touchpad. The boot state now also shows the overview by default, making it easier to get started.

There is also a collection of smaller improvements, including a better-organized dash, app icons that are shown over each window, and new shortcuts.

The new overview is a great reason to give Fedora Workstation 34 beta a spin and help out with testing. You can try a development version in a VM, either by upgrading from F33 or by installing a nightly compose. Test days are also running from March 17-19, where we're looking for volunteers to put the new design through its paces.

Learn more

If you have more questions about the updated overview design, the GNOME Shell development blog contains a whole series of posts with information about the design and development process, as well as how the new design works.

How to use Poetry to manage your Python projects on Fedora

Monday 8th of March 2021 12:54:00 PM

Python developers often create a new virtual environment to separate project dependencies and then manage them with tools such as pip, pipenv, etc. Poetry is a tool for simplifying dependency management and packaging in Python. This post will show you how to use Poetry to manage your Python projects on Fedora.

Unlike other tools, Poetry uses only a single configuration file for dependency management, packaging, and publishing. This eliminates the need for different files such as Pipfile, MANIFEST.in, setup.py, etc. It is also faster than using multiple tools.

Detailed below is a brief overview of commands used when getting started with Poetry.

Installing Poetry on Fedora

If you already use Fedora 32 or above, you can install Poetry directly from the command line using this command:

$ sudo dnf install poetry

Editor's note: on Fedora Silverblue or CoreOS, Python 3.9.2 is part of the base commit; you would layer Poetry with 'rpm-ostree install poetry'.

Initialize a project

Create a new project using the new command.

$ poetry new poetry-project

The structure of a project created with Poetry looks like this:

├── poetry_project
│   └── __init__.py
├── pyproject.toml
├── README.rst
└── tests
    ├── __init__.py
    └── test_poetry_project.py

Poetry uses pyproject.toml to manage the dependencies of your project. Initially, this file will look similar to this:

[tool.poetry]
name = "poetry-project"
version = "0.1.0"
description = ""
authors = ["Someone <someone@somemail.com>"]

[tool.poetry.dependencies]
python = "^3.9"

[tool.poetry.dev-dependencies]
pytest = "^5.2"

[build-system]
requires = ["poetry>=0.12"]
build-backend = "poetry.masonry.api"

This file contains 4 sections:

  • The first section contains information describing the project such as project name, project version, etc.
  • The second section contains project dependencies. These dependencies are necessary to build the project.
  • The third section contains development dependencies.
  • The fourth section describes the build system, as specified in PEP 517.

If you already have a project, or have created your own project folder, and you want to use Poetry, run the init command within that folder.

$ poetry init

After this command, you will see an interactive shell to configure your project.

Create a virtual environment

If you want to create a virtual environment or activate an existing virtual environment, use the command below:

$ poetry shell

Poetry creates the virtual environment in /home/username/.cache/pypoetry by default. You can change the default path by editing the Poetry config. Use the command below to see the config list:

$ poetry config --list
cache-dir = "/home/username/.cache/pypoetry"
virtualenvs.create = true
virtualenvs.in-project = true
virtualenvs.path = "{cache-dir}/virtualenvs"

Change the virtualenvs.in-project configuration variable to create a virtual environment within your project directory. The Poetry command is:

$ poetry config virtualenvs.in-project true

Add dependencies

Install a dependency for the project with the poetry add command.

$ poetry add django

You can identify any dependencies that you use only for the development environment using the add command with the --dev option.

$ poetry add black --dev

The add command creates a poetry.lock file that is used to track package versions. If the poetry.lock file doesn’t exist, the latest versions of all dependencies in pyproject.toml are installed. If poetry.lock does exist, Poetry uses the exact versions listed in the file to ensure that the package versions are consistent for everyone working on your project.
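If you later want to move the locked versions forward, the standard Poetry commands for that are:

# re-resolve dependencies within the constraints in pyproject.toml and rewrite poetry.lock
$ poetry update

# update only a single package
$ poetry update django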

Use the poetry install command to install all dependencies in your current project.

$ poetry install

Prevent development dependencies from being installed by using the no-dev option.

$ poetry install --no-dev

List packages

The show command lists all of the available packages. The tree option will list packages as a tree.

$ poetry show --tree
django 3.1.7 A high-level Python Web framework that encourages rapid development and clean, pragmatic design.
├── asgiref >=3.2.10,<4
├── pytz *
└── sqlparse >=0.2.2

Include the package name to list details of a specific package.

$ poetry show requests
name         : requests
version      : 2.25.1
description  : Python HTTP for Humans.

dependencies
 - certifi >=2017.4.17
 - chardet >=3.0.2,<5
 - idna >=2.5,<3
 - urllib3 >=1.21.1,<1.27

Finally, to see the latest available version of each package, pass the --latest option.

$ poetry show --latest
idna     2.10   3.1    Internationalized Domain Names in Applications
asgiref  3.3.1  3.3.1  ASGI specs, helper code, and adapters

Further information

More details on Poetry are available in the documentation.

Getting started with COBOL development on Fedora Linux 33

Saturday 27th of February 2021 08:00:00 AM

Though its popularity has waned, COBOL is still powering business-critical operations within many major organizations. As the need to update, upgrade, and troubleshoot these applications grows, so too may the demand for developers with COBOL knowledge.

Fedora 33 represents an excellent platform for COBOL development.
This article will detail how to install and configure tools, as well as compile and run a COBOL program.

Installing and configuring tools

GnuCOBOL is a free and open modern compiler maintained by volunteer developers. To install, open a terminal and execute the following command:

# sudo dnf -y install gnucobol

Once completed, execute this command to verify that GnuCOBOL is ready for work:

# cobc -v

You should see version information and build dates. Don’t worry if you see the error “no input files”. We will create a COBOL program file with the Vim text editor in the following steps.

Fedora ships with a minimal version of Vim, but it would be nice to have some of the extra features that the full version can offer (such as COBOL syntax highlighting). Run the command below to install vim-enhanced, which replaces vim-minimal:

# sudo dnf -y install vim-enhanced

Writing, Compiling, and Executing COBOL programs

At this point, you are ready to write a COBOL program. For this example, I am set up with username fedorauser and I will create a folder under my home directory to store my COBOL programs. I called mine cobolcode.

# mkdir /home/fedorauser/cobolcode
# cd /home/fedorauser/cobolcode

Now we can create and open a new file to enter our COBOL source program. I’ll call it helloworld.cbl.

# vim helloworld.cbl

You should now have the blank file open in Vim, ready to edit. This will be a simple program that does nothing except print out a message to our terminal.

Enable “insert” mode in vim by pressing the “i” key, and key in the text below. Vim will assist with placement of your code sections. This can be very helpful since every character space in a COBOL file has a purpose (it’s a digital representation of the physical cards that developers would complete and feed into the computer).

       IDENTIFICATION DIVISION.
       PROGRAM-ID. HELLO-WORLD.
      *simple helloworld program.
       PROCEDURE DIVISION.
           DISPLAY '##################################'.
           DISPLAY '#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!#'.
           DISPLAY '#!!!!!!!!!!FEDORA RULES!!!!!!!!!!#'.
           DISPLAY '#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!#'.
           DISPLAY '##################################'.
           STOP RUN.

You can now press the “ESC” key to exit insert mode, and key in “:x” to save and close the file.

Compile the program by keying in the following:

# cobc -x helloworld.cbl

It should complete quickly with return status: 0. Key in “ls” to view the contents of your current directory. You should see your original helloworld.cbl file, as well as a new file simply named helloworld.

Execute the COBOL program.

# ./helloworld

If you see your text output without errors, then you have successfully compiled and executed the program!

Now that we have the basics of writing, compiling, and running a COBOL program, let's try one that does something a little more interesting.

The following program will generate the Fibonacci sequence given your input. Use Vim to create a file called fib.cbl and input the text below:

      ******************************************************************
      * Author: Bryan Flood
      * Date: 25/10/2018
      * Purpose: Compute Fibonacci Numbers
      * Tectonics: cobc
      ******************************************************************
       IDENTIFICATION DIVISION.
       PROGRAM-ID. FIB.
       DATA DIVISION.
       FILE SECTION.
       WORKING-STORAGE SECTION.
       01 N0 BINARY-C-LONG VALUE 0.
       01 N1 BINARY-C-LONG VALUE 1.
       01 SWAP BINARY-C-LONG VALUE 1.
       01 RESULT PIC Z(20)9.
       01 I BINARY-C-LONG VALUE 0.
       01 I-MAX BINARY-C-LONG VALUE 0.
       01 LARGEST-N BINARY-C-LONG VALUE 92.
       PROCEDURE DIVISION.
      *> THIS IS WHERE THE LABELS GET CALLED
           PERFORM MAIN
           PERFORM ENDFIB
           GOBACK.
      *> THIS ACCEPTS INPUT AND DETERMINES THE OUTPUT USING A EVAL STMT
       MAIN.
           DISPLAY "ENTER N TO GENERATE THE FIBONACCI SEQUENCE"
           ACCEPT I-MAX.
           EVALUATE TRUE
               WHEN I-MAX > LARGEST-N
                   PERFORM INVALIDN
               WHEN I-MAX > 2
                   PERFORM CASEGREATERTHAN2
               WHEN I-MAX = 2
                   PERFORM CASE2
               WHEN I-MAX = 1
                   PERFORM CASE1
               WHEN I-MAX = 0
                   PERFORM CASE0
               WHEN OTHER
                   PERFORM INVALIDN
           END-EVALUATE.
           STOP RUN.
      *> THE CASE FOR WHEN N = 0
       CASE0.
           MOVE N0 TO RESULT.
           DISPLAY RESULT.
      *> THE CASE FOR WHEN N = 1
       CASE1.
           PERFORM CASE0
           MOVE N1 TO RESULT.
           DISPLAY RESULT.
      *> THE CASE FOR WHEN N = 2
       CASE2.
           PERFORM CASE1
           MOVE N1 TO RESULT.
           DISPLAY RESULT.
      *> THE CASE FOR WHEN N > 2
       CASEGREATERTHAN2.
           PERFORM CASE1
           PERFORM VARYING I FROM 1 BY 1 UNTIL I = I-MAX
               ADD N0 TO N1 GIVING SWAP
               MOVE N1 TO N0
               MOVE SWAP TO N1
               MOVE SWAP TO RESULT
               DISPLAY RESULT
           END-PERFORM.
      *> PROVIDE ERROR FOR INVALID INPUT
       INVALIDN.
           DISPLAY 'INVALID N VALUE. THE PROGRAM WILL NOW END'.
      *> END THE PROGRAM WITH A MESSAGE
       ENDFIB.
           DISPLAY "THE PROGRAM HAS COMPLETED AND WILL NOW END".
       END PROGRAM FIB.

As before, hit the “ESC” key to exit insert mode, and key in “:x” to save and close the file.

Compile the program:

# cobc -x fib.cbl

Now execute the program:

# ./fib

The program will ask you to input a number, and will then generate Fibonacci output based on that number.

Further Study

There are numerous resources available on the internet to consult; however, vast amounts of knowledge reside only in legacy print. Keep an eye out for vintage COBOL guides when visiting used book stores and public libraries; you may find copies of endangered manuals at rock-bottom prices!

It is also worth noting that helpful documentation was installed on your system when you installed GnuCOBOL. You can access them with these terminal commands:

# info gnucobol
# man cobc
# cobc -h

Contribute at the Fedora Audio, Kernel 5.11 and i18n test days

Thursday 25th of February 2021 08:00:00 PM

Fedora test days are events where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed to Fedora before, this is a perfect way to get started.

There are three upcoming test events in the next week.

  • Wednesday, March 03 is a test day for the PipeWire audio changes in Fedora.
  • Monday, March 08 through Monday, March 15 is a test week focusing on kernel 5.11.
  • Tuesday, March 09 through Monday, March 15 is a test week for the Fedora 34 i18n features.

Come and test with us to make the upcoming Fedora 34 even better. Read more below on how to do it.

Audio test day

There is a recent proposal to replace the PulseAudio daemon with a functionally compatible implementation based on PipeWire. This means that all existing clients using the PulseAudio client library, as well as applications shipped as Flatpak, will continue to work as before. The test day will verify that everything works as expected. It takes place on Wednesday, March 03.

Kernel test week

The kernel team is working on the final integration for kernel 5.11. This version was just recently released and will arrive soon in Fedora. As a result, the Fedora kernel and QA teams have organized a test week for Monday, March 08 through Monday, March 15. Refer to the wiki page for links to the test images you’ll need to participate. This document clearly outlines the steps.

i18n test week

GNOME is the default desktop environment for Fedora Workstation and thus for many Fedora users. A lot of our users use Fedora in their preferred languages and it’s important that we test the changes. The wiki contains more details about how to participate. The test week is March 09 through March 15.

How do test days work?

A test day is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to download test materials (which include some large files) and then read and follow directions step by step.

Detailed information about all the test days are on the wiki pages above. If you’re available on or around the days of the events, please do some testing and report your results.

Installing Nextcloud 20 on Fedora Linux with Podman

Monday 15th of February 2021 08:00:00 AM

Nowadays, many open source projects offer container images for easy deployment. This is very handy when running a home server or lab environment. A previous Fedora Magazine article covered installing Nextcloud from the source package. This article explains how you can run Nextcloud on Fedora 33 as a container deployment with Podman.

What is Nextcloud?

Nextcloud started in 2016 as a fork of Owncloud. Since then, it has evolved into full-fledged collaboration software offering file, calendar, and contact syncing, plus much more. You can run a simple Kanban board in it or write documents collaboratively. Nextcloud is fully open source under the AGPLv3 license and can be used for private or commercial purposes alike.

What is Podman?

Podman is a container engine for developing, managing, and running OCI containers on your Linux system. It offers a wide variety of features such as rootless mode, cgroups v2 support, and pod management, and it can run daemonless. Furthermore, it provides a Docker-compatible API for further development. It is available by default on Fedora Workstation and ready to be used.

In case you need to install podman, run:

sudo dnf install podman

Designing the Deployment

Every deployment needs a bit of preparation. Sure, you can simply start a container and begin using it, but that wouldn't be much fun. A well-thought-out deployment should be easy to understand and offer some flexibility.

Container / Images

First, you need to choose the proper container images for the deployment. This is quite easy for Nextcloud, since the project already offers pretty good documentation for container deployments. Nextcloud supports two variations: Nextcloud Apache httpd (which is fully self-contained) and Nextcloud php-fpm (which needs an additional nginx container).

In both cases, you also need to provide a database, which can be MariaDB (recommended) or PostgreSQL (also supported). This article uses the Apache httpd + MariaDB installation.

Volumes

Running a container does not persist data you create during the runtime. You perform updates by recreating the container. Therefore, you will need some volumes for the database and the Nextcloud files. Nextcloud also recommends you put the “data” folder in a separate volume. So you will end up with three volumes:

  • nextcloud-app
  • nextcloud-data
  • nextcloud-db
Network

Lastly, you need to consider networking. One of the benefits of containers is that you can re-create your deployment as it may look like in production. Network segmentation is a very common practice and should be considered for a container deployment, too. This tutorial will not add advanced features like network load balancing or security segmentation. You will need only one network which you will use to publish the ports for Nextcloud. Creating a network also provides the dnsname plugin, which will allow container communication based on container names.

The picture

Now that every single element is prepared, you can put these together and get a really nice understanding of how the deployment will look.

Run, Nextcloud, Run

Now you have prepared all of the ingredients and you can start running the commands to deploy Nextcloud. All commands can be used for root-privileged or rootless deployments. This article will stick to rootless deployments.

Start with the network:

# Creating a new network
$ podman network create nextcloud-net

# Listing all networks
$ podman network ls

# Inspecting a network
$ podman network inspect nextcloud-net

As you can see in the last command, you created a DNS zone with the name “dns.podman”. All containers created in this network are reachable via “CONTAINER_NAME.dns.podman”.

Next, optionally prepare your volumes. This step can be skipped, since Podman will create named volumes on demand if they do not exist. Podman supports named volumes, which it creates in special locations, so you don't need to take care of SELinux or the like.

# Creating the volumes
$ podman volume create nextcloud-app
$ podman volume create nextcloud-data
$ podman volume create nextcloud-db

# Listing volumes
$ podman volume ls

# Inspecting volumes (this also provides the full path)
$ podman volume inspect nextcloud-app

Network and volumes are done. Now provide the containers.

First, you need the database. According to the MariaDB image documentation, you need to provide some additional environment variables. Additionally, you need to attach the created volume, connect the network, and provide a name for the container. Most of these values will be needed again in the next commands. (Note that you should replace DB_USER_PASSWORD and DB_ROOT_PASSWORD with unique passwords.)

# Deploy MariaDB
$ podman run --detach \
    --env MYSQL_DATABASE=nextcloud \
    --env MYSQL_USER=nextcloud \
    --env MYSQL_PASSWORD=DB_USER_PASSWORD \
    --env MYSQL_ROOT_PASSWORD=DB_ROOT_PASSWORD \
    --volume nextcloud-db:/var/lib/mysql \
    --network nextcloud-net \
    --restart on-failure \
    --name nextcloud-db \
    docker.io/library/mariadb:10

# Check running containers
$ podman container ls

After the successful start of your new MariaDB container, you can deploy Nextcloud itself. (Note that you should replace DB_USER_PASSWORD with the password you used in the previous step. Replace NC_ADMIN and NC_PASSWORD with the username and password you want to use for the Nextcloud administrator account.)

# Deploy Nextcloud
$ podman run --detach \
    --env MYSQL_HOST=nextcloud-db.dns.podman \
    --env MYSQL_DATABASE=nextcloud \
    --env MYSQL_USER=nextcloud \
    --env MYSQL_PASSWORD=DB_USER_PASSWORD \
    --env NEXTCLOUD_ADMIN_USER=NC_ADMIN \
    --env NEXTCLOUD_ADMIN_PASSWORD=NC_PASSWORD \
    --volume nextcloud-app:/var/www/html \
    --volume nextcloud-data:/var/www/html/data \
    --network nextcloud-net \
    --restart on-failure \
    --name nextcloud \
    --publish 8080:80 \
    docker.io/library/nextcloud:20

# Check running containers
$ podman container ls

Now that the two containers are running, you can configure your containers. Open your browser and point to “localhost:8080” (or another host name or IP address if it is running on a different server).

The first load may take some time (30 seconds) or even report “unable to load”. This is coming from Nextcloud, which is preparing the first run. In that case, wait a minute or two. Nextcloud will prompt for a username and password.
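If the page keeps failing to load, watching the container logs usually shows what Nextcloud is doing during its first-run setup (container names as chosen above):

# follow the Nextcloud container log
$ podman logs --follow nextcloud

# check the database container as well
$ podman logs nextcloud-db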

Enter the user name and password you used previously.

Now you are ready to go and experience Nextcloud for testing, development, or your home server.

Update

If you want to update one of the containers, you need to pull the new image and re-create the containers.

# Update MariaDB
$ podman pull mariadb:10
$ podman stop nextcloud-db
$ podman rm nextcloud-db
$ podman run --detach \
    --env MYSQL_DATABASE=nextcloud \
    --env MYSQL_USER=nextcloud \
    --env MYSQL_PASSWORD=DB_USER_PASSWORD \
    --env MYSQL_ROOT_PASSWORD=DB_ROOT_PASSWORD \
    --volume nextcloud-db:/var/lib/mysql \
    --network nextcloud-net \
    --restart on-failure \
    --name nextcloud-db \
    docker.io/library/mariadb:10

Updating the Nextcloud container works exactly the same.

# Update Nextcloud
$ podman pull nextcloud:20
$ podman stop nextcloud
$ podman rm nextcloud
$ podman run --detach \
    --env MYSQL_HOST=nextcloud-db.dns.podman \
    --env MYSQL_DATABASE=nextcloud \
    --env MYSQL_USER=nextcloud \
    --env MYSQL_PASSWORD=DB_USER_PASSWORD \
    --env NEXTCLOUD_ADMIN_USER=NC_ADMIN \
    --env NEXTCLOUD_ADMIN_PASSWORD=NC_PASSWORD \
    --volume nextcloud-app:/var/www/html \
    --volume nextcloud-data:/var/www/html/data \
    --network nextcloud-net \
    --restart on-failure \
    --name nextcloud \
    --publish 8080:80 \
    docker.io/library/nextcloud:20

That’s it; your Nextcloud installation is up-to-date again.

Conclusion

Deploying Nextcloud with Podman is quite easy. After just a couple of minutes, you will have a very handy collaboration software, offering filesync, calendar, contacts, and much more. Check out apps.nextcloud.com, which will extend the features even further.

Network address translation part 2 – the conntrack tool

Friday 12th of February 2021 08:00:00 AM

This is the second article in a series about network address translation (NAT). The first article introduced how to use the iptables/nftables packet tracing feature to find the source of NAT-related connectivity problems. Part 2 introduces the “conntrack” command. conntrack allows you to inspect and modify tracked connections.

Introduction

NAT configured via iptables or nftables builds on top of netfilter's connection tracking facility. The conntrack command is used to inspect and alter the state table. It is part of the "conntrack-tools" package.

Conntrack state table

The connection tracking subsystem keeps track of all packet flows that it has seen. Run “sudo conntrack -L” to see its content:

tcp 6 43184 ESTABLISHED src=192.168.2.5 dst=10.25.39.80 sport=5646 dport=443 src=10.25.39.80 dst=192.168.2.5 sport=443 dport=5646 [ASSURED] mark=0 use=1
tcp 6 26 SYN_SENT src=192.168.2.5 dst=192.168.2.10 sport=35684 dport=443 [UNREPLIED] src=192.168.2.10 dst=192.168.2.5 sport=443 dport=35684 mark=0 use=1
udp 17 29 src=192.168.8.1 dst=239.255.255.250 sport=48169 dport=1900 [UNREPLIED] src=239.255.255.250 dst=192.168.8.1 sport=1900 dport=48169 mark=0 use=1

Each line shows one connection tracking entry. You might notice that each line shows the addresses and port numbers twice, even with inverted address and port pairs! This is because each entry is inserted into the state table twice. The first address quadruple (source and destination address and ports) is the one recorded in the original direction, i.e. what the initiator sent. The second quadruple is what conntrack expects to see when a reply from the peer is received. This solves two problems:

  1. If a NAT rule matches, such as IP address masquerading, this is recorded in the reply part of the connection tracking entry and can then be automatically applied to all future packets that are part of the same flow.
  2. A lookup in the state table will be successful even if it's a reply packet to a flow that has any form of network or port address translation applied.

The original (first shown) quadruple never changes: it's what the initiator sent. NAT manipulation only alters the reply (second) quadruple, because that is what the receiver will see. Changes to the first quadruple would be pointless: netfilter has no control over the initiator's state; it can only influence the packet as it is received/forwarded. When a packet does not map to an existing entry, conntrack may add a new state entry for it. In the case of UDP this happens automatically. In the case of TCP, conntrack can be configured to only add the new entry if the TCP packet has the SYN bit set. By default, conntrack allows mid-stream pickups so that flows that existed before conntrack became active do not cause problems.
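That pickup behaviour is controlled by a sysctl. A minimal example of switching to the strict mode, where only packets with the SYN bit set create new TCP entries:

# require a SYN to create new tcp entries (the default, 1, allows mid-stream pickup)
sudo sysctl net.netfilter.nf_conntrack_tcp_loose=0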

Conntrack state table and NAT

As explained in the previous section, the reply tuple listed contains the NAT information. It's possible to filter the output to only show entries with source or destination NAT applied. This lets you see which kind of NAT transformation is active on a given flow. "sudo conntrack -L -p tcp --src-nat" might show something like this:

tcp 6 114 TIME_WAIT src=10.0.0.10 dst=10.8.2.12 sport=5536 dport=80 src=10.8.2.12 dst=192.168.1.2 sport=80 dport=5536 [ASSURED]

This entry shows a connection from 10.0.0.10:5536 to 10.8.2.12:80. But unlike the previous example, the reply direction is not just the inverted original direction: the source address is changed. The destination host (10.8.2.12) sends reply packets to 192.168.1.2 instead of 10.0.0.10. Whenever 10.0.0.10 sends another packet, the router with this entry replaces the source address with 192.168.1.2. When 10.8.2.12 sends a reply, it changes the destination back to 10.0.0.10. This source NAT is due to a nft masquerade rule:

inet nat postrouting meta oifname "veth0" masquerade

Other types of NAT rules, such as "dnat to" or "redirect to", would be shown in a similar fashion, with the reply tuple's destination differing from the original one.

Conntrack extensions

Two useful extensions are conntrack accounting and timestamping. “sudo sysctl net.netfilter.nf_conntrack_acct=1” makes “sudo conntrack -L” track byte and packet counters for each flow.

"sudo sysctl net.netfilter.nf_conntrack_timestamp=1" records a "start timestamp" for each connection. "sudo conntrack -L" then displays the seconds elapsed since the flow was first seen. Add "--output ktimestamp" to see the absolute start date as well.
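Both settings are runtime toggles; to keep them across reboots, a small sysctl drop-in file can be used (the file name below is arbitrary):

# /etc/sysctl.d/90-conntrack.conf
net.netfilter.nf_conntrack_acct = 1
net.netfilter.nf_conntrack_timestamp = 1

# load it without rebooting
sudo sysctl --system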

Insert and change entries

You can add entries to the state table. For example:

sudo conntrack -I -s 192.168.7.10 -d 10.1.1.1 --protonum 17 --timeout 120 --sport 12345 --dport 80

This is used by conntrackd for state replication. Entries of an active firewall are replicated to a standby system. The standby system can then take over without breaking connectivity even on established flows. Conntrack can also store metadata not related to the packet data sent on the wire, for example the conntrack mark and connection tracking labels. Change them with the “update” (-U) option:

sudo conntrack -U -m 42 -p tcp

This changes the connmark of all tcp flows to 42.
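The mark can then be matched from the packet filter. For example, a minimal nft rule (assuming an inet table named filter with an input chain already exists):

sudo nft add rule inet filter input ct mark 42 counter accept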

Delete entries

In some cases, you want to delete entries from the state table. For example, changes to NAT rules have no effect on packets belonging to flows that are already in the table. For long-lived UDP sessions, such as tunneling protocols like VXLAN, it might make sense to delete the entry so the new NAT transformation can take effect. Delete entries via "sudo conntrack -D" followed by an optional list of address and port information. The following example removes the given entry from the table:

sudo conntrack -D -p udp --src 10.0.12.4 --dst 10.0.0.1 --sport 1234 --dport 53
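If you would rather clear the whole table, for example after reworking an entire NAT ruleset, conntrack can also flush it in one go. Be aware that this drops state for every flow, so established connections may be briefly disrupted:

sudo conntrack -F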

Conntrack error counters

Conntrack also exports statistics:

# sudo conntrack -S
cpu=0 found=0 invalid=130 insert=0 insert_failed=0 drop=0 early_drop=0 error=0 search_restart=10
cpu=1 found=0 invalid=0 insert=0 insert_failed=0 drop=0 early_drop=0 error=0 search_restart=0
cpu=2 found=0 invalid=0 insert=0 insert_failed=0 drop=0 early_drop=0 error=0 search_restart=1
cpu=3 found=0 invalid=0 insert=0 insert_failed=0 drop=0 early_drop=0 error=0 search_restart=0

Most counters will be 0. "Found" and "insert" will always be 0; they only exist for backwards compatibility. The other counters account for the following conditions:

  • invalid: the packet does not match an existing connection and does not create a new one.
  • insert_failed: the packet starts a new connection, but inserting it into the state table failed. This can happen, for example, when the NAT engine picks a source address and port during masquerading that is already in use.
  • drop: the packet starts a new connection, but no memory is available to allocate a new state entry for it.
  • early_drop: the conntrack table is full. In order to accept the new connection, existing connections that did not see two-way communication were dropped.
  • error: an ICMP(v6) error packet was received that did not match a known connection.
  • search_restart: a lookup was interrupted by an insertion or deletion on another CPU.
  • clash_resolve: several CPUs tried to insert an identical conntrack entry.

These error conditions are harmless unless they occur frequently. Some can be mitigated by tuning the conntrack sysctls for the expected workload. net.netfilter.nf_conntrack_buckets and net.netfilter.nf_conntrack_max are typical candidates. See the nf_conntrack-sysctl documentation for a full list.

Use "sudo sysctl net.netfilter.nf_conntrack_log_invalid=255" to get more information when a packet is invalid. For example, conntrack logs the following when it encounters a packet with all TCP flags cleared:

nf_ct_proto_6: invalid tcp flag combination SRC=10.0.2.1 DST=10.0.96.7 LEN=1040 TOS=0x00 PREC=0x00 TTL=255 ID=0 PROTO=TCP SPT=5723 DPT=443 SEQ=1 ACK=0

Summary

This article gave an introduction on how to inspect the connection tracking table and the NAT information stored in tracked flows. The next part in the series will expand on the conntrack tool and the connection tracking event framework.

Fedora Aarch64 on the SolidRun HoneyComb LX2K

Monday 8th of February 2021 08:00:00 AM

Almost a year has passed since the HoneyComb development kit was released by SolidRun. I remember reading about this Mini-ITX Arm workstation board being released and thinking "what a great idea." Then I saw the price and realized this isn't just another Raspberry Pi killer. Currently that price is $750 USD plus shipping and duty. Niche devices like the HoneyComb aren't mass produced like the simpler Pi is, and they pack in quite a bit of high end tech. Eventually COVID lockdown boredom got the best of me and I put a build together. After adding a case and RAM, the build ended up costing about $1100 shipped to London. What follows is an account of my experiences and the current state of using Fedora on this fun bit of hardware.

First and foremost, the tech packed into this board is impressive. It's not about to kill a Xeon workstation in raw performance but it's going to wallop it in performance/watt efficiency. Essentially this is a powerful server in the energy footprint of a small laptop. It's also a hybrid of compute and network functionality, combining extensive network features in a carrier board with a modular daughter card sporting a 16-core A72 and 2 ECC-capable DDR4 SO-DIMM slots. The carrier board comes in a few editions, giving flexibility to swap or upgrade your RAM + CPU options. I purchased the edition pictured below with 16 cores, 32GB (non-ECC), 512GB NVMe, and 4x10Gbe. For an extra $250 you can add the 100Gbe option if you're building a 5G deployment or an ISP for a small country (bottom right of board). Imagine this jacked into a 100Gb uplink port acting as proxy, tls inspector, router, or storage for a large 10gb TOR switch.

When I ordered it I didn't fully understand the network co-processor included from NXP. NXP is the company that makes the unique LX2160A CPU/SoC for this board, as well as the configurable ports and offload engine that enable handling up to 150Gb/s of network traffic without the CPU breaking a sweat. Here is a list of options from NXP's Layerscape user manual.

Configure ports in switch, LAG, MUX mode, or straight NICs.

I have a 10gb network in my home attic via a Ubiquiti ES-16-XG so I was eager to see how much this board could push. I also have a QNAP connected via 10gb which rarely manages to saturate the line, so could this also be a NAS replacement? It turned out I needed to sort out drivers and get a stable install first. Since the board has been out for a year, I had some catching up to do. SolidRun keeps an active Discord on Developer-Ecosystem which was immensely helpful as install wasn’t as straightforward as previous blogs have mentioned. I’ve always been cursed. If you’ve ever seen Pure Luck, I’m bound to hit every hardware glitch.

For starters, you can add a GPU and install graphically or install via USB console. I started with a spare GPU (Radeon Pro WX2100) intending to build a headless box which in the end over-complicated things. If you need to swap parts or re-flash a BIOS via the microSD card, you’ll need to swap display, keyboard + mouse. Chaos. Much simpler just to plug into the micro USB console port and access it via /dev/ttyUSB0 for that picture-in-picture experience. It’s really great to have the open ended PCIe3-x8 slot but I’ll keep it open for now. Note that the board does not support PCIe Atomics so some devices may have compatibility issues.

Now comes the fun part. BIOS is not built-in here. You'll need to build it from source to match your RAM speed and install it via microSDHC. At first this seems annoying, but then you realize that with a removable BIOS installer it's pretty hard to brick this thing. Not bad. The good news is the latest UEFI builds have worked well for me. Just remember that every time you re-flash your BIOS you'll need to set everything up again. This was enough to boot Fedora aarch64 from USB. The board offers 64GB of eMMC flash which you can install to if you like. I immediately benched it to find it reads about 165MB/s and writes 55MB/s, which is practical speed for embedded usage, but I'll definitely be installing to NVMe instead. I had an older Samsung 950 Pro in my spares from a previous Linux box but I encountered major issues with it even with the widely documented kernel param workaround:

nvme_core.default_ps_max_latency_us=0

In the end I upgraded my main workstation so I could repurpose its existing Samsung EVO 960 for the HoneyComb which worked much better.

After some fidgeting I was able to install Fedora but it became apparent that the integrated network ports still don’t work with the mainline kernel. The NXP tech is great but requires a custom kernel build and tooling. Some earlier blogs got around this with a USB->RJ45 Ethernet adapter which works fine. Hopefully network support will be mainlined soon, but for now I snagged a kernel SRPM from the helpful engineers on Discord. With the custom kernel the 1Gbe NIC worked fine, but it turns out the SFP+ ports need more configuration. They won’t be recognized as interfaces until you use NXP’s restool utility to map ports to their usage. In this case just a runtime mapping of dmap -> dni was required. This is NXP’s way of mapping a MAC to a network interface via IOCTL commands. The restool binary isn’t provided either and must be built from source. It then layers on management scripts which use cheeky $arg0 references for redirection to call the restool binary with complex arguments.

Since I was starting to accumulate quite a few custom packages it was apparent that a COPR repo was needed to simplify this for Fedora. If you’re not familiar with COPR I think it’s one of Fedora’s finest resources. This repo contains the uefi build (currently failing build), 5.10.5 kernel built with network support, and the restool binary with supporting scripts. I also added a oneshot systemd unit to enable the SFP+ ports on boot:

systemctl enable --now dpmac@7.service
systemctl enable --now dpmac@8.service
systemctl enable --now dpmac@9.service
systemctl enable --now dpmac@10.service

Now each SFP+ port will boot configured as eth1-4, with eth0 being the 1Gb port. NetworkManager will struggle unless these are consistent, and if you change the service start order the eth devices will re-order. I actually put a sleep $@ in each activation so they are consistent and don't have locking issues. Unfortunately it adds 10 seconds to boot time. This has been fixed in the latest kernel and won't be an issue once mainlined.

I’d love to explore the built-in LAG features but this still needs to be coded into the restool options. I’ll save it for later. In the meantime I managed a single 10gb link as primary, and a 3×10 LACP Team for kicks. Eventually I changed to 4×10 LACP via copper SFP+ cables mounted in the attic.

Energy Efficiency

Now with a stable environment it's time to raise some hell. It's really nice to see PWM support was recently added for the CPU fan, which sounds like a mini jet engine without it. Now the sound level is perfectly manageable and thermal control is automatic. Time to test drive with a power meter. Total power usage is consistently between 20-40 watts (usually in the low 20s) which is really impressive. I tried a few tuned profiles which didn't seem to have much effect on energy. Adding a power-hungry GPU or other device will obviously increase that, but for a dev server it's perfect and well below the Z600 workstations I have next to it which consume 160-250 watts each when fired up.

Remote Access

I’m an old soul so I still prefer KDE with Xorg and NX via X2go server. I can access SSH or a full GUI at native performance without a GPU. This lets me get a feel for performance, thermal stats, and also helps to evaluate the device as a workstation or potential VDI. The version of KDE shipped with the aarch64 server spin doesn’t seem to recognize some sensors but that seems to be because of KDE’s latest widget changes which I’d have to dig into.

X2go KDE session over SSH

Cockpit support is also outstanding out of the box. If SSH and X2go remote access aren’t your thing, Cockpit provides a great remote management platform with a growing list of plugins. Everything works great in my experience.

Cockpit behaves as expected.

All I needed to do now is shift into high gear with jumbo frames. MTU 1500 yields me an iperf of about 2-4Gbps bottlenecked at CPU0. Ain’t nobody got time for that. Set MTU 9000 and suddenly it gets the full 10Gbps both ways with time to spare on the CPU. Again, it would be nice to use the hardware assisted LAG since the device is supposed to handle up to 150Gbps duplex no sweat (with the 100Gbe QSFP option), which is nice given the Ubiquiti ES-16-XG tops out at 160Gbps full duplex (10gb/16 ports).
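For reference, raising the MTU persistently with NetworkManager might look roughly like this (a sketch; the connection name eth1 is an assumption for your 10Gb interface):

nmcli connection modify eth1 802-3-ethernet.mtu 9000
nmcli connection up eth1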

Storage

As a storage solution this hardware provides great value in a small thermal window and energy saving footprint. I could accomplish similar performance with an old x86 box for cheap but the energy usage alone would eclipse any savings in short order. By comparison I’ve seen some consumer NAS devices offer 10Gbe and NVMe cache sharing an inadequate number of PCIe2 lanes and bottlenecked at the bus. This is fully customizable and since the energy footprint is similar to a small laptop a small UPS backup should allow full writeback cache mode for maximum performance. This would make a great oVirt NFS or iSCSI storage pool if needed. I would pair it with a nice NAS case or rack mount case with bays. Some vendors such as Bamboo are actually building server options around this platform as we speak.

The board has 4 SATA3 ports but if I were truly going to build a NAS with this I would probably add a RAID card that makes best use of the PCIe8x slot, which thankfully is open ended. Why some hardware vendors choose to include close-ended PCIe 8x,4x slots is beyond me. Future models will ship with a physical x16 slot but only 8x electrically. Some users on the SolidRun Discord talk about bifurcation and splitting out the 8 PCIe lanes which is an option as well. Note that some of those lanes are also reserved for NVMe, SATA, and network. The CEX7 form factor and interchangeable carrier board presents interesting possibilities later as the NXP LX2160A docs claim to support up to 24 lanes. For a dev board it’s perfectly fine as-is.

Network Perf

For now I’ve managed to rig up a 4×10 LACP Team with NetworkManager for full load balancing. This same setup can be done with a QSFP+ breakout cable. KDE nm Network widget still doesn’t support Teams but I can set them up via nm-connection-editor or Cockpit. Automation could be achieved with nmcli and teamdctl. An iperf3 test shows the connection maxing out at about 13Gbps to/from the 2×10 LACP team on my workstation. I know that iperf isn’t a true indication of real-world usage but it’s fun for benchmarks and tuning nonetheless. This did in fact require a lot of tuning and at this point I feel like I could fill a book just with iperf stats.
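For reference, creating a similar LACP team with nmcli might look roughly like the following sketch (connection and interface names are assumptions, and the switch-side LAG has to be configured to match):

nmcli connection add type team con-name team0 ifname team0 team.config '{"runner": {"name": "lacp"}}'
nmcli connection add type ethernet con-name team0-eth1 ifname eth1 master team0 slave-type team
nmcli connection add type ethernet con-name team0-eth2 ifname eth2 master team0 slave-type team
nmcli connection up team0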

$ iperf3 -c honeycomb -P 4 --cport 5000 -R
Connecting to host honeycomb, port 5201
Reverse mode, remote host honeycomb is sending
[  5] local 192.168.2.10 port 5000 connected to 192.168.2.4 port 5201
[  7] local 192.168.2.10 port 5001 connected to 192.168.2.4 port 5201
[  9] local 192.168.2.10 port 5002 connected to 192.168.2.4 port 5201
[ 11] local 192.168.2.10 port 5003 connected to 192.168.2.4 port 5201
[ ID] Interval           Transfer     Bitrate
[  5] 1.00-2.00 sec   383 MBytes  3.21 Gbits/sec
[  7] 1.00-2.00 sec   382 MBytes  3.21 Gbits/sec
[  9] 1.00-2.00 sec   383 MBytes  3.21 Gbits/sec
[ 11] 1.00-2.00 sec   383 MBytes  3.21 Gbits/sec
[SUM] 1.00-2.00 sec  1.49 GBytes  12.8 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
(TRUNCATED)
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5] 2.00-3.00 sec   380 MBytes  3.18 Gbits/sec
[  7] 2.00-3.00 sec   380 MBytes  3.19 Gbits/sec
[  9] 2.00-3.00 sec   380 MBytes  3.18 Gbits/sec
[ 11] 2.00-3.00 sec   380 MBytes  3.19 Gbits/sec
[SUM] 2.00-3.00 sec  1.48 GBytes  12.7 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5] 0.00-10.00 sec  3.67 GBytes  3.16 Gbits/sec    1  sender
[  5] 0.00-10.00 sec  3.67 GBytes  3.15 Gbits/sec       receiver
[  7] 0.00-10.00 sec  3.68 GBytes  3.16 Gbits/sec    7  sender
[  7] 0.00-10.00 sec  3.67 GBytes  3.15 Gbits/sec       receiver
[  9] 0.00-10.00 sec  3.68 GBytes  3.16 Gbits/sec   36  sender
[  9] 0.00-10.00 sec  3.68 GBytes  3.16 Gbits/sec       receiver
[ 11] 0.00-10.00 sec  3.69 GBytes  3.17 Gbits/sec    1  sender
[ 11] 0.00-10.00 sec  3.68 GBytes  3.16 Gbits/sec       receiver
[SUM] 0.00-10.00 sec  14.7 GBytes  12.6 Gbits/sec   45  sender
[SUM] 0.00-10.00 sec  14.7 GBytes  12.6 Gbits/sec       receiver

iperf Done

Notes on iperf3

I struggled with LACP Team configuration for hours, having done this before with an HP cluster on the same switch. I’d heard stories about bonds being old news with team support adding better load balancing to single TCP flows. This still seems bogus as you still can’t load balance a single flow with a team in my experience. Also LACP claims to be fully automated and easier to set up than traditional load balanced trunks but I find the opposite to be true. For all it claims to automate you still need to have hashing algorithms configured correctly at switches and host. With a few quirks along the way I once accidentally left a team in broadcast mode (not LACP) which registered duplicate packets on the iperf server and made it look like a single connection was getting double bandwidth. That mistake caused confusion as I tried to reproduce it with LACP.

Then I finally found the LACP hash settings in Ubiquiti’s new firmware GUI. It’s hidden behind a tiny pencil icon on each LAG. I managed to set my LAGs to hash on Src+Dest IP+port when they were defaulting to MAC/port. Still I was only seeing traffic on one slave of my 2×10 team even with parallel clients. Eventually I tried parallel clients with -V and it all made sense. By default iperf3 client ports are ephemeral but they follow an even sequence: 42174, 42176, 42178, 42180, etc… If your lb hash across a pair of sequential MACs includes src+dst port but those ports are always even, you’ll never hit the other interface with an odd MAC. How crazy is that for iperf to do? I tried looking at the source for iperf3 and I don’t even see how that could be happening. Instead if you specify a client port as well as parallel clients, they use a straight sequence: 50000, 50001, 50002, 50003, etc.. With odd+even numbers in client ports, I’m finally able to LB across all interfaces in all LAG groups. This setup would scale out well with more clients on the network.

Proper LACP load balancing.

Everything could probably be tuned a bit better but for now it is excellent performance and it puts my QNAP to shame. I’ll continue experimenting with the network co-processor and seeing if I can enable the native LAG support for even better performance. Across the network I would expect a practical peak of about 40 Gbps raw which is great.

Virtualization

What about virt? One of the best parts about having 16 A72 cores is support for aarch64 VMs at full speed using KVM, which you won't be able to do on x86. I can use this single box to spin up a dozen or so VMs at a time for CI automation and testing, or just to test our latest HashiCorp builds with aarch64 builds on COPR. Qemu on x86 without KVM can emulate aarch64 but crawls by comparison. I haven't yet tried to add it to an oVirt cluster, but it's really snappy and proves more cost effective than spinning up Arm VMs in a cloud. One of the use cases for this environment is NFV, and I think it fits perfectly so long as you pair it with ECC RAM, which I skipped since I'm not running anything critical. If anybody wants to test drive a VM, DM me and I'll try to get you some temp access.

Virtual Machines in Cockpit

Benchmarks

Phoronix has already done quite a few benchmarks on OpenBenchmarking.org but I wanted to rerun them with the latest versions on my own Fedora 33 build for consistency. I also wanted to compare them to my Xeons, which is not really a fair comparison. Both use DDR4 with similar clock speeds, around 2GHz, but different architectures and caches obviously yield different results. Also the Xeons are dual socket, which is a huge cooling advantage for single threaded workloads. You can watch one process bounce between the coolest CPU sockets. The HoneyComb doesn't have this luxury and has a smaller fan, but the clock speed is playing it safe and slow at 2GHz, so I would bet the SoC has room to run faster if cooling were adjusted. I also haven't played with the PWM settings to adjust the fan speed up just in case. Benchmarks were performed using the tuned profile network-throughput.

Strangely some single core operations seem to actually perform better on the HoneyComb than they do on my Xeons. I tried single-threaded zstd compression with the default level 3 on a few files and found it actually performs consistently better on the HoneyComb. However, using the actual pts/compress-zstd benchmark with the multithreaded option turns the tables. The 16 cores still manage an impressive 2073 MB/s:

Zstd Compression 1.4.5:
   pts/compress-zstd-1.2.1 [Compression Level: 3]
   Test 1 of 1
   Estimated Trial Run Count:    3                      
   Estimated Time To Completion: 9 Minutes [22:41 UTC]
       Started Run 1 @ 22:33:02
       Started Run 2 @ 22:33:53
       Started Run 3 @ 22:34:37
   Compression Level: 3:
2079.3
2067.5
       2073.9
   Average: 2073.57 MB/s

For an apples-to-oranges comparison, my 2×10 core Xeon E5-2660 v3 box does 2790 MB/s, so 2073 seems perfectly respectable as a potential workstation. Paired with a midrange GPU this device would also make a great video transcoder or media server. Some users have asked about mining but I wouldn't use one of these for mining cryptocurrency. The lack of PCIe atomics means certain OpenCL and CUDA features might not be supported, and with only 8 PCIe lanes exposed you're fairly limited. That said it could potentially make a great mobile ML, VR, IoT, or vision development platform. The possibilities are pretty open as the whole package is very well balanced and flexible.

Conclusion

I wasn't organized enough this year to arrange a FOSDEM visit but this is something I would have loved to talk about. I'm definitely glad I tried it out. Special thanks to Jon Nettleton and the folks on SolidRun's Discord for the help and troubleshooting. The kit is powerful and potentially replaces a lot of energy waste in my home lab. It provides a great Arm platform for development and it's great to see how solid Fedora's alternative architecture support is. I got my Linux start on Gentoo back in the day, but Fedora really has upped its arch game. I'm really glad I didn't have to sit waiting for compilation on a proprietary platform. I look forward to the remaining patches being mainlined into the Fedora kernel and I hope to see a few more generations use this package, especially as Apple goes all in on Arm. It will also be interesting to see what features emerge if Nvidia's Arm acquisition goes through.

Astrophotography with Fedora Astronomy Lab: setting up

Friday 5th of February 2021 08:00:00 AM

You love astrophotography. You love Fedora Linux. What if you could do the former using the latter? Capturing stunning and awe-inspiring astrophotographs, processing them, and editing them for printing or sharing online using Fedora is absolutely possible! This tutorial guides you through the process of setting up a computer-guided telescope mount, guide cameras, imaging cameras, and other pieces of equipment. A future article will cover capturing and processing data into pleasing images. Please note that while this article is written with certain aspects of the astrophotography process included or omitted based off my own equipment, you can custom-tailor it to fit your own equipment and experience. Let’s capture some photons!

Installing Fedora Astronomy Lab

This tutorial focuses on Fedora Astronomy Lab, so it only makes sense that the first thing we should do is get it installed. But first, a quick introduction: based on the KDE Plasma desktop, Fedora Astronomy Lab includes many pieces of open source software to aid astronomers in planning observations, capturing data, processing images, and controlling astronomical equipment.

Download Fedora Astronomy Lab from the Fedora Labs website. You will need a USB flash-drive with at least eight GB of storage. Once you have downloaded the ISO image, use Fedora Media Writer to write the image to your USB flash-drive. After this is done, boot from the USB drive you just flashed and install Fedora Astronomy Lab to your hard drive. While you can use Fedora Astronomy Lab in a live-environment right from the flash drive, you should install to the hard drive to prevent bottlenecks when processing large amounts of astronomical data.

Configuring your installation

Before you can go capturing the heavens, you need to do some minor setup in Fedora Astronomy Lab.

First of all, you need to add your user to the dialout group so that you can access certain pieces of astronomical equipment from within the guiding software. Do that by opening the terminal (Konsole) and running this command (replacing user with your username):

sudo usermod -a -G dialout user
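After logging out and back in, you can verify that the group change took effect with a quick optional check:

id -nG | grep -w dialout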

My personal setup includes a guide camera (QHY5 series, also known as Orion Starshoot) that does not have a driver in the mainline Fedora repositories. To enable it, you need to install the qhyccd SDK. (Note that this package is not officially supported by Fedora. Use it at your own risk.) At the time of writing, I chose to use the latest stable release, 20.08.26. Once you have downloaded the Linux 64-bit version of the SDK, extract it:

tar zxvf sdk_linux64_20.08.26.tgz

Now change into the directory you just extracted, change the permissions of the install.sh file to give you execute privileges, and run the install.sh:

cd sdk_linux64_20.08.26
chmod +x install.sh
sudo ./install.sh

Now it’s time to install the qhyccd INDI driver. INDI is an open source software library used to control astronomical equipment. Unfortunately, the driver is unavailable in the mainline Fedora repositories, but it is in a Copr repository. (Note: Copr is not officially supported by Fedora infrastructure. Use packages at your own risk.) If you prefer to have the newest (and perhaps unstable!) pieces of astronomy software, you can also enable the “bleeding” repositories at this time by following this guide. For this tutorial, you are only going to enable one repo:

sudo dnf copr enable xsnrg/indi-3rdparty-bleeding

Install the driver by running the following command:

sudo dnf install indi-qhy

Finally, update all of your system packages:

sudo dnf update -y

To recap what you accomplished in this section: you added your user to the dialout group, downloaded and installed the qhyccd SDK, enabled the indi-3rdparty-bleeding Copr, installed the qhyccd INDI driver with dnf, and updated your system.

Connecting your equipment

This is the time to connect all your equipment to your computer. Most astronomical equipment will connect via USB, and it’s really as easy as plugging each device into your computer’s USB ports. If you have a lot of equipment (mount, imaging camera, guide camera, focuser, filter wheel, etc), you should use an external powered-USB hub to make sure that all connected devices have adequate power. Once you have everything plugged in, run the following command to ensure that the system recognizes your equipment:

lsusb

You should see output similar to (but not the same as) the output here:

You see in the output that the system recognizes the telescope mount (a SkyWatcher EQM-35 Pro) as Prolific Technology, Inc. PL2303 Serial Port, the imaging camera (a Sony a6000) as Sony Corp. ILCE-6000, and the guide camera (an Orion Starshoot, aka QHY5) as Van Ouijen Technische Informatica. Now that you have made sure your system recognizes your equipment, it’s time to open your desktop planetarium and telescope controller, KStars!

Setting up KStars

It’s time to open KStars, which is a desktop planetarium and also includes the Ekos telescope control software. The first time you open KStars, you will see the KStars Startup Wizard.

Follow the prompts to choose your home location (where you will be imaging from) and Download Extra Data…

Setting your location
“Download Extra Data”
Choosing which catalogs to download

This will allow you to install additional star, nebula, and galaxy catalogs. You don’t need them, but they don’t take up too much space and add to the experience of using KStars. Once you’ve completed this, hit Done in the bottom right corner to continue.

Getting familiar with KStars

Now is a good time to play around with the KStars interface. You are greeted with a spherical image with a coordinate plane and stars in the sky.

This is the desktop planetarium which allows you to view the placement of objects in the night sky. Double-clicking an object selects it, and right clicking on an object gives you options like Center & Track which will follow the object in the planetarium, compensating for sidereal time. Show DSS Image shows a real digitized sky survey image of the selected object.

Another essential feature is the Set Time option in the toolbar. Clicking this will allow you to input a future (or past) time and then simulate the night sky as if that were the current date.

The Set Time button

Configuring capture equipment with Ekos

You’re familiar with the KStars layout and some basic functions, so it’s time to move on configuring your equipment using the Ekos observatory controller and automation tool. To open Ekos, click the observatory button in the toolbar or go to Tools > Ekos.

The Ekos button on the toolbar

You will see another setup wizard: the Ekos Profile Wizard. Click Next to start the wizard.

In this tutorial, you have all of your equipment connected directly to your computer. A future article will cover using an INDI server installed on a remote computer to control your equipment, allowing you to connect over a network so you don't have to be in the same physical space as your gear. For now though, select Equipment is attached to this device.

You are now asked to name your equipment profile. I usually name mine something like “Local Gear” to differentiate between profiles that are for remote gear, but name your profile what you wish. We will leave the button marked Internal Guide checked and won’t select any additional services. Now click the Create Profile & Select Devices button.

This next screen is where we can select your particular driver to use for each individual piece of equipment. This part will be specific to your setup depending on what gear you use. For this tutorial, I will select the drivers for my setup.

My mount, a SkyWatcher EQM-35 Pro, uses the EQMod Mount under SkyWatcher in the menu (this driver is also compatible with all SkyWatcher equatorial mounts, including the EQ6-R Pro and the EQ8-R Pro). For my Sony a6000 imaging camera, I choose the Sony DSLR under DSLRs under the CCD category. Under Guider, I choose the QHY CCD under QHY for my Orion Starshoot (and any QHY5 series camera). That last driver we want to select will be under the Aux 1 category. We want to select Astrometry from the drop-down window. This will enable the Astrometry plate-solver from within Ekos that will allow our telescope to automatically figure out where in the night sky it is pointed, saving us the time and hassle of doing a one, two, or three star calibration after setting up our mount.

You selected your drivers. Now it’s time to configure your telescope. Add new telescope profiles by clicking on the + button in the lower right. This is essential for computing field-of-view measurements so you can tell what your images will look like when you open the shutter. Once you click the + button, you will be presented with a form where you can enter the specifications of your telescope and guide scope. For my imaging telescope, I will enter Celestron into the Vendor field, SS-80 into the Model field, I will leave the Driver field as None, Type field as Refractor, Aperture as 80mm, and Focal Length as 400mm.

After you enter the data, hit the Save button. You will see the data you just entered appear in the left window with an index number of 1 next to it. Now you can go about entering the specs for your guide scope following the steps above. Once you hit save here, the guide scope will also appear in the left window with an index number of 2. Once all of your scopes are entered, close this window. Now select your Primary and Guide telescopes from the drop-down window.

After all that work, everything should be correctly configured! Click the Close button and complete the final bit of setup.

Starting your capture equipment

This last step before you can start taking images should be easy enough. Click the Play button under Start & Stop Ekos to connect to your equipment.

You will be greeted with a screen that looks similar to this:

When you click on the tabs at the top of the screen, they should all show a green dot next to Connection, indicating that they are connected to your system. On my setup, the baud rate for my mount (the EQMod Mount tab) is set incorrectly, and so the mount is not connected.

This is an easy fix; click on the EQMod Mount tab, then the Connection sub-tab, and then change the baud rate from 9600 to 115200. Now is a good time to ensure the serial port under Ports is the correct serial port for your mount. You can check which port the system has mounted your device on by running the command:

ls /dev | grep USB

You should see ttyUSB0. If there is more than one USB-serial device plugged in at a time, you will see more than one ttyUSB port, each with an incrementing number. To figure out which port is correct, unplug your mount and run the command again.

Now click on the Main Control sub-tab, click Connect again, and wait for the mount to connect. It might take a few seconds, be patient and it should connect.

The last thing to do is set the sensor and pixel size parameters for my camera. Under the Sony DSLR Alpha-A6000 (Control) tab, select the Image Info sub-tab. This is where you can enter your sensor specifications; if you don’t know them, a quick search on the internet will bring you your sensor’s maximum resolution as well as pixel pitch. Enter this data into the right-side boxes, then press the Set button to load them into the left boxes and save them into memory. Hit the Close button when you are done.

Conclusion

Your equipment is ready to use. In the next article, you will learn how to capture and process the images.

Join Fedora at FOSDEM 2021

Monday 1st of February 2021 08:00:00 AM

Every year in Brussels, Belgium, the first weekend of February is dedicated to the Free and Open source Software Developers' European Meeting (FOSDEM). This is the largest open source, developer-oriented conference of the year. As expected, the conference is going online for the 2021 edition, which gives open source enthusiasts from everywhere the opportunity to attend. You can participate with the Fedora community virtually, too.

The Fedora Project has a long history of attendance at FOSDEM (since 2006) and 2021 will not be an exception. Every year a team of dedicated volunteers, advocates, and ambassadors staff a booth, hand out swag, and answer questions related to the Fedora Project. Although we will miss seeing everyone’s faces in person this year, we are still excited to catch up with friends, old and new.

FOSDEM 2021 will be held on February 6th & 7th and we expressly invite you to "stop by" our virtual booth, check out what's new in Fedora, and see what is planned for the upcoming year. The Fedora Project has its own web page where you can join us and find out what we have planned for the FOSDEM weekend! Visiting a FOSDEM stand this year is a little different. Here's how you can participate with the Fedora community.

Chat

The FOSDEM organizers have created a Fedora room on a dedicated server. You can either join the room directly and create a new Matrix account or join with a pre-existing account.

To join with a pre-existing account:

  • Go to the Element home page
  • Click “Explore more rooms”
  • Click on the dropdown list in order to add the FOSDEM server
  • On the bottom of the dropdown list click “Add a new server”
  • Enter “fosdem.org” and click “Add”
  • Join the Fedora chat
Welcome Sessions

Immediately before the start of each day’s sessions, there will be a “Welcome Session” for each stand from 0830-0900 UTC (9:30-10AM CET). There will be a group of Fedora folks hanging out in our chatroom during those times. Join us in our virtual booth.

Social Hours

Due to pandemic life, Fedorans have been meeting for a weekly social hour since April 2020. We have found them to be a fun way to stay connected, and we want to connect with you at FOSDEM! This is a great opportunity for conference attendees to meet the faces of Fedora. There will be two Social Hours: one on Saturday, 6 February at 1500 UTC and one on Sunday 7 February at 1500 UTC. Join us in our virtual booth!

Raffle

What would FOSDEM be without swag? In order to get that sweet swag into people’s hands this year, we are holding a raffle contest!

Join our chat room to find the raffle entry link. You will need to enter your name and email address to enter. We will randomly draw 100 winners on 8 February at 1700 UTC. The winners will receive a follow up form to provide shipment information. Terms & Conditions are available on the entry form.

Videos

The FOSDEM organizers have created a dedicated space for us to upload some videos to showcase our project. If you haven't already watched them, we will have:

  • “State of Fedora” from Matthew Miller from Nest 2020
  • Unboxing of Lenovo laptop running Fedora
  • Fedora 33 Release Party videos
  • Budapest community video from Flock 2019

Manage containers with Podman Compose

Friday 29th of January 2021 08:00:00 AM

Containers are awesome, allowing you to package your application along with its dependencies and run it anywhere. Starting with Docker in 2013, containers have been making the lives of software developers much easier.

One of the downsides of Docker is it has a central daemon that runs as the root user, and this has security implications. But this is where Podman comes in handy. Podman is a daemonless container engine for developing, managing, and running OCI Containers on your Linux system in root or rootless mode.

There are other articles on Fedora Magazine you can use to learn more about Podman. Two examples follow:

If you have worked with Docker, chances are you also know about Docker Compose, which is a tool for orchestrating several containers that might be interdependent. To learn more about Docker Compose see its documentation.

What is Podman Compose?

Podman Compose is a project whose goal is to be used as an alternative to Docker Compose without needing any changes to be made in the docker-compose.yaml file. Since Podman Compose works using pods, it’s good to check a refresher definition of a pod.

A Pod (as in a pod of whales or pea pod) is a group of one or more containers, with shared storage/network resources, and a specification for how to run the containers.

Pods – Kubernetes Documentation

The basic idea behind Podman Compose is that it picks the services defined inside the docker-compose.yaml file and creates a container for each service. A major difference between Docker Compose and Podman Compose is that Podman Compose adds the containers to a single pod for the whole project, and all the containers share the same network. It even names the containers the same way Docker Compose does, using the --add-host flag when creating the containers, as you will see in the example.

Installation

Complete install instructions for Podman Compose are found on its project page, and there are several ways to do it. To install the latest development version, use the following command:

pip3 install https://github.com/containers/podman-compose/archive/devel.tar.gz

Make sure you also have Podman installed since you’ll need it as well. On Fedora, to install Podman use the following command:

sudo dnf install podman

Example: launching a WordPress site with Podman Compose

Imagine your docker-compose.yaml file is in a folder called wpsite. A typical docker-compose.yaml (or docker-compose.yml) for a WordPress site looks like this:

version: "3.8" services: web: image: wordpress restart: always volumes: - wordpress:/var/www/html ports: - 8080:80 environment: WORDPRESS_DB_HOST: db WORDPRESS_DB_USER: magazine WORDPRESS_DB_NAME: magazine WORDPRESS_DB_PASSWORD: 1maGazine! WORDPRESS_TABLE_PREFIX: cz WORDPRESS_DEBUG: 0 depends_on: - db networks: - wpnet db: image: mariadb:10.5 restart: always ports: - 6603:3306 volumes: - wpdbvol:/var/lib/mysql environment: MYSQL_DATABASE: magazine MYSQL_USER: magazine MYSQL_PASSWORD: 1maGazine! MYSQL_ROOT_PASSWORD: 1maGazine! networks: - wpnet volumes: wordpress: {} wpdbvol: {} networks: wpnet: {}

If you come from a Docker background, you know you can launch these services by running docker-compose up. Docker Compose will create two containers named wpsite_web_1 and wpsite_db_1, and attach them to a network called wpsite_wpnet.

Now, see what happens when you run podman-compose up in the project directory. First, a pod is created named after the directory in which the command was issued. Next, it looks for any named volumes defined in the YAML file and creates the volumes if they do not exist. Then, one container is created per every service listed in the services section of the YAML file and added to the pod.

Naming of the containers is done similarly to Docker Compose. For example, for your web service, a container named wpsite_web_1 is created. Podman Compose also adds localhost aliases to each named container, so containers can still resolve each other by name even though they are not on a bridge network as in Docker. It does this by passing the --add-host option when creating each container, for example --add-host web:localhost.

Note that docker-compose.yaml includes a port forwarding from host port 8080 to container port 80 for the web service. You should now be able to access your fresh WordPress instance from the browser using the address http://localhost:8080.
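If you prefer a quick command line check that the site is answering, something like this should return an HTTP response header (assuming curl is installed):

curl -I http://localhost:8080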

WordPress Dashboard

Controlling the pod and containers

To see your running containers, use podman ps, which shows the web and database containers along with the infra container in your pod.

CONTAINER ID  IMAGE                               COMMAND               CREATED      STATUS          PORTS                                         NAMES
a364a8d7cec7  docker.io/library/wordpress:latest  apache2-foregroun...  2 hours ago  Up 2 hours ago  0.0.0.0:8080->80/tcp, 0.0.0.0:6603->3306/tcp  wpsite_web_1
c447024aa104  docker.io/library/mariadb:10.5      mysqld                2 hours ago  Up 2 hours ago  0.0.0.0:8080->80/tcp, 0.0.0.0:6603->3306/tcp  wpsite_db_1
12b1e3418e3e  k8s.gcr.io/pause:3.2

You can also verify that a pod has been created by Podman for this project by running podman pod ps. The pod is named after the folder in which you issued the command.

POD ID        NAME             STATUS    CREATED      INFRA ID      # OF CONTAINERS
8a08a3a7773e  wpsite           Degraded  2 hours ago  12b1e3418e3e  3

To stop the containers, enter the following command in another command window:

podman-compose down

You can also do that by stopping and removing the pod. This essentially stops and removes all the containers and then the containing pod. So, the same thing can be achieved with these commands:

podman pod stop podname
podman pod rm podname

Note that this does not remove the volumes you defined in docker-compose.yaml. So, the state of your WordPress site is saved, and you can get it back by running this command:

podman-compose up

In conclusion, if you’re a Podman fan and do your container jobs with Podman, you can use Podman Compose to manage your containers in development and production.

Introduction to Thunderbird mail filters

Wednesday 27th of January 2021 08:00:00 AM

Everyone eventually runs into an inbox loaded with messages that they need to sort through. If you are like a lot of people, this is not a fast process. However, use of mail filters can make the task a little less tedious by letting Thunderbird pre-sort the messages into categories that reflect their source, priority, or usefulness. This article is an introduction to the creation of filters in Thunderbird.

Filters may be created for each email account you have created in Thunderbird. These are the accounts you see in the main Thunderbird folder pane shown at the left of the “Classic Layout”.

Classic Layout

There are two methods that can be used to create mail filters for your accounts. The first is based on the currently selected account and the second on the currently selected message. Both are discussed here.

Message destination folder

Before filtering messages there has to be a destination for them. Create the destination by selecting a location to create a new folder. In this example the destination will be Local Folders shown in the accounts pane. Right click on Local Folders and select New Folder… from the menu.

Creating a new folder

Enter the name of the new folder in the menu and select Create Folder. The mail to filter is coming from the New York Times so that is the name entered.

Folder creation

Filter creation based on the selected account

Select the Inbox for the account you wish to filter and select the toolbar menu item at Tools > Message_Filters.

Message_Filters menu location


The Message Filters menu appears and is set to your pre-selected account as indicated at the top in the selection menu labelled Filters for:.

Message Filters menu


Previously created filters, if any, are listed beneath the account name in the “Filter Name” column. To the right of this list are controls that let you modify the filters selected. These controls are activated when you select a filter. More on this later.

Start creating your filter as follows:

  1. Verify the correct account has been pre-selected. It may be changed if necessary.
  2. Select New… from the menu list at the right.

When you select New you will see the Filter Rules menu where you define your filter. Note that when using New… you have the option to copy an existing filter to use as a template or to simply duplicate the settings.

Filter rules are made up of three things, the “property” to be tested, the “test”, and the “value” to be tested against. Once the condition is met, the “action” is performed.

Message Filters menu

Complete this filter as follows:

  1. Enter an appropriate name in the textbox labelled Filter name:
  2. Select the property From in the left drop down menu, if not set.
  3. Leave the test set to contains.
  4. Enter the value, in this case the email address of the sender.

Under the Perform these actions: section at the bottom, create an action rule to move the message and choose the destination.

  1. Select Move Messages to from the left end of the action line.
  2. Select Choose Folder… and select Local Folders > New York Times.
  3. Select OK.

By default the Apply filter when: is set to Manually Run and Getting New Mail:. This means that when new mail appears in the Inbox for this account the filter will be applied and you may run it manually at any time, if necessary. There are other options available but they are too numerous to be discussed in this introduction. They are, however, for the most part self explanatory.

If more than one rule or action is to be created during the same session, the “+” to the right of each entry provides that option. Additional property, test, and value entries can be added. If more than one rule is created, make certain that the appropriate option for Match all of the following and Match any of the following is selected. In this example the choice does not matter since we are only setting one filter.

After selecting OK, the Message Filters menu is displayed again showing your newly created filter. Note that the menu items on the right side of the menu are now active for Edit… and Delete.

First filter in the Message Filters menu

Also notice the message “Enabled filters are run automatically in the order shown below”. If there are multiple filters the order is changed by selecting the one to be moved and using the Move to Top, Move Up, Move Down, or Move to Bottom buttons. The order can change the destination of your messages so consider the tests used in each filter carefully when deciding the order.

Since you have just created this filter you may wish to use the Run Now button to run your newly created filter on the Inbox shown to the left of the button.

Filter creation based on a message

An alternative creation technique is to select a message from the message pane and use the Create Filter From Message… option from the menu bar.

In this example the filter will use two rules to select the messages: the email address and a text string in the Subject line of the email. Start as follows:

  1. Select a message in the message page.
  2. Select the filter options on the toolbar at Message > Create Filter From Message….
Create new filters from Messages

The pre-selected message, highlighted in grey in the message pane above, determines the account used and Create Filter From Message… takes you directly to the Filter Rules menu.

The property (From), test (is), and value (email) are pre-set for you as shown in the image above. Complete this filter as follows:

  1. Enter an appropriate name in the textbox labelled Filter name:. COVID is the name in this case.
  2. Check that the property is From.
  3. Verify the test is set to is.
  4. Confirm that the value for the email address is from the correct sender.
  5. Select the “+” to the right of the From rule to create a new filter rule.
  6. In the new rule, change the default property entry From to Subject using the pulldown menu.
  7. Set the test to contains.
  8. Enter the value text to be matched in the Email “Subject” line. In this case COVID.

Since we left the Match all of the following item checked, each message will be from the address chosen AND will have the text COVID in the email subject line.

Now use the action rule to choose the destination for the messages under the Perform these actions: section at the bottom:

  1. Select Move Messages to from the left menu.
  2. Select Choose Folder… and select Local Folders > COVID in Scotland. (This destination was created before this example was started. There was no magic here.)
  3. Select OK.

OK will cause the Message Filters menu to appear, again, verifying that the new filter has been created.

The Message Filters menu

All the message filters you create will appear in the Message Filters menu. Recall that the Message Filters is available in the menu bar at Tools > Message Filters.

Once you have created filters there are several options to manage them. To change a filter, select the filter in question and click on the Edit button. This will take you back to the Filter Rules menu for that filter. As mentioned earlier, you can change the order in which the filters are applied using the Move buttons. Disable a filter by clicking on the check mark in the Enabled column.

The Run Now button will execute the selected filter immediately. You may also run your filter from the menu bar using Tools > Run Filters on Folder or Tools > Run Filters on Message.

Next step

This article hasn’t covered every feature available for message filtering but hopefully it provides enough information for you to get started. Places for further investigation are the “property”, “test”, and “actions” in the Filter menu as well as the settings there for when your filter is to be run, Archiving, After Sending, and Periodically.

References

Mozilla: Organize Your Messages by Using Filters

MozillaZine: Message Filters

Convert your filesystem to Btrfs

Friday 22nd of January 2021 08:00:00 AM
Introduction

The purpose of this article is to give you an overview of why, and how, to migrate your current partitions to a Btrfs filesystem. If you're curious about doing this yourself, follow along for a step-by-step walkthrough of how it is accomplished.

Starting with Fedora 33, the default filesystem is now Btrfs for new installations. I’m pretty sure that most users have heard about its advantages by now: copy-on-write, built-in checksums, flexible compression options, easy snapshotting and rollback methods. It’s really a modern filesystem that brings new features to desktop storage.

Updating to Fedora 33, I wanted to take advantage of Btrfs, but personally didn't want to reinstall the whole system for 'just a filesystem change'. I found [there was] little guidance on how exactly to do it, so I decided to share my detailed experience here.

Watch out!

Doing this, you are playing with fire. Hopefully you are not surprised to read the following:

During editing partitions and converting file systems, you can have your data corrupted and/or lost. You can end up with an unbootable system and might be facing data recovery. You can inadvertently delete your partitions or otherwise harm your system.

These conversion procedures are meant to be safe even for production systems – but only if you plan ahead, have backups for critical data and rollback plans. As a sudoer, you can do anything without limits, without any of the usual safety guards protecting you.

The safe way: reinstalling Fedora

Reinstalling your operating system is the ‘official’ way of converting to Btrfs, recommended for most users. Therefore, choose this option if you are unsure about anything in this guide. The steps are roughly the following:

  1. Backup your home folder and any data that might be used in your system like /etc. [Editors note: VM’s too]
  2. Save your list of installed packages to a file.
  3. Reinstall Fedora by removing your current partitions and choosing the new default partitioning scheme with Btrfs.
  4. Restore the contents of your home folder and reinstall the packages using the package list.

For detailed steps and commands, see this comment by a community user at ask.fedoraproject.org. If you do this properly, you’ll end up with a system that is functioning in the same way as before, with minimal risk of losing any data.

Pros and cons of conversion

Let’s clarify this real quick: what kind of advantages and disadvantages does this kind of filesystem conversion have?

The good
  • Of course, no reinstallation is needed! Every file on your system will remain the exact same as before.
  • It’s technically possible to do it in-place i.e. without a backup.
  • You’ll surely learn a lot about btrfs!
  • It’s a rather quick procedure if everything goes according to plan.
The bad
  • You have to know your way around the terminal and shell commands.
  • You can lose data, see above.
  • If anything goes wrong, you are on your own to fix it.
The ugly
  • You’ll need about 20% of free disk space for a successful conversion. But for the complete backup & reinstall scenario, you might need even more.
  • You can customize everything about your partitions during the process, but you can also do that from Anaconda if you choose to reinstall.
What about LVM?

LVM layouts have been the default during the last few Fedora installations. If you have an LVM partition layout with multiple partitions e.g. / and /home, you would somehow have to merge them in order to enjoy all the benefits of Btrfs.

If you choose so, you can individually convert partitions to Btrfs while keeping the volume group. Nevertheless, one of the advantages of migrating to Btrfs is to get rid of the limits imposed by the LVM partition layout. You can also use the send-receive functionality offered by btrfs to merge the partitions after the conversion.

See also on Fedora Magazine: Reclaim hard-drive space with LVM, Recover your files from Btrfs snapshots and Choose between Btrfs and LVM-ext4.

Getting acquainted with Btrfs

It’s advisable to read at least the following to have a basic understanding about what Btrfs is about. If you are unsure, just choose the safe way of reinstalling Fedora.

Must reads

Useful resources

Conversion steps

Create a live image

Since you can’t convert mounted filesystems, we’ll be working from a Fedora live image. Install Fedora Media Writer and ‘burn’ Fedora 33 to your favorite USB stick.

Free up disk space

btrfs-convert will recreate filesystem metadata in your partition’s free disk space, while keeping all existing ext4 data at its current location.

Unfortunately, the amount of free space required cannot be known ahead of time; the conversion will just fail (and do no harm) if you don't have enough. Here are some useful ideas for freeing up space:

  • Use baobab to identify large files & folders to remove. Don’t manually delete files outside of your home folder if possible.
  • Clean up old system journals: journalctl --vacuum-size=100M
  • If you are using Docker, carefully use tools like docker volume prune, docker image prune -a
  • Clean up unused virtual machine images inside e.g. GNOME Boxes
  • Clean up unused packages and flatpaks: dnf autoremove, flatpak remove --unused
  • Clean up package caches: pkcon refresh force -c -1, dnf clean all
  • If you’re confident enough to, you can cautiously clean up the ~/.cache folder.
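Once you are done cleaning up, you can check how much space is now free on the filesystem you are about to convert, for example for the root filesystem:

df -h /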
Convert to Btrfs

Save all your valuable data to a backup, make sure your system is fully updated, then reboot into the live image. Run gnome-disks to find out your device handle e.g. /dev/sda1 (it can look different if you are using LVM). Check the filesystem and do the conversion: [Editors note: The following commands are run as root, use caution!]

$ sudo su -
# fsck.ext4 -fyv /dev/sdXX
# man btrfs-convert (read it!)
# btrfs-convert /dev/sdXX

This can take anywhere from 10 minutes to even hours, depending on the partition size and whether you have a rotational or solid-state hard drive. If you see errors, you’ll likely need more free space. As a last resort, you could try btrfs-convert -n.

How to roll back?

If the conversion fails for some reason, your partition will remain ext4 or whatever it was before. If you wish to roll back after a successful conversion, it’s as simple as

# btrfs-convert -r /dev/sdXX

Warning! You will permanently lose your ability to roll back if you do any of these: defragmentation, balancing or deleting the ext2_saved subvolume.

Due to the copy-on-write nature of Btrfs, you can otherwise safely copy, move, and even delete files and create subvolumes, because ext2_saved keeps referencing the old data.

Mount & check

Now the partition should have a Btrfs filesystem. Mount it and look around at your files… and subvolumes!

# mount /dev/sdXX /mnt
# man btrfs-subvolume (read it!)
# btrfs subvolume list / (-t for a table view)

Because you have already read the relevant manual page, you should now know that it’s safe to create subvolume snapshots, and that you have an ext2_saved subvolume as a handy backup of your previous data.

It’s time to read the Btrfs sysadmin guide, so that you won’t confuse subvolumes with regular folders.

Create subvolumes

We would like to achieve a ‘flat’ subvolume layout, which is the same as what Anaconda creates by default:

toplevel  (volume root directory, not to be mounted by default)
  +-- root  (subvolume root directory, to be mounted at /)
  +-- home  (subvolume root directory, to be mounted at /home)

You can skip this step, or decide to aim for a different layout. The advantage of this particular structure is that you can easily create snapshots of /home, and have different compression or mount options for each subvolume.

# cd /mnt
# btrfs subvolume snapshot ./ ./root2
# btrfs subvolume create home2
# cp -a home/* home2/

Here, we have created two subvolumes. root2 is a full snapshot of the partition, while home2 starts as an empty subvolume and we copy the contents inside. (This cp command doesn’t duplicate data so it is going to be fast.)

  • In /mnt (the top-level subvolume) delete everything except root2, home2, and ext2_saved.
  • Rename the root2 and home2 subvolumes to root and home.
  • Inside the root subvolume, empty out the home folder, so that we can mount the home subvolume there later (see the sketch after this list).

It’s simple if you get everything right!
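
A minimal sketch of these three steps, assuming the subvolume names root2 and home2 used above (double-check every path before deleting anything):

# cd /mnt
# ls      (review the old top-level entries first)
# rm -rf bin boot etc home lib lib64 usr var      (and so on: everything except root2, home2 and ext2_saved)
# mv root2 root
# mv home2 home
# rm -rf root/home/*      (empty the mount point for the home subvolume)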

Modify fstab

In order to mount the new volume after a reboot, fstab has to be modified by replacing the old ext4 mount lines with new ones.

You can use the command blkid to learn your partition’s UUID.

UUID=xx /     btrfs subvol=root 0 0
UUID=xx /home btrfs subvol=home 0 0

(Note that the two UUIDs are the same if they are referring to the same partition.)

These are the defaults for new Fedora 33 installations. In fstab you can also choose to customize compression and add options like noatime.

See the relevant wiki page about compression and man 5 btrfs for all relevant options.
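
As an illustration only (adjust to your own needs; compress=zstd:1 is a common low-overhead choice and noatime reduces write traffic), the lines above could become:

UUID=xx /     btrfs subvol=root,compress=zstd:1,noatime 0 0
UUID=xx /home btrfs subvol=home,compress=zstd:1,noatime 0 0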

Chroot into your system

If you’ve ever done system recovery, I’m pretty sure you know these commands. Here, we get a shell prompt that is essentially inside your system, with network access.

First, we have to remount the root subvolume to /mnt, then mount the /boot and /boot/efi partitions (these can be different depending on your filesystem layout):

# umount /mnt
# mount -o subvol=root /dev/sdXX /mnt
# mount /dev/sdXX /mnt/boot
# mount /dev/sdXX /mnt/boot/efi

Then we can move on to mounting system devices:

# mount -t proc /proc /mnt/proc
# mount --rbind /dev /mnt/dev
# mount --make-rslave /mnt/dev
# mount --rbind /sys /mnt/sys
# mount --make-rslave /mnt/sys
# cp /mnt/etc/resolv.conf /mnt/etc/resolv.conf.chroot
# cp -L /etc/resolv.conf /mnt/etc
# chroot /mnt /bin/bash
$ ping www.fedoraproject.org

Reinstall GRUB & kernel

The easiest way – now that we have network access – is to reinstall GRUB and the kernel, because that takes care of all the necessary configuration. So, inside the chroot:

# mount /boot/efi
# dnf reinstall grub2-efi shim
# grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg
# dnf reinstall kernel-core

...or just regenerate the initramfs:

# dracut --kver $(uname -r) --force

This applies if you have a UEFI system. Check the docs below if you have a BIOS system. Let's check that everything went well before rebooting:

# cat /boot/grub2/grubenv
# cat /boot/efi/EFI/fedora/grub.cfg
# lsinitrd /boot/initramfs-$(uname -r).img | grep btrfs

You should have proper partition UUIDs or references in grubenv and grub.cfg (grubenv may not have been updated, edit it if needed), and you should see insmod btrfs in grub.cfg and the btrfs module in your initramfs image.
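
If grubenv turns out to be stale, grub2-editenv can inspect and adjust it. This is a hedged sketch only: the kernelopts variable and its exact arguments are assumptions based on Fedora's BLS setup, so verify them against your own entries and substitute your real UUID:

# grub2-editenv /boot/grub2/grubenv list
# grub2-editenv /boot/grub2/grubenv set kernelopts="root=UUID=<your-new-uuid> rootflags=subvol=root ro"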

See also: Reinstalling GRUB 2 and Verifying the Initial RAM Disk Image in the Fedora System Administration Guide.

Reboot

Now your system should boot properly. If not, don’t panic, go back to the live image and fix the issue. In the worst case, you can just reinstall Fedora from right there.

After first boot

Check that everything is fine with your new Btrfs system. If you are happy, you'll need to reclaim the space used by the old ext4 snapshot, then defragment and balance the subvolumes. The latter two might take some time and are quite resource intensive.

You have to mount the top level subvolume for this:

# mount /dev/sdXX -o subvol=/ /mnt/someFolder
# btrfs subvolume delete /mnt/someFolder/ext2_saved

Then, run these commands when the machine has some idle time:

# btrfs filesystem defrag -v -r -f /
# btrfs filesystem defrag -v -r -f /home
# btrfs balance start -m /

Finally, there’s a “no copy-on-write” attribute that is automatically set for virtual machine image folders for new installations. Set it if you are using VMs:

# chattr +C /var/lib/libvirt/images
$ chattr +C ~/.local/share/gnome-boxes/images

This attribute only takes effect for new files in these folders. Duplicate the images and delete the originals. You can confirm the result with lsattr.
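
For example, a sketch of recreating an existing image so that it picks up the attribute (the image name fedora33 is hypothetical, adapt it to your own files):

$ cd ~/.local/share/gnome-boxes/images
$ cp --reflink=never fedora33 fedora33.new      (a full copy creates a new file that inherits the No_COW attribute)
$ rm fedora33 && mv fedora33.new fedora33
$ lsattr fedora33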

Wrapping up

I really hope that you have found this guide to be useful, and were able to make a careful and educated decision about whether or not to convert to Btrfs on your system. I wish you a successful conversion process!

Feel free to share your experience here in the comments, or if you run into deeper issues, on ask.fedoraproject.org.

Deploy your own Matrix server on Fedora CoreOS

Monday 18th of January 2021 08:00:00 AM

Today it is very common for open source projects to distribute their software via container images. But how can these containers be run securely in production? This article explains how to deploy a Matrix server on Fedora CoreOS.

What is Matrix?

Matrix provides an open source, federated and optionally end-to-end encrypted communication platform.

From matrix.org:

Matrix is an open source project that publishes the Matrix open standard for secure, decentralised, real-time communication, and its Apache licensed reference implementations.

Matrix also includes bridges to other popular platforms such as Slack, IRC, XMPP and Gitter. Some open source communities are replacing IRC with Matrix or adding Matrix as a new communication channel (see for example Mozilla, KDE, and Fedora).

Matrix is a federated communication platform. If you host your own server, you can join conversations hosted on other Matrix instances. This makes it great for self hosting.

What is Fedora CoreOS?

From the Fedora CoreOS docs:

Fedora CoreOS is an automatically updating, minimal, monolithic, container-focused operating system, designed for clusters but also operable standalone, optimized for Kubernetes but also great without it.

With Fedora CoreOS (FCOS), you get all the benefits of Fedora (podman, cgroups v2, SELinux) packaged in a minimal automatically updating system thanks to rpm-ostree.

To get more familiar with Fedora CoreOS basics, check out this getting started article:

Getting started with Fedora CoreOS

Creating the Fedora CoreOS configuration

Running a Matrix service requires the following software:

  • Synapse, the reference Matrix homeserver implementation
  • A PostgreSQL database for Synapse
  • The nginx web server, acting as a reverse proxy
  • certbot, to obtain and renew Let's Encrypt certificates
  • The Element web client

This guide will demonstrate how to run all of the above software in containers on the same FCOS server. All the services will be configured to run under the podman container engine.

Assembling the FCCT configuration

Configuring and provisioning these containers on the host requires an Ignition file. FCCT, the Fedora CoreOS Config Transpiler, generates the Ignition file from a YAML configuration file. On Fedora Linux you can install FCCT using dnf:

$ sudo dnf install fcct

A GitHub repository is available for the reader. It contains all the configuration needed and a basic template system to simplify personalizing the configuration. The template values use the %%VARIABLE%% format, and each variable is defined in a file named secrets.

User and ssh access

The first configuration step is to define an SSH key for the default user core.

variant: fcos
version: 1.3.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - %%SSH_PUBKEY%%

Cgroups v2

Fedora CoreOS comes with cgroups version 1 by default, but it can be configured to use cgroups v2. Using the latest version of cgroups allows for better control of the host resources among other new features.

Switching to cgroups v2 is done via a systemd service that modifies the kernel arguments and reboots the host on first boot.

systemd:
  units:
    - name: cgroups-v2-karg.service
      enabled: true
      contents: |
        [Unit]
        Description=Switch To cgroups v2
        After=systemd-machine-id-commit.service
        ConditionKernelCommandLine=systemd.unified_cgroup_hierarchy
        ConditionPathExists=!/var/lib/cgroups-v2-karg.stamp
        [Service]
        Type=oneshot
        RemainAfterExit=yes
        ExecStart=/bin/rpm-ostree kargs --delete=systemd.unified_cgroup_hierarchy
        ExecStart=/bin/touch /var/lib/cgroups-v2-karg.stamp
        ExecStart=/bin/systemctl --no-block reboot
        [Install]
        WantedBy=multi-user.target

Podman pod

Podman supports the creation of pods. Pods are quite handy when you need to group containers together within the same network namespace. Containers within a pod can communicate with each other using the localhost address.

Create and configure a pod to run the different services needed by Matrix:

    - name: podmanpod.service
      enabled: true
      contents: |
        [Unit]
        Description=Creates a podman pod to run the matrix services.
        After=cgroups-v2-karg.service network-online.target
        Wants=network-online.target
        [Service]
        Type=oneshot
        RemainAfterExit=yes
        ExecStart=sh -c 'podman pod exists matrix || podman pod create -n matrix -p 80:80 -p 443:443 -p 8448:8448'
        [Install]
        WantedBy=multi-user.target

Another advantage of using a pod is that we can control which ports are exposed to the host from the pod as a whole. Here we expose the ports 80, 443 (HTTP and HTTPS) and 8448 (Matrix federation) to the host to make these services available outside of the pod.
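
If you want to verify the result once the host is running, podman can list the pod and the containers attached to it, for example:

$ sudo podman pod ps
$ sudo podman ps --pod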

A web server with Let’s Encrypt support

The Matrix protocol is HTTP based. Clients connect to their homeserver via HTTPS. Federation between Matrix homeservers is also done over HTTPS. For this setup, you will need three domains. Using distinct domains helps to isolate each service and protect against cross-site scripting (XSS) vulnerabilities.

  • example.tld: The base domain for your homeserver. This will be part of the user Matrix IDs (for example: @username:example.tld).
  • matrix.example.tld: The sub-domain for your Synapse Matrix server.
  • chat.example.tld: The sub-domain for your Element web client.

To simplify the configuration, you only need to set your own domain once in the secrets file.

You will need to ask Let’s Encrypt for certificates on first boot for each of the three domains. Make sure that the domains are configured beforehand to resolve to the IP address that will be assigned to your server. If you do not know what IP address will be assigned to your server in advance, you might want to use another ACME challenge method to get Let’s Encrypt certificates (see DNS Plugins).

    - name: certbot-firstboot.service
      enabled: true
      contents: |
        [Unit]
        Description=Run certbot to get certificates
        ConditionPathExists=!/var/srv/matrix/letsencrypt-certs/archive
        After=podmanpod.service network-online.target nginx-http.service
        Wants=network-online.target
        Requires=podmanpod.service nginx-http.service
        [Service]
        Type=oneshot
        ExecStart=/bin/podman run \
                      --name=certbot \
                      --pod=matrix \
                      --rm \
                      --cap-drop all \
                      --volume /var/srv/matrix/letsencrypt-webroot:/var/lib/letsencrypt:rw,z \
                      --volume /var/srv/matrix/letsencrypt-certs:/etc/letsencrypt:rw,z \
                      docker.io/certbot/certbot:latest \
                      --agree-tos --webroot certonly
        [Install]
        WantedBy=multi-user.target

Once the certificates are available, you can start the final instance of nginx. Nginx will act as an HTTPS reverse proxy for your services.

    - name: nginx.service
      enabled: true
      contents: |
        [Unit]
        Description=Run the nginx server
        After=podmanpod.service network-online.target certbot-firstboot.service
        Wants=network-online.target
        Requires=podmanpod.service certbot-firstboot.service
        [Service]
        ExecStartPre=/bin/podman pull docker.io/nginx:stable
        ExecStart=/bin/podman run \
                      --name=nginx \
                      --pull=always \
                      --pod=matrix \
                      --rm \
                      --volume /var/srv/matrix/nginx/nginx.conf:/etc/nginx/nginx.conf:ro,z \
                      --volume /var/srv/matrix/nginx/dhparam:/etc/nginx/dhparam:ro,z \
                      --volume /var/srv/matrix/letsencrypt-webroot:/var/www:ro,z \
                      --volume /var/srv/matrix/letsencrypt-certs:/etc/letsencrypt:ro,z \
                      --volume /var/srv/matrix/well-known:/var/well-known:ro,z \
                      docker.io/nginx:stable
        ExecStop=/bin/podman rm --force --ignore nginx
        [Install]
        WantedBy=multi-user.target

Because Let’s Encrypt certificates have a short lifetime, they must be renewed frequently. Set up a systemd timer to automate their renewal:

    - name: certbot.timer
      enabled: true
      contents: |
        [Unit]
        Description=Weekly check for certificates renewal
        [Timer]
        OnCalendar=Sun *-*-* 02:00:00
        Persistent=true
        [Install]
        WantedBy=timers.target
    - name: certbot.service
      enabled: false
      contents: |
        [Unit]
        Description=Let's Encrypt certificate renewal
        ConditionPathExists=/var/srv/matrix/letsencrypt-certs/archive
        After=podmanpod.service network-online.target
        Wants=network-online.target
        Requires=podmanpod.service
        [Service]
        Type=oneshot
        ExecStart=/bin/podman run \
                      --name=certbot \
                      --pod=matrix \
                      --rm \
                      --cap-drop all \
                      --volume /var/srv/matrix/letsencrypt-webroot:/var/lib/letsencrypt:rw,z \
                      --volume /var/srv/matrix/letsencrypt-certs:/etc/letsencrypt:rw,z \
                      docker.io/certbot/certbot:latest \
                      renew
        ExecStartPost=/bin/systemctl restart --no-block nginx.service

Synapse and database

Finally, configure the Synapse server and PostgreSQL database.

The Synapse server requires a configuration file and secret keys to be generated. Follow the relevant section of the GitHub repository's README file to generate those.

Once these steps are completed, add a systemd service using podman to run Synapse as a container:

    - name: synapse.service
      enabled: true
      contents: |
        [Unit]
        Description=Run the synapse service.
        After=podmanpod.service network-online.target
        Wants=network-online.target
        Requires=podmanpod.service
        [Service]
        ExecStart=/bin/podman run \
                      --name=synapse \
                      --pull=always \
                      --read-only \
                      --pod=matrix \
                      --rm \
                      -v /var/srv/matrix/synapse:/data:z \
                      docker.io/matrixdotorg/synapse:v1.24.0
        ExecStop=/bin/podman rm --force --ignore synapse
        [Install]
        WantedBy=multi-user.target

Setting up the PostgreSQL database is very similar. You will also need to provide a POSTGRES_PASSWORD in the repository's secrets file and declare a systemd service (check the repository's README for all the details).

Setting up the Fedora CoreOS host

FCCT provides a storage directive which is useful for creating directories and adding files on the Fedora CoreOS host.

The following configuration makes sure that the configuration needed by each service is present under /var/srv/matrix. Each service has a dedicated directory. For example /var/srv/matrix/nginx and /var/srv/matrix/synapse. These directories are mounted by podman as volumes when the containers are started.

storage:
  directories:
    - path: /var/srv/matrix
      mode: 0700
    - path: /var/srv/matrix/synapse/media_store
      mode: 0777
    - path: /var/srv/matrix/postgres
    - path: /var/srv/matrix/letsencrypt-webroot
  trees:
    - local: synapse
      path: /var/srv/matrix/synapse
    - local: nginx
      path: /var/srv/matrix/nginx
    - local: nginx-http
      path: /var/srv/matrix/nginx-http
    - local: letsencrypt-certs
      path: /var/srv/matrix/letsencrypt-certs
    - local: well-known
      path: /var/srv/matrix/well-known
    - local: element-web
      path: /var/srv/matrix/element-web
  files:
    - path: /etc/postgresql_synapse
      contents:
        local: postgresql_synapse
      mode: 0700

Auto-updates

You are now ready to set up the most powerful part of Fedora CoreOS ‒ auto-updates. On Fedora CoreOS, the system is automatically updated and restarted approximately once every two weeks for each new Fedora CoreOS release. On startup, all containers will be updated to the latest version (because the pull=always option is set). The containers are stateless and volume mounts are used for any data that needs to be persistent across reboots.

The PostgreSQL container is an exception. It can not be fully updated automatically because it requires manual intervention for major releases. However, it will still be updated with new patch releases to fix security issues and bugs as long as the specified version is supported (approximately five years). Be aware that Synapse might start requiring a newer version before support ends. Consequently, you should plan a manual update approximately once per year for new PostgreSQL releases. The steps to update PostgreSQL are documented in this project’s README.

To maximise availability and avoid service interruptions in the middle of the day, set an update strategy in Zincati’s configuration to only allow reboots for updates during certain periods of time. For example, one might want to restrict reboots to week days between 2 AM and 4 AM UTC. Make sure to pick the correct time for your timezone. Fedora CoreOS uses the UTC timezone by default. Here is an example configuration snippet that you could append to your config.yaml:

[updates]
strategy = "periodic"

[[updates.periodic.window]]
days = [ "Mon", "Tue", "Wed", "Thu", "Fri" ]
start_time = "02:00"
length_minutes = 120

Creating your own Matrix server by using the git repository

Some sections were lightly edited to make this article easier to read, but you can find the full, unedited configuration in this GitHub repository. To host your own server from this configuration, fill in the secrets values and generate the full configuration with fcct via the provided Makefile:

$ cp secrets.example secrets
$ ${EDITOR} secrets   # Fill in values not marked as generated by synapse

Next, generate the Synapse secrets and include them in the secrets file. Finally, you can build the final configuration with make:

$ make # This will generate the config.ign file

You are now ready to deploy your Matrix homeserver on Fedora CoreOS. Follow the instructions for your platform of choice from the documentation to proceed.

Registering new users

What’s a service without users? Open registration was disabled by default to avoid issues. You can re-enable open registration in the Synapse configuration if you are up for it. Alternatively, even with open registration disabled, it is possible to add new users to your server via the command line:

$ sudo podman run --rm --tty --interactive \
      --pod=matrix \
      -v /var/srv/matrix/synapse:/data:z,ro \
      --entrypoint register_new_matrix_user \
      docker.io/matrixdotorg/synapse:latest \
      -c /data/homeserver.yaml http://127.0.0.1:8008

Conclusion

You are now ready to join the Matrix federated universe! Enjoy your quickly deployed and automatically updating Matrix server! Remember that auto-updates are made as safe as possible by the fact that if anything breaks, you can either roll back the system to the previous version or use the previous container image to work around any bugs while they are being fixed. Being able to quickly set up a system that will be kept updated and secure is the main advantage of the Fedora CoreOS model.

To go further, take a look at this other article that takes advantage of Terraform to generate the configuration and directly deploy Fedora CoreOS on your platform of choice.

Deploy Fedora CoreOS servers with Terraform

Discover Fedora Kinoite: a Silverblue variant with the KDE Plasma desktop

Thursday 14th of January 2021 08:00:00 AM

Fedora Kinoite is an immutable desktop operating system featuring the KDE Plasma desktop. In short, Fedora Kinoite is like Fedora Silverblue but with KDE instead of GNOME. It is an emerging variant of Fedora, based on the same technologies as Fedora Silverblue (rpm-ostree, Flatpak, podman) and created exclusively from official RPM packages from Fedora.

Fedora Kinoite is like Silverblue, but what is Silverblue?

From the Fedora Silverblue documentation:

Fedora Silverblue is an immutable desktop operating system. It aims to be extremely stable and reliable. It also aims to be an excellent platform for developers and for those using container-focused workflows.

For more details about what makes Fedora Silverblue interesting, read the “What is Fedora Silverblue?” article previously published on Fedora Magazine. Everything in that article also applies to Fedora Kinoite.

Fedora Kinoite status

Kinoite is not yet an official emerging edition of Fedora. But this is in progress and currently planned for Fedora 35. However, it is already usable! Join us if you want to help.

Kinoite is made from the same packages that are available in the Fedora repositories and that go into the classic Fedora KDE Plasma Spin, so it is as functional and stable as classic Fedora. I’m also “eating my own dog food” as it is the only OS installed on the laptop that I am currently using to write this article.

However, be aware that Kinoite does not currently have graphical support for updates. You will need to be comfortable with using the command line to manage updates, install Flatpaks, or overlay packages with rpm-ostree.
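
For example, overlaying a package is a single rpm-ostree command followed by a reboot (htop here is only a stand-in for whatever package you actually need):

$ rpm-ostree install htop
$ systemctl reboot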

How to try Fedora Kinoite

As Kinoite is not yet an official Fedora edition, there is not a dedicated installer for it for now. To get started, install Silverblue and switch to Kinoite with the following commands:

# Add the temporary unofficial Kinoite remote
$ curl -O https://tim.siosm.fr/downloads/siosm_gpg.pub
$ sudo ostree remote add kinoite https://siosm.fr/kinoite/ --gpg-import siosm_gpg.pub

# Optional, only if you want to keep Silverblue available
$ sudo ostree admin pin 0

# Switch to Kinoite
$ sudo rpm-ostree rebase kinoite:fedora/33/x86_64/kinoite

# Reboot
$ sudo systemctl reboot

How to keep it up-to-date

Kinoite does not yet have graphical support for updates in Discover. Work is in progress to teach Discover how to manage an rpm-ostree system. Flatpak management mostly works with Discover but the command line is still the best option for now.

To update the system, use:

$ rpm-ostree update

To update Flatpaks, use:

$ flatpak update

Status of KDE Apps Flatpaks

Just like Fedora Silverblue, Fedora Kinoite focuses on applications delivered as Flatpaks. Some non-KDE applications are available in the Fedora Flatpak registry, but until this selection is expanded with KDE ones, your best bet is to look for them on Flathub (see all KDE Apps on Flathub). Be aware that applications on Flathub may include non-free or proprietary software. The KDE SIG is working on packaging KDE Apps as Fedora provided Flatpaks but this is not ready yet.

Submitting bug reports

Report issues in the Fedora KDE SIG issue tracker or in the discussion thread at discussion.fedoraproject.org.

Other desktop variants

Although this project started with KDE, I have also already created variants for XFCE, Mate, Deepin, Pantheon, and LXQt. They are currently available from the same remote as Kinoite. Note that they will remain unofficial until someone steps up to maintain them officially in Fedora.

I have also created an additional smaller Base variant without any desktop environment. This allows you to overlay the lightweight window manager of your choice (i3, Sway, etc.). The same caveats as the ones for other desktop environments apply (currently unofficial and will need a maintainer).

Java development on Fedora Linux

Friday 8th of January 2021 08:00:00 AM

Java is a lot. Aside from being an island of Indonesia, it is a large software development ecosystem. Java was released in January 1996. It is approaching its 25th birthday and it’s still a popular platform for enterprise and casual software development. Many things, from banking to Minecraft, are powered by Java development.

This article will guide you through all the individual components that make Java and how they interact. This article will also cover how Java is integrated in Fedora Linux and how you can manage different versions. Finally, a small demonstration using the game Shattered Pixel Dungeon is provided.

A birds-eye perspective of Java

The following subsections present a quick recap of a few important parts of the Java ecosystem.

The Java language

Java is a strongly typed, object oriented programming language. Its principal designer is James Gosling, who worked at Sun, and Java was officially announced in 1995. Java's design is strongly inspired by C and C++, but uses a more streamlined syntax. Pointers are not present and parameters are passed by value. Integers and floats no longer have signed and unsigned variants, and more complex objects like Strings are part of the base definition.

But that was 1995, and the language has seen its ups and downs in development. Between 2006 and 2014, no major releases were made, which led to stagnation and opened up the market to competition. There are now multiple competing Java-esque languages like Scala, Clojure and Kotlin. A large part of ‘Java’ programming nowadays uses one of these alternative language specifications, which focus on functional programming or cross-compilation.

// Java
public class Hello {
    public static void main(String[] args) {
        System.out.println("Hello, world!");
    }
}

// Scala
object Hello {
    def main(args: Array[String]) = {
        println("Hello, world!")
    }
}

// Clojure
(defn -main [& args]
  (println "Hello, world!"))

// Kotlin
fun main(args: Array<String>) {
    println("Hello, world!")
}

The choice is now yours. You can choose to use a modern version or you can opt for one of the alternative languages if they suit your style or business better.

The Java platform

Java isn’t just a language. It is also a virtual machine to run the language. It’s a C/C++ based application that takes the code, and executes it on the actual hardware. Aside from that, the platform is also a set of standard libraries which are included with the Java Virtual Machine (JVM) and which are written in the same language. These libraries contain logic for things like collections and linked lists, date-times, and security.

And the ecosystem doesn’t stop there. There are also software repositories like Maven and Clojars which contain a considerable amount of usable third-party libraries. There are also special libraries aimed at certain languages, providing extra benefits when used together. Additionally, tools like Apache Maven, Sbt and Gradle allow you to compile, bundle and distribute the application you write. What is important is that this platform works with other languages. You can write your code in Scala and have it run side-by-side with Java code on the same platform.

Last but not least, there is a special link between the Java platform and the Android world. You can compile Java and Kotlin for the Android platform to get additional libraries and tools to work with.

License history

Since 2006, the Java platform has been licensed under the GPL 2.0 with a classpath exception. This means that everybody can build their own Java platform, tools and libraries included. This makes the ecosystem very competitive. There are many competing tools for building, distribution, and development.

Sun ‒ the original maintainer of Java ‒ was bought by Oracle in 2009. In 2017, Oracle changed the license terms of the Java package. This prompted multiple reputable software suppliers to create their own Java packaging chain. Red Hat, IBM, Amazon and SAP now have their own Java packages. They use the OpenJDK trademark to distinguish their offering from Oracle’s version.

It deserves special mention that the Java platform package provided by Oracle is not FLOSS. There are strict license restrictions on Oracle's Java-trademarked platform. For the remainder of this article, Java refers to the FLOSS edition ‒ OpenJDK.

Finally, the classpath-exception deserves special mention. While the license is GPL 2.0, the classpath-exception allows you to write proprietary software using Java as long as you don’t change the platform itself. This puts the license somewhere in between the GPL 2.0 and the LGPL and it makes Java very suitable for enterprises and commercial activities.

Praxis

If all of that seems quite a lot to take in, don’t panic. It’s 25 years of software history and there is a lot of competition. The following subsections demonstrate using Java on Fedora Linux.

Running Java locally

The default Fedora Workstation 33 installation includes OpenJDK 11. The open source code of the platform is bundled for Fedora Workstation by the Fedora Project’s package maintainers. To see for yourself, you can run the following:

$ java -version

Multiple versions of OpenJDK are available in Fedora Linux’s default repositories. They can be installed concurrently. Use the alternatives command to select which installed version of OpenJDK should be used by default.

$ dnf search openjdk
$ alternatives --config java
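
For example, to install two OpenJDK versions side by side and then pick the default (the package names below are the usual Fedora ones, but check the dnf search output to be sure):

$ sudo dnf install java-1.8.0-openjdk-devel java-11-openjdk-devel
$ sudo alternatives --config java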

Also, if you have Podman installed, you can find most OpenJDK options by searching for them.

$ podman search openjdk

There are many options to run Java, both natively and in containers. Many other Linux distributions also come with OpenJDK out of the box. Pkgs.org has a comprehensive list. GNOME Boxes or Virt Manager will be your friend in that case.

To get involved with the Fedora community directly, see their project Wiki.

Alternative configurations

If the Java version you want is not available in the repositories, use SDKMAN to install Java in your home directory. It also allows you to switch between multiple installed versions and it comes with popular CLI tools like Ant, Maven, Gradle and Sbt.

Last but not least, some vendors provide direct downloads for Java. Special mention goes to AdoptOpenJDK which is a collaborative effort among several major vendors to provide simple FLOSS packages and binaries.

Graphical tools

Several integrated development environments (IDEs) are available for Java. Some of the more popular IDEs include:

  • Eclipse: This is free software published and maintained by the Eclipse Foundation. Install it directly from the Fedora Project’s repositories or from Flathub.
  • NetBeans: This is free software published and maintained by the Apache foundation. Install it from their site or from Flathub.
  • IntelliJ IDEA: This is proprietary software but it comes with a gratis community version. It is published by Jet Brains. Install it from their site or from Flathub.

The above tools are themselves written in OpenJDK. They are examples of dogfooding.

Demonstration

The following demonstration uses Shattered Pixel Dungeon ‒ a Java-based rogue-like which is available on Android, Flathub and other platforms.

First, set up a development environment:

$ curl -s "https://get.sdkman.io" | bash
$ source "$HOME/.sdkman/bin/sdkman-init.sh"
$ sdk install gradle

Next, close your terminal window and open a new terminal window. Then run the following commands in the new window:

$ git clone https://github.com/00-Evan/shattered-pixel-dungeon.git
$ cd shattered-pixel-dungeon
$ gradle desktop:debug

Now, import the project in Eclipse. If Eclipse is not already installed, run the following command to install it:

$ sudo dnf install eclipse-jdt

Use Import Projects from File System to add the code of Shattered Pixel Dungeon.

As you can see in the imported resources on the top left, not only do you have the code of the project to look at, but you also have the OpenJDK available with all its resources and libraries.

If this motivates you further, I would like to point you towards the official documentation from Shattered Pixel Dungeon. The Shattered Pixel Dungeon build system relies on Gradle which is an optional extra that you will have to configure manually in Eclipse. If you want to make an Android build, you will have to use Android Studio. Android Studio is a gratis, Google-branded version of IntelliJ IDEA.

Summary

Developing with OpenJDK on Fedora Linux is a breeze. Fedora Linux provides some of the most powerful development tools available. Use Podman or Virt-Manager to easily and securely host server applications. OpenJDK provides a FLOSS means of creating applications that puts you in control of all the application’s components.

Java and OpenJDK are trademarks or registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Network address translation part 1 – packet tracing

Monday 4th of January 2021 08:00:00 AM

This is the first post in a series about network address translation (NAT). Part 1 shows how to use the iptables/nftables packet tracing feature to find the source of NAT-related connectivity problems.

Introduction

Network address translation is one way to expose containers or virtual machines to the wider internet. Incoming connection requests have their destination address rewritten to a different one. Packets are then routed to a container or virtual machine instead. The same technique can be used for load-balancing where incoming connections get distributed among a pool of machines.

Connection requests fail when network address translation is not working as expected. The wrong service is exposed, connections end up in the wrong container, requests time out, and so on. One way to debug such problems is to check that the incoming request matches the expected or configured translation.

Connection tracking

NAT involves more than just changing the IP addresses or port numbers. For instance, when mapping address X to Y, there is no need to add a rule to do the reverse translation. A netfilter system called “conntrack” recognizes packets that are replies to an existing connection. Each connection has its own NAT state attached to it. Reverse translation is done automatically.

Ruleset evaluation tracing

The utility nftables (and, to a lesser extent, iptables) allows for examining how a packet is evaluated and which rules in the ruleset were matched by it. To use this special feature, “trace rules” are inserted at a suitable location. These rules select the packet(s) that should be traced. Let's assume that a host coming from IP address C is trying to reach the service on address S and port P. We want to know which NAT transformation is picked up, which rules get checked and if the packet gets dropped somewhere.

Because we are dealing with incoming connections, add a rule to the prerouting hook point. Prerouting means that the kernel has not yet made a decision on where the packet will be sent. A change to the destination address often results in packets being forwarded rather than handled by the host itself.

Initial setup

# nft 'add table inet trace_debug'
# nft 'add chain inet trace_debug trace_pre { type filter hook prerouting priority -200000; }'
# nft "insert rule inet trace_debug trace_pre ip saddr $C ip daddr $S tcp dport $P tcp flags syn limit rate 1/second meta nftrace set 1"

The first rule adds a new table. This allows easier removal of the trace and debug rules later. A single “nft delete table inet trace_debug” will be enough to undo all rules and chains added to the temporary table during debugging.

The second rule creates a base hook before routing decisions have been made (prerouting) and with a negative priority value to make sure it will be evaluated before connection tracking and the NAT rules.

The only important part, however, is the last fragment of the third rule: “meta nftrace set 1”. This enables tracing events for all packets that match the rule. Be as specific as possible to get a good signal-to-noise ratio. Consider adding a rate limit to keep the number of trace events at a manageable level. A limit of one packet per second or per minute is a good choice. The provided example traces all syn and syn/ack packets coming from host $C and going to destination port $P on the destination host $S. The limit clause prevents event flooding. In most cases a trace of a single packet is enough.

The procedure is similar for iptables users. An equivalent trace rule looks like this:

# iptables -t raw -I PREROUTING -s $C -d $S -p tcp --tcp-flags SYN SYN --dport $P -m limit --limit 1/s -j TRACE

Obtaining trace events

Users of the native nft tool can just run the nft trace mode:

# nft monitor trace

This prints out the received packet and all rules that match the packet (use CTRL-C to stop it):

trace id f0f627 ip raw prerouting  packet: iif "veth0" ether saddr ..

We will examine this in more detail in the next section. If you use iptables, first check the installed version via the "iptables --version" command. Example:

# iptables --version
iptables v1.8.5 (legacy)

(legacy) means that trace events are logged to the kernel ring buffer. You will need to check dmesg or journalctl. The debug output lacks some information but is conceptually similar to the one provided by the new tools. You will need to check the rule line numbers that are logged and correlate those to the active iptables ruleset yourself. If the output shows (nf_tables), you can use the xtables-monitor tool:

# xtables-monitor --trace

If the command only shows the version, you will also need to look at dmesg/journalctl instead. xtables-monitor uses the same kernel interface as the nft monitor trace tool. The only differences are that it prints events in iptables syntax and that, if you use a mix of both iptables-nft and nft, it will be unable to print rules that use maps/sets and other nftables-only features.

Example

Let's assume you’d like to debug a non-working port forward to a virtual machine or container. The command “ssh -p 1222 10.1.2.3” should provide remote access to a container running on the machine with that address, but the connection attempt times out.

You have access to the host running the container image. Log in and add a trace rule. See the earlier example on how to add a temporary debug table. The trace rule looks like this:

nft "insert rule inet trace_debug trace_pre ip daddr 10.1.2.3 tcp dport 1222 tcp flags syn limit rate 6/minute meta nftrace set 1"

After the rule has been added, start nft in trace mode: nft monitor trace, then retry the failed ssh command. This will generate a lot of output if the ruleset is large. Do not worry about the large example output below – the next section will do a line-by-line walkthrough.

trace id 9c01f8 inet trace_debug trace_pre packet: iif "enp0" ether saddr .. ip saddr 10.2.1.2 ip daddr 10.1.2.3 ip protocol tcp tcp dport 1222 tcp flags == syn
trace id 9c01f8 inet trace_debug trace_pre rule ip daddr 10.1.2.3 tcp dport 1222 tcp flags syn limit rate 6/minute meta nftrace set 1 (verdict continue)
trace id 9c01f8 inet trace_debug trace_pre verdict continue
trace id 9c01f8 inet trace_debug trace_pre policy accept
trace id 9c01f8 inet nat prerouting packet: iif "enp0" ether saddr .. ip saddr 10.2.1.2 ip daddr 10.1.2.3 ip protocol tcp  tcp dport 1222 tcp flags == syn
trace id 9c01f8 inet nat prerouting rule ip daddr 10.1.2.3  tcp dport 1222 dnat ip to 192.168.70.10:22 (verdict accept)
trace id 9c01f8 inet filter forward packet: iif "enp0" oif "veth21" ether saddr .. ip daddr 192.168.70.10 .. tcp dport 22 tcp flags == syn tcp window 29200
trace id 9c01f8 inet filter forward rule ct status dnat jump allowed_dnats (verdict jump allowed_dnats)
trace id 9c01f8 inet filter allowed_dnats rule drop (verdict drop)
trace id 20a4ef inet trace_debug trace_pre packet: iif "enp0" ether saddr .. ip saddr 10.2.1.2 ip daddr 10.1.2.3 ip protocol tcp tcp dport 1222 tcp flags == syn

Line-by-line trace walkthrough

The first line generated is the packet id that triggered the subsequent trace output. Even though this is in the same grammar as the nft rule syntax, it contains header fields of the packet that was just received. You will find the name of the receiving network interface (here named “enp0”), the source and destination MAC addresses of the packet, the source IP address (this can be important – maybe the reporter is connecting from a wrong/unexpected host) and the TCP source and destination ports. You will also see a “trace id” at the very beginning. This identification tells which incoming packet matched a rule. The second line contains the first rule matched by the packet:

trace id 9c01f8 inet trace_debug trace_pre rule ip daddr 10.1.2.3 tcp dport 1222 tcp flags syn limit rate 6/minute meta nftrace set 1 (verdict continue)

This is the just-added trace rule. The first rule is always one that activates packet tracing. If there were other rules before this one, we would not see them. If there is no trace output at all, the trace rule itself was never reached or did not match. The next two lines tell us that there are no further rules and that the “trace_pre” hook allows the packet to continue (policy accept).

The next matching rule is

trace id 9c01f8 inet nat prerouting rule ip daddr 10.1.2.3  tcp dport 1222 dnat ip to 192.168.70.10:22 (verdict accept)

This rule sets up a mapping to a different address and port. Provided 192.168.70.10 really is the address of the desired VM, there is no problem so far. If it's not the correct VM address, the address was either mistyped or the wrong NAT rule was matched.

IP forwarding

Next we can see that the IP routing engine told the IP stack that the packet needs to be forwarded to another host:

trace id 9c01f8 inet filter forward packet: iif "enp0" oif "veth21" ether saddr .. ip daddr 192.168.70.10 .. tcp dport 22 tcp flags == syn tcp window 29200

This is another dump of the packet that was received, but there are a couple of interesting changes. There is now an output interface set. This did not exist previously because the previous rules are located before the routing decision (the prerouting hook). The id is the same as before, so this is still the same packet, but the address and port have already been altered. In case there are rules that match “tcp dport 1222”, they will no longer have any effect on this packet.

If the line contains no output interface (oif), the routing decision steered the packet to the local host. Route debugging is a different topic and not covered here.

trace id 9c01f8 inet filter forward rule ct status dnat jump allowed_dnats (verdict jump allowed_dnats)

This tells that the packet matched a rule that jumps to a chain named “allowed_dnats”. The next line shows the source of the connection failure:

trace id 9c01f8 inet filter allowed_dnats rule drop (verdict drop)

The rule unconditionally drops the packet, so no further log output for the packet exists. The next output line is the result of a different packet:

trace id 20a4ef inet trace_debug trace_pre packet: iif "enp0" ether saddr .. ip saddr 10.2.1.2 ip daddr 10.1.2.3 ip protocol tcp tcp dport 1222 tcp flags == syn

The trace id is different; the packet, however, has the same content. This is a retransmit attempt: the first packet was dropped, so TCP retries. Ignore the remaining output; it does not contain new information. Time to inspect that chain.

Ruleset investigation

The previous section found that the packet is dropped in a chain named “allowed_dnats” in the inet filter table. Time to look at it:

# nft list chain inet filter allowed_dnats
table inet filter {
 chain allowed_dnats {
  meta nfproto ipv4 ip daddr . tcp dport @allow_in accept
  drop
   }
}

The rule that accepts packets in the @allow_in set did not show up in the trace log. Double-check that the address is in the @allow_in set by listing the element:

# nft "get element inet filter allow_in { 192.168.70.10 . 22 }"
Error: Could not process rule: No such file or directory

As expected, the address-service pair is not in the set. We add it now.

# nft "add element inet filter allow_in { 192.168.70.10 . 22 }"

Run the query command now; it will return the newly added element.

# nft "get element inet filter allow_in { 192.168.70.10 . 22 }" table inet filter { set allow_in { type ipv4_addr . inet_service elements = { 192.168.70.10 . 22 } } }

The ssh command should now work and the trace output reflects the change:

trace id 497abf58 inet filter forward rule ct status dnat jump allowed_dnats (verdict jump allowed_dnats)

trace id 497abf58 inet filter allowed_dnats rule meta nfproto ipv4 ip daddr . tcp dport @allow_in accept (verdict accept)

trace id 497abf58 ip postrouting packet: iif "enp0" oif "veth21" ether ..

trace id 497abf58 ip postrouting policy accept

This shows the packet passes the last hook in the forwarding path – postrouting.

In case the connection is still not working, the problem is somewhere later in the packet pipeline and outside of the nftables ruleset.
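
Whatever the outcome, remember to remove the temporary debug table when you are done, as mentioned in the initial setup:

# nft delete table inet trace_debug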

Summary

This article gave an introduction on how to check for packet drops and other sources of connectivity problems with the nftables trace mechanism. A later post in the series shows how to inspect the connection tracking subsystem and the NAT information that may be attached to tracked flows.

More in Tux Machines

Open Hardware: Crowbits, Raspberry Pi, and RISC-V

  • Crowbits Master Kit Tutorial - Part 2: ESP32 intrusion scanner and visual programming - CNX Software - Embedded Systems News

    I started Crowbits Master Kit review last month by checking out the content, user manual, and some of the possible projects for the ESP32 educational kit including a 2G phone and a portable game console. For the second part of the review, I’ll go through one of the lessons in detail, namely the intrusion scanner to show the whole process and how well (or not) it works. Let’s go to Lesson 5 directly, although I’d recommend going through the first lessons that provide details about the hardware and visual programming basics using Letscode program, which is basically a custom version of Scratch for Crowbits

  • RP2040-PICO-PC small computer made with the Raspberry Pi RP2040-PICO module first prototypes are ready

    It’s a small base board for the RP2040-PICO, the $4 module with the Cortex-M0+ processor made by the Raspberry Pi Foundation.

    We had the prototype ready for a long time, but the RP2040-PICO modules were tricky to source.

  • ESP32-C6 WiFI 6 and Bluetooth 5 LE RISC-V SoC for IoT devices coming soon - CNX Software - Embedded Systems News

    Espressif Systems introduced their first RISC-V wireless SoC last year with ESP32-C3 single-core 32-bit RISC-V SoC offering both 2.4GHz WiFi 4 and Bluetooth 5.0 LE connectivity, and while the company sent some engineering samples of ESP32-C3 boards months ago, general availability of ESP32-C3-DevKitM-1 and modules is expected shortly. But the company did not stop here, and just announced their second RISC-V processor with ESP32-C6 single-core 32-bit RISC-V microcontroller clocked at up to 160 MHz with both 2.4 GHz WiFi 6 (802.11ax) and Bluetooth 5 LE connectivity.

Linux, NetBSD, and OpenBSD

  • EXT4 With Linux 5.13 Looks Like It Will Support Casefolding With Encryption Enabled - Phoronix

    While EXT4 supports both case-folding for optional case insensitive filenames and does support file-system encryption, at the moment those features are mutually exclusive. But it looks like the upcoming Linux 5.13 kernel will allow casefolding and encryption to be active at the same time. Queued this week into the EXT4 file-system's "dev" tree was ext4: handle casefolding with encryption.

  • SiFive FU740 PCIe Support Queued Ahead Of Linux 5.13 - Phoronix

    Arguably the most interesting RISC-V board announced to date is SiFive's HiFive Unmatched with the FU740 RISC-V SoC that features four U74-MC cores and one S7 embedded core. The HiFive Unmatched also has 16GB of RAM, USB 3.2 Gen 1, one PCI Express x16 slot (operating at x8 speeds), an NVMe slot, and Gigabit Ethernet. The upstream kernel support for the HiFive Unmatched and the FU740 SoC continues. With the Linux 5.12 cycle there was the start of mainlining SiFive FU740 SoC support and that work is continuing for the upcoming Linux 5.13 cycle.

  • The state of toolchains in NetBSD

    While FreeBSD and OpenBSD both switched to using LLVM/Clang as their base system compiler, NetBSD picked a different path and remained with GCC and binutils regardless of the license change to GPLv3. However, it doesn't mean that the NetBSD project endorses this license, and the NetBSD Foundation has issued a statement about its position on the subject.

    Realistically, NetBSD is more or less tied to GCC, as it supports more architectures than the other BSDs, some of which will likely never be supported in LLVM.

    As of NetBSD 9.1, the latest released version, all supported platforms have recent versions of GCC (7.5.0) and binutils (2.31.1) in the base system. Newer (and older!) versions of GCC can be installed via Pkgsrc, and the following packages are available, going all the way back to GCC 3.3.6: [...]

  • Review: OpenBSD 6.8 on 8th Gen Lenovo ThinkPad X1 Carbon 13.3"

    10 days ago, I bought this X1 Carbon. I immediately installed OpenBSD on it. It took me a few days to settle in and make myself at home, but here are my impressions.

    This was the smoothest experience I've had getting OpenBSD set up the way I like it. The Toshiba NB305 in 2011 was a close second, but the Acer I used between these two laptops required a lot more tweaking of both hardware and kernel to get it to feel like home.

Audio/Video and Games: This Week in Linux, MineTest, OpenTTD, and Portal Stories: Mel

  • This Week in Linux 146: Linux on M1 Mac, Google vs Oracle, PipeWire, Bottom Panel for GNOME Shell - TuxDigital

    On this episode of This Week in Linux, we’ve got an update for Linux support on Apple’s M1 Mac hardware. KDE Announces a new patch-set for Qt 5. IBM Announced COBOL Compiler For Linux. Then later in the show we’re bringing back everyone’s favorite Legal News segment with Google v. Oracle reaching U.S. Supreme Court. We’ve also got new releases to talk about such as PipeWire 0.3.25 and JingOS v0.8 plus GNOME Designers are exploring the possibility of having a bottom panel. Then we’ll round out the show with some Humble Bundles about programming in Python. All that and much more on Your Weekly Source for Linux GNews!

  • MineTest: I Am A Dwarf And I'm Digging A Hole

    People have been asking me to play MineTest for ages, so I thought I should finally get around to it. If you've never heard of it, MineTest is an open source Minecraft clone that surprisingly has a lot of community support.

  • OpenTTD Went to Steam to Solve a Hard Problem - Boiling Steam

    OpenTTD, the free and open-source software recreation of Transport Tycoon Deluxe, has been a popular game for a long time, but recently something unusual happened. The team behind the project decided to release the game on Steam (still free as always) and this has changed everything. Let me explain why this matters. If you have ever played OpenTTD on Linux, let me venture that you have probably relied on your distro’s package manager to keep your game up-to-date. In theory, this is the BEST way to keep your packages up to date. Rely on maintainers. In practice however, it’s far from being something you can rely on, beyond security updates. Debian stable tends to have really old packages, sometimes years behind their latest versions. So on Debian stable you end up with OpenTTD 1.08 as the most recent version. That’s what shipped in April 2018. Just about 3 years old.

  • Portal Stories: Mel gets Vulkan support on Linux in a new Beta | GamingOnLinux

    Portal Stories: Mel, an extremely popular and highly rated mod for Portal 2 just had a new Beta pushed out which adds in Vulkan support for Linux. Much like the update for Portal 2 that recently added Vulkan support, it's using a special native build of DXVK, the Vulkan-based translation layer for Direct3D 9/10/11. Compared with the Portal 2 update, in some of my own testing today it seems that Portal Stories: Mel seems to benefit from the Vulkan upgrade quite a bit more in some places. At times giving a full 100FPS increase! So for those on weaker cards, this will probably be an ideal upgrade. Another game to test with Vulkan is always great too.

today's howtos

  • How to Install TeamSpeak Client on Ubuntu 20.04 Linux - Linux Shout

    TeamSpeak is free voice conferencing software available for Linux, Windows, macOS, FreeBSD, and Android. It is a pioneer in its field, alongside other platforms such as Discord. TeamSpeak allows free-of-cost access to around 1000 public TeamSpeak servers, or you can even run your own private one. In parallel to online games, you can use the current TeamSpeak to communicate with friends via speech and text.

  • How To Install Robo 3T on Ubuntu 20.04 LTS - idroot

    In this tutorial, we will show you how to install Robo 3T on Ubuntu 20.04 LTS. For those of you who didn't know, Robo 3T, formerly known as Robomongo, is one of the best GUI tools for managing and querying a MongoDB database. It embeds the actual mongo shell, which allows for CLI as well as GUI access to the database. This article assumes you have at least basic knowledge of Linux, know how to use the shell, and most importantly, you host your site on your own VPS. The installation is quite simple and assumes you are running in the root account; if not, you may need to add 'sudo' to the commands to get root privileges. I will show you the step-by-step installation of Robo 3T on an Ubuntu 20.04 (Focal Fossa) server. You can follow the same instructions for Ubuntu 18.04, 16.04, and any other Debian-based distribution like Linux Mint.

  • How to Install Java on Ubuntu Step by Step Guide for Beginners

    Some programs/tools/utilities on Ubuntu require Java/JVM; without Java these programs will not work. Are you facing the same problem? Don't worry! In this article I am going to cover how to install Java on Ubuntu. This article will cover the complete tutorial step by step. You can get Java on Ubuntu via three packages: JRE, OpenJDK and Oracle JDK. Java and the Java Virtual Machine (JVM) are widely used and required to run much software.

  • "apt-get command not found" error in Ubuntu by Easy Way

    The apt-get command is used to manage packages in Ubuntu and other Debian-based distributions. You can install and remove software in Ubuntu, and you can update and upgrade Ubuntu and other operating systems, with the help of this command. But what if you want to install new software with the apt-get command and you get the error "apt-get command not found"? This is really the biggest problem for a new user: you can neither install new packages nor update and upgrade Ubuntu. If apt-get is not working, how will you install a new package? If the problem is only about installing new packages, it can be solved: you can use the dpkg command to install deb files in Ubuntu and derivatives.

  • How to upgrade Linux Mint 19.3 (Tricia) to Mint 20.1 (Ulyssa) - Linux Shout

    Are you planning to upgrade your existing Linux Mint 19.3 (Tricia) PC or laptop to Linux Mint 20.1 Ulyssa? Then follow the simple steps given in this tutorial… Linux Mint is one of the popular distros among users who want a Windows-like operating system but with the benefits of Linux and a user-friendly interface. As Mint is an Ubuntu derivative, we not only have access to a large number of packages to install but also get stability. The process of upgrading Mint is very easy; we can use the GUI or commands to do it. However, in this article, we will show you how to upgrade from Tricia (19.3) to Ulyssa (20.1) using the CLI, so first you have to make sure that your existing Mint 19.3 installation is 64-bit, because 20.1 doesn't support 32-bit.

  • How to Install Node js in Ubuntu Step by Step Explanation for Beginners

    Node.js is an open source cross-platform JavaScript run-time environment that allows server-side execution of JavaScript code. In simple words, you can run JavaScript code on your machine (server) as a standalone application and access it from any web browser. When you create a server-side application you need Node.js; it also helps to create front-end and full-stack applications. npm (Node Package Manager) is a package manager for the JavaScript programming language, and the default package manager for Node.js. This tutorial will cover, step by step, how to install Node.js in Ubuntu 19.04, in case you need the latest Node.js and npm versions. If you are using Node.js for development purposes then your best option is to install Node.js using the NVM script. Although this tutorial is written for Ubuntu, the same instructions apply for any Ubuntu-based distribution, including Kubuntu, Linux Mint and Elementary OS.

  • How to play Geometry Dash on Linux

    Geometry Dash is a music platformer game developed by Robert Topala. The game is available to play on iOS, Android, as well as Microsoft Windows via Steam. In the game, players control a character’s movement and navigate through a series of music-based levels while avoiding obstacles and hazards.

  • How To Set Up a Firewall with UFW in Ubuntu \ Debian

    The Linux kernel includes the Netfilter subsystem, which is used to manipulate or decide the fate of network traffic headed into or through your server. All modern Linux firewall solutions use this system for packet filtering. [...] The default behavior of the UFW Firewall is to block all incoming and forwarding traffic and allow all outbound traffic. This means that anyone trying to access your server will not be able to connect unless you specifically open the port. Applications and services running on your server will be able to access the outside world.