Fedora Magazine

Guides, information, and news about the Fedora operating system for users, developers, system administrators, and community members.

Using Python and NetworkManager to control the network

Friday 16th of September 2022 08:00:00 AM

NetworkManager is the default network management service on Fedora and several other Linux distributions. Its main purpose is to take care of things like setting up interfaces, adding addresses and routes to them, and configuring other network-related aspects of the system, such as DNS.

There are other tools that offer similar functionality. However, one of the advantages of NetworkManager is that it offers a powerful API. Using this API, other applications can inspect, monitor and change the networking state of the system.

This article first introduces the API of NetworkManager and presents how to use it from a Python program. In the second part it shows some practical examples: how to connect to a wireless network or to add an IP address to an interface programmatically via NetworkManager.

The API

NetworkManager provides a D-Bus API. D-Bus is a message bus system that allows processes to talk to each other; using D-Bus, a process that wants to offer some services can register on the bus with a well-known name (for example, “org.freedesktop.NetworkManager”) and expose some objects, each identified by a path. Using d-feet, a graphical tool to inspect D-Bus objects, we can see the object tree exposed by the NetworkManager service:

Each object has properties, methods and signals, grouped into different interfaces. For example, the following is a simplified view of the interfaces for the second device object:

We see that there are different interfaces; the org.freedesktop.NetworkManager.Device interface contains some properties common to all devices, like the state, the MTU and IP configurations. Since this is an Ethernet device, it also has an org.freedesktop.NetworkManager.Device.Wired D-Bus interface containing other properties such as the link speed.

The full documentation for the D-Bus API of NetworkManager is here.

A client can connect to the NetworkManager service using the well-known name and perform operations on the exposed objects. For example, it can invoke methods, access properties or receive notifications via signals. In this way, it can control almost every aspect of network configuration. In fact, all the tools that interact with NetworkManager – nmcli, nmtui, GNOME control center, the KDE applet, Cockpit – use this API.
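As a quick illustration (an addition, not from the original article), the service can be reached with plain D-Bus calls, here using PyGObject's Gio. This is a minimal sketch that assumes NetworkManager is running on the system bus:

```python
# Minimal sketch: talk to NetworkManager over raw D-Bus with PyGObject's Gio,
# without using libnm. Assumes NetworkManager is running on the system bus.
import gi
gi.require_version("Gio", "2.0")
from gi.repository import Gio

proxy = Gio.DBusProxy.new_for_bus_sync(
    Gio.BusType.SYSTEM,
    Gio.DBusProxyFlags.NONE,
    None,
    "org.freedesktop.NetworkManager",   # well-known bus name
    "/org/freedesktop/NetworkManager",  # object path
    "org.freedesktop.NetworkManager",   # interface
    None,
)

# Read a property from the proxy's cache and invoke a method:
print("version:", proxy.get_cached_property("Version"))
devices = proxy.call_sync("GetDevices", None, Gio.DBusCallFlags.NONE, -1, None)
print("device paths:", devices.unpack()[0])
```

Higher-level tools build exactly this kind of proxy machinery for you, which is what the next section is about.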

libnm

When developing a program, it can be convenient to automatically instantiate objects from the objects available on D-Bus and keep their properties synchronized; or to be able to have method calls on those objects automatically dispatched to the corresponding D-Bus method. Such objects are usually called proxies and are used to hide the complexity of D-Bus communication from the developer.

For this purpose, the NetworkManager project provides a library called libnm, written in C and based on GNOME’s GLib and GObject. The library provides C language bindings for functionality provided by NetworkManager. Being a GLib library, it is usable from other languages as well via GObject introspection, as explained below.

The library maps fairly closely to the D-Bus API of NetworkManager. It wraps remote D-Bus objects as native GObjects, and D-Bus signals and properties to GObject signals and properties. Furthermore, it provides helpful accessors and utility functions.

Overview of libnm objects

The diagram below shows the most important objects in libnm and their relationship:

NMClient caches all the objects instantiated from D-Bus. The object is typically created at the beginning of the program and provides a way to access other objects.

A NMDevice represents a network interface, physical (such as Ethernet, InfiniBand or Wi-Fi) or virtual (such as a bridge or an IP tunnel). Each device type supported by NetworkManager has a dedicated subclass that implements type-specific properties and methods. For example, a NMDeviceWifi has properties related to the wireless configuration and to access points found during the scan, while a NMDeviceVlan has properties describing its VLAN-id and the parent device.

NMClient also provides a list of NMRemoteConnection objects. NMRemoteConnection is one of the two implementations of the NMConnection interface. A connection (or connection profile) contains all the configuration needed to connect to a specific network.

The difference between a NMRemoteConnection and a NMSimpleConnection is that the former is a proxy for a connection existing on D-Bus while the latter is not. In particular, NMSimpleConnection can be instantiated when a new blank connection object is required. This is useful, for example, when adding a new connection to NetworkManager.

The last object in the diagram is NMActiveConnection. This represents an active connection to a specific network using settings from a NMRemoteConnection.

GObject introspection

GObject introspection is a layer that acts as a bridge between a C library using GObject and programming language runtimes such as JavaScript, Python, Perl, Java, Lua, .NET, Scheme, etc.

When the library is built, sources are scanned to generate introspection metadata describing, in a language-agnostic way, all the constants, types, functions, signals, etc. exported by the library. The resulting metadata is used to automatically generate bindings to call into the C library from other languages.

One form of metadata is a GObject Introspection Repository (GIR) XML file. GIRs are mostly used by languages that generate bindings at compile time. The GIR can be translated into a machine-readable format called Typelib that is optimized for fast access and lower memory footprint; for this reason it is mostly used by languages that generate bindings at runtime.

This page lists all the introspection bindings for other languages. For a Python example we will use PyGObject which is included in the python3-gobject RPM on Fedora.

A basic example

Let’s start with a simple Python program that prints information about the system:

import gi
gi.require_version("NM", "1.0")
from gi.repository import GLib, NM

client = NM.Client.new(None)
print("version:", client.get_version())

At the beginning we import the introspection module and then the GLib and NM modules. Since there could be multiple versions of the NM module on the system, we make certain to load the right one. Then we create a client object and print the version of NetworkManager.

Next, we want to get a list of devices and print some of their properties:

devices = client.get_devices()
print("devices:")
for device in devices:
    print(" - name:", device.get_iface())
    print("   type:", device.get_type_description())
    print("   state:", device.get_state().value_nick)

The device state is an enum of type NMDeviceState and we use value_nick to get its description. The output is something like:

version: 1.41.0
devices:
 - name: lo
   type: loopback
   state: unmanaged
 - name: enp1s0
   type: ethernet
   state: activated
 - name: wlp4s0
   type: wifi
   state: activated

In the libnm documentation we see that the NMDevice object has a get_ip4_config() method which returns a NMIPConfig object and provides access to addresses, routes and other parameters currently set on the device. We can print them with:

ip4 = device.get_ip4_config()
if ip4 is not None:
    print("   addresses:")
    for a in ip4.get_addresses():
        print("    - {}/{}".format(a.get_address(), a.get_prefix()))
    print("   routes:")
    for r in ip4.get_routes():
        print("    - {}/{} via {}".format(r.get_dest(), r.get_prefix(), r.get_next_hop()))

From this, the output for enp1s0 becomes:

 - name: enp1s0
   type: ethernet
   state: activated
   addresses:
    - 192.168.122.191/24
    - 172.26.1.1/16
   routes:
    - 172.26.0.0/16 via None
    - 192.168.122.0/24 via None
    - 0.0.0.0/0 via 192.168.122.1

Connecting to a Wi-Fi network

Now that we have mastered the basics, let’s try something more advanced. Suppose we are in the range of a wireless network and we want to connect to it.

As mentioned before, a connection profile describes all the settings required to connect to a specific network. Conceptually, we’ll need to perform two different operations: first insert a new connection profile into NetworkManager’s configuration, and second activate it. Fortunately, the API provides the method nm_client_add_and_activate_connection_async() that does everything in a single step. When calling the method we need to pass at least the following parameters:

  • the NMConnection we want to add, containing all the needed properties;
  • the device to activate the connection on;
  • the callback function to invoke when the method completes asynchronously.

We can construct the connection with:

def create_connection():
    connection = NM.SimpleConnection.new()
    ssid = GLib.Bytes.new("Home".encode("utf-8"))

    s_con = NM.SettingConnection.new()
    s_con.set_property(NM.SETTING_CONNECTION_ID, "my-wifi-connection")
    s_con.set_property(NM.SETTING_CONNECTION_TYPE, "802-11-wireless")

    s_wifi = NM.SettingWireless.new()
    s_wifi.set_property(NM.SETTING_WIRELESS_SSID, ssid)
    s_wifi.set_property(NM.SETTING_WIRELESS_MODE, "infrastructure")

    s_wsec = NM.SettingWirelessSecurity.new()
    s_wsec.set_property(NM.SETTING_WIRELESS_SECURITY_KEY_MGMT, "wpa-psk")
    s_wsec.set_property(NM.SETTING_WIRELESS_SECURITY_PSK, "z!q9at#0b1")

    s_ip4 = NM.SettingIP4Config.new()
    s_ip4.set_property(NM.SETTING_IP_CONFIG_METHOD, "auto")

    s_ip6 = NM.SettingIP6Config.new()
    s_ip6.set_property(NM.SETTING_IP_CONFIG_METHOD, "auto")

    connection.add_setting(s_con)
    connection.add_setting(s_wifi)
    connection.add_setting(s_wsec)
    connection.add_setting(s_ip4)
    connection.add_setting(s_ip6)
    return connection

The function creates a new NMSimpleConnection and sets all the needed properties. All the properties are grouped into settings. In particular, the NMSettingConnection setting contains general properties such as the profile name and its type. NMSettingWireless indicates the wireless network name (SSID) and that we want to operate in “infrastructure” mode, that is, as a wireless client. The wireless security setting specifies the authentication mechanism and a password. We set both IPv4 and IPv6 to “auto” so that the interface gets addresses via DHCP and IPv6 autoconfiguration.

All the properties supported by NetworkManager are described in the nm-settings man page and in the “Connection and Setting API Reference” section of the libnm documentation.

To find a suitable interface, we loop through all devices on the system and return the first Wi-Fi device.

def find_wifi_device(client):
    for device in client.get_devices():
        if device.get_device_type() == NM.DeviceType.WIFI:
            return device
    return None

What is missing now is a callback function, but it’s easier if we look at it later. We can proceed by invoking the add_and_activate_connection_async() method:

import gi
gi.require_version("NM", "1.0")
from gi.repository import GLib, NM

# other functions here...

main_loop = GLib.MainLoop()
client = NM.Client.new(None)
connection = create_connection()
device = find_wifi_device(client)
client.add_and_activate_connection_async(
    connection, device, None, None, add_and_activate_cb, None
)
main_loop.run()

To support multiple asynchronous operations without blocking execution of the whole program, libnm uses an event loop mechanism. For an introduction to event loops in GLib see this tutorial. The call to main_loop.run() waits until there are events (such as the callback for our method invocation, or any update from D-Bus). Event processing continues until the main loop is explicitly terminated. This happens in the callback:

def add_and_activate_cb(client, result, data):
    try:
        ac = client.add_and_activate_connection_finish(result)
        print("ActiveConnection {}".format(ac.get_path()))
        print("State {}".format(ac.get_state().value_nick))
    except Exception as e:
        print("Error:", e)
    main_loop.quit()

Here, we use client.add_and_activate_connection_finish() to get the result for the asynchronous method. The result is a NMActiveConnection object and we print its D-Bus path and state.

Note that the callback is invoked as soon as the active connection is created. It may still be attempting to connect. In other words, when the callback runs we don’t have a guarantee that the activation completed successfully. If we want to ensure that, we would need to monitor the active connection state until it changes to activated (or to deactivated in case of failure). In this example, we just print that the activation started, or why it failed, and then we quit the main loop; after that, the main_loop.run() call will end and our program will terminate.
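As a sketch of what that monitoring could look like (an addition, not part of the original example): libnm's NMActiveConnection emits a state-changed signal, which we can connect to instead of quitting right away. This assumes the main_loop and imports from the example above:

```python
# Hedged sketch: watch the active connection's "state-changed" signal
# until activation either completes or fails. Assumes "main_loop" and
# the NM import from the surrounding example.
def on_ac_state_changed(ac, state, reason):
    if state == NM.ActiveConnectionState.ACTIVATED:
        print("Activation completed successfully")
        main_loop.quit()
    elif state == NM.ActiveConnectionState.DEACTIVATED:
        print("Activation failed, reason code:", reason)
        main_loop.quit()

def add_and_activate_cb(client, result, data):
    try:
        ac = client.add_and_activate_connection_finish(result)
        # Keep the main loop running until the connection settles.
        ac.connect("state-changed", on_ac_state_changed)
    except Exception as e:
        print("Error:", e)
        main_loop.quit()
```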

Adding an address to a device

Once there is a connection active on a device, we might decide that we want to configure an additional IP address on it.

There are different ways to do that. One way would be to modify the profile and activate it again similar to what we saw in the previous example. Another way is by changing the runtime configuration of the device without updating the profile on disk.

To do that, we use the reapply() method. It requires at least the following parameters:

  • the NMDevice on which to apply the new configuration;
  • the NMConnection containing the configuration.

Since we only want to change the IP address and leave everything else unchanged, we first need to retrieve the current configuration of the device (also called the “applied connection”). Then we update it with the static address and reapply it to the device.

The applied connection, not surprisingly, can be queried with method get_applied_connection() of the NMDevice. Note that the method also returns a version id that can be useful during the reapply to avoid race conditions with other clients. For simplicity we are not going to use it.
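For completeness, here is a hedged sketch (an addition to the article's example) of how that version id could be used to detect races:

```python
# Hedged sketch: pass the version id from get_applied_connection_finish()
# back to reapply_async(). Assumes the imports and the reapply_cb callback
# from the surrounding example.
def get_applied_cb(device, result, data):
    (connection, version_id) = device.get_applied_connection_finish(result)
    s_ip4 = connection.get_setting_ip4_config()
    s_ip4.add_address(NM.IPAddress.new(socket.AF_INET, "172.25.12.1", 24))
    # With a non-zero version id, NetworkManager refuses the reapply if
    # another client modified the applied connection in the meantime,
    # instead of silently overwriting its changes.
    device.reapply_async(connection, version_id, 0, None, reapply_cb, None)
```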

In this example we suppose that we already know the name of the device we want to update:

import gi
import socket
gi.require_version("NM", "1.0")
from gi.repository import GLib, NM

# other functions here...

main_loop = GLib.MainLoop()
client = NM.Client.new(None)
device = client.get_device_by_iface("enp1s0")
device.get_applied_connection_async(0, None, get_applied_cb, None)
main_loop.run()

The callback function retrieves the applied connection from the result, changes the IPv4 configuration and reapplies it:

def get_applied_cb(device, result, data):
    (connection, v) = device.get_applied_connection_finish(result)
    s_ip4 = connection.get_setting_ip4_config()
    s_ip4.add_address(NM.IPAddress.new(socket.AF_INET, "172.25.12.1", 24))
    device.reapply_async(connection, 0, 0, None, reapply_cb, None)

Omitting exception handling for brevity, the reapply callback is as simple as:

def reapply_cb(device, result, data):
    device.reapply_finish(result)
    main_loop.quit()

When the program quits, we will see the new address configured on the interface.

Conclusion

This article introduced the D-Bus and libnm API of NetworkManager and presented some practical examples of its usage. Hopefully it will be useful when you need to develop your next project that involves networking!

Besides the examples presented here, the NetworkManager git tree includes many others for different programming languages. To stay up-to-date with news from the NetworkManager world, follow the blog.

Announcing the release of Fedora Linux 37 Beta

Tuesday 13th of September 2022 12:12:40 AM

The Fedora Project is pleased to announce the immediate availability of Fedora Linux 37 Beta, the next step towards our planned Fedora Linux 37 release at the end of October.

Download the prerelease from our Get Fedora site:

Or, check out one of our popular variants, including KDE Plasma, Xfce, and other desktop environments, as well as images for specific use cases like Computational Neuroscience.

Beta Release Highlights Fedora Workstation

Fedora 37 Workstation Beta includes a beta release of GNOME 43. (We expect the final GNOME 43 release in a few weeks.) GNOME 43 includes a new device security panel in Settings, providing the user with information about the security of hardware and firmware on the system. Building on the previous release, more core GNOME apps have been ported to the latest version of the GTK toolkit, providing improved performance and a modern look. 

Other updates

The Raspberry Pi 4 is now officially supported in Fedora Linux, including accelerated graphics. In other ARM news, Fedora Linux 37 Beta drops support for the ARMv7 architecture (also known as arm32 or armhfp).

We are preparing to promote two of our most popular variants, Fedora CoreOS and Fedora Cloud Base, to Editions. Fedora Editions are our flagship offerings targeting specific use cases.

In order to keep up with advances in cryptography, this release introduces a TEST-FEDORA39 policy that previews changes planned for Fedora Linux 39. The new policy includes a move away from SHA-1 signatures.

Of course, there’s the usual update of programming languages and libraries: Python 3.11, Perl 5.36, Golang 1.19, and more!

Testing needed

Since this is a Beta release, we expect that you may encounter bugs or missing features. To report issues encountered during testing, contact the Fedora QA team via the test mailing list or in the #quality channel on Matrix (bridged to #fedora-qa on Libera.chat). As testing progresses, we track common issues on Ask Fedora.

For tips on reporting a bug effectively, read how to file a bug.

What is the Beta Release?

A Beta release is code-complete and bears a very strong resemblance to the final release. If you take the time to download and try out the Beta, you can check and make sure the things that are important to you are working. Every bug you find and report doesn’t just help you, it improves the experience of millions of Fedora Linux users worldwide! Together, we can make Fedora rock-solid. We have a culture of coordinating new features and pushing fixes upstream as much as we can. Your feedback improves not only Fedora Linux, but the Linux ecosystem and free software as a whole.

More information

For more detailed information about what’s new on Fedora Linux 37 Beta release, you can consult the Fedora Linux 37 Change set. It contains more technical information about the new packages and improvements shipped with this release.

Manual action required to update Fedora Silverblue, Kinoite and IoT (version 36)

Friday 9th of September 2022 08:00:00 AM

Due to an unfortunate combination of issues, the Fedora Silverblue, Kinoite and IoT variants that are running a version from 36.20220810.0 and later are no longer able to update to the latest version.

You can use these two commands to work around the bugs:

$ sudo find /boot/efi -exec touch '{}' ';'
$ sudo touch /etc/kernel/cmdline

Afterwards, you can update your system as usual with GNOME Software (on Silverblue) or via:

$ sudo rpm-ostree upgrade

These two issues are rooted in GRUB2 bugs that have only landed in Fedora and do not affect CentOS Stream 9 or RHEL. This also does not affect Fedora CoreOS for different reasons.

You can get more details about those issues in the tracker for Fedora Silverblue: https://github.com/fedora-silverblue/issue-tracker/issues/322

The business case for supporting EPEL

Wednesday 7th of September 2022 08:00:00 AM

EPEL stands for Extra Packages for Enterprise Linux. EPEL is a collection of packages built and maintained by the community for use in Red Hat Enterprise Linux (RHEL), CentOS Stream, and RHEL-like distributions like Rocky Linux and Alma Linux.

I am going to make the case that if you use EPEL as part of your organization’s infrastructure, you have an interest in keeping those packages available and as secure as they can be.

Who is this article for? I’m thinking of the team leads, managers, and directors in IT departments who make decisions about the tools their organizations have access to.

If you don’t use or know about EPEL, it’s likely that you don’t have to think about these things. In this case this article isn’t for you. However, it might contain ideas for promoting sustainable uses of free and open source software that you can apply to other situations that are more relevant to you.

Reason 1: Unmaintained packages may be removed from EPEL

Packages must be built and maintained for them to be available to the users of every distro. If someone isn’t doing the work of maintaining the packages, those packages become increasingly out of date. Eventually they may even be removed from the repository because of the security risk. This is avoidable as long as a package has a maintainer.

If you or someone in your organization is the maintainer of a package that you use, then you don’t have to worry about it falling by the wayside and potentially becoming a vulnerability. You get to make sure that the package stays in the repo, is up to date, and remains compatible with the rest of your infrastructure or deployments. Plain and simple.

Of course there needs to be room to manage bandwidth. How critical an application is to the operations of your organization defines how important it should be for you to make sure that either you maintain it or it is being looked after. XFCE may just be a nice-to-have for you, but Ansible might be mission critical.

Reason 2: You’re the first to have any security patches

Cyber threats continue to grow in number of exploits found and the speed at which they are exploited. Security is on every IT person’s mind. Patching vulnerabilities is something that increasingly can’t wait, and this extends to EPEL packages as well.

If you’re the maintainer for a required application, you have the ability to respond quickly to newly discovered vulnerabilities and protect your organization. Additionally, acting in your own self-interest now protects all the other organizations that also depend on that package.

Reason 3: Everyone else who uses that package can help you keep the package running well

As the maintainer of an application, others who also use the package will alert you of bugs as they arise. These are bugs that you may not have realized were there. Arguably it may not be critical to squash bugs that you don’t experience. However, by becoming the hub for feedback for that package, you will also be smoothing out the experience for your own users who may not have thought to report the bug. You benefit from crowd-sourcing quality control.

Reason 4: You can prepare for future releases before they come out

All future LTS releases of RHEL and RHEL-like distros will have their start as CentOS Stream. If you plan on migrating to a release that is represented by the current version of CentOS Stream, as the maintainer you can and should be building against it. This allows you to ensure continuity by packaging the application yourself for your next upgrade. You will know, ahead of time, whether your must-have packages will work in the latest release of your enterprise distro of choice.

Reason 5: You’re contributing to the long-term confidence in EPEL as a platform

The only reason we have packages in EPEL to begin with is because individuals are volunteering their time to maintain them. In a few cases you have companies committing resources to maintain packages but they are a small minority. If people don’t believe that EPEL will stick around for as long as RHEL releases, maintainers can lose steam or burnout. By committing resources to EPEL, you are shoring up confidence in the project – confidence that can encourage other organizations and people to invest in EPEL.

Potential solutions

If at this point you are thinking to yourself, “I would like to give back in some way, but what would that look like?”, here are some ideas. Some require lower commitment than others if you want to help but need to remain flexible about involvement.

  1. Maintain at least one package of the ones you use in your organization. The average maintainer looks after 10 packages, so covering at least one should be an easier hurdle to cross.
  2. If everything you use is already covered, find at least one package without a maintainer so that you can support other users just as other maintainers are supporting you.
  3. Report bugs for the packages you’re using.
  4. Request packages from older EPEL branches in newer EPEL branches, e.g., EPEL 9.
  5. Provide testing feedback for packages in the epel-testing repositories.
  6. Depending on the number and importance of packages you use, consider how much employee time you want to dedicate to EPEL maintenance.
  7. Integrate any EPEL maintenance you provide into the job descriptions of the responsible employees so that your team can continue being a responsible open source contributor into the future.
Become a package maintainer

You can start by checking out the Fedora documentation on how to become a package maintainer!

If you need support, or assistance getting started, help is available in the EPEL Matrix channel (with IRC bridge). Here are other ways to get in touch with the EPEL community.

Since you’ve made it this far…

Here are additional resources you can check out on EPEL and how you can leverage it more.

What do you think?

Do you think these reasons are valid? Are there others you think should be mentioned? Do you disagree with this idea? Continue the conversation in the comments below or in the Fedora Discussion board!

Manage containers on Fedora Linux with Podman Desktop

Monday 5th of September 2022 08:00:00 AM

Podman Desktop is an open-source GUI application for managing containers on Linux, macOS, and Windows.

Historically, developers have been using Docker Desktop for graphical management of containers. This worked for those who had the Docker daemon and Docker CLI installed. However, for those who used the daemon-less Podman tool, although there were a few Podman frontends like Pods, Podman desktop companion, and Cockpit, there was no official application. This is not the case anymore. Enter Podman Desktop!

This article will discuss features, installation, and use of Podman Desktop, which is developed by developers from Red Hat and other open-source contributors.

Installation

To install Podman Desktop on Fedora Linux, head over to podman-desktop.io, and click the Download for Linux button. You will be presented with two options: Flatpak and zip. In this example we are using Flatpak. After clicking Flatpak, open it in GNOME Software by double clicking the file (if you are using GNOME). You can also install it via the terminal:

flatpak install podman-desktop-X.X.X.flatpak

In the above command, replace X.X.X with the specific version you have downloaded. If you downloaded the zip file, then extract the archive, and launch the Podman Desktop application binary. You can also find pre-release versions by going to the project’s releases page on GitHub.

Features

Podman Desktop is still in its early days. Yet, it supports many common container operations like creating container images, running containers, etc. In addition, you can find a Podman extension under Extensions Catalog in Preferences, which you can use to manage Podman virtual machines on macOS and Windows. Furthermore, Podman Desktop has support for Docker Desktop extensions.

You can install such extensions in the Docker Desktop Extensions section under Preferences. The application window has two panes. The left narrow pane shows different features of the application and the right pane is the content area, which will display relevant information given what is selected on the left.

Podman Desktop 0.0.6 running on Fedora 36

Demo

To get an overall view of Podman Desktop’s capabilities, we will create an image from a Dockerfile and push it to a registry, then pull and run it, all from within Podman Desktop.

Build image

The first step is to create a simple Dockerfile by entering the following lines in the command line:

cat <<EOF>>Dockerfile
FROM docker.io/library/httpd:2.4
COPY . /var/www/html
WORKDIR /var/www/html
CMD ["httpd", "-D", "FOREGROUND"]
EOF

Now, go to the Images section and press the Build Image button. You will be taken to a new page to specify the Dockerfile, build context and image name. Under Containerfile path, click and browse to pick your Dockerfile. Under image name, enter a name for your image. You can specify a fully qualified image name (FQIN) in the form example.com/username/repo:tag if you want to push the image to a container registry. In this example, I enter quay.io/codezombie/demo-httpd:latest, because I have a public repository named demo-httpd on quay.io. You can follow a similar format to specify your FQIN pointing to your container registry (Quay, Docker Hub, GitHub Container Registry, etc.). Now, press Build and wait for the build to complete.

Push image

Once the build is finished, it’s time to push the image. So, we need to configure a registry in Podman Desktop. Go to Preferences, Registries and press Add registry.

Add Registry dialog

In the Add Registry dialog, enter your registry server address, and your user credentials and click ADD REGISTRY.

Now, I go back to my image in the list of images and push it to the repository by pressing the upload icon. When you hover over the image name that starts with the name of the registry added in the settings (quay.io in this demo), a push button appears alongside the image name.

The push button that appears when you hover over the image name

Image pushed to repository via Podman Desktop

Once the image is pushed, anyone with access to the image repository can pull it. Since my image repository is public, you can easily pull it in Podman Desktop.

Pull image

So, to make sure things work, remove this image locally and pull it in Podman Desktop. Find the image in the list and remove it by pressing the delete icon. Once the image is removed, click the Pull Image button. Enter the fully qualified name in the Image to Pull section and press Pull image.

Our container image is successfully pulled

Create a container

As the last part of our Podman Desktop demo, let us spin up a container from our image and check the result. Go to Containers and press Create Container. This will open up a dialog with two choices: From Containerfile/Dockerfile, and From existing image. Press From existing image. This takes us to the list of images. There, select the image we pulled.

Create a container in Podman Desktop

Now, we select our recently-pulled image from the list and press the Play button in front of it. In the dialog that appears, I enter demo-web as Container Name and 8000 as Port Mapping, and press Start Container.

Container configuration

The container starts running and we can check out our Apache server’s default page by running the following command:

curl http://localhost:8000
It works!

You should also be able to see the running container in the Containers list, with its status changed to Running. There, you will find available operations in front of the container. For example, you can click the terminal icon to open a TTY into the container!

Display of running container demo-web in Podman Desktop with available operations for the container

What Comes Next

Podman Desktop is still young and under active development. There is a project roadmap on GitHub with a list of exciting and on-demand features including:

  • Kubernetes Integration
  • Support for Pods
  • Task Manager
  • Volumes Support
  • Support for Docker Compose
  • Kind Support

Contribute at the i18n, Release Validation, CryptoPolicy and GNOME 43 Final test weeks for Fedora Linux 37

Thursday 1st of September 2022 11:52:41 PM

There are four test days/weeks coming up. The first, Wed 31 August through Wed 07 Sept, tests Pre-Beta Release Validation. The second, Tuesday 6 Sept through Monday 12 Sept, focuses on testing i18n. The third, Monday 5 Sept, is the Crypto Policy test day. The fourth, Wed 7 Sept through Wed 14 Sept, tests GNOME 43 Final. Please come and test with us to make the upcoming Fedora 37 even better. Read more below on how to participate.

Pre-Beta Release Validation

Fedora Linux is foremost a community-powered distribution, and it runs on all sorts of off-the-shelf hardware. The QA team relies on the bugs and edge cases coming out of community-owned hardware, so testing pre-release composes is a crucial part of the release process, and we try to fix as many of them as we can! Please participate in the pre-beta release validation test week, now through 7 September, and help us catch those bugs at an early stage. A detailed post can be found here.

GNOME 43 Final test week

GNOME is the default desktop environment for Fedora Workstation and thus for many Fedora users. As part of the planned change, GNOME 43 Final will land in Fedora, which will then ship with Fedora Linux 37. To ensure that everything works fine, the Workstation Working Group and QA team will hold this test week Wed 7 Sept through Wed 14 Sept. Refer to the GNOME 43 test week wiki page for links and resources needed to participate.

i18n test week

i18n test week focuses on testing internationalization features in Fedora Linux. The test week is Tuesday 6 Sept through Monday 12 Sept.

StrongCryptoSettings3 test day

This is a new and unconventional test day. The change, however small, will have impacts across many areas, and we want our users to spot as many bugs as we possibly can. The advice is to use exotic VPNs, proprietary chat apps, different email providers, and even git workflows. These can be tested with some advice, which can be found here. This test day is Monday 5 Sept.

How do test days work?

A test day is an event where anyone can help make sure changes in Fedora Linux work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to download test materials (which include some large files) and then read and follow directions step by step.

Detailed information about both test days is available on the wiki pages mentioned above. If you’re available on or around the days of the events, please do some testing and report your results.

Come and test with us to make the upcoming Fedora 37 even better.

Help improve GNOME!

Wednesday 31st of August 2022 08:00:00 AM

gnome-info-collect is a new tool that collects anonymous data about how GNOME systems are configured. It sends that information back to GNOME servers where it can be analyzed. The goal of this tool is to help improve GNOME, by providing data that can inform design decisions, influence where resources are invested, and generally help GNOME understand its users better.

The more people who provide data, the better! So, if you would like to help us improve GNOME, please consider installing and running gnome-info-collect on your system. It only takes a second.

As of last week, gnome-info-collect is ready to be used, and we are asking all GNOME users to install and run it!

How to run the tool

Simply install the package from Fedora Copr repository by running the following commands:

$ dnf copr enable vstanek/gnome-info-collect
$ dnf install gnome-info-collect

The Copr repository also contains instructions on how to install on systems without dnf (useful for Silverblue users).

After installing, simply run

$ gnome-info-collect

from the Terminal. The tool will show you what information will be shared and won’t upload anything until you give your consent.

There are packages for other distributions as well. See the project’s README for more information.

How it works

gnome-info-collect is a simple client-server application. The client can be run on any GNOME system. There, it collects various system data including:

  • Hardware information, including the manufacturer and model
  • Various system settings, including workspace configuration, and which sharing features are enabled
  • Application information, such as which apps are installed, and which ones are favourited
  • Which GNOME shell extensions are installed and enabled

You can find the full list of collected information in the gnome-info-collect README. The tool shows the data that will be collected prior to uploading; if the user consents to the upload, the data is then securely sent to GNOME’s servers for processing.

Data privacy

The collected data is completely anonymous and will be used only for the purpose of enhancing usability and user experience of GNOME. No personal information, like usernames or email addresses, is recorded. Any potentially identifying information, such as the IP address of the sender and the precise time of receiving the data, is discarded on the server side. To prevent the same client from sending data multiple times, a salted hash of the machine ID and username is used.
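As an illustration of that deduplication scheme, here is a hedged sketch of a salted hash over the machine ID and username. The salt value, input format, and fallback machine ID below are assumptions for illustration only; the real gnome-info-collect implementation may differ.

```shell
# Hypothetical sketch of the deduplication scheme (the real salt and
# exact input format used by gnome-info-collect may differ).
salt="example-salt"                              # assumed value, not the real one
machine_id=$(cat /etc/machine-id 2>/dev/null || echo "0123456789abcdef")
user="${USER:-nobody}"
# A salted SHA-256 hash identifies the client without revealing the inputs.
client_id=$(printf '%s%s%s' "$salt" "$machine_id" "$user" | sha256sum | awk '{print $1}')
echo "$client_id"
```

Because the hash is one-way, the server can recognize a repeated upload from the same machine without being able to recover the machine ID or username from the stored value.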

All of this ensures that the collected data is confidential and untraceable.

Spread the word!

The best way to help is to take part by running gnome-info-collect and uploading your anonymous data.

You can also help by sharing this article with other GNOME users, and by encouraging others to run the collection tool themselves. The more users running gnome-info-collect, the better conclusions we can make from the collected data. This will result in an improved GNOME system that is more comfortable for its users.

So, do not hesitate to help improve GNOME. Simply install gnome-info-collect, run it and go tell all your GNOME friends about it! Thank you!

Fedora Linux editions part 2: Spins

Monday 29th of August 2022 08:00:00 AM

One of the nice things about using Linux is the wide choice of desktop environments. Fedora Linux’s official Workstation edition comes with GNOME as the default desktop environment, but you can choose another desktop environment as the default via the Fedora Spins. This article goes into a little more detail about the Fedora Linux Spins. You can find an overview of all the Fedora Linux variants in my previous article, Introduce the different Fedora Linux editions.

KDE Plasma Desktop

This Fedora Linux comes with KDE Plasma as the default desktop environment. KDE Plasma is an elegant desktop environment that is very easy to customize. Therefore, you can freely and easily change the appearance of your desktop as you wish. You can customize your favorite themes, install the widgets you want, change icons, change fonts, customize panels according to your preferences, and install various extensions from the community.

Fedora Linux KDE Plasma Desktop is installed with a variety of ready-to-use applications. You’re ready to go online with Firefox, Kontact, Telepathy, KTorrent, and KGet. LibreOffice, Okular, Dolphin, and Ark are ready to use for your office needs. Your multimedia needs will be met with several applications such as Elisa, Dragon Player, K3B, and GwenView.

Fedora KDE Plasma Desktop

More information is available at this link: https://spins.fedoraproject.org/en/kde/

XFCE Desktop

This version is perfect for those who want a balance between ease of customizing appearance and performance. XFCE itself is made to be fast and light, but still has an attractive appearance. This desktop environment is becoming popular for those with older devices.

Fedora Linux XFCE is installed with various applications that suit your daily needs. These applications are Firefox, Pidgin, Gnumeric, AbiWord, Ristretto, Parole, etc. Fedora Linux XFCE also already has a System Settings menu to make it easier for you to configure your Fedora Linux.

Fedora XFCE Desktop

More information is available at this link: https://spins.fedoraproject.org/en/xfce/

LXQT Desktop

This spin comes with a lightweight Qt desktop environment, and focuses on modern classic desktops without slowing down the system. This version of Fedora Linux includes applications based on the Qt5 toolkit and is Breeze themed. You will be ready to carry out various daily activities with built-in applications, such as QupZilla, QTerminal, FeatherPad, qpdfview, Dragon Player, etc.

Fedora LXQt Desktop

More information is available at this link: https://spins.fedoraproject.org/en/lxqt/

MATE-Compiz Desktop

Fedora Linux MATE Compiz Desktop is a combination of MATE and Compiz Fusion. MATE desktop allows this version of Fedora Linux to work optimally by prioritizing productivity and performance. At the same time, Compiz Fusion provides a beautiful 3D look with Emerald and GTK+ themes. This Fedora Linux is also equipped with various popular applications, such as Firefox, LibreOffice, Parole, FileZilla, etc.

Fedora Mate-Compiz Desktop

More information is available at this link: https://spins.fedoraproject.org/en/mate-compiz/

Cinnamon Desktop

Because of its user-friendly interface, Fedora Linux Cinnamon Desktop is perfect for those who may be new to the Linux operating system. You can easily understand how to use this version of Fedora Linux. This spin has built-in applications that are ready to use for your daily needs, such as Firefox, Pidgin, GNOME Terminal, LibreOffice, Thunderbird, Shotwell, etc. You can use Cinnamon Settings to configure your operating system.

Fedora Cinnamon Desktop

More information is available at this link: https://spins.fedoraproject.org/en/cinnamon/

LXDE Desktop

Fedora Linux LXDE Desktop has a desktop environment that performs fast but is designed to keep resource usage low. This spin is designed for low-spec hardware, such as netbooks, mobile devices, and older computers. Fedora Linux LXDE has lightweight and popular applications, such as Midori, AbiWord, Osmo, Sylpheed, etc.

Fedora LXDE Desktop

More information is available at this link: https://spins.fedoraproject.org/en/lxde/

SoaS Desktop

SoaS stands for Sugar on a Stick. Fedora Linux Sugar Desktop is a learning platform for children, so it has a very simple interface that is easy for children to understand. The word “stick” in this context refers to a thumb drive or memory “stick”. This means this OS has a compact size and can be completely installed on a thumb drive. Schoolchildren can carry their OS on a thumb drive, so they can use it easily at home, school, library, and elsewhere. Fedora Linux SoaS has a variety of interesting learning applications for children, such as Browse, Get Books, Read, Turtle Blocks, Pippy, Paint, Write, Labyrinth, Physics, and FotoToon.

Fedora SOAS Desktop

More information is available at this link: https://spins.fedoraproject.org/en/soas/

i3 Tiling WM

The i3 Tiling WM spin of Fedora Linux is a bit different from the others. This Fedora Linux spin does not use a desktop environment, but only uses a window manager. The window manager used is i3, which is a very popular tiling window manager among Linux users. Fedora i3 Spin is intended for those who focus on interacting using a keyboard rather than pointing devices, such as a mouse or touchpad. This spin of Fedora Linux is equipped with various applications, such as Firefox, NM Applet, brightlight, azote, htop, mousepad, and Thunar.

Fedora i3 Tiling WM

More information is available at this link: https://spins.fedoraproject.org/en/i3/

Conclusion

Fedora Linux provides a large selection of desktop environments through Fedora Linux Spins. You can simply choose one of the Fedora Spins, and immediately enjoy Fedora Linux with the desktop environment of your choice along with its ready-to-use built-in applications. You can find complete information about Fedora Spins at https://spins.fedoraproject.org/.

4 cool new projects to try in Copr for August 2022

Sunday 14th of August 2022 08:58:39 PM

Copr is a build system for anyone in the Fedora community. It hosts thousands of projects for various purposes and audiences. Some of them should never be installed by anyone, some are already being transitioned to the official Fedora Linux repositories, and the rest are somewhere in between. Copr gives you the opportunity to install third-party software that is not available in Fedora Linux repositories, try nightly versions of your dependencies, use patched builds of your favorite tools to support some non-standard use cases, and just experiment freely.

If you don’t know how to enable a repository or if you are concerned about whether it is safe to use Copr, please consult the project documentation.

This article takes a closer look at interesting projects that recently landed in Copr.

Ntfy

Ntfy is a simple HTTP-based notification service that allows you to send notifications to your devices using scripts from any computer. Notifications are sent with HTTP PUT/POST requests, or via the ntfy CLI, without any registration or login. For this reason, choose a hard-to-guess topic name, as this is essentially a password.
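Since the topic name is effectively a password, one way to pick a hard-to-guess name (a generic shell sketch, not something prescribed by ntfy) is to generate it randomly:

```shell
# Generate a random, hard-to-guess 16-character topic name
# from lowercase letters and digits.
topic=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 16)
echo "$topic"
```

Anyone who knows (or guesses) the topic name can publish to it and subscribe to it, so treat the generated name like a credential.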

In the case of sending notifications, it is as simple as this:

$ ntfy publish beer-lovers "Hi folks. I love beer!"
{"id":"4ZADC9KNKBse", "time":1649963662, "event":"message", "topic":"beer-lovers", "message":"Hi folks. I love beer!"}

And a listener who subscribes to this topic will receive:

$ ntfy subscribe beer-lovers
{"id":"4ZADC9KNKBse", "time":1649963662, "event":"message", "topic":"beer-lovers", "message":"Hi folks. I love beer!"}

If you wish to receive notifications on your phone, then ntfy also has a mobile app for Android so you can send notifications from your laptop to your phone.

Installation instructions

The repo currently provides ntfy for Fedora Linux 35, 36, 37, and Fedora Rawhide. To install it, use these commands:

sudo dnf copr enable cyqsimon/ntfysh
sudo dnf install ntfysh

Koi

If you use light mode during the day but want to protect your eyesight overnight and switch to dark mode, you don’t have to do it manually anymore. Koi will do it for you!

Koi provides KDE Plasma Desktop functionality to automatically switch between light and dark mode according to your preferences. Just set the time and themes.

Installation instructions

The repo currently provides Koi for Fedora Linux 35, 36, 37, and Fedora Rawhide. To install it, use these commands:

sudo dnf copr enable birkch/Koi
sudo dnf install Koi

SwayNotificationCenter

SwayNotificationCenter provides a simple and nice looking GTK GUI for your desktop notifications.

You will find some key features such as do-not-disturb mode, a panel to view previous notifications, track pad/mouse gestures, support for keyboard shortcuts, and customizable widgets. SwayNotificationCenter also provides a good way to configure and customize via JSON and CSS files.

More information on https://github.com/ErikReider/SwayNotificationCenter with screenshots at the bottom of the page.

Installation instructions

The repo currently provides SwayNotificationCenter for Fedora Linux 35, 36, 37, and Fedora Rawhide. To install it, use these commands:

sudo dnf copr enable erikreider/SwayNotificationCenter
sudo dnf install SwayNotificationCenter

Webapp Manager

Ever want to launch your favorite websites from one place? With WebApp Manager, you can save your favorite websites and run them later as if they were apps.

You can set a browser in which you want to open the website and much more. For example, with Firefox, all links are always opened within the WebApp.

Installation instructions

The repo currently provides WebApp for Fedora Linux 35, 36, 37, and Fedora Rawhide. To install it, use these commands:

sudo dnf copr enable perabyte/webapp-manager
sudo dnf install webapp-manager

Contribute at the Fedora Kernel 5.19 and GNOME 43 Beta test weeks

Friday 12th of August 2022 04:01:38 PM

There are two upcoming test weeks in the coming weeks. The first is Sunday 14 August through Sunday 21 August. It is to test Kernel 5.19. The second is Monday 15 August through Monday 22 August. It focuses on testing GNOME 43 Beta. Come and test with us to make the upcoming Fedora 37 even better. Read more below on how to participate.

Kernel test week

The kernel team is working on final integration for Linux kernel 5.19. This version was just recently released, and will arrive soon in Fedora. As a result, the Fedora kernel and QA teams have organized a test week Sunday, August 14, 2022 through Sunday, August 21, 2022. Refer to the wiki page for links to the test images you’ll need to participate.

GNOME 43 Beta test week

GNOME is the default desktop environment for Fedora Workstation and thus for many Fedora users. As part of the planned change, the GNOME 43 Beta will land in Fedora, which will then ship with Fedora 37. To ensure that everything works fine, the Workstation Working Group and QA team will hold this test week Monday 15 August through Monday 22 August. Refer to the GNOME 43 Beta test week wiki page for links and resources needed to participate.

How do test days work?

A test day is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to download test materials (which include some large files) and then read and follow directions step by step.

Detailed information about both test days is available on the wiki pages mentioned above. If you’re available on or around the days of the events, please do some testing and report your results.

Again, the two upcoming test days in the upcoming week are:

  • Kernel 5.19 testing on Sunday 14 August through Sunday 21 August
  • GNOME 43 Beta testing on Monday 15 August through Monday 22 August

Come and test with us to make the upcoming Fedora 37 even better.

Hibernation in Fedora Workstation

Wednesday 10th of August 2022 08:07:00 AM

This article walks you through the manual setup for hibernation in Fedora Linux 36 Workstation using BTRFS and is based on a gist by eloylp on github.

Goal and Rationale

Hibernation stores the current runtime state of your machine – effectively the contents of your RAM – onto disk and does a clean shutdown. Upon next boot this state is restored from disk to memory such that everything, including open programs, is how you left it.

Fedora Workstation uses ZRAM. This is a sophisticated approach to swap that uses compression inside a portion of your RAM to avoid the slower on-disk swap files. Unfortunately, this means there is no persistent space to move your RAM to upon hibernation when powering off your machine.

How it works

The technique configures systemd and dracut to store and restore the contents of your RAM in a temporary swap file on disk. The swap file is created just before and removed right after hibernation to avoid trouble with ZRAM. A persistent swap file is not recommended in conjunction with ZRAM, as it creates some confusing problems compromising your system’s stability.

A word on compatibility and expectations

Hibernation following this guide might not work flawlessly on your particular machine(s). Due to possible shortcomings of certain drivers you might experience glitches like non-working wifi or display after resuming from hibernation. In that case feel free to reach out to the comment section of the gist on github, or try the tips from the troubleshooting section at the bottom of this article.

The changes introduced in this article are linked to the systemd systemd-hibernate.service and hibernate.target units and hence won’t execute on their own nor interfere with your system if you don’t initiate a hibernation. That being said, if it does not work it still adds some small bloat which you might want to remove.

Hibernation in Fedora Workstation

The first step is to create a btrfs sub volume to contain the swap file.

$ btrfs subvolume create /swap

In order to calculate the size of your swap file use swapon to get the size of your zram device.

$ swapon
NAME       TYPE      SIZE USED PRIO
/dev/zram0 partition   8G   0B  100

In this example the machine has 16G of RAM and a 8G zram device. ZRAM stores roughly double the amount of system RAM compressed in a portion of your RAM. Let that sink in for a moment. This means that in total the memory of this machine can hold 8G * 2 + 8G of RAM which equals 24G uncompressed data. Create and configure the swapfile using the following commands.

$ touch /swap/swapfile
# Disable Copy On Write on the file
$ chattr +C /swap/swapfile
$ fallocate --length 24G /swap/swapfile
$ chmod 600 /swap/swapfile
$ mkswap /swap/swapfile
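The 24G passed to fallocate follows from the sizing rule described above. A minimal shell sketch of the calculation, using this article’s example values (16G of RAM, 8G zram device):

```shell
# Example values from this article: adjust for your machine.
ram_gb=16
zram_gb=8
# zram holds roughly double its size in compressed data; the rest of
# RAM holds uncompressed data. Total data to preserve on hibernation:
swapfile_gb=$(( zram_gb * 2 + (ram_gb - zram_gb) ))
echo "${swapfile_gb}G"   # size to pass to fallocate; prints 24G here
```

Substitute your own swapon and free figures into ram_gb and zram_gb to size the swap file for your machine.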

Modify the dracut configuration and rebuild your initramfs to include the resume module, so it can later restore the state at boot.

$ cat <<-EOF | sudo tee /etc/dracut.conf.d/resume.conf
add_dracutmodules+=" resume "
EOF
$ dracut -f

In order to configure grub to tell the kernel to resume from hibernation using the swapfile, you need the UUID and the physical offset.

Use the following command to determine the UUID of the swap file and take note of it.

$ findmnt -no UUID -T /swap/swapfile
dbb0f71f-8fe9-491e-bce7-4e0e3125ecb8

Calculate the correct offset. In order to do this you’ll unfortunately need gcc and the source of the btrfs_map_physical tool, which computes the physical offset of the swapfile on disk. Invoke gcc in the directory you placed the source in and run the tool.

$ gcc -O2 -o btrfs_map_physical btrfs_map_physical.c
$ ./btrfs_map_physical /path/to/swapfile
FILE OFFSET  EXTENT TYPE  LOGICAL SIZE  LOGICAL OFFSET  PHYSICAL SIZE  DEVID  PHYSICAL OFFSET
0            regular      4096          2927632384      268435456      1      <4009762816>
4096         prealloc     268431360     2927636480      268431360      1      4009766912
268435456    prealloc     268435456     3251634176      268435456      1      4333764608
536870912    prealloc     268435456     3520069632      268435456      1      4602200064
805306368    prealloc     268435456     3788505088      268435456      1      4870635520
1073741824   prealloc     268435456     4056940544      268435456      1      5139070976
1342177280   prealloc     268435456     4325376000      268435456      1      5407506432
1610612736   prealloc     268435456     4593811456      268435456      1      5675941888

The first value in the PHYSICAL OFFSET column is the relevant one. In the above example it is 4009762816.

Take note of the pagesize you get from getconf PAGESIZE.

Calculate the kernel resume_offset through division of physical offset by the pagesize. In this example that is 4009762816 / 4096 = 978946.
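The division can be checked with shell arithmetic. This sketch uses the example physical offset from above and a 4096-byte page size; run getconf PAGESIZE to confirm the page size on your own machine:

```shell
physical_offset=4009762816   # first PHYSICAL OFFSET value from btrfs_map_physical
pagesize=4096                # from: getconf PAGESIZE
# Integer division yields the resume_offset kernel parameter.
echo $(( physical_offset / pagesize ))   # prints 978946
```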

Update your grub configuration file and add the resume and resume_offset kernel cmdline parameters.

grubby --args="resume=UUID=dbb0f71f-8fe9-491e-bce7-4e0e3125ecb8 resume_offset=978946" --update-kernel=ALL

The created swapfile is only used in the hibernation stage of system shutdown and boot, and hence is not configured in fstab. Systemd units control this behavior, so create the two units hibernate-preparation.service and hibernate-resume.service.

$ cat <<-EOF | sudo tee /etc/systemd/system/hibernate-preparation.service
[Unit]
Description=Enable swap file and disable zram before hibernate
Before=systemd-hibernate.service

[Service]
User=root
Type=oneshot
ExecStart=/bin/bash -c "/usr/sbin/swapon /swap/swapfile && /usr/sbin/swapoff /dev/zram0"

[Install]
WantedBy=systemd-hibernate.service
EOF
$ systemctl enable hibernate-preparation.service
$ cat <<-EOF | sudo tee /etc/systemd/system/hibernate-resume.service
[Unit]
Description=Disable swap after resuming from hibernation
After=hibernate.target

[Service]
User=root
Type=oneshot
ExecStart=/usr/sbin/swapoff /swap/swapfile

[Install]
WantedBy=hibernate.target
EOF
$ systemctl enable hibernate-resume.service

Systemd does memory checks on login and hibernation. In order to avoid issues when moving the memory back and forth between swapfile and zram, disable some of them.

$ mkdir -p /etc/systemd/system/systemd-logind.service.d/
$ cat <<-EOF | sudo tee /etc/systemd/system/systemd-logind.service.d/override.conf
[Service]
Environment=SYSTEMD_BYPASS_HIBERNATION_MEMORY_CHECK=1
EOF
$ mkdir -p /etc/systemd/system/systemd-hibernate.service.d/
$ cat <<-EOF | sudo tee /etc/systemd/system/systemd-hibernate.service.d/override.conf
[Service]
Environment=SYSTEMD_BYPASS_HIBERNATION_MEMORY_CHECK=1
EOF

Reboot your machine for the changes to take effect. The following SELinux configuration won’t work if you don’t reboot first.

SELinux won’t like hibernation attempts just yet. Change that with a new policy. An easy although “brute” approach is to initiate hibernation and use the audit log of this failed attempt via audit2allow. The following command will fail, returning you to a login prompt.

systemctl hibernate

After you’ve logged in again check the audit log, compile a policy and install it. The -b option filters for audit log entries from last boot. The -M option compiles all filtered rules into a module, which is then installed using semodule -i.

$ audit2allow -b
#============= systemd_sleep_t ==============
allow systemd_sleep_t unlabeled_t:dir search;
$ cd /tmp
$ audit2allow -b -M systemd_sleep
$ semodule -i systemd_sleep.pp

Check that hibernation is working via systemctl hibernate again. After resume check that ZRAM is indeed the only active swap device.

$ swapon
NAME       TYPE      SIZE USED PRIO
/dev/zram0 partition   8G   0B  100

You now have hibernation configured.

GNOME Shell hibernation integration

You might want to add a hibernation button to the GNOME Shell “Power Off / Logout” section. Check out the extension Hibernate Status Button to do so.

Troubleshooting

A first place to troubleshoot any problems is through journalctl -b. Have a look around the end of the log, after trying to hibernate, to pin-point log entries that tell you what might be wrong.

Another source of information on errors is the Problem Reporting tool, especially for problems that are not common but specific to your hardware configuration. Have a look at it before and after attempting hibernation and see if something comes up. Follow up on any issues via Bugzilla and see if others experience similar problems.

Revert the changes

To reverse the changes made above, follow this check-list:

  • remove the swapfile
  • remove the swap subvolume
  • remove the dracut configuration and rebuild dracut
  • remove kernel cmdline args via grubby --remove-args=
  • disable and remove hibernation preparation and resume services
  • remove systemd overrides for systemd-logind.service and systemd-hibernation.service
  • remove SELinux module via semodule -r systemd_sleep

Credits and Additional Resources

This article is a community effort based primarily on the work of eloylp. As the author of this article, I’d like to be transparent that I participated in the discussion to advance the gist behind this, but many more minds contributed to make this work. Make certain to check out the discussion on github.

There are already some ansible playbooks and shell scripts to automate the process depicted in this guide. For example, check out the shell scripts by krokwen and pietryszak or the ansible playbook by jorp.

See the arch wiki for the full guide on how to calculate the swapfile offset.

Fedora and Parental Controls

Wednesday 20th of July 2022 08:57:00 AM

We all have people around us whom we hold dear. Some of them might even rely on you to keep them safe. And since the world is constantly changing, that can be a challenge. Nowhere is this more apparent than with children, and Linux has long lacked simple tools to help parents. But that is changing, and here we’ll talk about the new parental controls that Fedora Linux provides.

Users and permissions

First, it’s important to know that any Linux system has a lot of options for user, group, and permission management. Many of these advanced tools are aimed at professional users, though, and we won’t be talking about those here. In this article we’ll focus on home users.

Additionally, parental controls are not just useful for parents. You can use them when helping family members who are technically illiterate. Or perhaps you want to configure a basic workstation for simple administrative tasks. Either way, parental control can offer many security and reliability benefits.

Creating users

From the Settings panel, you can navigate to Users and from there you can select Add User… (after unlocking) to add a new user. You can give them a personal name, a username and their own icon. You can even decide if somebody else should also be an administrator.

Adding a user to your machine is as simple as going to settings, users, and clicking Add User…

You can also set a default password, or even allow a computer to automatically log in. You should help others understand digital security and the value of passwords, but for some people it might be better to just auto-login.

Admin rights

When you give somebody administrator rights, that user will have the same powers as you have on the system. They will be able to make any system change they prefer, and they can also add and remove users themselves.
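On Fedora, administrator accounts are conventionally members of the wheel group (the Settings panel manages this for you; the specific commands below are just an illustrative sketch, and "alice" is a hypothetical user). You can check from a terminal:

```shell
# List the groups of the current user; "wheel" indicates admin rights on Fedora.
id -nG

# Check whether a specific (hypothetical) user "alice" is an administrator:
if id -nG alice 2>/dev/null | grep -qw wheel; then
    echo "alice is an admin"
else
    echo "alice is not an admin (or does not exist)"
fi
```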

Users who do not have admin rights will not be able to make fundamental changes to the computer. They can still use all applications that are already on the system, and they can even download applications from the internet to their home folder. Still, they are ultimately blocked from doing anything that could damage the system.

Accessing the user-directories of others. Only administrator users will be able to do this.

Don’t forget that as an administrator, you can always reset a password. You can also enter another user’s home directory in case you have to. As with all ‘sudo’ rights, you should be careful and you should be considerate of others’ privacy.

Application control

Once one or multiple users are created, you can choose to tweak and control what applications somebody can use. This is done from within Settings > Users by selecting the new user then selecting Parental Controls and then Restrict Applications. Other options are available there, as well.

Changing Parental Controls for a single user

Parental controls come with a big caveat: If you want a simple home-user solution, you MUST use Flatpaks.

The problem is as follows. The existing Linux application landscape is quite complex, and it would be almost impossible to introduce a new user-friendly application-control system this late into its life cycle. Thus, the second best solution is to ensure that the next generation of packaging has such functionality from the start.

To use Flatpaks, you can use Fedora’s repository or the Flathub repository. If you want to know all the fine details about those projects, then don’t forget to read this recent comparison.

Compromise and limitations

No article would be complete without mentioning the inherent limitations of the parental controls. Besides all the obvious limits of computers not knowing right from wrong, there are also some technical limits to parental controls.

Parental Control’s limits

The security that Parental Controls provides will only work as long as Fedora Linux is running in working order. One could easily bypass all controls by flashing Fedora onto a USB stick and booting from a clean, root-powered installation image. At this point, human supervision is still superior to the machine’s rules.

Adding to that, there are the obvious issues of browsers, storefronts like Steam, and other online applications. You can’t block just parts of these applications. Minecraft is a great game for children, but it also allows direct communication with other people. Thus, you’ll have to constantly juggle permissions. Here too, it is better to focus on the human element instead of relying too much on the tools.

Finally, don’t forget about protecting the privacy and well-being of others online. Blocking bad actors with Ublock Origin and/or a DNS based blocker will also help a lot.

Legacy applications

As mentioned before, Fedora and Parental Controls only work with Flatpaks. Every application that is already on the system can be started by users who otherwise don’t have the permissions.

As a rule of thumb: if you want to share a computer with vulnerable family members, don’t install any inappropriate software from the RPM repositories. Instead, consider using a Flatpak.

Starting the system-wide installation of Firefox from the terminal. The Flatpak version of Firefox, though, will not start.

Summary

There is much that you can do to help those who are less experienced with computers. By simply giving these users their own account and using Flatpaks, you can make their lives a lot easier. Age restrictions can even offer additional benefits. But it’s not all perfect, and good communication and supervision will still be important.

The Parental Controls will improve over time. They have been given more priority in the past few years and there are additional plans; time-tracking, for example, is planned. As the migration to Flatpaks continues, you can expect more software to respect age restrictions in the future.

Additional US and UK resources

So, let's start a small collaboration here. We've all been younger, so how did you escape your parents' scrutiny? And for those who are taking care of others… how are you helping them? Let's see what we can learn from each other.

Community container images available for applications development

Monday 18th of July 2022 08:00:00 AM

This article introduces community container images: where users can pull them from, and how to use them. The three groups of containers available to community users are discussed: Fedora, CentOS, and CentOS Stream.

What are the differences between the containers?

Fedora containers are based on the latest stable Fedora content, and CentOS-7 containers are based on CentOS-7 and related SCLo SIG components. Finally, CentOS Stream containers are based on either CentOS Stream 8 or CentOS Stream 9.

Each container, e.g. s2i-php-container or s2i-perl-container, contains the same packages that are available for a given operating system. This means that, from a functionality point of view, these example containers provide the PHP interpreter or the Perl interpreter, respectively.

The only differences are in the versions available for each distribution. For example:

Fedora PHP containers are available in these versions:

CentOS-7 PHP containers are available in these versions:

CentOS Stream 9 PHP containers are available in these versions:

CentOS Stream 8 is not mentioned here for the PHP use case since users can pull it directly from the Red Hat Container Catalog registry as a UBI image. Containers that are not UBI-based have CentOS Stream 8 repositories in the quay.io/sclorg namespace with the repository suffix “-c8s”.

Fedora container images moved recently

The Fedora container images have recently moved to the quay.io/fedora registry organization. All of them use Fedora:35, and later Fedora:36, as a base image. The CentOS-7 containers are stored in the quay.io/centos7 registry organization. All of them use CentOS-7 as a base image.

CentOS Stream container images

The CentOS Stream containers are stored in the quay.io/sclorg registry organization.

The base image used for our CentOS Stream 8 containers is CentOS Stream 8 with the tag “stream8”.

The base image used for our CentOS Stream 9 containers is CentOS Stream 9 with the tag “stream9”.

In this registry organization, each repository name contains a suffix: “c8s” for CentOS Stream 8 or “c9s” for CentOS Stream 9.

See container PHP-74 for CentOS Stream 9.
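As a sketch of that naming convention, a small shell helper can compose the full image reference from a container name and a stream suffix. The helper itself is illustrative and not part of the sclorg tooling:

```shell
# Compose an sclorg image reference from a base name (e.g. "php-74")
# and a CentOS Stream suffix ("c8s" or "c9s"), per the convention above.
sclorg_image() {
  echo "quay.io/sclorg/$1-$2"
}

sclorg_image php-74 c9s   # -> quay.io/sclorg/php-74-c9s
```

You could then pull the image with podman pull "$(sclorg_image php-74 c9s)".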

Frequency of container image updates and testing

The community-based containers are updated in two ways.

First, when a pull request in the container sources that live under the github.com/sclorg organization is merged, the corresponding versions in the GitHub repository are built and pushed into the proper repository.

Second, an update process is also implemented by GitHub Actions set up in each of our GitHub repositories. The base images, like “s2i-core” and “s2i-base”, are built each Tuesday at 1:00 pm. The rest of the containers are built each Wednesday at 1:00 pm.

This means every package update or change lands in the container image within a few days, and no later than a week.

Each container that is not beyond its end of life is tested by our nightly builds. If we detect an error in one of our containers, we attempt to fix it, but no guarantees are provided.

What container shall I use?

In the end, what containers are we providing? That’s a great question. All containers live in the GitHub organization https://github.com/sclorg

The list of containers with their upstreams is summarized here:

How do I use the container image I picked?

All container images are tuned to be fully functional in OpenShift (or OKD, and even Kubernetes itself) without any trouble. Some containers support the source-to-image build strategy while some are expected to be used as daemons (like databases, for instance). For specific steps, please navigate to the GitHub page for the respective container image by following one of the links above.

Finally, some real examples

Let’s show how to use PHP container images across all platforms that we support.

First of all, clone the container GitHub repository using this command:

$ git clone https://github.com/sclorg/s2i-php-container

Switch to the following directory created by the cloning step:

$ cd s2i-php-container/examples/from-dockerfile

Fedora example

Start by pulling the Fedora PHP-80 image with this command:

$ podman pull quay.io/fedora/php-80

Modify “Dockerfile” so it refers to the Fedora php-80 image. “Dockerfile” then looks like:

FROM quay.io/fedora/php-80
USER 0
# Add application sources
ADD app-src .
RUN chown -R 1001:0 .
USER 1001
# Install the dependencies
RUN TEMPFILE=$(mktemp) && \
    curl -o "$TEMPFILE" "https://getcomposer.org/installer" && \
    php <"$TEMPFILE" && \
    ./composer.phar install --no-interaction --no-ansi --optimize-autoloader
# Run script uses standard ways to configure the PHP application
# and execs httpd -D FOREGROUND at the end
# See more in <version>/s2i/bin/run in this repository.
# Shortly what the run script does: The httpd daemon and php need to be
# configured, so this script prepares the configuration based on the container
# parameters (e.g. available memory) and puts the configuration files into
# the appropriate places.
# This can obviously be done differently, and in that case, the final CMD
# should be set to "CMD httpd -D FOREGROUND" instead.
CMD /usr/libexec/s2i/run

Check if the application works properly

Build it by using this command:

$ podman build -f Dockerfile -t cakephp-app-80

Now run the application using this command:

$ podman run -ti --rm -p 8080:8080 cakephp-app-80

To check the PHP version use these commands:

$ podman run -it --rm cakephp-app-80 bash
$ php --version

To check if everything works properly use this command:

$ curl -s -w '%{http_code}' localhost:8080

This should return HTTP code 200. If you would like to see a web page, enter localhost:8080 in your browser.
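If the application needs a moment to start, the curl check above can be wrapped in a small retry loop. This is a sketch rather than production code; the probe command is parameterized (via the hypothetical PROBE variable) so the retry logic can be exercised without a running server:

```shell
# Poll an HTTP endpoint until it reports 200, with a bounded number of tries.
# PROBE may name an alternative probe command; by default curl is used.
wait_for_200() {
  url=$1
  tries=${2:-10}
  probe=${PROBE:-probe_with_curl}
  i=0
  while [ "$i" -lt "$tries" ]; do
    if [ "$($probe "$url")" = "200" ]; then
      echo "up"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "down"
  return 1
}

# Default probe: discard the body and print only the HTTP status code.
probe_with_curl() {
  curl -s -o /dev/null -w '%{http_code}' "$1"
}
```

For example, wait_for_200 localhost:8080 30 waits up to roughly 30 seconds for the container to answer before giving up.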

CentOS 7 example

Start by pulling the CentOS-7 PHP-73 image using this command:

$ podman pull quay.io/centos7/php-73-centos7

Modify “Dockerfile” so it refers to the CentOS php-73 image.

“Dockerfile” then looks like this:

FROM quay.io/centos7/php-73-centos7
USER 0
# Add application sources
ADD app-src .
RUN chown -R 1001:0 .
USER 1001
# Install the dependencies
RUN TEMPFILE=$(mktemp) && \
    curl -o "$TEMPFILE" "https://getcomposer.org/installer" && \
    php <"$TEMPFILE" && \
    ./composer.phar install --no-interaction --no-ansi --optimize-autoloader
# Run script uses standard ways to configure the PHP application
# and execs httpd -D FOREGROUND at the end
# See more in <version>/s2i/bin/run in this repository.
# Shortly what the run script does: The httpd daemon and php need to be
# configured, so this script prepares the configuration based on the container
# parameters (e.g. available memory) and puts the configuration files into
# the appropriate places.
# This can obviously be done differently, and in that case, the final CMD
# should be set to "CMD httpd -D FOREGROUND" instead.
CMD /usr/libexec/s2i/run

Check if the application works properly

Build it using this command:

$ podman build -f Dockerfile -t cakephp-app-73

Now run the application using this command:

$ podman run -ti --rm -p 8080:8080 cakephp-app-73

To check the PHP version use these commands:

$ podman run -it --rm cakephp-app-73 bash
$ php --version

To check if everything works properly use this command:

$ curl -s -w '%{http_code}' localhost:8080

which should return HTTP code 200. If you would like to see a web page, enter localhost:8080 in your browser.

RHEL 9 UBI example

Start by pulling the RHEL9 UBI-based PHP-80 image using the command:

$ podman pull registry.access.redhat.com/ubi9/php-80

Modify “Dockerfile” so it refers to the RHEL9 ubi9 php-80 image.

“Dockerfile” then looks like:

FROM registry.access.redhat.com/ubi9/php-80
USER 0
# Add application sources
ADD app-src .
RUN chown -R 1001:0 .
USER 1001
# Install the dependencies
RUN TEMPFILE=$(mktemp) && \
    curl -o "$TEMPFILE" "https://getcomposer.org/installer" && \
    php <"$TEMPFILE" && \
    ./composer.phar install --no-interaction --no-ansi --optimize-autoloader
# Run script uses standard ways to configure the PHP application
# and execs httpd -D FOREGROUND at the end
# See more in <version>/s2i/bin/run in this repository.
# Shortly what the run script does: The httpd daemon and php need to be
# configured, so this script prepares the configuration based on the container
# parameters (e.g. available memory) and puts the configuration files into
# the appropriate places.
# This can obviously be done differently, and in that case, the final CMD
# should be set to "CMD httpd -D FOREGROUND" instead.
CMD /usr/libexec/s2i/run

Check if the application works properly

Build it using this command:

$ podman build -f Dockerfile -t cakephp-app-80-ubi9

Now run the application using this command:

$ podman run -ti --rm -p 8080:8080 cakephp-app-80-ubi9

To check the PHP version use these commands:

$ podman run -it --rm cakephp-app-80-ubi9 bash
$ php --version

To check if everything works properly use this command:

$ curl -s -w '%{http_code}' localhost:8080

which should return HTTP code 200. If you would like to see a web page, enter localhost:8080 in your browser.

What to do in the case of a bug or enhancement

Just file a bug (known as an “issue” in GitHub) or even a pull request with a fix to one of the GitHub repositories mentioned in the previous section.

Your Personal Voice Assistant on Fedora Linux

Friday 1st of July 2022 08:00:00 AM

It’s 7 PM. I sit down at my Fedora Linux PC and start a MMORPG. We want to brawl with Red Alliance over some systems they attacked; the usual stuff on a Friday evening in EVE. While we are waiting on a Titan to bridge us to the fighting area, a tune comes to my mind. “Carola, I wanna hear hurricanes.” My HD LED starts to blink for a second and I hear Carola saying “I found one match” and the music starts.

What sounded like Sci-Fi twenty years ago is now a reality for many PC users. For Linux users, this is now possible by installing “Carola” as your Personal Voice Assistant (PVA)[1].

Carola

The first thing people often ask is, “Why did you name it Carola?”

DIY Embroidery with Inkscape and Ink/Stitch

Wednesday 29th of June 2022 08:00:00 AM
Introduction

Embroidered shirts are great custom gifts and can also be a great way to show your love for open source. This tutorial will demonstrate how to design your own custom embroidered polo shirt using Inkscape and Ink/Stitch. Polo shirts are often used for embroidery because they do not tear as easily as t-shirts when pierced by embroidery needles, though with care t-shirts can also be embroidered. This tutorial is a follow-on article to Make More with Inkscape and Ink/Stitch and provides complete steps to create your design.

Logo on Front of Shirt

Pictures with only a few colors work well for embroidery. Let us use a public domain black and white SVG image of Tux created by Ryan Lerch and Garret LeSage.

Black and white image of Tux

Download this public domain image, tux-bw.svg, to your computer, and import it into your document as an editable SVG image using File>Import...

Image of Tux with text to be embroidered

Use a Transparent Background

It is helpful to have a checkerboard background to distinguish background and foreground colors. Click File>Document Properties… and then check the box to enable a checkerboard background.

Dialog box to enable checkerboard document background

Then close the document properties dialog box. You can now distinguish between colors used on Tux and the background color.

Tux can be distinguished from the document background

Use a Single Color For Tux

Type s to use the Select and Transform objects tool, and click on the image of Tux to select it. Then click on Object>Fill and Stroke, in the menu. Type n to use the Edit paths by Nodes tool and click on a white portion of Tux. Within the Fill and Stroke pane change the fill to No paint to make this portion of Tux transparent.

Tux in one color

This leaves the black area to be embroidered.

Enable Embroidering of Tux

Now convert the image for embroidery. Type s to use the Select and Transform objects tool and click on the image of Tux to select it again. Choose Extensions>Ink/Stitch>Fill Tools>Break Apart Fill Objects … In the resulting pop up, choose Complex, click Apply, and wait for the operation to complete.

Dialog to Break Apart Fill Objects

For further explanation of this operation, see the Ink/Stitch documentation.

Resize Document

Now resize the area to be embroidered. A good size is about 2.75 inches by 2.75 inches. Press s to use the Select and Transform objects tool, and select Tux, hold down the shift key, and also select any text area. Then choose Object>Transform …, click on Scale in the dialogue box, change the measurements to inches, check the Scale proportionally box and choose a width of 2.75 inches, and click Apply.

Resized drawing

Before saving the design, reduce the document area to just fit the image. Press s to use the Select and Transform objects tool, then select Tux.

Objects selected to determine resize area

Choose File>Document Properties… then choose Resize to content: or press Ctrl+Shift+R

Dialog to resize page

The document is resized.

Resized document

Save Your Design

You now need to convert your file to an embroidery file. A very portable format is DST (Tajima Embroidery Format), which unfortunately does not carry color information, so you will need to indicate the colors for the embroidery separately. First save your design as an Inkscape SVG file so that you retain a format you can easily edit again. Choose File>Save As, then select the Inkscape SVG format, enter a name for your file, for example AnotherAwesomeFedoraLinuxUserFront.svg, and save your design. Then choose File>Save As, select the DST file format, and save your design. Generating this file requires calculating stitch locations, which may take a few seconds. You can preview the DST file in Inkscape, but another very useful tool is vpype-embroidery.

Install vpype-embroidery on the command line using a Python virtual environment via the following commands:

virtualenv test-vpype
source test-vpype/bin/activate
pip install matplotlib
pip install vpype-embroidery
pip install vpype[all]

Preview your DST file (in this case named AnotherAwesomeFedoraLinuxUserFront.dst; replace this with the filename you chose if it differs), using this command:

vpype eread AnotherAwesomeFedoraLinuxUserFront.dst show

Preview of design created by vpype-embroidery

Check the dimensions of your design. If you need to resize it, resize the SVG design file before exporting it as a DST file. Resizing the DST file is not recommended since it contains stitch placement information; regenerate this placement information from the resized SVG file to obtain a high-quality embroidered result.

Text on the Back of the Shirt

Now create a message to put on the back of your polo shirt. Create a new Inkscape document using File>New. Then choose Extensions>Ink/Stitch>Lettering.

Choose a font, for example Geneva Simple Sans created by Daniel K. Schneider in Geneva. If you want to resize your text, do so at this point using the scale section of the dialog box since resizing it once it is in Inkscape will distort the resulting embroidered pattern. Add your text,

Another Awesome Fedora Linux User

Lettering creation dialog box

A preview will appear; click on Quit.

Preview image of text to be embroidered

Then click on Apply and Quit in the lettering creation dialog box. Your text should appear in your Inkscape document.

Resulting text in Inkscape document

Create a checkered background and resize the document to content by opening up the document properties dialog box File>Document Properties…

Document properties dialog box

Your document should now be a little larger than your text.

Text in resized document

Clean Up Stitches

Many commercial embroidery machines support jump instructions which can save human time in finishing the embroidered garment. Examine the text preview image. A single continuous thread sews all the letters. Stitches joining the letters are typically removed. These stitches can either be cut by hand after the embroidery is done, or they can be cut by the embroidery machine if it supports jump instructions. Ink/Stitch can add these jump instructions.

Add jump instructions by selecting View>Zoom>Zoom Page to enlarge the view of the drawing. Press s to choose the Select and transform objects tool. Choose Extensions>Ink/Stitch>Commands>Attach Commands to Selected Objects. A dialog box should appear, check just the Trim thread after sewing this object option.

Attach commands dialog

Then click in the drawing area and select the first letter of the text

Select first letter of the text

Then click Apply, and some cut symbols should appear above the letter.

Scissor symbols above first letter

Repeat this process for all letters.

Separately embroidered letters

Now save your design, as before, in both SVG and DST formats. Check the likely quality of the embroidered text by previewing your DST file (in this case named AnotherAwesomeFedoraLinuxUserBack.dst; replace this with the filename you chose), using

vpype eread AnotherAwesomeFedoraLinuxUserBack.dst show

Preview of text to be embroidered created by vpype-embroidery

Check the dimensions of your design. If you need to resize it, resize the SVG design file before exporting it as a DST file.

Create a Mockup

To show the approximate placement of your design on the polo shirt create a mockup. You can then send this to an embroidery company with your DST file. The Fedora Design Team has a wiki page with examples of mockups. An example mockup made using Kolourpaint is below.

Mockup image of polo shirt with design

You can also use an appropriately licensed drawing of a polo shirt, for example from Wikimedia Commons.

Example Shirt

Pictures of a finished embroidered polo shirt are below.

Front of embroidered shirt
Back of embroidered shirt
Closeup of embroidered Tux
Closeup of embroidered text

Further Information

A three-color image of Tux is also available, but single colors give the easiest path to good embroidered results. This shaded, multiple-color image requires adaptation before it can be used for embroidery. Additional tutorial information is available on the Ink/Stitch website.

Some companies that can do embroidery given a DST file include:

Search the internet for machine embroidery services close to you or a hackerspace with an embroidery machine you can use.

This article has benefited from many helpful suggestions from Michael Njuguna of Marvel Ark and Brian Lee of Embroidery Your Way.

Accessibility in Fedora Workstation

Monday 27th of June 2022 08:00:00 AM

The first concerted effort to support accessibility under Linux was undertaken by Sun Microsystems when they decided to use GNOME for Solaris. Sun put together a team focused on building the pieces to make GNOME 2 fully accessible and worked with hardware makers to make sure things like Braille devices worked well. I even heard claims that GNOME and Linux had the best accessibility of any operating system for a while due to this effort. As Sun started struggling and was acquired by Oracle, this accessibility effort eventually trailed off, with the community trying to pick up the slack afterwards. Engineers from Igalia, especially, were quite active for a while in trying to keep the accessibility support working well.

But over the years we definitely lost a bit of focus on this, and we know that various parts of GNOME 3, for instance, aren't great in terms of accessibility. At Red Hat we have focused a lot over the last few years on being mindful about diversity and inclusion when hiring, trying to ensure that we don't accidentally pre-select against underrepresented groups based on, for instance, gender or ethnicity. But one area we realized we hadn't given as much focus recently was the technologies that allow people with various disabilities to make use of our software. Thus I am very happy to announce that Red Hat has just hired Lukas Tyrychtr, a blind software engineer, to lead our effort in making sure Red Hat Enterprise Linux and Fedora Workstation have excellent accessibility support!

Anyone who has ever worked for a large company knows that getting funding for new initiatives is often hard and can take a lot of time, but I want to highlight how I was extremely positively surprised at how quick and easy it was to get support for hiring Lukas to work on accessibility. When Jiri Eischmann and I sent the request to my manager, Stef Walter, he agreed to champion it the same day, and when we then sent it up to Mike McGrath, who is the Vice President of Linux Engineering, he immediately responded that he would bring it to Tim Cramer, our Senior Vice President of Software Engineering. Within a few days we had the go-ahead to hire Lukas. The fact that everyone just instantly agreed that accessibility is important and something we as a company should do made me incredibly proud to be a Red Hatter.

What we hope to get from this is not only a better experience for our users, but also to allow even more talented engineers like Lukas to work on Linux and open source software at Red Hat. I thought it would be a good idea here to do a quick interview with Lukas Tyrychtr about the state of accessibility under Linux and what his focus will be.

Christian: Hi Lukas, first of all welcome as a full time engineer to the team! Can you tell us a little about yourself?

Lukas: Hi, Christian. For sure. I am a completely blind person who can see some light, but that’s basically it. I started to be interested in computers around 2009 or so, around my 15th or 16th birthday. First, because of circumstances, I started tinkering with Windows, but Linux came shortly after, mainly because of some pretty good friends. Then, after four years the university came and the Linux knowledge paid off, because going through all the theoretical and practical Linux courses there was pretty straightforward (yes, there was no GUI involved, so it was pretty okay, including some custom kernel configuration tinkering). During that time, I was contacted by Red Hat associates whether I’d be willing to help with some accessibility related presentation at our faculty, and that’s how the collaboration began. And, yes, the hire is its current end, but that’s actually, I hope, only the beginning of a long and productive journey.

Christian: So as a blind person you have first hand experience with the state of accessibility support under Linux. What can you tell us about what works and what doesn’t work?

Lukas: Generally, things are in pretty good shape. Braille support on text-only consoles basically just always works (except for some SELinux related issues which cropped up). Having speech there is somewhat more challenging, the needed kernel module (Speakup for the curious among the readers) is not included by all distributions, unfortunately it is not included by Fedora, for example, but Arch Linux has it. When we look at the desktop state of affairs, there is basically only a single screen reader (an application which reads the screen content), called Orca, which might not be the best position in terms of competition, but on the other hand, stealing Orca developers would not be good either. Generally, the desktop is usable, at least with GTK, Qt and major web browsers and all recent Electron based applications. Yes, accessibility support receives much less testing than I would like, so for example, a segmentation fault with a running screen reader can still unfortunately slip through a GTK release. But, generally, the foundation works well enough. Having more and naturally sounding voices for speech synthesis might help attract more blind users, but convincing all the players is no easy work. And then there’s the issue of developer awareness. Yes, everything is in some guidelines like the GNOME ones, however I saw much more often than I’d like to for example a button without any accessibility labels, so I’d like to help all the developers to fix their apps so accessibility regressions don’t get to the users, but this will have to improve slowly, I guess.

Christian: So you mention Orca, are there other applications being widely used providing accessibility?

Lukas: Honestly, only a few. There’s Speakup – a kernel module which can read text consoles using speech synthesis, e.g. a screen reader for these, however without something like Espeakup (an Espeak to Speakup bridge) the thing is basically useless, as it by default supports hardware synthesizers, however this piece of hardware is basically a thing of the past, e.g. I have never seen one myself. Then, there’s BRLTTY. This piece of software provides braille output for screen consoles and an API for applications which want to output braille, so the drivers can be implemented only once. And that’s basically it, except for some efforts to create an Orca alternative in Rust, but that’s a really long way off. Of course, utilities for other accessibility needs exist as well, but I don’t know much about these.

Christian: What is your current focus for things you want to work on both yourself and with the larger team to address?

Lukas: For now, my focus is to go through the applications which were ported to GTK 4 as a part of the GNOME development cycle and ensure that they work well. It includes adding a lot of missing labels, but in some cases, it will involve bigger changes, for example, GNOME Calendar seems to need much more work. During all that, educating developers should not be forgotten either. With these things out of the way, making sure that no regressions slip to the applications should be addressed by extending the quality assurance and automated continuous integration checks, but that’s a more distant goal.

Christian: Thank you so much for talking with us Lukas, if there are other people interested in helping out with accessibility in Fedora Workstation what is the best place to reach you?

Lukas: For now, the easiest way to reach me is by email at ltyrycht@redhat.com. I’d be happy to talk to anyone wanting to help with making Workstation great for accessibility.

Fedora Job Opening: Community Action and Impact Coordinator (FCAIC)

Thursday 23rd of June 2022 09:15:00 PM

It is bittersweet to announce that I have decided to move on from my role as the Fedora Community Action and Impact Coordinator (FCAIC). For me, this role has been full of growth, unexpected challenges, and so much joy. It has been a privilege to help guide our wonderful community through challenges of the last three years. I’m excited to see what the next FCAIC can do for Fedora. If you’re interested in applying, see the FCAIC job posting on Red Hat Jobs and read more about the role below. 

Adapting to Uncertain Times

When I applied back in 2019, a big part of the job description was to travel the globe, connecting with and supporting Fedora communities worldwide. As we all know, that wasn’t possible with the onset of COVID-19 and everything that comes with a pandemic. 

Instead, I learned how to create virtual experiences for Fedora, connect with people solely in a virtual environment, and support contributors from afar. Virtual events have been a HUGE success for Fedora. The community has shown up for those events in such a wonderful way. We have almost tripled our participation in our virtual events since the first Release Party in 2020. We have more than doubled the number of respondents to the Annual Contributor Survey over last year’s turnout. I am proud of the work I have accomplished and even more so how much the community has grown and adapted to a very challenging couple of years.

What’s next for me

As some of you may know, I picked up the Code of Conduct (CoC) work that my predecessor Brian Exelbierd (Bex) started for Fedora. After the Fedora Council approved the new CoC, I then got started on additional pieces of related work: Supplemental Documentation and Moderation Guidelines. I am also working on expanding the small Code of Conduct Committee (CoCC) to include more community members. As a part of the current CoCC, I have helped to deal with the majority of the incidents throughout my time as FCAIC.

Because of my experience with all this CoC work, I will be moving into a new role inside Red Hat’s OSPO: Code of Conduct Specialist. I will be assisting other Community Architects (like the FCAIC role) to help roll out CoCs and governance around them, as well as collaborating with other communities to develop a Community of Practice around this work. I am excited and determined to take on this new challenge and very proud to be a part of an organization that values work that prioritizes safety and inclusion.

What’s next for Fedora

This is an amazing opportunity for the Fedora community to grow in new and exciting ways. Every FCAIC brings their own approach to this role as well as their own ideas, strengths, and energy. I will be working with Matthew Miller, Ben Cotton, and Red Hat to help hire and onboard the new Fedora Community Action and Impact Coordinator. I will continue as FCAIC until we hire someone new, and will help transition them into the role. Additionally, I will offer support, advice, and guidance as others who have moved on have done for me. I am eager to see who comes next and how I can help them become a success. And, as I have for years prior to my tenure as FCAIC, I will continue to participate in the community, albeit in different ways. 

This means we are looking for a new FCAIC! Do you love Fedora? Do you want to help support and grow the community full time? This is the core of what the FCAIC does. The job description has a list of the primary job responsibilities and required skills, but that is just a taste of what is required and what it is to support the Fedora community full time. Day-to-day work includes working with the Mindshare Committee, managing the Fedora budget, and being a part of many other teams and in many places. You should be ready and excited to write about Fedora’s achievements and policies, as well as generate strategies to help the community succeed. And, of course, there is event planning and support (Flock, Nest, Hatch, Release Parties, etc.). It can be tough work, but it is a lot of fun and wonderfully rewarding to help Fedora thrive.

How to apply

Do you enjoy working with people all over the world, with a variety of skills and interests? Are you good at setting long term goals and seeing them through to completion? Can you set priorities, follow through, and know when to say “no” in order to focus on the most important tasks for success? Are you excited about building not only a successful Linux distribution, but also a healthy project? Is Fedora’s mission deeply important to you? If you said “yes” to these questions, you might be a great candidate for the FCAIC role. If you think you’re a great fit, please apply online, or contact Marie Nordin, or Jason Brooks.

Using Linux System Roles to implement Clevis and Tang for automated LUKS volume unlocking

Wednesday 22nd of June 2022 08:00:00 AM

One of the key aspects of system security is encrypting storage at rest. Without encrypted storage, any time a storage device leaves your presence it can be at risk. The most obvious scenario where this can happen is if a storage device (either just the storage device or the entire system, server, or laptop) is lost or stolen.

However, there are other scenarios that are a concern as well: perhaps you have a storage device fail and it is replaced under warranty; many times the vendor will ask you to return the original device. If the device was encrypted, returning it to the hardware vendor is much less of a concern.

Another concern: any time your storage device is out of sight, there is a risk that the data is copied or cloned off of it without you even being aware. Again, if the device is encrypted, this is much less of a concern.

Fedora (and other Linux distributions) include the Linux Unified Key Setup (LUKS) functionality to support disk encryption. LUKS is easy to use, and is even integrated as an option in the Fedora Anaconda installer.

However, there is one challenge that frequently prevents people from implementing LUKS on a large scale, especially for the root filesystem: every time you reboot the host, you generally have to manually access the console and type in the LUKS passphrase so the system can boot up.

If you are running Fedora on a single laptop, this might not be a problem; after all, you are probably sitting in front of your laptop any time you reboot it. However, if you have a large number of Fedora instances, this quickly becomes impractical to deal with.

If you have hundreds of systems, it is impractical to manually type the LUKS passphrase on each system on every reboot

You might be managing Fedora systems that are at remote locations, and you might not even have good or reliable ways to access a console on them. In this case, rebooting the hosts could result in them not coming up until you or someone else travels to their location to type in the LUKS passphrase.

This article will cover how to implement a solution that enables automated LUKS volume unlocking (and the implementation itself will be done with automation as well!).

Overview of Clevis and Tang

Clevis and Tang are an innovative solution to the challenge of booting systems with encrypted storage without manual user intervention on every boot. At a high level, Clevis, which is installed on the client systems, can unlock LUKS volumes without user intervention as long as the client system has network access to a configurable number of Tang servers.

The basic premise is that the Tang server(s) sit on an internal/private or otherwise secured network. If the storage devices are lost, stolen, or otherwise removed from the environment, they would no longer have network access to the Tang server(s), and thus would no longer unlock automatically at boot.

Tang is stateless and doesn’t require authentication or even TLS, which means it is very lightweight and easy to configure, and can run from a container. In this article, I’m only setting up a single Tang server; however, it is also possible to have multiple Tang servers in an environment, and to configure the number of Tang servers the Clevis clients must connect to in order to unlock the encrypted volume. For example, you could have three Tang servers, and require the Clevis clients to be able to connect to at least two of the three Tang servers.

For more information on how Tang and Clevis work, refer to the GitHub pages: Clevis and Tang, or for an overview of the inner workings of Tang and Clevis, refer to the Securing Automated Decryption New Cryptography and Techniques FOSDEM talk.
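To make the “at least two of three” idea concrete: under the hood, a Clevis binding is described by a small JSON pin configuration, and a threshold policy uses the Shamir Secret Sharing (sss) pin. The following is an illustrative sketch with example hostnames (the nbde_client role used later in this article generates the binding for you):

```json
{
  "t": 2,
  "pins": {
    "tang": [
      {"url": "http://tang1.example.com"},
      {"url": "http://tang2.example.com"},
      {"url": "http://tang3.example.com"}
    ]
  }
}
```

Bound manually, a configuration like this would be passed to clevis luks bind -d <device> sss '<json>', requiring any two of the three Tang servers to be reachable at boot.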

Overview of Linux System Roles

Linux System Roles is a set of Ansible Roles/Collections that can help automate the configuration and management of many aspects of Fedora, CentOS Stream, RHEL, and RHEL derivatives. Linux System Roles is packaged in Fedora as an RPM (linux-system-roles) and is also available on Ansible Galaxy (as both roles and as a collection). For more information on Linux System Roles, and to see a list of included roles, refer to the Linux System Roles project page.

Included in the list of Linux System Roles are the nbde_client, nbde_server, and firewall roles that will be used in this article. The nbde_client and nbde_server roles are focused on automating the implementation of Clevis and Tang, respectively. The “nbde” in the role names stands for network bound disk encryption, which is another term to refer to using Clevis and Tang for automated unlocking of LUKS encrypted volumes. The firewall role can automate the implementation of firewall settings, and will be used to open a port in the firewall on the Tang server.

Demo environment overview

In my environment, I have a Raspberry Pi, running Fedora 36 that I will install Linux System Roles on and use as my Ansible control node. In addition, I’ll use this same Raspberry Pi as my Tang server. This device is configured with the pi.example.com hostname.

In addition, I have four other systems in my environment: two Fedora 36 systems, and two CentOS Stream 9 systems, named fedora-server1.example.com, fedora-server2.example.com, c9s-server1.example.com, and c9s-server2.example.com. Each of these four systems has a LUKS encrypted root filesystem and currently the LUKS passphrase must be manually typed in each time the systems boot up.

I’ll use the nbde_server and firewall roles to install and configure Tang on my pi.example.com system, and use the nbde_client role to install and configure Clevis on my four other systems, enabling them to automatically unlock their encrypted root filesystem if they can connect to the pi.example.com Tang system.

Installing Linux System Roles and Ansible on the Raspberry Pi

I’ll start by installing the linux-system-roles package on the pi.example.com host, which will act as my Ansible control node. This will also install ansible-core and several other packages as dependencies. These packages do not need to be installed on the other four systems in my environment (which are referred to as managed nodes).

$ sudo dnf install linux-system-roles

SSH keys and sudo configuration need to be configured so that the control node host can connect to each of the managed nodes in the environment and escalate to root privileges.
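As a minimal sketch of that prerequisite (the key path is an example, and the ssh-copy-id step is shown commented out because it is interactive the first time):

```shell
# Generate a dedicated key pair on the control node (example path, no passphrase)
mkdir -p "$HOME/.ssh"
ssh-keygen -q -t ed25519 -N '' -f "$HOME/.ssh/ansible_ed25519"

# Then distribute the public key to each managed node; this authenticates
# with a password once, after which key-based login works:
#   for host in fedora-server1.example.com fedora-server2.example.com \
#               c9s-server1.example.com c9s-server2.example.com; do
#     ssh-copy-id -i "$HOME/.ssh/ansible_ed25519.pub" "$host"
#   done
echo "created $HOME/.ssh/ansible_ed25519.pub"
```

Passwordless sudo for the connecting user on each managed node (or running Ansible with a become password prompt) also needs to be in place.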

Defining the Ansible inventory file

Still on the pi.example.com host, I’ll create an Ansible inventory file to group the five systems in my environment into two Ansible inventory groups. The nbde_servers group will contain a list of hosts that I would like to configure as Tang servers (which in this example is only the pi.example.com host), and the nbde_clients group will contain a list of hosts that I would like to configure as Clevis clients. I’ll name this inventory file inventory.yml and it contains the following content:

all:
  children:
    nbde_servers:
      hosts:
        pi.example.com:
    nbde_clients:
      hosts:
        fedora-server1.example.com:
        fedora-server2.example.com:
        c9s-server1.example.com:
        c9s-server2.example.com:

Creating Ansible Group variable files

Ansible variables are set to specify what configuration should be implemented by the Linux System Roles. Each role has a README.md file that contains important information on how to use each role, including a list of available role variables. The README.md files for the nbde_server, nbde_client, and firewall roles are available in the following locations, respectively:

  • /usr/share/doc/linux-system-roles/nbde_server/README.md
  • /usr/share/doc/linux-system-roles/nbde_client/README.md
  • /usr/share/doc/linux-system-roles/firewall/README.md

I’ll create a group_vars directory with the mkdir group_vars command. Within this directory, I’ll create an nbde_servers.yml file and an nbde_clients.yml file, which will define, respectively, the variables that should be set for systems listed in the nbde_servers inventory group and the nbde_clients inventory group.

The nbde_servers.yml file contains the following content, which will instruct the firewall role to open TCP port 80, which is the default port used by Tang:

firewall:
  - port: ['80/tcp']
    state: enabled

The nbde_clients.yml file contains the following content:

nbde_client_bindings:
  - device: /dev/vda2
    encryption_password: !vault |
      $ANSIBLE_VAULT;1.1;AES256
      62666465373138636165326639633...
    servers:
      - http://pi.example.com

Under nbde_client_bindings, device specifies the backing device of the encrypted root filesystem on the four managed nodes. The encryption_password specifies a current LUKS passphrase that is required to configure Clevis. In this example, I’ve used ansible-vault to encrypt the string rather than place the LUKS passphrase in clear text. And finally, under servers, the list of Tang servers that Clevis should bind to is specified. In this example, the Clevis clients will be configured to bind to the pi.example.com Tang server.
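For the multiple-Tang-server scenario described earlier, the servers list simply names each server. A hypothetical three-server binding might look like the following (hostnames are examples; consult the nbde_client README for how the number of required servers is controlled):

```yaml
nbde_client_bindings:
  - device: /dev/vda2
    encryption_password: !vault |
      $ANSIBLE_VAULT;1.1;AES256
      62666465373138636165326639633...
    servers:
      - http://tang1.example.com
      - http://tang2.example.com
      - http://tang3.example.com
```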

Creating the playbook

I’ll create a simple Ansible playbook, named nbde.yml that will call the firewall and nbde_server roles for systems in the nbde_servers inventory group, and call the nbde_client role for systems in the nbde_clients group:

- name: Open firewall for Tang
  hosts: nbde_servers
  roles:
    - linux-system-roles.firewall

- name: Deploy NBDE Tang server
  hosts: nbde_servers
  roles:
    - linux-system-roles.nbde_server

- name: Deploy NBDE Clevis clients
  hosts: nbde_clients
  roles:
    - linux-system-roles.nbde_client

At this point, I have the following files and directories created:

  • inventory.yml
  • nbde.yml
  • group_vars/nbde_clients.yml
  • group_vars/nbde_servers.yml
Running the playbook

The nbde.yml playbook can be run with the following command:

$ ansible-playbook nbde.yml -i inventory.yml --ask-vault-pass -b

The -i flag specifies which inventory file should be used, the --ask-vault-pass flag will prompt for the Ansible Vault password to decrypt the encryption_password variable, and the -b flag specifies that Ansible should escalate to root privileges.

play recap output from ansible-playbook command showing the playbook completed successfully

Validating the configuration

To validate the configuration, I rebooted each of my four managed nodes that were configured as Clevis clients of the Raspberry Pi Tang server. Each of the four managed nodes boots up and briefly pauses on the LUKS passphrase prompt:

Systems boot up to LUKS passphrase prompt, and automatically continue booting after a brief pause

However, after the brief delay, each of the four systems continued booting up without requiring me to enter the LUKS passphrase.

Conclusion

If you would like to secure your data at rest with LUKS encryption, but need a solution that enables systems to boot up without intervention, consider implementing Clevis and Tang. Linux System Roles can help you implement Clevis and Tang, as well as a number of other aspects of your system, in an automated manner.

Fedora Workstation’s State of Gaming – A Case Study of Far Cry 5 (2018)

Friday 17th of June 2022 08:00:00 AM

First-person shooter video games are a great proving ground for strategies that make you finish on the top, reflexes that help you to shoot before getting shot and agility that adjusts you to whatever a situation throws at you. Add the open-ended nature brought in by large intricately-designed worlds into the mix, and it dials the player experience to eleven and, with that, it also becomes great evidence of what a platform is capable of. Needless to say, I have been a great fan of open-world first-person shooter games. And Ubisoft’s Far Cry series happens to be the one which remains closest to my heart. So I tried the (second) most recent release in the long-running series, Far Cry 5 which came out in 2018, on Fedora Workstation 35 to see how it performs.

Just like in my previous case study, the testing hardware has an AMD RDNA2-based GPU, and the video game was configured to the highest possible graphical preset to push the hardware to its limits. To ensure a fair comparison, I set up two environments – one with Windows 10 Pro 21H2 and one with Fedora Workstation 35 – both having up-to-date drivers and supporting software such as MSI Afterburner or MangoHUD for monitoring, Steam or Lutris for video game management, and OBS Studio for footage recording. In addition, the benchmarks were made both representative of a common gameplay scenario and variable enough to address resolution scaling and HD textures.

Cover art for “Far Cry 5”, Ubisoft, fair use, via Wikimedia Commons

Before we get into some actual performance testing and comparison results, I would like to go into detail about the video game that is at the centre of this case study. Far Cry 5 is a first-person action-adventure video game developed by Ubisoft Montreal and Ubisoft Toronto. The player takes the role of an unnamed junior deputy sheriff who is trapped in Hope County, a fictional region based in Montana and has to fight against a doomsday cult to take back the county from the grasp of its charismatic and powerful leader. The video game has been well received for the inclusion of branching storylines, role-playing elements and side quests, and is optimized enough to be a defining showcase of what the underlying hardware and platform are capable of.

Preliminary Framerate

The first test that was performed had a direct implication on how smooth the playing experience would be across different platforms but on the same hardware configuration.

Without HD textures

On a default Far Cry 5 installation, I followed the configuration stated above but opted out of the HD textures pack to warm up the platforms with a comparatively easier test. Following are the results.

  1. On average, the video game had around a whopping 59.25% more framerate on Fedora Workstation 35 than on Windows 10 Pro 21H2.
  2. To ensure an overall consistent performance, both the minimum and maximum framerates were also noted to monitor dips and rises.
  3. The minimum framerates on Fedora Workstation 35 were ahead by a big 49.10% margin as compared to those on Windows 10 Pro 21H2.
  4. The maximum framerates on Fedora Workstation 35 were ahead by a big 62.52% margin as compared to those on Windows 10 Pro 21H2.
  5. The X11 display server had roughly 0.52% more minimum framerate as compared to Wayland, which can be taken as a margin of error.
  6. The Wayland display server had roughly 3.87% more maximum framerate as compared to X11, which can be taken as a margin of error.
With HD textures

On a default Far Cry 5 installation, I followed the configuration stated above, but this time I enabled the HD textures pack to stress the platforms with a comparatively harder test. Following are the results.

  1. On average, the video game had around a whopping 65.63% more framerate on Fedora Workstation 35 than on Windows 10 Pro 21H2.
  2. To ensure an overall consistent performance, both the minimum and maximum framerates were also noted to monitor dips and rises.
  3. The minimum framerates on Fedora Workstation 35 were ahead by a big 59.11% margin as compared to those on Windows 10 Pro 21H2.
  4. The maximum framerates on Fedora Workstation 35 were ahead by a big 64.21% margin as compared to those on Windows 10 Pro 21H2.
  5. The X11 display server had roughly 9.77% more minimum framerate as compared to Wayland, which is big enough to be considered.
  6. The Wayland display server had roughly 1.12% more maximum framerate as compared to X11, which can be taken as a margin of error.
Video memory usage

The second test that was performed had less to do with the playing experience and more with the efficiency of graphical resource usage. Following are the results.

Without HD textures

On a default Far Cry 5 installation, I followed the configuration stated above but opted out of the HD textures pack to use comparatively lesser video memory across the platforms. Following are the results.

  1. On average, Fedora Workstation 35 uses around 31.94% less video memory than Windows 10 Pro 21H2.
  2. The Wayland display server uses roughly 1.78% more video memory than X11, which can be taken as a margin of error.
  3. The video game’s estimated memory usage is closer to the actual readings on Fedora Workstation 35 than on Windows 10 Pro 21H2.
  4. Taken together with the previous results, this shows how Fedora Workstation 35 performs better while using fewer resources.
With HD textures

On a default Far Cry 5 installation, I followed the configuration stated above but this time I enabled the HD textures pack to stress the platforms by occupying more video memory. Following are the results.

  1. On average, Fedora Workstation 35 uses around 22.79% less video memory than Windows 10 Pro 21H2.
  2. The Wayland display server uses roughly 2.73% more video memory than X11, which can be taken as a margin of error.
  3. The video game’s estimated memory usage is closer to the actual readings on Fedora Workstation 35 than on Windows 10 Pro 21H2.
  4. Taken together with the previous results, this shows how Fedora Workstation 35 performs better while using fewer resources.
System memory usage

The third test that was performed had less to do with the playing experience and more with how other applications can fit in the available memory while the video game is running. Following are the results.

Without HD textures

On a default Far Cry 5 installation, I followed the configuration stated above but opted out of the HD textures pack to warm up the platforms with a comparatively easier test. Following are the results.

  1. On average, Fedora Workstation 35 uses around 38.10% less system memory than Windows 10 Pro 21H2.
  2. The Wayland display server uses roughly 4.17% more system memory than X11, which can be taken as a margin of error.
  3. Taken together with the previous results, this shows how Fedora Workstation 35 performs better while using fewer resources.
  4. Lower memory usage by the video game leaves extra headroom for other applications to run simultaneously without compromises.
With HD textures

On a default Far Cry 5 installation, I followed the configuration stated above, but this time I enabled the HD textures pack to stress the platforms with a comparatively harder test. Following are the results.

  1. On average, Fedora Workstation 35 uses around 33.58% less system memory than Windows 10 Pro 21H2.
  2. The Wayland display server uses roughly 7.28% more system memory than X11, which is big enough to be considered.
  3. Taken together with the previous results, this shows how Fedora Workstation 35 performs better while using fewer resources.
  4. Lower memory usage by the video game leaves extra headroom for other applications to run simultaneously without compromises.
Advanced

Without HD textures

On a default Far Cry 5 installation, I followed the previously stated configuration without the HD textures pack and ran the tests with varied resolution multipliers. Following are the results.

Minimum framerates recorded
  1. A great deal of inconsistent performance is visible on Fedora Workstation 35 with both display servers in lower resolution scales.
  2. The inconsistencies seem to normalize for the resolution multipliers on and beyond the 1.1x resolution scale for Fedora Workstation 35.
  3. Resolution multipliers do not seem to have a great effect on the framerate on Windows 10 Pro 21H2 as much as on Fedora Workstation 35.
  4. Although Windows 10 Pro 21H2 misses out on potential performance advantages in lower resolution multipliers, it has been consistent.
  5. Records on Windows 10 Pro 21H2 in the 2.0x resolution multiplier appear to be marginally better than those on Fedora Workstation 35.
Maximum framerates recorded
  1. A small amount of inconsistent performance is visible on Fedora Workstation 35 with both display servers in lower resolution scales.
  2. The inconsistencies seem to normalize for the resolution multipliers on and beyond the 1.1x resolution scale for Fedora Workstation 35.
  3. Changes in the resolution multiplier start noticeably affecting performance on Windows 10 Pro 21H2 at the 1.6x scale, beyond which it falls greatly.
  4. Although Windows 10 Pro 21H2 misses out on potential performance advantages in lower resolution multipliers, it has been consistent.
  5. Records on Windows 10 Pro 21H2 in the 1.6x resolution multiplier and beyond appear to be better than those on Fedora Workstation 35.
Average framerates recorded
  1. A minor amount of inconsistent performance is visible on Fedora Workstation 35 with both display servers in lower resolution scales.
  2. The inconsistencies seem to normalize for the resolution multipliers on and beyond the 1.1x resolution scale for Fedora Workstation 35.
  3. Changes in the resolution multiplier start noticeably affecting performance on Windows 10 Pro 21H2 at the 1.6x scale, beyond which it falls greatly.
  4. Although Windows 10 Pro 21H2 misses out on potential performance advantages in lower resolution multipliers, it has been consistent.
  5. Records on Windows 10 Pro 21H2 in the 1.9x resolution multiplier and beyond appear to be better than those on Fedora Workstation 35.
With HD textures

On a default Far Cry 5 installation, I followed the previously stated configuration with the HD textures pack and ran the tests with varied resolution multipliers. Following are the results.

Minimum framerates recorded
  1. A great deal of inconsistent performance is visible on Fedora Workstation 35 with both display servers in lower resolution scales.
  2. The inconsistencies seem to normalize for the resolution multipliers on and beyond the 1.5x resolution scale for Fedora Workstation 35.
  3. Resolution multipliers do not seem to have a great effect on the framerate on Windows 10 Pro 21H2 as much as on Fedora Workstation 35.
  4. Although Windows 10 Pro 21H2 misses out on potential performance advantages in lower resolution multipliers, it has been consistent.
  5. Records on Windows 10 Pro 21H2 in the 2.0x resolution multiplier appear to be marginally better than those on Fedora Workstation 35.
Maximum framerates recorded
  1. A great deal of inconsistent performance is visible on Fedora Workstation 35 with both display servers in lower resolution scales.
  2. The inconsistencies seem to normalize for the resolution multipliers on and beyond the 1.0x resolution scale for Fedora Workstation 35.
  3. Changes in the resolution multiplier start noticeably affecting performance on Windows 10 Pro 21H2 at the 1.6x scale, beyond which it falls greatly.
  4. Although Windows 10 Pro 21H2 misses out on potential performance advantages in lower resolution multipliers, it has been consistent.
  5. Records on Windows 10 Pro 21H2 in the 1.6x resolution multiplier and beyond appear to be better than those on Fedora Workstation 35.
Average framerates recorded
  1. A minor amount of inconsistent performance is visible on Fedora Workstation 35 with both display servers in lower resolution scales.
  2. The inconsistencies seem to normalize for the resolution multipliers on and beyond the 1.1x resolution scale for Fedora Workstation 35.
  3. Changes in the resolution multiplier start noticeably affecting performance on Windows 10 Pro 21H2 at the 1.6x scale, beyond which it falls greatly.
  4. Although Windows 10 Pro 21H2 misses out on potential performance advantages in lower resolution multipliers, it has been consistent.
  5. Records on Windows 10 Pro 21H2 in the 1.9x resolution multiplier and beyond appear to be better than those on Fedora Workstation 35.
Inferences

If the test results and observations baffle you, allow me to say that you are not the only one who feels like that. For a video game that was created to run on Windows, it is hard to imagine how it ends up performing far better on Fedora Workstation 35, all while using far fewer system resources at all times. Special attention was given to recording the highest highs and lowest lows of framerate to verify that the performance is consistent.

But wait a minute – how does Fedora Workstation 35 manage to make this possible? Well, while I do not have a clear idea of what exactly goes on behind the scenes, I do have a number of assumptions about what might be responsible for such brilliant visuals, great framerates and efficient resource usage. These can potentially act as starting points for understanding which features of Fedora Workstation 35 the compatibility layers make use of.

  1. Effective caching of graphical elements and texture assets in the video memory allows for keeping only those data in the memory which are either actively made use of or regularly referenced. The open-source AMD drivers help Fedora Workstation 35 make efficient use of the available frame buffer.
  2. Quick and frequent cycling of data elements from the video memory helps to bring down total occupancy per application at any point in time. The memory clocks and shader clocks are left at the application’s disposal by the open-source AMD drivers, and firmware bandwidth limits are all but absent.
  3. With AMD Smart Access Memory (SAM) enabled, the CPU is no longer restricted to using only 256MiB of the video memory at a time. A combination of leading-edge kernel and up-to-date drivers makes it available on Fedora Workstation 35 and capable of harnessing the technology to its limits.
  4. Extremely low system resource usage by supporting software and background services leaves out a huge majority of them to be used by the applications which need it the most. Fedora Workstation 35 is a lightweight distribution, which does not get in your way and puts the resources on what’s important.
  5. Faster loading of data elements from physical storage devices into system memory is greatly enhanced by the use of modern copy-on-write file systems like BTRFS, which is the default file system for Fedora Workstation 35, and journaling file systems like EXT4.

Performance improvements like these only make me want to indulge more in testing and finding out what else Fedora Workstation is capable of. Do let me know what you think in the comments section below.

Contribute at the Fedora Linux 37 Test Week for Kernel 5.18 

Sunday 5th of June 2022 08:00:00 AM

The kernel team is working on final integration for Linux kernel 5.18. This version was just recently released, and will arrive soon in Fedora. As a result, the Fedora kernel and QA teams have organized a test week now through Sunday, June 12, 2022. Refer to the wiki page for links to the test images you’ll need to participate. Read below for details.

How does a test week work?

A test week is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to do the following things:

  • Download test materials, which include some large files
  • Read and follow directions step by step

The wiki page for the kernel test day has a lot of good information on what and how to test. After you’ve done some testing, you can log your results in the test day web application. If you’re available on or around the day of the event, please do some testing and report your results. We have a document that provides all the steps in writing.

Happy testing, and we hope to see you on test day.
