
Planet KDE - http://planetKDE.org/

0.4.2 Release of Elisa

Monday 1st of July 2019 08:56:15 PM

Elisa is a music player developed by the KDE community that strives to be simple and nice to use. We also recognize that we need a flexible product to account for the different workflows and use-cases of our users.

We focus on a very good integration with the Plasma desktop of the KDE community without compromising the support for other platforms (other Linux desktop environments, Windows and Android).

We are creating a reliable product that is a joy to use and respects our users' privacy. As such, we prefer to support online services where users are in control of their data.

I am happy to announce the release of 0.4.2 version of the Elisa music player.

The following fixes have been added to this release:

  • Fix restore of tracks with missing metadata in playlist (this was the case for tracks without album metadata) by Matthieu Gallien;
  • Fix view selector not following the color theme (BUG 408435) by Matthieu Gallien.

Fixed Elisa with Breeze Dark

Getting Involved

I would like to thank everyone who contributed to the development of Elisa, including code contributions, code reviews, testing, and bug reporting and triaging. Without all of you, I would have stopped working on this project.

New features and fixes are already being worked on. If you enjoy using Elisa, please consider becoming a contributor yourself. We are happy to get any kind of contributions!

We have some tasks that would make perfect junior jobs; they are an ideal way to start contributing to Elisa. More of them are not listed here but can be found on bugs.kde.org.

The Flathub Elisa package offers an easy way to test this new release.

The Elisa source code tarball is available here. There is no Windows installer for this release: it currently has a blocking problem (missing icons) that is being investigated. I hope to be able to provide installers for later bugfix versions.

The phone/tablet port project could easily use some help to build an optimized interface on top of Kirigami. It remains to be seen how to integrate this with the current desktop UI. This is very important if we also want to support free software on mobile platforms.

Multiple Datasets: Tutorial

Monday 1st of July 2019 08:43:45 PM

This post is a step-by-step tutorial for adding multiple datasets to an activity in GCompris.
The procedure is fairly simple; the steps are given below.
Note: in these steps we'll refer to the activity in question as current_activity, and we assume that we plan to add 3 datasets to it.

PROCEDURE
  1. Add the following line to the current_activity/ActivityInfo.qml file:

    levels: "1,2,3"

    The above line indicates that the activity will contain 3 datasets, and the dataset selection menu with 3 options will be created automatically.
    Example:

    import GCompris 1.0

    ActivityInfo {
        name: "money/Money.qml"
        difficulty: 2
        icon: "money/money.svg"
        author: "Bruno Coudoin <bruno.coudoin@gcompris.net>"
        demo: false
        //: Activity title
        title: qsTr("Money")
        //: Help title
        description: qsTr("Practice money usage")
        // intro: "Click or tap on the money to pay."
        //: Help goal
        goal: qsTr("You must buy the different items and give the exact price. At higher levels, several items are displayed, and you must first calculate the total price.")
        //: Help prerequisite
        prerequisite: qsTr("Can count")
        //: Help manual
        manual: qsTr("Click or tap on the coins or paper money at the bottom of the screen to pay. If you want to remove a coin or note, click or tap on it on the upper screen area.")
        credit: ""
        section: "math money measures"
        createdInVersion: 0
        levels: "1,2,3"
    }
  2. Create a resource directory inside the current_activity folder, and inside it create a separate folder for each dataset, named after the dataset number. The resulting directory structure is as follows:

    current_activity
    └── resource
        ├── 1
        │   └── Data.qml
        ├── 2
        │   └── Data.qml
        └── 3
            └── Data.qml

  3. Create a Data.qml file inside each dataset folder with the following format:

    • objective - the text for this dataset shown in the dataset selection menu.
    • difficulty - the difficulty of the dataset.
    • data - the actual data of the dataset. The following example demonstrates the layout.
    import QtQuick 2.6
    import GCompris 1.0
    import "../../../../core"

    Dataset {
        objective: qsTr("Set and display time on analog clock for full half and quarters of an hour.")
        difficulty: 2
        data: [
            {
                "numberOfSubLevels": 5,
                "fixedMinutes": 0,
                "displayMinutesHand": false,
                "fixedSeconds": 0,
                "displaySecondsHand": false
            },
            {
                "numberOfSubLevels": 5,
                "fixedMinutes": 15,
                "displayMinutesHand": true,
                "fixedSeconds": 0,
                "displaySecondsHand": false
            },
            {
                "numberOfSubLevels": 5,
                "fixedMinutes": 30,
                "displayMinutesHand": true,
                "fixedSeconds": 0,
                "displaySecondsHand": false
            },
            {
                "numberOfSubLevels": 5,
                "fixedMinutes": 45,
                "displayMinutesHand": true,
                "fixedSeconds": 0,
                "displaySecondsHand": false
            }
        ]
    }
  4. In the current_activity/CurrentActivity.qml file, add the following line to get the currently selected dataset.

    property var levels: activity.datasetLoader.item.data

    Example:

    QtObject {
        id: items
        property Item main: activity.main
        property alias background: background
        property GCSfx audioEffects: activity.audioEffects
        property alias answerModel: answerArea.pocketModel
        property alias pocketModel: pocketArea.pocketModel
        property alias store: store
        property alias instructions: instructions
        property alias tux: tux
        property var levels: activity.datasetLoader.item.data
        property alias tuxMoney: tuxMoney
        property alias bar: bar
        property alias bonus: bonus
        property int itemIndex
        property int pocketRows
        property var selectedArea
        property alias pocket: pocketArea.answer
        property alias answer: answerArea.answer
    }

    This way the variable levels will contain the data section of the selected dataset.

  5. The dataset can be extracted from the levels variable inside the JS file as follows.

    var dataset = items.levels
    var data = dataset[currentLevel]

KDE ISO Image Writer – Windows Build

Monday 1st of July 2019 09:13:57 AM

One of the main goals of this GSoC project is to have a fully working build of KDE ISO Image Writer on Windows to allow people that want to install KDE Neon to easily write the ISO image onto a USB flash drive.

In order to compile the code on Windows, I used Craft which is a cross-platform build system and package manager. With Craft, I could easily get the dependencies of KDE ISO Image Writer.

I started by writing a Craft blueprint, which is a Python file that describes an application (or library) and lists its dependencies, allowing Craft to fetch the necessary packages before compiling the application.

My journey to get KDE ISO Image Writer running on Windows was not without its hurdles. I first had to figure out how to use Craft to compile an application from a Git repository, but that was quickly solved with Craft’s --help command. Then, I ran into an issue with QGpgME, which is used by KDE ISO Image Writer to verify the digital signature of an ISO image. I tried to compile using MSVC, which failed systematically because CMake complained about not being able to find QGpgME. I learned from the KDE Windows team that QGpgME can, at the moment, only be compiled using MinGW. I was finally able to compile KDE ISO Image Writer on Windows using Craft and MinGW.

In parallel to working on a Windows build of KDE ISO Image Writer, I continued my work on the user interface by implementing the designs made by the KDE Community. You can see in the following screenshots the new user interface running on Windows:

Writing an ISO image to a USB flash drive

Quick update for Google Summer of Code <2019-06-30 Sun>

Monday 1st of July 2019 07:13:00 AM

For the last week I have been reading and understanding the code of kis_imagepipe_brush.cpp, and it has been an eye opening experience. Read More...

A Week in Valencia – the 2019 Plasma/Usability & Productivity Sprint

Sunday 30th of June 2019 09:59:14 PM

For those that don't know me, I'm relatively new to KDE and spend most of my time doing VDG (Visual Design Group) stuff. The Plasma/Usability & Productivity sprint in Valencia, which took place from June 19th to June 26th, was my first ever KDE sprint. Although we were all working together, I was formally...... Continue Reading →

Smart Pointers in Qt Projects

Sunday 30th of June 2019 03:30:22 PM

Actually, a smart pointer is quite simple: It is an object that manages another object by a certain strategy and cleans up memory, when the managed object is not needed anymore. The most important types of smart pointers are:

  • A unique pointer models access to an object that is exclusively maintained by someone. The object is destroyed and its memory is freed when the managing instance destroys the unique pointer. Typical examples are std::unique_ptr or QScopedPointer.
  • A shared pointer is a reference counting pointer that models the shared ownership of an object, which is managed by several managing instances. If all managing instances release their partly ownership, the managed object is automatically destroyed. Typical examples are std::shared_ptr or QSharedPointer.
  • A weak pointer is a pointer to an object that is managed by someone else. The important use case here is being able to ask whether the object is still alive and can be accessed. One example is std::weak_ptr, which can point to an object managed by a std::shared_ptr: it can be used to check whether the managed object still exists and to obtain a shared pointer for accessing it. Another example is QPointer, a different kind of weak pointer, which can be used to check whether a QObject still exists before accessing it.

For all these pointers one should always keep one rule in mind: NEVER EVER destroy the managed objects by hand, because the managed object must only be managed by the smart pointer object. Otherwise, how could the smart pointer still know if an object can still be accessed?! E.g. the following code would directly lead to a crash because of a double delete:

{
    auto foo = std::make_unique<Foo>();
    delete foo.get();
} // crash because of double delete when foo goes out of scope

This problem is obvious, now let’s look at the less obvious problems one might encounter when using smart pointers with Qt.

QObject Hierarchies

QObject objects and instances of QObject derived classes can have a parent object set, which ensures that child objects get destroyed whenever the parent is destroyed. E.g., think about a QWidget based dialog where all elements of the dialog have the QDialog as parent and get destroyed when the dialog is destroyed. However, when looking at smart pointers there are two problems that we must consider:

1. Smart Pointer Managed objects must not have a QObject parent

It’s as simple as the paragraph’s headline: when you set a QObject parent on an object that is managed by a smart pointer, Qt’s cleanup mechanism destroys your precious object whenever the parent is destroyed. You might be lucky and always destroy your smart pointer before the QObject parent is destroyed (and nothing bad will happen), but future developers or users of your API might not.

2. Smart Pointers per default call delete and not deleteLater

Calling delete on a QObject that actively participates in the event loop is dangerous and might lead to a crash. So, do not do it! – However, all smart pointers that I am aware of call “delete” to destroy the managed object. So, you actively have to take care of this problem by specifying a custom cleanup handler/deleter function. For QScopedPointer there already exists “QScopedPointerDeleteLater” as a predefined cleanup handler that you can specify. But you can do the same for std::unique_ptr, std::shared_ptr and QSharedPointer by just defining a custom deleter function and specifying it when creating the smart pointer.

Wrestling for Object Ownership with the QQmlEngine

Besides the QObject ownership there is another, more subtle problem that one should be aware of when injecting objects into the QQmlEngine. When using QtQuick in an application, there is often the need to inject objects into the engine (I will not go into detail here, but for further reading see https://doc.qt.io/qt-5/qtqml-cppintegration-topic.html). The important fact to be aware of is that at this point a heuristic decides whether the QML engine and its garbage collector assume ownership of the injected objects, or whether the ownership is assumed to be on the C++ side (thus managed by you and your smart pointers).

The general rule for the heuristic is described in the QObjectOwnership enum. Here, make sure that you note the difference between QObjects returned via a Q_PROPERTY property and via a call to a Q_INVOKABLE method. Moreover, note that the description there misses a special case: when an object has a QObject parent, CppOwnership is also assumed. For a detailed discussion of the issues (which might present you with a surprisingly hard-to-understand stack trace coming from the depths of the QML engine), I suggest reading this blog post.

Summing up the QML part: when you are using a smart pointer, you will hopefully not set any QObject parent (which automatically would have told the QML engine not to take ownership…). Thus, when making the object available in the QML engine, you must be very aware of how you put the object into the engine, and if needed you must call the QQmlEngine::setObjectOwnership() static method to mark your objects explicitly as handled by you (otherwise, bad things will happen).

Conclusion

Despite the issues above, I very much favor the use of smart pointers. Actually, I am constantly switching to smart pointers in all projects I am managing or contributing to. However, one must be a little careful and conscious about the side effects when using them in Qt-based projects. Even if they bring you much simpler memory management, they do not relieve you of the need to understand how memory is managed in your application.

PS: I plan to soon continue with a post about how one could avoid those issues with the QML integration on an architectural level; but so much for now, this post is already too long.

May/June in KDE PIM

Sunday 30th of June 2019 07:45:00 AM

Following Dan, it’s my turn this time to provide you with an overview of what has happened around Kontact in the past two months. With more than 850 commits by 22 people in the KDE PIM repositories, this can barely scratch the surface though.

KMail

Around email a particular focus area has been security and privacy:

  • Sandro worked on further hardening KMail against so-called “decryption oracle” attacks (bug 404698). That’s an attack where intercepted encrypted message parts are carefully embedded into another email to trick the intended recipient into accidentally decrypting them while replying to the attacker’s email.
  • Jonathan Marten added more fine-grained control over proxy settings for IMAP connections.
  • André improved the key selection and key approval workflow for OpenPGP and S/MIME.

Laurent also continued the work on a number of new productivity features in the email composer:

  • Unicode color emoji support in the email composer.
    Color emoji selector. Grammalecte reporting a grammar error.
  • Markdown support in the email composer, allowing HTML email content to be written in Markdown syntax.
    Markdown editing with preview.

This isn’t all of course, there’s plenty of fixes and improvements all around KMail:

  • Albert fixed an infinite loop that occurred when the message list threading cache is corrupted.
  • David fixed a Kontact crash on logout (bug 404881).
  • Laurent fixed access to more than the first message when previewing MBox files (bug 406167).
  • The itinerary extraction plugin benefited from a number of improvements in the extractor engine, see this post for details.

And fixing papercuts and general polishing wasn’t forgotten either (most changes by Laurent):

  • Fix cursor jumping into the Bcc field in new email (bug 407967).
  • Fix opening the New Mail Notifier agent configuration.
  • Fix settings window being too small (bug 407143).
  • Fix account wizard not reacting to Alt-F4 (bug 388815).
  • Fix popup position in message view with a zoom level other than 100%.
  • Fix importing attached vCard files (bug 390900).
  • Add keyboard shortcut for locking/unlocking the search mode.
  • David fixed interaction issues with the status bar progress overlay.
KOrganizer

Around calendaring, most work has been related to the effort of making KCalCore part of KDE Frameworks 5, something that particularly benefits developers using KCalCore outside of KDE PIM. The changes to KCalCore also aimed at making it easier to use from QML, by turning more data types into implicitly shared value types with Q_GADGET annotations. This work should come to a conclusion soon, so we can continue the KF5 review process.

Of course this isn’t all that happened around calendaring, there were a few noteworthy fixes for users too:

  • Fixed an infinite loop in the task model in case of duplicate UIDs.
  • Improved visibilities of timeline/Gantt views in KOrganizer with dark color schemes.
  • Damien Caliste fixed encoding of 0 delay durations in the iCal format.
KAddressBook

Like calendaring, contact handling also saw a number of changes related to making KContacts part of KDE Frameworks 5. Reviewing the code using KContacts led to a number of repeated patterns being upstreamed, and to streamlining the contact handling code to make it easier to maintain. As a side effect, a number of issues around the KContacts/Grantlee integration were fixed, solving for example limitations regarding the localization of contact display and contact printing.

There is one more step required to complete the KContacts preparation for KDE Frameworks 5, the move from legacy custom vCard entries to the IMPP element for messaging addresses.

Akregator

Akregator also received a few fixes:

  • Heiko Becker fixed associating notifications with the application.
  • Wolfgang Bauer fixed a crash with Qt 5.12 (bug 371511).
  • Laurent fixed comment font size issues on high DPI screens (bug 398516), and display of feed comments in the feed properties dialog (bug 408126).
Common Infrastructure

The probably most important change in the past two months happened in Akonadi: Dan implemented an automatic recovery path for the dreaded “Multiple Merge Candidate” error (bug 338658). This is an error condition the Akonadi database state can end up in for still unknown reasons, and that so far blocked successful IMAP synchronization. Akonadi is now able to automatically recover from this state and with the next synchronization with the IMAP server put itself back into a consistent state.

This isn’t all though:

  • Another important fix was raising the size limit for remote identifiers to 1024 characters (bug 394839), also by Dan.
  • David fixed a number of memory leaks.
  • Filipe Azevedo fixed a few macOS specific deployment issues.
  • Dan implemented improvements for using PostgreSQL as a backend for Akonadi.

The backend connectors also saw some work:

  • For the Kolab resource a memory management issue was fixed, and it was ported away from KDELibs4Support legacy code.
  • David Jarvie fixed a few configuration related issues in the KAlarm resource.
  • Laurent fixed configuration issues in the MBox resources.

The pimdataexporter utility, which allows importing and exporting the entire set of KDE PIM settings and associated data, has received a large number of changes too, with Laurent fixing various import/export issues and improving the consistency and wording in the UI.

Help us make Kontact even better!

Take a look at some of the junior jobs that we have! They are simple, mostly programming tasks that don’t require any deep knowledge or understanding of Kontact, so anyone can work on them. Feel free to pick any task from the list and reach out to us! We’ll be happy to guide you and answer all your questions. Read more here…

KDE Usability & Productivity: Week 77

Sunday 30th of June 2019 05:11:17 AM

We’re up to week 77 in KDE’s Usability & Productivity initiative! This week’s report encompasses the latter half of the Usability & Productivity sprint. Quite a lot of great work got done, and two features I’m particularly excited about are in progress with patches submitted and under review: image annotation support in Spectacle, and customizable sort ordering for wallpaper slideshows. These are not done yet, but should be soon! Meanwhile, check out what’s already landed:

New Features

Bugfixes & Performance Improvements

User Interface Improvements

Pretty freakin’ sweet, huh?! It was a great development sprint and I’m really happy with how it went. I’ll be writing another more in-depth article about it, so stay tuned.

Next week, your name could be in this list! Not sure how? Just ask! I’ve helped mentor a number of new contributors recently and I’d love to help you, too! You can also check out https://community.kde.org/Get_Involved, and find out how you can help be a part of something that really matters. You don’t have to already be a programmer. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!

If you find KDE software useful, consider making a tax-deductible donation to the KDE e.V. foundation.

[GSoC – 3] Achieving consistency between SDDM and Plasma

Saturday 29th of June 2019 03:41:20 PM

Previously: 1st GSoC post 2nd GSoC post With the first phase of Google Summer of Code over, it's high time some substantial progress on achieving the main goal of the project was presented. Since the last post, two things have been done. First, Plasma is now going to be following upstream advice on...... Continue Reading →

Latte bug fix release v0.8.9

Friday 28th of June 2019 11:43:48 AM

Welcome Latte Dock v0.8.9, the LAST stable release of the v0.8 branch!


Go get v0.8.9 from download.kde.org or store.kde.org*
* The archive has been signed with GPG key: 325E 97C3 2E60 1F5D 4EAD CF3A 5599 9050 A2D9 110E
Fixes:
  • fix: show notifications applet when present in Latte (for Plasma >= 5.16)

Latte v0.9:

For those following Latte news, in July the first beta release of the v0.9 branch will land, and if everything goes on schedule v0.9 will replace v0.8 in the first days of August as the officially supported stable version. Requirements for v0.9 are the same as for v0.8:
Minimum requirements:
  • Qt >= 5.9
  • Plasma >=5.12
Proposed requirements:
  • Qt >= 5.12
  • Plasma >=5.15

Community Help:
Latte v0.9 introduces two new APIs that developers can use to leverage the full potential of the new version.
The first API can be used by Plasma applets to exchange information with Latte from QML code. This way we can have applets that work just fine with Plasma panels while at the same time leveraging the full capabilities of Latte docks/panels. My Window Applets are already using it in their latest versions.
The second API describes how developers can implement new standalone Latte indicators.
In order for these APIs to reach developers, I would like this information to be present on KDE TechBase. I have already created a word-style document for them, but you will have to excuse me that I do not have the time to convert it to markup language on the referenced page. If you think you can help or take up this task, please contact me through the relevant bug report in the KDE bug tracker. If no community interest appears, I will just upload the 15-page PDF somewhere on the net and point all interested developers at that link.

Donations:

You can find Latte on Liberapay if you want to support it,

or you can split your donation between my active projects in the KDE store.

Qt Creator 4.10 Beta2 released

Friday 28th of June 2019 10:38:48 AM

We are happy to announce the release of Qt Creator 4.10 Beta2!

Most notably, we fixed a regression in the signing options for iOS devices, and an issue where the “Build Android APK” step of existing Android projects was not restored.
As always, you can find more details in our change log.

Get Qt Creator 4.10 Beta2

The opensource version is available on the Qt download page under “Pre-releases”, and you find commercially licensed packages on the Qt Account Portal. Qt Creator 4.10 Beta2 is also available under Preview > Qt Creator 4.10.0-beta2 in the online installer. Please post issues in our bug tracker. You can also find us on IRC on #qt-creator on chat.freenode.net, and on the Qt Creator mailing list.

The post Qt Creator 4.10 Beta2 released appeared first on Qt Blog.

How to comply with the upcoming requirements in Google Play

Friday 28th of June 2019 07:55:14 AM

Starting on August 1st, Google Play will no longer accept new applications or application updates without a 64-bit version (unless of course there is no native code at all). For Qt users, this means you have to build an additional APK that contains the 64-bit binaries.

Qt has shipped 64-bit binaries for Android since Qt 5.12.0, so complying with the new requirement is technically no big deal. But after discussing with users, I see that it is not clear to everyone exactly how to set up an app in Google Play that supports multiple architectures at once.

This call for help, combined with the fact that I am currently setting up a fresh Windows work station, made for a golden opportunity to look at Qt for Android app development in general. In this blog, I will start with a clean slate and show how to get started on Android, as well as how to publish an app that complies with the Google Play requirements.

I will

  • guide you through the installation steps needed to get a working environment,
  • describe the process of building an application for multiple architectures,
  • and show you how to upload your binaries to Google Play.

The first few parts might be familiar to many of you, so if you get bored and want to hear about the main topic, feel free to skip right to Step 4.

A note about SDK versions

The Android SDK is itself under heavy development, and quite often it isn’t backwards compatible, causing problems with our integration in Qt. We react as quickly as we can to issues that arise from changes or regressions in the SDK, but a general rule of thumb is to wait before you upgrade to the latest and greatest versions of the Android tools until we have had a chance to adapt to incompatibilities in Qt.

While there have been some issues on other platforms as well, the majority of the problems we have seen have been on Windows. So if you are on this host system, be extra aware to check for known good versions before setting up your environment.

We are currently recommending the use of the following tools together with Qt 5.13.0:

  • Android build tools version 28
  • Android NDK r19
  • Java Development Kit 8

If you do bump into some problems, please make sure to check our known issues page to see if there is any updated information.

Now for the details on where and how to get the right versions of everything.

Step 1: Installing the JDK

Android is primarily a Java-based platform, and while you can write your Qt applications entirely in C++ and/or QML, you will need the Java Development Kit in order to compile the files that make the integration possible.

Note that there is an incompatibility between Android’s SDK Manager tool and the later versions of Oracle’s JDK, making the latest JDK versions unusable together with the Android environment. To work around this, we recommend that you download JDK version 8 for use with Android.

You may use the official binaries from Oracle, or an alternative, such as the AdoptOpenJDK project.

Download and run the installer and install it in the default location.

Step 2: Setting up the Android environment

The second step is getting the actual Android development environment. Start by downloading and installing Android Studio. Scroll past the different “beta” and “canary” releases, and you will find the latest stable release.

Once Android Studio has been installed, you can use it to install the “SDK Platform”. This is the actual collection of Java classes for a particular Android distribution. When you start Android Studio for the first time, it should prompt you to install the SDK Platform. You can safely use the latest version of the SDK, platform 29, which is the suggested default.

In addition to the SDK, we also need to install the NDK. This is the development kit used for cross-compiling your C++ code to run on Android. As mentioned above, we will use Android NDK r19c and not the latest release, since there are issues with Android NDK r20 causing compilation errors. The issue will be addressed in Qt 5.13.1 and Qt 5.12.5, so once you are using those versions, upgrading to Android NDK r20 will be possible.

And as a final step, we need to make sure that we are using version 28.0.3 of the Android build tools rather than the latest version. Note that this is only an issue on Windows hosts.

From the starting dialog box of Android Studio, click on Configure and then select SDK Manager. Go to the SDK Tools tab and make sure Show Package Details is checked. Under the Android build tools, make sure you deselect 29.0.0 and select 28.0.3 instead.

This will uninstall the non-functioning version of the build tools and install the older one. Click Apply to start the process, and when it is done you will have installed a functioning Android environment.

Step 3: Install Qt

For this guide, we will be using Qt 5.13.0. If you haven’t already, start by downloading the online installer tool from your Qt Account.

When you run the installer, make sure you select the arm64-v8a and armv7a target architectures. These are the technical names for, respectively, the 64-bit and 32-bit variants of the ARM processor family, the most commonly used processors on Android devices.

Note: For this example in particular, we will also need Qt Purchasing, since it contains the application I am planning to use as demonstration. This can also be selected from the same list.

When Qt is finished installing, start Qt Creator and open the Options. Under Devices, select the Android tab and select the directories where you installed the different packages in the previous steps.

If everything is set up correctly, Qt Creator will show a green check mark, and you will be ready to do Android development with Qt.

Step 4: Setting up project in Qt Creator

For this example, I will use the Qt Hangman example. This is a small example we made to show how to implement in-app purchases in a cross-platform way.

First we open the example in Qt Creator, which can be done from the Welcome screen. Once it has been opened, Qt Creator will ask us to select which Qt versions we want to use for building it.

Select both the 64-bit and 32-bit versions of Qt and click Configure Project.

In order to comply with the additional requirements in Google Play, we want to create two APK packages: One for 32-bit devices and one for 64-bit devices. We need to configure each of these separately.

This screenshot shows an example setup for the 32-bit build. Important things to notice here:

  • Use a different shadow build directory for each of the builds.
  • Make sure you select the Release configuration.
  • You should also tick the Sign package checkbox to sign your package; otherwise, the Google Play store will reject it.

With the exception of the build directory, the setup for the 64-bit build should be the same. Select the 64-bit kit on the left-hand side and make the equivalent adjustments there.

Step 5: Preparing the manifest

In addition, the two packages will need identical AndroidManifest.xml files, except for one detail: the version codes of the two have to differ. The version code can be pretty much anything you choose, as long as you keep in mind that when a device installs an app from the store, the store will select the compatible APK with the highest version code. As Qt user Fabien Chéreau pointed out in a comment to a bug report, you therefore typically want to set the version code of the 64-bit version higher than that of the 32-bit version, so that a device which supports both will prefer the 64-bit one.

As Felix Barz pointed out in the same thread, this can be automated in the .pro file of the project. Here is my slightly modified version of his code:

defineReplace(droidVersionCode) {
    segments = $$split(1, ".")
    for (segment, segments): vCode = "$$first(vCode)$$format_number($$segment, width=3 zeropad)"

    contains(ANDROID_TARGET_ARCH, arm64-v8a): \
        suffix = 1
    else:contains(ANDROID_TARGET_ARCH, armeabi-v7a): \
        suffix = 0
    # add more cases as needed

    return($$first(vCode)$$first(suffix))
}

VERSION = 1.2.3
ANDROID_VERSION_NAME = $$VERSION
ANDROID_VERSION_CODE = $$droidVersionCode($$ANDROID_VERSION_NAME)

This neat trick (thanks, Felix!) will convert the application’s VERSION to an integer and append a new digit, on the least significant end, to signify the architecture. So for version 1.2.3 for instance, the version code will be 0010020030 for the 32-bit package and 0010020031 for the 64-bit one.
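The same digit-packing scheme is easy to verify outside qmake. Here is a hypothetical C++ helper (the function name and types are mine, not part of the project) that mirrors the logic: each dotted version segment becomes a zero-padded three-digit field, and a final digit encodes the architecture:

```cpp
#include <iomanip>
#include <sstream>
#include <string>

// Hypothetical helper mirroring the qmake droidVersionCode trick:
// "1.2.3" -> "001" "002" "003", then a trailing digit for the
// architecture (0 = 32-bit, 1 = 64-bit), parsed as an integer.
long droidVersionCode(const std::string &version, bool is64Bit)
{
    std::ostringstream code;
    std::istringstream segments(version);
    std::string segment;
    while (std::getline(segments, segment, '.'))
        code << std::setw(3) << std::setfill('0') << std::stoi(segment);
    code << (is64Bit ? '1' : '0');
    return std::stol(code.str());
}
```

So droidVersionCode("1.2.3", true) yields 10020031, one higher than the 32-bit code 10020030, which is exactly the ordering Google Play needs.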

When you generate an AndroidManifest.xml using the button under Build APK in the project settings, it will automatically pick up this version code from the project. Once you have done that and edited the manifest to have your application’s package name and title, the final step is to build the packages: first do a build with one of the two kits, then activate the other kit and build again.

When you are done, you will have two releasable APK packages, one in each of the build directories you set up earlier. Relative to the build directory, the package will be in android-build\build\outputs\apk\release.

Note that for a more efficient setup, you will probably want to automate this process. This is also quite possible, since all the tools used by Qt Creator can be run from the command line. Take a look at the androiddeployqt documentation for more information.

Step 6: Publish the application in Google Play

The Google Play publishing page is quite self-documenting, and there are many good guides out there on how to do this, so I won’t go through all the steps for filling out the form. In general, just fill out all the information it asks for, provide the images it needs, and make sure all the checkmarks in the left side bar are green. You can add all kinds of content here, so take your time with it. In the end, it will have an impact on how popular your app becomes.

Once that has been done, you can create a new release under App Releases and upload your APKs to it.

One thing to note is that the first time you do this, you will be asked if you want to allow Google Play to manage your app signing key.

For now, you will have to opt out of this. In order to use this feature, the application has to be in the new “Android App Bundle” format. This is not yet supported by Qt, but we are working on supporting it as well. In fact, Bogdan Vatra from KDAB (who is also the maintainer of the Android port of Qt) has already posted a patch which addresses the biggest challenge in getting such support in place.

When we do get support for it, it will make the release process a little bit more convenient. With the AAB format, Google Play will generate the optimized APKs for different architectures for us, but for now we have to do this manually by setting up multiple kits and building multiple APKs, as I have described in this tutorial.

When the two APKs have been uploaded to a release, you should see a listing such as this: Two separate APK packages, each covering a single native platform. By expanding each of the entries, you can see what the “Differentiating APK details” are. These are the criteria used for selecting one over the other when a device is downloading the APK from the Google Play Store. In this case, the differentiating detail should be the native platform.

And that is all there is to it: creating and releasing a Qt application in Google Play with both 32-bit and 64-bit binaries. When the APKs have been uploaded, you can hit Publish and wait for Google Play to do its automated magic. And if you do have existing 32-bit apps in the store at the moment, make sure you update them with a 64-bit version well before August 2021, as that is when non-compliant apps will no longer be served to 64-bit devices, even if they also support 32-bit binaries.

Until then, happy hacking and follow me on Twitter for irregular updates and fun curiosities.

The post How to comply with the upcoming requirements in Google Play appeared first on Qt Blog.

Week 4, Titler Tool and MLT – GSoC ’19

Friday 28th of June 2019 05:30:50 AM

Hi again!

It’s already been a month now, and this week hasn’t been the most exciting one: mostly meddling with MLT, going through pages of documentation, compiling MLT and getting used to the MLT codebase.

Last week I concluded the rendering library part, and this week I began writing a new producer in MLT for QML, which will be rendered using the rendering library. So I went through a lot of MLT documentation, and since this is a relatively new field for me, here is what I’ve gathered so far:

At its core, MLT employs the basic producer-consumer concept. A producer produces data (here, frame objects) and a consumer consumes frames – as simple as that.

Producer —> Consumer

We have producers for different things which the current titler uses, like qtext, qimage and kdenlivetitle. What these producers do is simple. Take kdenlivetitle, for example: it loads an XML file, parses it, initializes the producer’s properties, and is then ready to produce frames.
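To make the producer/consumer idea concrete, here is a deliberately simplified C++ sketch. This is not the actual MLT C API (real MLT producers implement get_frame callbacks and carry property lists); it only illustrates the data flow from producer to consumer:

```cpp
#include <string>

// Toy model of the MLT concept: a producer makes frames on demand,
// a consumer pulls frames from whichever producer it is connected to.
struct Frame {
    int position;
    std::string data;
};

struct Producer {
    // A real producer like kdenlivetitle would parse an XML file here;
    // this toy version just synthesizes frames on demand.
    int position = 0;
    Frame getFrame() { return Frame{position++, "frame data"}; }
};

struct Consumer {
    Producer *source = nullptr;
    void connect(Producer &p) { source = &p; }

    // Consume n frames from the connected producer (assumes connect()
    // was called first) and report how many were consumed.
    int run(int n) {
        int consumed = 0;
        for (int i = 0; i < n; ++i) {
            source->getFrame();
            ++consumed;
        }
        return consumed;
    }
};
```

A QML producer would slot into the same shape: getFrame() would render the QML scene (via the rendering library) and hand the resulting image to the consumer.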

What I have to do over the next few days is write a new producer which loads QML, renders it (using my library) and then produces frames. I’ve started writing the new producer, although progress has been slow as I’m still wrapping my head around all the code and trying to figure out what my next step should be. You can look at the code here, although there isn’t much at the moment: producer_qml.c, qml_wrapper.*

Apart from that, the build system for the rendering library will be added to the MLT build system within the next few days, and with that I’ll be able to use the rendering library in the producer. Pretty soon, we should hopefully have a working producer!


KDE Applications 19.08 Schedule finalized

Thursday 27th of June 2019 08:42:39 PM

It is available at the usual place https://community.kde.org/Schedules/Applications/19.08_Release_Schedule

Dependency freeze is in two weeks (July 11) and Feature Freeze a week after that, so make sure you start finishing your stuff!


P.S.: Remember, the last day to apply for Akademy Travel Support is this Sunday, 30 June!

New Facebook Account

Thursday 27th of June 2019 03:13:39 PM

Facebook is a business selling very targeted advertising channels.  This is not new; Royal Mail’s Advertising Mail service offers ‘precision targeting’.  But Facebook does it with many more precision options, with emotive impact (because it uses video and feels like it comes from your friends) and with the option of anonymity.  This turns out to be most effective in political advertising.  There are laws banning political advertising on television, because politics should be about reasoned arguments rather than emotive simplistic soundbites, but the law has yet to be changed to extend this ban to video on the internet.  The result has undermined the democracy of the UK during the EU referendum and elsewhere.

To do this Facebook collects data and information on you.  Normally this isn’t a problem, but you never know when journalists will come sniffing around for gossip in your past life, or an ex-partner will want to take something out of context to prove a point in divorce proceedings.  The commonly used example of data collection going wrong is the Dutch government keeping a list of who was Jewish, with terrible consequences when the Nazis invaded.  We do not have a fascist government here, but you can never assume it will never happen.  Facebook has been shown to care little for data protection and allowed companies such as Cambridge Analytica to steal data illegally and without oversight.  Again, this was used to undermine democracy during the 2016 EU referendum.

In return we get a useful way to keep in touch with friends and family, have discussions with groups and chat with people; these are useful services.  So what can you do if you don’t want your history to be kept by an untrusted third party?  Delete your account and you’ll miss out on important social interactions.  Well, there’s an easy option that nobody seems to have picked up on: open a new account, move your important content over, and drop your history.

Thanks to the EU’s GDPR legislation, we have a Right to Data Portability.  This is similar to but separate from the Right to Access, and it means it’s easy enough to extract your data out of Facebook.  I downloaded mine and it’s a whopping 4GB of text, photos and video.  I then set up a new account and started triaging anything I wanted to keep.  What’s in my history?

Your Posts and Other People’s Posts to Your Timeline

These are all ephemeral.  You post them, get some reaction, but they’re not very interesting a week or more later.  Especially all the automated ones Spotify sent saying what music I was playing.

Photos and videos

Here’s a big chunk.  Over 1500 of them, some 2GB of pics, mostly of me looking awesome paddling.  I copied any I wanted to keep over to easy photo dump Google Photos.  There were about 250 I wanted to keep.

Comments

I’ve really no desire to keep these.

Likes and reactions

Similarly ephemeral.

Friends

This can be copied over easily to a new account, you just friend your old account and then it’ll suggest all your old friends.  A Facebook friend is not the same as a real life friend so it’s sensible to triage out anyone you don’t have contact with and don’t find interesting to get updates from.

You can’t see people who have unfriended you, probably for the best.

Stories

Facebook’s other way to post pics to try to be cool with the Snapchat generation.  Their very nature is that they don’t stay around long so nothing important here.

Following and followers

This does include some people who have ignored a friend request but still have their feed public, so the request gets turned into a follow.  Fortunately, nobody I desperately crave to be my friend is on the list, so they can be ignored.

Messages

Despite removing the Facebook branding from their messaging service a few years ago it’s still very much part of Facebook.  Another nearly 2GB of text and pics in here.  This is the kind of history that is well worth removing, who knows when those chats will come back to haunt you.  Some more pics here worth saving but not many since any I value for more than a passing comment are posted on my feed.  There’s a handful of longer term group chats I can just add my new account back into.

Groups

One group I run and a few I use frequently; I can just rejoin them and set myself as admin on the one I run.

Events

Past events are obviously not important.  I had 1 future event I can easily rejoin.

Profile information

It’s worth having a triage and review of this to keep it current and not let Facebook know more than you want it to.

Pages

Some pages I’m admin or moderator of that I can rejoin; where I’m a moderator, I need to track down an admin to add me back in.

Marketplace, Payment history, Saved items and collections, Your places

I’ve never found a use for these features.

Apps and websites

It’s handy to use Facebook as a single sign-on for websites sometimes, but it’s worth reviewing and triaging these to stop them taking excess data without you knowing.  The main one I used was Spotify, but it turns out that has long since been turned into a non-Facebook account, so it was no bother wiping all these.

Other activity

Anyone remember pokes?

What Facebook Decides about me

Facebook gives you labels to give to advertisers.  Seems I’m interested in Swahili language, Sweetwater in Texas, Secret Intelligence Service and other curiosities.

Search history

I can’t think of any good reason why I’d want Facebook to know about 8 years of searches.

Location history

Holy guacamole, they keep my location each and every day since I got a smartphone.  That’s going to be wiped.

Calls and messages

Fortunately they haven’t been taking these from my phone history but I’m sure it’s only one setting away before they do.

Friend Peer Group

They say I have ‘Established Adult Life’.  I think this means I’m done.

Your address books

They did, however, keep all my contacts from Gmail and my phone from when I first logged on from a web browser and phone.  Those can go.

So most of this can be dropped and recreated quite easily. It’s a fun evening going through your old photos.  My 4GB of data is kept in a cloud drive which can be accessed through details in my will so if I die and my autobiographer wants to dig the gossip on me they can.

I also removed the app from my phone.  The messenger app is useful, but the Facebook one seems a distraction; if I want to browse and post Facebook stuff I can use the web browser.  And on a desktop computer I can use https://www.messenger.com/ rather than the distraction of the Facebook website.

And the first thing I posted?  Going cabogganing!

New account at https://www.facebook.com/jonathan.riddell.737 do re-friend me if you like.


My experience using Kdenlive on the 48 Hour Film Project

Thursday 27th of June 2019 02:59:16 PM

Cutelyst 2.8.0 released

Thursday 27th of June 2019 02:37:19 PM

Cutelyst a Qt/C++ Web framework got a new release!

This release took a while to get out because I wanted to fix some important stuff, but time is short: I’ve been working on polishing my UPnpQt library and on the yet to be released FirebaseQt and FirebaseQtAdmin (which are being used in a mobile app and a REST/web app built with Cutelyst). The latter works quite well, although at the moment it depends on a Python script to get the Google token; luckily that’s only a temporary waste of 25MB of RAM every 45 minutes.

Back to the release: thanks to Alexander Yudaev it now has CPack support, and 顏子鳴 also fixed some bugs and added a deflate feature to RenderView, FastCGI and H2.

I’m also very happy that we now have more than 500 stars on GitHub.

Have fun https://github.com/cutelyst/cutelyst/releases/tag/v2.8.0

Little Trouble in Big Data – Part 2

Thursday 27th of June 2019 08:45:15 AM

In the first blog in this series, I showed how we solved the original problem of how to use mmap() to load a large set of data into RAM all at once, in response to a request for help from a bioinformatics group dealing with massive data sets on a regular basis. The catch in our solution, however, was that the process still took too long. In this blog, I describe how we solved this, starting with Step 3 of the process I introduced in Blog 1:

3. Fine-grained Threading

The original code we inherited was written on the premise that:

  1. Eigen uses OpenMP to utilize multiple cores for vector and matrix operations.
  2. Writing out the results of the Monte Carlo simulation is time-consuming and was therefore put into its own thread by way of OpenMP, with another OpenMP critical section doing the actual analysis.

Of course, there were some slight flaws in this plan.

  1. Eigen’s use of OpenMP is only for some very specific algorithms built into Eigen itself, none of which this analysis code was using, so that was useless. Eigen does make use of vectorization, however, which is good and can in ideal circumstances give a factor of 4 speedup compared to a simplistic implementation. So we wanted to keep that part.
  2. The threading for writing results was, shall we say, sub-optimal. Communication between the simulation thread and the writer thread was by way of a lockless list/queue they had found on the interwebs. Sadly, this was implemented with a busy spin loop which just locked the CPU at 100% whilst waiting for data to arrive once every n seconds or minutes, which means it was just burning cycles for no good reason. The basic outline algorithm looks something like this:
const std::vector<unsigned int> colIndices = {0, 1, 2, 3, ... };
const std::vector<unsigned int> markerIndices = randomise(colIndices);

for (unsigned int i = 0; i < maxIterations; ++i) {
    for (unsigned int j = 0; j < numCols; ++j) {
        const unsigned int marker = markerIndices[j];
        const auto col = data.mappedZ.col(marker);
        output += doStuff(col);
    }

    if (i % numIterations == 0)
        writeOutput(output);
}
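As an aside, the busy-wait communication described above could be avoided entirely with a wait-based queue. Here is a minimal, illustrative C++ sketch (not the code the project used) in which the consumer sleeps on a condition variable until data arrives, instead of spinning at 100% CPU:

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <utility>

// Minimal blocking queue: push() wakes the consumer via a condition
// variable; pop() sleeps until data is available, burning no cycles.
template <typename T>
class BlockingQueue
{
public:
    void push(T value)
    {
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_queue.push(std::move(value));
        }
        m_cv.notify_one(); // wake the waiting consumer, no polling needed
    }

    T pop()
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_cv.wait(lock, [this] { return !m_queue.empty(); }); // sleeps, no spin
        T value = std::move(m_queue.front());
        m_queue.pop();
        return value;
    }

private:
    std::mutex m_mutex;
    std::condition_variable m_cv;
    std::queue<T> m_queue;
};
```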

So, what can we do to make better use of the available cores? For technical reasons related to how Markov Chain Monte Carlo works, we can neither parallelize the outer loop over iterations nor the inner loop over the columns (SNPs). What else can we do?

Well, recall that we are dealing with large numbers of individuals – 500,000 of them in fact. So we could split the operations on these 500k elements into smaller chunks and give each chunk to a core to process and then recombine the results at the end. If we use Eigen for each chunk, we still get to keep the SIMD vectorization mentioned earlier. Now, we could do that ourselves but why should we worry about chunking and synchronization when somebody else has already done it and tested it for us?

This was an ideal chance for me to try out Intel’s Threading Building Blocks library, TBB for short. As of 2017, this is available under the Apache 2.0 license and so is suitable for most uses.

TBB has just the feature for this kind of quick win in the form of its parallel_for and parallel_reduce template helpers. The former performs the map operation (applies a function to each element in a collection where each is independent). The latter performs the reduce operation, which is essentially a map operation followed by a series of combiner functions, to boil the result down to a single value.

These are very easy to use so you can trivially convert a serial piece of code into a threaded piece just by passing in the collection and lambdas representing the map function (and also a combiner function in the case of parallel_reduce).

Let’s take the case of a dot (or scalar) product as an example. Given two vectors of equal length, we multiply them together component-wise then sum the results to get the final value. To write a wrapper function that does this in parallel across many cores we can do something like this:

const size_t grainSize = 10000;

double parallelDotProduct(const VectorXf &Cx, const VectorXd &y_tilde)
{
    const unsigned long startIndex = 0;
    const unsigned long endIndex = static_cast<unsigned long>(y_tilde.size());

    auto apply = [&](const blocked_range<unsigned long>& r, double initialValue) {
        const long start = static_cast<long>(r.begin());
        const long count = static_cast<long>(r.end() - r.begin());
        const auto sum = initialValue
                + (Cx.segment(start, count).cast<double>()
                   * y_tilde.segment(start, count)).sum();
        return sum;
    };

    auto combine = [](double a, double b) { return a + b; };

    return parallel_reduce(blocked_range<unsigned long>(startIndex, endIndex, grainSize),
                           0.0, apply, combine);
}

Here, we pass in the two vectors for which we wish to find the scalar product, and store the start and end indices. We then define two lambda functions.

  1. The apply lambda simply uses the operator * overload on the Eigen VectorXf type and the sum() function to calculate the dot product of the vectors for the subset of contiguous indices passed in via the blocked_range argument. The initialValue argument must be added on; it is just zero in this case, but it allows you to pass in data from other operations if your algorithm needs it.
  2. The combine lambda then just adds up the results of each of the outputs of the apply lambda.

When we then call parallel_reduce with these two functions, and the range of indices over which they should be called, TBB will split the range behind the scenes into chunks based on a minimum size of the grainSize we pass in. Then it will create a lightweight task object for each chunk and queue these up onto TBB’s work-stealing threadpool. We don’t have to worry about synchronization or locking or threadpools at all. Just call this one helper template and it does what we need!

The grain size may need some tuning to get optimal CPU usage, based upon how much work the lambdas are performing. As a general rule of thumb, it should be such that more chunks (tasks) are generated than you have CPU cores; that way the threadpool is less likely to have some cores starved of work. But with too many, it will spend too much time in the overhead of scheduling and synchronizing the work and results between threads/cores.
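To illustrate the chunking that parallel_reduce performs behind the scenes, here is a simplified, self-contained sketch using std::async in place of TBB’s work-stealing threadpool (illustrative only; TBB’s task scheduling is far more sophisticated, and the function name here is my own):

```cpp
#include <algorithm>
#include <cstddef>
#include <future>
#include <vector>

// Split the index range into grain-sized chunks, reduce each chunk
// concurrently (the "apply" step), then add up the partial results
// (the "combine" step) -- the same map/reduce shape parallel_reduce uses.
double chunkedDotProduct(const std::vector<double> &a,
                         const std::vector<double> &b,
                         std::size_t grainSize)
{
    std::vector<std::future<double>> partials;
    for (std::size_t start = 0; start < a.size(); start += grainSize) {
        const std::size_t end = std::min(start + grainSize, a.size());
        partials.push_back(std::async(std::launch::async, [&, start, end] {
            double sum = 0.0; // per-chunk "apply"
            for (std::size_t i = start; i < end; ++i)
                sum += a[i] * b[i];
            return sum;
        }));
    }

    double result = 0.0; // "combine" the partial sums
    for (auto &f : partials)
        result += f.get();
    return result;
}
```

With a small grain size there are many chunks and good load balancing but more scheduling overhead; with a huge grain size you get one chunk per vector and no parallelism at all, which is exactly the trade-off described above.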

I did this for all of the operations in the inner loop’s doStuff() function and for some others in the outer loop which do more work across the large (100,000+ element) vectors and this yielded a very nice improvement in the CPU utilization across cores.

So far so good. In the next blog, I’ll show you how we proceed from here, as it turns out this is not the end of the story.

The post Little Trouble in Big Data – Part 2 appeared first on KDAB.

Krita 4.2.2 Released

Thursday 27th of June 2019 07:00:16 AM

Within a month of Krita 4.2.1, we’re releasing Krita 4.2.2. This is another bug fix release. We intend to have monthly bug fix releases of Krita 4.2 until it’s time to release 4.3, which will have new features as well. Here’s the list of bug fixes:

  • Text editor: make sure the background color is the one set in the settings (BUG:408344)
  • Fix a crash when creating a text shape (BUG:407554)
  • Make sure the text style is not reset when removing the last character in the text editor (BUG:408441)
  • Fix an issue on macOS where some libraries could not be loaded (BUG:408343)
  • Use a highlighted tool button in the selection tool option dockers so it’s easier to see which selection action is active
  • Fix the nearest neighbour transform algorithm (BUG:408182)
  • Fix a styling issue in the filter layers properties dialog (BUG:408171)
  • Fix an issue where if Krita was set to use a language other than English, vector strokes were drawn wrongly
  • Fix selecting colors from the combobox in the palette docker
  • Fix a crash when loading a broken KPL file (BUG:408447)
  • Fix an issue where a transparent pattern fill loader was loaded incorrectly (BUG:408169)
  • Make it possible to make the onion skin docker smaller (BUG:407646)
  • Improve loading GPL palette files with thousands of columns
  • Fix the slider widget to make it impossible to get negative values
  • Improve the tiff import/export filter (BUG:408177)
  • Fix loading the scripter Python plugin when using a language other than English
  • Improve the reference image tool and optimize loading images from clipboard
  • Make the camera raw import filter honor batch mode
  • Fix rendering of clone layers if the source layer is not visible (BUG:408167, BUG:405536)
  • Fix move and transform tools after a quick layer duplication (BUG:408593)
  • Fix a crash when selecting the opaque pixels on a transform mask (BUG:408618)
  • Fix loading sRGB EXR files (BUG:408485)
  • Make the new image dialog choose the last used option even when the user’s language has changed
  • Fix the “Enforce Palette Colors” feature (BUG:408256)
  • Update the brush preview on every brush stamp creation (BUG:389432)
  • Make it possible to edit vector shapes on duplicated vector layers (BUG:408028)
  • Hide the color picker button in the vector object properties docker, it’s unimplemented
  • Fix color as mask export in GIH and GBR brush tip export (BUG:389928)
  • Restore the default favorite blending modes
  • Add a header to all right-click menus on the canvas so the first thing under the cursor isn’t something dangerous, like ‘cut’ (BUG:408696)
  • Fix an incorrect condition when rendering animations where Krita would complain about being out of memory
  • Keep the community links in the welcome screen visible when changing theme (BUG:408686)
  • Check after saving whether the saved file can be opened and has correct contents
  • Improve the import/export error handling and reporting
  • Make sure the filter dialog shows up in front of Krita’s main window (BUG:408867)
  • Make sure that the contiguous selection tool provides the antialiasing switch (BUG:408733)
  • Fix the fuzziness setting in the contiguous selection tool
  • Fix putting the text shape behind every other shape on a vector layer after editing text (BUG:408693)
  • Fix switching the pointer type by stylus tip (BUG:408454, BUG:405747)
  • Fix an issue on Linux where switching from pen to mouse would prevent the mouse from drawing on the canvas (BUG:407595)
  • Fix a crash when the user undoes creating layers too quickly (BUG:408484)
  • Fix using .KRA and .ORA files as file layers (BUG:408087)
  • Clear all points in the outline selection on clicking (BUG:408439)
  • Fix a crash when using the fill tool in fast mode on a pixel selection mask
  • Fix merging layers with inactive selection masks (BUG:402070)
  • Remove default actions from the Reference Image tool that were inappropriate (BUG:408427)
  • Fix undo/redo not restoring the document to unmodified (BUG:402263)
  • Fix the deform tool leaving darkish traces when scrubbing a lot on a 16 bit canvas (BUG:290383)
  • Updated Qt to 5.12.4

Warning: on some Windows systems, we see that Krita 4.2.x doesn’t start. We haven’t found a system where we could reproduce this issue, and it seems it mostly has to do with those systems not having a working OpenGL or Direct3D driver. We’re working on a solution.

Download

Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

Linux

(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)

OSX

Note: the gmic-qt is not available on OSX.

Source code

md5sum

For all downloads:

Key

The Linux AppImage and the source tarball are signed. You can retrieve the public key over https here: 0x58b9596c722ea3bd.asc. The signatures are here (filenames ending in .sig).

Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

More in Tux Machines

Raccoon – APK Downloader for Linux, MacOS, and Windows

We’ve covered APK stories before in articles like the one about F-Droid and Google Play Downloader, but never have we covered an app as cool as this one with a name inspired by the North American mammal, Raccoon. Raccoon is a free and modern open-source APK downloader application that enables you to safely download any Android app available on Google Play Store to your Linux, Windows, or Mac desktop. The incentive of Raccoon is to enable users to install Android apps without sending any kind of information to Google. It also works to store APK files locally, use a “Split APK” format, bypass application region restrictions, and aims to improve your phone’s battery life. Read more

Games: MMO Path of Titans, Steam Play Milestone, Rocket Pass, Stay Safe: Labyrinth, OBS Studio

  • Try the first demo of the dino MMO Path of Titans, we have some testing keys to give away

    After Alderon Games’ successful crowdfunding campaign on IndieGoGo for their dino-themed survival MMO Path of Titans, the developer reached out to gather more Linux testers. They've released a first demo and it's currently quite limited, with the character creation ability the only thing possible. However, once a month they will be deploying a big new feature for it, like the ability to run around, AI, quests and so on.

  • Steam Play passes six thousand Windows games playable on Linux, according to ProtonDB

    On the day of Steam Play hitting the big one year anniversary (August 21st), it seems another milestone has been reached in terms of compatibility. According to ProtonDB, the handy (but unofficial) tracking website, over six thousand games are now working. At time of writing, exactly 6,023 "games work" against the 9,134 total of games that currently have user reports to see if they run or not. That's quite an impressive number! It's worth noting though, that with little over nine thousand games currently reported, Steam does host well over thirty thousand so there's a huge amount that hasn't yet been tested. How about a question for you to answer in the comments: What does Steam Play mean to you? I'll start.

  • Rocket Pass 4 is coming to Rocket League on August 28th, with a new rally-inspired Battle-Car

    The fourth Rocket Pass is due to arrive in Rocket League soon, along with the start of Competitive Season 12. For those of you wanting to rank up and ensure you get the best rewards possible, Season 11 is ending really soon on August 27th. A day later, Rocket Pass 4 is going to be released.

  • Roguelike Stay Safe: Labyrinth of the Mad now has a Linux beta, sounds quite unique

    Stay Safe: Labyrinth of the Mad from Yellowcake Games is a roguelike with plenty of random generation, including an interesting way of generating the world. When starting a new game, the developer said you can use files on your PC or a combination of keyboard/gamepad button presses to generate the dungeon, items and gems. That's not all that makes it somewhat unique, there's also another feature where you will come across a copy of other players. It's a single-player game, so you're not directly facing other people only a shadow of what they had. Although that feature is entirely optional.

  • OBS Studio has a fresh release candidate available for a major new version

    OBS Studio, the free and open source video livestreaming and recording software is my one and only stop for video capturing and it continues to mature. The upcoming 24.0 release has a first release candidate now available and it has some fun new features. For starters, you can now actually pause recordings to easily cut away parts you know you don't need. I've tested that and it works perfectly. It does need you to have separated encoders for streaming and recording though, so you can't have the recording encoder set to "same as stream".

LWN: Spectre, Linux and Debian Development

  • Grand Schemozzle: Spectre continues to haunt

    The Spectre v1 hardware vulnerability is often characterized as allowing array bounds checks to be bypassed via speculative execution. While that is true, it is not the full extent of the shenanigans allowed by this particular class of vulnerabilities. For a demonstration of that fact, one need look no further than the "SWAPGS vulnerability" known as CVE-2019-1125 to the wider world or as "Grand Schemozzle" to the select group of developers who addressed it in the Linux kernel. Segments are mostly an architectural relic from the earliest days of x86; to a great extent, they did not survive into the 64-bit era. That said, a few segments still exist for specific tasks; these include FS and GS. The most common use for GS in current Linux systems is for thread-local or CPU-local storage; in the kernel, the GS segment points into the per-CPU data area. User space is allowed to make its own use of GS; the arch_prctl() system call can be used to change its value. As one might expect, the kernel needs to take care to use its own GS pointer rather than something that user space came up with. The x86 architecture obligingly provides an instruction, SWAPGS, to make that relatively easy. On entry into the kernel, a SWAPGS instruction will exchange the current GS segment pointer with a known value (which is kept in a model-specific register); executing SWAPGS again before returning to user space will restore the user-space value. Some carefully placed SWAPGS instructions will thus prevent the kernel from ever running with anything other than its own GS pointer. Or so one would think.

  • Long-term get_user_pages() and truncate(): solved at last?

    Technologies like RDMA benefit from the ability to map file-backed pages into memory. This benefit extends to persistent-memory devices, where the backing store for the file can be mapped directly without the need to go through the kernel's page cache. There is a fundamental conflict, though, between mapping a file's backing store directly and letting the filesystem code modify that file's on-disk layout, especially when the mapping is held in place for a long time (as RDMA is wont to do). The problem seems intractable, but there may yet be a solution in the form of this patch set (marked "V1,000,002") from Ira Weiny.

    The problems raised by the intersection of mapping a file (via get_user_pages()), persistent memory, and layout changes by the filesystem were the topic of a contentious session at the 2019 Linux Storage, Filesystem, and Memory-Management Summit. The core question can be reduced to this: what should happen if one process calls truncate() while another has an active get_user_pages() mapping that pins some or all of that file's pages? If the filesystem actually truncates the file while leaving the pages mapped, data corruption will certainly ensue. The options discussed in the session were either to fail the truncate() call or to revoke the mapping, causing the process that mapped the pages to receive a SIGBUS signal if it tries to access them afterward. There were passionate proponents for both options, and no conclusion was reached.

    Weiny's new patch set resolves the question by causing an operation like truncate() to fail if long-term mappings exist on the file in question. But it also requires user space to jump through some hoops before such mappings can be created in the first place. This approach comes from the conclusion that, in the real world, there is no rational use case where somebody might want to truncate a file that has been pinned into place for use with RDMA, so there is no reason to make that operation work. There is ample reason, though, for preventing filesystem corruption and for informing an application that gets into such a situation that it has done something wrong.

  • Hardening the "file" utility for Debian

    In addition, Biedl had already encountered problems with file running in environments with non-standard libraries that were loaded using the LD_PRELOAD environment variable. Those libraries can (and do) make system calls that the regular file binary does not make; those system calls were disallowed by the seccomp() filter.

    Building a Debian package often uses fakeroot to run commands in a way that makes it appear that they have root privileges for filesystem operations, without actually granting any extra privileges. That is done so that tarballs and the like can be created containing files with owners other than the user ID running the Debian packaging tools, for example. Fakeroot maintains a mapping of the "changes" made to owners, groups, and permissions for files so that it can report those to other tools that access them. It does so by interposing a library ahead of the GNU C library (glibc) to intercept file operations. In order to do its job, fakeroot spawns a daemon (faked) that maintains the state of the changes that programs make inside of the fakeroot. The libfakeroot library that is loaded with LD_PRELOAD then communicates with the daemon via either System V (sysv) interprocess communication (IPC) calls or TCP/IP. Biedl referred to a bug report in his message, where Helmut Grohne had reported a problem with running file inside a fakeroot.

Flameshot is a brilliant screenshot tool for Linux

The default screenshot tool in Ubuntu is fine for basic snips, but if you want a really good one you need to install a third-party screenshot app. Shutter is probably my favorite, but I decided to give Flameshot a try. Packages are available for various distributions, including Ubuntu, Arch, openSUSE and Debian. You can find installation instructions on the official project website.