Planet KDE

Effective HMI interaction and safety attention monitoring using eye tracking technology: DeepGlance Quick

Friday 22nd of March 2019 11:06:55 AM

Interacting effectively with increasingly widespread and advanced systems is one of the most important challenges of our time. Most modern HMIs are based on mouse, keyboard, or touch screen and allow even very complex devices to be controlled in a simple and intuitive way. In certain contexts, however, the user cannot make direct contact with a device; these are hands-free interactions, and voice commands are often used instead. But controlling a system by voice, however natural, is not effective for every type of operation or in every environment. Every technology has its own strengths, which is why HMI design and UX are the subject of continuous research aimed at offering ever more effective and natural interaction methods, often by combining complementary technologies.

Eye tracking technology

Eye tracking is an innovative technology that accurately estimates where a person is looking, moment by moment, using dedicated devices. Today this technology is spreading rapidly thanks to falling costs, the miniaturization of the devices, and the high degree of reliability achieved.

Knowing the direction of the gaze lets a system immediately understand which element we are interested in and want to issue commands to, without our having to move a cursor or touch a surface. Knowing where we look, a system can also scroll a page or image automatically, always offering the best framing of the point of interest. Gaze is also closely related to attention, and attention monitoring is essential when we carry out particularly critical operations.

DeepGlance Quick

The DeepGlance team has over 15 years of experience with this technology and its applications in the main markets, including medical, healthcare, automotive, retail and entertainment.

The current barrier to integrating this technology is that it requires a significant initial effort: specific knowledge of how eye movements work, of device management, and of how to transform the low-level data returned by an eye tracker into the high-level behaviors needed to control a system.

This is where DeepGlance Quick comes in: a QML extension plugin that encapsulates the complexity of using eye tracking technology and lets anyone integrate it immediately into their own Qt application. Using the plugin within Qt Design Studio, you can create and test gaze-controlled applications, or add eye tracking functionality to existing Qt projects, in a matter of minutes.

DeepGlance Quick was presented for the first time at SPS IPC Drives Nuremberg 2018, the most important European exhibition for industrial automation, where it was met with great enthusiasm by industry insiders.

How to get started using Qt Design Studio

Before starting, make sure you have installed Qt Design Studio 1.0.0.

An eye tracking device is required to control the application, but if you do not have one, you can use the eye tracker simulator provided by the plugin.

As a first step, download the DeepGlance Quick package.

The Tobii Stream Engine library is a dependency and is distributed in the plugin package. Make sure that Qt Design Studio and your application can find it by copying it to a folder in your system path.

The following example uses an EyeArea in a Rectangle that changes the Rectangle color to red when observed for a certain amount of time:

To get started, create a new Qt Design Studio project named HelloWorld and copy the DgQuick plugin folder into the “imports” directory.

Modify HelloWorld.qml adding the plugin types:

import QtQuick
import DgQuick
import HelloWorld

Item {
    width: Constants.width
    height: Constants.height

    Screen01 {
    }

    EyeTracker {
        model: EyeTracker.ModelTobii
        Component.onCompleted: start()
    }

    ErrorHandler {
        onError: {
            if (error === ErrorHandler.ErrorEyeTrackerSoftwareNotInstalled) {
                console.log("Tobii software not installed");
            } else if (error === ErrorHandler.ErrorEyeTrackerNotConnected) {
                console.log("Tobii eye tracker not connected");
            }
        }
    }

    EventHandler {
        anchors.fill: parent
    }
}

The EyeTracker type is used to control the eye tracker device, ErrorHandler to handle errors, and EventHandler to forward mouse and keyboard events to the plugin.

To use the simulator just set the EyeTracker model property to EyeTracker.ModelSimulator.

Add EyeButton.qml:

import QtQuick 2.10
import DgQuick 1.0

Rectangle {
    width: 100
    height: 100
    color: "green"

    EyeArea {
        anchors.fill: parent
        onAction: {
            parent.color = 'red'
        }
    }
}

The EyeArea is a gaze-sensitive area; it is the main component of the DeepGlance Quick plugin.

Finally, modify the Screen01.ui.qml adding the EyeButton:

import QtQuick 2.10
import HelloWorld 1.0

Rectangle {
    width: Constants.width
    height: Constants.height

    EyeButton {
        anchors.centerIn: parent
    }
}

For more information and details please refer to the official documentation.

Hands-on experience

Eye tracking is one of those technologies whose potential is fully understood only by experiencing it firsthand.

For this reason, the DeepGlance Plugin was used to develop a demonstrator that collects the main interactions which may be carried out using eye tracking technology.

Take a look at the tutorial video.

You can download the demonstrator at the link below:

Interaction and Experience design

Technology alone is not enough to create natural interfaces and a valuable user experience; the design must focus on user engagement and involvement. For that reason, DeepGlance has started a collaboration with the Interaction & Experience Design Research Lab of the Polytechnic University of Milan, whose design department is ranked 6th in the world. Recently, during the workshop “New paradigms for HC interaction and eye tracking driven interfaces”, realized in collaboration with Qt, the students of Digital and Interaction Design designed solutions that integrate eye tracking technology. Thanks to the integration of DeepGlance Quick in Qt Design Studio, they quickly developed working prototypes of their projects.

Use cases

There are many applications where eye tracking technology can add value.

In the medical field, there is often a need to consult images or information, or to control systems, in contexts where the hands cannot be used. This is what happens, for example, in robotic surgery, where the surgeon’s hands are engaged with the manipulators that control the robotic arms. In this case, the gaze is used to make adjustments, decide which robotic arm to associate with the right or left hand, automatically move the robotic endoscope camera based on the point of interest so as to always provide the best view, and finally to make sure that the surgeon’s attention is focused while controlling the system. In robotic surgery, the gaze is often used as a selection method, while a single physical button mounted on the manipulator is used to confirm an action.

Eye tracking technology is widespread in the healthcare field to enable communication for patients in different clinical conditions such as Cerebral Palsy, ALS, Autism, and Spinal Cord Injury. It is used to create personal communication and entertainment systems, in home or clinical/hospital environments, controlled exclusively with the eyes, which allow the patient to write and vocalize messages, send emails, browse books, surf the Internet, and control television, home automation systems and more.

In the field of retail, events, exhibitions and interactive museums, entertainment, gamification and shopping-experience systems can be developed to offer the visitor an engaging experience, counteract the “showrooming” effect, increase the time spent in the store or stand, and enhance the image of the brand with a “cool” and innovative experience.

It can also be used to create information points for hotels, airports, shopping malls, real estate agencies, and shop windows that present content adapted to the user’s interest, profile prospects in relation to the content consumed, and let users consume information confidentially, that is, without making any gesture or touching the display.

It can serve as an alternative method for interacting with vending machines for drinks, snacks, tickets and orders, avoiding the need to use the hands, for reasons of hygiene and comfort and to offer an innovative experience to the consumer.

In the marketing field, it can effectively present promotions and offers, increase the prospect’s exposure to the message the brand wants to convey, reach an audience not currently interested in the message or product, and open a quality two-way communication channel, including through cross-media marketing.

In control rooms, operator efficiency can be improved through multimodal interaction, that is, by combining eye tracking with traditional inputs to show information in an adaptive and contextualized way and to interact more quickly and naturally.

In the automotive sector, it is possible to monitor the driver’s attention, detecting distractions, and to allow interaction with the on-board instrumentation in a more effective and safe way, for example by understanding the target of a voice command, or by automatically selecting the element or device to interact with via the joystick on the steering wheel.

In short

DeepGlance Quick is a plugin that lets you integrate and test eye tracking technology in your Qt application in an immediate and effortless way, transforming a traditional interface into a gaze-controllable one. Eye tracking can be used as an exclusive input method or in combination with other technologies, to make interaction more effective and natural, making the most of each of them in a complementary way. On the machine side, it can be used to monitor the attention of an operator and to adapt the information shown in an interface based on the elements of interest to the user.

More info:

The post Effective HMI interaction and safety attention monitoring using eye tracking technology: DeepGlance Quick appeared first on Qt Blog.

No Deal Brexit

Thursday 21st of March 2019 02:43:27 PM

No deal Brexit will mean shutting off most of the supply capacity from the EU to Great Britain; the government itself says this will be chaotic. Many of the effects are unknown, but in the days and weeks that follow, food and medicine supplies will start to fail. The rules on moving money around, or even making a phone call, will be largely undefined. International travel will face unknown new bureaucracies. EU and WTO law also means there will need to be a hard border in Ireland again, risking a restart of terrorist warfare. Inflation will kick in, unemployment will skyrocket and people will die.

Although the UK government has dropped the dangerous slogan that “no deal is better than a bad deal”, it is astonishing they were allowed to get away with saying it for so long without challenge. There are still many members of the UK government who are perfectly happy with a chaotic no deal Brexit, and the Prime Minister, unwilling to change tactics, is using ever more populist language to insist that everyone should support her, threatening the whole of UK society in the greatest game of chicken since the Cold War. It would be trivial to revoke the Article 50 process, but unless that is chosen a no deal Brexit will happen.

The political process on this topic is broken and has been for many years; there is no campaign I can join from the groups I would normally expect to run one. The SNP, Greens and Quakers are not doing what they would usually do and enabling their members to have a voice. Religions in general exist to look after their members in times of crisis, but so far nobody in the Quakers that I’ve spoken to has any interest in any practical mitigation steps.

Most people in Britain still think it’ll never happen because the politicians will see sense and back down, but they are wrong: the politicians are not acting rationally, they are acting very irrationally, and all it takes for a no deal Brexit to happen is for no other decision to be taken.

So I find myself waving a European flag in Edinburgh each evening for the People’s Vote campaign, a London-based campaign with a load of problems but the only one going. I’ll go to London this weekend to take part in the giant protest there.

Please come along if you live in the UK.  Please also sign the petition to revoke article 50.  Wish us luck.


foss-north 2019: Community Day

Thursday 21st of March 2019 07:28:01 AM

I don’t dare to count the days until foss-north 2019, but it is very soon. One of the changes this year is that we are expanding the conference with an additional community day.

The idea with the community day here is that we arrange for conference rooms all across town and invite open source projects to use them for workshops, install fests, hackathons, dev sprints or whatever else they see fit. It is basically a day of mini-conferences spread out across town.

The community day is on April 7, the day before the conference days, and is free of charge.

This part of the arrangements has actually been one of the most interesting ones, as it involves a lot of coordination. I’d like to start by thanking all our room hosts. Without them, the day would not be possible!

The other half of the puzzle is our projects. I am very happy to see such a large group of projects willing to try this out for the first time, and I hope for lots and lots of visitors so that they will want to come back in the future as well.

The location of each project, as well as the contents of each room can be found on the community day page. Even though the day is free of charge, some of the rooms want you to pre-register as the seats might be limited, or they want to know if they expect five or fifty visitors. I would also love for you to register at our community day meetup, just to give me an indication of the number of participants.

Also – don’t forget to get your tickets for the conference days – and combine this with a training. We’re already past the visitor count of the 2018 event, so we will most likely be sold out this year!

Making the Most of your Memory with mmap

Wednesday 20th of March 2019 10:00:25 AM

Sometimes it seems that we have nearly infinite memory resources, especially compared to the tiny 48K RAM of yesteryear’s 8-bit computers. But today’s complex applications can soak up megabytes before you know it. While it would be great if developers planned their memory management for every application, thinking through a memory management strategy is crucial for applications with especially RAM-intensive features like image/video processing, massive databases, and machine learning.

How do you plan a memory management strategy? It’s very dependent on your application and its requirements, but a good start is to work with your operating system instead of against it. That’s where memory mapping comes in. mmap can make your application’s performance better while also improving its memory profile by letting you leverage the same virtual memory paging machinery that the OS itself relies on. Smart use of the memory mapping API (Qt, UNIX, Windows) allows you to transparently handle massive data sets, automatically paging them out of memory as needed – and it’s much better than you’re likely to manage with a roll-your-own memory management scheme.

Here’s a real-life use case of how we used mmap to optimize RAM use in QiTissue, a medical image application. This application loads, merges, manipulates, and displays highly detailed microscope images that are up to gigabytes in size. It needs to be efficient, or it risks running out of memory even on desktops loaded with RAM.

QiTissue highlighting tumor cell division on a digital microscopic image


The above image is stitched together from many individual microscope images, and the algorithm needs access to many large bitmaps to do that – a pretty memory-intensive process. Capturing a memory snapshot of the application in action shows that memory use grows as the application pulls in the images required for the stitching algorithm. The top purple line, Resident Set Size (RSS), is the amount of RAM that belongs to the application and that physically resides in memory. That curve reveals that memory use fluctuates but tops out over 6GB.

Memory use before mmap optimization


We used mmap extensively in our rewrite of QiTissue to help its overall memory profile. In this case, we decompress the microscope images into files and then memory map those files into memory. Note that using memory-mapped files won’t eliminate the RAM needed to load, manipulate, or display the images. However, it does allow the OS to do what it does best – intelligently manage memory so that our application operates effectively with the RAM that’s available.


Memory use after mmap optimization

The post-mmap optimization memory diagram looks similar, so what are the practical differences?

  • Heap consumption drops. For our stitching algorithm, the heap size drops from around 500MB down to around 35MB, which is a fraction of its original size. The memory hasn’t disappeared; it’s still being used and counts against the application’s RSS total. However, because the memory used isn’t part of the heap, it doesn’t come out of the OS’s virtual memory paging file. That’s important because there’s a cap on the total amount of RAM that can be allocated. The size of the paging file is usually set to be around 1.5 to 2 times the physical RAM, and this size limit prevents the system from wasting all its time swapping memory in and out from disk. That means that even when using virtual memory, the total amount of memory a program can access is limited. By mmapping memory, you can intelligently exceed that cap if you need to.
  • Less dirty memory. Dirty memory is memory that has been written to and, as a result, no longer reflects the copy paged in from disk. If a memory block is dirty, the OS has to write it back out to disk when it’s paged out, and that write introduces a huge performance hit. Why does our dirty memory drop? The heap is a living data structure that manages all the new/delete/malloc/free calls your application makes. As normal C++ code lives and breathes, the C++ malloc libraries must maintain their linked lists of memory blocks by updating heap structures. Of course, once touched, that memory must be flushed back to disk. Moving our big data out of the heap and into mmapped files prevents dirty flushes of those large memory structures, saving a lot of unnecessary CPU cycles.
  • Better RSS profile. Moving the majority of application data out of the heap and into mmapped files also trims down the structures needed to maintain that data. An operating system’s raw memory paging, which mmap leverages, may be limited to 4096-byte blocks, but it is pretty efficient at managing those blocks. That is reflected in the memory required when we convert to a mmap-based architecture. Not only do we need a smaller amount of resident RAM (6.2GB before vs 5.8GB after), but our overall RAM consumption profile also peaks less and recovers faster when it does peak, meaning there is more memory left for all the other OS tasks and more RAM that can be used before paging memory back in from disk is required.

Best of all, incorporating mmap isn’t too difficult to do. Less memory, faster, and easy: what’s not to like? I’ve put together a small example on GitHub if you want to experiment with your own mmap application:

The post Making the Most of your Memory with mmap appeared first on KDAB.

foss-north 2019: Training Day

Tuesday 19th of March 2019 02:31:46 PM

The 2019 incarnation of foss-north is less than a month away. This year we’re extending the conference in two directions: a training day and a community day. This time, I wanted to write about the training day.

The training day, April 10, is an additional day for those who want to extend the conference with a day of dedicated training. I’m very happy to have two experienced and well-known trainers on board: Michael Kerrisk and Chris Simmonds. Both have years of training experience.

Michael will teach the details of dynamic linking. The topic may seem trivial, but when you start scratching the surface there are a lot of details to discover, such as how to handle version compatibility, how symbol resolution really works, and so on. You can read more about the Building and Using Shared Libraries on Linux training here.

Chris will run a getting-started-with-embedded-Linux training. Using BeagleBone Black devices, the participants will learn how to build Linux for the target, how to get devices such as GPIO and i2c working, and more. You can read more about the Fast Track to Embedded Linux training here.

The best part of enrolling for training at foss-north is that you also get full access to the two conference days, and that you help fund the conference itself. If you are interested, check out the tickets page.

Interview with Svetlana Rastegina

Monday 18th of March 2019 08:00:47 AM

Could you tell us something about yourself?

My name is Svetlana Rastegina. I work in graphic design and 3D modeling. I have been painting since I was twelve. I have a background in watercolour painting.

Do you paint professionally, as a hobby artist, or both?

I’m a professional graphic artist. Although, sometimes I get carried away drawing for myself.

What genre(s) do you work in?

I work in various genres: fantasy, cartoons, conceptual, children’s illustration. Among these genres, my favorite is fantasy.

Whose work inspires you most — who are your role models as an artist?

I don’t have a single favorite artist. The creativity of many artists appeals to me and I try to learn the best from each of them. Among Russian illustrators I like Vladyslav Yerko.

How and when did you get to try digital painting for the first time?

For the first time I tried digital painting at the university, where I studied environment design. I started drawing on the tablet in 2006.

What makes you choose digital over traditional painting?

I think I prefer working in digital painting because it is more dynamic and offers more freedom.

How did you find out about Krita?

I prefer software with an open source license. One of my friends suggested I try Krita, and it met my needs.

What was your first impression?

It was very easy to start working in Krita. I fell in love with it almost immediately.

What do you love about Krita?

I like the big set of brushes and filters. And above all I like stability.

What do you think needs improvement in Krita? Is there anything that really annoys you?

I would like a bigger dynamic menu to have more brushes on hand.

What sets Krita apart from the other tools that you use?

Krita has one of the most convenient interfaces and tool sets for working with illustrations.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

It is very difficult to choose only one project. From my recent works I would pick the Cellist, because I like to work in the fantasy genre and monochrome colors.

What techniques and brushes did you use in it?

I work in a smoothed-dab technique with subsequent refinement of the details, often using textures. I make multiple layers, with separate layers for the sketch, shadows, highlights and some details. Some brushes used in the work: Basic, Blender blur, Dry Bristles, Chalc Grainy, Blender Textured Soft.

Where can people see more of your work?

If you want to know more about my works you can visit my website:
Other resources where you can find my works:

Anything else you’d like to share?

I am always open for communication and new projects.

The Second Return of the Fluffy Bunny

Sunday 17th of March 2019 11:26:11 AM

The old among us might remember KDE4 and something one could call the “Pink Phase”, when people explored how “interesting” they could make their digital workplace by applying certain colors and themes. Don’t most of us sometimes need a fluffy, color-intensive world to escape to, if only to learn to value reality again when things get too fluffy?

The most outstanding work for that purpose was done by Florian Schepper, who created the Plasma theme “Fluffy Bunny”, which won hearts over at first sight. Sadly, the theme bundle got lost, was recovered, only to be lost from the stores again over time. Time to repeat that, at least the recovering part.

And so last week the internet and local backups had been scanned to restore the theme again, to quick first success:

Well, besides the regressions Plasma 5 has compared to the old Plasma: with only thin, non-tiled border themes having been trendy and in use in recent years, current Plasma sadly has some issues, and it also assumes that panel borders are rectangular when creating the default BlurBehind mask. Some first patches (1, 2, 3) are already under review.

Get the initial restored version from your Plasma desktop via System settings/Workspace Theme/Plasma Theme/Get new Plasma Themes…/Search “Fluffy Bunny”, or via the Web interface, and enjoy a bit of fluffy Plasma.

Next up: restoring the “Plasma Bunny” theme from the once drafted “Fluffy” Linux distribution… less fluffy, but more pink!
Update: A first version of the newly named “Unicorn” theme is now up in the store for your entertainment.

KDE Usability & Productivity: Week 62

Saturday 16th of March 2019 11:59:02 PM

Week 62 for KDE’s Usability & Productivity initiative is here, and we didn’t let up! We’ve got new features, bugfixes, more icons… we’ve got everything! Take a look:

New Features

Bugfixes & Performance Improvements

User Interface Improvements

Next week, your name could be in this list! Not sure how? Just ask! I’ve helped mentor a number of new contributors recently and I’d love to help you, too! You can also check out, and find out how you can help be a part of something that really matters. You don’t have to already be a programmer. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!

If you find KDE software useful, consider making a donation to the KDE e.V. foundation.

KDE Itinerary - Using Public Transport Data

Saturday 16th of March 2019 09:15:00 AM

Now that we have a way to access realtime public transport data, this needs to be integrated into KDE Itinerary. There are three use-cases being looked at so far, described below.

Realtime information

The first obvious use-case is displaying delays, platform changes, etc. in the timeline and reservation details views, and notifying about such changes. This is already implemented for trains based on KPublicTransport, and to a very limited extent (gate changes) for flights, using KPkPass for Apple Wallet boarding passes that contain a working update API endpoint.

Train delay and platform changes in the train trip details page

For KDE Itinerary to check for changes you either need to use the “Check for Updates” action in the global action drawer, or enable automatic checking in the settings. KDE Itinerary will not reach out to online services on its own by default.

When enabled, the automatic polling tries to adapt the polling frequency to how far away an arrival or departure is, so you get current information within minutes without wasting battery or bandwidth. This still might need a bit of fine tuning and/or support for more corner cases (a departure delay past the scheduled arrival time was such a case for example), so feedback on this from use in practice is very much welcome.

Whenever changes are found, KDE Itinerary will also trigger a notification, which should work on all platforms.

Train delay notification on Android

Querying online services for realtime data might turn up additional information that was previously not included in the timeline data, for example geo coordinates for all stations along the way. KDE Itinerary will try to augment the existing data with whatever new information it comes across this way, so having realtime data polling enabled will also result in more navigation options and weather forecasts being shown for more locations.

Alternative connections

The second integration point currently being worked on is selecting alternative train connections, for example after having missed a connection, or for an unbound reservation that isn’t tied to a specific trip to begin with.

This is available in the context drawer on the details page of the corresponding train reservation. KDE Itinerary will then query for journeys along the same route as the current reservation, and allow you to pick one of the results as the new itinerary for this trip. Realtime information is of course shown here too, if available.

Three alternative connections options for a train trip, the first is expanded to show details

While displaying the connections already works, actually saving the result is still missing. Nevertheless feedback is already useful to see if sensible results are returned for existing bookings.

Filling gaps

The third use-case is filling gaps in the itinerary, such as how to get from the airport to the hotel by local transport. Or similarly, determining when you have to leave from your current location to make it to the airport in time for your flight. This would result in additional elements in the timeline containing suggested public transport routes.

Implementation of this hasn’t started yet; technically it’s a variation of the journey query already used for the previous point. The bigger challenge will therefore likely be presenting this in a usable and useful way.


This is all work in progress, so it’s the best point in time to influence things, any input or help is very much welcome of course. See our Phabricator workboard for what’s on the todo list, for coordinating work and for collecting ideas. For questions and suggestions, please feel free to join us on the KDE PIM mailing list or in the #kontact channel on Freenode or Matrix.

If you happen to know a source for realtime flight information that could be usable by Free Software without causing recurring cost I’d also be thankful for hints :)

Qt 5.12.2 Released

Friday 15th of March 2019 09:45:31 AM

I am pleased to announce that the second patch release of Qt 5.12 LTS, Qt 5.12.2, is released today. While not adding new features, the Qt 5.12.2 release provides a number of bug fixes and other improvements.

Compared to Qt 5.12.1, the new Qt 5.12.2 contains more than 250 bug fixes. For details of the most important changes, please check the Change files of Qt 5.12.2.

With Qt 5.12.2 we bring back the widely requested MinGW 32-bit prebuilt binaries in addition to the 64-bit ones.

Qt 5.12 LTS will receive many more patch releases throughout the coming years, and we recommend that all actively developed projects migrate to Qt 5.12 LTS. Qt 5.9 LTS is currently in the ‘Strict’ phase and receives only selected important bug and security fixes, while Qt 5.12 LTS is currently receiving all bug fixes. With Qt 5.6 support ending in March 2019, all active projects still using Qt 5.6 LTS should now migrate to a later version of Qt.

Qt 5.12.2 is available via the maintenance tool of the online installer. For new installations, please download the latest online installer from the Qt Account portal or from the Download page. Offline packages are available for commercial users in the Qt Account portal and on the Download page for open-source users. You can also try out the Commercial evaluation option from the Download page.

The post Qt 5.12.2 Released appeared first on Qt Blog.

Template meta-functions for detecting template instantiation

Friday 15th of March 2019 12:00:00 AM

I’ve been playing around with type meta-tagging for my Voy reactive streams library (more on that some other time) and realized how useful it is to be able to check whether a given type is an instantiation of some class template, so I decided to write a short post about it.

Imagine you are writing a generic function, and you need to check whether you are given a value or a tuple so that you can unpack the tuple before doing anything with it.

If we want to check whether a type T is an instance of std::tuple or not, we can create the following meta-function:

template <typename T>
struct is_tuple : std::false_type {};

template <typename... Args>
struct is_tuple<std::tuple<Args...>> : std::true_type {};

The meaning of this code is simple:

  • By default, we return false for any type we are given
  • If the compiler is able to match the type T with std::tuple<Args...> for some list of types, then we return true.

To make it easier to use, we can implement a _v version of the function like the meta-functions in <type_traits> have:

template <typename T>
constexpr bool is_tuple_v = is_tuple<T>::value;

We can now use it to implement a merged std::invoke+std::apply function which calls std::apply if the user passes in a tuple, and std::invoke otherwise:

template <typename F, typename T>
auto call(F&& f, T&& t)
{
    if constexpr (is_tuple_v<T>) {
        return std::apply(FWD(f), FWD(t));
    } else {
        return std::invoke(FWD(f), FWD(t));
    }
}

Up one level

The previous meta-function works for tuples. What if we needed to check whether a type is an instance of std::vector or std::basic_string?

We could copy the previously defined meta-function, and replace all occurrences of “tuple” with “vector” or “basic_string”. But we know better than to do copy-paste-oriented programming.

Instead, we can increase the level of templatedness.

For STL algorithms, we use template functions instead of ordinary functions to allow us to pass in other functions as arguments. Here, we need to use template templates instead of ordinary templates.

template <template <typename...> typename Template, typename Type>
struct is_instance_of : std::false_type {};

template <template <typename...> typename Template, typename... Args>
struct is_instance_of<Template, Template<Args...>> : std::true_type {};

template <template <typename...> typename Template, typename Type>
constexpr bool is_instance_of_v = is_instance_of<Template, Type>::value;

The template <template <typename...> typename Template> part allows us to pass in template names instead of template instantiations (concrete types) to a template meta-function.

We can now check whether a specific type is an instantiation of a given template:

static_assert(is_instance_of_v<std::basic_string, std::string>);
static_assert(is_instance_of_v<std::tuple, std::tuple<int, double>>);
static_assert(!is_instance_of_v<std::tuple, std::vector<int>>);
static_assert(!is_instance_of_v<std::vector, std::tuple<int, double>>);

A similar trick is used alongside void_t to implement the detection idiom which allows us to do some compile-time type introspection and even simulate concepts.

I’ll cover the detection idiom in some of the future blog posts.

P.S. There is a 50% discount on Functional Programming in C++ and other books until Tuesday, 19th of March. Use the code wm031519lt at checkout.

You can support my work on Patreon, or you can get my book Functional Programming in C++ at Manning if you're into that sort of thing.

Krita 4.2.0: the First Painting Application to bring HDR Support to Windows

Thursday 14th of March 2019 09:53:40 AM

We’re deep in bug fixing mode now, because in May we want to release the next major version of Krita: Krita 4.2.0. While there will be a host of new features, a plethora of bug fixes and performance improvements, one thing is unique: support for painting in HDR mode. Krita is the very first application, open source or proprietary, that offers this!

So, today we release a preview version of Krita 4.2.0 with HDR support baked in, so you can give the new functionality a try!

Of course, at this moment, only Windows 10 supports HDR monitors, and only with some very specific hardware. Your CPU and GPU need to be new enough, and you need to have a monitor that supports HDR. We know that the brave folks at Intel are working on HDR support for Linux, though!

What is HDR?

HDR stands for High Dynamic Range. The opposite is, of course, Low Dynamic Range.

Now, many people, when hearing the word “HDR”, will think of the HDR mode of their phone’s camera. Those cameras merge images taken at different exposure levels into one image to create, within the small dynamic range of a normal image, the illusion of a high dynamic range image, often with quite unnatural results.

This is not that! Tone-mapping is old hat. These days, manufacturers are bringing out new monitors that can go much brighter than traditional monitors, up to 1000 nits (a unit of brightness), or even brighter for professional monitors. And modern systems, with Intel 7th generation Core CPUs, support these monitors.

And it’s not just brightness: these days most normal monitors are manufactured to display the sRGB gamut. This is fairly limited, and lacks quite a bit of the greens (some professional monitors have a wider gamut, of course). HDR monitors use a far wider gamut, with the Rec. 2020 colorspace. And instead of using traditional exponential gamma correction, they use the Perceptual Quantizer (PQ), which not only extends the dynamic range to sun-bright values, but also allows encoding very dark areas not available in the usual sRGB.

And finally, many laptop panels only support 6 bits per channel; most monitors only 8 bits, monitors for graphics professionals 10 bits per channel — but HDR monitors support from 10 to 16 bits per channel. This means much nicer gradations.

It’s early days, though, and from a developers’ point of view, the current situation is messy. It’s hard to understand how everything fits together, and we’ll surely have to update Krita in the future when things straighten out, or if we discover we’ve misunderstood something.

So… The only platform that supports HDR is Windows 10, via DirectX. Linux, nope; OpenGL, nah; macOS, not likely. Since Krita speaks OpenGL, not DirectX, we had to hack the OpenGL-to-DirectX compatibility layer, ANGLE, to support the extensions needed to work in HDR. Then we had to hack Qt to make it possible to convert the UI (things like buttons and panels) from sRGB to p2020-pq, while keeping the main canvas unconverted. We had to add, of course, an HDR-capable color selector. All in all, quite a few months of hard work.

That’s just the technical bit: the important bit is how people actually can create new HDR images.

So, why is this cool?

You’ve got a wider range of colors to work with. You’ve got a wider range of dark to light to work with: you can actually paint with pure light, if you want to. What you’re doing when creating an HDR image is, in a way, not painting something as it should be shown on a display, but as the light falls on a scene. There’s so much new flexibility here that we’ll be discovering new ways to make use of it for quite some time!

If you had an HDR-compatible set-up, and a browser that supported HDR, this video could be in HDR:

(Image by Wolthera van Hövell tot Westerflier)

And how do I use it?

Assuming you have an HDR-capable monitor, a DisplayPort 1.4 or HDMI 2.0a (the ‘a’ is important!) or higher cable, the latest version of Windows 10 with WDDM 2.4 drivers and a CPU and GPU that support this, this is how it works:

You have to switch the display to HDR mode manually, in the Windows settings utility. Now Windows will start talking to the display in p2020-pq mode. To make sure that you don’t freak out because everything looks weird, you’ll have to select a default SDR brightness level.

You have to configure Krita to support HDR. In the Settings → Configure Krita → Display settings panel you need to select your preferred surface. You’ll also want to select the HDR-capable small color selector from the Settings → Dockers menu.

To create a proper HDR image, you will need to make a canvas using a profile with the rec 2020 gamut and a linear tone-response-curve: “Rec2020-elle-V4-g10.icc” is the profile you need to choose. HDR images are standardized to use the Rec2020 gamut, and the PQ trc. However, a linear TRC is easier to edit images in, so we don’t convert to PQ until we’re satisfied with our image.

Krita’s native .kra file format can save HDR images just fine. You should use that as your working format. For sharing with other image editors, you should use the OpenEXR format. For sharing on the Web, you can use the expanded PNG format. Since all this is so new, there’s not much support for that standard yet.

You can also make HDR animations… Which is pretty cool! And you can export your animations to MP4 with H.265. You need a version of FFmpeg that supports H.265. And after telling Krita where to find that, it’s simply a matter of:

  • Have an animation open.
  • Select File → Render Animation
  • Select Video
  • Select Render as MPEG-4 video or Matroska
  • Press the configure button next to the file format dropdown.
  • At the top, select ‘H.265, MPEG-H Part 2 (HEVC)’.
  • For the Profile, select ‘main10’.
  • The HDR Mode checkbox should now be enabled: toggle it.
  • Click ‘HDR Metadata’ to configure the HDR metadata
  • Finally, when done, click ‘render’.

If you have a 7th gen Core CPU or later with integrated Intel graphics, you can take advantage of hardware-accelerated encoding to save time in the export stage: FFmpeg does that for you.


Sorry, this version of Krita is only useful for Windows users. Linux graphics developers, get a move on!


New package in Fedora: python-xlsxwriter

Thursday 14th of March 2019 09:42:09 AM

XlsxWriter is a Python module for creating files in xlsx (MS Excel 2007+) format. It is used by certain Python modules some of our customers needed (such as the OCA report_xlsx module).

This module is available on PyPI but it was not packaged for Fedora. I’ve decided to maintain it in Fedora and created a package review request, which was helpfully reviewed by Robert-André Mauchin.

The package, providing the Python 3 compatible module, is available for Fedora 28 onwards.

All new Okteta features of version 0.26 in a picture

Monday 11th of March 2019 03:50:47 PM

Okteta, a simple editor for the raw data of files, has been released in version 0.26.0. The 0.26 series mainly brings a clean-up of the public API of the provided shared libraries. The UI & features of the Okteta program have been kept stable, with one new feature added: a context menu is now available in the byte array viewer/editor.

Since the port to Qt 5 & KF5, Okteta has not seen work on new features. Instead, some rework of the internal architecture has been started, and is still ongoing.

With this release, though, a small feature has been added again, and thus the chance to pick up on the good tradition of the series of all-new-features-in-a-picture, like done for 0.9, 0.7, 0.4, 0.3, and 0.2. See in one quick glance what is new since 0.9 (sic):

Time to merge!

Monday 11th of March 2019 03:38:18 PM

After many delays, we finally think the Timeline Refactoring branch is ready for production. This means that in the next days major changes will land in Kdenlive’s git master branch, scheduled for the KDE Applications 19.04 release.

A message to contributors, packagers and translators

A few extra dependencies have been added, since we now rely on QML for the timeline as well as QtMultimedia to enable the new audio recording feature. For packagers and those compiling themselves, our development info page should give you all the information needed to successfully build the new version.

We hope everything goes smoothly and will be having our second sprint near Lyon in France next week to fix the remaining issues.

We all hope you will enjoy this new version, more details will appear in the next weeks.

Stay tuned!

Inside Kdenlive: How to fuzz a complex GUI application?

Sunday 10th of March 2019 08:13:47 PM
Introduction and approach

Fuzz-testing, also called fuzzing, is an essential tool in the tool-box of software developers and testers. The idea is simple yet effective: throw a huge amount of random test inputs at the target program until you manage to get it to crash or otherwise misbehave. These crashes often reveal defects in the code: overlooked corner-cases that are at best annoying for the end-user who stumbles upon them, or at worst dangerous if the holes have security implications. As part of our efforts to refactor the main components of Kdenlive, this is one of the tools we wanted to use to ensure as much stability as possible.

One of the most commonly used fuzzing libraries is called LibFuzzer, and is built upon LLVM. It has already helped find thousands of issues in a wide range of projects, including well-tested ones. LibFuzzer is a coverage-based fuzzer, which means that it attempts to generate inputs that create new execution paths. That way, it tries to cover the full scope of the target software, and is more likely to uncover corner-cases.
Building a library (in this case Kdenlive’s core library) with the correct instrumentation to support fuzzing is straightforward: with Clang, you simply need to pass the flag -fsanitize=fuzzer-no-link. And while we’re at it, we can also add Clang’s extremely useful Address Sanitizer with -fsanitize=fuzzer-no-link,address. This way, we are going to detect any kind of memory malfunction as soon as it occurs.

Now that the library is ready for fuzzing, we need to create a fuzz target. That corresponds to the entry point of our program, to which the fuzzer is going to pass the random inputs. In general, it looks like this:

extern "C" int LLVMFuzzerTestOneInput(const uint8_t *Data, size_t Size)
{
    DoSomethingWithData(Data, Size);
    return 0;
}

Now, the challenge is to come up with a good fuzzing function. Utilities that read from stdin or from an input file, like ffmpeg, any compression tool, any conversion tool, etc., are easy to fuzz since we just need to fuzz the data we feed them. In the case of Kdenlive, we also read project files, but this represents only a tiny amount of what a full editing software is supposed to do, and furthermore our project loading logic is mostly deferred to third-party libraries. So, how to fuzz the interesting parts of Kdenlive? Well, if you look at it from a distance, Kdenlive can more or less be summed up as “just” a (rich) GUI sitting on top of existing video manipulation libraries. That means that our most prominent source of inputs is the user: at its core, what Kdenlive must excel at is handling any kind of action the user may want to perform.

During the rewrite of our core modules, we changed the architecture a bit so that there is a clear separation between the model, which handles all the actual logic, and the view (written in QML), which is designed to be as thin as possible. Essentially, this means that any action executed by the user corresponds exactly to one or several calls to the model’s API. This makes our life easier when fuzzing: in order to effectively push Kdenlive to its limits, we simply need to call random model functions in a random order with random parameters.

The plan is getting clear now, but one crucial piece is missing: how do we turn the input provided by LibFuzzer (a random string) into a random sequence of model actions?

Generating a maintainable script language

One obvious idea would be to define a scripting language that maps text to actions; for example, move 0 3 4 could move clip 0 on track 3 to position 4. However, writing such a scripting language from scratch, and then an interpreter for it, is a daunting task and hard to maintain: each time a new function is added to the API it must be added to the script language as well, and any change in the API is likely to break the interpreter.

Basically, we want to generate the script language and its interpreter programmatically, in a semi-automated way. One way to do this is to use reflection: by enumerating all the methods in the API, we can figure out what is a legal operation, and interpret it correctly if it is indeed an existing operation. As of today, C++ still lacks native reflection capabilities, but there are some great libraries out there that fill this gap a little. We used RTTR, a runtime reflection library. It requires registering the functions you want to make available: in the following snippet we register a method called “requestClipsUngroup” from our timeline model:

RTTR_REGISTRATION
{
    using namespace rttr;
    registration::class_<TimelineModel>("TimelineModel")
        .method("requestClipsUngroup", &TimelineModel::requestClipsUngroup)
        (parameter_names("itemIds", "logUndo"));
}

Note that specifying the names of the parameters is technically not required by RTTR, but it is useful for our purposes.

Once we have that, our script interpreter is much easier to write: when we obtain a string like “requestClipDoSomething”, we check the registered methods for anything similar, and if we find it, we also know which arguments to expect (their name as well as their type), so we can parse that easily as well (arguments are typically numbers, booleans or strings so they don’t require complicated parsing).

For Kdenlive, there is one caveat though: the model is, by design, very finicky about the inputs it receives. In our example function, the first parameter, itemIds, is a list of ids of items in the timeline (clips, compositions, …). If one of the elements of the input list is NOT a known item id, the model is going to abort, because everything is checked through an assert. This behavior was designed to make sure that the view cannot sneak in an invalid model call without us knowing about it (by getting an immediate and irrevocable crash). The problem is that this is not going to play well within a fuzzing framework: if we let the fuzzer come up with random ids, there is little chance that they are going to be valid ids and the model is going to be crashing all the time, which is not what we want.
To work around this, we implemented a small addition in our interpreter: whenever the argument is some kind of object id, for example an item id, we compute a list of currently valid ids (in the example, allValidItemIds). That way, if we parse an int with value i for this argument, we send allValidItemIds[i % allValidItemIds.size()] to the model instead. This ensures that all the ids it receives are always going to be valid.

The final step for this interpreter to be perfect is to automatically create a small translation table between the long API names and shorter aliases. The idea behind this is that the fuzzer is less likely to randomly stumble upon a complicated name like “requestClipUngroup” than a one letter name like “u”. In practice, LibFuzzer supports dictionaries, so it could in theory be able to deal with these complicated names, but maintaining a dictionary is one extra hassle, so if we can avoid it, it’s probably for the best. All in all, here is a sample of a valid script:

a
c red 20 1
c blue 20 1
c green 20 1
b 0 -1 -1 $$ 0
b 0 -1 -1 $$ 0
b 0 -1 -1 $$ 0
e 0 294 295 0 1 1 0
e 0 298 295 23 1 1 0
e 0 299 295 45 1 1 0
e 0 300 296 4 1 1 0
e 0 299 295 43 1 1 0
e 0 300 296 9 1 1 0
l 0 2 299 294 1 0
l 0 2 300 294 1 0
e 0 299 295 43 1 1 0
e 0 300 296 9 1 1 0
e 0 299 295 48 1 1 0
e 0 294 296 8 1 1 0
e 0 294 295 3 1 1 0

Generating a corpus

To work optimally, LibFuzzer needs an initial corpus: an initial set of inputs that trigger diverse behaviors.
One could write some scripts by hand, but once again that would not scale very well and would not be maintainable. Luckily, we already have a trove of small snippets that call a lot of model functions: our unit-tests. So the question becomes: how do we (automatically) convert our unit-tests into scripts with the syntax described above?

The answer is, once again, reflection. We have a singleton class Logging that keeps track of all the operations that have been requested. We then instrument our API functions so that we can log the fact that they have been called:

bool TimelineModel::requestClipsUngroup(const std::unordered_set<int> &itemIds, bool logUndo)
{
    TRACE(itemIds, logUndo);
    // do the actual work here
    return result;
}

Here TRACE is a convenience macro that looks like this:

#define TRACE(...) \
    LogGuard __guard; \
    if (__guard.hasGuard()) { \
        Logger::log(this, __FUNCTION__, {__VA_ARGS__}); \
    }

Note that it passes the pointer (this), the function name (__FUNCTION__) and the arguments to the logger.
The LogGuard is a small RAII utility that prevents duplicate logging in the case of nested calls: if our code looks like this:

int TimelineModel::foo(int foobaz)
{
    TRACE(foobaz);
    return foobaz * 5;
}

int TimelineModel::bar(int barbaz)
{
    TRACE(barbaz);
    return foo(barbaz - 2);
}

If bar is called, we want to have only one logging entry, and discard the one that would result from the inner foo call. To this end, the LogGuard prevents further logging until it is deleted, which happens when it goes out of scope, i.e. when bar returns. Sample implementation:

class LogGuard
{
public:
    LogGuard()
        : m_hasGuard(Logger::start_logging())
    {}
    ~LogGuard()
    {
        if (m_hasGuard) Logger::stop_logging();
    }
    // @brief Returns true if we are the top-level caller.
    bool hasGuard() const { return m_hasGuard; }
protected:
    bool m_hasGuard = false;
};

Once we have a list of the function calls, we can generate the script by simply dumping them in a format that is consistent with what the interpreter expects.

This kind of corpus is very useful in practice. Here is the output of LibFuzzer after a few iterations on an empty corpus:

#1944 NEW cov: 6521 ft: 10397 corp: 46/108b lim: 4 exec/s: 60 rss: 555Mb L: 4/4 MS: 1 ChangeBit-

The important metric is “cov”, which indicates how well we cover the full source code. Note that at this point, not a single valid API call has been made.

With a corpus generated through our unit tests, it looks like this

#40 REDUCE cov: 13272 ft: 65474 corp: 1148/1077Kb lim: 6 exec/s: 2 rss: 1340Mb L: 1882/8652 MS: 2 CMP-EraseBytes- DE: "movit.convert"-

The coverage is more than twice as big! And at this point a lot of valid calls are made all the time.


In a nutshell, here are the steps we went through to be able to efficiently fuzz a complex application like Kdenlive:

  • Structure the code so that model and view are well separated
  • Generate a scripting language using reflection, to be able to query the model
  • Trace the API calls of the unit-tests to generate an initial script corpus
  • Fuzz the model through the script interface
  • Profit!

For us at Kdenlive, this approach has already proved useful to uncover bugs that were not caught by our test-cases; see this commit for example. Note that our logger is able to emit either a script or a unit-test after an execution: this means that when we find a script that triggers a bug, we can automatically convert it back to a unit-test to be added to our test library!

This week in Usability & Productivity, part 61

Sunday 10th of March 2019 07:01:29 AM

In week 61 for KDE’s Usability & Productivity initiative, the MVP has got to be the KDE community itself–all of you. You see, Spectacle has gotten a lot of work thanks to new contributors David Redondo and Nils Rother after I mentioned on Reddit a week ago that “a team of 2-4 developers could make Spectacle sparkle in a month“. I wasn’t expecting y’all to take it so literally! The KDE community continues to amaze. But there’s a lot more, too:

New Features

Bugfixes & Performance Improvements

User Interface Improvements

Next week, your name could be in this list! Not sure how? Just ask! I’ve helped mentor a number of new contributors recently and I’d love to help you, too! You can also find out how you can help be a part of something that really matters. You don’t have to already be a programmer. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!

If you find KDE software useful, consider making a donation to the KDE e.V. foundation.

pulseaudio-qt 1.0.0 is out!

Thursday 7th of March 2019 10:49:12 PM

pulseaudio-qt 1.0.0 is out!

It’s a Qt framework C++ bindings library for the PulseAudio sound system.

It was previously part of plasma-pa but is now standalone so it can be used by KDE Connect and anyone else who wants it.

sha256: a0a4f2793e642e77a5c4698421becc8c046c426811e9d270ff2a31b49bae10df pulseaudio-qt-1.0.0.tar.xz

The tar is signed by my GPG key.


libqaccessibilityclient 0.4.0

Thursday 7th of March 2019 02:41:21 PM

I’ve released libqaccessibilityclient 0.4.0.


  • bump version for new release
  • Revert “add file to extract strings”
  • add file to extract strings
  • Set include dir for exported library target
  • Create and install also a QAccessibilityClientConfigVersion.cmake file
  • Create proper CMake Config file which also checks for deps
  • Use imported targets for Qt libs, support BUILD_TESTING option
  • Use newer signature of cmake’s add_test()
  • Remove usage of dead QT_USE_FAST_CONCATENATION
  • Remove duplicated cmake_minimum_required
  • Use override
  • Use nullptr
  • Generate directly version
  • Add some notes about creating releases

Signed using my key: Jonathan Riddell <> 2D1D5B0588357787DE9EE225EC94D18F7F05997E

6630f107eec6084cafbee29dee6a810d7174b09f7aae2bf80c31b2bc6a14deec libqaccessibilityclient-0.4.0.tar.xz

What is it?

Most of the stack is part of Qt 5, so nothing to worry about; that’s the part that lets applications expose their UI over DBus for AT-SPI, so they work nicely with assistive tools (e.g. Orca). In accessibility language, the applications act as “servers” and the screen reader, for example, is a client.

This library is for writing clients, that is, applications that are assistive, such as screen readers. It currently has two users, KMag and Simon, with Plasma also taking an interest. KMag can use it to follow the focus (e.g. when editing text, it can automatically magnify the part of the document where the cursor is). For Simon Listens, the use is to be able to let the user trigger menus and buttons by voice input.


Qt Creator 4.9 Beta2 released

Thursday 7th of March 2019 12:43:29 PM

We are happy to announce the release of Qt Creator 4.9 Beta2!

Please have a look at the blog post for the Beta for an overview of what is new in Qt Creator 4.9. Also see the change log for a more complete list of changes.

Get Qt Creator 4.9 Beta2

The opensource version is available on the Qt download page under “Pre-releases”, and you find commercially licensed packages on the Qt Account Portal. Qt Creator 4.9 Beta2 is also available under Preview > Qt Creator 4.9.0-beta2 in the online installer. Please post issues in our bug tracker. You can also find us on IRC on #qt-creator on, and on the Qt Creator mailing list.

The post Qt Creator 4.9 Beta2 released appeared first on Qt Blog.
