Planet KDE

GSoC 2019

Wednesday 15th of May 2019 07:29:46 AM

This summer will be a little bit interesting, as I joined the Google Summer of Code (GSoC). The software I will be working on is Krita, a painting application I have been using for more than one year. Since the (pre)release of Krita 4.0, I have used it to paint all my works.

Before using Krita, I used PaintTool SAI, and there are quite a lot of concepts and functionalities in it that I find really useful. After getting involved in the Krita community, I am pretty lucky to be able to introduce these little shiny stars to our community, and even implement some of them.

My GSoC project is on the undo/redo system in Krita. The system currently works by using an undo stack to store individual changes to the document, and invoking these commands to perform undos and redos. This system is complex and not easy to maintain. As Dmitry suggests, a better solution would be to store the states of the document as shallow copies, since that simplifies the system and makes history brushes possible. It would be a rather huge and fundamental change in the code, so he recommended that I experiment with vector layers first.
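To illustrate the idea, here is a toy model of such a state-based undo system (purely illustrative, with made-up names; Krita's actual implementation is in C++ and far more involved): instead of storing commands, each edit commits a shallow snapshot of the document, and undo/redo simply move through the snapshot history.

```python
import copy

class SnapshotUndoStack:
    """Toy model of state-based undo: instead of storing undo commands,
    keep shallow snapshots of the document and move between them.
    Illustrative only -- not Krita's actual code."""

    def __init__(self, document):
        self.document = document                  # the live, mutable document
        self.history = [copy.copy(document)]      # shallow snapshot of the initial state
        self.position = 0

    def commit(self):
        """Record the current document state after an edit."""
        del self.history[self.position + 1:]      # editing discards the redo branch
        self.history.append(copy.copy(self.document))
        self.position += 1

    def undo(self):
        if self.position > 0:
            self.position -= 1
        return self.history[self.position]

    def redo(self):
        if self.position + 1 < len(self.history):
            self.position += 1
        return self.history[self.position]
```

Because the snapshots are shallow, unchanged data is shared between states, which is what makes this approach affordable and what would enable features like history brushes.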

Another part of the project, which is not research, is the snapshot docker, which would allow users to temporarily save some states of the document and return to them quickly at a later time. This is an enhancement on the GUI level: since the tile data in paint layers are shallow-copied, it is possible to make a clone of the document relatively fast.

I will make more posts on KDE and Krita in the near future. Let’s keep in touch! (.w.)

KDE Itinerary - Barcodes

Wednesday 15th of May 2019 06:30:00 AM

While discussing data extraction methods for KItinerary earlier I briefly mentioned barcodes as one source of information. It’s a subject that deserves a few more details though, as it’s generally good to know what information you are sharing when your ticket barcode gets scanned.

Why Barcodes?

Barcodes on booking confirmations or tickets serve multiple purposes:

  • Carrying some form of token used for validation. This can be a simple number or actual cryptographic signatures. That token typically does not contain any direct information about you or your booking, but it can act as a key for online lookup of such information, in which case it is even relevant to protect just that token from a privacy point of view.
  • Information about you or your booking. Often this is a machine-readable version of what’s also printed in human readable form on a ticket, such as your name, booking number or details about what you booked. From a privacy point of view even more problematic are cases where the barcode contains additional information not visible on the human readable part.

For data extraction we of course benefit from a machine readable format that doesn’t require fragile text parsing in PDF or HTML files. Additionally, barcodes tend to use systematic identifiers instead of ambiguous and/or localized human readable names, for example for airports or stations. The most well-known such identifier is probably the 3 letter IATA airport code. Such identifiers allow us to easily retrieve additional information about that location from sources like Wikidata.

KDE Itinerary’s nightly Flatpak builds therefore recently got the ZXing-C++ dependency added to make full use of that, and we are working on getting it into the nightly Android builds too. If you are deploying or packaging KDE Itinerary or the KMail integration plug-in by other means, you probably want to make sure ZXing-C++ is available too.

While we are mainly interested in itinerary-related information, we also come in touch with whatever else is in the barcodes. Besides general privacy insights, this also has a very practical impact on how we sanitize our test data. While it’s fairly straightforward to replace your credit card number in a simple ASCII-based code, doing this in partially understood binary codes with cryptographic security features is next to impossible.

Barcode Types

There’s a number of different aspects of the barcodes that are relevant for understanding what is (or can be) encoded in them:

  • The size of the encoded data. That’s a very good indicator if there is only a ticket token or also additional booking information. One-dimensional codes can only store short alpha-numeric payloads, which is usually a strong indicator of a token-only code. Two-dimensional codes like QR or Aztec on the other hand can store up to a few hundred bytes.
  • ASCII or binary payloads. Many of the barcode codecs are optimized for alpha-numeric content rather than arbitrary binary data, so this doesn’t necessarily say anything about the amount of data in there. Textual content is however much easier to analyze; any barcode scanning app can show you the content. Many of those scanners however choke on e.g. 0 bytes, so even capturing the full binary payload isn’t straightforward.
  • Standardized or proprietary content. In some areas barcode content is standardized to achieve inter-operator compatibility, airline boarding passes being the extreme with a single international standard. Unfortunately, there are few other standards, let alone some with even remotely such a wide coverage. So in many cases we encounter vendor-specific codes with little or no public documentation. Those however are often a bit simpler in their structure, while standards tend to be modular and offer support for extensions. Standardization also doesn’t necessarily imply the specification is publicly available, but it makes it at least more likely that it’s findable somewhere on the Internet ;-)

As mentioned above there is only one relevant barcode type for flights, “IATA Bar Coded Boarding Passes (BCBP)”. It’s a fairly old standard, containing a modular ASCII payload for one to four legs. The set of mandatory fields is very small:

  • Passenger name (as 6-bit ASCII and truncated to 20 characters).
  • Booking reference.
  • Start and destination IATA airport codes.
  • Flight number.
  • Day of flight. This is the day of the year (January 1st being day 1). The year however is not encoded at all.
  • Seat number and class.
  • Passenger sequence number (part of the unique identification of a passenger).

Privacy-wise, this is already enough to be problematic, as was shown at 33C3 in 2016. For KItinerary’s data extraction this is almost all the useful information in here; particularly annoying is the lack of a full date, requiring us to guess the year from context.
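Resolving that day-of-flight field to a full date necessarily involves guesswork. Here is a sketch of one plausible heuristic (hypothetical, not necessarily what KItinerary does): pick the candidate year whose resulting date lies closest to a reference date known from context, such as the date of the booking email.

```python
from datetime import date, timedelta

def guess_flight_date(day_of_year, context_date):
    """Resolve a BCBP day-of-flight (day of the year, 1-366; the year is
    not encoded) to a full date, by picking the candidate year whose date
    lies closest to a context date (e.g. the booking email's date).
    Hypothetical heuristic for illustration."""
    candidates = [
        date(year, 1, 1) + timedelta(days=day_of_year - 1)
        for year in (context_date.year - 1, context_date.year, context_date.year + 1)
    ]
    return min(candidates, key=lambda d: abs((d - context_date).days))
```

Note that the heuristic handles the year boundary gracefully: a boarding pass for day 5 received at the end of December resolves to January 5th of the following year.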

However, there’s plenty of optional fields that are populated based on the airline and the travel destination. A few noteworthy examples are:

  • Frequent flyer number (which sometimes doubles as a credit card).
  • Baggage tag numbers.
  • A “document id”, which has been seen containing the passport number for flights to the UK for example.
  • A variable length vendor-specific field. This is often seen to be used by Lufthansa-associated airlines, with unknown content.
  • Fields specific to US security requirements (and only used for flights in or to the US).
  • A cryptographic signature of the content, to be specified by “local authorities”. This so far has also only been observed for US destinations.

It would be interesting to explore whether a “privacy mode” for boarding passes in KDE Itinerary would work in practice, that is, presenting only the mandatory fields of the boarding pass and seeing how far you get with that at the airport. It’s unlikely to work for security-related fields or with signatures as used in the US, but fields primarily of commercial interest are probably avoidable in other parts of the world.


For train tickets the situation is a lot more diverse. The closest thing to an international standard is UIC 918.3, which is the big 50x50mm Aztec code found on European international tickets, as well as on domestic tickets in at least Austria, Denmark, Germany and Switzerland. UIC 918.3 however only defines a container format with a minimal header, cryptographic signatures and a zlib compressed payload.
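Since UIC 918.3 only defines a container around a zlib-compressed payload, getting at the vendor data is mostly a matter of inflating it. A minimal sketch (not KItinerary's actual code): it skips signature verification entirely and, because the header and signature lengths vary, simply scans for a plausible zlib stream start and tries to inflate from there.

```python
import zlib

def extract_uic9183_payload(data: bytes) -> bytes:
    """Pull the zlib-compressed payload out of a UIC 918.3 container.
    Illustrative sketch: no signature verification, and instead of
    parsing the variable-length header we scan for a zlib stream start
    (CMF byte 0x78, with (CMF*256 + FLG) % 31 == 0 per RFC 1950)."""
    if not data.startswith(b"#UT"):
        raise ValueError("not a UIC 918.3 container")
    for i in range(len(data) - 1):
        if data[i] == 0x78 and (data[i] * 256 + data[i + 1]) % 31 == 0:
            try:
                return zlib.decompress(data[i:])
            except zlib.error:
                continue  # false positive, keep scanning
    raise ValueError("no zlib payload found")
```

The decompressed result is then whatever the operator put in there, for example an RCT2 block or a vendor-specific structure as described below.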

To get an idea of the variety of payloads we find on train tickets, here’s an overview of what KItinerary supports so far, ordered roughly by complexity and usefulness of the content:

  • Koleje Małopolskie (a local Polish provider): a simple JSON structure containing almost all relevant trip information, even exact times in UNIX format. Very useful and very easy to extract. Contains the passenger name, but at least nothing beyond what’s on the paper ticket. Uses human readable station names rather than station identifiers though.
  • SNCF (French national railway): a simple fixed-length ASCII format encoding one or two legs of a trip. Easy to extract and useful too, privacy-wise this contains the passenger birth date beyond what’s on the paper ticket. There’s still 4 bytes in there with unknown meaning.
  • Trenitalia (Italian national railway): A 67 byte binary blob encoding one leg of a trip. It seems very optimized for size, with numeric values having no alignment at all, so it needs to be looked at as a bit array rather than a byte array. Being entirely undocumented, we had to decode this ourselves. This is ongoing work; the current state can be found in the wiki, and about half of the content can be attributed a meaning or is always 0. The data we got out of this so far is quite useful, but it’s still incomplete (date/time values for example are suspected to be in there, but haven’t been successfully decoded yet). With parts of the content still being unknown it’s too early to assess this for privacy concerns.
  • RCT2 (the standard UIC 918.3 payload for European international tickets, also used by DSB, ÖBB and SBB): There’s at least decent documentation about this. Unfortunately it’s of very limited use for data extraction. RCT2 is essentially an ASCII-art representation of the upper part of the corresponding paper ticket, designed for display to a human reader rather than for machine reading. The limited space in there conflicts with the realities of multi-lingual tickets, leading to rather flexible interpretations of the standard. Relevant information for us, like the exact train a ticket is valid for, is in many cases not part of the specified fields but encoded in an operator-specific format in a free-text description field. Therefore KItinerary only uses this as a fallback if no other information is available. Being designed as an exact representation of the paper ticket, it has not been seen containing any additional information.
  • Deutsche Bahn (vendor-specific payload for UIC 918.3): That’s another modular hybrid binary/textual structure, wrapped inside the UIC 918.3 container, relatively complicated to decode and unfortunately containing very little useful information for KItinerary. Many fields are related to tariff details, but there’s also the passenger name and, in older versions, full or partial numbers of the (credit) card used for payment and/or identification. This has meanwhile been fixed though. Tickets with an option for local public transport at the destination contain additional operator-specific payloads; it’s unknown whether those contain useful or sensitive information.
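For codes like Trenitalia's, where numeric values have no byte alignment, the basic decoding tool is a helper that reads an arbitrary bit range as an unsigned integer. A minimal sketch of the technique (which field sits at which bit offset in real tickets is exactly the open question documented in the wiki):

```python
def read_bits(data: bytes, bit_offset: int, bit_count: int) -> int:
    """Read an unsigned big-endian integer spanning an arbitrary,
    not necessarily byte-aligned bit range of a byte array."""
    value = 0
    for i in range(bit_offset, bit_offset + bit_count):
        byte = data[i // 8]
        bit = (byte >> (7 - i % 8)) & 1  # bit 0 is the MSB of byte 0
        value = (value << 1) | bit
    return value
```

With this, decoding becomes a matter of mapping bit offsets to meanings by comparing samples, which is exactly the guesswork described below.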

There’s also a few operators we know use barcodes with trip-related content, but that we don’t support yet due to not having enough information or sample data to properly decode their barcodes:

  • VIA Rail (Canadian railway): ASCII payload, structurally probably comparable to SNCF, so this might be fairly easy to support given a sufficient amount of samples.
  • VR (Finnish national railway): A 108 byte binary code with entirely unknown content so far. It looks more complex than the Trenitalia one, with the larger size and more parts of the code changing even between adjacent tickets, but not entirely random, which suggests there is no encryption, compression or other sophisticated encoding.

Other transport operators like SNCB (Belgian national railway) or Flixbus are also using barcodes, but those seem merely to contain ticket tokens. The same is true for all event ticket samples we have so far.


One of the easiest ways to help with decoding such barcodes is looking for prior work or documents on the subject in your local language. For Deutsche Bahn I found numerous useful sources online, all in German though. For SNCF some material exists as well, but it required French language skills to find it.

While obviously conflicting with striving for privacy, another very helpful way to help is donating test samples, especially for barcodes that are not yet fully understood. Decoding an entirely undocumented binary code requires enough samples so you can look at meaningful differences between partially differing tickets, and enough samples to verify your theories on the semantics of certain bits with sufficient certainty. We are not talking about machine-learning-scale amounts here though; the current understanding of the Trenitalia codes took about 30 barcodes from about a dozen different bookings.

And of course if you like solving binary puzzles, there are some nice challenges here too ;-)

Hello KDE

Tuesday 14th of May 2019 03:55:26 PM
Hello, my name is Sharaf. My nick on IRC is sh_zam.

My project is to port Krita to Android devices. We've been successful in making the APK, but it only works if I build it, as it requires tweaking Qt libraries a bit. At the moment, my goal is to make the build system fully automatic and spit out signed APKs for different architectures at the end.

Once I do that, I'll move on to UI, events and other fun stuff!

So, there's a lot to do and learn. Now I'll go back to coding and will write more technical stuff in my blogs in future, as I'm not that good at other stuff (-:

So, thank you KDE for choosing me and I hope I'll learn a lot from this community!

Upcoming news in Plasma 5.16

Tuesday 14th of May 2019 02:17:31 PM
Plasma-nm WireGuard support

We already had WireGuard support in Plasma 5.15, but it existed as a VPN plugin based on a NM WireGuard plugin, which wasn’t really working very well and didn’t utilize many of the already existing NM properties. With the release of NetworkManager 1.16, we have new native support for WireGuard which is much more usable. It now exists as a new connection type, so it’s implemented a bit differently compared to other VPNs. This meant that we first had to implement support for this connection type and its properties in NetworkManagerQt, and then implement a UI on top of that. The UI part of the new WireGuard support, same as the old VPN plugin, was implemented by Bruce Anderson. We are also probably (at this moment) the only ones who provide a UI for WireGuard configuration, so thank you Bruce for such a big contribution.

OTP support in Openconnect VPN plugin

Another big contribution, this time made by Enrique Melendez, is support for one time passwords in the Openconnect VPN plugin. This support was missing for some time so starting with Plasma 5.16, you should be able to use TOTP/HOTP/RSA/Yubikey tokens for your Openconnect connections.

PAN GlobalProtect VPN

OpenConnect 8.00 introduced support for PAN GlobalProtect VPN protocol. You can now see this new VPN type entry thanks to Alejandro Valdes.

Xdg-desktop-portal-kde Remote desktop portal

The remote desktop portal brings the possibility to remotely control your Wayland Plasma sessions. It utilizes the screen sharing portal to get the screen content and adds an API for mouse/keyboard/touch control. Unfortunately at this moment only mouse support is implemented, mainly because I use the KWayland::FakeInput protocol and mouse support is the only one currently implemented there. At this moment there is no Qt/KDE based application using the remote desktop portal (or at least no released one), but I have added support to Krfb, which is currently under review, and I hope to get it merged for KDE Applications 19.08. Alternatively you can use gnome-remote-desktop.

Here is a short demo of remote desktop in action over the VNC protocol. On the server side I’m running Krfb on a Plasma Wayland session, and I control it from my second laptop using Krdc.

Qt on CMake Workshop Summary – May ’19

Tuesday 14th of May 2019 01:58:40 PM

This is a follow-up post to Qt on CMake Workshop Summary – Feb ’19


From May 2nd to May 3rd another Qt on CMake workshop was hosted at the KDAB premises in Berlin, where interested stakeholders from both The Qt Company and KDAB gathered together to drive the CMake build system in Qt further. Many of KDAB’s customers are using CMake in their Qt projects, so we are keen to see the CMake support for Qt improve and happy to help out to make it happen. The workshop was public, for anyone interested, but we had no external visitors this time. We’d be happy to have some more CMake enthusiasts or interested people in these workshops, so be sure to sign up for the next CMake workshop (watch the qt-development mailing list for this)!

This workshop in May was mostly intended to reassess what has happened in the wip/cmake branch of qtbase since the last workshop and to discuss any further work. We spent almost half of the first day just deciding how to approach certain things such as how the CMake build system port will affect the upcoming Qt6 work, which is currently gaining momentum as well. We had between 8 and 10 people present across the 2 day workshop, from KDAB and (mostly) The Qt Company.

Workshop summary

Excerpt of the top-level CMakeLists.txt in qtbase.git

First of all: Thanks to Alexandru Croitor for driving the porting efforts and for organizing sprints and meetings where interested people can keep track of the progress!

The workshop summary notes are also courtesy of Alexandru, let me try to quickly recap the most interesting bits:

CMake config files in Qt6 and beyond

One of the key considerations for the CMake config files installed as part of the upcoming Qt6 was that there should be the possibility to just use CMake targets like Qt::Core (compared to Qt5::Core) and function/macro names like qt_do_x() (instead of qt5_do_x()), to allow most applications to just pull in a Qt version of their choice and then use “versionless” CMake identifiers. This makes upgrading Qt versions easier, without a lot of search and replace in CMake code. Note that you can continue to use the version-specific identifiers as before; this is an additional feature.

But on the other hand we’d also like to keep the possibility to mix Qt version X and Qt version Y in the same CMake project. Think about a project where two executables are being built, one depending on Qt6, the other one on a potential Qt7 version. This is not as uncommon as you’d think; we see a lot of customer projects with this setup. It might as well be the case during a porting project, where old code might still continue to use an older Qt version.

Consider this example (which is not fully implemented yet, but you get the idea):

    ### Usecase: application wants to mix both Qt5 and Qt6, to allow gradual porting
    set(QT_CREATE_VERSIONLESS_TARGETS OFF)
    find_package(Qt5 COMPONENTS Core Gui Widgets) # Creates only Qt5::Core
    find_package(Qt6 COMPONENTS Core Gui Widgets) # Creates only Qt6::Core
    target_link_libraries(myapp1 Qt5::Core)
    target_link_libraries(myapp2 Qt6::Core)

    ### Usecase: application doesn't mix Qt5 and Qt6, but allows to fully switch to link against either Qt5 or Qt6
    set(MY_APP_QT_MAJOR_VERSION 6) # <- potentially set at command line by application developer
    # set(QT_CREATE_VERSIONLESS_TARGETS ON) <- Default, doesn't need to be set
    find_package(Qt${MY_APP_QT_MAJOR_VERSION} COMPONENTS Core Gui Widgets) # Creates Qt5::Core and Qt::Core OR Qt6::Core and Qt::Core, based on the variable
    target_link_libraries(myapp Qt::Core) # Just links to whatever Qt major version was requested

More details (and development notes from the past meetings):

After a lot of back and forth we actually found a straight-forward way to at least create the two namespaces in the CMake config files easily, see e.g.:

QMake will still be around in Qt6

As it stands, existing users of Qt and specifically users of QMake do not have to fear the CMake port too much, for now. The current bias is towards keeping the qmake executable (and the associated mkspec functionality) around for the Qt6 lifetime, as removing it would obviously create a lot of additional porting effort for our users. During the Qt6 lifetime it would probably be wise to consider moving your pet project to a CMake build system, but only time will tell.

QMake is currently built via the CMake build system in the wip/cmake branch and is already available for use. Upside-down world, right? Additionally, we’re looking into generating the QMake module .pri files using CMake as well. All this is definitely no witchcraft, but it needs dedicated people to implement all of it.

Further notes

You can find a lot more details on the Wiki in case you are curious; I would not like to duplicate even more of the really comprehensive work log produced there:

If you would like to learn more about CMake, we are offering a one-day Introduction to CMake training at the KDAB training day as part of Qt World Summit in Berlin this year.

If you have comments or if you want to help out, please ideally post feedback on the Qt Project infrastructure. Send a mail to the qt-development mailing list or comment on the wiki page dedicated for the CMake port. Or just join us in the IRC channel #qt-cmake on Freenode!


The post Qt on CMake Workshop Summary – May ’19 appeared first on KDAB.

Kate LSP Client Progress

Sunday 12th of May 2019 09:54:00 PM

The Kate lsp branch now contains the infrastructure as used by Qt Creator. In addition, clangd now somehow starts in a working state for the first project opened inside Kate.

For example, if you use the CMake Kate project generator and you compile Kate from the “lsp” branch, clangd should pick up the compile_commands.json for a CMake generated Kate project.

;=) Unfortunately not much more than starting and informing clangd about the open workspaces (for the first opened project) works ATM.

If you press ALT-1 over some identifier, you will get some debug output on the console about found links, like below:

qtc.languageclient.parse: content: "{\"id\":\"{812e04c6-2bca-42e3-a632-d616fdc2f7d4}\",\"jsonrpc\":\"2.0\",\"result\":[{\"range\":{\"end\":{\"character\":20,\"line\":67},\"start\":{\"character\":6,\"line\":67}},\"uri\":\"file:///local/cullmann/kde/src/kate/kate/katemainwindow.h\"}]}"

The current ALT-1 handling is a big hack: it just adds the current document and triggers the GotoDefinitionRequest. A proper implementation would track the opened/closed documents of the editor.
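For context, the JSON-RPC messages behind this are framed with a simple Content-Length header, as defined by the LSP base protocol. A rough sketch (not Kate's actual code; the file path and position values are just taken from the log output above) of the request that yields such a definition result:

```python
import json

def frame_lsp_request(request_id, method, params) -> bytes:
    """Serialize a JSON-RPC request with LSP base-protocol framing
    (Content-Length header, blank line, then the JSON body)."""
    body = json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,
        "params": params,
    }).encode("utf-8")
    return b"Content-Length: %d\r\n\r\n%b" % (len(body), body)

# A textDocument/definition request like the one behind the ALT-1 hack:
msg = frame_lsp_request(1, "textDocument/definition", {
    "textDocument": {"uri": "file:///local/cullmann/kde/src/kate/kate/katemainwindow.cpp"},
    "position": {"line": 67, "character": 10},
})
```

The server answers with a matching framed message whose result carries the target URI and range, which is exactly what the debug output above shows.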

But at least in principle Kate is now able to start some language server processes and talk a bit with them, all thanks to the nice code borrowed from Qt Creator.

:=) As my spare time is limited, any help in bringing the branch up to speed is highly welcome; just drop us a mail or mail me in private. A working LSP integration will help make Kate more attractive for programmers of many languages.

Latte and a "Shared Layouts" dream...

Sunday 12th of May 2019 06:35:53 PM

Following Latte and an Indicators tale, today I am going to introduce you to another major feature that the Latte git version got this month, Shared Layouts.

“Let’s share our docks and panels between different Activities”

- Share the same top panel between two different layouts at different Activities -- youtube presentation -
Why do we need Shared Layouts?

Latte v0.8 introduced Latte layouts that can function in Multiple mode, meaning that layouts can be assigned to different Activities and all work simultaneously, based on the Activities running. That approach of course has limits. If you want to share a dock/panel between different layouts, you cannot. The only thing you could do was to copy the panel or view that you wanted into more than one layout and have different instances in different Activities. These instances of course could not sync their settings, and they were all running at the same time, meaning more memory usage because of all the duplicates.
I suppose that in the past users called this feature "Different panels at Different Activities". Latte v0.9 actually answers that completely with Shared Layouts.

What can Shared Layouts do?

Any Latte layout can be shared to one or more layouts under Multiple Layouts mode. In contrast to Shared Layouts, all the other layouts that are NOT shared are now called Central, in order to distinguish them. All these layouts can live at the same time in different Activities and of course cooperate with each other.

Note: I don't know how to describe this better, so please watch the youtube video at the beginning in order to understand the concept better.
Shared "TopPanel" layout that lives with "My Profile" and "Unity Dream"
layouts at the same time

Shared Layouts Goals
  1. Any layout can act as Central or Shared layout BUT NOT at the same time
  2. A Central Layout can be assigned to ONLY ONE Shared Layout BUT the Shared Layout can be assigned to one or more Central layouts at the same time
  3. Central Layouts define the Activities to follow and its SHARED layout will just obey
  4. Panels/Docks from Shared Layouts have ALWAYS HIGHER PRIORITY to be drawn in the screen compared to Panels/Docks from Central Layouts
  5. Multi-Screen behavior should work out of the box with no restrictions
  6. Docks/Panels from Shared or Central Layouts can be moved/exchanged from layout to layout freely with no restrictions

Latte with [1-6] can now act in multi-layouts / multi-activities / multi-screens mode with no restrictions


I don't know how many Latte users will need this feature, probably very few... On the other hand I always try to promote and enhance the Plasma Activities concept because I really believe in it...

That is all for now, I hope you enjoy and play with Shared Layouts! Personally I love them and I have switched to a Multiple Layouts mode with Shared Layouts to keep things in sync...


You can find Latte at Liberapay if you want to support it,

or you can split your donation between my active projects in the KDE store.

Nanonote 1.2.0

Sunday 12th of May 2019 03:09:11 PM

Time for a new Nanonote release!

This new version comes with several changes from Daniel Laidig: you can now use Ctrl+mouse wheel to make the text bigger or smaller and Ctrl+0 to reset the font to its default size.

He also fixed the way links are displayed: they now use the theme color instead of being hard-coded to blue. If you use a dark theme, this should make Nanonote more usable for you.

Nanonote now speaks German, thanks to Vinzenz Vietzke.

I made a few minor changes on my side, the most visible being that @ is now allowed in URLs, which is handy for sites like Medium.

A more complete changelog as well as deb and rpm packages are available on the release page.

KDE Usability & Productivity: Week 70

Sunday 12th of May 2019 06:01:42 AM

Hold on to your hats, folks, because this week’s Usability & Productivity report is OVERFLOWING WITH AWESOMENESS! Let’s jump right in:

New Features

Bugfixes & Performance Improvements

User Interface Improvements

Next week, your name could be in this list! Not sure how? Just ask! I’ve helped mentor a number of new contributors recently and I’d love to help you, too! You can also check out, and find out how you can help be a part of something that really matters. You don’t have to already be a programmer. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!

If you find KDE software useful, consider making a donation to the KDE e.V. foundation.

KStars v3.2.2 is Released!

Sunday 12th of May 2019 05:36:12 AM
Thanks to all the hard work by KStars developers and volunteers, we are happy to announce the KStars v3.2.2 release for Windows, macOS, and Linux.

In this release, support for x86-32 bit architecture has been dropped and the Windows 10 executable now requires an x86-64 bit system.

Improved Observing List Wizard

This is a maintenance release to fix a few bugs and introduce a few enhancements:

  • Important stability fix for crashes reported with FITS Viewer.
  • Ignore Video Streaming when guiding via PHD2 with a video device.
  • Automatic syncing for Active Devices on Startup.
  • Meridian Flip Fixes.
  • Keep GUI parameters for scheduler and capture in sync with row selection
  • When a manual filter is detected, prompt user to change the filter and update the driver accordingly.
  • Fix observing list wizard object filter by time and altitude and introduce a coverage param where the user can control the percentage.
  • Improved saving of settings in all Ekos modules.

Kdenlive 19.04.1 released

Saturday 11th of May 2019 03:08:38 PM

The Kdenlive team is happy to announce the first minor release of the 19.04 series, fixing 39 bugs. The feedback from the community as well as the effort put into reporting issues has been very helpful, and we encourage you to keep it up. We expect to finish polishing in the coming months in order to focus on our planned pro features.

Kdenlive 19.04.1 fixes some important issues, so all 19.x users are encouraged to upgrade. The easiest way to test it is through the AppImage, available from the KDE servers as usual:

The AppImage also contains some last-minute fixes that will be in 19.04.2, since we are still busy fixing some remaining issues after our big refactoring. This AppImage should fix the rendering and timeline preview issues recently reported; the 19.04.1 fixes are listed below.

Other news: work continues to improve OpenGL support, fixes by the team have been merged into MLT improving speed, and the Titler will be rewritten as a GSoC project.

19.04.1 bug fixes:

  • Search effects from all tabs instead of only the selected tab
  • Add missing lock in model cleanup. Commit.
  • Move levels effect back to main effects. Commit.
  • Fix crash closing project with locked tracks. Fixes #177. Commit.
  • Speedup selecting bin clip when using proxies (cache original properties). Commit.
  • Disable threaded rendering with movit. Commit.
  • Fix wrong thumbnails sometimes displayed. Commit.
  • Ensure fades always start or end at clip border. Commit.
  • Fix loading of clip zones. Commit.
  • Fix transcoding crashes caused by old code. Commit.
  • Fix fades copy/paste. Commit.
  • Fix broken fadeout. Commit.
  • Fix track red background on undo track deletion. Commit.
  • Update appdata version. Commit.
  • Zooming in these widgets using CTRL+two-finger scrolling was almost. Commit. Fixes bug #406985
  • Fix crash on newly created profile change. Commit.
  • Always create audio thumbs from original source file, not proxy because proxy clip can have a different audio layout. Commit.
  • Mark document modified when track compositing is changed. Commit.
  • Fix compositing sort error. Commit.
  • Fix crash opening old project, fix disabled clips not saved. Commit.
  • Fix crash and broken undo/redo with lift/gamma/gain effect. Fixes #172. Commit.
  • Fix clip marker menu. Fixes #168. Commit.
  • Fix composition forced track lost on project load. Fixes #169. Commit.
  • Fix spacer / remove space with only 1 clip. Fixes #162. Commit.
  • Fix timeline corruption (some operations used a copy of master prod instead of track producer). Commit.
  • Check whether first project clip matches selected profile by default
  • Renderwidget: Use max number of threads in render. Commit.
  • Fix razor tool not working in some cases. Fixes #160. Commit.
  • Better os detection macro. Commit.
  • Remove crash, not solving 1st startup not accepting media (see #117). Commit.
  • Remove unneeded unlock crashing on Windows. Commit.
  • Some fixes in tests. Commit.
  • Forgotten file. Commit.
  • Improve marker tests, add abort testing feature. Commit.
  • Add tests for unlimited clips resize. Commit.
  • Small fix in tests. Commit.
  • Fix AppImage audio recording (switch from wav to flac). Commit.
  • Don't remember clip duration in case of profile change. Fixes #145. Commit.
  • Fix spacer broken when activated over a timeline item. Commit.
  • Improve detection of composition direction. Commit.
  • Unconditionally reload producers on profile change. Related to #145. Commit.

Kaidan joins KDE

Friday 10th of May 2019 06:02:00 PM

Kaidan is a simple, user-friendly Jabber/XMPP client providing a modern user interface based on Kirigami and QtQuick. Kaidan aims to become a full-featured alternative to proprietary messaging solutions like Telegram, but featuring decentralization and privacy.

The Kaidan team has always had a good relationship with the KDE community: our code uses KDE’s Kirigami framework and is written with Plasma Mobile in mind. So we decided it was only logical for us to join KDE officially, and we are happy to announce that we have now finally done so.

This introduces some changes and benefits to the development workflow. First of all, we can now use KDE’s GitLab instance, KDE Invent, which eliminates the need for hosting our own. In the future, we plan to provide official Windows and macOS builds using KDE’s Binary Factory infrastructure.

If you are not a user of the KDE Plasma desktop environment, you might wonder whether this decision will influence our support for other desktop environments, like GNOME, but we will continue to target the full range of possible platforms, as we always have, regardless of this decision. Of course, merge requests improving the experience on other desktop environments are also very welcome! If you want to contribute, you can now do so on KDE Invent. This requires the use of a KDE Identity account but is otherwise no different from our own GitLab instance you might be used to.

Happy messaging!

Closing doors: Codethink.

Friday 10th of May 2019 04:49:33 PM

April 26th was my last day at Codethink Ltd. It has been over three and a half years working for this Manchester-based company as a consultant. I had the opportunity to learn a lot working for a variety of customers and Open Source organizations together with bright professionals.

Codethink is an organization that cares about people: customers and employees. That is not as common as it might seem in software engineering service companies. Codethink does good software engineering too, sometimes under tough conditions. It is not always easy to work for customers with tight deadlines, solving complex problems with high impact on their businesses. I am proud of having worked for such an organization.

Thanks to Paul Sherwood and the rest of the Codethings for the opportunity and the respect you have shown me during this time.

Since the day I decided to turn my little IT training company into an Open Source Software one, back in 2003, I have worked in and for organizations that understood and supported Open Source. I could even work full time in the open on several occasions, either contributing to or being upstream. Codethink was no exception in this regard.

After 15+ years working in Open Source organizations, the time has come for me to move away from my comfort zone and try something different for a while. But that will be a matter for a coming post, in a few weeks.

Wallpaper competition update

Friday 10th of May 2019 04:42:10 PM

Howdy folks! Here’s a reminder about our Plasma 5.16 wallpaper competition. We’ve gotten lots of wallpapers, but there’s a little more than two weeks left and still plenty of time to submit your gorgeous entries! As a reminder, the winner also receives a Slimbook One computer! Here are the rules.

Let me also take the opportunity to clarify what it is that we’re looking for, overall. We want wallpapers with the following characteristics:

  • More abstract than literal
  • Uses the Breeze color palette, at least a bit
  • Has some geometric elements to it
  • Attractive but not overwhelming
  • Soothing but not boring
  • Creative but not bizarre or disturbing
  • Still good-looking with some desktop icons on top of it

Here are some stylistic suggestions, too: Avoid having a central foreground element framed within a background. This feels very confining. Try having the foreground go out of the frame and appear off-center. Also, try to avoid large areas of super bright colors, as these can be visually overwhelming and look uncomfortable when closing or minimizing windows. Pastels often work better than bright neon colors.

And if you’re looking for inspiration, check out the last six wallpapers. See how they evolved over time?

So get out there and submit a wallpaper!

Next Generation Plasma Notifications

Friday 10th of May 2019 09:45:27 AM

There is something very exciting I have to show to you today: a completely rewritten notification system for Plasma that will be part of our next feature update 5.16 to be released in June.

Isn’t that an update you’d love to install?

I have been planning to do this rewrite for years. In fact, the wiki page where I collected ideas and mock-ups was created in July 2016 and “assumes the status quo as of Plasma 5.7”. The old notification plasmoid was originally written in 2011, when QML was still pretty new. It was later ported to Plasma 5 and slightly overhauled, most noticeably using individual popups rather than the scrolling ticker of notifications we had in the late Plasma 4 days. However, its core logic hardly changed, and it became evident that its code base could not support many of the features users expect from a notification center these days. I started a rewrite branch last summer but only recently found the time and motivation to finish it, basically hacking on that thing for a month straight. Here’s what came out of it:

New look and feel

The first thing you’ll notice is that the notifications are much more compact, with the icon on the opposite side now and the issuing application prominently displayed. Font sizes have also been streamlined, and the heading is now allowed to wrap. A major behavior change in the new system is that persistent notifications stay on screen until dismissed. This ensures that important notifications, and ones that require user interaction, such as a pairing request from a Bluetooth or KDE Connect device, don’t go unnoticed.

Notification popup synced from your phone indicating what app and device it originally came from

When an application sets a “default” action, the cursor changes to a pointing hand to indicate that the popup itself is clickable. A little bar on the side indicates when the notification will time out. I worked together with Nicolas Fella of KDE Connect fame to improve the user experience when syncing your devices and it will soon be possible for it to annotate a notification with the name of the device it originally came from and the actual app on the device that sent it.

My all-time favorite productivity feature in Plasma, notification thumbnails, has of course been touched up. It now uses a better aspect ratio and has a lovely blur effect to go with it. In case you didn’t know: when you take a screenshot in an application like Spectacle (remember Meta+Shift+Print Screen) or Flameshot, you can drag the screenshot preview in the notification anywhere you like, for example into your web browser or an email composer.

Screenshotception.

Do not disturb mode

Another new major feature is “Do not disturb” mode. When enabled, no notification popups are shown and the Notification Sounds stream is muted. All notifications go straight to the history for later reference.

However, there are some notifications that should get through nonetheless. This is why the new server also supports the “Urgency” hint, which an application can use to specify whether a notification has low, normal (the default), or critical urgency. Critical notifications, such as your battery being about to die, are shown even in do not disturb mode. KDE applications can now add an Urgency= key to an event in their .notifyrc file or use the setUrgency method on KNotification added in the upcoming Frameworks release. Moreover, you can specify which applications are allowed to send notifications regardless.
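As a minimal sketch, such an event entry might look like the following. The application name and the `batterylow` event are made up for illustration; the `Urgency=Critical` value assumes the key accepts the low/normal/critical levels described above:

```ini
# Hypothetical myapp.notifyrc (installed under the knotifications data directory)
[Event/batterylow]
Name=Battery low
Comment=The battery is about to die
Action=Popup|Sound
# Assumed new key: mark this event as critical so its popups
# are shown even while do not disturb mode is active.
Urgency=Critical
```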

There is currently a discussion on the XDG mailing list about a DBus protocol for notification inhibitions so applications like LibreOffice Impress, Okular, OBS, and others could automatically enable this mode when giving a presentation or hosting a livestream. Speaking of presentations, while regular notifications aren’t shown on top of full-screen windows for added privacy, critical notifications will be, so you won’t miss your laptop running out of juice mid-lecture.

Progress reporting

Progress reporting when copying or receiving files has also been revamped and uses the same style as notifications. A more useful summary text is displayed, showing the most important information at a glance: the name of the file or number of files being processed as well as the destination and time remaining. This way you never actually have to expand the details section to see what’s going on. You still can, of course.

When a job finishes it turns into a regular notification and just times out. In case of error it stays visible so you can investigate what went wrong. One major complaint we got with the old progress reporting was that it’s easy to miss since it’s just a little circle in the panel. To address this, by default the job popup opens and stays there until the task has finished. It can be manually hidden at the click of a button and there’s an option to automatically do that a few seconds into the progress, if you prefer that.

Notification history

Notification history showing what you’ve missed

The notification history has always been a half-hearted feature. Currently it collects every notification and gets cluttered quickly. In the new history, we try to reduce the amount of spam that piles up: notifications that you explicitly closed, interacted with, or that got revoked by the issuing application aren’t added to the history. Unfortunately, given the freedesktop Notification protocol and how KNotification is built around it, a proper history isn’t possible: an application cannot revoke a notification once it has expired, even if it later knows it became obsolete.

To remedy the effects of piling up old notifications they are grouped by application and only the last couple of notifications are shown, with the possibility to expand a group to show all of them. Additionally, low urgency notifications, which could be your media player changing tracks, aren’t added to the history by default.

New settings module

What would be a new feature in KDE land without settings? ;) Accompanying the new notification server and plasmoid is an all-new System Settings module.

New notification settings module

It lets you configure various aspects of notifications and job reporting in a central place. Settings for badges (the little number circle on app icons) and application progress in the task manager have also been moved to this central location. Furthermore, the default popup timeout, i.e. when an application doesn’t explicitly specify one, is now a fixed number of seconds. The old implementation tried to find a sensible timeout based on the number of words and an “average read speed”, but its inaccurate algorithm often led to popups staying on screen a long time for no apparent reason.

Configuring notifications, including Gnome applications and other 3rd party ones

While it has always been possible to configure notifications of KDE’s own applications to a great degree, there was no way to influence the behavior of 3rd party apps. The new settings module is able to find Gnome applications that set the appropriate hint as well as remember any application that sent a notification in the past.

This allows you to disable popups for those applications, too, as well as keep them out of your notification history or white-list them for do not disturb mode. The fine-grained notification event configuration for KDE applications is also still available.

Give it a try!

There’s a couple of days left until Plasma 5.16 Beta, to be released on 16 May, so there’s still some time for you to try it out beforehand and give feedback. With the new implementation I managed to fix almost 20 distinct bug reports. Go spin up your developer machine and build plasma-workspace master branch from git. You may also update your favorite distribution sporting daily builds, however, the feature has only been merged recently, so it might take some time for updated packages to be generated.

If you find a notification that looks odd or different from how it used to, do tell me! Ideally, you’ll be monitoring DBus traffic to find out what exact notification and data the application sent and help me reproduce and fix the issue. Just run

dbus-monitor "interface='org.freedesktop.Notifications'"

to see all traffic going in and out of the notification server, or use a tool like Bustle. For debugging progress reporting, the interfaces instead are org.kde.kuiserver, org.kde.JobViewServer, and org.kde.JobViewV2. There’s also a debug logging category org.kde.plasma.notifications that you can enable through kdebugsettings.
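For example, to enable that logging category for a single test run without going through kdebugsettings, you can use Qt's standard QT_LOGGING_RULES environment variable (a minimal sketch; the plasmashell restart is only illustrative):

```shell
# Enable debug output for the notifications logging category.
# kdebugsettings writes an equivalent rule to ~/.config/QtProject/qtlogging.ini.
export QT_LOGGING_RULES="org.kde.plasma.notifications.debug=true"

# Restart plasmashell in this environment so it picks up the rule:
# plasmashell --replace
```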

My First impression at PlaMo

Friday 10th of May 2019 06:20:43 AM

How Plasma Mobile got my attention?
I remember how my admiration for Android turned into disappointment when I got into Android development. The realisation that, with every new release, the patches are built on the broken pieces of Android was disheartening. And it crossed the line when the whole system started disrespecting the users’ privacy!
Then, one fine day (mid-January 2019), I came to know about Plasma Mobile. When every other attempt at creating an open source mobile platform on the horizon had failed, KDE came along with Plasma Mobile. So how could I have resisted using it!
I purchased a refurbished Nexus 5X, and here we are today :D

Currently, Plasma Mobile is supported by a very small number of devices. It works quite well on the Nexus 5X (Bullhead). For installing PlaMo on Bullhead you can follow the instructions at

My asseveration
Operating Plasma Mobile at console level is just like working on Ubuntu/Kubuntu terminal on desktop, and I usually prefer using my PlaMo using Command-line Interface (CLI) because most of the functions work well when controlling via CLI.
Either you can use ‘Konsole’ app for that or control your phone remotely from the computer.Since the Konsole app isn’t very handy as for now, I prefer the second option.
ssh phablet@
Plasma Mobile is very much an alpha-level product. It can’t be used as a daily driver as of now. But it is really impressive that the KDE developers have managed to take a Linux desktop, re-frame it around mobile hardware, and make it work quite well.
This could be something special!

PS: I do not intend to say Plasma Mobile is a full operating system itself (just like Plasma on desktop is not a complete distro). It needs an underlying OS.

Krita 4.2.0-alpha Released

Wednesday 8th of May 2019 03:30:36 PM


We’re on track to release Krita 4.2.0 this month, and today we’re releasing the alpha! That means that we’re still fixing bugs like crazy — just check what we’ve been doing this month:

But of course there’s much more. We’ve been working on this release for a long time now, since June last year when Krita 4.1 was released. We have fixed about 1500 bugs in that period, and implemented a host of new features, workflow improvements and little bits of spit and polish.

You can read all about the new features in the release notes. Highlights are much improved tablet support on all platforms, HDR painting on Windows, improved painting performance, improved color palette docker, animation API for scripting, gamut masks, improved artistic color selector, an improved start screen that can now show you the latest news about Krita, changes to the way flow and opacity work when painting… And much more.

In the meantime, we’ve also worked hard on the manual, to bring it up to date with this release. The manual for Krita 4.1.7 can still be downloaded in epub format.

Warning: Linux users should be careful with distribution packages. We have a host of patches for Qt queued up, some of which are important for distributions to carry until the patches are merged and released in a new version of Qt.

Download

Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.


(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)


Note: the touch docker, gmic-qt and python plugins are not available on OSX.

Source code

md5sum

For all downloads:


The Linux AppImage and the source tarball are signed. You can retrieve the public key over https here. The signatures are here (filenames ending in .sig).

Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

Okular: improved PDF annotation tool

Tuesday 7th of May 2019 01:40:59 PM

Okular, KDE’s document viewer has very good support for annotating/reviewing/commenting documents. Okular supports a wide variety of annotation tools out-of-the-box (enable the ‘Review’ tool [F6] and see for yourself) and even more can be configured (such as the ‘Strikeout’ tool) — right click on the annotation tool bar and click ‘Configure Annotations’.

One of the annotation tools my colleagues and I frequently wanted to use is a line with an arrow to mark an indent. Many PDF annotation programs have this tool, but Okular was lacking it.

So a couple of weeks ago I started looking into the source code of Okular and Poppler (the PDF library used by Okular) and noticed that both of them already have support for the ‘Line Ending Style’ of the ‘Straight Line’ annotation tool (internally called the TermStyle). After skimming through the source code for a few hours and adding a few hooks, I was able to add an option to configure the line ending style for the ‘Straight Line’ annotation tool. Many line ending styles are provided out of the box, such as open and closed arrows, circle, diamond, etc.

An option has been added to the ‘Straight Line’ tool configuration to choose the line ending style:

New ‘Line Ending Style’ for the ‘Straight Line’ annotation tool.

Here’s the review tool with ‘Open Arrow’ ending in action:

‘Arrow’ annotation tool in action.

Once I was happy with the outcome, I created a review request to upstream the improvement. A number of helpful people reviewed it and commented. One of the suggestions was to add the icon/shape of each line ending style to the configuration options so that users can quickly preview what the shape will look like without having to try each one. The first attempt to implement this feature used Unicode symbols (instead of SVGs or internally drawn graphics), and it looked okay. Here’s a screenshot:

‘Line End’ with symbols preview.

But it had various issues: some symbols are not available in Unicode, and localizing these strings without some context would be difficult. So, for now, it was decided to drop the symbols.

For now, this feature works only on PDF documents. The patch is committed today and will be available in the next version of Okular.

Our 2019 Google Summer of Code Students

Tuesday 7th of May 2019 09:11:20 AM

Krita, part of KDE, takes part in the fifteenth edition of Google Summer of Code. Four students will be working on a wide variety of projects. Here’s the shortlist:

Sharaf Zaman will be working on porting Krita to Android. In fact, he already has a port of Krita for Android that starts on some devices! The port is still missing libraries and scripts to automate building the dependencies and Krita: the first goal of the project is to have a dependable, reproducible way of building Krita for Android. Initially, we won’t do much, if any, work on a nice tablet GUI.

Tusooa Zhu will work on a radical change to Krita’s undo system. This will eventually lead to a history brush system and a system where you can continue from any history state of your image. This is a rather big and complex project, so it’s more like initial research into the possibilities.

Kuntal Majumder will be working on implementing a magnetic lasso selection tool for Krita. We already had a magnetic lasso tool, but that broke when we ported Krita to Qt 4 back in 2006, 2007… And we have tried once more to implement a new magnetic lasso selection tool, but that project was never finished. Third time should be the lucky time!

Alberto Eleuterio Flores Guerrero will be making it possible to use an SVG file as input for brush engines, so you can have an animated brush tip that scales without loss of quality.



Summer is coming...

Tuesday 7th of May 2019 08:48:32 AM

Note: I’m not a Game of Thrones fan (I have yet to watch even a single episode), but I wanted a catchy title. This will be a story, so get prepared to be bored.

More in Tux Machines

LWN: Spectre, Linux and Debian Development

  • Grand Schemozzle: Spectre continues to haunt

    The Spectre v1 hardware vulnerability is often characterized as allowing array bounds checks to be bypassed via speculative execution. While that is true, it is not the full extent of the shenanigans allowed by this particular class of vulnerabilities. For a demonstration of that fact, one need look no further than the "SWAPGS vulnerability" known as CVE-2019-1125 to the wider world or as "Grand Schemozzle" to the select group of developers who addressed it in the Linux kernel. Segments are mostly an architectural relic from the earliest days of x86; to a great extent, they did not survive into the 64-bit era. That said, a few segments still exist for specific tasks; these include FS and GS. The most common use for GS in current Linux systems is for thread-local or CPU-local storage; in the kernel, the GS segment points into the per-CPU data area. User space is allowed to make its own use of GS; the arch_prctl() system call can be used to change its value. As one might expect, the kernel needs to take care to use its own GS pointer rather than something that user space came up with. The x86 architecture obligingly provides an instruction, SWAPGS, to make that relatively easy. On entry into the kernel, a SWAPGS instruction will exchange the current GS segment pointer with a known value (which is kept in a model-specific register); executing SWAPGS again before returning to user space will restore the user-space value. Some carefully placed SWAPGS instructions will thus prevent the kernel from ever running with anything other than its own GS pointer. Or so one would think.

  • Long-term get_user_pages() and truncate(): solved at last?

    Technologies like RDMA benefit from the ability to map file-backed pages into memory. This benefit extends to persistent-memory devices, where the backing store for the file can be mapped directly without the need to go through the kernel's page cache. There is a fundamental conflict, though, between mapping a file's backing store directly and letting the filesystem code modify that file's on-disk layout, especially when the mapping is held in place for a long time (as RDMA is wont to do). The problem seems intractable, but there may yet be a solution in the form of this patch set (marked "V1,000,002") from Ira Weiny. The problems raised by the intersection of mapping a file (via get_user_pages()), persistent memory, and layout changes by the filesystem were the topic of a contentious session at the 2019 Linux Storage, Filesystem, and Memory-Management Summit. The core question can be reduced to this: what should happen if one process calls truncate() while another has an active get_user_pages() mapping that pins some or all of that file's pages? If the filesystem actually truncates the file while leaving the pages mapped, data corruption will certainly ensue. The options discussed in the session were to either fail the truncate() call or to revoke the mapping, causing the process that mapped the pages to receive a SIGBUS signal if it tries to access them afterward. There were passionate proponents for both options, and no conclusion was reached. Weiny's new patch set resolves the question by causing an operation like truncate() to fail if long-term mappings exist on the file in question. But it also requires user space to jump through some hoops before such mappings can be created in the first place. This approach comes from the conclusion that, in the real world, there is no rational use case where somebody might want to truncate a file that has been pinned into place for use with RDMA, so there is no reason to make that operation work. 
    There is ample reason, though, for preventing filesystem corruption and for informing an application that gets into such a situation that it has done something wrong.

  • Hardening the "file" utility for Debian

    In addition, he had already encountered problems with file running in environments with non-standard libraries that were loaded using the LD_PRELOAD environment variable. Those libraries can (and do) make system calls that the regular file binary does not make; the system calls were disallowed by the seccomp() filter. Building a Debian package often uses FakeRoot (or fakeroot) to run commands in a way that appears that they have root privileges for filesystem operations—without actually granting any extra privileges. That is done so that tarballs and the like can be created containing files with owners other than the user ID running the Debian packaging tools, for example. Fakeroot maintains a mapping of the "changes" made to owners, groups, and permissions for files so that it can report those to other tools that access them. It does so by interposing a library ahead of the GNU C library (glibc) to intercept file operations. In order to do its job, fakeroot spawns a daemon (faked) that is used to maintain the state of the changes that programs make inside of the fakeroot. The libfakeroot library that is loaded with LD_PRELOAD will then communicate to the daemon via either System V (sysv) interprocess communication (IPC) calls or by using TCP/IP. Biedl referred to a bug report in his message, where Helmut Grohne had reported a problem with running file inside a fakeroot.

Flameshot is a brilliant screenshot tool for Linux

The default screenshot tool in Ubuntu is alright for basic snips, but if you want a really good one you need to install a third-party screenshot app. Shutter is probably my favorite, but I decided to give Flameshot a try. Packages are available for various distributions including Ubuntu, Arch, openSUSE and Debian. You can find installation instructions on the official project website. Read more

Android Leftovers

IBM/Red Hat and Intel Leftovers

  • Troubleshooting Red Hat OpenShift applications with throwaway containers

    Imagine this scenario: Your cool microservice works fine from your local machine but fails when deployed into your Red Hat OpenShift cluster. You cannot see anything wrong with the code or anything wrong in your services, configuration maps, secrets, and other resources. But, you know something is not right. How do you look at things from the same perspective as your containerized application? How do you compare the runtime environment from your local application with the one from your container? If you performed your due diligence, you wrote unit tests. There are no hard-coded configurations or hidden assumptions about the runtime environment. The cause should be related to the configuration your application receives inside OpenShift. Is it time to run your app under a step-by-step debugger or add tons of logging statements to your code? We’ll show how two features of the OpenShift command-line client can help: the oc run and oc debug commands.

  • What piece of advice had the greatest impact on your career?

    I love learning the what, why, and how of new open source projects, especially when they gain popularity in the DevOps space. Classification as a "DevOps technology" tends to mean scalable, collaborative systems that go across a broad range of challenges—from message bus to monitoring and back again. There is always something new to explore, install, spin up, and explore.

  • How DevOps is like auto racing

    When I talk about desired outcomes or answer a question about where to get started with any part of a DevOps initiative, I like to mention NASCAR or Formula 1 racing. Crew chiefs for these race teams have a goal: finish in the best place possible with the resources available while overcoming the adversity thrown at you. If the team feels capable, the goal gets moved up a series of levels to holding a trophy at the end of the race. To achieve their goals, race teams don’t think from start to finish; they flip the table to look at the race from the end goal to the beginning. They set a goal, a stretch goal, and then work backward from that goal to determine how to get there. Work is delegated to team members to push toward the objectives that will get the team to the desired outcome. [...] Race teams practice pit stops all week before the race. They do weight training and cardio programs to stay physically ready for the grueling conditions of race day. They are continually collaborating to address any issue that comes up. Software teams should also practice software releases often. If safety systems are in place and practice runs have been going well, they can release to production more frequently. Speed makes things safer in this mindset. It’s not about doing the “right” thing; it’s about addressing as many blockers to the desired outcome (goal) as possible and then collaborating and adjusting based on the real-time feedback that’s observed. Expecting anomalies and working to improve quality and minimize the impact of those anomalies is the expectation of everyone in a DevOps world.

  • Deep Learning Reference Stack v4.0 Now Available

    Artificial Intelligence (AI) continues to represent one of the biggest transformations underway, promising to impact everything from the devices we use to cloud technologies, and reshape infrastructure, even entire industries. Intel is committed to advancing the Deep Learning (DL) workloads that power AI by accelerating enterprise and ecosystem development. From our extensive work developing AI solutions, Intel understands how complex it is to create and deploy applications for deep learning workloads. That’s why we developed an integrated Deep Learning Reference Stack, optimized for Intel Xeon Scalable processors, and released the companion Data Analytics Reference Stack. Today, we’re proud to announce the next Deep Learning Reference Stack release, incorporating customer feedback and delivering an enhanced user experience with support for expanded use cases.

  • Clear Linux Releases Deep Learning Reference Stack 4.0 For Better AI Performance

    Intel's Clear Linux team on Wednesday announced their Deep Learning Reference Stack 4.0 during the Linux Foundation's Open-Source Summit North America event taking place in San Diego. Clear Linux's Deep Learning Reference Stack continues to be engineered for showing off the most features and maximum performance for those interested in AI / deep learning and running on Intel Xeon Scalable CPUs. This optimized stack allows developers to more easily get going with a tuned deep learning stack that should already be offering near optimal performance.