Planet KDE


GSoC Update

Thursday 27th of June 2019 12:00:00 AM

In my last post I mentioned that I was having some problems pushing my modifications to the git repository. It turned out that the official ROCS repository was recently moved to KDE's GitLab instance, KDE Invent, where I am now working on a fork of the original ROCS repository.

It is a workflow I already have some knowledge of, as I worked with GitLab in a past internship and made some merge requests during Hacktoberfest (I like to win t-shirts). So I updated the remotes of my local repository and pushed my changes to a branch of my fork, called improved-graph-ide-classes.

While modifying the code, I noticed some problems with the creation of random trees, and I am still thinking about the best way to fix them. The problem lies in the relation between the algorithm and the edge types available to generate the tree: when directed edges are used, the code sometimes generates directed cycles of length 2 in the graph.

Theoretically speaking, a tree must have undirected edges; with directed edges it would instead be a Directed Acyclic Graph (DAG). There are different algorithms to generate them, with different uses. For example, to generate random trees with a given number of nodes, you can choose an algorithm that produces every tree on that number of nodes with the same probability [paper] (that is, the algorithm yields a uniform distribution over trees when a good random seed is provided). For DAGs, we can use a ranking algorithm to configure the height and width of the graph, which is probably more useful in most cases. While a single algorithm could generate both, I think it is more useful to separate them.
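For illustration, here is a minimal, self-contained sketch (not taken from the ROCS code base) of one classic way to do this: decode a uniformly random Prüfer sequence into a labelled tree, which yields every tree on n nodes with equal probability. The names and structure are mine, purely to show the idea:

    #include <algorithm>
    #include <random>
    #include <utility>
    #include <vector>

    // Decode a random Pruefer sequence into a labelled tree on n nodes.
    // Every tree is generated with equal probability; the n-1 edges are
    // returned as undirected index pairs.
    std::vector<std::pair<int, int>> randomTree(int n, std::mt19937 &rng)
    {
        std::vector<std::pair<int, int>> edges;
        if (n < 2)
            return edges;

        std::uniform_int_distribution<int> node(0, n - 1);
        std::vector<int> pruefer(n - 2);
        for (int &p : pruefer)
            p = node(rng);

        // degree[i] = 1 + number of occurrences of i in the sequence
        std::vector<int> degree(n, 1);
        for (int p : pruefer)
            ++degree[p];

        for (int p : pruefer) {
            // attach the smallest remaining leaf to the next sequence entry
            int leaf = int(std::find(degree.begin(), degree.end(), 1) - degree.begin());
            edges.emplace_back(leaf, p);
            degree[leaf] = 0; // this leaf is now used up
            --degree[p];
        }

        // exactly two nodes of degree 1 remain; connect them
        int u = int(std::find(degree.begin(), degree.end(), 1) - degree.begin());
        int v = int(std::find(degree.begin() + u + 1, degree.end(), 1) - degree.begin());
        edges.emplace_back(u, v);
        return edges;
    }

The resulting edge list is undirected by construction, which sidesteps the directed-loop problem described above.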

But then comes the problem: it is not guaranteed that an undirected or a directed edge type exists among the edge types of the program, as the user has the freedom to add and modify any edge type. I found two ways to solve this:

  • Always have an undirected and a directed edge type among the edge types;
  • Add a check in the interface that verifies whether a directed or undirected edge type exists when necessary;

This decision boils down to whether or not to restrict the user's freedom. Although it does not change much in the greater scope of the program, we have to decide whether the best way to go is to always make the code check for the existence of the required edge type, or to provide two edge types that always exist and cannot be modified.
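As a rough sketch of the second option, the generator could look up a suitable edge type before running and refuse (or fall back) when none exists. The EdgeType struct and findEdgeType helper below are hypothetical placeholders, not the actual ROCS API:

    #include <algorithm>
    #include <vector>

    // Hypothetical stand-in for an edge type description; the real ROCS
    // classes and their API differ, this only illustrates the check.
    struct EdgeType {
        bool directed;
    };

    // Return the first edge type with the requested direction, or nullptr
    // if the user has removed every suitable type.
    const EdgeType *findEdgeType(const std::vector<EdgeType> &types, bool wantDirected)
    {
        auto it = std::find_if(types.begin(), types.end(),
            [wantDirected](const EdgeType &t) { return t.directed == wantDirected; });
        return it == types.end() ? nullptr : &*it;
    }

A tree generator would then stop with a user-visible message when findEdgeType returns nullptr, instead of silently producing directed loops.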

In my next post I will talk about the identifier part of ROCS, as it is a part that needs some more polish.

Konsole and Splits

Thursday 27th of June 2019 12:00:00 AM

Some terminals like Tilix and Terminator offer the possibility to split the screen recursively, and I started to add the same thing to Konsole. Konsole is often called the Swiss army knife of terminal emulators, and if you haven't tried it yet, please do. We offer quite a lot of things that no other terminal emulator offers.

Right now this code is in review, but it already supports quite a few things:

  • Drag & drop a tab to create a new Konsole window
  • Drag & drop a tab back into another window
  • Drag & drop a split to reposition it in the current tab
  • Drag & drop a split into another tab
  • Drag & drop a split into another window (if in the same process)

Expect this to be in the next version of Konsole if all goes well. Help to test, help to find bugs. Help to test, help to find bugs!

Shubham (shubham)

Wednesday 26th of June 2019 05:46:11 PM
First month progress

Hello people!! I am here presenting my monthly GSoC project report. I will provide the links to my work at the end of the post.

A bit of background: it has been a great first month of Google Summer of Code for me. I was so excited that I started writing code a week before the actual coding period began. The first month, as I had expected, has been quite hectic, and to add to it, my semester-end examinations are also running at the moment. So I had to manage my time efficiently, which I believe I have done well so far. Coming to the progress made during this period, I have done the following:

1. Implement the PolkitQt1 authorisation back-end: here I aimed to implement the same Polkit back-end as the one currently reached through KAuth. I had to replicate the same behaviour and just remove the mediator, i.e. KAuth, from in between.

2. Scrap the Public Key Cryptography code based on QCA, as QDBus is secure enough: QDBus already provides enough security for the calls made by the application to the helper. Hence there is no need to encrypt and sign the application's requests and to verify their integrity on the helper side.

3. Establish QDBus communication from the helper towards the application: previously, the application-to-helper communication was done through a QDBus session, and the helper-to-application communication was done via KAuth. In this task I aimed to remove KAuth and establish QDBus communication here as well.

I have linked the patches for the above tasks below in the "Patches" section.

Links to my patches: if you are a mind with curiosity, you can check out the patches I have submitted on Phabricator here.
1. PolkitQt1 authorization back-end:
2. Scrap the Public Key Cryptography (PKC) code:
3. QDBus communication from the helper towards the application:

Note: only the second patch (scrap PKC) is merged into master; the others are still work in progress.

Link to the cgit repository: curious minds may have a look at the code and maybe give suggestions or advice about it.
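As background for readers who have not used polkit-qt-1 before, here is a rough sketch of what an authorization check in a D-Bus helper can look like, assuming I recall the public polkit-qt-1 API (Authority, SystemBusNameSubject, checkAuthorizationSync) correctly. The action id is a placeholder, and the real patches may structure this differently:

    #include <PolkitQt1/Authority>
    #include <PolkitQt1/Subject>
    #include <QString>

    // Check whether the D-Bus caller identified by its unique bus name is
    // authorised for a given polkit action. The action id used by the real
    // helper may differ; treat it as an example.
    static bool callerIsAuthorized(const QString &dbusServiceName, const QString &actionId)
    {
        PolkitQt1::SystemBusNameSubject subject(dbusServiceName);
        PolkitQt1::Authority::Result result =
            PolkitQt1::Authority::instance()->checkAuthorizationSync(
                actionId, subject, PolkitQt1::Authority::AllowUserInteraction);
        return result == PolkitQt1::Authority::Yes;
    }

In a helper slot the caller's bus name would typically come from QDBusContext::message().service(), and the helper would only perform the privileged work when this check returns true.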
Till next time, bye bye!!

Shubham (shubham)

Wednesday 26th of June 2019 05:39:55 PM
What is my project all about? Porting authentication to Polkit-qt-1.

KDE Partition Manager runs all authentication and authorization over KAuth (KDE Authentication), a tier 2 library from KDE Frameworks. In the current implementation, all privileged tasks, such as executing external programs like btrfs or sfdisk, or copying a block of data from one partition to another, are executed by a non-GUI helper application. So, instead of running the whole GUI application (KDE Partition Manager) as root or superuser, a non-GUI helper is spawned that runs as root and executes the privileged tasks. This helper communicates with KDE Partition Manager over a simple DBus protocol. The current implementation may seem like a good idea, but it is not, because KAuth is an extra layer added on top of Polkit-qt, which causes extra overhead. So, the proposal for this project is to port all the authentication/authorization code from KAuth to Polkit-qt without affecting the behaviour of KDE Partition Manager.

Shubham (shubham)

Wednesday 26th of June 2019 05:38:21 PM
About me... huh, who am I?

I am Shubham, a 3rd-year undergraduate student pursuing my B.E. (Bachelor of Engineering) at BMS Institute of Technology and Management, Bangalore, India. I am an amateur open source enthusiast and developer, mostly working with C++ and the Qt framework to build standalone applications. I also have decent knowledge of C, Java, Python, bash scripting and git, and I love developing in a Linux environment. I also practice competitive programming on various online judges. Apart from coding, in my spare time I play cricket or volleyball to keep myself refreshed.

My first month on GSoC

Wednesday 26th of June 2019 12:47:54 PM

This first month of GSoC was a great learning experience for me. When speaking to my colleagues about how Summer of Code matters to my professional life, I always respond that I'm finally learning to code, and the basics of C++.

Yes, maybe this sounds strange: I'm a second-year undergraduate Computer Science student with two years of experience with C++. I should have learned to code by now, right? Well, at least on my campus you don't learn to code applications or how to write stable, clean code. You learn to solve problems, and that's something I got pretty good at; but as for writing code, well, I'm learning that now and I'm liking it a lot.

Let's walk through what I implemented last month on the Okular side. It's been a while since Adobe added JavaScript support to their PDF format; unfortunately, support for this JavaScript is pretty limited in most readers other than Adobe Reader. I'll skip the technical details; they can be found on my status report page at https://community.kde.org/GSoC/2019/StatusReports/Jo%C3%A3oNetto.

Before my patches were applied in Poppler and Okular, we could only see a gray rectangle in place of any animations or buttons. The animations could not be played and simply stayed still when the document was opened.

Now, Okular can display all the animations produced by the Beamer and Animate LaTeX packages. It is also the first time these are supported on a Linux platform.

And this is what I've done in the first month. It might not seem like a lot, but I learned a great deal, and I think that has been the most valuable thing for me.

Snapshot Docker

Wednesday 26th of June 2019 03:32:04 AM

Over the past few weeks I have been working on the Snapshot Docker, and now it is finished. :-)

The idea of snapshots is to make copies of the current document and allow users to return to them at a later time. This is a part of my whole Google Summer of Code project, which aims to bring Krita a better undo/redo system. When fully implemented, it will fully replace the current mechanism that stores actions with one that stores different states. That is to say, Krita will create a snapshot of the document for every undoable step.

Snapshot Docker is not only a feature requested by artists but also an experimental implementation of the clone-replace mechanism. It has the following key parts:

  1. Cloning the document, which is provided by KisDocument::lockAndCloneForSaving() and already implemented in master.

  2. Replacing the current document with another one that was previously cloned.

Part (1) is already implemented, so the work falls mainly on Part (2). My original approach was to replace the document and image pointers in KisView and KisCanvas, but that is not viable, since other parts of the program have signal/slot connections to the KisDocument and KisImage, and directly replacing the two pointers not only fails to work but also causes weird crashes. After discussing with Dmitry, we found that it is probably better not to touch these two pointers, but to replace the content within KisDocument and KisImage. It was therefore suggested that two member functions be made, namely KisDocument::copyFromDocument and KisImage::copyFromImage. These functions copy data from another document/image to the current one, avoiding changes to the pointers inside the original instance. Well, except for the nodes, since we have to reset and refresh the nodes in the image.

It is also important to notify other parts of Krita about the change in the document. One important thing is to tell the layer docker about the changes in the nodes (they are completely different), which is done using the KisImage::sigLayersChangedAsync() signal. The currently activated node is also stored and restored, using the strategy of linearizing the layer tree with a queue and then finding the corresponding node in the cloned image. Note that when restoring, we are unable to look the layer up by UUID, since UUIDs change when nodes are copied to the current image (the comments in KisImage say the only situation where we should keep the UUIDs is saving).
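To make the idea concrete, here is a small sketch of such a breadth-first linearization. The Node struct is a stand-in for illustration only; Krita's real KisNode API is different:

    #include <QQueue>
    #include <QVector>

    // Hypothetical minimal node type, for illustration only.
    struct Node {
        QVector<Node *> children;
    };

    // Breadth-first linearisation of a layer tree. Running this on the old
    // tree and on the cloned tree yields lists whose indices correspond.
    QVector<Node *> linearizeTree(Node *root)
    {
        QVector<Node *> order;
        QQueue<Node *> queue;
        queue.enqueue(root);
        while (!queue.isEmpty()) {
            Node *node = queue.dequeue();
            order.append(node);
            for (Node *child : node->children)
                queue.enqueue(child);
        }
        return order;
    }

The active node is remembered as its index in the old list, and the node at the same index in the cloned image's list becomes the new active node; this only works because copying preserves the shape of the layer tree.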

Another interesting thing is the palettes. Krita 4.2.0 allows documents to store their own, local palettes. The palette list is but a QList<KoColorSet *>, meaning that simply creating a new QList of the same pointers will not work. This is because the palettes are controlled by the canvas resource manager, which takes responsibility for deleting them. Therefore, when taking snapshots, we had better take deep copies of the KoColorSets. And then another problem comes up: the snapshots own their KoColorSets, because they are not controlled by the resource manager in any way, but the KisDocument in the view does not own its palettes. So we have to add another flag, ownsPaletteList, to tell the document whether it should delete the palettes in its destructor.
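The ownership pattern looks roughly like this. Palette is a hypothetical stand-in for KoColorSet, and the real Krita code is organized differently; this only shows the idea behind the ownsPaletteList flag:

    #include <QList>
    #include <QtAlgorithms>

    // "Palette" is a stand-in for KoColorSet, just to show the ownership pattern.
    struct Palette { /* colour data would live here */ };

    class Snapshot {
    public:
        // Deep-copy the palette list: the snapshot is not managed by the
        // canvas resource manager, so it must own (and later delete) its copies.
        explicit Snapshot(const QList<Palette *> &palettes)
            : m_ownsPaletteList(true)
        {
            for (const Palette *p : palettes)
                m_paletteList.append(new Palette(*p));
        }

        ~Snapshot()
        {
            if (m_ownsPaletteList)
                qDeleteAll(m_paletteList);
        }

    private:
        QList<Palette *> m_paletteList;
        bool m_ownsPaletteList;
    };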

And now the work has shifted to the refactoring of kritaflake, the library that mainly handles vector layers and shapes. I converted the whole KoShape hierarchy to implicit sharing where possible, but some tests are broken. I am now on Windows, where unit tests do not run. I will continue the development of flake as soon as I get access to my Linux laptop.

My Two Weeks on Google Summer of Code <2019-06-09 Sun>

Tuesday 25th of June 2019 06:42:00 PM

For the last 15 days I have been working on Krita. So far, it has been a great experience: I've learnt a lot, and the Krita team has been very helpful in aiding my grokking of everything. Here is a quick summary of what I've done in the past two weeks, and what is next to come. Read more...

Basic functionality almost ready <2019-06-23 Sun>

Tuesday 25th of June 2019 06:40:00 PM

For the last two weeks of working on Krita, I have been solving two different problems. Read more...

An easier way to test Plasma

Tuesday 25th of June 2019 08:51:35 AM

Having the Plasma and Usability & Productivity sprints held at the same time and place had an unexpected benefit: we were able to come up with a way to make it easier to test a custom-compiled version of Plasma!

Previously, we had some documentation that asked people to create a shell script on their computers, copy files to various locations, and perform a few other steps. Unfortunately, many of the details were out of date, and the whole process was quite error-prone. It turned out that almost none of the Plasma developers at the sprint were actually using this method, and each had cobbled together something for themselves. Some (including myself) had given up on it and were doing Plasma development in a virtual machine.

So we put some time into easing this pain by making Plasma itself produce all the right pieces automatically when compiled from source. Then, we created a simple script to install everything properly.

So now all you have to do is compile Plasma and run this script once:

sudo ~/kde/build/plasma-workspace/login-sessions/install-sessions.sh

This will install all the necessary bits to make your compiled-from-source Plasma appear in the SDDM login screen’s session chooser. You even get both the X11 and Wayland versions!

Thereafter, you can just log out of your distro-provided Plasma session and log into your custom-compiled Plasma session whenever you want. It’s super easy:

There are a few quirks surrounding DBus and Polkit that you can read about on the wiki, but it totally works and now it’s super duper simple to test and use your custom-compiled Plasma without polluting your base system. I’ve been using the Plasma Wayland session from git master with no VM for my daily computing and development needs for the past three days and it feels *amazing* to be able to do this. Many thanks to veteran KDE developer Aleix Pol Gonzalez for this work.

So now you really have no excuse not to build Plasma from source! Check out the developer documentation and give it a try!

0.4.1 Release of Elisa

Monday 24th of June 2019 08:01:48 PM

Elisa is a music player developed by the KDE community that strives to be simple and nice to use. We also recognize that we need a flexible product to account for the different workflows and use-cases of our users.

We focus on a very good integration with the Plasma desktop of the KDE community without compromising the support for other platforms (other Linux desktop environments, Windows and Android).

We are creating a reliable product that is a joy to use and respects our users' privacy. As such, we will prefer to support online services where users are in control of their data.

I am happy to announce the release of 0.4.1 version of the Elisa music player.

The following fixes have been added to this release:

  • Much improved accessibility by providing more metadata to help screen readers do their job by Matthieu Gallien ;
  • Use full-height separators in ContentView to make Elisa more consistent with other KDE applications by Nate Graham ;
  • Make Playlist items span full width (Fix bug 408210) by Nate Graham ;
  • Improve focus handling by improving keyboard navigation and usage and improving focus indicators by Matthieu Gallien ;
  • Fix delegates in the file browser to have the same look as other grid view delegates (Fix bug 407945) by Nate Graham ;
  • Improve build system to require only the minimum versions and to provide better feedback (Fix bugs 407790 and 407799) by Matthieu Gallien.

More fixes will probably land in the stable branch and one more bugfix release will be done.

The work on improved accessibility is very important, and more fixes need to be done in the master branch (they require breaking the string freeze). Elisa was not doing a good job here, and that was in fact preventing a lot of people from using it. This is a real shame.

One more thing that inspired me to finally improve accessibility was reading this old interview with the current Debian Project Leader, Sam Hartman.

I have the feeling that Elisa is really making progress as you can see with this screenshot of the bugfix release:

0.4.1 Release of Elisa

Getting Involved

I would like to thank everyone who contributed to the development of Elisa, including code contributions, testing, and bug reporting and triaging. Without all of you, I would have stopped working on this project.

New features and fixes are already being worked on. If you enjoy using Elisa, please consider becoming a contributor yourself. We are happy to get any kind of contributions!

We have some tasks that would be perfect junior jobs. They are a perfect way to start contributing to Elisa. There are more tasks not yet listed there but reported on bugs.kde.org.

The Flathub Elisa package provides an easy way to test this new release.

The Elisa source code tarball is available here. There is no Windows installer: there is currently a blocking problem with it (no icons) that is being investigated. I hope to be able to provide installers for later bugfix versions.

The phone/tablet port project could easily use some help to build an optimized interface on top of Kirigami. It remains to be seen how to handle this in relation to the current desktop UI. This is very important if we also want to support free software on mobile platforms.

[Howto] Three commands to update Fedora

Monday 24th of June 2019 04:59:49 PM

These days, using Fedora Workstation, multiple commands are necessary to update all the software on the system: not everything is installed as RPMs anymore – and some systems hardly use RPMs at all anyway.

Background

In the past all updates of a Fedora system were easily applied with one single command:

$ yum update

Later on, yum was replaced by DNF, but the idea stayed the same:

$ dnf update

Simple, right? But not these days: Fedora recently added capabilities to install and manage code in other ways: Flatpak packages are not managed by DNF. Also, many firmware updates are managed via the dedicated management tool fwupd. And last but not least, Fedora Silverblue does not support DNF at all.

GUI solution Gnome Software – one tool to rule them all…

To properly update your Fedora system you have to check multiple sources. But before we dive into detailed CLI commands there is a simple way to do that all in one go: The Gnome Software tool does that for you. It checks all sources and just provides the available updates in its single GUI:

The above screenshot highlights that Gnome Software just shows available updates and can manage those. The user does not even know where those come from.

If we have a closer look at the configured repositories in Gnome Software we see that it covers main Fedora repositories, 3rd party repositories, flatpaks, firmware and so on:

Using the GUI alone is sufficient to take care of all update routines. However, if you want to know and understand what happens underneath it is good to know the separate CLI commands for all kinds of software resources. We will look at them in the rest of the post.

System packages

Each and every system is made up of at least a basic set of software: the kernel, a service manager like systemd, core libraries like libc, and so on. With Fedora used as a workstation system there are two ways to manage system packages, because there are two totally different spins of Fedora: the normal one, traditionally based on DNF and thus composed of RPM packages, and the new Fedora Silverblue, based on immutable OSTree system images.

Traditional: DNF

Updating a RPM based system via DNF is easy:

$ dnf upgrade
[sudo] password for liquidat:
Last metadata expiration check: 0:39:20 ago on Tue 18 Jun 2019 01:03:12 PM CEST.
Dependencies resolved.
================================================================================
 Package                 Arch     Version           Repository   Size
================================================================================
Installing:
 kernel                  x86_64   5.1.9-300.fc30    updates      14 k
 kernel-core             x86_64   5.1.9-300.fc30    updates      26 M
 kernel-modules          x86_64   5.1.9-300.fc30    updates      28 M
 kernel-modules-extra    x86_64   5.1.9-300.fc30    updates      2.1 M
[...]

This is the traditional way to keep a Fedora system up2date. It has been used for years and is well known to everyone.

And in the end it is analogous to the way Linux distributions have been kept up2date for ages now; only the command differs from system to system (apt-get, etc.).

Silverblue: OSTree

With the recent rise of container technologies the idea of immutable systems became prominent again. With Fedora Silverblue there is an implementation of that approach as a Fedora Workstation spin.

[Unlike] other operating systems, Silverblue is immutable. This means that every installation is identical to every other installation of the same version. The operating system that is on disk is exactly the same from one machine to the next, and it never changes as it is used.

Silverblue’s immutable design is intended to make it more stable, less prone to bugs, and easier to test and develop. Finally, Silverblue’s immutable design also makes it an excellent platform for containerized apps as well as container-based software development. In each case, apps and containers are kept separate from the host system, improving stability and reliability.

https://docs.fedoraproject.org/en-US/fedora-silverblue/

Since we are dealing with immutable images here, another tool to manage them is needed: OSTree. Basically OSTree is a set of libraries and tools which helps to manage images and snapshots. The idea is to provide a basic system image to all, and all additional software on top in sandboxed formats like Flatpak.

Unfortunately, not all tools can be packaged as Flatpaks: command line tools in particular are currently hardly usable at all as Flatpaks. Thus there is a way to install and manage RPMs on top of the OSTree image, still baked right into it: rpm-ostree. In fact, on Fedora Silverblue, the images and all RPMs baked into them are managed by it.

Thus updating the system and all related RPMs needs the command rpm-ostree update:

$ rpm-ostree update
⠂ Receiving objects: 98% (4653/4732) 4,3 MB/s 129,7 MB
Receiving objects: 98% (4653/4732) 4,3 MB/s 129,7 MB... done
Checking out tree 209dfbe... done
Enabled rpm-md repositories: fedora-cisco-openh264 rpmfusion-free-updates rpmfusion-nonfree fedora rpmfusion-free updates rpmfusion-nonfree-updates
rpm-md repo 'fedora-cisco-openh264' (cached); generated: 2019-03-21T15:16:16Z
rpm-md repo 'rpmfusion-free-updates' (cached); generated: 2019-06-13T10:31:33Z
rpm-md repo 'rpmfusion-nonfree' (cached); generated: 2019-04-16T21:53:39Z
rpm-md repo 'fedora' (cached); generated: 2019-04-25T23:49:41Z
rpm-md repo 'rpmfusion-free' (cached); generated: 2019-04-16T20:46:20Z
rpm-md repo 'updates' (cached); generated: 2019-06-17T18:09:33Z
rpm-md repo 'rpmfusion-nonfree-updates' (cached); generated: 2019-06-13T11:00:42Z
Importing rpm-md... done
Resolving dependencies... done
Checking out packages... done
Running pre scripts... done
Running post scripts... done
Running posttrans scripts... done
Writing rpmdb... done
Writing OSTree commit... done
Staging deployment... done
Freed: 50,2 MB (pkgcache branches: 0)
Upgraded:
  gcr 3.28.1-3.fc30 -> 3.28.1-4.fc30
  gcr-base 3.28.1-3.fc30 -> 3.28.1-4.fc30
  glib-networking 2.60.2-1.fc30 -> 2.60.3-1.fc30
  glib2 2.60.3-1.fc30 -> 2.60.4-1.fc30
  kernel 5.1.8-300.fc30 -> 5.1.9-300.fc30
  kernel-core 5.1.8-300.fc30 -> 5.1.9-300.fc30
  kernel-devel 5.1.8-300.fc30 -> 5.1.9-300.fc30
  kernel-headers 5.1.8-300.fc30 -> 5.1.9-300.fc30
  kernel-modules 5.1.8-300.fc30 -> 5.1.9-300.fc30
  kernel-modules-extra 5.1.8-300.fc30 -> 5.1.9-300.fc30
  plymouth 0.9.4-5.fc30 -> 0.9.4-6.fc30
  plymouth-core-libs 0.9.4-5.fc30 -> 0.9.4-6.fc30
  plymouth-graphics-libs 0.9.4-5.fc30 -> 0.9.4-6.fc30
  plymouth-plugin-label 0.9.4-5.fc30 -> 0.9.4-6.fc30
  plymouth-plugin-two-step 0.9.4-5.fc30 -> 0.9.4-6.fc30
  plymouth-scripts 0.9.4-5.fc30 -> 0.9.4-6.fc30
  plymouth-system-theme 0.9.4-5.fc30 -> 0.9.4-6.fc30
  plymouth-theme-spinner 0.9.4-5.fc30 -> 0.9.4-6.fc30
Run "systemctl reboot" to start a reboot

Desktop applications: Flatpak

Installing software – especially desktop related software – on Linux is a major pain for distributors, users and developers alike. One attempt to solve this is the flatpak format, see also Flatpak – a solution to the Linux desktop packaging problem.

Basically Flatpak is a distribution independent packaging format targeted at desktop applications. It does come along with sandboxing capabilities and the packages usually have hardly any dependencies at all besides a common set provided to all of them.

Flatpak also provides its own repository format, so Flatpak packages can come with their own repository and be released and updated independently of a distribution's release cycle.

In fact, this is what happens with the large Flatpak community repository flathub.org: all packages installed from there can be updated via the Flathub repos fully independently of Fedora – which also means independently of Fedora's security teams, by the way.

So Flatpak makes developing and distributing desktop programs much easier – and provides a tool for that. Meet flatpak!

$ flatpak update
Looking for updates…

        ID                                     Arch      Branch    Remote     Download
 1. [✓] org.freedesktop.Platform.Locale        x86_64    1.6       flathub    1.0 kB / 177.1 MB
 2. [✓] org.freedesktop.Platform.Locale        x86_64    18.08     flathub    1.0 kB / 315.9 MB
 3. [✓] org.libreoffice.LibreOffice.Locale     x86_64    stable    flathub    1.0 MB / 65.7 MB
 4. [✓] org.freedesktop.Sdk.Locale             x86_64    1.6       flathub    1.0 kB / 177.1 MB
 5. [✓] org.freedesktop.Sdk.Locale             x86_64    18.08     flathub    1.0 kB / 319.3 MB

Firmware

And there is firmware: the binary blobs that keep some of our hardware running and which is often – unfortunately – closed source.

A lot of kernel-related firmware is managed as system packages and is thus part of the system image or packaged via RPM. But device-related firmware (laptops, docking stations, and so on) is often only provided in Windows executable formats and difficult to handle.

Luckily, recently the Linux Vendor Firmware Service (LVFS) gained quite some traction as the default way for many vendors to make their device firmware consumable to Linux users:

The Linux Vendor Firmware Service is a secure portal which allows hardware vendors to upload firmware updates.

This site is used by all major Linux distributions to provide metadata for clients such as fwupdmgr and GNOME Software.

https://fwupd.org/

End users can take advantage of this with a tool dedicated to identify devices and manage the necessary firmware blobs for them: meet fwupdmgr!

$ fwupdmgr update
No upgrades for 20L8S2N809 System Firmware, current is 0.1.31: 0.1.25=older, 0.1.26=older, 0.1.27=older, 0.1.29=older, 0.1.30=older
No upgrades for UEFI Device Firmware, current is 184.65.3590: 184.55.3510=older, 184.60.3561=older, 184.65.3590=same
No upgrades for UEFI Device Firmware, current is 0.1.13: 0.1.13=same
No releases found for device: Not compatible with bootloader version: failed predicate [BOT01.0[0-3]_* regex BOT01.04_B0016]

In the above example there were no updates available – but multiple devices are supported and thus were checked.

Forgot something? Gnome extensions…

The above examples cover the major ways to manage various bits of code. But they do not cover all cases, so for the sake of completeness I'd like to highlight a few more here.

For example, Gnome extensions can be installed as RPM, but can also be installed via extensions.gnome.org. In that case the installation is done via a browser plugin.

The same is true for browser plugins themselves: they can be installed independently and extend the usage of the web browser. Think of the Chrome Web Store here, or Firefox Add-ons.

Conclusion

Keeping a system up2date was easier in the past – with a single command. However, at the same time that meant that those systems were limited by what RPM could actually deliver.

With the additional ways to update systems there is an additional burden on the system administrator, but at the same time much more software and firmware is available this way – code which was not available in the old RPM-only times at all. And with Silverblue an entirely new paradigm of system management arrives – again something which would not have been possible with RPM alone.

At the same time it needs to be kept in mind that these are pure desktop systems – and there Gnome Software helps by being the single pane of glass.

So I fully understand if some people are a bit grumpy about the new needs for multiple tools. But I think the advantages by far outweigh the disadvantages.


Interview with Chris Tallerås

Monday 24th of June 2019 10:00:46 AM
Could you tell us something about yourself?

My name is Chris Tallerås and I'm a 23-year-old dude from the Olympic city of Lillehammer in Norway. I do political activism, traveling the country to fight the climate crisis and to advocate free culture and free, libre & open source software in our kingdom.

Do you paint professionally, as a hobby artist, or both?

I don’t do art full-time unfortunately but with time I hope I can fully support myself with my passion for art.

What genre(s) do you work in?

I'm mostly a digital artist by trade, making illustrations and designs of weird, bizarre characters and creatures. I also do a lot of fantasy-type art inspired by Dungeons & Dragons, and a lot of worldbuilding.

Whose work inspires you most — who are your role models as an artist?

Wooof, that's a troublesome one. I have so many, and random stuff at that, but I'll try to make some kind of list.

Walter Everett

A golden age illustrator so obsessed with his fantastic colors, shapes and subjects that he neglected his family. He is well known among artists but mostly forgotten in the public eye.

T-wei

Don't know what to call this guy. Hmm... He makes weird characters and creatures JUST LIKE ME. That's it! hahah. Well, you should probably check his work out for yourself. He does very linework-based illustrations with intriguing visual concepts.

Martin Creed

Martin is what you'd call a "bullshit-artist", and a humorous one at that. He focuses on bigger ideas in a way that is more relatable to our lives, in a kind of cosy and jovial manner. I'd call his works experiences more than art pieces, but I think you really should check out his interviews to get a picture of how he thinks and of his jovial humor.

Alejandro Jodorowsky

Not a painter, but an artist nonetheless! Known for grand theatres, bizarre surrealistic plays and giant hippie films made in South America. He did my favorite film ever, The Holy Mountain, and worked on the biggest movie never made, Dune, which is probably where films like Alien, Star Wars and Blade Runner can be traced back to for their original style and design.

Mike Mignola

A big comic artist known for creating Hellboy and working on Disney's Atlantis. He has a really cool style with great shapes and high-contrast, noir-like lighting. If you like comics or manga, his work is worth checking out.

Edvard Munch

A Norwegian painter who had his heyday back in the 1880s-1890s and was popular around the world, but unfortunately his home country, Norway, seems to have been one of the last to acknowledge him. It went so far that when he died and donated all his art to the state, it was left hidden without proper management and hung up in toilets around the capital. His art is emotional and wonderful, and I am particularly fond of the series of his redheaded girlfriend.

How and when did you get to try digital painting for the first time?

I've done digital art since elementary school, but doing it for real didn't start until about 8th grade, I think, when my mom helped me get a Wacom Intuos 4, which I still own, although a friend of mine messed it up a little when he borrowed it not too long ago. Since then I've kept going, much to my art teachers' perplexity, as they had no idea how doing art on the computer worked or what the hell I was making.

What makes you choose digital over traditional painting?

I think it comes down to two main things: community and practicality. Now, digital art isn't cheaper or easier to do than traditional art, but traditional art has some challenges, like the equipment you need to digitize it and so on. Digital also helps with efficiency, which is key in the industry.

But the bigger and more significant reason for me personally is community. Over the years the art community online has grown, and the communities that I’ve been drawn to happen to make their art digitally. So as I’ve grown inside these communities, making the art digitally has been a natural part of that. Like, sitting with screen-sharing over the web talking while seeing what everyone is making.

How did you find out about Krita?

I started noticing that support for Windows 7 was getting poorer and people had begun to move all the way to Windows 10. And this was before actual Windows support stopped, in 2016-2017 I think, maybe later in 2017. I was getting tired of Windows and wanted to get into Linux, so I checked out all the distros and immersed myself. There was kind of a lot to choose from, and the YouTube community wasn't as cool as it is now, or at least I didn't know about any cool channels. I went with Ubuntu 16.04, but there was a problem: Photoshop doesn't work on Linux natively. I had been critical of Photoshop for a long time and wasn't as attached to it as other people, and had tried many different programs; however, PS CS6 was the one I liked the most. I decided I'd try to find something else, and there it was in the software store. KRITA!

What was your first impression?

Opening it for the first time made a pretty average impression on me, with the icons and the workspace. It looked professional, so I thought it should probably do what I need. It took some time to find the stuff I was used to from Photoshop, but when I had finally done that, I was ready.

The first art pieces didn't look great, but I could see the potential of the brush engine if I spent more time with it. And in a short time I came to love it! A lot of the features it had then, Photoshop only later got in their cloud version, but to me it was insane and so helpful, like being able to group various brushes and organize them in the menu with tags. I also discovered David Revoy's brush bundle and a "concept & illustration" bundle, and that was when I really fired up. Since then I've been a strong believer in Krita and kind of an advocate.

What do you love about Krita?

Firstly, the community and the people working on Krita. They are very involved and make it into something that grows naturally. It's not just some company slowly pushing your buttons; they really try to make things better. But I also love the functions and services it provides me, which are exactly what I need for what I do professionally.

What do you think needs improvement in Krita? Is there anything that really annoys you?

There's not really much technical stuff that annoys me, and with every version they add so much cool stuff to the tool options and so on. I don't know much about the technical programming side either, so take that for what it is. What I've been thinking, and it seems Krita is going more in this direction now, is that they should push their marketing more! Push cool content to show how good Krita is, show that it is used by professionals. Find more ways of pushing the community forward with community events. It could be a 24-hour charity stream where chosen Krita artists are invited to join, make something, talk, and push donations. It could be user-collaborated content where creators are invited to make something for Krita that they can show. But I also acknowledge that the Krita people probably have a better idea than me of what they should do and what works.

What sets Krita apart from the other tools that you use?

Well, firstly it's free, libre & open source. How cool is that? I think for me it's that it's the best drawing and painting program available on Linux. But if I went back to Windows I'd probably still be using Krita. I mean, it's free, it has the community, and it does stuff that other software doesn't, or only copies much later.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

I think my most technically advanced illustration would be “the red planet” which I did for round 3 of last year’s Unreal Bjornament, a Thunderdome (Mad Max) style tournament for artists run by the amazing artist Bjorn Hurri.

What techniques and brushes did you use in it?

I had a very long, sporadic and iterative process for this one. It looks nothing like the initial sketches, nor how it looked halfway through; it changed completely while keeping general things like colors, environment and story. Even the characters were completely switched and moved around while keeping their original colors. For example, the green guy on top used to be a giant mech that took up half the image.

I mostly used one brush throughout, the whole brush from the "illustration & concept" bundle, and worked from several references I'd made myself, starting with a strong, vibrant, highly saturated color in the background and working the colors out of that on top, to get an intuitive gamut and the underlying vibrancy I get from working that way.

A lot of the final details are pure intuition and improvisation.

Where can people see more of your work?

You can see more and contact me for work at my website christalleras.no.

I am also on the Fediverse, so if any of you are on Mastodon or Pixelfed you can follow me there.

Mastodon: https://mastodon.art/@ChrisTalleras
Pixelfed: https://pixelfed.social/Christalleras

Anything else you’d like to share?

Yes! A Whale’s Lantern released a new album and I DID THE COVER! You can check it out on bandcamp here:
https://awhaleslantern.bandcamp.com/album/portraits.

Skrooge 2.20.0 released

Sunday 23rd of June 2019 09:49:11 AM

The Skrooge team announces the release of version 2.20.0 of its popular personal finances manager, based on KDE Frameworks.

Changelog
  • Correction bug 406903: no message when Skrooge can't open .skg file given on command line
  • Correction bug 406904: skrooge command-line help "--+[URL]" doesn't match its behavior
  • Correction bug 406741: QFX Date Import
  • Correction bug 407280: Skrooge flatpak unintentionally builds unused tests
  • Correction bug 407279: Skrooge flatpak needs later libofx
  • Correction bug 407257: Importing GNUcash (Account name instead of AccountID)
  • Correction bug 409026: skrooge appdata.xml fails validation on flathub, needs release and content_rating tags
  • Correction: aqbanking corrections:
    • Added auto repair for certain banks (Sprada, Netbank, Comdirect).
    • Added --disable-auto-repair command line option
    • Added --prefer-valutadate command line option
    • Removed --balance command line option
  • Correction: getNetWorth (used to compute PFS) is now computed by using all accounts
  • Correction: Remove color of hyperlinks in dashboard for a better rendering in dark theme
  • Correction: Remove broken quotes sources (BitcoinAverage, BitcoinCharts)
  • Correction: Better handling of the mode and comment field using the aqbanking import backend.
  • Feature: New REGEXPCAPTURE operator in "Search & Process" to capture a value by regular expression
  • Feature: The aqbanking import backend allows importing accounts without an IBAN. (See https://phabricator.kde.org/D20875)
Get it, Try it, Love it...

Grab Skrooge from your distro's packaging system. If it is not yet included in repositories, go get it from our website, and bug your favorite distro for inclusion.

Now, you can try the AppImage or the Flatpak too!

If you want to help me to industrialise the Windows version, you can get it from here: https://binary-factory.kde.org/job/Skrooge_Nightly_mingw64/

Get Involved

To enhance Skrooge, we need you! There are many ways you can help us:

  • Submit bug reports
  • Discuss on the KDE forum
  • Contact us, give us your ideas, explain to us where we can improve...
  • Can you design good interfaces? Can you code? Do you have webmaster skills? Are you a billionaire looking for a worthy investment? We will be very pleased to welcome you to the Skrooge team, so contact us!

New website for Konsole

Sunday 23rd of June 2019 09:00:00 AM

Yesterday, konsole.kde.org got a new website.

Doesn’t it look nice? As a reminder the old website looked like this.

The design is very similar to the kontact.kde.org and kde.org websites.

The content could probably still use some improvements, so if you find typos or want to improve the wording of a sentence, please get in touch with KDE Promo. The good news is that you don't need to be a programmer for this.

Community goal

Together with Jonathan Riddell, I proposed a new community goal: KDE is All About the Apps.

One part of this goal is to provide a better infrastructure and promotional material for the KDE applications (notice the lowercase a). I think websites are important to let people know about our amazing applications.

So if you are maintaining a KDE application and want a new shiny website, please contact me, and I will try to set up a new website for you, following the general design.

Technical details

The new website uses Jekyll to render static HTML. Because the layout and the design aren't unique to konsole.kde.org, I created a separate Jekyll theme located at invent.kde.org/websites/jekyll-kde-theme, so that only the content and some configuration files live in the websites/konsole-kde-org repository. This makes the site easier to maintain and will make it easier to change other websites in the future without repeating ourselves.

This was a bit harder to deploy than I first thought: I had problems installing my Jekyll theme in the Docker image, but after the third or fourth try it worked, and then I had an encoding issue that wasn't present on my development machine.

How can I help?

Help is always welcome: the KDE community develops more than 200 different applications, and even though not all of them need or want a new modern website, there is a ton of work to do.

If you are a web developer, you can help in the development of new websites or in improving the Jekyll theme. Internationalization, localization and accessibility still need to be implemented.

If you are not a web developer, but a web designer, I’m sure there is room for improvement in our theme. And it can be interesting to have small variations across the different websites.

And if you are neither a designer nor a developer, there is still a ton of work in writing content and taking good-looking screenshots. For the screenshots, you don't even need good English.

If you have questions, you can as always contact me on Mastodon at @carl@linuxrocks.online or on Matrix at @carl:kde.org.

You can discuss this post on Reddit or Mastodon.

KDE Usability & Productivity: Week 76

Sunday 23rd of June 2019 07:19:38 AM

Week 76 in KDE’s Usability & Productivity initiative is here! This week’s progress report includes the first several days of the Usability & Productivity sprint, and as such, it’s absolutely overflowing with cool stuff!

New Features

Bugfixes & Performance Improvements

User Interface Improvements

Next week, your name could be in this list! Not sure how? Just ask! I’ve helped mentor a number of new contributors recently and I’d love to help you, too! You can also check out https://community.kde.org/Get_Involved, and find out how you can help be a part of something that really matters. You don’t have to already be a programmer. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!

If you find KDE software useful, consider making a donation to the KDE e.V. foundation.

KStars v3.3.1 is released

Sunday 23rd of June 2019 05:37:39 AM
KStars v3.3.1 is released for Windows, MacOS, and Linux on all platforms (Intel/AMD and ARM). This is yet another maintenance release with a few new experimental features and addons.

MacOS Updates
Robert Lancaster cleared all the reported astrometry.net issues on MacOS, after gathering feedback from users of the experimental releases in this dedicated INDI thread.

Astrometry on MacOS
Furthermore, DBus is working again in this release, which makes the Ekos Scheduler operational again on the Mac.

New Observatory Module
Wolfgang Reissenberger developed a new Ekos Observatory module to manage the dome and the weather-triggered shutdown procedure. This is the first iteration of the module, with more expected in the upcoming months, but it already provides a compact and friendly interface for observatory management. Feedback is welcome.



Meridian Flip is gone!
Well, sort of. Wolfgang Reissenberger removed the meridian flip from the Capture Module and moved it to the Mount Module. This way the meridian flip can be controlled even if there is no active capture session going on.

So just set when you want the meridian flip to occur in the Mount Module. Remember that the setting is in Hour Angle (HA): 1 HA = 15 degrees, therefore 0.1 HA = 1.5 degrees west of the meridian.
Always use a positive value to ensure a proper meridian flip takes place. Zero could theoretically work, but it sits at the very edge where the decision to flip or not is made by the mount, so it is safer to use a slightly higher value like 0.1 HA.
Stream Window

Due to popular demand, the FPS control in the Stream window has been replaced by a Frame Duration (in seconds) control. So a setting of 0.1 seconds would yield a frame rate of 10 FPS (1/0.1 = 10) if the hardware can support that. The duration can be set as low as 1 microsecond, but only if supported by the driver and camera!

Other highlights:


  • Reset focus frame when mount slews.
  • Do not abort PHD2 guiding while suspended.
  • Switching to Homebrew, python3, and astropy for plate solving on OS X.
  • Check if dust cap is detected before checking whether the camera is shutterful or shutterless.
  • Fix translation issue with Sun, Moon, and Earth designations.

Support for Jupyter notebooks has evolved in Cantor

Saturday 22nd of June 2019 08:19:39 PM

Hello everyone, it's been almost a month since my last post, and a lot of changes have been made since then.

First, what I called the "minimal plan" is already done! Cantor can now load Jupyter notebooks and save the currently opened document in Jupyter format.

Below you can see how one of the Jupyter notebooks I'm using for test purposes (I mentioned them in a previous post) looks in Jupyter and in Cantor.


As you can see, there aren't many differences in the representation of the content, except for some minor differences in the rendering of the Markdown code.

For comparison, I also prepared some previews of the same fragments of the notebooks, opened in Jupyter and in Cantor.
This is a fragment from the "Understanding evolutionary strategies and covariance matrix adaptation" notebook.




As the next example, we show a screenshot of A Reaction-Diffusion Equation Solver in Python with Numpy notebook.



As the final example, we show a screenshot of Rigid-body transformations in a plane (2D) notebook.



To be more detailed and concrete about what is currently supported in Cantor, below is the list of objects that can be imported:
  • Markdown cells
    • With mathematical expressions
    • With attachments
  • Code cells
    • With text (including error messages) and image results
  • Raw NBConvert cells
Cantor is able to handle almost all content specified by the Jupyter notebook format, except for some metadata about the notebook in general and about its cells, information about the used "kernel" (support for this will be added soon), and results of other types (for example LaTeX or HTML outputs), which are more difficult to implement because of the lack of good and complete documentation for them.
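Under the hood a Jupyter notebook is plain JSON, so an import essentially boils down to walking its "cells" array. Here is a minimal sketch using Qt's JSON classes; it is not Cantor's actual importer, just an illustration of the format:

    #include <QFile>
    #include <QJsonArray>
    #include <QJsonDocument>
    #include <QJsonObject>
    #include <QJsonValue>
    #include <QString>
    #include <QStringList>

    // List the cell types of a notebook: "markdown", "code" or "raw".
    QStringList listCellTypes(const QString &path)
    {
        QStringList types;
        QFile file(path);
        if (!file.open(QIODevice::ReadOnly))
            return types;

        const QJsonObject notebook = QJsonDocument::fromJson(file.readAll()).object();
        const QJsonArray cells = notebook.value(QStringLiteral("cells")).toArray();
        for (const QJsonValue &value : cells) {
            const QJsonObject cell = value.toObject();
            // "source" holds the cell text (a string or an array of lines);
            // code cells additionally carry an "outputs" array.
            types << cell.value(QStringLiteral("cell_type")).toString();
        }
        return types;
    }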

When saving the project in Jupyter's format, Cantor handles almost all of its native entry types, like markdown entries, text entries, code entries and image entries. For the remaining "page break entry" in Cantor, it still has to be worked out how to map this element onto Jupyter's structures.

Despite quite good progress, there is still a lot of room and potential for improvement. Besides some technical issues arising when importing another format and mapping its structure onto the native structures of your application, which is quite natural for all applications I guess, there is currently also a problem with the performance of the renderer used for mathematical expressions in Cantor. Opening large documents (either in Cantor's native format or Jupyter notebooks) that contain a lot of formulas takes a considerable amount of time because of the poor renderer implementation in Cantor. This heavily influences the user experience, and I plan to start working on it soon.

So, there is some work to be done before Cantor supports what I call the "maximum plan". By this I mean the ability to guarantee that the conversion between the two formats when opening or saving projects happens without any substantial loss of information relevant and critical for the consumption of the project file.

To achieve this, I now want to invest more into testing with more notebooks and closing the remaining gaps, but also into writing automatic tests covering this new functionality in Cantor. The latter are important to prevent any kind of regressions introduced during the bug fixing activities in the next weeks. This is something for the next week.

In the next post I plan to show a working test system and how Cantor is passing its tests.

AArch64 support for ELF Dissector

Saturday 22nd of June 2019 10:00:00 AM

After having been limited to maintenance for a while I finally got around to some feature work on ELF Dissector again this week, another side-project of mine I haven’t written about here yet. ELF Dissector is an inspection tool for the internals of ELF files, the file format used for executables and shared libraries on Linux and a few other operating systems.

Use Cases

As a quick introduction, let’s focus on what ELF Dissector is most useful for.

  • Inspecting forward and backward dependencies, on library and symbol level. Say you want to remove the dependency on a legacy library like KDELibs4Support from your application, the inverse dependency viewer helps you to identify what exactly pulls in this library, and which symbols are used from it.
  • Identifying load-time performance bottlenecks such as expensive static constructors or excessive relocations. An example for this is David Edmundson’s current research into KInit.
  • Size profiling of ELF files. That’s easiest shown in the picture below.
ELF Dissector size tree map view.

AArch64 Support

Last week I had to analyze 64bit ARM binaries with ELF Dissector for the first time, which made me run into an old limitation of ELF Dissector’s disassembler. Until now ELF Dissector used Binutils for this (via some semi-public API), which works very well but unfortunately only on the host platform (that is, usually for x86 code). So this limitation finally needed to go.

We now have support for using the cross-platform disassembler framework Capstone. So far only AArch64 and x86 support are actually implemented, but adding further architectures is now quite straightforward. Together with a few other fixes and improvements, such as support for relocations in the .init_array section, ELF Dissector is now actually useful for inspecting the loading performance of AArch64 binaries too.
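For reference, using Capstone directly is pleasantly simple. The following stand-alone example uses the plain Capstone C API (not ELF Dissector's wrapper code) to disassemble a typical AArch64 function prologue:

    #include <capstone/capstone.h>
    #include <cinttypes>
    #include <cstdint>
    #include <cstdio>

    int main()
    {
        // "stp x29, x30, [sp, #-16]!" and "mov x29, sp", little-endian encoded.
        const uint8_t code[] = { 0xfd, 0x7b, 0xbf, 0xa9, 0xfd, 0x03, 0x00, 0x91 };

        csh handle;
        if (cs_open(CS_ARCH_ARM64, CS_MODE_LITTLE_ENDIAN, &handle) != CS_ERR_OK)
            return 1;

        cs_insn *insn = nullptr;
        const size_t count = cs_disasm(handle, code, sizeof(code), 0x1000, 0, &insn);
        for (size_t i = 0; i < count; ++i)
            std::printf("0x%" PRIx64 ":\t%s\t%s\n", insn[i].address, insn[i].mnemonic, insn[i].op_str);

        cs_free(insn, count);
        cs_close(&handle);
        return 0;
    }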

ELF Dissector showing AArch64 assembler.

Outlook

ELF Dissector had its first commit more than six years ago, but it is still lingering around in a playground repository, which doesn't really do it justice. One major blocker for making it painlessly distributable, however, is its dependency on private Binutils/GCC API. Using the Capstone disassembler is therefore also a big step towards addressing that; now only the use of the demangler API remains.

More in Tux Machines

LWN: Spectre, Linux and Debian Development

  • Grand Schemozzle: Spectre continues to haunt

    The Spectre v1 hardware vulnerability is often characterized as allowing array bounds checks to be bypassed via speculative execution. While that is true, it is not the full extent of the shenanigans allowed by this particular class of vulnerabilities. For a demonstration of that fact, one need look no further than the "SWAPGS vulnerability" known as CVE-2019-1125 to the wider world or as "Grand Schemozzle" to the select group of developers who addressed it in the Linux kernel.

    Segments are mostly an architectural relic from the earliest days of x86; to a great extent, they did not survive into the 64-bit era. That said, a few segments still exist for specific tasks; these include FS and GS. The most common use for GS in current Linux systems is for thread-local or CPU-local storage; in the kernel, the GS segment points into the per-CPU data area. User space is allowed to make its own use of GS; the arch_prctl() system call can be used to change its value.

    As one might expect, the kernel needs to take care to use its own GS pointer rather than something that user space came up with. The x86 architecture obligingly provides an instruction, SWAPGS, to make that relatively easy. On entry into the kernel, a SWAPGS instruction will exchange the current GS segment pointer with a known value (which is kept in a model-specific register); executing SWAPGS again before returning to user space will restore the user-space value. Some carefully placed SWAPGS instructions will thus prevent the kernel from ever running with anything other than its own GS pointer. Or so one would think.

  • Long-term get_user_pages() and truncate(): solved at last?

    Technologies like RDMA benefit from the ability to map file-backed pages into memory. This benefit extends to persistent-memory devices, where the backing store for the file can be mapped directly without the need to go through the kernel's page cache. There is a fundamental conflict, though, between mapping a file's backing store directly and letting the filesystem code modify that file's on-disk layout, especially when the mapping is held in place for a long time (as RDMA is wont to do). The problem seems intractable, but there may yet be a solution in the form of this patch set (marked "V1,000,002") from Ira Weiny.

    The problems raised by the intersection of mapping a file (via get_user_pages()), persistent memory, and layout changes by the filesystem were the topic of a contentious session at the 2019 Linux Storage, Filesystem, and Memory-Management Summit. The core question can be reduced to this: what should happen if one process calls truncate() while another has an active get_user_pages() mapping that pins some or all of that file's pages? If the filesystem actually truncates the file while leaving the pages mapped, data corruption will certainly ensue. The options discussed in the session were to either fail the truncate() call or to revoke the mapping, causing the process that mapped the pages to receive a SIGBUS signal if it tries to access them afterward. There were passionate proponents for both options, and no conclusion was reached.

    Weiny's new patch set resolves the question by causing an operation like truncate() to fail if long-term mappings exist on the file in question. But it also requires user space to jump through some hoops before such mappings can be created in the first place. This approach comes from the conclusion that, in the real world, there is no rational use case where somebody might want to truncate a file that has been pinned into place for use with RDMA, so there is no reason to make that operation work. There is ample reason, though, for preventing filesystem corruption and for informing an application that gets into such a situation that it has done something wrong.

  • Hardening the "file" utility for Debian

    In addition, Biedl had already encountered problems with file running in environments where non-standard libraries were loaded using the LD_PRELOAD environment variable. Those libraries can (and do) make system calls that the regular file binary does not make; those system calls were disallowed by the seccomp() filter.

    Building a Debian package often uses fakeroot to run commands in a way that makes them appear to have root privileges for filesystem operations, without actually granting any extra privileges. That is done so that tarballs and the like can be created containing files with owners other than the user ID running the Debian packaging tools, for example. Fakeroot maintains a mapping of the "changes" made to owners, groups, and permissions for files so that it can report those to other tools that access them. It does so by interposing a library ahead of the GNU C library (glibc) to intercept file operations. In order to do its job, fakeroot spawns a daemon (faked) that maintains the state of the changes programs make inside the fakeroot environment. The libfakeroot library loaded with LD_PRELOAD then communicates with the daemon via either System V (SysV) interprocess communication (IPC) calls or TCP/IP. Biedl referred to a bug report in his message, in which Helmut Grohne had reported a problem with running file inside a fakeroot.
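
    To make the interposition technique concrete, here is a minimal, hypothetical LD_PRELOAD shim in the same spirit; it is not fakeroot's actual code and records nothing in a daemon. It simply overrides chown() and forwards to the real glibc implementation found with dlsym(RTLD_NEXT, ...). Any extra system calls such a preloaded library makes are exactly the kind of thing that can trip a strict seccomp() filter, as described above.

        /* Minimal sketch of an LD_PRELOAD interposer, illustrating (not
         * reproducing) the mechanism fakeroot relies on.  Roughly:
         *   gcc -shared -fPIC -o libshim.so shim.c -ldl
         *   LD_PRELOAD=./libshim.so ./some-program-that-calls-chown
         */
        #define _GNU_SOURCE
        #include <dlfcn.h>
        #include <stdio.h>
        #include <sys/types.h>

        int chown(const char *path, uid_t owner, gid_t group)
        {
            /* Find the real chown() in the next object (normally glibc). */
            int (*real_chown)(const char *, uid_t, gid_t) =
                dlsym(RTLD_NEXT, "chown");

            /* A fakeroot-style library would record the ownership change in
             * its faked daemon here instead of applying it for real. */
            fprintf(stderr, "intercepted chown(%s, %u, %u)\n",
                    path, (unsigned)owner, (unsigned)group);
            return real_chown ? real_chown(path, owner, group) : -1;
        }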

Flameshot is a brilliant screenshot tool for Linux

The default screenshot tool in Ubuntu is alright for basic snips, but if you want a really good one you need to install a third-party screenshot app. Shutter is probably my favorite, but I decided to give Flameshot a try. Packages are available for various distributions, including Ubuntu, Arch, openSUSE, and Debian. You can find installation instructions on the official project website.

Android Leftovers

IBM/Red Hat and Intel Leftovers

  • Troubleshooting Red Hat OpenShift applications with throwaway containers

    Imagine this scenario: Your cool microservice works fine from your local machine but fails when deployed into your Red Hat OpenShift cluster. You cannot see anything wrong with the code or anything wrong in your services, configuration maps, secrets, and other resources. But, you know something is not right. How do you look at things from the same perspective as your containerized application? How do you compare the runtime environment from your local application with the one from your container? If you performed your due diligence, you wrote unit tests. There are no hard-coded configurations or hidden assumptions about the runtime environment. The cause should be related to the configuration your application receives inside OpenShift. Is it time to run your app under a step-by-step debugger or add tons of logging statements to your code? We’ll show how two features of the OpenShift command-line client can help: the oc run and oc debug commands.

  • What piece of advice had the greatest impact on your career?

    I love learning the what, why, and how of new open source projects, especially when they gain popularity in the DevOps space. Classification as a "DevOps technology" tends to mean scalable, collaborative systems that span a broad range of challenges, from message bus to monitoring and back again. There is always something new to explore, install, and spin up.

  • How DevOps is like auto racing

    When I talk about desired outcomes or answer a question about where to get started with any part of a DevOps initiative, I like to mention NASCAR or Formula 1 racing. Crew chiefs for these race teams have a goal: finish in the best place possible with the resources available while overcoming the adversity thrown at them. If the team feels capable, the goal gets moved up a series of levels, to holding a trophy at the end of the race. To achieve their goals, race teams don’t think from start to finish; they flip the table to look at the race from the end goal back to the beginning. They set a goal, a stretch goal, and then work backward from that goal to determine how to get there. Work is delegated to team members to push toward the objectives that will get the team to the desired outcome.

    [...] Race teams practice pit stops all week before the race. They do weight training and cardio programs to stay physically ready for the grueling conditions of race day. They continually collaborate to address any issue that comes up. Software teams should also practice software releases often. If safety systems are in place and practice runs have been going well, they can release to production more frequently; in this mindset, speed makes things safer. It’s not about doing the “right” thing; it’s about removing as many blockers to the desired outcome (goal) as possible, then collaborating and adjusting based on the real-time feedback that is observed. Anticipating anomalies, and working to improve quality and minimize their impact, is expected of everyone in a DevOps world.

  • Deep Learning Reference Stack v4.0 Now Available

    Artificial Intelligence (AI) continues to represent one of the biggest transformations underway, promising to impact everything from the devices we use to cloud technologies, and to reshape infrastructure and even entire industries. Intel is committed to advancing the Deep Learning (DL) workloads that power AI by accelerating enterprise and ecosystem development. From our extensive work developing AI solutions, Intel understands how complex it is to create and deploy applications for deep learning workloads. That's why we developed an integrated Deep Learning Reference Stack, optimized for Intel Xeon Scalable processors, and released the companion Data Analytics Reference Stack. Today, we're proud to announce the next Deep Learning Reference Stack release, incorporating customer feedback and delivering an enhanced user experience with support for expanded use cases.

  • Clear Linux Releases Deep Learning Reference Stack 4.0 For Better AI Performance

    Intel's Clear Linux team on Wednesday announced the Deep Learning Reference Stack 4.0 during the Linux Foundation's Open Source Summit North America event taking place in San Diego. Clear Linux's Deep Learning Reference Stack continues to be engineered to show off maximum features and performance for those interested in AI / deep learning on Intel Xeon Scalable CPUs. This optimized stack lets developers more easily get going with a tuned deep learning stack that should already offer near-optimal performance.