Planet KDE

High DPI update

Thursday 9th of July 2020 03:07:57 AM

I’d like to share a brief update regarding the state of high DPI support. Since getting a laptop with a 4K screen, I’ve found all sorts of subtle papercuts and have been doing my best to fix them or at least file bug reports. Here’s what’s been fixed recently:

Fixed recently

…And the work continues! Here are the bugs next on my list to investigate and try to fix:

Next up

A lot of the above issues are sort of stretch goals for me as each one requires a great deal of learning before I can even begin to try to put together a halfway-intelligent fix. So feel free to help out if you find this work useful and have relevant skills!

Adding EteSync calendars and tasks to Kontact - GSoC 2020 with KDE and EteSync [Part 4]

Wednesday 8th of July 2020 06:00:00 PM

Hey!

Last month, I wrote about adding EteSync address books to Kontact. Since then, I have been working on extending this functionality to calendars and tasks as well. I am happy to report that fetching and modifying EteSync contacts, calendars and tasks is now possible in Kontact. If you want to test it out, skip to the “Testing the resource” section below. You can read on for updates on the project:

The configuration dialog

When a new EteSync address book or calendar is added to Kontact, a configuration dialog pops up, asking you to enter your EteSync server URL, username, password and encryption password. Entering the relevant credentials will fetch your contacts and calendars/tasks into KAddressBook and KOrganizer.

Fetching EteSync contacts, calendars and tasks

Adding/Modifying collections and items

Kontact now implements a two-way client for EteSync. Not only does it support fetching data, but we can also use it to create, modify and delete contacts, events and tasks. We can even create new EteSync address books, calendars and task lists right from within Kontact.

Here, all changes made using KOrganizer are reflected in the EteSync web client too.

Creating an event

Deleting an event

Testing the resource

Testing the resource is as simple as cloning and building KDE PIM Runtime from this repo.

  • Clone the repo
  • Build the project using make or kdesrc-build
  • Restart Akonadi (akonadictl restart)
  • Open Kontact and add a new EteSync address book or calendar
About the implementation
  • The configuration dialog is implemented by subclassing the Akonadi::AgentConfigurationBase class.
  • There are three handler classes for the three item types - ContactHandler, CalendarHandler and TaskHandler. CalendarHandler and TaskHandler are derived from the abstract class CalendarTaskBaseHandler, which contains a lot of code common to handling calendars and tasks. A rough sketch of this hierarchy follows below.
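A hypothetical C++ sketch of that hierarchy (the real classes live in kdepim-runtime and their interfaces differ; the virtual method names here are illustrative only):

#include <QString>

// Simplified sketch of the handler hierarchy; not the actual kdepim-runtime API.
class BaseHandler
{
public:
    virtual ~BaseHandler() = default;
    virtual QString mimeType() const = 0;    // Akonadi MIME type the handler is responsible for
    virtual void retrieveCollections() = 0;  // fetch address books / calendars / task lists
    virtual void retrieveItems() = 0;        // fetch contacts / events / todos
};

class ContactHandler : public BaseHandler { /* contact-specific fetching and syncing */ };

// Calendars and tasks share most of their logic through a common abstract base.
class CalendarTaskBaseHandler : public BaseHandler { /* shared incidence handling */ };
class CalendarHandler : public CalendarTaskBaseHandler { /* ... */ };
class TaskHandler : public CalendarTaskBaseHandler { /* ... */ };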
What next?
  • The configuration dialog is currently a single dialog that takes the username, password, server URL and encryption password as input. It should ideally be a two-step process: the user first enters the server URL, username and password, and then, on successful login, the encryption password. This is needed to initialise new accounts which do not have an encryption password set up yet.
  • Several things still need to be looked into and implemented - such as collection refresh rates, Akonadi cache policies and Akonadi resource status signals.

Qt Creator 4.12.4 released

Wednesday 8th of July 2020 10:42:46 AM

We are happy to announce the release of Qt Creator 4.12.4!

Week 4 and 5: GSoC Project Report

Tuesday 7th of July 2020 05:11:52 PM

This is the report for week 4 and week 5 combined into one, because I couldn’t do much during week 4 due to college tests and assignments, so there was not much to report for that week. These two weeks I worked on implementing interactions between the storyboard docker and the timeline docker (or the new Animation Timeline docker). Most of the interactions from the timeline docker to the storyboard docker are implemented. To list the things implemented:

  • Selections are synced.
  • Adding, removing and moving keyframes in the timeline docker results in items being added, removed and moved (sensibly) in the storyboard docker.
  • Thumbnails for the current frame are updated when you change the contents of that frame.

Selections between the storyboard docker and the timeline docker are synced. If you select a frame in the timeline docker, the item corresponding to that frame in the storyboard docker gets selected. This works the other way too: if you select an item in the storyboard docker, that frame is selected in the timeline docker. A simplified sketch of this kind of two-way sync follows below.
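In Qt terms, such a two-way sync usually boils down to a pair of signal/slot connections plus a guard against feedback loops. A minimal sketch, using hypothetical stand-in classes rather than Krita's actual dockers:

#include <QObject>

// Hypothetical stand-ins for the two dockers; Krita's real classes differ.
class TimelineDocker : public QObject
{
    Q_OBJECT
public:
    void selectFrame(int frame)
    {
        if (m_frame == frame)
            return;              // guard: ignore a selection we already have
        m_frame = frame;
        emit frameSelected(frame);
    }
signals:
    void frameSelected(int frame);
private:
    int m_frame = -1;
};

class StoryboardDocker : public QObject
{
    Q_OBJECT
public:
    void selectItemForFrame(int frame)
    {
        if (m_frame == frame)
            return;
        m_frame = frame;
        emit itemSelected(frame);
    }
signals:
    void itemSelected(int frame);
private:
    int m_frame = -1;
};

// Connect both directions; the early-return guards above prevent the two
// signals from bouncing back and forth forever.
void connectDockers(TimelineDocker *timeline, StoryboardDocker *storyboard)
{
    QObject::connect(timeline, &TimelineDocker::frameSelected,
                     storyboard, &StoryboardDocker::selectItemForFrame);
    QObject::connect(storyboard, &StoryboardDocker::itemSelected,
                     timeline, &TimelineDocker::selectFrame);
}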

Adding a keyframe in the timeline docker would add an item for that frame in the storyboard docker, if there is not an item for that frame already.

Removing a keyframe in the timeline docker would remove the item for that frame in the storyboard docker, if there is no other keyframe at that frame.

Moving a keyframe in the timeline docker would move the item for that frame in the storyboard docker so that items in the storyboard docker are always sorted according to frames.

It is possible to prevent items from being added to the storyboard when a keyframe is added, by toggling the lock button in the storyboard docker. Removing and moving keyframes always causes removal and movement of items in the storyboard docker. So the number of items can be less than the number of keyframes, but it can never be more.

Changing a keyframe updates the thumbnail for that frame in the storyboard docker. Not all affected items are updated right now, but I am working on it.

This week I will work on updating all affected items when a keyframe is changed, and on writing unit tests for these interactions.

Google Summer of Code 2020 - Week 4

Tuesday 7th of July 2020 12:00:00 PM

According to my GSoC proposal, I should be done with the general-purpose graph layout capabilities for Rocs and free to start working on layout algorithms specifically designed to draw trees. This is not the case for a number of reasons, including my decision to write my own implementation of a general-purpose force-based graph layout algorithm and my failure to anticipate the need for non-functional tests to evaluate the quality of the layouts. I still need to document the functionality of the plugin and improve the code documentation as well. Besides that, although it is not part of my original proposal, I really want to include the layout algorithm presented in [1], because I have high expectations about the quality of the layouts it can produce.

Even though I am not done with this first part, I decided to start working on the layout algorithms for trees. Now that I am more used to the code base and to the processes used by KDE, I expect to be more productive and finish everything on time.

The main motivation for implementing layout algorithms specifically designed for trees is the possibility of exploiting the properties of trees to come up with better layouts. Yesterday, I started studying more about layout algorithms for trees. Most of them are based on a subset of the following ideas:

  • Select a root node for the tree. This node can be provided by the user or selected automatically.
  • Partition the nodes by their depth in the rooted tree and draw all nodes with the same depth at the same layer (usually a line or a circle).
  • Draw the sub-trees in a recursive way and compose the resulting drawings to get the complete layout.

All of these ideas are related to the structure of trees. In order to improve my intuition about them, I wrote an experimental implementation based on the first two ideas above. The application of this implementation to a tree with 15 nodes generated the following result:

By taking advantage of the properties of trees, even simple solutions such as my one-day experimental implementation can guarantee some desirable layout properties that the general-purpose force-based layout algorithm cannot. For instance, it guarantees that there are no intersections between edges or between nodes. The force-based layout algorithm I implemented can generate layouts with pairs of intersecting edges even when applied to trees. A rough sketch of the layered approach follows below.
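For illustration, here is a minimal, hypothetical sketch of a layered tree layout based on the first two ideas above (root selection and depth-based layers). The types are simplified stand-ins, not the actual Rocs graph classes, and this naive even spacing does not yet order children to avoid edge crossings:

#include <QPointF>
#include <QVector>
#include <queue>

struct Tree {
    int nodeCount = 0;
    QVector<QVector<int>> adjacency; // adjacency[u] = neighbours of node u
};

QVector<QPointF> layeredTreeLayout(const Tree &tree, int root,
                                   qreal layerSpacing, qreal nodeSpacing)
{
    QVector<int> depth(tree.nodeCount, -1);
    QVector<QVector<int>> layers;

    // Breadth-first search partitions the nodes by their depth from the root.
    std::queue<int> pending;
    depth[root] = 0;
    pending.push(root);
    while (!pending.empty()) {
        const int u = pending.front();
        pending.pop();
        if (layers.size() <= depth[u])
            layers.resize(depth[u] + 1);
        layers[depth[u]].append(u);
        for (int v : tree.adjacency[u]) {
            if (depth[v] == -1) {
                depth[v] = depth[u] + 1;
                pending.push(v);
            }
        }
    }

    // Draw every layer on its own horizontal line, nodes evenly spaced
    // and centred around x = 0.
    QVector<QPointF> positions(tree.nodeCount);
    for (int d = 0; d < layers.size(); ++d) {
        const QVector<int> &layer = layers[d];
        for (int i = 0; i < layer.size(); ++i) {
            const qreal x = i * nodeSpacing - (layer.size() - 1) * nodeSpacing / 2.0;
            positions[layer[i]] = QPointF(x, d * layerSpacing);
        }
    }
    return positions;
}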

[1] R. Davidson and D. Harel. Drawing graphs nicely using simulated annealing. ACM Transactions on Graphics, 15(4):301–331, 1996.

Cantor - Plot handling improvements

Tuesday 7th of July 2020 07:22:00 AM
Hello everyone,

this is the third post about the progress of my GSoC project, and I want to present the latest changes in how Cantor handles external packages.
The biggest recent changes happened for Python. We now properly support integrated plots created with matplotlib.
Cantor intercepts the creation of plots and embeds the result into its worksheet.
This also works if multiple plots are created in one step; the order of the plots is preserved.
Text results between plots are supported as well.

Besides matplotlib, we also properly handle Plot.ly - another popular graphing library for Python and R. This package has some requirements
that have to be fulfilled first, and the user is notified about these requirements in case they are not met.

A similar implementation was also done for Julia and Octave, but to a smaller extent.
Though many preparatory changes were made in the code for this, the only result visible to the user
at the moment are the new messages about unfulfilled requirements of graphing packages.
This is especially important for Julia, since in the past the GR package was hard-coded for graphing and there was no notification to the user
if this package was not installed, so it was not immediately clear why the creation of plots failed.
With these improvements Cantor is taking the next steps towards becoming more user friendly.

There is another important change - the settings for graphing packages become dynamic.
The user can now change them on the fly without having to restart the session.


Also, the plot menu was extended. Julia and Python can now produce code for multiple packages - the preferred package can be chosen in the settings.


In the next post I plan to show how the usability of Cantor panels is going to be improved.

Weekly Report 4

Tuesday 7th of July 2020 12:00:00 AM
GSoC Week 5 - Qt3D based backend for KStars

In the fifth week of GSoC, I worked on adding stars to the 3D painter backend along with grids. The Qt3D code has now been moved to a new Skymap3D.

What’s done this week
  • Shaders for instancing all kinds of stars (labelled, unlabelled) with all projection modes.

  • A new abstract camera controller.

The Challenges
  • Integration issues with the original SkyPainter API written to support multiple backends.

  • Use of a new custom camera controller and view matrix.

  • Zooming and focus for deep star objects.

What remains

My priorities for the next week include:

  • Adding Skymap events to Skymap3D.

  • Debugging deep star objects and getting started with other sky objects.

Demo

The Code

Week #5 Status Report [Preset Editor MyPaint Engine]

Monday 6th of July 2020 11:21:00 AM
The second month of Google Summer of Code has started. This month my main focus will be on improving the MyPaint brush engine and fixing some issues; I also aim to add a better preset editor than the ones provided by other painting applications.
Last week I worked solely on the preset editor for the newly available MyPaint brush engine in Krita.
This was in accordance with the mockup presented in the proposal and contained only the 3-4 most basic settings for the engine. Almost all applications with MyPaint integration provide just these few basic settings, even though the library does provide an API to expose all of the available settings.
Preset Editor
This week I will look at how to add more settings and customization options to it. I have something like this in mind, though, as per the standard procedure, I will discuss it with Boud first.
Mockup for Preset Editor with All the Settings
This week I shall focus on this and will try to add as many settings as possible to the widget.
Till then, Good Bye :)

GSoC'20 progress : Phase I

Sunday 5th of July 2020 06:30:00 PM
Wrapping up the first phase of Google Summer of Code

This week in KDE: A little bit of everything

Saturday 4th of July 2020 05:03:41 AM

A lot of exciting things are happening behind the scenes these days, but in terms of what landed this week, we focused on bugfixing–including a few nice high DPI fixes–and also got a few nice Dolphin and Konsole features.

New Features

Dolphin now has a new “Copy Location” menu item (Yann Holme-Nielsen, Dolphin 20.08.0):

And so does Konsole! (Tomaz Canabrava, Konsole 20.08):

Konsole’s split view headers can now be optionally disabled, and the thickness of the separator can be optionally increased (Tomaz Canabrava, Konsole 20.08.0):

Bugfixes & Performance Improvements

Various non-default Task Switchers now have the right size when using a high DPI scale factor (me: Nate Graham, Plasma 5.18.6 and beyond)

Fixed various system tray items’ context menus popping up in the wrong location (Konrad Materka, Plasma 5.18.6 and beyond)

When using a wallpaper package with multiple sizes (e.g. the default Plasma wallpaper), the correct size is now displayed when using a high DPI scale factor or changing screen resolutions (David Edmundson, Plasma 5.19.3)

The plasma start-up sound is no longer cut off when starting up the computer (David Edmundson, Plasma 5.19.3)

The new System Monitor widgets now always have the correct text color when using a Plasma theme with a different color scheme from the Application color scheme (Marco Martin, Plasma 5.19.3)

System Settings no longer crashes when you open the Applications page without any file managers installed (Alex Merry, Plasma 5.19.3)

KRunner is now faster to open, so the text that you type winds up in KRunner rather than in the app below it (David Redondo, Plasma 5.20)

When changing the default browser, the “default browser” entry visible in Kickoff and the Task Manager by default now automatically updates too (Alexander Lohnau, Plasma 5.20)

Dolphin can once again execute script files with spaces in the filename or path (David Faure, Frameworks 5.72)

Close buttons in Kirigami sheets are no longer subtly pixelated some of the time (Nicolas Fella, Kirigami 5.72)

Several Breeze icons which had subtle pixel mis-alignments that could make them appear blurry no longer suffer from this issue (Maksym Hazevych, Frameworks 5.72)

When using Qt scaling in Plasma by setting the PLASMA_USE_QT_SCALING=1 environment variable, windows now minimize to the correct locations in the Task Manager (Marco Martin, Frameworks 5.72)

User Interface Improvements

The System Settings Screen Locking page has been rewritten in QML, which fixes all of the open bugs (David Redondo, Plasma 5.20):

The Keyboard Layout System Tray item now always uses a monochrome icon, to better match the style (Nicolas Fella, Plasma 5.20):

How You Can Help

Have a look at https://community.kde.org/Get_Involved to discover ways to help be part of a project that really matters. Each contributor makes a huge difference in KDE; you are not a number or a cog in a machine! You don’t have to already be a programmer, either. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!

Finally, consider making a tax-deductible donation to the KDE e.V. foundation.

Third alpha release of my project

Friday 3rd of July 2020 02:45:00 PM

I’m glad to announce the third alpha of my GSoC 2020 project. For anyone not in the loop, I’m working on integrating Disney’s SeExpr expression language as a new type of Fill Layer.

Releases are available here:

Integrity hashes:

d5aa5138650c58ac93e16e5eef9e74f81d7eb4d3fa733408cee25e791bb7a3e1  krita-4.3.1-alpha-0b32800-x86_64.appimage
634d1c0dedc96bc8b267f02b5c431245eefde021a1e7b8e6fcdce33f5e62c25a  krita-4.3.1-alpha-0b32800992-x86_64.zip

In this release, I fixed the following issues:

  • SeExpr textures use the scRGB color space, which is not supported by Qt’s QColor until 5.12. This makes the conversion to Krita space unbearably slow (thanks Wolthera van Hövell)
  • Refactored SeExpr error reporting to make messages Qt-translatable.
    • This adds KDE’s ECM to the list of (optional) dependencies of SeExpr.
  • Error reporting is now available, including highlighting! (thanks Wolthera too for noticing)
  • Configuration is saved and restored when changing between Fill layer types (bug 422885, thanks Boudewijn Rempt)
  • Cleaned up SeExpr headers
    • They are now installed only if used in the UI library itself.
  • UI labels have extra spacing (thanks Wolthera van Hövell)

Another outstanding issue is SeExpr’s vulnerability to the current LC_NUMERIC locale, due to its use of sscanf and atof. I am sad to announce I won’t be able to change this; the library I wanted to use, scn, is itself vulnerable to locale changes.

But the most important feature, and my final contribution, is bundleable presets!

This enables SeExpr scripts to be bundled just like any other resource in Krita. Below you can find a bundle containing all of the example scripts posted by Wolthera on the Krita Artists thread.

Link: https://dump.amyspark.me/Krita_Artists'_SeExpr_examples.bundle

Integrity hash:

1e4a1bc6a9b8238cee96dfee9a50e7db903fe7b665758caf731d53c96597dc20 Krita_Artists'_SeExpr_examples.bundle

Looking forward to your comments!

Cheers,

~amyspark

PS: The Git hashes in the files point to a cleaned-up branch in my own fork of Krita. I am reviewing options to sync this work back to the main krita/4.3 branch.

GSoC’20 First Evaluation

Friday 3rd of July 2020 12:00:00 AM

Hello everyone,

In the last blog, I wrote about my first two weeks on the GSoC period. In this blog, I will write about the activities on which I have worked further to implement multiple datasets.

Until now, I have implemented the multiple datasets in the following activities:

  1. Enumeration memory game
  2. Addition memory game with Tux and without Tux
  3. Subtraction memory game with Tux and without Tux
  4. Multiplication memory game with Tux and without Tux
Why multiple datasets for GCompris activities?

All of the GCompris activities had a single generalized dataset, so for some age groups (like 3-5 years) an activity could be quite difficult to play, while for older age groups it was too easy. Multiple datasets help resolve this issue and make the activities adaptive for children of all age groups.

Enumeration memory game

In this activity, the child needs to turn the cards to count the images and match them with the corresponding number cards. I started my work of implementing multiple datasets for the memory activities with this one. To do this, I needed to change the logic of the code to support both the default dataset and multiple datasets. After implementing multiple datasets there was an issue in the case of just two images, so I updated the condition that checks whether the image, sound and text resources are enough to load a particular level. There are a total of 8 datasets for this activity.
The image below shows the dataset selection screen for the Enumeration memory game activity.

After implementing multiple datasets for this activity there was a regression that affected all memory activities as a blank activity config was displayed for activities with no multiple datasets. I fixed this too and pushed the changes to my branch for review. Once I had everything perfectly done and tested, I made a merge request from my working branch so that it could be merged to master.

Addition memory game

In this activity, the child needs to turn the cards to match an addition and its result, until all the cards are gone. The goal of this activity is to practice addition. After the implementation of multiple datasets for the enumeration memory game, it is not that difficult to add multiple datasets to the other memory activities, as we just need to create the different Data.qml files in the resource directory and load the datasets. For this activity we use the function getAddTable() implemented in math_util.js and pass the respective number as the argument. There are a total of 10 datasets for this activity. The activity has a second mode, “with Tux”, so I implemented multiple datasets for that too; the dataset content of Addition memory game with Tux is the same as without Tux. Once I had implemented the datasets for both modes, I made a merge request from my working branch.

Code of Data.qml file

import GCompris 1.0
import "qrc:/gcompris/src/activities/memory/math_util.js" as Memory

Data {
    objective: qsTr("Addition table of 1.")
    difficulty: 4
    data: [
        { // Level 1
            columns: 5,
            rows: 2,
            texts: Memory.getAddTable(1)
        }
    ]
}

You can see that the content of the dataset is written in a JSON-like format. To add another dataset we just need to change the number from 1 to 2 and update the objective. Quite easy to add ten datasets :)
The image below shows the dataset selection screen for the Addition memory game activity.

Subtraction memory game

In this activity, the child needs to turn the cards to match a subtraction and its result, until all the cards are gone. The goal of this activity is to practice subtraction. The dataset implementation procedure is the same as for the addition memory game; we just need to use the getSubTable() function from math_util.js. This activity also has two modes, so I implemented multiple datasets for both.
The image below shows the dataset selection screen for the Subtraction memory game activity.

Multiplication memory game

In this activity, the child needs to turn the cards to match a multiplication and its result, until all the cards are gone. The goal of this activity is to practice multiplication. The dataset implementation procedure is the same as for the addition and subtraction memory games; we just need to use the getMulTable() function from math_util.js. This activity also has two modes, so I implemented multiple datasets for both.
The image below shows the dataset selection screen for the Multiplication memory game activity.

What’s next?

I will further work on the implementation of multiple datasets for the division memory game and the other memory activities. As our project is moving to KDE's GitLab instance, I will push all of my further work there, on a different git branch for each activity group.

Regards
Deepak Kumar

KSnip and Spectacle

Thursday 2nd of July 2020 10:00:00 PM

I have two screenshot applications installed – KSnip and Spectacle – because they offer different, and independently useful, functionality. Here are some notes on what each does well.

Spectacle has been in the FreeBSD ports collection for some time now, since it ships as part of the KDE release service.

KSnip was recently added to the FreeBSD ports collection: we already had KColorPicker as a dependency for Spectacle, so packaging up the rest of the stack from Damir was a natural next-step.

KSnip

The biggest reason I have for KSnip is the multiple-screenshots-in-tabs feature it has. Like any other document reader, it has tabs and you can switch between them relatively quickly. I use this particularly for keeping track of visual changes while developing Calamares: by screenshotting the Calamares window repeatedly, I can see what changes (for instance, while doing screen-margin tweaks for mobile).

Switching back-and-forth between the tabs gives me a “pixels moved” sense, and that’s really useful. KSnip’s wide selection of annotation tools – it’s nearly a specialized drawing application – helps, too: I tell people to draw big red arrows on screenshots pointing to problems (because describing things is difficult, and a glaring visual glitch to you may be totally invisible to me).

With KSnip, adding detail to a screenshot is child’s play.

That’s not to say that KSnip doesn’t have its issues. But a blog post is not a place to complain about someone else’s Free Software: the issue tracker is (with constructive bug reports, not complaints).

Spectacle

Spectacle on the other hand integrates more nicely with my Plasma desktop on the whole, can screenshot pop-ups and tooltips and understands weird screen geometry.

I end up using Spectacle more for the “quick screenshot needed” part of writing blogs, and sometimes for sharing bits of screen where no annotations are needed.

With Spectacle, delivering or sharing the screenshot is simple.

Overall, I like having two applications that each do their own thing, and do their thing pretty well. Since the shared code stack is enormous, each of the two applications is only small: less than 1MB. That’s a small price for specialization.

Bringing modern process management to the desktop

Wednesday 1st of July 2020 01:03:58 PM

A desktop environment's sole role is to connect users to their applications. This includes everything from launching apps to actually displaying apps, but also managing them and making sure they run fairly. Everyone is familiar with the concept of a "Task manager" (like ksysguard), but over time these haven't kept up with the way applications are being developed or with the latest developments in Linux.

The problem

Managing running processes

There used to be a time when one PID == one "application". The kwrite process represents KWrite, the firefox process represents Firefox, easy. But this has changed. To pick an extreme example:
Discord in a flatpak is 13 processes!

It basically renders our task manager's process view unusable. All the names are random gibberish, and trying to kill the application or set nice levels becomes a guessing game. Parent trees can help, but they only get you so far.

It's unusable for me, it's probably unusable for any user and gets in the way of feeling in control of your computer.

We need some metadata.

Fair resource distribution

As mentioned above discord in a flatpak is 13 processes. Krita is one process.

  • One will be straining the CPU because it is a highly sophisticated application doing highly complicated graphic operations
  • One will be straining the CPU because it's written in electron

To the kernel scheduler, all of these are just 14 opaque processes. It has no knowledge that they are grouped as two different things, so it won't be able to come up with something that's fair.

We need some metadata.

(caveat: Obviously most processes are idling, and I've ignored threads for the purposes of making a point, don't write about it)

It's hard to map things

Currently the only metadata of the "application" is on a window. To show a user-friendly name and icon in ksysguard (or any other system monitor) we have to fetch a list of all processes, fetch a list of all windows and perform a mashup, coming up with arbitrary heuristics for handling parent PIDs, which is unstable and messy.

To give some different real world examples:

  • In plasma's task manager we show an audio indicator next to the relevant window, we do this by matching PIDs of what's playing audio to the PID of a window. Easy for the simple case... however as soon as we go multi-process we have to track the parent PID, and each "fix" just alternates between one bug and another.
  • With PID namespaces apps can't correctly report client PIDs anymore.
  • We lose information on what "app" we've spawned. We have bug reports where people have two different taskmanager entries for "Firefox" and "Firefox (nightly)" however once the process is spawned that information is lost - the application reports itself as one consistent name and our taskbar gets confused.

We need some metadata.

Solution!

This is a solved problem!

A modern sysadmin doesn't deal in processes, but in cgroups. The cgroup manager (which will typically be systemd) spawns each service as one cgroup. It uses cgroups to know what's running, and the kernel can see which things belong together.

On the desktop flatpaks will spawn themselves in cgroups so that they can use the relevant namespace features.

You're probably already using cgroups. As part of a cross-desktop effort we want to bring cgroups to the entire desktop.

Example

Before and after of our system monitor

Ultimately the same data, but way easier to read.

Slices

Another key part of cgroup usage is the concept of slices. Cgroups are based on a hierarchical structure, with slices as logical places to split resource usage. We don't adjust resources globally, we adjust resources within our slice, which then provides information to the scheduler.

Conceptually you can imagine that we just adjust resources within our level of a tree. Then the kernel magically takes care of the rest.

More information can be found on slices in this excellent series World domination with cgroups.

Default slices

This means we can set up some predefined slices. Within the relevant user slice this will consist of:

  • applications
  • the system (kwin/mutter, plasmashell)
  • background services (baloo, tracker)

Each of these slices can be given some default prioritisations and OOM settings out of the box.

Dynamic resource shifting

Now that we are using slices, and only adjusting our relative weight within the slice, we can shift resource priority to the application owning the focused window.

This only has any effect if your system is running at full steam from multiple sources at once, but it can provide a slicker response at no drawback.

Why slice, doesn't nice suffice?

Nice is a single value, global across the entire system. Because of this, user processes can only be lowered, never raised, to avoid messing with the system. With slices we're only adjusting the relative weight compared to services within our slice, so it's safe to give the user full control within their slice. Any adjustments to an application won't impact system services or other users.

It also doesn't conflict with nice values set by the application explicitly. If we set kdevelop to have greater CPU weight, clang won't suddenly take over the whole computer when compiling.

Fixing things is just the tip of the iceberg

CGroup extra features

Cgroups come with a lot of new features that aren't available on a per-process level.

We can:

  • Set limits so that a group can't use more than N% of the CPU
  • We can gracefully close processes on logout
  • We can disable networking
  • We can set memory limits
  • We can prevent forkbombs
  • We can provide hints to the OOM killer not just with a weight but with expected ranges that should be considered normal
  • We can freeze groups of processes (which will be useful for Plasma mobile)
    ...

All of this is easy to add for a user / system administrator. Using drop-ins, one can just add a .service file [example file link] to ~/.config/systemd/user.control/app-firefox@.service and manipulate any of these.
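Underneath, these settings end up in cgroup v2 controller files. Purely as an illustration of that interface (the slice path below is a hypothetical example; in practice you would set these through systemd drop-ins or systemctl set-property rather than writing the files directly, which also requires proper delegation):

#include <QFile>
#include <QTextStream>

// Write a single value into a cgroup v2 interface file.
static bool writeCgroupFile(const QString &path, const QString &value)
{
    QFile file(path);
    if (!file.open(QIODevice::WriteOnly | QIODevice::Text))
        return false;
    QTextStream out(&file);
    out << value << '\n';
    return true;
}

int main()
{
    // Hypothetical example path for a user's application slice.
    const QString cgroup = QStringLiteral("/sys/fs/cgroup/user.slice/user-1000.slice/app.slice");

    writeCgroupFile(cgroup + "/cpu.max", QStringLiteral("50000 100000")); // at most 50% of one CPU
    writeCgroupFile(cgroup + "/memory.max", QStringLiteral("2G"));        // hard memory limit
    writeCgroupFile(cgroup + "/pids.max", QStringLiteral("512"));         // fork-bomb protection
    writeCgroupFile(cgroup + "/cgroup.freeze", QStringLiteral("1"));      // freeze the whole group
    return 0;
}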

[caveat: some of those features work for applications created as new transient services, not the lite version using scopes that's currently merged in KDE/GNOME - maybe worth mentioning]

Steps taken so far

Plasma 5.19 and recent Gnome now spawn applications into respective cgroups, but we're not yet surfacing the results that we can get from this.

For the KDE devs providing the metadata is easy.

If spawning a new application from an existing application, be sure to use either ApplicationLauncherJob or CommandLauncherJob and set the respective service. Everything else is then handled automagically. You should be using these classes anyway for spawning new services.
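A minimal sketch of that, assuming a .desktop file name purely as an example:

#include <KIO/ApplicationLauncherJob>
#include <KService>

void launchFileManager()
{
    // Look up the service by its .desktop name (example name; use your own).
    KService::Ptr service = KService::serviceByDesktopName(QStringLiteral("org.kde.dolphin"));
    if (!service)
        return;

    auto *job = new KIO::ApplicationLauncherJob(service);
    job->start(); // the job deletes itself when done; the grouping metadata is handled for us
}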

For users, you can spawn an application with kstart5 --application foo.desktop

That change to the launching is relatively tiny, but getting to this point in Plasma wasn't easy - there were a lot of edge cases that messed up the grouping:

  • kinit, our zygote process, really meddled with keeping things grouped correctly
  • drkonqi, our crash handler and application restarter
  • D-Bus activation has no knowledge of the associated .desktop file if an application is DBus-activated (such as Spectacle, our screenshot tool)
  • and many, many more papercuts throughout the different launch paths

Also to fully capitalise on slices we need to move all our background processes into managed services and slices. This is worthy of another (equally lengthy) blog post.

How you can help

It's been a battle to find these edge cases.
Whilst running your system, please run systemd-cgls and point out any applications (not background services yet) that are not in their appropriate cgroup.

What if I don't have an appropriate cgroup controller?

(e.g BSD users)

As we're just adding metadata, everything used now will continue to work exactly as it does now. All existing tools work exactly the same. Within our task manager we will still keep a process view (it's still useful regardless) and we won't put in any code that relies on the cgroup metadata being present. We'll keep the existing heuristics for matching windows with external events; cgroup metadata would just be a strong influencing factor in that. Things won't get worse, but we won't be able to capitalise on the new features discussed here.

Google Summer of Code 2020 – week 4 and 5

Wednesday 1st of July 2020 04:11:04 AM

Hi, today I will talk about my week 4 and week 5 and bring some news!

The last post was short but this one will make up for it, explaining some important bits, and changes, in the structure of mark that changed/improved during the first month of coding in GSoC.

In week 4, I documented a huge part of the existing code, although there is still a need for some updates. Currently in week 5, I am fixing some bugs of the new logic and I will document the newly created Painter class (more information below), also start developing the logic for text annotation.

The structure of marK has had some changes that I would like to highlight and explain. This diagram represents the current structure, with marK also being the main window class.

Current relationship of classes.

Container was an abstract class and its children handled the loading of the necessary items, such as the image for the previous ImageContainer and the text for the planned TextContainer, and also annotated the data. After talking with my mentor, we decided that it could change for the better and be simpler.

Now marK has only one Container, which acts as a “canvas” for the Painter and is switched according to the file type. The painters are now the ones responsible for loading and displaying the contents of the files, and they also handle the annotation of the data.

The container holds the current MarkedObject being annotated and also a vector of the previous ones. With this change the code is smaller and it is easier and faster to switch between different types of annotation.

Children of MarkedObject, each one represents, respectively, audio, image (and video) and text annotation.

MarkedObject represents the annotated data. Each MarkedObject has a reference to a MarkedClass and uses it to get a color and identifier for the annotation. It also uses a d-pointer to avoid problems related to ABI changes (as I said in the previous post).

The existing MarkedClasses are shown in marK’s comboBox, where it is possible to select them and also modify their name identifier and color, allowing users to personalize them according to their needs.

Serializer is responsible for reading and saving the MarkedObjects resulting from the annotation in marK. Its functions have been refactored to support text annotation (and other types of annotation). Currently, Serializer only exports to two file formats, XML and JSON.

Existing and futures children of Painter.

Painter is the base class of all painters and is a friend of the Container; as said before, it takes over the responsibility of the containers (from the previous logic). There will be a derived class for each type of annotation (such as the ones shown above). Currently, only ImagePainter exists and works, but this month I will develop the TextPainter and change this. A rough sketch of this hierarchy follows below.
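A hypothetical sketch of how such a hierarchy could be declared (the signatures are illustrative, not marK's actual API):

#include <QString>

class Container; // the "canvas" widget the painters draw on

class Painter
{
public:
    explicit Painter(Container *parent) : m_parent(parent) {}
    virtual ~Painter() = default;

    // Load and display the contents of the given file in the Container.
    virtual bool loadItem(const QString &filepath) = 0;

    // Paint the current annotation state (shapes for images, highlights for text).
    virtual void paint() = 0;

protected:
    Container *m_parent;
};

class ImagePainter : public Painter
{
public:
    using Painter::Painter;
    bool loadItem(const QString &filepath) override { /* load the image or video frame */ return true; }
    void paint() override { /* draw polygons/boxes over the image */ }
};

class TextPainter : public Painter
{
public:
    using Painter::Painter;
    bool loadItem(const QString &filepath) override { /* load the plain text */ return true; }
    void paint() override { /* highlight the annotated spans of text */ }
};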

In the next post I will explain about the TextPainter and show the initial text annotation in marK.

That is it, see you in the next post ; )

GSoC ’ 20 Progress: Week 3 and 4

Tuesday 30th of June 2020 07:11:56 PM

Greetings!

The past two weeks did not see as much progress as I would have liked because of my university exams and evaluations. Now, let’s focus on the work that I could do before I got swamped with the academic work and term exams.

I started the third week by working on drafting a basic QML display of the subtitle model items, such as the position of each subtitle in the timeline. I drafted a basic QML delegate model to display the start positions of each subtitle line. Then I began working on integrating the back-end part (which I had mentioned in the previous post) with this basic front-end part (displaying the positions of the subtitles).

In this process of integrating the subtitle model with the QML delegate model, I encountered a few logical errors with my code and some connections with the Subtitle Model which I had completely overlooked. It was also during this time that I realised I had missed out some key functions while writing the subtitle model class.

I defined a class, SubtitledTime, that will handle all things subtitle in the application, for example comparing two different subtitles, getting the text or the timings of each subtitle, and assigning values to subtitles.

class SubtitledTime
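A simplified sketch of what such a value class could look like (the actual Kdenlive class uses GenTime for the timings and has more to it; plain doubles are used here to keep the sketch self-contained):

#include <QString>

class SubtitledTime
{
public:
    SubtitledTime(double start, double end, const QString &text)
        : m_start(start), m_end(end), m_text(text) {}

    double start() const { return m_start; }
    double end() const { return m_end; }
    QString text() const { return m_text; }

    void setText(const QString &text) { m_text = text; }

    // Subtitles are ordered by their start time in the timeline.
    bool operator<(const SubtitledTime &other) const { return m_start < other.m_start; }

    // Two subtitles are equal when both timings and the text match.
    bool operator==(const SubtitledTime &other) const
    { return m_start == other.m_start && m_end == other.m_end && m_text == other.m_text; }

private:
    double m_start;
    double m_end;
    QString m_text;
};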

Next, I added functions to register the start timing of each subtitle in the existing SnapModel. This will enable Kdenlive's snapping feature for subtitles, making it convenient for users when editing subtitles in the timeline.

void registerSnap (const std::weak_ptr<SnapInterface> &snapModel);

void addSnapPoint (GenTime startpos);

Finally, I added the basic functions of getting the list of subtitles and returning the total number of rows in the SubtitleModel.

int rowCount (const QModelIndex &parent = QModelIndex()) const override;

QList<SubtitledTime> getAllSubtitles() const;

The code can be found here.

With this, phase one of the coding period has come to an end and the phase evaluation has begun. Now that my exams are finally over, I hope to make up for the lost time in the upcoming weeks.

~ Sashmita Raghav

Plasma 5.19 testing in Groovy Gorilla

Tuesday 30th of June 2020 02:34:49 PM

Are you running the development release of Kubuntu Groovy Gorilla 20.10, or wanting to try the daily live ISO?

Plasma 5.19 has now landed in 20.10 and is available for testing. You can read about the new features and improvements in Plasma 5.19 in the official KDE release announcement.

Kubuntu is part of the KDE community, so this testing will benefit both Kubuntu as well as upstream KDE Plasma software, which is used by many other distributions too.

The Kubuntu development release is not recommended for production systems. If you require a stable release, please see our LTS releases on our downloads page.

Getting the release:

If you are already running Kubuntu 20.10 development release, then you will receive (or have already received) Plasma 5.19 in new updates.

If you wish to test the live session via the daily ISO, or install the development release, the daily ISO can be obtained from this link.

Testing:

  • If you believe you might have found a packaging bug, you can use launchpad.net to post testing feedback to the Kubuntu team as a bug, or;
  • If you believe you have found a bug in the underlying software, then bugs.kde.org is the best place to file your bug report.

[Test Case]
* General tests:
– Does plasma desktop start as normal with no apparent regressions over 5.18 or whatever version you are familiar with?
– General workflow – testers should carry out their normal tasks, using the plasma features they normally do, and test common subsystems such as audio, settings changes, compositing, desktop effects, suspend etc.
* Specific tests:
– Check the changelog in the KDE announcement:
– Identify items with front/user facing changes capable of specific testing. e.g. “clock combobox instead of tri-state checkbox for 12/24 hour display.”
– Test the ‘fixed’ functionality.

Testing is very important to the quality of the software Ubuntu and Kubuntu developers package and release.

Plasma 5.19 has 3 more scheduled bugfix releases in the coming months, so by testing you can help to improve the experience for Kubuntu users and the KDE community as a whole.

Thanks! Please stop by the Kubuntu-devel IRC channel or Telegram group if you need clarification of any of the steps to follow.

Note: Plasma 5.19 has not currently been packaged for our backports PPA, as the release requires Qt >= 5.14, while Kubuntu 20.04 LTS has Qt 5.12 LTS. Our backports policy for KDE packages to LTS releases is to provide them where they are buildable with the native available stack on each release.

[1] – irc://irc.freenode.net/kubuntu-devel
[2] – https://t.me/kubuntu_support
[3] – https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel

KDE's GitLab is now Live

Tuesday 30th of June 2020 08:28:04 AM










After our final decision to adopt GitLab in November 2019, KDE started the work of tackling the many challenges that come with moving a whole development platform for a large open source community. KDE has now officially completed Phase One of the adoption and contributors have begun to use GitLab on a daily basis.

Why did we migrate to GitLab?

By switching to GitLab we will be offering our community one of the most popular and latest, fully-featured, actively developed, and supported DevOps platforms in existence today. This will contribute to boosting collaboration and productivity, and making our workflow more transparent and accessible to everyone who wants to contribute.

How will it benefit the wider community?

By using a platform offering an interface that most open source developers are nowadays familiar with, we will be lowering the bar for new contributors to join us, and providing a way for our community to continue to grow even faster in the coming years.

GitLab will also help us to achieve goals like "Consistency", as it will help our community members have a single solution to their needs. Now, we will be able to host and review code, manage projects/issues, communicate, collaborate, and develop software/applications on a single platform.

By adopting GitLab as a platform, we will be adding stability to our framework, as we will count on the support of GitLab as a company. GitLab, Inc. has nearly a decade of experience behind it, releases new versions on a regular basis and, apart from its in-house team, counts on an active community of third party contributors. This guarantees that our new development platform will be updated and maintained throughout the years.

KDE has migrated to GitLab and our instance is now live. Start discovering projects, groups and code by visiting invent.kde.org.

You can read more about this migration on GitLab's blog.

Review Report 1

Tuesday 30th of June 2020 12:00:00 AM
GSoC Review 1 - Qt3D based backend for KStars

In the fourth week of GSoC, I worked on adding support for a skybox that supports the projection modes implemented last week. I also added the grid implementation to KStars based on the prototype.

What’s done this week
  • Custom Skybox and Skybox shaders for the Lambert, Azimuthal, Orthographic, Equirectangular, Stereographic and Gnomonic projections.

  • Custom Window3D class extending Qt3DWindow for mouse movements.

  • Equatorial grid based on the prototype.

The Challenges
  • Integration issues with the original SkyPainter API written to support multiple backends - Had to prototype outside of KStars.

  • Had to remove QCamera and the first-person controllers since they don’t conform to the projection modes we use.

  • Switching SkyQPainter’s 2D projector class to GLSL.

What remains

My priorities for the next week include:

  • Adding mouse input.

  • Display of basic star catalog.

Demo

The Code

Google Summer of Code 2020 - Week 3

Monday 29th of June 2020 11:00:00 PM

This week, I spent most of my time testing the Rocs graph-layout-plugin. I needed to test the method that applies the force-based layout algorithm to a graph; its signature is the following.

static void applyForceBasedLayout(GraphDocumentPtr document, const qreal nodeRadius,
                                  const qreal margin, const qreal areaFactor,
                                  const qreal repellingForce, const qreal attractionForce,
                                  const bool randomizeInitialPositions, const quint32 seed);

Unfortunately, there is not much that is guaranteed by this method. Basically, given a graph and some parameters, it tries to find positions for each node in such a way that, if we draw the graph using these positions, it will look nice. What does it mean for the drawing of a graph to look nice? How can we test it? This is a subjective concept, and there is no clear way to test it. But there is still something that can be done: non-functional tests.

Before going to the non-functional part, I decided to deal with the easy and familiar functional tests. I deliberately left my description of the method imprecise: actually, there is at least one guarantee that it should provide. If we draw each node as a circle of radius nodeRadius centred at the position calculated by the method, these circles should respect a left margin and a top margin of length margin. This was a nice opportunity for me to try the QtTest framework. I wrote a data-driven unit test and everything went well; a rough sketch of the idea follows below.
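A data-driven QtTest for that margin guarantee could look roughly like this. The layout call is stubbed here, since the real one operates on a Rocs GraphDocument; only the test structure is the point:

#include <QtTest>
#include <QPointF>
#include <QVector>

// Hypothetical stand-in for the layout call; in Rocs this would run
// applyForceBasedLayout() on a GraphDocument and read back the node positions.
static QVector<QPointF> layoutNodeCenters(int nodeCount, qreal nodeRadius, qreal margin)
{
    QVector<QPointF> centers;
    for (int i = 0; i < nodeCount; ++i)
        centers.append(QPointF(margin + nodeRadius + i * 3 * nodeRadius,
                               margin + nodeRadius));
    return centers;
}

class MarginTest : public QObject
{
    Q_OBJECT
private slots:
    void respectsMargins_data()
    {
        QTest::addColumn<int>("nodeCount");
        QTest::addColumn<qreal>("nodeRadius");
        QTest::addColumn<qreal>("margin");
        QTest::newRow("small graph") << 5 << 10.0 << 5.0;
        QTest::newRow("larger graph") << 50 << 10.0 << 20.0;
    }

    void respectsMargins()
    {
        QFETCH(int, nodeCount);
        QFETCH(qreal, nodeRadius);
        QFETCH(qreal, margin);

        const QVector<QPointF> centers = layoutNodeCenters(nodeCount, nodeRadius, margin);
        for (const QPointF &c : centers) {
            QVERIFY(c.x() - nodeRadius >= margin); // left margin respected
            QVERIFY(c.y() - nodeRadius >= margin); // top margin respected
        }
    }
};

QTEST_APPLESS_MAIN(MarginTest)
#include "margintest.moc"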

Back to the non-functional part, I decided to write a quality benchmark. The idea is to measure some aesthetic criteria of the layouts generated for various classes of graphs. The metrics implemented so far are: the number of edge crossings, the number of edges that cross some other edge, the number of node intersections, and the number of nodes that intersect some other node. Although there is no formal definition of a nice layout, keeping the values of these metrics low seems desirable. I have already implemented generators for paths, circles, trees and complete graphs. For each of these classes of graphs, I generate a number of graphs, apply the layout algorithm a certain number of times to each of them, and calculate summary statistics for each of the considered aesthetic metrics. (A sketch of the edge-crossing metric follows below.)
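For illustration, counting edge crossings boils down to a pairwise segment-intersection test over all edges. A minimal sketch with simplified types (not the actual benchmark code):

#include <QPointF>
#include <QVector>

struct Edge {
    QPointF a;
    QPointF b;
};

// Twice the signed area of triangle pqr; its sign tells which side r lies on.
static qreal cross(const QPointF &p, const QPointF &q, const QPointF &r)
{
    return (q.x() - p.x()) * (r.y() - p.y()) - (q.y() - p.y()) * (r.x() - p.x());
}

// True if the two segments properly cross: the endpoints of each segment lie
// strictly on opposite sides of the other. Edges that merely share an endpoint
// or only touch are not counted as crossing.
static bool segmentsCross(const Edge &e, const Edge &f)
{
    const qreal d1 = cross(e.a, e.b, f.a);
    const qreal d2 = cross(e.a, e.b, f.b);
    const qreal d3 = cross(f.a, f.b, e.a);
    const qreal d4 = cross(f.a, f.b, e.b);
    return d1 * d2 < 0 && d3 * d4 < 0;
}

int countEdgeCrossings(const QVector<Edge> &edges)
{
    int crossings = 0;
    for (int i = 0; i < edges.size(); ++i)
        for (int j = i + 1; j < edges.size(); ++j)
            if (segmentsCross(edges[i], edges[j]))
                ++crossings;
    return crossings;
}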

For now, there is only one layout algorithm, which can be applied to any graph. The idea of the quality benchmark is to compare it to the other layout algorithms I will implement, but that does not mean the quality benchmark is currently useless. Actually, the results I got were quite revealing. The good part is that there were no intersections between nodes. But the results for edge crossings are not so good. Despite my efforts to tune the parameters, the algorithm can fail to eliminate all edge crossings even for very simple graphs such as paths and circles. Fortunately, choosing parameter values specifically for a graph can sometimes help, and the user can do that in the graph-layout-plugin user interface.

More in Tux Machines

today's leftovers

  • "Hey, DT. Why Arco Linux Instead Of Arch?" (Plus Other Questions Answered)

    In this lengthy rant video, I address a few questions that I've been receiving from viewers. I discuss fake DistroTube accounts on social media, my thoughts on PeerTube, my experience with LBRY, my thoughts on Arco vs Arch vs Artix, and what YouTubers have influenced my life.

  • 2020-08-10 | Linux Headlines 186

    elementary OS teases big changes coming in version 6, RetroArch rolls out major search improvements with version 1.9, Microsoft releases Minecraft: Education Edition for Chromebooks, and the new Krita Scripting School website aims to help developers expand the painting application.

  • R600 Gallium3D Now Has Compute Shaders Working With NIR

    If you are still rocking a pre-GCN AMD Radeon graphics card on the R600g driver for the HD 2000 through HD 6000 series, you really ought to consider upgrading in 2020, but otherwise at least from the open-source community there continues to be improvements.

  • NVIDIA GeForce are teasing something for August 31, likely RTX 3000

    Ready for your next upgrade? NVIDIA think you might be and they're teasing what is most likely the GeForce RTX 3000 launch at the end of this month. We don't know what they're actually going to call them, although they will be based on the already revealed Ampere architecture announced back in May. It's probably safe to say RTX 3000 for now, going by the last two generations being 1000 and 2000 but NVIDIA may go for something more fancy this time.

  • How to Learn Python in 21 Days?

    Before moving further, let’s have a brief introduction to Python Language. Python, designed by Guido Van Rossum in 1991, is a general-purpose programming language. The language is widely used in Web Development, Data Science, Machine Learning, and various other trending domains in the tech world. Moreover, Python supports multiple programming paradigms and has a huge set of libraries and tools. Also, the language offers various other key features such as better code readability, vast community support, fewer lines of code, and many more. Here in this article, we’ll discuss a thorough curriculum or roadmap that you need to follow to learn Python in just 21 days!

  • This Week In Servo 135

    Last week we released Firefox Reality v1.2, which includes a smoother developer tools experience, along with support for Unity WebXR content and self-signed SSL certificates. See the full release notes for more information about the new release.

OSS Leftovers

  • Richard Stallman Discusses Privacy Risks of Bitcoin, Suggests 'Something Much Better'
  • The many meanings of 'Open': Open Data, Open Source, and Open Standards

    It is important to note that open source software is not always “free” software. The difference is in the licensing and the level of effort required to customize the code for your use case. According to GNU progenitor and software freedom advocate Richard Stallman, free does not mean non-proprietary but rather suggests that “users have the freedom to run, copy, distribute, study, change and improve the software” for any purpose. (“This is a matter of freedom, not price, so think of ‘free speech,’ not ‘free beer,’” Stallman says.). One also has the freedom to sell the software after modifying it. Implementing open source software inside a business enterprise frequently requires customization for your organization’s workflow. Whether this customization is done using internal resources or with the help of external consultants, it typically is not free, nor is the subsequent maintenance of the software. Successful open source software is designed and built using a collaborative community software development process that releases frequent updates to improve functionality and reliability. The key is in the “community” adoption and development.

  • How an open community rebrands

    As an open community evolves, so does the way it expresses its identity to others. And having open conversations about how you'd like your community to be recognized is an important component of community engagement. Simply put, your community's brand is what people (especially potential contributors) see first when they encounter you. So you want to make sure your brand reflects your community—its values, its principles, and its spirit. [...] Together, then, we were able to augment Jim's experience at Red Hat (though we always welcomed his perspectives along the way). Over the past half-decade, the Open Organization community has grown from a small group of passionate people debating nascent ideas about the "cultural side" of open source to a bustling bunch of thought leaders who have literally written the definition of what it means to be an open organization. To put it in open source terms: Our entire upstream project continues to evolve from that founding gesture.

  • LibreOffice 7.0 arrives, improves performance and compatibility

    AMD sponsored the developers' implementing the Skia graphics engine in LibreOffice. In Windows this open source 2D graphics library provides upgraded performance. Additionally the engine is accelerated by the Vulkan graphics and compute API.

  • TinyFloat, Troll Arithmetic, GIMP Palettes

    I've been working on a 64 bit extension to the 6502 processor architecture. This is for the purpose of implementing a secure computer which also has a hope of working after post industrial collapse.

    Along the way, I have found a practical use for 8 bit floating point numbers. Floating point representations were historically used for scientific calculations. The two components of a floating point number - the exponent and mantissa - work in a manner similar to logarithms, slide rules and the scientific representation of numbers. For example, 1.32×10⁴ = 13,200. Why not just write the latter? Scientific notation works over a *very* large scale and is therefore useful for cosmology, biology and nanofabrication. For computing, floating point may use binary in preference to decimal. Also, it is not typical to store both the exponent and mantissa within 8 bits.

  • Open Source Contributions on the Rise in FinTech, Healthcare and Government [Ed: "The Linux Foundation sponsored this post." So the Foundation is now busy distorting the media instead of actually supporting developers who develop Free software on shoestring budget.]

    Enterprise use of open source remains stable, and a new generation of companies are increasing their engagement with open source communities. Led by financial services, healthcare and government, more organizations across most industry verticals are regularly (frequently or sometimes) contributing to upstream projects, going from 42% to 46% over the last three years.

  • TODO Group Survey Shows Stable Enterprise Open Source Use

    The “Open Source Programs in the Enterprise” survey, from The Linux Foundation’s TODO Group and The New Stack says “enterprise use of open source remains stable.” An article by Lawrence Hecht reports that more organizations across industry verticals are regularly contributing to upstream projects, increasing from 42% to 46% over the past three years. “The multi-year effort provides a solid baseline for measuring change, growth and effectiveness of efforts to guide corporate open source policies and community participation,” Hecht said.

Servers: Hosting, Supermicro and Containers

  • Linux vs. Windows hosting: What is the core difference?

    If you are having a budget constraint, Linux hosting is always a better option. But if you want to run certain complex applications on your website or web hosting that is specific to Windows, Windows hosting is the solution for you. If you are looking for a bulk of free and open-source applications and content management systems such as WordPress to run, it is better that you select Linux hosting.

  • Supermicro Launches SuperServer SYS-E100-9W-H Fanless Whiskey Lake Embedded Mini PC

    US-based Supermicro is known for its server products, but the company’s latest SuperServer SYS-E100-9W-H fanless embedded mini PC targets other applications, specifically industrial automation, retail kiosks, smart medical devices, and digital signage. The mini PC is equipped with an Intel Core i7-8665UE Whiskey Lake Embedded processor coupled with up to 64GB DDR4 memory, and offers plenty of connectivity options with dual Gigabit Ethernet, eight USB ports, four serial ports, and dual video output with HDMI and DisplayPort. [...] Supermicro only certified the mini PC with Windows 10, but looking at the OS compatibility matrix for X11SWN-H SBC used inside the mini PC, 64-bit Linux OS like Ubuntu 18.04/20.04, RedHat Enterprise Linux, and SuSE Linux should also be supported. The company also provides SuperDoctor 5 command-line or web-based interface for Windows and Linux operating systems to monitor the system and gets alerts via email or SNMP.

  • OpenDev 2020: Containers in Production – Day 1
