Planet KDE

Adding EteSync address books to Kontact - GSoC 2020 with KDE and EteSync [Part 3]

Friday 5th of June 2020 06:10:00 PM

Hey everyone!

Last week, I wrote a post about adding EteSync address books to Kontact. I’m happy to report that you can now fetch your EteSync address books and contacts in Kontact. If you want to test it out, skip to the “Testing the resource” section below. You can read on for updates on the project:

Adding a new EteSync account to KAddressBook

Adding a new EteSync address book

I have created a new EteSync resource subdirectory, and added it to the project build system. I’ve also added the logo for the resource, which shows up while adding a new address book to Kontact.

Fetching your EteSync contacts

Once the new resource is added, the app should ideally open a configuration dialog for you to provide your username, password, EteSync server URL and encryption password. Unfortunately, I haven’t implemented that yet. I plan to implement it soon so that users can easily test out the resource.

Currently, the credentials are hardcoded, and will be used to fetch all your address books and contacts. These will then show up in KAddressBook.

EteSync address books and contacts visible in KAddressBook

Testing the resource

As the resource configuration dialog hasn’t been implemented yet, you would need to put in your EteSync credentials in the configure() function in etesync/etesyncresource.cpp to test it out.

All the code for this project is at this repo. To test the resource, you can:

  • Clone the repo
  • Enter your EteSync credentials in etesync/etesyncresource.cpp
  • Build the project using make or kdesrc-build
  • Restart Akonadi (akonadictl restart)
  • Open KAddressBook and add a new EteSync address book
To-do

I plan to implement the configuration dialog ASAP. Also, the fetching itself needs more work as it should be done asynchronously using jobs.

Feedback is always welcome!

Hope to have more updates for you soon :)

Reordering the People Sidebar in DigiKam

Friday 5th of June 2020 02:53:07 PM

The People Sidebar is an important aspect of Face Management in DigiKam. It displays the names of all people in the database, and provides a variety of context menu functionality. Currently, the Face Tags (Names) in the Sidebar are sorted alphabetically (either ascending or descending). This causes inconvenience to the user, particularly when confirming the results of a Facial Recognition.

Here’s how one would have to perform the operation currently:

Link

The user has to search around for the tags that have new results, and the only indicator that a tag has new results is the small new counter, beside each tag.

To solve this I made two changes:

  • Tags with New Faces now appear in bold, to enhance their visibility.
  • Tags with New Faces get pinned to the top of the list. As the user confirms/rejects the suggestions, the new faces decrease and the tag moves back down to get alphabetically sorted.
All new tags are now pinned at the top in Bold (Link)

QStringView Diaries: Zero-Allocation String Splitting

Friday 5th of June 2020 09:00:17 AM

After four months of intensive development work, I am happy to announce that the first QStringTokenizer commits have landed in what will eventually become Qt 6.0. The docs should show up soon.

While the version in Qt will be Qt 6-only, KDAB will release this tool for Qt 5 as part of its KDToolBox productivity suite. Yes, that means the code doesn’t require C++17 and works perfectly fine in pure C++11.

This is a good time to recapitulate what QStringTokenizer is all about.

QStringTokenizer: The Zero-Allocation String Splitter

Three years ago, when QStringView was first merged for Qt 5.10, I already wrote that we wouldn’t want to have a method like QString::split() on QStringView. QStringView is all about zero memory allocations, and split() returns an owning container of parts, say QVector, allocating memory.

So how do you return the result of string splitting, if not in a container? You take a cue from C++20’s std::ranges and implement a Lazy Sequence. A Lazy Sequence is like a container, except that its elements aren’t stored in memory, but calculated on the fly. That, in C++20 coroutine terms, is called a Generator.

So, QStringTokenizer is a Generator of tokens, and, apart from its inputs, holds only constant memory.

Here’s the example from 2017, now in executable form:

const QString s = ~~~;
for (QStringView line : QStringTokenizer{s, u'\n'})
    use(line);

Except, we beefed it up some:

const std::u16string s = ~~~;
for (QStringView line : QStringTokenizer{s, u'\n'})
    use(line);

Oh, and this also works now:

const QLatin1String s = ~~~;
for (QLatin1String line : QStringTokenizer{s, u'\n'})
    use(line);

QStringTokenizer: The Universal String Splitter

When I initially conceived QStringTokenizer in 2017, I thought it would just work on QStringView and that’d be it. But the last example clearly shows that it also supports splitting QLatin1String. How is that possible?

This is where C++17 comes in, on which Qt 6.0 will depend. C++17 brought us Class Template Argument Deduction (CTAD):

std::mutex m;
std::unique_lock lock(m); // not "std::unique_lock<std::mutex> lock(m);"

And that’s what we used in the examples above. In reality, QStringTokenizer is a template, but the template arguments are deduced for you.

So, this is how QStringTokenizer splits QStrings as well as QLatin1Strings: in the first case, it’s QStringTokenizer<QStringView, QChar>, in the second, QStringTokenizer<QLatin1String, QChar>. But be warned: you should never, ever, explicitly specify the template arguments yourself, as you will likely get it wrong, because they’re subtle and non-intuitive. Just let the compiler do its job. Or, if you can’t rely on C++17 yet, you can use the factory function qTokenize():

const QLatin1String s = ~~~;
for (QLatin1String line : qTokenize(s, u'\n'))
    use(line);

QStringTokenizer: The Safe String Splitter

One thing I definitely wanted to avoid is dangling references a la QStringBuilder:

auto expr = QString::number(42) % " is the answer";
// decltype(expr) is QStringBuilder<~~~, ~~~>
QString s = expr; // oops, accessing the temporary return value of QString::number(), since deleted

The following must work:

for (QStringView line : QStringTokenizer{widget->text(), u'\n'}) use(line);

But since the ranged for loop there is equivalent to

{
    auto&& __range = QStringTokenizer{widget->text(), u'\n'};
    auto __b = __range.begin(); // I know, this is not the full truth
    auto __e = __range.end();   // it's what happens for QStringTokenizer, though!
    for ( ; __b != __e; ++__b) {
        QStringView line = *__b;
        use(line);
    }
}

if QStringTokenizer simply operated on QStringView or QLatin1String, the following would happen: the __range variable keeps the QStringTokenizer object alive throughout the for loop (ok!), but the temporary returned from widget->text() would already have been destroyed at the end of the first statement, even before we enter the for loop (oops).

This is not desirable, but what can we do against it? The solution is as simple as it is complex: detect temporaries and store them inside the tokenizer.

Yes, you heard that right: if you pass a temporary (“rvalue”) owning container to QStringTokenizer, the object will contain a copy (moved from the argument if possible) to extend the string’s lifetime to that of the QStringTokenizer itself.

Future

Now that we have developed the technique, we very strongly expect it to be used in Qt 6.0 for QStringBuilder, too.

By Qt 6.0, we expect QStringTokenizer to also handle the then-available QUtf8StringView as haystack and needle, as well as QRegularExpression and std::boyer_moore_searcher and std::boyer_moore_horspool_searcher as needles. We might also re-implement it as a C++20 coroutine on compilers that support them, depending on how much more performance we’ll get out of it.

Conclusion

QStringTokenizer splits strings, with zero memory allocations, universally, and safely. Get it for free right now from KDToolBox, and you can future-proof your code with an eye towards Qt 6.

About KDAB

If you like this blog and want to read similar articles, consider subscribing via our RSS feed.

Subscribe to KDAB TV for similar informative short video content.

KDAB provides market leading software consulting and development services and training in Qt, C++ and 3D/OpenGL. Contact us.


Status update: Linux

Thursday 4th of June 2020 03:20:00 PM

Hey all! Just a quick heads up on my GSoC project.

P.S.: this post was amended on June 4, 2020 to correct the actual post date. It should be June 4, not June 1. The timestamp was updated accordingly.

SeExpr prototype: fixed!

My layer generator is now working under Linux!

AppImage screenshot showing SeExpr rendering the pattern correctly under Linux.

When I showed the floating point truncation bug to Wolthera, her first idea was:

amyspark: another thing you could think about is whether you are dealing with a floating-point localization issue…

I didn’t believe her, seeing that it only happened inside Krita. I converted Disney’s existing imageSynth2 demo and compiled it inside our toolchain to see if it was the compiler instead, but to no avail.

Without any other options left, I jumped deep inside the rabbit hole that is SeExpr’s parser, and started by tracing the calls that yield the (truncated) constants.

The state dump I posted before says a class called N7SeExpr211ExprNumNodeE represents them; this is just a mangled name for the ExprNumNode class. I put a breakpoint on the value() call, but the value had already been truncated. I tested with the constructor itself, but wasn’t able to get the actual value, as it’d been <optimized out> according to gdb.

However, the breakpoint’s stacktrace shows me there’s a Bison source file in between!

Which is no help, except that I knew Bison generates a C++ version, and that Disney already bundles it. Armed with this knowledge and the stacktrace’s line number, I went to the source:

#line 333 "/disney/users/jberlin/projects/seexpr2/src/SeExpr2/ExprParser.y"
    { (SeExpr2val.n) = NODE1((SeExpr2loc).first_column,(SeExpr2loc).last_column,NumNode, (SeExpr2vsp[0].d));
      /*printf("line %d",@$.last_column);*/ }
#line 2263 "y.tab.c"

Indeed, the value comes from SeExpr2vsp[0].d. SeExpr2vsp points into a union called SeExprYYSTYPE, which is nothing but the parser’s value storage!

#if ! defined SeExprYYSTYPE && ! defined SeExprYYSTYPE_IS_DECLARED
union SeExprYYSTYPE
{
#line 77 "/disney/users/jberlin/projects/seexpr2/src/SeExpr2/ExprParser.y"
    SeExpr2::ExprNode* n; /* a node is returned for all non-terminals to
                             build the parse tree from the leaves up. */
    double d;  // return value for number tokens
    char* s;   /* return value for name tokens. Note: the string is
                  allocated with strdup() in the lexer and must be
                  freed with free() */
    struct {
        SeExpr2::ExprType::Type type;
        int dim;
        SeExpr2::ExprType::Lifetime lifetime;
    } t;  // return value for types
    SeExpr2::ExprType::Lifetime l; // return value for lifetime qualifiers
#line 235 "y.tab.c"
};
typedef union SeExprYYSTYPE SeExprYYSTYPE;
#endif

Now I knew that the d member of this union has the parsed value (yes, SeExpr2vsp[0].d showed it as truncated already), and that the rule for building nodes with numeric constants was called “NUMBER”:

| NUMBER { $$ = NODE1(@$.first_column,@$.last_column,NumNode, $1); /*printf("line %d",@$.last_column);*/}

I went to the lexer (generated version here, source here) and searched for NUMBER nodes being returned.

Guess what I found?

{REAL} { yylval.d = atof(yytext); return NUMBER; }

Turns out, she was right all the time!

atof is a Standard Library function that interprets strings into the floating-point numbers they represent. There is a little phrase in the docs (emphasis mine), that seems to have been overlooked:

nonempty sequence of decimal digits optionally containing decimal-point character (as determined by the current C locale) (defines significand)

My system’s locale is es_ES.UTF-8. This means that whenever SeExpr parsed a numeric constant, the parsing function expected a comma as the decimal separator. The library could therefore never work properly on my system, as it already consumes commas to separate parameters! I think this bug wasn’t found by Disney’s developers because it only happens under very specific locales.

Calling SeExpr with the expected C locale would work, but it was a hack. I already have their source code, so I could fix this at the root by replacing atof calls with a locale-independent function. A quick StackOverflow search led me to an alternative parsing function called crack_atof. I found the original source, and since it was MIT-licensed, I adapted it to suit the atof calling convention, and bundled it with the Platform.h header in SeExpr.

What’s next?

Now that all OSes are working, the next thing to do is to provide a better UI and start dogfooding the layer. I’ve spoken with Boudewijn, and I plan to publish a new prototype using Disney’s widgets. These are obviously unsuitable for production usage in KDE applications (for reasons I’ll detail in a future post), so we plan to release an MVP AppImage and gather feedback before determining the next steps.

Obviously, I still want to add the unit tests and post a bit more on the innards! Just too many homework deadlines right now.

Until next time, and thank you so much Wolthera,

~amyspark

LGM 2020 : my experience running an online international conference

Thursday 4th of June 2020 02:50:07 PM

Last week was the Libre Graphics Meeting (LGM) 2020 online conference. The LGM is normally an occasion for all the contributors of graphics related Free Software to meet physically, and this year we had planned to organize it in our city of Rennes, France. But of course, with the current situation, we were forced to cancel the physical event. We hesitated to make an online event instead, as the biggest interest in this event is to have a physical meeting. We ran a poll to see if there was enough interest for an online event, and the result showed us that there was.

As several people asked me about the technical setup used for the stream, I’m going to explain it here.

I was inspired a lot by the solution used for the Libre Planet conference, which used a jitsi meet instance to receive the stream of remote speakers, and sent it to an icecast server using a GStreamer based script to record the screen.

The first difference is that we decided to ask all the speakers to send a pre-recorded video of their presentation. We felt it was safer to get a good quality source for the talks, as some speakers may not have enough bandwidth on their internet connection for a reliable high-quality live session. And it was also safer to have a good quality recording available to publish the videos after the event.
We kept the jitsi meet live setup only for the Q&A sessions after each talk, where participants could ask their questions on the irc chat channel and get answers from the speakers in the video stream. We also used the same setup for workshops, as it is more interesting to have a live workshop with interactions from participants than a pre-recorded one.

The second difference is that I used FFMPEG instead of GStreamer for my script to record the screen and send it to the stream server. The reason is that I noticed GStreamer was a bit less reliable depending on the version used; with some GNU/Linux distributions, it was not working at all from the beginning, and with other distributions it was working at first and then a few days later it stopped working; I’m not sure why… So I made some tests hacking a script with FFMPEG, and that one worked in every case. More about it below.

I also evaluated Big Blue Button at some point, but I was not convinced by it… For some pure slideshow-based presentations it could have been better, but for video and screen sharing I found it less efficient than the other solution… In my tests of BBB, it didn’t provide a way to play a video file (other than using youtube), screen sharing was more resource intensive than with jitsi, and it could not share the audio output from the computer (only the microphone input). So I decided to keep the video+jitsi+icecast solution.

About the technical setup, it was like this:


  • I was hosted by Le Jardin Moderne, one of our initial local partners, as they have a good internet connection that allowed me to safely receive the jitsi meet video while sending the stream to the icecast server.
  • I had my computer, with a second screen set to extend my desktop on the right side. The second screen was used as the “live canvas” to record everything that was streamed, while the primary screen was used for everything else (the irc chat, the command line to manage the stream, a web browser to check different things, …).
  • Of course we had an icecast server set up (thanks to Brendan Howell for providing it).
  • For the script, like I said, it used FFMPEG to grab the screen and the audio output, and both to record it to a file and to send it to the icecast server. The next problem was that while using FFMPEG to send only to the server or only to a file worked flawlessly, doing both at once didn’t work at first.

To send the result to multiple outputs, FFMPEG has a special “tee” output. The problem is that when using the tee output, the option that sets the stream content_type to video/webm (which the icecast server needs to know the type of content streamed) was discarded, and the default value audio/mpeg was used instead, which of course made the result unreadable. Normally, according to the documentation, it should have been possible to pass this option specifically to the icecast output inside the tee output, but it was not working.

My quick local fix was to edit the source of FFMPEG to change the default value from audio/mpeg to video/webm (it is set in the file libavformat/icecast.c), rebuild it, and then everything worked perfectly. Since then, I’ve reported the issue in their bugtracker, and it was indeed a bug: passing the option to the icecast output should have worked.

So, for those who want to use a similar script now, you will have to rebuild FFMPEG, either with the different hardcoded default value (which is fine if you only need to send to a single icecast output), or with the patch found in the bug report (which allows sending to multiple icecast outputs with different formats). Hopefully it will be fixed in their next release, and this issue will soon become ancient history.

The script used to record and send the stream to the icecast server looks like this:


#!/bin/sh

DATE=$(date +%Y-%m-%d-%H_%M_%S)

clementine -p

sleep 5s

ffmpeg -video_size 1920x1080 -thread_queue_size 512 -framerate 30 -f x11grab -i :0.0+1920,0 \
-thread_queue_size 512 -f pulse -ac 2 -i jack_out.monitor \
-f webm -cluster_size_limit 2M -cluster_time_limit 5100 -content_type video/webm \
-vf scale=640:-1 \
-acodec libvorbis -aq 4 \
-vcodec libvpx -b:v 1000K -crf 40 -g 150 -deadline good -threads 2 \
-f tee -map 0:v -map 1:a "icecast://source:password@example.url:8000/video.webm|record-video-${DATE}.webm"

 

I can explain it a bit for those who need some details:

  • DATE=… is used to grab the current date and time to make a sort of unique identifier used in the name of the file output.
  • clementine -p is used to launch the audio player clementine and make it play the current playlist. This was needed as the ffmpeg script was failing to start if there was no audio at all in the output. So, I just created a silent audio file and added it to an empty playlist in clementine, and that did the trick. Of course you can use your favorite audio player instead of clementine. Also, note that this was especially needed here as I’m using the jack audio server, it may not be needed if you use pulseaudio only, or if you grab an input like a microphone which should always send some kind of signal… You can test on your setup if it’s needed or not.
  • sleep 5s before the ffmpeg command gives a 5-second delay between launching the script and starting the ffmpeg recording and stream, which can be useful especially if you have a single screen and you want to hide your terminal before the recording starts.

Then, the ffmpeg options:

(note for beginners: as the ffmpeg command has a lot of options, it needs several lines, so every line ends with a \ to continue on the next line)

  • -video_size 1920x1080 -thread_queue_size 512 -framerate 30 -f x11grab -i :0.0+1920,0 \

These are the options for the video input. I’m grabbing an area of 1920×1080, at 30 frames per second, with the X video server. With :0.0+1920,0 , I select only the area of the second screen, as the first screen is also 1920 pixels wide. The thread_queue_size 512 option, both on the video and audio input options, is useful to give enough time for the script to synchronize properly both inputs (without it, the script was giving a lot of warnings, and the audio could easily be not properly synchronized with the video).

  • -thread_queue_size 512 -f pulse -ac 2 -i jack_out.monitor \

These are the options for the audio input. Here I’m using the output of pulseaudio called jack_out.monitor, as I am using jack as my main audio server, and pulseaudio is bridged to jack. If you use only pulseaudio, you can replace jack_out.monitor with default, and use a pulseaudio interface (e.g. pactl) to select the default input to use. Or you can define exactly which input from pulseaudio you want to use (you can list the names of all available input devices with this command: pacmd list-sources | grep -e 'index:' -e device.string -e 'name:' )

  • -f webm -cluster_size_limit 2M -cluster_time_limit 5100 -content_type video/webm \

These options define the webm format for the output. cluster_size_limit and cluster_time_limit adapt the cluster settings for the icecast server; check the ffmpeg documentation for more details. Also, as I said previously, the -content_type option specifies the type of content sent to the server. However, this option is only useful in this place if there is a single output; otherwise it should be specified inside the tee output (with the patched ffmpeg or the next version of it).

  • -vf scale=640:-1 \

This one scales the video input to a 640 pixels wide output, and -1 for the height means “adapt the height value to keep the original ratio of the input”.

  • -acodec libvorbis -aq 4 \

Select the codec vorbis for the audio, with a quality setting of 4.

  • -vcodec libvpx -b:v 1000K -crf 40 -g 150 -deadline good -threads 2 \

Select the vp8 codec for the video encoding, with a bitrate of 1000K per second, a quality target of 40, a maximum period of 150 frames between two keyframes, the “deadline good” option for the encoding speed, and use 2 threads for it.

  • -f tee -map 0:v -map 1:a "icecast://source:password@example.url:8000/video.webm|record-video-${DATE}.webm"

This is where the output is defined, in the end. Here, I’m using tee to have multiple outputs. -map 0:v -map 1:a says “use the first input in the command for the video, and the second input for the audio”. Then the two outputs go inside the quotes, with | to separate them.

If you use the patched-ffmpeg or next version, you should replace this last line with:

  • -f tee -map 0:v -map 1:a "[content_type=video/webm]icecast://source:password@example.url:8000/video.webm|record-video-${DATE}.webm"

Else if you only need the icecast output, you can replace this line with:

  • icecast://source:password@example.url:8000/video.webm

Or if you only need a file output, replace it with:

  • record-video-${DATE}.webm

And of course for icecast, adapt the source, password, url and mountpoint name according to your icecast configuration.

A final note about the resolution used for the output: at first, I was naively hoping I could send a 1920×1080 stream, or even 1280×720. The truth is that my computer could not handle encoding in a stable way at a resolution higher than 640×360 while at the same time receiving a video from jitsi. With a more powerful computer, of course, it would have been possible. But in the end, this lower resolution was also good to put less weight on the icecast server (we could stream to 100 people without issues), and it also meant that people could watch the stream even with a relatively slow internet connection. And it surely reduced the overall ecological impact of the event. Somehow, in this case, less is more. And this low resolution was in practice good enough most of the time, at least as long as we made sure that the content sent was big enough to be scaled down to it (one workshop had some initial issues with this, but after lowering the resolution of the speaker’s screen it was okay, and we learned from that mistake to avoid similar issues in the next workshops).

I hope this post will be useful for others to run their online event.

Again, thanks to all the participants, and everyone who helped us make this online event successful. The edited videos are available on youtube and on peertube (upload should be finished tomorrow).

… an improvised group screenshot taken at the end of the last day. If you attended the event but missed the photo, add your face in the empty spot

Recent developments for the coming release

Thursday 4th of June 2020 12:14:13 PM

Despite a very active development in the recent couple of weeks, we still need to finalize a couple of things before we can do the release for version 2.8.

While going through the remaining issues, we also found time to act on users’ suggestions; users tested our nightly builds and provided feedback. We fixed several reported bugs and also implemented a couple of smaller features that were recently requested. The purpose of this short post is to update you on the latest developments.

LabPlot supports different analysis methods, like fitting, smoothing, Fourier transformation, etc. For smoothing we recently added the calculation of rough values. The difference between the approximating smooth function and the original data is called “rough” in this context (data = smooth + rough). This is very similar to the calculation of “residuals” for the fit algorithms. In 2.8 we calculate and expose the rough values, making it possible to visualize them and to check the goodness of the smoothing process. The example below demonstrates the first two iterations of a moving-average smoothing algorithm applied to the original data; the second plot shows the corresponding rough values for both iterations:



In principle, such an iterative smoothing process can be automatically carried out until a certain “goodness of rough values” is achieved that was specified by the user. This is an interesting feature request that we recently added to our backlog.

In the spreadsheet it is possible to get descriptive statistics for data sets. This information was extended a bit: we added the calculation of quartiles, the trimean and the statistical mode:



The list of normalization methods in the spreadsheet was also extended. We added a couple of new frequently used methods:



Furthermore, another small feature was added to the spreadsheet – the calculation of Tukey’s ladder of powers:



This page contains a couple of examples for this power transformation.

Of course, much more needs to be done to better support statistical analysis and workflows in LabPlot. In 2.9 we plan to add a significant amount of new relevant features for statistics. Check this blog post for an overview.

Having mentioned feature requests: no new or bigger features will be added to 2.8 anymore. The plan now is to fix the remaining issues for 2.8, invite people to beta test it soon, get more feedback from a broader audience, and release shortly after that.

Qt Creator 4.12.2 released

Wednesday 3rd of June 2020 10:28:24 AM

We are happy to announce the release of Qt Creator 4.12.2!

Cantor during GSoC 2020

Tuesday 2nd of June 2020 07:49:00 PM
  Hello everyone! I'm participating in Google Summer of Code 2020, working on the KDE Cantor project. The GSoC project is mentored by Alexander Semke, one of the core developers of LabPlot and Cantor. First, let me introduce you to Cantor and to my GSoC project:

    Cantor is a KDE application providing a graphical interface to different open-source computer algebra systems and programming languages, like Octave, Maxima, Julia, Python etc. The main idea of this application is to provide one single, common and user-friendly interface for different systems instead of providing different GUIs for different systems. The details specific to the different languages are transparent to the end-user and are handled internally in the language specific parts of Cantor’s code.

    Though the code base of Cantor is already in quite good shape, there is still room for improvements with respect to the missing features, user experience and functional issues.

    Given the existence of similar open-source applications like wxMaxima, Jupyter, Octave GUI but also commercial like Mathematica, it is important to address all issues and to add missing features to become competitive with other applications or even be ahead of them.

    The idea of this project is not to implement one single and big "killer feature" but to address several smaller and bigger open and outstanding topics in Cantor.

  So, as you can see, the idea behind the project is one of "smoothing the corners". The project is also split into two sections: a feature section and a bugfix section. First, I will work on features.

  Since this project consists of many not-so-big development topics that I plan to finalize in a timely manner, I hope to post frequently about the current progress and show you a lot of new cool stuff being added to Cantor over the next three months.

The coding period starts! - GSoC 2020 with KDE and EteSync [Part 2]

Tuesday 2nd of June 2020 01:30:00 PM

Hey everyone! The month-long Community Bonding period of GSoC ‘20 has ended, and with it begins the exciting phase of beginning work on our projects. My project, EteSync sync backend for Akonadi, will add support for syncing users’ contacts, calendars and tasks to Kontact. Here are the insights I’ve gained about the project, as well as my plans for the upcoming phase.

Resources in Akonadi

All address books, calendars and tasks in KAddressBook and KOrganizer (KDE PIM apps) are provided by Akonadi Resources. Resources are processes that fetch data from a server and serve it to KDE PIM apps through Akonadi. To this end, Akonadi provides an abstract class called ResourceBase. To create a new resource, one subclasses ResourceBase and implements the relevant functions. This makes the job of a resource developer quite simple, compared to manually storing data in Akonadi and dealing with a ton of additional things.

Akonadi Resources in action

All resources have their code in the KDE PIM Runtime repository.

So, to create a new resource:

  • Create a new subdirectory for the resource
  • Get kdepim-runtime to build, with any external libraries you might need for your resource (restart Akonadi to see any changes)
  • Implement the resource
Current work

I have already added the new resource subdirectory to the project, and the project is building successfully. Now we get an option to select EteSync while adding a new address book.

Adding a new EteSync address book

I am now working on implementing read-only sync for contacts. This will allow you to select EteSync while adding a new address book in Kontact. After logging in to your account, all your EteSync contacts will be fetched from the server.

What next?

Upcoming work includes implementation to push changes to the server. This needs to be extended to calendars and tasks as well.

Hope to be back with more updates soon!

KDevelop 5.5.2 released

Tuesday 2nd of June 2020 01:28:58 PM
KDevelop 5.5.2 released

We today provide a bug fix and localization update release with version 5.5.2. This release introduces no new features and as such is a safe and recommended update for everyone currently using a previous version of KDevelop 5.5.

You can find the updated Linux AppImage as well as the source code archives on our download page.

Should you have any remarks or in case you find any issues in KDevelop 5.5, please let us know.

ChangeLog kdevelop
  • Remove plugin "kde repo provider" due to defunct service. (commit)
  • Fix extra margins around config pages. (commit)
kdev-python

No user-relevant changes.

kdev-php

Kuesa 3D 1.2 release!

Tuesday 2nd of June 2020 11:00:19 AM

Today, KDAB is releasing version 1.2 of the 3D integration workflow Kuesa 3D, built on top of Qt 3D.

Kuesa™ 3D is a complete design-to-code workflow solution for 3D in real-time applications, centered around the open glTF™ 2 format, supported by Blender, Maya and 3ds Max.

Read the Press Release…

In short, Kuesa provides a workflow that simplifies work for both designers and developers. It is centered around the glTF 2 format. The idea behind Kuesa 3D is that changes made on 3D models shouldn’t require much, if any, work on the developer’s side. As a consequence, you can iterate more frequently, get feedback more often and release on time.

In this blog post, we will highlight some of the new features we have introduced. You can get the full details here.

What’s new since 1.1? The Iro Material library

The glTF 2 format currently only supports a metallic-roughness physically-based material. It looks great, but it can be very expensive to render and requires lots of assets. For many use cases, simpler materials can be used instead: this is what the Iro Material library offers.

The library provides several materials that simulate reflections, clear coats of paint or simple transparent surfaces. The benefits of Iro materials are twofold:

  • they significantly reduce your GPU usage (compared with PBR materials), making them ideal for embedded or mobile applications;
  • they offer real WYSIWYG integration with your 3D authoring tool: Kuesa is going to render your 3D models exactly as they appear in your artists’ editing suite.

This video by my colleague Timo Buske shows Iro Materials in action:

Improved Blender Support

Despite a steep learning curve, Blender is a fantastic tool and the latest version brings lots of interesting features to the table. For that reason, we have added support for Blender 2.8.

We can therefore now rely on the official glTF 2.0 export bundled with Blender. Furthermore, to make the best use of it, we’ve contributed patches that allow extending the exporter through custom extensions.

Then, we’ve added an extension to allow exporting of Iro Materials to glTF 2.0 files. In addition, we’ve also updated the Kuesa Blender addon to show a real time preview of the Iro Materials in Blender. What you see in Blender is what you’ll get with Kuesa 3D.

Improved animation support

Currently, glTF 2.0 only supports animating the transformation properties (translation, scale, rotation) of objects. There is, however, a draft extension, EXT_property_animation, that adds support for animating any property.

Since that was a really important feature for us, we decided to implement it for Kuesa 3D: we added EXT_property_animation support to the exporter and updated the glTF 2 importer in the Kuesa 3D Runtime library to parse it properly.

Hence, we can now animate material, lights or camera properties in Blender, export the scene as a glTF file (with the extensions) and load it with Kuesa 3D Runtime.

An updated offering

With this new release, we changed the offering of the product to satisfy different use-cases.

Kuesa 3D Studio

Kuesa 3D Studio is the complete solution a team can use in production, with everything needed to satisfy both designers and developers:

  • Kuesa 3D Design Plugins:
    • Blender addons supporting:
      • Kuesa 3D extensions for Layers and Iro materials
      • The EXT_property_animation extension
      • Real time preview of the Iro materials in the Eevee viewport
  • Kuesa 3D Tools:
    • glTF editor application allowing preview and introspection of glTF 2 scenes
    • collection of command line tools to check and optimize assets
  • Kuesa 3D Runtime: Qt module library on top of Qt 3D
    •  glTF 2.0 fully compliant loader
    • support of Kuesa 3D and EXT_property_animation extensions
    • retrieve the various resources held in the scene
    • playback animations
    • add post processing effects such as Bloom, Depth of Field …
    • use a premade Qt 3D FrameGraph

The Kuesa 3D Tools and Runtime also support glTF 2.0 files coming from other applications like Maya, 3ds Max and others, as long as they can export to glTF 2.0.

You can find out more about Kuesa 3D Studio here.

Kuesa 3D Runtime

Kuesa 3D Runtime is also available as a separate product, with full support from us. It is available on the Qt Marketplace or directly from us. This is perfect if you want to try out Kuesa and see what you can do with it.

Like previous releases, it is freely available under the AGPL 3 license.

Since it is built on top of Qt 3D, you can use the full Qt 3D API to further customize your application. For the most part, you can leverage things like Picking, Camera handling and a lot more for free.

As for actual Qt requirements, Kuesa 3D Runtime requires either the latest Qt 5.12 or the new Qt 5.15 release.

Find out more about Kuesa 3D Runtime.

Additional Contributions

Lastly, Kuesa 3D is also a way for us to justify contributions to open source projects. In that sense, we’ve made a few contributions targeting:

  • Support for registering and creating user-defined extensions in Blender’s glTF exporter
  • Fixes for animations on the official Blender glTF exporter
  • Qt 3D bug fixes and performance improvements
Try it out!

You can download Kuesa 3D Runtime from our GitHub repository.

Download the latest Kuesa 3D Studio brochure.

Join our live webinars and ask questions:

Watch our Realtime Music Box short video where I explain all this running on one of our demos:

About KDAB

If you like this blog and want to read similar articles, consider subscribing via our RSS feed.

Subscribe to KDAB TV for similar informative short video content.

KDAB provides market leading software consulting and development services and training in Qt, C++ and 3D/OpenGL. Contact us.

The post Kuesa 3D 1.2 release! appeared first on KDAB.

Wayland Status update for Plasma 5.19

Tuesday 2nd of June 2020 06:40:00 AM

We have been busy recently on the Wayland Goal.

A few of those points were already highlighted on Nate's excellent blog. But some were missing, and I wanted to cover those dedicated to Wayland with more context.

The changes I mention here will be present in Plasma 5.19, but they are not exhaustive.

KWin and architecture changes

Thanks to Aleix Pol, KWin Wayland in Plasma 5.19 now has Wayland tablet protocol support, meaning we have tablet touch and pen pressure. KWin patch and KWayland patch

Vlad improved subsurface clipping, which means the compositor can do less work and better figure out what to paint and what not to paint. It is most visible for applications like Firefox that use a lot of Wayland surfaces. D29131 kwin!5

With D29250, resizing XWayland windows has become less resource-demanding and matches the X experience. This still needs the upcoming XWayland 1.21 release to work, though.

And KWin has had more changes under the surface.

The KWayland library has been split into two libraries: KWayland keeps the regular KWayland::Client code, while the new kwayland-server library now contains the KWayland::Server part.

Because KWayland is part of KDE Frameworks, it is released monthly, independently of Plasma releases, and we have to take care not to break the API when making changes. On the other hand, KWayland::Server is only ever used by KWin, so this added unneeded churn to its development. Now kwayland-server follows the same Plasma release cycle as KWin, which will simplify our workflow for evolving our Wayland compositor implementation. The new repository is at https://invent.kde.org/plasma/kwayland-server.

Plasma

The global application menu is now working in Wayland thanks to Carson Black's patch series: D27464, D28168, D28150, D28112, D27959, D27818 and D28146, bug. This is typical Wayland work: create new APIs because the old ones were X-specific or could not work with Wayland, add some wiring code, and then use the new APIs in the affected programs. This most often ends up producing better, more decoupled code architecture.

The task manager can now bring forward multiple windows of the same program, remembering the order in which they are stacked together: bug, patch 1, patch 2, patch 3 and patch 4.

KRunner is better positioned.

The KScreen OSDs are now usable, allowing you to set up a newly connected screen and identify your plugged-in screens: bug, D28817, D28818, D28916

Bugfixes

Quite a few KWin crashes were resolved: D28668 D28858 D28889 D27536 kwin!9 kwin!8

And DrKonqi got a couple of fixes: D28692, D28832.

Plasma Wayland specific features

Thanks to the way Wayland is architected, we can add features that are impossible with the X server. For instance, in the Plasma 5.19 Wayland session you will be able to set the scrolling speed for mice and touchpads. D28331 D28310

But our journey is far from over.

Our goal is, in part, to make Plasma run on Wayland by default, allowing pixel perfection, real security in the display server, better performance and an overall better architecture.

The road is still long to reach feature parity with the venerable X-based sessions, so we need as many people as possible to help, whether it is with testing, hacking on code or reporting bugs.

There is a Plasma virtual sprint starting right now, spanning two weeks, in which a lot of people involved in the Goal will participate. I invite you to join in: Plasma virtual sprint 2020.

Also, Drew DeVault released a great book about Wayland, which you can read at https://wayland-book.com/ to better understand Wayland's inner workings.

Week 0 – GSoC Project Report

Tuesday 2nd of June 2020 03:51:07 AM

This week corresponds to week 2 of the planned timeline. I had planned to write tests and get started with the MVC classes for the storyboard docker this week. The comment menu from the previous week was also to be implemented.

I managed to set up the comment menu's delegate and model classes. These will handle the comment section of the main model, passing on signals such as toggling visibility, swapping, deletion and addition of comment fields. The comment menu's model inherits QAbstractListModel. It holds a list of the comment fields, such as Action or Dialogue, along with a variable for each field's visibility state. The visibility variable can be toggled, and this should change the visibility of the comment field in the storyboard view.

Setting up the test to build with CMake was a pain, mainly because of my inexperience with CMake. But I managed to build the test and learned some things about CMake.

Writing tests was tougher than I expected, since I have no prior experience with writing unit tests. I wrote tests for both models, comment and storyboard, and their interactions in one test class, mainly because they are supposed to work as a unit. Writing tests gave me a sense of clarity about how I should go about implementing the storyboard model classes. I am using QAbstractItemModelTester from Qt's test suite to test the models. I have also implemented some other tests to make sure both models stay in sync, e.g. whether adding new comments to the commentModel adds columns to the storyboardModel. And since QAbstractItemModelTester does not have any destructive testing capabilities, I have tried to cover some of the corner cases where the models might fail.
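The sync invariant described above (a comment added to the comment model must appear as a column in the storyboard model) can be sketched in plain Python. These classes are illustrative stand-ins, not the actual Qt/C++ model classes used in Krita:

```python
# Minimal sketch of the comment/storyboard sync invariant.
# The real implementation uses QAbstractListModel signals; here a plain
# callback list stands in for Qt's signal/slot mechanism.
class CommentModel:
    def __init__(self):
        self.fields = []      # e.g. ["Action", "Dialogue"]
        self.listeners = []   # callbacks fired whenever a field is added

    def add_field(self, name):
        self.fields.append(name)
        for notify in self.listeners:
            notify(name)

class StoryboardModel:
    def __init__(self, comment_model):
        self.columns = []
        comment_model.listeners.append(self.on_comment_added)

    def on_comment_added(self, name):
        # Each new comment field becomes a column in the storyboard model.
        self.columns.append(name)

comments = CommentModel()
storyboard = StoryboardModel(comments)
comments.add_field("Action")
comments.add_field("Dialogue")
assert storyboard.columns == comments.fields  # models stay in sync
```

A unit test of the real models would assert the same property after each insertion, which is essentially what the sync tests described above do.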

I couldn’t make much progress on implementing the MVC classes for the main docker, but I had only planned to start them, so I can cover that this week.

This week I will focus on getting the MVC classes ready with an editable metadata (comment) field.

Google Summer of Code 2020 – Community bonding and a bit about text annotation

Tuesday 2nd of June 2020 12:03:25 AM

Hello! As I said in the previous post I will be posting in this blog about my experiences in GSoC 2020 (if you do not know about it, see my first post).

The community bonding period has ended, and the coding period now officially begins. This is my second (and late) post, in which I will talk about one of my main objectives in this project, text annotation. But first, a little introduction:

In supervised learning, data annotation is indispensable: the machine learning model learns to recognize predetermined patterns so that the algorithm can treat new, non-annotated data and successfully do its task. marK is a machine learning dataset annotation tool that aims to facilitate this important process of annotating data.

Text annotation

Text annotation, one type of data annotation, is the task of labeling text-based data. The acquired metadata makes it possible to train the learning model to recognize patterns and tackle a huge set of problems and niches. It has a number of subfields, each meant for a specific niche/objective, such as:

Phrase chunking Image from brat, an open source text annotation tool

Phrase chunking consists of labelling parts of the text according to their grammatical role, such as noun phrase, verb phrase, adjective phrase, adverb phrase and prepositional phrase, abbreviated as NP, VP, ADJP, ADVP and PP, respectively.

Named entity recognition Image from doccano, an open source text annotation tool

Named entity recognition (NER) marks named entities in the text; these entities are labelled with predetermined labels such as corporation, location, person, etc. It is used to discern and recognize selected entities in a text.

Named entity linking

Named entity linking (NEL) is used alongside named entity recognition; its task is to link entity mentions to the corresponding entries in an external knowledge base such as Wikipedia.

This is by no means an exhaustive list; it is only meant to show some of the possibilities of text annotation.

How text annotation should be like in marK

I am still not sure what the graphical interface should look like; while studying text annotation tools for machine learning, I realized it has a lot more potential than I previously thought. Text annotation in marK should be as flexible as possible, allowing the user to annotate easily and comfortably. To get there, I will talk with my mentor Caio and figure out the most reasonable way of doing it.

Behind the GUI, marK will have a set of classes that take care of tasks related to text annotation, with a bridge to the KTextEditor API, which will play a big role in this part as the component responsible for displaying the text and allowing its selection. marK is also going to have classes that represent the metadata acquired during annotation, holding the information that will later be used to generate the output (currently a JSON or XML file).
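As a sketch of what such metadata classes might look like (the class and field names here are hypothetical, not marK's actual API, and marK itself is written in C++/Qt), a span-based annotation serialized to JSON could be:

```python
# Hypothetical sketch of annotation metadata: each annotation records a
# character span and a label, and the whole set is serialized to JSON.
import json

class TextAnnotation:
    def __init__(self, start, end, label):
        self.start = start   # character offset where the span begins
        self.end = end       # character offset where the span ends (exclusive)
        self.label = label   # e.g. an NER label such as "PERSON"

    def to_dict(self):
        return {"start": self.start, "end": self.end, "label": self.label}

text = "Ada Lovelace worked in London."
annotations = [
    TextAnnotation(0, 12, "PERSON"),      # text[0:12] == "Ada Lovelace"
    TextAnnotation(23, 29, "LOCATION"),   # text[23:29] == "London"
]

output = json.dumps(
    {"text": text, "annotations": [a.to_dict() for a in annotations]},
    indent=2,
)
print(output)
```

The same structure maps naturally onto an XML serialization, the other output format mentioned above.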

With this, I hope I have clarified and explained one of my main goals in this project a little better.

That is it, see you in the next post

Software Product Inventory: what is it and how to implement it.

Monday 1st of June 2020 07:07:33 PM

The concept of an inventory applied to software, sometimes called a catalogue, is not new. In IT/help-desk contexts it usually refers to the software deployed in your organization, and over the years there have been many IT software inventory management tools. I first started to think about it beyond that meaning when working on deployments of Linux-based desktops at scale.

The popularity of Open Source and Continuous Delivery is giving this traditionally static concept a wider scope as well as more relevance. It is still immature though, so read the article with that in mind.

1.- What is Inventory in software product development?

I like to think about the software inventory as the single source of truth of your software product, and therefore the main element for auditing product development and delivery.

Isn’t that the source code?

Yes, but not only. The source code corresponding to the product that you ship (distribute) is a big part of it, but there are other important elements that should be considered part of the inventory like:

  • Requirements and/or tests, logs and results.
  • Technical documentation.
  • Tools and pipelines configuration files.
  • Packages, definitions or recipes…
  • Hashes, signatures, crypto libraries
  • License metadata, manifests, etc.
  • Metadata associated with security checks, permission descriptions, etc.
  • Data associated with process performance metrics and monitoring/telemetry.
  • Many more…

When defined that way, the Software Inventory is a concept relevant in every stage of the software product life cycle. When you introduce, change, produce, publish, deploy or distribute any element of your product portfolio, your software inventory should change too.

There are two interesting considerations to add.

1.- If your product is part of a supply chain, like in any Open Source upstream/downstream environment, then the software inventory concept expands and becomes even more relevant, since it can be an essential inbound-outbound control mechanism, even at acquisition time.

2.- In critical environments, especially safety-critical ones, keeping such a single source of truth goes beyond a “good practice”. Integrity, traceability and reproducibility, for example, can be simpler to manage with a Product Software Inventory.

When you think about this particular case, it becomes clear that the elements that belong to the inventory go beyond the actual deliverables or “product sources”. The inventory should also include the elements and tools necessary to generate, transform, evaluate and deploy/ship them, and to evaluate their purpose.

2.- Static vs dynamic concept

Considering the above, the Software Product Inventory is a living, dynamic construction, with the capacity to be frozen at any point in time (snapshot). This might seem obvious, but it implies a different approach than the one supply and release management has traditionally considered (deliverables).

If evaluating, adding, modifying or managing elements of the inventory requires any action that significantly increases the cycle time of any specific stage, decompose those actions, parallelize them when possible and, when there is no choice, push them to the right in the pipelines. Ideally, no Software Product Inventory related activity should produce any friction in the code flow.
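The "decompose and parallelize" advice can be sketched as follows; the check functions and component names are hypothetical placeholders for real pipeline checks, not any particular CI system's API:

```python
# Sketch: instead of one blocking inventory step, run independent checks
# concurrently so they do not add to the stage's cycle time.
from concurrent.futures import ThreadPoolExecutor

def license_check(component):
    # Placeholder for e.g. an SPDX header scan of the component.
    return (component, "license", "ok")

def integrity_check(component):
    # Placeholder for e.g. a checksum/signature verification.
    return (component, "integrity", "ok")

components = ["libfoo", "app-core"]   # made-up component names
checks = [license_check, integrity_check]

with ThreadPoolExecutor() as pool:
    futures = [pool.submit(check, c) for c in components for check in checks]
    results = [f.result() for f in futures]

print(len(results))  # 4 independent check results, gathered in parallel
```

Each result can then be signed and stored in the inventory alongside the component it describes, without any check blocking the others.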

In a Continuous delivery environment, implementing the inventory requires to take actions across the entire development and delivery processes. Here are some points to consider at key stages:

2.1.- Inbound process: stage 0 of the development process

No software or any other element can become part of the product portfolio if it is not present in the Software Inventory. It makes sense to implement the Product Inventory concept as part of the inbound stage (stage 0). Following Continuous Delivery principles and practices, here are some things to avoid vs promote:

  • Handovers or manual/committee-based vs code-review-like approval processes (pull vs push).
  • Completeness vs walking skeleton approach.
  • Management oriented (document based) vs engineering oriented (git based) tooling whenever possible.
  • Reports (manual) vs evidences (automated and reproducible) as output.
  • Access control vs signing and encrypting (if needed).

Avoid gatekeeping activities. It is better to promote high throughput and short feedback loops than “quality gates” to improve product quality. If an evaluation is not completed, it is better to tag such a piece of software as pending a decision and let the code flow than to have engineers waiting for a third party's decision.

I recognize that the concept might be too abstract to be an easy sell at first beyond the inbound and outbound (release/deploy) stages. Sadly, there is a strong tendency to pick up the concept at the inbound stage in order to establish, early on, a gatekeeping, committee-based process to control the software that developers use in the project, frequently compromising the code flow at a very early stage.

I prefer to focus first on the procurement stage in the case of suppliers, or on how the relation is established with partners. These are hand-over processes that benefit heavily from restructuring, reducing the acquisition and on-boarding time and improving the associated conditions.

More frequently than I would like to admit, Open Source is becoming a driver in this wrong direction, in many cases due to the proliferation of Open Source Offices in corporations that prefer to focus their initial attention on establishing specific policies for their own developers rather than on changing their relations with partners and suppliers.

This is frequently due to a lack of understanding of software product development at scale and of what Continuous Delivery is about. In a nutshell, having their own engineers selecting the right Open Source software is prioritized over changing the relation with their existing commercial ecosystem, a more difficult but, in my experience, often higher-impact activity.

2.2.- Outbound process (deployment or release): the last stage of the delivery process.

The inventory accumulates all the elements required to ship/deploy the product, plus all the elements required to recreate and evaluate the development and delivery process as well as the product itself, whether they are released or deployed. Ideally, these elements are evidence-based instead of report-based.

As in the inbound case, each element of the Inventory should be signed/encrypted, as should the overall snapshot, associated with the deployed/released product version. In case you are consuming or producing Free Software, please see the OpenChain Project specification for more information about some good practices.

2.3.- Intermediate stages

As previously mentioned, the concept of Inventory is relevant at every stage of the development/delivery process. In general, it is all about generating additional/parallel outputs within the pipelines, signing and storing them in association with the related source code and binaries in a way that those evidences become “part of the product”. Using proprietary tools might break the trust chain in your process. This is something to consider carefully in safety critical environments. You will also need to consider the hardware, including dev. versions and prototypes.

A very interesting and open field is the Inventory concept in the context of safety critical certifications that traditionally have been very report-heavy-oriented. In this regard, I find the usage of system thinking very promising. Check Trustable Software, for instance.

3.- Some practical advice

3.1.- Walking skeleton vs completeness

I love the walking skeleton concept for designing and implementing processes in product development. It is significantly better to establish an end-to-end approach to the Inventory, where it has a light/soft/incomplete presence along the entire development/delivery cycle, than to try to implement it stage by stage aiming for completeness, which prevents you from having process-wide feedback loops.

It is not so much about doing it right as it is about moving fast in the right direction.

For instance, a frequent mistake is to concentrate most of the activities related to software license compliance on the inbound and outbound stages. Software license conformance and clearance has traditionally been perceived in many industries as a validation process performed by specialists, just like testing was done not so long ago, or a procurement action (acquisition).

Although lately more and more corporations are promoting the execution of license compliance activities at both stages (inbound and outbound), since they consume and ship more and more FOSS, these are still very report-based, specialist-driven and management-controlled activities.

I have witnessed enough dramatic situations to understand and promote that software license compliance is everybody’s job, just like tests or technical documentation (everything as code approach). Software license conformance and clearance, together with security, testing and technical documentation, can become the key drivers of the implementation of the Product Inventory concept. They share the same principles in this regard. The history of testing is the mirror to look at.

Decompose the software license compliance activities (conformance and clearance) and perform them across your pipelines. Start by executing simple conformance checks (REUSE) early on (in the inbound process, for instance). Coordinate such activities with the security team to also perform simple static code analysis. Agree with the architects or tech leads on checking coding guidelines or other elements that can have a future impact on quality, taking advantage of the Inventory concept. Add not just the software and the checks to the Inventory, but also the results, logs and the simple tools/scripts used.
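As an illustration of the kind of simple conformance check mentioned above, a minimal SPDX-tag scan in the spirit of REUSE might look like this (a sketch, not the actual `reuse` tool; the demo files are made up):

```python
# Sketch of a simple license-conformance check: scan the first few lines
# of each source file for an SPDX-License-Identifier tag.
import os
import tempfile

def has_spdx_tag(path, max_lines=10):
    """Return True if an SPDX tag appears within the first few lines."""
    with open(path, encoding="utf-8", errors="replace") as f:
        for _, line in zip(range(max_lines), f):
            if "SPDX-License-Identifier:" in line:
                return True
    return False

# Demo on two temporary files, one compliant and one not.
with tempfile.TemporaryDirectory() as d:
    ok = os.path.join(d, "ok.c")
    bad = os.path.join(d, "bad.c")
    with open(ok, "w") as f:
        f.write("// SPDX-License-Identifier: GPL-2.0-or-later\nint main(void) {}\n")
    with open(bad, "w") as f:
        f.write("int main(void) {}\n")
    results = (has_spdx_tag(ok), has_spdx_tag(bad))

print(results)  # (True, False)
```

A check this cheap can run on every commit, with its log stored in the inventory as evidence; the heavier license-scanning tools can then pull from the inventory later, as described below.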

More time consuming and intensive activities using more complex static code analysis or code scanning (licenses) tools can use the inventory as source (pull approach), instead of requesting the teams to perform such activities on their own (outside the pipelines) or establishing hand-over processes with specialists.

Be careful about how and when you include such activities in the pipelines, though. Again, decompose and parallelize, and only when there is no choice, push these activities to the right in the delivery process. But do not break the code flow.

3.2.- Keep It Simple Stupid until it is not stupid anymore.

Here are some simple actions to start with…

At least in the beginning, use the same tools for the Inventory that you are already using to develop the product. Initially, your inventory can be nothing more than a file with a list of repos, hashes and links pointing at the locations of product elements. This already has value for the security and software license compliance teams.
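Such a minimal starting inventory could be sketched like this; the repository names and URLs are made up, and in practice the hashes would be commit IDs or artifact checksums pulled from the build system rather than hashes of placeholder content:

```python
# Sketch of a minimal inventory file: a list of repos, hashes and links.
import hashlib
import json

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

inventory = [
    {
        "repo": "example-product-app",                # hypothetical name
        "url": "https://git.example.com/app.git",     # hypothetical URL
        "hash": sha256_of(b"app source snapshot"),    # stand-in for a commit ID
    },
    {
        "repo": "example-product-docs",
        "url": "https://git.example.com/docs.git",
        "hash": sha256_of(b"docs snapshot"),
    },
]

print(json.dumps(inventory, indent=2))
```

A file like this, versioned next to the product sources and signed at release time, is already a usable single source of truth to grow from.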

If you use a multi-repository approach, pay attention to where the build tool pulls the software from (definitions/recipes) to integrate the product. Make sure your initial inventory and the build tool are “in sync”. This will have a tremendous impact later on.

Export and sign the technical documentation living in your repositories (Markdown, PlantUML, .svg, etc.) as documents if it needs to be part of the product deliverables, so you can establish simple checks to confirm its presence, integrity, etc. These outputs should be part of the Inventory as well.

Many of you already perform these and many other activities as part of the development and delivery process. The question is what to do with the associated metadata, the tooling used to generate it, the intermediate states required to get the output and the executed scripts; how to relate them to the code and to other elements from previous stages of the process; and how to guarantee their persistence and integrity, manage them at scale, store them, etc.

3.3.- Act as an auditor

I always ask myself the following question: if I replicate the complete product development system, providing all the product inputs for auditing purposes, how can I save the auditor time, so she does not need to understand the system itself and perform every action again in order to fully trust the output and the system (what/how we did it, evidence-based) rather than me or the workforce (who did it, report-based)? Remember that you or your team will be those auditors in the future.

3.4.- Do not push the concept too early

If your starting point is very management-driven (i.e. reporting-driven), for instance an environment where requirements or license information is generated and kept in .docx documents, where the usage of binaries over source code is the norm, where proprietary tools are not questioned, or where trust in the actual processes and outputs is based on who signs the associated reports (officers) instead of on evidence, save yourself the headaches and do not try to push the Inventory concept beyond your area of direct influence. In my experience, it is worthless there; you have a different battle to fight, a different and deeper problem to solve in such a case.

In such cases, simply get ready within your area of influence for the future crisis that will hit your product, so you get a chance to become part of the potential solution. Look for allies, like the security, software license compliance and technical writing teams, to expand that ring of influence. Hopefully they will see the value of the inventory and how it can become a trigger for supporting modern and widely accepted practices within their domains across the organization.

4.- Summary

The Software Product Inventory is a high-level (abstract) concept that will help you move towards creating trustable software. Since it is part of the development process, it is essential to follow Continuous Delivery principles and practices when designing and implementing it.

Some of the concepts behind this idea are probably already present in your organization, but frequently with a management-driven approach, associated with the release and/or inbound stages, and implemented in ways that generate a negative impact (friction) on the product code flow, and therefore on throughput and stability.

System thinking, and asking questions as if you were an auditor, will help you implement simple measures first, and habits later, that will raise the quality of the product over time, even if the Product Inventory concept does not fly in your team or organization. That approach has helped me, at least.

Second Beta for Krita 4.3.0 Released

Monday 1st of June 2020 03:36:58 PM

This is the second beta release for Krita 4.3.0. It’s later than expected because our system for making release builds was temporarily unavailable.

Since the first beta, the following issues have been addressed:

  • Fix Color picking in freehand path and bezier curve tool (BUG:373037).
  • Fix zooming after changing the image resolution (BUG:421797)
  • Switch the stabilizer to always use scalable distance (BUG:421314)
  • Make sure channel thumbnails are not inverted when working with CMYK images (BUG:421442)
  • Make it possible to use save incremental and incremental backup on files in folders that are named to look like incremental saves or backups (BUG:421792)
  • The Python API for handling Document and Node objects is now synchronous: you do not have to add extra waitForDone calls anymore. (BUG:401497)
  • On macOS, support for using modifier keys with canvas input actions has been improved ( BUG:372646, BUG:373299, BUG:391088)
  • Implement touch support for Wacom tablets. Patch by P. Varet — thanks! (BUG:421295)
  • Fix issues with files taking a long time to save (BUG:421584)
  • Make the placeholder text in the text shape shorter and translatable (BUG:421663)
  • Shift-click on a layer to see the layer in isolation doesn’t change the visibility state of all layers anymore
  • Animation frames outside the requested range are no longer rendered
  • Make the autosave recovery dialog clearer (BUG:420014)
  • Properly play animations and show onion skins when viewing layers in isolation (BUG:394199)
  • Fix the position of the text shape editor on Windows (BUG:419529)
  • Fix gamut mask rendering (BUG:421142)
  • Fix artefacts when rendering the marching ants outline for small or complex selections (BUG:407868, BUG:419240, BUG:413220)
  • The animation timeline now correctly highlights the current frame after loading a file (BUG:403854)
  • Correctly align the onion skin after cropping an image (BUG:419462)
  • Fix rendering animations with odd render dimensions (BUG:396128)
  • Set the default values for the split layer dialog to something sensible
  • Fix eraser mode to be reset when the same color is picked from the canvas (BUG:415476)
  • Fix the aspect ratio of layer and channel thumbnails
  • Show the unsqueezed text of a squeezed combobox as a tooltip (BUG:415117)
  • Add more translation context in several places
  • Fix selecting colors in the stroke selection dialog (BUG:411482)
  • Fix the memory management of documents created from Python (BUG:412740)

Also, Rafał Mikrut has submitted many fixes for issues with memory management and pointer access. Thanks!

The full release notes bring you all the details!

Please help improve Krita by testing this beta!

Download

Windows

If you’re using the portable zip files, just open the zip file in Explorer and drag the folder somewhere convenient, then double-click on the krita icon in the folder. This will not impact an installed version of Krita, though it will share your settings and custom resources with your regular installed version. For reporting crashes, also get the debug symbols folder.

Linux

(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)

OSX

Note: the gmic-qt is not available on OSX.

Android

The Android builds were made from git, not the release tarball, so they don’t have translations. The beta is labeled 4.3.0-beta1 but actually contains all the fixes for beta 2, except for the commit that changed the version number.

This version of Krita for Android can load .kra files from Google Drive folders on ChromeOS, has fixes for problems with the menubar on some Samsung devices and has Samsung Air gestures integrated.

It is still not recommended to use these betas on phones, though they do install. This beta will also be available in the Google Play Store.

Source code md5sum

For all downloads:

Key

The Linux appimage and the source .tar.gz and .tar.xz tarballs are signed. You can retrieve the public key over https here: 0x58b9596c722ea3bd.asc. The signatures are here (filenames ending in .sig).

Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos! With your support, we can keep the core team working on Krita full-time.

Status report: Week 1

Monday 1st of June 2020 11:45:00 AM

Hey all! This is my first report of the project’s Coding Period.

The project’s objectives for this week are:

  • define the new generator
  • build SeExpr
  • and try calling it from within Krita

I had also promised in the previous post to:

  • dissect SeExpr
  • write up a list of the supported libraries in each OS.
Krita’s library support state

I’ve dissected each platform’s build scripts into this Google Docs sheet. This piece of work was done against the 4.3 branch, not taking into account some fixes I wrote for the AppImage.

SeExpr prototype

I’m glad to say, I’ve been mostly successful!

Windows and macOS: success! Linux, not so much...

The new generator

0638ce85 introduces a new type of layer generator, identified (like its namesake) as seexpr.

File format-wise, it’s expressed in the manifest as a layer of type generatorlayer:

<layer opacity="255" channelflags="" locked="0" filename="layer2" uuid="{1b9dbcc4-7dbc-4c57-95c5-3b3353ce0eac}" selected="true" generatorname="seexpr" x="0" compositeop="normal" nodetype="generatorlayer" colorlabel="0" y="0" name="Layer 3" visible="1" collapsed="0" intimeline="1" generatorversion="1"/>

while its script is stored in the layer’s configuration file (layers/layer2.filterconfig):

<!DOCTYPE params>
<params version="1">
  <param name="script" type="string"><![CDATA[$val=voronoi(5*[$u,$v,.5],4,.6,.2); $color=ccurve($val, 0.000, [0.141, 0.059, 0.051], 4, 0.185, [0.302, 0.176, 0.122], 4, 0.301, [0.651, 0.447, 0.165], 4, 0.462, [0.976, 0.976, 0.976], 4); $color ]]></param>
</params>
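The script can be recovered from such a configuration file with ordinary XML tooling; here is a minimal Python sketch (the embedded document is a trimmed version of the configuration above, and the function name is hypothetical):

```python
import xml.etree.ElementTree as ET

# A trimmed version of the layer configuration shown above; only the
# "script" parameter matters for this sketch.
FILTER_CONFIG = """<!DOCTYPE params>
<params version="1">
  <param name="script" type="string"><![CDATA[$val=voronoi(5*[$u,$v,.5],4,.6,.2); $color]]></param>
</params>"""

def read_seexpr_script(xml_text):
    # The parser strips the CDATA wrapper, so param.text is the raw script.
    root = ET.fromstring(xml_text)
    for param in root.iter("param"):
        if param.get("name") == "script":
            return param.text
    return None

print(read_seexpr_script(FILTER_CONFIG))
```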

Code-wise, the new generator is a mixture of:

  • a port of Disney’s initialization code, adapted to use Qt’s types (QMap and sanity assertions),
  • the existing Simplex Noise generator, which I used as a basis to understand how to get its configuration and report progress.
A quick dissection of SeExpr

The library itself can be accessed with two bits of code. The first has been named SeExprExpressionContext; it’s a subclass of SeExpr’s Expression class, adapted to provide four variables so far:

  • u and v are the current pixel’s normalized, centerpoint coordinates;
  • w and h are the image’s width and height.

The SeExprExpressionContext can be initialized directly, with just the string retrieved from the configuration. On each iteration, I get the pixel’s coordinates, update them in the expression context, and evaluate the expression. And since the library outputs floating-point normalized RGB, I ship these values directly as an instance of Qt’s QColor.

void KisSeExprGenerator::generate(KisProcessingInformation dstInfo,
                                  const QSize &size,
                                  const KisFilterConfigurationSP config,
                                  KoUpdater *progressUpdater) const
{
    KisPaintDeviceSP device = dstInfo.paintDevice();
    Q_ASSERT(!device.isNull());
    Q_ASSERT(config);
    if (config) {
        QString script = config->getString("script");
        QRect bounds = QRect(dstInfo.topLeft(), size);
        const KoColorSpace *cs = device->colorSpace();
        KisSequentialIteratorProgress it(device, bounds, progressUpdater);
        SeExprExpressionContext expression(script);
        expression.m_vars["u"] = new SeExprVariable();
        expression.m_vars["v"] = new SeExprVariable();
        expression.m_vars["w"] = new SeExprVariable(bounds.width());
        expression.m_vars["h"] = new SeExprVariable(bounds.height());
        if (expression.isValid() && expression.returnType().isFP(3)) {
            double pixel_stride_x = 1. / bounds.width();
            double pixel_stride_y = 1. / bounds.height();
            double &u = expression.m_vars["u"]->m_value;
            double &v = expression.m_vars["v"]->m_value;
            while (it.nextPixel()) {
                u = pixel_stride_x * (it.x() + .5);
                v = pixel_stride_y * (it.y() + .5);
                const qreal *value = expression.evalFP();
                QColor color;
                // SeExpr already outputs normalized RGB
                color.setRedF(value[0]);
                color.setGreenF(value[1]);
                color.setBlueF(value[2]);
                cs->fromQColor(color, it.rawData());
            }
        }
    }
}

The Linux problem

It works really nice, under Windows and macOS. Linux, however…

Under Linux, SeExpr’s interpreter doesn’t work correctly when invoked from another namespace. Inside Krita, it truncates all floating-point values in its internal state, as shown from this dump (look at the fp section):

Parse tree desired type lifetime_error Float[3] actual varying Float[3] '$val=voronoi(5*[$u,$v,.5],4,.6,.2); $color=ccurve($val, 0.000, [0.141, 0.059, 0.051], 4, 0.185, [0.302, 0.176, 0.122], 4, 0.301, [0.651, 0.447, 0.165], 4, 0.462, [0.976, 0.976, 0.976], 4); $color' N7SeExpr214ExprModuleNodeE type=varying Float[3] '$val=voronoi(5*[$u,$v,.5],4,.6,.2); $color=ccurve($val, 0.000, [0.141, 0.059, 0.051], 4, 0.185, [0.302, 0.176, 0.122], 4, 0.301, [0.651, 0.447, 0.165], 4, 0.462, [0.976, 0.976, 0.976], 4); $color' N7SeExpr213ExprBlockNodeE type=varying Float[3] '$val=voronoi(5*[$u,$v,.5],4,.6,.2);' N7SeExpr28ExprNodeE type=varying None '$val=voronoi(5*[$u,$v,.5],4,.6,.2);' N7SeExpr214ExprAssignNodeE type=varying None 'voronoi(5*[$u,$v,.5],4,.6,.2)' N7SeExpr212ExprFuncNodeE type=varying Float[3] '5*[$u,$v,.5]' N7SeExpr216ExprBinaryOpNodeE type=varying Float[3] '5' N7SeExpr211ExprNumNodeE type=constant Float '[$u,$v,.5]' N7SeExpr211ExprVecNodeE type=varying Float[3] '$u' N7SeExpr211ExprVarNodeE type=varying Float '$v' N7SeExpr211ExprVarNodeE type=varying Float '.5' N7SeExpr211ExprNumNodeE type=constant Float '4' N7SeExpr211ExprNumNodeE type=constant Float '.6' N7SeExpr211ExprNumNodeE type=constant Float '.2' N7SeExpr211ExprNumNodeE type=constant Float '$color=ccurve($val, 0.000, [0.141, 0.059, 0.051], 4, 0.185, [0.302, 0.176, 0.122], 4, 0.301, [0.651, 0.447, 0.165], 4, 0.462, [0.976, 0.976, 0.976], 4);' N7SeExpr214ExprAssignNodeE type=varying None 'ccurve($val, 0.000, [0.141, 0.059, 0.051], 4, 0.185, [0.302, 0.176, 0.122], 4, 0.301, [0.651, 0.447, 0.165], 4, 0.462, [0.976, 0.976, 0.976], 4)' N7SeExpr212ExprFuncNodeE type=varying Float[3] '$val' N7SeExpr211ExprVarNodeE type=varying Float[3] '0.000' N7SeExpr211ExprNumNodeE type=constant Float '[0.141, 0.059, 0.051]' N7SeExpr211ExprVecNodeE type=constant Float[3] '0.141' N7SeExpr211ExprNumNodeE type=constant Float '0.059' N7SeExpr211ExprNumNodeE type=constant Float '0.051' N7SeExpr211ExprNumNodeE type=constant Float '4' 
N7SeExpr211ExprNumNodeE type=constant Float '0.185' N7SeExpr211ExprNumNodeE type=constant Float '[0.302, 0.176, 0.122]' N7SeExpr211ExprVecNodeE type=constant Float[3] '0.302' N7SeExpr211ExprNumNodeE type=constant Float '0.176' N7SeExpr211ExprNumNodeE type=constant Float '0.122' N7SeExpr211ExprNumNodeE type=constant Float '4' N7SeExpr211ExprNumNodeE type=constant Float '0.301' N7SeExpr211ExprNumNodeE type=constant Float '[0.651, 0.447, 0.165]' N7SeExpr211ExprVecNodeE type=constant Float[3] '0.651' N7SeExpr211ExprNumNodeE type=constant Float '0.447' N7SeExpr211ExprNumNodeE type=constant Float '0.165' N7SeExpr211ExprNumNodeE type=constant Float '4' N7SeExpr211ExprNumNodeE type=constant Float '0.462' N7SeExpr211ExprNumNodeE type=constant Float '[0.976, 0.976, 0.976]' N7SeExpr211ExprVecNodeE type=constant Float[3] '0.976' N7SeExpr211ExprNumNodeE type=constant Float '0.976' N7SeExpr211ExprNumNodeE type=constant Float '0.976' N7SeExpr211ExprNumNodeE type=constant Float '4' N7SeExpr211ExprNumNodeE type=constant Float '$color' N7SeExpr211ExprVarNodeE type=varying Float[3] Eval strategy is interpreter ---- ops ---------------------- (null) 0x7ffff7eff210 ( 2 4) (null) 0x7ffff7eff210 ( 3 5) (null) 0x7ffff7f05d50 ( 4 5 6 7) _ZN7SeExpr27PromoteILi3EE1fEPiPdPPcRSt6vectorIiSaIiEE 0x7ffff7edbf90 ( 3 10) (null) 0x7ffff7eef900 ( 10 7 13) _ZN7SeExpr214ExprFuncSimple6EvalOpEPiPdPPcRSt6vectorIiSaIiEE 0x7ffff7edc5f0 ( 4 5 20 19 13 16 17 18) (null) 0x7ffff7f04b40 ( 20 0) (null) 0x7ffff7f05d50 ( 27 28 29 30) (null) 0x7ffff7f05d50 ( 35 36 37 38) (null) 0x7ffff7f05d50 ( 43 44 45 46) (null) 0x7ffff7f05d50 ( 51 52 53 54) _ZN7SeExpr214ExprFuncSimple6EvalOpEPiPdPPcRSt6vectorIiSaIiEE 0x7ffff7edc5f0 ( 6 7 59 58 0 26 30 33 34 38 41 42 46 49 50 54 57) (null) 0x7ffff7f04b40 ( 59 23) ---- opdata ---------------------- opData[0]= 2 opData[1]= 4 opData[2]= 3 opData[3]= 5 opData[4]= 4 opData[5]= 5 opData[6]= 6 opData[7]= 7 opData[8]= 3 opData[9]= 10 opData[10]= 10 opData[11]= 7 opData[12]= 13 
opData[13]= 4 opData[14]= 5 opData[15]= 20 opData[16]= 19 opData[17]= 13 opData[18]= 16 opData[19]= 17 opData[20]= 18 opData[21]= 20 opData[22]= 0 opData[23]= 27 opData[24]= 28 opData[25]= 29 opData[26]= 30 opData[27]= 35 opData[28]= 36 opData[29]= 37 opData[30]= 38 opData[31]= 43 opData[32]= 44 opData[33]= 45 opData[34]= 46 opData[35]= 51 opData[36]= 52 opData[37]= 53 opData[38]= 54 opData[39]= 6 opData[40]= 7 opData[41]= 59 opData[42]= 58 opData[43]= 0 opData[44]= 26 opData[45]= 30 opData[46]= 33 opData[47]= 34 opData[48]= 38 opData[49]= 41 opData[50]= 42 opData[51]= 46 opData[52]= 49 opData[53]= 50 opData[54]= 54 opData[55]= 57 opData[56]= 59 opData[57]= 23 ----- fp -------------------------- fp[0]= 0 fp[1]= 0 fp[2]= 0 fp[3]= 5 fp[4]= 0 fp[5]= 0 fp[6]= 0.5 fp[7]= 0 fp[8]= 0 fp[9]= 0.5 fp[10]= 5 fp[11]= 5 fp[12]= 5 fp[13]= 0 fp[14]= 0 fp[15]= 2.5 fp[16]= 4 fp[17]= 0.6 fp[18]= 0.2 fp[19]= 4 fp[20]= 0 fp[21]= 0 fp[22]= 0 fp[23]= 0 fp[24]= 0 fp[25]= 0 fp[26]= 0 fp[27]= 0.141 fp[28]= 0.059 fp[29]= 0.051 fp[30]= 0.141 fp[31]= 0.059 fp[32]= 0.051 fp[33]= 4 fp[34]= 0.185 fp[35]= 0.302 fp[36]= 0.176 fp[37]= 0.122 fp[38]= 0.302 fp[39]= 0.176 fp[40]= 0.122 fp[41]= 4 fp[42]= 0.301 fp[43]= 0.651 fp[44]= 0.447 fp[45]= 0.165 fp[46]= 0.651 fp[47]= 0.447 fp[48]= 0.165 fp[49]= 4 fp[50]= 0.462 fp[51]= 0.976 fp[52]= 0.976 fp[53]= 0.976 fp[54]= 0.976 fp[55]= 0.976 fp[56]= 0.976 fp[57]= 4 fp[58]= 13 fp[59]= 0 fp[60]= 0 fp[61]= 0 ---- str ---------------------- s[0] reserved for datablock = 0 s[1] is indirectIndex = 0 s[2]= 0x��UUUU '��UU...' s[3]= 0x��UUUU '��UU...' s[4]= 0x�)��� '�)��...' s[5]= 0x�)��� '�)��...' s[6]= 0x�*��� '�*��...' s[7]= 0x +��� ' +��...' 
ending with isValid 1 parse error Parse tree desired type lifetime_error Float[3] actual varying Float[3] '$val=voronoi(5*[$u,$v,.5],4,.6,.2); $color=ccurve($val, 0.000, [0.141, 0.059, 0.051], 4, 0.185, [0.302, 0.176, 0.122], 4, 0.301, [0.651, 0.447, 0.165], 4, 0.462, [0.976, 0.976, 0.976], 4); $color' N7SeExpr214ExprModuleNodeE type=varying Float[3] '$val=voronoi(5*[$u,$v,.5],4,.6,.2); $color=ccurve($val, 0.000, [0.141, 0.059, 0.051], 4, 0.185, [0.302, 0.176, 0.122], 4, 0.301, [0.651, 0.447, 0.165], 4, 0.462, [0.976, 0.976, 0.976], 4); $color' N7SeExpr213ExprBlockNodeE type=varying Float[3] '$val=voronoi(5*[$u,$v,.5],4,.6,.2);' N7SeExpr28ExprNodeE type=varying None '$val=voronoi(5*[$u,$v,.5],4,.6,.2);' N7SeExpr214ExprAssignNodeE type=varying None 'voronoi(5*[$u,$v,.5],4,.6,.2)' N7SeExpr212ExprFuncNodeE type=varying Float[3] '5*[$u,$v,.5]' N7SeExpr216ExprBinaryOpNodeE type=varying Float[3] '5' N7SeExpr211ExprNumNodeE type=constant Float '[$u,$v,.5]' N7SeExpr211ExprVecNodeE type=varying Float[3] '$u' N7SeExpr211ExprVarNodeE type=varying Float '$v' N7SeExpr211ExprVarNodeE type=varying Float '.5' N7SeExpr211ExprNumNodeE type=constant Float '4' N7SeExpr211ExprNumNodeE type=constant Float '.6' N7SeExpr211ExprNumNodeE type=constant Float '.2' N7SeExpr211ExprNumNodeE type=constant Float '$color=ccurve($val, 0.000, [0.141, 0.059, 0.051], 4, 0.185, [0.302, 0.176, 0.122], 4, 0.301, [0.651, 0.447, 0.165], 4, 0.462, [0.976, 0.976, 0.976], 4);' N7SeExpr214ExprAssignNodeE type=varying None 'ccurve($val, 0.000, [0.141, 0.059, 0.051], 4, 0.185, [0.302, 0.176, 0.122], 4, 0.301, [0.651, 0.447, 0.165], 4, 0.462, [0.976, 0.976, 0.976], 4)' N7SeExpr212ExprFuncNodeE type=varying Float[3] '$val' N7SeExpr211ExprVarNodeE type=varying Float[3] '0.000' N7SeExpr211ExprNumNodeE type=constant Float '[0.141, 0.059, 0.051]' N7SeExpr211ExprVecNodeE type=constant Float[3] '0.141' N7SeExpr211ExprNumNodeE type=constant Float '0.059' N7SeExpr211ExprNumNodeE type=constant Float '0.051' 
N7SeExpr211ExprNumNodeE type=constant Float '4' N7SeExpr211ExprNumNodeE type=constant Float '0.185' N7SeExpr211ExprNumNodeE type=constant Float '[0.302, 0.176, 0.122]' N7SeExpr211ExprVecNodeE type=constant Float[3] '0.302' N7SeExpr211ExprNumNodeE type=constant Float '0.176' N7SeExpr211ExprNumNodeE type=constant Float '0.122' N7SeExpr211ExprNumNodeE type=constant Float '4' N7SeExpr211ExprNumNodeE type=constant Float '0.301' N7SeExpr211ExprNumNodeE type=constant Float '[0.651, 0.447, 0.165]' N7SeExpr211ExprVecNodeE type=constant Float[3] '0.651' N7SeExpr211ExprNumNodeE type=constant Float '0.447' N7SeExpr211ExprNumNodeE type=constant Float '0.165' N7SeExpr211ExprNumNodeE type=constant Float '4' N7SeExpr211ExprNumNodeE type=constant Float '0.462' N7SeExpr211ExprNumNodeE type=constant Float '[0.976, 0.976, 0.976]' N7SeExpr211ExprVecNodeE type=constant Float[3] '0.976' N7SeExpr211ExprNumNodeE type=constant Float '0.976' N7SeExpr211ExprNumNodeE type=constant Float '0.976' N7SeExpr211ExprNumNodeE type=constant Float '4' N7SeExpr211ExprNumNodeE type=constant Float '$color' N7SeExpr211ExprVarNodeE type=varying Float[3] Eval strategy is interpreter ---- ops ---------------------- (null) 0x7fffe42c6210 ( 2 4) (null) 0x7fffe42c6210 ( 3 5) (null) 0x7fffe42ccd50 ( 4 5 6 7) _ZN7SeExpr27PromoteILi3EE1fEPiPdPPcRSt6vectorIiSaIiEE 0x7fffe42a2f90 ( 3 10) (null) 0x7fffe42b6900 ( 10 7 13) _ZN7SeExpr214ExprFuncSimple6EvalOpEPiPdPPcRSt6vectorIiSaIiEE 0x7fffe42a35f0 ( 4 5 20 19 13 16 17 18) (null) 0x7fffe42cbb40 ( 20 0) (null) 0x7fffe42ccd50 ( 27 28 29 30) (null) 0x7fffe42ccd50 ( 35 36 37 38) (null) 0x7fffe42ccd50 ( 43 44 45 46) (null) 0x7fffe42ccd50 ( 51 52 53 54) _ZN7SeExpr214ExprFuncSimple6EvalOpEPiPdPPcRSt6vectorIiSaIiEE 0x7fffe42a35f0 ( 6 7 59 58 0 26 30 33 34 38 41 42 46 49 50 54 57) (null) 0x7fffe42cbb40 ( 59 23) ---- opdata ---------------------- opData[0]= 2 opData[1]= 4 opData[2]= 3 opData[3]= 5 opData[4]= 4 opData[5]= 5 opData[6]= 6 opData[7]= 7 opData[8]= 3 opData[9]= 10 
opData[10]= 10 opData[11]= 7 opData[12]= 13 opData[13]= 4 opData[14]= 5 opData[15]= 20 opData[16]= 19 opData[17]= 13 opData[18]= 16 opData[19]= 17 opData[20]= 18 opData[21]= 20 opData[22]= 0 opData[23]= 27 opData[24]= 28 opData[25]= 29 opData[26]= 30 opData[27]= 35 opData[28]= 36 opData[29]= 37 opData[30]= 38 opData[31]= 43 opData[32]= 44 opData[33]= 45 opData[34]= 46 opData[35]= 51 opData[36]= 52 opData[37]= 53 opData[38]= 54 opData[39]= 6 opData[40]= 7 opData[41]= 59 opData[42]= 58 opData[43]= 0 opData[44]= 26 opData[45]= 30 opData[46]= 33 opData[47]= 34 opData[48]= 38 opData[49]= 41 opData[50]= 42 opData[51]= 46 opData[52]= 49 opData[53]= 50 opData[54]= 54 opData[55]= 57 opData[56]= 59 opData[57]= 23 ----- fp -------------------------- fp[0]= 0 fp[1]= 0 fp[2]= 0 fp[3]= 5 fp[4]= 0 fp[5]= 0 fp[6]= 0 fp[7]= 0 fp[8]= 0 fp[9]= 0 fp[10]= 5 fp[11]= 5 fp[12]= 5 fp[13]= 0 fp[14]= 0 fp[15]= 0 fp[16]= 4 fp[17]= 0 fp[18]= 0 fp[19]= 4 fp[20]= 0 fp[21]= 0 fp[22]= 0 fp[23]= 0 fp[24]= 0 fp[25]= 0 fp[26]= 0 fp[27]= 0 fp[28]= 0 fp[29]= 0 fp[30]= 0 fp[31]= 0 fp[32]= 0 fp[33]= 4 fp[34]= 0 fp[35]= 0 fp[36]= 0 fp[37]= 0 fp[38]= 0 fp[39]= 0 fp[40]= 0 fp[41]= 4 fp[42]= 0 fp[43]= 0 fp[44]= 0 fp[45]= 0 fp[46]= 0 fp[47]= 0 fp[48]= 0 fp[49]= 4 fp[50]= 0 fp[51]= 0 fp[52]= 0 fp[53]= 0 fp[54]= 0 fp[55]= 0 fp[56]= 0 fp[57]= 4 fp[58]= 13 fp[59]= 0 fp[60]= 0 fp[61]= 0 ---- str ---------------------- s[0] reserved for datablock = 0 s[1] is indirectIndex = 0 s[2]= 0x�41�� '�41�...' s[3]= 0x�41�� '�41�...' s[4]= 0x��/�� '��/�...' s[5]= 0xؙ/�� 'ؙ/�...' s[6]= 0xȚ/�� 'Ț/�...' s[7]= 0x �/�� ' �/�...' ending with isValid 1 parse error Dumps of SeExpr's interpreter state of the Voronoi-lava program. On the left, the state as produced under the imageSynth2 demo program. On the right, the state as produced under Krita's generator. The fp section shows every relevant value has been truncated.

I must admit that I don’t really know how to solve this yet. I’ve tested a more direct port of the initialization (using std::map, and const double instead of const real), but it still corrupts its internal state. I’ll file a question in Disney’s repo and see if they have any idea what could be going on.

What’s next?

In trying to make sure I could test everything, I covered four weeks in one (Weeks 1, 2, 3, and 7).

Basic Subtitling Support in Kdenlive – GSoC ’20

Monday 1st of June 2020 05:30:22 AM

Greetings to all!

A month ago I was selected to participate as a student in Google Summer of Code with Kdenlive. The Community Bonding period is coming to an end and the coding period will soon commence. 

In this post, I am going to talk about what the project is about, how I plan to implement it, and what all I have done in the community bonding period to ensure a smooth and bump-free coding period.

About the Project

Kdenlive is largely limited in its ability to customize and edit subtitles. At present, subtitles are added as an effect, namely the Subtitle effect, which uses an FFmpeg filter to burn the subtitle file onto the respective video.

Basic subtitling support in Kdenlive can be achieved by extending the functionality of the existing “Subtitle” filter, thereby giving users more choices over subtitle customization.
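Burning a subtitle file into a video with FFmpeg’s subtitles filter looks roughly as follows; this is a minimal sketch that only assembles the command line, and the file names are placeholders, not taken from Kdenlive:

```python
# Sketch: build the ffmpeg invocation that burns ("hardcodes") a subtitle
# file into a video, which is conceptually what the Subtitle effect's
# filter does. File names here are hypothetical.
def burn_subtitles_cmd(video, subs, output):
    return ["ffmpeg", "-i", video, "-vf", f"subtitles={subs}", output]

cmd = burn_subtitles_cmd("clip.mp4", "clip.srt", "clip_subbed.mp4")
print(" ".join(cmd))
```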

Planned Implementation
  • A class will have to be created to store the subtitle lines along with their duration and position in the timeline. All customizations the user makes to the subtitle file, such as altering the time, text, and color of the subtitles, will be handled by this class.
  • Design a user-friendly interface, such as a separate QML track, for users to effortlessly customize subtitles in their projects.
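As a rough illustration of the first point, such a class would track each subtitle line with its text and timeline position; here is a minimal Python sketch (all names are hypothetical — the real implementation would be a C++/Qt model inside Kdenlive):

```python
from dataclasses import dataclass

@dataclass
class SubtitleLine:
    start_ms: int  # position in the timeline, in milliseconds
    end_ms: int
    text: str

class SubtitleModel:
    """Holds subtitle lines and lets the user edit their time and text."""
    def __init__(self):
        self.lines = []

    def add_line(self, start_ms, end_ms, text):
        self.lines.append(SubtitleLine(start_ms, end_ms, text))
        # Keep lines ordered by their position in the timeline.
        self.lines.sort(key=lambda line: line.start_ms)

    def edit_text(self, index, new_text):
        self.lines[index].text = new_text

model = SubtitleModel()
model.add_line(2000, 4000, "Hello")
model.add_line(0, 1500, "First line")
model.edit_text(1, "Hello, world")
print([line.text for line in model.lines])  # sorted by start time
```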
Community Bonding Period

First off, I completed a few prerequisites, such as applying for a KDE Developer account and adding my blog to Planet KDE.

I discussed with my mentors how I plan to approach the implementation and finalized what I have to do over the course of the upcoming three months.

I went through the code base of Kdenlive to understand how the Subtitle effect is handled and how the parameter values of different FFmpeg filters can be manipulated. I also learned the standard way of writing a class in the application.

Now that I have better clarity on different aspects of the implementation, I am looking forward to a productive coding period.

Plasma Vault and gocryptfs

Monday 1st of June 2020 12:00:00 AM

I promised gocryptfs support in Vault a long time ago, but I kept failing to deliver on that promise because of other obligations, life and work happenings.

Now, the beauty of Free Software is that the users do not need to rely only on my free time for new Vault features.

Martino Pilia sat down and wrote a gocryptfs backend for Plasma Vault which has been merged and will be available in Plasma 5.19. Many thanks for that!

gocryptfs in Plasma Vault

As with all new things, you are advised to be cautious as there might be some bugs remaining we haven’t detected.

KDE Privacy Team

You can support my work on Patreon, or you can get my book Functional Programming in C++ at Manning if you're into that sort of thing.

Klinker library in KDE Connect Sms app

Sunday 31st of May 2020 10:07:40 PM

So today GSoC’s three-month coding period officially begins. I spent the last month bonding with my mentors and establishing the prerequisites for the rest of the project. My project for GSoC 2020 is to improve MMS support in KDE Connect’s SMS app.
During the community bonding period, the first challenge we faced was implementing a way to send MMS messages from KDE Connect’s Android app. This becomes more challenging once you learn that Android’s MMS APIs are hidden and there is no documentation available for them; that task alone could be beyond the scope of a GSoC project.
With some luck, we found the Klinker library, an open-source SMS/MMS library for Android. I spent some time going through its implementation, and after understanding how it works, I started integrating it into KDE Connect. Within two weeks I was able to send MMS messages through KDE Connect for the first time.
I would say it is a great library for third-party Android developers who want to implement similar functionality in their applications.

Apart from this, the work to add MMS support to the KDE Connect SMS app is in progress. Very soon we will have MMS support in KDE Connect! Here’s a short demonstration of what I have implemented so far.

More in Tux Machines

libinput 1.16.0

libinput 1.16.0 is now available.

No significant changes since the second RC, so here's a slightly polished RC1
announcement text.

This has been a long cycle, mostly because there weren't any huge changes on
the main development branch and a lot of the minor annoyances have found
their way into the 1.15.x releases anyway.

libinput now monitors timestamps of the events vs the current time when
libinput_dispatch() is called by the compositor. Where the difference
*may* result in issues, a (rate-limited) warning is printed to the log.
So you may see messages popping up in the form of
  "event processing lagging behind by XYZms, your system is too slow"
This is a warning only and has no immediate effect. Previously we would only
notice (and warn about) this when it affected an internal timer. Note that
these warnings do not show an issue with libinput; they show that the
compositor is not calling libinput_dispatch() quickly enough.
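The check described above amounts to comparing event timestamps against the time of dispatch and rate-limiting the resulting warning. A schematic Python sketch of that idea (the thresholds and function names here are illustrative, not libinput's actual values or API):

```python
# Schematic version of the lag check: warn when the gap between an event's
# timestamp and the time of dispatch exceeds a threshold, but rate-limit
# the warning so the log is not flooded. All numbers are made up.
LAG_THRESHOLD_MS = 20
WARN_INTERVAL_MS = 1000

_last_warning_at = None
warnings = []

def dispatch(event_time_ms, now_ms):
    global _last_warning_at
    lag = now_ms - event_time_ms
    if lag > LAG_THRESHOLD_MS:
        if _last_warning_at is None or now_ms - _last_warning_at >= WARN_INTERVAL_MS:
            warnings.append(f"event processing lagging behind by {lag}ms, your system is too slow")
            _last_warning_at = now_ms

dispatch(0, 5)      # small lag: no warning
dispatch(0, 50)     # 50ms lag: warn
dispatch(100, 200)  # lagging again, but suppressed by the rate limit
print(warnings)
```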

The wheel tilt axis source was deprecated. No device ever had the required
udev properties set so we should stop pretending we support this.

Touchpads now support the "flat" acceleration profile. The default remains
unchanged and this needs to be selected in the configuration interface. The
"flat" profile applies a constant factor to movement deltas (1.0 for the
default speed setting).
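The "flat" profile described above can be sketched as one constant multiplier applied to every movement delta; a toy Python version follows (the only fact taken from the announcement is that the default speed setting maps to a factor of 1.0 — the rest is illustrative):

```python
def flat_profile(dx, dy, factor=1.0):
    # The flat profile applies a single constant factor to every movement
    # delta; with the default speed setting the factor is 1.0, i.e. the
    # deltas pass through unaccelerated.
    return dx * factor, dy * factor

print(flat_profile(3.0, -2.0))        # default speed: deltas unchanged
print(flat_profile(3.0, -2.0, 2.0))   # hypothetical faster speed setting
```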

Events from lid or tablet-mode switches that are known to libinput as being
unreliable are now filtered and no longer passed to the caller.
This prevents callers from receiving those known-bogus events and having to
replicate the same heuristics to identify unreliable devices that libinput
employs internally.

A new "libinput analyze" debugging tool is the entry tool for analysing
various aspects of devices. Right now the only tool is
"libinput analyze per-slot-delta" which can be used to detect pointer jumps
in a libiput record output. This tool used to live elsewhere, it was moved
to libinput so that reporters can easier run this tool, reducing the load on
the maintainers.

The tools have seen a few minor improvements, e.g.
- "libinput record touchpad.yml" does the right thing, no explicit --output
  argument required
- libinput measure touchpad-pressure has been revamped to be a bit more
  obvious
- libinput measure touchpad-size has been added (as replacement for the
  touchpad-edge-detector tool)
- libinput measure fuzz has been fixed to work (again and) slightly more
  reliably

The libinput test suite has been fixed to avoid interference with the
currently running session. Previously it was virtually impossible to work
while the test suite is running - multiple windows would pop up, the screen
would blank regularly, etc.

And of course a collection of fixes, quirks and new bugs.

As usual, see the git shortlog for details.

Diego Abad A (1):
      FIX: typo on building documentation

Peter Hutterer (2):
      test: semi-fix the switch_suspend_with_touchpad test
      libinput 1.16.0

git tag: 1.16.0
Read more

Also: Libinput 1.16 Released - Ready To Warn You If Your System Is Too Slow

18 Frameworks, Libraries, and Projects for Building Medical Applications

Open source is not just a license or a codebase left free on an online repository; it is a complete concept that comes with several advantages. The biggest advantage you get from open source, beyond the open code itself, is freedom: freedom to use or reshape it as you see fit within your project, commercial or otherwise, depending on the license. You are free from the headache of license conflicts and legal problems, and also from the restrictions and limitations that come with proprietary licenses. You are free from vendor lock-in schemes; furthermore, you own your data and are free to customize the software as your structure requires and your workflow demands.

The Community: An open-source project gains a powerful community as it gains users. Community members range from advanced users and end-users to developers and decision-makers. Many of them provide quality input drawn from their usage, customized use cases, workflows, and test runs, and they always have something to add: new features, UI modifications, different usability setups, and new workflows and tools. That is what makes the progress of open source different from non-free solutions. A good community also means good support: the community is a good resource for hiring advanced users, developers, and system experts, and it provides alternative options when hiring developers. Unlike non-free software, which is not blessed with such communities and where the options are limited, the rich open-source community provides rich sets of questions and answers contributed by users from all around the world.

Higher educational value for the in-house team: The open-source concept itself provides educational value; I owe most of what I know to open-source communities. Access to the source code, and open channels of communication with the core developers, is the best education any developer can get. Read more
