Planet KDE - http://planetKDE.org/

KTextEditor/Kate Bugs – Help Appreciated

11 hours 6 min ago

The bug report count for KTextEditor (which implements the editing part used in Kate/KWrite/KDevelop/Kile/…) and Kate itself has again climbed to over 200.

If you have time and are looking for an itch to scratch, any help tackling the currently open bugs would be highly appreciated.

The full list can be found with this bugs.kde.org query.

Easy things anybody with a bit of time could do:

  • check if the bug is still there with current master builds; if not, close it
  • check if it is a duplicate of a similar still-open bug; if yes, mark it as a duplicate

Besides that, patches for any of the existing issues are very welcome.

I think the best guide on how to set up a development environment is on our KDE Community Wiki. I myself use a kdesrc-build environment as described there, too.

Patches can be submitted for review via our KDE Phabricator.

If it is just a small change and you don’t want to spend time on Phabricator, attaching a git diff against current master to the bug is OK, too. It’s best to mark the bug with a [PATCH] prefix in the subject.

The team working on the code is small, so please be a bit patient when waiting for reactions. I hope we have improved our response time in the last months, but we are still lacking in that respect.

Why precompiled headers do (not) improve C++ compile times

Thursday 23rd of May 2019 09:22:40 PM

Would you like your C++ code to compile twice as fast (or more)?

Yeah, so would I. Who wouldn't. C++ is notorious for taking its sweet time to get compiled. I never really cared about PCHs when I worked on KDE; I think I might have tried them once for something and it didn't seem to do a thing. In 2012, while working on LibreOffice, I noticed its build system used to have PCH support, but it had been nuked, with the usual poor OOo/LO style of a commit message stating the obvious (what) without bothering to state the useful (why). For whatever reason that caught my attention: reportedly PCHs saved a lot of build time with MSVC, so I tried it, and they did. And my having brought the PCH support back from the graveyard means that e.g. the Calc module does not take 5:30m to build on a (very) powerful machine, but only 1:45m. That's only one third of the time.

In line with my previous experience, on Linux that did nothing. I made the build system also support PCH with GCC and Clang, because it was there and it was simple to support too, but there was no point. I don't think anybody has ever used it for real.

Then, about a year ago, I happened to be working on a relatively small C++ project that used a somewhat obscure build system called Premake, which I had never heard of before. While fixing something in it I noticed it also had PCH support, so guess what, I of course enabled it for the project. It again made the project build faster on Windows. And, on Linux, it did too. Color me surprised.

The idea must have stuck with me, because a couple of weeks back I got the idea to look at LO's PCH support again and see if it can be made to improve things. See, the point is, the PCH for that small project was rather small; it just included all the std stuff like <vector> and <string>, which seemed like it shouldn't make much of a difference, but it did. Those standard C++ headers aren't exactly small or simple. So I thought that maybe if LO on Linux used PCHs just for those, it would also make a difference. And it does. It's not breathtaking, but passing --enable-pch=system to configure reduces Calc module build time from 17:15m to 15:15m (that's a less powerful machine than the Windows one). Adding LO base headers containing stuff like OUString makes it go down to 13:44m, and adding more LO headers except for Calc's own leads to 12:50m. And adding even Calc's headers results in 15:15m again. WTH?
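As an aside, for a sense of what such a "system"-style PCH actually is: conceptually it's just one header that pulls in the expensive standard library includes, gets compiled once, and is then force-included into every translation unit. A minimal sketch (the file name is hypothetical; LibreOffice generates its real PCH headers, and the exact flags differ per compiler):

// pch_system.hxx - hypothetical contents of a "system"-style precompiled
// header: nothing but the expensive standard library includes.
#include <algorithm>
#include <map>
#include <memory>
#include <string>
#include <vector>

// Build it once, then force-include it into every translation unit,
// e.g. with GCC:
//   g++ -x c++-header pch_system.hxx        # produces pch_system.hxx.gch
//   g++ -include pch_system.hxx -c foo.cxx  # picks up the .gch if present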

It turns out there's some limit where PCHs stop making things faster and either don't change anything, or even make things worse. Trying with the Math module, --enable-pch=system and then --enable-pch=base again improve things in a similar fashion, and then --enable-pch=normal or --enable-pch=full just doesn't do a thing. Where is that 2/3 time reduction that --enable-pch=full achieves with MSVC?

Clang has recently received a new option, -ftime-trace, which shows in a really nice and simple way where the compiler spends the time (take that, -ftime-report). And since things related to performance simply do catch my attention, I ended up building the latest unstable Clang just to see what it does. And it does:
So, this is bcaslots.cxx, a smaller .cxx file in Calc. The first graph is without PCH, the second one is with --enable-pch=base, the third one is --enable-pch=full. This exactly confirms what I can see. Making the PCH bigger should result in something like the 4th graph, as it does with MSVC, but it results in things actually taking longer. And it can be seen why. The compiler does spend less and less time parsing the code, so the PCH works, but it spends more time in this 'PerformPendingInstantiations', which is handling templates. So, yeah, in case you've been living under a rock, templates make compiling C++ slow. Every C++ developer feeling really proud about themselves after having written a complicated template, raise your hand (... that includes me too, so let's put them back down, typing with one hand is not much fun). The bigger the PCH the more headers each C++ file ends up including, so it ends up having to cope with more templates. With the largest PCH, the compiler needs to spend only one second parsing code, but then it spends 3 seconds sorting out all kinds of templates, most of which the small source file does not need.

This one is column2.cxx, a larger .cxx file in Calc. Here, the biggest PCH mode leads to some improvement, because this file includes pretty much everything under the sun and then some more, so less parsing makes some savings, while the compiler has to deal with a load of templates again, PCH or not. And again, one second for parsing code, 4 seconds for templates. And, if you look carefully, 4 seconds more to generate code, most of it for those templates. And after the compiler spends all this time on templates in all the source files, it all gets passed to the linker, which will shrug and then throw most of it away (and that too will take a load of time, if you still happen to use the BFD linker instead of gold/lld with -gsplit-dwarf -Wl,--gdb-index). What a marvel.

Now, in case there seems to be something fishy about the graphs, the last graph indeed isn't from MSVC (after all, its reporting options are as "useful" as -ftime-report). It is from Clang. I still know how to do performance magic ...



Little Trouble in Big Data – Part 1

Thursday 23rd of May 2019 01:05:13 PM

A few months ago, we received a phone call from a bioinformatics group at a European university. The problem they were having appeared very simple. They wanted to know how to use mmap() to be able to load a large data set into RAM at once. OK, I thought, no problem, I can handle that one. It turns out this has grown into a complex and interesting exercise in profiling and threading.

The background is that they are performing Markov-Chain Monte Carlo simulations by sampling at random from data sets containing SNP (pronounced “snips”) genetic markers for a selection of people. It boils down to a large 2D matrix of floats where each column corresponds to an SNP and each row to a person. They provided some small and medium sized data sets for me to test with, but their full data set consists of 500,000 people with 38 million SNP genetic markers!

The analysis involves selecting a column (SNP) at random in the data set, performing some computations on the data for all of the individuals, and collecting some summary statistics. Do that for all of the columns in the data set, and then repeat for a large number of iterations. This allows you to approximate the underlying true distribution from the discrete data that has been collected.

That’s the 10,000 ft view of the problem, so what was actually involved? Well we undertook a bit of an adventure and learned some interesting stuff along the way, hence this blog series.

The stages we went through were:

  1. Preprocessing
  2. Loading the Data
  3. Fine-grained Threading
  4. Preprocessing Reprise
  5. Coarse Threading

In this blog, I’ll detail stages 1 and 2. The rest of the process will be revealed as the blog series unfolds, and I’ll include a final summary at the end.

1. Preprocessing

The first thing we noticed when looking at the code they already had is that quite some work was being done when reading in the data for each column. They compute some summary statistics on the column, then scale and bias all the data points in that column such that the mean is zero. Bearing in mind that each column will be processed many times (typically 10k – 1 million), it is wasteful to repeat this every time the column is used.

So, reusing some general advice from 3D graphics, we moved this work further up the pipeline into a preprocessing step. The SNP data is actually stored in a compressed form, which quantizes 4 SNP values into a few bytes that we then decompress when loading. So the preprocessing step decompresses the SNP data, calculates the summary statistics, adjusts the data, and then writes the floats out to disk in the form of a ppbed file (preprocessed bed, where bed is a standard format used for this kind of data).
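The per-column adjustment itself is simple; here is a minimal sketch of the idea with Eigen (the post doesn't show the exact code, and the real version also applies scaling, so the helper name and details here are assumptions):

#include <Eigen/Dense>

// Hypothetical helper: center one decompressed SNP column so its mean is
// zero. Doing this once at preprocessing time avoids repeating it on every
// Monte Carlo iteration that touches the column.
void centerColumn(Eigen::VectorXf &column)
{
    column.array() -= column.mean();
}

int main()
{
    Eigen::VectorXf snp(4);
    snp << 1.0f, 2.0f, 3.0f, 4.0f;
    centerColumn(snp); // snp is now (-1.5, -0.5, 0.5, 1.5)
}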

The upside is that we avoid all of this work on every iteration of the Monte Carlo simulation at runtime. The downside is that 1 float per SNP per person adds up to a hell of a lot of data for the larger data sets! In fact, for the full data set it’s just shy of 69 TB of floating point data! But to get things going, we were just worrying about smaller subsets. We will return to this later.
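For concreteness, the arithmetic behind that figure, at one 4-byte float per SNP per person:

$$5 \times 10^{5}\ \text{people} \times 3.8 \times 10^{7}\ \text{SNPs} \times 4\ \text{bytes} = 7.6 \times 10^{13}\ \text{bytes} \approx 69\ \text{TiB}$$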

2. Loading the data

Even on moderately sized data sets, loading the entirety of the data set into physical RAM at once is a no-go, as it will soon exhaust even the beefiest of machines. They have a 40-core, many-many-GB-of-RAM machine which was still being exhausted. This is where the original enquiry was aimed – how to use mmap(). It turns out it's pretty easy, as you'd expect. It's just a case of setting the correct flags so that the kernel doesn't actually take a copy of the data in the file, namely PROT_READ and MAP_SHARED:

void Data::mapPreprocessBedFile(const string &preprocessedBedFile)
{
    // Calculate the expected file sizes - cast to size_t so that we don't
    // overflow the unsigned ints that we would otherwise get as
    // intermediate variables!
    const size_t ppBedSize = size_t(numInds) * size_t(numIncdSnps) * sizeof(float);

    // Open and mmap the preprocessed bed file
    ppBedFd = open(preprocessedBedFile.c_str(), O_RDONLY);
    if (ppBedFd == -1)
        throw("Error: Failed to open preprocessed bed file [" + preprocessedBedFile + "]");

    ppBedMap = reinterpret_cast<float *>(mmap(nullptr, ppBedSize, PROT_READ, MAP_SHARED, ppBedFd, 0));
    if (ppBedMap == MAP_FAILED)
        throw("Error: Failed to mmap preprocessed bed file");

    ...
}

When dealing with such large amounts of data, be careful of overflows in temporaries! We had a bug where ppBedSize was overflowing and later causing a segfault.
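That pitfall is easy to reproduce in isolation. In the following sketch (with made-up sizes matching the full data set), the multiplication happens in unsigned int and wraps around before the result is widened; the casts on the operands, as in the code above, are what make it correct:

#include <cstddef>
#include <cstdio>

int main()
{
    unsigned int numInds = 500000;   // people
    unsigned int numSnps = 38000000; // markers

    // Wrong: numInds * numSnps is evaluated in unsigned int and wraps around
    size_t bad = numInds * numSnps * sizeof(float);

    // Right: widen the operands before multiplying
    size_t good = size_t(numInds) * size_t(numSnps) * sizeof(float);

    printf("bad  = %zu\ngood = %zu\n", bad, good);
}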

So, at this point we have a float *ppBed pointing at the start of the huge 2D matrix of floats. That's all well and good, but not very convenient to work with. The code base already made use of Eigen for vector and matrix operations, so it would be nice if we could interface with the underlying data using that.

It turns out we can (otherwise I wouldn't have mentioned it). Eigen provides the VectorXf and MatrixXf types for vectors and matrices, but these own the underlying data. Luckily, Eigen also provides a wrapper around these in the form of Map. Given our pointer to the raw float data which is mmap()'d, we can use the placement new operator to wrap it up for Eigen like so:

class Data
{
public:
    Data();

    // mmap-related data; note that the constructor must initialize mappedZ
    // with a dummy mapping (e.g. Map<MatrixXf>(nullptr, 1, 1)), since Map
    // has no default constructor - the placement new below re-seats it
    int ppBedFd;
    float *ppBedMap;
    Map<MatrixXf> mappedZ;
};

void Data::mapPreprocessBedFile(const string &preprocessedBedFile)
{
    ...
    ppBedMap = reinterpret_cast<float *>(mmap(nullptr, ppBedSize, PROT_READ, MAP_SHARED, ppBedFd, 0));
    if (ppBedMap == MAP_FAILED)
        throw("Error: Failed to mmap preprocessed bed file");

    new (&mappedZ) Map<MatrixXf>(ppBedMap, numRows, numCols);
}

At this point we can now do operations on the mappedZ matrix and they will operate on the huge data file which will be paged in by the kernel as needed. We never need to write back to this data so we didn’t need the PROT_WRITE flag for mmap.
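To illustrate what that buys us, here is a self-contained sketch of the same pattern over an ordinary in-memory buffer (a stand-in for the mmap()'d pointer; with the real mapping, only the pages a column touches get faulted in):

#include <Eigen/Dense>
#include <iostream>
#include <vector>

using Eigen::Map;
using Eigen::MatrixXf;

int main()
{
    // Stand-in for the mmap()'d float buffer: 4 people x 3 SNPs, column-major
    std::vector<float> buf = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12};
    Map<MatrixXf> mappedZ(buf.data(), 4, 3);

    // Ordinary Eigen expressions now operate directly on the raw buffer
    std::cout << "mean of column 1: " << mappedZ.col(1).mean() << std::endl; // 6.5
}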

Yay! The original problem is solved, and we've saved a bunch of work at runtime by preprocessing. But there's a catch: it's still slow. See the next blog in the series for how we solved this.

The post Little Trouble in Big Data – Part 1 appeared first on KDAB.

Elisa 0.4.0 Release

Wednesday 22nd of May 2019 08:28:12 PM

Elisa is a music player developed by the KDE community that strives to be simple and nice to use. We also recognize that we need a flexible product to account for the different workflows and use-cases of our users.

We focus on a very good integration with the Plasma desktop of the KDE community without compromising the support for other platforms (other Linux desktop environments, Windows and Android).

We are creating a reliable product that is a joy to use and respects our users' privacy. As such, we will prefer to support online services where users are in control of their data.

I am happy to announce the release of version 0.4.0 of the Elisa music player.

The new features are explained in the following posts: New features in Elisa, New Features in Elisa: part 2, and Elisa 0.4 Beta Release and More New Features.

There have been a couple more changes not yet covered.

Improved Grid View Elements

Nate Graham has reworked the grid elements (especially visible with the albums view).

I must confess that I was a bit uneasy with this change (this part had been mostly unchanged since the early versions). I am now very happy about it.

Before and after screenshots.

Getting Involved

I would like to thank everyone who contributed to the development of Elisa, including code contributions, testing, and bug reporting and triaging. Without all of you, I would have stopped working on this project.

New features and fixes are already being worked on. If you enjoy using Elisa, please consider becoming a contributor yourself. We are happy to get any kind of contributions!

We have some tasks that would be perfect junior jobs. They are a perfect way to start contributing to Elisa. There are more reported in bugs.kde.org that are not yet listed here.

The Flathub Elisa package provides an easy way to test this new release.

The Elisa source code tarball is available here. There is no Windows installer: there is currently a blocking problem with it (no icons) that is being investigated. I hope to be able to provide installers for later bugfix versions.

The phone/tablet port project could easily use some help to build an optimized interface on top of Kirigami. It remains to be seen how to handle this in relation to the current desktop UI.

 

KDSoap 1.8.0 released

Wednesday 22nd of May 2019 09:32:39 AM

KDAB has released a new version of KDSoap. This is version 1.8.0, which comes more than one year after the last release (1.7.0).

KDSoap is a tool for creating client applications for web services without the need for any further component such as a dedicated web server.

KDSoap lets you interact with applications which have APIs that can be exported as SOAP objects. The web service then provides a machine-accessible interface to its functionality via HTTP. Find out more...

Version 1.8.0 has a large number of improvements and fixes:

General
  • Fixed internally-created faults lacking an XML element name (so e.g. toXml() would abort)
  • KDSoapMessage::messageAddressingProperties() is now correctly filled in when receiving a message with WS-Addressing in the header
Client-side
  • Added support for timing out requests (default 30 minutes, configurable with KDSoapClientInterface::setTimeout(); see the sketch after this changelog)
  • Added support for soap 1.2 faults in faultAsString()
  • Improved detection of soap 1.2 faults in HTTP response
  • Stricter namespace check for Fault elements being received
  • Report client-generated faults as SOAP 1.2 if selected
  • Fixed error code when authentication failed
  • Autodeletion of jobs is now configurable (github pull #125)
  • Added error details in faultAsString() – and the generated lastError() – coming from the SOAP 1.2 detail element.
  • Fixed memory leak in KDSoapClientInterface::callNoReply
  • Added support for WS-UsernameToken, see KDSoapAuthentication
  • Extended KDSOAP_DEBUG functionality (e.g. “KDSOAP_DEBUG=http,reformat” will now print http-headers and pretty-print the xml)
  • Added support for specifying requestHeaders as part of KDSoapJob via KDSoapJob::setRequestHeaders()
  • Renamed the missing KDSoapJob::returnHeaders() to KDSoapJob::replyHeaders(), and provide an implementation
  • Made KDSoapClientInterface::soapVersion() const
  • Added lastFaultCode() for error handling after sync calls. Same as lastErrorCode() but it returns a QString rather than an int.
  • Added a conversion operator from KDDateTime to QVariant to avoid implicit conversion to the base QDateTime (github issue #123).
Server-side
  • New method KDSoapServerObjectInterface::additionalHttpResponseHeaderItems to let server objects return additional http headers. This can be used to implement support for CORS, using KDSoapServerCustomVerbRequestInterface to implement OPTIONS response, with “Access-Control-Allow-Origin” in the headers of the response (github issue #117).
  • Stopped generation of two job classes with the same name, when two bindings have the same operation name. Prefixed one of them with the binding name (github issue #139 part 1)
  • Prepended this-> in method calls to avoid a compilation error when the variable and the method have the same name (github issue #139 part 2)
WSDL parser / code generator changes, applying to both client and server side
  • Source incompatible change: all deserialize() functions now require a KDSoapValue instead of a QVariant. If you use a deserialize(QVariant) function, you need to port your code to use KDSoapValue::setValue(QVariant) before deserialize()
  • Source incompatible change: all serialize() functions now return a KDSoapValue instead of a QVariant. If you use a QVariant serialize() function, you need to port your code to use QVariant KDSoapValue::value() after serialize()
  • Source incompatible change: xs:QName is now represented by KDQName instead of QString, which allows the namespace to be extracted. The old behaviour is available via KDQName::qname().
  • Fixed double-handling of empty elements
  • Fixed fault elements being generated in the wrong namespace, must be SOAP-ENV:Fault (github issue #81).
  • Added import-path argument for setting the local path to get (otherwise downloaded) files from.
  • Added -help-on-missing option to kdwsdl2cpp to display extra help on missing types.
  • Added C++17 std::optional as possible return value for optional elements.
  • Added -both to create both header(.h) and implementation(.cpp) files in one run
  • Added -namespaceMapping @mapping.txt to import url=code mappings, affects C++ class name generation
  • Added functionality to prevent downloading the same WSDL/XSD file twice in one run
  • Added “hasValueFor{MemberName}()” accessor function, for optional elements
  • Generated services now include soapVersion() and endpoint() accessors to match the setSoapVersion(…) and setEndpoint(…) mutators
  • Added support for generating messages for WSDL files without services or bindings
  • Fixed erroneous QT_BEGIN_NAMESPACE around forward-declarations like Q17__DialogType.
  • KDSoapValue now stores the namespace declarations during parsing of a message and writes namespace declarations during sending of a message
  • Avoid serialize crash with required polymorphic types, if the required variable wasn’t actually provided
  • Fixed generated code for restriction to base class (it wouldn’t compile)
  • Prepended “#undef daylight” and “#undef timezone” to all generated files, to fix compilation errors in WSDL files that use those names, due to nasty Windows macros
  • Added generation for default attribute values.
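As a taste of the new client-side timeout support mentioned above, here is a minimal sketch (the endpoint and message namespace are made up, and the millisecond unit of setTimeout() is an assumption here, so check the KDSoap documentation; the include path may also vary by install):

#include <KDSoapClient/KDSoapClientInterface.h>

int main()
{
    // Hypothetical endpoint and message namespace, purely for illustration
    KDSoapClientInterface client(QStringLiteral("https://example.com/soap"),
                                 QStringLiteral("urn:example"));

    // New in 1.8.0: requests time out after 30 minutes by default;
    // the argument is assumed to be in milliseconds
    client.setTimeout(30 * 1000); // give up after 30 seconds instead
    return 0;
}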

Get KDSoap…

KDSoap on github…

The post KDSoap 1.8.0 released appeared first on KDAB.

[GSoC – 1] Achieving consistency between SDDM and Plasma

Tuesday 21st of May 2019 12:43:03 AM

I’m very excited to start off the Google Summer of Code blogging experience regarding the project I’m doing with my KDE mentors David Edmundson and Nate Graham. What we’ll be trying to achieve this summer is to have SDDM be more in sync with the Plasma desktop. What does that mean? The essence of the problem…

Linux perf and KCachegrind

Monday 20th of May 2019 02:18:58 PM

If you occasionally do performance profiling, as I do, you probably know Valgrind's Callgrind and the related UI, KCachegrind. While Callgrind is a pretty powerful tool, running it takes quite a while (not exactly fun to do with something as big as e.g. LibreOffice).

Recently I finally gave Linux perf a try. I'm not quite sure why I didn't use it before; IIRC, when I tried it long ago it was probably difficult to set up or something. Using perf record has very little overhead, but I wasn't exactly thrilled by perf report. I mean, it's a text UI, and it just gives a list of functions, so if I want to see anything close to a call graph, I have to manually expand one function, expand another function inside it, expand yet another function inside that, and so on. Not that it wouldn't work, but compare that to just looking at what KCachegrind shows and seeing ...

While figuring out how to use perf, I watched a talk from Milian Wolff and noticed a mention of a Callgrind script on one slide. Of course I had to try it. It was a bit slow, but hey, I could finally look at perf results without feeling like it was an effort. Well, and then I improved the part of the script that was slow, so I guess I've just put the effort elsewhere :).

And I thought this little script might be useful for others. After mailing Milian, it turned out he had just created the script as a proof of concept and wasn't interested in it anymore, instead developing Hotspot as a UI for perf. Fair enough, but I think I still prefer KCachegrind; I'm used to it, and I don't have to switch UIs when switching between perf and callgrind. So, with his agreement, I've submitted the script to KCachegrind. If you would find it useful, just download this and do something like:

$ perf record -g ...
$ perf script -s perf2calltree.py > perf.out
$ kcachegrind perf.out



Help Test Plasma 5.16 Beta

Monday 20th of May 2019 01:43:40 PM

Plasma 5.16 beta was released last week and there’s now a further couple of weeks to test it to find and fix all the beasties. To help out download the Neon Testing image and install it in a virtual machine or on your raw hardware. You probably want to do a full-upgrade to make sure you have the latest builds. Then try out the new notifications system, or the new animated wallpaper settings or anything else mentioned in the release announcement. When you find a problem report it on bugs.kde.org and/or chat on the Plasma Matrix room. Thanks for your help!

J On The Beach: a great event

Monday 20th of May 2019 12:31:47 PM

I have been at many software events and have helped or have been part of the organization in a few of them. Based on that experience and the fact that I have participated in the last two editions, let me tell you that J On The Beach is a great event.

The main factors that lead me to such a conclusion are:

  • It is all about content. I have seen many events that, over time, lose their focus on the quality of the content. It is a hard focus to keep, especially as you grow. @JOTB19 had great content: well-delivered talks and workshops, performed by bright people with something to say that was relevant to the audience.
    • I think the event has not reached its limit yet, especially when it comes to workshops.
    • Designing the content structure to target the right audience is as important as bringing speakers with great things to say. As any event matures, tough decisions will need to be taken in order to find its own space and identity among outstanding competitors.
      • When it comes to themes, will J On The Beach keep targeting several topics, or will it narrow them to one or two? Will they always be the same or will they rotate?
      • When it comes to size, will it grow or will it remain at the current numbers? Will the price increase or will it be kept in the current range?
      • When it comes to contents, will the event focus more energy and time on the “hands on” learning sessions, or will workshops be kept as relevant compared to the talks as they are today? Will the talks’ length be reduced? Will we see lightning talks?
  • J On The Beach was well organised. A good organization is not one that never runs into trouble, but one that handles it smoothly so there is little or no perceived impact. This event has a diligent team behind it, judging by the little to no impact I perceived.
  • Support from local companies. As Málaga matures as a software hub, more and more companies arrive in this area expecting to grow in size, so the need to attract local talent grows in parallel.
    • Some of these foreign companies understand how important it is to show up at local events to become known by as many local developers as possible. J On The Beach has captured the attention of several of these companies.
    • The organizers have understood this reality and support these companies in using the event to openly recruit people. This symbiotic relationship is a very productive one, from what I have witnessed.
    • It is a hard relationship to sustain though, especially if the event does not grow in size, so in the future it will probably need to incorporate additional common interests to remain productive for both sides.
  • Global by default. Most events in Spain have traditionally been designed for Spaniards first, turning into more global events as they grow. J On The Beach has been global by default, by design, since day 1. It is harder to succeed that way, but beyond the activation point it turns out to be easier to become sustainable. The organizers took the risk and have reached that point already, which gives the event a bright future, in my opinion.
    • The fact that the event is able to attract developers from many countries, especially from eastern European ones, makes J On The Beach very attractive to foreign companies already located in Málaga, from the recruitment perspective. Málaga is a great place not just to work in English but also to live in English. There are well-established communities from many different countries in the metropolitan area, due to how strong the tourism industry is here. These factors, together with others like logistics, affordable living costs, a good public health care system, sunny weather, availability of international and multilingual schools, etc., reduce the adaptation effort when relocating, especially for developers’ families. J On The Beach brings tasty fish to the pond.

Let me name a couple of points that can make the event even better:

  • It is very hard to find a venue that fits an event during its consolidation phase and evolves with it. This edition’s venue represents a significant improvement over last year’s. There is room for improvement though.
    • It would be ideal to find a place in Málaga itself, closer to where the companies are located and to places to hang out after the event, which at the same time keeps the good things the current venue/location provides, which are plenty.
    • Finding the right venue is tough. There are decision-making factors that participants do not usually see but are essential, like costs, how supportive the venue staff and owners are, accommodation availability in the surrounding area, availability on the selected dates, etc. It is one of the most difficult points to get right, in my experience.
  • Great events deserve great keynote speakers. They are hard to get, but they often make the difference between great events and must-attend ones.
    • Great keynote speakers do not necessarily mean popular ones. I already see celebrities at bigger and more expensive events. I would love to see in Málaga the old-time computer science cowboys: those first-class engineers who did something relevant some time ago and have witnessed the evolution of our industry and of their own inventions. They are able to bring a perspective that very few can provide, extremely valuable in these fast-changing times. Those gems are harder to see at big, popular events and might be a good target for a smaller, high-quality event. I think it would be a great sign of success if that kind of professional came to talk at J On The Beach.

I am very glad there is such a great event close to where I live. J On The Beach is worthwhile not just for local developers but also for those from abroad. Every year I attend several events in other countries with bigger names but less value than J On The Beach. It will definitely be on my 2020 agenda. Thanks to every person involved in making it possible.

Pictures taken from the J On The Beach website.

KDE Craft now delivers with vlc and libvlc on macOS

Sunday 19th of May 2019 06:56:16 PM

Without VLC and libvlc in Craft, phonon-vlc could not be built successfully on macOS, which in turn made KDE Connect fail to build in Craft.

As a small step of my GSoC project, I managed to build KDE Connect by removing the phonon-vlc dependency. But that is not a good solution; I should try to fix the phonon-vlc build on macOS instead. So during the community bonding period, to get to know the community and some of its important tools better, I tried to fix phonon-vlc.

Fixing phonon-vlc

At first, I installed libVLC from MacPorts. All header files and libraries were installed into the system paths, so theoretically building phonon-vlc should not have been a problem. But an error occurred:

We can see that the compilation was fine; the error came only at the end, during linking. The error message tells us there is no QtDBus library. To fix it, I made a small patch to add QtDBus manually in the CMakeLists file:

diff --git a/src/CMakeLists.txt b/src/CMakeLists.txt
index 47427b2..1cdb250 100644
--- a/src/CMakeLists.txt
+++ b/src/CMakeLists.txt
@@ -81,7 +81,7 @@ if(APPLE)
endif(APPLE)

automoc4_add_library(phonon_vlc MODULE ${phonon_vlc_SRCS})
-qt5_use_modules(phonon_vlc Core Widgets)
+qt5_use_modules(phonon_vlc Core Widgets DBus)

set_target_properties(phonon_vlc PROPERTIES
PREFIX ""

And it works well!

One small puzzle: Hannah said she didn’t get an error during linking. It may be related to the Qt version. If you have any idea, feel free to contact me.

My Qt version is 5.12.3.

Fixing VLC

To fix VLC, I tried to pack the VLC binary just like the one on Windows.

But unfortunately, in the .app package, the header files are incomplete. Compared to the Windows version, the entire plugins folder is missing.

So I made a patch for all those files. But the patch is huge (25,000 lines!), so it is not a good idea to merge it into the master branch.

Thanks to Hannah, there is now a libs/vlc blueprint in the master branch, so in Craft, feel free to install it by running craft libs/vlc.

Troubleshooting

If you cannot build libs/vlc, just like me, you can also choose the VLC binary version with the header files patch.

The headers patch for the binary is too big, and adding it to the master branch is not a good idea, so I published it in my own repository:
https://github.com/Inokinoki/craft-blueprints-inoki

To use it, run craft --add-blueprint-repository https://github.com/inokinoki/craft-blueprints-inoki.git and the blueprint(s) will be added into your local blueprint directory.

Then, craft binary/vlc will fetch the VLC binary and install the header files and libraries into the Craft include and lib paths. Finally, you can build whatever you want that depends on libvlc.

Conclusion

Up to now, KDE Connect has been using QtMultimedia rather than phonon and phonon-vlc to play sounds. But this work could also be useful for other applications or libraries that depend on phonon, phonon-vlc or vlc. This small step may help them build successfully on macOS.

I hope this can help someone!

About me

Sunday 19th of May 2019 06:56:16 PM

Hi, everyone!

I’m Weixuan XIAO, with the nickname Inoki; sometimes I use Inokinoki to avoid duplicate usernames.

I’m glad to have been selected for Google Summer of Code 2019 to work with the KDE Community on making KDE Connect work on macOS. And I’m willing to be a long-term contributor to the KDE Community.

As a Chinese student, I’m studying in France for my engineering degree. At the same time, I’m awaiting my bachelor’s degree from Shanghai University.

I major in Real-Time Systems and Embedded Engineering. With strong interests in operating systems and computer architecture, I like playing with small devices like the Arduino and the Raspberry Pi, and with different systems like macOS and Linux (especially Manjaro with KDE; they make the best pair).

I’m crazy about Japanese culture, for example its animation and games. Even my nickname is actually the pronunciation of my real name in Japanese. So if all of this is the choice of Steins;Gate, I’ll gladly accept it :)

I speak Chinese, French, English, and a little Japanese. But I realize that my English is awful, so if I make any mistakes, please tell me. It will improve my English, and I will appreciate it.

I hope we can have a good summer in 2019. And write some good code :)

Okular: another improvement to annotation

Sunday 19th of May 2019 01:40:53 PM

Continuing with the addition of the line-ending style for the Straight Line annotation tool, I have added the ability to select the line start style as well. The required code changes were committed today.

Line annotation with circled start and closed arrow ending.

Currently it is supported only for PDF documents (and Poppler version ≥ 0.72), but that will change soon, thanks to another change by Tobias Deiminger, currently under review, that extends the functionality to other document formats supported by Okular.

libqaccessibilityclient 0.4.1

Sunday 19th of May 2019 10:31:44 AM
libqaccessibilityclient 0.4.1 is out now.

Download: https://download.kde.org/stable/libqaccessibilityclient/

Changes report: http://embra.edinburghlinux.co.uk/~jr/tmp/pkgdiff_reports/libqaccessibilityclient/0.4.0_to_0.4.1/changes_report.html

Signed by Jonathan Riddell: https://sks-keyservers.net/pks/lookup?op=vindex&search=0xEC94D18F7F05997E
  • version 0.4.1
  • Use only undeprecated KDEInstallDirs variables
  • KDECMakeSettings already cares for CMAKE_AUTOMOC & BUILD_TESTING
  • Fix use in cross compilation
  • Q_ENUMS -> Q_ENUM
  • more complete release instructions

Polymorphism and Implicit Sharing

Sunday 19th of May 2019 06:29:28 AM

Recently I have been researching possibilities for making members of KoShape copy-on-write. At first glance, it seems enough to declare the d-pointers as some subclass of QSharedDataPointer (see Qt’s implicit sharing) and then replace pointers with instances. However, there remain a number of problems to be solved, one of them being polymorphism.

polymorphism and value semantics

In the definition of KoShapePrivate class, the member fill is stored as a QSharedPointer:

QSharedPointer<KoShapeBackground> fill;

There are a number of subclasses of KoShapeBackground, including KoColorBackground, KoGradientBackground, to name just a few. We cannot store an instance of KoShapeBackground directly since we want polymorphism. But, well, making KoShapeBackground copy-on-write seems to have nothing to do with whether we store it as a pointer or instance. So let’s just put it here – I will come back to this question at the end of this post.

d-pointers and QSharedData

The KoShapeBackground hierarchy (similar to the KoShape one) uses derived d-pointers for storing private data. To make things easier, I will here use a small example to elaborate on its use.

derived d-pointer
class AbstractPrivate
{
public:
    AbstractPrivate() : var(0) {}
    virtual ~AbstractPrivate() = default;

    int var;
};

class Abstract
{
public:
    // it is not yet copy-constructable; we will come back to this later
    // Abstract(const Abstract &other) = default;
    ~Abstract() = default;
protected:
    explicit Abstract(AbstractPrivate &dd) : d_ptr(&dd) {}
public:
    virtual void foo() const = 0;
    virtual void modifyVar() = 0;
protected:
    QScopedPointer<AbstractPrivate> d_ptr;
private:
    Q_DECLARE_PRIVATE(Abstract)
};

class DerivedPrivate : public AbstractPrivate
{
public:
    DerivedPrivate() : AbstractPrivate(), bar(0) {}
    virtual ~DerivedPrivate() = default;

    int bar;
};

class Derived : public Abstract
{
public:
    Derived() : Abstract(*(new DerivedPrivate)) {}
    // it is not yet copy-constructable
    // Derived(const Derived &other) = default;
    ~Derived() = default;
protected:
    explicit Derived(AbstractPrivate &dd) : Abstract(dd) {}
public:
    void foo() const override { Q_D(const Derived); cout << "foo " << d->var << " " << d->bar << endl; }
    void modifyVar() override { Q_D(Derived); d->var++; d->bar++; }
private:
    Q_DECLARE_PRIVATE(Derived)
};

The main goal of making DerivedPrivate a subclass of AbstractPrivate is to avoid multiple d-pointers in the structure. Note that there are constructors taking a reference to the private data object. These make it possible for a Derived object to use the same d-pointer as its Abstract parent. The Q_D() macro is used to convert the d_ptr, which is a pointer to AbstractPrivate, to another pointer, named d, of one of its descendant types; here, it is a DerivedPrivate. It is used together with the Q_DECLARE_PRIVATE() macro in the class definition and has a rather complicated implementation in the Qt headers. But for simplicity, it does not hurt for now to understand it as the following:

#define Q_D(Class) Class##Private *const d = reinterpret_cast<Class##Private *>(d_ptr.data())

where Class##Private means simply to append string Private to (the macro argument) Class.

Now let’s test it by creating a pointer to Abstract and give it a Derived object:

int main()
{
    QScopedPointer<Abstract> ins(new Derived());
    ins->foo();
    ins->modifyVar();
    ins->foo();
}

Output:

foo 0 0
foo 1 1

Looks pretty viable – everything’s working well! – What if we use Qt’s implicit sharing? Just make AbstractPrivate a subclass of QSharedData and replace QScopedPointer with QSharedDataPointer.

making d-pointer QSharedDataPointer

In the last section, we commented out the copy constructors since QScopedPointer is not copy-constructable, but QSharedDataPointer is, so we add them back:

class AbstractPrivate : public QSharedData
{
public:
    AbstractPrivate() : var(0) {}
    virtual ~AbstractPrivate() = default;

    int var;
};

class Abstract
{
public:
    Abstract(const Abstract &other) = default;
    ~Abstract() = default;
protected:
    explicit Abstract(AbstractPrivate &dd) : d_ptr(&dd) {}
public:
    virtual void foo() const = 0;
    virtual void modifyVar() = 0;
protected:
    QSharedDataPointer<AbstractPrivate> d_ptr;
private:
    Q_DECLARE_PRIVATE(Abstract)
};

class DerivedPrivate : public AbstractPrivate
{
public:
    DerivedPrivate() : AbstractPrivate(), bar(0) {}
    virtual ~DerivedPrivate() = default;

    int bar;
};

class Derived : public Abstract
{
public:
    Derived() : Abstract(*(new DerivedPrivate)) {}
    Derived(const Derived &other) = default;
    ~Derived() = default;
protected:
    explicit Derived(AbstractPrivate &dd) : Abstract(dd) {}
public:
    void foo() const override { Q_D(const Derived); cout << "foo " << d->var << " " << d->bar << endl; }
    void modifyVar() override { Q_D(Derived); d->var++; d->bar++; }
private:
    Q_DECLARE_PRIVATE(Derived)
};

And testing the copy-on-write mechanism:

int main()
{
    QScopedPointer<Derived> ins(new Derived());
    QScopedPointer<Derived> ins2(new Derived(*ins));
    ins->foo();
    ins->modifyVar();
    ins->foo();
    ins2->foo();
}

But, eh, it’s a compile-time error.

error: reinterpret_cast from type 'const AbstractPrivate*' to type 'AbstractPrivate*' casts away qualifiers
    Q_DECLARE_PRIVATE(Abstract)

Q_D, revisited

So, where does the const removal come from? In qglobal.h, the code related to Q_D is as follows:

template <typename T> inline T *qGetPtrHelper(T *ptr) { return ptr; }
template <typename Ptr> inline auto qGetPtrHelper(const Ptr &ptr) -> decltype(ptr.operator->()) { return ptr.operator->(); }

// The body must be a statement:
#define Q_CAST_IGNORE_ALIGN(body) QT_WARNING_PUSH QT_WARNING_DISABLE_GCC("-Wcast-align") body QT_WARNING_POP
#define Q_DECLARE_PRIVATE(Class) \
inline Class##Private* d_func() \
{ Q_CAST_IGNORE_ALIGN(return reinterpret_cast<Class##Private *>(qGetPtrHelper(d_ptr));) } \
inline const Class##Private* d_func() const \
{ Q_CAST_IGNORE_ALIGN(return reinterpret_cast<const Class##Private *>(qGetPtrHelper(d_ptr));) } \
friend class Class##Private;

#define Q_D(Class) Class##Private * const d = d_func()

It turns out that Q_D will call d_func() which then calls an overload of qGetPtrHelper() that takes const Ptr &ptr. What does ptr.operator->() return? What is the difference between QScopedPointer and QSharedDataPointer here?

QScopedPointer‘s operator->() is a const method that returns a non-const pointer to T; however, QSharedDataPointer has two operator->()s, one being const T* operator->() const, the other T* operator->(), and they have quite different behaviours – the non-const variant calls detach() (where copy-on-write is implemented), but the other one does not.

qGetPtrHelper() here can only take d_ptr as a const QSharedDataPointer, not a non-const one; so, no matter which d_func() we are calling, we can only get a const AbstractPrivate *. That is just the problem here.
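The two overloads are easy to observe in isolation with a flat QSharedData class (a self-contained sketch, independent of the hierarchy above):

#include <QSharedData> // also provides QSharedDataPointer
#include <QDebug>

struct Priv : public QSharedData
{
    int var = 0;
};

int main()
{
    QSharedDataPointer<Priv> a(new Priv);
    QSharedDataPointer<Priv> b = a;               // shared, no copy yet
    qDebug() << (a.constData() == b.constData()); // true - same object

    b->var = 1; // non-const operator->() calls detach(): b now owns a copy

    qDebug() << (a.constData() == b.constData());          // false - detached
    qDebug() << a.constData()->var << b.constData()->var;  // 0 1
}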

To resolve this problem, let’s replace the Q_D macros with the ones we define ourselves:

#define CONST_SHARED_D(Class) const Class##Private *const d = reinterpret_cast<const Class##Private *>(d_ptr.constData())
#define SHARED_D(Class) Class##Private *const d = reinterpret_cast<Class##Private *>(d_ptr.data())

We will then use SHARED_D(Class) in place of Q_D(Class) and CONST_SHARED_D(Class) for Q_D(const Class). Since the const and non-const variants really behave differently, this should help differentiate the two uses. Also, delete Q_DECLARE_PRIVATE since we do not need it any more:

class AbstractPrivate : public QSharedData
{
public:
    AbstractPrivate() : var(0) {}
    virtual ~AbstractPrivate() = default;

    int var;
};

class Abstract
{
public:
    Abstract(const Abstract &other) = default;
    ~Abstract() = default;
protected:
    explicit Abstract(AbstractPrivate &dd) : d_ptr(&dd) {}
public:
    virtual void foo() const = 0;
    virtual void modifyVar() = 0;
protected:
    QSharedDataPointer<AbstractPrivate> d_ptr;
};

class DerivedPrivate : public AbstractPrivate
{
public:
    DerivedPrivate() : AbstractPrivate(), bar(0) {}
    virtual ~DerivedPrivate() = default;

    int bar;
};

class Derived : public Abstract
{
public:
    Derived() : Abstract(*(new DerivedPrivate)) {}
    Derived(const Derived &other) = default;
    ~Derived() = default;
protected:
    explicit Derived(AbstractPrivate &dd) : Abstract(dd) {}
public:
    void foo() const override { CONST_SHARED_D(Derived); cout << "foo " << d->var << " " << d->bar << endl; }
    void modifyVar() override { SHARED_D(Derived); d->var++; d->bar++; }
};

With the same main() code, what’s the result?

foo 0 0
foo 1 16606417
foo 0 0

… big whoops, what is that random thing there? Well, if we use dynamic_cast in place of reinterpret_cast, the program simply crashes after ins->modifyVar();, indicating that ins‘s d_ptr.data() is not at all a DerivedPrivate.

virtual clones

The detach() method of QSharedDataPointer will by default create an instance of AbstractPrivate regardless of what the instance really is. Fortunately, it is possible to change that behaviour by specializing the clone() method.

First, we need to make a virtual function in AbstractPrivate class:

virtual AbstractPrivate *clone() const = 0;

(make it pure virtual just to force all subclasses to re-implement it; if your base class is not abstract you probably want to implement the clone() method) and then override it in DerivedPrivate:

virtual DerivedPrivate *clone() const { return new DerivedPrivate(*this); }

Then, specialize the template method QSharedDataPointer::clone(). As we will re-use it multiple times (for different base classes), it is better to define a macro:

#define DATA_CLONE_VIRTUAL(Class) template<> \
Class##Private *QSharedDataPointer<Class##Private>::clone() \
{ \
return d->clone(); \
}
// after the definition of Abstract
DATA_CLONE_VIRTUAL(Abstract)

It is not necessary to write DATA_CLONE_VIRTUAL(Derived) as we are never storing a QSharedDataPointer<DerivedPrivate> throughout the hierarchy.

Then test the code again:

foo 0 0
foo 1 1
foo 0 0

– Just as expected! It continues to work if we replace Derived with Abstract in QScopedPointer:

QScopedPointer<Abstract> ins(new Derived());
QScopedPointer<Abstract> ins2(new Derived(*dynamic_cast<const Derived *>(ins.data())));

Well, another problem arises: the constructor for ins2 seems too ugly and messy. We could, as with the private classes, implement a virtual clone() function for these kinds of things, but that is still not elegant enough, and we cannot use a default copy constructor for any class that contains such QScopedPointers.

What about QSharedPointer, which is copy-constructable? Well, then the copies actually point to the same data structure and no copy-on-write is performed at all. That is still not what we want.

the Descendents of …

Inspired by Sean Parent’s video, I finally came up with the following implementation:

template<typename T>
class Descendent
{
    struct concept
    {
        virtual ~concept() = default;
        virtual const T *ptr() const = 0;
        virtual T *ptr() = 0;
        virtual unique_ptr<concept> clone() const = 0;
    };
    template<typename U>
    struct model : public concept
    {
        model(U x) : instance(move(x)) {}
        const T *ptr() const { return &instance; }
        T *ptr() { return &instance; }
        // or unique_ptr<model<U> >(new model<U>(U(instance))) if you do not have C++14
        unique_ptr<concept> clone() const { return make_unique<model<U> >(U(instance)); }
        U instance;
    };

    unique_ptr<concept> m_d;
public:
    template<typename U>
    Descendent(U x) : m_d(make_unique<model<U> >(move(x))) {}

    Descendent(const Descendent & that) : m_d(move(that.m_d->clone())) {}
    Descendent(Descendent && that) : m_d(move(that.m_d)) {}

    Descendent & operator=(const Descendent &that) { Descendent t(that); *this = move(t); return *this; }
    Descendent & operator=(Descendent && that) { m_d = move(that.m_d); return *this; }

    const T *data() const { return m_d->ptr(); }
    const T *constData() const { return m_d->ptr(); }
    T *data() { return m_d->ptr(); }
    const T *operator->() const { return m_d->ptr(); }
    T *operator->() { return m_d->ptr(); }
};

This class allows you to use Descendent<T> (read as “descendent of T“) to represent any instance of any subclass of T. It is copy-constructable, move-constructable, copy-assignable, and move-assignable.

Test code:

int main()
{
    Descendent<Abstract> ins = Derived();
    Descendent<Abstract> ins2 = ins;
    ins->foo();
    ins->modifyVar();
    ins->foo();
    ins2->foo();
}

It gives just the same results as before, but the code is much neater and nicer. How does it work?

First we define a class concept. We put here what we want our instance to satisfy. We would like to access it as const and non-const, and to clone it as-is. Then we define a template class model<U> where U is a subclass of T, and implement these functionalities.

Next, we store a unique_ptr<concept>. The reason for not using QScopedPointer is that QScopedPointer is not movable, and movability is a feature we will actually want (in sink arguments and return values).

Finally it’s just the constructor, moving and copying operations, and ways to access the wrapped object.

When Descendent<Abstract> ins2 = ins; is called, we will go through the copy constructor of Descendent:

Descendent(const Descendent & that) : m_d(move(that.m_d->clone())) {}

which will then call ins.m_d->clone(). But remember that ins.m_d actually contains a pointer to model<Derived>, whose clone() is return make_unique<model<Derived> >(Derived(instance));. This expression will call the copy constructor of Derived, then make a unique_ptr<model<Derived> >, which calls the constructor of model<Derived>:

model(Derived x) : instance(move(x)) {}

which move-constructs instance. Finally the unique_ptr<model<Derived> > is implicitly converted to unique_ptr<concept>, as per the conversion rule. “If T is a derived class of some base B, then std::unique_ptr<T> is implicitly convertible to std::unique_ptr<B>.”

And from now on, happy hacking — (.>w<.)

KDE Usability & Productivity: Week 71

Sunday 19th of May 2019 06:01:22 AM

Hot on the heels of last week, this week’s Usability & Productivity report continues to overflow with awesomeness. Quite a lot of the work you see featured here is already available to test out in the Plasma 5.16 beta, too! But why stop? Here’s more:

New Features

Bugfixes & Performance Improvements

User Interface Improvements

Next week, your name could be in this list! Not sure how? Just ask! I’ve helped mentor a number of new contributors recently and I’d love to help you, too! You can also check out https://community.kde.org/Get_Involved, and find out how you can help be a part of something that really matters. You don’t have to already be a programmer. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!

If you find KDE software useful, consider making a donation to the KDE e.V. foundation.

This Summer with Kdenlive

Sunday 19th of May 2019 12:00:00 AM

Hi! I’m Akhil K Gangadharan and I’ve been selected for GSoC this year with Kdenlive. My project is titled ‘Revamping the Titler Tool’ and my work for this summer aims to kick off the complete revamp of one of the major tools used in video editing in Kdenlive, called the Titler tool.

Titler Tool?

The Titler tool is used to create, you guessed it, title clips. Title clips are clips that contain text and images that can be composited over videos.

The Titler tool

Why revamp it?

In Kdenlive, the Titler tool is implemented using QGraphicsView, which has been considered deprecated since the release of Qt 5. This makes it prone to upstream bugs that affect the functionality of the tool. It has caused issues in the past: popular features like the Typewriter effect had to be dropped because of QGraphicsView, which led to uncontrollable crashes.

How?

Using QML.

Currently the Titler tool uses QPainter, with which every property is painted and every animation has to be programmed by hand. QML allows creating powerful animations easily, as it is a language designed for building UIs, and QML scenes can then be rendered to create title clips as needed.

Implementation details - a brief overview

For the summer, I intend to complete work on the backend implementation. The first step is to write and test a complete MLT producer module that can render QML frames, and then to begin test integration of this module with a new Titler tool.

This is how the backend currently looks:

After the revamp, the backend will look like this:

Once the backend is done, we begin integrating it with Kdenlive and evolving the Titler to use the new backend.

A great, long challenge lies ahead, and I’m looking forward to this summer and beyond with the community, completing the tool right from the backend to the new UI.

Finally, a big thanks to the Kdenlive community for getting me here and to my college student community, FOSS@Amrita for all the support and love!

Plasma 5.15.90 (Plasma 5.16 Beta) Available for Testing

Saturday 18th of May 2019 04:44:09 PM

Are you using Kubuntu 19.04, our current Stable release? Or are you already running our daily development builds?

We currently have Plasma 5.15.90 (Plasma 5.16 Beta) available in our Beta PPA for Kubuntu 19.04, and in our 19.10 development release daily live ISO images.

For 19.04 Disco Dingo, add the PPA and then upgrade

sudo add-apt-repository ppa:kubuntu-ppa/beta && sudo apt update && sudo apt full-upgrade -y

Then reboot. If you cannot reboot from the application launcher,

systemctl reboot

from the terminal.

For already installed 19.10 Eoan Ermine development release systems, simply upgrade your system.

Update directly from Discover, or use the command line:

sudo apt update && sudo apt full-upgrade -y

And reboot. If you cannot reboot from the application launcher,

systemctl reboot

from the terminal.

Otherwise, to test or install the live image grab an ISO build from the daily live ISO images link.

Kubuntu is part of the KDE community, so this testing will benefit both Kubuntu as well as upstream KDE Plasma software, which is used by many other distributions too.

  • If you believe you might have found a packaging bug, you can post testing feedback to the Kubuntu team; a launchpad.net account is required.
  • If you believe you have found a bug in the underlying software, then bugs.kde.org is the best place to file your bug report.

Please review the changelog.

[Test Case]

* General tests:
– Does plasma desktop start as normal with no apparent regressions over 5.15.5?
– General workflow – testers should carry out their normal tasks, using the plasma features they normally do, and test common subsystems such as audio, settings changes, compositing, desktop effects, suspend, etc.

* Specific tests:
– Check the changelog:
– Identify items with front/user facing changes capable of specific testing. e.g. “clock combobox instead of tri-state checkbox for 12/24 hour display.”
– Test the ‘fixed’ functionality.

Testing involves some technical setup, so while you do not need to be a highly advanced K/Ubuntu user, some proficiency in apt-based package management is advisable.

Testing is very important to the quality of the software Ubuntu and Kubuntu developers package and release.

We need your help to get this important beta release in shape for Kubuntu 19.10 as well as added to our backports.

Thanks! Please stop by the Kubuntu-devel IRC channel or Telegram group if you need clarification of any of the steps to follow.

[Some] KDE Applications 19.04.1 also available in flathub

Friday 17th of May 2019 08:30:07 PM

Thanks to Nick Richards, we’ve been able to convince Flathub to temporarily accept our old appdata files as still valid. It’s a stopgap workaround, but at least it gives us some breathing time. So the updates are coming in as we speak.

My Project for Google Summer of Code <2019-05-17 Fri>

Friday 17th of May 2019 05:44:00 AM

I was accepted to Google Summer of Code. I will be working with Krita, implementing an Animated Vector Brush.

KIOFuse – GSoC 2019

Thursday 16th of May 2019 04:31:17 PM

It’s been a great pleasure to be chosen to work with KDE during GSoC this year. I’ll be working on KIOFuse, and hopefully by the end of the coding period it will be well integrated with KIO itself. Development will mainly be coordinated on the #kde-fm channel (IRC nick: feverfew), with fortnightly updates on my blog, so feel free to pop by! Here’s a small snippet of my proposal to give everyone an idea of what I’ll be working on:

KIOSlaves are a powerful feature within the KIO framework, allowing KIO-aware applications such as Dolphin to interact with services outside the local filesystem over URLs such as fish:// and gdrive:/. However, KIO-unaware applications are unable to interact seamlessly with KIO Slaves. For example, editing a file in gdrive:/ in LibreOffice will not save changes to your Google Drive. One potential solution is to make use of FUSE, an interface provided by the Linux kernel that allows userspace processes to provide a filesystem which can be mounted and accessed by regular applications. KIOFuse is a project by fvogt that makes it possible to mount KIO filesystems in the local system, thereby exposing them to POSIX-compliant applications such as Firefox and LibreOffice.

This project intends to polish KIOFuse such that it is ready to be a KDE project. In particular, I’ll be focusing on the following four broad goals:

  • Improving compatibility with KDE and non-KDE applications by extending and improving supported filesystem operations.
  • Improving KIO Slave support.
  • Performance and usability improvements.
  • Adding a KDE Daemon module to allow the management of KIOFuse mounts and the translation of KIO URLs to their local path equivalents.
