Planet KDE - http://planetKDE.org/

Markdown and support of embedded mathematics

Tuesday 30th of July 2019 09:32:27 PM
Hello everyone!

In the previous post I mentioned that Cantor now handles embedded mathematical expressions inside of Markdown, like $...$ and $$...$$ in accordance with the Markdown syntax.

For a long time Cantor didn't have any support for Markdown and only had a simple text entry type for comment purposes. A Markdown entry type was added only in 2018 by kqwyf. Internally, this was realized by using the Discount library, which converts Markdown syntax to HTML code that is then passed to Qt for the final rendering (Qt supports a limited subset of HTML).

The Discount library actually supports integration with LaTeX: text inside LaTeX expressions like $$...$$, \(...\), \[...\] is passed to the output HTML string without modifications (except HTML escaping).

As you see, Discount doesn't support embedded mathematics with the single delimiter $...$ that is used very frequently in Jupyter. Of course, for my Jupyter integration project, ignoring these statements was not an option. I decided to report this issue in the Discount bug tracker, because all the options for solving this problem purely on the Cantor side had other problems.

Fortunately, the author of Discount reacted quickly (thanks to him for that) and suggested code changes for supporting the single-delimited math. Unfortunately, the changes haven't made it into the master branch yet. To proceed further in Cantor I decided to copy the required Discount code, with all the relevant changes, into Cantor's repository as a third-party library.

Independent of the support for single-delimiter mathematics, there is a big problem with embedded mathematical expressions - you need to somehow find these mathematical statements in the output HTML string. In the initial implementation I simply searched for $$ in the result string, but this could lead to "search collisions".

The dollar sign could be inside a Markdown code block or inside a quote block. There, the dollar signs shouldn't be treated as part of the embedded mathematics. After some further testing of a couple of other implementations on the Cantor side, the conclusion was obvious - the identification and labeling of the positions of embedded mathematics in the HTML string produced by Discount should be done directly inside Discount itself.

At this moment, the version of Discount added to Cantor's repository has two additional functional changes on top of the officially released version of this library. First, Discount copies all LaTeX expressions during the processing of the Markdown syntax to a special string list, which is then used by Cantor to search for the LaTeX code. Second, a useful change was to add an ASCII non-text symbol to every math expression. This symbol is used as a search key, which greatly reduces the likelihood of a string collision (still theoretically possible, though).

For example, if Discount finds (according to the Markdown syntax) the math expression $\Gamma$, it will write the additional symbol, the expression in the output HTML string will be $<symbol>\Gamma$, and Cantor will search for exactly this text.
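To illustrate the idea, here is a minimal sketch (not the actual Cantor code; the marker character and the function name are made up for this example) of how the marked expressions could be located in the HTML string produced by Discount:

#include <QChar>
#include <QList>
#include <QString>
#include <QStringList>

// Hypothetical marker: the non-printable ASCII symbol the patched Discount
// writes right after the opening "$" of every math expression.
static const QChar mathMarker(0x01);

// For every LaTeX expression Discount copied aside during parsing, find the
// position of its marked occurrence in the output HTML.
QList<int> findMarkedMath(const QString &html, const QStringList &latexExpressions)
{
    QList<int> positions;
    int from = 0;
    for (const QString &expr : latexExpressions) {
        // Each expression appears as "$<marker><expr>" in the HTML, so searching
        // for the marker avoids collisions with ordinary dollar signs.
        const QString needle = QStringLiteral("$") + mathMarker + expr;
        const int pos = html.indexOf(needle, from);
        if (pos == -1)
            continue;
        positions.append(pos);
        from = pos + needle.size();
    }
    return positions;
}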

I think that's all. Maybe this doesn't look like a complex problem, but solving it was the task that took the most time - it took me two months to fix. So, I think the problem and its solution deserved a separate blog post.

At this moment, what I called the "maximum plan" (I mentioned this concept in this post) of the Jupyter support in Cantor is mostly finished. So, in the next post I plan to show how Cantor now handles test notebooks and what I plan to do next.

Migrating to a Static Site

Monday 29th of July 2019 10:00:00 PM

Hello static site...

I've written practically nothing on my previous blog, and the whole WordPress install was overkill; besides, doing a static website in Python sounds way better to a programmer ;-)

So, I'm using Pelican and plan to bring back all my customizations that make sense.

And of course

Strokes are Working Now

Monday 29th of July 2019 03:02:36 AM

Okay, good news today. I have been porting DefaultTool to the new node-replacing system and it is working now, finally, at least for the part I have already done.

The work involves combining a number of different modules in Krita: the stroke system, KoInteractionTool and its interaction strategies, and, well, the COW mechanism in Flake.

KoInteractionTool is the class used to manage the interaction with vector shapes, and is subclassed by DefaultTool. The behaviours of KoInteractionTool (and thus DefaultTool) are defined by KoInteractionStrategy subclasses. Upon a mouse button press, DefaultTool creates an instance of some subclass of KoInteractionStrategy, say, ShapeMoveStrategy, according to the point of the click as well as the keyboard modifiers. Mouse move events after that are all handled by the interaction strategy. When the mouse is released, the interaction strategy’s finishInteraction() is called, and then createCommand(). If the latter returns some KUndo2Command, the command is added to the undo history. So far it sounds simple.

So how does the stroke system come in? I have experimented with the interaction strategy without the stroke system (https://invent.kde.org/tusooaw/krita/commit/638bfcd84c622d3cfefda1e5132380439dd3fdc2), but it is really slow and even freezes Krita for a while sometimes. The stroke system allows the modification of the shapes to run in the image thread, instead of the GUI thread. A stroke is a set of jobs scheduled and run by a KisStrokesFacade (here, KisImage). One creates the stroke in a strokes facade using a stroke strategy, which defines the behaviour of the stroke. After creation, jobs can be added to the stroke and then executed at some later time (it is asynchronous).

So, combining these two, we have an interaction strategy and a stroke strategy: when the interaction strategy is created, we start the stroke in the image; when there is a mouse move, we add individual jobs that change the shapes to the stroke; when the mouse is released, we end the stroke. My discussion with Dmitry initially leaned towards making the interaction strategy inherit the stroke strategy, but this later proved not to be a viable solution, since the interaction strategy is owned and deleted by KoInteractionTool while the stroke strategy is owned by the stroke - which would lead to double deletion. So we divide it into two classes instead: the interaction strategy starts the stroke, and the stroke strategy takes a copy of the current active layer upon creation; when handling mouse move events, a job is added to the stroke to modify the current layer; finally, when the interaction finishes, the interaction strategy ends the stroke and creates an undo command if the layer has been changed.
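Schematically, the division of labour could look like the following sketch (this is just an illustration written from memory, not the actual Krita code; ShapeMoveStrategy, MoveStrokeStrategy and ShapeMoveJobData are placeholder names and the exact signatures may differ):

// The stroke strategy owns the copy of the active layer and runs its jobs in
// the image thread; the interaction strategy only drives the stroke.
class MoveStrokeStrategy;   // a KisStrokeStrategy subclass (not shown)
class ShapeMoveJobData;     // a KisStrokeJobData subclass carrying the new position

class ShapeMoveStrategy : public KoInteractionStrategy
{
public:
    ShapeMoveStrategy(KoToolBase *tool, KisStrokesFacade *image)
        : KoInteractionStrategy(tool)
        , m_image(image)
    {
        // start the stroke in the image as soon as the interaction begins
        m_strokeId = m_image->startStroke(new MoveStrokeStrategy(/* copy of active layer */));
    }

    void handleMouseMove(const QPointF &point, Qt::KeyboardModifiers) override
    {
        // every mouse move becomes an asynchronous job that modifies the layer
        m_image->addJob(m_strokeId, new ShapeMoveJobData(point));
    }

    void finishInteraction(Qt::KeyboardModifiers) override
    {
        // end the stroke; an undo command is created only if the layer changed
        m_image->endStroke(m_strokeId);
    }

private:
    KisStrokesFacade *m_image;
    KisStrokeId m_strokeId;
};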

A problem I found lies in the final stage: if the mouse is released as soon as it is pressed and no undo command is created, Krita will simply crash. It does not happen when I use gdb to start Krita, so it seems to be a timing issue, which makes it difficult to debug as well. Dmitry used a self-modified version of Qt to produce a backtrace, indicating the problem probably lies in KisCanvas2’s canvasUpdateCompressor, which is not thread-safe. However, after I changed it to KisThreadSafeSignalCompressor, the crash still happens, unfortunately.

The final inspiration came from the comments in KisThreadSafeSignalCompressor, though. They indicate we cannot delete the compressor from other threads - we have to use obj->deleteLater() instead, since it lives in the GUI thread. And aha, that is the problem. The stroke strategy’s destructor is executed in the image thread; if the undo command is not created, there is only one reference to our copied KisNode, namely in our stroke strategy, so it has to be destructed there. However, upon the creation of the KisNode, it is moved into the GUI thread. So it simply means we cannot let it be deleted in the image thread. The solution looks a little bit messy, but it works:

KisNode *node = m_d->originalState.data(); // take the address from KisSharedPtr
node->ref(); // prevent KisSharedPtr from deleting the node
m_d->originalState.clear(); // now node is not being referenced by any KisSharedPtr
node->deref(); // the reference count is now zero again
node->deleteLater(); // it will be deleted by the event loop, later

My KDE Onboarding Sprint 2019 report

Sunday 28th of July 2019 09:01:02 PM

This week I took part in the KDE Onboarding Sprint 2019 (part of what's been known as the Nuremberg Megasprint, i.e. KDEConnect+KWin+Onboarding) in, you guessed it, Nuremberg.

The goal of the sprint was "how do we make it easier for people to start contributing". We mostly focused on the "start contributing *code*" side, though we briefly touched on artists and translators too.

This is *my* summary, a more official one will appear somewhere else, so don't get annoyed at me if the blog is a bit opinionated (though I'll try not to be).

The main issues we've identified when trying to contribute to KDE software are:
* Getting dependencies is [sometimes] hard
* Actually running the software is [sometimes] hard

Dependencies are hard

Say you want to build Dolphin from the git master branch. For that (at the time of writing) you need KDE Frameworks 5.57; this means that if you run the latest Ubuntu or the latest openSUSE you can't build it, because they ship older versions.

Our current answer for that is kdesrc-build, but it's not the easiest script to use, and sometimes you may end up building QtWebEngine or QtWebKit, which as a newbie is something you most likely don't want to do.

Running is hard

Running the software you have just built (once you've passed the dependencies problem) is not trivial either.

Most of our software can't be run uninstalled (KDE Frameworks are a notable exception here, but newbies rarely start developing KDE Frameworks).

This means that you may try to run make install. If you didn't pass -DCMAKE_INSTALL_PREFIX pointing somewhere in your home, you'll probably have to run make install as root, since it defaults to /usr/local (this will be fixed in the next extra-cmake-modules release to point to a somewhat better prefix), which isn't that useful either since none of your software is looking for stuff in /usr/local. Newbies may be tempted to use -DCMAKE_INSTALL_PREFIX=/usr, but that's *VERY* dangerous since it can easily mess up your own system.

For applications, our typical answer is to use -DCMAKE_INSTALL_PREFIX=/home/something/else at the cmake stage, run make install and then set the environment variables to pick up things from /home/something/else. A newbie will probably say "which variables?" at this stage (and non-newbies too, I don't think I remember them all). To help with that we generate a prefix.sh in the build dir, and after the next extra-cmake-modules release we will tell users that they need to run it for things to work.

But still, that's quite convoluted, and I know from experience answering people on IRC that lots of people get stuck there. It's also very IDE-unfriendly, since IDEs don't usually have the "install" concept; it's run & build for them.

Solutions

We ended up focusing on two possible solutions:

* Conan: Conan, "the C/C++ Package Manager for Developers" (or so they say), is something like pip in the Python world but for C/C++. The idea is that by using Conan to get the dependencies we will solve most of the problems in that area. Whether it can help or not with the running side is still unclear, but some of our people involved in the Conan effort think they may either be able to come up with a solution or get the Conan devs to help us with it. Note that Conan is not my speciality by far, so this may not be totally correct.

* Flatpak: Flatpak is "a next-generation technology for building and distributing desktop applications on Linux" (or so they say). The benefits of using Flatpak are multiple, but focusing on onboarding: "getting dependencies" is solved, since dependencies are either part of the Flatpak SDK and you have them, or the Flatpak manifest for the application says how to get and build them, and that will automagically work for you as it works for everyone else using the same manifest. "Running" is solved, because when you build a flatpak it gets built into a self-contained artifact, so running it is just running it; no installing or environment variable fiddling is needed. We also have [preliminary] support in KDevelop (or you can use GNOME Builder if you want a more Flatpak-centric experience for now). The main problem we have with Flatpak at this point is that most of our apps are not totally Flatpak-ready (e.g. Okular can't print). But that's something we need to fix anyway, so it shouldn't be counted as a problem (IMHO).

Summary

*Personally* I think Flatpak is the way to go here, but that means that collectively we need to say "Let's do it". It's something we all have to take into account, and thus we have to streamline the manifest handling/updating, focus on fixing the Flatpak-related issues that our software may have, etc.

Thanks

I would like to thank SUSE for hosting us in their offices and the KDE e.V. for sponsoring my attendance at the sprint. Please donate to KDE if you think the work done at sprints is important.

Latte Dock v0.9, "...a world to discover..."

Sunday 28th of July 2019 08:39:02 PM


Welcome Latte Dock v0.9.0!

After a full year of development a new stable branch is finally available! This is a version full of innovations, improvements and enhancements that improve the entire experience in all areas. I have written plenty of articles introducing the major features, and in the same manner this article will focus only on the highlights.


You can get v0.9.0 from download.kde.org or store.kde.org*
* the archive has been signed with gpg key: 325E 97C3 2E60 1F5D 4EAD CF3A 5599 9050 A2D9 110E
- youtube presentation -

New Colors painting

Latte can now paint itself based on the currently active window and, when it is transparent, can provide the best possible contrast for the underlying desktop background. You can read more at Latte and a Colors tale...
- top panel colored from kwrite color scheme -


Indicators

New indicators are now provided and are also self-packaged. You can install them from the Latte Effects page and you can find them online at the KDE store. You can already find Unity and DashToPanel styles. You can read more at Latte and an Indicators tale...
- unity indicators style-
- dashtopanel indicators style -


Multiple Layouts Environment


With Latte you can have different layouts used at different Plasma Activities, and at the same time you can now share layouts with other layouts in order to achieve shared Docks/Panels. You can read more at Latte and a Shared Layouts dream...
- Top Panel layout shared to two other layouts -


Flexible Settings

All Dock/Panel settings have been redesigned in order to improve their meaning and at the same time give the user a way to adjust their width/height scales according to their screen. You can read more at Latte and Flexible settings...



Improved Badges

The badge experience has been rethought and improved. New options have been added so that the user can choose more prominent notification badges or use a 3D style for them instead of the Material one used by default.
- persistent notification badge and shortcut badges -
  

Documentation and Reports

In Latte Global preferences you can now find some diagnostic reports in order to debug your layouts. If you are a developer and are interested in Latte development, you can now find plenty of information at KDE Techbase. Read more at Latte, Documentation and Reports...



Fixes/Improvements

Plenty, plenty, plenty... v0.9 provides a much smoother and more bug-free experience compared to previous versions. Many areas have been redesigned and improved.




Latte Development team

Because the community might not have been updated properly: the Latte development team is currently just me, and this has been the situation for the past two years. So for the upcoming development cycle I will focus only on bug fixing and supporting the current, massive feature list. I am not interested in new features at all, except if a feature enhances my own personal workflow. For any new feature to be accepted there should be an implementation to discuss, and any new feature requests at bugs.kde.org I will leave open for a month; if no developer is found, they will be closed afterwards.


Requirements

Minimum:
  • Qt >= 5.9
  • Plasma >=5.12
Proposed:
  • Qt >= 5.12
  • Plasma >=5.15
 

Donations

You can find Latte at Liberapay if you want to support it,

or you can split your donation between my active projects in kde store.

KDE Usability & Productivity: Week 81

Sunday 28th of July 2019 06:01:50 AM

Here’s week 81 in KDE’s Usability & Productivity initiative! And boy is there some delicious stuff today. In addition to new features and bugfixes, we’ve got a modernized look and feel for our settings windows to show off:

Doesn’t it look really good!? This design is months in the making, and it took quite a bit of work to pull it off. I’d like to thank Marco Martin, Filip Fila, and the rest of the KDE VDG team for making it happen. This is the first of several user interface changes we’re hoping to land in the Plasma 5.17 timeframe, and I hope you’ll like them!

In the meantime, look at all the other cool stuff that got done this week:

New Features
Bugfixes & Performance Improvements
User Interface Improvements

Next week, your name could be in this list! Not sure how? Just ask! I’ve helped mentor a number of new contributors recently and I’d love to help you, too! You can also check out https://community.kde.org/Get_Involved, and find out how you can help be a part of something that really matters. You don’t have to already be a programmer. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!

If you find KDE software useful, consider making a tax-deductible donation to the KDE e.V. foundation.

KDE Connect mDNS: Nuremberg Megaspring Part 2

Saturday 27th of July 2019 11:19:01 AM

Sprints are a great time to talk in real-time to other project developers. One of the things we talked about at the KDE Connect part of the “Nuremberg Megasprint” was the problem that our current discovery protocol often doesn’t work, since many networks block the UDP broadcast we currently use. Additionally, we often get feature requests for more privacy-conscious modes of KDE Connect operation. Fixing either of these problems would require a new Link Provider (as we call it), and maybe we can fix both at once.

A New Backend

First, let’s talk about discovery. The current service discovery mechanism in KDE Connect is that we send a multicast UDP packet to the current device’s /24 subnet. This is not ideal, since some networks are not /24, and since many public networks block packets of this sort. Alternatively, you can manually add an IP address, which then establishes a direct connection. Manual connections work on many networks which block UDP, but it is a bit of a hassle. Can we find a better way to auto-discover services?

A few months ago, a user named rytilahti posted two patches to our Phabricator for KDE Connect service advertisement over mDNS (aka avahi, aka nsd, aka …). The patches were for advertisement only (they still don’t establish a connection), but they were a good proof of concept showing that mDNS works on many institutional networks which block UDP multicast, since mDNS is frequently used for other things, like network printer discovery, which those institutional networks want to keep working.
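For reference, advertising a service over mDNS from Qt/KDE code is not much work by itself. Here is a minimal sketch using the KDNSSD framework (this is not the code from those patches; the service name, type string and port are assumptions for illustration):

#include <QCoreApplication>
#include <KDNSSD/PublicService>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    // Announce a (hypothetical) KDE Connect service via mDNS/DNS-SD.
    // Peers can then find it with a matching service browser instead of
    // relying on UDP broadcasts.
    KDNSSD::PublicService service(QStringLiteral("My device"),
                                  QStringLiteral("_kdeconnect._tcp"),
                                  1716);
    service.publishAsync();

    return app.exec();
}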

I would post a screenshot here, but I don’t want to spread details of an internal network too far

At the sprint, we talked about whether we would like to move forward with these and we decided it was useful, so Albert Vaca and I put together two proof of concept patches to start trying to establish a connection using mDNS advertisements:

The patches are not yet fully working. Both devices can see each other and attempt to establish a connection, but then something goes wrong and one of them crashes. Given that this was less than 8 hours of work, I would call this a success!

There is still plenty to do, but it was very helpful to be able to sit in-person and talk about what we wanted to accomplish and work out the details of the new protocol.

More Privacy

Before we talk about privacy, it helps to have a quick view of how KDE Connect currently establishes a connection:

  • As described above, both devices send a multicast UDP packet. This is what we call an “Identity Packet”, where each device sends its name, capabilities (enabled plugins), and unique ID (a rough sketch of such a broadcast follows after this list)
  • If your device receives an identity packet from a device it recognizes, it establishes a secure TCP connection (if both devices open a connection, the duplicate connection is handled and closed)
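To make the first step a bit more concrete, here is a rough sketch of broadcasting such an identity packet with Qt (this is not the actual KDE Connect code; the port number and the JSON field names are assumptions for illustration only):

#include <QHostAddress>
#include <QJsonArray>
#include <QJsonDocument>
#include <QJsonObject>
#include <QUdpSocket>

// Broadcast a simplified "identity packet" on the local network.
void broadcastIdentity(QUdpSocket *socket)
{
    QJsonObject body;
    body.insert(QStringLiteral("deviceName"), QStringLiteral("Simon's Phone"));
    body.insert(QStringLiteral("deviceId"), QStringLiteral("some-unique-id"));
    body.insert(QStringLiteral("capabilities"),
                QJsonArray { QStringLiteral("kdeconnect.sms") });

    const QByteArray datagram = QJsonDocument(body).toJson(QJsonDocument::Compact);
    socket->writeDatagram(datagram, QHostAddress::Broadcast, 1716); // assumed port
}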

As long as we are talking about a new backend, let’s think about ways to make KDE Connect more privacy-conscious. There are two problems to address:

  • Device names often contain personal information. For instance, “Simon’s Phone” tells you that “Simon” is around
  • Device IDs are unique and unchanging. Even assuming I rename my phone, you can still track a particular device by checking for the same ID to show up again and again

Solving the first problem is easy. We want the user’s device name so we can display it in the list of available devices to pair with. So, instead of sending that information in the identity packet all the time, have some “discovery mode” switch which otherwise withholds the device name until a connection to an already-trusted device is established.

This leaves the second problem, which is quite a bit more tricky. One answer is to have user-selected trusted wifi networks, so KDE Connect doesn’t broadcast on a random wifi that the user connects to. But what if I connect to, say, my university network, where I want to use KDE Connect but I don’t want to tell everyone that I’m here?

We don’t have a final answer to this question, but we discussed a few possible solutions. We would like some way of verifying ourselves to the other device which conceals our identity behind some shared secret, so the other device can trust that we are who we say we are, but other devices can’t fingerprint us. It is a tricky problem, but not one we need to solve yet. Step 1 is to get the new mDNS backend working; step 2 is to add advanced features to it!

3D – Interactions with Qt, KUESA and Qt Design Studio, Part 2

Friday 26th of July 2019 12:15:43 PM

In my last post, I went through a method of creating a simulated reflection with a simple scene. This time I’d like to take the technique and apply it to something a bit more realistic. I have a model of a car that I’d like to show off in KUESA™, and as we know, KUESA doesn’t render reflections.

Using the method discussed last time, I’d need to create an exact mirror of the object and duplicate it below the original, and have a floor that was partially transparent. This would be expensive to render if I duplicated every single mesh on the high poly model. Here, the model is X triangles – duplicating it would result in Y triangles – is it really worth it for a reflection?

However, if I could take a low poly version of the mesh and make it look like the high poly mesh, this could work. I could bake the materials onto a low poly mesh, which is easy to do in Blender.

Here I have a low poly object as the created reflection, with the baked materials giving the impression of much higher detail than physically exists. Using the same technique of applying an image texture as an alpha channel, I can fade out the mesh below to look more like a reflection.

To further the illusion, I can also blur the details on the baked materials – this, along with the rough texture of the plane, gives more of a reflective look, especially when you consider that a rough plane would not give an exact reflection:

The good news is, it produces an effect similar to a reflection, which is what we want.

This is all very well for a static model, but how can I deal with an object that has animations, such as a car door opening? I’ll look into this in the next post.


KDE First Contributions and first sprint

Friday 26th of July 2019 12:03:00 PM

I have been a KDE user for more than 10 years. I really love the KDE community, Plasma and its apps. I have been eagerly reading Nate Graham's blog. He gave me the inspiration to start contributing.

It has been an opportunity to learn some C++, Qt and some QML.

So far I am very happy to have contributed a few features and bug fixes:

And a few less noticeable as well.

And I have quite a lot more in the back of my head. I am close to completing the addition of click-to-play to video and audio previews in Dolphin's information panel.

Usability & Productivity Sprint in Valencia

I participated last month in the Usability & Productivity Sprint in Valencia. I have been very happy to meet some great KDE community members from three continents and 9 countries.

There I improved the KInfoCenter consistency thanks to the help of Filip and Marco, and added a link from the system battery plasmoid to the corresponding KInfoCenter KCM. I started to work on a new recently-used-documents ioslave that will match the same feature as in Kicker/Kickoff, adding some consistency and activity awareness to Dolphin and the KDE Open/Save dialogs. I learned about KWin debugging with David Edmundson. And I had great discussions with the people there.

I am going to Akademy 2019!

And since I am a big fan of rust-lang, this will be a nice opportunity to debate on the matter and on the future of KDE.

Nürnberg Sprint and KDE Itinerary Browser Integration

Friday 26th of July 2019 07:00:00 AM

Getting everyone interested/involved in a specific area into a room for a few days with no distraction is a very effective way to get things done; that's why we have been doing sprints in KDE for many years. Another possible outcome, however, can be that we end up with some completely unexpected results as well. Here is one such example.

Sprint

Last weekend we had the combined KDE Connect, Streamlined Onboarding and KWin sprints in Nürnberg. SUSE kindly provided us with rooms in their office (as well as coffee and the occasional pizza), and KDE e.V. had made sure we didn’t have to sleep on a park bench somewhere.

I really like the recent trend of “sprint pooling”, that is, combining a few more or less independent topics at the same time and location. This not only reduces travel overhead, it also helps to avoid silo teams and fosters cross-team collaboration. While it’s not a full replacement for doing this at a larger scale at the Randa Meetings, it’s a massive improvement over isolated sprints.

And it made me attend three sprints I’d probably not have attended individually, as I’m not deeply involved enough in any of the topics. That however turned out to be very much worth it.

KDE Itinerary Browser Integration

While there were many important discussions, achievements and decisions as other reports on Planet KDE show, I’d like to talk about one in particular here, the experiments on extracting data relevant for KDE Itinerary via a browser extension. You might notice that this isn’t in scope for any of the official sprint topics, but that’s exactly what happens when you bring together people from different areas, in this case Kai from Plasma Browser Integration and myself from KDE Itinerary.

KDE Itinerary so far gathers most of its data in the email client; Kai’s idea was to also consider the browser for this. That makes sense, as it’s where you do most of the actual booking operations. However, unlike email, websites can be very dynamic, making them hard to capture and analyze to get an idea of how viable that idea actually is.

Having someone who knows how to develop browser extensions and someone who knows the structured data patterns we need to look for sitting next to each other for a day enabled quite some progress. As a start we (and by we I mean Kai) wrote a small browser extension that would look for the schema.org and IATA BCBP patterns we could generically extract. If those were found in sufficient numbers, we would actually have a viable way to get relevant data, without needing a completely unscalable web scraping approach.

My initial expectation was that we’d need to run this extension on a couple of machines for a few weeks until we had some initial numbers. It turned out I was very, very wrong. The extension almost immediately started to report results; it looks like the majority of hotel chain websites and hotel booking sites have schema.org annotations, as well as many restaurant and event booking sites, next to a relevant number of individual hotel and restaurant sites. So, definitely worth pursuing this approach.

Of course the actual development work only just starts now, and there is still a lot of work ahead of us to get this to a point where it provides value, but we have come up with an approach and validated it in a tiny fraction of the time it would have taken any one of us individually.

Contribute

If you find the idea of a Free Software, designed-for-privacy digital travel assistant appealing but so far have shied away from helping out because you are more familiar with web technologies than C++, the browser integration is a great way to get in :)

Another way to help is by enabling KDE e.V. (and its sister organizations like the Krita Foundation) to support such activities, financially by donations, by becoming a supporting member or a corporate sponsor, or logistically by providing a suitable venue for such events for example.

Welcome to KDE: Nuremberg Megasprint Part 1

Thursday 25th of July 2019 09:18:23 AM

Now that it has been over half a year since I started this blog, it is time to address one of the topics that I promised to address at the beginning: How I got started with KDE. I will do this in the context of the “Nuremberg Megasprint” which combined a KDE Connect sprint, a KDE Welcome / Onboarding sprint, and a KWin sprint.

At the Onboarding sprint, we were talking mostly about ways to make it easier for developers new to KDE to work on our software. Currently the path to getting that working is quite convoluted and pretty much requires that a developer read the documentation (which often doesn’t happen). We agreed that we would like the new developer experience to be easier. I don’t have a lot to say about that, but keep an eye on the Planet for an idea of what was actually worked on! Instead, since I am a relatively new KDE contributor, I will tell the story of how I got started.

I started using Plasma as a desktop environment around 2012, shortly after Ubuntu switched from Gnome 2, which I liked, to Unity, which I disliked. I tried playing with Mate and Cinnamon for Ubuntu, but I didn’t find either one was what I wanted. I had heard that KDE existed, but I didn’t know anything about it, so I gave it a try as well.

At the time, my main computer was an HP TM2-2000-series laptop, with a quite respectable 4GB RAM, some decent dual-core, first-generation Intel i5, and a little AMD GPU (which I could never get to work under Linux). But most importantly, it had a touchscreen with a capacitive digitizer for fingers, some styluses, or carrots (which usually work better than the special styluses) and a built-in Wacom digitizer for taking notes in class using the special pen.

An HP TM-2 Laptop, Almost in Tablet Mode

Plasma was nice to use on the touchscreen but most importantly, it had a built-in hotkey to disable the capacitive digitizer so I could write notes using the Wacom pen without my palm constantly messing everything up. It may sound silly, but that is literally the reason I started using KDE software!

Kubuntu came packaged with KDE Connect, which I was very excited by. Could I write SMS from the desktop without touching my phone and without installing questionable software? At the time, no. This was practically the first release of KDE Connect. It still had cool features, so I still loved it, but replying to SMS didn’t come until later.

Fast-forward the clock a couple of years. KDE Connect has had reply-to-SMS features for a while, but something was wrong. If you wrote a “long” SMS, KDE Connect would appear to accept it, but then the message would silently never be sent. How curious! Since the only thing you could do was reply, it was hard to reproduce what was happening. I also noticed that KDE Connect had some work-in-progress, unreleased Telepathy plugin.

I started trying to set up Telepathy so that I would be able to send messages as well as just reply to them. I was able to get the plugin set up, which had (and still has, unfortunately) the very basic feature that you could enter a phone number and see messages sent and received in that “chat”, with no history or contacts-matching. Once I had the ability to send messages from KDE Connect, I noticed that any message I sent which was longer than 1 SMS (~140 bytes) would go missing.

At this point, the only software I had built was the Telepathy plugin (none of the core parts of KDE Connect). Luckily, the Android app is not difficult to build and debug. I followed the message I was trying to send through the app into an Android system call which was clearly for sending a single SMS (and apparently fails silently if the message is too long). I tweaked that part of the code to use the Android way of sending a multi-part SMS, posted the patch (to the mailing list, because I didn’t know Phabricator was the way to go since I hadn’t read the contributor documentation) and I have been hooked ever since.

Building the desktop app was more of a problem and is a better story to tell in the context of onboarding. I couldn’t figure out what cmake flags I needed. I am using Fedora, so I downloaded the source RPM to see if that would help me. I also couldn’t figure out how to read that, but I *did* figure out how to re-build the RPM with new sources. So, for about the first 8 months of my time in KDE, my workflow was:

  • Make a change
  • Rebuild the RPM (which took a relatively long time, even on my fairly fast computer)
  • Install the new RPM
  • Try to figure out why my change wasn’t working

Needless to say, this path was very cumbersome. Luckily, about this time, someone updated the KDE Connect wiki with the proper cmake flags to use!

After a certain amount of effort, I can now run KDE Connect in Eclipse, with the integrated debugger view (Note to readers: I recommend a different IDE for KDE/Qt development. Eclipse requires lots of manual configuration. Try KDevelop!)

kdeconnectd, in Eclipse, paused in the debugger

That’s all for this post. I think it’s clear to say that my road to KDE development was far from straightforward. Hopefully we can make that road smoother in the future!

Back from vacation 2019

Wednesday 24th of July 2019 10:00:00 PM

After 795km on my bicycle, I’m back in the saddle – or, rather, back on my reasonably-comfortable home-office chair. Time to pick up the bits and pieces and dust off the email client!

Some notes and bits from the trip:

  • It takes about a week to settle into a camping stride, for packing in the morning, getting things onto the bike, and getting going. Daily startup time dropped from 3 hours to 1.5 hours by the end of the trip. What this means: practice makes perfect.
  • Fromage frais, in particular Bibeleskaes, is the foundation on which well-fed bike trips are built.
  • The Moselle / Mosel river is lovely, even more so when biking along it. It would have been nice if it had been 15 degrees colder, with France and Germany sagging under an unprecedented heatwave.
  • Every. Single. French. Person. says bon jour! when passing by on a bike, people move aside when you ring a bell or even whistle to overtake. It felt amazingly friendly.
  • Not every boulangerie makes a decent croissant.
  • The best local beer I had was L’ogresse rousse.
  • After two weeks of nothing but bike paths and rural France, the city of Trier is over-crowded, noisy, gross, and feels downright dangerous. Also has no concept of decent bike path markings to get you to the train station.
  • There don’t seem to be any BSD developers along the Moselle.
  • A tent does not provide protection from nuclear fallout. I learned this from the campground safety poster near the Thionville reactors.

No photos, since my low-end phone takes really bad shots.

Anyway, I’m glad to be back home, with bug bites and a slight sunburn.

KDE Onboarding Sprint Report

Wednesday 24th of July 2019 07:08:36 PM

These are my notes from the onboarding sprint. I had to miss the evenings, because I’m not very fit at the moment, so this is just my impression from the days I was able to be around, and from what I’ve been trying to do myself.

Day 1

The KDE Onboarding Sprint happened in Nuremberg, 22 and 23 July. The goal of the sprint was to come closer to making it easier to get started working on existing projects in the KDE community: more specifically, this sprint was held to work on the technical side of the developer story. Of course, onboarding in the wider sense also means having excellent documentation (that is easy to find) and a place for newcomers to ask questions (that is easy to find).

Ideally, an interested newcomer would be able to start work without having to bother building (many) dependencies, without needing the terminal at first, would be able to start improving libraries like KDE frameworks as a next step, and be able to create a working and installable release of his work, to use or to share.

Other platforms have this story down pat, with the proviso that these platforms promote greenfield development, not extending existing projects, as well as working within the existing platform framework, without additional dependencies:

  • Apple: download XCode, and you can get started.
  • Windows: download Visual Studio, and you are all set.
  • Qt: download the Qt installer with Qt Creator, and documention, examples and project templates are all there, no matter for which platform you develop: macOS, Windows, Linux, Android or iOS.

GNOME Builder is also a one-download, get-started offering. But Builder adds additional features to the three above: it can download and build extra dependencies (with atrocious user-feedback, it has to be said), and it offers a list of existing GNOME projects to start hacking on. (Note: I do not know what happens when getting Builder on a system that lacks git, cmake or meson.)

KDE has nothing like this at the moment. Impressive as the kdesrc-build scripts are technically (thanks go to Michael Pyne for giving an in-depth presentation), they are not newcomer-friendly, with a complicated syntax, configuration files and dependency on working from the terminal. KDE also has much more diversity in its projects than GNOME:

  • Unlike GNOME, KDE software is cross-platform — though note that not every person present at the sprint was convinced of that, even dismissing KDE applications ported to Windows as “not used in the real world”.
  • Part of KDE is the Frameworks set of additional Qt libraries that are used in many KDE projects
  • Some KDE projects, like Krita, build from a single git repository; some build from dozens of repositories, where adding one feature means working on half a dozen of them at the same time; and in the case of Plasma, the project replaces the entire desktop on Linux.
  • Some KDE projects are also deployed to mobile systems (iOS, Android, Plasma Mobile)

Ideally, no matter the project the newcomer selects, the getting-started story should be the same!

When the sprint team started evaluating technologies that are currently used in the KDE community to build KDE software, things started getting confused quite a bit. Some of the technologies discussed were oriented towards power users, some towards making binary releases. It is necessary to first determine which components need to be delivered to make a seamless newcomer experience possible:

  • Prerequisite tools: cmake, git, compiler
  • A way to fetch the repository or repositories the newcomer wants to work on
  • A way to fetch all the dependencies the project needs, where some of those dependencies might need to transition from dependency to project-being-worked on
  • A way to build, run and debug the project
  • A way to generate a release from the project
  • A way to submit the changes made to the original project
  • An IDE that integrates all of this

The sprint has spent most of its time on the dependencies problem, which is particularly difficult on Linux. An inventory of ways KDE projects “solve” the problem of providing the dependencies for a given project currently includes:

  • Using distribution-provided dependencies: this is unmaintainable because there are too many distributions with too much variation in the names of their packages to make it possible to keep full and up-to-date lists per project — and newcomers cannot find the deps from the name given in the cmake find modules.
  • Building the dependencies as CMake external projects per project: is only feasible for projects with a manageable number of dependencies and enough manpower to maintain it.
  • Building the dependencies as CMake external projects on binary factory, and using a docker image identical to the one used on the binary factory + these builds to develop the project in: same problem.
  • Building the (KDE, though now also some non-KDE) dependencies using kdesrc-build, getting the rest of the dependencies as distribution packages: this combines the first problem with fragility and a big learning curve.
  • Using the KDE flatpak SDK to provide the KDE dependencies, and building non-KDE dependencies manually, or fetching them from other flatpak SDK’s. (Query: is this the right terminology?) This suffers from inter- and intra-community politicking problems.

ETA: I completely forgot to mention Craft here. Craft is a Python-based system close to emerge that has been around for ages. We used it initially for our Krita port to Windows; back then it was exclusively Windows-oriented. These days, it also works on Linux and Windows. It can build all the KDE and non-KDE dependencies that KDE applications need. But then why did I forget to mention it in my write-up? Was it because there was nobody at the sprint from the Craft team? Or because nobody had tried it on Linux, and there was a huge Linux bias in any case? I don’t know… It was discussed during the meeting, though.

As an aside, much time was spent discussing Docker, but when it was discussed, it was discussed as part of the dependency problem. However, it is properly a solution for running a build without affecting the rest of the developer’s system. (Apparently, there are people who install their builds into their system folders.) Confusingly, part of this discussion was also about setting environment variables to make it possible to run builds when installed outside the system, or uninstalled. Note: the XDG environment variables that featured in this discussion are irrelevant for Windows and macOS.

Currently, newcomers for PIM, Frameworks or Plasma development are pointed towards kdesrc-build (https://community.kde.org/Get_Involved/development, four clicks from www.kde.org), which, not unexpectedly, leads to a heavy support burden in the KDE development communication channels. For other applications, like Krita, there are build guides (https://docs.krita.org/en/contributors_manual.html, two clicks from krita.org), and that also is not a feasible solution.

As a future solution, Ovidiu Bogdan presented Conan, which is a cross-platform binary package manager for C++ libraries. This could solve the dependency problem, and only the dependency problem, but at the expense of making the run problem much harder because each library is in its own location. See https://conan.io/ .

Day 2

The attendees decided to try to tackle the dependency problem. A certain amount of agreement was reached on acknowledging that this is a big problem, so this was discussed in depth. Note again that the focus was on Linux, relegating the cross-platform story to second place. Dmitry noted that when he tries to recruit students for Krita, only one in ten is familiar with Linux, pointing out we’re really limiting ourselves with this attitude.

A KDE application, KRuler, was selected as a prototype for building with dependencies provided either by Flatpak or Conan.

Dmitry and Ovidiu dug into Conan. From what I observed, laying down the groundwork is a lot of work, and by the end of the evening Dmitry and Ovidiu had packaged about half of the Qt and KDE dependencies for KRuler. Though the Qt developers are considering moving to Conan for Qt’s third-party deps, Qt in particular turned out to be a problem: Qt needs to be modularized in Conan, instead of being a big, fat monolith. See https://bintray.com/kde.

Aleix Pol had already made a start on integrating Flatpak and Docker support into KDevelop, as well as providing a Flatpak runtime for KDE applications (https://community.kde.org/Flatpak).

This made it relatively easy to package kruler, okular and krita using flatpak. There are now maintained nightly stable and unstable flatpak builds for Krita.

The problems with Flatpak, apart from the politicking, consist of two different opinions of what an appstream file should contain, checks that go beyond what the freedesktop.org standards demand, weird errors in general (you cannot have a default git branch tag that contains a slash…), an opaque build system, and an appetite for memory that goes beyond healthy: my laptop overheated and hung when trying to build a Krita flatpak locally.

Note also that the Flatpak (and Docker) integration in KDevelop is not done yet, and didn’t work properly when tested. I am also worried that KDevelop is too complex and intimidating to use as the IDE that binds everything together for new developers. I’d almost suggest we repurpose/fork Builder for KDE…

Conclusion

We’re not done with the onboarding sprint goal, not by a country mile. It’s as hard to get started with hacking on a KDE project, or starting a new KDE project, as it has ever been. Flatpak might be closer to ready than Conan for solving the dependency problem, but though Flatpak solves more problems than just the dependency problem, it is Linux-only. Using Conan to solve the dependency problem will be very high-maintenance.

I do have a feeling we’ve been looking at this problem at a much too low level, but I don’t feel confident about what we should be doing instead. My questions are:

* Were we right on focusing first on the dependency problem and nothing but the dependency problem?
* Apart from flatpak and conan, what solutions exist to deliver prepared build environments to new developers?
* Is kdevelop the right IDE to give new developers?
* How can we make sure our documentation is up to date and findable?
* What communication channels do we want to make visible?
* How much effort can we afford to put into this?

DBus connection on macOS

Wednesday 24th of July 2019 04:18:14 PM
What is DBus

DBus is a software bus concept, an inter-process communication (IPC) and remote procedure call (RPC) mechanism that allows communication between multiple computer programs (that is, processes) concurrently running on the same machine. DBus was developed as part of the freedesktop.org project, initiated by Havoc Pennington from Red Hat to standardize services provided by Linux desktop environments such as GNOME and KDE.

In this post, we only talk about how the DBus daemon runs and how KDE Applications/Frameworks connect to it. For more details on DBus itself, please refer to the DBus Wiki.

QDBus

There are two types of bus: the session bus and the system bus. User-end applications should use the session bus for IPC or RPC.

For the DBus connection, there is already a good enough library named QDBus provided by Qt. The Qt framework, and especially QDBus, is widely used in KDE Applications and Frameworks on Linux.

The most commonly used function is QDBusConnection::sessionBus(), which establishes a connection to the default session bus. All DBus connections are established through this function.

Its implementation is:
QDBusConnection QDBusConnection::sessionBus()
{
    if (_q_manager.isDestroyed())
        return QDBusConnection(nullptr);
    return QDBusConnection(_q_manager()->busConnection(SessionBus));
}

where _q_manager is an instance of QDBusConnectionManager.

QDBusConnectionManager is a private class, so we don’t know exactly what happens in the implementation.

The code can be found in qtbase.

DBus connection on macOS

On macOS, we don’t have a pre-installed DBus. When we compile it from source code, or install it from Homebrew or somewhere else, a configuration file session.conf and a launchd configuration file org.freedesktop.dbus-session.plist are delivered and expected to be installed into the system.

session.conf

In session.conf, one important thing is <listen>launchd:env=DBUS_LAUNCHD_SESSION_BUS_SOCKET</listen>, which means the socket path should be provided by launchd through the environment variable DBUS_LAUNCHD_SESSION_BUS_SOCKET.

org.freedesktop.dbus-session.plist

On macOS, launchd is a unified operating system service management framework that starts, stops and manages daemons, applications, processes, and scripts, just like systemd on Linux.

The file org.freedesktop.dbus-session.plist describes how launchd can find the daemon executable, the arguments to launch it, and the socket to communicate over after launching the daemon.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>org.freedesktop.dbus-session</string>

    <key>ProgramArguments</key>
    <array>
        <string>/{DBus install path}/bin/dbus-daemon</string>
        <string>--nofork</string>
        <string>--session</string>
    </array>

    <key>Sockets</key>
    <dict>
        <key>unix_domain_listener</key>
        <dict>
            <key>SecureSocketWithKey</key>
            <string>DBUS_LAUNCHD_SESSION_BUS_SOCKET</string>
        </dict>
    </dict>
</dict>
</plist>

Once the daemon is successfully launched by launchd, the socket will be provided in DBUS_LAUNCHD_SESSION_BUS_SOCKET env of launchd.

We can get it with the following command:
launchctl getenv DBUS_LAUNCHD_SESSION_BUS_SOCKET

Current solution in KDE Connect

KDE Connect urgently needs DBus to make communication between kdeconnectd and kdeconnect-indicator or other components possible.

First try

Currently, we deliver dbus-daemon in the package, and run

./Contents/MacOS/dbus-daemon --config-file=./Contents/Resources/dbus-1/session.conf --print-address --nofork --address=unix:tmpdir=/tmp

--address=unix:tmpdir=/tmp provides a base directory to store a random unix socket descriptor. So we can have several instances at the same time, with different addresses.

--print-address can let dbus-daemon write its generated, real address into standard output.

Then we redirect the output of dbus-daemon to
KdeConnectConfig::instance()->privateDBusAddressPath(). Normally, it should be $HOME/Library/Preferences/kdeconnect/private_dbus_address. For example, the address in it is unix:path=/tmp/dbus-K0TrkEKiEB,guid=27b519a52f4f9abdcb8848165d3733a6.

Therefore, our program can access this file to get the real DBus address, and use another function in QDBus to connect to it:
QDBusConnection::connectToBus(KdeConnectConfig::instance()->privateDBusAddress(), QStringLiteral(KDECONNECT_PRIVATE_DBUS_NAME));

We redirect all QDBusConnection::sessionBus to QDBusConnection::connectToBus to connect to our own DBus.

Fake a session DBus

With such a solution, kdeconnectd and kdeconnect-indicator work well together. But in KDE Frameworks, there are lots of components which use QDBusConnection::sessionBus rather than QDBusConnection::connectToBus. We cannot change all of them.

Then I came up with an idea: try to fake a session bus on macOS.

To hack and validate, I tried to launch a dbus-daemon using /tmp/dbus-K0TrkEKiEB as the address, and then typed this in my terminal:
launchctl setenv DBUS_LAUNCHD_SESSION_BUS_SOCKET /tmp/dbus-K0TrkEKiEB

Then I launched dbus-monitor --session. It did connect to the bus that I launched.

And then, any QDBusConnection::sessionBus call can establish a stable connection to the faked session bus. So components in KDE Frameworks can use the same session bus as well.

To implement it in KDE Connect, after starting dbus-daemon, I read the file content, filter the socket address, and call launchctl to set the DBUS_LAUNCHD_SESSION_BUS_SOCKET environment variable.

// Set launchctl env
QString privateDBusAddress = KdeConnectConfig::instance()->privateDBusAddress();
QRegularExpressionMatch path;
if (privateDBusAddress.contains(QRegularExpression(
        QStringLiteral("path=(?<path>/tmp/dbus-[A-Za-z0-9]+)")
    ), &path)) {
    qCDebug(KDECONNECT_CORE) << "DBus address: " << path.captured(QStringLiteral("path"));
    QProcess setLaunchdDBusEnv;
    setLaunchdDBusEnv.setProgram(QStringLiteral("launchctl"));
    setLaunchdDBusEnv.setArguments({
        QStringLiteral("setenv"),
        QStringLiteral(KDECONNECT_SESSION_DBUS_LAUNCHD_ENV),
        path.captured(QStringLiteral("path"))
    });
    setLaunchdDBusEnv.start();
    setLaunchdDBusEnv.waitForFinished();
} else {
    qCDebug(KDECONNECT_CORE) << "Cannot get dbus address";
}

Then everything works!

Possible improvement
  1. Since we can directly use the session bus, the redirect from QDBusConnection::sessionBus to QDBusConnection::connectToBus is not necessary anymore. Everyone can connect to it conveniently.
  2. Each time we launch kdeconnectd, a new dbus-daemon is launched and the environment in launchctl is overwritten. To improve this, we might detect whether there is already an available dbus-daemon by testing the connectivity of the returned QDBusConnection::sessionBus (see the sketch after this list). This might be done by a bootstrap script.
  3. It will be really nice if we can have a unified way for all KDE Applications on macOS.
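The check mentioned in item 2 could be as simple as the following sketch (assuming the launchctl environment has already been set up, so QDBus can find the socket on its own):

#include <QDBusConnection>
#include <QDebug>

// Returns true if a usable session bus is already reachable, in which case
// there is no need to spawn another dbus-daemon instance.
bool sessionBusAvailable()
{
    const QDBusConnection bus = QDBusConnection::sessionBus();
    if (!bus.isConnected()) {
        qDebug() << "No session bus yet:" << bus.lastError().message();
        return false;
    }
    return true;
}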
Conclusion

I’m looking forward to a general DBus solution for all KDE applications :)

Improved rendering of mathematical expressions in Cantor

Tuesday 23rd of July 2019 09:59:08 PM
Hello everyone!

In the previous post I mentioned that the rendering of mathematical expressions in Cantor has bad performance. This heavily and negatively influences the user experience. With my recent code changes I addressed this problem and it should be solved now. In this blog post I want to provide some details on what was done.

First, I want to show some numbers proving the performance improvements. For example, loading the notebook "Rigid-body transformations in a plane (2D)" - one of the notebooks I’m using for testing - took 15.9 seconds (this number and all other numbers mentioned in the following are average values of 5 consecutive measurements). With the new implementation it takes only 4.06 seconds. And this acceleration comes without losing render quality.

This is an example of how the new renderer looks compared with the Jupyter renderer (as you can see, Cantor doesn't show images from the web in Markdown entries yet, but I will fix that soon).




I did further measurements by executing all the tests I wrote for the Jupyter import, which cover several Jupyter notebooks. Here are the results:
  • Without math rendering - 7.75 seconds.
  • New implementation - 14.014 seconds.
  • Old implementation - 41.296 seconds.
To quickly summarize, we get an average of 535% performance improvement. This result depends on the number of available cores, and I'll explain below why.

To get these results I solved two main problems of math rendering in Cantor.
First, I changed the code for the LaTeX renderer. In the old implementation the rendering process consisted of the following steps:
  1. create a TeX document using a page template and the code provided by the user.
  2. run the latex executable on the produced TEX file to generate a DVI file.
  3. run the dvips executable to convert the DVI file to an EPS file.
  4. convert the produced EPS file to a QImage using the libspectre library.
After these four steps the produced QImage is shown in Cantor’s worksheet (QGraphicsScene/View). As you see, the overall chain of steps to get the image out of a mathematical expression is quite long - there are several steps where time is spent. In total, for a usual mathematical expression these operations take ~500 ms, where the first three steps take 300 ms and the last step takes 200 ms. The complexity and the size of the mathematical expressions have a negligible impact on the numbers shown above. Also, the time spent in Cantor for expressions of other types is relatively small. So, for example, if you have a notebook with 20 different mathematical expressions and some other entries of other types, Cantor will load the project in ca. 20*500 ms = 10 s.

I reduced this chain to three elements by merging steps two and three. This was achieved by using pdflatex as the LaTeX engine, which produces a PDF file directly out of the TEX file. Furthermore, I replaced the libspectre library with the Poppler PDF rendering library. This brought the overall time down to 330 ms, with the pdflatex process taking 300 ms and the rendering in Poppler (converting PDF to QImage) taking only 30 ms. With this, the example notebook with 20 mathematical expressions mentioned above renders in only 6.6 seconds. In this new chain, the LaTeX process is the bottleneck, and I’m looking for potential acceleration here, but so far I haven’t found any "magic" parameters which would help to reduce the time spent in LaTeX rendering.
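
To illustrate the new chain, here is a rough sketch (not Cantor’s actual code; file names, output path and resolution are illustrative) of rendering a formula by running pdflatex and then rasterizing the resulting PDF with Poppler’s Qt bindings:

#include <QImage>
#include <QProcess>
#include <QScopedPointer>
#include <QString>
#include <poppler-qt5.h>

// Sketch of the pdflatex + Poppler chain described above.
QImage renderFormula(const QString &texFile)
{
    // Steps 1+2 merged: pdflatex produces a PDF directly from the TeX source.
    QProcess latex;
    latex.start(QStringLiteral("pdflatex"),
                {QStringLiteral("-interaction=batchmode"), texFile});
    latex.waitForFinished();

    // Step 3: render the first (and only) PDF page to a QImage via Poppler.
    QScopedPointer<Poppler::Document> doc(
        Poppler::Document::load(QStringLiteral("formula.pdf"))); // assumed output name
    if (!doc || doc->isLocked())
        return QImage();
    QScopedPointer<Poppler::Page> page(doc->page(0));
    return page ? page->renderToImage(300.0, 300.0) : QImage(); // ~300 dpi
}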

Despite this progress, loading somewhat bigger documents would still hurt in Cantor. For example, for a project having 100 formulas, opening the file would take ca. 33 seconds.

The problem here is that the rendering process is a blocking and non-parallelized operation - only one mathematical expression is processed at a time, and the UI is blocked for the whole processing time. Obviously, this behaviour is unsatisfactory and under-utilizes modern multi-core CPUs. So I decided to run the rendering in many parallel tasks asynchronously, without blocking the main GUI thread. Fortunately, Qt helps a lot here with its classes QThreadPool, managing the pool of threads, and QRunnable, providing an interface to define "tasks" that will be executed in parallel.

Now, when opening a project, Cantor creates a render task for every mathematical expression in every Markdown entry, sends the task to the thread pool and continues with the processing of the entries in the document. Once such a task has finished, the result is shown in Cantor's worksheet. With this, a further good performance improvement is achieved. Clearly, the more cores you have, the faster the processing will be. Of course, if your computer only supports a small number of hardware threads, you won't notice a huge difference. But still, you should see an improvement compared to the old single-threaded implementation in Cantor.
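
A minimal sketch of such a task, reusing the renderFormula() helper from the sketch above (the class name and callback are illustrative, not Cantor's actual code), could look like this:

#include <QImage>
#include <QRunnable>
#include <QString>
#include <QThreadPool>
#include <functional>

// One render task per mathematical expression; the heavy work runs on a
// worker thread from Qt's global thread pool, so the GUI stays responsive.
class MathRenderTask : public QRunnable
{
public:
    MathRenderTask(const QString &texFile,
                   std::function<void(const QImage &)> onFinished)
        : m_texFile(texFile), m_onFinished(std::move(onFinished))
    {
        setAutoDelete(true); // the pool deletes the task after run() returns
    }

    void run() override
    {
        // pdflatex + Poppler (see the sketch above) runs off the GUI thread.
        const QImage image = renderFormula(m_texFile);
        m_onFinished(image); // hand the result back, e.g. queued to the GUI thread
    }

private:
    QString m_texFile;
    std::function<void(const QImage &)> m_onFinished;
};

// For every embedded expression found while loading a notebook, one would
// (in this sketch) write it to a .tex file and submit one task to the pool:
//   QThreadPool::globalInstance()->start(new MathRenderTask(texFile, showResult));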

For a notebook comparable in size to the "Rigid-body transformations in a plane (2D)" project, which has 109 mathematical expressions, loading the notebook now takes a reasonable and acceptable time on my hardware (I have 8 physical cores in the CPU, which is why the render acceleration is so big). And, thanks to the asynchronous processing, the user can interact with the notebook even while the rendering of the mathematical expressions is still in progress.

Since my previous post, not only has the math renderer changed, there is also a huge change in the Markdown support - Cantor finally handles embedded mathematical expressions, like $...$ and $$...$$, in accordance with the Markdown syntax. In the next blog post I'll describe how this works.

Day 58

Tuesday 23rd of July 2019 03:05:40 PM

Since the last update, I worked on the Khipu interface and created some models to manage the information on the screen. Currently, Khipu looks like this:

The interface is not finished and I’ll leave the design to the end, because there’s a lot to do in the back end now.
The remove button works, but rename hasn’t been implemented yet.
I made a SpaceModel that manages the spaces, and a PlotModel that manages the information in the black rectangle in the menu; a rough sketch of what such a model can look like follows below.
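
As a rough sketch (assumed names and roles, not Khipu’s actual code), a minimal SpaceModel based on QAbstractListModel that exposes the space names to QML could look like this:

#include <QAbstractListModel>
#include <QByteArray>
#include <QHash>
#include <QStringList>
#include <QVariant>

// Minimal sketch of a list model holding the created spaces; the QML side
// can use the "name" role in its delegates, and the remove button would
// call removeSpace().
class SpaceModel : public QAbstractListModel
{
public:
    enum Roles { NameRole = Qt::UserRole + 1 };

    int rowCount(const QModelIndex &parent = QModelIndex()) const override
    {
        return parent.isValid() ? 0 : m_names.size();
    }

    QVariant data(const QModelIndex &index, int role) const override
    {
        if (!index.isValid() || role != NameRole)
            return QVariant();
        return m_names.at(index.row());
    }

    QHash<int, QByteArray> roleNames() const override
    {
        return { { NameRole, QByteArrayLiteral("name") } };
    }

    void addSpace(const QString &name)
    {
        beginInsertRows(QModelIndex(), m_names.size(), m_names.size());
        m_names.append(name);
        endInsertRows();
    }

    void removeSpace(int row)
    {
        beginRemoveRows(QModelIndex(), row, row);
        m_names.removeAt(row);
        endRemoveRows();
    }

private:
    QStringList m_names;
};
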
I started to link my code with Analitza, the KDE mathematics library, using its QML components such as Graph2D and Graph3D to show these interactive spaces; the Expression class holds the user input (a mathematical function such as “y = x**2”). Now I’m stuck trying to learn about the AnalitzaPlot library - I need to understand how it works in order to show functions in these spaces. I’m using KAlgebra Mobile, another KDE mathematical application, as a reference for my code.

Kate LSP Status – July 22

Monday 22nd of July 2019 08:16:00 PM

After my series of LSP client posts, I got the question: What does this actually do? And why should I like this or help with it?

For the basic question: What the heck is the Language Server Protocol (LSP), I think my first post can help. Or, for more details, just head over to the official what/why/… page.

But rather than describing why it is nice, I can just show the stuff in action. Below is a video that shows the features that currently work with our master branch. It is shown using the build directory of Kate itself.

To get a usable build directory, I build my stuff locally with kdesrc-build; the only extra config I have in the global section of my .kdesrc-buildrc is:

cmake-options -DCMAKE_BUILD_TYPE=RelWithDebInfo -G "Kate - Unix Makefiles" -DCMAKE_EXPORT_COMPILE_COMMANDS=ON

This will auto-generate the needed .kateproject files for the Kate project plugin and the compile_commands.json for clangd (the LSP server for C/C++ that the plugin uses).

If you manually build your stuff with cmake, you can just add the

-G "Kate - Unix Makefiles" -DCMAKE_EXPORT_COMPILE_COMMANDS=ON

parts to your cmake call. If you use ninja and not make, just use

-G "Kate - Ninja" -DCMAKE_EXPORT_COMPILE_COMMANDS=ON

Then, let’s see what you can do, once you are in a prepared build directory and have a master version of Kate in your PATH.

I hope the quality is acceptable; that is my first try in a long time at doing a screencast ;)

As you can see, this is already in a usable state, at least for C/C++ in combination with clangd.

For details on how to build Kate master with its plugins, please take a look at this guide.

If you want to start to hack on the plugin, you find it in the kate.git, addons/lspclient.

Feel welcome to show up on kwrite-devel@kde.org and help out! All development discussions regarding this plugin happen there.

If you are already familiar with Phabricator, post some patch directly at KDE’s Phabricator instance.

KDE ISO Image Writer – GSoC Phase 2

Monday 22nd of July 2019 07:00:39 PM

As mentioned in my previous blog post, the new user interface was functional on Windows. However, on Linux the application had to be run as root to be able to write an ISO image to a USB flash drive.

KAuth

The original user interface used KAuth to write the ISO image without having to run the entire application with root privileges. KAuth is a framework that allows performing privilege elevation on restricted portions of code. In order to run an action with administrator privileges without having to run the entire application as an administrator, an additional binary (the KAuth helper), which is shipped alongside the main application binary, performs the actions that require elevated privileges. This approach allows privilege escalation for specific portions of code without granting elevated privileges to code that does not need them. After integrating the existing KAuth helper into the new user interface, the application was able to write ISO images by asking for authorisation when required.
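
For illustration, a minimal sketch of calling such a KAuth action from the application side could look like the following (the action and helper IDs, argument names and function name are illustrative, not the ones KDE ISO Image Writer actually registers):

#include <KAuthAction>
#include <KAuthExecuteJob>
#include <QString>

// Ask the KAuth helper to write the ISO image to the target device.
// This is only a sketch; the real helper action and its arguments differ.
bool writeImageWithElevatedPrivileges(const QString &isoPath, const QString &device)
{
    KAuth::Action action(QStringLiteral("org.kde.example.isowriter.write"));
    action.setHelperId(QStringLiteral("org.kde.example.isowriter"));
    action.addArgument(QStringLiteral("iso"), isoPath);
    action.addArgument(QStringLiteral("device"), device);

    // execute() hands the request to the helper binary; the system's
    // authentication agent asks the user for authorisation if needed.
    KAuth::ExecuteJob *job = action.execute();
    return job->exec(); // blocks until the helper action has finished
}

In KAuth's model, the helper side then receives the arguments as a QVariantMap and returns a KAuth::ActionReply indicating success or failure.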

Finalising The User Interface

In addition to implementing the KAuth back-end, I polished the user interface and implemented additional features such as drag-and-drop support, which allows the user to select an ISO image by simply dropping a file onto the application window.

KDevelop 5.4 beta 1 released

Monday 22nd of July 2019 04:30:00 PM

KDevelop 5.4 beta 1 released

We are happy to announce the release of KDevelop 5.4 Beta 1!

5.4, as a new feature version of KDevelop, will among other things add initial support for projects using the Meson build system and has the Clang-Tidy support plugin merged into the set of built-in plugins. It also brings 11 months of small improvements across the application. Full details will be given in the announcement of the KDevelop 5.4.0 release, which is currently scheduled for about two weeks from now.

Downloads

You can find the Linux AppImage (learn about AppImage) here: KDevelop 5.4 beta 1 AppImage (64-bit)
Download the file and make it executable (chmod +x KDevelop-5.3.80-x86_64.AppImage), then run it (./KDevelop-5.3.80-x86_64.AppImage).

The source code can be found here: KDevelop 5.4 beta 1 source code

Windows installers are currently not offered; we are looking for someone interested in taking care of that.

kossebau Mon, 2019/07/22 - 18:30

KDE Connect sprint 2019

Monday 22nd of July 2019 01:31:00 PM

This blog is about KDE Connect, a project to communicate across all your devices. For example, with KDE Connect you can receive your phone notifications on your desktop computer, control music playing on your desktop from your phone, or use your phone as a remote control for your desktop.

From Friday the 19th to Sunday the 21st, we had the KDE Connect sprint. It's always a nice opportunity to meet the others working on KDE Connect, since we usually only talk to each other online.

Lots of activity at the KDE Connect sprint! (Also some KWin & Onboarding sprinters)

On arrival on Friday, we immediately got our first issue to fix: the Wi-Fi at the sprint blocks UDP broadcasts, which means KDE Connect couldn't find any devices. Adding the IP addresses manually makes it work again, but it's a good reminder to actually improve this situation.

On Friday and Saturday, we had a lot of discussions about projects to improve KDE Connect architecturally. Those are:

  • Not keeping TCP connections open to all reachable devices (a real problem when there are something like 100 devices)
  • Making KDE Connect better on Wayland
  • Google locking down Android more and more, and how to handle that
  • Using mDNS (a.k.a. Avahi/Zeroconf/Bonjour) to discover KDE Connect devices
  • Improving the buggy Bluetooth backend

Specifically on Bluetooth: it is very hard to make it work on all devices. We're slowly making some progress, though.

Besides that, we worked on lots of smaller features and lots of bugfixes, of course. For example, I changed the Android app to not create a new thread for every single thing we send. This matters a lot for the mousepad plugin, which sends many small packets.

In short, the sprint was great! Thanks SUSE for hosting this sprint!

More in Tux Machines

LWN: Spectre, Linux and Debian Development

  • Grand Schemozzle: Spectre continues to haunt

    The Spectre v1 hardware vulnerability is often characterized as allowing array bounds checks to be bypassed via speculative execution. While that is true, it is not the full extent of the shenanigans allowed by this particular class of vulnerabilities. For a demonstration of that fact, one need look no further than the "SWAPGS vulnerability" known as CVE-2019-1125 to the wider world or as "Grand Schemozzle" to the select group of developers who addressed it in the Linux kernel.

    Segments are mostly an architectural relic from the earliest days of x86; to a great extent, they did not survive into the 64-bit era. That said, a few segments still exist for specific tasks; these include FS and GS. The most common use for GS in current Linux systems is for thread-local or CPU-local storage; in the kernel, the GS segment points into the per-CPU data area. User space is allowed to make its own use of GS; the arch_prctl() system call can be used to change its value.

    As one might expect, the kernel needs to take care to use its own GS pointer rather than something that user space came up with. The x86 architecture obligingly provides an instruction, SWAPGS, to make that relatively easy. On entry into the kernel, a SWAPGS instruction will exchange the current GS segment pointer with a known value (which is kept in a model-specific register); executing SWAPGS again before returning to user space will restore the user-space value. Some carefully placed SWAPGS instructions will thus prevent the kernel from ever running with anything other than its own GS pointer. Or so one would think.

  • Long-term get_user_pages() and truncate(): solved at last?

    Technologies like RDMA benefit from the ability to map file-backed pages into memory. This benefit extends to persistent-memory devices, where the backing store for the file can be mapped directly without the need to go through the kernel's page cache. There is a fundamental conflict, though, between mapping a file's backing store directly and letting the filesystem code modify that file's on-disk layout, especially when the mapping is held in place for a long time (as RDMA is wont to do). The problem seems intractable, but there may yet be a solution in the form of this patch set (marked "V1,000,002") from Ira Weiny.

    The problems raised by the intersection of mapping a file (via get_user_pages()), persistent memory, and layout changes by the filesystem were the topic of a contentious session at the 2019 Linux Storage, Filesystem, and Memory-Management Summit. The core question can be reduced to this: what should happen if one process calls truncate() while another has an active get_user_pages() mapping that pins some or all of that file's pages? If the filesystem actually truncates the file while leaving the pages mapped, data corruption will certainly ensue. The options discussed in the session were to either fail the truncate() call or to revoke the mapping, causing the process that mapped the pages to receive a SIGBUS signal if it tries to access them afterward. There were passionate proponents for both options, and no conclusion was reached.

    Weiny's new patch set resolves the question by causing an operation like truncate() to fail if long-term mappings exist on the file in question. But it also requires user space to jump through some hoops before such mappings can be created in the first place. This approach comes from the conclusion that, in the real world, there is no rational use case where somebody might want to truncate a file that has been pinned into place for use with RDMA, so there is no reason to make that operation work. There is ample reason, though, for preventing filesystem corruption and for informing an application that gets into such a situation that it has done something wrong.

  • Hardening the "file" utility for Debian

    In addition, he had already encountered problems with file running in environments with non-standard libraries that were loaded using the LD_PRELOAD environment variable. Those libraries can (and do) make system calls that the regular file binary does not make; the system calls were disallowed by the seccomp() filter.

    Building a Debian package often uses FakeRoot (or fakeroot) to run commands in a way that appears that they have root privileges for filesystem operations—without actually granting any extra privileges. That is done so that tarballs and the like can be created containing files with owners other than the user ID running the Debian packaging tools, for example. Fakeroot maintains a mapping of the "changes" made to owners, groups, and permissions for files so that it can report those to other tools that access them. It does so by interposing a library ahead of the GNU C library (glibc) to intercept file operations.

    In order to do its job, fakeroot spawns a daemon (faked) that is used to maintain the state of the changes that programs make inside of the fakeroot. The libfakeroot library that is loaded with LD_PRELOAD will then communicate to the daemon via either System V (sysv) interprocess communication (IPC) calls or by using TCP/IP. Biedl referred to a bug report in his message, where Helmut Grohne had reported a problem with running file inside a fakeroot.

Flameshot is a brilliant screenshot tool for Linux

The default screenshot tool in Ubuntu is alright for basic snips, but if you want a really good one you need to install a third-party screenshot app. Shutter is probably my favorite, but I decided to give Flameshot a try. Packages are available for various distributions including Ubuntu, Arch, openSUSE and Debian. You can find installation instructions on the official project website. Read more

Android Leftovers

IBM/Red Hat and Intel Leftovers

  • Troubleshooting Red Hat OpenShift applications with throwaway containers

    Imagine this scenario: Your cool microservice works fine from your local machine but fails when deployed into your Red Hat OpenShift cluster. You cannot see anything wrong with the code or anything wrong in your services, configuration maps, secrets, and other resources. But, you know something is not right. How do you look at things from the same perspective as your containerized application? How do you compare the runtime environment from your local application with the one from your container? If you performed your due diligence, you wrote unit tests. There are no hard-coded configurations or hidden assumptions about the runtime environment. The cause should be related to the configuration your application receives inside OpenShift. Is it time to run your app under a step-by-step debugger or add tons of logging statements to your code? We’ll show how two features of the OpenShift command-line client can help: the oc run and oc debug commands.

  • What piece of advice had the greatest impact on your career?

    I love learning the what, why, and how of new open source projects, especially when they gain popularity in the DevOps space. Classification as a "DevOps technology" tends to mean scalable, collaborative systems that go across a broad range of challenges—from message bus to monitoring and back again. There is always something new to explore, install, spin up, and explore.

  • How DevOps is like auto racing

    When I talk about desired outcomes or answer a question about where to get started with any part of a DevOps initiative, I like to mention NASCAR or Formula 1 racing. Crew chiefs for these race teams have a goal: finish in the best place possible with the resources available while overcoming the adversity thrown at you. If the team feels capable, the goal gets moved up a series of levels to holding a trophy at the end of the race. To achieve their goals, race teams don’t think from start to finish; they flip the table to look at the race from the end goal to the beginning. They set a goal, a stretch goal, and then work backward from that goal to determine how to get there. Work is delegated to team members to push toward the objectives that will get the team to the desired outcome. [...] Race teams practice pit stops all week before the race. They do weight training and cardio programs to stay physically ready for the grueling conditions of race day. They are continually collaborating to address any issue that comes up. Software teams should also practice software releases often. If safety systems are in place and practice runs have been going well, they can release to production more frequently. Speed makes things safer in this mindset. It’s not about doing the “right” thing; it’s about addressing as many blockers to the desired outcome (goal) as possible and then collaborating and adjusting based on the real-time feedback that’s observed. Expecting anomalies and working to improve quality and minimize the impact of those anomalies is the expectation of everyone in a DevOps world.

  • Deep Learning Reference Stack v4.0 Now Available

    Artificial Intelligence (AI) continues to represent one of the biggest transformations underway, promising to impact everything from the devices we use to cloud technologies, and reshape infrastructure, even entire industries. Intel is committed to advancing the Deep Learning (DL) workloads that power AI by accelerating enterprise and ecosystem development. From our extensive work developing AI solutions, Intel understands how complex it is to create and deploy applications for deep learning workloads. That's why we developed an integrated Deep Learning Reference Stack, optimized for Intel Xeon Scalable processor and released the companion Data Analytics Reference Stack. Today, we're proud to announce the next Deep Learning Reference Stack release, incorporating customer feedback and delivering an enhanced user experience with support for expanded use cases.

  • Clear Linux Releases Deep Learning Reference Stack 4.0 For Better AI Performance

    Intel's Clear Linux team on Wednesday announced their Deep Learning Reference Stack 4.0 during the Linux Foundation's Open-Source Summit North America event taking place in San Diego. Clear Linux's Deep Learning Reference Stack continues to be engineered for showing off the most features and maximum performance for those interested in AI / deep learning and running on Intel Xeon Scalable CPUs. This optimized stack allows developers to more easily get going with a tuned deep learning stack that should already be offering near optimal performance.