Planet KDE

I was at the Libre Graphics Meeting 2019

Monday 3rd of June 2019 09:35:35 AM

I had a nice surprise last Monday: I learned that the city where I live, Saarbrücken (Germany), was hosting the 2019 edition of the Libre Graphics Meeting (LGM). So I took the opportunity to attend my first FOSS event. The event took place at the Hochschule der Bildenden Künste Saar from Wednesday 29.05 to Sunday 02.06.

I really enjoyed it: I met a lot of other Free Software contributors (not only developers) and discovered some nice programming and artistic projects.

There were some really impressive presentations and workshops.

Thursday 30.05.

Øyvind Kolås, maintainer of GEGL (GIMP's new ‘rendering engine’), presented how to use GEGL effects from the command line, and how the same commands can be used directly from GIMP. This is helpful when we want to automate a workflow.

In the afternoon, I discovered PraxisLIVE, an awesome live-coding IDE where you can create effects with Java and a graph editor, and see them applied instantly to, for example, a webcam stream or a music track.

Ana Isabel Carvalho and Ricardo Lafuente presented their past workshop in Porto, where the participants created pixel art fonts with git and GitLab CI.

Friday, 31.05.

On Friday, I took part in two workshops. The first was the GIMP one, where I met a lot of GIMP/GEGL developers. But it was more a development meeting than a workshop where I could get my hands dirty.

I also took part in the Inkscape workshop, where I learned about all the nice features coming in Inkscape 1.0 (a new alpha version was released during LGM 2019, and users are encouraged to report bugs and regressions). I also learned that Inkscape can be used to create nice woodwork:

The model is published on Thingiverse under CC BY-NC-SA 3.0.

After this productive day, most of the LGM participants went to the ‘Kneipentour’ (bar-hopping) and enjoyed some good Zwickel (the local beer).

Saturday, 01.06.

After last night it was a bit difficult to get up, but I managed to be only one minute late to Boudewijn Rempt's talk, “HDR Support in Krita”.

In the afternoon, I took part in the Paged.js workshop, where we were able to create a book layout with CSS and HTML. Paged.js could be interesting for generating nice, professional-looking KDE handbooks, because it only uses web standards (ones not yet implemented in any web browser), and we could generate the PDF from the already existing HTML version.

Sunday, 02.06.

On Sunday I took part in the Blender workshop, where Julian Eisel did an excellent job explaining the internals of how Blender's “DNA and RNA system” achieves great backward compatibility for .blend files and makes it painless to write UIs in Python almost directly connected to the DNA.


To summarize, LGM was a great event. I really enjoyed it, and I hope I will be able to attend the next edition in Rennes (France) and see all these nice people again.

Oh, and I now have more stickers on my laptop.

You can comment on this post on Mastodon.

Many thanks to Drew DeVault for proofreading this blog post.

Hello GSoC

Sunday 2nd of June 2019 12:48:31 PM

This blog post recaps the last year and how I ended up getting selected as a GSoC 2019 student with an awesome project in KDE. Read more in the blog post here.

KDE Usability & Productivity: Week 73

Sunday 2nd of June 2019 04:53:32 AM

Week 73 of the Usability & Productivity initiative is here! We have all sorts of cool stuff to announce, and eagle-eyed readers will see bits and pieces of Plasma 5.16’s new wallpaper “Ice Cold” in the background!

New Features

Bugfixes & Performance Improvements

User Interface Improvements

Next week, your name could be in this list! Not sure how? Just ask! I’ve helped mentor a number of new contributors recently and I’d love to help you, too! You can also check out how you can help and be a part of something that really matters. You don’t have to already be a programmer. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!

If you find KDE software useful, consider making a donation to the KDE e.V. foundation.

April/May in KDE Itinerary

Saturday 1st of June 2019 08:00:00 AM

A lot has happened again around KDE Itinerary since the last two-month summary. A particular focus area at the moment is achieving “Akademy readiness”, that is, being able to properly support trips to KDE Akademy in Milan in early September; understanding the tickets of the Italian national railway is a first step in that direction.

New Features

The timeline view in KDE Itinerary now highlights the current element(s), to make it easier to find the relevant information. Active elements also got an “are we there yet?” indicator, a small bar showing the progress of the current leg of a trip, taking live data into account.

Trip element being highlighted and showing a progress indicator.

Another clearly visible addition can be found in the trip group summary elements in the timeline. Besides expanding or collapsing the trip, these elements now also show information concerning the entire trip when available, such as the weather forecast, or any power plug incompatibility you might encounter during the trip.

Trip group summary showing weather forecast for the entire trip. Trip group summary showing power plug compatibility warnings.

Less visible but much more relevant for “Akademy readiness” was adding support for Trenitalia tickets. That required some changes and additions to how we deal with barcodes, as well as an (ongoing) effort to decode the undocumented binary codes used on those tickets. More details can be found in a recent post on this subject.

Infrastructure Work

A lot has also happened behind the scenes:

  • The ongoing effort to promote KContacts and KCalCore yields some improvements we benefit from directly as well, such as the Unicode diacritic normalization applied during the country name detection in KContacts (reducing the database size and detecting country names also with slight spelling variations) or the refactoring of KCalCore::Person and KCalCore::Attendee (which will make those types easily accessible by extractor scripts).
  • The train reservation data model now contains the booked class, which is particularly useful information when you don’t have a seat reservation and need to pick the right compartment.
  • The RCT2 extractor (relevant e.g. for DSB, NS, ÖBB, SBB) got support for more variations of seat reservations and more importantly now preserves the ticket token for ticket validation with the mobile app.
  • The train station knowledge database is now also indexed by UIC station codes, which became necessary to support the Trenitalia tickets.
  • Extractor scripts got a new utility class for dealing with unaligned binary data in barcodes.

We also finally found the so-far elusive mapping table for the station identifiers used in SNCF barcodes, provided by Trainline as Open Data. This has yet to find its way into Wikidata though, together with more UIC station codes for train stations in Italy. Help welcome :)

Performance Optimizations

Keeping an eye on performance while the system becomes more complex is always a good idea, and a few things have been addressed in this area too:

  • The barcode decoder was so far exposed more or less directly to the data extractors, possibly resulting in the expensive decoding work being performed twice on the same document, e.g. when both the generic extractor and one or more custom extractors processed a PDF document. Additionally, each of those applied its own heuristics and optimizations to avoid expensive decoding attempts where they were unlikely to succeed. Those optimizations have now all moved into the barcode decoder directly, together with a positive and negative decoding result cache. That simplifies the code using it, and it speeds up extraction of PDF documents without a context (such as a sender address) by about 15%.
  • Kirigami’s theme color change compression got further optimized, which in the case of KDE Itinerary avoids the creation of a few hundred QTimer objects.
  • The compiled-in knowledge database got a more space-efficient structure for storing unaligned numeric values, cutting down the size of the 24bit wide IBNR and UIC station code indexes by 25%.
Fixes & Improvements

There’s plenty of smaller changes that are noteworthy too of course:

  • We fixed a corner case in KF5::Prison’s Aztec encoder that UIC 918.3 tickets can trigger, producing invalid barcodes.
  • The data extractors for Brussels Airlines, Deutsche Bahn and SNCF got fixes for various booking variants and corner cases.
  • Network coverage for KPublicTransport increased, including operators in Ireland, Poland, Sweden, parts of Australia and more areas in Germany.
  • More usage of emoji icons in KDE Itinerary got replaced by “real” icons, which fixes rendering glitches on Android and produces a more consistent look there.
  • Lock inhibition during barcode scanning now also works on Linux.
  • PkPass files are now correctly detected on Android again when opened as a content: URL.
  • The current trip group in the KDE Itinerary timeline is now always expanded, which fixes various confusions in the app when “now” or “today” don’t exist due to being in a collapsed time range.
  • Multi-day event reservations are now split in begin and end elements in the timeline as already done for hotel bookings.
  • Rental car bookings with a drop-off location different from the pick-up location are now treated as location changes in the timeline, which is relevant e.g. for the weather forecasts.
  • Extracting times from PkPass boarding passes now converts those to the correct timezone.

A big thanks to everyone who donated test data again, this continues to be essential for improving the data extraction.

If you want to help in ways other than donating test samples, see our Phabricator workboard for what’s on the to-do list, for coordinating work and for collecting ideas. For questions and suggestions, please feel free to join us on the KDE PIM mailing list or in the #kontact channel on Matrix or Freenode.

Organizing time on Plasma Mobile

Friday 31st of May 2019 06:00:00 AM

About a year ago the Phabricator tasks of Plasma Mobile were extensively revamped. We tried to make the objective of each task clear, providing helpful resources and facilitating onboarding. Looking at the features needed to reach the “Plasma Mobile 1.0” milestone, the calendar application was sticking out. So, Calindori was born (even though this name was coined some months later).

Built on top of Qt Quick and Kirigami, and following (or trying to follow) the KDE human interface guidelines, the whole point of Calindori is to help users manage their time. Through a clean user interface, it aims to offer users an intuitive way to accomplish their tasks.

Calindori home and events pages

For the time being, Calindori provides the basic calendar functionality: you can check previous and future dates and manage events and to-dos. Tasks and events of different domains (e.g. personal, business) can be kept in different calendars, since multiple calendars are supported. Import functionality has also been added so as to make the transition easier for new users.

You may test Calindori at the moment either by building the 1.0 release from source or by installing the flatpak bundle. A Plasma Mobile phone is not a requirement for testing Calindori; it runs perfectly on Plasma or any other Linux desktop environment. It has been designed with the needs of mobile users in mind, but the great advantage of using a framework like Kirigami is that the user interface adapts to desktop environments with little or no additional development effort. It would also be very helpful to report issues and provide feedback on the GitLab repository.

Behind the scenes, the iCalendar standard is followed, as implemented by the KDE KCalCore library. KDE/Qt developers may also find the date and time pickers included in the application interesting. These components may be reviewed, enhanced and, why not, find their way into the KDE Frameworks.

Calindori date and time pickers

Looking to the future, support for repeating events and reminders are the first tasks that should be handled. Then CalDAV should also be supported, enabling users to synchronize their online calendars (e.g. Nextcloud). Nevertheless, that’s a task that should be discussed with the rest of the Plasma Mobile team. The next KDE Akademy in Milan in September will be a great opportunity.

The Plasma Mobile team has made significant progress during the last months, trying hard to offer a KDE Plasma experience to mobile users. Even though it is not a 100% complete mobile platform, the missing parts are slowly being added to the software stack. But a lot of interesting tasks are still available, waiting for people willing to help the Plasma Mobile ecosystem grow.

In the next months a set of devices (like the PinePhone and Librem 5) are expected, enabling Linux distributions to run on real mobile hardware. This will significantly change the free software mobile environment; it will open the way for more privacy-friendly devices that put users in the driver’s seat, without spying on them or treating them like products. I believe it is the perfect time to get involved with projects like Plasma Mobile, helping to create an open mobile platform that will bring a KDE breeze to mobile phones.

KStars v3.2.3 is Released!

Thursday 30th of May 2019 09:38:16 PM
Another minor release of the 3.2.X series is out: KStars v3.2.3 for Windows/Mac/Linux. This will probably be the last minor release of the 3.2.X series, with 3.3.0 now entering development.

This release contains a few minor bug fixes and some convenient changes that were requested by our users.

The Sky Map cursor can now be configured: the default X icon can be changed to either the arrow or the circle mouse cursor by going to Configure KStars --> Advanced --> Look & Feel.

It is also possible now to make a left click immediately snap to the object under the cursor. The default behavior is to double-click on an object to focus it, but if Left Click Selects Object is checked, then any left click snaps to the object right away.
Another minor change was to include the profile name directly in the window title for Ekos, to make it easy to remember which profile is currently running, in addition to a few icon changes.

A race condition when using a guide camera was fixed, thanks to Bug #407952 filed by Kevin Ross. Additionally, Wolfgang Reissenberger improved the estimated time of scheduler jobs when there are repeated jobs in the sequence.

Using std::unique_ptr with Qt

Thursday 30th of May 2019 07:29:56 PM
Qt memory handling

Qt has a well-established way to handle memory management: any QObject-based instance can be made a child of another QObject instance. When the parent instance is deleted, it deletes all its children. Simple and efficient.

When a Qt method takes a QObject pointer, one can rely on the documentation to know whether the function takes ownership of the pointer. The same goes for functions returning a QObject pointer.

What the rest of the world does

This is very Qt specific. In the rest of the C++ world, object ownership is more often managed through smart pointers like std::unique_ptr and std::shared_ptr.

I am used to the Qt way, but I like the harder-to-misuse and self-documenting aspects of the unique_ptr way. Look at this simple function:

Engine* createEngine();

With this signature we have no way to know whether we are expected to delete the new engine. And the compiler cannot help us. This code builds:

{
    Engine* engine = createEngine();
    engine->start();
    // Memory leak!
}

With this signature, on the other hand:

std::unique_ptr<Engine> createEngine();

It is clear and hard to ignore that ownership is passed to the caller. This won't build:

{
    Engine* engine = createEngine();
    engine->start();
}

But this builds and does not leak:

{
    std::unique_ptr<Engine> engine = createEngine();
    engine->start();
    // No leak, Engine instance is deleted when going out of scope
}

(And we can use auto to replace the lengthy std::unique_ptr<Engine> declaration)

We can also use this for "sink functions": declaring a function argument as std::unique_ptr makes it unambiguous that the function takes ownership of it.

Using std::unique_ptr for member variables brings similar benefits, but one point I really like is how it makes the class definition more self-documenting. Consider this class:

class Car {
    /*...*/
    World* mWorld;
    Engine* mEngine;
};

Does Car own mWorld, mEngine, both, or neither? We can guess, but we can’t really know. Only the class documentation or our knowledge of the code base could tell us that Car owns mEngine but does not own mWorld.

On the other hand, if we work on a code base where all owned objects are std::unique_ptr and all "borrowed" objects are raw pointers, then this class would be declared like this:

class Car {
    /*...*/
    World* mWorld;
    std::unique_ptr<Engine> mEngine;
};

This is more expressive.

Forward declarations

We need to be careful with forward declarations. The following code won't build:


// Car.h
#include <memory>

class Engine;

class Car {
public:
    Car();
private:
    std::unique_ptr<Engine> mEngine;
};


#include "Car.h" #include "Engine.h" Car::Car() : mEngine(std::make_unique<Engine>()) { }


#include "Car.h" int main() { Car car; return 0; }

The compiler fails to build "main.cpp". It complains it cannot delete mEngine because the Engine class is incomplete. This happens because we have not declared a destructor in Car, so the compiler tries to generate one when building "main.cpp", and since "main.cpp" does not include "Engine.h", the Engine class is unknown there.

To solve this we need to declare Car's destructor and tell the compiler to generate its implementation in Car's implementation file:


// Car.h
#include <memory>

class Engine;

class Car {
public:
    Car();
    ~Car(); // <- Destructor declaration
private:
    std::unique_ptr<Engine> mEngine;
};


#include "Car.h" #include "Engine.h" Car::Car() : mEngine(std::make_unique<Engine>()) { } Car::~Car() = default; // <- Destructor "definition" Using std::unique_ptr with Qt code

I wanted to experiment with using unique_ptr instead of the Qt parent-child system in a real project, so I decided to do so on Lovi, a Qt-based log file viewer I am working on. It works out well, but there are a few pitfalls to be aware of.

Double deletions

If your class owns a QObject through a unique_ptr, be careful that the QObject's parent is not deleted before your class: the parent would delete your QObject, so when your class's destructor runs, unique_ptr will try to delete the already-deleted QObject.

This also happens if you use a unique_ptr for a QDialog with the Qt::WA_DeleteOnClose attribute set.

Get used to calling .get()

Another change compared to using raw pointers is that every time you pass the object to a method which takes a raw pointer, you need to call .get(). So for example connecting the Engine::started() signal to our Car instance would be done like this:

connect(mEngine.get(), &Engine::started, this, &Car::onEngineStarted);

This is a bit annoying but again, it makes it explicit that you are "lending" your object to another function.

What about QScopedPointer?

Qt comes with QScopedPointer, which is very similar to std::unique_ptr. You might already be using it. Why should you use unique_ptr instead?

The main difference between these two is that QScopedPointer lacks move semantics. It makes sense since it has been created to work with C++98, which does not have move semantics.

This means it is more cumbersome to implement sink functions with it. Here is an example.

Suppose we want to create a function to shred a car. The Car instance received by this function should not be usable once it has been called, so we want to make it a sink function, like this:

void shredCar(std::unique_ptr<Car> car) {
    // Shred that car
}

The compiler rightfully prevents us from calling shredCar() like this:

auto car = std::make_unique<Car>();
shredCar(car);

Instead we have to write:

auto car = std::make_unique<Car>();
shredCar(std::move(car));

This makes it explicit that car no longer points to a valid instance. Now let's write shredCar() using QScopedPointer instead:

void shredCar(QScopedPointer<Car> car) {
    // Shred that car
}

Since QScopedPointer does not support move, we can't write:

shredCar(std::move(car));

Instead we have to write this:

shredCar(QScopedPointer<Car>(car.take()));

which is less readable, and less efficient since we create a new temporary QScopedPointer.

Other than its name being more Qt-like, QScopedPointer has no advantages compared to std::unique_ptr.

Borrowed pointers

Following the guideline of using std::unique_ptr<> for pointers we own means that we should only use raw pointers for "borrowed" objects: objects owned by someone else, which have been passed to us (yes, this sounds very Rust like).

To make things even more explicit, I am contemplating the idea of creating a borrowed_ptr<T> pointer. This pointer would not do anything more than a raw pointer, but it would make it clear that the code using the pointer does not own the object.

Taking our Car example again, the class definition would look like this:

class Car {
    /*...*/
    borrowed_ptr<World> mWorld;
    std::unique_ptr<Engine> mEngine;
};

Such a pointer would make code more readable at the cost of verbosity. It could also be really useful when generating bindings for other languages. What do you think?


I believe using std::unique_ptr can help make your code more readable and robust. Consider using it instead of raw pointers, not only for local variables, but also for function arguments, return values and member variables.

And if you use QScopedPointer, you can switch to std::unique_ptr: it has a few advantages and no drawbacks.

Assistants -- copy, share, assignment

Thursday 30th of May 2019 08:54:20 AM

Over the last week I have been investigating Bug 361012, on the undo history of modifications to guides. But from the very beginning I mixed up the two terms “guides” and “assistants,” so I decided to work on both. The work on guides is a lot simpler and will not be covered here, though.

As I write this post, the master branch of Krita does not create any undo commands on the document for these operations. I first added undo commands for adding and removing assistants, which seemed the easiest. Editing them is a bit more difficult, as the dragging operations involve the movement of many “handles,” the movable round buttons that define the position of one or more assistants. The code on master implementing such actions is quite complicated and involves a great number of cases. It would be another great endeavour to put all these bunches of code into a KUndo2Command. But another approach, which I have experimented with and will be working on, immediately clears the clouds.

So I thought of the copy-on-write mechanism, and yes, why not? Though COW itself is not actually implemented for the guides, it does seem inspiring. I mean, we can just save a copy of all assistants and, when needed, restore it.

The main problem here is the handles. They are represented as shared pointers in individual assistants and may be shared between different ones (e.g. two perspective assistants share two corner handles and one side handle). When we take a clone of the list of assistants it is necessary to keep this kind of relationship. My solution is to use a QMap of pointers, which seems to coincide with the logic of exporting to XML, though I had yet to read that part of the code when writing mine, so I did not know about it. The logic is to check, for every handle, whether there is a mapping relationship in the map. If there is, we reuse the mapped handle; if not, we create a new one with the same position and record the relationship in our QMap.

But some display properties are not to be recorded in the undo history. Such properties include color, visibility, etc. To resolve this problem, I put these data into a shared pointer and, when cloning an assistant for undo/redo, we reuse that pointer. When we replace the assistant list with the recorded one, all the display properties remain, since the data are shared.

And for the next several weeks I will move on to the Snapshot Docker.

Linux perf and KCachegrind

Wednesday 29th of May 2019 09:59:11 AM

If you occasionally do performance profiling, as I do, you probably know Valgrind's Callgrind and the related UI, KCachegrind. While Callgrind is a pretty powerful tool, running it takes quite a while (not exactly fun with something as big as e.g. LibreOffice).

Recently I finally gave Linux perf a try. Not quite sure why I didn't use it before; IIRC when I tried it long ago, it was probably difficult to set up or something. Using perf record has very little overhead, but I wasn't exactly thrilled by perf report. I mean, it's a text UI, and it just gives a list of functions, so if I want to see anything close to a call graph, I have to manually expand one function, expand another function inside it, expand yet another function inside that, and so on. Not that it wouldn't work, but it doesn't compare to just looking at what KCachegrind shows.

While figuring out how to use perf, I watched a talk by Milian Wolff, and on one slide I noticed a mention of a Callgrind script. Of course I had to try it. It was a bit slow, but hey, I could finally look at perf results without it feeling like an effort. Well, and then I improved the part of the script that was slow, so I guess I've just put the effort elsewhere :).

And I thought this little script might be useful for others. After mailing Milian, it turned out he had just created the script as a proof of concept and wasn't interested in it anymore, instead developing Hotspot as a UI for perf. Fair enough, but I think I still prefer KCachegrind; I'm used to it, and I don't have to switch UIs when switching between perf and Callgrind. So, with his agreement, I've submitted the script to KCachegrind. If you find it useful, just download it and do something like:

$ perf record -g ...
$ perf script -s > perf.out
$ kcachegrind perf.out

Krita 4.2.0 is Out!

Wednesday 29th of May 2019 01:00:56 AM

Today we’re releasing Krita 4.2.0. This is a big release, with over a thousand bugs fixed and exciting new functionality like support for HDR displays.

Compared to the last beta, there have been over 30 bug fixes. New in Krita 4.2.0 is updated support for drawing tablets, support for HDR monitors on Windows, an improved color palette docker, scripting API for animation, color gamut masking, improved selection handling, much nicer handling of the interaction between opacity and flow and much, much, much more.

Dive into the release notes for a full overview!

And there’s also the lovely new splash image by Tyson Tan!

Join the Community and Help Your Fellow Artists

With millions of downloads every year (and that number is growing rapidly), Krita is clearly very popular. That’s great! But we’re running into the limits of what we can do to help people with questions and problems, and we need you all to help out by helping your fellow Krita artists on Reddit, Ask, the forum and all the other places!

Download

Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.


(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)


Note: the gmic-qt is not available on OSX.

Source code

md5sum

For all downloads:


Currently, .sig files aren’t available for this release because the maintainer is travelling. We will prepare those next week.

Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

Debugging Krita on Android

Tuesday 28th of May 2019 02:31:12 PM


Well, the easiest way is to use Android Studio.

Import the project into Android Studio as a Gradle project and build it. The Krita build will fail when run from Android Studio. To make it run successfully, we’ll have to manually provide the path to installPrefix or comment out the copyLibs dependency. After that, the project should build properly.

You might want to change the debug type to Native or Dual, as their Auto mode did not work for me. Open the C++ file in Android Studio and set a breakpoint. Click the bug icon, sit back and watch while Android Studio does all the magic for you.

And then it is the usual lldb (in Android Studio), or the GUI if that’s what you prefer.

Using command line:

Starting Android Studio takes a lot of time and memory. Then it builds, which takes an additional few minutes, so it really isn’t a good idea to use it for debugging every time the app crashes. So, here is the less time-consuming, if slightly more complex, method!

Assuming the app has been installed with the debug key, the first step is to launch it in debug mode. To do so:

# domain/(launcher activity or exported activity's class-path)
$ adb shell am start -D -n "org.kde.krita/"

Now the app should launch on the phone and show a Waiting for Debugger message. While it waits, open a terminal and enter $ adb shell, then look for lldb-server in /data/local/tmp/. If you have ever debugged an app through Android Studio, it should exist. If it does not, then launch Android Studio and run the app in debug mode… hahahaha.

Just kidding, push the file to that location.

$ adb push $ANDROID_SDK_ROOT/lldb/<version>/android/<abi>/lldb-server /data/local/tmp
$ adb shell chmod +x /data/local/tmp/lldb-server

(No lldb directory? See notes)

Then for us to access the libraries, we’ll have to copy it to /data/data/org.kde.krita, for that:

$ adb shell run-as org.kde.krita cp /data/local/tmp/lldb-server /data/data/org.kde.krita

(Why run-as? It is a setuid program and gives us the necessary permission to access the sandbox).

Now, enter the app sandbox by first entering the $ adb shell and then $ run-as org.kde.krita.

Run the lldb-server like you would if you were remote debugging.

$ ./lldb-server platform --server --listen "<incoming-ip>:<port>"
$ # Example: allow any ip on port 9999
$ ./lldb-server platform --server --listen "*:9999"

Now on the host machine, run lldb and then

(lldb) platform select remote-android
(lldb) platform connect connect://<ip>:<port>

On my machine:

(lldb) platform select remote-android
Platform: remote-android
Connected: no
(lldb) platform connect connect://localhost:9999
Platform: remote-android
Triple: arm-*-linux-android
OS Version: 28.0.0 (4.4.153-15659493)
Kernel: #2 SMP PREEMPT Thu Apr 4 18:31:57 KST 2019
Hostname: localhost
Connected: yes
WorkingDir: /data/data/org.kde.krita

You can read more about what these lldb commands do on LLVM’s website.
(This is a one-time setup; you can keep the server and client connected.)

Remember that our process is still Waiting for Debugger? :(
Let’s give it what it wants. Attach the debugger to the running process’s PID, which can be found with $ adb shell ps | grep "krita" or $ pgrep "krita".

To attach:

(lldb) attach <pid>
(lldb) # on my machine
(lldb) attach 1818
Process 1818 stopped
* thread #1, name = 'org.kde.krita', stop reason = signal SIGSTOP
frame #0: 0xe8d35f7c`syscall + 28
<and much more>

Still didn’t continue? :-<
So, let’s finally resume it!

We’ll have to resume it over the Java Debug Wire Protocol (JDWP); we’ll use jdb:

$ adb forward tcp:12345 jdwp:<pid> # the same pid which we attached in lldb
$ jdb -attach localhost:12345
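The two JDWP steps can be wrapped in one helper. A sketch, using the example pid and port from this post; it only prints the commands, so nothing here needs a device:

```shell
# Sketch: emit the forward + attach commands that resume a
# debugger-waiting app over JDWP, given a pid and a free local port.
jdwp_resume_cmds() {
    pid="$1"; port="$2"
    printf 'adb forward tcp:%s jdwp:%s\n' "$port" "$pid"
    printf 'jdb -attach localhost:%s\n' "$port"
}

jdwp_resume_cmds 1818 12345
```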

Now continue the process in lldb and we are done!

(This might seem like a lot, but it really isn’t. Every time the app crashes, I run the app in debug mode, attach lldb to the pid, and get the backtrace immediately!)
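The connect and attach steps can also be written into an lldb command file, so each crash-debug round trip becomes a single `lldb -s krita.lldb` invocation. A sketch; the host, port, pid, and file name are the example values from this post:

```shell
# Sketch: generate an lldb command file performing the
# connect + attach sequence shown earlier.
make_lldb_script() {
    ip="$1"; port="$2"; pid="$3"
    printf 'platform select remote-android\n'
    printf 'platform connect connect://%s:%s\n' "$ip" "$port"
    printf 'attach %s\n' "$pid"
}

make_lldb_script localhost 9999 1818 > krita.lldb
cat krita.lldb
```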

PS: When I searched for this on the internet, I didn’t find much and had to spend a lot of time on it.
This method should work for debugging any Android app with lldb, obv!
(I am really new to blogging. If this is hard to understand or my formatting is bad, I am really sorry.)

  • I hate the extra jdb step; if the function you want to debug isn’t called during early startup, you can pass the -N flag to am instead of -D.
  • Can’t find the lldb directory in your SDK? Use platform tools to install it.
  • jdb doesn’t attach? $ killall android-studio && adb kill-server #_#

More in Tux Machines

today's howtos

Games: CHOP, LeClue - Detectivu, Nantucket, MOTHERGUNSHIP

  • Brutal local co-op platform brawler CHOP has released

    CHOP, a brutal local co-op platform brawler, recently left Early Access on Steam. If you like fast-paced fighters with a great style and chaotic gameplay, this is for you. There are multiple game modes, up to four players in the standard modes, and there are bots as well if you don't have people over often. Speaking about the release, the developer told me they felt "many local multiplayer games fall into a major pitfall: they often lack impact and accuracy, they don't have this extra oomph that ensure players will really be into the game and hang their gamepad like their life depends on it" and that "CHOP stands out in this regard". I've actually quite enjoyed this one; the action in CHOP is really satisfying overall.

  • Mystery adventure game Jenny LeClue - Detectivu is releasing this week

    Developer Mografi has confirmed that their adventure game Jenny LeClue - Detectivu is officially releasing on September 19th. The game was funded on Kickstarter way back in 2014 thanks to the help of almost four thousand backers raising over one hundred thousand dollars.

  • Seafaring strategy game Nantucket just had a big patch and Masters of the Seven Seas DLC released

    Ahoy mateys! Are you ready to set sail? Anchors aweigh! Seafaring strategy game Nantucket is now full of even more content for you to play through. Picaresque Studio and Fish Eagle just released a big new patch adding in "100+" new events, events that can be triggered by entering a city, a Resuscitation command that can now heal even if someone isn't dead during combat, the ability to rename crew to really make your play-through personal, minor quests that give better rewards, and more. Quite a hefty free update!

  • MOTHERGUNSHIP, a bullet-hell FPS where you craft your guns, works great on Linux with Steam Play

    Need a fun new FPS to try? MOTHERGUNSHIP is absolutely nuts, and it appears to run very nicely on Linux thanks to Steam Play. There are a few reasons why I picked this one to test recently: the developers have moved on to other games, so it's not too likely it will suddenly break; there aren't many new and modern first-person shooters on Linux that I haven't finished; and it was in the recent Humble Monthly.

GNU community announces ‘Parallel GCC’ for parallelism in real-world compilers

Yesterday, the team behind the GNU project announced Parallel GCC, a research project aiming to parallelize a real-world compiler. Parallel GCC can be used on machines with many cores where GNU Make alone cannot provide enough parallelism. A parallel GCC can also be used as a basis for designing a parallel compiler from scratch. Read more

today's leftovers

  • 3 Ways to disable USB storage devices on Linux
  • Fedora Community Blog: Fedocal and Nuancier are looking for new maintainers

    Recently the Community Platform Engineering (CPE) team announced that we need to focus on key areas and thus let some of our applications go. So we started Friday with Infra to find maintainers for some of those applications. Unfortunately the first few occurrences did not seem to raise as much interest as we had hoped. As a result we are still looking for new maintainers for Fedocal and Nuancier.

  • Artificial Intelligence Confronts a 'Reproducibility' Crisis

    Lo and behold, the system began performing as advertised. The lucky break was a symptom of a troubling trend, according to Pineau. Neural networks, the technique that’s given us Go-mastering bots and text generators that craft classical Chinese poetry, are often called black boxes because of the mysteries of how they work. Getting them to perform well can be like an art, involving subtle tweaks that go unreported in publications. The networks also are growing larger and more complex, with huge data sets and massive computing arrays that make replicating and studying those models expensive, if not impossible for all but the best-funded labs.

    “Is that even research anymore?” asks Anna Rogers, a machine-learning researcher at the University of Massachusetts. “It’s not clear if you’re demonstrating the superiority of your model or your budget.”

  • When Biology Becomes Software

    If this sounds to you a lot like software coding, you're right. As synthetic biology looks more like computer technology, the risks of the latter become the risks of the former. Code is code, but because we're dealing with molecules -- and sometimes actual forms of life -- the risks can be much greater.


    Unlike computer software, there's no way so far to "patch" biological systems once released to the wild, although researchers are trying to develop one. Nor are there ways to "patch" the humans (or animals or crops) susceptible to such agents. Stringent biocontainment helps, but no containment system provides zero risk.

  • Why you may have to wait longer to check out an e-book from your local library

    Gutierrez says the Seattle Public Library, which is one of the largest circulators of digital materials, loaned out around three million e-books and audiobooks last year and spent about $2.5 million to acquire those rights. “But that added 60,000 titles, about,” she said, “because the e-books cost so much more than their physical counterpart. The money doesn’t stretch nearly as far.”

  • Libraries are fighting to preserve your right to borrow e-books

    Libraries don't just pay full price for e-books -- we pay more than full price. We don't just buy one book -- in most cases, we buy a lot of books, trying to keep hold lists down to reasonable numbers. We accept renewable purchasing agreements and limits on e-book lending, specifically because we understand that publishing is a business, and that there is value in authors and publishers getting paid for their work. At the same time, most of us are constrained by budgeting rules and high levels of reporting transparency about where your money goes. So, we want the terms to be fair, and we'd prefer a system that wasn't convoluted.

    With print materials, book economics are simple. Once a library buys a book, it can do whatever it wants with it: lend it, sell it, give it away, loan it to another library so they can lend it. We're much more restricted when it comes to e-books. To a patron, an e-book and a print book feel like similar things, just in different formats; to a library they're very different products. There's no inter-library loan for e-books. When an e-book is no longer circulating, we can't sell it at a book sale. When you're spending the public's money, these differences matter.

  • Nintendo's ROM Site War Continues With Huge Lawsuit Against Site Despite Not Sending DMCA Notices

    Roughly a year ago, Nintendo launched a war between itself and ROM sites. Despite the insanely profitable NES Classic retro-console, the company decided that ROM sites, which until recently almost single-handedly preserved a great deal of console gaming history, needed to be slain. Nintendo extracted huge settlements out of some of the sites, which led most others to shut down voluntarily. While this was probably always Nintendo's strategy, some sites decided to stare down the company's legal threats and continue on.

  • The Grey Havens | Coder Radio 375

    We say goodbye to the show by taking a look back at a few of our favorite moments and reflect on how much has changed in the past seven years.

  • 09/16/2019 | Linux Headlines

    A new Linux Kernel is out; we break down the new features, PulseAudio goes pro and the credential-stealing LastPass flaw. Plus the $100 million plan to rid the web of ads, and more.

  • Powering Docker App: Next Steps for Cloud Native Application Bundles (CNAB)

    Last year at DockerCon and Microsoft Connect, we announced the Cloud Native Application Bundle (CNAB) specification in partnership with Microsoft, HashiCorp, and Bitnami. Since then the CNAB community has grown to include Pivotal, Intel, DataDog, and others, and we are all happy to announce that the CNAB core specification has reached 1.0. We are also announcing the formation of the CNAB project under the Joint Development Foundation, a part of the Linux Foundation that’s chartered with driving adoption of open source and standards. The CNAB specification is publicly available, and Docker is working hard with our partners and friends in the open source community to improve software development and operations for everyone.

  • CNAB ready for prime time, says Docker

    Docker announced yesterday that CNAB, a specification for creating multi-container applications, has come of age. The spec has made it to version 1.0, and the Linux Foundation has officially accepted it into the Joint Development Foundation, which drives open-source development. The Cloud Native Application Bundle specification is a multi-company effort that defines how the different components of a distributed cloud-based application are bundled together. Docker announced it last December along with Microsoft, HashiCorp, and Bitnami. Since then, Intel has joined the party along with Pivotal and DataDog. It solves a problem that DevOps folks have long grappled with: how do you bolt all these containers and other services together in a standard way? It’s easy to create a Docker container with a Docker file, and you can pull lots of them together to form an application using Docker Compose. But if you want to package other kinds of container or cloud results into the application, such as Kubernetes YAML, Helm charts, or Azure Resource Manager templates, things become more difficult. That’s where CNAB comes in.