Planet KDE

The MyPaint Brush Engine is now working

Tuesday 16th of June 2020 04:39:00 AM
It has been more than two weeks since the coding period began, and I didn't post much because the project had only just started and there was no big progress to report. As for the project itself, the MyPaint brush engine plugin has been integrated into Krita and is working. It is still very rudimentary: it can't be customized, brushes can't be loaded or saved, and there is no settings widget. All we can do right now is paint with the default settings. The rest will be taken care of over the summer.
Work Done: Over the last two weeks, I worked on the KisMyPaintBrush, KisMyPaintSurface and MyPaintopPlugin classes. Last week I worked solely on the draw_dab and get_color methods, which are responsible for painting dabs onto the canvas; they occupied most of the week.
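
For orientation, here is a heavily simplified, hypothetical sketch of the kind of surface interface such a brush engine paints through. This is not Krita's actual KisMyPaintSurface and the parameter lists are illustrative only: draw_dab composites one dab of colour, while get_color reads back the colour under a dab-sized area so the engine can smudge and blend.

    // Hypothetical illustration only -- not Krita's KisMyPaintSurface.
    class MyPaintLikeSurface {
    public:
        virtual ~MyPaintLikeSurface() = default;

        // Paint one dab of colour centred at (x, y); returns true if pixels changed.
        virtual bool drawDab(float x, float y, float radius,
                             float r, float g, float b,
                             float opacity, float hardness) = 0;

        // Read the average colour under a dab-sized area, used for smudging/blending.
        virtual void getColor(float x, float y, float radius,
                              float *r, float *g, float *b, float *a) = 0;
    };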


                                                                 
[Screenshots: Bubble brush, AirBrush stroke, Spray brush]

These brushes were loaded from the system for testing purposes and are obviously not the default settings. :)

The lag: The problem I am currently facing is that the brush engine lags for brushes sized 40px and above. This could be an internal problem or something specific to my implementation. I was thinking of multithreading draw_dab by splitting the dab into quadrants, but one of my mentors ruled that out as it might not deliver the expected results. I don't yet know how I will solve this, but I will try to dig a bit deeper into the problem this week.

GSoC ’20 Progress: Weeks 1 and 2

Monday 15th of June 2020 07:17:41 PM

Greetings!

It’s been two weeks since the coding period began and I would love to share with the community the progress I have made so far. 

In the past two weeks, I focused on implementing a basic class for handling subtitles.

First, I created a class called SubtitleModel. This class holds the list of subtitle entries contained in the loaded subtitle file. Since SubtitleModel implements a basic model based on a list of strings, QAbstractListModel provided an ideal base class to build on. Subtitle files usually come in two basic formats: SubRip Text (.srt) and SubStation Alpha (.ass). The two store subtitles in entirely different layouts, so each file type has to be parsed in its own way.

In the first week, I worked on writing a parser function that handles both the SRT and the SSA/ASS files provided by the user. As of now, the parser reads through the .srt or .ass file and extracts the subtitle data: the subtitle text and its start and end times.

void parseSubtitle()
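
To give an idea of what that involves, here is a minimal, hypothetical sketch of parsing an SRT timing value with QRegularExpression. It is not Kdenlive's actual parser; the helper name and the millisecond return type are assumptions for illustration (Kdenlive uses GenTime for timecodes).

    // Hypothetical sketch, not Kdenlive's parseSubtitle(). An SRT block looks like:
    //   1
    //   00:00:01,600 --> 00:00:04,200
    //   First subtitle line
    #include <QRegularExpression>
    #include <QString>

    // Returns the timestamp in milliseconds, or -1 if the text is not a valid "hh:mm:ss,mmm" stamp.
    static qint64 parseSrtTime(const QString &stamp)
    {
        static const QRegularExpression re(
            QStringLiteral("(\\d{2}):(\\d{2}):(\\d{2}),(\\d{3})"));
        const QRegularExpressionMatch m = re.match(stamp);
        if (!m.hasMatch())
            return -1;
        return ((m.captured(1).toLongLong() * 60 + m.captured(2).toLongLong()) * 60
                + m.captured(3).toLongLong()) * 1000
               + m.captured(4).toLongLong();
    }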

Next, I wrote the addSubtitle() function to add the parsed data to the list model.

void addSubtitle(GenTime start, GenTime end, QString str)

I included several more fundamental functions, such as getModel(), which returns a shared pointer to the SubtitleModel, and the roleNames() and data() functions, which return values for the items in the list based on custom roles.

static std::shared_ptr<SubtitleModel> getModel()
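
As a rough illustration of how such a model exposes its entries through roleNames() and data(), here is a minimal sketch of a QAbstractListModel subclass. It is not Kdenlive's SubtitleModel: the role names and members are assumptions, and plain doubles stand in for GenTime so the snippet stays self-contained.

    // Hypothetical illustration, not Kdenlive's SubtitleModel.
    #include <QAbstractListModel>
    #include <QVector>

    struct SubtitleEntry {
        double start = 0.0;   // seconds (the real model uses GenTime)
        double end = 0.0;
        QString text;
    };

    class SimpleSubtitleModel : public QAbstractListModel
    {
    public:
        enum Roles { StartRole = Qt::UserRole + 1, EndRole, TextRole };

        int rowCount(const QModelIndex &parent = QModelIndex()) const override
        {
            return parent.isValid() ? 0 : m_subtitles.size();
        }

        QHash<int, QByteArray> roleNames() const override
        {
            return { { StartRole, "start" }, { EndRole, "end" }, { TextRole, "text" } };
        }

        QVariant data(const QModelIndex &index, int role) const override
        {
            if (!index.isValid() || index.row() >= m_subtitles.size())
                return QVariant();
            const SubtitleEntry &s = m_subtitles.at(index.row());
            switch (role) {
            case StartRole: return s.start;
            case EndRole:   return s.end;
            case TextRole:  return s.text;
            default:        return QVariant();
            }
        }

        void addSubtitle(double start, double end, const QString &str)
        {
            beginInsertRows(QModelIndex(), m_subtitles.size(), m_subtitles.size());
            m_subtitles.push_back({ start, end, str });
            endInsertRows();
        }

    private:
        QVector<SubtitleEntry> m_subtitles;
    };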

In the coming weeks, I will work on implementing a subtitle QML track to display the positions of the subtitles in the timeline.

The code can be viewed here.

That’s all for now, please stay tuned to my progress in upcoming posts.

~ Sashmita Raghav

Laptop update 2

Monday 15th of June 2020 05:41:48 PM

Here’s an update on the trials and tribulations encountered with my new Lenovo ThinkPad X1 Yoga gen 4.

Audio

I have submitted a merge request to clean up the audio display by filtering out inactive devices.

[Before/after screenshots of the audio device list]

High DPI Scaling

The issue with Plasma shadows not respecting the scale factor has been fixed.

Upstream (i.e. non-KDE) issues

I have filed kernel and PulseAudio bugs to track the issues with audio and power management.

The saga continues…

KDE Applications Release Meta-data

Monday 15th of June 2020 03:52:44 PM

kde.org/applications now shows the latest release versions and dates. Finally you can check whether your app store or distro is up to date.

OpenUK Future Leaders Online Talk on Friday

Monday 15th of June 2020 03:42:06 PM

Jonathan Riddell will be talking about KDE’s “All About the Apps” goal this Friday at OpenUK’s Future Leader’s Training. Register by mailing admin@openuk.uk.

https://openuk.uk/event-calendar/kde-operating-systems-and-apps/

Interview with Albert Weand

Monday 15th of June 2020 12:04:05 PM
Could you tell us something about yourself?

I’m an illustrator from Panama. I use Krita to create digital art, but I also work with traditional tools.

Do you paint professionally, as a hobby artist, or both?

It started as a hobby. Many of my illustrations are created as personal projects. However, I do accept commissions or do professional work, if the occasion arises.

What genre(s) do you work in?

I used to draw manga (comics). As a result, most of my work was done in grayscale. But I was always interested in digital illustration and painting. Nowadays, I’m starting to take inspiration from fantasy and nature. My work focuses on characters but I’m trying to work on backgrounds as well.

Whose work inspires you most — who are your role models as an artist?

Years ago, when I started drawing manga, my style was influenced by artists like Masakazu Katsura, Inio Asano or Kentaro Miura. With digital paintings, I’ve been trying to expand my influences with works from traditional artists such as Ted Nasmith, Alan Lee, Miles Johnston and some Pre-Raphaelite paintings as well. I think it’s important to contemplate and study the works of other artists, even if their genre or style is different from yours; they can teach you something valuable. David Revoy and Krenz Cushart have also been a great source of inspiration.

How and when did you get to try digital painting for the first time?

When I finished high school, my father gave me a drawing tablet. At first, I was reluctant to use it because it was quite difficult to learn the coordination between sight and hand movements. Also, I didn’t have any experience in painting at all. Slowly, I started to get better at it. Thanks to digital painting I was able to learn more about the use of color, light and composition.

What makes you choose digital over traditional painting?

It’s relatively faster to make changes to your artwork. It’s easier to manage your tools since you just need a decent computer and a drawing tablet. There are many tutorials available on the internet. Now, even though I like digital art, I’m also interested in traditional oil painting. I just need to get the materials and start working on it. Sometimes I also work with graphite and watercolors.

How did you find out about Krita?

A couple of years ago, I started to gain interest in GNU/Linux and even considered using it as my main OS. One of my priorities was to find a good painting application compatible with the system. I tried MyPaint and Gimp, but Krita was definitely the best option.

What was your first impression?

I really like the user interface, it’s very flexible. I like to keep things simple and just focus on the artwork. The shortcuts to navigate around the canvas are great, they feel very natural. There’s no need to change tools in order to zoom in, zoom out or move around the canvas. I also like the default brushes, they feel organic and the textures help to simulate real brushes in traditional painting.

What do you love about Krita?

The project is free and open source; not only programmers but also artists do their part to improve the software. The documentation and the tutorials on Krita’s YouTube channel are very useful. Also, I really like the fact that the application focuses on painting.

What do you think needs improvement in Krita? Is there anything that really annoys you?

I’m quite happy with the application and its features. If I had to point something out for improvement, it would be the text tool.

What sets Krita apart from the other tools that you use?

Based on my experience, Krita is very intuitive and the brush engine allows a lot of customization. The LUT Management tool is great; it helps me work with values.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

Right now, I think my favorite work would be “Nest”.


In my first digital paintings, the brushwork was very clean and everything had to be well-defined. Now I’ve come to understand that it’s important to balance the level of details; some things can be suggested by shapes and colors. If the brushstrokes have the same size and texture, the image might look too uniform. That’s why I’ve been trying to use more brushes in my artworks. I think I did a decent job here, although there’s certainly room for improvement.

What techniques and brushes did you use in it?

I used the default brushes in the paint tag, especially the bristle brushes. The wet knife is very useful for shapes and blending.

Sometimes I use some of David Revoy’s new brushes. Also, I like the effect and texture of Ramon Miranda’s watercolor brushes.

Where can people see more of your work?

Instagram: https://www.instagram.com/albertweand
Twitter: https://www.twitter.com/albertweand
Artstation: https://www.artstation.com/aweand

Anything else you’d like to share?

Thanks to everyone involved in the creation and development of Krita. It’s my favorite software, I use it on a regular basis and I’m quite happy to be able to work with it.

Weekly Report 2

Monday 15th of June 2020 12:00:00 AM
GSoC Week 2 - Qt3D based backend for KStars

In the second week of GSoC, I worked on handling projections, instanced rendering for multiple stars, updating SkyObject coordinates, and porting the existing grid system in KStars to Qt3D.

What’s done this week
  • Setup of the Celestial Sphere.

  • Displaying stars on the screen as simple 3D points on the celestial sphere (see the sketch after this list).

  • Implementation of SkyPolyLines for drawing grids on the celestial sphere.

  • Sync with backend for transformations.
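
The conversion behind placing stars on the sphere is the standard spherical-to-Cartesian mapping; here is a generic illustration (not KStars code) of turning a star's right ascension and declination into a point on a unit celestial sphere.

    // Generic illustration, not KStars code: a star's equatorial coordinates
    // (right ascension and declination, both in radians) mapped onto a unit sphere.
    #include <cmath>

    struct Vec3 { double x, y, z; };

    Vec3 sphericalToCartesian(double ra, double dec)
    {
        return { std::cos(dec) * std::cos(ra),
                 std::cos(dec) * std::sin(ra),
                 std::sin(dec) };
    }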

The Challenges
  • Integration issues with the original SkyPainter API written to support multiple backends.

  • Adapting Qt3D’s scenegraph-based rendering to KStars, which displays and updates data frame by frame.

  • Projection modes for different grid systems.

  • Transform synchronization based on the projection mode used.

What remains

My priorities for the next week include:

  • Instanced rendering for the millions of stars displayed by KStars.

  • Dig deep into projection modes and shader coding based on the pre-existing Projector class in KStars.

  • Updates using SkyComposite.

Demo

The Code

Virtual Plasma Sprint 2020

Sunday 14th of June 2020 08:30:03 PM

This weekend the Plasma team’s annual sprint took place. Due to the corona pandemic we had to cancel our original week-long in-person meet-up at the end of April in Augsburg, Germany, hosted by our friends at TUXEDO, and settled for an online sprint instead. In anticipation of more virtual sprints, KDE has set up its own BigBlueButton instance – an open source web conferencing system for online learning.

Plasma @ Home

While a four-day online event can’t fully replace an entire week in a room with some of the most talented and dedicated people I know, hacking and discussing from 9 till midnight, I was pleasantly surprised how productive it was. Huge thanks to BigBlueButton for creating a great tool to work with, and to KDE Sysadmin, and Bhushan Shah in particular, for making this happen! Also check out this lovely unprepared group photo he took.

The meeting notes are being refined a little right now and should arrive on the plasma-devel mailing list in the coming days. This week’s experience made me confident that Akademy 2020 – also happening online – will work out great! Nevertheless I hope that eventually we’ll be able to catch up on our original sprint plans and meet in Augsburg again, physically.

Open Search Foundation

Sunday 14th of June 2020 07:56:26 PM

Recently I learned about the Open Search Foundation on public broadcast radio (Bayern 2 radio article). That surprised me: I had not heard about the OSF before, even though I am active in the field of free software and culture. Yet this new foundation has already made it into a mainstream broadcast. Reason enough to take a closer look.

It is a very good sign to have the topic of internet search in the news. One company has a gigantic market share in search, which is indeed a threat to the freedom of internet users. Being findable on the web is the key to success for whatever message or service a website might offer, and all of that is controlled by a single enterprise driven by commercial interests. A broad audience should be aware of that.

The Open Search Foundation has the clear vision of building up a publicly owned search index as an alternative for Europe.

Geographical and Political Focus

The whitepaper talks about working on a search engine specifically for Europe. It mentions that there are search indexes in the US, China and Russia, but none rooted in Europe. While this is a geographical statement in the first place, it is of course also a political one, because some of the existing services are probably politically controlled.

It is good to start with a focus on Europe, but the idea of a free and publicly controlled project should not be limited to Europe's borders. In fact, if it is attractive it will not stop there, because it might offer a way to escape from potentially controlled systems.

On the other hand, Europe (as opposed to any single European country alone) seems like a good base from which to start this huge effort, as it is able to come up with the needed resources.

Organization

The founding members of the Open Search Foundation are not well-known figures from the wider open source community. That is good, as it shows that the topics around the free internet do not only concern nerds in the typical communities, but also people who work for an open and future-proof society in other areas like academia, research and medicine.

On the other hand, an organization like, for example, Wikimedia e.V. might have been a more obvious candidate to address this topic. Neither on the website nor in the whitepaper did I find mentions of any of the “usual suspects” or other organizations and companies who have already tried to set up alternative indices. I wonder if there have been discussions, cooperations or plans to work together?

I am very curious to see how the collaboration between the more “traditional” open data/open source communities and the Open Search Foundation will develop, as I think it is crucial to bring all players in this area together without falling into the “endless discussion trap” of never achieving tangible results. It is a question of building an efficient community.

Pillars of Success

Does the OSF's idea have a realistic chance of succeeding? The following four pillars might play an important role in the success of building a free search index of the internet:

1. Licenses and Governance

The legal framework has to be well defined and thought through, so that it remains resilient in the long term. Since controlling such an index carries huge commercial potential, parties might try to gain control of it.

Only a strong governance and legal framework can ensure that the idea lasts.

The OSF mentions in the whitepaper that setting this up is one of the first steps.

2. Resources

A search index requires large amounts of computing power in the wider sense, including storage, networking, redundancy and so on. Additionally, there need to be people who take care of all that, which in turn requires financial support for staffing, marketing, legal work and more.

The whitepaper mentions ideas to collect the computing power from academia or from company donations.

For the financial backing, the OSF will have to find sources such as EC money, funds from governments and academia, and maybe private fundraising. Organizations like Wikimedia already have experience with that.

If that turns out not to be enough, the idea of selling better search results for money or offering SEO services to fund development will quickly come up. These will be interesting discussions that require the strong governance mentioned above.

3. Technical Excellence

Who will use a search index that does not produce reasonable search results? To compete with existing solutions that have already made it into our daily communication habits, the service simply needs to be great in terms of search results and user experience.

Several existing approaches that use the Google index as a backend have already shown that, even then, it is not easy to provide comparable results.

Users of the commercial competition trade their personal data for optimal search results, even if they don't do so consciously. That trade is not available to a privacy-oriented service, which is another handicap.

The whitepaper mentions ideas on how to tackle this huge task and also accepts that it will be challenging. But that is no reason not to try. We all know plenty of examples where these kinds of tasks succeeded even though nobody believed in them at the beginning.

4. Community

To achieve all these points, a strong community is the key factor.

There need to be people who do technical work like administering the data centers, developers who code, technical writers for documentation, translators and much more. But that is only the technical part.

For financial, marketing and legal support, other people are needed, not to mention political lobbying and the like.

All these parts have to be built up, managed and kept intact long term.

The Linux kernel, which was mentioned as a model in the whitepaper, is different. Not even the technical work is comparable between the free search index and the Linux kernel.

The long term stable development of the Linux kernel is based on people who work full time on the kernel while being employed by certain companies who are actually competitors. But on the kernel, they collaborate.

This way, the companies share the cost of inevitable base development work. Their differentiators in the market do not depend on their work on the kernel, but on the levels above it.

How does that look for the OSF? I fail to see how enough sustainable business can be built on an open, privacy-respecting search index for companies to be happy to fund engineers working on it.

Apart from that, the kernel had the benefit of strong companies like Red Hat, SUSE and IBM pushing Linux in the early days, so no special marketing budgets were needed for the kernel specifically. That too is different for the OSF, as a fair amount of marketing and community management money will be required to get started.

Conclusion

Building a lasting, productive and well established community will be the vital question for the whole project in my opinion. Offering a great idea, which this initiative is without question, will not be enough to motivate people to participate long term.

There has to be an attractive offer for potential contributors at all levels, from individuals and companies contributing work, to universities donating hardware, to governments and the European Community providing money. There needs to be some kind of benefit they gain from their engagement with the project. It will be interesting to see whether the OSF can come up with a model that gets this kickstarted.

I very much hope that this gets traction as it would be an important step towards a more free internet again. And I also hope that there will be collaboration on this topic with the traditional free culture communities and the foundations there.

An Air Cooler For The 21st Century

Sunday 14th of June 2020 06:15:00 PM

I’m going to start this post with a giant warning and disclaimer. Do not, under any circumstances, attempt to reproduce what I’m about to describe in this post unless you independently know exactly what you are doing. This post describes wiring up things to work with mains electricity, which will kill you if you make even one miswiring, or accidentally scrape an exposed metal surface that you’re supposed to stay away from. Publishing this post does not constitute an instruction on my part for you to go reproduce this, and if you choose to do so out of your own free will, I do not take responsibility for any results, short-term or long-term, including any personal injury, death, or loss of property that may occur as a result.

With that out of the way, let’s begin. In the summer of 2017, my first summer in Germany, I was forced to make a rather big purchase: an evaporative air cooler. Temperatures had spiked to over 30 degrees celsius on occasion, and I was actually managing to get sick from the heat. An evaporative cooler was both a practical and an economical solution to the problem. I could wheel it around, it didn’t take too much power, and I could put some frozen salt ice bags or orthopaedic gel packs into the water tank to cool the water down on those extra hot days.

For two summers, this worked great. On the third summer, the pump broke down.

Pretty much everywhere in Germany, the water is very hard. Calcification in the water tank, which I had neglected for the most part, had finally destroyed the submersible aquarium pump inside that pumped water into the evaporation mesh. So not only did I have to clean the water tank and flush out the piping with descaling fluid, I also had to replace the pump. So for the first time, I opened up the cooler, took out the pump and cleaned up whatever I could.

The first shock I had was when I looked at the label on the pump to find its ratings so that I could order a replacement. The pump ran directly off a 220V mains supply and was submerged in the water tank, where I had dipped my hand into on multiple occasions to check the water temperature while the pump was running. This was never going to do, so I ordered a 12V DC pump this time, with the same flow ratings, and something I found on Amazon called a 12V LED Transformer, which looked like it had what I wanted: a 220V AC input, a 12V DC output and rated for about 680mA of current. I double-sided taped the power adapter inside the dry section of the cooler, connected the new pump, and stayed cool for the most part of that summer.

But towards the end of that summer, this arrangement also failed. The pump was basically always running, even with the main unit switched off, which meant that the main control electronics had failed somehow. Additionally, when I took apart the unit again and inspected the “LED Driver”, it turned out it was using a capacitor divider to step down the AC voltage and then using a single diode to rectify it, plus an additional filter capacitor on the output. The voltage divider capacitors were leaking, and the pump was again calcified all the way through, so it would again need to be replaced.

So I had to consider: throw this one away and buy a better cooler, or try to replace the control electronics and get this unit running again? I briefly considered the first option, but my unit had adequate performance, it was the right size and the main fan worked. I also didn’t want to add more waste to a landfill somewhere. Also, I’m an electrical engineer by education (sort of anyway, my degree is in Computer Science & Engineering), so I should be able to fix this, right?

Anyway, the challenge was on.

The Teardown

The cooler was made somewhere in China, branded and sold as an in-house product by Conrad Electronics here in Germany, and I couldn’t just go out and order a new logic board from somewhere. I’d have to build my own control electronics, so the task I had on hand wasn’t particularly easy.

So I began by tearing down the whole cooler. The first thing that I started inspecting was the main fan motor. Now I know that wiring up AC motors isn’t easy (you need starter capacitors and special drive electronics to create enough torque to get the motor spinning in the right direction), and I assumed that the motor would be an AC induction motor and I’d need to buy a variable frequency drive to control the fan speed. What I found out was that the motor had 4 wires coming out of it, and it already had a capacitor built in, so none of those four wires needed to be wired to a capacitor.

My first instinct was that this was a 3-phase motor and one of the wires was for grounding the metal body. That assumption was quickly proven wrong when I realised that the whole unit just had a 2-pin plug for the mains supply. None of this was grounded.

Then I looked at the label on the motor. The wire colours were marked as L, M and H, and the black wire was COM. Could it really be that simple?

Turns out, it was. Apparently these motors are 3-speed AC motors of some sort, and these types of motors are pretty common in air conditioners and coolers, and even table fans which only have 3 speed settings. I should have guessed, since the control panel on the top of the cooler only had 3 options for “wind speed” - low, medium and high. And what’s more, wiring them up is dead simple: you just plug the COM (common) wire into the mains neutral, and then connect the mains hot line to whichever speed you want. I wouldn’t be able to control the speed of the fan on a continuous range, but I would definitely be able to change the speed easily.

The next bit was the swing motor - a small motor attached to the vent slats in the front of the unit that swung the air direction from side to side. This was also a 220V AC motor, and about the size of a hockey puck. It had just two wires, so you’d only need to plug it in to mains the usual way.

The third component was of course the pump. I’d be buying a new one anyway, and I had the freedom to choose what voltage I wanted. I left that decision for later. Also in the water tank was what looked like a floating switch, which was mounted deep inside the tank, right at the bottom. This was the tank empty sensor, and with some multimeter testing, I figured out that this was normally open, and closed when floating. So wiring this up would just be a matter of connecting one side to the microcontroller voltage, one side to the input pin, and pulling the input down.

With all of this out of the way, I could finally turn my attention to the logic board itself. And this is where I got my second shock: apart from a tiny buck converter that supplied the tiny microcontroller on the board, the entire board was 220V AC. Not only that, there was no isolation, no grounding, and the switching elements were MAC97A6 triacs. Yes, that’s a triac in a TO-92 package (the same package you’d find BC547s in) switching mains electricity into a pretty hefty fan motor. No wonder this thing failed.

I’d do much better.

Shopping For Parts

I was never going to use triacs, especially such tiny ones, for switching mains power. From the very beginning, I planned to use relays. So the first item I went shopping for was a relay board. I needed at least 5 channels (3 for the different fan speeds, one for the swing motor and one for the pump), so I found an 8-relay board that I quite liked on Amazon. While it’s not mentioned in this product page, there’s a bunch of similar products (they all come from China and are likely made from the same design), and they had some nice properties that I loved: the inputs to the relays were opto-isolated (so you won’t kill your microcontroller with rush currents when actuating the solenoids), and while the relays required 5V DC for the switching, because the inputs were isolated they also worked with 3.3V logic inputs (just remove the jumper between the VCC and JDVCC pins on the bottom right of the board and supply 5V straight from a power supply to JDVCC). It’s also worth mentioning that the inputs are active low (if you’re using the normally open side of the relays, they’re active high if you’re using the normally closed side).

The next item on my list was the compute element. Now most sane people would use an Arduino or some sort of microcontroller. I needed something a bit more versatile. For what it’s worth, I have a few smart home accessories at home, and I’m more or less a full-time Apple user at this point. All my smart home devices are HomeKit compatible, and I wanted to be able to yell at Siri to control my cooler. So I needed something that would be comparatively easier to program, would have enough oomph to run a server to respond to HomeKit Accessory Protocol - which, by the way, is now a fully open protocol so anyone can create non-certified accessories and even create control apps for non-Apple platforms - requests and had WiFi. So of course, the only logical choice was a Raspberry Pi. I chose a Raspberry Pi 3 A+ - it’s smaller than the regular models but still has the full GPIO array, has only 512MB RAM (which seems enough, I mean, do I really need a 4GB air cooler), and is, most importantly, really cheap - at just 27 EUR.

Now that the Raspberry Pi dictated the DC voltage in the system (5V), I went ahead and ordered a 5V pump, and this time I ordered a rather hefty power supply (rated for 10A), because I’d be supplying the pump, powering the relays and of course powering the Pi from this supply, without any additional filtering.

To round up the shopping, I ordered some jumper cables, a new power cable with a grounding wire, a plastic project case (there wasn’t enough space inside the cooler housing to fit all the additional electronics and wiring, so I decided to put everything in a separate box and fix it to the side of the cooler), and some 3M VHB Tape. If you’ve never heard of, or used VHB tape before, let me tell you a few things about it. VHB tape is a foam-based double-sided tape, but it’s no run-of-the-mill double-sided tape. This thing is actually aerospace grade, and is used in aircraft and spacecraft to hold things together. Once attached, none of this is coming off, except when you actually want it to come off, at which point you can remove it without leaving any residue behind. Your local store-bought “extra strength” double-sided tape is nothing compared to VHB tape, and you really shouldn’t have an engineering toolbox at home without some VHB tape in it.

Most of the construction is held together with VHB tape, including suspending the heavy power supply from the underside of the enclosure lid, attaching the electronics box to the side of the enclosure, and the submerged pump. The submerged pump is really why I had to use VHB tape - while I could have used the cheaper Tesa extra-strength stuff for the other things, I only trust real VHB tape to hold its strength underwater.

Wiring And Programming

I’m not going to go into too much detail about wiring except talking about the principles I followed. I also won’t go into too much detail about the programming for the simple reason that all the code is available for you to see on my GitLab account, but again I’ll talk about principles.

Let’s start with the wiring. The Raspberry Pi’s GPIO is 3.3V, and wiring up 5V relays to it is going to let out the magic smoke pretty quick. For this reason, having isolated inputs to the relay board comes in quite useful. I can wire up JDVCC to the 5V from my power supply, and wire up the GPIO directly to the inputs on the board, supplying VCC from the 3.3V pins on the Raspberry Pi itself. I don’t even need a separate 3.3V power supply.

Wiring up the tank empty sensor / switch also doesn’t actually need a pull-down resistor, because the pin can be pulled down in software. So again, just connect one end to a 3.3V pin on the GPIO, and the other end to your designated input pin.

I used the normally open side of the relays of course, and wired each speed of the fan to a separate relay (taking care in software that only one of these relays is activatable at any given time). These relays can switch both 5V DC and 220V AC (like all normal mechanical relays), so even the pump is switched with one of the relays.
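
To make the "only one speed relay at a time" rule concrete, here is a small hypothetical sketch of the interlock idea. It is written in C++ purely for illustration even though the actual control software described below is Python and later Go, and setRelay() and the pin numbers are made-up placeholders; recall from above that these relay inputs are active-low on the normally open side.

    // Hypothetical sketch -- not the author's actual control code.
    #include <array>

    enum class FanSpeed { Off, Low, Medium, High };

    constexpr std::array<int, 3> kSpeedPins = { 17, 27, 22 };  // placeholder GPIO pins

    // Placeholder for whatever GPIO call the real software uses; the relay
    // inputs are active-low, so "energised" would mean driving the pin LOW.
    void setRelay(int pin, bool energised)
    {
        (void)pin;
        (void)energised;
    }

    // Interlock: release every speed relay before energising the requested one,
    // so two motor windings are never connected to mains at the same time.
    void setFanSpeed(FanSpeed speed)
    {
        for (int pin : kSpeedPins)
            setRelay(pin, false);

        if (speed == FanSpeed::Off)
            return;

        const int index = static_cast<int>(speed) - 1;  // Low=0, Medium=1, High=2
        setRelay(kSpeedPins.at(index), true);
    }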

On the software side of things, I initially started by using Python, with the built-in (to Raspbian) RPi.GPIO package to control my GPIO pins. I built a JSON API, and then a web app to work with this JSON API and turn individual elements on and off. I used Homebridge to bridge between the API and HomeKit. This never really worked well, and this multi-service architecture was needlessly complicated and I was never fully confident that there were no bugs, given that Python code was never statically analysed during compile-time (there is no compile-time).

So I learnt Go.

Re-writing the control software in Go was probably the most fun I had while working on this whole project. Go is so incredibly easy and fun to write (once you stop being annoyed at the enforced gofmt code-styling rules - which for some people I can see taking years). I went from not knowing Go at all to having a reimplementation of my driver in 4 hours, the complete API in 24 hours, and it took me another day to implement the HomeKit bits, hooking directly into the driver and not bothering with the API. So now I have a Web UI, HomeKit integration, and a statically checked daemon that controls everything.

I used go-rpio to be able to control the GPIO pins. You can theoretically control your GPIO with just echo and cat by writing into and reading from the correct files under your /sys/class/gpio - and here’s an article in German explaining how to do that - but go-rpio memory-maps the /dev/gpiomem file and uses that to write directly into the bits of the CPU address space that control the GPIO pins, which also means that you don’t need to be root to run the driver and daemon, you only need to be part of the gpio group.

I used hc to be able to expose a HomeKit interface. HomeKit is conceptually a really simple protocol - an accessory has one or more services which it exposes, and every service is composed of different characteristics. If you want your device to be controllable by the Home app on iOS and macOS (and by yelling at Siri), you need to choose from a few Apple-defined combinations of accessories, services and characteristics. I decided to expose the cooler as an Air Conditioner, implementing the Heater-Cooler service, and implementing most of the optional characteristics. hc’s built-in accessory and service classes only implement the mandatory characteristics, so most of my code in hapservice.go is defining and building up my own service and accessory class.

The final bit of Go magic that I used was Goroutines. Goroutines are lightweight threads that are incredibly easy to use (you just invoke a normal function with the go keyword preceding it), and it took me about 5-10 lines of code to write a Goroutine that checks the water tank status every second and shuts down the pump if it is running while the tank is dry.

And finally, there’s the toolchain. Programming on a Mac and building for 64-bit ARM/Linux is simply a matter of setting the correct environment variables. I also strip the binaries and UPX-compress them (Go does produce some gigantic statically-linked binaries by default). My build command-line is something like:

$: GOOS=linux GOARCH=arm64 go build -ldflags="-s -w" && upx binaryname

Of course, this is only for test builds. I have GitLab CI set up on my repo, so every time I make a commit, it builds a new version of the binary within a minute and offers it up for download.

In Conclusion

I’m currently really happy with the way the cooler now works, and I find myself exclusively using HomeKit to control it. The Web UI definitely needs some work, and I might end up adding scheduling features to it, or automatic control based on the weather outside. I will definitely add a few temperature sensors - one for the water temperature, one for ambient temperature, and one probe right in front of the fan to measure effective wind temperature.

Because Go produces statically linked binaries and I need no operating system dependencies to run them, I was finally able to move to an Aarch64 (ARMv8) distribution, currently running Ubuntu Server 20.04. Yes, my cooler runs Ubuntu and I don’t know how I feel about it. Amongst other things (like having a more recent kernel and packages than Raspbian and being 64-bit), I also found it really easy to set up the network for first boot so that I never needed a monitor and keyboard and could just SSH in right after plugging the SD card in and turning on the machine. I also set up systemd-resolved to expose Multicast DNS so that even with a dynamic IP I can address my cooler with its hostname. The only thing I currently don’t like about Ubuntu Server is its forced use of Netplan, but I don’t know if I’m bothered enough to replace it with NetworkManager yet.

I hope you enjoyed reading about what I did during my ‘Rona lockdown, and remember kids, mains electricity is dangerous. Do NOT try this at home.

KMyMoney 5.1.0 released

Sunday 14th of June 2020 11:03:48 AM

The KMyMoney development team today announces the immediate availability of version 5.1.0 of its open source Personal Finance Manager.

With additional development manpower we were able to tackle a lot of issues and will continue to do so going forward. If you think you can support the project with some code changes or your artistic or writing talent, please take a look at some of the low-hanging fruit on the KMyMoney junior job list. Any contribution is welcome.

Despite the ongoing permanent testing we understand that some bugs may have slipped past our best efforts. If you find one of them, please forgive us, and be sure to report it, either to the mailing list or on bugs.kde.org.

The details

Here is the list of the bugs which have been fixed. A list of all changes between v5.0.8 and v5.1.0 can be found in the ChangeLog.

  • 350360 OFX import leaves brokerage account field blank for nested accounts
  • 396286 KF5 ofximporter “Map account” fails on MS Windows
  • 399261 report’s chart mess with data if there are too many data
  • 416534 Some ui files are not compilable after editing with designer
  • 416577 Message Box Doesn’t Size
  • 416621 Import a QFX file fails on MacOS
  • 416711 Missing german translation
  • 416746 Summary values are not updated for investment transaction of type interest income
  • 416827 libofx dtd files are not found in AppImage
  • 416902 Investment reports should ignore setting for “Show equity accounts”
  • 416963 After the migration to aq6 the change of views takes a long time
  • 417142 cannot find yahoo finance under online quotes
  • 418334 Request: Use latest values to fill in transaction
  • 418823 Script based online quotes do not work in the AppImage version
  • 419082 Backup
  • 419113 Data displayed in scheduled transaction and home page are sometimes not consistent
  • 419554 Balance of budget is shown incorrect
  • 419974 MyMoneyStatementReader uses base currency instead brokerage account’s own when adding new price
  • 419975 When importing transactions, we’re matching against the other transactions also being imported
  • 420056 BUY/SELL information ignored when importing OFX investment transactions
  • 420082 Startlogo not translated in french
  • 420422 Indian Rupee has new symbol since 7 years,it is ₹
  • 420584 QIF importer ignores new investments
  • 420593 A sum of multiple rows selected is incorrect for securities with fraction > 100
  • 420683 Inaccurate decimal precision of South Korean Won (KRW)
  • 420761 After upgrade from Fedora 31 to 32, one of my checking accounts shows a huge negative “Cleared” balance
  • 420767 Incorrect ordinate axis labels when zooming a chart
  • 420931 Crash in “Edit loan Wizard”
  • 421056 Freeze: logarithmic vertical axis and negative data range From value
  • 421105 Logarithmic vertical axis has multiple zero labels
  • 421126 Securities Dialog “Market” field not populated with existing data on edit
  • 421260 Networth “account balances by institution” provides incorrect results
  • 421307 Account context menu’s Reconcile option opens incorrect ledger
  • 421569 New Account Wizard throws exception on empty payment method selected
  • 421691 New Account Wizard is not asking if the user wants to add a new payee
  • 421750 Scheduled monthly transaction will only change first date
  • 421757 Anonymised files are no longer created
  • 421900 SEGFAULT occurring when marking an account as preferred
  • 422012 Incorrect account hierarchy if an account is marked as preferred
  • 422196 Budget view displays all account types
  • 422200 KMyMoney crashes when navigating backwards through CSV import wizard
  • 422480 Search widget in the Budgets view ignores user input

Here is the list of the enhancements which have been added:

  • 416279 Add option “Reverse charges and payments” to OFX import

digiKam 7.0.0-rc is released

Sunday 14th of June 2020 12:00:00 AM
Dear digiKam fans and users, just a few words to inform the community that the 7.0.0 release candidate is out and ready to test, two months after the third beta release published in April. After a COVID-19 confinement stage at home, this new version comes with more than 740 bug fixes since the last stable release 6.4.0 and looks very promising. We are now in the finalisation stage, getting ready to publish the 7.0.0 final release.

OSM Indoor Maps for KDE Itinerary

Saturday 13th of June 2020 07:30:00 AM

In the previous post I briefly mentioned ongoing work about adding interactive train station and airport maps to KDE Itinerary. Here are some more details on what this is about.

Goals

Some of the public transport backend services actually provide links to images or PDF files with a map for all stops or stations involved in a journey; showing those would be fairly straightforward and would even give us the official maps of the corresponding operators. That’s better than nothing, but it’s not the best approach when looking at why you’d actually want a map of a train station or airport in the first place:

  • Find the departure location of your next connection, i.e. the gate of your flight or the platform (or better, the platform section) of your train. As we know this information, we can easily assist there by highlighting those places.

  • Locate where to buy a ticket for the local public transport service, say a ticket machine or a ticket office. As we know with which transport operator you are continuing your trip, suggesting the right places for this would help.

  • Find a restroom, a place to get food/drinks, the lost luggage counter, a pharmacy, or a lounge compatible with your ticket or frequent traveler bonus program. This can be supported by searching/filtering, but can also use context information such as considering opening hours (ie. places closed during the time you are waiting for your connection are probably not very relevant).

For all of the above, you might also want navigation support. While finding your way there might not be all that complicated (we are usually talking about a single large building), it can get a lot more challenging when considering mobility restrictions, be it heavy luggage, a stroller, a hurt leg or a wheelchair. In those cases you might for example prefer elevators over stairs. For those, however, it’s not only useful to know where they are, but also whether they are actually operational right now.

All this obviously needs more than an opaque image with a map, but something we can fully introspect and adapt depending on the use-case. While certainly ambitious, we do have all the necessary data to build this, thanks to OpenStreetMap. OSM doesn’t just provide the spatial data for the map, but also detailed semantic annotations (see e.g. the very elaborate machine-readable opening hours specification). And applications like wheelmap.org show how this can be combined with live status data for elevators.
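
For instance, a shop inside a station might carry an opening_hours value along the lines of “Mo-Fr 06:00-20:00; Sa-Su 07:00-18:00” (a made-up value for illustration), which software can evaluate against the time window of your layover.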

OSM Indoor Maps

While many OSM maps focus on two dimensions, we have the additional complication that train stations and airports are often multi-level buildings, so we do have to care about a third dimension. Look for example at how the default OSM renderer shows Berlin central station:

Berlin central station as displayed by the default OSM renderer.

This is hardly useful, as half of the platforms are underground and thus not even visible, and amenities are spread over three floors which you cannot tell apart. The information about which level things are on is all there though; we just need to consider it when rendering the map.

Berlin central station split up by floor levels.

OSM calls this indoor mapping, even if maybe not everything related to this is technically “indoors”. There are a few existing web-based renderers for this information as well, such as OpenLevelUp or OpenStationMap.

Outlook

There is certainly a lot more to write about, and I’ll try to do that over the coming weeks. And there is also still much more work ahead before this becomes actually useful.

The code for this is currently in the KPublicTransport repository, not necessarily because this is the best place for it, but because it’s using OSM code that already existed in there. It’s still a fairly small codebase, so feel free to get in touch if this is a subject you are interested in diving into!

A central part of this is a declarative definition of what’s shown on the map, so that part doesn’t even require programming skills to meaningfully work on. And in case you are into more challenging performance or math problems, there’s some of that as well :)

This Week in KDE: Plasma 5.20 features start landing

Saturday 13th of June 2020 04:59:40 AM

In addition to a ton of bugfixes for Plasma 5.19 which we just released, this week we started to land big improvements for Plasma 5.20. Take a look:

New Features

It’s now possible to independently configure the file size cut-off for displaying previews for local and remote files in Dolphin (Gastón Haro, Dolphin 20.08.0):

It’s now possible to tile a window to a corner by quickly invoking two edge tiling shortcuts within one second; for example by hitting Meta+Right arrow and Meta+up arrow one after another, the window will be tiled to the top-right corner (me: Nate Graham, Plasma 5.20):

https://i.imgur.com/B0iFqJ4.mp4

You can now middle-click on the System Tray Notifications icon to enter and exit Do Not Disturb mode (Kai Uwe Broulik, Plasma 5.20)

Bugfixes & Performance Improvements

The drawing tools in Okular’s presentation mode toolbar are no longer blurry when using a high DPI screen (David Hurka, Okular 1.10.3)

Yakuake’s main window no longer appears under a top panel on Wayland (Tranter Madi, Yakuake 20.08.0)

Fixed a bug that could prevent Yakuake from opening when using a dual-monitor setup with a single vertical panel on a screen edge close to the center of the full desktop (Maximillian Schiller, Yakuake 20.08.0)

Kate’s “Open Recent” menu now displays documents opened in Kate from the command line and other sources as well, not just the ones opened using the file dialog (Christoph Cullmann, Kate 20.08.0)

Fixed a common crash in Qt applications when quitting (Vlad Zahorodnii, Plasma 5.19.0)

Disconnected Wi-Fi networks now display the correct security type (Jan Grulich, Plasma 5.19.1)

The Bluetooth system tray applet’s tooltip no longer shows the name of the wrong device (me: Nate Graham, Plasma 5.19.1)

Fixed a bug causing high CPU usage when scrolling through the list of rules in the new Window Rules System Settings page (Ismael Asensio, Plasma 5.19.1)

Rows in the System Tray popup are now centered vertically in a correct manner (Eugene Popov, Plasma 5.19.1)

Right-clicking on pinned apps to run their app-specific options (e.g. to open a private browsing window in Firefox or Chrome) now works properly when the action includes command-line arguments (Alexander Lohnau, Plasma 5.19.1)

When you search for an application in the Kickoff Application Launcher and then right-click on the search result, the “Edit Application…” menu item now works (Alexander Lohnau, Plasma 5.19.1)

Various apps whose .desktop files specify the icon as a full path to an SVG file now display those icons correctly in the Kicker, Kickoff, and Application Dashboard launchers (Alexander Lohnau, Plasma 5.19.1)

The activities database now has backup and self-repair mechanisms, which should reduce (if not eliminate) the occurrences of favorites and recent items being corrupted or forgotten (Ivan Čukić, Plasma 5.20.0)

Recent documents accessed in private Activities are no longer visible in KRunner search results accessed from other Activities (Méven Car, Plasma 5.20.0)

Fixed an issue preventing the new header appearance from working properly when using the Breeze Dark plasma theme (Chris Holland, Frameworks 5.71)

Content can no longer overflow in the grid items in the new “Get New [thing]” windows (Dan Leinir Turthra Jensen, Frameworks 5.72)

When using a dark color scheme, the new “Get new [thing]” windows no longer display white squares in the center of each grid item before the preview image loads (Dan Leinir Turthra Jensen, Frameworks 5.72)

The Baloo file indexer no longer skips indexing the filenames of files with a blacklisted MIME type (i.e. those whose contents are not useful to index); it will now always index filenames, but only perform full content indexing for files whose content makes sense to index. This should make it better overall at finding files but use hardly any more resources in the process (Stefan Brüns, Frameworks 5.72)

User Interface Improvements

The default Plasma layout has been changed to replace the Task Manager with an Icons-Only Task Manager with some apps pinned to it by default, on a thickened panel. This should provide a more familiar and modern layout with greater touch-friendliness by default. Remember that you can always change back if you don’t like it.

Cantor in GSoC 2020

Saturday 13th of June 2020 12:00:00 AM

KDE is once again taking part in the Google Summer of Code program, and this time Cantor has two internships working to improve the software and bring new features. Both projects are supervised by Alexander Semke and Stefan Gerlach.

Nikita Sirgienko is polishing usability and developing several small features present in other mathematical REPL applications to improve the user experience in Cantor. In his words, “the idea of this project is not to implement one single and big “killer feature” but to address several smaller and bigger open and outstanding topics in Cantor”.

Shubham is bringing the documentation of the programming languages supported by Cantor into the application itself. Currently, Cantor just provides a link to the documentation websites, which can obviously be improved. Once this project succeeds, it will be possible to provide documentation search and context-sensitive help facilities.

Follow Nikita’s project blog and Shubham’s project blog for news about their progress.

Happy hacking!

Calamares default branch

Friday 12th of June 2020 10:00:00 PM

There’s plenty of definitions for the word “master” – my Oxford English Dictionary lists over thirty – and most of them are unproblematic. That is, they do what they say on the tin. There’s also a meaning connected to slavery. Slavery is an evil that I’m glad is partly destroyed from the world, sad that it is only partly destroyed; like smallpox, it should be gone.

We can talk about things that do not exist, and things that should not exist, and things that exist metaphorically. But we should be – when I say “we should be” I mean “I personally pledge to do”, as well as meaning “this is a moral imperative to all of us” – we should be careful to use words with the right etymological, historical, and metaphorical baggage.

I don’t want to use the word “master” with a meaning connected to slavery, unless it’s speaking specifically about slavery, the evil that it is, and its abolition.

Calamares uses the phrase “master boot record”. That’s the first 512 bytes (one block) of an old-fashioned hard disk. The meaning on the tin, and the etymological background, is one of “original version”. The one from which copies are made. This is still the meaning held by MBR, the terminology is current, and Calamares is going to keep using it.

Since Calamares deals with hard disks – even old-fashioned ones – it might have to deal with two disks attached to the same PATA cable. Since 1994, those disks have been called device 0 and device 1 in the ATA standard (says Wikipedia), but earlier terminology persists, like on a hard disk from 2004 (cable is not connected, just a reminder of 40-pin connectors with 80-strand cables).

So if Calamares were to talk about hard disk addressing, it shouldn’t use the technically and morally wrong word “master”. The metaphor is clear, and I will have no part of it: “master” with metaphorical connections to slavery is to be used to speak of slavery, the evil that it is, and its abolition.

I checked: Calamares doesn’t deal with this level of detail, so this is a cheap commitment from me.

But today I learned something new, about the history of the naming of git branches. Brendan O’Leary has a good write-up, though I found that from following Reginald Braithwaite. Brendan describes the history of, and the metaphorical baggage of, git’s “master” branch.

I will have no part of that, and so the default branch, the branch from which new releases are cut, and the branch I generally merge my git alligators to, is calamares from this day forward.

If you git pull from the Calamares repository, you may need to switch: do a git fetch -p followed by git checkout calamares.

Cantor Integrated Documentation : Week 1 and 2 Progress

Friday 12th of June 2020 04:57:00 PM
Hello KDE people! It's been almost a couple of weeks of the coding period already, and it has been hectic. I was mostly able to stick to the timeline I had proposed, just losing a couple of days here and there. Nonetheless, here is my progress on the project.

Things Done

1. Creation of the QHP file

QHP stands for Qt Help Project. It is an XML-like file format that contains the table of contents, indices, and references to the actual documentation files (*.html). This file is later referenced from the QHCP (Qt Help Collection Project) file. It contains various tags: a table of contents, section tags for defining the actual documentation, keyword tags for defining the indices for the documentation, and a files tag for listing all the files required. Click here for more details on QHP.
To add the keywords that will serve as the index for the documentation, I made use of the index.hhk file that ships with the official installation of Maxima. But copying each and every index entry (~2.5k of them) from index.hhk into the .qhp file by hand would have been a difficult task, so I used Python to write a script that does it for me. Here is the link to that script if you are interested.
Steps to extract the indices from the index.hhk file and add them to the QHP file:

  • index.hhk is an index file shipped with the Maxima documentation.
  • To extract all the indices, run the Python script named index_parser.py over the index file to get the keywords listed in the format the QHP file needs.
  • Copy and paste the content of the output.txt file into the QHP file under the keywords section.
2. Creation of the QHCP file

It is an XML file that contains references to the compressed help files that should be included in the help collection. This file can be passed to the help generator to create the .qhc and .qch files in one go. Refer to this link for more information. Use the following command to generate the files mentioned above:

qhelpgenerator mycollection.qhcp -o mycollection.qhc

3. Adding a custom style to Maxima's official documentation

I have also tried customizing the official documentation. I personally did not like the layout of the official documentation, so I tried to add some styling to it; this is currently in progress. Adding a style to hundreds of HTML files would be a challenging and tedious task to complete manually, so I again used Python and created a script that links the main CSS file into the HTML files. Here is the script. Below is a screenshot taken from the Qt Assistant browser when I load the .qch file generated in the previous step (Edit->Preferences->Documentation->Add).
  That's all folks for this time, until then, Good bye!!

Calamares extensions and out-of-tree modules

Thursday 11th of June 2020 10:00:00 PM

Calamares is a universal Linux installer framework. It provides a distribution- and desktop-agnostic set of tools that Linux distributions (and potentially FreeBSD as well) can use to build an installer for Live media (that is, ISO images). It is broadly themable, brandable, configurable and tweakable – the core repository contains 54 modules for various parts of the install process.

Even 54 modules can’t do justice to all the breadth of things-people-might-want for Linux, so Calamares encourages people to write their own modules to solve specific problems. Calamares is also an eager upstream, so if the problem is specific, but affects lots of people, or can be made generally useful, then Calamares is eager to incorporate those modules into the “core” of the software product.

To help and support people developing modules, Calamares should provide all the necessary bits for development: it has a C++ API and some CMake stuff that needs doing, for instance, and module-developers will need that.

It’s also possible to extend Calamares through Python scripts and even shell scripts, so not all extensions or all tweaks need that infrastructure. For the most-fancy C++ extensions, though …
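(A quick aside on the scripted route before getting back to the C++ story: a Python job module is little more than a main.py with a run() function, sitting next to a module.desc that declares the python interface. A rough sketch of what that can look like follows; the messages and checks here are made up for illustration, and the Calamares module documentation remains the authoritative reference.)

# main.py -- minimal sketch of a Calamares Python job module.
# A module.desc next to it declares: type: "job", interface: "python".
import libcalamares


def pretty_name():
    # Optional: the text shown while this job runs.
    return "Example job"


def run():
    # Shared state from earlier modules is available through global storage.
    root = libcalamares.globalstorage.value("rootMountPoint")
    libcalamares.utils.debug("example job running, rootMountPoint={!s}".format(root))

    # Return None on success, or a (message, details) pair to fail the job.
    if root is None:
        return ("No root mount point", "The partition/mount modules did not run.")
    return None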

postmarketOS is a Linux distribution for phones. It aims to take over where the phone vendors give up (i.e. 6-12 months after introduction of a new model, in my experience). The postmarket people are experimenting with Calamares to see if it does what they need – and to that end they are also writing custom modules. Custom modules aren’t a surprise, here: a phone is a different beast from most install-from-ISO situations.

They hit some fundamental problems really quick.

And very importantly, they filed an issue, explained the problem, did some investigation and were responsive on IRC about what was going on.

So, two days of solid ugh-why-did-I-ever-do-things-this-way later, the Calamares development branch (which will be 3.2.26 next week) has the following:

  • all the headers needed for C++ development are now installed, and in a consistent manner (there were a whole bunch missing, because I hadn’t updated things after various code-reorganizations),
  • the CMake infrastructure is simpler and more consistent.

In the Calamares extensions repository, I’ve tweaked the examples a little, written extra documentation inspired by the issues the postmarketOS people ran into, and added a whole new UI example that uses all of the bits-that-needed-fixing.

So, thanks postmarket – you gave the impetus to improve things. Calamares 3.2.26 and later will fully support out-of-tree module development again.

20.08 releases schedule finalized

Thursday 11th of June 2020 05:53:00 PM
It is available at the usual place https://community.kde.org/Schedules/release_service/20.08_Release_Schedule

Dependency freeze is in four weeks (July 9) and Feature Freeze a week after that, so make sure you start finishing your stuff!

KDE's June 2020 Apps Update

Thursday 11th of June 2020 12:00:00 PM

It is always a joy when the KDE family grows, which is why this month we are especially happy to welcome backup manager Kup and a whole new packaging effort: Homebrew.

New releases
Kup 0.8

Kup is a backup tool you use to keep your files safe.

It was previously developed outside of KDE, but this last month it has passed the Incubation process and joined our community, officially becoming a KDE project. The lead developer, Simon Persson, has celebrated with a new release.

Here are the changes you will find in the new version:

  • Changed how rsync-type backups are stored when only one source folder is selected. This change tries to minimize the risk of deleting files for a user who selects a non-empty folder as the destination. Migration code was added to detect and move your files on the first run, thus avoiding copying everything again and doubling the storage used.
  • Added an advanced option that lets you specify a file Kup will read exclude patterns from, for example letting you tell Kup to never save *.bak files.
  • Changed default settings, hopefully making them better.
  • Reduced warnings about files not being included, as it was raising too many false alarms.
  • Kup no longer asks for a password to unlock encrypted external drives just for the sake of showing how much space is available.
  • A backup save is no longer treated as failed just because files went missing during the operation, for both rsync and bup.
  • Started running backup integrity checks and repairs in parallel based on the number of CPUs.
  • Added support for bup metadata version 3, which was added in bup version 0.30.
  • Lots of smaller fixes to the user interface.

Kup can back up using rsync or do versioned backups with the Python tool Bup. Bup currently only works with Python 2, which means this option won’t be available on many distros, but a port to Python 3 is in the works.


Kup

To find out more about Kup, check out the review and video that Average Linux User did not long ago:

Krita on Android Tablets

Thanks to the hard work of Sharaf Zaman, Krita is now available in the Google Play Store for Android tablets and Chromebooks (but not for Android phones).

This beta, based on Krita 4.2.9, is the full desktop version of Krita, so it doesn’t have a special touch user interface. But it’s there, and you can play with it.

Unlike in the Windows and Steam stores, there is no charge for Krita in the Play Store, since it’s the only way people can install Krita on those devices. However, you can buy a supporter badge from within Krita to support development.

To install

  • Get Krita from Google Play
  • Alternatively, in the Play Store, switch to the “Early access” tab and search for org.krita. (See: Google’s official instructions on Early Access. Until we’ve got a reasonable number of downloads, you’ll have to scroll down a bit.)
  • You can also download the apk files yourself. Do NOT ask for help installing those files.
  • Those are all the official places. Please do not install Krita from other sources. We cannot guarantee their safety.

Notes

  • Supports Android tablets & Chromebooks. Android versions supported: Android 6 (Marshmallow) and up.
  • Currently not compatible with: Android phones.
  • If you have installed one of Sharaf’s builds or a build you’ve signed yourself, you need to uninstall that first, for all users!

Krita on Android
Incoming

KIO Fuse made its first beta release this month.

Bugfixes

Bugfix releases came out for

  • Collection manager Tellico with an updated filter dialog to allow matching against empty text.
  • Local network browser SMB4K fixed saving settings on close.
  • Coder’s IDE KDevelop made an update for the moved KDE repositories.
App Store
Homebrew

While in Linux we are gradually getting used to being able to install individual apps from an app store, the reverse is happening in the world of macOS and Windows. For these systems, package managers are being introduced for those who like one source to control everything on their systems.

The leading open source package repository for macOS is Homebrew, managed by a crack team of developers including former KDE dev Mike McQuaid.

This month the KDE Homebrew project, which has been running external to KDE for a while, moved into KDE to be a full part of our community.

You can add the KDE Homebrew repo for macOS and download KDE sources, compiled and ready for you to run.

We caught up with lead dev Yurii Kolesnykov and asked him about the project.

Tell us about yourself: what’s your name, where do you come from, what’s your interest in KDE and the Mac, and what do you do for a living?

My name is Yurii Kolesnykov, and I’m from Ukraine. I have had a passion for Free Software since I first heard about it, around the end of high school. I think KDE is simply the best DE for Linux and Unix systems, with many great apps. My interest in the Mac comes from my main job: I develop iOS mobile software for a living.

What Is Homebrew?

Homebrew is the most popular package manager for macOS, just like apt or yum. Since macOS is Unix and Apple provides a good compiler and toolchain for it, people decided to create package managers for it, so you can install a lot of free and open source software on a Mac. Homebrew also has a subproject called Homebrew Cask, which allows you to install many binary applications, i.e. proprietary or GUI ones, because GUI apps are hard to integrate with the system if they are installed via plain Homebrew.

What KDE packages have you made for Homebrew?

I just ran grep on our tap, and I see that we have 110 packages in total; 67 of them are frameworks and approximately 39 are apps. We already have the most popular apps, like Kate, Dolphin and KDevelop, because of user requests.

As a Mac user what do you need to do to get apps installed?

First, you need to follow the Homebrew installation guide if you don’t have it yet; it’s available at brew.sh. Then you need to tap our repo with the following:

brew tap kde-mac/kde https://invent.kde.org/packaging/homebrew-kde.git

Unfortunately a lot of KDE packages don’t work out of the box, but we created a script that applies all the necessary hacks, so after tapping you need to run the following command:

"$(brew --repo kde-mac/kde)/tools/do-caveats.sh"

Do you know how popular Homebrew is as a way of getting apps for Mac?

Good question. Unfortunately we haven’t set up any analytics yet; I will add that to my TODO list. But Homebrew is the most popular package manager for the Mac, and it requires users not to mix it with other similar projects for installing software on the same Mac, due to conflicts. So, yes, I think it’s quite popular.

How much work did you need to do to get KDE apps working in Homebrew?

While creating the current packages we already addressed many common issues, so bringing in new software is relatively easy. I promise to write a how-to for this; users have already requested it many times.

Currently, packages need to be compiled locally, will you have pre-compiled packages available?

Homebrew allows you to install software via Bottles, i.e. pre-compiled binary packages. But the process of creating bottles is tightly integrated with the Homebrew infrastructure, i.e. we need to run CI with tests on every package before it gets bottled. So we decided to integrate as many packages as possible into the main brew repo to eliminate the maintenance burden.

Is there much other desktop software available in Homebrew?

Yes. In general, if an app is popular and has a distribution channel outside of the Mac App Store, then there is a very high chance that it’s already available to install via a Brew Cask.

How can KDE app authors help get their software into Homebrew?

Apple hardware is very expensive, so getting a Mac for every KDE dev would not be a good idea. So for now, they are welcome to create a feature request in our repo. Then maintainers or users of Homebrew KDE report bugs if something isn’t working as intended, and we try to provide as much information as possible when KDE devs request it. But as of now we have a lot of pending tickets for KDE apps with small but very annoying bugs. I hope that we will become more integrated with the KDE infrastructure, i.e. that we can link bugs in our repo with upstream projects. We have already migrated to KDE Invent, and I hope KDE bugs will be migrated from Bugzilla to KDE Invent soon.

The other way to get your KDE apps built for the Mac is with Craft. How do the Homebrew-built apps compare to ones built with Craft?

I still think that Homebrew is more friendly to end users. Its install process is as easy as running a one-liner, and to add our repo and start installing apps from it, you need to run another two lines.

Thanks for your time, Yurii.

Releases
20.04.2

Some of our projects release on their own timescale and some get released en masse. The 20.04.2 bundle of projects was released today and will be available through app stores and distros soon. See the 20.04.2 releases page for details.

Some of the fixes in today’s releases:

  • Writes are broken into multiple requests for SFTP servers that limit transfer size
  • Konsole updates the cursor position for input methods (such as IBus or Fcitx) and no longer crashes when closing via menu
  • KMail generates better HTML when adding an HTML signature to mails

  • 20.04 release notes
  • Package download wiki page
  • 20.04.2 source info page
  • 20.04.2 full changelog
