Planet KDE

Google Summer of Code 2020 - Post 7

Sunday 26th of July 2020 01:50:00 AM

I finally tested the Rocs radial layout algorithm implementation. This includes some functional tests to check if the implementation works as expected in some cases and a non-functional test to evaluate the performance of the algorithm with respect to some aesthetic criteria.

The implementation passed all functional tests. Initially, the non-functional tests reported that edge crosses occurred. Since, in theory, no edge crosses can occur in layouts generated by the radial layout algorithm, I thought the implementation had bugs. After checking the code a couple of times without finding any bugs, I decided to take a look at the graphs in which the algorithm was failing. I managed to get one example with just 8 vertices and applied the radial layout algorithm to it, getting the following result.

Wait! There are no crosses between edges in this layout. It turns out the bug was in the non-functional test. More precisely, the QLineF::intersects method reported an intersection between the two edges in red. After I fixed the problem, I got the expected results. Now I can focus on the documentation of the graph-layout plugin.
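As an aside on that false positive: the actual test code isn't reproduced here, but one common way to get such a result is to treat QLineF::UnboundedIntersection (the infinite extensions of the segments cross) the same as QLineF::BoundedIntersection (the segments themselves cross). A minimal sketch of a crossing check that avoids that pitfall, assuming Qt 5.14+ where QLineF::intersects() is available:

#include <QLineF>
#include <QPointF>

// Returns true only if the two edges, treated as segments, actually cross.
bool edgesCross(const QLineF &a, const QLineF &b)
{
    QPointF intersection;
    // UnboundedIntersection would only mean the extended lines cross somewhere.
    return a.intersects(b, &intersection) == QLineF::BoundedIntersection;
}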

digiKam Recipes 20.07.27 released

Sunday 26th of July 2020 12:00:00 AM
Hot on the heels of the digiKam 7.0.0 release comes a new revision of the digiKam Recipes book. The new version includes new material. Grouping RAW and JPEG files manually can be a real chore; fortunately, a simple Python script can do this automatically, and the Group RAW and JPEG files with a script chapter provides instructions on using the script. The Disaster-proof digiKam setup chapter describes how to keep digiKam databases and configuration safe.

PSA: try turning on WebRender in Firefox

Saturday 25th of July 2020 07:52:33 PM

WebRender is a new rendering backend for Firefox that's not yet on by default. Presumably there are some edge cases where it makes things worse or causes some instability, but so far I have not experienced anything bad. On the contrary, without it, I and some other people get terrible flickering in Firefox on Wayland. With it enabled, not only is the flickering gone, but scrolling performance becomes buttery smooth and CPU usage decreases noticeably on both Wayland and X11, resulting in increased battery life! Win-win-win.

To turn it on, visit the about:config page in Firefox, search for “gfx.webrender.all”, and set it to true. That’s all there is to it!

Updating Marble’s OSM Data Server

Saturday 25th of July 2020 08:00:00 AM

Recently I wrote about options for getting OSM indoor map data for KDE itinerary’s work-in-progress indoor map feature for train stations and airports. The most flexible option mentioned there was using Marble’s OSM data tiles (which I had slightly misleadingly called “vector tiles”, a term in the OSM world usually referring to a different data format with much more application-specific pre-processing applied). Here’s an update on how this topic has progressed since.

Understanding the current setup

The first challenge was figuring out how the current system works, and specifically how the data currently served by maps.kde.org was generated. Not everyone involved in the creation of that system is still around, so this required a bit of code and system archaeology. Thanks to Torsten, Ben and Nicolás for their help with that!

So far it appears that:

  • The current data was pre-generated once on a system that no longer exists. Which input data and generation parameters were used is lost.
  • The tools used for generation are in the Marble repository and after minor fixes still work.
  • The code in the Marble repository for on-demand data tile generation apparently never got deployed on the existing system.

There's no memory of how long the generation took back then. The best-case estimate, based on measurements on current hardware and after applying some of the optimizations mentioned below, suggests that a full world-wide dataset would take about 30 days on an eight-core machine.

Far from ideal, as I was hoping to achieve an update latency of about two weeks, not even mentioning the huge energy cost.

Geometry reassembly

Before looking into a more efficient way to do this, though, I wanted to make sure we could solve the geometry reassembly issues mentioned in the previous post, as that would be a major blocker.

That turned out to be surprisingly simple in the end: a small mistake when converting between different coordinate formats caused a precision loss from the OSM base resolution of 100 nanodegrees to about 6 microdegrees. That's the difference between a centimeter and half a meter, i.e. enough to noticeably distort indoor building structures (commit).
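The actual fix is in the linked commit; as a generic illustration of how easily this kind of precision gets lost when a coordinate is routed through an intermediate representation with fewer significant digits (here, hypothetically, a single-precision float), consider:

#include <cstdio>

int main()
{
    const double lon = 2.3734567;                 // degrees; OSM stores 100 nanodegree units
    const float lossy = static_cast<float>(lon);  // hypothetical lower-precision intermediate
    const double error = lon - static_cast<double>(lossy);
    // The rounding error is on the order of 0.1 microdegrees for this value
    // and grows with the magnitude of the coordinate.
    std::printf("error: %.9f degrees\n", error);
    return 0;
}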

Pillars and room shapes in Paris Gare de Lyon before and after fixing the coordinate precision loss issue.

On-demand tile generation

With a full-scale pre-generation off the table, the obvious alternative would be on-demand generation of requested tiles. For this we can actually take quite some inspiration from how the OSM raster tiles are generated.

The key elements there are mod_tile, an Apache extension for serving map tiles and managing a cache of those, and Tirex, a map tile generation scheduler. This setup isn’t limited to raster tiles, nor to any specific tile generator.

OSM's own statistics show that even on their much more widely used setup, high zoom level tiles are only actually needed for a tiny fraction of the world's surface, so there's a lot of resources to be saved this way.

Besides having to write a bit of glue code to interface Marble's tile generator with Tirex, this means that the generation of a single tile (or rather a batch of 8x8 tiles, the smallest unit Tirex works with) has to be fast enough for on-demand generation, and that memory consumption has to stay reasonably restrained.

Optimizing tile generation

With the existing system, that wasn’t really the case though, due to two major costs: Loading of the input data, and processing/serialization of the resulting tile.

Input Data

To generate a data tile we need to load the raw data for the region covered by the tile, and ideally not much more than that. The full dataset is about 60GB in compact binary storage, without any indexing, and far from evenly spread across the earth’s surface, so even just loading the data of an urban area is non-trivial, let alone finding the right subset in the first place.

The previous approach used a recursive split of the full database into 2¹⁰ x 2¹⁰ input tiles. That can be done easily using the osmconvert tool, and gives us a crude spatial index. This is, however, a fairly resource-intensive process with no support for incremental updates, and it still gives us 2⁸ times too much data to load for a tile batch on the highest zoom level.
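For reference, here is a sketch of the tile arithmetic involved, assuming the standard OSM "slippy map" tiling scheme (the exact scheme used by Marble's tools may differ); it computes which tile on a given level contains a coordinate, e.g. to pick the right input tile out of the 2¹⁰ x 2¹⁰ grid:

#include <cmath>
#include <cstdio>

struct TileId { int x; int y; };

TileId tileForCoordinate(double lonDeg, double latDeg, int level)
{
    const double pi = 3.14159265358979323846;
    const double n = std::pow(2.0, level);
    const double latRad = latDeg * pi / 180.0;
    TileId tile;
    tile.x = static_cast<int>((lonDeg + 180.0) / 360.0 * n);
    tile.y = static_cast<int>((1.0 - std::asinh(std::tan(latRad)) / pi) / 2.0 * n);
    return tile;
}

int main()
{
    const TileId t = tileForCoordinate(2.3734, 48.8443, 10); // roughly Paris Gare de Lyon
    std::printf("input tile at level 10: %d/%d\n", t.x, t.y);
    return 0;
}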

A proper spatially indexed database with efficient incremental update support would be ideal instead. Several options for this exist; unfortunately, the most common ones apply application-specific transformations and thus cannot reproduce the original raw OSM data.

Attending the virtualized SOTM2020 conference earlier this month however made me discover OSMExpress (osmx), which does offer exactly that. Initial tests look very promising, but there’s of course a price to pay for this as well, in the form of needing 600+GB of disk space.

Processing and Output

Once we have the raw data loaded, it’s reduced in level of detail (on the lower zoom levels), clipped to the tile boundaries and eventually written back out to its binary serialization format.

While there are a few things in there with interesting algorithmic complexity, like clipping of non-trivial polygons, this is mainly an area for technical optimizations. That is, avoiding temporary allocations, avoiding detaches in implicitly shared types, using more efficient file formats and plugging a few memory leaks (see e.g. merge requests 2, 5 or 6).
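To give an idea of the kind of micro-optimizations meant here (generic Qt patterns, not the actual Marble patches), here is a small sketch of avoiding container detaches and repeated reallocations:

#include <QLineF>
#include <QPointF>
#include <QVector>

// Taking the container by const reference and using at() avoids the detach
// (deep copy) that non-const operator[] or begin() would trigger on an
// implicitly shared QVector.
double pathLength(const QVector<QPointF> &points)
{
    double length = 0.0;
    for (int i = 1; i < points.size(); ++i) {
        length += QLineF(points.at(i - 1), points.at(i)).length();
    }
    return length;
}

QVector<QPointF> copyPoints(const QVector<QPointF> &input)
{
    QVector<QPointF> result;
    result.reserve(input.size()); // one allocation instead of many small ones
    for (const QPointF &p : input) {
        result.append(p); // the real code would clip or simplify here
    }
    return result;
}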

With all that we are already reaching the needed performance and memory bounds, and there’s probably another 30-50% to be gained by bypassing more of Marble’s internal data structures. Those do provide ABI stability and abstraction over different data sources, features we don’t need here but that nevertheless incur a cost.

Outlook

So far this setup has been successfully tested on a subset of the OSM data (about 10%, covering a high data density area in central Europe). The results are very promising, both regarding performance and regarding improvements in the data quality.

After getting a bigger SSD, a full-scale test is now under way; if that doesn't expose any blockers, I hope we can get this deployed on maps.kde.org in the not too distant future :) There are still a few more details to fix in the data processing to retain all information KDE Itinerary would need, but that's fairly easy to do once the infrastructure is in place.

This week in KDE: screencasting and shared clipboard on Wayland

Saturday 25th of July 2020 03:46:35 AM

This week has seen more fixes and improvements to the Get New Stuff system, as well as speeding up Discover. But they may be overshadowed by Major Enormous Exciting Amazing new Wayland features such as screencasting and Klipper/shared clipboard support!

Oh and two Ryzen-powered KDE Slimbook laptops were released! I wrote a review of the 15.6″ model here. It’s really good.

New Features

Screen recording and screencasting now work on Wayland for compatible applications (e.g. OBS Studio and more to come) (Aleix Pol Gonzalez, Plasma 5.20)

Klipper now uses the Wayland clipboard and works as you would expect in a Wayland session (David Edmundson, Plasma 5.20)

The Task Manager and Icons-Only Task Manager now offer you options for what visualization you want to see when clicking on a grouped task: window thumbnails in tooltips, the Present Windows effect, or a textual list (me: Nate Graham, Plasma 5.20)

There isn't yet an option to bring forward all windows for the grouped task, but this is coming too!

Bugfixes & Performance Improvements

Spectacle’s --output option now works again (Nazar Kalinowski, Spectacle 20.12.0)

Discover is now radically faster to present a usable user interface after being launched, especially on openSUSE distros (Aleix Pol Gonzalez, Plasma 5.20)

The last-used keyboard layout is now remembered on Wayland (Andrey Butirsky, Plasma 5.20)

On a rotatable device, maximized windows now remain maximized when the device is rotated (Aleix Pol Gonzalez, Plasma 5.20)

The OK and Cancel buttons in the network hotspot dialog no longer overlap the password field (Rijul Gulati, Plasma 5.20)

Fixed the inline button display for Tiles view in the Get New [Thing] dialog (Alexander Lohnau, Frameworks 5.73)

The first entry in the Get New [Thing] dialog is no longer always misleadingly selected (Alexander Lohnau, Frameworks 5.73)

It’s now possible to delete an entry that’s upgradeable in the Get New [Thing] dialog (Alexander Lohnau, Frameworks 5.73)

The old QWidgets-based Get New [Thing] dialog now lets you choose which thing to install when a thing lists multiple installable things in its thing (so you can thing while you thing) (Alexander Lohnau, Frameworks 5.73)

The old QWidgets-based Get New [Thing] dialog no longer changes the width of the main view after you start searching for something (Alexander Lohnau, Frameworks 5.73)

User Interface Improvements

Spectacle no longer includes the mouse cursor in screenshots by default (Antonio Prcela, Spectacle 20.08.0)

KInfoCenter no longer shows useless “Defaults” “Reset” and “Apply” buttons at the bottom of the window (David Redondo, Plasma 5.20)

Line and bar charts used in system monitor widgets now display grid lines and Y axis labels (David Redondo, Plasma 5.20)

The “Add Widgets” sidebar has been subtly improved with a third column and a better top layout for the controls (Carson Black, Plasma 5.20)

Dolphin's context menus now locate the extra actions to open other applications in the base level of the context menu rather than a sub-menu, as long as there are three of them or fewer (me: Nate Graham, Frameworks 5.73):

Wow, this menu is getting pretty huge; I guess we should do something about that next

How You Can Help

If you are an experienced developer who would like to make a gigantic impact very quickly, fix some recent Plasma regressions or longstanding bugs. Everyone will love you forever! No really. Sometimes people will mail you beer and everything. It’s happened before!

Beyond that, have a look at https://community.kde.org/Get_Involved to discover ways to help be part of a project that really matters. Each contributor makes a huge difference in KDE; you are not a number or a cog in a machine! You don’t have to already be a programmer, either. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!

Finally, consider making a tax-deductible donation to the KDE e.V. foundation.

CSD support in KWin

Thursday 23rd of July 2020 08:44:43 PM

If you are a long time Plasma user, you probably remember the times when most GTK applications in KDE Plasma prior to 5.18 were practically unusable due to the lack of support for _GTK_FRAME_EXTENTS. In this blog post, I would like to get a bit technical and walk you through some changes that happened during the 5.18 time frame that made it possible to support _GTK_FRAME_EXTENTS in KWin. I would also like to explain why, after so many years of resistance, we finally added support for client-side decorations. So, buckle up your seat belt!

What is _GTK_FRAME_EXTENTS, anyway?

A window can be viewed as a thing that has some contents and a frame around it with a close button and so on. The “server-side decoration” term is used to refer to a window frame drawn by the window manager (KWin). If the application draws the window frame by itself, then we refer to that window frame as a “client-side decoration.”

An example of a window frame being drawn by the window manager (KWin)
An example of a window frame being drawn by the application (gedit)

A cool thing about client-side decorations is that they can be real eye candy, but there are also a lot of drawbacks to them; for example, if the application hangs, the user won't be able to close the window by clicking the close button in the window frame. But the biggest issue with client-side decorations is that the window manager has to know the extents of the client-side drop shadow, otherwise things such as window snapping won't work as desired.

_GTK_FRAME_EXTENTS is a proprietary GTK extension that describes the extents of the client-side decoration on each side (left, right, top, and bottom). From the word "proprietary" you have probably already guessed that _GTK_FRAME_EXTENTS is not in any spec. We couldn't afford to implement a proprietary extension simply because we didn't know whether re-designing KWin would pay off in the end. What if GTK ditches _GTK_FRAME_EXTENTS for something else and our hard work is all for nothing? There were some suggestions to standardize _GTK_FRAME_EXTENTS in the NETWM spec, but it didn't go well.

So, what did change our minds?

It might come as a surprise, but the reason why we decided to add CSD support after so many years of being reluctant was Wayland. In order to fully implement the xdg-shell protocol (the de-facto protocol for creating desktop-style surfaces), we must support client-side decorated windows. Prior to that, we didn’t have any reason that could possibly somehow justify the changes that we would have to make in code that was battle-tested for many years.

With Wayland, we know what changes have to be done in order to add support for client-side decorated clients. Surprisingly, the geometry abstractions that we chose specifically for client-side decorated Wayland clients turned out to be also pretty good for client-side decorated X11 clients, so we decided to add support for _GTK_FRAME_EXTENTS since it didn’t require any huge changes.

It still means that we will be screwed if GTK switches to something completely different on X11, though. But let’s hope that it won’t happen.

CSD and KDE Plasma

"But Vlad," you may say. "Does this mean that KDE is going to switch to CSD?" No, as far as I know, nothing has changed; we still use server-side decorations. Support for client-side decorations was added because it's something that we need on Wayland and to make GTK applications usable on X11.

Frame, buffer, and client geometry

Warning: This is a brain dump. Please skip to the next section if you’re not interested in technical stuff.

For an old school window manager such as KWin, client-side decorated windows are troublesome mainly because, due to the long history, all rendering-related code had been written with the assumption that the window frame wraps the window contents. If an application draws the window frame on its own, that's not the case.

In 5.18, we tackled that problem by introducing two new geometries to separate window management from rendering – the frame geometry and the buffer geometry.

The frame geometry describes a rectangle that bounds the window frame. It doesn’t matter whether the window is client-side decorated or server-side decorated. This kind of geometry is used practically for everything, for example window snapping, resizing, etc. KWin scripts see and operate on this geometry.

The buffer geometry is used primarily during rendering, for example to build window quads, etc.

In 5.20, we introduced yet another new geometry, which existed prior to that in an implicit form – the client geometry. The client geometry indicates where the window contents inside the window frame [1] are on the screen. We use this geometry primarily for configuring windows.

It can be a bit challenging to deal with three different geometries at the same time, but there is not that much we can do about it, unfortunately. Each geometry has its own specific domain where the other geometries are inappropriate to use.
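As a rough mental model of how the three geometries relate (an illustration only; the names and the margin sources are simplified compared to the real KWin code), one can picture the server-side and client-side cases like this:

#include <QMargins>
#include <QRect>

struct WindowGeometries {
    QRect frame;   // bounds the window frame; used for window management
    QRect client;  // the window contents inside the frame; used for configuring
    QRect buffer;  // bounds the attached pixels; used for rendering
};

// Server-side decorated: KWin draws the frame around the client contents.
WindowGeometries serverSideDecorated(const QRect &clientGeometry,
                                     const QMargins &decorationMargins)
{
    WindowGeometries g;
    g.client = clientGeometry;
    g.frame  = clientGeometry.marginsAdded(decorationMargins);
    g.buffer = clientGeometry; // the client's buffer holds only the contents
    return g;
}

// Client-side decorated: the buffer also contains the drop shadow, whose
// extents are advertised e.g. via _GTK_FRAME_EXTENTS.
WindowGeometries clientSideDecorated(const QRect &frameGeometry,
                                     const QMargins &shadowExtents)
{
    WindowGeometries g;
    g.frame  = frameGeometry;
    g.client = frameGeometry; // no server-side frame around the contents
    g.buffer = frameGeometry.marginsAdded(shadowExtents);
    return g;
}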

Conclusion

CSD is a rather controversial subject in the KDE community, but it’s here and it’s not going anywhere, anytime soon.

[1] On X11, the client geometry actually corresponds to the geometry of the client window.

All About the Apps Junior Jobs

Thursday 23rd of July 2020 10:09:09 AM

The Ubuntu Podcast did a review of KDE's Applications site in their new edition. Listen from 14 minutes in. You can hear such quotes as

“It’s pretty neat, It shows the breadth of applications in the KDE universe, tonnes of stuff in here”
“A big green button to install the thing”
“KDE applications are broad and useful”
“They publish a tonne of applications in the Snap store and they are hugely popular”
“Valuable software that people want to install and use irrespective of the desktop they are on”
“They make high quality and useful applications”
“Well done KDE, always very mindful of user experience”

They did suggest adding a featured app, which is a task we also want to do for Discover, which has featured apps but they don't currently change. That feels like an interesting wee task for anyone who wants to help out KDE.

But an easier task would be going over all the apps and checking that the info on them is up to date, including going over the various app stores we publish on, like the Microsoft Store, and making sure those links are in the Appstream meta-data files.

Finally, the main task of All About the Apps is getting the apps onto the stores so we need people who can get the apps running on Windows etc and put them on the relevant Stores.  I did an interview asking for this for Flathub in the most recent monthly apps update.

We’re here to help on our Matrix room and my contact is always open.

You can open Mesh Gradients in Krita now!

Thursday 23rd of July 2020 09:47:00 AM
TL;DR: mesh gradients now get rendered in Krita just like you'd expect them to render in other apps

I couldn't get Bicubic interpolation fully working by the time of writing and publishing this blog post, so that part is still pending :(
Firstly, here is a screenshot of a complex mesh gradient which I found from Inkscape (I modified it to be Bilinear to get this):

pepper.svg



As there isn't much else to say besides giving out the details, I'll jump straight to the technicalities :-)

Technicalities

Rendering (Bilinear)

I started with reading the algorithm mentioned in the specs sheet, i.e. Divide And Conquer. I had some confusion about it, but thanks to my mentors I got the idea. Then, to implement this algorithm, the idea was to subdivide the patches until the difference between the corners is small enough.

Now the question was, how am I going to divide the surface? Luckily, there was a paper, one search away, which I picked up. I read it quickly and started writing the algorithm on paper. Some things didn't quite make sense, but as I later found out, they were probably misprints.

So, as I implemented the algorithm, the first thing that I tried to open was this, to check if the subdivision algorithm was right. Comparing this to Inkscape, it was pretty accurate (not the color, but the surface itself).



Next, getting colors for the new corners was simple too. You can get them by dividing in half, i.e. taking the midpoint of the existing corner colors (we get this from the fact that the surface is Bilinear). Then just mix them when the difference is less than the tolerance and you get this:

Caching

Because rendering mesh gradients is expensive and anything, e.g. a simple translation, would call for a complete redraw, I had to write a poor man's caching. Instead of painting directly on the canvas, I would first paint on a QImage and then paint that through the QPainter.

Simple enough! However, there was a problem: rendering on a QImage and then on a scaled-up QPainter made things look very pixelated because of scaling. So, to counteract this, my solution was to paint on a scaled-up QImage, i.e. scale it up by the same factor as the QPainter. This made everything a whole lot better.
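A sketch of that caching approach (an illustration under my own assumptions, not the actual Krita code): render the gradient into a QImage scaled by the same factor as the target painter, keep it around, and draw the cached image afterwards:

#include <QImage>
#include <QPainter>
#include <QRectF>

QImage renderMeshGradientCache(const QRectF &boundsInLocalCoords, qreal scaleFactor)
{
    const QSize pixelSize = (boundsInLocalCoords.size() * scaleFactor).toSize();
    QImage cache(pixelSize, QImage::Format_ARGB32_Premultiplied);
    cache.fill(Qt::transparent);

    QPainter imagePainter(&cache);
    imagePainter.setRenderHint(QPainter::Antialiasing);
    imagePainter.scale(scaleFactor, scaleFactor);
    // ... subdivide the patches and fill them here ...
    imagePainter.end();
    return cache;
}

void paintCached(QPainter &painter, const QRectF &boundsInLocalCoords, const QImage &cache)
{
    // The cache already contains the scaled-up rendering, so it maps cleanly
    // onto the device pixels covered by boundsInLocalCoords.
    painter.drawImage(boundsInLocalCoords, cache);
}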
Rendering (Bicubic Interpolation)

This has been the hardest part of this GSoC (till now). The way I usually write code is: I write everything in a bit of a messy way till I get something running, then I come back and check if something could be written in a better way (read: efficiency and readability). This approach went horribly wrong, as I had probably made mistakes and my render was pure chaos and nowhere close to what you would call interpolation. Then I git stashed everything and started over in a more manageable way, and now the interpolation was working. But it turned out to be an exact replica of the Bilinear interpolation, only slower (because it did actually calculate derivatives and whatnot).

So, I asked for help, but then I immediately realized that my assumptions were wrong. In short, I was calculating derivatives incorrectly. I quickly modified the code in the dirty way and finally I could see a smooth render with no Mach banding. Unfortunately, this too is based on an assumption, which is not correct for all cases. So, Bicubic interpolation works neatly for mesh patches which are relatively linear, but falls apart for messier (generic) ones.
But why? The problem here is with the way I do subdivision: it is only valid for linear Coons patches. I haven't written the code to subdivide a Bicubic surface. That isn't the only problem; subdividing a Bicubic surface is an expensive operation as well. So, I'll have to find a middle ground.

Saving

Since I spent a lot of time on getting Bicubic interpolation to work, for a context switch I moved to the next milestone.

So, I tried to implement the saving operation. Thanks to all the abstractions! This was pretty straightforward to implement.
I will now write tests for it and the final rendering :)
That's all I have for now, hopefully I'll post an update when I get the Bicubic shading working :)

Bugs ...while rendering

The superfast Ryzen-powered KDE Slimbook

Thursday 23rd of July 2020 06:01:41 AM

I’ve had the privilege of testing and using the brand-new 15.6″ Ryzen-powered KDE Slimbook laptop for the past month. During that time, I worked with the Slimbook developers to perform QA and polish Plasma for this laptop. They’re awesome people who hosted our Plasma+Usability & Productivity Sprint last year at their offices. I’d like to share my impressions of their latest laptop.

Full disclosure: this laptop was sent to me for free for testing and development, so I have no financial skin in the game. They haven’t asked for it back yet, but I plan to either send it back, or purchase it, if I want to keep it. My configuration retails for 930€ (roughly $1,075), which is a steal for what you get. Regardless, what follows is what I believe to be an honest, unbiased review.

Performance and battery life

Here’s what I know you’re all waiting to hear about, so I’ll just start with it: performance with the 8-core/16-thread Ryzen 4800H CPU is unbelievable!

I can compile KWin in five minutes, compared to over 11 minutes with my top-of-the-line Lenovo ThinkPad X1 Yoga with a 10th generation Intel i7 processor. Everything feels smooth and fast. The power of this machine is awesome, and the Ryzen CPU makes it heaven for people who need to perform processor-heavy tasks on a regular basis.

Despite this, case temperatures remain cool and the fan remains off when the machine is not under heavy load. The thermal management is excellent–far better than on my ThinkPad.

Additionally, battery life is amazing. The machine idles at around 3 watts and goes up to only about 7 or 8 with average tasks that don’t involve compiling KWin.

GSoC Work Status

Thursday 23rd of July 2020 12:00:00 AM

Hey everyone,

In the previous blog post I wrote about my first GSoC evaluation. In this post I have written about the activities I have worked on since then to add multiple datasets.

Division memory game

In this sub-activity of memory, the goal is to match a division and its result, until all cards are gone. It helps children to practice division.
The procedure for adding multiple datasets to this activity is the same as for other memory activities. We just need to create different Data.qml files in the resource directory and load the datasets. For this activity, we need to use the function getDivTable() implemented in math_util.js and pass the respective numbers ranging from 1 to 10 to it.
This activity also has two modes. In the first mode, the child needs to turn the cards to match each division to its result. In the second mode, the child plays against Tux to match the equivalent cards; this mode is called "with Tux". I have implemented multiple datasets for both of the modes. The dataset content of the activity Division memory game with Tux is the same as without Tux.

After the addition of multiple datasets to both modes, I tested it manually to make sure it works perfectly without any regression. This activity has been merged into the master branch.

The image below shows the multiple datasets content of this activity.

Addition and Subtraction memory game

In this sub-activity of memory, the child needs to turn the cards to match additions and subtractions with their results until all cards are gone.

The levels of previous memory activities were based on only a single arithmetic operation, such as only addition or only division, but this activity has two different operations within a single level. For any level of this activity, some cards are based on addition and some on subtraction.
The dataset addition procedure for this activity is also similar to the other memory activities. We just need to use the getAddMinusTable() function from math_util.js.
This activity also has two modes, one "with Tux" and another "without Tux". I have implemented multiple datasets for both modes of the activity.
After the addition of datasets, I tested it manually and made a merge request for it. This activity has been merged into the master branch.

The image below shows the multiple datasets content of this activity.

Thanks!
Deepak

Week 6-7-8

Wednesday 22nd of July 2020 06:30:00 PM
Part 6 -

Hi everyone

This month, I took forward my ongoing project with Gcompris and added Multiple Datasets for Categorisation, Gnumch equality, and Gnumch inequality activities.

If you are unaware of my project, multiple datasets, or GCompris, I have explained everything in detail in my last post - here

Categorization

In Categorization, pupils have to identify and categorize elements into correct and incorrect groups. Whenever we add multiple datasets to any activity, it is mandatory to add the Activity Config first (you can see the purple sandwich-like button in the above screenshot). Clicking on this button opens up the dialog box for configuring both multiple datasets and the activity settings. It's always easy when an activity has no settings and I only have to take care of the multiple datasets, but Categorization has both activity settings and a dialog box which appears at startup to ask about downloading missing images, and according to my proposal, the activity settings should show different options for different datasets. So this was for sure the most challenging activity for me. I was not very sure how to test that "Download missing images" dialog box, as it appears only if any image is missing or we never clicked on the "never show it again" option. So, as expected, I broke it in my first commit :( whereupon my mentor Johnny told me the way to test it and fix it :). Later the mentors and I discussed and agreed that we don't need to make the activity setting options dependent on the selected datasets. So in the end the most challenging activity has been merged into the master branch like any other activity.

Gnumch Equality and Inequality

After completing Categorization, I picked Gnumch Equality and Inequality; they are two activities that use the same code, so I am working on both of them together. There are a total of 5 Gnumch-based activities that inherit the same code, and datasets need to be added to 2 of them. So obviously, I have to take care that my changes don't lead to any regression in the other activities. Besides this, both activities only supported addition and subtraction operations at the start; I have added new functionality to support multiplication and division operations. Gnumch is basically a game, a bit similar to Pacman :). So, it was fun to work on this activity. By the way, I am an expert now :) The changes for these activities haven't been merged yet; they are still in review.

The final two activities left are Build the same model and Find the details. I have just started working on Build the same model, and most probably I will cover both of them in my next blog.

Have fun!

Going Focal

Wednesday 22nd of July 2020 10:33:26 AM

Here at KDE neon base camp we have been working on moving the base of our system to Focal, Ubuntu 20.04. If you’re interested in the mechanics you can see the status, and indeed help out, on our 20.04 workboard.

But probably you're more interested in giving it a try. This is still in testing mode and comes with a no-money-back warranty. Instructions are on the testing forum thread. You can either do an upgrade or a full install from the preview ISOs. Let us know how you get on!

My file menu is not full of eels

Wednesday 22nd of July 2020 09:00:26 AM

This is the story of a bug in an open-source project I maintain; as the maintainer I review and sometimes fix bug reports from the community. Last week, a user reported that the ‘File’ menu of the application was not appearing on macOS. Some investigation showed this didn't happen when using the default translation (i.e. English), but a bit more investigation showed that it only happened when the language in use was Dutch.

At this point I’d like to make it clear that I like the Dutch and especially gevulde koeken, a type of almond cookie you can only get in the Netherlands. When passing through Amsterdam Schiphol, I take care to stock up at the supermarket on the main concourse. If you’re passing through Schiphol and wonder why they’ve been cleaned out of cookies, it was me.

Anyway, it was weird that the menu code I had written seemed to dislike the Dutch. Actually, as part of investigating the defect, I needed to switch my system language to Dutch. So I just did that for a week, and got to learn most of the macOS UI in Dutch. Lekker!

It turns out, the File menu (or actually, the ‘Bestand’ menu) was missing for a very sensible reason: it contained zero items. In this particular application, the menu is quite simple and contains:

  • load configuration file…
  • save configuration file…
  • quit

If you’ve used Qt on macOS, you’ll know the ‘Quit’ item is moved to the application menu (the one with name of the software) automatically. What you may not know is that Qt will, by default, move some other items:

  • About
  • Help
  • Preferences

All of these items have special locations on macOS. The default way to detect if a menu item should be moved this way, is based on a heuristic. A heuristic is a rule which works some of the time and goes wrong when your customer / manager is using the software. The default rule is to do this based on a string search of the (translated) menu item name. In this case, the Dutch translation of ‘Preferences’ is … ‘Configuration’. Therefore, both the first and second items in my file menu get treated as the application preferences item. Hence, the file menu ends up empty and is therefore made invisible.

Fortunately, the fix is simple: rather than accepting the default TextHeuristicRole menu role on your QActions, just set a role explicitly, such as QAction::NoRole. This disables the text heuristic and Dutch users can once again see the Bestand menu in all its glory.

QMenuBar* mb = new QMenuBar();

QAction* openAction = new QAction(tr("Open saved configuration..."));
openAction->setMenuRole(QAction::NoRole); // change from the default QAction::TextHeuristicRole

QAction* saveAction = new QAction(tr("Save configuration as..."));
saveAction->setMenuRole(QAction::NoRole); // change from the default QAction::TextHeuristicRole

QMenu* fileMenu = mb->addMenu(tr("File"));
fileMenu->addAction(openAction);
fileMenu->addAction(saveAction);

Now I just need to get some more of those cookies.

If you’re confused about the title of this post, kindly Google the ‘Hungarian phrasebook’ sketch by Monty Python — only maybe not on a work computer.

The post My file menu is not full of eels appeared first on KDAB.

KDE Slimbook: Plasma, KDE Apps and the Power of the AMD Ryzen CPU

Wednesday 22nd of July 2020 06:10:17 AM





Today Slimbook and KDE launch the new KDE Slimbook.

The third generation of this popular ultrabook comes in a stylish sleek magnesium alloy case that is less than 20 mm thick, but packs under the hood a powerful AMD Ryzen 7 4800H processor with 8 cores and 16 threads. On top of that runs KDE's Plasma desktop, complete with a wide range of preinstalled, ready-to-use Open Source utilities and apps.

Both things combined make the KDE Slimbook a one-of-a-kind machine ready for casual, everyday use, gaming and entertainment; design work, animation, and 3D rendering; as well as hardcore software development.

The KDE Slimbook can fit up to 64 GB of DDR4 RAM in its two memory sockets, and has three USB ports, a USB-C port, an HDMI socket, an RJ45 port for wired network connections, as well as support for the new Wifi 6 standard.





It comes in two sizes: the 14-inch screen version weighs 1.07 kg, and the 15.6-inch version weighs 1.49 kg. The screens themselves are Full HD IPS LED and cover 100% of the sRGB range, making colors more accurate and life-like, something that designers and photographers will appreciate.

Despite its slim shell, the AMD processor and Plasma software deliver enough power to allow you to deploy a full home office with all the productivity and communications software you need. You can also comfortably browse the web and manage social media, play games, watch videos and listen to music. If you are the creative type, the Ryzen 4800H CPU is well-equipped to let you express your artistic self, be it with painting apps like Krita, 3D design programs like Blender and FreeCAD, or video-editing software like Kdenlive.





If you are into software development, you are in luck too: KDE provides all the tools you need to code and supports your favorite languages and environments. Meanwhile, Slimbook's hardware is ideal for CPU-intensive tasks and will substantially shorten your build times.

Pricing for the KDE Slimbook starts at approximately € 899 for the 14'' version and at € 929 for the 15.6'', making it more affordable than most similarly-powered laptops. Besides, when you order a KDE Slimbook, you will also be contributing to KDE, as the Slimbook company actively supports and sponsors KDE and donates part of the proceeds back to the Community.

Find out more from the KDE Slimbook page.

Google Summer of Code 2020 - Post 6

Wednesday 22nd of July 2020 01:10:00 AM

I updated the user interface of the Rocs graph layout plugin. Now, each layout algorithm corresponds to a tab. See below the tab for the Radial Tree Layout.

Using the same algorithm, the graph layout plugin can draw rooted trees and free trees (trees without a root). The next two figures show the same tree represented as a free tree and as a rooted tree, respectively.

The root vertex can be selected by the user or determined automatically. Currently, a center of the tree is used for automatic root selection. The user can also control the distance between nodes by changing the node separation. Tomorrow I will finish the tests and add some code to check if the graph being laid out is a tree.

Note: I decided to change the title of my GSoC posts to reflect the fact that I am not being able to follow a weekly schedule.

Improve MAVLink Integration of Kirogi – Progress Report 2

Tuesday 21st of July 2020 06:00:00 PM

Hello everyone!

This is my second progress report about GSoC 2020 project.

Please feel free to contact me if you have any questions or ideas :D

Kirogi can control multiple vehicles now

I finished implementing basic support for controlling multiple vehicles.

Now Kirogi automatically detects more than one vehicle on the network and you can control them by selecting each one.

This feature may not seem very useful now, but making Kirogi able to manage more than one vehicle at the same time will be very useful after implementing the Mission Planner.

TCP & serial communication will be supported

Currently, the class definitions for supporting UDP and TCP connections are done.

A demonstration will be possible after implementing the UI and the class for managing those connections.

After implementing the connection manager, the overall class structure will look like the diagram below.

The UI for managing connections will be loaded at runtime. For QML to determine whether to load the UI or not, all vehicle support plugins will have a Q_PROPERTY named isMultiConnectionSupported.

I think it would be better to have some kind of metadata file rather than adding a virtual function that just returns a constant value to the vehicle support plugins. I am thinking of using KPluginMetaData. As far as I know, it only reads some specific keys from the JSON file, so I can't add an arbitrary key directly, but it seems I can read all keys from the JSON file using KPluginMetaData::rawData().
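A sketch of what that could look like (my assumption of the eventual approach, not final Kirogi code; the key name isMultiConnectionSupported is the one mentioned above): KPluginMetaData has accessors only for the well-known keys, but rawData() hands back the whole JSON object, so a custom key can still be read from it.

#include <KPluginMetaData>
#include <QJsonObject>

bool isMultiConnectionSupported(const KPluginMetaData &metaData)
{
    const QJsonObject json = metaData.rawData();
    // A missing key defaults to false, so plugins without the entry are
    // treated as single-connection plugins.
    return json.value(QStringLiteral("isMultiConnectionSupported")).toBool(false);
}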

Message sequence will be tracked

As I mentioned earlier, MAVLink messages have a sequence number.

All three kinds of connection classes parse incoming messages using the MAVLinkProtocol class, so the method for tracking message sequence will be implemented there.

Week 7: GSoC Project Report

Tuesday 21st of July 2020 10:53:09 AM

This week I completed unit-tests for interactions between the storyboard docker and the timeline docker. Also, thumbnails will now only be updated when the image is idle, meaning that if the image is not painted upon for some time, say a second, the thumbnail will update. This will improve performance when using the canvas. I also wrote some functions that will help when implementing the updating of affected thumbnails.

I wrote unit-tests for the interactions between dockers. Some of these interactions have been implemented and some are yet to be implemented. The planned behavior for various interactions according to tests is :

  • Items in storyboard will have unique and positive frame numbers and they will always remain sorted by the frame number.
  • On adding a keyframe in the timeline an item will be added to storyboard docker, if there is no item for that frame already.
  • On removing a keyframe in the timeline an item will be removed from the storyboard docker if there are no other keyframes at that time in timeline docker.
  • On moving an item in timeline docker,
    • if there is no item for the destination frame, an item is inserted for the “to” keyframe, otherwise not.
    • if there is no other keyframe at the source frame, the item for the “from” keyframe is removed, otherwise not.
  • Selections in storyboard docker would correspond to the last selected keyframe in timeline docker for which item exists in storyboard docker.
  • Changing duration in storyboard docker for an item would add hold frames right after the keyframes for the item in timeline docker. If there are multiple layers in an image, hold frames should be added to all the layers.
  • Changing fps should conserve number of frames, that means if duration for an item was 2s 4f at 24 fps, and then fps changes to 12, then duration would change to 4s 4f.

Now thumbnails would be updated only when the image is idle; that means that while the canvas is being painted upon, thumbnails would remain at the last version, and would update only when painting has stopped. It is similar to the overview docker but with a bit less delay.
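Krita has its own image-idle infrastructure, so the following is only a rough sketch of the idea (with a hypothetical 1000 ms delay, not the actual value used): restart a single-shot timer on every change, and only regenerate the thumbnail once the timer is allowed to fire.

#include <QObject>
#include <QTimer>

class ThumbnailIdleUpdater : public QObject
{
    Q_OBJECT
public:
    explicit ThumbnailIdleUpdater(QObject *parent = nullptr)
        : QObject(parent)
    {
        m_idleTimer.setSingleShot(true);
        m_idleTimer.setInterval(1000); // assumed idle delay in milliseconds
        connect(&m_idleTimer, &QTimer::timeout,
                this, &ThumbnailIdleUpdater::regenerateThumbnail);
    }

public Q_SLOTS:
    // Call this whenever a stroke modifies the image; restarting the timer
    // postpones the refresh until painting has stopped for a while.
    void notifyImageChanged() { m_idleTimer.start(); }

Q_SIGNALS:
    void regenerateThumbnail(); // connect this to the expensive refresh

private:
    QTimer m_idleTimer;
};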

This week I will work on implementing the remaining interactions and the update of all affected items on keyframe changes.

Launching Kdenlive Tutorials

Tuesday 21st of July 2020 06:53:00 AM
Many years ago I started writing a number of tutorials about Kdenlive, describing how to achieve visual effects similar to what you can get from commercial software (like the Adobe suite). But I didn't have time to translate my writings into English and format them with HTML. That was, until the last couple of weeks.
Now, I'm ready to release the website kdenlivetutorials.com.


All the contents you'll find on Kdenlive Tutorials are released under Creative Commons Attribution Non Commercial.
At this moment there are 48 different tutorials; I'm working on two more because I like multiples of ten.



All this work has been done in my spare time: I've been using automatic translation, performing some fast manual fixes. The result should be understandable; in the next months I'll fix the text to make it sound better.
I also know that Kdenlive's interface has changed a lot over the years: the functions I use in these tutorials are still available, they might just have a slightly different name or might be placed in a different spot of the GUI. If enough people are interested, I'll consider taking new snapshots and adapting the tutorials to the latest version of Kdenlive.

The website's layout is simple, and every screenshot can be zoomed in on. Every tutorial comes with a video that shows the results of the procedure. Videos are often raw: they have been made quickly, just to show what could be done in a couple of minutes. Anyway, every tutorial also explains how to improve the result, if you wanna spend more time on it.

Google Summer of Code 2020 - Week 5

Tuesday 21st of July 2020 12:30:00 AM

I finished writing an implementation of a tree layout algorithm for Rocs. After some research, I decided to go with a radial layout. The idea is to select a node to be placed at the center and place the other nodes on circles of different radii. The layout is computed recursively and the plane is partitioned between sub-trees in order to guarantee that no edge crosses will exist. Some layouts generated by this implementation are shown below. Tomorrow I will finish the user interface and some tests.
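A minimal sketch of that idea (my own illustration, not the actual Rocs code): every subtree gets an angular wedge proportional to its number of leaves, and every node is placed on a circle whose radius grows with its depth, so wedges of sibling subtrees never overlap and edges cannot cross.

#include <cmath>
#include <vector>

struct Node {
    std::vector<Node*> children;
    double x = 0.0;
    double y = 0.0;
};

int leafCount(const Node *node)
{
    if (node->children.empty()) {
        return 1;
    }
    int count = 0;
    for (const Node *child : node->children) {
        count += leafCount(child);
    }
    return count;
}

// Place 'node' on the bisector of [minAngle, maxAngle] at a radius given by
// its depth, then split the wedge among its children proportionally.
void layoutRadial(Node *node, double minAngle, double maxAngle,
                  int depth, double nodeSeparation)
{
    const double angle = (minAngle + maxAngle) / 2.0;
    const double radius = depth * nodeSeparation;
    node->x = radius * std::cos(angle);
    node->y = radius * std::sin(angle);

    const int leaves = leafCount(node);
    double start = minAngle;
    for (Node *child : node->children) {
        const double share = (maxAngle - minAngle) * leafCount(child) / leaves;
        layoutRadial(child, start, start + share, depth + 1, nodeSeparation);
        start += share;
    }
}

// Usage: layoutRadial(&root, 0.0, 6.283185307179586, 0, 50.0);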

FreeBSD Qt WebEngine GPU Acceleration

Monday 20th of July 2020 10:00:00 PM

FreeBSD has a handful of Qt WebEngine-based browsers. Falkon, and Otter-Browser, and qutebrowser and probably others, too. All of them can run into issues on FreeBSD with GPU-accelerated rendering not working. Let’s look at some of the workarounds.

I should mention that I personally have not hit this issue. Maybe I don't watch enough videos, or maybe I happen to have the right hardware to avoid the problem.

There are reported cases of Qt WebEngine-based browsers displaying video badly. If I remember correctly there was a mix-up at one point between RGB and BGR, and this bug report specifically mentions “videos with wrong colors”.

Depending on the browser, you may be able to use command-line arguments to affect hardware acceleration. The Qt documentation on WebEngine debugging mentions a number of command-line arguments that can be used to modify WebEngine internals.

Unfortunately, web browsers also parse command-line arguments.

From a little investigation, it turns out that the browsers handle the documented command-line arguments very differently:

  • Falkon accepts the WebEngine-related command-line flags that are documented. I can’t tell if they are effective: passing --no-sandbox still gets me debugging messages from sandboxing code. Oddly enough, falkon --help mentions a --help-all command-line argument, which is totally ignored.
  • qutebrowser has its own special --qt-flag to pass flags on to Qt internals. I suppose that’s because it has Python argument-processing first, followed by handing things off to Qt.
  • Otter supposedly supported --disable-gpu in the past, but now complains that it is an unknown option. Unlike Falkon, it does understand --help-all.

From this collection of inconsistencies, I think the conclusion should be that the environment is a better place to apply any settings that should apply to WebEngine internally – the path from command-line argument to Qt internals depends too much on where and when processing happens and how “cleanly” the overall command-line arguments are passed on.

The documentation says

Alternatively, the environment variable QTWEBENGINE_CHROMIUM_FLAGS can be set.

and that looks like the best way to consistently affect the behavior of WebEngine inside an application, because it end-runs the command-line processing. After all, far fewer applications mess with the environment (their own environment) before instantiating QApplication.

That means that people using FreeBSD, experiencing video corruption in WebEngine-based browsers, can best put the following (or some csh equivalent) in their .profile:

QTWEBENGINE_CHROMIUM_FLAGS="--disable-gpu"
export QTWEBENGINE_CHROMIUM_FLAGS
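For application developers (rather than end users), the equivalent can be done programmatically; a sketch, assuming you control the application's main() and set the variable before the QApplication is constructed:

#include <QApplication>
#include <QByteArray>

int main(int argc, char *argv[])
{
    // Must happen before QApplication is constructed, since Qt WebEngine
    // picks the flags up during initialization.
    qputenv("QTWEBENGINE_CHROMIUM_FLAGS", QByteArrayLiteral("--disable-gpu"));

    QApplication app(argc, argv);
    // ... set up the WebEngine-based UI here ...
    return app.exec();
}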

More in Tux Machines

Hardware Freedom: 3D Printing, RasPi and RPi CM3 Module

  • Can 3D Printing Really Solve PPE Shortage in COVID-19 Crisis? The Myth, and The Facts!

    Amid COVID-19 crisis, we see severe shortage of Personal Protective Equipment (PPE) worldwide, to the point that a strict organization like FDA is making exceptions for PPE usage, and there are volunteer effors to try to alleviate this shortage like GetUsPPE. Also, Centers for Disease Control and Prevention (CDC) provides an Excel spreadsheet file to help calculate the PPE Burn Rate. There are many blog posts, video tutorials, and guides that teach people how to print their face shields and masks.

  • Raspberry Pi won’t let your watched pot boil
  • Growing fresh veggies with Rpi and Mender

    Some time ago my wife and I decided to teach our kids how to grow plants. We both have experience as we were raised in small towns where it was common to own a piece of land where you could plant home-grown fresh veggies. The upbringing of our kids is very different compared to ours, and we realized we never showed our kids how to grow our own veggies. We wanted them to learn and to understand that “the vegetables do not grow on the shop-shelf”, and that there is work (and fun) involved to grow those. The fact that we are gone for most of the summer and to start our own garden just to see it die when we returned seemed to be pointless. This was a challenge. Luckily, me being a hands-on engineer I promised my wife to take care of it. There were two options: we could buy something that will water our plants when we are gone, or I could do it myself (with a little help from our kids). Obviously I chose the more fun solution…

  • Comfile Launches 15-inch Industrial Raspberry Pi Touch Panel PC Powered by RPi CM3 Module

    Three years ago, we noted Comfile has made 7-inch and 10.2-inch touch panel PC’s powered by Raspberry Pi 3 Compute Module. The company has recently introduced a new model with a very similar design except for a larger 15-inch touchscreen display with 1024×768 resolution. ComfilePi CPi-A150WR 15-inch industrial Raspberry Pi touch panel PC still features the CM3 module, and the same ports including Ethernet, USB ports, RS232, RS485, and I2C interfaces accessible via terminal blocks, and a 40-pin I/O header.

Programming: Vala, Perl and Python

  • Excellent Free Tutorials to Learn Vala

    Vala is an object-oriented programming language with a self-hosting compiler that generates C code and uses the GObject system. Vala combines the high-level build-time performance of scripting languages with the run-time performance of low-level programming languages. Vala is syntactically similar to C# and includes notable features such as anonymous functions, signals, properties, generics, assisted memory management, exception handling, type inference, and foreach statements. Its developers, Jürg Billeter and Raffaele Sandrini, wanted to bring these features to the plain C runtime with little overhead and no special runtime support by targeting the GObject object system. Rather than compiling directly to machine code or assembly language, it compiles to a lower-level intermediate language. It source-to-source compiles to C, which is then compiled with a C compiler for a given platform, such as GCC. Did you always want to write GTK+ or GNOME programs, but hate C with a passion? Learn Vala with these free tutorials! Vala is published under the GNU Lesser General Public License v2.1+.

  • Supporting Perl-related creators via Patreon

    Yesterday I posted about this in the Perl Weekly newsletter and both Mohammad and myself got 10 new supporters. This is awesome. There are not many ways to express the fact that you really value the work of someone. You can send them postcards or thank-you notes, but when was the last time you remembered to do that? Right, I also keep forgetting to thank the people who create all the free and awesome stuff I use. Giving money as a way to express your thanks is frowned upon by many people, but trust me, the people who open an account on Patreon to make it easy to donate them money will appreciate it. In any case it is way better than not saying anything.

  • 2020.31 TwentyTwenty

    JJ Merelo kicked off the special 20-day Advent Blog cycle in honour of the publication of the first RFC that would lay the foundation for the Raku Programming Language as we now know it. After that, 3 blog posts got already published:

  • Supporting The Full Lifecycle Of Machine Learning Projects With Metaflow

    Netflix uses machine learning to power every aspect of their business. To do this effectively they have had to build extensive expertise and tooling to support their engineers. In this episode Savin Goyal discusses the work that he and his team are doing on the open source machine learning operations platform Metaflow. He shares the inspiration for building an opinionated framework for the full lifecycle of machine learning projects, how it is implemented, and how they have designed it to be extensible to allow for easy adoption by users inside and outside of Netflix. This was a great conversation about the challenges of building machine learning projects and the work being done to make it more achievable.

  • Django 3.1 Released

    The Django team is happy to announce the release of Django 3.1.

  • Awesome Python Applications: buku

    buku: Browser-independent bookmark manager with CLI and web server frontends, with integrations for browsers, cloud-based bookmark managers, and emacs.

  • PSF GSoC students blogs: Week 9 Check-in

DRM and Proprietary Software Leftovers

  • Some Photoshop users can try Adobe’s anti-misinformation system later this year

    Adobe pitched the CAI last year as a general anti-misinformation and pro-attribution tool, but many details remained in flux. A newly released white paper makes its scope clearer. The CAI is primarily a more persistent, verifiable type of image metadata. It’s similar to the standard EXIF tags that show the location or date of a photograph, but with cryptographic signatures that let you verify the tags haven’t been changed or falsely applied to a manipulated photo.

    People can still download and edit the image, take a screenshot of it, or interact the way they would any picture. Any CAI metadata tags will show that the image was manipulated, however. Adobe is basically encouraging adding valuable context and viewing any untagged photos with suspicion, rather than trying to literally stop plagiarism or fakery. “There will always be bad actors,” says Adobe community products VP Will Allen. “What we want to do is provide consumers a way to go a layer deeper — to actually see what happened to that asset, who it came from, where it came from, and what happened to it.”

    The white paper makes clear that Adobe will need lots of hardware and software support for the system to work effectively. CAI-enabled cameras (including both basic smartphones and high-end professional cameras) would need to securely add tags for dates, locations, and other details. Photo editing tools would record how an image has been altered — showing that a journalist adjusted the light balance but didn’t erase or add any details. And social networks or other sites would need to display the information and explain why users should care about it.

  •  
  • EFF and ACLU Tell Federal Court that Forensic Software Source Code Must Be Disclosed
           
             

    Can secret software be used to generate key evidence against a criminal defendant? In an amicus filed ten days ago with the United States District Court of the Western District of Pennsylvania, EFF and the ACLU of Pennsylvania explain that secret forensic technology is inconsistent with criminal defendants’ constitutional rights and the public’s right to oversee the criminal trial process. Our amicus in the case of United States v. Ellis also explains why source code, and other aspects of forensic software programs used in a criminal prosecution, must be disclosed in order to ensure that innocent people do not end up behind bars, or worse—on death row.

             

    The Constitution guarantees anyone accused of a crime due process and a fair trial. Embedded in those foundational ideals is the Sixth Amendment right to confront the evidence used against you. As the Supreme Court has recognized, the Confrontation Clause’s central purpose was to ensure that evidence of a crime was reliable by subjecting it to rigorous testing and challenges. This means that defendants must be given enough information to allow them to examine and challenge the accuracy of evidence relied on by the government.

  •                
  • Powershell Bot with Multiple C2 Protocols
                     
                       

    I spotted another interesting Powershell script. It's a bot and is delivered through a VBA macro that spawns an instance of msbuild.exe This Windows tool is often used to compile/execute malicious on the fly (I already wrote a diary about this technique[1]). I don’t have the original document but based on a technique used in the macro, it is part of a Word document. It calls Document_ContentControlOnEnter[2]: [...]

  •      
  • FBI Used Information From An Online Forum Hacking To Track Down One Of The Hackers Behind The Massive Twitter Attack
           
             

    As Mike reported last week, the DOJ rounded up three alleged participants in the massive Twitter hack that saw dozens of verified accounts start tweeting out promises to double the bitcoin holdings of anyone who sent bitcoin to a certain account.

  • Twitter Expects to Pay 9-Figure Fine for Violating FTC Agreement

    That means that the complaint is not related to last month’s high-profile [cr]ack of prominent accounts on the service. That security incident saw accounts from the likes of Joe Biden and Elon Musk ask followers to send them bitcoin. A suspect was arrested in the incident last month.

  • Twitter Expects to Pay Up to $250 Million in FTC Fine Over Alleged Privacy Violations

    Twitter disclosed that it anticipates being forced to pay an FTC fine of $150 million to $250 million related to alleged violations over the social network’s use of private data for advertising.

                           

    The company revealed the expected scope of the fine in a 10-Q filing with the SEC. Twitter said that on July 28 it received a draft complaint from the Federal Trade Commission alleging the company violated a 2011 consent order, which required Twitter to establish an information-security program designed to “protect non-public consumer information.”

                           

    “The allegations relate to the Company’s use of phone number and/or email address data provided for safety and security purposes for targeted advertising during periods between 2013 and 2019,” Twitter said in the filing.

  • Apple removes more than 26,000 games from China app store

    Apple pulled 29,800 apps from its China app store on Saturday, including more than 26,000 games, according to Qimai Research Institute.

                       

    The removals are in response to Beijing's crackdown on unlicensed games, which started in June and intensified in July, Bloomberg reported. This brings an end to the unofficial practice of letting games be published while awaiting approval from Chinese censors.

  • Intuit Agrees to Buy Singapore Inventory Software Maker

    Intuit will pay more than $80 million for TradeGecko, according to people familiar with the matter, marking one of the biggest exits in Singapore since the Covid-19 pandemic. TradeGecko has raised more than $20 million to date from investors including Wavemaker Partners, Openspace Ventures and Jungle Ventures.

  • Justice Department Is Scrutinizing Takeover of Credit Karma by Intuit, Maker of TurboTax

    The probe comes after ProPublica first reported in February that antitrust experts viewed the deal as concerning because it could allow a dominant firm to eliminate a competitor with an innovative business model. Intuit already dominates online tax preparation, with a 67% market share last year. The article sparked letters from Sen. Ron Wyden, D-Ore., and Rep. David Cicilline, D-R.I., urging the DOJ to investigate further. Cicilline is chair of the House Judiciary Committee’s antitrust subcommittee.

Security Leftovers

           
  • DNS configuration recommendations for IPFire users

    If you are familiar with IPFire, you might have noticed that DNSSEC validation is mandatory, since it defeats entire classes of attacks. We receive questions like "where is the switch to turn off DNSSEC" on a regular basis, and to say it once and for all: There is none, and there will never be one. If you are running IPFire, you will be validating DNSSEC. Period.

    Another question frequently asked is why IPFire does not support filtering DNS replies for certain FQDNs, commonly referred to as a Response Policy Zone (RPZ). This is because an RPZ does exactly what DNSSEC attempts to secure users against: tampering with DNS responses. From the perspective of a DNSSEC-validating system, an RPZ will just look like an attacker (if the queried FQDN is DNSSEC-signed, which is what we strive for with as many domains as possible), thus creating a considerable amount of background noise. Obviously, this makes detecting ongoing attacks very hard, most times even impossible - the haystack to search just becomes too big. Further, it does not cover direct connections to hardcoded IP addresses, which is what some devices and attackers usually use, as that does not rely on DNS being operational and does not leave any traces. Using an RPZ will not make your network more secure; it just attempts to cover up the fact that certain devices within it cannot be trusted.

    Back to DNSSEC: In case the queried FQDNs are signed, forged DNS replies are detected because they do not match the RRSIG records retrieved for that domain. Instead of being transparently redirected to a fraudulent web server, the client will only display an error message to its user, indicating a DNS lookup failure. Large-scale attacks based on forged DNS replies are frequently observed in the wild (the DNSChanger trojan is a well-known example), which is why you want to benefit from validating DNSSEC and from more and more domains being signed.
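
    As a rough illustration of what this validation looks like from a client's point of view, here is a minimal Python sketch (using the third-party dnspython package; the resolver address 192.168.0.1 is a placeholder for an IPFire box and is not taken from the post). It asks the resolver for DNSSEC data and checks whether the AD (authenticated data) flag comes back set:

      import dns.flags
      import dns.resolver

      # Point the query at the validating resolver; the address is a placeholder.
      resolver = dns.resolver.Resolver(configure=False)
      resolver.nameservers = ["192.168.0.1"]
      resolver.use_edns(0, dns.flags.DO, 1232)  # set the DO bit to request DNSSEC records

      answer = resolver.resolve("example.org", "A")
      # AD set means the resolver validated the RRSIG chain; a forged or
      # RPZ-rewritten answer for a signed zone would fail validation instead.
      print("DNSSEC validated:", bool(answer.response.flags & dns.flags.AD))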

  • Security updates for Tuesday

    Security updates have been issued by Debian (libx11, webkit2gtk, and zabbix), Fedora (webkit2gtk3), openSUSE (claws-mail, ghostscript, and targetcli-fb), Red Hat (dbus, kpatch-patch, postgresql-jdbc, and python-pillow), Scientific Linux (libvncserver and postgresql-jdbc), SUSE (kernel and python-rtslib-fb), and Ubuntu (ghostscript, sqlite3, squid3, and webkit2gtk). 

  • Official 1Password Linux App is Available for Testing

    An official 1Password Linux app is on the way, and brave testers are invited to try an early development preview. 1Password is a user-friendly (and rather popular) cross-platform password manager. It provides mobile apps and browser extensions for Windows, macOS, Android, iOS, Google Chrome, Edge, Firefox — and now a dedicated desktop app for Linux, too.

  • FBI Warns of Increased DDoS Attacks

    The Federal Bureau of Investigation warned in a “private industry notification” last week that attackers are increasingly using amplification techniques in distributed denial-of-service attacks. There has been an uptick in attack attempts since February, the agency’s Cyber Division said in the alert. An amplification attack occurs when attackers send a small number of requests to a server and the server answers with numerous responses. The attackers spoof the source IP address to make it look like the requests are coming from a specific victim, and the resulting responses overwhelm the victim’s network. “Cyber actors have exploited built-in network protocols, designed to reduce computation overhead of day-to-day system and operational functions to conduct larger and more destructive distributed denial-of-service amplification attacks against US networks,” the FBI alert said. Copies of the alert were posted online by several recipients, including threat intelligence company Bad Packets.
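
    To make the amplification idea concrete, here is a small sketch with purely illustrative numbers (they are not taken from the FBI notice) showing how a bandwidth amplification factor scales up an attacker's traffic:

      # Illustrative numbers only, not figures from the FBI notice:
      # a 60-byte spoofed request to a reflector that answers with 3,000 bytes.
      request_bytes = 60
      response_bytes = 3000
      amplification_factor = response_bytes / request_bytes  # 50x in this example

      # With spoofed source addresses, 10 Mbit/s of attacker upstream bandwidth
      # becomes roughly 500 Mbit/s of reflected traffic aimed at the victim.
      attacker_bandwidth_mbps = 10
      victim_traffic_mbps = attacker_bandwidth_mbps * amplification_factor

      print(f"Amplification factor: {amplification_factor:.0f}x")
      print(f"Traffic hitting the victim: {victim_traffic_mbps:.0f} Mbit/s")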

  • NSA issues BootHole mitigation guidance

    Following the disclosure of a widespread buffer overflow vulnerability that could affect potentially billions of Linux and Windows-based devices, the National Security Agency issued a follow-up cybersecurity advisory highlighting the bug and offering steps for mitigation. The vulnerability -- dubbed BootHole -- impacts devices and operating systems that use signed versions of the open-source GRUB2 bootloader software found in most Linux systems. It also affects any system or device using Secure Boot -- a root firmware interface responsible for validating the booting process -- with Microsoft's standard third-party certificate authority. The vulnerability enables attackers to bypass Secure Boot to allow arbitrary code execution and “could be used to install persistent and stealthy bootkits,” NSA said in a press statement.
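
    As a small aside (this is not part of the NSA guidance), a quick way to see whether a Linux machine is even booted with UEFI Secure Boot enabled is to read the standard SecureBoot EFI variable. The Python sketch below assumes an efivarfs mount at the usual path, where the first four bytes of a variable are attribute flags and the fifth byte carries the value:

      from pathlib import Path

      # EFI global variable namespace GUID; the path assumes efivarfs is
      # mounted at the usual location on a UEFI system.
      SECURE_BOOT_VAR = Path(
          "/sys/firmware/efi/efivars/"
          "SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c"
      )

      if not SECURE_BOOT_VAR.exists():
          print("No SecureBoot variable found (legacy BIOS or non-UEFI boot).")
      else:
          data = SECURE_BOOT_VAR.read_bytes()
          # efivarfs prepends 4 bytes of attribute flags; the payload follows.
          enabled = len(data) >= 5 and data[4] == 1
          print("Secure Boot enabled:", enabled)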