Planet KDE - http://planetKDE.org/

Python binding for Kuesa

Monday 22nd of July 2019 12:40:56 PM

KUESA™ is a Qt module designed to load, render and manipulate glTF 2.0 models in applications using Qt 3D.

Kuesa provides a C++ and a QML API which makes it easy to do things like triggering animations contained in the glTF files, finding camera details defined by the designer, etc.

It is a great tool for designers and developers to share glTF-based 3D assets.

With the upcoming release of Kuesa 1.1, we are introducing a Python binding for Kuesa. This provides a simple yet powerful way for programmers to integrate glTF content into their Python applications with just a few lines of code.

Building Kuesa’s Python binding

The first step is, of course, to build and install Kuesa itself. Instructions are available here; it's a simple process. Kuesa is a Qt module, so it typically installs inside Qt's standard folders.

The next step is to install Python for Qt, also known as PySide. Note that you must install it for the same version of Qt that you compiled Kuesa with.
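
A quick way to check what you have installed is to compare the PySide2 binding version with the Qt runtime it loads. This is a generic sketch, not something from Kuesa's documentation:

import PySide2
from PySide2.QtCore import qVersion

# The binding version and the Qt runtime version should match the Qt
# that Kuesa was compiled against.
print("PySide2 version:", PySide2.__version__)
print("Qt runtime version:", qVersion())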

If you’ve built your own version of Qt, fear not. Building the python binding for that is fairly easy and quick.

In all cases, we recommend you use a Python virtual environment. This will let you install several versions of the Qt bindings side by side.

Once you’ve installed Python for Qt, you’re ready to build the Kuesa binding.

Building bindings for C++ libraries is a relatively simple process which uses several things:

  • a header file which includes all the C++ headers for the files you want to build bindings for
  • an XML file which lists all the classes and provides helper information for the binding generator; this contains details about enums, can help hide C++ methods or properties, etc.
  • a binding generator, Shiboken, which parses the C++ headers and generates C++ code that implements the binding

The code for the binding, inside Kuesa’s src folder, contains a CMake project file which takes care of all the details.

So assuming the Python virtual env is active and the right version of Qt is in the path, building the binding should be as easy as:

cd .../src/python
mkdir build
cd build
cmake ..
make
make install

Note: the version of Python for Qt which ships with the 5.12 series is incomplete in its coverage of Qt 3D. In particular, some useful classes were missed in the animation module, like QClock, which is useful to control the speed and the direction of an animation. We have submitted a patch for PySide which fixes this, and it was merged in 5.13.

Your first Kuesa application with Python

Kuesa ships with a simple Python application that demonstrates the use of the binding.

The application starts by importing the various required modules:

from PySide2.QtCore import (Property, QObject, QPropertyAnimation, Signal, Slot)
from PySide2.QtGui import (QGuiApplication, QMatrix4x4, QQuaternion, QVector3D, QWindow)
from PySide2.QtWidgets import (QWidget, QVBoxLayout, QHBoxLayout, QCheckBox, QPushButton, QApplication)
from PySide2.Qt3DCore import (Qt3DCore)
from PySide2.Qt3DRender import (Qt3DRender)
from PySide2.Qt3DExtras import (Qt3DExtras)
from PySide2.Qt3DAnimation import (Qt3DAnimation)
from Kuesa import (Kuesa)


The scene graph will need a SceneEntity and a GLTF2Importer node to load the glTF content.

self.rootEntity = Kuesa.SceneEntity()
self.rootEntity.loadingDone.connect(self.onLoadingDone)
self.gltfImporter = Kuesa.GLTF2Importer(self.rootEntity)
self.gltfImporter.setSceneEntity(self.rootEntity)
self.gltfImporter.setSource("file://" + wd + "/../assets/models/car/DodgeViper-draco.gltf")

It must also use the frame graph provided by Kuesa. This is needed for the custom materials that Kuesa uses for PBR rendering. It also provides performance improvements such as early z filling, and additional post-processing effects such as blurring or depth of field.

self.fg = Kuesa.ForwardRenderer()
self.fg.setCamera(self.camera())
self.fg.setClearColor("white")
self.setActiveFrameGraph(self.fg)

Note: in this case, the base class is an instance of Qt3DWindow from Qt 3D’s extras module.

When loading is completed, the content of the glTF file can be accessed using the various collections that are available on the SceneEntity class. For example, you can access the animations created by the designer and baked into the glTF file. These can then be controlled from Python!

def onLoadingDone(self):
    self.hoodClock = Qt3DAnimation.QClock(self.rootEntity)
    self.hoodAnimation = Kuesa.AnimationPlayer(self.rootEntity)
    self.hoodAnimation.setClock(self.hoodClock)
    self.hoodAnimation.setSceneEntity(self.rootEntity)
    self.hoodAnimation.setClip("HoodAction")

Referencing glTF content is done using the names assigned by the designer.

From there on, animations can be started and stopped by accessing the animation player object. Speed and direction can be changed using the clock created above.
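
For illustration, that control could look like the minimal sketch below. It assumes the hoodAnimation and hoodClock members created in onLoadingDone() above; the helper method names are mine, not from the shipped example:

# Sketch only: assumes AnimationPlayer exposes start()/stop() and that
# QClock's playback rate controls speed and direction, as described above.
def playHood(self, forward=True):
    # A negative playback rate runs the clip backwards.
    self.hoodClock.setPlaybackRate(1.0 if forward else -1.0)
    self.hoodAnimation.start()

def stopHood(self):
    self.hoodAnimation.stop()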

Finally, the application embeds the Qt3DWindow inside a widget based UI (using a window container widget) and creates a simple UI to control the animations.
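
That embedding step relies on Qt's stock QWidget.createWindowContainer API. Here is a minimal sketch, reusing the PySide2 imports shown earlier; the widget names are illustrative, not taken from the shipped example:

# Wrap the Qt3DWindow in a widget container and build a trivial control UI.
app = QApplication([])

window3d = Window()  # hypothetical name for the Qt3DWindow subclass above
container = QWidget.createWindowContainer(window3d)

root = QWidget()
layout = QVBoxLayout(root)
layout.addWidget(container)
layout.addWidget(QPushButton("Toggle hood animation"))
root.resize(800, 600)
root.show()

app.exec_()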


Download KUESA™ here.

The post Python binding for Kuesa appeared first on KDAB.

Interview with Manga Tengu

Monday 22nd of July 2019 07:23:17 AM

Could you tell us something about yourself?

Hi I’m Nour, better known as “manga tengu”.

I’ve loved drawing since I was a kid. I think it is because pen and paper have always been the most widespread toys for children. When I got to choose what to study I went for architecture as it was a way to combine science and art.

I’ve always been hacking my computer, which led me to get interested in open source.

Do you paint professionally, as a hobby artist, or both?

I paint as a professional hobbyist, which means it's a hobby, but I put maximum rigor and commitment into it. Professionally, I've been teaching Krita and digital painting at ISART since November 2018.

I’ve made lots of architectural illustrations, which actually led me to get interested in digital as a painting medium.

What genre(s) do you work in?

I started with caricature as a child. After admitting to myself that I wanted to draw manga, I made hundreds of pages of manga.

After deciding to enter the color realm I’ve been into … drawing manga as digital paintings. Then I rediscovered impressionism, realism…

I’m always piling something on top of what I already have.

Whose work inspires you most — who are your role models as an artist?

I made a rule to always be inspired by several artists at the same time. Getting too focused on a single one appears to have some dangerous effects on me. Actually Vladimir Volegov is the one that moves me the most.

How and when did you get to try digital painting for the first time?

In 2005, I absolutely hated it. It was on a Wacom Graphire 4, small format. At the time it looked so expensive to me… I tried it for something like a few hours and left it. Then I got back to it for a few hours every year or so… I didn’t really get into it before 2011.

What makes you choose digital over traditional painting?

1. The biggest reason for me was that even though a tablet doesn’t look cheap, it’s way cheaper than fine art material. It puts everybody on the same level. If I have a tablet I can have a wider range of color than the finest pigments could give.

2. Speed and flexibility. Depending on how you go at it, your paint can behave as if it were wet for as long as you want, or be instantly dry, then wet again… There is no time lost mixing, cleaning, et cetera.

3. You can focus on your art: if the constraints you’ve been freed from in points 1 and 2 are no longer there, then what can make you stand out? Your talent, your experience, your ideas…

How did you find out about Krita?

I found a David Revoy video about Mypaint. I don’t remember if he suggested Krita or not back then but I thought hey, it seems people in the open source world have more than Blender and Linux for me! Then I went on YouTube and was really impressed with Ramon Miranda’s symmetric robot. I found it so cool I needed to try Krita out.

What was your first impression?

I realized it could do all the things I favored Manga Studio over Photoshop for.

What do you love about Krita?

So many things …

1. At the core, it is definitely meant for drawing and painting. You can feel it in the features and their implementation.

2. I can map shortcuts to any key, not some stupid combination of ctrl button or function buttons. This is very important for lefties. I end up with a very efficient workflow.

3. It’s light and runs on Linux. So I could restore some old computers nobody wanted because “Windows takes 15 minutes to start” and make them into decent workstations.

4. You can talk to the devs directly. It’s not like some gigantic monolith you can only undergo. In fact it feels like a close community.

5. All that for free, seriously?

What do you think needs improvement in Krita? Is there anything that really annoys you?

The Mac version needs some steroids. Also the resources, bundles, and shortcuts import/export (I heard they were undergoing some pimping… I have great hopes). For now, when I go to a new computer I just override the Krita resource folder, but that’s not enough to bring everything back into place.

What sets Krita apart from the other tools that you use?

The brush engines, the way it is meant for painting…

If you had to pick one favorite of all your work done in Krita so far, what would it be, and why?

That drawing of the Chinese lady in the woods. I feel this is when I stopped focusing on making stupidly smooth shading and began working on my brushwork.

What techniques and brushes did you use in it?

All my brushes are modifications of brushes bundled with Krita:
g)_dry_bristles
i)_wet_bristles_rough
b)_basic-2_opacity

Where can people see more of your work?

https://www.youtube.com/channel/UCbRzccyl4HujGtujH2PjrwA
https://www.artstation.com/mangatengu
https://twitter.com/MangaTengu
https://www.instagram.com/nour.digital.painting
https://www.twitch.tv/mangatengu

Anything else you’d like to share?

Keep it fun when you paint! If you don’t enjoy it, you need to change it.

Plasma Mobile at Plasma Sprint Valencia

Monday 22nd of July 2019 07:20:00 AM

In June we gathered at Slimbook’s offices to work on Plasma. Along with Plasma developers, we were also joined by the KDE Usability and Productivity team.

During the sprint I mostly worked on creating an up-to-date image for Plasma Mobile, as the Plasma Mobile image had been quite out-of-date for the last few weeks and needed an update.

Some of the bugfixes we did include:

Apart from Plasma Mobile, I worked on general Plasma bugfixes as well:

If you want to know the overall progress made in the Plasma + Usability & Productivity sprint, you can take a look at the dot story for a more detailed sprint report.

Thanks to Slimbook for hosting us and KDE e.V. for sponsoring my travel!

Also, I am going to Akademy 2019, where I will be talking with Marco Martin about Plasma on embedded devices.

Somewhat Usable

Monday 22nd of July 2019 06:12:29 AM

Adding a feature yourself is a lot more satisfying than requesting that someone add it for you, because now you are both the producer and the consumer. But to be honest, I never thought I would be the one implementing the Magnetic Lasso for Krita when I requested it 4 years back, let alone that I would even get paid for doing so.

Month 2 in making the Titler – GSoC ’19

Monday 22nd of July 2019 04:00:24 AM

Hi! It’s been a while

And sorry for that. I had planned to update last week but couldn’t do so, as I had a few health issues, but now I’m alright.

The QML MLT producer – the progress so far…

From my understanding so far (forgive me for any mistakes that I might make – it’s a different codebase with different concepts – I wholeheartedly welcome corrections and suggestions), the whole producer boils down to two parts: the actual producer code (which is in C and does the ‘producer stuff’) and the wrapper code (which ‘wraps’, supplements, and does the actual rendering of the QML frames). The wrapper files are mainly responsible for rendering the QML templates passed to them and making the result available to the actual producer. Consequently, most of the work is done in the wrapper files, as the producer itself doesn’t change much: it still does the same things as the existing XML producer (producer_kdenlivetitle.c), such as loading a file, generating a frame, and calling rendering methods from the wrapper files.

So let’s see what work has been done, starting with the new producer file in mlt/src/modules/qt/producer_qml.c:

void read_qml(mlt_properties properties)

As the name suggests, it opens a “resource” file and stores the QML file in the global mlt_properties that is passed to it.

static int producer_get_image( mlt_frame frame, uint8_t **buffer, mlt_image_format *format, int *width, int *height, int writable )

This method takes in a frame and makes use of the wrapper file: it calls the method which does the rendering in the wrapper files (renderKdenliveTitle()) and sets the rendered image on the frame that was passed, using mlt_frame_set_image.

static int producer_get_frame( mlt_producer producer, mlt_frame_ptr frame, int index )

This method generates a frame, calls producer_get_image() and sets a ready rendered frame for the producer, and prepares for the next frame.

The wrapper file has the following methods –

void loadQml( producer_ktitle_qml self, const char *templateQml )

What this method does is load a QML file (passed as a pointer to a char array) and do a bunch of things: it checks that the file is valid and initialises a few properties (width and height) using the mlt_properties_set() methods. The next method we have is:

void renderKdenliveTitle( producer_ktitle_qml self, mlt_frame frame, mlt_image_format format, int width, int height, double position, int force_refresh )

renderKdenliveTitle() does the rendering part – given a mlt_frame, format and its parameters. And here is where I use QmlRenderer – my last month’s work – it renders QML. I refactored the code a bit to return a rendered QImage in the library. I make use of the renderSingleFrame() method which renders a QML frame for a given position (time)

The programming part in itself wasn’t difficult (although it is far, far from a complete producer – there are a lot of memory leaks right now); understanding how all of it works together is what took the most effort. In fact, it took me a little more than a week just to understand and comprehend the working of the producer codebase!

For the most part, I believe 80% of the producer work is done. The plan is to get a working, solid producer by next week. The current code is still far from a ready producer, although the whole structure is set and most of the refactoring that had to be done in the QmlRenderer library to accommodate the producer methods is done.

Also, the build system for the QmlRenderer lib was revamped; it’s a clean build system now (thanks to Vincent), so to build it, all you need to do is clone the repository and do this:

mkdir build
cd build
qmake -r ..
make
cd bin
./QmlRender -o /path/to/output/directory -i /path/to/input/QML/file

Neat

You can view the code for the QML MLT producer here.

GSoC Milestone Update 1.1

Sunday 21st of July 2019 11:22:07 PM

The second part of Milestone 1 for my Google Summer of Code 2019 project, porting KDE Connect to Windows, involves enabling the SFTP plugin that ships in the Linux build.

The plugin allows you to navigate through your mobile device’s files (like you do with a file manager) ON YOUR DESKTOP! It makes use of sshfs to mount the remote file system on your desktop. After that, you can use any file manager you like; heck, you can even use your terminal to take a walk through your mobile’s files. Once that is done, you can do literally anything with the mobile device’s files as you would with the local filesystem: move files, copy them to your desktop machine, delete them, rename them, anything!

How does it work?

The plugin, like all other plugins, has two parts: the desktop side (could be Linux, Windows or OS X) and the mobile side (could be Android, Plasma Mobile or Sailfish OS).

Desktop Side

To start the plugin, the simplest way is to use the Browse device button from the device options.

This is not in the official builds yet. I’ll link an instructions file below to get it working if you want to try it.

Send file?
The SFTP plugin concerns itself with the Browse device button only. The Send file button allows you to send one file to the connected device; it has different code and does not share functionality with Browse device.

Dev Notes: Invocation?

Clicking the button invokes the startBrowsing() function, which is the starting point for this plugin. After that, the mounter object handles the connecting part of the SSHFS functionality.

Mobile side

Between the startBrowsing() function invocation and the mounter object instantiation, the mobile is invoked by the desktop, and replies with a packet containing the IP address, port, password, username and path for the desktop to connect to the mobile.
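
For illustration only, such a reply might look roughly like the sketch below; the field names are my assumption based on the description above, not a copy of the actual protocol:

# Hypothetical sketch of the mobile's reply; KDE Connect packets are JSON.
sftp_reply = {
    "id": 1563700000000,           # timestamp-based packet id
    "type": "kdeconnect.sftp",     # packet type handled by the SFTP plugin
    "body": {
        "ip": "192.168.1.42",      # address of the mobile device
        "port": 1739,              # port of the SFTP server on the phone
        "user": "kdeconnect",      # per-session credentials
        "password": "s3cr3t",
        "path": "/storage/emulated/0",  # directory exposed to the desktop
    },
}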

Dev Notes: Security?

To make the connection secure, both devices save each other’s identity during the pairing phase, so no other device can use the password and the address of the mobile device, even if it were able to decrypt the encrypted packet that carries the credentials for the SFTP plugin to the desktop.

For the interested, you can have a look at the encryption information in the app as well. All communications over KDE Connect are end-to-end encrypted.

Encryption Info shows the SHA1 fingerprint of both devices on the mobile!

So, now with a working knowledge of the plugin, let’s go on to porting it over to the Windows side.

Getting SSHFS Working on Windows 10

KDE Connect being free software, it is simply against its principles to opt for, or even recommend, proprietary software for any part of the functionality that KDE Connect provides. Naturally, my mentors suggested that I find open alternatives that could enable me to use SSHFS on Windows.

Attempt 01 : SSHFS-Win

As the site reads:

SSHFS-Win is a minimal port of SSHFS to Windows. Under the hood it uses Cygwin for the POSIX environment and WinFsp for the FUSE functionality.

While the tech stack and the demonstrations looked promising, I could not get it to work on my test system. I tried all the various methods described in the project’s documentation, but sadly it simply was not able to connect to the device. The connection kept resetting, so it couldn’t be about the credentials either. I tried to make it work for a couple of days, but then I had to move on to the next one.

NOTE: I also tried to *just make a connection manually* with the credentials through the GUI frontend SiriKali – same fate.

Attempt 02 : Win-SSHFS

While Win-SSHFS supports Windows 10 on paper, I was still unable to make use of it – same error.

close, but not quite

Attempt 03 : SFTP Net Drive

While not exactly an attempt, I started doubting whether these credentials were any good at all! I installed the trial version of SFTP Net Drive, and surprisingly, I was able to at least get a taste of the sweet SFTP goodness on Windows!

see? not impossible! works, but proprietary tech

Attempt 04 : Swish

This final attempt at getting the SSHFS bit working took most of the two weeks previously allotted to the SFTP plugin. To test whether it works or not, I tried out the latest release available at the time of testing. It works, yes.

The sad parts about Swish are:

  1. it is not maintained anymore
  2. it uses a whole different package manager (the Hunter package manager) to manage dependencies.

For the first part, since it was already working, I decided to go for Swish (hey, it works!) so we get SSHFS in; I could work on fixing any new limitations the project faces over time.

Now, as I went ahead with the dissection of the project’s build process, the second part started getting weirder and tougher at every turn. Not only does the project use a lot of alamaison’s own projects, the code base itself is based on MSVC 2013! There had already been a lot of changes and deprecations by the time I built with MSVC 2019.

The other ported software (with tests DISABLED) is available as my forks:

I also got to build Craft blueprints for a Boost library as well.

In the end, the two weeks were wasted because the project was practically impossible to build anew with MSVC 2019.

Next up, we decided to step back and look at the way the SFTP plugin worked prior to SSHFS.

The Solution

As you can tell from the code here, the prior implementation used to access the SFTP server (the mobile) using KIO. The SFTP plugin of KIO was, again, not maintained for Windows.

Detour to kio-extras

The SFTP plugin is part of kio-extras, which houses a lot of other plugins as well. These plugins extend the functionality of KIO and may be used by applications that deal with input/output protocols like SFTP.

The plugin’s functionality was not difficult to navigate; it has simple, easy-to-read functions for the various activities one might perform when dealing with SFTP.

Oh well, the patch to fix SFTP got accepted and I just landed it. Neat!

Next up, patching the plugin in KDE Connect

With a few lines of code, a patch is under review at KDE invent. Apply this patch to get the button in the device menu like in the image (way) above.

After applying this patch, clicking the Browse device button will initiate the connection through the SFTP plugin of kio-extras, which will pop up a dialog box with a pre-filled password box. Just click OK and the connection will be made, after which an sftp:// URI is invoked for the system to start browsing. Sadly, Windows systems do not ship with built-in support for sftp:// URIs. So, for this last bit, you can install any third-party software that provides the missing functionality (e.g. WinSCP is a good one!). After this, you will be able to navigate your mobile device’s filesystem like your local drives!

you can use WinSCP or any other file manager that supports sftp:// URI handling

You can also download WinSCP from the Windows Store if you wish to donate to them along the way! The installer from the official website is completely free of charge!

As it stands, here is a demonstration of KDE Connect’s SFTP plugin for Windows:

Happy KDEing!


Shubham (shubham)

Sunday 21st of July 2019 05:32:12 PM
Second month progress

Hello visitors!! I am here presenting my second-month GSoC project report. I will provide the links to my work at the end of this section.

The second month of the work period was much easier to manage than the first one, all thanks to my semester-end vacations. Because of that, I contributed much more than during the first month. This month has been a fruitful month for me, fruitful in the sense that I can now see code written by me doing some action: the base outline which I had laid for the authorization back-end is now producing some results, and a Polkit authentication dialog can now be seen. Coming to the progress made during this period, I have done the following:

Refine and merge the Polkit back-end and QDBus communication patches: I have refactored and refined the above-stated patches by removing extra functionality which I had added during my first work period. I have written and arranged the code such that it now shows the authorization dialog generated by the KDE Polkit daemon. After doing so, I merged both patches into one.

Add a unit test for the Polkit authorization back-end: I have added a unit test for the Polkit authorization back-end, testing its functionality. This is almost complete, with just a bit or two left.

Compile the helper into a stand-alone application: The helper itself is a separate non-GUI application which works independently from the main application. Earlier, a macro provided by KAuth was used to compile it into a stand-alone application. Now, I have completely removed the dependence on KAuth to do so.

Epilogue

So yes, we are gradually moving forward towards completely removing our dependence on KAuth. But there are some things yet to be completed. To name one, I need to finish the QDBus communication from the helper to the application, which sends D-Bus (inter-process communication) messages. I have tried this in the QDBus patch, but it is not yet fully complete. All of this is currently done by KAuth in master.

Screenshot of the KDE Authentication Dialog:

Links to my patches:

  1. Merged Polkit back-end and QDBus patch:
  2. Unit test:
  3. Helper as a standalone application:

Link to cgit repository:

Maybe give me some suggestions/advice about the code or anything else you feel like commenting on. If you have any suggestions/questions, feel free to ping me at aryan10jangid@gmail.com. No spam please (just kidding :) ). Till next time, bye bye!!

Kate LSP Status – July 21

Sunday 21st of July 2019 01:18:00 PM

The new LSP client by Mark Nauwelaerts keeps making nice progress.

It will not be shipped with the KDE Applications 19.08 release, but in master it is now compiled & installed by default. You only need to activate it on the plugin configuration page in Kate’s settings dialog to be able to use it.

For details on how to build Kate master with its plugins, please take a look at this guide.

If you want to start hacking on the plugin, you can find it in kate.git, under addons/lspclient.

Feel welcome to show up on kwrite-devel@kde.org and help out! All development discussions regarding this plugin happen there.

If you are already familiar with Phabricator, post some patch directly at KDE’s Phabricator instance.

What is new this week?

Most of the changes are internal cleanups and minor improvements.

Feature-wise, the hover implementation now works more like in other editors or IDEs: you get a nice tool tip after some delay:

Never try to guess again what some auto means ;=)

There is still a lot that can be improved; e.g. a filter for the symbols outline is in the works:

To be able to later out-source some parts of the generic LSP client code to a library, if there is demand, we aim to make the plugin licensed under the MIT license. This should make it easier for other projects to depend on our code, if wanted.

[GSoC – 4] Achieving consistency between SDDM and Plasma

Sunday 21st of July 2019 11:47:41 AM

Previously: 1st GSoC post, 2nd GSoC post, 3rd GSoC post.

This blog post marks the landing of the initial implementation of theme syncing between SDDM and Plasma, which you may already have read about in Nate's post. Those of you running master can test the feature out by going to the Advanced tab in the…

KDE Usability & Productivity: Week 80

Sunday 21st of July 2019 06:01:04 AM

Somehow we’ve gone through 80 weeks of progress reports for KDE’s Usability & Productivity initiative! Does that seem like a lot to you? Because it seems like a lot to me. Speaking of a lot, features are now pouring in for KDE’s Plasma 5.17 release, as well as Applications 19.08. Even more is lined up for Applications 19.12 too, which promises to be quite a release. Anyway, here’s what we’ve got for you:

New Features

Bugfixes & Performance Improvements

User Interface Improvements

Next week, your name could be in this list! Not sure how? Just ask! I’ve helped mentor a number of new contributors recently and I’d love to help you, too! You can also check out https://community.kde.org/Get_Involved, and find out how you can help be a part of something that really matters. You don’t have to already be a programmer. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!

If you find KDE software useful, consider making a tax-deductible donation to the KDE e.V. foundation.

View and Examples

Sunday 21st of July 2019 12:00:00 AM

This week I began learning QML to try to fix the View that shows the graphs and the tools for manipulating them. More precisely, I wanted to deal with some problems:

  • Vertices that go near the border of the view cannot be moved;
  • Ctrl-A does not select the whole view, just a limited part;
  • The option menu in Create Node and Create Edge puts the lateral menu over the scroll bar;
  • The scroll bars in the view don’t have much use, as the view is tiny, and the mouse can be used to move around in most cases;
  • The Flickable motion from Qt is stealing most of the clicks from the MouseArea (maybe remove the interactive option while outside Select/Move?), making it really difficult to create new edges;
  • The icons for the tools are not being shown.

This is the view now:

This is a probable new view (under implementation) that I will present to my mentors:

I have been studying QML to modify this view, and little by little I am building it. Moreover, I modified the two original examples in the code, BreadthFirstSearch and PrimSpanningTree, to work in the current system (they used commands no longer valid in the current version, like interrupt()). I also added one more example for the other search algorithm, DepthFirstSearch, and the other implementations are under way:

  • Topological Sorting Algorithm;
  • Kruskal Algorithm;
  • Dijkstra Algorithm;
  • Bellman-Ford Algorithm;
  • Floyd-Warshall Algorithm;
  • Bipartite Matching Algorithm.

I had some other ideas for the interface that can be easily implemented and are a good way to improve the workflow:

This configuration creates more space for the programmer while leaving enough space to visualize the graphs correctly, and it works well with a new horizontal toolbar.

The new view is not yet available, as it is only local code. The new algorithm for DAG creation and the examples are available in the merge requests here and here, respectively.

(Also, I have been really sick in the last week, but now I think I am better!)

Desk lamp

Saturday 20th of July 2019 04:02:28 PM
desk lamp with mirror behind

Some time ago, I wanted to make my own desk lamp. It should provide soft, bright task lighting above my desk, with no sharp shadows that could cover part of my work area, but also some atmospheric lighting around the desk in my basement office. The lamp should have a natural look, but since I made it myself, I also didn’t mind exposing some of its internals.

SMD5050 LED strips

I had oak floor boards that I got from a friend (thanks, Wendy!) lying around, which I used as the base material for the lamp. I combined these with some RGBW LED strips that I also had lying around, and a wireless controller that would allow me to connect the lamp to the Philips Hue lighting system that I use throughout the house to control the lights. I sanded the wood until it was completely smooth, and then gave it an oil finish to make it durable and give it a more pronounced texture.

Fixed to the ceiling Internals of the desk lamp

The center board is covered in 0.5mm aluminium sheets to dissipate heat from the LED strips (making them last longer) and provide some extra diffusion of the light. This material is easy to work with, and also very suitable to stick the LED strips to. For the light itself, I used SMD5050 LED strips that can produce warm and cold white light, as well as RGB colors. I put 3 rows of strips next to each other to provide enough light. The strips wrap around at the top, so light is not just shining down on my desk, but also reflecting from the walls and ceiling around it. To avoid looking directly into the LEDs, which would be distracting, annoying when working, and also quite ugly, I attached a front and a back board to the lamp as well, making it into an H shape.

Light reflects nicely from surrounding surfaces

The controller (a Gledopto Zigbee controller, which is compatible with Philips Hue) is attached to the center board as well, so I just needed to run two 12V wires to the lamp. I was being a bit creative here, and thought “why not use the power cables to also hang the lamp from the ceiling?”. I used coated steel wire, which I stripped here and there, so power runs through steel hooks screwed into the ceiling; this supplies the lamp with power while also making it possible to adjust its height. This ended up creating a rather clean look for the whole lamp and really brought the whole thing together.

LabPlot has got some beautifying and lots of datasets

Saturday 20th of July 2019 03:34:01 PM
Hello everyone! The second part of this year's GSoC is almost over, so it was time to let you know about the progress made in the last 3 weeks. I can assure you we haven't been lazing around since then. I think I managed to make quite good progress, so everything is going as planned, or I could even say better. If you haven't read about this year's project, or you just want to go through what has already been accomplished, you can check out my previous post.
So let's go through the new things step by step. I'll try to explain each feature, and also give examples using videos or screenshots.
The first step was to improve the welcome screen and make it easily usable, dynamic, clean and intuitive for users. This step was very important since the welcome screen is the first thing users come into contact with when they start using LabPlot. We had a great idea, which was to take a screenshot of the main window whenever the user saves a project. This screenshot is saved with the project itself and is used as a thumbnail for Recent and Example projects in the welcome screen. The code that deals with taking and saving the screenshot is already committed to the master branch. You can see these thumbnails put to use in the following picture:
Thumbnails put to use
As you might recall from my last post, the only section of the welcome screen which wasn't functional was the examples section. Implementing this feature was the next step; you could already catch a glimpse of it in the previous picture. My mentors and I really like Qt Creator's Examples section, so that's where I got my inspiration. The example projects are shown in a GridView, and they look quite nice thanks to the thumbnails. Every example project has a name and one or more tags assigned to it. Just like in Qt Creator's approach, the example projects are searchable by name and also by one or more tags; there is a search bar providing this functionality. Unfortunately, we haven't managed to create real example projects just yet, so I used some temporary projects for implementing and testing, and these are shown in the following demo video:
 The functionality of the Example Projects section
The next step was to make the sections and their content more dynamic. In my last post, these sections/widgets (whichever name you prefer) were static, with a fixed size, and their content didn't adapt really well to resizing of the main window. As I said, that was only a prototype. During the last weeks I managed to make it really dynamic, as a modern welcome screen should be. The result is pretty nice, at least according to my mentors :). So how does it work? When the user drags the mouse over the frame of a section, a line appears, with which the user can easily resize the section just by dragging. If the user doesn't like the layout he/she created, it can always be reset to the original layout. You can see how it works in the next video:
  Resizing the welcome screen and its content
Another new feature is also connected to the welcome screen. I made it possible for LabPlot to save the layout of the welcome screen whenever it's closed (either because a project is opened/created or because the application is closed). The next time the welcome screen is displayed, its layout isn't the standard one (to which the user can reset the current layout in the settings) but the saved one. It seemed a good idea, and it might be useful, since no one would want to modify the layout again and again if he/she already liked it. This feature is presented in the next video:
  Save welcome screen's layout
Last but not least, the welcome screen got another new feature. Now the user can maximize a section, so he/she can interact with the given section much more easily. When the section no longer needs to be maximized, the user can minimize it, and the former layout is restored. This function is particularly useful since there are many sections, and they might not be big enough without the others getting particularly small. This was the issue that led to figuring out and implementing this idea. Maximizing and minimizing can be done with the icon in the upper-left corner of every section. In addition, I also added go-forward and go-backward icons/buttons to the "Release section", since it's implemented using a WebView and users can navigate away from the starting page:
 Maximizing&Minimizng the welcome screen's widgets
Some changes were made to the categorizing of datasets too. We thought it would be better to organise the datasets into collections (for example, a collection of R datasets, etc.), and then into categories and subcategories. This made it possible to have a single file for a dataset collection, rather than a metadata file for every dataset (as it was previously implemented). This, of course, caused some changes in the ImportDatasetWidget and DatasetMetadataManagerWidget, but their functionality stayed the same:
 Changes on ImportDatasetWidget and DatasetMetadataManagerWidget
As you might remember from my previous post, our main problem was that uploading with KNS3 is disabled for an indefinite amount of time due to errors caused by the library. This made us question whether we should use it at all. Given the mentioned problem, and the fact that we found the library's functionality quite limited for our purposes, we decided not to use it. Instead, we'll provide a considerable collection of datasets ourselves, which should suffice. I have already managed to collect and categorize no less than 1000 datasets, and I'm planning to collect some more.

Finally, I'd like to say a few words about the next steps. We still have to create some "real" example projects so the users can explore the possibilities provided by LabPlot. I'll have to proceed with collecting datasets, in order to provide the users of LabPlot with a considerable dataset collection. As the finish line is getting closer, there won't be new "big features", maybe some minor new ideas if some come up; instead I'd like to focus on cleaning, documenting, refactoring and optimising the code so it will be fit to be brought to the master branch. I'd also like to search for hidden bugs and errors, to make the new and already implemented features more or less flawless. Some tests should also be written for the dataset management part of the code.

This is it for now. I will continue to work on the project alongside Kristóf and Alexander. I truly enjoy working with them, mostly the "brainstorming part"; I think we form quite a good team, and I'm thankful to them for their guidance. Whenever anything new is finished and running, I'll let you know.

See you soon! Bye!

Popular licenses in OpenAPI

Friday 19th of July 2019 02:25:00 PM

Today I was wondering what the most commonly used license in OpenAPI definitions is, so I went and did a quick analysis.

Results

The top 5 (with count and percentage; n=552):

License name                                   Count   Percentage
CC-BY-3.0                                        250       45,29%
Apache-2.0 [1]                                   218       39,49%
MIT                                               15        2,71%
“This page was built with the Swagger API.”        8        1,44%
“Open Government License – British Columbia”       6        1,09%

The struck-out entries are the ones that I would not really consider a proper license.

The license names inside quotation marks are the exact copy-paste from the field. The rest are de-duplicated into their SPDX identifiers.

After those top 5 the long end goes very quickly into only one license per listed API. Several of those seem very odd as well.

Methodology

Note: Before you start complaining, I realise this is probably a very sub-optimal solution code-wise, but it worked for me. In my defence, I did open up my copy of the Sed & Awk Pocket Reference before my eyes went all glassy and I hacked up the following ugly method. Also note that the shell scripts are in Fish shell and may not work directly in a 100% POSIX shell.

First, I needed a data set to work on. Hat-tip to Mike Ralphson for pointing me to APIs Guru as a good resource. I analysed their APIs-guru/openapi-directory repository [2], where in the APIs folder they keep a big collection of public APIs, most of them following the OpenAPI (previously Swagger) specification.

git clone https://github.com/APIs-guru/openapi-directory.git
cd openapi-directory/APIs

Next I needed to list all the licenses found there. For this I assumed the name: tag in YAML [4] (the one including the name of the license) to be in the very next line after the license: tag [3] – i.e. I relied on people writing OpenAPI files in the same order as laid out in the OpenAPI Specification. I stored the list of all licenses, sorted alphabetically, in a separate api_licenses file:

grep 'license:' **/openapi.yaml **/swagger.yaml -A 1 --no-filename | \
    grep 'name:' | sort > api_licenses

Then I generated another file called api_licenses_unique that includes only the unique names of these licenses.

grep 'license:' **/openapi.yaml **/swagger.yaml -A 1 --no-filename | \
    grep 'name:' | sort | uniq > api_licenses_unique

Because I was too lazy to figure out how to do this properly [5], I simply wrapped the same one-liner into a loop that goes through all the unique license names and counts how many times each shows up in the non-deduplicated list of all licenses found.

for license in (grep 'license:' **/openapi.yaml **/swagger.yaml -A 1 \
    --no-filename | grep 'name' | sort | uniq)
    grep "$license" api_licenses --count
end

In the end I copied the console output of this last command, opened api_licenses_unique, and pasted said output in the first column (by going into Block Selection Mode in Kate).
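
As a cross-check, the same count can be done with an actual YAML parser instead of relying on line order. Here is a minimal Python sketch, assuming PyYAML is installed and the script is run from inside the APIs folder:

# Count license names across all OpenAPI/Swagger files with a real YAML
# parser, so the result does not depend on key order inside the files.
from collections import Counter
from pathlib import Path

import yaml  # PyYAML: pip install pyyaml

counts = Counter()
for spec in Path(".").rglob("*.yaml"):
    if spec.name not in ("openapi.yaml", "swagger.yaml"):
        continue
    try:
        doc = yaml.safe_load(spec.read_text(encoding="utf-8"))
    except yaml.YAMLError:
        continue  # skip files that fail to parse
    if not isinstance(doc, dict):
        continue
    license_info = (doc.get("info") or {}).get("license") or {}
    if license_info:
        counts[license_info.get("name", "<unnamed>")] += 1

for name, count in counts.most_common():
    print(f"{count:5d}  {name}")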

Clarification on what I consider “proper license” and re-count of Creative Commons licenses (12 July 2019 update)

I was asked what I considered as a “proper license” above, and specifically why I did not consider “Creative Commons” as such.

First, if the string did not even remotely look like a name of a license, I did not consider that as a proper license. This is the case e.g. with “This page was built with the Swagger API.”.

As for the string “Creative Commons”, it – at best – indicates a family of licenses, which span a vast spectrum from CC0-1.0 (basically public domain) on one end to CC-BY-NC-ND-4.0 (basically, you may copy this, but not change anything, nor make money out of it) on the other. For reference, on the SPDX license list you will find 32 Creative Commons licenses. And SPDX lists only the International and Universal versions of them [7].

Admittedly – and this is a caveat in my initial method above – it may be that there is an actual license in the lines following the “Creative Commons” string … or, as it turned out to be true, that the initial 255 count of name: Creative Commons licenses also included valid CC license names such as name: Creative Commons Attribution 3.0.

So, obviously I made a boo-boo, and therefore went and dug deeper ;)

To do so, and after looking at the results a bit more, I noticed that the url: entries of the name: Creative Commons licenses seem to point to actual CC licenses, so I decided to rely on that. Luckily, this turned out to be true.

I broadened the initial search by one extra line to include the url: line, narrowed the next search down to name: Creative Commons, and in the end kept only url:

grep 'license:' **/openapi.yaml **/swagger.yaml -A 2 --no-filename | \
    grep 'name: Creative Commons' -A 1 | grep 'url' | sort > api_licenses_cc

Next, I searched for the most common license – CC-BY-3.0:

grep --count 'creativecommons.org/licenses/by/3.0' api_licenses_cc

The result was 250, so for the remaining 5 [6] I just opened the api_licenses_cc file and counted them manually.

Using this method, the list of all “Creative Commons” licenses turned out to be as follows:

  1. CC-BY-3.0 (250, of which one was specific to the Australian jurisdiction)
  2. CC-BY-4.0 (3)
  3. CC-BY-NC-4.0 (1)
  4. CC-BY-NC-ND-2.0 (1)

In this light, I am amending the results above and removing the bogus “Creative Commons” entry. Apart from removing the bogus entry, this does not change the ranking, nor the counts, of the top 5 licenses.

Further clean-up of Apache (17 July 2019 update)

Upon further inspection it looked odd that I was getting so many Apache-2.0 matches: if you added all the Apache-2.0 hits (initially 421) to all the CC-BY-3.0 hits (250), you already reached a number higher than all the occurrences of the license: field in all the files (552). Clearly something was off.

So I re-counted the Apache hits by limiting myself to the url: field of license:, instead of name:, and came to half of the original number, which brought it from first down to second place. Basically, I applied the same method as above for counting the Creative Commons licenses.

Better method (25 July 2019 update)

I just learnt from Jaka “Lynx” Kranjc of a better solution. Basically, I could have cut down quite a bit by simply using uniq --count, which produces a unique list and prepends a column with how many times it found each occurrence – super useful!

I will not edit my findings above again, but am mentioning the better method below, together with the attached results, so others can simply check.

grep 'license:' **/openapi.yaml **/swagger.yaml -A 1 --no-filename | \
    grep 'name:' | uniq -c | sort > OpenAPI_grouped_by_license_name.txt

… produces OpenAPI_grouped_by_license_name.txt

grep 'license:' **/openapi.yaml **/swagger.yaml -A 2 --no-filename | \
    grep 'url:' | uniq -c | sort > OpenAPI_grouped_by_license_url.txt

… produces OpenAPI_grouped_by_license_url.txt

hook out → not proud of the method, but happy with having results

  1. This should come as no surprise, as Apache-2.0 is used as the official specification’s example

  2. At the time of this writing, that was commit 506133b

  3. I tried it also with 3 lines, and the few extra results that came up were mostly useless. 

  4. I did a quick check and the repository seems to include no OpenAPIs in JSON format. 

  5. I expected for license in api_licenses_unique to work, but it did not. 

  6. The result of wc -l api_licenses_cc was 255. 

  7. Prior to version 4.0 of the Creative Commons licenses, each CC license had several versions localised for specific jurisdictions. 

Kubuntu 18.10 reaches end of life

Friday 19th of July 2019 08:35:21 AM

Kubuntu 18.10 Cosmic Cuttlefish was released on October 18th 2018 with 9 months of support. As of 18th July 2019, 18.10 has reached ‘end of life’. No more package updates will be accepted for 18.10, and it will be archived to old-releases.ubuntu.com in the coming weeks.

The official end of life announcement for Ubuntu as a whole can be found here [1].

Kubuntu 19.04 Disco Dingo continues to be supported, receiving security and high-impact bugfix updates until January 2020.

Users of 18.10 can follow the Kubuntu 18.10 to 19.04 Upgrade [2] instructions.

Should your upgrade be delayed for some reason, and you find that the 18.10 repositories have been archived to old-releases.ubuntu.com, instructions to perform an EOL upgrade can be found on the Ubuntu wiki [3].

Thank you for using Kubuntu 18.10 Cosmic Cuttlefish.

The Kubuntu team.

[1] – https://lists.ubuntu.com/archives/ubuntu-announce/2019-July/000247.html
[2] – https://help.ubuntu.com/community/DiscoUpgrades/Kubuntu
[3] – https://help.ubuntu.com/community/EOLUpgrades

Overriding a Wordlist component function to use multiple datasets

Friday 19th of July 2019 06:34:00 AM
Overview

One of the tasks for GSoC 2019 included adding multiple datasets to the smallnumbers activity. The aim of the activity is to teach students to count the number of objects on a falling item. But the activity is not independent: it is a sub-activity of the parent activity gletters. In gletters the overall working is the same, apart from the fact that there are no falling objects but letters, and the student has to identify the falling letter and press the corresponding key on the keyboard. To manage the different types of falling items, the activity uses a separate core component, Wordlist. Since the task was to implement multiple datasets for the smallnumbers activity without affecting the functionality of its parent activity gletters, I had to make some changes in the Wordlist component.

How Wordlist initially reads data

The Wordlist component uses a loadFromFile function, which is given the path to the file containing the dataset. The files are JSON files, and the function uses a parser to parse their contents.

function loadFromFile(fname) {
    filename = fname;
    var from;
    maxLevel = 0;
    wordList = parser.parseFromUrl(filename, validateWordlist);
    if (wordList == null) {
        error("Wordlist: Invalid wordlist file " + fname);
        if (useDefault) {
            // fallback to default file:
            wordList = parser.parseFromUrl(defaultFilename, validateWordlist);
            if (wordList == null) {
                error("Wordlist: Invalid wordlist file " + defaultFilename);
                return;
            }
            from = "default-file " + defaultFilename;
        } else {
            error("Wordlist: do not use default list, no list loaded");
            return;
        }
    } else {
        from = "file " + fname;
    }
    // at this point we have valid levels
    maxLevel = wordList.levels.length;
    console.log("Wordlist: loaded " + maxLevel + " levels from " + from);
    return wordList;
}

But in the case of multiple datasets we use QML files to store the data, and hence the same function cannot be used.

Creating a new function to initialise wordlist

In the multiple-dataset architecture, the data part of the Data.qml file is provided to the main QML file of the activity like this:

property var levels: activity.datasetLoader.item.data

This data is already in JSON format, which is exactly what Wordlist needs. So I created a loadFromJSON function in the Wordlist component to take this data as input and initialise the Wordlist accordingly.

function loadFromJSON(levels) {
    wordList = {levels: levels};
    maxLevel = wordList.levels.length;
    return wordList;
}

Lastly, I added a condition in the gletters.js file to use the respective function when there is a value in the items.levels variable (i.e. when the activity uses multiple datasets).

if (!items.levels)
    items.wordlist.loadFromFile(GCompris.ApplicationInfo.getLocaleFilePath(
        items.ourActivity.dataSetUrl + "default-" + locale + ".json"));
else
    items.wordlist.loadFromJSON(items.levels);

This way I was able to use multiple datasets in the smallnumbers activity without affecting the functionality of the parent activity gletters.

Enable notification plugin in KDE Connect on macOS

Thursday 18th of July 2019 01:34:22 PM

You may have tried KDE Connect for macOS.

If you’ve not yet tried KDE Connect, you can read my post: Connect your Android phone with your Mac via KDE Connect

As I mentioned, this post will help you build your own KDE Connect with native notification support for macOS.

Build

This post will not give you instructions for building KDE Connect on macOS, because there is already a page on the KDE Connect Wiki.

If you meet any problems, you can report them on our KDE bug tracker.

Add notification support

The notification plugin depends on KNotifications. There is no native support for macOS in this library.

I’ve made a native implementation, and it has been submitted as a patch. But it takes time to get reviewed and optimized.

I keep the patch available in a repo on my GitHub:
https://github.com/Inokinoki/knotifications. So Craft can access it and compile it to provide support for macOS notifications.

But we’re looking forward to its delivery in KNotifications.

What you need to do is very simple:

  1. Find the KNotifications blueprint file:
  • Enter your CraftRoot folder. For me, it’s /Users/inoki/CraftRoot.
  • Enter the etc -> blueprints -> locations -> craft-blueprints-kde folder.
  • Open kde/frameworks/tier3/knotifications/knotifications.py.
  2. Remove self.versionInfo.setDefaultValues() in setTargets of the subinfo class. If you’re not familiar with Python, just find this line and delete it:

    self.versionInfo.setDefaultValues()

  3. Add these 2 lines:

    self.svnTargets['master'] = 'https://github.com/Inokinoki/knotifications.git'
    self.defaultTarget = 'master'

The file should look like this:

After that, rebuild KDE Connect with Craft.

If everything is ok, launch your KDE Connect.

You can receive notifications from your phone or your other computers (if well configured), just like this:

You can also change the notification settings of KDE Connect in your macOS Notification Center. By default, the notification style is Banner; set it to Alert to get quick actions on your notifications.

Notice: currently there is a bug where you may receive duplicated notifications. We’re figuring out the reason, and it will be fixed as soon as possible.

Thanks for your reading and your support to KDE Connect :)

If you’d like to, you can also follow me on GitHub :)

For pros

For developers: if you’re familiar with diff, just apply this patch:
diff --git a/kde/frameworks/tier3/knotifications/knotifications.py b/kde/frameworks/tier3/knotifications/knotifications.py
index 9b46044..f5c82a4 100644
--- a/kde/frameworks/tier3/knotifications/knotifications.py
+++ b/kde/frameworks/tier3/knotifications/knotifications.py
@@ -3,7 +3,8 @@ import info
 
 class subinfo(info.infoclass):
     def setTargets(self):
-        self.versionInfo.setDefaultValues()
+        self.svnTargets['master'] = 'https://github.com/Inokinoki/knotifications.git'
+        self.defaultTarget = 'master'
         self.patchToApply['5.57.0'] = [("disabled-deprecated-before.patch", 1)]
 
         self.description = "TODO"

Connect your Android phone with your Mac via KDE Connect

Thursday 18th of July 2019 01:34:22 PM

Have you ever heard of Continuity, Apple’s solution that provides one seamless experience between your iPhone and your Mac?

You may be surprised: “Woohoo, it’s amazing, but I use my OnePlus along with my Mac.” With my GSoC 2019 project, you can connect your Mac and your Android phone with KDE Connect!

And you can even connect your Mac with your Linux PC or Windows PC (thanks to Piyush, who is working on optimizing the experience of KDE Connect on Windows).

Installation instruction
  1. You can download the KDE Connect Nightly Build for macOS from the KDE Binary Factory: https://binary-factory.kde.org/view/MacOS/job/kdeconnect-kde_Nightly_macos/. But notice that it’s not yet a stable version, and it requires permission to run applications from non-certificated developers. We’ll release a stable version in August.

  2. Otherwise, you can build your own version. Please follow the instructions on the KDE Connect Wiki. If you’re using macOS 10.13, OS X 10.12 or below, we recommend that you build your own KDE Connect, because our Binary Factory builds applications only for macOS 10.14 or above.

Either way, you’ll end up with a DMG image file.

Just click on it to mount it, and drag kdeconnect-indicator into the Applications folder.

Open kdeconnect-indicator and your magic journey with KDE Connect for macOS begins!

Use

After installation, you will see the kdeconnect-indicator icon in Launchpad.

Click it to open. If everything is ok, you will see a KDE Connect icon in your system tray.

Click the icon -> Configure to open the configuration window. Here you can see discovered devices and paired devices.

You can enable or disable functions in this window.

Currently, you can do these from your Android phone:

  • Run predefined commands on your Mac from connected devices.
  • Check your phone’s battery level from the desktop
  • Ring your phone to help find it
  • Share files and links between devices
  • Control the volume of your Mac from the phone
  • Keep your Mac awake when your phone is connected
  • Receive your phone’s notifications on your desktop computer (this function is implemented but not yet delivered; you can follow this post to enable it manually)

I’m trying to make more plugins work on macOS. Good luck to my GSoC project :)

Acknowledgement

Thanks to KDE Community and Google, I could start this Google Summer of Code project this summer.

Thanks to the members of the KDE Connect development team. Without them, I couldn’t have understood the mechanism and gotten it working on macOS so quickly :)

Conclusion

If you have any questions, the KDE Connect Wiki may be helpful, and you can find the bug tracker there.

Don’t hesitate to join our Telegram group or IRC channel if you’d like to bring more exciting functions into KDE Connect:

  • Telegram
  • IRC (#kdeconnect)
  • matrix.org (#freenode_#kdeconnect:matrix.org)

I hope you enjoy the seamless experience that KDE Connect provides between your Mac and your Android phone!

Latte Dock, first beta for v0.9 (v0.8.97)

Thursday 18th of July 2019 07:53:53 AM


Welcome Latte Dock v0.8.97, the FIRST BETA release for the v0.9 branch!


Go get v0.8.97 from download.kde.org*
* The archive has been signed with GPG key: 325E 97C3 2E60 1F5D 4EAD CF3A 5599 9050 A2D9 110E
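To check the signature, you can import the key and verify the tarball. A minimal sketch, assuming the usual download.kde.org layout (the exact archive file names below are placeholders):

# import the release-signing key by its full fingerprint
gpg --recv-keys 325E97C32E601F5D4EADCF3A55999050A2D9110E
# verify the downloaded archive against its detached signature (file names may differ)
gpg --verify latte-dock-0.8.97.tar.xz.sig latte-dock-0.8.97.tar.xz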

I know you have waited a long time for this, but believe me, there were good reasons. Check out the past articles about the Latte git version to get a picture of the major new features introduced in v0.9. Of course, this is an article for a beta release, so I will not provide any fancy videos or screenshots; those are reserved for the official stable release article.


New Features


Wayland

I know the community is eager to move on, but from my personal experience I still consider the Wayland support a technology preview. The Latte experience in v0.9 is of course much better than in v0.8, but do not expect miracles. The graphics drivers, Qt, and the Plasma libraries are still adjusting and maturing in Wayland environments.
Latte cannot work around such issues and does not have that focus. Trying to solve things in Wayland is a real pain for me, and I am avoiding it until it is truly necessary. I am using a Dell Optimus laptop for development, so my hardware might be the worst for the job; with crashes, freezes, etc., I do not get the stability I would expect.
So patience, everyone, patience...


Requirements

Even though at the start of the v0.9 development cycle I was considering raising the requirements compared to v0.8, in the end a way was found to keep the same minimum requirements and to define proposed ones for the best experience. I am confident that any system/distro supporting v0.8 can update to v0.9 easily.

Minimum requirements:
  • Qt >= 5.9
  • Plasma >= 5.12
Proposed requirements:
  • Qt >= 5.12
  • Plasma >= 5.15



Latte Development team

In case the community has not been kept properly up to date: the Latte development team is currently just me, and has been for the past year and a half. At this point I would like to thank all the developers sending patches, and especially /u/trmdi for his involvement. Latte is of course an official KDE project, which means that if I ever disappear or step down, the KDE community will be able to guide development.

  Latte v0.9 Release Schedule

  • End of July 2019: v0.9 will be released officially as the new Latte stable version; until then, the community has ten days for bug reports, translation and string fixes, and improvements.



How Can I Help?

Bugs, bugs, bugs... Translations, translations, translations...
  1. As you may have noticed, plenty of new settings were added in v0.9, and bugs may appear when combining them.
  2. The KDE localization teams check the translation strings almost daily, and I THANK THEM for this!! But we are human, and some translation strings can surely still be improved.
  3. For complicated settings I use tooltips in order to describe them better. If you find an option that has no tooltip, OR whose tooltip text could be explained better or simplified, feel free to report it (I am not a native English speaker).



Donations

You can find Latte on Liberapay if you want to support it,

or you can split your donation between my active projects on the KDE Store.

A few more steps

Thursday 18th of July 2019 03:31:28 AM

Honestly, working on correcting the checkpoint procedure made the tool explode, so I decided to take a couple of steps back and rethink the strategy. For inspiration, I fired up YouTube and started watching tutorials on Photoshop’s Magnetic Lasso, because that is the tool that made me write this thing in the first place.

More in Tux Machines

LWN: Spectre, Linux and Debian Development

  • Grand Schemozzle: Spectre continues to haunt

    The Spectre v1 hardware vulnerability is often characterized as allowing array bounds checks to be bypassed via speculative execution. While that is true, it is not the full extent of the shenanigans allowed by this particular class of vulnerabilities. For a demonstration of that fact, one need look no further than the "SWAPGS vulnerability" known as CVE-2019-1125 to the wider world or as "Grand Schemozzle" to the select group of developers who addressed it in the Linux kernel. Segments are mostly an architectural relic from the earliest days of x86; to a great extent, they did not survive into the 64-bit era. That said, a few segments still exist for specific tasks; these include FS and GS. The most common use for GS in current Linux systems is for thread-local or CPU-local storage; in the kernel, the GS segment points into the per-CPU data area. User space is allowed to make its own use of GS; the arch_prctl() system call can be used to change its value. As one might expect, the kernel needs to take care to use its own GS pointer rather than something that user space came up with. The x86 architecture obligingly provides an instruction, SWAPGS, to make that relatively easy. On entry into the kernel, a SWAPGS instruction will exchange the current GS segment pointer with a known value (which is kept in a model-specific register); executing SWAPGS again before returning to user space will restore the user-space value. Some carefully placed SWAPGS instructions will thus prevent the kernel from ever running with anything other than its own GS pointer. Or so one would think.

  • Long-term get_user_pages() and truncate(): solved at last?

    Technologies like RDMA benefit from the ability to map file-backed pages into memory. This benefit extends to persistent-memory devices, where the backing store for the file can be mapped directly without the need to go through the kernel's page cache. There is a fundamental conflict, though, between mapping a file's backing store directly and letting the filesystem code modify that file's on-disk layout, especially when the mapping is held in place for a long time (as RDMA is wont to do). The problem seems intractable, but there may yet be a solution in the form of this patch set (marked "V1,000,002") from Ira Weiny. The problems raised by the intersection of mapping a file (via get_user_pages()), persistent memory, and layout changes by the filesystem were the topic of a contentious session at the 2019 Linux Storage, Filesystem, and Memory-Management Summit. The core question can be reduced to this: what should happen if one process calls truncate() while another has an active get_user_pages() mapping that pins some or all of that file's pages? If the filesystem actually truncates the file while leaving the pages mapped, data corruption will certainly ensue. The options discussed in the session were to either fail the truncate() call or to revoke the mapping, causing the process that mapped the pages to receive a SIGBUS signal if it tries to access them afterward. There were passionate proponents for both options, and no conclusion was reached. Weiny's new patch set resolves the question by causing an operation like truncate() to fail if long-term mappings exist on the file in question. But it also requires user space to jump through some hoops before such mappings can be created in the first place. This approach comes from the conclusion that, in the real world, there is no rational use case where somebody might want to truncate a file that has been pinned into place for use with RDMA, so there is no reason to make that operation work. There is ample reason, though, for preventing filesystem corruption and for informing an application that gets into such a situation that it has done something wrong.

  • Hardening the "file" utility for Debian

    In addition, he had already encountered problems with file running in environments with non-standard libraries that were loaded using the LD_PRELOAD environment variable. Those libraries can (and do) make system calls that the regular file binary does not make; the system calls were disallowed by the seccomp() filter. Building a Debian package often uses FakeRoot (or fakeroot) to run commands in a way that appears that they have root privileges for filesystem operations—without actually granting any extra privileges. That is done so that tarballs and the like can be created containing files with owners other than the user ID running the Debian packaging tools, for example. Fakeroot maintains a mapping of the "changes" made to owners, groups, and permissions for files so that it can report those to other tools that access them. It does so by interposing a library ahead of the GNU C library (glibc) to intercept file operations. In order to do its job, fakeroot spawns a daemon (faked) that is used to maintain the state of the changes that programs make inside of the fakeroot. The libfakeroot library that is loaded with LD_PRELOAD will then communicate to the daemon via either System V (sysv) interprocess communication (IPC) calls or by using TCP/IP. Biedl referred to a bug report in his message, where Helmut Grohne had reported a problem with running file inside a fakeroot.

Flameshot is a brilliant screenshot tool for Linux

The default screenshot tool in Ubuntu is alright for basic snips, but if you want a really good one you need to install a third-party screenshot app. Shutter is probably my favorite, but I decided to give Flameshot a try. Packages are available for various distributions, including Ubuntu, Arch, openSUSE, and Debian. You can find installation instructions on the official project website. Read more
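On distributions that already ship it, installation should be a one-liner; for example, on Ubuntu or Debian (assuming Flameshot is in your release’s repositories):

sudo apt install flameshot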

Android Leftovers

IBM/Red Hat and Intel Leftovers

  • Troubleshooting Red Hat OpenShift applications with throwaway containers

    Imagine this scenario: Your cool microservice works fine from your local machine but fails when deployed into your Red Hat OpenShift cluster. You cannot see anything wrong with the code or anything wrong in your services, configuration maps, secrets, and other resources. But, you know something is not right. How do you look at things from the same perspective as your containerized application? How do you compare the runtime environment from your local application with the one from your container? If you performed your due diligence, you wrote unit tests. There are no hard-coded configurations or hidden assumptions about the runtime environment. The cause should be related to the configuration your application receives inside OpenShift. Is it time to run your app under a step-by-step debugger or add tons of logging statements to your code? We’ll show how two features of the OpenShift command-line client can help: the oc run and oc debug commands.
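    As an illustration of the idea (a sketch, not the article’s exact commands; the deployment name and image are placeholders):

    # start a copy of an existing deployment's pod with an interactive shell instead of its entrypoint
    oc debug deployment/my-cool-microservice
    # or spin up a disposable pod from a known-good image, deleted as soon as you exit
    oc run debug-pod -it --rm --restart=Never --image=registry.access.redhat.com/ubi8/ubi -- /bin/bash

    Comparing the environment inside such a throwaway container with your local one often reveals the missing or wrong configuration.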

  • What piece of advice had the greatest impact on your career?

    I love learning the what, why, and how of new open source projects, especially when they gain popularity in the DevOps space. Classification as a "DevOps technology" tends to mean scalable, collaborative systems that go across a broad range of challenges—from message bus to monitoring and back again. There is always something new to explore, install, spin up, and explore.

  • How DevOps is like auto racing

    When I talk about desired outcomes or answer a question about where to get started with any part of a DevOps initiative, I like to mention NASCAR or Formula 1 racing. Crew chiefs for these race teams have a goal: finish in the best place possible with the resources available while overcoming the adversity thrown at you. If the team feels capable, the goal gets moved up a series of levels to holding a trophy at the end of the race. To achieve their goals, race teams don’t think from start to finish; they flip the table to look at the race from the end goal to the beginning. They set a goal, a stretch goal, and then work backward from that goal to determine how to get there. Work is delegated to team members to push toward the objectives that will get the team to the desired outcome. [...] Race teams practice pit stops all week before the race. They do weight training and cardio programs to stay physically ready for the grueling conditions of race day. They are continually collaborating to address any issue that comes up. Software teams should also practice software releases often. If safety systems are in place and practice runs have been going well, they can release to production more frequently. Speed makes things safer in this mindset. It’s not about doing the “right” thing; it’s about addressing as many blockers to the desired outcome (goal) as possible and then collaborating and adjusting based on the real-time feedback that’s observed. Expecting anomalies and working to improve quality and minimize the impact of those anomalies is the expectation of everyone in a DevOps world.

  • Deep Learning Reference Stack v4.0 Now Available

    Artificial Intelligence (AI) continues to represent one of the biggest transformations underway, promising to impact everything from the devices we use to cloud technologies, and reshape infrastructure, even entire industries. Intel is committed to advancing the Deep Learning (DL) workloads that power AI by accelerating enterprise and ecosystem development. From our extensive work developing AI solutions, Intel understands how complex it is to create and deploy applications for deep learning workloads. That's why we developed an integrated Deep Learning Reference Stack, optimized for Intel Xeon Scalable processors, and released the companion Data Analytics Reference Stack. Today, we're proud to announce the next Deep Learning Reference Stack release, incorporating customer feedback and delivering an enhanced user experience with support for expanded use cases.

  • Clear Linux Releases Deep Learning Reference Stack 4.0 For Better AI Performance

    Intel's Clear Linux team on Wednesday announced their Deep Learning Reference Stack 4.0 during the Linux Foundation's Open-Source Summit North America event taking place in San Diego. Clear Linux's Deep Learning Reference Stack continues to be engineered for showing off the most features and maximum performance for those interested in AI / deep learning and running on Intel Xeon Scalable CPUs. This optimized stack allows developers to more easily get going with a tuned deep learning stack that should already be offering near optimal performance.