Planet Debian - https://planet.debian.org/

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, September 2019

Tuesday 15th of October 2019 07:20:54 AM


Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In September, 212.75 work hours have been dispatched among 12 paid contributors. Their reports are available:

  • Adrian Bunk did nothing (and got no hours assigned), but has been carrying 26h from August to October.
  • Ben Hutchings did 20h (out of 20h assigned).
  • Brian May did 10h (out of 10h assigned).
  • Chris Lamb did 18h (out of 18h assigned).
  • Emilio Pozuelo Monfort did 30h (out of 23.75h assigned and 5.25h from August), thus anticipating 1h from October.
  • Hugo Lefeuvre did nothing (out of 23.75h assigned), thus is carrying over 23.75h for October.
  • Jonas Meurer did 5h (out of 10h assigned and 9.5h from August), thus carrying over 14.5h to October.
  • Markus Koschany did 23.75h (out of 23.75h assigned).
  • Mike Gabriel did 11h (out of 12h assigned + 0.75h remaining), thus carrying over 1.75h to October.
  • Ola Lundqvist did 2h (out of 8h assigned and 8h from August), thus carrying over 14h to October.
  • Roberto C. Sánchez did 16h (out of 16h assigned).
  • Sylvain Beucler did 23.75h (out of 23.75h assigned).
  • Thorsten Alteholz did 23.75h (out of 23.75h assigned).
Evolution of the situation

September was more like a regular month again, though two contributors were not able to dedicate any time to LTS work.

For October we are welcoming Utkarsh Gupta as a new paid contributor. Welcome to the team, Utkarsh!

This month, we’re glad to announce that Cloudways is joining us as a new silver-level sponsor! With the reduced involvement of another long-term sponsor, we are still at the same funding level (roughly 216 hours sponsored per month).

The security tracker currently lists 32 packages with a known CVE and the dla-needed.txt file has 37 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


Norbert Preining: State of Calibre in Debian

Tuesday 15th of October 2019 05:25:31 AM

To counter some recent FUD spread about Calibre in general and Calibre in Debian in particular, here is a concise explanation of the current state.

Many might have read my previous post on Calibre as a moratorium, but that was not my intention. Development of Calibre in Debian is continuing, despite the current stall.

Since it seems to be unclear what the current blockers are: there are two orthogonal problems regarding recent Calibre in Debian. One is the update to version 4 and the switch to qtwebengine; the other is the purge of Python 2 from Debian.

Current state

Debian sid and testing currently hold Calibre 3.48, based on Python 2. Due to the ongoing purge, necessary modules (in particular python-cherrypy3) have been removed from Debian/sid, making the current Calibre package RC-buggy (see this bug report). That means that, within a reasonable time frame, Calibre will be removed from testing.

Now for the two orthogonal problems we are facing:

Calibre 4 packaging

Calibre 4 is already packaged for Debian (see the master-4.0 branch in the git repository). Uploading was first blocked by a disappearing python-pyqt5.qwebengine, which was extracted from the PyQt5 package into its own. Thanks to the maintainers, we now have a Python 2 version built from the qtwebengine-opensource-src package.

But that still doesn’t cut it for Calibre 4, because it requires Qt 5.12, but Debian still carries 5.11 (released 1.5 years ago).

So the above mentioned branch is ready for upload as soon as Qt 5.12 is included in Debian.

Python 3

The other big problem is the purge of Python 2 from Debian. Upstream Calibre has supported building Python 3 versions for some months now, with ongoing bug fixes. But including this in Debian poses some problems: the first stumbling block was a missing Python 3 version of mechanize, which I have adopted after a 7-year hiatus, updated to the newest version, and provided Python 3 modules for.

Packaging for Debian is done in the experimental branch of the git repository, and is again ready to be uploaded to unstable.

But the much bigger concern here is that practically none of Calibre’s external plugins are ready for Python 3. Paired with the fact that probably most users of Calibre are using one external plugin or another (just to mention the Kepub plugin, DeDRM, …), uploading a Python 3 based version of Calibre would break usage for practically all users.

Since I put our (Debian’s) users first, I have thus decided to keep Calibre based on Python 2 as long as Debian allows. Unfortunately the overzealous purge spree has already introduced RC bugs, which means I am now forced to decide whether I upload a version of Calibre that breaks most users, or I don’t upload and see Calibre removed from testing. Not an easy decision.

Thus, my original plan was to keep Calibre based on Python 2 as long as possible, and hope that upstream switches to Python 3 in time before the next Debian release. This would trigger a continuous update of most plugins and would allow users in Debian a seamless transition without complete breakage. Unfortunately, this plan no longer seems workable.

Now let us return to the FUD spread:

  • Calibre is actively developed upstream
  • Calibre in Debian is actively maintained
  • Calibre is Python 3 ready, but the plugins are not
  • Calibre 4 is ready for Debian as soon as the dependencies are updated
  • Calibre/Python3 is ready for upload to Debian, but breaks practically all users

I hope that helps everyone gain some understanding of the current state of Calibre in Debian.

Sergio Durigan Junior: Installing Gerrit and Keycloak for GDB

Tuesday 15th of October 2019 04:00:00 AM

Back in September, we had the GNU Tools Cauldron in the gorgeous city of Montréal (perhaps I should write a post specifically about it...). One of the sessions we had was the GDB BoF, where we discussed, among other things, how to improve our patch review system.

I have my own personal opinions about the current review system we use (mailing list-based, in a nutshell), and I didn’t feel confident enough to express them during the discussion. Anyway, the outcome was that at least 3 global maintainers have used or are currently using the Gerrit Code Review system for other projects, are happy with it, and that we should give it a try. Then, when it was time to decide who wanted to configure and set things up for the community, I volunteered. Hey, I’m already running the Buildbot master for GDB, what’s the problem with managing yet another service? Oh, well.

Before we dive into the details involved in configuring and running gerrit on a machine, let me first say that I don’t totally support the idea of migrating from the mailing list to gerrit. I volunteered to set things up because I felt the community (or at least its most active members) wanted to try it out. I don’t necessarily agree with the choice.

Ah, and I'm writing this post mostly because I want to be able to close the 300+ tabs I had to open on my Firefox during these last weeks, when I was searching how to solve the myriad of problems I faced during the set up!

The initial plan

My very initial plan after I left the session room was to talk to the sourceware.org folks and ask them if it would be possible to host our gerrit there. Surprisingly, they already have a gerrit instance up and running. It was set up back in 2016, runs an old version of gerrit, and is pretty much abandoned. Actually, saying that it has been configured is an overstatement: it doesn’t support authentication or user registration, barely supports projects, etc. It’s basically what you get from a pristine installation of the gerrit RPM package on RHEL 6.

I won't go into details here, but after some discussion it was clear to me that the instance on sourceware would not be able to meet our needs (or at least what I had in mind for us), and that it would be really hard to bring it to the quality level I wanted. I decided to go look for other options.

The OSCI folks

Have I mentioned the OSCI project before? They are absolutely awesome. I really love working with them, because so far they've been able to meet every request I made! So, kudos to them! They're the folks that host our GDB Buildbot master. Their infrastructure is quite reliable (I never had a single problem), and Marc Dequénes (Duck) is very helpful, friendly and quick when replying to my questions :-).

So, it shouldn't come as a surprise the fact that when I decided to look for other another place to host gerrit, they were my first choice. And again, they delivered :-).

Now, it was time to start thinking about the gerrit set up.

User registration?

Over the course of these past 4 weeks, I had the opportunity to learn a bit more about how gerrit does things. One of the first things that negatively impressed me was the fact that gerrit doesn't handle user registration by itself. It is possible to have a very rudimentary user registration "system", but it relies on the site administration manually registering the users (via htpasswd) and managing everything by him/herself.

It was quite obvious to me that we would need some kind of access control (we're talking about a GNU project, with a copyright assignment requirement in place, after all), and the best way to implement it is by having registered users. And so my quest for the best user registration system began...

Gerrit supports some user authentication schemes, such as OpenID (not OpenID Connect!), OAuth2 (via plugin) and LDAP. I remembered hearing about FreeIPA a long time ago, and thought it made sense using it. Unfortunately, the project's community told me that installing FreeIPA on a Debian system is really hard, and since our VM is running Debian, it quickly became obvious that I should look somewhere else. I felt a bit sad at the beginning, because I thought FreeIPA would really be our silver bullet here, but then I noticed that it doesn't really offer a self-service user registration.

After exchanging a few emails with Marc, he told me about Keycloak. It's a full-fledged Identity Management and Access Management software, supports OAuth2, LDAP, and provides a self-service user registration system, which is exactly what we needed! However, upon reading the description of the project, I noticed that it is written in Java (JBOSS, to be more specific), and I was afraid that it was going to be very demanding on our system (after all, gerrit is also a Java program). So I decided to put it on hold and take a look at using LDAP...

Oh, man. Where do I start? Actually, I think it's enough to say that I just tried installing OpenLDAP, but gave up because it was too cumbersome to configure. Have you ever heard that LDAP is really complicated? I'm afraid this is true. I just didn't feel like wasting a lot of time trying to understand how it works, only to have to solve the "user registration" problem later (because of course, OpenLDAP is just an LDAP server).

OK, so what now? Back to Keycloak it is. I decided that instead of thinking that it was too big, I should actually install it and check it for real. Best decision, by the way!

Setting up Keycloak

It's pretty easy to set Keycloak up. The official website provides a .tar.gz file which contains the whole directory tree for the project, along with helper scripts, .jar files, configuration, etc. From there, you just need to follow the documentation, edit the configuration, and voilà.

For our specific setup I chose to use PostgreSQL instead of the built-in database. This is a bit more complicated to configure, because you need to download the JDBC driver, and install it in a strange way (at least for me, who is used to just editing a configuration file). I won't go into details on how to do this here, because it's easy to find on the internet. Bear in mind, though, that the official documentation is really incomplete when covering this topic! This is one of the guides I used, along with this other one (which covers MariaDB, but can be adapted to PostgreSQL as well).
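For concreteness, here is a rough sketch of what that "strange" installation looks like on the WildFly-based Keycloak of that era. The module layout is the documented WildFly mechanism, but the driver jar version, database name, and credentials below are illustrative assumptions, not details from this post:

```xml
<!-- modules/system/layers/keycloak/org/postgresql/main/module.xml -->
<!-- (the jar version here is a hypothetical example) -->
<module xmlns="urn:jboss:module:1.3" name="org.postgresql">
  <resources>
    <resource-root path="postgresql-42.2.8.jar"/>
  </resources>
  <dependencies>
    <module name="javax.api"/>
    <module name="javax.transaction.api"/>
  </dependencies>
</module>

<!-- excerpt from standalone.xml: point the Keycloak datasource at it -->
<!-- (URL and credentials are placeholders) -->
<datasource jndi-name="java:jboss/datasources/KeycloakDS"
            pool-name="KeycloakDS" enabled="true">
  <connection-url>jdbc:postgresql://localhost:5432/keycloak</connection-url>
  <driver>postgresql</driver>
  <security>
    <user-name>keycloak</user-name>
    <password>change-me</password>
  </security>
</datasource>
<driver name="postgresql" module="org.postgresql">
  <xa-datasource-class>org.postgresql.xa.PGXADataSource</xa-datasource-class>
</driver>
```

The "install it in a strange way" part is exactly the module.xml dance: unlike most servers, you cannot just drop the jar in a lib/ directory.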

Another interesting thing to notice is that Keycloak expects to be running on its own virtual domain, and not under a subdirectory (e.g., https://example.org instead of https://example.org/keycloak). For that reason, I chose to run our instance on another port. It is supposedly possible to configure Keycloak to run under a subdirectory, but it involves editing a lot of files, and I confess I couldn’t make it fully work.
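Running on another port can be done with WildFly's standard port-offset switch; the offset value below is an illustration, not the one used on the actual server:

```shell
# Shift every socket binding by a fixed offset instead of editing config files.
# With an offset of 100, HTTP moves from 8080 to 8180 and HTTPS from 8443 to 8543.
bin/standalone.sh -Djboss.socket.binding.port-offset=100
```

The reverse proxy in front then forwards the virtual domain to that offset port.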

A last thing worth mentioning: the official documentation says that Keycloak needs Java 8 to run, but I've been using OpenJDK 11 without problems so far.

Setting up Gerrit

The fun begins now!

The gerrit project also offers a .war file ready to be deployed. After you download it, you can execute it and initialize a gerrit project (or application, as it's called). Gerrit will create a directory full of interesting stuff; the most important for us is the etc/ subdirectory, which contains all of the configuration files for the application.

After initializing everything, you can try starting gerrit to see if it works. This is where I had my first trouble. Gerrit also requires Java 8, but unlike Keycloak, it doesn't work out of the box with OpenJDK 11. I had to make a small but important addition in the file etc/gerrit.config:

[container]
  ...
  javaOptions = "--add-opens=jdk.management/com.sun.management.internal=ALL-UNNAMED"
  ...

After that, I was able to start gerrit. And then I started trying to set it up for OAuth2 authentication using Keycloak. This took a very long time, unfortunately. I was having several problems with Gerrit, and I wasn’t sure how to solve them. I tried asking for help on the official mailing list, and was able to make some progress, but in the end I figured out what was missing: I had forgotten to add AllowEncodedSlashes On in the Apache configuration file! This was causing a very strange error in Gerrit (a java.lang.StringIndexOutOfBoundsException!), which didn’t make sense. In the end, my Apache config file looks like this:

<VirtualHost *:80>
    ServerName gnutoolchain-gerrit.osci.io
    RedirectPermanent / https://gnutoolchain-gerrit.osci.io/r/
</VirtualHost>

<VirtualHost *:443>
    ServerName gnutoolchain-gerrit.osci.io
    RedirectPermanent / /r/

    SSLEngine On
    SSLCertificateFile /path/to/cert.pem
    SSLCertificateKeyFile /path/to/privkey.pem
    SSLCertificateChainFile /path/to/chain.pem

    # Good practices for SSL
    # taken from: <https://mozilla.github.io/server-side-tls/ssl-config-generator/>
    # intermediate configuration, tweak to your needs
    SSLProtocol all -SSLv3
    SSLCipherSuite ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS
    SSLHonorCipherOrder on
    SSLCompression off
    SSLSessionTickets off

    # OCSP Stapling, only in httpd 2.3.3 and later
    #SSLUseStapling on
    #SSLStaplingResponderTimeout 5
    #SSLStaplingReturnResponderErrors off
    #SSLStaplingCache shmcb:/var/run/ocsp(128000)

    # HSTS (mod_headers is required) (15768000 seconds = 6 months)
    Header always set Strict-Transport-Security "max-age=15768000"

    ProxyRequests Off
    ProxyVia Off
    ProxyPreserveHost On

    <Proxy *>
        Require all granted
    </Proxy>

    AllowEncodedSlashes On
    ProxyPass /r/ http://127.0.0.1:8081/ nocanon
    #ProxyPassReverse /r/ http://127.0.0.1:8081/r/
</VirtualHost>

I confess I was almost giving up on Keycloak when I finally found the problem...

Anyway, after that things went more smoothly. I was finally able to make the user authentication work, then I made sure Keycloak's user registration feature also worked OK...

Ah, one interesting thing: the user logout wasn't really working as expected. The user was able to logout from gerrit, but not from Keycloak, so when the user clicked on "Sign in", Keycloak would tell gerrit that the user was already logged in, and gerrit would automatically log the user in again! I was able to solve this by redirecting the user to Keycloak's logout page, like this:

[auth]
  ...
  logoutUrl = https://keycloak-url:port/auth/realms/REALM/protocol/openid-connect/logout?redirect_uri=https://gerrit-url/
  ...

After that, it was already possible to start worrying about configuring gerrit itself. I don't know if I'll write a post about that, but let me know if you want me to.

Conclusion

If you ask me if I'm totally comfortable with the way things are set up now, I can't say that I am 100%. I mean, the set up seems robust enough that it won't cause problems in the long run, but what bothers me is the fact that I'm using technologies that are alien to me. I'm used to setting up things written in Python, C, C++, with very simple yet powerful configuration mechanisms, and an easy way to discover what's wrong when something bad happens.

I am reasonably satisfied with the way Keycloak logs things, but Gerrit leaves a lot to be desired in that area. And both projects are written in languages/frameworks that I am absolutely not comfortable with. Like, it's really tough to debug something when you don't even know where the code is or how to modify it!

All in all, I'm happy that this whole adventure has come to an end, and now all that's left is to maintain it. I hope that the GDB community can make good use of this new service, and I hope that we can see a positive impact in the quality of the whole patch review process.

My final take is that this is all worth it as long as Free Software and user freedom are the ones that benefit.

P.S.: Before I forget, our gerrit instance is running at https://gnutoolchain-gerrit.osci.io.

Chris Lamb: Tour d'Orwell: The River Orwell

Tuesday 15th of October 2019 12:19:57 AM

Continuing my Orwell-themed peregrination: a certain Eric Blair took his pen name "George Orwell" because of his love for a certain river just south of Ipswich, Suffolk. With sheepdog trials being undertaken in the field underneath, even the concrete Orwell Bridge looked pretty majestic from the garden centre-cum-food hall.

Martin Pitt: Hardening Cockpit with systemd (socket activation)³

Tuesday 15th of October 2019 12:00:00 AM
Background

A major future goal for Cockpit is support for client-side TLS authentication, primarily with smart cards. I created a Proof of Concept and a demo long ago, but before this can be called production-ready, we first need to harden Cockpit’s web server cockpit-ws to be much more tamper-proof than it is today. This heavily uses systemd’s socket activation. I believe we are now using this in quite a unique and interesting way that helped us to achieve our goal rather elegantly and robustly.

Arturo Borrero González: What to expect in Debian 11 Bullseye for nftables/iptables

Monday 14th of October 2019 05:00:00 PM

Debian 11, codename Bullseye, is already in the works. It is worthwhile to make decisions early in the development cycle to give people time to accommodate and integrate accordingly, and this post brings you the latest update on the plans for Netfilter software in Debian 11 Bullseye. Mind that Bullseye is expected to be released sometime in 2021, so there is still plenty of time ahead.

The situation at the release of Debian 10 Buster is that iptables uses the -nft backend by default, and one must explicitly select -legacy in the alternatives system in case of any problem. That was intended to help people migrate from iptables to nftables. Now the question is what to do next.
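That alternatives switch looks like this on a Buster system (standard update-alternatives usage; the paths are the ones shipped by the Debian iptables package):

```shell
# Show which backend the iptables command currently points to
update-alternatives --display iptables

# Explicitly fall back to the legacy x_tables backend ...
update-alternatives --set iptables /usr/sbin/iptables-legacy

# ... or return to the default nftables-based backend
update-alternatives --set iptables /usr/sbin/iptables-nft
```

The same pair of alternatives exists for ip6tables, arptables and ebtables.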

Back in July 2019, I started an email thread on the debian-devel@lists.debian.org mailing list looking for consensus on lowering the archive priority of the iptables package in Debian 11 Bullseye. My proposal is to drop iptables from Priority: important and promote nftables instead.

In general, having such a priority level means the package is installed by default in every single Debian installation. Given that we aim to deprecate iptables and that starting with Debian 10 Buster iptables is not even using the x_tables kernel subsystem but nf_tables, having such priority level seems pointless and inconsistent. There was agreement, and I already made the changes to both packages.

This is another step in deprecating iptables and welcoming nftables. But it does not mean that iptables won’t be available in Debian 11 Bullseye. If you need it, you will need to use aptitude install iptables to download and install it from the package repository.

The second part of my proposal was to promote firewalld as the default ‘wrapper’ for firewalling in Debian. I think this is in line with the direction other distros are moving. It turns out firewalld integrates pretty well with the system, includes a DBus interface, and many system daemons (like libvirt) already have native integration with firewalld. Also, I believe the days of creating custom-made scripts and hacks to handle the local firewall may be long gone, and firewalld should be very helpful here too.
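For readers used to hand-written scripts, a minimal firewalld session looks like this (standard firewall-cmd usage; the service chosen is just an example):

```shell
# Inspect the zone new interfaces land in (usually "public")
firewall-cmd --get-default-zone

# Persistently allow SSH in the default zone, then apply the change
firewall-cmd --permanent --add-service=ssh
firewall-cmd --reload

# Verify the resulting runtime configuration
firewall-cmd --list-all
```

firewalld translates these high-level rules into nftables (or iptables) rules underneath, which is exactly why it pairs well with the backend switch described above.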

Ritesh Raj Sarraf: Bpfcc New Release

Monday 14th of October 2019 09:24:33 AM
BPF Compiler Collection 0.11.0

bpfcc version 0.11.0 has been uploaded to Debian Unstable and should be accessible in the repositories by now. After the 0.8.0 release, this has been the next one uploaded to Debian.

Multiple source repositories

This release brought in dependencies on another set of sources, from the libbpf project. In the upstream repo, it is still a topic of discussion how to release tools where one depends on another, in unison. Right now, libbpf is configured as a git submodule in the bcc repository, so anyone using the upstream git repository should be able to build it.
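Because libbpf is a submodule, a plain clone of bcc leaves that directory empty; it has to be fetched explicitly (the URL is the iovisor project's published repository):

```shell
# Clone bcc together with its libbpf submodule in one step
git clone --recurse-submodules https://github.com/iovisor/bcc.git

# Or, in an already-existing checkout, populate the submodule afterwards
git submodule update --init --recursive
```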

Multiple source archive for a Debian package

So I had read in the past about multiple source tarballs for a single package in Debian, but never tried it because I wasn’t maintaining anything in Debian which needed it. With bpfcc there was now a good opportunity to try it out. First, I came across this post from Raphaël Hertzog, which gives a good explanation of what all has been done. The article was very clear and concise on the topic.

Git Buildpackage

gbp is my tool of choice for packaging in Debian, so I took a quick look to check how gbp would take care of it. Everything was in place, and it Just Worked.

rrs@priyasi:~/rrs-home/Community/Packaging/bpfcc (master)$ gbp buildpackage --git-component=libbpf
gbp:info: Creating /home/rrs/NoBackup/Community/Packaging/bpfcc_0.11.0.orig.tar.gz
gbp:info: Creating /home/rrs/NoBackup/Community/Packaging/bpfcc_0.11.0.orig-libbpf.tar.gz
gbp:info: Performing the build
dpkg-checkbuilddeps: error: Unmet build dependencies: arping clang-format cmake iperf libclang-dev libedit-dev libelf-dev libzip-dev llvm-dev libluajit-5.1-dev luajit python3-pyroute2
W: Unmet build-dependency in source
dpkg-source: info: using patch list from debian/patches/series
dpkg-source: info: applying fix-install-path.patch
 dh clean --buildsystem=cmake --with python3 --no-parallel
    dh_auto_clean -O--buildsystem=cmake -O--no-parallel
    dh_autoreconf_clean -O--buildsystem=cmake -O--no-parallel
    dh_clean -O--buildsystem=cmake -O--no-parallel
dpkg-source: info: using source format '3.0 (quilt)'
dpkg-source: info: building bpfcc using existing ./bpfcc_0.11.0.orig-libbpf.tar.gz
dpkg-source: info: building bpfcc using existing ./bpfcc_0.11.0.orig.tar.gz
dpkg-source: info: using patch list from debian/patches/series
dpkg-source: warning: ignoring deletion of directory src/cc/libbpf
dpkg-source: info: building bpfcc in bpfcc_0.11.0-1.debian.tar.xz
dpkg-source: info: building bpfcc in bpfcc_0.11.0-1.dsc
I: Generating source changes file for original dsc
dpkg-genchanges: info: including full source code in upload
dpkg-source: info: unapplying fix-install-path.patch
ERROR: ld.so: object 'libeatmydata.so' from LD_PRELOAD cannot be preloaded (cannot open shared object file): ignored.
W: cgroups are not available on the host, not using them.
I: pbuilder: network access will be disabled during build
I: Current time: Sun Oct 13 19:53:57 IST 2019
I: pbuilder-time-stamp: 1570976637
I: Building the build Environment
I: extracting base tarball [/var/cache/pbuilder/sid-amd64-base.tgz]
I: copying local configuration
I: mounting /proc filesystem
I: mounting /sys filesystem
I: creating /{dev,run}/shm
I: mounting /dev/pts filesystem
I: redirecting /dev/ptmx to /dev/pts/ptmx
I: Mounting /var/cache/apt/archives/
I: policy-rc.d already exists
W: Could not create compatibility symlink because /tmp/buildd exists and it is not a directory
I: using eatmydata during job
I: Using pkgname logfile
I: Current time: Sun Oct 13 19:54:04 IST 2019
I: pbuilder-time-stamp: 1570976644
I: Setting up ccache
I: Copying source file
I: copying [../bpfcc_0.11.0-1.dsc]
I: copying [../bpfcc_0.11.0.orig-libbpf.tar.gz]
I: copying [../bpfcc_0.11.0.orig.tar.gz]
I: copying [../bpfcc_0.11.0-1.debian.tar.xz]
I: Extracting source
dpkg-source: warning: extracting unsigned source package (bpfcc_0.11.0-1.dsc)
dpkg-source: info: extracting bpfcc in bpfcc-0.11.0
dpkg-source: info: unpacking bpfcc_0.11.0.orig.tar.gz
dpkg-source: info: unpacking bpfcc_0.11.0.orig-libbpf.tar.gz
dpkg-source: info: unpacking bpfcc_0.11.0-1.debian.tar.xz
dpkg-source: info: using patch list from debian/patches/series
dpkg-source: info: applying fix-install-path.patch
I: Not using root during the build.

Utkarsh Gupta: Joining Debian LTS!

Monday 14th of October 2019 12:20:00 AM

Hey,

(DPL Style):
TL;DR: I joined Debian LTS as a trainee in July (during DebConf) and finally as a paid contributor from this month onward! :D

Here’s something interesting that happened last weekend!
Back during the good days of DebConf19, I finally got a chance to meet Holger! As amazing and inspiring a person as he is, it was an absolute pleasure meeting him, and I also got a chance to talk about Debian LTS in more detail.

I was introduced to Debian LTS by Abhijith during his talk in MiniDebConf Delhi. And since then, I’ve been kinda interested in that project!
But finally it was here that things got a little “official” and after a couple of mail exchanges with Holger and Raphael, I joined in as a trainee!

I had almost no idea what to do next, so the next month I stayed silent, observing the workflow as people kept committing and announcing updates.
And finally in September, I started triaging and fixing the CVEs for Jessie and Stretch (mostly the former).

Thanks to Abhijith, who explained the basics of what a DLA is and how we go about fixing bugs and then announcing them.

With that, I could fix a couple of CVEs and thanks to Holger (again) for reviewing and sponsoring the uploads! :D

I mostly worked (as a trainee) on:

  • CVE-2019-10751, affecting httpie, and
  • CVE-2019-16680, affecting file-roller.

And finally this happened:

(Though there’s a little hiccup that happened there, but that’s something we can ignore!)

So finally, I’ll be working with the team from this month on!
As Holger says, very much yay! :D

Until next time.
:wq for today.

Debian XMPP Team: New Dino in Debian

Sunday 13th of October 2019 10:00:00 PM

Dino (dino-im in Debian), the modern and beautiful chat client for the desktop, has some nice, new features. Users of Debian testing (bullseye) might like to try them:

  • XEP-0391: Jingle Encrypted Transports (explained here)
  • XEP-0402: Bookmarks 2 (explained here)

Note that users of Dino on Debian 10 (buster) should upgrade to version 0.0.git20181129-1+deb10u1, because of a number of security issues that have been found (CVE-2019-16235, CVE-2019-16236, CVE-2019-16237).

There have been other XMPP-related updates in Debian since the release of buster, among them:

You might be interested in the October XMPP newsletter, also available in German.

Iustin Pop: Actually fixing a bug

Sunday 13th of October 2019 06:43:00 PM

One of the outcomes of my recent (last few years) sports ramp-up is that my opensource work is almost entirely left aside. Having an office job makes it very hard to spend more time sitting at the computer at home too…

So even my Travis dashboard had been red for a while, but I didn’t look into it until today. Since I hadn’t changed anything recently and the Travis builds had just started to fail, I was sure it was just environment changes that needed to be taken into account.

And indeed it was so, for two out of three projects. The third one… I actually got to fix a bug, introduced at the beginning of the year, which gcc (the same gcc that originally passed it) started to trip over a while back. I even had to read the man page of snprintf! Was fun ☺, too bad I don’t have enough time to do this more often…

My travis dashboard is green again, and “test suite” (if you can call it that) is expanded to explicitly catch this specific problem in the future.

Shirish Agarwal: Social media, knowledge and some history of Banking

Saturday 12th of October 2019 10:58:37 PM

First of all, Happy Dussehra to everybody. While Dussehra in India is a symbol of many things, it is a symbol of forgiveness and new beginnings. While I don’t know about new beginnings, I do feel there is still a lot of baggage which needs to be let go of. I will try to share some insights I uncovered over the last few months and a few realizations I came across.

First of all, thank you to the Debian GNOME team for continuing to work on new versions of packages. While there are still a bunch of bugs which need to be fixed, especially #895990 and #913978 among others, kudos for working on it. Hopefully those bugs and others will be fixed soon, so we can install GNOME without a hiccup. I have not been on IRC because my riot-web has been broken for several days now. Also, most of the IRC and Telegram channels, at least those related to Debian, become echo chambers one way or the other, as you do not get any serious opposition. Twitter, while highly toxic, also gives you the urge to fight the good fight when, either on principle or for some other reason (usually paid trolls), people pick fights. While I follow my own rules on Twitter apart from their TOS, here are a few guidelines that new people going on social media in India, or perhaps elsewhere as well, could use –

  1. It is difficult to remain neutral and stick to the facts. If you just stick to the facts, you will be branded as urban naxal or some such names.
  2. I find that many times, if you stay calm and don’t react, people become curious, and you discover that knowledge you thought everyone had simply isn’t there. Whether that is due to lack of education, lack of knowledge, or pretension; although if it is pretension, they are caught sooner or later.
  3. Be civil at all times. If somebody harasses you or calls you names, report them and block them, although Twitter still needs to fix the reporting process a whole lot more. When even somebody like me (with a bit of understanding of law, technology, language, etc.) had a hard time figuring out Twitter’s reporting ways, I dunno how many people would be able to use it successfully. Maybe they make it so unhelpful so the traffic flows no matter what. I do realize they still haven’t figured out their business model, but that’s a question for another day. In short, they need to make it far simpler than it is today.
  4. You always have an option to block people but it has its own consequences.
  5. Be passive-aggressive if the situation demands it.
  6. Most importantly, if somebody starts making jokes about you or starts abusing you, it is a sure sign that the person on the other side doesn’t have any more arguments and you have won.
Banking

Before I start, let me share why I am writing a blog post on this topic. The reason is pretty simple: it seems a huge number of Indians don’t know the history of how banking started and the various turns it took. In fact, nowadays history is being hotly contested and perhaps even re-written, hence for some things I will be sharing sources, though even within them there is a possibility of contestation. One long-standing dispute is when ancient coinage, and the techniques of smelting and flattening, came to India; depending on whom you ask, you get different answers. A lot of people are waiting for more insight from the Keezhadi excavation, which may shed some light on this topic as well. There are rumors that its funding is being stopped, but I hope that isn’t true and that we gain some more insight into Indian history. In fact, in South India there seems to be a lot of curiosity about and attraction towards the site. The next time I get a chance to see South India, I may try to visit this unique location if a museum gets built somewhere nearby. Sorry for deviating from the topic. It seems that ancient coinage started anywhere between the 1st millennium BCE and the 6th century BCE, so it could be anywhere between 1,500 and 2,000 years old in India. While we can’t say anything for sure, it’s possible that there was barter before that, and there is also some history of sharing tokens in different parts of the world. The various timelines get all jumbled up, hence I would suggest the Wikipedia page on the History of Money as a starting point. While it may not give a complete picture, it should broaden one’s understanding a little. One of the reasons why history is so hotly contested could also lie in the destruction of the Ancient Library of Alexandria. Who knows what more we would have known of our ancients had it not been destroyed.

Dirk Eddelbuettel: GitHub Streak: Round Six

Saturday 12th of October 2019 03:53:00 PM

Five years ago I referenced the Seinfeld Streak used in an earlier post of regular updates to the Rcpp Gallery:

This is sometimes called Jerry Seinfeld’s secret to productivity: Just keep at it. Don’t break the streak.

and then showed the first chart of GitHub streaking

github activity october 2013 to october 2014

And four years ago a first follow-up appeared in this post:

github activity october 2014 to october 2015

And three years ago we had a followup

github activity october 2015 to october 2016

And two years ago we had another one

github activity october 2016 to october 2017

And last year another one

github activity october 2017 to october 2018

As today is October 12, here is the newest one from 2018 to 2019:

github activity october 2018 to october 2019

Again, special thanks go to Alessandro Pezzè for the Chrome add-on GithubOriginalStreak.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Louis-Philippe Véronneau: Alpine MusicSafe Classic Hearing Protection Review

Saturday 12th of October 2019 04:00:00 AM

Yesterday, I went to a punk rock show and had tons of fun. One of the bands playing (Jeunesse Apatride) hadn't played in 5 years and the crowd was wild. The other bands playing were also great. Here's a few links if you enjoy Oi! and Ska:

Sadly, those kinds of concerts are always waaaaayyyyy too loud. I mostly go to small-venue concerts, and for some reason the sound technicians think it's a good idea to make everyone's ears bleed. You really don't need to amplify the drums when the whole concert venue is 50m²...

So I bought hearing protection. It was the first time I wore earplugs at a concert and it was great! I can't really compare the model I got (Alpine MusicSafe Classic earplugs) to other brands since it's the only one I've tried, but:

  • They were very comfortable. I wore them for about 5 hours and didn't feel any discomfort.

  • They came with two sets of plastic filter tips you insert in the silicone earbuds. I tried the -17 dB ones but decided to go with the -18 dB inserts as it was still freaking loud.

  • They fit very well in my ears even though I was in the roughest mosh pit I've ever experienced (and I've seen quite a few). I was sweating profusely from all the heavy moshing and not once did I fear losing them.

  • My ears weren't ringing when I came back home so I guess they work.

  • The earplugs didn't distort the sound, they only reduced the volume.

  • They came with a handy aluminium carrying case that's really durable. You can put it on your keychain and carry them around safely.

  • They only cost me ~25 CAD with taxes.

The only thing I disliked was that I found it pretty much impossible to sing while wearing them, as I couldn't really hear myself. With a bit of practice I was able to sing true, but it wasn't great :(

All in all, I'm really happy with my purchase and I don't think I'll ever go to another concert without earplugs.

Molly de Blanc: Conferences

Friday 11th of October 2019 11:23:44 PM

I think there are too many conferences.

Are there too many FLOSS conferences?

— Molly dBoo (@mmillions) October 7, 2019

I conducted this very scientific Twitter poll and, out of 52 respondents, only 23% agreed with me. Some people who disagreed with me pointed out specifically what they think is lacking: more regional events, more events in specific countries, and more “generic” FLOSS events.

Many projects have a conference, and then there are “generic” conferences, like FOSDEM, LibrePlanet, LinuxConfAU, and FOSSAsia. Some are more corporate (OSCON), while others more community focused (e.g. SeaGL).

There are just a lot of conferences.

I average a conference a month, with most of them being more general sorts of events, and a few being project specific, like DebConf and GUADEC.

So far in 2019, I went to: FOSDEM, CopyLeft Conf, LibrePlanet, FOSS North, Linux Fest Northwest, OSCON, FrOSCon, GUADEC, and GitLab Commit. I’m going to All Things Open next week. In November I have COSCon scheduled. I’m skipping SeaGL this year. I am not planning on attending 36C3 unless my talk is accepted. I canceled my trip to DebConf19. I did not go to Camp this year. I also had a board meeting in NY, an upcoming one in Berlin, and a Debian meeting in the other Cambridge. I’m skipping LAS and likely going to SFSCon for GNOME.

So that's 9 so far this year, and somewhere between 1 and 4 more, depending on some details.

There are also conferences that don’t happen every year, like HOPE and CubaConf. There are some that I haven’t been to yet, like PyCon, and more regional events like Ohio Linux Fest, SCALE, and FOSSCon in Philadelphia.

I think I travel too much, and plenty of people travel more than I do. This is one of the reasons why we have too many events: the same people are traveling so much.

When you’re nose deep in it, when you think that what you’re doing is important, you keep going to them as long as you’re invited. I really believe in the messages I share during my talks, and I know that by speaking I am reaching audiences I wouldn’t otherwise. As long as I keep getting invited places, I’ll probably keep going.

Finding sponsors is hard(er).

It is becoming increasingly difficult to find sponsors for conferences. This is my experience, and what I’ve heard from speaking with others about it: lower response rates to requests, and people choosing lower sponsorship levels than they have in past years.

CFP responses are not increasing.

I’m yet to hear of any established community-run tech conferences who’ve had growth in their CFP response rate this year.

Peak conference?

— Christopher Neugebauer

Markus Koschany: My Free Software Activities in September 2019

Thursday 10th of October 2019 08:49:21 PM

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games
  • Reiner Herrmann investigated a build failure of supertuxkart on several architectures and prepared an update to link against libatomic. I reviewed and sponsored the new revision which allowed supertuxkart 1.0 to migrate to testing.
  • Python 3 ports: Reiner also ported bouncy, a game for small kids, to Python3 which I reviewed and uploaded to unstable.
  • I upgraded atomix to version 3.34.0 as requested, although it is unlikely that you will find a major difference from the previous version.
Debian Java Misc
  • I packaged new upstream releases of ublock-origin and privacybadger, two popular Firefox/Chromium addons and
  • packaged a new upstream release of wabt, the WebAssembly Binary Toolkit.
Debian LTS

This was my 43rd month as a paid contributor and I have been paid to work 23.75 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 11.09.2019 until 15.09.2019 I was in charge of our LTS frontdesk. I investigated and triaged CVEs in libonig, bird, curl, openssl, wpa, httpie, asterisk, wireshark and libsixel.
  • DLA-1922-1. Issued a security update for wpa fixing 1 CVE.
  • DLA-1932-1. Issued a security update for openssl fixing 2 CVE.
  • DLA-1900-2. Issued a regression update for apache fixing 1 CVE.
  • DLA-1943-1. Issued a security update for jackson-databind fixing 4 CVE.
  • DLA-1954-1. Issued a security update for lucene-solr fixing 1 CVE. I triaged CVE-2019-12401 and marked Jessie as not-affected because we use the system libraries of woodstox in Debian.
  • DLA-1955-1. Issued a security update for tcpdump fixing 24 CVE by backporting the latest upstream release to Jessie. I discovered several test failures but after more investigation I came to the conclusion that the test cases were simply created with a newer version of libpcap which causes the test failures with Jessie’s older version.
ELTS

Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project but all Debian users benefit from it without cost. The current ELTS release is Debian 7 “Wheezy”. This was my sixteenth month and I have been assigned to work 15 hours on ELTS plus five hours from August. I used 15 of them for the following:

  • I was in charge of our ELTS frontdesk from 30.09.2019 until 06.10.2019 and I triaged CVE in tcpdump. There were no reports of other security vulnerabilities for supported packages in this week.
  • ELA-163-1. Issued a security update for curl fixing 1 CVE.
  • ELA-171-1. Issued a security update for openssl fixing 2 CVE.
  • ELA-172-1. Issued a security update for linux fixing 23 CVE.
  • ELA-174-1. Issued a security update for tcpdump fixing 24 CVE.

Norbert Preining: R with TensorFlow 2.0 on Debian/sid

Thursday 10th of October 2019 06:15:18 AM

I recently posted on getting TensorFlow 2.0 with GPU support running on Debian/sid. At that time I didn’t manage to get the tensorflow package for R running properly. It didn’t need much to get it running, though.

The biggest problem I faced was that the R/TensorFlow package recommends using install_tensorflow, which can use auto, conda, virtualenv, or system (at least according to the linked web page). I didn’t want to set up either a conda or a virtualenv environment, since TensorFlow was already installed, so I thought system would be correct. Anyway, the system option is gone and no longer accepted, but I still got errors, in particular because the code mentioned on the installation page is incorrect for TF2.0!

It turned out to be a simple error on my side: the default is to use the program python, which in Debian is still Python 2, while I have TF installed only for Python 3. The magic incantation to fix that is use_python("/usr/bin/python3") and one is set.

So here is the full list of commands to get R/TensorFlow running on top of an already installed TensorFlow for Python 3 (as usual, either as root for installation into /usr/local or as a user for a local installation):

devtools::install_github("rstudio/tensorflow")

And if you want to run some TF program:

    library(tensorflow)
    use_python("/usr/bin/python3")
    tf$math$cumprod(1:5)

This gives lots of output, but it does mention that it is running on my GPU.

At least for the (probably very short) time being this looks like a workable system. Now off to convert my TF1.N code to TF2.0.

Louis-Philippe Véronneau: Trying out Sourcehut

Thursday 10th of October 2019 04:00:00 AM

Last month, I decided it was finally time to move a project I maintain from Github1 to another git hosting platform.

While polling other contributors (I proposed moving to gitlab.com), someone suggested moving to Sourcehut, a newish git hosting platform written and maintained by Drew DeVault. I've been following Drew's work for a while now and although I had read a few blog posts on Sourcehut's development, I had never really considered giving it a try. So I did!

Sourcehut is still in alpha and I'm expecting a lot of things to change in the future, but here's my quick review.

Things I like

Sustainable FOSS

Sourcehut is 100% Free Software. Github is proprietary and I dislike Gitlab's Open Core business model.

Sourcehut's business model also seems sustainable to me, as it relies on people paying a monthly fee for the service. You'll need to pay if you want your code hosted on https://sr.ht once Sourcehut moves into beta. As I've written previously, I like that a lot.

In comparison, Gitlab is mainly funded by venture capital and I'm afraid of the long term repercussions this choice will have.

Continuous Integration

Continuous Integration is very important to me and I'm happy to say Sourcehut's CI is pretty good! Like Travis and Gitlab CI, you declare what needs to happen in a YAML file. The CI uses real virtual machines backed by QEMU, so you can run many different distros and CPU archs!
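For illustration, a minimal build manifest for Sourcehut's CI might look like the following sketch; the image name, package list, repository URL and task body here are invented placeholders, not taken from the post:

```yaml
# Hypothetical .build.yml sketch for builds.sr.ht.
# The image, source URL and task contents are placeholders.
image: debian/stable
packages:
  - python3
sources:
  - https://git.sr.ht/~example/project
tasks:
  - test: |
      cd project
      python3 -m unittest discover
```

Each task runs in sequence inside a fresh QEMU virtual machine built from the named image, which is what makes targeting different distributions and architectures possible.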

Even nicer, you can actually SSH into a failed CI job to debug things. In comparison, Gitlab CI's Interactive Web Terminal is ... web based and thus not as nice. Worse, it seems it's still somewhat buggy as Gitlab still hasn't enabled it on their gitlab.com instance.

Here's what the instructions to SSH into the CI look like when a job fails:

This build job failed. You may log into the failed build environment within 10 minutes to examine the results with the following command:

    ssh -t builds@foo.bar connect NUMBER

Sourcehut's CI is not as feature-rich or as flexible as Gitlab CI, but I feel it is more powerful than Gitlab CI's default docker executor. Folks that run integration tests or more complicated setups where Docker fails should definitely give it a try.

From the few tests I did, Sourcehut's CI is also pretty quick (it's definitely faster than Travis or Gitlab CI).

No JS

Although Sourcehut's web interface does bundle some Javascript, all features work without it. Three cheers for that!

Things I dislike

Features division

I'm not sure I like the way features (the issue tracker, the CI builds, the git repository, the wikis, etc.) are subdivided in different subdomains.

For example, when you create a git repository on git.sr.ht, you only get a git repository. If you want an issue tracker for that git repository, you have to create one at todo.sr.ht with the same name. That issue tracker isn't visible from the git repository web interface.

That's the same for all the features. For example, you don't see the build status of a merged commit when you look at it. This design choice makes it feel like the different features aren't integrated with one another.

In comparison, Gitlab and Github use a more "centralised" approach: everything is centered around a central interface (your git repository) and it feels more natural to me.

Discoverability

I haven't seen a way to search sr.ht for things hosted there. That makes it hard to find repositories, issues or even the Sourcehut source code!

Merge Request workflow

I'm a sucker for the Merge Request workflow. I really like to have a big green button I can click on to merge things. I know some people prefer a more manual workflow that uses git merge and stuff, but I find that tiresome.

Sourcehut chose a workflow based on sending patches by email. It's neat since you can submit code without having an account. Sourcehut also provides mailing lists for projects, so people can send patches to a central place.

I find that workflow harder to work with, since to me it makes it more difficult to see what patches have been submitted. It also makes the review process more tedious, since the CI isn't run automatically on email patches.

Summary

All in all, I don't think I'll be moving ISBG to Sourcehut (yet?). At the moment it doesn't quite feel as ready as I'd want it to be, and that's OK. Most of the things I disliked about the service can be fixed by some UI work and I'm sure people are already working on it.

Github was bought by MS for 7.5 billion USD and Gitlab is currently valued at 2.7 billion USD. It's not really fair to ask Sourcehut to fully compete just yet :)

With Sourcehut, Drew DeVault is fighting the good fight and I wish him the most resounding success. Who knows, maybe I'll really migrate to it in a few years!

  1. Github is a proprietary service, has been bought by Microsoft and gosh darn do I hate Travis CI. 

Dirk Eddelbuettel: RcppArmadillo 0.9.800.1.0

Thursday 10th of October 2019 12:59:00 AM

Another month, another Armadillo upstream release! Hence a new RcppArmadillo release arrived on CRAN earlier today, and was just shipped to Debian as well. It brings a faster solve() method and other goodies. We also switched to the (awesome) tinytest unit test framework, and Min Kim made the configure.ac script more portable for the benefit of NetBSD and other non-bash users; see below for more details. Once again we ran two full sets of reverse-depends checks, no issues were found, and the package was auto-admitted at CRAN after less than two hours despite there being 665 reverse depends. Impressive stuff, so a big Thank You! as always to the CRAN team.

Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab's. RcppArmadillo integrates this library with the R environment and language, and is widely used by (currently) 665 other packages on CRAN.

Changes in RcppArmadillo version 0.9.800.1.0 (2019-10-09)
  • Upgraded to Armadillo release 9.800 (Horizon Scraper)

    • faster solve() in default operation; iterative refinement is no longer applied by default; use solve_opts::refine to explicitly enable refinement

    • faster expmat()

    • faster handling of triangular matrices by rcond()

    • added .front() and .back()

    • added .is_trimatu() and .is_trimatl()

    • added .is_diagmat()

  • The package now uses tinytest for unit tests (Dirk in #269).

  • The configure.ac script is now more careful about shell portability (Min Kim in #270).

Courtesy of CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Adnan Hodzic: Hello world!

Wednesday 9th of October 2019 03:55:10 PM

Welcome to WordPress. This is your first post. Edit or delete it, then start writing!

Enrico Zini: Fixed XSS issue on debtags.debian.org

Wednesday 9th of October 2019 08:51:57 AM

Thanks to Moritz Naumann who found the issues and wrote a very useful report, I fixed a number of Cross Site Scripting vulnerabilities on https://debtags.debian.org.

The core of the issue was code like this in a Django view:

    def pkginfo_view(request, name):
        pkg = bmodels.Package.by_name(name)
        if pkg is None:
            return http.HttpResponseNotFound("Package %s was not found" % name)
        # …

The default content-type of HttpResponseNotFound is text/html, and the string passed is the raw HTML with clearly no escaping, so this allows injection of arbitrary HTML/<script> code in the name variable.

I was so used to Django doing proper auto-escaping that I missed this place in which it can't do that.
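Stripped of the Django specifics, the underlying escaping behaviour can be demonstrated with just the standard library; the malicious "package name" below is an invented example:

```python
from html import escape

# An invented malicious "package name", as an attacker could supply in the URL.
name = '<script>alert("xss")</script>'

# Raw interpolation reproduces the vulnerability: the markup survives intact
# and would execute if served with content-type text/html.
unsafe = "Package %s was not found" % name

# Escaping turns the markup into inert text before it reaches an HTML response.
safe = "Package {} was not found".format(escape(name))

print(unsafe)
print(safe)
```

Running this shows the escaped variant with `&lt;`, `&gt;` and `&quot;` entities in place of the live markup, which the browser renders as plain text rather than executing.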

There are various things that can be improved in that code.

One could introduce escaping (and while one's at it, migrate the old % to format):

    from django.utils.html import escape

    def pkginfo_view(request, name):
        pkg = bmodels.Package.by_name(name)
        if pkg is None:
            return http.HttpResponseNotFound("Package {} was not found".format(escape(name)))
        # …

Alternatively, set content_type to text/plain:

    def pkginfo_view(request, name):
        pkg = bmodels.Package.by_name(name)
        if pkg is None:
            return http.HttpResponseNotFound("Package {} was not found".format(name), content_type="text/plain")
        # …

Even better, raise Http404:

    from django.http import Http404

    def pkginfo_view(request, name):
        pkg = bmodels.Package.by_name(name)
        if pkg is None:
            raise Http404(f"Package {name} was not found")
        # …

Even better, use standard shortcuts and model functions if possible:

    from django.shortcuts import get_object_or_404

    def pkginfo_view(request, name):
        pkg = get_object_or_404(bmodels.Package, name=name)
        # …

And finally, though not security related, it's about time to switch to class-based views:

    class PkgInfo(TemplateView):
        template_name = "reports/package.html"

        def get_context_data(self, **kw):
            ctx = super().get_context_data(**kw)
            ctx["pkg"] = get_object_or_404(bmodels.Package, name=self.kwargs["name"])
            # …
            return ctx

I proceeded with a review of the other Django sites I maintain, in case I had reproduced this mistake there as well.
