Planet Debian - https://planet.debian.org/

Paul Wise: FLOSS Activities June 2019

Monday 1st of July 2019 02:45:29 AM
Administration
  • Debian: investigate/fix gitlab issue, break LDAP sync lock
  • Debian wiki: fix bugs in anti-spam script, whitelist email domains, whitelist email addresses, update email for accounts with bouncing email
Sponsors

All work was done on a volunteer basis.

Sylvain Beucler: Debian LTS - June 2019

Sunday 30th of June 2019 08:16:37 PM

Here is my transparent report for my work on the Debian Long Term Support (LTS) project, which extends the security support for past Debian releases, as a paid contributor.

In June, the monthly sponsored hours were split evenly among contributors depending on their max availability - I declared max 30h and got 17h.

I mostly spent time on tricky updates. Uploading one with literally thousands of reverse dependencies can be quite a challenge. Especially when, as is sadly common, the CVE description is (willingly?) vague, and no reproducer is available.

Matthew Garrett: Which smart bulbs should you buy (from a security perspective)

Sunday 30th of June 2019 08:10:15 PM
People keep asking me which smart bulbs they should buy. It's a great question! As someone who has, for some reason, ended up spending a bunch of time reverse engineering various types of lightbulb, I'm probably a reasonable person to ask. So. There are four primary communications mechanisms for bulbs: wifi, bluetooth, zigbee and zwave. There's basically zero compelling reasons to care about zwave, so I'm not going to.

Wifi
Advantages: Doesn't need an additional hub - you can just put the bulbs wherever. The bulbs can connect out to a cloud service, so you can control them even if you're not on the same network.
Disadvantages: Only works if you have wifi coverage, each bulb has to have wifi hardware and be configured appropriately.
Which should you get: If you search Amazon for "wifi bulb" you'll get a whole bunch of cheap bulbs. Don't buy any of them. They're mostly based on a custom protocol from Zengge and they're shit. Colour reproduction is bad, there's no good way to use the colour LEDs and the white LEDs simultaneously, and if you use any of the vendor apps they'll proxy your device control through a remote server with terrible authentication mechanisms. Just don't. The ones that aren't Zengge are generally based on the Tuya platform, whose security model is to have keys embedded in some incredibly obfuscated code and hope that nobody can find them. TP-Link make some reasonably competent bulbs but also use a weird custom protocol with hand-rolled security. Eufy are fine but again there's weird custom security. Lifx are the best bulbs, but have zero security on the local network - anyone on your wifi can control the bulbs. If that's something you care about then they're a bad choice, but also if that's something you care about maybe just don't let people you don't trust use your wifi.
Conclusion: If you have to use wifi, go with lifx. Their security is not meaningfully worse than anything else on the market (and they're better than many), and they're better bulbs. But you probably shouldn't go with wifi.

Bluetooth
Advantages: Doesn't need an additional hub. Doesn't need wifi coverage. Doesn't connect to the internet, so remote attack is unlikely.
Disadvantages: Only one control device at a time can connect to a bulb, so harder to share. Control device needs to be in Bluetooth range of the bulb. Doesn't connect to the internet, so you can't control your bulbs remotely.
Which should you get: Again, most Bluetooth bulbs you'll find on Amazon are shit. There's a whole bunch of weird custom protocols and the quality of the bulbs is just bad. If you're going to go with anything, go with the C by GE bulbs. Their protocol is still some AES-encrypted custom binary thing, but they use a Bluetooth controller from Telink that supports a mesh network protocol. This means that you can talk to any bulb in your network and still send commands to other bulbs - the dual advantages here are that you can communicate with bulbs that are outside the range of your control device and also that you can have as many control devices as you have bulbs. If you've bought into the Google Home ecosystem, you can associate them directly with a Home and use Google Assistant to control them remotely. GE also sell a wifi bridge - I have one, but haven't had time to review it yet, so I make no assertions about its competence. The colour bulbs are also disappointing, with much dimmer colour output than white output.

Zigbee
Advantages: Zigbee is a mesh protocol, so bulbs can forward messages to each other. The bulbs are also pretty cheap. Zigbee is a standard, so you can obtain bulbs from several vendors that will then interoperate - unfortunately there are actually two separate standards for Zigbee bulbs, and you'll sometimes find yourself with incompatibility issues there.
Disadvantages: Your phone doesn't have a Zigbee radio, so you can't communicate with the bulbs directly. You'll need a hub of some sort to bridge between IP and Zigbee. The ecosystem is kind of a mess, and you may have weird incompatibilities.
Which should you get: Pretty much every vendor that produces Zigbee bulbs also produces a hub for them. Don't get the Sengled hub - anyone on the local network can perform arbitrary unauthenticated command execution on it. I've previously recommended the Ikea Tradfri, which at the time only had local control. They've since added remote control support, and I haven't investigated that in detail. But overall, I'd go with the Philips Hue. Their colour bulbs are simply the best on the market, and their security story seems solid - performing a factory reset on the hub generates a new keypair, and adding local control users requires a physical button press on the hub to allow pairing. Using the Philips hub doesn't tie you into only using Philips bulbs, but right now the Philips bulbs tend to be as cheap as (or cheaper than) anything else.

But what about
If you're into tying together all kinds of home automation stuff, then either go with Smartthings or roll your own with Home Assistant. Both are definitely more effort if you only want lighting.

My priority is software freedom
Excellent! There are various bulbs that can run the Espurna or AiLight firmwares, but you'll have to deal with flashing them yourself. You can tie that into Home Assistant and have a completely free stack. If you're ok with your bulbs being proprietary, Home Assistant can speak to most types of bulb without an additional hub (you'll need a supported Zigbee USB stick to control Zigbee bulbs), and will support the C by GE ones as soon as I figure out why my Bluetooth transmissions stop working every so often.

Conclusion
Outside niche cases, just buy a Hue. Philips have done a genuinely good job. Don't buy cheap wifi bulbs. Don't buy a Sengled hub.

(Disclaimer: I mentioned a Google product above. I am a Google employee, but do not work on anything related to Home.)


Chris Lamb: Free software activities in June 2019

Sunday 30th of June 2019 07:04:14 PM

Here is my monthly update covering what I have been doing in the free software world during June 2019 (previous month):

Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, almost all software is distributed pre-compiled to end users. The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

The initiative is proud to be a member project of the Software Freedom Conservancy, a not-for-profit 501(c)(3) charity focused on ethical technology and user freedom. Conservancy acts as a corporate umbrella, allowing projects to operate as non-profit initiatives without managing their own corporate structure. If you like the work of the Conservancy or the Reproducible Builds project, please consider becoming an official supporter.

This month:

I then spent significant time working on buildinfo.debian.net, my experiment into how to process, store and distribute .buildinfo files after the Debian archive software has processed them. This included:

  • Started making the move to Python 3.x (and Django 2.x) [...][...][...][...][...][...][...], additionally performing a large number of adjacent cleanups including dropping the authentication framework [...], fixing a number of flake8 warnings [...], adding a setup.cfg to silence some warnings [...], moving to __str__ and str.format(...) over %-style interpolation and u"unicode" strings [...], etc.

  • I also added a number of (as-yet unreleased…) features, including caching the expensive landing page queries. [...]

  • Took the opportunity to start migrating the hosting from its current GitHub home to a more-centralised repository on salsa.debian.org, moving from the Travis to the GitLab continuous integration platform, updating the URL to the source in the footer [...] and many other related changes [...].

  • Applied the Black "uncompromising code formatter" to the codebase. [...]

I also made the following changes to our tooling:

  • strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build. This month, I added support for the clamping of tIME chunks in .png files. [...]

  • In diffoscope (our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues) I documented that run_diffoscope should not be considered a stable API [...] and adjusted the configuration to build our Docker image from the current Git checkout, not the Debian archive [...]

Finally, I spent a significant amount of time working on our website this month, including:

  • Moved the remaining site to the newer website design. This was a long-outstanding task (#2) and required a huge number of changes, including moving all the event and documentation pages to the new design [...] and migrating/merging the old _layouts/page.html into the new design [...] too. This could then allow for many cleanups, including moving/deleting files into cleaner directories, dropping a bunch of example layouts [...] and dropping the old "home" layout. [...]

  • Adding reports to the homepage. (#16)

  • I also took the opportunity to re-order and merge various top-level sections of the site to make the page easier to parse/navigate [...][...] and I updated the documentation for SOURCE_DATE_EPOCH to clarify that the alternative -r call to date(1) is for compatibility with BSD variants of UNIX [...] (see the example after this list).

  • Made a large number of visual fixups, particularly to accommodate the principles of responsive web design. [...][...][...][...][...]

  • Updated the lint functionality of the build system to check for URIs that are not using {{ "/foo/" | prepend: site.baseurl }}-style relative URLs. [...]
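
As a hedged illustration of that date(1) portability point (the timestamp value here is invented for the example), the GNU and BSD spellings are:

SOURCE_DATE_EPOCH=1561939200
date -u -d "@$SOURCE_DATE_EPOCH"    # GNU coreutils date
date -u -r "$SOURCE_DATE_EPOCH"     # BSD variants of date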


Debian Lintian

Even more hacking on the Lintian static analysis tool for Debian packages, including the following new features:

  • Warn about files referencing /usr/bin/foo if the binary is actually installed under /usr/sbin/foo. (#930702)
  • Support --suppress-tags-from-file in the configuration file. (#930700)
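
A hedged usage sketch for the suppression option (the tag list file and package name are hypothetical):

printf '%s\n' binary-without-manpage > ~/.lintian-suppressed-tags   # one tag name per line
lintian --suppress-tags-from-file ~/.lintian-suppressed-tags mypackage_1.0-1_amd64.changes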

… and the following bug fixes:

Debian LTS

This month I have worked 17 hours on Debian Long Term Support (LTS) and 12 hours on its sister Extended LTS project.

Dirk Eddelbuettel: RProtoBuf 0.4.14

Sunday 30th of June 2019 04:48:00 PM

A new release 0.4.14 of RProtoBuf is arriving at CRAN. RProtoBuf provides R with bindings for the Google Protocol Buffers (“ProtoBuf”) data encoding and serialization library used and released by Google, and deployed very widely in numerous projects as a language and operating-system agnostic protocol.

This release contains two very helpful pull requests by Jarod Meng that solidify behaviour in two corner cases of message translation. Jeroen also updated the Windows build settings which will help with the upcoming transition to a new Rtools version.

Changes in RProtoBuf version 0.4.14 (2019-06-30)
  • An all.equal.Message method was added to avoid a fallback to the generic (Jarod Meng in #54 fixing #53)

  • Recursive fields now handled by identical() (Jarod Meng in #57 fixing #56)

  • Update Windows build infrastructure (Jeroen)

CRANberries provides the usual diff to the previous release. The RProtoBuf page has copies of the (older) package vignette, the ‘quick’ overview vignette, and the pre-print of our JSS paper. Questions, comments etc should go to the GitHub issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Ben Hutchings: Debian LTS work, June 2019

Sunday 30th of June 2019 04:30:55 PM

I was assigned 17 hours of work by Freexian's Debian LTS initiative and worked all those hours this month.

I applied a number of security fixes to Linux 3.16, including those for the TCP denial-of-service vulnerabilities. I uploaded the updated package to jessie and issued DLA-1823.

I backported the corresponding security update for Linux 4.9 from stretch to jessie and issued DLA-1824.

I also prepared and released Linux 3.16.69 with most of the same security fixes, excluding those that weren't yet applied upstream.

Russ Allbery: DocKnot 3.00

Sunday 30th of June 2019 05:32:00 AM

This package started as only a documentation generator, but my goal for some time has been to gather together all of the tools and random scripts I use to maintain my web site and free software releases. This release does a bunch of internal restructuring to make it easier to add new commands, and then starts that process by adding a docknot dist command. This performs some (although not all) of the actions I currently use my release script for, and provides a platform for ensuring that the full package test suite is run as part of generating a distribution tarball.
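
A hedged sketch of invoking the new command (the source directory is hypothetical):

cd ~/src/mypackage   # hypothetical package source tree
docknot dist         # runs the full test suite while generating a distribution tarball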

This had been half-implemented for quite a while before I finally found the time to finish off a release. Hopefully releases will come a bit faster in the future.

Also in this release are a few tweaks to the DocKnot output (including better support for orphaned packages), and some hopeful fixes for test suite failures on Windows (although I'm not sure how useful this package will be in general on Windows).

You can get the latest version from the DocKnot distribution page or from CPAN.

Keith Packard: snekboard

Sunday 30th of June 2019 12:57:59 AM
SnekBoard and Lego

I was hoping to use some existing boards for snek+Lego, but I haven't found anything that can control 9V motors. So, I designed SnekBoard.

(click on the picture to watch the demo in motion!)

Here's the code:

def setservo(v):
    if v < 0:
        setleft()
        v = -v
    else:
        setright()
    setpower(v)

def track(sensor, motor):
    talkto(motor)
    setpower(0)
    setright()
    on()
    while True:
        setservo(read(sensor) * 2 - 1)

track(ANALOG1, MOTOR2)

SnekBoard Hardware

SnekBoard is made from:

  1. SAMD21G18A processor. This is the same chip found in many Arduino boards, including some from Adafruit. It's an ARM Cortex M0 with 256kB of flash and 32kB of RAM.

  2. Lithium Polymer battery. This uses the same connector found on batteries made by SparkFun and Adafruit. There's a battery charger on the board powered from USB so it will always be charging when connected to the computer.

  3. 9V boost power supply. Lego motors for the last many years have run on 9V. Instead of using 9V worth of batteries, using a boost regulator means the board can run off a single cell LiPo.

  4. Four motor controllers for Lego motors and servos. The current boards use a TI DRV8838, which provides up to 1.5A.

  5. Two NeoPixels

  6. Eight GPIOs with 3.3V and GND available for each one.

  7. One blue LED.

Getting SnekBoard Built

The SnekBoard PCBs arrived from OshPark a few days ago and I got them assembled and running. OshPark now has an associated stencil service, and I took advantage of that to get a stainless stencil along with the boards. The DRV8838 chips have pads small enough that my home-cut stencils don't work reliably, so having a 'real' stencil really helps. I ordered a 4mil stencil, which was probably too thick. They offer 3mil, and I think that would have reduced some of the bridging I got from having too much paste on the board.

Flashing a Bootloader on SnekBoard

I forked the Adafruit UF2 boot loader and added definitions for this board. The version of GCC provided in Debian appears to generate larger code than the newest upstream version, so I wasn't able to add the NeoPixel support, but the boot loader is happy enough to use the blue LED to indicate status.

STLink V2 vs SAMD21

I've got an STLink V2 SWD dongle which I use on all of my Arm boards for debugging. It appears that this device has a limitation in how it can access memory on the target; it can either use 8-bit or 32-bit accesses, but not 16-bit. That's usually just fine, but there's one register in the flash memory controller on the SAMD21 which requires atomic 16-bit accesses.

The STLinkV2 driver for OpenOCD emulates 16-bit accesses using two 8-bit accesses, causing all flash operations to fail. Fixing this was pretty simple: the 2 bytes following the relevant register aren't used, so I switched the 16-bit access to a 32-bit access. That solved the problem and I was able to flash the bootloader. I've submitted an OpenOCD patch including this upstream and pushed the OpenOCD fork to github.

Snek on the SnekBoard

Snek already supports the target processor; all that was needed for this port was to describe the GPIOs and configure the clocks. This port is on the master branch of the snek repository.

All of the hardware appears to work correctly, except that I haven't tested the 16MHz crystal which I plan to use for a more precise time source.

SnekBoard and Lego Motors

You can see a nice description of pretty much every motor Lego has ever made on Philo's web site. I've got a small selection of them, including:

  1. Electric Technic Mini-Motor 9v (71427)
  2. Power Functions Medium motor (8883)
  3. Power Functions Large motor (88003)
  4. Power Functions XL motor (8882)
  5. Power Functions Servo Motor 88004

In testing, all of them except the Power Functions Medium motor work great. That motor refused to start and just sat on the bench whinging (at about 1kHz). Reading through the DRV8838 docs, I discovered that if the motor consumes more than about 2A for more than 1µs, the chip will turn off the output, wait 1ms and try again.

So I hooked the board up to my oscilloscope and took a look and here's what I saw:

The upper trace is the 9V rail, which looks solid. The lower trace is the motor control output. At 500µs/div, you can see that it's cycling every 1ms, just like the chip docs say it will do in over current situations.

I zoomed in to the very start of one of the cycles and saw this:

This one is scaled to 500ns/div, and you can see that the power is high for a bit more than 1µs, and then goes a bit wild before turning off.

So the Medium motor draws so much current at startup that the DRV8838 turns it off, waits 1ms and tries again. Hence the 1kHz whine heard from the motor.

I tried to measure the current going into the motor with my DVM, but when I did that, just the tiny additional resistance from the DVM caused the motor to start working (!).

Swapping out the Motor Controller

I spent a bunch of time looking for a replacement motor controller; the SnekBoard is a bit special as I want a motor controller that takes direction and PWM instead of PWM1/PWM2, which is what you usually find on an H-bridge set. The PWM1/PWM2 mode is both simpler and more flexible as it allows both brake and coast modes, but it requires two PWM outputs from the SoC for each controller. I found the DRV8876, which provides 3.5A of current instead of 1.5A. That "should" be plenty for even the Medium motor.

Future Plans

I'll get new boards made and loaded to make sure the updated motor controller works. After that, I'll probably build half a dozen or so in time for class this October. I'm wondering if other people would like some of these boards, and if so, how I should go about making them available. Suggestions welcome!

Dirk Eddelbuettel: rvw 0.6.0: First release

Saturday 29th of June 2019 09:05:00 PM

Note: Crossposted by Ivan, James and myself.

Today Dirk Eddelbuettel, James Balamuta and Ivan Pavlov are happy to announce the first release of a reworked R interface to the Vowpal Wabbit machine learning system.

Started as a GSoC 2018 project, the new rvw package was built to give R users easier access to a variety of efficient machine learning algorithms. Key features that promote this idea and differentiate the new rvw from existing Vowpal Wabbit packages in R are:

  • A reworked interface that simplifies model manipulations (direct usage of CLI arguments is also available)
  • Support of the majority of Vowpal Wabbit learning algorithms and reductions
  • Extended data.frame converter covering different variations of Vowpal Wabbit input formats

Below is a simple example of how to use the renewed rvw’s interface:

library(rvw)
library(mlbench)  # for a dataset

# Basic data preparation
data("BreastCancer", package = "mlbench")
data_full <- BreastCancer
ind_train <- sample(1:nrow(data_full), 0.8*nrow(data_full))
data_full <- data_full[,-1]
data_full$Class <- ifelse(data_full$Class == "malignant", 1, -1)
data_train <- data_full[ind_train,]
data_test <- data_full[-ind_train,]

# Simple Vowpal Wabbit model for binary classification
vwmodel <- vwsetup(dir = "./", model = "mdl.vw", option = "binary")

# Training
vwtrain(vwmodel = vwmodel, data = data_train, passes = 10, targets = "Class")

# And testing
vw_output <- vwtest(vwmodel = vwmodel, data = data_test)

More information is available in the Introduction and Examples sections of the wiki.

The rvw package links directly to libvw, so initially we offer a Docker container in order to ship the most up-to-date package with everything needed.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Russell Coker: Long-term Device Use

Saturday 29th of June 2019 11:37:11 AM

It seems to me that Android phones have recently passed the stage where hardware advances are well ahead of software bloat. This is the point that desktop PCs passed about 15 years ago and laptops passed about 8 years ago. For just over 15 years I’ve been avoiding buying desktop PCs; the hardware that organisations I work for throw out is good enough that I don’t need to. For the last 8 years I’ve been avoiding buying new laptops, instead buying refurbished or second hand ones which are more than adequate for my needs. Now it seems that Android phones have reached the same stage of development.

3 years ago I purchased my last phone, a Nexus 6P [1]. Then 18 months ago I got a Huawei Mate 9 as a warranty replacement [2] (I had swapped phones with my wife so the phone I was using which broke was less than a year old). The Nexus 6P had been working quite well for me until it stopped booting, but I was happy to have something a little newer and faster to replace it at no extra cost.

Prior to the Nexus 6P I had a Samsung Galaxy Note 3 for 1 year 9 months which was a personal record for owning a phone and not wanting to replace it. I was quite happy with the Note 3 until the day I fell on top of it and cracked the screen (it would have been ok if I had just dropped it). While the Note 3 still has my personal record for continuous phone use, the Nexus 6P/Huawei Mate 9 have the record for going without paying for a new phone.

A few days ago when browsing the Kogan web site I saw a refurbished Mate 10 Pro on sale for about $380. That’s not much money (I usually have spent $500+ on each phone) and while the Mate 9 is still going strong the Mate 10 is a little faster and has more RAM. The extra RAM is important to me as I have problems with Android killing apps when I don’t want it to. Also the IP67 protection will be a handy feature. So that phone should be delivered to me soon.

Some phones are getting ridiculously expensive nowadays (who wants to walk around with a $1000+ Pixel?) but it seems that the slightly lower end models are more than adequate and the older versions are still good.

Cost Summary

If I can buy a refurbished or old model phone every 2 years for under $400 that will make using a phone cost about $0.50 per day. The Nexus 6P cost me $704 in June 2016 which means that for the past 3 years my phone cost was about $0.62 per day.

It seems that laptops tend to last me about 4 years [3], and I don’t need high-end models (I even used one from a rubbish pile for a while). The last laptops I bought cost me $289 for a Thinkpad X1 Carbon [4] and $306 for the Thinkpad T420 [5]. That makes laptops about $0.20 per day.

In May 2014 I bought a Samsung Galaxy Note 10.1 2014 edition tablet for $579. That is still working very well for me today: apart from only having 32G of internal storage space and an OS update preventing Android apps from writing to the micro SD card (so I have to use USB to copy TV shows on to it), there’s nothing more that I need from a tablet. Strangely I even get good battery life out of it; I can use it for a couple of hours without the battery running out. Battery life isn’t nearly as good as when it was new, but it’s still OK for my needs. As Samsung stopped providing security updates I can’t use the tablet as an SSH client, but now that my primary laptop is a small and light model that’s less of an issue. Currently that tablet has cost me just over $0.30 per day and it’s still working well.

Currently it seems that my hardware expense for the foreseeable future is likely to be about $1 per day: 20 cents for laptop, 30 cents for tablet, and 50 cents for phone. The overall expense is about $1.66 per day as I’m on a $20 per month pre-paid plan with Aldi Mobile.

Saving Money

A laptop is very important to me, but the amount of money that I’m spending doesn’t reflect that. However, it seems that I don’t have any option for spending more on a laptop (the Thinkpad X1 Carbon I have now is just great and there’s no real option for getting more utility by spending more). I also don’t have any option to spend less on a tablet; 5 years is a great lifetime for a device that is practically impossible to repair (repair will cost a significant portion of the replacement cost).

I hope that the Mate 10 can last at least 2 years which will make it a new record for low cost of ownership of a phone for me. If app vendors can refrain from making their bloated software take 50% more RAM in the next 2 years that should be achievable.

The surprising thing I learned while writing this post is that my mobile phone expense is the largest of all my expenses related to mobile computing. Given that I want to get good reception in remote areas (needs to be Telstra or another company that uses their network) and that I need at least 3GB of data transfer per month it doesn’t seem that I have any options for reducing that cost.


Gunnar Wolf: Updates from Raspberrypi-land

Saturday 29th of June 2019 05:06:42 AM

Yay!

I was feeling sad and depressed because it's already late June... And I had not had enough time to get the unofficial Debian Buster Raspberry preview images booting on the entry-level models of the family (Broadcom 2835-based Raspberries 1A, 1B, 0 and 0W). But, this morning I found a very interesting pull request open in our GitHub repository!

Dispatched some piled-up work, and set an image build. Some minutes later, I had a shiny image, raspi0w.tar.gz. Quickly fired up dd to prepare an SD card. Searched for my RPi0w under too many papers until I found it. Connected to my trusty little monitor, and...

So, as a spoiler for my DebConf talk... Yes! We have (apparent, maybe still a bit incomplete) true Debian-plus-the-boot-blob, straight-Buster support for the whole Raspberry Pi family, that is, all of the raspberries sold until last month (yeah, the RPi4 is probably not yet supported — the kernel does not yet have a Device Tree for it. But it should be fixed soon, hopefully!)


Bits from Debian: Diversity and inclusion in Debian: small actions and large impacts

Friday 28th of June 2019 10:40:00 PM

The Debian Project always has and always will welcome contributions from people who are willing to work on a constructive level with each other, without discrimination.

The Diversity Statement and the Code of Conduct are genuinely important parts of our community, and over recent years some other things have been done to make it clear that they aren't just words.

One of those things is the creation of the Debian Diversity Team: it was announced in April 2019, although it had already been working for several months before as a welcoming space for, and a way of increasing visibility of, underrepresented groups within the Debian project.

During DebConf19 in Curitiba there will be a dedicated Diversity and Welcoming Team. It will consist of people from the Debian community who will offer a contact point when you feel lost or uneasy. The DebConf team is also in contact with a local LGBTIQA+ support group for the exchange of safety concerns and information with respect to Brazil in general.

Today Debian also recognizes the impact LGBTIQA+ people have had in the world and within the Debian project, joining the worldwide Pride celebrations. We show it by changing our logo for this time to the Debian Diversity logo, and encourage all Debian members and contributors to show their support of a diverse and inclusive community.

Daniel Kahn Gillmor: Community Impact of OpenPGP Certificate Flooding

Friday 28th of June 2019 07:00:00 PM
Community Impact of OpenPGP Certificate Flooding

I wrote yesterday about a recent OpenPGP certificate flooding attack, what I think it means for the ecosystem, and how it impacted me. This is a brief followup, trying to zoom out a bit and think about why it affected me emotionally the way that it did.

One of the reasons this situation makes me sad is not just that it's more breakage that needs cleaning up, or even that my personal identity certificate was on the receiving end. It's that it has impacted (and will continue impacting at least in the short term) many different people -- friends and colleagues -- who I know and care about. It's not just that they may be the next targets of such a flooding attack if we don't fix things, although that's certainly possible. What gets me is that they were affected because they know me and communicate with me. They had my certificate in their keyring, or in some mutually-maintained system, and as a result of what we know to be good practice -- regular keyring refresh -- they got burned.

Of course, they didn't get actually, physically burned. But from several conversations I've had over the last 24 hours, I personally know at least a half-dozen different people who have lost hours of work, stymied by the failing tools, some of that time spent confused and anxious and frustrated. Some of them thought they might have lost access to their encrypted e-mail messages entirely. Others were struggling to wrestle a suddenly non-responsive machine back into order. These are all good people doing other interesting work that I want to succeed, and I can't give them those hours back, or relieve them of that stress retroactively.

One of the points I've been driving at for years is that the goals of much of the work I care about (confidentiality; privacy; information security and data sovereignty; healthy communications systems) are not individual goods. They are interdependent, communally-constructed and communally-defended social properties.

As an engineering community, we failed -- and as an engineer, I contributed to that failure -- at protecting these folks in this instance, because we left things sloppy and broken and supposedly "good enough".

Fortunately, this failure isn't the worst situation. There's no arbitrary code execution, no permanent data loss (unless people get panicked and delete everything), no accidental broadcast of secrets that shouldn't be leaked.

And as much as this is a community failure, there are also communities of people who have recognized these problems and have been working to solve them. So I'm pretty happy that good people have been working on infrastructure that saw this coming, and were preparing for it, even if their tools haven't been as fully implemented (or as widely adopted) as they should be yet. Those projects include:

  • Autocrypt -- which avoids any interaction with the keyserver network in favor of in-band key discovery.

  • Web Key Directory or WKD, which maps e-mail addresses to a user-controlled publication space for their OpenPGP Keys.

  • DANE OPENPGPKEY, which lets a domain owner publish their users' minimal OpenPGP certificates in the DNS directly.

  • Hagrid, the implementation behind the keys.openpgp.org keyserver, which presents the opportunity for an updates-only interface as well as a place for people to publish their certificates if their domain controller doesn't support WKD or DANE OPENPGPKEY. Hagrid is also an excellent first public showing for the Sequoia project, a Rust-based implementation of the OpenPGP standards that hopefully we can build more tooling on top of in the years to come.

Let's keep pushing these community-driven approaches forward and get the ecosystem to a healthier place.

Mike Gabriel: List Open Files for a Running Application/Service

Friday 28th of June 2019 07:03:10 AM

This is merely a little reminder for myself:

for pid in `ps -C <process-name> -o pid=`; do ls -l "/proc/$pid/fd"; done

On Linux, this returns a list of file handles being held open by all instances of <process-name>.

Update (2019-06-27): Martin Schuster suggested an even nicer (and, judging by the output, seemingly more complete) approach to me by email:

lsof -c /^<process-name>$/ -a -d ^mem

Daniel Kahn Gillmor: OpenPGP Certificate Flooding

Friday 28th of June 2019 04:00:00 AM
OpenPGP Certificate Flooding

My public cryptographic identity has been spammed to the point where it is unusable in standard workflows. This blogpost talks about what happened, what I'm doing about it, and what it means for the broader ecosystem.

If you work with me and you use OpenPGP certificates to do so, the crucial things you should know are:

  • Do not refresh my OpenPGP certificate from the SKS keyserver network.

  • Use a constrained keyserver like keys.openpgp.org if you want to check my certificate for updates like revocation, expiration, or subkey rollover.

  • Use an Autocrypt-capable e-mail client, WKD, or direct download from my server to find my certificate in the first place.

  • If you have already fetched my certificate in the last week, and it is bloated, or your GnuPG instance is horribly slow as a result, you probably want to delete it and then recover it via one of the channels described above.
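
A hedged sketch of that delete-and-recover step with GnuPG, using the fingerprint given below and the constrained keyserver setup described later in this post:

gpg --delete-keys C4BC2DDB38CCE96485EBE9C2F20691179038E5C6
gpg --keyserver hkps://keys.openpgp.org --recv-keys C4BC2DDB38CCE96485EBE9C2F20691179038E5C6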

What Happened?

Some time in the last few weeks, my OpenPGP certificate, 0xC4BC2DDB38CCE96485EBE9C2F20691179038E5C6 was flooded with bogus certifications which were uploaded to the SKS keyserver network.

SKS is known to be vulnerable to this kind of Certificate Flooding, and is difficult to address due to the synchronization mechanism of the SKS pool. (SKS's synchronization assumes that all keyservers have the same set of filters). You can see discussion about this problem from a year ago along with earlier proposals for how to mitigate it. But none of those proposals have quite come to fruition, and people are still reliant on the SKS network.

Previous Instances of Certificate Flooding

We've seen various forms of certificate flooding before, including spam on Werner Koch's key over a year ago, and abuse tools made available years ago under the name "trollwot". There's a keyserver-backed filesystem proposed as a proof of concept to point out the abuse.

There was even a discussion a few months ago about how the SKS keyserver network is dying.

So none of this is a novel or surprising problem. However, the scale of spam attached to certificates recently appears to be unprecedented. It's not just mine: Robert J. Hansen's certificate has been spammed into oblivion as well. The older certification spam on Werner's certificate, for example, is "only" about 5K certifications (a total of < 1MiB), whereas the certification spam attached to mine is more like 55K certifications for a total of 17MiB, and rjh's is more than double that.

What Problems does Certificate Flooding Cause?

The fact that my certificate is flooded quite this badly provides an opportunity to see what breaks. I've been filing bug reports and profiling problems over the last day.

GnuPG can't even import my certificate from the keyservers any more in the common case. This also has implications for ensuring that revocations are discovered, or new subkeys rotated, as described in that ticket.

In the situations where it's possible to have imported the large certificate, gpg exhibits severe performance problems for even basic operations over the keyring.

This causes Enigmail to become unusable if it encounters a flooded certificate.

It also causes problems for monkeysphere-authentication if it encounters a flooded certificate.

If this spammed certificate is in the GnuPG keyring, just verifying a tag in the git revision control system that was OpenPGP-signed with this certificate is now extremely expensive. git tag -v $tagname, for a tag that is signed with the signing-capable subkey of this certificate, consumes 145 seconds of CPU time (tag signature verification often happens as part of an automated process, and typically takes much less than 1 second of CPU time).

There are probably more! If you find other problems for tools that deal with these sort of flooded certs, please report bugs appropriately.

Dealing with Certificate Flooding

What can we do about this? Months ago, I wrote a draft about abuse-resistant keystores that outlined these problems and what we need from a keyserver.

Use Abuse-Resistant Keystores to Refresh Certificates

If the purpose of refreshing your certificate is to find key material updates and revocations, we need to use an abuse-resistant keyserver or network of keyservers for that.

Fortunately, keys.openpgp.org is just such a service, and it was recently launched. It seems to work! It can distribute revocations and subkey rollovers automatically, even if you don't have a user ID for the certificate. You can do this by putting the following line in ~/.gnupg/dirmngr.conf

keyserver hkps://keys.openpgp.org

and ensure that there is no keyserver entry at all in ~/.gnupg/gpg.conf.
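
With that in place, a routine refresh goes through the constrained keyserver, e.g.:

gpg --refresh-keys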

This keyserver doesn't distribute third-party certifications at all, though. And if the owner of the e-mail address hasn't confirmed with the operators of keys.openpgp.org that they want that keyserver to distribute their certificate, it won't even distribute the certificate's user IDs.

This keyserver also doesn't have the same keys as the SKS pool. It was seeded with the keys on the pool on setup, but is not pulling new updates in nor sending updates back.

Fix GnuPG to Import certificate updates even without User IDs

Unfortunately, GnuPG doesn't cope well with importing minimalist certificates. I've applied patches for this in debian experimental (and they're documented in debian as #930665), but those fixes are not yet adopted upstream, or widely deployed elsewhere.

In-band Certificate Discovery

Refreshing certificates is only part of the role that keyserver networks play. Another is just finding OpenPGP certificates in the first place.

The best way to find a certificate is if someone just gives it to you in the context that it makes sense.

The Autocrypt project is an example of this pattern for e-mail messages. If you can adopt an Autocrypt-capable e-mail client, you should, since that will avoid needing to search for keys at all when dealing with e-mail. Unfortunately, those implementations are also not widely available yet.

Certificate Lookup via WKD or DANE

If you're looking up an OpenPGP certificate by e-mail address, you should try looking it up via some mechanism where the address owner (or at least the domain owner) can publish the record. The current best examples of this are WKD and DANE's OPENPGPKEY DNS records. Modern versions of GnuPG support both of these methods. See the auto-key-locate documentation in gpg(1).
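
As a hedged example (the address is a placeholder), recent GnuPG can try WKD and then DANE when locating a certificate by e-mail address:

gpg --auto-key-locate clear,wkd,dane --locate-keys someone@example.org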

Conclusion

This is a mess, and it's a mess a long time coming. The parts of the OpenPGP ecosystem that rely on the naive assumptions of the SKS keyserver can no longer be relied on, because people are deliberately abusing those keyservers. We need significantly more defensive programming, and a better set of protocols for thinking about how and when to retrieve OpenPGP certificates.

A Personal Postscript

I've spent a significant amount of time over the years trying to push the ecosystem into a more responsible posture with respect to OpenPGP certificates, and have clearly not been as successful at it or as fast as I wanted to be. Complex ecosystems can take time to move.

To have my own certificate directly spammed in this way felt surprisingly personal, as though someone was trying to attack or punish me, specifically. I can't know whether that's actually the case, of course, nor do I really want to. And the fact that Robert J. Hansen's certificate was also spammed makes me feel a little less like a singular or unique target, but I also don't feel particularly proud of feeling relieved that someone else is also being "punished" in addition to me.

But this report wouldn't be complete if I didn't mention that I've felt disheartened and demotivated by this situation. I'm a stubborn person, and I'm trying to make the best of the situation by being constructive about at least documenting the places that are most severely broken by this. But I've also found myself tempted to walk away from this ecosystem entirely because of this incident. I don't want to be too dramatic about this, but whoever did this basically experimented on me (and Robert) directly, and it's a pretty shitty thing to do.

If you're reading this, and you set this off, and you selected me specifically because of my role in the OpenPGP ecosystem, or because I wrote the abuse-resistant-keystore draft, or because I'm part of the Autocrypt project, then you should know that I care about making this stuff work for people. If you'd reached out to me to describe what you were planning to do, we could have done all of the above bug reporting and triage using demonstration certificates, and worked on it together. I would have happily helped. I still might! But because of the way this was done, I'm not feeling particularly happy right now. I hope that someone is, somewhere.

Kees Cook: package hardening asymptote

Thursday 27th of June 2019 10:35:09 PM

Forever ago I set up tooling to generate graphs representing the adoption of various hardening features in Ubuntu packaging. These were very interesting in 2006 when stack protector was making its way into the package archive. Similarly in 2008 and 2009 as FORTIFY_SOURCE and read-only relocations made their way through the archive. It took a while to really catch hold, but finally PIE-by-default started to take off in 2016 through 2018:

Around 2012 when Debian started in earnest to enable hardening features for their archive, I realized this was going to be a long road. I added the above “20 year view” for Ubuntu and then started similarly graphing hardening features in Debian packages too (the blip on PIE here was a tooling glitch, IIRC):

Today I realized that my Ubuntu tooling broke back in January and no one noticed, including me. And really, why should anyone notice? The “near term” (weekly, monthly) graphs have been basically flat for years:

In the long-term view the measurements have a distinctly asymptotic appearance and the graphs are maybe only good for their historical curves now. But then I wonder, what’s next? What new compiler feature adoption could be measured? I think there are still a few good candidates…

How about enabling -fstack-clash-protection? (Only in GCC; Clang still hasn't implemented it.)

Or how about getting serious and using forward-edge Control Flow Integrity? (Clang has -fsanitize=cfi for general purpose function prototype based enforcement, and GCC has the more limited -fvtable-verify for C++ objects.)
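
For anyone wanting to experiment, a hedged sketch of enabling these on a single translation unit (demo.c is hypothetical):

gcc -O2 -fstack-clash-protection -c demo.c                          # stack clash protection (GCC only)
clang -O2 -flto -fvisibility=hidden -fsanitize=cfi demo.c -o demo   # forward-edge CFI needs LTO and hidden visibility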

Where is backward-edge CFI? (Is everyone waiting for CET?)

Does anyone see something meaningful that needs adoption tracking?

© 2019, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

Matrix on Debian blog: June 2019 Matrix on Debian update

Wednesday 26th of June 2019 07:36:00 PM

This is an update on the state of Matrix-related software in Debian.

Synapse

Unfortunately, the recently published Synapse 1.0 didn’t make it into Debian Buster, which is due to be released next week, so if you install 0.99.2 from Buster, you need to update to a newer version which will be available from backports shortly after the release.
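
Once the backport lands, the upgrade from Buster should look roughly like this (a hedged sketch; check the backports instructions for the exact details):

echo 'deb http://deb.debian.org/debian buster-backports main' | sudo tee /etc/apt/sources.list.d/backports.list
sudo apt update
sudo apt install -t buster-backports matrix-synapse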

Originally, 0.99 was meant to be the last version before 1.0, but due to a bunch of issues discovered since then, some of them security-related, a new incompatible room format was introduced in 0.99.5. This means the 0.99.2 currently in Debian Buster is only going to see limited usefulness, since rooms are being upgraded to the new format as 1.0 is being deployed across the network.

For those of you running forever unstable Sid, good news: Synapse 1.0 is now available in unstable! ACME support has not yet been enabled, since it requires a few packages not yet in Debian (they’re currently in the NEW queue). We hope it will be available soon after Buster is released.

Quaternion

Quaternion 0.0.9.4 is being packaged by Hubert Chathi, soon to be uploaded. Hubert has already updated and uploaded libqmatrixclient and olm, which are waiting in NEW.

Circle

There’s been some progress on packaging Circle, a modular IRC client with Matrix support. The backend and IRC support have been available for some time in Debian already, but to be useful, it also needs a user-interfacing front-end. The GTK 2 front-end has just been uploaded to Debian, as have the necessary Perl modules for Matrix support. All of the said packages are now being reviewed in the NEW queue.

Fractal

Early in June, Andrej Shadura looked into packaging Fractal, but found that a few crates in Debian are at versions incompatible with what upstream expects. A current blocker is a pending release of rust-phf.

Get in touch

Come chat to us in #debian-matrix:matrix.org!

Sam Hartman: AH/DAM/DPL Meet Up

Wednesday 26th of June 2019 02:42:47 PM
All the members of the Antiharassment team met with the Debian Account Managers and the DPL in that other Cambridge— the one with proper behaviour, not the one where pounds are weight and not money.

I was nervous. I was not part of decision making earlier this year around code of conduct issues. I was worried that my concerns would be taken as insensitive judgment applied by someone who wasn’t there.

I was worried about whether I would find my values aligned with the others. I care about treating people with respect. I also care about freedom of expression. I value a lot of feminist principles and fighting oppression. Yet I’m happy with my masculinity. I acknowledge my privilege and have some understanding of the inequities in the world. Yet I find some arguments based on privilege problematic and find almost all uses of the phrase “check your privilege” to be dismissive and to deny any attempt at building empathy and understanding.

And Joerg was there. He can be amazingly compassionate and helpful. He can also be gruff at times. He values brevity, which I’m not good at. I was bracing myself for a sharp, brief, gruff rebuke delivered in response to my feedback. I knew there would be something compassionate under such a rebuke, but it might take work to find.

The meeting was hard; we were talking about emotionally intense issues. But it was also wonderful. We made huge progress. This blog is not about reporting that progress.

Like the other Debian meetings I’ve been at, I felt like I was part of something wonderful. We sat around and described the problems we were working on. They were social not technical. We brainstormed solutions, talked about what worked, what didn’t work. We disagreed. We listened to each other. We made progress.

Listening to the discussions on debian-private in December and January, it sounded like DAM and Antiharassment thought they had it all together. I got a note asking if I had any suggestions for how things could have been done better. I kind of felt like they were being polite and asking since I had offered support.

Yet I know now that they were struggling as much as any of us struggle with a thorny RC bug that crosses multiple teams and packages. The account managers tried to invent suspensions in response to what was going on. They wanted to take a stand against bullying and disrespectful behavior. But they didn’t want to drive away contributors; they wanted to find a way to let people know that a real problem required immediate attention. Existing tools were inadequate. So they invented account suspensions. It was buggy. And when your social problem solving tools are buggy, people get hurt.

But I didn’t find myself facing off against that mythical group of people sure in their own actions I had half imagined. I found myself sitting around a table with members of my community, more alike than different. They had insecurities just like I do. They doubted themselves. I’m sure there was some extent to which they felt it was the project against them in December and January. But they also felt some of that pain that raged across debian-private. They didn’t think they had the answers, and they wanted to work with all of us to find them.

I found a group of people who genuinely care about openness and expressing dissenting views. The triggers for action were about how views were expressed, not about the views themselves. The biggest way to get under DAM’s skin, and to get them thinking about whether there is a membership issue, appears to be declining to engage constructively when someone wants to talk to you about a problem. In contrast, even if something has gone horribly wrong, trying to engage constructively is likely to get you the support of everyone around that table in finding a way to meet your needs as well as those of the greater project.

Fear over language didn’t get in our way. People sometimes made mistakes with someone’s preferred pronouns. It wasn’t a big deal: when they noticed, they corrected themselves, acknowledged that they cared about the issue, and went on with life. There was cursing sometimes, and some really strong feelings.

There was even a sex joke. Someone talked about sucking, and someone else intentionally misinterpreted it in a sexual context. But people paid attention to the boundaries of others. I couldn’t have gotten away with telling that joke: I didn’t know the people well enough to know their boundaries. It is not that I’m worried I’ll offend. It is that I actively want to respect the others around me. One way I can do that is to understand their boundaries and respect them.

One joke did cross a line. With a series of looks and semi-verbal communication, we realized that it was probably a bit too far for that group while we were meeting. The person telling the joke acknowledged it, and we moved on.

I was reassured that we all care about the balance that allows Debian to work. We bring the same dedication to creating the universal operating system that we do to building our community. With sufficient practice we’ll be really good at the community work. I’m excited!

Jonathan McDowell: Support your local Hackerspace

Wednesday 26th of June 2019 01:43:09 PM

My first Hackerspace was Noisebridge. It was full of smart and interesting people and I never felt like I belonged, but I had just moved to San Francisco and it had interesting events, like 5MoF, and provided access to basic stuff I hadn’t moved with me, like a soldering iron. While I was never a heavy user of the space I very much appreciated its presence, and availability even to non-members. People were generally welcoming, it was a well stocked space and there was always something going on.

These days my local hackerspace is Farset Labs. I don’t have a need for tooling in the same way, being lucky enough to have space at home and access to all the things I didn’t move to the US, but it’s still a space full of smart and interesting people that has interesting events. And mostly that’s how I make use of the space - I attend events there. It’s one of many venues in Belfast that are part of the regular Meetup scene, and for a while I was just another meetup attendee. A couple of things changed the way I looked at it. Firstly, for whatever reason, I have more of a sense of belonging. It could be because the tech scene in Belfast is small enough that you’ll bump into the same people at wildly different events, but I think that’s true of the tech scene in most places. Secondly, I had the realisation (and this is obvious once you say it, but still) that Farset was the only non-commercial venue hosting these events. It’s predominantly funded by members’ fees; it’s not getting Invest NI or government subsidies (though I believe Weavers Court is a pretty supportive landlord).

So I became a member. It then took me several months after signing up to actually be in the space again, but I feel it’s the right thing to do; without the support of their local tech communities hackerspaces can’t thrive. I’m probably in Farset at most once a month, but I’d miss it if it wasn’t there. Plus I don’t want to see such a valuable resource disappear from the Belfast scene.

And that would be my message to you, dear reader. Support your local hackerspace. Become a member if you can afford it, donate what you can if not, or just show up and help out - in non-commercial spaces, things generally happen because people turn up and volunteer their time.

(This post was prompted by a bunch of Small Charity Week tweets last week singing the praises of Farset, alongside today’s announcement that Farset Labs is expanding - if you use the space and have been considering becoming a member, or even just donating, now is the time to do it.)

More in Tux Machines

KMyMoney 5.0.6 released

The KMyMoney development team today announces the immediate availability of version 5.0.6 of its open source Personal Finance Manager. Another maintenance release is ready: KMyMoney 5.0.6 comes with some important bugfixes. As usual, problems have been reported by our users and the development team fixed some of them in the meantime. The result of this effort is the brand new KMyMoney 5.0.6 release. Despite even more testing we understand that some bugs may have slipped past our best efforts. If you find one of them, please forgive us, and be sure to report it, either to the mailing list or on bugs.kde.org.

Games: Don't Starve Together, Cthulhu Saves the World, EVERSPACE 2 and Stadia

  • Don't Starve Together has a big free update adding in boats and a strange island

    Klei Entertainment have given the gift of new features to their co-op survival game Don't Starve Together, with the Turn of Tides update now available. Taking a little inspiration from the Shipwrecked DLC available for the single-player version Don't Starve, this new free update enables you to build a boat to carry you and other survivors across the sea. Turn of Tides is the first part of a larger update chain they're calling Return of Them, so I'm excited to see what else is going to come to DST.

  • Cthulhu Saves the World has an unofficial Linux port available

    In response to the announcement of a sequel to Cthulhu Saves the World, Ethan Lee AKA flibitijibibo has made an unofficial port of the original and a few other previously Windows-only games. As a quick reminder, FNA is a reimplementation of the proprietary XNA API created by Microsoft, and quite a few games were made with that technology. We’ve gotten several ports thanks to FNA over the years, though Ethan himself has mostly moved on to other projects like working on FAudio and Steam Play.

  • EVERSPACE 2 announced, with more of a focus on exploration and it will release for Linux

    EVERSPACE is probably one of my absolute favourite space shooters from the last few years, so I'm extremely excited to see EVERSPACE 2 announced and confirmed for Linux. For the Linux confirmation, I reached out on Twitter, where the developer replied with "#Linux support scheduled for full release in 2021!".

  • Google reveal more games with the latest Stadia Connect, including Cyberpunk 2077

    Today, Google went back to YouTube to show off an impressive list of games coming to their Stadia game streaming service, which we already know is powered by Debian Linux and Vulkan. As a reminder, Google has said not to think of Stadia as the "Netflix of games", and it's clearly not. Stadia Base requires you to buy all your games as normal, with Stadia Pro ($9.99 monthly) giving you a trickle of free games on top of 4K and surround sound support.