Planet Debian - https://planet.debian.org/

Rhonda D'Vine: Enigma

Friday 18th of January 2019 03:57:00 PM

Just the other day a work colleague asked me what kind of music I listen to, especially when working. It's true, music helps me to focus better and work with more concentration. But it obviously depends on what kind of music it is. And there is one project I come back to every now and then. The name is Enigma. It's not disturbing, good as background music, with soothing and non-intrusive vocals. Here are the songs:

  • Return To Innocence: This is quite likely the song you know them from, and it's also the one that originally got me hooked.
  • Push The Limits: A powerful song. The album version is even a few minutes longer.
  • Voyageur: Love the rhythm and theme in this song.

Like always, enjoy.


Keith Packard: newt-duino

Friday 18th of January 2019 05:19:07 AM
Newt-Duino: Newt on an Arduino

Here's our target system. The venerable Arduino Duemilanove. Designed in 2009, this board comes with the Atmel ATmega328 system on chip, and not a lot else. This 8-bit microcontroller sports 32kB of flash, 2kB of RAM and another 1kB of EEPROM. Squeezing even a tiny version of Python onto this device took some doing.

How Small is Newt Anyways?

From my other postings about Newt, you can see that the amount of memory and the set of dependencies needed to run Newt have shrunk over time. Right now, a complete Newt system fits in about 30kB of text on both Cortex M and Atmel processors. That includes the parser and bytecode interpreter, plus the garbage collector for memory management.

Bare Metal Arduino

The first choice to make is whether to take some existing OS and get that running, or to just wing it and run right on the metal. To save space, I decided (at least for now), to skip the OS and implement whatever hardware support I need myself.

Newt has limited dependencies; it doesn't use malloc, and the only OS interface the base language uses is getchar and putchar to the console. That means that a complete Newt system need not be much larger than the core Newt language and some simple I/O routines.

For the basic Arduino port, I included some simple serial I/O routines for the console to bring the language up. Once that was running, I started adding some simple I/O functions to talk to the pins of the device.
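For reference, a minimal polled-UART putchar/getchar for the ATmega328P looks something like this (a sketch, not the exact console code in Newt; register names are from the AVR datasheet, and the function names and baud rate are just illustrative):

#include <avr/io.h>

#define F_CPU    16000000UL                  /* Duemilanove clock */
#define BAUD     9600UL
#define UBRR_VAL (F_CPU / 16 / BAUD - 1)

static void serial_init(void)
{
    UBRR0H = (uint8_t) (UBRR_VAL >> 8);      /* baud rate divisor */
    UBRR0L = (uint8_t) UBRR_VAL;
    UCSR0B = (1 << RXEN0) | (1 << TXEN0);    /* enable receiver and transmitter */
    UCSR0C = (1 << UCSZ01) | (1 << UCSZ00);  /* 8 data bits, 1 stop bit */
}

static int serial_putchar(char c)
{
    while (!(UCSR0A & (1 << UDRE0)))         /* wait for the data register to empty */
        ;
    UDR0 = c;
    return c;
}

static int serial_getchar(void)
{
    while (!(UCSR0A & (1 << RXC0)))          /* wait for a received byte */
        ;
    return UDR0;
}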

Pushing data out of .data

The ATmega328P, like most (all?) of the 8-bit Atmel processors, cannot directly access flash memory as data. Instead, you are required to use special library functions. Whoever came up with this scheme clearly loved the original 8051 design, because it worked the same way.

Modern 8051 clones (like the CC1111 we used to use for Altus Metrum stuff) fix this bug by creating a flat 16-bit address space for both flash and RAM so that 16-bit pointers can see all of memory. For older systems, SDCC actually creates magic pointers and has run-time code that performs the relevant magic to fetch data from anywhere in the system. Sadly, no-one has bothered to do this with avr-gcc.

This means that any data accesses done in the normal way can only touch RAM. To make this work, avr-gcc places all data, read-write and read-only, in RAM. Newt has some pretty big read-only data bits, including parse tables and error messages. With only 2kB of RAM but 32kB of flash, it's pretty important to avoid filling RAM with read-only data.

avr-gcc has a whole 'progmem' mechanism which lets you make data live only in flash by decorating the declaration with PROGMEM:

const char PROGMEM error_message[] = "This is in flash, not RAM";

This is pretty easy to manage; the only problem is that attempts to access this data from your program will fail unless you use the pgm_read_word and pgm_read_byte functions:

const char *m = error_message;
char c;

while ((c = (char) pgm_read_byte(m++)))
    putchar(c);

avr-libc includes some functions, often indicated with a '_P' suffix, which take pointers to flash instead of pointers to RAM. So, it's possible to place all read-only data in flash and not in RAM, it's just a pain, especially when using portable code.

So, I hacked up the newt code to add macros for the parse tables, one to add the necessary decoration and three others to fetch elements of the parse table by address.

#define PARSE_TABLE_DECLARATION(t)      PROGMEM t
#define PARSE_TABLE_FETCH_KEY(a)        ((parse_key_t) pgm_read_word(a))
#define PARSE_TABLE_FETCH_TOKEN(a)      ((token_t) pgm_read_byte(a))
#define PARSE_TABLE_FETCH_PRODUCTION(a) ((uint8_t) pgm_read_byte(a))

With suitable hacks in Newt to use these macros, I could finally build newt for the Arduino.

Automatically Getting Strings out of RAM

A lot of the strings in Newt are printf format strings passed directly to printf-ish functions. I created some wrapper macros to automatically move the format strings out of RAM and call the functions expecting the strings to be in flash. Here's the wrapper I wrote for fprintf:

#define fprintf(file, fmt, args...) do {                 \
        static const char PROGMEM __fmt__[] = (fmt);     \
        fprintf_P(file, __fmt__, ## args);               \
    } while(0)

This assumes that all calls to fprintf will take constant strings, which is true in Newt, but not true generally. I would love to automatically handle those cases using __builtin_constant_p, but gcc isn't as helpful as it could be; you can't declare a string to be initialized from a variable value, even if that code will never be executed:

#define fprintf(file, fmt, args...) do {                     \
        if (__builtin_constant_p(fmt)) {                     \
            static const char PROGMEM __fmt__[] = (fmt);     \
            fprintf_P(file, __fmt__, ## args);               \
        } else {                                             \
            fprintf(file, fmt, ## args);                     \
        }                                                    \
    } while(0)

This doesn't compile when 'fmt' isn't a constant string because the initialization of __fmt__ from fmt, even though never executed, isn't legal. Suggestions on how to make this work would be most welcome. I only need this for sprintf, so for now I've created a 'sprintf_const' macro which does the above trick, and I use it for all sprintf calls with a constant string format.
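Here's roughly what that looks like (a sketch along the lines of the fprintf wrapper above; the exact macro in Newt may differ in detail):

/* Only valid when fmt is a string literal, which is the case in Newt. */
#define sprintf_const(dst, fmt, args...) do {                \
        static const char PROGMEM __fmt__[] = (fmt);         \
        sprintf_P(dst, __fmt__, ## args);                    \
    } while(0)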

With this hack added, I saved hundreds of bytes of RAM, making enough space for a (wait for it) 900 byte heap within the interpreter. That's probably enough to do some simple robotics stuff, and with luck sufficient for my robotics students.

It Works!

> def fact(x):
>  r = 1
>  for y in range(2,x):
>   r *= y
>  return r
>
> fact(10)
362880
> fact(20) * fact(10)
4.41426e+22
>

This example was actually run on the Arduino pictured above.

Future Work

All I've got running at this point is the basic language and a couple of test primitives to control the LED on D13. Here are some things I'd like to add.

Using EEPROM for Program Storage

To make the system actually useful as a stand-alone robot, we need some place to store the application. Fortunately, the ATmega328P has 1kB of EEPROM memory. I plan on using this for program storage and will parse the contents at start up time.
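avr-libc ships EEPROM helpers in <avr/eeprom.h>, so the startup code can feed the stored program to the parser one byte at a time without needing a RAM buffer. A sketch (the function is illustrative, not the actual Newt entry point):

#include <avr/eeprom.h>
#include <stdint.h>

#define EEPROM_SIZE 1024                    /* the ATmega328P has 1kB of EEPROM */

static const uint8_t *eeprom_pos = (const uint8_t *) 0;

/* getchar-style reader for the stored program; returns -1 at end. */
static int program_getchar(void)
{
    uint8_t c;

    if (eeprom_pos >= (const uint8_t *) EEPROM_SIZE)
        return -1;
    c = eeprom_read_byte(eeprom_pos++);
    return (c == 0xff) ? -1 : c;            /* erased EEPROM reads back as 0xff */
}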

Simple I/O API

Having taught students using Lego Logo, I've learned that the fewer keystrokes needed to get the first light blinking, the better the students will do. Seeking to replicate that experience, I'm thinking that the I/O API should avoid requiring any pin mode settings, and should have functions that directly manipulate any devices on the board. The Lego Logo API also avoids functions with more than one argument, preferring to just save state within the system. So, for now, I plan to replicate that API loosely:

def onfor(n):
    talkto(LED)
    on()
    time.sleep(n)
    off()

In the Arduino environment, a motor controller is managed with two separate bits, requiring two separate numbers and lots of initialization:

int motor_speed = 6;
int motor_dir = 5;

void setup() {
    pinMode(motor_dir, OUTPUT);
}

void motor(int speed, int dir) {
    digitalWrite(motor_dir, dir);
    analogWrite(motor_speed, speed);
}

By creating an API which knows how to talk to the motor controller as a unified device, we get:

def motor(speed, dir):
    talkto(MOTOR_1)
    setdir(dir)
    setpower(speed)

Plans

I'll see about getting the EEPROM bits working, then the I/O API running. At that point, you should be able to do anything with this that you can with C, aside from performance.

Then some kind of host-side interface, probably written in Java to be portable to whatever machine the user has, and I think the system should be ready to experiment with.

Links

The source code is available from my server at https://keithp.com/cgit/newt.git/, and also at github https://github.com/keith-packard/newt. It is licensed under the GPLv2 (or later version).

Daniel Kahn Gillmor: New OpenPGP certificate for dkg, 2019

Friday 18th of January 2019 04:43:11 AM

My old OpenPGP certificate will be 12 years old later this year. I'm transitioning to a new OpenPGP certificate.

You might know my old OpenPGP certificate as:

pub   rsa4096 2007-06-02 [SC] [expires: 2019-06-29]
      0EE5BE979282D80B9F7540F1CCD2ED94D21739E9
uid           Daniel Kahn Gillmor <dkg@fifthhorseman.net>
uid           Daniel Kahn Gillmor <dkg@debian.org>

My new OpenPGP certificate is:

pub   ed25519 2019-01-17 [C] [expires: 2021-01-16]
      723E343AC00331F03473E6837BE5A11FA37E8721
uid           Daniel Kahn Gillmor
uid           dkg@debian.org
uid           dkg@fifthhorseman.net

If you've certified my old certificate, I'd appreciate your certifying my new one. Please do confirm by contacting me via whatever channels you think are most appropriate (including in-person if you want to share food or drink with me!) before you re-certify, of course.

I've published the new certificate to the SKS keyserver network, as well as to my personal website -- you can fetch it like this:

wget -O- https://dkg.fifthhorseman.net/dkg-openpgp.key | gpg --import

A copy of this transition statement signed by both the old and new certificates is available on my website, and you can also find further explanation about technical details, choices, and rationale on my blog.

Technical details

I've made a few decisions differently about this certificate:

Ed25519 and Curve25519 for Public Key Material

I've moved from 4096-bit RSA public keys to the Bernstein elliptic curve 25519 for all my public key material (EdDSA for signing, certification, and authentication, and Curve25519 for encryption). While 4096-bit RSA is likely to be marginally stronger cryptographically than curve 25519, 25519 still appears to be significantly stronger than any cryptanalytic attack known to the public.

Additionally, elliptic curve keys and the signatures associated with them are tiny compared to 4096-bit RSA. I certified my new cert with my old one, and well over half of the new certificate is just certifications from the old key because they are so large.

This size advantage makes it easier for me to ship the public key material (and signatures from it) in places that would otherwise be more awkward. See the discussion below.

Separated User IDs

The other thing you're likely to notice if you're considering certifying my key is that my User IDs are now split out. Rather than two User IDs:

  • Daniel Kahn Gillmor <dkg@fifthhorseman.net>
  • Daniel Kahn Gillmor <dkg@debian.org>

I now have the name separate from the e-mail addresses, making three User IDs:

  • Daniel Kahn Gillmor
  • dkg@fifthhorseman.net
  • dkg@debian.org

There are a couple reasons i've done this.

One reason is to simplify the certification process. Traditional OpenPGP User ID certification is an all-or-nothing process: the certifier is asserting that both the name and e-mail address belong to the identified party. But this can be tough to reason about. Maybe you know my name, but not my e-mail address. Or maybe you know me over e-mail, but aren't really sure what my "real" name is (i'll leave questions about what counts as a real name to a more philosophical blog post). You ought to be able to certify them independently. Now you can, since it's possible to certify one User ID independently of another.
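As a concrete illustration using GnuPG's interactive key editor (with my new fingerprint from above as the example), certifying a single User ID looks roughly like this:

$ gpg --edit-key 723E343AC00331F03473E6837BE5A11FA37E8721
gpg> uid 1
gpg> sign
gpg> save

The uid command selects just the User ID you are willing to vouch for, and the resulting certification then applies only to that User ID.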

Another reason is because i plan to use this certificate for e-mail, among other purposes. In e-mail systems, the human name is a confusing distraction, as the real underlying correspondent is the e-mail address. E-mail programs should definitely allow their users to couple a memorable name with an e-mail address, but it should be more like a petname. The bundling of a human "real" name with the e-mail address by the User ID itself just provides more points of confusion for the mail client.

If the user communicates with a certain person by e-mail address, the key should be bound to the e-mail protocol address on its own. Then the user themselves can decide what other monikers they want to use for the person; the User ID shouldn't force them to look at a "real" name just because it's been bundled together.

Split out ACLU identity

Note that my old certificate included some additional identities, including job-specific e-mail addresses. I've split out my job-specific cryptographic credentials to a different OpenPGP key entirely. If you want to mail me at dkg@aclu.org, you can use key 888E6BEAC41959269EAA177F138F5AB68615C560 (which is also published on my work bio page).

This is in part because the folks who communicate with me at my ACLU address are more likely to have old or poorly-maintained e-mail systems than other people i communicate with, and they might not be able to handle curve 25519. So the ACLU key is using 3072-bit RSA, which is universally supported by any plausible OpenPGP implementation.

This way i can experiment with being more forward-looking in my free software and engineering community work, and shake out any bugs that i might find there, before cutting over the e-mails that come in from more legal- and policy-focused colleagues.

Isolated Subkey Capabilities

In my new certificate, the primary key is designated certification-only. There are three subkeys, one each for authentication, encryption, and signing. The primary key also has a longer expiration time (2 years as of this writing), while the subkeys have 1 year expiration dates.

Isolating this functionality helps a little bit with security (i can take the certification key entirely offline while still being able to sign non-identity data), and it also offers a pathway toward having a more robust subkey rotation schedule. As i build out my tooling for subkey rotation, i'll probably make a few more blog posts about that.

Autocrypt-friendly

Finally, several of these changes are related to the Autocrypt project, a really great collaboration of a group of mail user agent developers, designers, UX experts, trainers, and users, who are providing guidance to make encrypted e-mail something that normal humans can use without having to think too much about it.

Autocrypt treats the OpenPGP certificate User IDs as merely decorative, but its recommended form of the User ID for an OpenPGP certificate is just the bare e-mail address. With the User IDs split out as described above, i can produce a minimized OpenPGP certificate that's Autocrypt-friendly, including only the User ID that matches my sending e-mail address precisely, and leaving out the rest.

I'm proud to be associated with the Autocrypt project, and have been helping to shepherd some of the Autocrypt functionality into different clients (my work on my own MUA of choice, notmuch, is currently stalled, but i hope to pick it back up again soon). Having an OpenPGP certificate that works well with Autocrypt, and that i can stuff into messages even from clients that aren't fully Autocrypt-compliant yet, is useful to me for getting things tested.

Documenting workflow vs. tooling

Some people may want to know "how did you make your OpenPGP cert like this?" For those folks, i'm sorry but this is not a step-by-step technical howto. I've read far too many "One True Way To Set Up Your OpenPGP Certificate" blog posts that haven't aged well, and i'm not confident enough to tell people to run the weird arbitrary commands that i ran to get things working this way.

Furthermore, i don't want people to have to run those commands.

If i think there are sensible ways to set up OpenPGP certificates, i want those patterns built into standard tooling for normal people to use, without a lot of command-line hackery.

So if i'm going to publish a "how to", it would be in the form of software that i think can be sensibly maintained and provides a sane user interface for normal humans. I haven't written that tooling yet, but i need to change keys first, so for now you just get this blog post in English. But feel free to tell me what you think i could do better!

Dirk Eddelbuettel: RcppArmadillo 0.9.200.7.0

Friday 18th of January 2019 02:00:00 AM

A new RcppArmadillo bugfix release arrived at CRAN today. Version 0.9.200.7.0 is another minor bugfix release, based on the new Armadillo bugfix release 9.200.7 from earlier this week. I also just uploaded the Debian version, and Uwe’s systems have already created the CRAN Windows binary.

Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language, and is widely used by (currently) 559 other packages on CRAN.

This release just brings minor upstream bug fixes, see below for details (and we also include the updated entry for the November bugfix release).

Changes in RcppArmadillo version 0.9.200.7.0 (2019-01-17)
  • Upgraded to Armadillo release 9.200.7 (Carpe Noctem)

  • Fixes in 9.200.7 compared to 9.200.5:

    • handling complex compound expressions by trace()

    • handling .rows() and .cols() by the Cube class

Changes in RcppArmadillo version 0.9.200.5.0 (2018-11-09)
  • Upgraded to Armadillo release 9.200.5 (Carpe Noctem)

  • Changes in this release

    • linking issue when using fixed size matrices and vectors

    • faster handling of common cases by princomp()

Courtesy of CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Anastasia Tsikoza: Week 5: Resolving the blocker

Thursday 17th of January 2019 11:42:56 PM
Photo by Pascal Debrunner on Unsplash

Hi, I am Anastasia, an Outreachy intern working on the project “Improve integration of Debian derivatives with Debian infrastructure and community”. Here are my other posts that haven’t appeared in the feed on time:

This post is about my work on the subscription feature for Debian derivatives, the first of the two main issues to be resolved within my internship. And this week’s topic from the organizers is “Think About Your Audience”, especially newcomers to the community and future Outreachy applicants. So I’ll try to write about the feature keeping the most important details, but taking into account that the readers might be unfamiliar with some terms and concepts.

What problem I was trying to solve

As I wrote here, around the middle of December the Debian derivatives census was re-enabled. It means that the census scripts started running on a scheduled basis. Every midnight they scanned through the derivatives’ wiki pages, retrieved all the info needed, processed it and formed files and directories for future use.

At each stage different kinds of errors could occur. Usually that means the maintainers should be contacted so that they update wiki pages of their derivatives or fix something in their apt repositories. Checking error logs, identifying new issues, notifying maintainers about them and keeping in mind who has already been notified and who has not - all this always was a significant amount of work which should have been eventually automated (at least partially).

What exactly I needed to do

By design, the subscription system had nothing in common with the daily spam mailing. Notifications should have only been sent when new issues appeared, which doesn’t happen too often.

It was decided to create several small logs for each derivative and then merge them into one. The resulting log should have also had comments, like which issues are new and which ones have been fixed since the last log was sent.

So my task could be divided into three parts:

  • create a way for folks to subscribe
  • write a script joining the small logs into a separate log for each particular derivative
  • write a script sending the log to subscribers (if it has to be sent)

The directory with the small logs was supposed to be kept until the next run to be compared with the new small logs. If some new issues appeared, the combined log would be sent to subscribers; otherwise it would simply be stored in the derivative’s directory.

Overcoming difficulties

I ran into my first difficulties almost immediately after my changes were deployed. I forgot about the parallel execution of make: in the project’s crontab file we use make --jobs=3, which means three parallel Make jobs are creating files and directories concurrently. So everything can turn to chaos if you aren’t extra careful with the makefiles.

Every time I needed to fix something, Paul had to run a cron job and create most of the files from scratch. For nearly two days we pulled together, coordinating our actions through IRC. At the end of the first day the main problem seemed to be solved and I let myself rejoice… and then things went wrong. I was at my wits’ end and felt extremely tired, so I went off to bed. The next day I found the solution seconds after waking up!

Cool things

When finally everything went without a hitch, it was a huge relief to me. And I am happy that some folks have already signed up for the error notifications, and I hope they consider the feature useful.

Also I was truly pleased to know that Paul had first learned about makefiles’ order-only prerequisites from my code! That was quite a surprise :)

More details

If you are curious, in the Projects section a more detailed report will appear soon.

Jonathan Dowland: multi-coloured Fedoras

Thursday 17th of January 2019 04:57:20 PM

My blog post about my Red Hat shell prompt proved very popular, although some people tried my instructions and couldn't get them to work, so here's some further information.

The unicode code-point I am using is "

Julien Danjou: Serious Python released!

Thursday 17th of January 2019 04:56:09 PM

Today I'm glad to announce that my new book, Serious Python, has been released.

However, you wonder… what is Serious Python?

Well, Serious Python is the new name of The Hacker's Guide to Python — the first book I published. Serious Python is the 4th update of that book — but with a brand new name and a new editor!

For more than a year, I've been working with the editor No Starch Press to enhance this book and bring it to the next level! I'm very proud of what we achieved, and working with a whole team on this book has been a fantastic experience.

The content has been updated to be ready for 2019: pytest is now a de-facto standard for testing, so I had to write about it. On the other hand, Python 2 support was less of a focus, and I removed many mentions of Python 2 altogether. Some chapters have been reorganized or regrouped, and others were enhanced with new content!

The good news: you can get this new edition of the book with a 15% discount for the next 24 hours using the coupon code SERIOUSPYTHONLAUNCH on the book page.

The book is also released as part of the No Starch collection. They are also in charge of distributing the paperback copy of the book. If you want a version of the book that you can touch and hold in your arms, look for it in the No Starch shop, on Amazon or in your favorite book shop!

No Starch version of Serious Python cover

Kunal Mehta: Eliminating PHP polyfills

Thursday 17th of January 2019 07:50:13 AM

The Symfony project has recently created a set of pure-PHP polyfills for both PHP extensions and newer language features. It allows developers to add requirements upon those functions or language additions without increasing the system requirements upon end users. For the most part, I think this is a good thing, and valuable to have. We've done similar things inside MediaWiki as well for CDB support, Memcached, and internationalization, just to name a few.

But the downside is that on platforms where it is possible to install the missing PHP extensions or upgrade PHP itself, we're shipping empty code. MediaWiki requires both the ctype and mbstring PHP extensions, and our servers have those, so there's no use in deploying polyfills for them, because they'll never be used. In September, Reedy and I replaced the polyfills with "unpolyfills" that simply provide the correct package, so the polyfill is skipped by composer. That removed about 3,700 lines of code from what we're committing, reviewing, and deploying - a big win.
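The basic mechanism is composer's replace field: a stub "unpolyfill" package requires the native extension and declares that it replaces the polyfill, so composer never pulls the polyfill into the install set. A rough sketch (the package name is illustrative, not the actual one we published):

{
    "name": "example/unpolyfill-mbstring",
    "require": {
        "ext-mbstring": "*"
    },
    "replace": {
        "symfony/polyfill-mbstring": "*"
    }
}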

Last month I came across the same problem in Debian: #911832. The php-symfony-polyfill package was failing tests on the new PHP 7.3, and was up for removal from the next stable release (Buster). On its own, the package isn't too important, but it is a dependency of other important packages. In Debian, the polyfills are even more useless, since instead of depending upon e.g. php-symfony-polyfill-mbstring, a package could simply depend upon the native PHP extension, php-mbstring. In fact, there was already a system designed to implement those kinds of overrides. After looking at the dependencies, I uploaded a fixed version of php-webmozart-assert, filed bugs for two other packages, and provided patches for symfony. I also made a patch to the default overrides in pkg-php-tools, so that any future package that depends upon a symfony polyfill should now automatically depend upon the native PHP extension if necessary.

Ideally composer would support alternative requirements like ext-mbstring | php-symfony-polyfill-mbstring, but that's been declined by their developers. There's another issue that is somewhat related, but doesn't do much to reduce the installation of polyfills when unnecessary.

Reproducible builds folks: Reproducible Builds: Weekly report #194

Wednesday 16th of January 2019 04:39:41 PM

Here’s what happened in the Reproducible Builds effort between Sunday January 6 and Saturday January 12 2019:

Packages reviewed and fixed, and bugs filed Website development

There were a number of updates to the reproducible-builds.org project website this week, including:

Test framework development

There were a number of updates to our Jenkins-based testing framework that powers tests.reproducible-builds.org this week, including:

  • Holger Levsen:
    • Arch Linux-specific changes:
      • Use Debian’s sed, untar and others with sudo as they are not available in the bootstrap.tar.gz file ([], [], [], [], etc.).
      • Fix incorrect sudoers(5) regex. []
      • Only move old schroot away if it exists. []
      • Add and drop debug code, cleanup cruft, exit on cleanup(), etc. ([], [])
      • cleanup() is only called on errors, thus exit 1. []
    • Debian-specific changes:
      • Revert “Support arbitrary package filters when generating deb822 output” ([]) and re-open the corresponding merge request
      • Show the total number of packages in a package set. []
    • Misc/generic changes:
      • Node maintenance. ([], [], [], etc.)
  • Mattia Rizzolo:
    • Fix the NODE_NAME value in case it’s not a full-qualified domain name. []
    • Node maintenance. ([], etc.)
  • Vagrant Cascadian:

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Iain R. Learmonth: A Solution for Authoritative DNS

Wednesday 16th of January 2019 04:30:00 PM

I’ve been thinking about improving my DNS setup. So many things will use e-mail verification as a backup authentication measure that it is starting to show as a real weak point. An Ars Technica article earlier this year talked about how “[f]ederal authorities and private researchers are alerting companies to a wave of domain hijacking attacks that use relatively novel techniques to compromise targets at an almost unprecedented scale.”

The two attacks that are mentioned in that article, changing the nameserver and changing records, are something that DNSSEC could protect against. Records wouldn’t even have to be changed on my chosen nameservers; a BGP hijack could simply hand queries for my domain to another server, which could then reply with whatever it chooses.

After thinking for a while, my requirements come down to:

  • Offline DNSSEC signing
  • Support for storing signing keys on an HSM (YubiKey)
  • Version control
  • No requirement to run any Internet-facing infrastructure myself

After some searching I discovered GooDNS, a “good” DNS hosting provider. They have an interesting setup that looks to fit all of my requirements. If you’re coming from a more traditional arrangement with either a self-hosted name server or a web panel then this might seem weird, but if you’ve done a little “infrastructure as code” then maybe it is not so weird.

The initial setup must be completed via the web interface. You’ll need a hardware security module (HSM) for providing a time-based one-time password (TOTP), an SSH key, and optionally a GPG key as part of the registration. You will need the TOTP to make any changes via the web interface, the SSH key will be used to interact with the git service, and the GPG key will be used for any email correspondence, including recovery in the case that you lose your TOTP HSM or password.

You must validate your domain before it will be served from the GooDNS servers. There are two options for this: one for new domains, and one “zero-downtime” option that is more complex but may be desirable if your domain is already live. For new domains you can simply update your nameservers at the registrar to validate your domain; for existing domains you can add a TXT record to the current DNS setup, which will be validated by GooDNS to allow the domain to be configured fully before switching the nameservers. Once the domain is validated, you will not need to use the web interface again unless updating contact, security or billing details.

All the DNS configuration is managed in a single git repository. There are three branches in the repository: “master”, “staging” and “production”. These are just the default branches; you can create other branches if you like. The only two that GooDNS will use are the “staging” and “production” branches.

GooDNS provides a script that you can install at /usr/local/bin/git-dns (or elsewhere in your path) which provides some simple helper commands for working with the git repository. The script is extremely readable and so it’s easy enough to understand and write your own scripts if you find yourself needing something a little different.

When you clone your git repository you’ll find one text file on the master branch for each of your configured zones:

irl@computer$ git clone git@goodns.net:irl.git
Cloning into 'irl1'...
remote: Enumerating objects: 3, done.
remote: Total 3 (delta 0), reused 0 (delta 0), pack-reused 3
Receiving objects: 100% (3/3), 22.55 KiB | 11.28 MiB/s, done.
Resolving deltas: 100% (1/1), done.
irl@computer$ ls
irl1.net  learmonth.me
irl@computer$ cat irl1.net
@ IN SOA ns1.irl1.net. hostmaster.irl1.net. (
    _SERIAL_
    28800
    7200
    864000
    86400 )
@ IN NS ns1.goodns.net.
@ IN NS ns2.goodns.net.
@ IN NS ns3.goodns.net.

In the backend GooDNS is using OpenBSD 6.4 servers with nsd(8). This means that the zone files use the same syntax. If you don’t know what this means then that is fine as the documentation has loads of examples in it that should help you to configure all the record types you might need. If a record type is not yet supported by nsd(8), you can always specify the record manually and it will work just fine.

One thing you might note here is that the string _SERIAL_ appears instead of a serial number. The git-dns script will replace this with a serial number when you are ready to publish the zone file.

I’ll assume that you already have your GPG key and SSH key set up; now let’s set up the DNSSEC signing key. For this, we will use one of the four slots of the YubiKey. You could use either 9a or 9e, but here I’ll use 9e, as 9a is already used for my SSH key.

To set up the token, we will need the yubico-piv-tool. Be extremely careful when following these steps especially if you are using a production device. Try to understand the commands before pasting them into the terminal.

First, make sure the slot is empty. You should get an output similar to the following one:

irl@computer$ yubico-piv-tool -s 9e -a status
CHUID: ...
CCC: No data available
PIN tries left: 10

Now we will use git-dns to create our key signing key (KSK):

irl@computer$ git dns kskinit --yubikey-neo
Successfully generated a new private key.
Successfully generated a new self signed certificate.
Found YubiKey NEO.
Slots available:
  (1) 9a - Not empty
  (2) 9e - Empty
Which slot to use for DNSSEC signing key? 2
Successfully imported a new certificate.
CHUID: ...
CCC: No data available
Slot 9e:
  Algorithm: ECCP256
  Subject DN: CN=irl1.net
  Issuer DN: CN=irl1.net
  Fingerprint: 97dda8a441a401102328ab6ed4483f08bc3b4e4c91abee8a6e144a6bb07a674c
  Not Before: Feb 01 13:10:10 2019 GMT
  Not After: Feb 01 13:10:10 2021 GMT
PIN tries left: 10

We can see the public key for this new KSK:

irl@computer$ git dns pubkeys
irl1.net. DNSKEY 256 3 13 UgGYfiNse1qT4GIojG0VGcHByLWqByiafQ8Yt7/Eit2hCPYYcyiE+TX8HP8al/SzCnaA8nOpAkqFgPCI26ydqw==

Next we will create a zone signing key (ZSK). These are stored in the keys/ folder of your git repository but are not version controlled. You can optionally encrypt these with GnuPG (and so requiring the YubiKey to sign zones) but I’ve not done that here. Operations using slot 9e do not require the PIN so leaving the YubiKey connected to the computer is pretty much the same as leaving the KSK on the disk. Maybe a future YubiKey will not have this restriction or will add more slots.

irl@computer$ git dns zskinit
Created ./keys/
Successfully generated a new private key.
irl@computer$ git dns pubkeys
irl1.net. DNSKEY 256 3 13 UgGYfiNse1qT4GIojG0VGcHByLWqByiafQ8Yt7/Eit2hCPYYcyiE+TX8HP8al/SzCnaA8nOpAkqFgPCI26ydqw=
irl1.net. DNSKEY 257 3 13 kS7DoH7fxDsuH8o1vkvNkRcMRfTbhLqAZdaT2SRdxjRwZSCThxxpZ3S750anoPHV048FFpDrS8Jof08D2Gqj9w==

Now we can go to our domain registrar and add DS records to the registry for our domain using the public keys. First though, we should actually sign the zone. To create a signed zone:

irl@computer$ git dns signall
Signing irl1.net...
Signing learmonth.me...
[production 51da0f0] Signed all zone files at 2019-02-01 13:28:02
 2 files changed, 6 insertions(+), 0 deletions(-)

You’ll notice that all the zones were signed although we only created one set of keys. Setups where you have one shared KSK and an individual ZSK per zone are possible, but they provide questionable additional security. Reducing the number of keys required for DNSSEC helps to keep them all under control.

To make these changes live, all that is needed is to push the production branch. To keep things tidy, and to keep a backup of your sources, you can push the master branch too. git-dns provides a helper function for this:

irl@computer$ git dns push
Pushing master...done
Pushing production...done
Pushing staging...done

If I now edit a zone file on the master branch and want to try out the zone before making it live, all I need to do is:

irl@computer$ git dns signall --staging
Signing irl1.net...
Signing learmonth.me...
[staging 72ea1fc] Signed all zone files at 2019-02-01 13:30:12
 2 files changed, 8 insertions(+), 0 deletions(-)
irl@computer$ git dns push
Pushing master...done
Pushing production...done
Pushing staging...done

If I now use the staging resolver or look up records at irl1.net.staging.goodns.net then I’ll see the zone live. The staging resolver is a really cool idea for development and testing. They give you a couple of unique IPv6 addresses, just for you, that will serve your staging zone files and act as a resolver for everything else. You just have to plug these into your staging environment and everything is ready to go. In the future they are planning to allow you to have more than one staging environment too.

All that is left to do is ensure that your zone signatures stay fresh. This is easy to achieve with a cron job:

0 3 * * * /usr/local/bin/git-dns cron --repository=/srv/dns/irl1.net --quiet

I monitor the records independently and disable the mail output from this command but you might want to drop the --quiet if you’d like to get mails from cron on errors/warnings.

On the GooDNS blog they talk about adding an Onion service for the git server in the future so that they do not have logs that could show the location of your DNSSEC signing keys, which allows you to have even greater protection. They already support performing the git push via Tor but the addition of the Onion service would make it faster and more reliable.

Unfortunately, GooDNS is entirely fictional and you can’t actually manage your DNS in this way, but wouldn’t it be nice? This post has drawn inspiration from the following:

Daniel Silverstone: Plans for 2019

Wednesday 16th of January 2019 11:29:51 AM

At the end of last year I made eight statements about what I wanted to do throughout 2019. I tried to split them semi-evenly between being a better adult human and being a better software community contributor. I have had a few weeks now to settle my thoughts around what they mean and I'd like to take some time to go through the eight and discuss them a little more.

I've been told that doing this reduces the chance of me sticking to the points because simply announcing the points and receiving any kind of positive feedback may stunt my desire to actually achieve the goals. I'm not sure about that though, and I really want my wider friends community to help keep me honest about them all. I've set a reminder for April 7th to review the situation and hopefully be able to report back positively on my progress.

My list of goals was stated in a pair of tweets:

  1. Continue to lose weight and get fit. I'd like to reach 80kg during the year if I can
  2. Begin a couch to 5k and give it my very best
  3. Focus my software work on finishing projects I have already started
  4. Where I join in other projects be a net benefit
  5. Give back to the @rustlang community because I've gained so much from them already
  6. Be better at tidying up
  7. Save up lots of money for renovations
  8. Go on a proper holiday
Weight and fitness

Some of you may be aware already, others may not, that I have been making an effort to shed some of my excess weight over the past six or seven months. I "started" in May of 2018 weighing approximately 141kg and I am, as of this morning, weighing approximately 101kg. Essentially that's a semi-steady rate of 5kg per month, though it has, obviously, been slowing down of late.

In theory, given my height of roughly 178cm I should aim for a weight of around 70kg. I am trying to improve my fitness and to build some muscle and as such I'm aiming long-term for roughly 75kg. My goal for this year is to continue my improvement and to reach and maintain 80kg or better. I think this will make a significant difference to my health and my general wellbeing. I'm already sleeping better on average, and I feel like I have more energy over all. I bought a Garmin Vivoactive 3 and have been using that to track my general health and activity. My resting heart rate has gone down a few BPM over the past six months, and I can see my general improvement in sleep etc over that time too. I bought a Garmin Index Scale to track my weight and body composition, and that is also showing me good values as well as encouraging me to weigh myself every day and to learn how to interpret the results.

I've been managing my weight loss partly by means of a 16:8 intermittent fasting protocol, combined with a steady calorie deficit of around 1000kcal/day. While this sounds pretty drastic, I was horrendously overweight and this was critical to getting my weight to shift quickly. I expect I'll reduce that deficit over the course of the year, hence I'm only aiming for a 20kg drop over a year rather than trying to maintain what could in theory be a drop of 30kg or more.

In addition to the IF/deficit, I have been more active. I bought an e-bike and slowly got going on that over the summer, along with learning to enjoy walks around my local parks and scrubland. Since the weather got bad enough that I didn't want to be out of doors, I joined a gym, where I have been going regularly since September. Since the end of October I have been doing a very basic strength training routine and my shoulders do seem to be improving for it. I can still barely do a pushup but it's less embarrassingly awful than it was.

Given my efforts toward my fitness, my intention this year is to extend that to include a Couch to 5k type effort. Amusingly, Garmin offer a self adjusting "coach" called Garmin Coach which I will likely use to guide me through the process. While I'm not committing to any, maybe I'll get involved in some parkruns this year too. I'm not committing to reach an ability to run 5k because, quite simply, my bad leg may not let me, but I am committing to give it my best. My promise to myself was to start some level of jogging once I hit 100kg, so that's looking likely by the end of this month. Maybe February is when I'll start the c25k stuff in earnest.

Adulting

I have put three items down in this category to get better at this year. One is a big thing for our house. I am, quite simply put, awful at tidying up. I leave all sorts of things lying around and I am messy and lazy. I need to fix this. My short-term goal in this respect is to pick one room of the house where the mess is mostly mine, and learn to keep it tidy before my checkpoint in April. I think I'm likely to choose the Study because it's where others of my activities for this year will centre and it's definitely almost entirely my mess in there. I'm not yet certain how I'll learn to do this, but it has been a long time coming and I really do need to. It's not fair to my husband for me to be this awful all the time.

The second of these points is to explicitly save money for renovations. Last year we had a new bathroom installed and I've been seriously happy about that. We will need to pay that off this year (we have the money, we're just waiting as long as we can to earn the best interest on it first) and then I'll want to be saving up for another spot of renovations. I'd like to have the kitchen and dining room done - new floor, new units and sink in the kitchen, fix up the messy wall in the dining room, have them decorated, etc. I imagine this will take quite a bit of 2019 to save for, but hopefully this time next year I'll be saying that we managed that and it's time for the next part of the house.

Finally I want to take a proper holiday this year. It has been a couple of years since Rob and I went to Seoul for a month, and while that was excellent, it was partly "work from home" and so I'd like to take a holiday which isn't also a conference, or working from home, or anything other than relaxation and seeing of interesting things. This will also require saving for, so I imagine we won't get to do it until mid to late 2019, but I feel like this is part of a general effort I've been making to take care of myself more. The fitness stuff above being physical, but a proper holiday being part of taking better care of my mental health.

Software, Hardware, and all the squishy humans in between

2018 was not a great year for me in terms of getting projects done. I have failed to do almost anything with Gitano and I did not doing well with Debian or other projects I am part of. As such, I'm committing to do better by my projects in 2019.

First, and foremost, I'm pledging to focus my efforts on finishing projects which I've already started. I am very good at thinking "Oh, that sounds fun" and starting something new, leaving old projects by the wayside and not getting them to any state of completion. While software is never entirely "done", I do feel like I should get in-progress projects to a point that others can use them and maybe contribute too.

As such, I'll be making an effort to sort out issues which others have raised in Gitano (though I doubt I'll do much more feature development for it) so that it can be used by NetSurf and so that it doesn't drop out of Debian. Since the next release of Debian is due soon, I will have to pull my finger out and get this done pretty soon.

I have been working, on and off, with Rob on a new point-of-sale for our local pub Ye Olde Vic and I am committing to get it done to a point that we can experiment with using it in the pub by the summer. Also I was working on a way to measure fluid flow through a pipe so that we can correlate the pulled beer with the sales and determine wastage etc. I expect I'll get back to the "beer'o'meter" once the point-of-sale work is in place and usable. I am not going to commit to getting it done this year, but I'd like to make a dent in the remaining work for it.

I have an on-again off-again relationship with some code I wrote quite a while ago when learning Rust. I am speaking of my Yarn implementation called (imaginatively) rsyarn. I'd like to have that project reworked into something which can be used with Cargo and associated tooling nicely so that running cargo test in a Rust project can result in running yarns as well.

There may be other projects which jump into this category over the year, but those listed above are the ones I'm committing to make a difference to my previous lackadaisical approach.

On a more community-minded note, one of my goals is to ensure that I'm always a net benefit to any project I join or work on in 2019. I am very aware that in a lot of cases, I provide short drive-by contributions to projects which can end up costing that project more than I gave them in benefit. I want to stop that behaviour and instead invest more effort into fewer projects so that I always end up a net benefit to the project in question. This may mean spending longer to ensure that an issue I file has enough in it that I may not need to interact with it again until verification of a correct fix is required. It may mean spending time fixing someone else's issues so that there is the engineering bandwidth for someone else to fix mine. I can't say for sure how this will manifest, beyond being up-front and requesting of any community I decide to take part in, that they tell me if I end up costing more than I'm bringing in benefit.

Rust and the Rust community

I've mentioned Rust above, and this is perhaps the most overlappy of my promises for 2019. I want to give back to the Rust community because over the past few years, as I've learned Rust and learned more and more about the community, I've seen how much of a positive effect they've had on my life. Not just because they made learning a new programming language so enjoyable, but because of the community's focus on programmers as human beings. The fantastic documentation ethic, and the wonderfully inclusive atmosphere in the community, meant that I managed to get going with Rust so much more effectively than with almost any other language I've ever tried to learn since Lua.

I have, since Christmas, been slowly involving myself in the Rust community more and more. I joined one of the various Discord servers and have been learning about how crates.io is managed and I have been contributing to rustup.rs which is the initial software interface most Rust users encounter and forms such an integral part of the experience of the ecosystem that I feel it's somewhere I can make a useful impact.

While I can't say a significant amount more right now, I hope I'll be able to blog more in the future on what I'm up to in the Rust community and how I hope that will benefit others already in, and interested in joining, the fun that is programming in Rust.

In summary, I hope at least some of you will help to keep me honest about my intentions for 2019, and if, in return, I can help you too, please feel free to let me know.

Daniel Lange: Openssh taking minutes to become available, booting takes half an hour ... because your server waits for a few bytes of randomness

Wednesday 16th of January 2019 08:20:05 AM

So, your machine now needs minutes to boot before you can ssh in where it used to be seconds before the Debian Buster update?

Problem

Linux 3.17 (2014-10-05) learnt a new syscall getrandom() that, well, gets bytes from the entropy pool. Glibc learnt about this with 2.25 (2017-02-05) and two tries and four years after the kernel, OpenSSL used that functionality from release 1.1.1 (2018-09-11). OpenSSH implemented this natively for the 7.8 release (2018-08-24) as well.

Now the getrandom() syscall will block[1] if the kernel can't provide enough entropy. And that's frequently the case during boot. Esp. with VMs that have no input devices or IO jitter to source the pseudo random number generator from.
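If you want to see the behaviour for yourself, a small C program can exercise both modes of the syscall (this builds with glibc 2.25 or later; it is just an illustration, not something the affected daemons actually run):

#include <errno.h>
#include <stdio.h>
#include <sys/random.h>

int main(void)
{
    unsigned char buf[16];

    /* Non-blocking: fails with EAGAIN while the CRNG is still uninitialized. */
    if (getrandom(buf, sizeof(buf), GRND_NONBLOCK) < 0 && errno == EAGAIN)
        printf("entropy pool not ready yet\n");

    /* Blocking (the default): this is what OpenSSL and OpenSSH end up waiting on. */
    if (getrandom(buf, sizeof(buf), 0) == sizeof(buf))
        printf("got 16 random bytes\n");

    return 0;
}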

First seen in the wild January 2017

I vividly remember not seeing my Alpine Linux VMs back on the net after the Alpine 3.5 upgrade. That was basically the same issue.

Systemd. Yeah.

Systemd makes this behaviour worse, see issue #4271, #4513 and #10621.
Basically as of now the entropy file saved as /var/lib/systemd/random-seed will not - drumroll - add entropy to the random pool when played back during boot. Actually it will. It will just not be accounted for. So Linux doesn't know. And continues blocking getrandom(). This is obviously different from SysVinit times[2] when /var/lib/urandom/random-seed (that you still have lying around on updated systems) made sure the system carried enough entropy over reboot to continue working right after enough of the system was booted.

#4167 is a re-opened discussion about systemd eating randomness early at boot (hashmaps in PID 1...). Some Debian folks participate in the recent discussion and it is worth reading if you want to learn about the mess that booting a Linux system has become.

While we're talking systemd ... #10676 also means systems will use RDRAND in the future despite Ted Ts'o's warning on RDRAND [Archive.org mirror and mirrored locally as 130905_Ted_Tso_on_RDRAND.pdf, 205kB as Google+ will be discontinued in April 2019].

Debian

Debian is seeing the same issue working up towards the Buster release, e.g. Bug #912087.

The typical issue is:

[    4.428797] EXT4-fs (vda1): mounted filesystem with ordered data mode. Opts: data=ordered
[ 130.970863] random: crng init done

with delays up to tens of minutes on systems with very little external random sources.

This is what it should look like:

[    1.616819] random: fast init done
[    2.299314] random: crng init done

Check dmesg | grep -E "(rng|random)" to see how your systems are doing.

If this is not fully solved before the Buster release, I hope some of the below can end up in the release notes[3].

Solutions

You need to get entropy into the random pool earlier at boot. There are many ways to achieve this and - currently - all require action by the system administrator.

Kernel boot parameter

From kernel 4.19 (Debian Buster currently runs 4.18 [Update: but will be getting 4.19 before release according to Ben via Mika]) you can set RANDOM_TRUST_CPU at compile time or random.trust_cpu=on on the kernel command line. This will make Intel/AMD systems trust RDRAND and fill the entropy pool with it. See the warning from Ted Ts'o linked above.
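On a Debian system booting via GRUB, setting that parameter typically means editing /etc/default/grub and regenerating the configuration; a minimal sketch (merge the option into whatever you already have there):

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet random.trust_cpu=on"

# then apply it
update-grub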

Using a TPM

The Trusted Platform Module has an embedded random number generator that can be used. Of course you need to have one on your board for this to be useful. It's a hardware device.

Load the tpm-rng module (ideally from initrd) or compile it into the kernel (config HW_RANDOM_TPM). Now, the kernel does not "trust" the TPM RNG by default, so you need to add

rng_core.default_quality=1000

to the kernel command line. 1000 means "trust", 0 means "don't use". So you can choose any value in between that works for you, depending on how much you consider your TPM to be unbugged.

VirtIO

For Virtual Machines (VMs) you can forward entropy from the host (that should be running longer than the VMs and have enough entropy) via virtio_rng.

So on the host, you do:

kvm ... -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,bus=pci.0,addr=0x7

and within the VM newer kernels should automatically load virtio_rng and use that.

You can confirm with dmesg as per above.

Or check:

# cat /sys/devices/virtual/misc/hw_random/rng_available
virtio_rng.0
# cat /sys/devices/virtual/misc/hw_random/rng_current
virtio_rng.0

Patching systemd

The Fedora bugtracker has a bash / python script that replaces the systemd rnd seeding with a (better) working one. The script can also serve as a good starting point if you need to script your own solution, e.g. for reading from an entropy provider available within your (secure) network.

Chaoskey

The wonderful Keith Packard and Bdale Garbee have developed a USB dongle, ChaosKey, that supplies entropy to the kernel. Hard- and software are open source.

Jitterentropy_RNG

Kernel 4.2 introduced jitterentropy_rng which will use the jitter in CPU timings to generate randomness.

modprobe jitterentropy_rng

This apparently needs a userspace daemon though (read: design mistake) so

apt install jitterentropy-rngd (available from Buster/testing).

The current version 1.0.8-3 installs nicely on Stretch. dpkg -i is your friend.

But - drumroll - that daemon doesn't seem to use the kernel module at all.

That's where I stopped looking at that solution. At least for now. There are extensive docs if you want to dig into this yourself.

Haveged

apt install haveged

Haveged is a user-space daemon that gathers entropy through the timing jitter any CPU has. It will only run "late" in boot but may still get your openssh back online within seconds and not minutes.

It is also - to the best of my knowledge - not verified at all regarding the quality of randomness it generates. The haveged design and history page provides an interesting read and I wouldn't recommend haveged if you have alternatives. If you have none, haveged is a wonderful solution though, as it works reliably. And unverified entropy is better than no entropy. Just forget this is 2018 2019.

Updates 14.01.2019

Stefan Fritsch, the Apache2 maintainer in Debian, OpenBSD developer and a former Debian security team member, stumbled over the systemd issue preventing Apache's libssl from initializing at boot in a Debian bug #916690 - apache2: getrandom call blocks on first startup, systemd kills with timeout.

The bug has been retitled "document getrandom changes causing entropy starvation" hinting at not fixing the underlying issue but documenting it in the Debian Buster release notes.

Unhappy with this "minimal compromise", Stefan wrote a comprehensive summary of the current situation to the Debian-devel mailing list. The discussion spans December 2018 and January 2019 and mostly reiterates what has been written above already. The discussion has - so far - not reached any consensus. There is still the "systemd stance" (not our problem, fix the daemons) and the "ssh/apache stance" (fix systemd, credit entropy).

The "document in release notes" minimal compromise was brought up again and Stefan warned of the problems this would create for Buster users:

> I'd prefer having this documented in the release notes:
> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=916690
> with possible solutions like installing haveged, configuring virtio-rng,
> etc. depending on the situation.

That would be an extremely user-unfriendly "solution" and would lead to countless hours of debugging and useless bug reports.

This is exactly why I wrote this blog entry and keep it updated. We need to either fix this or tell everybody we can reach before upgrading to Buster. Otherwise this will lead to huge amounts of systems dead on the network after what looked like a successful upgrade.

Some interesting tidbits were mentioned within the thread:

Raphael Hertzog fixed the issue for Kali Linux by installing haveged by default. Michael Prokop did the same for the grml distribution within its December 2018 release.

Ben Hutchings pointed to an interesting thread on the debian-release mailing list he kicked off in May 2018. Multiple people summarized the options and the fact that there is no "general solution that is both correct and easy" at the time.

Sam Hartman identified Debian Buster VMs running under VMware as an issue, because that hypervisor does not provide virtio-rng. So Debian VMs wouldn't boot into ssh availability within a reasonable time. This is an issue for real world use cases, albeit ones running a proprietary product as the hypervisor.

16.01.2019

Daniel Kahn Gillmor wrote in to explain a risk for VMs starting right after the boot of the host OS:

If that pool is used by the guest to generate long-term secrets because it appears to be well-initialized, that could be a serious problem.
(e.g. "Mining your P's and Q's" by Heninger et al -- https://factorable.net/weakkeys12.extended.pdf)
I've just opened https://bugs.launchpad.net/qemu/+bug/1811758 to report a way to improve that situation in qemu by default.

So ... make sure that your host OS has access to a hardware random number generator or at least carries over its random seed properly across reboots. You could also delay VM starts until the crng on the host Linux is fully initialized (random: crng init done).
Otherwise your VMs may end up working with poorly seeded pseudo-random numbers and won't even know.
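If you script your VM startup, a small check like the following can be used to delay guest launches until the host crng reports ready. This is a sketch assuming Python 3.6+ on the host; getrandom(2) with GRND_NONBLOCK fails with EAGAIN until the pool is initialized.

#!/usr/bin/env python3
# Sketch: block until the host kernel's crng is initialized ("crng init done"),
# then hand over to whatever actually starts the guests.
import os, time

def wait_for_crng(poll_seconds=1.0):
    while True:
        try:
            os.getrandom(1, os.GRND_NONBLOCK)  # succeeds once the pool is ready
            return
        except BlockingIOError:                # EAGAIN: crng not initialized yet
            time.sleep(poll_seconds)

wait_for_crng()
# start the VMs here, e.g. by exec'ing your libvirt/qemu start script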

  1. it will return with EAGAIN in the GRND_NONBLOCK use case. The blocking behaviour when lacking entropy is a security measure as per Bug #1559 of Google's Project Zero

  2. Update 18.12.2018: "SysVinit times" ::= "The times when most Linux distros used SysVinit over other init systems." So Wheezy and previous for Debian. Some people objected to the statement, so I added this footnote as a clarification. See the discussion in the comments below. 

  3. there is no Buster branch in the release notes repository yet (2018-12-17) 

Russ Allbery: Review: Aerial Magic Season 1

Wednesday 16th of January 2019 04:02:00 AM

Review: Aerial Magic Season 1, by walkingnorth

Series: Aerial Magic #1
Publisher: LINE WEBTOON
Copyright: 2018
Format: Online graphic novel
Pages: 156

Aerial Magic is a graphic novel published on the LINE WEBTOON platform by the same author as the wonderful Always Human, originally in weekly episodes. It is readable for free, starting with the prologue. I was going to wait until all seasons were complete and then review the entire work, like I did with Always Human, but apparently there are going to be five seasons and I don't want to wait that long. This is a review of the first season, which is now complete in 25 episodes plus a prologue.

As with Always Human, the pages metadata in the sidebar is a bit of a lie: a very rough guess on how many pages this would be if it were published as a traditional graphic novel (six times the number of episodes, since each episode seems a bit longer than in Always Human). A lot of the artwork is large panels, so it may be an underestimate. Consider it only a rough guide to how long it might take to read.

Wisteria Kemp is an apprentice witch. This is an unusual thing to be — not the witch part, which is very common in a society that appears to use magic in much the way that we use technology, but the apprentice part. Most people training for a career in magic go to university, but school doesn't agree with Wisteria. There are several reasons for that, but one is that she's textblind and relies on a familiar (a crow-like bird named Puppy) to read for her. Her dream is to be accredited to do aerial magic, but her high-school work was... not good, and she's very afraid she'll be sent home after her ten-day trial period.

Magister Cecily Moon owns a magical item repair shop in the large city of Vectum and agreed to take Wisteria on as an apprentice, something that most magisters no longer do. She's an outgoing woman with a rather suspicious seven-year-old, two other employees, and a warm heart. She doesn't seem to have the same pessimism Wisteria has about her future; she instead is more concerned with whether Wisteria will want to stay after her trial period. This doesn't reassure Wisteria, nor do her initial test exercises, all of which go poorly.

I found the beginning of this story a bit more painful than Always Human. Wisteria has such a deep crisis of self-confidence, and I found Cecily's lack of awareness of it quite frustrating. This is not unrealistic — Cecily is clearly as new to having an apprentice as Wisteria is to being one, and is struggling to calibrate her style — but it's somewhat hard reading since at least some of Wisteria's unhappiness is avoidable. I wish Cecily had shown a bit more awareness of how much harder she made things for Wisteria by not explaining more of what she was seeing. But it does set up a highly effective pivot in tone, and the last few episodes were truly lovely. Now I'm nearly as excited for more Aerial Magic as I would be for more Always Human.

walkingnorth's art style is much the same as that in Always Human, but with more large background panels showing the city of Vectum and the sky above it. Her faces are still exceptional: expressive, unique, and so very good at showing character emotion. She occasionally uses an exaggerated chibi style for some emotions, but I feel like she's leaning more on subtlety of expression in this series and doing a wonderful job with it. Wisteria's happy expressions are a delight to look at. The backgrounds are not generally that detailed, but I think they're better than Always Human. They feature a lot of beautiful sky, clouds, and sunrise and sunset moments, which are perfect for walkingnorth's pastel palette.

The magical system underlying this story doesn't appear in much detail, at least yet, but what is shown has an interesting animist feel and seems focused on the emotions and memories of objects. Spells appear to be standardized symbolism that is known to be effective, which makes magic something like cooking: most people use recipes that are known to work, but a recipe is not strictly required. I like the feel of it and the way that magic is woven into everyday life (personal broom transport is common), and am looking forward to learning more in future seasons.

As with Always Human, this is a world full of fundamentally good people. The conflict comes primarily from typical interpersonal conflicts and inner struggles rather than any true villain. Also as with Always Human, the world features a wide variety of unremarked family arrangements, although since it's not a romance the relationships aren't quite as central. It makes for relaxing and welcoming reading.

Also as in Always Human, each episode features its own soundtrack, composed by the author. I am again not reviewing those because I'm a poor music reviewer and because I tend to read online comics in places and at times where I don't want the audio, but if you like that sort of thing, the tracks I listened to were enjoyable, fit the emotions of the scene, and were unobtrusive to listen to while reading.

This is an online comic on a for-profit publishing platform, so you'll have to deal with some amount of JavaScript and modern web gunk. I at least (using up-to-date Chrome on Linux with UMatrix) had fewer technical problems with delayed and partly-loaded panels than I had with Always Human.

I didn't like this first season quite as well as Always Human, but that's a high bar, and it took some time for Always Human to build up to its emotional impact as well. What there is so far is a charming, gentle, and empathetic story, full of likable characters (even the ones who don't seem that likable at first) and a fascinating world background. This is an excellent start, and I will certainly be reading (and reviewing) later seasons as they're published.

walkingnorth has a Patreon, which, in addition to letting you support the artist directly, has various supporting material such as larger artwork and downloadable versions of the music.

Rating: 7 out of 10

Keith Packard: newt-lola

Tuesday 15th of January 2019 07:13:17 PM
Newt: Replacing Bison and Flex

Bison and Flex (or any of the yacc/lex family members) are a quick way to generate reliable parsers and lexers for language development. It's easy to write a token recognizer in Flex and a grammar in Bison, and you can readily hook code up to the resulting parsing operation. However, neither Bison nor Flex are really designed for embedded systems where memory is limited and malloc is to be avoided.

When starting Newt, I didn't hesitate to use them though; it was nice to use well tested and debugged tools so that I could focus on other parts of the implementation.

With the rest of Newt working well, I decided to go take another look at the cost of lexing and parsing to see if I could reduce their impact on the system.

A Custom Lexer

Most mature languages end up with a custom lexer. I'm not really sure why this has to be the case, but it's pretty rare to run across anyone still using Flex for lexical analysis. The lexical structure of Python is simple enough that this isn't a huge burden; the hardest part of lexing Python code is in dealing with indentation, and Flex wasn't really helping with that part much anyways.

So I decided to just follow the same path and write a custom lexer. The result generates only about 1400 bytes of Thumb code, a significant savings from the Flex version which was about 6700 bytes.

To help make the resulting language LL, I added lexical recognition of the 'is not' and 'not in' operators; instead of attempting to sort these out in the parser, the lexer does a bit of look-ahead and returns a single token for both of these.
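The actual Newt lexer is C, but the idea is simple enough to sketch in Python: after reading a word, peek at the next one and fold the two-word operators into single tokens. This is an illustration of the technique, not the real code.

# Sketch (Python, not the real C lexer): merge 'is not' and 'not in'
# into single tokens so the parser never needs extra look-ahead.
def merge_compound(tokens):
    toks = list(tokens)
    out, i = [], 0
    while i < len(toks):
        if tuple(toks[i:i + 2]) in (("is", "not"), ("not", "in")):
            out.append(toks[i] + " " + toks[i + 1])   # one combined token
            i += 2
        else:
            out.append(toks[i])
            i += 1
    return out

print(merge_compound(["a", "is", "not", "b"]))   # ['a', 'is not', 'b']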

Parsing on the Cheap

Many common languages are "almost" LL; this may come from using recursive-descent parsers. In 'pure' form, recursive descent parsers can only recognize LL languages, but it's easy to hack them up to add a bit of look ahead here and there to make them handle non-LL cases.

Which is one reason we end up using parser generators designed to handle LALR languages instead; that class of grammars does cover most modern languages fairly well, only requiring a small number of kludges to paper over the remaining gaps. I think the other reason I like using Bison is that the way an LALR parser works makes it really easy to handle synthetic attributes during parsing. Synthetic attributes are those built from collections of tokens that match an implicit parse tree of the input.

The '$[0-9]+' notation within Bison actions represents the values of lower-level parse tree nodes, while '$$' is the attribute value passed to higher levels of the tree.

However, LALR parser generators are pretty complicated, and the resulting parse tables are not tiny. I wondered how much space I could save by using a simpler parser structure, and (equally important), one designed for embedded use. Because Python is supposed to be an actual LL language, I decided to pull out an ancient tool I had and give it a try.

Lola: Reviving Ancient Lisp Code

Back in the 80s, I wrote a little lisp called Kalypso. One of the sub-projects resulted in an LL parser generator called Lola. LL parsers are a lot easier to understand than LALR parsers, so that's what I wrote.

A program written in a long-disused dialect of lisp offers two choices:

1) Get the lisp interpreter running again

2) Re-write the program in an available language.

I started trying to get Kalypso working again, and decided that it was just too hard. Kalypso was not very portably written and depended on a lot of historical architecture, including the structure of a.out files and the mapping of memory.

So, as I was writing a Python-like language anyways, I decided to transliterate Lola into Python. It's now likely the least "Pythonic" program around as it reflects a lot of common Lisp-like programming ideas. I've removed the worst of the recursive execution, but it is still full of list operations. The Python version uses tuples for lists, and provides a 'head' and 'rest' operation to manipulate them. I probably should have just called these 'car' and 'cdr'...
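A rough idea of what those helpers might look like (hypothetical names and layout; the real Lola code may differ):

# Hypothetical sketch of the tuple-as-list convention described above.
def head(t):
    return t[0]       # first element, i.e. 'car'

def rest(t):
    return t[1:]      # everything after it, i.e. 'cdr'

rule = ("SYMBOL", "COLON", "rules", "SEMI")
print(head(rule))     # 'SYMBOL'
print(rest(rule))     # ('COLON', 'rules', 'SEMI')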

One benefit of using Lisp was that I could write the grammar as s-expressions and avoid needing a parser for the Lola input language. Of course, Lola is a parser generator, so it actually bootstraps itself by having the grammar for the Lola language written as Python data structures, generating a parser for that and then parsing the user's input. Here's what the Lola grammar looks like, in Lola grammar syntax:

start      : non-term start
           |
           ;
non-term   : SYMBOL @NONTERM@ COLON rules @RULES@ SEMI
           ;
rules      : rule rules-p
           ;
rules-p    : VBAR rule rules-p
           |
           ;
rule       : symbols @RULE@
           ;
symbols    : SYMBOL @SYMBOL@ symbols
           |
           ;

Lola now has a fairly clean input syntax, including the ability to code actions in C (or whatever language). It has two output modules; a Python module that generates just the Python parse table structure, and a C module that generates a complete parser, ready to incorporate into your application much as Bison does.

Lola is available in my git repository, https://keithp.com/cgit/lola.git/

Actions in Bison vs Lola

Remember how I said that Bison makes processing synthetic attributes really easy? Well, the same is not true of the simple LL parser generated by Lola.

Actions in Lola are chunks of C code executed when they appear at the top of the parse stack. However, there isn't any context for them in the parsing process itself; the parsing process discards any knowledge of production boundaries. As a result, the actions have to manually track state on a separate attribute stack. There are pushes to this stack in one action that are expected to be matched by pops in another.

The resulting actions are not very pretty, and writing them is somewhat error-prone. I'd love to come up with a cleaner mechanism, and I've got some ideas, but those will have to wait for another time.
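To make the contrast with Bison's '$$'/'$N' concrete, here is a rough Python illustration of the marker-and-stack pattern described above. The real actions are C emitted by Lola and are structured differently; this only shows the push/pop pairing.

# Rough illustration of a manual attribute stack: an opening action pushes a
# marker, per-symbol actions push values, and the matching closing action
# pops everything back to the marker to build the synthetic attribute.
MARK = object()
stack = []

def rule_start():                     # paired with...
    stack.append(MARK)

def saw_symbol(name):
    stack.append(name)

def rule_end():                       # ...this closing action
    values = []
    while stack[-1] is not MARK:
        values.append(stack.pop())
    stack.pop()                       # discard the marker
    return tuple(reversed(values))

rule_start(); saw_symbol("SYMBOL"); saw_symbol("COLON")
print(rule_end())                     # ('SYMBOL', 'COLON')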

Bison vs Lola in Newt

Bison generated 4kB of parse tables and a 1470 byte parser. Lola generates 2kB of parse tables and a 1530 byte parser. So, switching has saved about 2kB of memory. Most of the parser code in both cases is probably the actions, which is my guess as to why the two are similar in size. I think the 2kB savings is worth it, but it's a close thing for sure.

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, December 2018

Tuesday 15th of January 2019 10:26:45 AM

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In December, about 224 work hours have been dispatched among 13 paid contributors. Their reports are available:

  • Abhijith PA did 8 hours (out of 8 hours allocated).
  • Antoine Beaupré did 24 hours (out of 24 hours allocated).
  • Ben Hutchings did 15 hours (out of 20 hours allocated, thus keeping 5 extra hours for January).
  • Brian May did 10 hours (out of 10 hours allocated).
  • Chris Lamb did 18 hours (out of 18 hours allocated).
  • Emilio Pozuelo Monfort did 44 hours (out of 30 hours allocated + 39.25 extra hours, thus keeping 25.25 extra hours for January).
  • Hugo Lefeuvre did 20 hours (out of 20 hours allocated).
  • Lucas Kanashiro did 3 hours (out of 4 hours allocated, thus keeping one extra hour for January).
  • Markus Koschany did 30 hours (out of 30 hours allocated).
  • Mike Gabriel did 21 hours (out of 10 hours allocated and 1 extra hour from November and 10 hours additionally allocated during the month).
  • Ola Lundqvist did 8 hours (out of 8 hours allocated + 7 extra hours, thus keeping 7 extra hours for January).
  • Roberto C. Sanchez did 12.75 hours (out of 12 hours allocated + 0.75 extra hours from November).
  • Thorsten Alteholz did 30 hours (out of 30 hours allocated).
Evolution of the situation

In December we managed to dispatch all the available hours to contributors again, and we had one new contributor in training. Still, we continue to look for new contributors. Please contact Holger if you are interested in becoming a paid LTS contributor.

The security tracker currently lists 37 packages with a known CVE and the dla-needed.txt file has 31 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


Petter Reinholdtsen: CasparCG Server for TV broadcast playout in Debian

Monday 14th of January 2019 11:10:00 PM

The layered video playout server created by Sveriges Television, CasparCG Server, entered Debian today. This completes many months of work to get the source ready to go into Debian. The first upload to the Debian NEW queue happened a month ago, but the work upstream to prepare it for Debian started more than two and a half months ago. So far the casparcg-server package is only available for amd64, but I hope this can be improved. The package is in contrib because it depends on the non-free fdk-aac library. The Debian package lacks support for streaming web pages because Debian is missing CEF, the Chromium Embedded Framework. CEF is wanted by several packages in Debian. But because the Chromium source is not available as a build dependency, it is not yet possible to upload CEF to Debian. I hope this will change in the future.

The reason I got involved is that the Norwegian open channel Frikanalen is starting to use CasparCG for our HD playout, and I would like to have all the free software tools we use to run the TV channel available as packages from the Debian project. The last remaining piece in the puzzle is Open Broadcast Encoder, but it depends on quite a lot of patched libraries which would have to be included in Debian first.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Jonathan Dowland: Amiga/Gotek boot test

Monday 14th of January 2019 07:42:32 PM

This is the fourth part in a series of blog posts. The previous post was part 3: preliminaries.

A500 mainboard

In 2015, Game 2.0, a retro gaming exhibition, visited the Centre for Life in Newcastle. On display were a range of vintage home computers and consoles, rigged up so you could play them. There was a huge range of machines, including some arcade units and various (relatively) modern VR systems that were drawing big crowds, but something else caught my attention: a modest little Amiga A500, running the classic puzzle game, "Lemmings".

A couple of weeks ago I managed to disassemble my Amiga and remove the broken floppy disk drive. The machine was pretty clean under the hood, considering its age. I fed new, longer power and data ribbon cables out of the floppy slot in the case (in order to attach the Gotek Floppy Emulator externally) and re-assembled it.

Success! Lemmings!

I then iterated a bit with setting up disk images and configuring the firmware on the Gotek. It was supplied with FlashFloppy, a versatile and open source firmware that can operate in a number of different modes and read disk images in a variety of formats. I had some Amiga images in "IPF" format, others in "ADF" and also some packages with "RP9" suffixes. After a bit of reading around, I realised the "IPF" ones weren't going to work, the "RP9" ones were basically ZIP archives of other disk images and metadata, and the "ADF" format was supported.
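As an aside, if the "RP9" packages really are ZIP archives as described above, something like the following sketch (hypothetical filename and output directory) will list their contents and pull out any ADF images for use with FlashFloppy:

# Sketch: treat an .rp9 package as a ZIP archive and extract the ADF images.
import zipfile

with zipfile.ZipFile("Lemmings.rp9") as rp9:
    print(rp9.namelist())                       # disk images plus metadata files
    adf_images = [n for n in rp9.namelist() if n.lower().endswith(".adf")]
    rp9.extractall("extracted", members=adf_images)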

Amiga & peripherals on my desk

For my first boot test of the Gotek adaptor, the disk image really had to be Lemmings. Success! Now that I knew the hardware worked, I spent some time re-arranging my desk at home, to try and squeeze the Amiga, its peripherals and the small LCD monitor alongside the equipment I use for my daily work. It was a struggle but they just about fit.

The next step was to be planning out and testing a workflow for writing to virtual floppies via the Gotek. Unfortunately, before I could get as far as performing the first write test, my hardware developed a problem…

Jonathan Dowland: Amiga floppy recovery project, part 3: preliminaries

Monday 14th of January 2019 07:42:32 PM

This is the third part in a series of blog posts, following Amiga floppy recovery project, part 2. The next part is Amiga/Gotek boot test.

The first step for my Amiga project was to recover the hardware from my loft and check it all worked.

When we originally bought the A500 (in, I think, 1991) we bought a RAM expansion at the same time. The base model had a whole 512KiB of RAM, but it was common for people to buy a RAM expander that doubled the amount of memory to a whopping 1 MiB. The official RAM expander was the Amiga 501, which fit into a slot on the underside of the Amiga, behind a trapdoor.

The 501 also featured a real-time clock (RTC), which was powered by a backup NiCad battery soldered onto the circuit board. These batteries are notorious for leaking over a long enough time-frame, and our Amiga had been in a loft for at least 20 years. I had heard about this problem when I first dug the machine back out in 2015, and had a vague memory that I checked the board at the time and could find no sign of leakage, but reading around the subject more recently made me nervous, so I double-checked.

AMRAM-NC-issue 2 RAM expansion

Lo and behold, we don't have an official Commodore RAM expander: we were sold a third-party one, an "AMRAM-NC-issue 2". It contains the 512KiB RAM and a DIP switch, but no RTC or corresponding battery, so no leakage. The DIP switch was used to enable and disable the RAM expansion. Curiously it is currently flicked to "disable". I wonder if we ever actually had it switched on?

The follow-on Amiga models A500+ and A600 featured the RTC and battery directly on the machine's mainboard. I wonder if that has resulted in more of those units being irrevocably damaged from leaking batteries, compared to the A500. My neighbours had an A600, but they got rid of it at some point in the intervening decades. If I were looking to buy an Amiga model today, I'd be quite tempted by the A600, due to its low profile, lacking the numpad, and integrated composite video output and RF modulator.

Kickstart 1.3 (firmware) prompt

I wasn't sure whether I was going to have to rescue my old Philips CRT monitor from the loft. It would have been a struggle to find somewhere to house the Amiga and the monitor combined, as my desk at home is already a bit cramped. Our A500 was supplied with a Commodore A520 RF adapter which we never used in the machine's heyday. Over the Christmas break I tested it and it works, meaning I can use the A500 with my trusty 15" TFT TV (which has proven very useful for testing old equipment, far outlasting many monitors I've had since I bought it).

A520 RF modulator and external FDD

Finally I recovered my old Amiga external floppy disk drive. From what I recall this had very light usage in the past, so hopefully it still works, although I haven't yet verified it. I had partially disassembled this back in 2015, intending to re-purpose the drive into the Amiga. Now I have the Gotek, my plans have changed, so I carefully re-assembled it. Compared to enclosures I've used for PC equipment, it's built like a tank!

The next step is to remove the faulty internal floppy disk drive from the A500 and wire up the Gotek. I was thwarted from attempting this over the Christmas break. The Amiga enclosure's top part is screwed on with T9 Torx screws, and I lacked the right screwdriver part to remove it. I've now obtained the right screwdriver bit and can proceed.

Bits from Debian: "futurePrototype" will be the default theme for Debian 10

Monday 14th of January 2019 12:15:00 PM

The theme "futurePrototype" by Alex Makas has been selected as default theme for Debian 10 'buster'.

After the Debian Desktop Team made the call for theme proposals, a total of eleven choices were submitted, and every Debian contributor had the opportunity to vote on them in a survey. We received 3,646 responses ranking the different choices, and futurePrototype was the winner among them.

We'd like to thank all the designers who participated, providing nice wallpapers and artwork for Debian 10, and we encourage everybody interested in this area of Debian to join the Design Team.

Congratulations, Alex, and thank you very much for your contribution to Debian!

Russ Allbery: Review: The Wonder Engine

Monday 14th of January 2019 05:32:00 AM

Review: The Wonder Engine, by T. Kingfisher

Series: The Clocktaur War #2
Publisher: Red Wombat Tea Company
Copyright: 2018
ASIN: B079KX1XFD
Format: Kindle
Pages: 318

The Wonder Engine is the second half of The Clocktaur War duology, following Clockwork Boys. Although there is a substantial transition between the books, I think it's best to think of this as one novel published in two parts. T. Kingfisher is a pen name for Ursula Vernon when she's writing books for adults.

The prologue has an honest-to-God recap of the previous book, and I cannot express how happy that makes me. This time, I read both books within a month of each other and didn't need it, but I've needed that sort of recap so many times in the past and am mystified by the usual resistance to including one.

Slate and company have arrived in Anuket City and obtained temporary housing in an inn. No one is trying to kill them at the moment; indeed, the city seems oblivious to the fact that it's in the middle of a war. On the plus side, this means that they can do some unharried investigation into the source of the Clocktaurs, the war machines that are coming ever closer to smashing their city. On the minus side, it's quite disconcerting, and ominous, that the Clocktaurs involve so little apparent expenditure of effort.

The next steps are fairly obvious: pull on the thread of research of the missing member of Learned Edmund's order, follow the Clocktaurs and scout the part of the city they're coming from, and make contact with the underworld and try to buy some information. The last part poses some serious problems for Slate, though. She knows the underworld of Anuket City well because she used to be part of it, before making a rather spectacular exit. If anyone figures out who she is, death by slow torture is the best she can hope for. But the underworld may be their best hope for the information they need.

If this sounds a lot like a D&D campaign, I'm giving the right impression. The thief, ranger, paladin, and priest added a gnole to their company in the previous book, but otherwise closely match a typical D&D party in a game that's de-emphasizing combat. It's a very good D&D campaign, though, with some excellent banter, the intermittent amusement of Vernon's dry sense of humor, and some fascinating tidbits of gnole politics and gnole views on humanity, which were my favorite part of the book.

Somewhat unfortunately for me, it's also a romance. Slate and Caliban, the paladin, had a few exchanges in passing in the first book, but much of The Wonder Engine involves them dancing around each other, getting exasperated with each other, and trying to decide if they're both mutually interested and if a relationship could possibly work. I don't object to the relationship, which is quite fun in places and only rarely drifts into infuriating "why won't you people talk to each other" territory. I do object to Caliban, who Slate sees as charmingly pig-headed, a bit simple, and physically gorgeous, and who I saw as a morose, self-righteous jerk.

As mentioned in my review of the previous book, this series is in part Vernon's paladin rant, and much more of that comes into play here as the story centers more around Caliban and digs into his relationship with his god and with gods in general. Based on Vernon's comments elsewhere, one of the points is to show a paladin in a different (and more religiously realistic) light than the cliche of being one crisis of faith away from some sort of fall. Caliban makes it clear that when you've had a god in your head, a crisis of faith is not the sort of thing that actually happens, since not much faith is required to believe in something you've directly experienced. (Also, as is rather directly hinted, religions tend not to recruit as paladins the people who are prone to thinking about such things deeply enough to tie themselves up in metaphysical knots.) Guilt, on the other hand... religions are very good at guilt.

Caliban is therefore interesting on that level. What sort of person is recruited as a paladin? How does that person react when they fall victim to what they fight in other people? What's the relationship between a paladin and a god, and what is the mental framework they use to make sense of that relationship? The answers here are good ones that fit a long-lasting structure of organized religious warfare in a fantasy world of directly-perceivable gods, rather than fragile, crusading, faith-driven paladins who seem obsessed with the real world's uncertainty and lack of evidence.

None of those combine into characteristics that made me like Caliban, though. While I can admire him as a bit of world-building, Slate wants to have a relationship with him. My primary reaction to that was to want to take Slate aside and explain how she deserves quite a bit better than this rather dim piece of siege equipment, no matter how good he might look without his clothes on. I really liked Slate in the first book; I liked her even better in the second (particularly given how the rescue scene in this book plays out). Personally, I think she should have dropped Caliban down a convenient well and explored the possibilities of a platonic partnership with Grimehug, the gnole, who was easily my second-favorite character in this book.

I will give Caliban credit for sincerely trying, at least in between the times when he decided to act like an insufferable martyr. And the rest of the story, while rather straightforward, enjoyably delivered on the setup in the first book and did so with a lot of good banter. Learned Edmund was a lot more fun as a character by the end of this book than he was when introduced in the first book, and that journey was fun to see. And the ultimate source of the Clocktaurs, and particularly how they fit into the politics of Anuket City, was more interesting than I had been expecting.

This book is a bit darker than Clockwork Boys, including some rather gruesome scenes, a bit of on-screen gore, and quite a lot of anticipation of torture (although thankfully no actual torture scenes). It was more tense and a bit more uncomfortable to read; the ending is not a light romp, so you'll want to be in the right mood for that.

Overall, I do recommend this duology, despite the romance. I suspect some (maybe a lot) of my reservations are peculiar to me, and the romance will work better for other people. If you like Vernon's banter (and if you don't, we have very different taste) and want to see it applied at long novel length in a D&D-style fantasy world with some truly excellent protagonists, give this series a try.

The Clocktaur War is complete with this book, but the later Swordheart is set in the same universe.

Rating: 8 out of 10
