Planet GNU

FSF Blogs: Charity Navigator awards the FSF coveted four-star rating for the seventh time in a row

Tuesday 18th of February 2020 05:45:00 PM

Recently, we got some terrific news: Charity Navigator, an independent evaluator of US-based nonprofit charities, awarded the Free Software Foundation (FSF) a four-star rating, the highest available. According to the confirmation letter from Charity Navigator president Michael Thatcher, this rating demonstrates the FSF's "strong financial health and commitment to accountability and transparency." A four-star charity, according to their ratings, "exceeds industry standards and outperforms most charities in its cause."

This is our seventh time in a row receiving the coveted four-star rating! Only 7% of the charities that Charity Navigator evaluates have received it this many times in a row, and they assess over 9,000 charities a year. As Thatcher's letter says, "This exceptional designation from Charity Navigator sets the Free Software Foundation apart from its peers and demonstrates to the public its trustworthiness." Even better: our overall score went from 96.66 out of 100 last year up to 98.55 this cycle.

We do score 100% in the transparency category, which we work very hard at. You can see all of our audited financials online, and our annual reports give you even more details about our activities. It's nice to see this effort pay off.

This is why you can be confident that when you contribute to the FSF, we're going to turn your money into free software advocacy, infrastructure, and development – and you don't have to just take our word for it, either. We have a certificate that says so! And if you need more confirmation, you can see Charity Navigator's breakdown of our facts and figures on their Free Software Foundation summary page.

Christopher Allan Webber: Vats and Propagators: towards a global brain

Sunday 16th of February 2020 08:55:05 PM

(This is a writeup for future exploration; I will be exploring a small amount of this soon as a side effect of some UI building I am doing, but not a full system. A full system will come later, maybe even years from now. Consider this a desiderata document. Also, a forewarning: this document was originally written for an ocap-oriented audience, and some terms are left unexpanded; for instance, "vat" really just means a one-turn-at-a-time single-threaded event loop that a bunch of actors live in.)

We have been living the last couple of decades with networks that are capable of communicating ideas. However, by and large it is left to the humans to reason about these ideas that are propagated. Most machines that operate on the network merely execute the will of humans that have carefully constructed them. Recently neural network based machine learning has gotten much better, but merely resembles intuition, not reasoning. (The human brain succeeds by combining both, and a successful system likely will too.) Could we ever achieve a network that itself reasons? And can it be secure enough not to tear itself apart?

Near-term background

In working towards building out a demonstration of petname systems in action in a social network, I ran into the issue of changes to a petname database automatically being reflected through the UI. This led me back down a rabbit hole of exploring reactive UI patterns, and also led me back to exploring that section, and the following propagator section, of SICP again. This also led me to rewatch one of my favorite talks: We Don't Really Know How to Compute! by Gerald Sussman.

At 24:54 Sussman sets up an example problem: specifically, an expert in electrical systems having a sense of how to be able to handle and solve an electrical wiring diagram. (The kind of steps explained are not dissimilar to the kind of steps that programmers go through while reasoning about debugging a coding problem.) Sussman then launches into an exploration of propagators, and how they can solve the problem. Sussman's explanation is better than mine would be, so I'll leave you to watch the video to see how it's used to solve various problems.

Okay, a short explanation of propagators

Well, I guess I'll give a little introduction to propagators and why I think they're interesting.

Propagators have gone through some revisions since the SICP days; relevant reading are the Revised Report on the Propagator Model, The Art of the Propagator, and to really get into depth with the ideas, Propagation networks: a flexible and expressive substrate for computation (Radul's PhD thesis).

In summary, a propagator model has the following properties:

  • There are cells which accumulate information about a value. Note! This is a big change from previous propagator versions! In the modern version of the propagator model, a cell doesn't hold a value; it accrues information about a value, which must be non-contradictory.
  • Such cell information may be complete (the number 42 is all there is to know), whereas some other information may be a range of possibilities (hm, could be anywhere between -5 to 45...). As more information is made available, we can "narrow down" what we know.
  • Cells are connected together with propagators.
  • Information is (usually) bidirectional. For example, with the slope formula of y = (m * x) + b, we don't need to just solve for y... we could solve for m, x, or b given the other information. Similarly, partial information can propagate.
  • Contradictions are not allowed. Attempting to introduce contradictory information into the network will throw an exception.
  • We can "play with" different ideas via a Truth Maintenance System. What do we believe? Changes in our beliefs can result in changes to the generated topology of the network.
  • Debugging is quite possible. One of the goals of propagator networks is that you should be able to investigate and determine blame for a result. Relationships are clear and well defined. As Sussman says (roughly paraphrased), "if an autonomous car drives off the side of the road, I could sue the car manufacturer, but I'd rather sue the car... I want to hold it accountable for its decision making". The ability to assign accountability and determine blame stands in contrast to squishier systems like neural nets, genetic programs, etc. (which are still useful, but not as easy to interrogate).

There are a lot of things that can be built with propagators as the general case of constraint solving and reasoning: functional reactive UIs, type checkers, and so on.
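To make the bullet points above concrete, here is a toy sketch in Python (purely illustrative; the real prototypes are in Scheme and far richer). Cells accumulate partial information as intervals, and an adder constraint propagates information in every direction, just like the slope-formula example:

```python
# Toy propagator sketch: cells accrue interval information about a value,
# and propagators re-fire whenever an input cell narrows.

class Contradiction(Exception):
    pass

class Cell:
    def __init__(self, name):
        self.name = name
        self.lo, self.hi = float("-inf"), float("inf")  # "could be anything"
        self.watchers = []  # propagators to re-run when we learn more

    def add_info(self, lo, hi):
        # Merge new information by intersecting intervals.
        new_lo, new_hi = max(self.lo, lo), min(self.hi, hi)
        if new_lo > new_hi:
            # Contradictory information is not allowed in the network.
            raise Contradiction(f"{self.name}: [{new_lo}, {new_hi}]")
        if (new_lo, new_hi) != (self.lo, self.hi):
            self.lo, self.hi = new_lo, new_hi
            for w in self.watchers:
                w()

def adder(a, b, total):
    # Bidirectional constraint a + b = total: any two cells inform the third.
    def propagate():
        total.add_info(a.lo + b.lo, a.hi + b.hi)
        a.add_info(total.lo - b.hi, total.hi - b.lo)
        b.add_info(total.lo - a.hi, total.hi - a.lo)
    for c in (a, b, total):
        c.watchers.append(propagate)
    propagate()

x, y, z = Cell("x"), Cell("y"), Cell("z")
adder(x, y, z)
z.add_info(10, 10)   # we know the total exactly
x.add_info(2, 6)     # we only know a range for x
print(y.lo, y.hi)    # y is narrowed to [4, 8]
```

Note how we never "solved for" y: partial knowledge about x and z flowed into it, and adding information that conflicts (say, x = 20) would raise a Contradiction rather than silently corrupt the network.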

Bridging vats and propagators

The prototype implementations are written in Scheme. The good news is, this means we could implement propagators on top of something like Spritely Goblins.

However (and, granted, I haven't implemented it yet), I think there is one thing that is inaccurately described in Radul's thesis and Sussman's explanations, but which is actually no problem at all if we apply the vat model of computation (as in E, Agoric, Goblins): how distributed can these cells and propagators be? Section 2.1 of Radul's thesis describes propagators as asynchronous and completely autonomous, as if cells and their propagators could live anywhere on the computer network with no change in effectiveness. I think this is only partially true. The reference implementation does not actually explore this fully, because it uses a single-threaded event loop that processes events until there are none left, during which it may encounter a contradiction and raise it. I believe the ability to "stop the presses", as it were, is one of the nicest features of propagators and should not be lost... if we allowed asynchronous events in, multiple events could arrive at the same time and try to make changes to the propagator network in parallel. Thankfully, a nice answer comes in the form of the vat model: it should be possible to have a propagator network within a single vat. Spritely Goblins' implementation of the vat model is transactional, so if we try to introduce a contradiction, we can roll back immediately. This is the right behavior. As it turns out, this is very close to how the propagator system works in the reference implementation... I think the reference implementation did something more or less right while trying to do the simplest thing. Combined with a proper ocap vat model, this should work great.
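The transactional idea can be sketched in a few lines of Python (a hypothetical API, not Goblins' actual one): each turn runs against a snapshot, and a contradiction rolls the whole turn back instead of leaving the network half-updated.

```python
# Sketch of a transactional vat turn (hypothetical API, not Goblins'):
# an update either commits in full or is rolled back entirely.
import copy

class Contradiction(Exception):
    pass

class Vat:
    def __init__(self):
        self.cells = {}  # cell name -> set of accumulated facts

    def turn(self, update):
        # Run one single-threaded turn transactionally.
        snapshot = copy.deepcopy(self.cells)
        try:
            update(self.cells)
        except Contradiction:
            self.cells = snapshot  # roll back the whole turn
            return False
        return True

vat = Vat()

def learn_color(cells):
    cells.setdefault("bird-color", set()).add("red")

def contradict(cells):
    # This turn mutates state, then discovers a contradiction...
    cells.setdefault("bird-color", set()).add("blue")
    raise Contradiction("bird cannot be both red and blue")

vat.turn(learn_color)       # commits
ok = vat.turn(contradict)   # rolls back; "blue" never lands
print(ok, vat.cells["bird-color"])  # False {'red'}
```

The point is that the "stop the presses" behavior survives even with asynchronous inputs: events are serialized into turns, and a contradictory turn simply never takes effect.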

Thus, I believe that a propagator system (here I mean a propagator network, meaning a network of propagator-connected cells) should actually be vat-local. But wait, we talked about network (as in internet) based reasoning, and here I am advocating locality! What gives?

The right answer seems to me that propagator networks should be able to be hooked together, but a change to a vat-contained propagator system can trigger message passing to another vat-contained propagator system, which can even happen over a computer network such as the internet. We will have to treat propagator systems and changes to them as vat-local, but they can still communicate with other propagator systems. (This is a good idea anyway; if you communicate an idea with me and it's inconsistent with my worldview, it should be important for me to be able to realize that and use that as an opportunity to correct our misunderstandings between each other.)

However, cells are still objects with classic object references. This means it is possible to hold onto one and use it as either a local or networked capability. Attenuation also composes nicely; it should be possible to produce a facet of a cell that only allows read access or only allows adding information. It's clear and easily demonstrated that ocaps can be the right security model for the propagator model simply by realizing that both the propagator prototype system and Jonathan Rees' W7 security kernel are written in Scheme.
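A tiny illustration of such attenuation, in Python for the sake of a runnable sketch (the helper name is hypothetical): the facet can read the cell's accumulated facts but simply holds no authority to add information.

```python
# Attenuation sketch: a facet that carries only part of a cell's authority.

class Cell:
    def __init__(self):
        self.facts = set()
    def add(self, fact):
        self.facts.add(fact)
    def read(self):
        return frozenset(self.facts)

def read_only_facet(cell):
    # Hypothetical helper: the returned object closes over the cell but
    # exposes only the read method; there is no way to reach add().
    class Facet:
        def read(self):
            return cell.read()
    return Facet()

c = Cell()
c.add("sky is blue")
facet = read_only_facet(c)
print(facet.read())            # frozenset({'sky is blue'})
print(hasattr(facet, "add"))   # False
```

This is ocap discipline in miniature: authority is whatever references you hold, so handing out the facet instead of the cell is the entire access-control story.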

This is all to say, if we built the propagator model on top of an ocap-powered vat model, we'd already have a good network communication model, a good security model, and a transactional model. Sounds great to me.

Best of all, a propagator system can live alongside normal actors. We don't have to choose one or the other... a multi-paradigm approach can work great.

Speaking the same language

One of the most important things in a system that communicates is that ideas should be able to be expressed and considered in such a way that both parties understand. Of course, humans do this, and we call it "language".

Certain primitives exist in our system already; for optimization reasons, we are unlikely to want to build numbers out of mere tallying of numbers (such as in Peano arithmetic); we instead build in primitives for integers and a means of combination for them. So we will of course want to have several primitive data types.

But at some point we will want to talk about concepts that are not encoded in the system. If I would like to tell you about a beautiful red bird I saw, where would I even begin? Well obviously at minimum, we will have to have ways of communicating ideas such as "red" and "bird". We will have to build a vocabulary together.

Natural language vocabulary has a way of becoming ambiguous fast. A "note" passed in class versus a "note" in a musical score versus that I would like to "note" a topic of interest to you are all different things.

Linked data (formerly "semantic web") folks have tried to use full URIs as a way to get around this problem. For instance, two ActivityPub servers which are communicating are very likely speaking about the same thing if they both use the same full URI for a term, say one denoting some written note-like message (probably a (micro)blog post). This is not a guarantee; vocabulary drift is still possible, but it is much less likely.

Unfortunately, http(s)-based URIs are a poor choice for hosting vocabulary. Domains expire, websites go down, and choosing whether to extend a vocabulary in some namespace is (in the author's experience) a governance nightmare. A better option is "content-addressed vocabulary": instead of a live URL, we could simply take the text from the standard:

"Represents a short written work typically less than a single paragraph in length."

Hash that and you get "urn:sha256:54c14cbd844dc9ae3fa5f5f7b8c1255ee32f55b8afaba88ce983a489155ac398". No governance or liveness issues required. (Hashing mechanism upgrades, however, do pose some challenge; mapping old hashes to new ones for equivalence can be a partial solution.)
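The scheme above is trivial to implement; here is a sketch in Python. (One hedge: the exact digest depends on canonicalization choices such as text encoding and trailing whitespace, so the function below is illustrative rather than a definitive spec.)

```python
# Content-addressed vocabulary sketch: derive a term URI from the
# definition text itself, so no domain or governance body is needed.
import hashlib

def term_urn(definition: str) -> str:
    # Canonicalization assumption: UTF-8 bytes of the text, as written.
    digest = hashlib.sha256(definition.encode("utf-8")).hexdigest()
    return f"urn:sha256:{digest}"

note_def = ("Represents a short written work typically less than a "
            "single paragraph in length.")
urn = term_urn(note_def)
print(urn)  # urn:sha256:<64 hex digits>
```

Any two parties who agree on the definition text (and the canonicalization) independently derive the same URN, which is exactly the liveness-free property we wanted.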

This seems sufficient to me; groups can collaborate somewhere to hammer out the definition of some term, simply hash the definition of it, and use that as the terminology URI. This also avoids hazards from choosing a different edge of Zooko's Triangle for vocabulary.

Now that we have this, we can express advanced new ideas across the network and experiment with new terms. Better yet, we might be even able to use our propagator networks to associate ideas with them. I think in many systems, content-addressed-vocabulary could be a good way to describe beliefs that could be considered, accepted, rejected in truth maintenance systems.

Cerealize me, cap'n!

One observation from Agoric is that it is possible to treat systems that do not resemble traditional live actor'y vats (for instance, blockchains) as vats (and "machines") all the same, and to develop semantics for message passing between them (and performing promise resolution) nonetheless.

Similarly, above we have observed that propagator systems can be built on top of actors; I believe it is also possible to describe propagator networks in terms of pure data. It should be possible to describe changes to a propagator network as a standard serialized ledger that can be transferred from place to place or reproduced.
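As an illustration of the pure-data idea (the entry format here is entirely hypothetical), changes to a propagator network could be logged as a serialized ledger, shipped elsewhere, and replayed to reproduce the same state:

```python
# Sketch: propagator-network changes as a replayable serialized ledger.
# Reproducing the network's state anywhere is just replaying the entries.
import json

ledger = [
    {"op": "add-cell", "cell": "temperature"},
    {"op": "add-info", "cell": "temperature", "lo": -5, "hi": 45},
    {"op": "add-info", "cell": "temperature", "lo": 20, "hi": 30},
]

def replay(entries):
    cells = {}
    for e in entries:
        if e["op"] == "add-cell":
            cells[e["cell"]] = (float("-inf"), float("inf"))
        elif e["op"] == "add-info":
            lo, hi = cells[e["cell"]]
            # Same intersection rule a live cell would apply.
            cells[e["cell"]] = (max(lo, e["lo"]), min(hi, e["hi"]))
    return cells

wire = json.dumps(ledger)           # transfer from place to place...
state = replay(json.loads(wire))    # ...and reproduce the same state
print(state)  # {'temperature': (20, 30)}
```

Because replay is deterministic, the ledger itself is the network: any machine, actor'y or not, that can store and forward these entries can participate.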

However, the fact that interoperability with actors is possible is good, desirable, and thankfully a nice transitional place for experimentation (porting propagator model semantics to Spritely Goblins should not be hard).

Where to from here?

That's a lot of ideas above, but how likely is any of this stuff to be usable soon? I'm not anticipating dropping any current work to try to make this happen, but I probably will be experimenting in my upcoming UI work to try to have the UI powered by a propagator system (possibly even a stripped down version) so that the experimental seeds are in place to see if such a system can be grown. But I am not anticipating that we'll see anything like a fully distributed propagator system doing something interesting from my own network soon... but sometimes I end up surprised.

Closing the loop

I mentioned before that human brains are a combination of faster intuitive methods (resembling current work on neural nets) and slower, more calculating reasoning systems (resembling propagators or some logic programming languages). That's also to say nothing about the giant emotional soup that a mind/body tends to live in.

Realistically the emergence of a fully sapient system won't involve any of these systems independently, but rather a networked interconnection of many of them. I think the vat model of execution is a nice glue system for it; pulling propagators into the system could bring us one step closer, maybe.

Or maybe it's all just fantastical dreaming! Who knows. But it could be interesting to play and find out at some point... perhaps some day we can indeed get a proper brain into a vat.

FSF Blogs: Register today for LibrePlanet -- or organize your own satellite instance

Friday 14th of February 2020 04:44:15 PM

LibrePlanet started out as a gathering of Free Software Foundation (FSF) associate members, and has remained a community event ever since. We are proud to bring so many different people together to discuss the latest developments and the future of free software. We envision that some day there will be satellite instances all over the globe livestreaming our annual conference on technology and social justice -- and you can create your own today! All you need is a venue, a screen, and a schedule of LibrePlanet events, which we'll be releasing soon. This year, a free software supporter in Ontario, Canada, has confirmed an event, and we encourage you to host one, too.

Of course, ideally you'll be able to join us in person for LibrePlanet 2020: "Free the Future." If you can come, please register now to let us know -- FSF associate members attend gratis. We are looking forward to receiving the community at the newly confirmed Back Bay Events Center this year. We've put together some information on where to eat, sleep, and park in the vicinity of the new venue.

However, we know that not every free software enthusiast can make it to Boston, which is why we livestream the entire event. You can view it solo, with friends, or even with a large group of like-minded free software enthusiasts! It is a great opportunity to bring other people in your community together to view some of the foremost speakers in free software, including Internet Archive founder and Internet Hall of Famer Brewster Kahle.

We will also host an IRC instance, #libreplanet on Freenode, through which you can be in direct contact with the room monitors, who can relay any questions you may have about the talks going on here in Boston.

If you are working on getting a group of people together for the event, please let us and others know by announcing it on the LibrePlanet wiki and the LibrePlanet email list. If you have any questions, if you need any help organizing, if you'd like some free FSF sticker packs, or if you just want to let us know about a satellite instance, please email us. We look forward to receiving you here in Boston and all over the world.

LibrePlanet needs volunteers -- maybe you!

LibrePlanet has grown every year in size and scope -- and its continued success is thanks to dozens of volunteers who help prepare for and run the conference. Volunteering is a great way to meet fellow community members and contribute to LibrePlanet, even if you can't attend in person. And yes, remote volunteers are definitely needed to help us moderate IRC chat rooms -- you can help us out from anywhere in the world!

If you are interested in volunteering for LibrePlanet 2020, please email us. We thank all of our in-person volunteers by offering them gratis conference admission, lunch, and a LibrePlanet T-shirt.

Help others attend!

Take your support for LibrePlanet to the next level by helping others attend. We get a lot of requests from people internationally who would like to attend the event. We try to help as many as we can, and with your support, we can really put the "planet" in LibrePlanet.

We also hope that you'll spread the word about LibrePlanet 2020: write a blog, or take it to social media to let people know that you'll be there, using the hashtag #libreplanet.

We hope to see you in March!

FSF Blogs: Why freeing Windows 7 opens doors

Thursday 13th of February 2020 05:05:00 PM

Since its launch on January 24th, we've had an overwhelming amount of support in our call to "upcycle" Windows 7. Truthfully, the signature count climbed far faster than we ever expected, despite our conservative (if aptly numbered) goal of 7,777 signatures. We have seen the campaign called quixotic and even "completely delusional," but in every case, people have recognized the "pragmatic idealism" that is at the core of the FSF's message. Even where this campaign has been attacked, it's nevertheless been understood that the FSF really does want all software to be free software. We recommend every fully free operating system that we are aware of, and want to be able to expand that list to include every operating system. So long as any remain proprietary, we will always work to free them.

Over the last few weeks, we have been carefully watching the press coverage, and are glad to see the message of software freedom popping up in so many places at once. We received a lot of support, and have responded to dozens of comments expressing support, concern, and even outrage over why the FSF would think that upcycling Windows 7 was a good idea, and why it was something we would want to demand.

Microsoft can free Windows. They already have all of the legal rights necessary or the leverage to obtain them. Whether they choose to do so or not is up to them. In the past weeks, we've given them the message that thousands of people around the world want Windows to be freed. Next, we'll give them the medium.

This afternoon we will be mailing an upcycled hard drive, along with the signatures, to Microsoft's corporate offices. Freeing Windows is as easy as copying the source code, giving it a license notice, and mailing it back to us. As the guardian of the most popular free software license in the world, we're ready to give them all of the help we can. All they have to do is ask.

We want them to show exactly how much love they have for the "open source" software they mention in their advertising. If they really do love free software -- and we're willing to give them the benefit of the doubt -- they have the opportunity to show it to the world. We hope they're not just capitalizing on the free software development model in the most superficial and exploitative way possible: by using it as a marketing tool to fool us into thinking that they care about our freedom.

Together, we've stood up for our principles. They can reject us, or ignore us, but what they cannot do is stop us. We'll go on campaigning, until all of us are free.

FSF Blogs: "I Love Free Software Day": Swipe (copy)left on dating apps

Tuesday 11th of February 2020 03:55:00 PM

Every year, Free Software Foundation Europe (FSFE) encourages supporters to celebrate Valentine’s Day as “I Love Free Software Day,” a day for supporters to show their gratitude to the people who enable them to enjoy software freedom, including maintainers, contributors, and other activists. It seems appropriate on this holiday to once again address how seeking love on the Internet is, unfortunately, laden with landmines for your freedom and privacy. But today, I’m also going to make the argument that our community should think seriously about developing a freedom-respecting alternative.

Before we get started, though: make sure to show your love and gratitude for free software on February 14 and beyond! Share the graphic with the hashtag #ilovefs.

With that said: as you probably heard earlier this year, the hydra-headed Match Group, which divides its customers among Tinder, OKCupid, Hinge, and others, as well as several other dating companies, was revealed to be sharing user information in flagrant violation of privacy laws. OKCupid was caught sharing what was described as “highly personal data about sexuality, drug use, political views, and more,” and Grindr has been caught multiple times sharing users' HIV status. All of these apps also tell Facebook everything, whether a user has a profile or not (remember, even if you're not a user, you probably have a shadow Facebook profile!). This is typical behavior for modern technology companies, but the fact that it’s so ordinary makes it neither less ugly nor less flagrant.

Why do people put up with this? It isn’t that they don’t know that their personal information is being treated like candy tossed from a parade float: in 2014, Pew Research Center found that 91% of poll participants “agree or strongly agree that people have lost control over how personal information is collected and used by all kinds of entities.” A 2017 survey found that only 9% of social media users felt sure that Facebook and their ilk were protecting their data. And a 2017 Pew study led researchers to conclude that “a higher percentage of online participation certainly does not indicate a higher level of trust.” One anonymous commenter quipped, “People will expect data breaches, but will use online services anyway because of their convenience. It’s like when people accepted being mugged as the price of living in New York.”

It turns out that even if they're aware of how these companies are mistreating us, many people are making a cost-benefit analysis, and perceiving the benefits they get from these downright skeevy programs as valuable enough to be worth the ever-increasing exposure to the advertisers’ panopticon. As one anonymous Web and mobile developer from the Pew study said, “Being able to buy groceries when you’re commuting, talking with colleagues when doing a transatlantic flight, or simply ordering food for your goldfish right before skydiving will allow people to take more advantage of the scarcest good of our modern times: time itself.”

Here at the Free Software Foundation (FSF), we disagree strongly that the tradeoff is worth it, and it’s central to our mission to convince software users that letting developers pull their strings is destructive to their lives and dangerous to our society. When you use proprietary software, the program controls you, and the people who develop that program can use it as a tool to manipulate you in many absolutely terrifying ways. The same can also be true of services where the software is not distributed at all and is therefore neither free nor nonfree; but step one is to ditch all of the proprietary apps and JavaScript these companies try to get people to use.

Nevertheless, our battle is going to be an uphill one when a majority of people perceive conveniences to be worth the cost. In the case of dating Web sites, by 2015, 59% of people polled by Pew agreed that “online dating is a good way to meet people.” And it’s perceived, at least to some degree, as being effective: according to Pew, “nearly half of the public knows someone who uses online dating or who has met a spouse or partner via online dating.” eHarmony claimed, according to this 2019 article, that four percent of US marriages begin on their site, while a poll by The Knot found that twenty-two percent of spouses polled met online. (The eHarmony stats may be questionable, but as part of a sales pitch, it definitely works to draw people in.)

Conversely, the alternative to online dating doesn’t feel very rosy to an increasing number of people. The same poll on The Knot found that one in five couples polled were introduced in a more traditional way, through their personal network, which sounds terrific, except for one small problem: our IRL social networks are shrinking. In 2009, Psychology Today reported that 25% of Americans had not a single friend or family member they could count on, and half of all Americans had nobody outside of their immediate family. So, how do you meet the elusive love of your life? It’s unsurprising that many people reluctantly choose the less obvious potential harms of OKCupid over the more tangible harms of isolation and loneliness. (After all, they’re not exactly trumpeting on their front page, “We’ll help you find a date, but in the meantime, we have information about what you’re into in bed, and we’ll give it to whoever we like!”)

This quandary sets up an extraordinarily unfair proposition: nobody should be forced to sacrifice their freedom in the name of a perceived shot at happiness. At the end of the day, we maintain that it’s not worth it, and you should keep Mark Zuckerberg as far away from your love life as possible, but I don’t think we should stop there, either. I believe that ethical, freedom-respecting online services that facilitate people’s social lives, from finding someone to date to staying in touch with friends far away, are an important social good, and that the free software movement has something unique and important to contribute.

Just as we have encouraged free software enthusiasts to move their social media presence from the walled gardens of Facebook to decentralized, federated services like Mastodon, GNU social, Pixelfed, and Diaspora, we would love to be able to point lovelorn free software supporters to an online dating site that will treat them like a human being rather than a commodity to be dissected into chunks of profitable data. So while we can’t endorse a project that’s barely gotten started at all, much less one that’s being built on Kickstarter, we were pleased to see a Redditor introduce the idea of Alovoa, which “aims to be the first widespread free software dating Web application on the Web.” Alovoa is licensed under AGPLv3, which is an excellent signpost for ethical behavior in the future.

Is Alovoa the solution? It’s far too early to say -- but we do know that the only acceptable solution will be a dating site that is 100% free software. And we also know that the free software community possesses the talent and conviction to make that alternative happen. When you’re freely permitted to use, share, study, modify, and share the modifications of the software you own, there are no shackles on your creativity: you can build the programs that you need, and make them available to everyone else who needs them. Perhaps we can solve the problem of how to find love online without sacrificing your privacy, and that’s only the beginning of the many problems we can solve. If we can build free software that offers ordinary people the conveniences they crave without the ethical tradeoffs, then someday, we will have a future where all software is free.

Christopher Allan Webber: State of Spritely for February 2020

Monday 10th of February 2020 09:30:00 PM

We are now approximately 50% of the way through the Samsung Stack Zero grant for Spritely, and only a few months more since I announced the Spritely project at all. I thought this would be a good opportunity to review what has happened so far and what's on the way.

In my view, quite a lot has happened over the course of the last year:

  • Datashards grew out of two Spritely projects, Magenc and Crystal. This provides the "secure storage layer" for the system, and by moving into Datashards has even become its own project (now mostly under the maintainership of Serge Wroclawski, who as it turns out is also co-host with me of Libre Lounge). There's external interest in this from the rest of the federated social web, and it was a topic of discussion in the last meeting of the SocialCG. While not as publicly visible recently, the project is indeed active; I am currently helping advise and assist Serge with some of the ongoing work on optimizations for smaller files, fixing the manifest format to permit larger files, and a more robust HTTP API for stores/registries. (Thank you Serge also for taking on a large portion of this work and responsibility!)

  • Spritely Goblins, the actor model layer of Spritely, continues its development. We are now up to release v0.5. I don't consider the API to be stable, but it is stabilizing. In particular, the object/update model, the synchronous communication layer, and the transactional update support are all very close to stable. Asynchronous programming mostly works but has a few bugs I need to work out, and the distributed programming environment design is coming together enough where I expect to be able to demo it soon.

  • In addition, I have finally started to write docs for Spritely Goblins. I think the tutorial above turned out fairly well; I've had a good amount of review from various parties, and those who have tried it seem to agree. (Please be advised that it requires working with the dev branch of Goblins at the time of writing.) v0.6 should be the first release to have documentation after the major overhaul I did last summer (effectively an entire rewrite of the system, including many changes to the design after doing research into ocap practices). I cannot recommend that anyone else write production-level code using the system yet, but I hope that by the summer things will have congealed enough that this will change.

  • I have made a couple of publicly visible demos of Goblins' design. Weirdly enough all of these have involved ascii art.

    • The proto-version was the Let's Just Be Weird Together demo. Actually it's a bit strange to say this because the LJBWT demo didn't use Goblins, it used a library called DOS/HURD. However, writing this library (and adapting it from DOS/Win) directly informed the rewrite of Goblins, Goblinoid which eventually became Goblins itself, replacing all the old code. This is why I advocate demo-driven-development: the right design of an architecture flows out of a demo of it. (Oh yeah, and uh, it also allowed me to make a present for my 10th wedding anniversary, too.)

    • Continuing in a similar vein, I made the "Season's Greetings" postcard, which Software Freedom Conservancy actually used in their funding campaign this year. This snowy scene used the new rewrite of Goblins and allowed me to push the new "become" feature of Goblins to its limit (the third principle of actor model semantics, taken very literally). It wasn't really obvious to anyone else that this was using Goblins in any interesting way, but I'll say that writing this really allowed me to congeal many things about the update layer, and it also led to uncovering a performance problem, the fix for which yielded a 10x speedup. Having written this demo, I was starting to get the hang of things in the Goblins synchronous layer.

    • Finally there was the Terminal Phase demo. (See the prototype announcement blogpost and the 1.0 announcement.) This was originally designed as a reward for donors for hitting $500/mo on my Patreon account (you can still show up in the credits by donating!), though once 1.0 made it out the door it raised considerable excitement on the r/linux subreddit and on Hacker News, which was nice to see. Terminal Phase helped me finish testing and gaining confidence in the transactional object-update and synchronous call semantics of Spritely Goblins, and I now have no doubt that this layer has a good design. But I think Terminal Phase was the first time that other people could see why Spritely Goblins was exciting, especially once I showed off the time travel debugging in Terminal Phase demo. That last post led people to finally start pinging me asking "when can I use Spritely Goblins?" That's good... I'm glad it's obvious now that Goblins is doing something interesting (though the most interesting things are yet to be demo'ed).

  • I participated in, keynoted, and drummed up enthusiasm for ActivityPub Conference 2019. (I didn't organize though, that was Morgan Lemmer-Webber's doing, alongside Sebastian Lasse and with DeeAnn Little organizing the video recording.) We had a great speaker list and even got Mark S. Miller to keynote. Videos of the event are also available. While that event was obviously much bigger than Spritely, the engagement of the ActivityPub community is obviously important for its success.

  • Relatedly, I continue to co-chair the SocialCG but Nightpool has joined as co-chair which should relieve some pressure there, as I was a bit too overloaded to be able to handle this all on my own. The addition of the SocialHub community forum has also allowed the ActivityPub community to be able to coordinate in a way that does not rely on me being a blocker. Again, not Spritely related directly, but the health of the ActivityPub community is important to Spritely's success.

  • At Rebooting Web of Trust I coordinated with a number of contributors (including Mark Miller) on sketching out plans for secure UI designs. Sadly the paper is incomplete but has given me the framework for understanding the necessary UI components for when we get to the social network layer of Spritely.

  • Further along the lines of sketching out the desiderata of federated social networks, I have written a nearly-complete OcapPub: towards networks of consent. However, there are still some details to be figured out; I have been hammering them out on the cap-talk mailing list (see this post laying out a very ocappub-like design with some known problems, and then this analysis). The ocap community has thankfully been very willing to participate in working with me to hammer out the right security foundations, and I think we're close to the right design details. Of course, the proof of the pudding is in the demo, which has yet to be written.
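The "become" feature mentioned above for the postcard demo is the classic actor-model idea that an actor may designate new behavior for the messages that follow. Here is a minimal Python sketch of just that mechanism; nothing below is Goblins' actual API, and all names are made up for illustration:

```python
class Actor:
    """A toy synchronous actor: delivering a message runs the current
    behavior, which may replace itself ("become") for later messages."""
    def __init__(self, behavior):
        self.behavior = behavior

    def send(self, msg):
        return self.behavior(self, msg)

def counter(n):
    """Behavior for a counter currently holding n."""
    def behavior(actor, msg):
        if msg == "incr":
            actor.behavior = counter(n + 1)   # "become" the next state
        elif msg == "get":
            return n
    return behavior

a = Actor(counter(0))
a.send("incr")
a.send("incr")
assert a.send("get") == 2
```

Goblins layers transactional updates and capability security on top of this core idea; the sketch shows only the bare state-replacement step.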

Okay, so I hope I've convinced you that a lot has happened, and hopefully you feel that I am using my time reasonably well. But there is much, much, much ahead for Spritely to succeed in its goals. So, what's next?

  • I need to finish cleaning up the Goblins documentation and do a v0.6 release with it included. At that point I can start recommending some brave souls to use it for some simple applications.

  • A demo of Spritely Goblins working in a primarily asynchronous environment. This might simply be a port of mudsync as a first step. (Recorded demo of mudsync from a few years ago.) I'm not actually sure. The goal of this isn't to be the "right" social network design (not full OcapPub), just to test the async behaviors of Spritely Goblins. Like the synchronous demos that have already been done, the purpose of this is to congeal and ensure the quality of the async primitives. I expect this and the previous bullet point to be done within the next couple of months, so hopefully by the end of April.

  • Distributed networked programming in Goblins, and associated demo. May expand on the previous demo. Probably will come out about two months later, so end of June.

  • Prototype of the secure UI concepts from the aforementioned secure UIs paper. I expect/hope this to be usable by the end of the third quarter of 2020.

  • Somewhere in-between all this, I'd like to add a demo of being able to securely run untrusted code from third parties, maybe in the MUD demo. Not sure when yet.

  • All along, I continue to expect to push out new updates to Terminal Phase with more fun enemies and powerups to continue to reward donors to the Patreon campaign.

This will probably take most of this year. What you will notice is that this does not explicitly state a tie-in with the ActivityPub network. This is intentional, because the main goal of all the above demos is to prove more foundational concepts before they are all fully integrated. I think we'll see full integration with the existing fediverse begin to come together in early 2021.

Anyway, that's a lot of stuff ahead. I haven't even mentioned my involvement in Libre Lounge, which I've been on hiatus from due to a health issue that has made recording difficult, and from being busy trying to deliver on these foundations, but I expect to be coming back to LL shortly.

I hope I have instilled you with some confidence that I am moving steadily along the abstract Spritely roadmap. (Gosh, I ought to finally put together a website for Spritely, huh?) Things are happening, and interesting ones I think.

But how do you think things are going? Maybe you would like to leave me feedback. If so, feel free to reach out.

Until next time...

FSF Blogs: Thank you for supporting the FSF

Monday 10th of February 2020 03:51:53 PM

On January 17th, we closed the Free Software Foundation (FSF)'s end of the year fundraiser and associate membership drive, bringing 368 new associate members to the FSF community.

This year's fundraiser began with a series of shareable images aiming to bring user freedom issues to the kitchen table, helping to start conversations about the impact that proprietary software has on the autonomy and privacy of our everyday lives. Your enthusiasm in sharing these has been inspiring. We also debuted the ShoeTool video, an animated short presenting a day in the life of an unfortunate elf who is duped into forking over his liberty for the sake of convenience. And we also sent out our biannual issue of the Free Software Bulletin, which had FSF staff writing on topics as diverse as ethical software licensing and online dating.

It is your support of the FSF that makes all of our work possible. Your generosity impacts us on a direct level. It doesn't just keep the lights on, but is also the source of our motivation to fight full-time for software freedom. Your support is at the heart of our work advocating for the use of copyleft and the GPL. It's also what brought seventeen new devices to the RYF program this year, and is what drives our campaigning against Digital Restrictions Management (DRM). We are deeply grateful for the new memberships and donations we have received this year, not to mention the existing members and recurring donors that have enabled us to reach this point. And not to worry, we're working hard to send you the premium gifts we offered as soon as possible!

2020 has started off strong already, with our petition calling on Microsoft to "upcycle" Windows 7 by releasing it as free software, which has reached more than 12,000 signatures in less than a week. And there is much more to come. The campaigns, tech, and licensing teams are all working on ambitious projects that we hope will drive the fight for freedom forward, especially as the FSF enters its 35th year of free software activism.

This year's LibrePlanet: "Free the Future" conference is almost upon us as well, and we're all putting our best into the planning process. LibrePlanet 2020 will see keynotes from speakers including Internet Archive founder Brewster Kahle, and there are still more surprises to come. We hope to see you there.

Andy Wingo: state of the gnunion 2020

Sunday 9th of February 2020 07:44:10 PM

Greetings, GNU hackers! This blog post rounds up GNU happenings over 2019. My goal is to celebrate the software we produced over the last year and to help us plan a successful 2020.

Over the past few months I have been discussing project health with a group of GNU maintainers and we were wondering how the project was doing. We had impressions, but little in the way of data. To that end I wrote some scripts to collect dates and versions for all releases made by GNU projects, as far back as data is available.

In 2019, I count 243 releases, from 98 projects. Nice! Notably, we have the first stable releases from three projects:

GNU Guix
GNU Guix is perhaps the most exciting project in GNU these days. It's a package manager! It's a distribution! It's a container construction tool! It's a package-manager-cum-distribution-cum-container-construction-tool! Hearty congratulations to Guix on their first stable release.
GNU Shepherd
The GNU Daemon Shepherd is a modern dependency-based init service, written in Guile Scheme, and used in Guix. When you install Guix as an operating system, it actually stages Scheme programs from the operating system definition into the Shepherd configuration. So cool!
GNU Backgammon
Version 1.06.002 is not GNU Backgammon's first stable release, but it is the earliest version still available for download. Formerly hosted on a now-defunct site, GNU Backgammon is a venerable foe, and has used neural networks since before they were cool. Welcome back, GNU Backgammon!

The total release counts above are slightly higher than what Mike Gerwitz's scripts count in his "GNU Spotlight", posted on the FSF blog. This could be because, in addition to official release files, I also manually collected release dates for most packages that upload their software elsewhere. There are some releases I don't count, and there were a handful of packages for which I wasn't successful at retrieving release dates. But as a first approximation, it's a relatively complete data set.

I put my scripts in a git repository in case anyone is interested in playing with the data. Some raw CSV files are there as well.

where we at?

Hair toss, check my nails, baby how you GNUing? Hard to tell!

To get us closer to an answer, I calculated the active package count per year. There can be other definitions, but my reading is that an active package is one that has had a stable release within the preceding 3 calendar years. So for 2019, for example, a GNU package is considered active if it had a stable release in 2017, 2018, or 2019. What I got was a graph that looks like this:

What we see is nothing before 1991 -- surely pointing to lacunae in my data set -- then a more or less linear rise in active package count until 2002, some stuttering growth rising to a peak in 2014 at 208 active packages, and from there a steady decline down to 153 active packages in 2019.

Of course, as a metric, active package count isn't precisely the same as project health; GNU ed is indeed the standard editor but it's not GCC. But we need to look for measurements that indirectly indicate project health and this is what I could come up with.
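For concreteness, the active-package metric described above can be computed along these lines. This is an illustrative sketch with toy data, not the actual collection scripts:

```python
from collections import defaultdict

def active_counts(releases, window=3):
    """releases: (package, year) pairs for stable releases.
    A package counts as active in year Y if it had a stable
    release in the window Y-window+1 .. Y (3 calendar years)."""
    years_by_pkg = defaultdict(set)
    for pkg, year in releases:
        years_by_pkg[pkg].add(year)
    all_years = [y for ys in years_by_pkg.values() for y in ys]
    counts = {}
    for y in range(min(all_years), max(all_years) + 1):
        counts[y] = sum(
            any(y - window < r <= y for r in ys)
            for ys in years_by_pkg.values())
    return counts

# Toy data, not the real GNU release history:
releases = [("guix", 2019), ("shepherd", 2019), ("backgammon", 2014)]
counts = active_counts(releases)
assert counts[2019] == 2   # guix and shepherd released within 2017-2019
assert counts[2016] == 1   # only backgammon's 2014 release is in window
```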

Looking a little deeper, I tabulated the first and last release date for each GNU package, and then grouped them by year. In this graph, the left blue bars indicate the number of packages making their first recorded release, and the right green bars indicate the number of packages making their last release. Obviously a last release in 2019 indicates an active package, so it's to be expected that we have a spike in green bars on the right.

What this graph indicates is that GNU had an uninterrupted growth phase from its beginning until 2006, with more projects being born than dying. Things are mixed until 2012 or so, and since then we see many more projects making their last release and above all, very few packages "being born".

where we going?

I am not sure exactly what steps GNU should take in the future but I hope that this analysis can be a good conversation-starter. I do have some thoughts but will post in a follow-up. Until then, happy hacking in 2020!

GNU Guix: Outreachy May 2020 to August 2020 Status Report I

Sunday 9th of February 2020 02:30:00 PM

We are happy to announce that for the fourth time GNU Guix offers a three-month internship through Outreachy, the inclusion program for groups traditionally underrepresented in free software and tech. We currently propose three subjects to work on:

  1. Create Netlink bindings in Guile.
  2. Improve internationalization support for the Guix Data Service.
  3. Integration of desktop environments into GNU Guix.

The initial application deadline is on Feb. 25, 2020 at 4PM UTC.

The final project list is announced on Feb. 25, 2020.

Should you have any questions regarding the internship, please check out the timeline, information about the application process, and the eligibility rules.

If you’d like to contribute to computing freedom, Scheme, functional programming, or operating system development, now is a good time to join us. Let’s get in touch on the mailing lists and on the #guix channel on the Freenode IRC network!

Last year we had the pleasure to welcome Laura Lazzati as an Outreachy intern working on documentation video creation, which led to the videos you can now see on the home page.

About GNU Guix

GNU Guix is a transactional package manager and an advanced distribution of the GNU system that respects user freedom. Guix can be used on top of any system running the kernel Linux, or it can be used as a standalone operating system distribution for i686, x86_64, ARMv7, and AArch64 machines.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. When used as a standalone GNU/Linux distribution, Guix offers a declarative, stateless approach to operating system configuration management. Guix is highly customizable and hackable through Guile programming interfaces and extensions to the Scheme language.
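The transactional upgrades and roll-backs mentioned above grow out of a simple model: every operation builds a complete new profile generation, old generations are kept, and switching between them is a single atomic step. A rough sketch of that idea follows; the class and method names are hypothetical, and this is not Guix's implementation:

```python
class Profile:
    """Toy model of generation-based profiles: each install builds a
    whole new generation, and roll-back just moves the pointer."""
    def __init__(self):
        self.generations = [frozenset()]  # generation 0: empty profile
        self.current = 0

    def install(self, *packages):
        new = self.generations[self.current] | set(packages)
        self.generations.append(new)      # old generations are kept
        self.current = len(self.generations) - 1

    def roll_back(self):
        if self.current > 0:
            self.current -= 1             # one atomic pointer update

p = Profile()
p.install("emacs")
p.install("guile")
assert p.generations[p.current] == {"emacs", "guile"}
p.roll_back()
assert p.generations[p.current] == {"emacs"}
```

Because no generation is ever mutated in place, an interrupted upgrade can never leave the profile half-changed.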

Sylvain Beucler: Escoria - point-and-click system for the Godot engine

Saturday 8th of February 2020 04:32:08 PM

Escoria, the point-and-click system for the Godot game engine, is now working again with the latest Godot (3.2).

Godot is a general-purpose game engine. It comes with an extensive graphic editor with skeleton and animation support, and can create all sorts of games and mini-games, making it an interesting choice for point-and-click games.

The Escoria point-and-click template provides notably a dialog system and the Esc language to write the story and interactions. It was developed for the Dog Mendonça and Pizzaboy crowdfunded game and later released as free software. A community is developing the next version, but the current version has been incompatible with the current Godot engine. So I upgraded the game template as well as the Escoria in Daïza tutorial game to Godot 3.2. Enjoy!

HTML5 support is still lacking, so I might get a compulsive need to fix it in the future.

Andy Wingo: lessons learned from guile, the ancient & spry

Friday 7th of February 2020 11:38:00 AM

Greets, hackfolk!

Like just about every year, last week I took the train up to Brussels for FOSDEM, the messy and wonderful carnival of free software and of those that make it. Mostly I go for the hallway track: to see old friends, catch up, scheme about future plans, and refill my hacker culture reserves.

I usually try to see if I can get a talk or two in, and this year was no exception. First on my mind was the recent release of Guile 3. This was the culmination of a 10-year plan of work and so obviously there are some things to say! But at the same time, I wanted to reflect back a bit and look at the past with a bit of distance.

So in the end, my one talk was two talks. Let's start with the first one. (I'm trying a new thing where I share my talks as blog posts. We'll see how this goes. I know the rendering can be a bit off relative to the slides, but hopefully it's good enough. If you prefer, you can just watch the video instead!)

Celebrating Guile 3

FOSDEM 2020, Brussels

Andy Wingo | | @andywingo

So yeah let's celebrate! I co-maintain the Guile implementation of Scheme. It's a programming language. Guile 3, in summary, is just Guile, but faster. We added a simple just-in-time compiler as well as a bunch of ahead-of-time optimizations. The result is that it runs faster -- sometimes by a lot!

In the image above you can see Guile 3's performance on a number of microbenchmarks, relative to Guile 2.2, sorted by speedup. The baseline is 1.0x as fast. You can see that besides the first couple microbenchmarks where things are a bit inconclusive (click for full-size image), everything gets faster. Most are at least 2x as fast, and one benchmark is even 32x as fast. (Note the logarithmic scale on the Y axis.)

I only took a look at microbenchmarks at the end of the Guile 3 series; before that, I was mostly going by instinct. It's a relief to find out that in this case, my instincts did align with improvement.

mini-benchmark: eval

(primitive-eval
 '(let fib ((n 30))
    (if (< n 2)
        n
        (+ (fib (- n 1))
           (fib (- n 2))))))

Guile 1.8: primitive-eval written in C

Guile 2.0+: primitive-eval in Scheme

Taking a look at a more medium-sized benchmark, let's compute the 30th fibonacci number, but using the interpreter instead of compiling the procedure. In Guile 2.0 and up, the interpreter (primitive-eval) is implemented in Scheme, so it's a good test of an important small Scheme program.

Before 2.0, though, primitive-eval was actually implemented in C. This had a number of disadvantages, notably that it prevented tail calls between interpreted and compiled code. When we switched to a Scheme implementation of primitive-eval, we knew we would have a performance hit, but we thought that we would gain it back eventually as the compiler got better.

As you can see, it took a while before the compiler and run-time improved to the point that primitive-eval in Scheme reached the speed of its old hand-tuned C implementation, but for Guile 3, we finally got there. Note again the logarithmic scale on the Y axis.
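To make "the interpreter interpreting fib" concrete, here is the same shape of program sketched in Python: a tiny AST interpreter, written in the host language, evaluating the fib expression from the benchmark. This is purely an illustration, not Guile's primitive-eval:

```python
# (if (< n 2) n (+ (fib (- n 1)) (fib (- n 2))))
FIB_BODY = ("if", ("<", "n", 2),
            "n",
            ("+", ("fib", ("-", "n", 1)),
                  ("fib", ("-", "n", 2))))

def evaluate(expr, env):
    """Interpret a tiny Lisp-ish AST: literals, variables, if, calls."""
    if isinstance(expr, int):
        return expr
    if isinstance(expr, str):
        return env[expr]
    op, *args = expr
    if op == "if":
        test, then, alt = args
        return evaluate(then if evaluate(test, env) else alt, env)
    vals = [evaluate(a, env) for a in args]
    if op == "<":
        return vals[0] < vals[1]
    if op == "+":
        return vals[0] + vals[1]
    if op == "-":
        return vals[0] - vals[1]
    if op == "fib":                      # the one user-defined procedure
        return evaluate(FIB_BODY, {**env, "n": vals[0]})
    raise ValueError("unknown operator: %r" % op)

assert evaluate(("fib", 10), {}) == 55
```

Every arithmetic step here pays interpretive overhead, which is why making such an interpreter run fast is a good test of the compiler underneath it.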

macro-benchmark: guix

guix build libreoffice ghc-pandoc guix \
  --dry-run --derivation

7% faster

guix system build config.scm \
  --dry-run --derivation

10% faster

Finally, taking a real-world benchmark, the Guix package manager is implemented entirely in Scheme. All ten thousand packages are defined in Scheme, the building scripts are in Scheme, the initial RAM disk is in Scheme -- you get the idea. Guile performance in Guix can have an important effect on user experience. As you can see, Guile 3 lowered elapsed time for some operations by around 10 percent or so. Of course there's a lot of I/O going on in addition to computation, so Guile running twice as fast will rarely make Guix run twice as fast (Amdahl's law and all that).

spry /sprī/

  • adjective: active; lively

So, when I was thinking about words that describe Guile, the word "spry" came to mind.

spry /sprī/

  • adjective: (especially of an old person) active; lively

But actually when I went to look up the meaning of "spry", Collins Dictionary says that it especially applies to the agèd. At first I was a bit offended, but I knew in my heart that the dictionary was right.

Lessons Learned from Guile, the Ancient & Spry

FOSDEM 2020, Brussels

Andy Wingo | | @andywingo

That leads me into my second talk.

guile is ancient

2010: Rust

2009: Go

2007: Clojure

1995: Ruby

1995: PHP

1995: JavaScript

1993: Guile (27 years before 3.0!)

It's common for a new project to be lively, but Guile is definitely not new. People have been born, raised, and earned doctorates in programming languages in the time that Guile has been around.

built from ancient parts

1991: Python

1990: Haskell

1990: SCM

1989: Bash

1988: Tcl

1988: SIOD

Guile didn't appear out of nothing, though. It was hacked up from the pieces of another Scheme implementation called SCM, which itself was initially based on Scheme in One Defun (SIOD), back before the Berlin Wall fell.

written in an ancient language

1987: Perl

1984: C++

1975: Scheme

1972: C

1958: Lisp

1958: Algol

1954: Fortran


1930s: λ-calculus (84 years ago!)

But it goes back further! The Scheme language, of which Guile is an implementation, dates from 1975, before I was born; and you can, if you choose, trace the lines back to the lambda calculus, created in the mid-30s as a notation for computation. I suppose at this point I should say the mid-1930s, to disambiguate.

The point is, Guile is old! Statistically, most software projects from olden times are now dead. How has Guile managed to survive and (sometimes) thrive? Surely there must be some lesson or other that can be learned here.

ancient & spry

Men make their own history, but they do not make it as they please; they do not make it under self-selected circumstances, but under circumstances existing already, given and transmitted from the past.

The tradition of all dead generations weighs like a nightmare on the brains of the living. [...]

Eighteenth Brumaire of Louis Bonaparte, Marx, 1852

I am no philosopher of history, but I know that there are some ways of looking at the past that do not help me understand things. One is the arrow of enlightened progress, in which events exist in a causal chain, each producing the next. It doesn't help me understand the atmosphere, tensions, and possibilities inherent at any particular point. I find the "progress" theory of history to be an extreme form of selection bias.

Much more helpful to me is the Hegelian notion of dialectics: that at any given point in time there are various tensions at work. In our field, an example could be memory safety versus systems programming. These tensions create an environment that favors actions that lead towards resolution of the tensions. It doesn't mean that there's only one way to resolve the tensions, and it's not an automatic process -- people still have to do things. But the tendency is to ratchet history forward to a new set of tensions.

The history of a project, to me, is then a process of dialectic tensions and resolutions. If the project survives, as Guile has, then it should teach us something about the way this process works in practice.

ancient & spry

Languages evolve; how to remain minimal?

Dialectic opposites

  • world and guile

  • stable and active

  • ...

Lessons learned from inside Hegel’s motor of history

One dialectic is the tension between the world's problems and what tools Guile offers to understand and solve them. In 1993, the web didn't really exist. In 2033, if Guile doesn't run well in a web browser, probably it will be dead. But this process operates very slowly, for an old project; Guile isn't built on CORBA or something ephemeral like that, so we don't have very much data here.

The tension between being a stable base for others to build on, and in being a dynamic project that improves and changes, is a key tension that this talk investigates.

In the specific context of Guile, and for the audience of the FOSDEM minimal languages devroom, we should recognize that for a software project, age and minimalism don't necessarily go together. Software gets features over time and becomes bigger. What does it mean for a minimal language to evolve?

hill-climbing is insufficient

Ex: Guile 1.8; Extend vs Embed

One key lesson that I have learned is that the strategy of making only incremental improvements is a recipe for death, in the long term. The natural result is that you reach what you perceive to be the most optimal state of your project. Any change can only make it worse, so you stop moving.

This is what happened to Guile around version 1.8: we had taken the paradigm of the interpreter as language implementation strategy as far as it could go. There were only around 150 commits to Guile in 2007. We were stuck.

users stay unless pushed away

Inertial factor: interface

  • Source (API)

  • Binary (ABI)

  • Embedding (API)

  • CLI

  • ...

Ex: Python 3; local-eval; R6RS syntax; set!, set-car!

So how do we make change, in such a circumstance? You could start a new project, but then you wouldn't have any users. It would be nice to change and keep your users. Fortunately, it turns out that users don't really go away; yes, they trickle out if you don't do anything, but unless you change in an incompatible way, they stay with you, out of inertia.

Inertia is good and bad. It does conflict with minimalism as a principle; if you were to design Scheme in 2020, you would not include mutable variables or even mutable pairs. But they are still with us because if we removed them, we'd break too many users.

Users can even make you add back things that you had removed. In Guile 2.0, we removed the capability to evaluate an expression at run-time within the lexical environment of an expression, as we didn't know how to implement this outside an interpreter. It turns out this was so important to users that we had to add local-eval back to Guile, later in the 2.0 series. (Fortunately we were able to do it in a way that layered on lower-level facilities; this approach reconciled me to the solution.)

you can’t keep all users

What users say: don’t change or remove existing behavior

But: sometimes losing users is OK. Hard to know when, though

No change at all == death

  • Natural result of hill-climbing

Ex: psyntax; BDW-GC mark & finalize; compile-time; Unicode / locales

Unfortunately, the need to change means that sometimes you will lose users. It's either a dead project, or losing users.

In Guile 1.8, for example, the macro expander ran lazily: it would only expand code the first time it ran it. This was good for start-up time, because not all code is evaluated in the course of a simple script. Lazy expansion allowed us to start doing important work sooner. However, this approach caused immense pain to people that wanted "proper" Scheme macros that preserved lexical scoping; the state of the art was to eagerly expand an entire file. So we switched, and at the same time added a notion of compile-time. This compromise kept good start-up time while allowing fancy macros.
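The start-up trade-off between the two strategies can be sketched in a few lines. This is a toy model of expansion timing, not Guile's expander; the names are invented for illustration:

```python
log = []

def expand(form):
    """Stand-in for a macro expander; records when expansion happens."""
    log.append("expand:" + form)
    return form.upper()

def load_eager(forms):
    """Eager strategy: expand every form at load time."""
    return [expand(f) for f in forms]

def load_lazy(forms):
    """Lazy strategy: expand a form only the first time it is run."""
    cache = {}
    def run(i):
        if i not in cache:
            cache[i] = expand(forms[i])
        return cache[i]
    return run

forms = ["main", "helper"]
load_eager(forms)                    # both forms expanded up front
assert log == ["expand:main", "expand:helper"]

log.clear()
run = load_lazy(forms)
assert log == []                     # nothing expanded yet: fast start-up
run(0)
assert log == ["expand:main"]        # only code actually run gets expanded
```

The lazy column wins on start-up, but any side effects of expansion now happen at unpredictable times, which is exactly the pain point the eager switch addressed.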

But eager expansion was a change. Users that relied on side effects from macro expansion would see them at compile-time instead of run-time. Users of old "defmacros" that could previously splice in live Scheme closures as literals in expanded source could no longer do that. I think it was the right choice but it did lose some users. In fact I just got another bug report related to this 10-year-old change last week.

every interface is a cost

Guile binary ABI; compiled Scheme files

Make compatibility easier: minimize interface

Ex: scm_sym_unquote, GOOPS, Go, Guix

So if you don't want to lose users, don't change any interface. The easiest way to do this is to minimize your interface surface. In Go, for example, they mostly haven't had dynamic-linking problems because that's not a thing they do: all code is statically linked into binaries. Similarly, Guix doesn't define a stable API, because all of its code is maintained in one "monorepo" that can develop in lock-step.

You always have some interfaces, though. For example, Guix can't change its command-line interface from one day to the next, because users would complain. But it's been surprising to me the extent to which Guile has interfaces that I didn't consider. Recently, for example, in the 3.0 release, we unexported some symbols by mistake. Users complained, so we're putting them back in now.

parallel installs for the win

Highly effective pattern for change



Changed ABI is new ABI; it should have a new name

Ex: make-struct/no-tail, GUILE_PKG([2.2]), libtool

So how does one do incompatible change? If "don't" isn't a sufficient answer, then parallel installs is a good strategy. For example in Guile, users don't have to upgrade to 3.0 until they are ready. Guile 2.2 happily installs in parallel with Guile 3.0.

As another small example, there's a function in Guile called make-struct (old doc link), whose first argument is the number of "tail" slots, followed by initializers for all slots (normal and "tail"). This tail feature is weird and I would like to remove it. Unfortunately I can't just remove the argument, so I had to make a new function, make-struct/no-tail, which exists in parallel with the old version that I can't break.
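The parallel-naming pattern is easy to state in any language: the changed interface gets a new name, and the old one keeps working unchanged while callers migrate at their own pace. A hypothetical sketch (these are not Guile's actual functions, just an illustration of the shape):

```python
def make_struct(tail_slots, *inits):
    """Old-style interface (hypothetical): the first argument asks for
    extra 'tail' slots appended after the normal ones."""
    return list(inits) + [None] * tail_slots

def make_struct_no_tail(*inits):
    """The replacement lives under a new name; both coexist, so the
    old entry point never has to break."""
    return list(inits)

assert make_struct(2, "x") == ["x", None, None]
assert make_struct_no_tail("x") == ["x"]
```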

deprecation facilitates migration

__attribute__ ((__deprecated__))

(issue-deprecation-warning
 "(ice-9 mapping) is deprecated."
 " Use srfi-69 or rnrs hash tables instead.")

scm_c_issue_deprecation_warning
  ("Arbiters are deprecated. "
   "Use mutexes or atomic variables instead.");

begin-deprecated, SCM_ENABLE_DEPRECATED

Fortunately there is a way to encourage users to migrate from old interfaces to new ones: deprecation. In Guile this applies to all of our interfaces (binary, source, etc). If a feature is marked as deprecated, we cause its use to issue a warning, ideally at compile-time when users responsible for the package can fix it. You can even add __attribute__((__deprecated__)) on C types!
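A rough Python analogue of this deprecation machinery, using the standard `warnings` module: the message mirrors one of Guile's, but nothing here is Guile's actual API, and `make_arbiter` is a hypothetical old entry point:

```python
import warnings

def issue_deprecation_warning(message):
    """Rough analogue of Guile's issue-deprecation-warning: stacklevel
    points the warning at the caller, so the responsible code is easy
    to find and fix."""
    warnings.warn(message, DeprecationWarning, stacklevel=3)

def make_arbiter(name):
    """Hypothetical old interface, kept working through its deprecation
    period but warning on every use."""
    issue_deprecation_warning(
        "Arbiters are deprecated. Use mutexes or atomic variables instead.")
    return {"name": name, "locked": False}

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    arb = make_arbiter("demo")

assert arb == {"name": "demo", "locked": False}
assert issubclass(caught[0].category, DeprecationWarning)
```

The old interface keeps working, so nothing breaks overnight, but every build log nudges users toward the replacement.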

the arch-pattern

Replace, Deprecate, Remove

All change is possible; question is only length of deprecation period

Applies to all interfaces

Guile deprecation period generally one stable series

Ex: scm_t_uint8; make-struct; Foreign objects; uniform vectors

Finally, you end up in a situation where you have replaced the old interface and issued deprecation warnings to help users migrate. The next step is to remove the old interface. If you don't do this, you are failing as a project maintainer -- your project becomes literally unmaintainable as it just grows and grows.

This strategy applies to all changes. The deprecation period may last a while, and it may be that the replacement you built doesn't serve the purpose. There is still a dialog with the users that needs to happen. As an example, I made a replacement for the "SMOB" facility in Guile that allows users to define new types, backed by C interfaces. This new "foreign object" facility might not actually be good enough to replace SMOBs; since I haven't formally deprecated SMOBs, I don't know yet because users are still using the old thing!

change produces a new stable point

Stability within series: only additions

Corollary: dependencies must be at least as stable as you!

  • for your definition of stable

  • social norms help (GNU, semver)

Ex: libtool; unistring; gnulib

In my experience, the old management dictum that "the only constant is change" does not describe software. Guile changes, then it becomes stable for a while. You need an unstable series to escape hill-climbing; then, once you have found your new hill, you start climbing again in the stable series.

Once you reach your stable point, the projects you rely on need to exhibit the same degree of stability that you envision for your own project. You can't build a web site that you expect to maintain for 10 years on technology that fundamentally changes every 6 months. But stable dependencies aren't something you can ensure technically; rather, stability relies on the social norms of the people who make the software you use.

who can crank the motor of history?

All libraries define languages

Allow user to evolve the language

  • User functionality: modules (Guix)

  • User syntax: macros (yay Scheme)

Guile 1.8 perf created tension

  • incorporate code into Guile

  • large C interface “for speed”

Compiler removed pressure on C ABI

Empowered users need less from you

A dialectic process does not progress on its own: it requires actions. As a project maintainer, some of my actions are because I want to do them. Others are because users want me to do them. The user-driven actions are generally a burden and as a lazy maintainer, I want to minimize them.

Here I think Guile has to a large degree escaped some of the pressures that weigh on other languages, for example Python. Because Scheme allows users to define language features that exist on par with "built-in" features, users don't need my approval or intervention to add (say) new syntax to the language they work in. Furthermore, their work can still compose with the work of others, even if the others don't buy in to their language extensions.

Still, Guile 1.8 did have a dynamic whereby the relatively poor performance of having to run all code through primitive-eval meant that users were pushed towards writing extensions in C. This in turn pushed Guile to expose all of its guts for access from C, which obviously has led to an overbloated C API and ABI. Happily the work on the Scheme compiler has mostly relieved this pressure, and we may therefore be able to trim the size of the C API and ABI over time.

contributions and risk

From maintenance point of view, all interface is legacy

Guile: Sometimes OK to accept user modules when they are more stable than Guile

In-tree users keep you honest

Ex: SSAX, fibers, SRFI

It can be a good strategy to "sediment" solutions to common use cases into Guile itself. This can improve the minimalism of an entire ecosystem of code. The maintenance burden has to be minimal, however; Guile has sometimes adopted experimental code into its repository, and without active maintenance, it soon becomes stale relative to what users and the module maintainers expect.

I would note an interesting effect: pieces of code that were adopted into Guile become a snapshot of the coding style at that time. It's useful to have some in-tree users because it gives you a better idea about how a project is seen from the outside, from a code perspective.

sticky bits

Memory management is an ongoing thorn

Local maximum: Boehm-Demers-Weiser conservative collector

How to get to precise, generational GC?

Not just Guile; e.g. CPython __del__

There are some points that resist change. The stickiest of these is the representation of heap-allocated Scheme objects in C. Guile currently uses a garbage collector that "automatically" finds all live Scheme values on the C stack and in registers. It was the right choice at the time, given our maintenance budget. But to get the next bump in performance, we need to switch to a generational garbage collector. It's hard to do that without a lot of pain to C users, essentially because the C language is too weak to express the patterns that we would need. I don't know how to proceed.

I would note, though, that memory management is a kind of cross-cutting interface, and it's not just Guile that has problems changing it; I understand PyPy has had a lot of problems with changes to when Python destructors get called, due to its switch from reference counting to a proper GC.


We are here: stability

And then?

  • Parallel-installability for source languages: #lang

  • Sediment idioms from Racket to evolve Guile user base

Remove myself from “holding the crank”

So where are we going? Nowhere, for the moment; or rather, up the hill. We just released Guile 3.0, so let's just appreciate that for the time being.

But as far as next steps in language evolution, I think in the short term they are essentially to further enable change while further sedimenting good practices into Guile. On the change side, we need parallel installability for entire languages. Racket did a great job facilitating this with #lang and we should just adopt that.

As for sedimentation, we should step back and consider whether any common usage patterns our users have built should be included in core Guile, and we should widen our gaze to Racket as well. It will take some effort, both from a technical perspective and in building social/emotional consensus about how much change is good and how bold versus conservative to be: putting the dialog into dialectic.

dialectic, boogie woogie woogie

#guile on freenode


Happy hacking!

Hey that was the talk! Hope you enjoyed the writeup. Again, video and slides available on the FOSDEM web site. Happy hacking!

FSF News: GNU-FSF cooperation update

Thursday 6th of February 2020 10:00:00 PM

The Free Software Foundation and the GNU Project leadership are defining how these two separate groups cooperate. Our mutual aim is to work together as peers, while minimizing change in the practical aspects of this cooperation, so we can advance in our common free software mission.

Alex Oliva, Henry Poole and John Sullivan (board members or officers of the FSF), and Richard Stallman (head of the GNU Project), have been meeting to develop a general framework which will serve as the foundation for further discussion about specific areas of cooperation. Together we have been considering the input received from the public on and We urge people to send any further input by February 13, because we expect to finish this framework soon.

This joint announcement can also be read on

screen @ Savannah: GNU Screen v.4.8.0

Wednesday 5th of February 2020 08:48:18 PM

I'm announcing availability of GNU Screen v.4.8.0

Screen is a full-screen window manager that multiplexes a physical
terminal between several processes, typically interactive shells.

This release
  * Improves startup time by only polling for already open files to close
  * Fixes:
       - Fix for segfault if termcap doesn't have Km entry
       - Make screen exit code be 0 when checking --version
       - Fix potential memory corruption when using OSC 49

The last fix addresses a potential memory overwrite of quite a big size (~768
bytes), and even though I'm not sure about the potential exploitability of
that issue, I highly recommend that everyone upgrade as soon as possible.
This issue is present at least since v.4.2.0 (haven't checked earlier).
Thanks to pippin who brought this to my attention.

For full list of changes see

Release is available for download at:
or your closest mirror (may have some delay)

Please report any bugs or regressions.

Applied Pokology: Hyperlink Support in GNU Poke

Sunday 2nd of February 2020 12:00:00 AM

FOSDEM 2020 is over, and hyperlink support has just landed for GNU Poke!

Wait, Hyperlinks!?

What do hyperlinks, a web concept, mean for GNU Poke, a terminal application?

For many years now, terminal emulators have been detecting http:// URLs in the output of any program and giving the user a chance to click on them and immediately navigate to the corresponding web page. In 2017, Egmont Koblinger made a proposal for supporting general hyperlinks in terminal emulators. Gnome Terminal, iTerm and a few other terminal emulators have already implemented this proposal in their latest releases. With Egmont's proposal, an application can emit any valid URI and have the terminal emulator take the user to that resource.

libc @ Savannah: The GNU C Library version 2.31 is now available

Saturday 1st of February 2020 01:31:10 PM

The GNU C Library

The GNU C Library version 2.31 is now available.

The GNU C Library is used as the C library in the GNU system and
in GNU/Linux systems, as well as many other systems that use Linux
as the kernel.

The GNU C Library is primarily designed to be a portable
and high performance C library.  It follows all relevant
standards including ISO C11 and POSIX.1-2017.  It is also
internationalized and has one of the most complete
internationalization interfaces known.

The GNU C Library webpage is at

Packages for the 2.31 release may be downloaded from:

The mirror list is at

NEWS for version 2.31

Major new features:

  • The GNU C Library now supports a feature test macro _ISOC2X_SOURCE

  to enable features from the draft ISO C2X standard.  Only some
  features from this draft standard are supported by the GNU C
  Library, and as the draft is under active development, the set of
  features enabled by this macro is liable to change.  Features from
  C2X are also enabled by _GNU_SOURCE, or by compiling with "gcc

  • The <math.h> functions that round their results to a narrower type

  now have corresponding type-generic macros in <tgmath.h>, as defined
  in TS 18661-1:2014 and TS 18661-3:2015 as amended by the resolution
  of Clarification Request 13 to TS 18661-3.

  • The function pthread_clockjoin_np has been added, enabling join with

  a terminated thread with a specific clock.  It allows waiting
  against CLOCK_MONOTONIC and CLOCK_REALTIME.  This function is a GNU
  extension.

  • New locale added: mnw_MM (Mon language spoken in Myanmar).
  • The DNS stub resolver will optionally send the AD (authenticated

  data) bit in queries if the trust-ad option is set via the options
  directive in /etc/resolv.conf (or if RES_TRUSTAD is set in
  _res.options).  In this mode, the AD bit, as provided by the name
  server, is available to applications which call res_search and
  related functions.  In the default mode, the AD bit is not set in
  queries, and it is automatically cleared in responses, indicating a
  lack of DNSSEC validation.  (Therefore, the name servers and the
  network path to them are treated as untrusted.)
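  A minimal sketch of the configuration this describes (192.0.2.1 is a
  placeholder address from the documentation range; the resolver it names
  must actually be trusted and DNSSEC-validating):

```
# /etc/resolv.conf
# 192.0.2.1 is a placeholder; it must be a trusted, DNSSEC-validating
# resolver reached over a trusted network path.
nameserver 192.0.2.1
options trust-ad
```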

Deprecated and removed features, and other changes affecting compatibility:

  • The totalorder and totalordermag functions, and the corresponding

  functions for other floating-point types, now take pointer arguments
  to avoid signaling NaNs possibly being converted to quiet NaNs in
  argument passing.  This is in accordance with the resolution of
  Clarification Request 25 to TS 18661-1, as applied for C2X.
  Existing binaries that pass floating-point arguments directly will
  continue to work.

  • The obsolete function stime is no longer available to newly linked

  binaries, and its declaration has been removed from <time.h>.
  Programs that set the system time should use clock_settime instead.

  • We plan to remove the obsolete function ftime, and the header

  <sys/timeb.h>, in a future version of glibc.  In this release, the
  header still exists but calling ftime will cause a compiler warning.
  All programs should use gettimeofday or clock_gettime instead.

  • The gettimeofday function no longer reports information about a

  system-wide time zone.  This 4.2-BSD-era feature has been deprecated
  for many years, as it cannot handle the full complexity of the
  world's timezones, but hitherto we have supported it on a
  best-effort basis.  Changes required to support 64-bit time_t on
  32-bit architectures have made this no longer practical.

  As of this release, callers of gettimeofday with a non-null 'tzp'
  argument should expect to receive a 'struct timezone' whose
  tz_minuteswest and tz_dsttime fields are zero.  (For efficiency
  reasons, this does not always happen on a few Linux-based ports.
  This will be corrected in a future release.)

  All callers should supply a null pointer for the 'tzp' argument to
  gettimeofday.  For accurate information about the time zone
  associated with the current time, use the localtime function.

  gettimeofday itself is obsolescent according to POSIX.  We have no
  plans to remove access to this function, but portable programs
  should consider using clock_gettime instead.

  • The settimeofday function can still be used to set a system-wide

  time zone when the operating system supports it.  This is because
  the Linux kernel reused the API, on some architectures, to describe
  a system-wide time-zone-like offset between the software clock
  maintained by the kernel, and the "RTC" clock that keeps time when
  the system is shut down.

  However, to reduce the odds of this offset being set by accident,
  settimeofday can no longer be used to set the time and the offset
  simultaneously.  If both of its two arguments are non-null, the call
  will fail (setting errno to EINVAL).

  Callers attempting to set this offset should also be prepared for
  the call to fail and set errno to ENOSYS; this already happens on
  the Hurd and on some Linux architectures.  The Linux kernel
  maintainers are discussing a more principled replacement for the
  reused API.  After a replacement becomes available, we will change
  settimeofday to fail with ENOSYS on all platforms when its 'tzp'
  argument is not a null pointer.

  settimeofday itself is obsolescent according to POSIX.  Programs
  that set the system time should use clock_settime and/or the adjtime
  family of functions instead.  We may cease to make settimeofday
  available to newly linked binaries after there is a replacement for
  Linux's time-zone-like offset API.

  • SPARC ISA v7 is no longer supported.  v8 is still supported, but

  only if the optional CAS instruction is implemented (for instance,
  LEON processors are still supported, but SuperSPARC processors are

  As the oldest 64-bit SPARC ISA is v9, this only affects 32-bit configurations.

  • If a lazy binding failure happens during dlopen, during the

  execution of an ELF constructor, the process is now terminated.
  Previously, the dynamic loader would return NULL from dlopen, with
  the lazy binding error captured in a dlerror message.  In general,
  this is unsafe because resetting the stack in an arbitrary function
  call is not possible.

  • For MIPS hard-float ABIs, the GNU C Library will be configured to

  need an executable stack unless explicitly configured at build time
  to require minimum kernel version 4.8 or newer.  This is because
  executing floating-point branches on a non-executable stack on Linux
  kernels prior to 4.8 can lead to application crashes for some MIPS
  configurations. While PT_GNU_STACK is currently not widely used on
  MIPS, future releases of GCC are expected to enable a non-executable
  stack by default via PT_GNU_STACK, which is likely to trigger a crash
  on older kernels.

  The GNU C Library can be built with --enable-kernel=4.8.0 in order
  to keep a non-executable stack while dropping support for older kernels.

  • System call wrappers for time system calls now use the new time64

  system calls when available. On 32-bit targets, these wrappers
  attempt to call the new system calls first and fall back to the
  older 32-bit time system calls if they are not present.  This may
  cause issues in environments that cannot handle unsupported system
  calls gracefully by returning -ENOSYS. Seccomp sandboxes are
  affected by this issue.

Changes to build and runtime requirements:

  • It is no longer necessary to have recent Linux kernel headers to

  build working (non-stub) system call wrappers on all architectures
  except 64-bit RISC-V.  64-bit RISC-V requires a minimum kernel
  headers version of 5.0.

  • The ChangeLog file is no longer present in the toplevel directory of

  the source tree.  ChangeLog files are located in the ChangeLog.old
  directory as ChangeLog.N where the highest N has the latest entries.

Security related changes:

  CVE-2019-19126: ld.so failed to ignore the LD_PREFER_MAP_32BIT_EXEC
  environment variable during program execution after a security
  transition, allowing local attackers to restrict the possible
  mapping addresses for loaded libraries and thus bypass ASLR for a
  setuid program.  Reported by Marcin Kościelnicki.

The following bugs are resolved with this release:

  [12031] localedata: iconv -t ascii//translit with Greek characters
  [15813] libc: Multiple issues in __gen_tempname
  [17726] libc: [arm, sparc] profil_counter should be compat symbol
  [18231] libc: ipc_perm struct's mode member has wrong type in
  [19767] libc: vdso is not used with static linking
  [19903] hurd: Shared mappings not being inherited by children
  [20358] network: RES_USE_DNSSEC sets DO; should also have a way to set
  [20839] dynamic-link: Incomplete rollback of dynamic linker state on
    linking failure
  [23132] localedata: Missing transliterations in Miscellaneous
    Mathematical Symbols-A/B Unicode blocks
  [23518] libc: Eliminate __libc_utmp_jump_table
  [24026] malloc: malloc_info() returns wrong numbers
  [24054] localedata: Many locales are missing date_fmt
  [24214] dynamic-link: user defined ifunc resolvers may run in ldd mode
  [24304] dynamic-link: Lazy binding failure during ELF
    constructors/destructors is not fatal
  [24376] libc: RISC-V symbol size confusion with _start
  [24682] localedata: zh_CN first weekday should be Monday per GB/T
  [24824] libc: test-in-container does not install charmap files
    compatible with localedef
  [24844] regex: regex bad pointer / leakage if malloc fails
  [24867] malloc: Unintended malloc_info formatting changes
  [24879] libc: login: utmp alarm timer can arrive after lock
  [24880] libc: login: utmp implementation uses struct flock with
  [24882] libc: login: pututline uses potentially outdated cache
  [24899] libc: Missing nonstring attributes in <utmp.h>, <utmpx.h>
  [24902] libc: login: Repeating pututxline on EINTR/EAGAIN causes stale
    utmp entries
  [24916] dynamic-link: [MIPS] Highest EI_ABIVERSION value not raised to
  [24930] dynamic-link: dlopen of PIE executable can result in
    _dl_allocate_tls_init assertion failure
  [24950] localedata: Top-of-tree glibc does not build with top-of-tree
    GCC (stringop-overflow error)
  [24959] time: librt IFUNC resolvers for clock_gettime and other
    clock_* functions can lead to crashes
  [24967] libc: jemalloc static linking causes runtime failure
  [24986] libc: alpha: new getegid, geteuid and getppid syscalls used
  [25035] libc: sbrk() failure handled poorly in tunables_strdup
  [25087] dynamic-link: ldconfig mishandles unusual .dynstr placement
  [25097] libc: new -Warray-bounds with GCC 10
  [25112] dynamic-link: dlopen must not make new objects accessible when
    it still can fail with an error
  [25139] localedata: Please add the new mnw_MM locale
  [25149] regex: Array bounds violation in proceed_next_node
  [25157] dynamic-link: Audit cookie for the dynamic loader is not
    initialized correctly
  [25189] libc: glibc's __glibc_has_include causes issues with clang
  [25194] malloc: malloc.c: do_set_mxfast incorrectly casts the mallopt
    value to an unsigned
  [25204] dynamic-link: LD_PREFER_MAP_32BIT_EXEC not ignored in setuid
    binaries (CVE-2019-19126)
  [25225] libc: fails to link on x86 if GCC defaults to -fcf-
  [25226] string: strstr: Invalid result if needle crosses page on s390-
    z15 ifunc variant.
  [25232] string: <string.h> does not enable const correctness for
    strchr et al. for Clang++
  [25233] localedata: Consider "." as the thousands separator for sl_SI
  [25241] nptl: __SIZEOF_PTHREAD_MUTEX_T defined twice for x86
  [25251] build: Failure to run tests when CFLAGS contains -DNDEBUG.
  [25271] libc: undeclared identifier PTHREAD_MUTEX_DEFAULT when
    compiling with -std=c11
  [25323] localedata: km_KH: d_t_fmt contains "m" instead of "%M"
  [25324] localedata: lv_LV: d_t_fmt contains suspicious words in the
    time part
  [25396] dynamic-link: Failing dlopen can leave behind dangling GL
    (dl_initfirst) link map pointer
  [25401] malloc: pvalloc must not have __attribute_alloc_size__
  [25423] libc: Array overflow in backtrace on powerpc
  [25425] network: Missing call to __resolv_context_put in

Release Notes


This release was made possible by the contributions of many people.
The maintainers are grateful to everyone who has contributed changes
or bug reports.  These include:

Adhemerval Zanella
Alexandra Hájková
Alistair Francis
Andreas Schwab
Andrew Eggenberger
Arjun Shankar
Aurelien Jarno
Carlos O'Donell
Chung-Lin Tang
DJ Delorie
Dmitry V. Levin
Dragan Mladjenovic
Egor Kobylkin
Emilio Cobos Álvarez
Emilio Pozuelo Monfort
Feng Xue
Florian Weimer
Gabriel F. T. Gomes
Gustavo Romero
H.J. Lu
Ian Kent
James Clarke
Jeremie Koenig
John David Anglin
Joseph Myers
Kamlesh Kumar
Krzysztof Koch
Leandro Pereira
Lucas A. M. Magalhaes
Lukasz Majewski
Marcin Kościelnicki
Matheus Castanho
Mihailo Stojanovic
Mike Crowe
Niklas Hambüchen
Paul A. Clarke
Paul Eggert
Petr Vorel
Rafal Luzynski
Rafał Lużyński
Rajalakshmi Srinivasaraghavan
Raoni Fassina Firmino
Richard Braun
Samuel Thibault
Sandra Loosemore
Siddhesh Poyarekar
Stefan Liebler
Svante Signell
Szabolcs Nagy
Talachan Mon
Thomas Schwinge
Tim Rühsen
Tulio Magno Quites Machado Filho
Wilco Dijkstra
Xuelei Zhang
Zack Weinberg

FSF News: Libiquity Wi-Fri ND2H Wi-Fi card now FSF-certified to Respect Your Freedom

Thursday 30th of January 2020 08:55:02 PM

BOSTON, Massachusetts, USA -- Thursday, January 30, 2020 -- The Free Software Foundation (FSF) today awarded Respects Your Freedom (RYF) certification to the Libiquity dual-band 802.11a/b/g/n Wi-Fi card, from Libiquity LLC. The RYF certification mark means that Libiquity's distribution of this device meets the FSF's standards in regard to users' freedom, control over the product, and privacy.

Libiquity currently sells this device as part of its previously-certified Taurinus X200 laptop. Technoethical also offers the same hardware with their RYF-certified Technoethical N300DB Dual Band Wireless Card. With today's certification, Libiquity is able to sell the Libiquity Wi-Fri ND2H Wi-Fi card as a stand-alone product for the first time, and now has two RYF-certified devices available.

"In the years since first joining the RYF program, we at Libiquity have worked to improve and expand our catalog. For anyone looking to join distant or congested 2.4-GHz or 5-GHz wireless networks, the Wi-Fri ND2H is a great internal Wi-Fi card for laptops, desktops, servers, single-board computers, and more. Most importantly, in an era when more and more hardware disrespects your freedom, we're proud to offer a Wi-Fi card branded with the RYF logo on the product itself, as a trusted symbol of its compatibility with free software such as GNU Linux-libre," said Patrick McDermott, Founder and CEO, Libiquity LLC.

With this certification, the total number of RYF-certified wireless adapters grows to thirteen. The Libiquity Wi-Fri ND2H Wi-Fi card enables users to have wireless connectivity without having to rely on nonfree drivers or firmware.

"We are especially glad to see the certification mark printed directly on the product. While not a requirement of the program, this helps us get closer to the world we are aiming for, where people shopping can immediately and easily see what products are best for their freedom," said the FSF's executive director, John Sullivan.

Like other previously certified peripheral devices, the Libiquity Wi-Fri ND2H Wi-Fi card was tested using an FSF-endorsed GNU/Linux distro to ensure that it works using only free software. The device does not ship with any software included, as all the free software needed is already provided by fully free distributions.

"Expanding the availability of hardware that works with fully free systems like Trisquel GNU/Linux is always something to celebrate. It's great to see Libiquity offering this device as a stand-alone product so that users can customize and upgrade their own setup," said the FSF's licensing and compliance manager, Donald Robertson, III.

To learn more about the Respects Your Freedom certification program, including details on the certification of this Libiquity device, please visit

Retailers interested in applying for certification can consult

About the Free Software Foundation

The Free Software Foundation, founded in 1985, is dedicated to promoting computer users' right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software -- particularly the GNU operating system and its GNU/Linux variants -- and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at and, are an important source of information about GNU/Linux. Donations to support the FSF's work can be made at Its headquarters are in Boston, MA, USA.

More information about the FSF, as well as important information for journalists and publishers, is at

About Libiquity

Founded by CEO Patrick McDermott, Libiquity is a privately held New Jersey, USA company that provides world-class technologies which put customers in control of their computing. The company develops and sells electronics products, provides firmware and embedded systems services, and leads the development of the innovative and flexible ProteanOS embedded operating system. More information about Libiquity and its offerings can be found on its Web site at

Media Contacts

Donald Robertson, III
Licensing and Compliance Manager
Free Software Foundation
+1 (617) 542 5942

Patrick McDermott
Founder and CEO
Libiquity LLC

FSF Blogs: LibrePlanet 2020 needs you: Volunteer today!

Tuesday 28th of January 2020 04:27:34 PM

The LibrePlanet 2020 conference is coming very soon, on March 14 and 15 at the Back Bay Events Center in Boston, and WE NEED YOU to make the world's premier gathering of free software enthusiasts a success.

Volunteers are needed for several different tasks at LibrePlanet, from an audio/visual crew to point cameras and adjust microphones, to room monitors to introduce speakers, to a set-up and clean-up crew to make our conference appear and disappear at the Events Center, and more! You can volunteer for as much or as little time as you like, whether you choose to help out for an hour or two, or for the entirety of both days. Either way, we'll provide you with a VERY handsome LibrePlanet 2020 shirt in your size, in addition to free admission to the entire conference, lunch, and our eternal gratitude.

Excited? If you're ready to help put on an excellent conference, we are more than ready to show you how. One important step is to come to an in-person training and info session at the Free Software Foundation office, in downtown Boston. We have scheduled six training sessions beginning late February; the last one is the afternoon of the day immediately before LibrePlanet, which is perfect for people arriving from far away for the event. Please come to one if you can! Some volunteer tasks (room monitors, A/V crew) require more training than others, but there are some important things we need all volunteers to know, and attending a training will ensure that you're fully informed. The schedule for trainings is at the bottom of this email.

You're interested? Wonderful. Please reply to this email or write to Let me know your T-shirt size (we'll have unisex S-XXXXL and fitted S-XXXL) and which training you can make it to. You can certainly volunteer without making it to a training -- I'll send you some info via email -- but your role may be a little less glamorous. Please also feel free to contact me with any questions or suggestions you may have; I will respond eagerly to your queries.

THANK YOU for supporting the Free Software Foundation and THANK YOU for volunteering for an excellent LibrePlanet!


All except one of these take place from 6 PM to 8 PM at the FSF office, 51 Franklin Street, Fifth floor, Downtown Crossing, Boston:

  • Wednesday, February 19
  • Tuesday, February 25
  • Thursday, February 27 (includes A/V training)
  • Wednesday, March 4 (includes A/V training)
  • Tuesday, March 10 (includes A/V training)
  • Friday, March 13: This is an afternoon session for people coming to town late, starting at 3 PM! It will also be at the FSF office, prior to the Friday night open house.

recutils @ Savannah: Pre-release 1.8.90 in

Tuesday 28th of January 2020 11:31:56 AM

The pre-release recutils-1.8.90.tar.gz is now available at

The NEWS file in the tarball contains a list of the changes since 1.8.

The planned date for releasing 1.9 is Saturday 1 February 2020.

Please report any problem found with the pre-release to


FSF Blogs: GNU Spotlight with Mike Gerwitz: 16 new GNU releases in January!

Monday 27th of January 2020 09:42:39 PM

For announcements of most new GNU releases, subscribe to the info-gnu mailing list:

To download: nearly all GNU software is available from, or preferably one of its mirrors from You can use the URL to be automatically redirected to a (hopefully) nearby and up-to-date mirror.

A number of GNU packages, as well as the GNU operating system as a whole, are looking for maintainers and other assistance: please see if you'd like to help. The general page on how to help GNU is at

If you have a working or partly working program that you'd like to offer to the GNU project as a GNU package, see

As always, please feel free to write to us at with any GNUish questions or suggestions for future installments.

FSF Blogs: LibrePlanet 2020: We'll see you at the Back Bay Events Center in Boston, MA!

Monday 27th of January 2020 05:55:00 PM

We at the Free Software Foundation (FSF) are excited to say that the Dorothy Quincy suite of Boston's very own Back Bay Events Center will be the home of this year's LibrePlanet conference! We've taken the grand tour and couldn't be happier about our choice of location. We're confident that the Events Center will be a great host for the technology and social justice conference we've all come to know and love. It's just the right place for us (and the movement) to take our next steps in freeing the future.

The Events Center is providing LibrePlanet with its own entrance and a dedicated and speedy Internet connection for the livestream, and is close to both public transportation and the FSF headquarters itself. As in past years, we'll have ample space for an exhibit hall and free software workshops, as well as the ever popular "hallway track," where you can engage with other attendees in conversations on contributing to free software projects.

On the Events Center Web site, you will find accommodation and transportation suggestions that will pair nicely with those we've put up on the LibrePlanet 2020 site. The Back Bay Events Center is located at the corner of Berkeley and Stuart Street, and is close by the Back Bay stop of the Orange Line MBTA train and the Arlington stop of the Green Line MBTA train.

If you have attended LibrePlanet in past years but are generally unfamiliar with the Boston area, please note that LibrePlanet 2020 will be held in Boston and not the nearby city of Cambridge.

As in past years, you can expect the venue to be fully accessible, and equipped with enough network horsepower to drive the conference livestream. For more information on the venue's perks and services, visit the Back Bay Events Center Web site, or reach out to And if you have yet to register for the LibrePlanet 2020 conference, now is the time to do so!

LibrePlanet depends on the community for its success. One way you can help us is by donating to help sponsor an attendee to come to LibrePlanet, and assist us in making the conference a truly global one. If you're interested in volunteering, please write to us.

All of us here at the FSF are deep in the planning process, but we couldn't be more excited about seeing you in person -- especially if it is your first time. (No worries, it's my first LibrePlanet conference as well!) Let's use the time we have together to the fullest, and make LibrePlanet 2020 go down in history as the place where we made great strides to "Free the Future."


Antitrust Laws and Open Collaboration

If you participate in standards development organizations, open source foundations, trade associations, or the like (Organizations), you already know that you’re required to comply with antitrust laws. The risks of noncompliance are not theoretical: violations can result in severe criminal and civil penalties, both for your organization and the individuals involved. The U.S. Department of Justice (DOJ) has in fact opened investigations into several standards organizations in recent years. Maybe you’ve had a training session at your company, or at least are aware that there’s an antitrust policy you’re supposed to read and comply with.

But what if you’re a working group chair, or even an executive director, and therefore responsible for actually making sure nothing happens that’s not supposed to? Beyond paying attention, posting or reviewing an antitrust statement at meetings, and perhaps calling your attorney when member discussions drift into grey zones, what do you actually do to keep antitrust risk in check? Well, the good news is that regulators recognize that standards and other collaboration deliverables are good for consumers. The challenge is knowing where the boundaries of appropriate conduct can be found, whether you’re hosting, leading or just participating in activity involving competitors. Once you know the rules, you can forge ahead, expecting to navigate those risks, and knowing the benefits of collaboration can be powerful and procompetitive.

We don’t often get glimpses into the specific criteria regulators use to evaluate potential antitrust violations, particularly as applicable to collaborative organizations. But when we do, it can help consortia and other collaborative foundations focus their efforts and take concrete steps to ensure compliance. In July 2019, the DOJ Antitrust Division (Division) provided a new glimpse, in its Evaluation of Corporate Compliance Programs in Criminal Antitrust Investigations (Guidance).
Although the Guidance is specifically intended to assist Division prosecutors in evaluating corporate compliance programs when charging and sentencing, it provides valuable insights for building or improving an Organization’s antitrust compliance program (Program). At a high level, the Guidance suggests that an effective Program is one that is well designed, is applied earnestly and in good faith by management, and includes adequate procedures to maximize effectiveness through efficiency, leadership, training, education, information and due diligence. This is important because organizations that detect violations and self-report to the Division’s Corporate Leniency program may receive credit (e.g., lower charges or penalties) for having an effective antitrust compliance program in place.

today's howtos

Events: SUSECON, OpenShift Troubleshooting Workshop and Kubernetes Contributor Summit Amsterdam

  • Get Expert Guided Hands-On Experience at the SUSECON 2020 Pre-Conference Workshops

    Are you ready for SUSECON 2020? It’s coming up fast! Join us in Dublin, Ireland, from March 23–27 for a week packed with learning and networking.

  • Get Certified During SUSECON 2020

    Working in IT is not for the faint of heart; the work is demanding, and change is constant. Right now, your organization is undoubtedly seeking new ways to extend the value of its investment in IT and get more done faster.

  • The OpenShift Troubleshooting Workshop

    The first workshop in our Customer Empathy Workshop series was held October 28, 2019 during the AI/ML (Artificial Intelligence and Machine Learning) OpenShift Commons event in San Francisco. We collaborated with 5 Red Hat OpenShift customers for 2 hours on the topic of troubleshooting. We learned about the challenges faced by operations and development teams in the field and together brainstormed ways to reduce blockers and increase efficiency for users. The open source spirit was very much alive in this workshop. We came together with customers to work as a team so that we could better understand their unique challenges with troubleshooting. Here are some highlights from the experience.

  • [Kubernetes] Contributor Summit Amsterdam Schedule Announced

Security: Patches, Bugs, RMS Talk and NG Firewall 15.0

  • Security updates for Wednesday

    Security updates have been issued by CentOS (firefox, java-1.7.0-openjdk, ksh, and sudo), Debian (php7.0 and python-django), Fedora (cacti, cacti-spine, mbedtls, and thunderbird), openSUSE (chromium, re2), Oracle (firefox, java-1.7.0-openjdk, and sudo), Red Hat (openjpeg2 and sudo), Scientific Linux (java-1.7.0-openjdk and sudo), SUSE (dbus-1, dpdk, enigmail, fontforge, gcc9, ImageMagick, ipmitool, php72, sudo, and wicked), and Ubuntu (clamav, linux, linux-aws, linux-aws-hwe, linux-azure, linux-gcp, linux-gke-4.15, linux-hwe, linux-kvm, linux-oracle, linux-raspi2, linux-snapdragon, linux, linux-aws, linux-kvm, linux-raspi2, linux-snapdragon, linux-aws-5.0, linux-azure, linux-gcp, linux-gke-5.0, linux-oracle-5.0, linux-azure, linux-kvm, linux-oracle, linux-raspi2, linux-raspi2-5.3, linux-lts-xenial, linux-aws, and qemu).

  • Certificate validity and a y2k20 bug

    One of the standard fields of an SSL certificate is the validity period. This field includes notBefore and notAfter dates which, according to RFC 5280, indicate the interval "during which the CA warrants that it will maintain information about the status of the certificate." This is one of the fields that should be inspected when accepting new or unknown certificates. When creating certificates, there are a number of theories on how long to set that period of validity. A short period reduces risk if a private key is compromised: the certificate expires soon after and can no longer be used. On the other hand, even when the keys are well protected, those short-lived certificates must be renewed regularly.
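    The validity check described above can be sketched with Python's standard ssl module, which can parse the notBefore/notAfter strings that getpeercert() returns; the certificate dates below are hypothetical examples, not taken from a real certificate:

```python
import ssl
import time

# Hypothetical notBefore/notAfter values, in the text format that
# ssl.SSLSocket.getpeercert() uses for these fields.
not_before = "Jan  1 00:00:00 2020 GMT"
not_after = "Apr  1 00:00:00 2020 GMT"  # a short, ~90-day validity period

def is_currently_valid(not_before, not_after, now=None):
    """Return True if 'now' falls inside the certificate's validity window."""
    now = time.time() if now is None else now
    # cert_time_to_seconds() converts the GMT date string to a Unix timestamp.
    return (ssl.cert_time_to_seconds(not_before)
            <= now
            <= ssl.cert_time_to_seconds(not_after))

# A timestamp inside the window (Feb 15, 2020) passes the check.
mid = ssl.cert_time_to_seconds("Feb 15 00:00:00 2020 GMT")
print(is_currently_valid(not_before, not_after, now=mid))  # True
```

    A renewal process for short-lived certificates would run a check like this on a schedule and reissue well before notAfter is reached.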

  • Free Software is protecting your data – 2014 TEDx Richard Stallman Free Software Windows and the NSA

    A libre-booted (BIOS overwritten with Linux) ThinkPad T400s running the Trisquel GNU/Linux OS. LibreBooting the BIOS? Yes! It is possible to overwrite the BIOS of some Lenovo laptops (why only some?) with a minimal version of Linux.

  • NG Firewall 15.0 is here with better protection for SMB assets

    Here comes the release of NG Firewall 15.0 by Untangle, with the creators claiming top-notch security for SMB assets. Let’s thoroughly discuss the latest NG Firewall update. With that being said, it only makes sense to first introduce this software to the readers who aren’t familiar with it. As the name ‘NG Firewall’ suggests, it is indeed a firewall, but a very powerful one. It is a Debian-based network gateway designed for small to medium-sized enterprises. If you want to be up-to-date with the latest firewall technology, your best bet would be to opt for this third-generation firewall. Another factor that distinguishes the NG Firewall from other such products in the market is that it combines network device filtering functions with traditional firewall technology.