Planet Mozilla

William Lachance: Using Sphinx in a Monorepo

Sunday 25th of September 2022 07:55:40 PM

Just wanted to type up a couple of notes about working with Sphinx (the Python documentation generator) inside a monorepo, an issue I’ve been struggling with (off and on) at Voltus since I started. I haven’t seen much written about this topic, despite it being (I suspect) a reasonably common problem.

In general, there’s a lot to like about Sphinx: it’s great at handling deeply nested trees of detailed documentation with cross-references inside a version control system. It has local search that works pretty well and some themes (like readthedocs) scale pretty nicely to hundreds of documents. The directives and roles system is pretty flexible and covers most of the common things one might want to express in technical documentation. And if the built-in set of functionality isn’t enough, there’s a wealth of third party extension modules. My only major complaint is that it uses the somewhat obscure restructuredText file format by default, but you can get around that by using the excellent MyST extension.

Unfortunately, it has a pretty deeply baked in assumption that all documentation for your project lives inside a single subfolder. This is fine for a small repository representing a single python module, like this:

```
<root>
    setup.cfg
    pyproject.toml
    mymodule/
    docs/
```

However, this doesn’t work for a large monorepo, where you would typically see something like:

```
<root>/module-1/submodule-a
<root>/module-1/submodule-b
<root>/module-2/submodule-c
...
```

In a monorepo, you usually want to include a module’s documentation inside its own directory. This allows you to use your code ownership constraints for documentation, among other things.

The naive solution would be to create a sphinx site for every single one of these submodules. This is what happened at Voltus and I don’t recommend it. For a large monorepo you’ll end up with dozens, maybe hundreds of documentation “sites”. Under this scenario, discoverability becomes a huge problem: no longer can you rely on tables of contents and the built-in search to discover content: you just have to “know” where things live. I’m more than nine months in here and I’m still discovering new documentation.

It would be much better if we could somehow collect documentation from other parts of the repository into a single site. Is this possible? tl;dr: Yes. There are a few solutions, each with their pros and cons.

The obvious solution that doesn’t work

The most obvious solution here is to create a symbolic link inside your documentation directory, say the following:

```
<root>/docs/
<root>/docs/module-1/submodule-a -> <root>/module-1/submodule-a/docs
```

Unfortunately, this doesn’t work. ☹️ Sphinx doesn’t follow symbolic links.

Solution 1: Just copy the files in

The most obvious solution is to just copy the files from various parts of the monorepo into place, as part of the build system. Mozilla did this for Firefox, with the moztreedocs system.

The results look pretty good, but this is a bespoke solution. Aside from general ideas, there’s no way I’m going to be able to apply anything in moztreedocs to Voltus’s monorepo (which is based on a completely different build system). To be honest, I’m also not sure whether the estimated 40+ hours of effort to reimplement it would be a good use of my time compared to other things I could be doing.
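To make the shape of this approach concrete, here is a minimal sketch (not moztreedocs itself; the repository layout and the `collect_docs` helper are hypothetical) that gathers each `<module>/<submodule>/docs` tree into a single Sphinx source directory before running sphinx-build:

```python
"""Sketch of solution 1: copy per-module docs/ trees into one Sphinx
source directory before building. The layout and `collect_docs` helper
are hypothetical, not moztreedocs."""
import shutil
from pathlib import Path


def collect_docs(repo_root: Path, out_dir: Path) -> list[Path]:
    """Copy every <module>/<submodule>/docs tree into out_dir,
    preserving the module/submodule hierarchy."""
    copied = []
    for docs in sorted(repo_root.glob("*/*/docs")):
        # e.g. module-1/submodule-a/docs -> out_dir/module-1/submodule-a
        target = out_dir / docs.parent.relative_to(repo_root)
        shutil.copytree(docs, target, dirs_exist_ok=True)
        copied.append(target)
    return copied
```

The build system would run something like this before invoking sphinx-build, so Sphinx only ever sees one ordinary documentation tree.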

Solution 2: Use the include directive with MyST

Later versions of MyST include support for directly importing a markdown file from another part of the repository.

This is a limited form of embedding: it won’t let you import an entire directory of markdown files. But if your submodules mostly just include content in the form of a README (or similar), it might just be enough. Just create a directory for these files to live in (say services) and slot them in:


```{include} ../../../module-1/submodule-a/
```

I’m currently in the process of implementing this solution inside Voltus. I have optimism that this will be a big (if incremental) step up over what we have right now. There are obviously limits, but you can cram a lot of useful information in a README. As a bonus, it’s a pretty nice marker for those spelunking through the source code (much more so than a forest of tiny documentation files).

Solution 3: Sphinx Collections

This one I just found out about today: Sphinx Collections is a small Python module that lets you automatically import entire directories of files into your sphinx tree, under a _collections module. You configure it in your top-level conf.py like this:

```python
extensions = [
    ...,
    "sphinxcontrib.collections",
]

collections = {
    "submodule-a": {
        "driver": "symlink",
        "source": "/monorepo/module-1/submodule-a/docs",
        "target": "submodule-a",
    },
    ...
}
```

After setting this up, submodule-a is now available under _collections and you can include it in your table of contents like this:

...

```{toctree}
:caption: submodule-a

_collections/submodule-a/
```

...

At this point, submodule-a’s documentation should be available under http://<my doc domain>/_collections/submodule-a/index.html

Pretty nifty. The main downside I’ve found so far is that this doesn’t play nicely with the Edit on GitHub links that the readthedocs theme automatically inserts (it thinks the files exist under _collections), but there’s probably a way to work around that.

I plan on investigating this approach further in the coming months.

Tantek Çelik: W3C TPAC 2022 Sustainability Community Group Meeting

Sunday 25th of September 2022 06:47:00 AM
First IRL TPAC Since 2019

The W3C convened an annual TPAC in-person meeting from 2001-2019. After a couple of virtual TPACs, last week we finally returned to an in-person TPAC, the first in three years. It was great to see people and have informal conversations in hallways and outside at lunch & coffee breaks. I took some notes during meetings & breakout sessions. The Sustainability CG meeting is the first I’ve chosen to write up.

This year’s W3C TPAC Plenary Day was a combination of the first ever AC open session in the early morning, and breakout sessions in the late morning and afternoon. Nick Doty proposed a breakout session for Sustainability for the Web and W3C which he & I volunteered to co-chair, as co-chairs of the Sustainability (s12y) CG which we created on Earth Day earlier this year. Nick & I met during a break on Wednesday afternoon and made plans for how we would run the session as a Sustainability CG meeting, which topics to introduce, how to deal with unproductive participation if any, and how to focus the latter part of the session into follow-up actions.

We agreed that our primary role as chairs should be facilitation. We determined a few key meeting goals, in particular to help participants:

  • Avoid/minimize any trolling or fallacy arguments (based on experience from 2021)
  • Learn who is interested in which sustainability topics & work areas
  • Determine clusters of similar, related, and overlapping sustainability topics
  • Focus on prioritizing actual sustainability work rather than process mechanics
  • Encourage active collaboration in work areas (like a do-ocracy)

The session went better than I expected. The small meeting room was packed with ~20 participants, with a few more joining us on Zoom (which thankfully worked without any issues, thanks to the W3C staff for setting that up so all we had to do as chairs was push a button to start the meeting!).

I am grateful for everyone’s participation and more importantly the shared sense of collaboration, teamwork, and frank urgency. It was great to meet & connect in-person, and see everyone on video who took time out of their days across timezones to join us. There was a lot of eagerness in participation, and Nick & I did our best to give everyone who wanted to speak time to contribute (the IRC bot Zakim's two minute speaker timer feature helped).

It was one of the more hopeful meetings I participated in all week. Thanks to Yoav Weiss for scribing the minutes. Here are a few of the highlights.

Session Introduction

Nick introduced himself and proposed topics of discussion for our breakout session.

  • How we can apply sustainability to web standards
  • Goals we could work on as a community
  • Consider metrics to enable other measures to take effect
  • Measure the impact of the W3C meetings themselves
  • Working mode and how we talk about sustainability in W3C
  • Horizontal reviews

I introduced myself and my role at Mozilla as one of our Environmental Champions, and noted that it’s been three years since we had the chance to meet in person at TPAC. Since then many of us who participate at W3C have recognized the urgency of sustainability, especially as underscored by recent IPCC reports. From the past few years of publications & discussions:

For our TPAC 2022 session, I asked that we proceed with the assumption of sustainability as a principle, and that if folks came to argue with that, that they should raise an issue with the TAG, not this meeting.

In the Call for Participation in the Sustainability Community Group, we highlighted both developing a W3C practice of Sustainability (s12y) Horizontal Review (similar to a11y, i18n, privacy, security) as proposed at TPAC 2021, and an overall venue for participants to discuss all aspects of sustainability with respect to web technologies present & future. For our limited meeting time, I asked participants to share how they want to have the biggest impact on sustainability at W3C, with the web in general, and actively prioritize our work accordingly.

Work Areas, Groups, Resources

Everyone took turns introducing themselves and expressing which aspects of sustainability were important to them, noting any particular background or applicable expertise, as well as which other W3C groups they are participating in, as opportunities for liaison and collaboration. Several clusters of interest emerged:

  • Technologies to reduce energy usage
  • W3C meetings and operations
  • Measurement
  • System Effects
  • Horizontal Review
  • Principles

The following W3C Groups were noted which are either already working on sustainability related efforts or would be good for collaboration, and except for the TAG, had a group co-chair in the meeting!

I proposed adding a liaisons section to our public Sustainability wiki page, explicitly listing these groups and specific items for collaboration. Participants also shared the following links to additional efforts & resources:

Sustainability Work In Public By Default

Since all of our work on sustainability builds on a great deal of public work by others, the best chance of our own work having an impact is to do it publicly as well. I therefore proposed that the Sustainability CG work in public by default, as should sustainability work at W3C in general, and that we send that request to the AB to advise W3C accordingly. The proposal was strongly supported with no opposition.

Active Interest From Organizations

There were a number of organizations whose representatives indicated that they are committed to making a positive impact on the environment and would like to work on efforts accordingly in the Sustainability CG, or would at least try to find experts at their organizations who might be interested in contributing.

  • Igalia
  • Mozilla
  • Lawrence Berkeley National Laboratory
  • Washington Post

Meeting Wrap-up And Next Steps

We finished up the meeting with participants signing up to work on each of the work areas (clusters of interest noted above) that they were personally interested in working on. This has been captured on our wiki: W3C Wiki: Sustainability Work Areas.

The weekend after the meeting I wrote up an email summary of the meeting & next steps and sent it directly to those who were present at the meeting, encouraging them to Join the Sustainability Community Group (requires a W3C account) for future emails and updates. Nick & I are also on the W3C Community Slack #sustainability channel which I recommended joining. Signup link:

Next Steps: we encouraged everyone signed up for a Work Area to reach out to each other directly and determine their preferred work mode, including in which venue they’d like to do the work, whether in the Sustainability CG, another CG, or somewhere else. We noted that work on sustainable development & design of web sites in particular should be done directly with the Sustainable Web Design CG (sustyweb), “a community group dedicated to creating sustainable websites”.

Some possibilities for work modes that Work Area participants can use:

  • W3C Community Slack #sustainability channel
  • public-sustainability email list of the Sustainability CG
  • Our Sustainability wiki page, creating "/" subpages as needed

There is lots of work to do across many different areas for sustainability & the web, and for technology as a whole, which lends itself to small groups working in parallel. Nick & I want to help facilitate those that have the interest, energy, and initiative to do so. We are available to help Work Area participants pick a work mode & venue that will best meet their needs and help them get started on their projects.

The Talospace Project: Firefox 105 on POWER

Friday 23rd of September 2022 01:36:37 PM
Firefox 105 is out. No, it's not your imagination: I ended up skipping a couple versions. I wasn't able to build Firefox 103 because gcc 12 in Fedora 36 caused weird build failures until it was finally fixed; separately, building 104 and working more on the POWER9 JavaScript JIT got delayed because I'd finally had it with the performance issues and breakage in GNOME 42 and took a couple weeks renovating Plasma so I could be happy with my desktop environment again. With both of those concerns largely resolved and my workflows properly restored, everything is hopefully maintainable again and we're back to the grind.

Unfortunately, we have a couple new ones. Debug builds broke in Fx103 using our standard .mozconfig when mfbt/lz4/xxhash.h was upgraded, because we compile with -Og and it wants to compile its functions with static __inline__ __attribute__((always_inline, unused)). When gcc builds a deoptimized debugging build and fails to inline those functions, it throws a compilation error, and the build screeches to a halt. (This doesn't affect Fedora's build because they always build at a sufficient optimization level such that these functions do indeed get inlined.) After a little thinking, this is the new debug .mozconfig:

```
export CC=/usr/bin/gcc
export CXX=/usr/bin/g++

mk_add_options MOZ_MAKE_FLAGS="-j24" # or as you likez
ac_add_options --enable-application=browser
ac_add_options --enable-optimize="-Og -mcpu=power9 -fpermissive -DXXH_NO_INLINE_HINTS=1"
ac_add_options --enable-debug
ac_add_options --enable-linker=bfd
ac_add_options --without-wasm-sandboxed-libraries

export GN=/home/censored/bin/gn # if you haz
```

This builds, or at least compiles, but fails at linkage because of the second problem. This time, it's libwebrtc ... again. To glue the Google build system onto Mozilla's, there is a fragile and system-dependent permuting-processing step that has broken again, and Mozilla would like a definitive fix. Until then, we're high and dry, because the request is for the build files to be generated correctly in the first place rather than simply patching the generated files. That's a much bigger knot to unravel, and building the gn tool it depends on used to be incredibly difficult (it's now much easier, and I was able to upgrade, but all this has done is show me where the problem is, and it's not a straightforward fix). If this is not repaired, various screen-capture components used by libwebrtc are not compiled, and linking will fail. Right now it looks like we're the only platform affected, even though aarch64 has been busted by the same underlying issue in the past.

The easy choice, especially if you don't use WebRTC, is to just add ac_add_options --disable-webrtc to your .mozconfig. I don't use WebRTC much and I'm pretty lazy, so ordinarily I would go this route — except you, gentle reader, expect me to be able to tell you when Firefox compiles are breaking, so that brings us to the second option: Dan Horák's patch. This also works and is the version I'm typing into now. Expect to carry this patch in your local tree for a couple of versions until this gets dealt with.

Fortunately, the PGO-LTO patch for Firefox 101 still applies to Fx105, so you can still use that. While the optimized .mozconfig is unchanged, here it is for reference:

```
export CC=/usr/bin/gcc
export CXX=/usr/bin/g++

mk_add_options MOZ_MAKE_FLAGS="-j24" # or as you likez
ac_add_options --enable-application=browser
ac_add_options --enable-optimize="-O3 -mcpu=power9 -fpermissive"
ac_add_options --enable-release
ac_add_options --enable-linker=bfd
ac_add_options --enable-lto=full
ac_add_options --without-wasm-sandboxed-libraries
ac_add_options MOZ_PGO=1

export GN=/home/censored/bin/gn # if you haz
```

I've got one other issue to settle, and then I hope to get back to porting the JavaScript and Wasm JIT to 102ESR. But my real life and my $DAYJOB interfere with my after-hours hacking, so contributors are still solicited so that our work can benefit the OpenPOWER community. With only one person working on it, things go slower.

Niko Matsakis: Rust 2024…the year of everywhere?

Thursday 22nd of September 2022 07:51:00 PM

I’ve been thinking about what “Rust 2024” will look like lately. I don’t really mean the edition itself — but more like, what will Rust feel like after we’ve finished up the next few years of work? I think the answer is that Rust 2024 is going to be the year of “everywhere”. Let me explain what I mean. Up until now, Rust has had a lot of nice features, but they only work sometimes. By the time 2024 rolls around, they’re going to work everywhere that you want to use them, and I think that’s going to make a big difference in how Rust feels.

Async everywhere

Let’s start with async. Right now, you can write async functions, but not in traits. You can’t write async closures. You can’t use async drop. This creates a real hurdle. You have to learn the workarounds (e.g., the async-trait crate), and in some cases, there are no proper workarounds (e.g., for async-drop).
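To make the workaround concrete, here is roughly the shape that the async-trait pattern desugars to: the trait method returns a boxed future instead of being an `async fn`. The `Fetch` trait, `Client` type, and minimal executor below are illustrative sketches, not the crate's actual output:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Instead of `async fn fetch(&self) -> String` (not yet allowed in
// traits on stable), the method returns a boxed future by hand.
trait Fetch {
    fn fetch(&self) -> Pin<Box<dyn Future<Output = String> + Send + '_>>;
}

struct Client;

impl Fetch for Client {
    fn fetch(&self) -> Pin<Box<dyn Future<Output = String> + Send + '_>> {
        Box::pin(async { "data".to_string() })
    }
}

// A minimal executor, sufficient for futures that never actually block.
fn block_on<F: Future>(mut fut: F) -> F::Output {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);

    let waker = unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) };
    let mut cx = Context::from_waker(&waker);
    // Safe: `fut` lives on this stack frame and is never moved again.
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}

fn main() {
    let client = Client;
    assert_eq!(block_on(client.fetch()), "data");
}
```

The boxing (one allocation per call) and the loss of `async fn` syntax are exactly the costs that native async functions in traits aim to eliminate.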

Thanks to a recent PR by Michael Goulet, static async functions in traits almost work on nightly today! I’m confident we can work out the remaining kinks soon and start advancing the static subset (i.e., no support for dyn trait) towards stabilization.

The plans for dyn, meanwhile, are advancing rapidly. At this point I think we have two good options on the table and I’m hopeful we can get that nailed down and start planning what’s needed to make the implementation work.

Once async functions in traits work, the next steps for core Rust will be figuring out how to support async closures and async drop. Both of them add some additional challenges — particularly async drop, which has some complex interactions with other parts of the language, as Sabrina Jewson elaborated in a great, if dense, blog post — but we’ve started to develop a crack team of people in the async working group and I’m confident we can overcome them.

There is also library work, most notably settling on some interop traits, and defining ways to write code that is portable across allocators. I would like to see more exploration of structured concurrency1, as well, or other alternatives to select! like the stream merging pattern Yosh has been advocating for.

Finally, for extra credit, I would love to see us integrate async/await keywords into other bits of the function body, permitting you to write common patterns more easily. Yoshua Wuyts has had a really interesting series of blog posts exploring these sorts of ideas. I think that being able to do for await x in y to iterate, or (a, b).await as a form of join, or async let x = … to create a future in a really lightweight way could be great.

Impl trait everywhere

The impl Trait notation is one of Rust’s most powerful conveniences, allowing you to omit specific types and instead talk about the interface you need. Like async, however, impl Trait can only be used in inherent functions and methods, and can’t be used for return types in traits, nor can it be used in type aliases, let bindings, or any number of other places it might be useful.

Thanks to Oli Scherer’s hard work over the last year, we are nearing stabilization for impl Trait in type aliases. Oli’s work has also laid the groundwork to support impl trait in let bindings, meaning that you will be able to do something like

```rust
let iter: impl Iterator<Item = i32> = (0..10);
//        ^^^^^^^^^^^^^^^^^^^^^^^^^ Declare type of `iter` to be "some iterator".
```

Finally, the same PR that added support for async fns in traits also added initial support for return-position impl Trait in traits. Put it all together, and we are getting very close to letting you use impl Trait everywhere you might want to.

There is still at least one place where impl Trait is not accepted that I think it should be, which is nested in other positions. I’d like you to be able to write impl Fn(impl Debug), for example, to refer to “some closure that takes an argument of type impl Debug” (i.e., can be invoked multiple times with different debug types).

Generics everywhere

Generic types are a big part of how Rust libraries are built, but Rust doesn’t allow people to write generic parameters in all the places they would be useful, and limitations in the compiler prevent us from making full use of the annotations we do have.

Not being able to use generic types everywhere might seem abstract, particularly if you’re not super familiar with Rust. And indeed, for a lot of code, it’s not a big deal. But if you’re trying to write libraries, or to write one common function that will be used all over your code base, then it can quickly become a huge blocker. Moreover, given that Rust supports generic types in many places, the fact that we don’t support them in some places can be really confusing — people don’t realize that the reason their idea doesn’t work is not because the idea is wrong, it’s because the language (or, often, the compiler) is limited.

The biggest example of generics everywhere is generic associated types. Thanks to hard work by Jack Huey, Matthew Jasper, and a number of others, this feature is very close to hitting stable Rust — in fact, it is in the current beta, and should be available in 1.65. One caveat, though: the upcoming support for GATs has a number of known limitations and shortcomings, and it gives some pretty confusing errors. It’s still really useful, and a lot of people are already using it on nightly, but it’s going to require more attention before it lives up to its full potential.

You may not wind up using GATs in your own code, but they will definitely be used in some of the libraries you rely on. GATs directly enable common patterns like Iterable that have heretofore been inexpressible, and we’ve also seen a lot of examples where they’re used internally to help libraries present a more unified, simpler interface to their users.
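As a sketch of the Iterable pattern that GATs unlock (the trait, type, and `sum` helper here are illustrative, not from a real library; this needs Rust 1.65 or later):

```rust
// A sketch of the `Iterable` pattern that GATs make expressible.
trait Iterable {
    // The associated type is generic over the lifetime of the borrow:
    // this is the "generic associated type".
    type Iter<'a>: Iterator<Item = &'a i32>
    where
        Self: 'a;

    fn iter(&self) -> Self::Iter<'_>;
}

struct Numbers(Vec<i32>);

impl Iterable for Numbers {
    type Iter<'a> = std::slice::Iter<'a, i32>;

    fn iter(&self) -> Self::Iter<'_> {
        self.0.iter()
    }
}

// Generic code can now borrow from any `Iterable` without
// committing to a concrete iterator type.
fn sum(it: &impl Iterable) -> i32 {
    it.iter().copied().sum()
}

fn main() {
    let ns = Numbers(vec![1, 2, 3]);
    assert_eq!(sum(&ns), 6);
}
```

Before GATs, there was no way to write `type Iter<'a>` at all, which is why `Iterable` could not be expressed as a trait.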

Beyond GATs, there are a number of other places where we could support generics, but we don’t. In the previous section, for example, I talked about being able to have a function with a parameter like impl Fn(impl Debug) — this is actually an example of a “generic closure”. That is, a closure that itself has generic arguments. Rust doesn’t support this yet, but there’s no reason we can’t.

Oftentimes, though, the work to realize “generics everywhere” is not so much a matter of extending the language as it is a matter of improving the compiler’s implementation. Rust’s current traits implementation works pretty well, but as you start to push the bounds of it, you find that there are lots of places where it could be smarter. A lot of the ergonomic problems in GATs arise exactly out of these areas.

One of the developments I’m most excited about in Rust is not any particular feature, it’s the formation of the new types team. The goal of this team is to revamp the compiler’s trait system implementation into something efficient and extensible, as well as building up a core set of contributors.

Making Rust feel simpler by making it more uniform

The topics in this post, of course, only scratch the surface of what’s going on in Rust right now. For example, I’m really excited about “everyday niceties” like let/else-syntax and if-let-pattern guards, or the scoped threads API that we got in 1.63. There are exciting conversations about ways to improve error messages. Cargo, the compiler, and rust-analyzer are all generally getting faster and more capable. And so on, and so on.

The pattern of having a feature that starts working somewhere and then extending it so that it works everywhere seems, though, to be a key part of how Rust development works. It’s inspiring also because it becomes a win-win for users. Newer users find Rust easier to use and more consistent; they don’t have to learn the “edges” of where one thing works and where it doesn’t. Experienced users gain new expressiveness and unlock patterns that were either awkward or impossible before.

One challenge with this iterative development style is that sometimes it takes a long time. Async functions, impl Trait, and generic reasoning are three areas where progress was stalled for years, for a variety of reasons. That all started to shift this year, though. A big part of that shift is the formation of new Rust teams at many companies, allowing a lot more people to have a lot more time. It’s also just the accumulation of the hard work of many people over a long time, slowly chipping away at hard problems (to get a sense for what I mean, read Jack’s blog post on NLL removal, and take a look at the full list of contributors he cited there — just assembling the list was impressive work, not to mention the actual work itself).

It may have been a long time coming, but I’m really excited about where Rust is going right now, as well as the new crop of contributors that have started to push the compiler faster and faster than it’s ever moved before. If things continue like this, Rust in 2024 is going to be pretty damn great.

  1. Oh, my beloved moro! I will return to thee! 

The Rust Programming Language Blog: Announcing Rust 1.64.0

Thursday 22nd of September 2022 12:00:00 AM

The Rust team is happy to announce a new version of Rust, 1.64.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.64.0 with:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.64.0 on GitHub.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.64.0 stable

Enhancing .await with IntoFuture

Rust 1.64 stabilizes the IntoFuture trait. IntoFuture is a trait similar to IntoIterator, but rather than supporting for ... in ... loops, IntoFuture changes how .await works. With IntoFuture, the .await keyword can await more than just futures; it can await anything which can be converted into a Future via IntoFuture - which can help make your APIs more user-friendly!

Take for example a builder which constructs requests to some storage provider over the network:

```rust
pub struct Error { ... }
pub struct StorageResponse { ... }

pub struct StorageRequest(bool);

impl StorageRequest {
    /// Create a new instance of `StorageRequest`.
    pub fn new() -> Self { ... }
    /// Decide whether debug mode should be enabled.
    pub fn set_debug(self, b: bool) -> Self { ... }
    /// Send the request and receive a response.
    pub async fn send(self) -> Result<StorageResponse, Error> { ... }
}
```

Typical usage would likely look something like this:

```rust
let response = StorageRequest::new() // 1. create a new instance
    .set_debug(true)                 // 2. set some option
    .send()                          // 3. construct the future
    .await?;                         // 4. run the future + propagate errors
```

This is not bad, but we can do better here. Using IntoFuture we can combine "construct the future" (line 3) and "run the future" (line 4) into a single step:

```rust
let response = StorageRequest::new() // 1. create a new instance
    .set_debug(true)                 // 2. set some option
    .await?;                         // 3. construct + run the future + propagate errors
```

We can do this by implementing IntoFuture for StorageRequest. IntoFuture requires us to have a named future we can return, which we can do by creating a "boxed future" and defining a type alias for it:

```rust
// First we must import some new types into the scope.
use std::pin::Pin;
use std::future::{Future, IntoFuture};

pub struct Error { ... }
pub struct StorageResponse { ... }
pub struct StorageRequest(bool);

impl StorageRequest {
    /// Create a new instance of `StorageRequest`.
    pub fn new() -> Self { ... }
    /// Decide whether debug mode should be enabled.
    pub fn set_debug(self, b: bool) -> Self { ... }
    /// Send the request and receive a response.
    pub async fn send(self) -> Result<StorageResponse, Error> { ... }
}

// The new implementations:
// 1. create a new named future type
// 2. implement `IntoFuture` for `StorageRequest`
pub type StorageRequestFuture =
    Pin<Box<dyn Future<Output = Result<StorageResponse, Error>> + Send + 'static>>;

impl IntoFuture for StorageRequest {
    type IntoFuture = StorageRequestFuture;
    type Output = <StorageRequestFuture as Future>::Output;

    fn into_future(self) -> Self::IntoFuture {
        Box::pin(self.send())
    }
}
```

This takes a bit more code to implement, but provides a simpler API for users.

In the future, the Rust Async WG hopes to simplify creating new named futures by supporting impl Trait in type aliases (Type Alias Impl Trait, or TAIT). This should make implementing IntoFuture easier by simplifying the type alias' signature, and make it more performant by removing the Box from the type alias.

C-compatible FFI types in core and alloc

When calling or being called by C ABIs, Rust code can use type aliases like c_uint or c_ulong to match the corresponding types from C on any target, without requiring target-specific code or conditionals.

Previously, these type aliases were only available in std, so code written for embedded targets and other scenarios that could only use core or alloc could not use these types.

Rust 1.64 now provides all of the c_* type aliases in core::ffi, as well as core::ffi::CStr for working with C strings. Rust 1.64 also provides alloc::ffi::CString for working with owned C strings using only the alloc crate, rather than the full std library.
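As a brief sketch of the new paths (shown here inside a normal binary; `no_std` code would use the same `core`/`alloc` imports without `std`):

```rust
// The 1.64 FFI types via their `core` / `alloc` paths.
extern crate alloc;

use alloc::ffi::CString;
use core::ffi::{c_int, CStr};

fn main() {
    // `c_int` matches C's `int` on the current target.
    let n: c_int = 42;
    assert_eq!(n, 42);

    // A borrowed C string parsed from a NUL-terminated byte slice.
    let hello: &CStr = CStr::from_bytes_with_nul(b"hello\0").unwrap();
    assert_eq!(hello.to_str().unwrap(), "hello");

    // An owned C string, allocated via the `alloc` crate.
    let owned: CString = CString::new("world").unwrap();
    assert_eq!(owned.as_bytes(), b"world");
}
```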

rust-analyzer is now available via rustup

rust-analyzer is now included in the collection of tools shipped with Rust. This makes it easier to download and access rust-analyzer, and makes it available on more platforms. It is available as a rustup component, which can be installed with:

rustup component add rust-analyzer

At this time, to run the rustup-installed version, you need to invoke it this way:

rustup run stable rust-analyzer

The next release of rustup will provide a built-in proxy so that running the executable rust-analyzer will launch the appropriate version.

Most users should continue to use the releases provided by the rust-analyzer team (available on the rust-analyzer releases page), which are published more frequently. Users of the official VSCode extension are not affected since it automatically downloads and updates releases in the background.

Cargo improvements: workspace inheritance and multi-target builds

When working with collections of related libraries or binary crates in one Cargo workspace, you can now avoid duplication of common field values between crates, such as common version numbers, repository URLs, or rust-version. This also helps keep these values in sync between crates when updating them. For more details, see workspace.package, workspace.dependencies, and "inheriting a dependency from a workspace".
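As a sketch of what workspace inheritance can look like (the crate names and values here are hypothetical, not from any real project; the two files are shown in one listing):

```toml
# Root Cargo.toml: values shared by every member crate.
[workspace]
members = ["crates/app", "crates/util"]

[workspace.package]
version = "1.2.3"
repository = "https://example.com/monorepo"
rust-version = "1.64"

[workspace.dependencies]
serde = { version = "1.0", features = ["derive"] }

# crates/util/Cargo.toml: each member opts in to the shared values.
# [package]
# name = "util"
# version.workspace = true
# repository.workspace = true
# rust-version.workspace = true
#
# [dependencies]
# serde.workspace = true
```

Bumping the version or a dependency in the root then propagates to every member that opted in.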

When building for multiple targets, you can now pass multiple --target options to cargo build, to build all of those targets at once. You can also set to an array of multiple targets in .cargo/config.toml to build for multiple targets by default.
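A hedged sketch of what workspace inheritance looks like in practice (the crate names, version, and URL here are illustrative, not from the release notes):

```toml
# Root Cargo.toml: declare shared values once for the whole workspace.
[workspace]
members = ["crates/api", "crates/cli"]

[workspace.package]
version = "1.2.3"
edition = "2021"
repository = ""

[workspace.dependencies]
serde = "1.0"

# Each member's Cargo.toml then inherits them:
#
#   [package]
#   name = "api"
#   version.workspace = true
#   edition.workspace = true
#   repository.workspace = true
#
#   [dependencies]
#   serde.workspace = true
```

For multi-target builds, the default can similarly be set in .cargo/config.toml by giving an array, e.g. target = ["x86_64-unknown-linux-gnu", "wasm32-unknown-unknown"] under the [build] table.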

Stabilized APIs

The following methods and trait implementations are now stabilized:

These types were previously stable in std::ffi, but are now also available in core and alloc:

These types were previously stable in std::os::raw, but are now also available in core::ffi and std::ffi:

We've stabilized some helpers for use with Poll, the low-level implementation underneath futures:

In the future, we hope to provide simpler APIs that require less use of low-level details like Poll and Pin, but in the meantime, these helpers make it easier to write such code.

These APIs are now usable in const contexts:

Compatibility notes
  • As previously announced, linux targets now require at least Linux kernel 3.2 (except for targets which already required a newer kernel), and linux-gnu targets now require glibc 2.17 (except for targets which already required a newer glibc).

  • Rust 1.64.0 changes the memory layout of Ipv4Addr, Ipv6Addr, SocketAddrV4 and SocketAddrV6 to be more compact and memory efficient. This internal representation was never exposed, but some crates relied on it anyway by using std::mem::transmute, resulting in invalid memory accesses. Such internal implementation details of the standard library are never considered a stable interface. To limit the damage, we worked with the authors of all of the still-maintained crates doing so to release fixed versions, which have been out for more than a year. The vast majority of impacted users should be able to mitigate with a cargo update.

  • As part of the RLS deprecation, this is also the last release containing a copy of RLS. Starting from Rust 1.65.0, RLS will be replaced by a small LSP server showing the deprecation warning.

Other changes

There are other changes in the Rust 1.64 release, including:

  • Windows builds of the Rust compiler now use profile-guided optimization, providing performance improvements of 10-20% for compiling Rust code on Windows.

  • If you define a struct containing fields that are never used, rustc will warn about the unused fields. Now, in Rust 1.64, you can enable the unused_tuple_struct_fields lint to get the same warnings about unused fields in a tuple struct. In future versions, we plan to make this lint warn by default. Fields of type unit (()) do not produce this warning, to make it easier to migrate existing code without having to change tuple indices.

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.64.0

Many people came together to create Rust 1.64.0. We couldn't have done it without all of you. Thanks!

Niko Matsakis: Dyn async traits, part 9: call-site selection

Wednesday 21st of September 2022 09:35:00 PM

After my last post on dyn async traits, some folks pointed out that I was overlooking a seemingly obvious possibility. Why not have the choice of how to manage the future be made at the call site? It’s true, I had largely dismissed that alternative, but it’s worth consideration. This post is going to explore what it would take to get call-site-based dispatch working, and what the ergonomics might look like. I think it’s actually fairly appealing, though it has some limitations.

If we added support for unsized return values…

The idea is to build on the mechanisms proposed in RFC 2884. With that RFC, you would be able to have functions that returned a dyn Future:

fn return_dyn() -> dyn Future<Output = ()> { async move { } }

Normally, when you call a function, we can allocate space on the stack to store the return value. But when you call return_dyn, we don’t know how much space we need at compile time, so we can’t do that1. This means you can’t just write let x = return_dyn(). Instead, you have to choose how to allocate that memory. Using the APIs proposed in RFC 2884, the most common option would be to store it on the heap. A new method, Box::new_with, would be added to Box; it acts like new, but it takes a closure, and the closure can return values of any type, including dyn values:

let result = Box::new_with(|| return_dyn()); // result has type `Box<dyn Future<Output = ()>>`

Invoking new_with would be ergonomically unpleasant, so we could also add a .box operator. Rust has had an unstable box operator since forever; this might finally provide enough motivation to make it worth adding:

let result = return_dyn().box; // result has type `Box<dyn Future<Output = ()>>`

Of course, you wouldn’t have to use Box. Assuming we have sufficient APIs available, people can write their own methods, such as something to do arena allocation…

let arena = Arena::new(); let result = arena.new_with(|| return_dyn());

…or perhaps a hypothetical maybe_box, which would use a buffer if that’s big enough, and use box otherwise:

let mut big_buf = [0; 1024]; let result = maybe_box(&mut big_buf, || return_dyn()).await;

If we add postfix macros, then we might even support something like return_dyn.maybe_box!(&mut big_buf), though I’m not sure if the current proposal would support that or not.

What are unsized return values?

This idea of returning dyn Future is sometimes called “unsized return values”, as functions can now return values of “unsized” type (i.e., types whose size is not statically known). They’ve been proposed in RFC 2884 by Olivier Faure, and I believe there were some earlier RFCs as well. The .box operator, meanwhile, has been a part of “nightly Rust” since approximately forever, though it’s currently written in prefix form, i.e., box foo2.

The primary motivation for both unsized-return-values and .box has historically been efficiency: they permit in-place initialization in cases where it is not possible today. For example, if I write Box::new([0; 1024]) today, I am technically allocating a [0; 1024] buffer on the stack and then copying it into the box:

// First evaluate the argument, creating the temporary:
let temp: [u8; 1024] = ...;

// Then invoke `Box::new`, which allocates a Box...
let box: *const T = allocate_memory();

// ...and copies the memory in.
std::ptr::write(box, temp);

The optimizer may be able to fix that, but it’s not trivial. If you look at the order of operations, it requires making the allocation happen before the arguments are evaluated. LLVM considers calls to known allocators to be “side-effect free”, but promoting them is still risky, since it means that more memory is allocated earlier, which can lead to memory exhaustion. The point isn’t so much to look at exactly what optimizations LLVM will do in practice, so much as to say that it is not trivial to optimize away the temporary: it requires some thoughtful heuristics.

How would unsized return values work?

This merits a blog post of its own, and I won’t dive into details. For our purposes here, the key point is that somehow when the callee goes to return its final value, it can use whatever strategy the caller prefers to get a return point, and write the return value directly in there. RFC 2884 proposes one solution based on generators, but I would want to spend time thinking through all the alternatives before we settled on something.

Using dynamic return types for async fn in traits

So, the question is, can we use dyn return types to help with async function in traits? Continuing with my example from my previous post, if you have an AsyncIterator trait…

trait AsyncIterator { type Item; async fn next(&mut self) -> Option<Self::Item>; }

…the idea is that calling next on a dyn AsyncIterator type would yield dyn Future<Output = Option<Self::Item>>. Therefore, one could write code like this:

fn use_dyn(di: &mut dyn AsyncIterator) {; // ^^^^ }

The expression by itself yields a dyn Future. This type is not sized and so it won’t compile on its own. Adding .box produces a Box<dyn Future>, which you can then await.3

Compared to the Boxing adapter I discussed before, this is relatively straightforward to explain. I’m not entirely sure which is more convenient to use in practice: it depends how many dyn values you create and how many methods you call on them. Certainly you can work around the problem of having to write .box at each call-site via wrapper types or helper methods that do it for you.

Complication: dyn AsyncIterator does not implement AsyncIterator

There is one complication. Today in Rust, every dyn Trait type also implements Trait. But can dyn AsyncIterator implement AsyncIterator? In fact, it cannot! The problem is that the AsyncIterator trait defines next as returning impl Future<..>, which is actually shorthand for impl Future<..> + Sized, but we said that next would return dyn Future<..>, which is ?Sized. So the dyn AsyncIterator type doesn’t meet the bounds the trait requires. Hmm.

But…does dyn AsyncIterator have to implement AsyncIterator?

There is no “hard and fixed” reason that dyn Trait types have to implement Trait, and there are a few good reasons not to do it. The alternative to dyn safety is a design like this: you can always create a dyn Trait value for any Trait, but you may not be able to use all of its members. For example, given a dyn Iterator, you could call next, but you couldn’t call generic methods like map. In fact, we’ve kind of got this design in practice, thanks to the where Self: Sized hack that lets us exclude methods from being used on dyn values.

Why did we adopt object safety in the first place? If you look back at RFC 255, the primary motivation for this rule was ergonomics: clearer rules and better error messages. Although I argued for RFC 255 at the time, I don’t think these motivations have aged so well. Right now, for example, if you have a trait with a generic method, you get an error when you try to create a dyn Trait value, telling you that you cannot create a dyn Trait from a trait with a generic method. But it may well be clearer to get an error at the point where you try to call that generic method, telling you that you cannot call generic methods through dyn Trait.

Another motivation for having dyn Trait implement Trait was that one could write a generic function with T: Trait and have it work equally well for object types. That capability is useful, but because you have to write T: ?Sized to take advantage of it, it only really works if you plan carefully. In practice what I’ve found works much better is to implement Trait for &dyn Trait.

What would it mean to remove the rule that dyn AsyncIterator: AsyncIterator?

I think the new system would be something like this…

  • You can always4 create a dyn Foo value. The dyn Foo type would define inherent methods based on the trait Foo that use dynamic dispatch, but with some changes:
    • Async functions and other methods defined with -> impl Trait return -> dyn Trait instead.
    • Generic methods, methods referencing Self, and other such cases are excluded. These cannot be handled with virtual dispatch.
  • If Foo is object safe using today’s rules, dyn Foo: Foo holds. Otherwise, it does not.5
    • On a related but orthogonal note, I would like to make a dyn keyword required to declare dyn safety.
Implications of removing that rule

This implies that dyn AsyncIterator (or any trait with async functions/RPITIT6) will not implement AsyncIterator. So if I write this function…

fn use_any<I>(x: &mut I) where I: ?Sized + AsyncIterator, {; }

…I cannot use it with I = dyn AsyncIterator. You can see why: it calls next and assumes the result is Sized (as promised by the trait), so it doesn’t add any kind of .box directive (and it shouldn’t have to).

What you can do is implement a wrapper type that encapsulates the boxing:

struct BoxingAsyncIterator<'i, I> { iter: &'i mut dyn AsyncIterator<Item = I> } impl<'i, I> AsyncIterator for BoxingAsyncIterator<'i, I> { type Item = I; async fn next(&mut self) -> Option<Self::Item> { } }

…and then you can call use_any(BoxingAsyncIterator::new(ai)).7

Limitation: what if you wanted to do stack allocation?

One of the goals with the previous proposal was to allow you to write code that used dyn AsyncIterator which worked equally well in std and no-std environments. I would say that goal was partially achieved. The core idea was that the caller would choose the strategy by which the future got allocated, and so it could opt to use inline allocation (and thus be no-std compatible) or use boxing (and thus be simple).

In this proposal, the call-site has to choose. You might think then that you could just choose to use stack allocation at the call-site and thus be no-std compatible. But how does one choose stack allocation? It’s actually quite tricky! Part of the problem is that async stack frames are stored in structs, and thus we cannot support something like alloca (at least not for values that will be live across an await, which includes any future that is awaited8). In fact, even outside of async, using alloca is quite hard! The problem is that a stack is, well, a stack. Ideally, you would do the allocation just before your callee returns, since that’s when you know how much memory you need. But at that time, your callee is still using the stack, so your allocation is in the wrong spot.9 I personally think we should just rule out the idea of using alloca to do stack allocation.

If we can’t use alloca, what can we do? We have a few choices. In the very beginning, I talked about the idea of a maybe_box function that would take a buffer and use it only for really large values. That’s kind of nifty, but it still relies on a box fallback, so it doesn’t really work for no-std.10 Might be a nice alternative to stackfuture though!11

You can also achieve inlining by writing wrapper types (something tmandry and I prototyped some time back), but the challenge then is that your callee doesn’t accept a &mut dyn AsyncIterator, it accepts something like &mut DynAsyncIter, where DynAsyncIter is a struct that you defined to do the wrapping.

All told, I think the answer in reality would be: If you want to be used in a no-std environment, you don’t use dyn in your public interfaces. Just use impl AsyncIterator. You can use hacks like the wrapper types internally if you really want dynamic dispatch.

Question: How much room is there for the compiler to get clever?

One other concern I had in thinking about this proposal was that it seemed like it was overspecified. That is, the vast majority of call-sites in this proposal will be written with .box, which thus specifies that they should allocate a box to store the result. But what about ideas like caching the box across invocations, or “best effort” stack allocation? Where do they fit in? From what I can tell, those optimizations are still possible, so long as the Box which would be allocated doesn’t escape the function (which was the same condition we had before).

The way to think of it: by writing foo().box.await, the user told us to use the boxing allocator to box the return value of foo. But we can then see that this result is passed to await, which takes ownership and later frees it. We can thus decide to substitute a different allocator, perhaps one that reuses the box across invocations, or tries to use stack memory; this is fine so long as we modify the freeing code to match. Doing this relies on knowing that the allocated value is immediately returned to us and that it never leaves our control.


To sum up, I think for most users this design would work like so…

  • You can use dyn with traits that have async functions, but you have to write .box every time you call a method.
  • You get to use .box in other places too, and we gain at least some support for unsized return values.12
  • If you want to write code that is sometimes using dyn and sometimes using static dispatch, you’ll have to write some awkward wrapper types.13
  • If you are writing no-std code, use impl Trait, not dyn Trait; if you must use dyn, it’ll require wrapper types.

Initially, I dismissed call-site allocation because it violated dyn Trait: Trait and it didn’t allow code to be written with dyn that could work in both std and no-std. But I think that violating dyn Trait: Trait may actually be good, and I’m not sure how important that latter constraint truly is. Furthermore, I think that Boxing::new and the various “dyn adapters” are probably going to be pretty confusing for users, but writing .box on a call-site is relatively easy to explain (“we don’t know what future you need, so you have to box it”). So now it seems a lot more appealing to me, and I’m grateful to Olivier Faure for bringing it up again.

One possible extension would be to permit users to specify the type of each returned future in some way. As I was finishing up this post, I saw that matthieum posted an intriguing idea in this direction on the internals thread. In general, I do see a need for some kind of “trait adapters”, such that you can take a base trait like Iterator and “adapt” it in various ways, e.g. producing a version that uses async methods, or which is const-safe. This has some pretty heavy overlap with the whole keyword generics initiative too. I think it’s a good extension to think about, but it wouldn’t be part of the “MVP” that we ship first.


Please leave comments in this internals thread, thanks!

Appendix A: the Output associated type

Here is an interesting thing! The FnOnce trait, implemented by all callable things, defines its associated type Output as Sized! We have to change this if we want to allow unsized return values.

In theory, this could be a big backwards compatibility hazard. Code that writes F::Output can assume, based on the trait, that the return value is sized – so if we remove that bound, the code will no longer build!

Fortunately, I think this is ok. We’ve deliberately restricted the fn types so you can only use them with the () notation, e.g., where F: FnOnce() or where F: FnOnce() -> (). Both of these forms expand to something which explicitly specifies Output, like F: FnOnce<(), Output = ()>. What this means is that even if you write really generic code…

fn foo<F, R>(f: F) where F: FnOnce() -> R { let value: F::Output = f(); ... }

…when you write F::Output, that is actually normalized to R, and the type R has its own (implicit) Sized bound.

(There was actually a recent unsoundness related to this bound, closed by this PR, and we discussed exactly this forwards compatibility question on Zulip.)

  1. I can hear you now: “but what about alloca!” I’ll get there. 

  2. The box foo operator supported by the compiler has no current path to stabilization. There were earlier plans (see RFC 809 and RFC 1228), but we ultimately abandoned those efforts. Part of the problem, in fact, was that the precedence of box foo made for bad ergonomics: foo.box works much better. 

  3. If you try to await a Box<dyn Future> today, you get an error that it needs to be pinned. I think we can solve that by implementing IntoFuture for Box<dyn Future> and having that convert it to Pin<Box<dyn Future>>. 

  4. Or almost always? I may be overlooking some edge cases. 

  5. Internally in the compiler, this would require modifying the definition of MIR to make “dyn dispatch” more first-class. 

  6. Don’t know what RPITIT stands for?! “Return position impl trait in traits!” Get with the program! 

  7. This is basically what the “magical” Boxing::new would have done for you in the older proposal. 

  8. Brief explanation of why async and alloca don’t mix here. 

  9. I was told Ada compilers will allocate the memory at the top of the stack, copy it over to the start of the function’s area, and then pop what’s left. Theoretically possible! 

  10. You could imagine a version that aborted the code if the size is wrong, too, which would make it no-std safe, but not in a reliable way (aborts == yuck). 

  11. Conceivably you could set the size to size_of(SomeOtherType) to automatically determine how much space is needed. 

  12. I say at least some because I suspect many details of the more general case would remain unstable until we gain more experience. 

  13. You have to write awkward wrapper types for now, anyway. I’m intrigued by ideas about how we could make that more automatic, but I think it’s way out of scope here. 

Firefox Nightly: These Weeks In Firefox: Issue 124

Wednesday 21st of September 2022 05:47:24 PM
Highlights Friends of the Firefox team Introductions/Shout-Outs
  • Welcome Schalk! Schalk has been contributing for a while and is the community manager for MDN Web Docs, and is hanging out to hear about DevTools-y things and other interesting things going on in Firefox-land to help promote them to the wider community
Resolved bugs (excluding employees) Volunteers that fixed more than one bug
  • axtinemvsn (one of our CalState students!)
  • Itiel
New contributors (

This Week In Rust: This Week in Rust 461

Wednesday 21st of September 2022 04:00:00 AM

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community Official Project/Tooling Updates Observations/Thoughts Rust Walkthroughs Miscellaneous Crate of the Week

This week's crate is match_deref, a macro crate to implement deref patterns on stable Rust.

Thanks to meithecatte for the suggestion!

Please submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from the Rust Project

347 pull requests were merged in the last week

Rust Compiler Performance Triage

This was a fairly negative week for compiler performance, with regressions overall up to 14% on some workloads (primarily incr-unchanged scenarios), largely caused by #101620. We are still chasing down either a revert or a fix for that regression, though a partial mitigation in #101862 has been applied. Hopefully the full fix or revert will be part of the next triage report.

We also saw a number of other regressions land, though most were much smaller in magnitude.

Triage done by @simulacrum. Revision range: 17cbdfd0..8fd6d03

See the full report for more details.

Call for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

  • No RFCs issued a call for testing this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

  • No RFCs entered Final Comment Period this week.
Tracking Issues & PRs
  • No Tracking Issues or PRs entered Final Comment Period this week.
New and Updated RFCs Upcoming Events

Rusty Events between 2022-09-21 - 2022-10-19

Niko Matsakis: What I meant by the “soul of Rust”

Monday 19th of September 2022 02:15:00 PM

Re-reading my previous post, I felt I should clarify why I called it the “soul of Rust”. The soul of Rust, to my mind, is definitely not being explicit about allocation. Rather, it’s about the struggle between a few key values — especially productivity and versatility1 in tension with transparency. Rust’s goal has always been to feel like a high-level language but with the performance and control of a low-level one. Oftentimes, we are able to find a “third way” that removes the tradeoff, solving both goals pretty well. But finding those “third ways” takes time — and sometimes we just have to accept a certain hit to one value or another for the time being to make progress. It’s exactly at these times, when we have to make a difficult call, that questions about the “soul of Rust” start to come into play. I’ve been thinking about this a lot, so I thought I would write a post that expands on the role of transparency in Rust, and some of the tensions that arise around it.

Why do we value transparency?

From the draft Rustacean Principles:

Niko Matsakis: Dyn async traits, part 8: the soul of Rust

Sunday 18th of September 2022 05:49:00 PM

In the last few months, Tyler Mandry and I have been circulating a “User’s Guide from the Future” that describes our current proposed design for async functions in traits. In this blog post, I want to deep dive on one aspect of that proposal: how to handle dynamic dispatch. My goal here is to explore the space a bit and also to address one particularly tricky topic: how explicit do we have to be about the possibility of allocation? This is a tricky topic, and one that gets at that core question: what is the soul of Rust?

The running example trait

Throughout this blog post, I am going to focus exclusively on this example trait, AsyncIterator:

trait AsyncIterator { type Item; async fn next(&mut self) -> Option<Self::Item>; }

And we’re particularly focused on the scenario where we are invoking next via dynamic dispatch:

fn make_dyn<AI: AsyncIterator>(mut ai: AI) { use_dyn(&mut ai); // <— coercion from `&mut AI` to `&mut dyn AsyncIterator` } fn use_dyn(di: &mut dyn AsyncIterator) {; // <— this call right here! }

Even though I’m focusing the blog post on this particular snippet of code, everything I’m talking about is applicable to any trait with methods that return impl Trait (async functions themselves being a shorthand for a function that returns impl Future).

The basic challenge that we have to face is this:

  • The caller function, use_dyn, doesn’t know what impl is behind the dyn, so it needs to allocate a fixed amount of space that works for everybody. It also needs some kind of vtable so it knows what poll method to call.
  • The callee, AI::next, needs to be able to package up the future for its next function in some way to fit the caller’s expectations.

The first blog post in this series1 explains the problem in more detail.

A brief tour through the options

One of the challenges here is that there are many, many ways to make this work, and none of them is “obviously best”. What follows is, I think, an exhaustive list of the various ways one might handle the situation. If anybody has an idea that doesn’t fit into this list, I’d love to hear it.

Box it. The most obvious strategy is to have the callee box the future type, effectively returning a Box<dyn Future>, and have the caller invoke the poll method via virtual dispatch. This is what the async-trait crate does (although it also boxes for static dispatch, which we don’t have to do).

Box it with some custom allocator. You might want to box the future with a custom allocator.

Box it and cache box in the caller. For most applications, boxing itself is not a performance problem, unless it occurs repeatedly in a tight loop. Mathias Einwag pointed out if you have some code that is repeatedly calling next on the same object, you could have that caller cache the box in between calls, and have the callee reuse it. This way you only have to actually allocate once.

Inline it into the iterator. Another option is to store all the state needed by the function in the AsyncIter type itself. This is actually what the existing Stream trait does, if you think about it: instead of returning a future, it offers a poll_next method, so that the implementor of Stream effectively is the future, and the caller doesn’t have to store any state. Tyler and I worked out a more general way to do inlining that doesn’t require user intervention, where you basically wrap the AsyncIterator type in another type W that has a field big enough to store the next future. When you call next, this wrapper W stores the future into that field and then returns a pointer to the field, so that the caller only has to poll that pointer. One problem with inlining things into the iterator is that it only works well for &mut self methods, since in that case there can be at most one active future at a time. With &self methods, you could have any number of active futures.

Box it and cache box in the callee. Instead of inlining the entire future into the AsyncIterator type, you could inline just one pointer-word slot, so that you can cache and reuse the Box that next returns. The upside of this strategy is that the cached box moves with the iterator and can potentially be reused across callers. The downside is that once the caller has finished, the cached box lives on until the object itself is destroyed.

Have caller allocate maximal space. Another strategy is to have the caller allocate a big chunk of space on the stack, one that should be big enough for every callee. If you know the callees your code will have to handle, and the futures for those callees are close enough in size, this strategy works well. Eric Holk recently released the stackfuture crate that can help automate it. One problem with this strategy is that the caller has to know the size of all its callees.

Have caller allocate some space, and fall back to boxing for large callees. If you don’t know the sizes of all your callees, or those sizes have a wide distribution, another strategy might be to have the caller allocate some amount of stack space (say, 128 bytes) and then have the callee invoke Box if that space is not enough.

Alloca on the caller side. You might think you can store the size of the future to be returned in the vtable and then have the caller “alloca” that space — i.e., bump the stack pointer by some dynamic amount. Interestingly, this doesn’t work with Rust’s async model. Async tasks require that the size of the stack frame is known up front.

Side stack. Similar to the previous suggestion, you could imagine having the async runtimes provide some kind of “dynamic side stack” for each task.2 We could then allocate the right amount of space on this stack. This is probably the most efficient option, but it assumes that the runtime is able to provide a dynamic stack. Runtimes like embassy wouldn’t be able to do this. Moreover, we don’t have any sort of protocol for this sort of thing right now. Introducing a side-stack also starts to “eat away” at some of the appeal of Rust’s async model, which is designed to allocate the “perfect size stack” up front and avoid the need to allocate a “big stack per task”.3

Can async functions used with dyn be “normal”?

One of my initial goals for async functions in traits was that they should feel “as natural as possible”. In particular, I wanted you to be able to use them with dynamic dispatch in just the same way as you would a synchronous function. In other words, I wanted this code to compile, and I would want it to work even if use_dyn were put into another crate (and therefore were compiled with no idea of who is calling it):

fn make_dyn<AI: AsyncIterator>(mut ai: AI) { use_dyn(&mut ai); } fn use_dyn(di: &mut dyn AsyncIterator) {; }

My hope was that we could make this code work just as it is by selecting some kind of default strategy that works most of the time, and then provide ways for you to pick other strategies for those code where the default strategy is not a good fit. The problem though is that there is no single default strategy that seems “obvious and right almost all of the time”…

| Strategy | Downside |
| --- | --- |
| Box it (with default allocator) | requires allocation, not especially efficient |
| Box it with cache on caller side | requires allocation |
| Inline it into the iterator | adds space to AI, doesn’t work for &self |
| Box it with cache on callee side | requires allocation, adds space to AI, doesn’t work for &self |
| Allocate maximal space | can’t necessarily use that across crates, requires extensive interprocedural analysis |
| Allocate some space, fallback | uses allocator, requires extensive interprocedural analysis or else random guesswork |
| Alloca on the caller side | incompatible with async Rust |
| Side-stack | requires cooperation from runtime and allocation |

The soul of Rust

This is where we get to the “soul of Rust”. Looking at the above table, the strategy that seems the closest to “obviously correct” is “box it”. It works fine with separate compilation, fits great with Rust’s async model, and it matches what people are doing today in practice. I’ve spoken with a fair number of people who use async Rust in production, and virtually all of them agreed that “box by default, but let me control it” would work great in practice.

And yet, when we floated the idea of using this as the default, Josh Triplett objected strenuously, and I think for good reason. Josh’s core concern was that this would be crossing a line for Rust. Until now, there is no way to allocate heap memory without some kind of explicit operation (though that operation could be a function call). But if we wanted to make “box it” the default strategy, then you’d be able to write “innocent looking” Rust code that nonetheless invokes Box::new. In particular, it would invoke Box::new each time that next is called, to box up the future. But that is far from clear from reading over make_dyn and use_dyn.

As an example of where this might matter, it might be that you are writing some sensitive systems code where allocation is something you always do with great care. It doesn’t mean the code is no-std, it may have access to an allocator, but you still would like to know exactly where you will be doing allocations. Today, you can audit the code by hand, scanning for “obvious” allocation points like Box::new or vec![]. Under this proposal, while it would still be possible, the presence of an allocation in the code is much less obvious. The allocation is “injected” as part of the vtable construction process. To figure out that this will happen, you have to know Rust’s rules quite well, and you also have to know the signature of the callee (because in this case, the vtable is built as part of an implicit coercion). In short, scanning for allocation went from being relatively obvious to requiring a PhD in Rustology. Hmm.

On the other hand, if scanning for allocations is what is important, we could address that in many ways. We could add an “allow by default” lint to flag the points where the “default vtable” is constructed, and you could enable it in your project. This way the compiler would warn you about the possible future allocation. In fact, even today, scanning for allocations is actually much harder than I made it sound: you can easily see if your function allocates, but you can’t easily see what its callees do. You have to read deeply into all of your dependencies and, if there are function pointers or dyn Trait values, figure out what code is potentially being called. With compiler/language support, we could make that whole process much more first-class and better.

In a way, though, the technical arguments are beside the point. “Rust makes allocations explicit” is widely seen as a key attribute of Rust’s design. In making this change, we would be tweaking that rule to be something like ”Rust makes allocations explicit most of the time”. This would be harder for users to understand, and it would introduce doubt as to whether Rust really intends to be the kind of language that can replace C and C++4.

Looking to the Rustacean design principles for guidance

Some time back, Josh and I drew up a draft set of design principles for Rust. It’s interesting to look back on them and see what they have to say about this question:

  • ⚙️ Reliable: “if it compiles, it works”

Cameron Kaiser: September patch set for TenFourFox

Sunday 18th of September 2022 04:46:31 PM
102 is now the next Firefox Extended Support Release, so it's time for spring cleaning — if you're a resident of the Southern Hemisphere — in the TenFourFox repository. Besides refreshing the maintenance scripts to pull certificate, timezone and HSTS updates from this new source, I also implemented all the relevant security and stability patches from the last gasp of 91ESR (none likely to be exploitable on Power Macs without a direct attack, but many likely to crash them), added an Fx102 user agent choice to the TenFourFox preference pane, updated the ATSUI font blacklist (thanks to Chris T for the report) and updated zlib to 1.2.12, picking up multiple bug fixes and some modest performance improvements. This touches a lot of low-level stuff so updating will require a complete rebuild from scratch (instructions). Sorry about that, it's necessary!

If you're new to building your own copy of TenFourFox, this article from last year is still current with the process and what's out there for alternatives and assistance.

Mozilla Performance Blog: A different perspective

Thursday 15th of September 2022 07:22:28 PM

Usually, in our articles, we talk about performance from the performance engineer’s perspective, but in this one, I want to take a step back and look at it from another perspective. Earlier this year, I talked to an engineer about including more debugging information in the bugs we are filing for regressions. Putting that discussion into context, I realized the performance sheriffing process is complex and that many of our engineers have limited knowledge of how we detect regressions, how we identify the patch that introduced them, and how to respond to a notification of a regression.

As a result, I decided to make a recording about how a sheriff catches a regression and files the bug, and then how the engineer that wrote the source code causing the regression can get the information they need to resolve it. The video below shows a simplified version of how the Performance Sheriffs open a performance regression.

In short, if there’s no test gap between the last good and the first regressed revision, a regression will be filed on the bug that caused it and linked to the alert.

Filing a regression – Demo

I caused a regression! Now what?

If you caused a regression, then a sheriff will open a regression bug and set the regressor bug’s id in the Regressed By field. In the regression description, you’ll find the tests that regressed, and you’ll be able to view a particular graph or the updated alert. Note: almost always the alert contains more items than the description. The video below will show you how to zoom in and find the regressing data point, see the job, trigger a profiler, and see the taskcluster task for it. There you’ll find the payload, dependencies, artifacts, or parsed log.

Investigating a regression – Demo

The full process of investigating an alert and finding the cause of a regression is much more complex than these examples. It has three phases before, and one after, which are: triaging the alerts, investigating the graphs, and filing the regression bug. The one after is following up and offering support to the author of the regressing bug to understand and/or fix the issue. These phases are illustrated below.

Sheriffing Workflow


We have made several small improvements to the regression bug template that are worth noting:

  • We added links to the ratio (magnitude) column that opens the graph of each alert item
  • Previously the performance sheriffs set the severity of the regression, but we now allow the triage owners to determine severity based on the data provided
  • We added a paragraph that lets you know you can trigger profiling jobs for the regressed tests before and after the commit, or ask the sheriff to do this for you.
  • Added a cron job that will trigger performance tests for patches that are most likely to change the performance numbers
Work in progress

There are also three impactful projects in terms of performance:

  1. Integrating the side-by-side script to CI, the ultimate goal being to have side-by-side video comparisons generated automatically on regressions. Currently, there’s a local perftest-tools command that does the comparison.
  2. Having the profiler automatically triggered for the same purpose: having more investigation data available when a regression happens.
  3. Developing a more user-friendly performance comparison tool, PerfCompare, to replace Perfherder Compare View.

Mozilla Open Policy & Advocacy Blog: Mozilla Responds to EU General Court’s Judgment on Google Android

Thursday 15th of September 2022 06:17:51 PM

This week, the EU’s General Court largely upheld the decision sanctioning Google for restricting competition on the Android mobile operating system. But, on their own, the judgment and the record fine do not help to unlock competition and choice online, especially when it comes to browsers.

In July 2018, when the European Commission announced its decision, we expressed hope that the result would help to level the playing field for independent browsers like Firefox and provide real choice for consumers. Sadly for billions of people around the world who use browsers every day, this hope has not been realized – yet.

The case may rumble on in appeals for several more years, but Mozilla will continue to advocate for an Internet which is open, accessible, private, and secure for all, and we will continue to build products which advance this vision. We hope that those with the power to improve browser choice for consumers will also work towards these tangible goals.

The post Mozilla Responds to EU General Court’s Judgment on Google Android appeared first on Open Policy & Advocacy.

The Rust Programming Language Blog: Const Eval (Un)Safety Rules

Thursday 15th of September 2022 12:00:00 AM

In a recent Rust issue (#99923), a developer noted that the upcoming 1.64-beta version of Rust had started signalling errors on their crate, icu4x. The icu4x crate uses unsafe code during const evaluation. Const evaluation, or just "const-eval", runs at compile-time but produces values that may end up embedded in the final object code that executes at runtime.

Rust's const-eval system supports both safe and unsafe Rust, but the rules for what unsafe code is allowed to do during const-eval are even more strict than what is allowed for unsafe code at runtime. This post is going to go into detail about one of those rules.

(Note: If your const code does not use any unsafe blocks or call any const fn with an unsafe block, then you do not need to worry about this!)

A new diagnostic to watch for

The problem, reduced over the course of the comment thread of #99923, is that certain static initialization expressions (see below) are defined as having undefined behavior (UB) at compile time (playground):

pub static FOO: () = unsafe {
    let illegal_ptr2int: usize = std::mem::transmute(&());
    let _copy = illegal_ptr2int;
};

(Many thanks to @eddyb for the minimal reproduction!)

The code above was accepted by Rust versions 1.63 and earlier, but in the Rust 1.64-beta, it now causes a compile time error with the following message:

error[E0080]: could not evaluate static initializer
 -->
  |
3 |     let _copy = illegal_ptr2int;
  |                 ^^^^^^^^^^^^^^^ unable to turn pointer into raw bytes
  |
  = help: this code performed an operation that depends on the underlying bytes representing a pointer
  = help: the absolute address of a pointer is not known at compile-time, so such operations are not supported

As the message says, this operation is not supported: the transmute above is trying to reinterpret the memory address &() as an integer of type usize. The compiler cannot predict what memory address the () would be associated with at execution time, so it refuses to allow that reinterpretation.

When you write safe Rust, then the compiler is responsible for preventing undefined behavior. When you write any unsafe code (be it const or non-const), you are responsible for preventing UB, and during const-eval, the rules about what unsafe code has defined behavior are even more strict than the analogous rules governing Rust's runtime semantics. (In other words, more code is classified as "UB" than you may have otherwise realized.)

If you hit undefined behavior during const-eval, the Rust compiler will protect itself from adverse effects such as the undefined behavior leaking into the type system, but there are few guarantees other than that. For example, compile-time UB could lead to runtime UB. Furthermore, if you have UB at const-eval time, there is no guarantee that your code will be accepted from one compiler version to another.

What is new here

You might be thinking: "it used to be accepted; therefore, there must be some value for the memory address that the previous version of the compiler was using here."

But such reasoning would be based on an imprecise view of what the Rust compiler was doing here.

The const-eval machinery of the Rust compiler (also known as "the CTFE engine") is built upon a MIR interpreter which uses an abstract model of a hypothetical machine as the foundation for evaluating such expressions. This abstract model doesn't have to represent memory addresses as mere integers; in fact, to support fine-grained checking for UB, it uses a much richer datatype for the values that are held in the abstract memory store.
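To make that concrete, here is a toy model (my own sketch, not rustc's actual internal types) of why the abstract machine cannot simply hand back an integer for a pointer:

```rust
/// A toy model of the CTFE engine's richer value representation (not
/// rustc's actual types): an abstract scalar is either a concrete
/// integer or a pointer carrying its allocation identity, never one
/// silently reinterpreted as the other.
#[derive(Debug, PartialEq)]
enum Scalar {
    Int(u128),
    Ptr { alloc_id: u64, offset: u64 },
}

impl Scalar {
    /// Mirrors the "unable to turn pointer into raw bytes" error:
    /// asking a pointer for its integer value fails, because the
    /// engine cannot know what address it will have at runtime.
    fn to_int(&self) -> Result<u128, &'static str> {
        match self {
            Scalar::Int(n) => Ok(*n),
            Scalar::Ptr { .. } => Err("unable to turn pointer into raw bytes"),
        }
    }
}
```

In this model, a transmute just moves a `Scalar` around unchanged; the error only surfaces when something later demands the integer.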

(The aforementioned MIR interpreter is also the basis for Miri, a research tool that interprets non-const Rust code, with a focus on explicit detection of undefined behavior. The Miri developers are the primary contributors to the CTFE engine in the Rust compiler.)

The details of the CTFE engine's value representation do not matter too much for our discussion here. We merely note that earlier versions of the compiler silently accepted expressions that seemed to transmute memory addresses into integers, copied them around, and then transmuted them back into addresses; but that was not what was actually happening under the hood. Instead, what was happening was that the values were passed around blindly (after all, the whole point of transmute is that it does no transformation on its input value, so it is a no-op in terms of its operational semantics).

The fact that it was passing a memory address into a context where you would expect there to always be an integer value would only be caught, if at all, at some later point.

For example, the const-eval machinery rejects code that attempts to embed the transmuted pointer into a value that could be used by runtime code, like so (playground):

pub static FOO: usize = unsafe {
    let illegal_ptr2int: usize = std::mem::transmute(&());
    illegal_ptr2int
};

Likewise, it rejects code that attempts to perform arithmetic on that non-integer value, like so (playground):

pub static FOO: () = unsafe {
    let illegal_ptr2int: usize = std::mem::transmute(&());
    let _incremented = illegal_ptr2int + 1;
};

Both of the latter two variants are rejected in stable Rust, and have been for as long as Rust has accepted pointer-to-integer conversions in static initializers (see e.g. Rust 1.52).

More similar than different

In fact, all of the examples provided above are exhibiting undefined behavior according to the semantics of Rust's const-eval system.

The first example with _copy was accepted in Rust versions 1.46 through 1.63 because of CTFE implementation artifacts. The CTFE engine puts considerable effort into detecting UB, but does not catch all instances of it. Furthermore, by default, such detection can be delayed to a point far after where the actual problematic expression is found.

But with nightly Rust, we can opt into extra checks for UB that the engine provides, by passing the unstable flag -Z extra-const-ub-checks. If we do that, then for all of the above examples we get the same result:

error[E0080]: could not evaluate static initializer
 -->
  |
2 |     let illegal_ptr2int: usize = std::mem::transmute(&());
  |                                  ^^^^^^^^^^^^^^^^^^^^^^^^ unable to turn pointer into raw bytes
  |
  = help: this code performed an operation that depends on the underlying bytes representing a pointer
  = help: the absolute address of a pointer is not known at compile-time, so such operations are not supported

The earlier examples had diagnostic output that put the blame in a misleading place. With the more precise checking -Z extra-const-ub-checks enabled, the compiler highlights the expression where we can first witness UB: the original transmute itself! (Which was stated at the outset of this post; here we are just pointing out that these tools can pinpoint the injection point more precisely.)

Why not have these extra const-ub checks on by default? Well, the checks introduce performance overhead upon Rust compilation time, and we do not know if that overhead can be made acceptable. (However, recent debate among Miri developers indicates that the inherent cost here might not be as bad as they had originally thought. Perhaps a future version of the compiler will have these extra checks on by default.)

Change is hard

You might well be wondering at this point: "Wait, when is it okay to transmute a pointer to a usize during const evaluation?" And the answer is simple: "Never."

Transmuting a pointer to a usize during const-eval has always been undefined behavior, ever since const-eval added support for transmute and union. You can read more about this in the const_fn_transmute / const_fn_union stabilization report, specifically the subsection entitled "Pointer-integer-transmutes". (It is also mentioned in the documentation for transmute.)

Thus, we can see that the classification of the above examples as UB during const evaluation is not a new thing at all. The only change here was that the CTFE engine had some internal changes that made it start detecting the UB rather than silently ignoring it.

This means the Rust compiler has a shifting notion of what UB it will explicitly catch. We anticipated this: RFC 3016, "const UB", explicitly says:

[...] there is no guarantee that UB is reliably detected during CTFE. This can change from compiler version to compiler version: CTFE code that causes UB could build fine with one compiler and fail to build with another. (This is in accordance with the general policy that unsound code is not subject to stability guarantees.)

Having said that: So much of Rust's success has been built around the trust that we have earned with our community. Yes, the project has always reserved the right to make breaking changes when resolving soundness bugs; but we have also strived to mitigate such breakage whenever feasible, via things like future-incompatible lints.

Today, with our current const-eval architecture, it is not feasible to ensure that changes such as the one that injected issue #99923 go through a future-incompat warning cycle. The compiler team plans to keep our eye on issues in this space. If we see evidence that these kinds of changes do cause breakage to a non-trivial number of crates, then we will investigate further how we might smooth the transition path between compiler releases. However, we need to balance any such goal against the fact that Miri has a very limited set of developers: the researchers determining how to define the semantics of unsafe languages like Rust. We do not want to slow their work down!

What you can do for safety's sake

If you observe the could not evaluate static initializer message on your crate atop Rust 1.64, and it was compiling with previous versions of Rust, we want you to let us know: file an issue!

We have performed a crater run for the 1.64-beta and that did not find any other instances of this particular problem. If you can test compiling your crate atop the 1.64-beta before the stable release goes out on September 22nd, all the better! One easy way to try the beta is to use rustup's override shorthand for it:

$ rustup update beta
$ cargo +beta build

As Rust's const-eval evolves, we may see another case like this arise again. If you want to defend against future instances of const-eval UB, we recommend that you set up a continuous integration service to invoke the nightly rustc with the unstable -Z extra-const-ub-checks flag on your code.
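A CI step along these lines would exercise those checks on every push (a configuration sketch; adapt the commands to your CI system, and note the flag is unstable and may change between nightlies):

```shell
# CI step sketch: build with the extra (unstable) const-eval UB checks
# described above. Requires a nightly toolchain.
rustup toolchain install nightly
RUSTFLAGS="-Z extra-const-ub-checks" cargo +nightly build --all-targets
```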

Want to help?

As you might imagine, a lot of us are pretty interested in questions such as "what should be undefined behavior?"

See for example Ralf Jung's excellent blog series on why pointers are complicated (parts I, II, III), which contain some of the details elided above about the representation of pointer values, and spell out reasons why you might want to be concerned about pointer-to-usize transmutes even outside of const-eval.

If you are interested in trying to help us figure out answers to those kinds of questions, please join us in the unsafe code guidelines zulip.

If you are interested in learning more about Miri, or contributing to it, you can say Hello in the miri zulip.


To sum it all up: When you write safe Rust, then the compiler is responsible for preventing undefined behavior. When you write any unsafe code, you are responsible for preventing undefined behavior. Rust's const-eval system has a stricter set of rules governing what unsafe code has defined behavior: specifically, reinterpreting (aka "transmuting") a pointer value as a usize is undefined behavior during const-eval. If you have undefined behavior at const-eval time, there is no guarantee that your code will be accepted from one compiler version to another.

The compiler team is hoping that issue #99923 is an exceptional fluke and that the 1.64 stable release will not encounter any other surprises related to the aforementioned change to the const-eval machinery.

But fluke or not, the issue provided excellent motivation to spend some time exploring facets of Rust's const-eval architecture and the interpreter that underlies it. We hope you enjoyed reading this as much as we did writing it.

This Week In Rust: This Week in Rust 460

Wednesday 14th of September 2022 04:00:00 AM

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community Official Foundation Newsletters Project/Tooling Updates Observations/Thoughts Rust Walkthroughs Miscellaneous Crate of the Week

This week's crate is bstr, a fast and featureful byte-string library.

Thanks to 8573 for the suggestion!

Please submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from the Rust Project

324 pull requests were merged in the last week

Rust Compiler Performance Triage

From the viewpoint of metrics gathering, this was an absolutely terrible week, because the vast majority of this week's report is dominated by noise. Several benchmarks (html5ever, cranelift-codegen, and keccak) have all been exhibiting bimodal behavior where their compile-times would regress and improve randomly from run to run. Looking past that, we had one small win from adding an inline directive.

Triage done by @pnkfelix. Revision range: e7cdd4c0..17cbdfd0


| (instructions:u) | mean | range | count |
| --- | --- | --- | --- |
| Regressions ❌ (primary) | 1.1% | [0.2%, 6.2%] | 26 |
| Regressions ❌ (secondary) | 1.9% | [0.1%, 5.6%] | 34 |
| Improvements ✅ (primary) | -1.8% | [-29.4%, -0.2%] | 42 |
| Improvements ✅ (secondary) | -1.3% | [-5.3%, -0.2%] | 50 |
| All ❌✅ (primary) | -0.7% | [-29.4%, 6.2%] | 68 |

11 Regressions, 11 Improvements, 13 Mixed; 11 of them in rollups 71 artifact comparisons made in total

Full report here

Call for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

  • No RFCs issued a call for testing this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs Tracking Issues & PRs New and Updated RFCs
  • No New or Updated RFCs were created this week.
Upcoming Events

Rusty Events between 2022-09-14 - 2022-10-12

The Rust Programming Language Blog: Security advisories for Cargo (CVE-2022-36113, CVE-2022-36114)

Wednesday 14th of September 2022 12:00:00 AM

This is a cross-post of the official security advisory. The official advisory contains a signed version with our PGP key, as well.

The Rust Security Response WG was notified that Cargo did not prevent extracting some malformed packages downloaded from alternate registries. An attacker able to upload packages to an alternate registry could fill the filesystem or corrupt arbitrary files when Cargo downloaded the package.

These issues have been assigned CVE-2022-36113 and CVE-2022-36114. The severity of these vulnerabilities is "low" for users of alternate registries. Users relying on crates.io are not affected.

Note that by design Cargo allows code execution at build time, due to build scripts and procedural macros. The vulnerabilities in this advisory allow performing a subset of the possible damage in a harder to track down way. Your dependencies must still be trusted if you want to be protected from attacks, as it's possible to perform the same attacks with build scripts and procedural macros.

Arbitrary file corruption (CVE-2022-36113)

After a package is downloaded, Cargo extracts its source code in the ~/.cargo folder on disk, making it available to the Rust projects it builds. To record when an extraction is successful, Cargo writes "ok" to the .cargo-ok file at the root of the extracted source code once it has extracted all the files.

It was discovered that Cargo allowed packages to contain a .cargo-ok symbolic link, which Cargo would extract. Then, when Cargo attempted to write "ok" into .cargo-ok, it would actually replace the first two bytes of the file the symlink pointed to with ok. This would allow an attacker to corrupt one file on the machine using Cargo to extract the package.
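A defensive check against this class of bug can be sketched as follows (my own illustration of the idea, not Cargo's actual fix): before writing a marker file, verify the path is not a symbolic link.

```rust
use std::fs;
use std::io;
use std::path::Path;

/// Refuse to write a marker file through a symlink. `symlink_metadata`
/// inspects the path itself without following links, so a planted
/// symlink is detected instead of silently corrupting its target.
/// A sketch of the defensive idea, not Cargo's actual implementation.
fn write_marker(path: &Path) -> io::Result<()> {
    if let Ok(meta) = fs::symlink_metadata(path) {
        if meta.file_type().is_symlink() {
            return Err(io::Error::new(
                io::ErrorKind::InvalidData,
                "refusing to write marker through a symlink",
            ));
        }
    }
    fs::write(path, "ok")
}
```

Without such a check, `fs::write` follows the link and overwrites the first bytes of whatever file the attacker pointed it at.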

Disk space exhaustion (CVE-2022-36114)

It was discovered that Cargo did not limit the amount of data extracted from compressed archives. An attacker could upload to an alternate registry a specially crafted package that extracts way more data than its size (also known as a "zip bomb"), exhausting the disk space on the machine using Cargo to download the package.
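The standard mitigation is to cap how many bytes extraction will accept, regardless of what the archive's headers claim. A minimal sketch of that guard (hypothetical, not Cargo's actual code) using `Read::take`:

```rust
use std::io::{self, Read};

/// Read at most `limit` bytes from `src`, failing if the stream would
/// exceed it. Reading `limit + 1` bytes lets us distinguish "exactly
/// at the limit" from "over the limit". A sketch of the zip-bomb
/// guard, not Cargo's actual implementation.
fn read_limited<R: Read>(src: R, limit: u64) -> io::Result<Vec<u8>> {
    let mut buf = Vec::new();
    src.take(limit.saturating_add(1)).read_to_end(&mut buf)?;
    if buf.len() as u64 > limit {
        return Err(io::Error::new(
            io::ErrorKind::InvalidData,
            "archive expands beyond the allowed size (possible zip bomb)",
        ));
    }
    Ok(buf)
}
```

Applied per entry (or to the whole decompressed stream), this bounds disk usage no matter how high the archive's compression ratio is.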

Affected versions

Both vulnerabilities are present in all versions of Cargo. Rust 1.64, to be released on September 22nd, will include fixes for both of them.

Since these vulnerabilities are just a more limited way to accomplish what a malicious build scripts or procedural macros can do, we decided not to publish Rust point releases backporting the security fix. Patch files for Rust 1.63.0 are available in the wg-security-response repository for people building their own toolchains.


We recommend users of alternate registries to exercise care in which packages they download, by only including trusted dependencies in their projects. Please note that even with these vulnerabilities fixed, by design Cargo allows arbitrary code execution at build time thanks to build scripts and procedural macros: a malicious dependency will be able to cause damage regardless of these vulnerabilities. crates.io implemented server-side checks to reject these kinds of packages years ago, and there are no packages on crates.io exploiting these vulnerabilities. crates.io users still need to exercise care in choosing their dependencies though, as the same concerns about build scripts and procedural macros apply here.


We want to thank Ori Hollander from JFrog Security Research for responsibly disclosing this to us according to the Rust security policy.

We also want to thank Josh Triplett for developing the fixes, Weihang Lo for developing the tests, and Pietro Albini for writing this advisory. The disclosure was coordinated by Pietro Albini and Josh Stone.

Support.Mozilla.Org: Tribute to FredMcD

Tuesday 13th of September 2022 06:55:05 AM

It brings us great sadness to share the news that FredMcD has recently passed away.

If you ever posted a question to our Support Forum, you may be familiar with a contributor named “FredMcD”. Fred was one of the most active contributors in Mozilla Support, and for many years remains one of our core contributors. He was regularly awarded a forum contributor badge every year since 2013 for his consistency in contributing to the Support Forum.

He was a dedicated contributor, super helpful, and very loyal to Firefox users, making over 81,400 contributions to the Support Forum since 2013. During the COVID-19 lockdown period, he focussed on helping people all over the world when they were online the most – at one point he was doing approximately 3,600 responses in 90 days, an average of 40 a day.

In March 2022, I learned the news that he was hospitalized for a few weeks. He was back active in our forum shortly after he was discharged. But then we never heard from him again after his last contribution on May 5, 2022. There’s very little we know about Fred. But we were finally able to confirm his passing just recently.

We surely lost a great contributor. He was a helpful community member, and his assistance with incidents was greatly appreciated. His support approach was always straightforward and simple. It was not rare that he was able to solve a problem in one go, like this or this one.

To honor his passing, we added his name to the about:credits page to make sure that his contribution and impact on Mozilla will never be forgotten. He will surely be missed by the community.

I’d like to thank Paul for his collaboration in this post and for his help in getting Fred’s name to the about:credits page. Thanks, Paul!


Mozilla Thunderbird: Thunderbird Tip: Customize Colors In The Spaces Toolbar

Monday 12th of September 2022 02:51:25 PM

In our last video tip, you learned how to manually sort the order of all your mail and account folders. Let’s keep that theme of customization rolling forward with a quick video guide on customizing the Spaces Toolbar that debuted in Thunderbird 102.

The Spaces Toolbar is on the left hand side of your Thunderbird client and gives you fast, easy access to your most important activities. With a single click you can navigate between Mail, Address Books, Calendars, Tasks, Chat, settings, and your installed add-ons and themes.

Watch below how to customize it!

Video Guide: Customizing The Spaces Toolbar In Thunderbird

This 2-minute tip video shows you how to easily customize the Spaces Toolbar in Thunderbird 102.

*Note that the color tools available to you will vary depending on the operating system you’re using. If you’re looking to discover some pleasing color palettes, we recommend the excellent, free tools at

Have You Subscribed To Our YouTube Channel?

We’re currently building the next exciting era of Thunderbird, and developing a Thunderbird experience for mobile. We’re also putting out more content and communication across various platforms to keep you informed. And, of course, to show you some great usage tips along the way.

To accomplish that, we’ve launched our YouTube channel to help you get the most out of Thunderbird. You can subscribe here. Help us reach more people than ever before by liking each video and leaving a comment if it helped!

Another Tip Before You Go?

The post Thunderbird Tip: Customize Colors In The Spaces Toolbar appeared first on The Thunderbird Blog.

The Mozilla Blog: Announcing Carlos Torres, Mozilla’s new Chief Legal Officer

Monday 12th of September 2022 12:59:42 PM

I am pleased to announce that starting today, September 12, Carlos Torres has joined Mozilla as our Chief Legal Officer. In this role Carlos will be responsible for leading our global legal and public policy teams, developing legal, regulatory and policy strategies that support Mozilla’s mission. He will also manage all regulatory issues and serve as a strategic business partner helping us accelerate our growth and evolution. Carlos will also serve as Corporate Secretary. He will report to me and join our steering committee.

Carlos Torres joins Mozilla executive team.

Carlos stood out in the interview process because of his great breadth of experience across many topics including strategic and commercial agreements, product, privacy, IP, employment, board matters, investments, regulatory and litigation. He brings experience in both large and small companies, and in organizations with different risk profiles as well as a deep belief in Mozilla’s commitment to innovation and to an open internet.

“Mozilla continues to be a unique and respected voice in technology, in a world that needs trusted institutions more than ever,” said Torres. “There is no other organization that combines community, product, technology and advocacy to produce trusted innovative products that people love. I’ve always admired Mozilla for its principled, people-focused approach and I’m grateful for the opportunity to contribute to Mozilla’s mission and evolution.”

Carlos comes to us most recently from Flashbots, where he led the company’s legal and strategic initiatives. Prior to that, he was General Counsel for two startups and spent over a decade at Salesforce in a variety of leadership roles, including VP, Business Development and Strategic Alliances and VP, Associate General Counsel, Chief of Staff. He also served as senior counsel at a biotech company and started his legal career at Orrick, Herrington & Sutcliffe.

The post Announcing Carlos Torres, Mozilla’s new Chief Legal Officer appeared first on The Mozilla Blog.

IRL (podcast): The AI Medicine Cabinet

Monday 12th of September 2022 07:05:00 AM

Life, death and data. AI’s capacity to support research on human health is well documented. But so are the harms of biased datasets and misdiagnoses. How can AI developers build healthier systems? We take a look at a new dataset for Black skin health, a Covid chatbot in Rwanda, AI diagnostics in rural India, and elusive privacy in mental health apps.

Avery Smith is a software engineer in Maryland who lost his wife to skin cancer. This inspired him to create the Black Skin Health AI Dataset and the web app, Melalogic.

Remy Muhire works on open source speech recognition software in Rwanda, including a Covid-19 chatbot, Mbaza, which 2 million people have used so far.

Radhika Radhakrishnan is a feminist scholar who studies how AI diagnostic systems are deployed in rural India by tech companies and hospitals, as well as the limits of consent.

Jen Caltrider is the lead investigator on a special edition of Mozilla’s “Privacy Not Included” buyer’s guide that investigated the privacy and security of mental health apps.

IRL is an original podcast from Mozilla, the non-profit behind Firefox. In Season 6, host Bridget Todd shares stories of people who make AI more trustworthy in real life. This season doubles as Mozilla’s 2022 Internet Health Report. Go to the report for show notes, transcripts, and more.



More in Tux Machines

today's howtos

  • How to install go1.19beta on Ubuntu 22.04 – NextGenTips

    In this tutorial, we are going to explore how to install Go on Ubuntu 22.04. Golang is an open-source programming language that is easy to learn and use. It has built-in concurrency and a robust standard library, and it lets you build reliable, efficient software fast. Its concurrency mechanisms make it easy to write programs that get the most out of multicore and networked machines, while its novel type system enables flexible and modular program construction. Go compiles quickly to machine code and has the convenience of garbage collection and the power of run-time reflection. In this guide, we are going to learn how to install Go 1.19beta1 on Ubuntu 22.04. Go 1.19 has not yet had a stable release, and there is still much work in progress, including the documentation.
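    The tutorial's exact steps aren't reproduced here, but for reference, one common way to try a Go beta release on Ubuntu is via the official golang.org/dl wrapper packages. This sketch assumes an existing Go toolchain is already installed and that `$(go env GOPATH)/bin` is on your PATH:

    ```shell
    # Fetch the version-specific wrapper command for the beta release
    go install golang.org/dl/go1.19beta1@latest

    # Download the actual beta toolchain into your home directory
    go1.19beta1 download

    # Use the beta side by side with your stable Go install
    go1.19beta1 version
    ```

    Because the beta is installed under its own command name, your stable `go` binary is left untouched and the two can coexist.
    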

  • molecule test: failed to connect to bus in systemd container - openQA bites

    Ansible Molecule is a project that helps you test your Ansible roles. I’m using Molecule to automatically test the Ansible roles of geekoops.

  • How To Install MongoDB on AlmaLinux 9 - idroot

    In this tutorial, we will show you how to install MongoDB on AlmaLinux 9. For those of you who didn’t know, MongoDB is a high-performance, highly scalable document-oriented NoSQL database. Unlike SQL databases, where data is stored in rows and columns inside tables, in MongoDB data is structured in a JSON-like format inside records referred to as documents. The open-source nature of MongoDB makes it an ideal candidate for almost any database-related project. This article assumes you have at least basic knowledge of Linux, know how to use the shell, and, most importantly, host your site on your own VPS. The installation is quite simple and assumes you are running as the root account; if not, you may need to add ‘sudo‘ to the commands to get root privileges. I will show you the step-by-step installation of the MongoDB NoSQL database on AlmaLinux 9. You can follow the same instructions for CentOS and Rocky Linux.
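    The full tutorial lives on idroot, but the usual outline for RHEL-family distributions is to add MongoDB's official yum repository and install from it. A sketch under the assumption of MongoDB 6.0 on an x86_64 machine (adjust the version in the repo file to the release you want):

    ```shell
    # Add the MongoDB 6.0 repository for RHEL/AlmaLinux 9
    cat <<'EOF' | sudo tee /etc/yum.repos.d/mongodb-org-6.0.repo
    [mongodb-org-6.0]
    name=MongoDB Repository
    baseurl=https://repo.mongodb.org/yum/redhat/9/mongodb-org/6.0/x86_64/
    gpgcheck=1
    enabled=1
    gpgkey=https://www.mongodb.org/static/pgp/server-6.0.asc
    EOF

    # Install the server, shell, and tools, then start the service
    sudo dnf install -y mongodb-org
    sudo systemctl enable --now mongod
    ```

    Running `systemctl status mongod` afterwards confirms the daemon came up; the same repo file works on CentOS Stream and Rocky Linux 9.
    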

  • An introduction (and how-to) to Plugin Loader for the Steam Deck. - Invidious
  • Self-host a Ghost Blog With Traefik

    Ghost is a very popular open-source content management system. It started as an alternative to WordPress and went on to become an alternative to Substack by focusing on memberships and newsletters. The creators of Ghost offer managed Pro hosting, but it may not fit everyone's budget. Alternatively, you can self-host it on your own cloud servers. On Linux Handbook, we already have a guide on deploying Ghost with Docker in a reverse proxy setup. Instead of an Nginx reverse proxy, you can also use another piece of software called Traefik with Docker. Traefik is a popular open-source cloud-native application proxy, API gateway, edge router, and more. I use Traefik to secure my websites with SSL certificates obtained from Let's Encrypt. Once deployed, Traefik can automatically manage your certificates and their renewals. In this tutorial, I'll share the necessary steps for deploying a Ghost blog with Docker and Traefik.
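    The tutorial's own compose file isn't reproduced here, but the shape of such a setup looks roughly like this. This is an illustrative sketch only: the domain, email address, and volume names are placeholders, and the labels follow Traefik v2 conventions:

    ```yaml
    # docker-compose.yml — Ghost behind Traefik with Let's Encrypt TLS
    version: "3.7"
    services:
      traefik:
        image: traefik:v2.8
        command:
          - --providers.docker=true
          - --entrypoints.websecure.address=:443
          - --certificatesresolvers.le.acme.email=you@example.com
          - --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
          - --certificatesresolvers.le.acme.tlschallenge=true
        ports:
          - "443:443"
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock:ro
          - ./letsencrypt:/letsencrypt

      ghost:
        image: ghost:5
        environment:
          url: https://blog.example.com
        labels:
          - traefik.enable=true
          - traefik.http.routers.ghost.rule=Host(`blog.example.com`)
          - traefik.http.routers.ghost.entrypoints=websecure
          - traefik.http.routers.ghost.tls.certresolver=le
        volumes:
          - ghost-content:/var/lib/ghost/content

    volumes:
      ghost-content:
    ```

    The key design point is that Ghost exposes no ports directly: Traefik discovers it through the Docker provider, routes by hostname, and handles certificate issuance and renewal on its own.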

Red Hat Hires a Blind Software Engineer to Improve Accessibility on Linux Desktop

Accessibility is not one of the Linux desktop's strongest points. However, GNOME, one of the best desktop environments, has managed to do comparatively better (I think). In a blog post, Christian Fredrik Schaller (Director for Desktop/Graphics, Red Hat) mentions that they are making serious efforts to improve accessibility, starting with Red Hat hiring Lukas Tyrychtr, a blind software engineer, to lead the effort to improve accessibility in Red Hat Enterprise Linux and Fedora Workstation. Read more

Today in Techrights

Android Leftovers