Mozilla: Rust, WebRender, AV1

  • Splash 2018 Mid-Week Report

    I really enjoyed this talk by Felienne Hermans entitled “Explicit Direct Instruction in Programming Education”. The basic gist of the talk was that, when we teach programming, we often phrase it in terms of “exploration” and “self-expression”, but that this winds up leaving a lot of folks in the cold and may be at least partly responsible for the lack of diversity in computer science today. She argued that this is like telling kids that they should just be able to play a guitar and create awesome songs without first practicing their chords – it kind of sets them up to fail.

    The thing that really got me excited about this was that it seemed very connected to mentoring and open source. If you watched the Rust Conf keynote this year, you’ll remember Aaron talking about “OSS by Serendipity” – this idea that we should just expect people to come and produce PRs. This is in contrast to the “OSS by Design” that we’ve been trying to practice and preach, where there are explicit in-roads for people to get involved in the project through mentoring, as well as explicit priorities and goals (created, of course, through open processes like the roadmap and so forth). It seems to me that things like working groups, intro bugs, quest issues, etc., are all ways for people to “practice the basics” of a project before they dive into creating major new features.

  • WebRender newsletter #29

    To introduce this week’s newsletter I’ll write about culling. Culling refers to discarding invisible content and is performed at several stages of the rendering pipeline. During frame building on the CPU we go through all primitives and discard the ones that are off-screen by computing simple rectangle intersections. As a result we avoid transferring a lot of data to the GPU and we can skip processing them as well.

    Unfortunately this isn’t enough. Web pages are typically built upon layers and layers of elements stacked on top of one another. The traditional way to render web pages is to draw each element in back-to-front order, which means that for a given pixel on the screen we may have rendered many primitives. This is frustrating because an opaque primitive often completely covers the work we already did on that pixel for the elements beneath it, so a lot of shading work and memory bandwidth goes to waste, and memory bandwidth is a very common bottleneck, even on high-end hardware.

    Drawing on the same pixels multiple times is called overdraw, and overdraw is not our friend, so a lot of effort goes into reducing it.
    In its early days, to mitigate overdraw WebRender divided the screen into tiles and assigned each primitive to the tiles it covered (a primitive that overlapped several tiles was split into one primitive per tile). When an opaque primitive covered an entire tile, we could simply discard everything below it. This tiling approach was good at reducing overdraw with large occluders and also made batching blended primitives easier (I’ll talk about batching in another episode). It worked quite well for axis-aligned rectangles, which make up the vast majority of what web pages are made of, but it was hard to split transformed primitives. A minimal sketch of both the rectangle culling and the tile-based occlusion appears after this list.

  • Into the Depths: The Technical Details Behind AV1

    Since AOMedia officially cemented the AV1 v1.0.0 specification earlier this year, we’ve seen increasing interest from the broadcasting industry. Starting with the NAB Show (National Association of Broadcasters) in Las Vegas earlier this year, gaining momentum through IBC (International Broadcasting Convention) in Amsterdam, and more recently at the NAB East Show in New York, AV1 keeps picking up steam. Each of these industry events attracts over 100,000 media professionals. Mozilla attended these shows to demonstrate AV1 playback in Firefox, and showed that AV1 is well on its way to being broadly adopted in web browsers.
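
To make the culling discussion in the WebRender item above more concrete, here is a minimal, self-contained Rust sketch of the two ideas it describes: discarding primitives whose rectangles do not intersect the screen, and letting an opaque primitive that covers a whole tile occlude everything drawn before it in that tile. The types and names are invented for illustration and are not WebRender’s actual data structures.

```rust
// Illustrative only: simplified stand-ins, not WebRender's real structures.
#[derive(Clone, Copy)]
struct Rect { x: f32, y: f32, w: f32, h: f32 }

impl Rect {
    fn intersects(&self, other: &Rect) -> bool {
        self.x < other.x + other.w && other.x < self.x + self.w &&
        self.y < other.y + other.h && other.y < self.y + self.h
    }
    fn contains(&self, other: &Rect) -> bool {
        self.x <= other.x && self.y <= other.y &&
        self.x + self.w >= other.x + other.w &&
        self.y + self.h >= other.y + other.h
    }
}

#[derive(Clone, Copy)]
struct Primitive { rect: Rect, opaque: bool }

/// Frame-building culling: drop primitives that are entirely off-screen,
/// so they are never uploaded to the GPU or shaded.
fn cull_offscreen(prims: &[Primitive], screen: &Rect) -> Vec<Primitive> {
    prims.iter().copied().filter(|p| p.rect.intersects(screen)).collect()
}

/// Tile-based occlusion: in a back-to-front list, keep only what is drawn at
/// or above the topmost opaque primitive that covers the whole tile.
fn visible_in_tile(prims: &[Primitive], tile: &Rect) -> Vec<Primitive> {
    let in_tile: Vec<Primitive> =
        prims.iter().copied().filter(|p| p.rect.intersects(tile)).collect();
    // Everything below that opaque cover would be pure overdraw.
    let start = in_tile.iter()
        .rposition(|p| p.opaque && p.rect.contains(tile))
        .unwrap_or(0);
    in_tile[start..].to_vec()
}

fn main() {
    let screen = Rect { x: 0.0, y: 0.0, w: 1920.0, h: 1080.0 };
    let prims = vec![
        // Off-screen primitive: culled on the CPU during frame building.
        Primitive { rect: Rect { x: -500.0, y: 0.0, w: 100.0, h: 100.0 }, opaque: true },
        // Blended page background covering the whole screen.
        Primitive { rect: Rect { x: 0.0, y: 0.0, w: 1920.0, h: 1080.0 }, opaque: false },
        // Opaque element that exactly covers the tile we inspect below.
        Primitive { rect: Rect { x: 0.0, y: 0.0, w: 256.0, h: 256.0 }, opaque: true },
        // Text drawn on top of the opaque element.
        Primitive { rect: Rect { x: 10.0, y: 10.0, w: 50.0, h: 50.0 }, opaque: false },
    ];
    let on_screen = cull_offscreen(&prims, &screen);
    let tile = Rect { x: 0.0, y: 0.0, w: 256.0, h: 256.0 };
    // The background beneath the opaque element is discarded for this tile.
    println!("{} of {} primitives survive for this tile",
             visible_in_tile(&on_screen, &tile).len(), prims.len());
}
```

Real WebRender also has to split primitives that overlap several tiles and deal with transformed (non-axis-aligned) primitives, which is exactly where this simple model breaks down, as the newsletter notes.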

More in Tux Machines

Deepin 15.8 - Attractive and Efficient, Excellent User Experience

Deepin is an open source GNU/Linux operating system, based on the Linux kernel and desktop applications, supporting laptops, desktops and all-in-ones. deepin preinstalls the Deepin Desktop Environment (DDE) and nearly 30 deepin native applications, as well as several applications from the open source community, to meet users’ daily learning and work needs. In addition, about a thousand applications are offered in the Deepin Store to cover further needs. deepin is developed by a professional operating system R&D team and the deepin technical community (www.deepin.org); the name “deepin” comes from that community and means a deep pursuit and exploration of life and the future. Compared with deepin 15.7, the ISO size of deepin 15.8 has been reduced by 200MB. The new release features a newly designed control center, dock tray and boot theme, as well as improved deepin native applications, hoping to bring users a more beautiful and efficient experience. Read more

Kernel: Zinc and 4.20 Merge Window

  • Zinc: a new kernel cryptography API
    We looked at the WireGuard virtual private network (VPN) back in August and noted that it is built on top of a new cryptographic API being developed for the kernel, which is called Zinc. There has been some controversy about Zinc and why a brand new API was needed when the kernel already has an extensive crypto API. A recent talk by lead WireGuard developer Jason Donenfeld at Kernel Recipes 2018 would appear to be a serious attempt to reach out, engage with that question, and explain the what, how, and why of Zinc.

    WireGuard itself is small and, according to Linus Torvalds, a work of art. Two of its stated objectives are maximal simplicity and high auditability. Donenfeld initially tried to implement WireGuard using the existing kernel cryptography API, but found it impossible to do in any sane way, which led him to question whether it was even possible to meet those objectives using the existing API.

    By way of a case study, he considered big_key.c. This is kernel code that is designed to take a key, store it encrypted on disk, and then return the key to someone asking for it if they are allowed to have access to it. Donenfeld had taken a look at it and found that the crypto was totally broken. For a start, it used ciphers in Electronic Codebook (ECB) mode, which is known to leave gross structure in ciphertext (the well-known ECB-encrypted image of Tux still shows a clearly recognizable penguin) and so is not recommended for any serious cryptographic use. Furthermore, according to Donenfeld, it was missing authentication tags (allowing ciphertext to be undetectably modified), it didn't zero keys out of memory after use, and it didn't use its sources of randomness correctly; there were many CVEs associated with it. So he set out to rewrite it using the crypto API, hoping to better learn the API with a view to using it for WireGuard.

    The first step with the existing API is to allocate an instance of a cipher "object". The syntax for doing so is arguably confusing; for example, you pass the argument CRYPTO_ALG_ASYNC to indicate that you don't want the instance to be asynchronous. When you've got it set up and want to encrypt something, you can't simply pass data by address: you must use scatter/gather to pass it, which in turn means that data in the vmalloc() area or on the stack can't just be encrypted with this API. The key you're using ends up attached not to the object you just allocated, but to the global instance of the algorithm in question, so if you want to set the key you must take a mutex lock first, to be sure that someone else isn't changing the key underneath you at the same time.

    This complexity has an associated resource cost: the memory requirements for a single key can approach a megabyte, and some platforms just can't spare that much. Normally one would use kvmalloc() to get around this, but the crypto API doesn't permit it. Although this was eventually addressed, the fix was not trivial. A rough, Rust-flavored sketch of this contrast in API shape appears after this list.

  • 4.20 Merge window part 2
    At the end of the 4.20 merge window, 12,125 non-merge changesets had been pulled into the mainline kernel repository; 6,390 came in since last week's summary was written. As is often the case, the latter part of the merge window contained a larger portion of cleanups and fixes, but there were a number of new features in the mix as well.
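
The complaint in the Zinc item above is about the shape of the API more than about any single function, so here is a deliberately rough Rust sketch of the contrast Donenfeld draws: an object-style interface that makes the caller allocate a transform, attach the key to shared algorithm state under a lock, and describe data with a scatter/gather list, versus a Zinc-style interface that is just a plain function over ordinary buffers. Every name below is hypothetical; the real kernel interfaces are C, and the "encryption" here is a toy XOR used only to keep the example runnable.

```rust
use std::sync::Mutex;

// Hypothetical stand-in for an object/handle-based cipher API.
mod object_style {
    use std::sync::Mutex;

    // Data must be described as a list of chunks rather than passed by address.
    pub struct Scatterlist<'a> { pub chunks: Vec<&'a [u8]> }

    // The key lives on shared per-algorithm state, guarded by a lock.
    pub struct AlgorithmState { pub key: Mutex<Option<Vec<u8>>> }

    // The "transform" the caller has to allocate before doing anything.
    pub struct CipherHandle<'a> { pub alg: &'a AlgorithmState }

    impl<'a> CipherHandle<'a> {
        pub fn set_key(&self, key: &[u8]) {
            // Callers must serialize on the lock so nobody swaps the key
            // underneath them while they are using it.
            *self.alg.key.lock().unwrap() = Some(key.to_vec());
        }
        pub fn encrypt(&self, data: &Scatterlist) -> Vec<u8> {
            // Toy cipher: XOR every byte with the first key byte.
            let guard = self.alg.key.lock().unwrap();
            let k = match &*guard { Some(key) => key[0], None => 0 };
            data.chunks.iter().flat_map(|c| c.iter().map(move |b| b ^ k)).collect()
        }
    }
}

// Hypothetical stand-in for the Zinc approach: one function, plain slices,
// no handle, no shared mutable key state, no scatter/gather.
mod zinc_style {
    pub fn encrypt(plaintext: &[u8], key: &[u8]) -> Vec<u8> {
        let k = key.first().copied().unwrap_or(0);
        plaintext.iter().map(|b| b ^ k).collect()
    }
}

fn main() {
    let key = [0x42u8; 32];
    let msg = b"hello";

    // Object style: allocate, lock, set key, build a scatterlist, encrypt.
    let alg = object_style::AlgorithmState { key: Mutex::new(None) };
    let tfm = object_style::CipherHandle { alg: &alg };
    tfm.set_key(&key);
    let sg = object_style::Scatterlist { chunks: vec![&msg[..]] };
    let a = tfm.encrypt(&sg);

    // Zinc style: one call with the buffers you already have.
    let b = zinc_style::encrypt(msg, &key);

    assert_eq!(a, b);
    println!("both styles produce {} ciphertext bytes", a.len());
}
```

The sketch only mirrors call-site shape; it says nothing about the real constraints the talk covers, such as scatter/gather ruling out stack and vmalloc() buffers, or the per-key memory overhead.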

Limiting the power of package installation in Debian

There is always at least a small risk when installing a package for a distribution. By its very nature, package installation is an invasive process; some packages require the ability to make radical changes to the system, changes that users surely would not want other packages to take advantage of. Packages that are made available by distributions are vetted for problems of this sort, though, of course, mistakes can be made. Third-party packages are an even bigger potential problem because they lack this vetting, as was discussed in early October on the debian-devel mailing list. Solutions in this area are not particularly easy, however.

Lars Wirzenius brought up the problem: "when a .deb package is installed, upgraded, or removed, the maintainer scripts are run as root and can thus do anything." Maintainer scripts are included in a .deb file to be run before and after installation or removal. As he noted, maintainer scripts for third-party packages (e.g. Skype, Chrome) sometimes add entries to the lists of package sources and signing keys; they do so in order to get security updates to their packages safely, but it may still be surprising or unwanted. Even simple mistakes in Debian-released packages can lead to unwelcome surprises of various sorts.

He suggested that there could be a set of "profiles" that describe the kinds of changes a package installation may make. He gave a few examples, such as a "default" profile that only allows file installation in /usr, a "kernel" profile that can install in /boot and trigger rebuilds of the initramfs, or a "core" profile that can do anything. Packages would then declare which profile they require, and the dpkg command could ensure that a package's install scripts only make the kinds of changes allowed by its profile. A purely illustrative sketch of that kind of profile check appears below. Read more
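
Wirzenius's profiles are only a proposal, so any concrete syntax would be speculative, but the core mechanism he describes is easy to sketch: a declared profile maps to a set of allowed locations and actions, and the package manager rejects an installation that steps outside it. The Rust sketch below is purely illustrative; the profile names come from his examples, the path rules are assumptions, and nothing here corresponds to real dpkg code, fields, or behavior.

```rust
use std::path::Path;

/// Profiles as in Lars Wirzenius's examples; purely illustrative.
#[derive(Clone, Copy, Debug)]
enum Profile {
    /// Only allows installing files under /usr.
    Default,
    /// Can install under /boot (and, in the proposal, trigger an initramfs
    /// rebuild); allowing /usr as well is an assumption made here.
    Kernel,
    /// Unrestricted, which is effectively what every package has today.
    Core,
}

impl Profile {
    fn allows_path(&self, path: &Path) -> bool {
        match self {
            Profile::Core => true,
            Profile::Default => path.starts_with("/usr"),
            Profile::Kernel => path.starts_with("/usr") || path.starts_with("/boot"),
        }
    }
}

/// A stand-in for the check a package manager could run before unpacking:
/// refuse the installation if any file falls outside the declared profile.
fn check_install(profile: Profile, files: &[&str]) -> Result<(), String> {
    for &f in files {
        if !profile.allows_path(Path::new(f)) {
            return Err(format!("{:?} profile does not permit installing {}", profile, f));
        }
    }
    Ok(())
}

fn main() {
    // An ordinary package that stays inside /usr passes under Default...
    let ok = check_install(Profile::Default,
                           &["/usr/bin/hello", "/usr/share/doc/hello/README"]);
    assert!(ok.is_ok());

    // ...but one that tries to drop an apt source list is rejected, which is
    // the kind of surprise (third-party repos, keys) the discussion is about.
    let err = check_install(Profile::Default,
                            &["/etc/apt/sources.list.d/vendor.list"]).unwrap_err();
    println!("{}", err);
}
```

The much harder part, which this sketch ignores, is confining the maintainer scripts themselves, since they run arbitrary code as root; that is why the thread concludes that solutions in this area are not easy.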

SpamAssassin is back

The SpamAssassin 3.4.2 release was the first from that project in well over three years. At the 2018 Open Source Summit Europe, Giovanni Bechis talked about that release and those that will be coming in the near future. It would seem that, after an extended period of quiet, the SpamAssassin project is back and has rededicated itself to the task of keeping junk out of our inboxes. Bechis started by noting that spam filtering is hard because everybody's spam is different. It varies depending on which languages you speak, what your personal interests are, which social networks you use, and so on. People vary, so results vary; he knows a lot of Gmail users who say that its spam filtering works well, but his Gmail account is full of spam. Since Google knows little about him, it is unable to train itself to properly filter his mail. Just like Gmail, SpamAssassin isn't the perfect filter for everybody right out of the box; it's really a framework that can be used to create that filter. Getting the best out of it can involve spending some time to write rules, for example. Read more