Kernel Planet

Kernel Planet - http://planet.kernel.org

James Morris: Linux Security Summit North America 2020: CFP and Registration

Thursday 27th of February 2020 09:46:58 PM

The CFP for the 2020 Linux Security Summit North America is currently open, and closes on March 31st.

The CFP details are here: https://events.linuxfoundation.org/linux-security-summit-north-america/program/cfp/

You can register as an attendee here: https://events.linuxfoundation.org/linux-security-summit-north-america/register/

Note that the conference this year has moved from August to June (24-26).  The location is Austin, TX, and we are co-located with the Open Source Summit as usual.

We’ll be holding a 3-day event again, after the success of last year’s expansion, which provides time for tutorials and ad-hoc break out sessions.  Please note that if you intend to submit a tutorial, you should be a core developer of the project or otherwise recognized leader in the field, per this guidance from the CFP:

Tutorial sessions should be focused on advanced Linux security defense topics within areas such as the kernel, compiler, and security-related libraries.  Priority will be given to tutorials created for this conference, and those where the presenter is a leading subject matter expert on the topic.

This will be the 10th anniversary of the Linux Security Summit, which was first held in 2010 in Boston as a one day event.

Get your proposals for 2020 in soon!

Linux Plumbers Conference: Videos for microconferences

Wednesday 26th of February 2020 03:09:25 PM

The videos for all the talks in the microconferences at the 2019 edition of Linux Plumbers are now linked to the schedule. Clicking on the link titled “video” will take you to the right spot in the microconference video. Hopefully, watching all of these talks will get you excited for the 2020 edition, which we are busy preparing! Watch out for our call for microconferences and for our refereed track, both of which are to be released soon. So now’s the time to start thinking about all the exciting problems you want to discuss and solve.

Matthew Garrett: What usage restrictions can we place in a free software license?

Thursday 20th of February 2020 01:33:16 AM
Growing awareness of the wider social and political impact of software development has led to efforts to write licenses that prevent software being used to engage in acts that are seen as socially harmful, with the Hippocratic License being perhaps the most discussed example (although the JSON license's requirement that the software be used for good, not evil, is arguably an earlier version of the theme). The problem with these licenses is that they're pretty much universally considered to fall outside the definition of free software or open source licenses due to their restrictions on use, and there's a whole bunch of people who have very strong feelings that this is a very important thing. There's also the more fundamental underlying point that it's hard to write a license like this where everyone agrees on whether a specific thing is bad or not (eg, while many people working on a project may feel that it's reasonable to prohibit the software being used to support drone strikes, others may feel that the project shouldn't have a position on the use of the software to support drone strikes and some may even feel that some people should be the victims of drone strikes). This is, it turns out, all quite complicated.

But there is something that many (but not all) people in the free software community agree on - certain restrictions are legitimate if they ultimately provide more freedom. Traditionally this was limited to restrictions on distribution (eg, the GPL requires that your recipient be able to obtain corresponding source code, and for GPLv3 must also be able to obtain the necessary signing keys to be able to replace it in covered devices), but more recently there have been some restrictions that don't require distribution. The best known is probably the clause in the Affero GPL (or AGPL) that requires that users interacting with covered code over a network be able to download the source code, but the Cryptographic Autonomy License (recently approved as an Open Source license) goes further and requires that users be able to obtain their data in order to self-host an equivalent instance.

We can construct examples of where these prevent certain fields of endeavour, but the tradeoff has been deemed worth it - the benefits to user freedom that these licenses provide are greater than the corresponding cost to what you can do. How far can that tradeoff be pushed? So, here's a thought experiment. What if we write a license that's something like the following:

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

1. All permissions granted by this license must be passed on to all recipients of modified or unmodified versions of this work
2. This work may not be used in any way that impairs any individual's ability to exercise the permissions granted by this license, whether or not they have received a copy of the covered work

This feels like the logical extreme of the argument. Any way you could use the covered work that would restrict someone else's ability to do the same is prohibited. This means that, for example, you couldn't use the software to implement a DRM mechanism that the user couldn't replace (along the lines of GPLv3's anti-Tivoisation clause), but it would also mean that you couldn't use the software to kill someone with a drone (doing so would impair their ability to make use of the software). The net effect is along the lines of the Hippocratic license, but it's framed in a way that is focused on user freedom.

To be clear, I don't think this is a good license - it has a bunch of unfortunate consequences like it being impossible to use covered code in self-defence if doing so would impair your attacker's ability to use the software. I'm not advocating this as a solution to anything. But I am interested in seeing whether the perception of the argument changes when we refocus it on user freedom as opposed to an independent ethical goal.

Thoughts?

Edit:

Rich Felker on Twitter had an interesting thought - if clause 2 above is replaced with:

2. Your rights under this license terminate if you impair any individual's ability to exercise the permissions granted by this license, even if the covered work is not used to do so

how does that change things? My gut feeling is that covering actions that are unrelated to the use of the software might be a reach too far, but it gets away from the idea that it's your use of the software that triggers the clause.


Kees Cook: security things in Linux v5.4

Wednesday 19th of February 2020 12:37:02 AM

Previously: v5.3.

Linux kernel v5.4 was released in late November. The holidays got the best of me, but better late than never! ;) Here are some security-related things I found interesting:

waitid() gains P_PIDFD
Christian Brauner has continued his pidfd work by adding a critical mode to waitid(): P_PIDFD. This makes it possible to reap child processes via a pidfd, and completes the interfaces needed for the bulk of programs performing process lifecycle management. (i.e. a pidfd can come from /proc or clone(), and can be waited on with waitid().)
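
As a concrete illustration (my sketch, not from the original post), a parent can obtain a pidfd for a child and reap it with waitid(P_PIDFD, ...). For brevity this uses pidfd_open(2) rather than clone(CLONE_PIDFD); the fallback defines are assumptions for older libc headers and the usual x86-64/asm-generic syscall number.

/*
 * Minimal sketch (not from the original post): reap a child via a pidfd
 * with waitid(P_PIDFD, ...).  Uses pidfd_open(2) for brevity; the fallback
 * defines below are assumptions for older headers.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <sys/wait.h>

#ifndef P_PIDFD
#define P_PIDFD 3              /* idtype for pidfd-based waits */
#endif
#ifndef SYS_pidfd_open
#define SYS_pidfd_open 434     /* assumed: x86-64/asm-generic syscall number */
#endif

int main(void)
{
    pid_t pid = fork();
    if (pid == 0)
        _exit(42);             /* child: exit with a recognizable status */

    int pidfd = syscall(SYS_pidfd_open, pid, 0);
    if (pidfd < 0) {
        perror("pidfd_open");
        return 1;
    }

    siginfo_t info = { 0 };
    if (waitid(P_PIDFD, pidfd, &info, WEXITED) < 0) {
        perror("waitid(P_PIDFD)");
        return 1;
    }
    printf("child %d exited with status %d\n", info.si_pid, info.si_status);
    close(pidfd);
    return 0;
}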

kernel lockdown
After something on the order of 8 years, Linux can now draw a bright line between “ring 0” (kernel memory) and “uid 0” (highest privilege level in userspace). The “kernel lockdown” feature, which has been an out-of-tree patch series in most Linux distros for almost as many years, attempts to enumerate all the intentional ways (i.e. interfaces, not flaws) userspace might be able to read or modify kernel memory (or execute in kernel space), and disable them. While Matthew Garrett made the internal details controllable in a fine-grained way, the basic lockdown LSM can be set to disabled, “integrity” (kernel memory can be read but not written), or “confidentiality” (no kernel memory reads or writes). Beyond closing the many holes between userspace and the kernel, if new interfaces are added to the kernel that might violate kernel integrity or confidentiality, there is now a place to put the access control to make everyone happy, and there doesn’t need to be a rehashing of the age-old fight between “but root has full kernel access” and “not in some system configurations”.
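
For reference, the current lockdown mode can be read from securityfs at runtime; a minimal sketch, assuming securityfs is mounted at the usual /sys/kernel/security (the file lists all modes with the active one in brackets):

/*
 * Minimal sketch: print the current lockdown mode.  Assumes securityfs is
 * mounted at the usual /sys/kernel/security; the file lists every mode and
 * marks the active one in brackets, e.g. "none [integrity] confidentiality".
 */
#include <stdio.h>

int main(void)
{
    char buf[128];
    FILE *f = fopen("/sys/kernel/security/lockdown", "r");

    if (!f) {
        perror("lockdown");    /* e.g. kernel built without lockdown */
        return 1;
    }
    if (fgets(buf, sizeof(buf), f))
        fputs(buf, stdout);
    fclose(f);
    return 0;
}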

tagged memory relaxed syscall ABI
Andrey Konovalov (with Catalin Marinas and others) introduced a way to enable a “relaxed” tagged memory syscall ABI in the kernel. This means programs running on hardware that supports memory tags (or “versioning”, or “coloring”) in the upper (non-VMA) bits of a pointer address can use these addresses with the kernel without things going crazy. This is effectively teaching the kernel to ignore these high bits in places where they make no sense (i.e. mathematical comparisons) and keeping them in place where they have meaning (i.e. pointer dereferences).

As an example, if a userspace memory allocator had returned the address 0x0f00000010000000 (VMA address 0x10000000, with, say, a “high bits” tag of 0x0f), and a program used this range during a syscall that ultimately called copy_from_user() on it, the initial range check would fail if the tag bits were left in place: “that’s not a userspace address; it is greater than TASK_SIZE (0x0000800000000000)!”, so they are stripped for that check. During the actual copy into kernel memory, the tag is left in place so that when the hardware dereferences the pointer, the pointer tag can be checked against the expected tag assigned to the referenced memory region. If there is a mismatch, the hardware will trigger the memory tagging protection.
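
A minimal user-space sketch of the tag-stripping idea described above (my example, not kernel code): the tag occupies the top byte of the address and is masked off for the range check, while the full value is what the hardware would see on dereference.

/*
 * Illustration of the tag-stripping described above (my example, not kernel
 * code).  Assumes a TBI-style layout with the tag in bits 63:56.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define TAG_SHIFT 56
#define TASK_SIZE 0x0000800000000000ULL      /* value quoted in the text */

static uint64_t untag(uint64_t addr)
{
    return addr & ((UINT64_C(1) << TAG_SHIFT) - 1);   /* drop bits 63:56 */
}

int main(void)
{
    uint64_t tagged = UINT64_C(0x0f00000010000000);   /* tag 0x0f, VMA 0x10000000 */

    /* The range check uses the untagged address... */
    printf("range check passes: %s\n",
           untag(tagged) < TASK_SIZE ? "yes" : "no");
    /* ...while the dereference (in hardware) sees the full tagged value. */
    printf("address as dereferenced: 0x%016" PRIx64 "\n", tagged);
    return 0;
}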

Right now programs running on Sparc M7 CPUs with ADI (Application Data Integrity) can use this for hardware tagged memory, ARMv8 CPUs can use TBI (Top Byte Ignore) for software memory tagging, and eventually there will be ARMv8.5-A CPUs with MTE (Memory Tagging Extension).

boot entropy improvement
Thomas Gleixner got fed up with poor boot-time entropy and trolled Linus into coming up with a reasonable way to add entropy on modern CPUs, taking advantage of timing noise, cycle counter jitter, and perhaps even the variability of speculative execution. This means that there shouldn’t be mysterious multi-second (or multi-minute!) hangs at boot when some systems don’t have enough entropy to service getrandom() syscalls from systemd or the like.

userspace writes to swap files blocked
From the department of “how did this go unnoticed for so long?”, Darrick J. Wong fixed the kernel to not allow writes from userspace to active swap files. Without this, it was possible for a user (usually root) with write access to a swap file to modify its contents, thereby changing the memory contents of a process once it got paged back in. While root normally could just use CAP_SYS_PTRACE to modify a running process directly, this was a loophole that allowed lesser-privileged users (e.g. anyone in the “disk” group) without the needed capabilities to still bypass ptrace restrictions.

limit strscpy() sizes to INT_MAX
Generally speaking, if a size variable ends up larger than INT_MAX, some calculation somewhere has overflowed. And even if not, it’s probably going to hit code somewhere nearby that won’t deal well with the result. As already done in the VFS core, and vsprintf(), I added a check to strscpy() to reject sizes larger than INT_MAX.
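
A minimal sketch of the kind of check described, using a hypothetical checked_strscpy() rather than the actual kernel patch:

/*
 * Minimal sketch of the kind of check described (a hypothetical
 * checked_strscpy(), not the actual kernel patch): reject sizes above
 * INT_MAX before copying anything.
 */
#include <errno.h>
#include <limits.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>

static ssize_t checked_strscpy(char *dst, const char *src, size_t count)
{
    if (count == 0 || count > INT_MAX)
        return -E2BIG;                 /* almost certainly an overflowed size */

    size_t len = strnlen(src, count - 1);
    memcpy(dst, src, len);
    dst[len] = '\0';
    return (ssize_t)len;
}

int main(void)
{
    char buf[16];
    ssize_t n = checked_strscpy(buf, "hello", sizeof(buf));

    printf("copied %zd bytes: \"%s\"\n", n, buf);
    printf("huge size rejected: %zd\n",
           checked_strscpy(buf, "hello", (size_t)INT_MAX + 1));
    return 0;
}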

ld.gold support removed
Thomas Gleixner removed support for the gold linker. While this isn’t providing a direct security benefit, ld.gold has been a constant source of weird bugs. Specifically where I’ve noticed, it had been a pain while developing KASLR, and has more recently been causing problems while stabilizing building the kernel with Clang. Having this linker support removed makes things much easier going forward. There are enough weird bugs to fix in Clang and ld.lld. ;)

Intel TSX disabled
Given the use of Intel’s Transactional Synchronization Extensions (TSX) CPU feature by attackers to exploit speculation flaws, Pawan Gupta disabled the feature by default on CPUs that support disabling TSX.

That’s all I have for this version. Let me know if I missed anything. :) Next up is Linux v5.5!

© 2020, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

Michael Kerrisk (manpages): man-pages-5.05 is released

Sunday 9th of February 2020 04:55:37 PM
I've released man-pages-5.05. The release tarball is available on kernel.org. The browsable online pages can be found on man7.org. The Git repository for man-pages is available on kernel.org.

This release resulted from patches, bug reports, reviews, and comments from more than 40 contributors. The release includes approximately 110 commits that change around 50 pages.

David Sterba: Btrfs hilights in 5.5: 3-copy and 4-copy block groups

Saturday 1st of February 2020 11:00:00 PM

A bit more detailed overview of a btrfs update that I find interesting; see the pull request for the rest.

New block group profiles RAID1C3 and RAID1C4

There are two new block group profiles enhancing the capabilities of the RAID1 types with more copies than 2. A brief overview of the profiles is in the table below; for a table with all profiles see the manual page of mkfs.btrfs, also available on the wiki.

Profile    Copies    Utilization    Min devices
RAID1      2         50%            2
RAID1C3    3         33%            3
RAID1C4    4         25%            4

The way all the RAID1 types work is that there are 2 / 3 / 4 exact copies over all available devices. The terminology is different from Linux MD RAID, which can do any number of copies. We decided not to do that in btrfs to keep the implementation simple. Another point in favour of simplicity is the users’ perspective: that RAID1C3 provides 3 copies is clear from the type. Even after adding a new device and not doing a balance, the guarantees about redundancy still hold. Newly written data will use the new device together with 2 devices from the original set.

Compare that with a hypothetical RAID1CN, on a filesystem with M devices (N <= M). When the filesystem starts with 2 devices, equivalent to RAID1, adding a new one will have mixed redundancy guarantees after writing more data. Old data with RAID1, new with RAID1C3 – but all accounted under RAID1CN profile. A full re-balance would be required to make it a reliable 3-copy RAID1. Add another device, going to RAID1C4, same problem with more data to shuffle around.

The allocation policy would depend on the number of devices, making it hard for the user to know the redundancy level. This is already the case for RAID0/RAID5/RAID6. For the striped profile RAID0 it’s not much of a problem, as there’s no redundancy. For the parity profiles it’s been a known problem, and a new balance filter, stripes, has been added to support fine-grained selection of block groups.

Speaking of RAID6, there’s the elephant in the room: the write hole. The lack of resiliency against damage to 2 devices has been bothering all of us because of the known write hole problem in the RAID6 implementation. How this is going to be addressed is a topic for another post, but for now the newly added RAID1C3 profile is a reasonable substitute for RAID6.

How to use it

On a freshly created filesystem it’s simple:

# mkfs.btrfs -d raid1c3 -m raid1c4 /dev/sd[abcd]

The command combines both new profiles for the sake of demonstration; you should always consider the expected use and required guarantees and choose the appropriate profiles.

Changing the profile later on an existing filesystem works as usual, you can use:

# btrfs balance start -mconvert=raid1c3 /mnt/path

Provided there are enough devices and enough space to do the conversion, this will go through all metadata block groups, and after it finishes all of them will be of the desired type.

Backward compatibility

The new block groups are not understood by old kernels, and a filesystem using them can’t be mounted by them, not even in read-only mode. To prevent that, a new incompatibility bit called raid1c34 is introduced. Its presence on a device can be checked by btrfs inspect-internal dump-super in the incompat_flags. On a running system the incompat features are exported in sysfs, /sys/fs/btrfs/UUID/features/raid1c34.

Outlook

There is no demand for RAID1C5 at the moment (I asked more than once). The space utilization is already low, and RAID1C4 survives 3 dead devices, so IMHO this is enough for most users. Extending resilience to more devices should perhaps take a different route.

With more copies there’s potential for parallelization of reads from multiple devices. Up to now this is not optimal: there’s a decision logic that’s semi-random, based on the process ID of the btrfs worker threads or of the process submitting the IO. A better load-balancing policy is a work in progress and could appear in 5.7 at the earliest (because 5.6 development is now in fixes-only mode).

Look back

The history of the patchset is a bit bumpy. There was enough motivation and there were requests for the functionality, so I started analyzing what needed to be done. Several cleanups were necessary to unify code and to make it easily extendable to more copies while using the same mirroring code. In the end it came down to changing a few constants.

Then came testing. I tried simple mkfs runs and conversions; that worked well. Then scrub: overwrite some blocks and let the auto-repair do the work. No hiccups. The remaining and important part was device replace, as the expected use case was to substitute for RAID6, replacing a missing or damaged disk. I wrote the test script: replace 1 missing device, replace 2 missing. And it did not work. While the filesystem was mounted, everything seemed OK. Unmount, check again, and the devices were still missing. Not cool, right.

Due to lack of time before the upcoming merge window (a code freeze before the next development cycle), I had to declare it not ready and put it aside. This was in late 2018. For a highly requested feature this was not an easy decision. Had it been something less important, the development cycle between rc1 and the final release would have provided enough time to fix things up. But due to the maintainer role and its demands, I was not confident that I could find enough time to debug and fix the remaining problem. Also, nobody offered help to continue the work, but that’s how it goes.

In late 2019 I had some spare time and looked at the pending work again. I enhanced the test script with more debugging messages and more checks. The code worked well; the test script was subtly broken. Oh well, what a blunder. That cost a year, but on the other hand, releasing a highly requested feature that lacked an important part was not an appealing option.

The patchset was added to the 5.5 development queue at about the last moment before the freeze; the final 5.5 release happened a week ago.

Matthew Garrett: Avoiding gaps in IOMMU protection at boot

Tuesday 28th of January 2020 11:19:58 PM
When you save a large file to disk or upload a large texture to your graphics card, you probably don't want your CPU to sit there spending an extended period of time copying data between system memory and the relevant peripheral - it could be doing something more useful instead. As a result, most hardware that deals with large quantities of data is capable of Direct Memory Access (or DMA). DMA-capable devices are able to access system memory directly without the aid of the CPU - the CPU simply tells the device which region of memory to copy and then leaves it to get on with things. However, we also need to get data back to system memory, so DMA is bidirectional. This means that DMA-capable devices are able to read and write directly to system memory.

As long as devices are entirely under the control of the OS, this seems fine. However, this isn't always true - there may be bugs, the device may be passed through to a guest VM (and so no longer under the control of the host OS) or the device may be running firmware that makes it actively malicious. The third is an important point here - while we usually think of DMA as something that has to be set up by the OS, at a technical level the transactions are initiated by the device. A device that's running hostile firmware is entirely capable of choosing what and where to DMA.

Most reasonably recent hardware includes an IOMMU to handle this. The CPU's MMU exists to define which regions of memory a process can read or write - the IOMMU does the same but for external IO devices. An operating system that knows how to use the IOMMU can allocate specific regions of memory that a device can DMA to or from, and any attempt to access memory outside those regions will fail. This was originally intended to handle passing devices through to guests (the host can protect itself by restricting any DMA to memory belonging to the guest - if the guest tries to read or write to memory belonging to the host, the attempt will fail), but is just as relevant to preventing malicious devices from extracting secrets from your OS or even modifying the runtime state of the OS.

But setting things up in the OS isn't sufficient. If an attacker is able to trigger arbitrary DMA before the OS has started then they can tamper with the system firmware or your bootloader and modify the kernel before it even starts running. So ideally you want your firmware to set up the IOMMU before it even enables any external devices, and newer firmware should actually do this automatically. It sounds like the problem is solved.

Except there's a problem. Not all operating systems know how to program the IOMMU, and if a naive OS fails to remove the IOMMU mappings and asks a device to DMA to an address that the IOMMU doesn't grant access to then things are likely to explode messily. EFI has an explicit transition between the boot environment and the runtime environment triggered when the OS or bootloader calls ExitBootServices(). Various EFI components have registered callbacks that are triggered at this point, and the IOMMU driver will (in general) then tear down the IOMMU mappings before passing control to the OS. If the OS is IOMMU aware it'll then program new mappings, but there's a brief window where the IOMMU protection is missing - and a sufficiently malicious device could take advantage of that.

The ideal solution would be a protocol that allowed the OS to indicate to the firmware that it supported this functionality and request that the firmware not remove it, but in the absence of such a protocol we're left with non-ideal solutions. One is to prevent devices from being able to DMA in the first place, which means the absence of any IOMMU restrictions is largely irrelevant. Every PCI device has a busmaster bit - if the busmaster bit is disabled, the device shouldn't start any DMA transactions. Clearing that seems like a straightforward approach. Unfortunately this bit is under the control of the device itself, so a malicious device can just ignore this and do DMA anyway. Fortunately, PCI bridges and PCIe root ports should only forward DMA transactions if their busmaster bit is set. If we clear that then any devices downstream of the bridge or port shouldn't be able to DMA, no matter how malicious they are. Linux will only re-enable the bit after it's done IOMMU setup, so we should then be in a much more secure state - we still need to trust that our motherboard chipset isn't malicious, but we don't need to trust individual third party PCI devices.

This patch just got merged, adding support for this. My original version did nothing other than clear the bits on bridge devices, but this did have the potential for breaking devices that were still carrying out DMA at the moment this code ran. Ard modified it to call the driver shutdown code for each device behind a bridge before disabling DMA on the bridge, which in theory makes this safe but does still depend on the firmware drivers behaving correctly. As a result it's not enabled by default - you can either turn it on in kernel config or pass the efi=disable_early_pci_dma kernel command line argument.
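
For background, the busmaster bit is bit 2 of the standard PCI command register at config-space offset 0x04. The following user-space sketch (my illustration, not the merged EFI-stub patch) clears it for a single device via sysfs, roughly what setpci can also do; the device path is a made-up example, root is required, and raw config values are little-endian.

/*
 * Illustration only (not the merged EFI-stub patch): clear the Bus Master
 * Enable bit of one device's PCI command register via sysfs.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

#define PCI_COMMAND        0x04
#define PCI_COMMAND_MASTER 0x04   /* Bus Master Enable */

int main(void)
{
    const char *cfg = "/sys/bus/pci/devices/0000:00:1c.0/config"; /* example path */
    uint16_t cmd;
    int fd = open(cfg, O_RDWR);

    if (fd < 0) {
        perror("open config");
        return 1;
    }
    if (pread(fd, &cmd, sizeof(cmd), PCI_COMMAND) != sizeof(cmd)) {
        perror("read command register");
        return 1;
    }
    cmd &= ~PCI_COMMAND_MASTER;   /* device may no longer initiate DMA */
    if (pwrite(fd, &cmd, sizeof(cmd), PCI_COMMAND) != sizeof(cmd)) {
        perror("write command register");
        return 1;
    }
    close(fd);
    return 0;
}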

In combination with firmware that does the right thing, this should ensure that Linux systems can be protected against malicious PCI devices throughout the entire boot process.


Pete Zaitcev: Too Real

Monday 27th of January 2020 04:46:20 PM

From CKS:

The first useful property Python has is that you can't misplace the source code for your deployed Python programs.

Paul E. Mc Kenney: Confessions of a Recovering Proprietary Programmer, Part XVII

Sunday 26th of January 2020 01:22:39 AM
One of the gatherings I attended last year featured a young man asking if anyone felt comfortable doing git rebase “without adult supervision”, as he put it. He seemed as surprised to see anyone answer in the affirmative as I was to see only a very few people so answer. This seems to me to be a suboptimal state of affairs, and thus this post describes how you, too, can learn to become comfortable doing git rebase “without adult supervision”.

Use gitk to See What You Are Doing

The first trick is to be able to see what you are doing while you are doing it. This is nothing particularly obscure or new, and is in fact why screen editors are preferred over line-oriented editors (remember ed?). And gitk displays your commits and shows how they are connected, including any branching and merging. The current commit is evident (yellow rather than blue circle) as is the current branch, if any (branch name in bold font). As with screen editors, this display helps avoid inevitable errors stemming from you and git disagreeing on the state of the repository. Such disagreements were especially common when I was first learning git. Given that git always prevailed in these sorts of disagreements, I heartily recommend using gitk even when you are restricting yourself to the less advanced git commands.

Note that gitk opens a new window, which may not work in all environments. In such cases, the --graph --pretty=oneline arguments to the git log command will give you a static ASCII-art approximation of the gitk display. As such, this approach is similar to using a line-oriented editor, but printing out the local lines every so often. In other words, it is better than nothing, but not as good as might be hoped for.

Fortunately, one of my colleagues pointed me at tig, which provides a dynamic ASCII-art display of the selected commits. This is again not as good as gitk, but it is probably as good as it gets in a text-only environment.

These tools do have their limits, and other techniques are required if you are actively rearranging more than a few hundred commits. If you are in that situation, you should look into the workflows used by high-level maintainers or by the -stable maintainer, who commonly wrangle many hundreds or even thousands of commits. Extreme numbers of commits will of course require significant automation, and many large-scale maintainers do in fact support their workflows with elaborate scripting.

Doing advanced git work without being able to see what you are doing is about as much a recipe for success as chopping wood in the dark. So do yourself a favor and use tools that allow you to see what you are doing!

Make Sure You Can Get Back To Where You Started

A common git rebase horror story involves a mistake made while rebasing, but with the git garbage collector erasing the starting point, so that there is no going back. As the old saying goes, “to err is human”, so such stories are all too plausible. But it is dead simple to give this horror story a happy ending: Simply create a branch at your starting point before doing git rebase:

git branch starting-point
git rebase -i --onto destination-commit base-commit rebase-branch
# The rebased commits are broken, perhaps misresolved conflicts?
git checkout starting-point
# or maybe: git checkout -B rebase-branch starting-point

Alternatively, if you are using git in a distributed environment, you can push all your changes to the master repository before trying the unfamiliar command. Then if things go wrong, you can simply destroy your copy, re-clone the repository, and start over.

Whichever approach you choose, the benefit of ensuring that you can return to your starting point is the ability to repeat the git rebase as many times as needed to arrive at the desired result. Sort of like playing a video game, when you think about it.

Practice on an Experimental Repository

On-the-job training can be a wonderful thing, but sometimes it is better to create an experimental repository for the sole purpose of practicing your git commands. But sometimes, you need a repository with lots of commits to provide a realistic environment for your practice session. In that case, it might be worthwhile to clone another copy of your working repository and do your practicing there. After all, you can always remove the repository after you have finished practicing.

And there are some commands that have such far-reaching effects that I always do a dry-run on a sacrificial repository before trying it in real life. The poster boy for such a command is git filter-branch, which has impressive power for both good and evil.

 

In summary, to use advanced git commands without adult supervision, first make sure that you can see what you are doing, then make sure that you can get back to where you started, and finally, practice makes perfect!

Linux Plumbers Conference: Welcome to the 2020 Linux Plumbers Conference blog

Tuesday 21st of January 2020 10:27:17 PM

Planning for the 2020 Linux Plumbers Conference is well underway. The planning committee will be posting various informational blurbs here, including information on hotels, microconference acceptance, evening events, scheduling, and so on. Next up will be a “call for microconferences” that should appear soon.

LPC this year will be held at the Marriott Harbourfront Hotel in Halifax, Nova Scotia from 25-27 August.

David Sterba: BLAKE3 vs BLAKE2 for BTRFS

Monday 20th of January 2020 11:00:00 PM

Ironic, isn’t it? The paint on BLAKE2 as the BTRFS checksum algorithm hasn’t dried yet, 1-2 weeks to go, and there’s already a successor to it. Faster, yet still supposed to be strong. For a second or two I considered ripping out all the work and … no, not really, but I do admit the excitement.

Speed and strength are competing goals for a hash algorithm. The speed can be evaluated by anyone; not so the strength. I am no cryptographer and in that area rely on the expertise and opinions of others. That BLAKE was a SHA3 finalist is a good indication, and BLAKE2 is its successor, weakened but not weak. BLAKE3 is yet another step trading off strength for speed.

Regarding BTRFS, BLAKE2 is going to be the faster of the strong hashes for now (the other one is SHA256). The argument I have for it now is proof of time. It’s been deployed in many projects (even cryptocurrencies!), there are optimized implementations, and there are ports to various languages.

The look ahead regarding more checksums is to revisit them in about 5 years. Hopefully by that time there will be deployments, real workload performance evaluations and overall user experience that will back future decisions.

Maybe there are going to be new strong yet fast hashes developed. During my research I learned about Kangaroo 12, which is a reduced version of SHA3 (Keccak). The hash is constructed in a different way; perhaps there might be a Kangaroo 2π one day on par with BLAKE3. Or something else. Why not EDON-R, which is #1 in many of the cr.yp.to/hash benchmarks? Another thing I learned during the research is that hash algorithms are twelve in a dozen, IOW too many to choose from. That Kangaroo 12 is internally of a different construction might be a point for selecting it, to have a wider range of “building block types”.

Quick evaluation

For BTRFS I have a micro benchmark, repeatedly hashing a 4 KiB block and using cycles per block as a metric.

  • Block size: 4KiB
  • Iterations: 10000000
  • Digest size: 256 bits (32 bytes)
Hash             Total cycles     Cycles/iteration    Perf vs BLAKE3    Perf vs BLAKE2b
BLAKE3 (AVX2)    111260245256     11126               1.0               0.876 (-13%)
BLAKE2b (AVX2)   127009487092     12700               1.141 (+14%)      1.0
BLAKE2b (AVX)    166426785907     16642               1.496 (+50%)      1.310 (+31%)
BLAKE2b (ref)    225053579540     22505               2.022 (+102%)     1.772 (+77%)
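
As a rough sketch of how such a cycles-per-block micro-benchmark can be structured (my code, not the author's harness): hash_block() is a placeholder for whichever BLAKE2/BLAKE3 implementation is under test, and the cycle counter makes it x86-only.

/*
 * Skeleton of the cycles-per-block measurement described above (my sketch,
 * not the author's harness).  Swap hash_block() for the real implementation.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <x86intrin.h>

#define BLOCK_SIZE 4096
#define ITERATIONS 10000000ULL

/* Placeholder hash so the skeleton compiles; replace with the real function. */
static void hash_block(const uint8_t *in, size_t len, uint8_t out[32])
{
    memset(out, 0, 32);
    for (size_t i = 0; i < len; i++)
        out[i % 32] ^= in[i];
}

int main(void)
{
    static uint8_t block[BLOCK_SIZE];
    uint8_t digest[32];

    uint64_t start = __rdtsc();
    for (uint64_t i = 0; i < ITERATIONS; i++) {
        block[0] = (uint8_t)i;        /* vary input so calls can't be hoisted */
        hash_block(block, sizeof(block), digest);
    }
    uint64_t cycles = __rdtsc() - start;

    printf("total cycles: %llu, cycles/block: %llu (digest[0]=%02x)\n",
           (unsigned long long)cycles,
           (unsigned long long)(cycles / ITERATIONS), digest[0]);
    return 0;
}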

Right now there’s only the reference Rust implementation and a derived C implementation of BLAKE3, claimed not to be optimized, but from my other experience the compiler can do a good job of optimizing programmers’ ideas away. There’s only one BLAKE3 entry, with the AVX2 implementation, the best hardware support my testing box provides. As I had the other results for BLAKE2 at hand, they’re in the table for comparison, but the most interesting pair are the AVX2 versions anyway.

The improvement is 13-14%. Not much, ain’t it? Way less than the announced 4+x speedup over BLAKE2b. Well, it’s always important to interpret the results of a benchmark with respect to the environment of the measurement and the tested parameters.

For the BTRFS filesystem the block size is always going to be in kilobytes. I can’t find what input size was used for the official benchmark results; the bench.rs script iterates over various sizes, so I assume it’s an average. Short input buffers can skew the results as the setup/output overhead can be significant, while for long buffers the compression phase is significant. I don’t have an explanation for the difference and won’t draw conclusions about BLAKE3 in general.

One thing that I dare to claim is that I can sleep well, because based on the above evaluation, BLAKE3 won’t bring a notable improvement if used as a checksum hash.

Personal addendum

During the evaluations, now and in the past, I’ve found it convenient when there’s an offer of implementations in various languages. That, e.g., the Keccak project pages do not point me directly to a C implementation slightly annoyed me, and since the reference implementation in C++ was worse than BLAKE2, I did not take the next step of comparing the C version, wherever I might have found it.

BLAKE3 is fresh and Rust seems to be the only thing that has been improved since the initial release. A plain C implementation without any warning-not-optimized labels would be good. I think that C versions will appear eventually; besides Rust being the new language hotness, there are projects that have not yet gone “let’s rewrite it in Rust”. Please bear with us.

Matthew Garrett: Verifying your system state in a secure and private way

Monday 20th of January 2020 12:53:19 PM
Most modern PCs have a Trusted Platform Module (TPM) and firmware that, together, support something called Trusted Boot. In Trusted Boot, each component in the boot chain generates a series of measurements of the next component of the boot process and relevant configuration. These measurements are pushed to the TPM where they're combined with the existing values stored in a series of Platform Configuration Registers (PCRs) in such a way that the final PCR value depends on both the value and the order of the measurements it's given. If any measurements change, the final PCR value changes.

Windows takes advantage of this with its Bitlocker disk encryption technology. The disk encryption key is stored in the TPM along with a policy that tells it to release it only if a specific set of PCR values is correct. By default, the TPM will release the encryption key automatically if the PCR values match and the system will just transparently boot. If someone tampers with the boot process or configuration, the PCR values will no longer match and boot will halt to allow the user to provide the disk key in some other way.

Unfortunately the TPM keeps no record of how it got to a specific state. If the PCR values don't match, that's all we know - the TPM is unable to tell us what changed to result in this breakage. Fortunately, the system firmware maintains an event log as we go along. Each measurement that's pushed to the TPM is accompanied by a new entry in the event log, containing not only the hash that was pushed to the TPM but also metadata that tells us what was measured and why. Since the algorithm the TPM uses to calculate the hash values is known, we can replay the same values from the event log and verify that we end up with the same final value that's in the TPM. We can then examine the event log to see what changed.
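
A minimal sketch of that replay step (my example; real tools parse the TCG event log format properly): each event extends the PCR as new = SHA256(old || measurement), so walking the log and repeating the extends should reproduce the value held by the TPM. Requires OpenSSL (-lcrypto).

/*
 * Minimal sketch of replaying event-log measurements against a PCR
 * (my example, not a complete event-log parser).
 */
#include <stdio.h>
#include <string.h>
#include <openssl/sha.h>

#define DIGEST_LEN SHA256_DIGEST_LENGTH        /* 32 bytes */

/* Extend pcr in place with one measurement taken from the event log. */
static void pcr_extend(unsigned char pcr[DIGEST_LEN],
                       const unsigned char measurement[DIGEST_LEN])
{
    unsigned char buf[2 * DIGEST_LEN];

    memcpy(buf, pcr, DIGEST_LEN);
    memcpy(buf + DIGEST_LEN, measurement, DIGEST_LEN);
    SHA256(buf, sizeof(buf), pcr);
}

int main(void)
{
    /* Most PCRs start out as all zeroes before the first measurement. */
    unsigned char pcr[DIGEST_LEN] = { 0 };

    /* Hypothetical measurements; in practice these come from the event log. */
    unsigned char m1[DIGEST_LEN] = { 0x01 };
    unsigned char m2[DIGEST_LEN] = { 0x02 };

    pcr_extend(pcr, m1);
    pcr_extend(pcr, m2);

    /* Compare this replayed value against the (signed) PCR value from the TPM. */
    for (int i = 0; i < DIGEST_LEN; i++)
        printf("%02x", pcr[i]);
    printf("\n");
    return 0;
}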

Unfortunately, the event log is stored in unprotected system RAM. In order to be able to trust it we need to compare the values in the event log (which can be tampered with) with the values in the TPM (which are much harder to tamper with). Unfortunately if someone has tampered with the event log then they could also have tampered with the bits of the OS that are doing that comparison. Put simply, if the machine is in a potentially untrustworthy state, we can't trust that machine to tell us anything about itself.

This is solved using a procedure called Remote Attestation. The TPM can be asked to provide a digital signature of the PCR values, and this can be passed to a remote system along with the event log. That remote system can then examine the event log, make sure it corresponds to the signed PCR values and make a security decision based on the contents of the event log rather than just on the final PCR values. This makes the system significantly more flexible and aids diagnostics. Unfortunately, it also means you need a remote server and an internet connection and then some way for that remote server to tell you whether it thinks your system is trustworthy and also you need some way to believe that the remote server is trustworthy and all of this is well not ideal if you're not an enterprise.

Last week I gave a talk at linux.conf.au on one way around this. Basically, remote attestation places no constraints on the network protocol in use - while the implementations that exist all do this over IP, there's no requirement for them to do so. So I wrote an implementation that runs over Bluetooth, in theory allowing you to use your phone to serve as the remote agent. If you trust your phone, you can use it as a tool for determining if you should trust your laptop.

I've pushed some code that demos this. The current implementation does nothing other than tell you whether UEFI Secure Boot was enabled or not, and it's also not currently running on a phone. The phone bit of this is pretty straightforward to fix, but the rest is somewhat harder.

The big issue we face is that we frequently don't know what event log values we should be seeing. The first few values are produced by the system firmware and there's no standardised way to publish the expected values. The Linux Vendor Firmware Service has support for publishing these values, so for some systems we can get hold of this. But then you get to measurements of your bootloader and kernel, and those change every time you do an update. Ideally we'd have tooling for Linux distributions to publish known good values for each package version and for that to be common across distributions. This would allow tools to download metadata and verify that measurements correspond to legitimate builds from the distribution in question.

This does still leave the problem of the initramfs. Since initramfs files are usually generated locally, and depend on the locally installed versions of tools at the point they're built, we end up with no good way to precalculate those values. I proposed a possible solution to this a while back, but have done absolutely nothing to help make that happen. I suck. The right way to do this may actually just be to turn initramfs images into pre-built artifacts and figure out the config at runtime (dracut actually supports a bunch of this already), so I'm going to spend a while playing with that.

If we can pull these pieces together then we can get to a place where you can boot your laptop and then, before typing any authentication details, have your phone compare each component in the boot process to expected values. Assistance in all of this extremely gratefully received.


Paul E. Mc Kenney: The Old Man and His Macbook

Monday 20th of January 2020 01:52:03 AM
I received a MacBook at the same time I received the smartphone. This was not my first encounter with a Mac, in fact, I long ago had the privilege of trying out a Lisa. I occasionally made use of the original Macintosh (perhaps most notably to prepare a resume when applying for a job at Sequent), and even briefly owned an iMac, purchased to run some educational software for my children. But that iMac was my last close contact with the Macintosh line, some 20 years before the MacBook: Since then, I have used Windows and more recently, Linux.

So how does the MacBook compare? Let's start with some positives:

  • Small and light package, especially when compared to my rcutorture-capable ThinkPad. On the other hand, the MacBook would not be particularly useful for running rcutorture.
  • Much of the familiar UNIX userspace is right at my fingertips.
  • The GUI remembers which windows were on the external display, and restores them when plugged back into that display.
  • Automatically powers off when not in use, but resumes where you left off.
  • Most (maybe all) applications resume where they left off after rebooting for an upgrade, which was an extremely pleasant surprise.
  • Wireless works seamlessly.


There are of course some annoyances:

  • My typing speed and accuracy took a serious hit. Upon closer inspection, this turned out to be due to the keyboard being smaller than standard. I have no idea why this “interesting” design choice was made, given that there appears to be ample room for full-sized keys. Where possible, I connect a full-sized keyboard, thus restoring full-speed typing.
  • I detest trackpads, but that is the only built-in mouse available, which defeats my usual strategy of disabling them. As with the keyboard, where possible I connect a full-sized mouse. In pleasing contrast to the earlier Macs, this MacBook understands that a mouse can have more than one button.
  • I found myself detesting the MacBook trackpad even more than usual, in part because brushing up against it can result in obnoxious pop-up windows offering to sell me songs and other products related to RCU. I disabled this advertising “feature” only to find that it was now putting up obnoxious pop-up windows offering to look up RCU-related words in the dictionary. In both cases, these pop-up windows grab focus, which makes them especially unfriendly to touch-typists. Again, the solution is to attach a full-sized keyboard and standard mouse. Perhaps my next trip will motivate me to disable this misfeature, but who knows what other misfeature lies hidden behind it?
  • Connectivity. You want to connect to video? A memory stick? Ethernet? You will need a special adapter.
  • Command key instead of control key for cut-and-paste. Nor can I reasonably remap the keys, at least not if I want to continue using control-C to interrupt unruly UNIX-style applications. On the other hand, I freely admit that Linux's rather anarchic approach to paste buffers is at best an acquired taste.
  • The control key appears only on the left-hand side of the keyboard, which is also unfriendly to touch-typists.
  • Multiple workspaces are a bit spooky. They sometimes change order, or maybe I am accidentally hitting some key combination that moves them. Thankfully, it is very easy to move them where you want them: Control-uparrow, then drag and drop with the mouse.
  • I tried porting perfbook, but TexLive took forever to install. I ran out of patience long before it ran out of whatever it was downloading.


Overall impression? It is yet another laptop, with its own advantages, quirks, odd corners, and downsides. I can see how people who grew up on Macbook and who use nothing else could grow to love it passionately. But switching back and forth between MacBook and Linux is a bit jarring, though of course MacBook and Linux have much more in common than did the five different systems I switched back and forth between in the late 1970s.

My current plan is to stick with it for a year (nine months left!), and decide where to go from there. I might continue sticking with it, or I might try moving to Linux. We will see!

Paul E. Mc Kenney: Other weighty matters

Sunday 19th of January 2020 11:53:17 PM
I used to be one of those disgusting people who could eat whatever he wanted, whenever he wanted, and as much as he wanted—and not gain weight.

In fact, towards the end of my teen years, I often grew very tired of eating. You see, what with all my running and growing, in order to maintain weight I had to eat until I felt nauseous. I would feel overstuffed for about 30 minutes and then I would feel fine for about two hours. Then I would be hungry again. In retrospect, perhaps I should have adopted hobbit-like eating habits, but then again, six meals a day does not mesh well with school and workplace schedules, to say nothing of with family traditions.

Once I stopped growing in my early 20s, I was able to eat more normally. Nevertheless, I rarely felt full. In fact, on one of those rare occasions when I did profess a feeling of fullness, my friends not only demanded that I give it to them in writing, but also that I sign and date the resulting document. This document was rendered somewhat less than fully official due to its being written on a whiteboard.

And even by age 40, eating what most would consider to be a normal diet caused my weight to drop dramatically and abruptly.

However, my metabolism continued to slow down, and my body's ability to tolerate vigorous exercise waned as well. But these changes took place slowly, and so the number on the scale crept up almost imperceptibly.

But so what if I am carrying a little extra weight? Why should I worry?

Because I have a goal: Should I reach age 80, I would very much like to walk under my own power. And it doesn't take great powers of observation to conclude that carrying extra weight is not consistent with that goal. Therefore, I must pay close attention to the scale.

But life flowed quickly, so I continued failing to pay attention to the scale, at least not until a visit to an airport in Florida. After passing through one of the full-body scanners, I was called out for a full-body search. A young man patted me down quite thoroughly, but wasn't able to find whatever it was that he was looking for. He called in a more experienced colleague, who quickly determined that what had apparently appeared to be an explosive device under my shirt was instead an embarrassingly thick layer of body fat. And yes, I did take entirely too much satisfaction from the fact that he chose to dress down his less-experienced colleague, but I could no longer deny that I was a good 25-30 pounds overweight. And in the poor guy's defense, the energy content of that portion of my body fat really did rival that of a small bomb. And, more to the point, the sheer mass of that fat was in no way consistent with my goal to be able to walk under my own power at age 80.

So let that be a lesson to you. If you refuse to take the hint from your bathroom scale, you might well find yourself instead taking it from the United States of America's Transportation Security Administration.

Accepting the fact that I was overweight was one thing. Actually doing something about it was quite another. You see, my body had become a card-carrying member of House Stark, complete with their slogan: “Winter is coming.” And my body is wise in the ways of winter. It knows not only that winter is coming, but also that food will be hard to come by, especially given my slowing reflexes and decreasing agility. Now, my body has never actually seen such a winter, but countless generations of natural selection have hammered a deep and abiding anticipation of such winters into my very DNA. Furthermore, my body knows exactly one way to deal with such a winter, and that is to eat well while the eating is good.

However, I have thus far had the privilege of living in a time and place where the eating is always good and where winter never comes, at least not the fearsome winters that my body is fanatically motivated to prepare for.

This line of thought reminded me of a piece written long ago by the late Isaac Asimov, in which he suggested that we should stop eating before we feel full. (Shortly after writing this, an acquaintance is said to have pointed out that Asimov could stand to lose some weight, and Asimov is said to have reacted by re-reading his own writing and then successfully implementing its recommendation.) The fact that I now weighed in at more than 210 pounds provided additional motivation.

With much effort, I was able to lose more than ten pounds, but then my weight crept back up again. I was able to keep my weight to about 205, and there it remained for some time.

At least, there it remained until I lost more than ten pounds due to illness. I figured that since I had paid the price of the illness, I owed it to myself to take full advantage of the resulting weight loss. Over a period of some months, I managed to get down to 190 pounds, which was a great improvement over 210, but significantly heavier than my 180-pound target weight.

But my weight remained stubbornly fixed at about 190 for some months.

Then I remembered the control systems class I took decades ago and realized that my body and I comprised a control system designed to maintain my weight at 190. You see, my body wanted a good fifty pounds of fat to give me a good chance of surviving the food-free winter that it knew was coming. So, yes, I wanted my weight to be 180. But only when the scale read 190 or more would I panic and take drastic action, such as fasting for a day, inspired by several colleagues' lifestyle fasts. Below 190, I would eat normally, that is, I would completely give in to my body's insistence that I gain weight.

As usual, the solution was simple but difficult to implement. I “simply” slowly decreased my panic point from 190 downwards, one pound at a time.

One of the ways that my body convinces me to overeat is through feelings of anxiety. “If I miss this meal, bad things will happen!!!” However, it is more difficult for my body to convince me that missing a meal would be a disaster if I have recently fasted. Therefore, fasting turned out to be an important component of my weight-loss regimen. A fast might mean just skipping breakfast, it might mean skipping both breakfast and lunch, or it might be a 24-hour fast. But note that a 24-hour fast skips first dinner, then breakfast, and finally lunch. Skipping breakfast, lunch, and then dinner results in more than 30 hours of fasting, which seems a bit excessive.

Of course, my body is also skilled at exploiting any opportunity for impulse eating, and I must confess that I do not yet consistently beat it at this game.

Exercise continues to be important, but it also introduces some complications. You see, exercise is inherently damaging to muscles. The strengthening effects of exercise are not due to the exercise itself, but rather to the body's efforts to repair the damage and then some. Therefore, in the 24 hours or so after exercise, my muscles suffer significant inflammation due to this damage, which results in a pound or two of added water weight (but note that everyone's body is different, so your mileage may vary). My body is not stupid, and so it quickly figured out that one of the consequences of a heavy workout was reduced rations the next day. It therefore produced all sorts of reasons why a heavy workout would be a bad idea, and with a significant rate of success.

So I allow myself an extra pound the day after a heavy workout. This way my body enjoys the exercise and gets to indulge the following day. Win-win! ;–)

There are also some foods that result in added water weight, with corned beef, ham, and bacon being prominent among them. The amount of water weight seems to vary based on I know not what, but sometimes ranges up to three pounds. I have not yet worked out exactly what to do about this, but one strategy might be to eat these types of food only on the day of a heavy workout. Another strategy would be to avoid them completely, but that is crazy talk, especially in the case of bacon.

So after two years, I have gotten down to 180, and stayed there for several months. What does the future hold?

Sadly, it is hard to say. In my case it appears that something like 90% of the effort required to lose weight is required to keep that weight off. So if you really do want to know what the future holds, all I can say is “Ask me in the future.”

But the difficulty of keeping weight off should come as no surprise.

After all, my body is still acutely aware that winter is coming!

Pete Zaitcev: Nobody is Google

Tuesday 14th of January 2020 08:05:53 PM

A little while ago I went to a talk by Michael Verhulst of Terminal Labs. He made a career of descending into Devops disaster zones and righting things up, and shared some observations. One of his aphorisms was:

Nobody Is Google — Not Even Google

If I understood him right, he meant to say that scalability costs money and businesses flounder on buying more scalability than they need. Or, people think they are Google, but they are not. Even inside Google, only a small fraction of services operate at Google scale (aka planetary scale).

Apparently this sort of thing happens quite often.

Linux Plumbers Conference: Happy New Year!

Monday 13th of January 2020 08:39:25 PM

The new year is in full swing and so are the preparations for the Linux Plumbers Conference in 2020! Updates coming soon! Until then you can watch some videos.

Paul E. Mc Kenney: Parallel Programming: December 2019 Update

Wednesday 8th of January 2020 05:02:31 AM
There is a new release of Is Parallel Programming Hard, And, If So, What Can You Do About It?.

This release features a number of formatting and build-system improvements by the indefatigable Akira Yokosawa. On the formatting side, we have listings automatically generated from source code, clever references, selective PDF hyperlink highlighting, and finally the settling of the old after-period one-space/two-space debate by mandating newline instead. On the build side, we improved checks for incompatible packages, SyncTeX database file generation (instigated by Balbir Singh), better identification of PDFs, build notes for recent Fedora releases, fixes for some multiple-figure page issues, improved font handling, and a2ping workarounds for the ever-troublesome Ghostscript. In addition, the .bib file format was dragged kicking and screaming out of the 1980s, prompted by Stamatis Karnouskos. The new format is said to be more compatible with modern .bib-file tooling.

On the content side, the “Hardware and its Habits”, “Tools of the Trade”, “Locking”, “Deferred Processing”, “Data Structures”, and “Formal Verification” chapters received some much needed attention, the latter by Akira, who also updated the “Shared-Variable Shenanigans” section based on a recent LWN article. SeongJae Park, Stamatis, and Zhang Kai fixed a large quantity of typos and addressed numerous other issues. There is now a full complement of top-level section epigraphs, and there are a few scalability results up to 420 CPUs, courtesy of a system provided by my new employer.

On the code side, there have been a number of bug fixes and updates from ACCESS_ONCE() to READ_ONCE() or WRITE_ONCE(), with significant contributions from Akira, Junchang Wang, and Slavomir Kaslev.

A full list of the changes since the previous release may be found here, and as always, git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/perfbook.git will be updated in real time.

Paul E. Mc Kenney: Exit Libris, Take Two

Friday 27th of December 2019 07:04:09 PM
Still the same number of bookshelves, and although I do have a smartphone, I have not yet succumbed to the ereader habit. So some books must go!


  • Books about science and computing, in some cases rather loosely speaking:

    • “The Rocks Don't Lie”, David R. Montgomery. Great story of how geologists spent a great many years rediscovering the second century's received wisdom that the Book of Genesis should not be given a literal interpretation, specifically the part regarding Noah's Flood. Only to spend the rest of their lives resisting J. Harlen Bretz's work on the catastrophic floods that shaped the Columbia Gorge. The book also covers a number of other suspected catastrophic floods, showing how science sometimes catches up with folklore. Well worth a read, but discarded in favor of a biography focusing on J. Harlen Bretz. Which is around here somewhere...
    • “This Book Warps Space and Time”, Norman Sperling. Nice collection of science-related humor. Of course, they say that every book warps space and time.
    • “Scarcity: The True Cost of not Having Enough”, Sendhil Mullainathan and Eldar Shafir. Not a bad book for its genre, for example, covering more than mere money. Interesting proposals, but less validation of the proposals than one might hope. (Yes, I do write and validate software. Why do you ask?)
    • “the smartest kids in the world, and how they got that way”, amanda ripley [sic]. Classic case of generalizing from too little data taken over too short a time. But kudos to a book about education with a punctuation-free all-lowercase front cover, I suppose...
    • “Thinking, Fast and Slow”, Daniel Kahneman. Classic book, well worth reading, but it takes up a lot of space on a shelf.
    • “The Information, A Theory, A History, A Flood”, James Gleick. Ditto.
    • “The Human-Computer Interaction Handbook”, Julie A. Jacko and Andrew Sears. This is the textbook from the last university class I took back in 2004. I have kept almost all of my textbooks, but this one is quite large, is a collection of independent papers (most of which are not exactly timeless), and is way outside my field.
    • “The Two-Mile Time Machine”, Richard B. Alley. Account of the learnings from ice cores collected in Greenland, whose two-mile-thick ice sheets give the book its name.
    • “Dirt: The Erosion of Civilizations”, David R. Montgomery. If you didn't grow up in a farming community, read this book so you can learn that dirt does in fact matter a great deal.
    • “Advanced Topics in Broadband ATM Networks”, Ender Ayanoglu and Malathi Veeraraghanavan. Yes, Asynchronous Transfer Mode networks were going to take over the entirety of the computing world, and anyone who said otherwise just wasn't with it. (Ender looked too old to have been named after the protagonist of “Ender's Game” so your guess is as good as mine.)
    • “Recent Advances in the Algorithmic Analysis of Queues”, David M. Lucantoni. I had been hoping to apply this to my mid-90s analysis work, but no joy. On the other hand, if I remember correctly, this was the session in which an academic reproached me for understanding the material despite being from industry rather than academia, a situation that she felt was totally reprehensible and not to be tolerated. Philistine that I am, I still feel no shame. ;-)
    • “The Principia”, Isaac Newton. A great man, but there are more accessible sources of this information. Besides, the copy I have is not the original text, but rather an English translation.

  • Related to my recent change of employer:

    • “Roget's Thesaurus in Dictionary Form”, C.O. Sylvester Mawson. Duplicate, and largely obsoleted by the world wide web.
    • “Webster's New World Dictionary of the American Language (College Edition)”. Ditto. This one is only a year older than I am, in contrast to the thesaurus, which is more than 20 years older than I am.
    • “Guide to LaTeX, Fourth Edition”, Helmut Kopka and Patrick W. Daly. Ditto, though much younger.
    • “Pattern Languages of Program Design, Book 2”, Edited by John M. Vlissides, James O. Coplien, and Norman L. Kerth. Ditto.
    • “Pattern-Oriented Software Architecture Volume 2: Patterns for Concurrent and Networked Objects”, Douglas Schmidt, Michael Stal, Hans Rohnert, and Frank Buschmann. Ditto.
    • “Strengths Finder 2.0”, Tom Rath. Ditto.
    • Books on IBM: “IBM Redux”, Doug Garr; “Saving Big Blue”, Robert Slater; “Who's Afraid of Big Blue”, Regis McKenna; “After the Merger”, Max M. Habeck, Fritz Kroeger, and Michael R. Traem. Worth a read, but not of quite as much interest as they once were. But I am keeping Louis Gerstner's classic “Who Says Elephants Can't Dance?”

  • Self-help books, in some cases very loosely speaking:

    • “Getting to Yes: Negotiating Agreement Without Giving In”, by Roger Fisher and William Ury. A classic, but I somehow ended up with two of them, and both at home.
    • “How to Make People Think You're Normal”, Ben Goode.
    • “Geezerhood: What to expect from life now that you're as old as dirt”, Ben Goode.
    • “So You Think You Can ‘Geezer’: Instructions for becoming the old coot you have always dreamed of”, Ben Goode.
    • “The Challenger Customer”, Brent Adamson, Matthew Dixon, Pat Spenner, and Nick Toman. Good insights on how tough customers can help you get to the next level and how to work with them, but numerous alternative sources.
    • “The Innovator's Solution”, Clayton M. Christensen and Michael E. Raynor. Not bad, but keeping “The Innovator's Dilemma” instead.

  • Recent USA military writings:

    • “Back in Action”, Captain David Rozelle
    • “Imperial Grunts”, Robert D. Kaplan
    • “Shadow War”, Richard Miniter
    • “Imperial Hubris”, Anonymous
    • “American Heroes”, Oliver North

    A good set of widely ranging opinions, but I am keeping David Kilcullen's series. Kilcullen was actually there (as were Rozelle and, to some extent, Kaplan) and has much more experience and a broader perspective than the above five. Yes, Anonymous is unknown, but that book was published in 2004, whereas Kilcullen's series spans the Bush and Obama administrations. You get to decide whether Kilcullen's being Australian is a plus or a minus. Choose wisely! ;-)

  • Brain teasers:

    • “The Riddle of Scheherazade and Other Amazing Puzzles”, Raymond Smullyan
    • “Lateral Thinking: Creativity Step by Step”, Edward de Bono
    • “The Great IQ Challenge”, Philip J. Carter and Ken A. Russell

  • Social commentary:

    • “A Darwinian Left: Politics, Evolution, and Cooperation”, Peter Singer
    • “Rigged”, Ben Mezrich
    • “Braiding Sweetgrass: Indigenous Wisdom, Scientific Knowledge, and the Teaching of Plants”, Robin Wall Kimmerer
    • “Injustice”, J. Christian Adams
    • “The Intimidation Game”, Kimberley Strassel

Matthew Garrett: Wifi deauthentication attacks and home security

Friday 27th of December 2019 03:26:07 AM
I live in a large apartment complex (it's literally a city block big), so I spend a disproportionate amount of time walking down corridors. Recently one of my neighbours installed a Ring wireless doorbell. By default these are motion activated (and the process for disabling motion detection is far from obvious), and if the owner subscribes to an appropriate plan these recordings are stored in the cloud. I'm not super enthusiastic about the idea of having my conversations recorded while I'm walking past someone's door, so I decided to look into the security of these devices.

One visit to Amazon later and I had a refurbished Ring Video Doorbell 2™ sitting on my desk. Tearing it down revealed it uses a TI SoC that's optimised for this sort of application, linked to a DSP that presumably does stuff like motion detection. The device spends most of its time in a sleep state where it generates no network activity, so on any wakeup it has to reassociate with the wireless network and start streaming data.

So we have a device that's silent and undetectable until it starts recording you, which isn't a great place to start from. But fortunately wifi has a few, uh, interesting design choices that mean we can still do something. The first is that even on an encrypted network, the packet headers are unencrypted and contain the address of the access point and whichever device is communicating. This means that it's possible to just dump whatever traffic is floating past and build up a collection of device addresses. Address ranges are allocated by the IEEE, so it's possible to map the addresses you see to manufacturers and get some idea of what's actually on the network[1] even if you can't see what they're actually transmitting. The second is that various management frames aren't encrypted, and so can be faked even if you don't have the network credentials.
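
As an aside (this is my illustration, not code from the post), the OUI lookup step can be as simple as matching the first three octets of each observed address against the IEEE registry. A minimal Python sketch, with placeholder OUI values rather than real IEEE assignments:

    # Minimal sketch: map observed client MAC addresses to manufacturers via
    # their OUI (the first three octets). The registry entries below are
    # placeholders; a real tool would load the IEEE registry (oui.csv) instead.
    OUI_REGISTRY = {
        "AA:BB:CC": "Example Vendor A",   # hypothetical assignment
        "DD:EE:FF": "Example Vendor B",   # hypothetical assignment
    }

    def vendor_for(mac: str) -> str:
        """Return the registered vendor for a MAC address, if known."""
        return OUI_REGISTRY.get(mac.upper()[:8], "unknown")

    for mac in ("aa:bb:cc:12:34:56", "11:22:33:44:55:66"):
        print(mac, "->", vendor_for(mac))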

The most interesting one here is the deauthentication frame that access points can use to tell clients that they're no longer welcome. These can be sent for a variety of reasons, including resource exhaustion or authentication failure. And, by default, they're entirely unprotected. Anyone can inject such a frame into your network and cause clients to believe they're no longer authorised to use the network, at which point they'll have to go through a new authentication cycle - and while they're doing that, they're not able to send any other packets.
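
The flip side of unprotected deauthentication frames is that they are also trivial to observe. Purely as a defensive illustration (not part of the original post), here is a minimal scapy sketch that logs deauthentication frames so a deauth flood is at least visible; it assumes scapy is installed and that wlan0mon is an interface already placed in monitor mode:

    # Passive detection sketch: log 802.11 deauthentication frames.
    # Assumes scapy is installed and wlan0mon is already in monitor mode.
    from scapy.all import sniff
    from scapy.layers.dot11 import Dot11, Dot11Deauth

    def log_deauth(pkt):
        if pkt.haslayer(Dot11Deauth):
            dot11 = pkt[Dot11]
            # For management frames, addr1 is the receiver and addr2 the transmitter.
            print(f"deauth {dot11.addr2} -> {dot11.addr1} "
                  f"reason={pkt[Dot11Deauth].reason}")

    # Sniff indefinitely; a real monitor would also hop channels.
    sniff(iface="wlan0mon", prn=log_deauth, store=False)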

So, the attack is to simply monitor the network for any devices that fall into the address range you want to target, and then immediately start shooting deauthentication frames at them once you see one. I hacked airodump-ng to ignore all clients that didn't look like a Ring, and then pasted in code from aireplay-ng to send deauthentication packets once it saw one. The problem here is that wifi cards can only be tuned to one frequency at a time, so unless you know the channel your potential target is on, you need to keep jumping between frequencies while looking for a target - and that means a target can potentially shoot off a notification while you're looking at other frequencies.

But even with that proviso, this seems to work reasonably reliably. I can hit the button on my Ring, see it show up in my hacked up code and see my phone receive no push notification. Even if it does get a notification, the doorbell is no longer accessible by the time I respond.

There are a couple of ways to avoid this attack. The first is to use 802.11w, which protects management frames. A lot of hardware supports this, but it's generally disabled by default. The second is to just ignore deauthentication frames in the first place, which is a spec violation, but then again you're already building a device that exists to record strangers engaging in a range of legal activities, so paying attention to social norms is clearly not a priority in any case.

Finally, none of this is even slightly new. A presentation from Def Con in 2016 covered this, demonstrating that Nest cameras could be blocked in the same way. The industry doesn't seem to have learned from this.

[1] The Ring Video Doorbell 2 just uses addresses from TI's range rather than anything Ring specific, unfortunately

Paul E. Mc Kenney: The Old Man and His Smartphone, 2019 Holiday Season Episode

Wednesday 25th of December 2019 12:11:45 AM
I used my smartphone as a camera very early on, but the need to log in made it less than attractive for snapshots. Except that I saw some of my colleagues whip out their smartphones and immediately take photos. They kindly let me in on the secret: Double-clicking the power button puts the phone directly into camera mode. This resulted in a substantial uptick in my smartphone-as-camera usage. And the camera is astonishingly good by decade-old digital-camera standards, to say nothing of old-school 35mm film standards.

I also learned how to make the camera refrain from mirror-imaging selfies, but this proved hard to get right. The selfie looks wrong when immediately viewed if it is not mirror imaged! I eventually positioned myself to include some text in the selfie in order to reliably verify proper orientation.

Those who know me will be amused to hear that I printed a map the other day, just from force of habit. But in the event, I forgot to bring not only both the map and the smartphone, but also the presents that I was supposed to be transporting. In pleasant contrast to a memorable prior year, I remembered the presents before crossing the Columbia, which was (sort of) in time to return home to fetch them. I didn't bother with either the map or the smartphone, but reached my destination nevertheless. Cautionary tales notwithstanding, sometimes you just have to trust the old neural net's direction-finding capabilities. (Or at least that is what I keep telling myself!)

I also joined the non-exclusive group who uses a smartphone to photograph whiteboards prior to erasing them. I still have not succumbed to the food-photography habit, though. Taking a selfie with the non-selfie lens through a mirror is possible, but surprisingly challenging.

I have done a bit of ride-sharing, and the location-sharing features are quite helpful when meeting someone—no need to agree on a unique landmark, only to find the hard way that said landmark is not all that unique!

The smartphone is surprisingly useful for browsing the web while on the go, with any annoyances over the small format heavily outweighed by the ability to start and stop browsing very quickly. But I could not help but feel a pang of jealousy while watching a better equipped smartphone user type using swiping motions rather than a finger-at-a-time approach. Of course, I could not help but try it. Imagine my delight to learn that the swiping-motion approach was not some add-on extra, but instead standard! Swiping typing is not a replacement for a full-sized keyboard, but it is a huge improvement over finger-at-a-time typing, to say nothing of my old multi-press flip phone.

Recent foreign travel required careful prioritization and scheduling of my sole international power adapter among the three devices needing it. But my new USB-A-to-USB-C adapter allows me to charge my smartphone from my heavy-duty rcutorture-capable ThinkPad, albeit significantly more slowly than via the AC adapter, and even more slowly when the laptop is powered off, especially when I am actively using the smartphone. To my surprise, I can also charge my MacBook from my ThinkPad using this same adapter—but only when the MacBook is powered off. If the MacBook is running, all this does is extend the MacBook's battery life. Which admittedly might still be quite useful.

All in all, it looks like I can get by with just the one international AC adapter. This is a good thing, especially considering how bulky those things are!

My smartphone's notifications are still a bit annoying, though I have gotten it a bit better trained to bother me only when it is important. And yes, I have missed a few important notifications!!! When using my laptop, which also receives all these notifications, my de facto strategy has been to completely ignore the smartphone. Which more than once has had the unintended consequence of completely draining my smartphone's battery. The first time this happened was quite disconcerting because it appeared that I had bricked my new smartphone. Thankfully, a quick web search turned up the unintuitive trick of simultaneously depressing the volume-down and power buttons for ten seconds.

But if things go as they usually do, this two-button salute will soon become all too natural!

More in Tux Machines

Qt 5.15 Beta1 Released

I am happy to announce that Qt 5.15 has moved to the Beta phase, and we have released Qt 5.15 Beta1 today. As before, our plan is to publish new Beta N releases regularly until Qt 5.15 is ready for the RC. The current estimate for the Qt 5.15 RC is around the end of April; see the Qt 5.15 release wiki for details. Please take a tour now and test the Beta1 packages. As usual, you can get Qt 5.15 Beta1 via the Qt online installer (for new installations) or via the maintenance tool of an existing Qt online installation. Separate Beta1 source packages are also available in the Qt Account and on download.qt.io

Fedora’s gaggle of desktops

There are 38 different desktops or window managers in Fedora 31. You could try a different one every day for a month and still have some left over. Some have very few features. Some have so many features that they are called a desktop environment. This article can’t go into detail on each, but it’s interesting to see the whole list in one place. To be on this list, the desktop must show up on the desktop manager’s selection list. If a desktop has more than one entry in that list, it is counted as just one desktop. An example is “GNOME”, “GNOME Classic”, and “GNOME (Wayland)”: these all show up on the desktop manager list, but they are still just GNOME.
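
The selection list in question is built from session files on disk, so a rough count can be reproduced directly. A minimal Python sketch (the session directories are the usual ones on Fedora; the grouping of variants such as GNOME Classic is approximated here, not taken from the article):

    # Rough sketch: count installed desktop sessions the way a login screen
    # sees them, by listing .desktop files in the standard session directories.
    # The variant-grouping rule (GNOME, GNOME Classic, GNOME (Wayland) -> GNOME)
    # is approximated crudely and is an assumption, not the article's method.
    from pathlib import Path

    SESSION_DIRS = (Path("/usr/share/xsessions"), Path("/usr/share/wayland-sessions"))

    def base_name(stem: str) -> str:
        return stem.split("-")[0]        # "gnome-classic" -> "gnome"

    desktops = set()
    for directory in SESSION_DIRS:
        if directory.is_dir():
            desktops.update(base_name(f.stem) for f in directory.glob("*.desktop"))

    print(len(desktops), "distinct desktops/window managers:")
    for name in sorted(desktops):
        print(" -", name)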

Programming: 'DevOps', Caddyfile, GCC 8.4 RC and Forth

  • A beginner's guide to everything DevOps

    While there is no single definition, I consider DevOps to be a process framework that ensures collaboration between development and operations teams to deploy code to production environments faster in a repeatable and automated way. We will spend the rest of this article unpacking that statement. The word "DevOps" is an amalgamation of the words "development" and "operations." DevOps helps increase the speed of delivering applications and services. It allows organizations to serve their customers efficiently and become more competitive in the market. In simple terms, DevOps is an alignment between development and IT operations with better communication and collaboration. DevOps assumes a culture where collaboration among the development, operations, and business teams is considered a critical aspect of the journey. It's not solely about the tools, as DevOps in an organization creates continuous value for customers. Tools are one of its pillars, alongside people and processes. DevOps increases organizations' capability to deliver high-quality solutions at a swift pace. It automates all processes, from build to deployment, of an application or a product.

  • How to solve the DevOps vs. ITSM culture clash

    Since its advent, DevOps has been pitted against IT service management (ITSM) and its ITIL framework. Some say "ITIL is under siege," some ask you to choose sides, while others frame them as complementary. What is true is that both DevOps and ITSM have fans and detractors, and each method can influence software delivery and overall corporate culture.

  • JFrog Launches JFrog Multi-Cloud Universal DevOps Platform

    DevOps technology company JFrog has announced its new hybrid, multi-cloud, universal DevOps platform called the JFrog Platform that drives continuous software releases from any source to any destination. By delivering tools in an all-in-one solution, the JFrog Platform aims to empower organizations, developers and DevOps engineers to meet increased delivery requirements. For the uninitiated, JFrog is the creator of Artifactory, the heart of the Universal DevOps platform for automating, managing, securing, distributing, and monitoring all types of technologies.

  • New Caddyfile and more

    The new Caddyfile enables experimental HTTP3 support. I have also added a few redirects to my new domain: all requests with a www prefix get redirected to the version without it, and my old domain nullday.de now redirects to my new domain shibumi.dev. I also had to add connect-src 'self' to my CSP, because Google Lighthouse seems to have problems with default-src 'none'. If only default-src 'none' is set, Google Lighthouse can’t access your robots.txt. This appears to be an issue in the Google Lighthouse implementation; the Google Search Bot is not affected.

  • Content Addressed Vocabulary

    How can systems communicate and share meaning? Communication within systems is preceded by a form of meta-communication; we must have a sense that we mean the same things by the terms we use before we can even use them. This is challenging enough for humans, who must share meaning but can resolve ambiguities with context clues from a surrounding narrative. Machines, in general, need a context more explicitly laid out for them, with as little ambiguity as possible. Standards authors of open-world systems have long struggled with this and have come up with some reasonable approaches; unfortunately these also suffer from several pitfalls. With minimal (or sometimes no) adjustment to our tooling, I propose a change in how we manage ontologies.

  • GCC 8.4 Release Candidate available from gcc.gnu.org
    The first release candidate for GCC 8.4 is available from
    
     https://gcc.gnu.org/pub/gcc/snapshots/8.4.0-RC-20200226/
     ftp://gcc.gnu.org/pub/gcc/snapshots/8.4.0-RC-20200226/
    
    and shortly its mirrors.  It has been generated from git commit
    r8-10091-gf80c40f93f9e8781b14f1a8301467f117fd24051.
    
    I have so far bootstrapped and tested the release candidate on
    x86_64-linux and i686-linux.  Please test it and report any issues to
    bugzilla.
    
    If all goes well, I'd like to release 8.4 on Wednesday, March 4th.
    
  • GCC 8.4 RC Compiler Released For Testing

    GCC 8.4 will hopefully be released next week, but for now a release candidate is available for testing the latest bug fixes in the mature GCC 8 series. GCC 8.4 is potentially the last release of the GCC 8 series, while GCC 9.3 is also coming soon. GCC 8.4 represents all of the relevant bug fixes of the past year, back-ported for users still on GCC 8. GCC 10 (in the form of GCC 10.1), the next feature release, should meanwhile be out in the next month or two.

  • Excellent Free Tutorials to Learn Forth

    Forth is an imperative stack-based programming language, and a member of the class of extensible interactive languages. It was created by Charles Moore in 1970 to control telescopes in observatories using small computers. Because of its roots, Forth stresses efficiency, compactness, and flexible, efficient hardware/software interaction. Forth has a number of properties that set it apart from many other programming languages. In particular, Forth has no inherent keywords and is extensible. It is both a low-level and a high-level language. It has the interesting property of being able to compile itself into a new compiler, debug itself, and support experimentation in real time as the system is built. Forth is an extremely flexible language with high portability and compact source and object code, and it is easy to learn, program, and debug. It has an incremental compiler, an interpreter, and a very fast edit-compile-test cycle. Forth uses a stack to pass data between words, and it uses raw memory for more permanent storage. It also lets coders write their own control structures. Forth has often been deployed in embedded systems due to the compactness of its object code. Forth is also used in boot loaders such as Open Firmware (developed by Sun Microsystems), as well as in scientific fields such as astronomy, mathematics, oceanography, and electrical engineering.
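
    Since the defining trait above is passing data between words on a stack, a toy evaluator makes the model concrete. The following is a Python sketch of the stack discipline only, not real Forth (no user-defined words, compilation, or return stack):

        # Toy illustration (in Python) of Forth's stack discipline: numbers are
        # pushed, "words" pop their operands and push results. This sketches the
        # model only; it is not a real Forth system.
        def evaluate(source: str) -> list:
            stack = []
            words = {
                "+":    lambda: stack.append(stack.pop() + stack.pop()),
                "*":    lambda: stack.append(stack.pop() * stack.pop()),
                "dup":  lambda: stack.append(stack[-1]),
                "drop": lambda: stack.pop(),
                ".":    lambda: print(stack.pop()),
            }
            for token in source.split():
                if token in words:
                    words[token]()
                else:
                    stack.append(int(token))
            return stack

        evaluate("2 3 + 4 * .")   # prints 20, as the equivalent Forth line would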

Python Programming

  • Adding Metadata to PDFs

    For both Django Crash Course and the forthcoming Two Scoops of Django 3.x, we're using a new process to render the PDFs. Unfortunately, until just a few days ago that process didn't include the cover. Instead, covers were inserted manually using Adobe Acrobat. [...] The lesson I learned writing this little utility is that as useful as Google and Stack Overflow might be, sometimes you need to explore reference manuals. Which, if you ask me, is a lot of fun. :-) (A rough sketch of this kind of cover-and-metadata stamping appears after this list.)

  • A Week At A Time - Building SaaS #46

    In this episode, we worked on a weekly view for the Django app. We made navigation that lets users click from one week to the next, then fixed up the view to pull data from that particular week. The first thing I did was focus on the UI required to navigate to a new weekly view in the app. We mocked out the UI and talked briefly about the flexbox layout that is available to modern browsers. From the UI mock-up, I changed the view code to include a previous_week_date and next_week_date in the view context so we could change the links to show real dates. From there, we needed a destination URL. I created a new path in the URLconf that connected the weekly URL to the existing app view that shows the week data. After wiring things together, I was able to extract the week date from the URL and make the view pull from the specified day and show that in the UI. Finally, we chatted about the tricky offset calculation that needs to happen to pull the right course tasks, but I ended the stream at that stage because the logic changes for that problem are tedious and very specific to my particular app. (A minimal sketch of this navigation pattern appears after this list.)

  • Python 3.6.9 : Google give a new tool for python users.

    Today I discovered a real surprise gift from the Google team for programmers. I say this because not everyone can afford hardware resources.

  • Learn Python Dictionary Data Structure – Part 3

    In this Part 3 of the Python Data Structure series, we will discuss what a dictionary is, how it differs from other data structures in Python, how to create and delete dictionary objects, and the methods of dictionary objects.
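
    For readers who want the gist of the dictionary article without clicking through, the operations it lists look roughly like this (generic examples, not the article's own listings):

        # Generic refresher on the dictionary operations mentioned above;
        # these are illustrative examples, not the article's exact listings.
        kernels = {"stable": "5.5", "longterm": "4.19"}     # create with a literal
        kernels["mainline"] = "5.6-rc3"                     # add or update a key
        del kernels["longterm"]                             # delete a single key
        release = kernels.get("stable", "unknown")          # lookup with a default

        print(list(kernels))      # ['stable', 'mainline']
        print(kernels.items())    # dict_items([('stable', '5.5'), ('mainline', '5.6-rc3')])
        kernels.clear()           # empty the dictionary; "del kernels" removes the object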
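
    Returning to the “Adding Metadata to PDFs” item above: the post does not say which library its utility uses, so purely as an illustration, here is a minimal sketch of prepending a cover and stamping metadata with the pypdf library (the library choice and file names are assumptions):

        # Sketch: prepend a cover and stamp document metadata onto a rendered PDF.
        # pypdf and the file names are assumptions, not details from the post.
        from pypdf import PdfReader, PdfWriter

        writer = PdfWriter()
        for source in ("cover.pdf", "book-body.pdf"):       # hypothetical inputs
            for page in PdfReader(source).pages:
                writer.add_page(page)

        # Document-information keys follow the PDF "/Name" convention.
        writer.add_metadata({"/Title": "Example Title", "/Author": "Example Author"})

        with open("book-final.pdf", "wb") as out:
            writer.write(out)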
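
    And for the weekly-view episode above, the navigation described maps onto a familiar Django pattern: a dated URL, a view that computes the neighbouring weeks, and links built from those dates. A minimal sketch with stand-in names (none of these are the project's actual identifiers):

        # Minimal sketch of week navigation in a Django view. All names here
        # (weekly_view, "courses/week.html", the URL pattern) are stand-ins.
        import datetime

        from django.shortcuts import render
        from django.urls import path

        def weekly_view(request, year, month, day):
            week_start = datetime.date(year, month, day)
            context = {
                "week_start": week_start,
                "previous_week_date": week_start - datetime.timedelta(days=7),
                "next_week_date": week_start + datetime.timedelta(days=7),
                # ...plus whatever tasks belong to this particular week.
            }
            return render(request, "courses/week.html", context)

        urlpatterns = [
            # e.g. /schedule/2020/2/24/ selects the week starting on that date.
            path("schedule/<int:year>/<int:month>/<int:day>/", weekly_view, name="week"),
        ]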