Kernel Planet - http://planet.kernel.org

Kees Cook: security things in Linux v5.2

Thursday 18th of July 2019 12:07:36 AM

Previously: v5.1.

Linux kernel v5.2 was released last week! Here are some security-related things I found interesting:

page allocator freelist randomization
While the SLUB and SLAB allocator freelists have been randomized for a while now, the overarching page allocator itself wasn’t. This meant that anything doing allocation outside of the kmem_cache/kmalloc() would have deterministic placement in memory. This is bad both for security and for some cache management cases. Dan Williams implemented this randomization under CONFIG_SHUFFLE_PAGE_ALLOCATOR now, which provides additional uncertainty to memory layouts, though at a rather low granularity of 4MB (see SHUFFLE_ORDER). Also note that this feature needs to be enabled at boot time with page_alloc.shuffle=1 unless you have direct-mapped memory-side-cache (you can check the state at /sys/module/page_alloc/parameters/shuffle).
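
To make the mechanism concrete, here is a minimal user-space C sketch of the kind of Fisher-Yates shuffle such freelist randomization performs. It is a conceptual model only, not the kernel's mm/shuffle.c implementation; the array-of-page-frame-numbers representation and the use of rand() are simplifications assumed for brevity.

    #include <stdio.h>
    #include <stdint.h>
    #include <stdlib.h>

    /* Conceptual model of a free area: an array of page frame numbers. */
    struct free_area {
        uint64_t *pfns;
        size_t nr;
    };

    /*
     * Fisher-Yates shuffle over the free list.  After shuffling, the order
     * in which the allocator hands out pages can no longer be predicted
     * from the boot-time memory layout.
     */
    static void shuffle_free_area(struct free_area *area)
    {
        if (area->nr < 2)
            return;
        for (size_t i = area->nr - 1; i > 0; i--) {
            size_t j = (size_t)rand() % (i + 1);  /* the kernel uses a stronger RNG */
            uint64_t tmp = area->pfns[i];
            area->pfns[i] = area->pfns[j];
            area->pfns[j] = tmp;
        }
    }

    int main(void)
    {
        uint64_t pfns[8] = { 0, 1, 2, 3, 4, 5, 6, 7 };
        struct free_area area = { pfns, 8 };

        srand(42);                       /* deterministic here, random in practice */
        shuffle_free_area(&area);
        for (size_t i = 0; i < area.nr; i++)
            printf("%llu ", (unsigned long long)area.pfns[i]);
        printf("\n");
        return 0;
    }

(In the kernel, the shuffling happens at SHUFFLE_ORDER granularity, i.e. 4MB chunks, rather than on individual pages.)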

stack variable initialization with Clang
Alexander Potapenko added support via CONFIG_INIT_STACK_ALL for Clang’s -ftrivial-auto-var-init=pattern option that enables automatic initialization of stack variables. This provides even greater coverage than the prior GCC plugin for stack variable initialization, as Clang’s implementation also covers variables not passed by reference. (In theory, the kernel build should still warn about these instances, but even if they exist, Clang will initialize them.) Another notable difference between the GCC plugins and Clang’s implementation is that Clang initializes with a repeating 0xAA byte pattern, rather than zero. (Though this changes under certain situations, like for 32-bit pointers which are initialized with 0x000000AA.) As with the GCC plugin, the benefit is that the entire class of uninitialized stack variable flaws goes away.
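
As an illustration of the bug class this addresses, the following stand-alone C program (a hypothetical example, not kernel code) leaks stale stack bytes when built normally; built with -ftrivial-auto-var-init=pattern, the same bytes come back as the predictable 0xAA fill instead.

    #include <stdio.h>

    struct reply {
        int  status;
        char note[12];          /* never written on the "happy path" below */
    };

    /* Hypothetical helper that forgets to initialize part of the struct. */
    static void fill_reply(struct reply *r, int ok)
    {
        r->status = ok;
        /* missing: memset(r->note, 0, sizeof(r->note)); */
    }

    int main(void)
    {
        struct reply r;         /* uninitialized stack variable */

        fill_reply(&r, 1);

        /* Copying the whole struct somewhere (user space, the network, a log)
         * would disclose the stale stack bytes on a normal build. */
        for (size_t i = 0; i < sizeof(r); i++)
            printf("%02x ", ((unsigned char *)&r)[i]);
        printf("\n");
        return 0;
    }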

Kernel Userspace Access Prevention on powerpc
Michael Ellerman and Russell Currey have landed support for Kernel Userspace Access Prevention (KUAP) on Power9 and later PPC CPUs under CONFIG_PPC_RADIX_MMU=y (which is the default). Like SMAP on x86 and PAN on ARM, this disallows kernel access to userspace memory unless it is explicitly marked. It is the continuation of the execute protection (KUEP) that landed in v4.10. Now if an attacker tries to trick the kernel into any kind of unexpected access from userspace (not just executing code), the kernel will fault.
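
To make the explicit markings concrete, here is a simplified, illustrative kernel-style sketch (not a real driver, and it only builds in a kernel-tree context) of the pattern KUAP enforces: a raw dereference of a user pointer now faults, while copy_from_user() brackets the access with the required user-access window.

    #include <linux/uaccess.h>
    #include <linux/errno.h>

    struct req {
        int cmd;
        int arg;
    };

    /* Illustrative handler for a request structure passed in from user space. */
    static long handle_request(const void __user *uptr)
    {
        struct req r;

        /*
         * Wrong under KUAP: a direct dereference such as
         *     r = *(const struct req *)uptr;
         * faults, because user memory is no longer accessible to the kernel
         * without an explicit "allow user access" marking.
         */

        /* Right: copy_from_user() brackets the access with those markings. */
        if (copy_from_user(&r, uptr, sizeof(r)))
            return -EFAULT;

        return r.cmd;           /* operate on the kernel-side copy */
    }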

Microarchitectural Data Sampling mitigations on x86
Another set of cache memory side-channel attacks came to light, and were consolidated together under the name Microarchitectural Data Sampling (MDS). MDS is weaker than other cache side-channels (less control over target address), but memory contents can still be exposed. Much like L1TF, when one’s threat model includes untrusted code running under Symmetric Multi Threading (SMT: more logical cores than physical cores), the only full mitigation is to disable hyperthreading (boot with “nosmt“). For all the other variations of the MDS family, Andi Kleen (and others) implemented various flushing mechanisms to avoid cache leakage.

unprivileged userfaultfd sysctl knob
Both FUSE and userfaultfd provide attackers with a way to stall a kernel thread in the middle of memory accesses from userspace by initiating an access on an unmapped page. While FUSE is usually behind some kind of access controls, userfaultfd hadn’t been. To avoid things like Use-After-Free heap grooming, Peter Xu added the new “vm.unprivileged_userfaultfd” sysctl knob to disallow unprivileged access to the userfaultfd syscall.
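
A quick way to observe the knob from user space is to invoke the syscall directly, as in the sketch below (plain C using the raw syscall number, since glibc offers no wrapper on many distributions); with vm.unprivileged_userfaultfd set to 0, an unprivileged caller should see the call fail with EPERM.

    #include <stdio.h>
    #include <errno.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void)
    {
        /* Ask the kernel for a userfaultfd file descriptor. */
        long fd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);

        if (fd < 0) {
            /* With vm.unprivileged_userfaultfd = 0 and without the relevant
             * capability, this is expected to fail with EPERM. */
            fprintf(stderr, "userfaultfd: %s\n", strerror(errno));
            return 1;
        }

        printf("userfaultfd available (fd %ld)\n", fd);
        close((int)fd);
        return 0;
    }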

temporary mm for text poking on x86
The kernel regularly performs self-modification with things like text_poke() (during stuff like alternatives, ftrace, etc). Before, this was done with fixed mappings (“fixmap”) where a specific fixed address at the high end of memory was used to map physical pages as needed. However, this resulted in some temporal risks: other CPUs could write to the fixmap, or there might be stale TLB entries on removal that other CPUs might still be able to write through to change the target contents. Instead, Nadav Amit has created a separate memory map for kernel text writes, as if the kernel is trying to make writes to userspace. This mapping ends up staying local to the current CPU, and the poking address is randomized, unlike the old fixmap.
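
The flow is roughly as sketched below. This is conceptual pseudocode only: the helper functions are hypothetical stand-ins for the temporary-mm switch and the CPU-local writable alias described above, not the kernel's actual text_poke() internals.

    #include <stddef.h>
    #include <string.h>

    /* Hypothetical helpers standing in for the real mechanism. */
    struct temp_mm_state;
    extern struct temp_mm_state *switch_to_temporary_mm(void);
    extern void restore_mm(struct temp_mm_state *prev);
    extern void *map_writable_alias(void *text_addr);   /* randomized, CPU-local */
    extern void unmap_writable_alias(void *alias);

    /* Write 'len' bytes of new instructions at kernel text address 'addr'. */
    static void poke_text(void *addr, const void *insn, size_t len)
    {
        /* 1. Switch this CPU to a private, temporary address space so the
         *    writable alias is invisible to every other CPU. */
        struct temp_mm_state *prev = switch_to_temporary_mm();

        /* 2. Map the target page writable at an address that only exists
         *    inside the temporary mm. */
        void *writable = map_writable_alias(addr);

        /* 3. Perform the write through the private alias. */
        memcpy(writable, insn, len);

        /* 4. Tear down the alias and return to the original mm, leaving no
         *    stale, globally visible writable mapping behind. */
        unmap_writable_alias(writable);
        restore_mm(prev);
    }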

ongoing: implicit fall-through removal
Gustavo A. R. Silva is nearly done with marking (and fixing) all the implicit fall-through cases in the kernel. Based on the pull request from Gustavo, it looks very much like v5.3 will see -Wimplicit-fallthrough added to the global build flags and then this class of bug should stay extinct in the kernel.
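
For readers unfamiliar with the warning, -Wimplicit-fallthrough flags any case that flows into the next one without an explicit annotation. The stand-alone C example below (not kernel code) shows the comment-style marking that has been added throughout the kernel; the unmarked fall-through in case 'c' is exactly what the flag catches.

    #include <stdio.h>

    /* Build with: gcc -Wimplicit-fallthrough -c fallthrough.c  (warns about case 'c') */
    static int classify(int c)
    {
        int score = 0;

        switch (c) {
        case 'a':
            score += 1;
            /* fall through */
        case 'b':
            score += 10;
            break;
        case 'c':
            score += 100;   /* no annotation here, so the compiler warns */
        case 'd':
            score += 1000;
            break;
        default:
            break;
        }
        return score;
    }

    int main(void)
    {
        printf("%d %d %d\n", classify('a'), classify('c'), classify('x'));
        return 0;
    }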

That’s it for now; let me know if you think I should add anything here. We’re almost to -rc1 for v5.3!

© 2019, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

Linux Plumbers Conference: System Boot and Security Microconference Accepted into 2019 Linux Plumbers Conference

Wednesday 17th of July 2019 04:57:44 PM

We are pleased to announce that the System Boot and Security Microconference has been accepted into the 2019 Linux Plumbers Conference! Computer-system security is a topic that has received a lot of serious attention over the years, but nowhere near as much attention has been paid to the system firmware, even though firmware is also a target for those looking to wreak havoc on our systems. Firmware is now being developed with security in mind, but the solutions it provides are still incomplete. This microconference will focus on the security of the system, especially from the moment it is powered on.

Expected topics for this year include:

Come and join us in the discussion of keeping your system secure even at boot up.

We hope to see you there!

Linux Plumbers Conference: Power Management and Thermal Control Microconference Accepted into 2019 Linux Plumbers Conference

Wednesday 10th of July 2019 10:29:04 PM

We are pleased to announce that the Power Management and Thermal Control Microconference has been accepted into the 2019 Linux Plumbers Conference! Power management and thermal control are important areas in the Linux ecosystem, not least for reducing the environmental impact of computing. In recent years, computer systems have become more complex and more thermally challenged, while the energy-efficiency expectations placed on them have kept growing. This trend is likely to continue for the foreseeable future, despite the progress made in the power-management and thermal-control problem space since last year's Linux Plumbers Conference. That progress includes, but is not limited to, the merging of the energy-aware scheduling patch series and CPU idle-time management improvements; there is more work to do in those areas. This gathering will focus on how Linux can continue to meet the power-management and thermal-control challenge.

Topics for this year include:

  • CPU idle-time management improvements
  • Device power management based on platform firmware
  • DVFS in Linux
  • Energy-aware and thermal-aware scheduling
  • Consumer-producer workloads, power distribution
  • Thermal-control methods
  • Thermal-control frameworks

Come and join us in the discussion of how to extend the battery life of your laptop while keeping it cool.

We hope to see you there!

Linux Plumbers Conference: Android Microconference Accepted into 2019 Linux Plumbers Conference

Tuesday 9th of July 2019 11:29:19 PM

We are pleased to announce that the Android Microconference has been accepted into the 2019 Linux Plumbers Conference! Android has a long history at Linux Plumbers and has continually made progress as a direct result of these meetings. This year’s focus will be a fairly ambitious goal to create a Generic Kernel Image (GKI) (or one kernel to rule them all!). Having a GKI will allow silicon vendors to be independent of the Linux kernel running on the device. As such, kernels could be easily upgraded without requiring any rework of the initial hardware porting efforts. This microconference will also address areas that have been discussed in the past.

The proposed topics include:

Come and join us in the discussion of improving what is arguably the most popular operating system in the world!

We hope to see you there!

Matthew Garrett: Bug bounties and NDAs are an option, not the standard

Tuesday 9th of July 2019 09:15:21 PM
Zoom had a vulnerability that allowed users on MacOS to be connected to a video conference with their webcam active simply by visiting an appropriately crafted page. Zoom's response has largely been to argue that:

a) There's a setting you can toggle to disable the webcam being on by default, so this isn't a big deal,
b) When Safari added a security feature requiring that users explicitly agree to launch Zoom, this created a poor user experience and so they were justified in working around this (and so introducing the vulnerability), and,
c) The submitter asked whether Zoom would pay them for disclosing the bug, and when Zoom said they'd only do so if the submitter signed an NDA, they declined.

(a) and (b) are clearly ludicrous arguments, but (c) is the interesting one. Zoom go on to mention that they disagreed with the severity of the issue, and in the end decided not to change how their software worked. If the submitter had agreed to the terms of the NDA, then Zoom's decision that this was a low severity issue would have led to them being given a small amount of money and never being allowed to talk about the vulnerability. Since Zoom apparently have no intention of fixing it, we'd presumably never have heard about it. Users would have been less informed, and the world would have been a less secure place.

The point of bug bounties is to provide people with an additional incentive to disclose security issues to companies. But what incentive are they offering? Well, that depends on who you are. For many people, the amount of money offered by bug bounty programs is meaningful, and agreeing to sign an NDA is worth it. For others, the ability to publicly talk about the issue is worth more than whatever the bounty may award - being able to give a presentation on the vulnerability at a high profile conference may be enough to get you a significantly better paying job. Others may be unwilling to sign an NDA on principle, refusing to trust that the company will ever disclose the issue or fix the vulnerability. And finally there are people who can't sign such an NDA - they may have discovered the issue on work time, and employer policies may prohibit them doing so.

Zoom are correct that it's not unusual for bug bounty programs to require NDAs. But when they talk about this being an industry standard, they come awfully close to suggesting that the submitter did something unusual or unreasonable in rejecting their bounty terms. When someone lets you know about a vulnerability, they're giving you an opportunity to have the issue fixed before the public knows about it. They've done something they didn't need to do - they could have just publicly disclosed it immediately, causing significant damage to your reputation and potentially putting your customers at risk. They could potentially have sold the information to a third party. But they didn't - they came to you first. If you want to offer them money in order to encourage them (and others) to do the same in future, then that's great. If you want to tie strings to that money, that's a choice you can make - but there's no reason for them to agree to those strings, and if they choose not to then you don't get to complain about that afterwards. And if they make it clear at the time of submission that they intend to publicly disclose the issue after 90 days, then they're acting in accordance with widely accepted norms. If you're not able to fix an issue within 90 days, that's very much your problem.

If your bug bounty requires people sign an NDA, you should think about why. If it's so you can control disclosure and delay things beyond 90 days (and potentially never disclose at all), look at whether the amount of money you're offering for that is anywhere near commensurate with the value the submitter could otherwise gain from the information and compare that to the reputational damage you'll take from people deciding that it's not worth it and just disclosing unilaterally. And, seriously, never ask for an NDA before you're committing to a specific $ amount - it's never reasonable to ask that someone sign away their rights without knowing exactly what they're getting in return.

tl;dr - a bug bounty should only be one component of your vulnerability reporting process. You need to be prepared for people to decline any restrictions you wish to place on them, and you need to be prepared for them to disclose on the date they initially proposed. If they give you 90 days, that's entirely within industry norms. Remember that a bargain is being struck here - you offering money isn't being generous, it's you attempting to provide an incentive for people to help you improve your security. If you're asking people to give up more than you're offering in return, don't be surprised if they say no.

Linux Plumbers Conference: Update on LPC 2019 registration waiting list

Tuesday 9th of July 2019 01:36:52 AM

Here is an update regarding the registration situation for LPC2019.

The considerable interest in participation this year meant that the conference sold out earlier than ever before.

Instead of a small release of late-registration spots, the LPC planning committee has decided to run a waiting list, which will be used as the exclusive method for additional registrations. The planning committee will reach out to individuals on the waiting list and invite them to register at the regular rate of $550 as spots become available.

With the majority of the Call for Proposals (CfP) still open, it is not yet possible to release passes. The planning committee and microconference leads are working together to allocate the passes earmarked for microconferences, and the Networking Summit and Kernel Summit speakers are also yet to be confirmed.

The planning committee understands that many of those who added themselves to the waiting list wish to find out soon whether they will be issued a pass. We anticipate the first passes to be released on July 22nd at the earliest.

Please follow us on social media, or here on this blog for further updates.

Linux Plumbers Conference: VFIO/IOMMU/PCI Microconference Accepted into 2019 Linux Plumbers Conference

Monday 8th of July 2019 05:56:09 PM

We are pleased to announce that the VFIO/IOMMU/PCI Microconference has been accepted into the 2019 Linux Plumbers Conference!

The PCI interconnect specification and the devices implementing it are incorporating more and more features aimed at high performance systems. This requires the kernel to coordinate the PCI devices, the IOMMUs they are connected to and the VFIO layer used to manage them (for user space access and device pass-through) so that users (and virtual machines) can use them effectively. The kernel interfaces to control PCI devices have to be designed in-sync for all three subsystems, which implies that there are lots of intersections in the design of kernel control paths for VFIO/IOMMU/PCI requiring kernel code design discussions involving the three subsystems at once.

A successful VFIO/IOMMU/PCI Microconference was held at Linux Plumbers in 2017, where:

  • The ground work for paravirtualized IOMMU (virtio-iommu) was defined
  • The kernel policy for surprise removal was debated and a solution was brought forward
  • Advanced Error Reporting (AER) user space interface was proposed and agreed upon
  • Configuration Request/Retry Status (CRS) handling and kernel implementation were defined
  • Shared Virtual Address (SVA) solutions from x86 and ARM were highlighted and the microconference discussions laid the path towards a unified solution

This year, the microconference will follow up on the previous microconference agendas and focus on ongoing patches review/design aimed at VFIO/IOMMU/PCI subsystems.

Topics for this year include:

  • VFIO
  • IOMMU
    • DMA-API layer interactions and how to move towards generic dma-ops for IOMMU drivers
  • PCI
    • Resources claiming/assignment consolidation
    • Peer-to-Peer
    • PCI error management
    • prefetchable vs non-prefetchable BAR address mappings (cacheability)
    • Kernel NoSnoop TLP attribute handling
    • CCIX and accelerators management

Come and join us in the discussion on helping Linux keep up with the new features being added to the PCI interconnect specification.

We hope to see you there!

Matthew Garrett: Creating hardware where no hardware exists

Sunday 7th of July 2019 08:15:09 PM
The laptop industry was still in its infancy back in 1990, but it already faced a core problem that we still face today - power and thermal management are hard, but also critical to a good user experience (and potentially to the lifespan of the hardware). These were the days when DOS and Windows had no memory protection, so handling these problems at the OS level would have been an invitation for someone to overwrite your management code and potentially kill your laptop. The safe option was pushing all of this out to an external management controller of some sort, but vendors in the 90s were the same as vendors now and would do basically anything to avoid having to drop an extra chip on the board. Thankfully(?), Intel had a solution.

The 386SL was released in October 1990 as a low-powered mobile-optimised version of the 386. Critically, it included a feature that let vendors ensure that their power management code could run without OS interference. A small window of RAM was hidden behind the VGA memory[1] and the CPU configured so that various events would cause the CPU to stop executing the OS and jump to this protected region. It could then do whatever power or thermal management tasks were necessary and return control to the OS, which would be none the wiser. Intel called this System Management Mode, and we've never really recovered.

Step forward to the late 90s. USB is now a thing, but even the operating systems that support USB usually don't in their installers (and plenty of operating systems still didn't have USB drivers). The industry needed a transition path, and System Management Mode was there for them. By configuring the chipset to generate a System Management Interrupt (or SMI) whenever the OS tried to access the PS/2 keyboard controller, the CPU could then trap into some SMM code that knew how to talk to USB, figure out what was going on with the USB keyboard, fake up the results and pass them back to the OS. As far as the OS was concerned, it was talking to a normal keyboard controller - but in reality, the "hardware" it was talking to was entirely implemented in software on the CPU.

Since then we've seen even more stuff get crammed into SMM, which is annoying because in general it's much harder for an OS to do interesting things with hardware if the CPU occasionally stops in order to run invisible code to touch hardware resources you were planning on using, and that's even ignoring the fact that operating systems in general don't really appreciate the entire world stopping and then restarting some time later without any notification. So, overall, SMM is a pain for OS vendors.

Change of topic. When Apple moved to x86 CPUs in the mid 2000s, they faced a problem. Their hardware was basically now just a PC, and that meant people were going to try to run their OS on random PC hardware. For various reasons this was unappealing, and so Apple took advantage of the one significant difference between their platforms and generic PCs. x86 Macs have a component called the System Management Controller that (ironically) seems to do a bunch of the stuff that the 386SL was designed to do on the CPU. It runs the fans, it reports hardware information, it controls the keyboard backlight, it does all kinds of things. So Apple embedded a string in the SMC, and the OS tries to read it on boot. If it fails, so does boot[2]. Qemu has a driver that emulates enough of the SMC that you can provide that string on the command line and boot OS X in qemu, something that's documented further here.

What does this have to do with SMM? It turns out that you can configure x86 chipsets to trap into SMM on arbitrary IO port ranges, and older Macs had SMCs in IO port space[3]. After some fighting with Intel documentation[4] I had Coreboot's SMI handler responding to writes to an arbitrary IO port range. With some more fighting I was able to fake up responses to reads as well. And then I took qemu's SMC emulation driver and merged it into Coreboot's SMM code. Now, accesses to the IO port range that the SMC occupies on real hardware generate SMIs, trap into SMM on the CPU, run the emulation code, handle writes, fake up responses to reads and return control to the OS. From the OS's perspective, this is entirely invisible[5]. We've created hardware where none existed.
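
In rough outline, the emulation path looks like the sketch below. This is purely illustrative C, not coreboot's SMI handler API: the dispatcher hook, port numbers and register model are all hypothetical, and the real SMC protocol is considerably more involved.

    /*
     * Conceptual sketch of emulating a device from SMM.  Names and port
     * numbers are hypothetical; this is not coreboot code.
     */
    #define SMC_PORT_BASE 0x300u    /* illustrative IO port range */
    #define SMC_PORT_LEN  0x20u

    static unsigned char smc_regs[SMC_PORT_LEN];   /* emulated device state */

    /*
     * Called by a (hypothetical) SMI dispatcher whenever the chipset traps
     * an IN or OUT instruction targeting the SMC's port range.
     */
    static void smc_io_trap(unsigned int port, int is_write, unsigned char *data)
    {
        unsigned int reg = port - SMC_PORT_BASE;

        if (reg >= SMC_PORT_LEN)
            return;                      /* not our device, ignore */

        if (is_write)
            smc_regs[reg] = *data;       /* remember what the OS wrote */
        else
            *data = smc_regs[reg];       /* fake up the response to a read */

        /*
         * On resume from SMM the OS sees the IN/OUT complete as if real
         * hardware had answered; it never observes that code ran here.
         */
    }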

The tree where I'm working on this is here, and I'll see if it's possible to clean this up in a reasonable way to get it merged into mainline Coreboot. Note that this only handles the SMC - actually booting OS X involves a lot more, but that's something for another time.

[1] If the OS attempts to access this range, the chipset directs it to the video card instead of to actual RAM.
[2] It's actually more complicated than that - see here for more.
[3] IO port space is a weird x86 feature where there's an entire separate IO bus that isn't part of the memory map and which requires different instructions to access. It's low performance but also extremely simple, so hardware that has no performance requirements is often implemented using it.
[4] Some current Intel hardware has two sets of registers defined for setting up which IO ports should trap into SMM. I can't find anything that documents what the relationship between them is, but if you program the obvious ones nothing happens and if you program the ones that are hidden in the section about LPC decoding ranges things suddenly start working.
[5] Eh technically a sufficiently enthusiastic OS could notice that the time it took for the access to occur didn't match what it should on real hardware, or could look at the CPU's count of the number of SMIs that have occurred and correlate that with accesses, but good enough

Linux Plumbers Conference: Scheduler Microconference Accepted into 2019 Linux Plumbers Conference

Sunday 7th of July 2019 02:51:00 PM

We are pleased to announce that the Scheduler Microconference has been accepted into the 2019 Linux Plumbers Conference! The scheduler determines what runs on the CPU at any given time; the lag of your desktop, for example, is affected by the scheduler. There are a few different scheduling classes for a user to choose from, such as the default class (SCHED_OTHER) or the real-time classes (SCHED_FIFO, SCHED_RR and SCHED_DEADLINE). The deadline scheduler is the newest and allows the user to control the amount of bandwidth received by a task or group of tasks. With cloud computing becoming popular these days, controlling the bandwidth of containers or virtual machines is becoming more important. The Real-Time patch is also destined to become mainline, which will add more strain on the scheduling of tasks to make sure that real-time tasks make their deadlines, and this requires verification techniques to ensure the scheduler is properly designed. (This microconference will focus on the non-real-time aspects of the scheduler; please defer real-time topics to the Real-time Microconference.)

Topics for this year include:

Come and join us in the discussion of controlling what tasks get to run on your machine and when.

We hope to see you there!

Linux Plumbers Conference: RDMA Microconference Accepted into 2019 Linux Plumbers Conference

Wednesday 3rd of July 2019 10:54:07 PM

We are pleased to announce that the RDMA Microconference has been accepted into the 2019 Linux Plumbers Conference! RDMA has been a microconference at Plumbers for the last three years and will be continuing its productive work for a fourth year. The RDMA meetings at the previous Plumbers conferences have been critical in getting improvements to the RDMA subsystem merged into mainline. These include a new user API, container support, testability/syzkaller, system bootup, Soft iWarp, and more. There are still difficult open issues that need to be resolved, and this year’s Plumbers RDMA Microconference is sure to come up with answers to these tough problems.

Topics for this year include:

  • RDMA and PCI peer to peer for GPU and NVMe applications, including HMM and DMABUF topics
  • RDMA and DAX (carry over from LSF/MM)
  • Final pieces to complete the container work
  • Contiguous system memory allocations for userspace (unresolved from 2017)
  • Shared protection domains and memory registrations
  • NVMe offload
  • Integration of HMM and ODP

And new developing areas of interest:

  • Multi-vendor virtualized ‘virtio’ RDMA
  • Non-standard driver features and their impact on the design of the subsystem
  • Encrypted RDMA traffic
  • Rework and simplification of the driver API

Come and join us in the discussion of improving Linux’s ability to access direct memory across high-speed networks.

We hope to see you there!

Linux Plumbers Conference: Announcing the LPC 2019 registration waiting list

Tuesday 2nd of July 2019 07:22:02 PM

The current pool of registrations for the 2019 Linux Plumbers Conference has sold out.

Those not yet registered who wish to attend should fill out the form here to get on the waiting list.

As registration spots open up, the Plumbers organizing committee will allocate them to those on the waiting list, with priority given to those who will be participating in microconferences and BoFs.

Linux Plumbers Conference: Preliminary schedule for LPC 2019 has been published

Tuesday 2nd of July 2019 04:48:14 PM

The LPC committee is pleased to announce the preliminary schedule for the 2019 Linux Plumbers Conference.

The vast majority of the LPC refereed track talks have been accepted and are listed there. The same is true for microconferences. While there are a few talks and microconferences to be announced, you will find the current overview LPC schedule here. The LPC refereed track talks can be seen here.

The call for proposals (CfP) is still open for the Kernel Summit, Networking Summit, BOFs and topics for accepted microconferences.

As new microconferences, talks, and BOFs are accepted, they will be published to the schedule.

Matthew Garrett: Which smart bulbs should you buy (from a security perspective)

Sunday 30th of June 2019 08:10:15 PM
People keep asking me which smart bulbs they should buy. It's a great question! As someone who has, for some reason, ended up spending a bunch of time reverse engineering various types of lightbulb, I'm probably a reasonable person to ask. So. There are four primary communications mechanisms for bulbs: wifi, bluetooth, zigbee and zwave. There's basically zero compelling reasons to care about zwave, so I'm not going to.

Wifi
Advantages: Doesn't need an additional hub - you can just put the bulbs wherever. The bulbs can connect out to a cloud service, so you can control them even if you're not on the same network.
Disadvantages: Only works if you have wifi coverage, each bulb has to have wifi hardware and be configured appropriately.
Which should you get: If you search Amazon for "wifi bulb" you'll get a whole bunch of cheap bulbs. Don't buy any of them. They're mostly based on a custom protocol from Zengge and they're shit. Colour reproduction is bad, there's no good way to use the colour LEDs and the white LEDs simultaneously, and if you use any of the vendor apps they'll proxy your device control through a remote server with terrible authentication mechanisms. Just don't. The ones that aren't Zengge are generally based on the Tuya platform, whose security model is to have keys embedded in some incredibly obfuscated code and hope that nobody can find them. TP-Link make some reasonably competent bulbs but also use a weird custom protocol with hand-rolled security. Eufy are fine but again there's weird custom security. Lifx are the best bulbs, but have zero security on the local network - anyone on your wifi can control the bulbs. If that's something you care about then they're a bad choice, but also if that's something you care about maybe just don't let people you don't trust use your wifi.
Conclusion: If you have to use wifi, go with lifx. Their security is not meaningfully worse than anything else on the market (and they're better than many), and they're better bulbs. But you probably shouldn't go with wifi.

Bluetooth
Advantages: Doesn't need an additional hub. Doesn't need wifi coverage. Doesn't connect to the internet, so remote attack is unlikely.
Disadvantages: Only one control device at a time can connect to a bulb, so harder to share. Control device needs to be in Bluetooth range of the bulb. Doesn't connect to the internet, so you can't control your bulbs remotely.
Which should you get: Again, most Bluetooth bulbs you'll find on Amazon are shit. There's a whole bunch of weird custom protocols and the quality of the bulbs is just bad. If you're going to go with anything, go with the C by GE bulbs. Their protocol is still some AES-encrypted custom binary thing, but they use a Bluetooth controller from Telink that supports a mesh network protocol. This means that you can talk to any bulb in your network and still send commands to other bulbs - the dual advantages here are that you can communicate with bulbs that are outside the range of your control device and also that you can have as many control devices as you have bulbs. If you've bought into the Google Home ecosystem, you can associate them directly with a Home and use Google Assistant to control them remotely. GE also sell a wifi bridge - I have one, but haven't had time to review it yet, so I can make no assertions about its competence. The colour bulbs are also disappointing, with much dimmer colour output than white output.

Zigbee
Advantages: Zigbee is a mesh protocol, so bulbs can forward messages to each other. The bulbs are also pretty cheap. Zigbee is a standard, so you can obtain bulbs from several vendors that will then interoperate - unfortunately there are actually two separate standards for Zigbee bulbs, and you'll sometimes find yourself with incompatibility issues there.
Disadvantages: Your phone doesn't have a Zigbee radio, so you can't communicate with the bulbs directly. You'll need a hub of some sort to bridge between IP and Zigbee. The ecosystem is kind of a mess, and you may have weird incompatibilities.
Which should you get: Pretty much every vendor that produces Zigbee bulbs also produces a hub for them. Don't get the Sengled hub - anyone on the local network can perform arbitrary unauthenticated command execution on it. I've previously recommended the Ikea Tradfri, which at the time only had local control. They've since added remote control support, and I haven't investigated that in detail. But overall, I'd go with the Philips Hue. Their colour bulbs are simply the best on the market, and their security story seems solid - performing a factory reset on the hub generates a new keypair, and adding local control users requires a physical button press on the hub to allow pairing. Using the Philips hub doesn't tie you into only using Philips bulbs, but right now the Philips bulbs tend to be as cheap (or cheaper) than anything else.

But what about
If you're into tying together all kinds of home automation stuff, then either go with Smartthings or roll your own with Home Assistant. Both are definitely more effort if you only want lighting.

My priority is software freedom
Excellent! There are various bulbs that can run the Espurna or AiLight firmwares, but you'll have to deal with flashing them yourself. You can tie that into Home Assistant and have a completely free stack. If you're ok with your bulbs being proprietary, Home Assistant can speak to most types of bulb without an additional hub (you'll need a supported Zigbee USB stick to control Zigbee bulbs), and will support the C by GE ones as soon as I figure out why my Bluetooth transmissions stop working every so often.

Conclusion
Outside niche cases, just buy a Hue. Philips have done a genuinely good job. Don't buy cheap wifi bulbs. Don't buy a Sengled hub.

(Disclaimer: I mentioned a Google product above. I am a Google employee, but do not work on anything related to Home.)

Linux Plumbers Conference: Databases Microconference Accepted into 2019 Linux Plumbers Conference

Wednesday 26th of June 2019 03:14:55 PM

We are pleased to announce that the Databases Microconference has been accepted into the 2019 Linux Plumbers Conference! Linux plumbing is heavily important to those who implement databases and their users who expect fast and durable data handling.

Durability is a promise never to lose data after advising a user of a successful update, even in the face of power loss. It requires a full-stack solution from the application to the database, then to Linux (filesystem, VFS, block interface, driver), and on to the hardware.

Fast means getting a database user a response in less than tens of milliseconds, which requires that Linux filesystems, memory and CPU management, and the networking stack do everything with the utmost effectiveness and efficiency.

For all Linux users, there is a benefit in having database developers interact with system developers; it will ensure that the promises of durability and speed are both kept as newer hardware technologies emerge, as existing CPU/RAM resources grow, and as the data stored grows even faster.

Topics for this Microconference include:

  • The architecture of individual databases
  • The kernel interfaces utilized, particularly those critical to performance and integrity
  • What is a general performance profile of databases with respect to kernel interfaces
  • Database difficulties with the kernel
  • What kernel interfaces are particularly useful
  • What kernel interfaces would have been nice to use
  • Workarounds caused by the lack of proper system calls
  • The direction of database development and what interfaces to newer hardware, like NVDIMM and atomic write storage, would be desirable

Come and join us in the discussion about making databases run smoother and faster.

We hope to see you there!

James Morris: Linux Security Summit North America 2019: Schedule Published

Tuesday 25th of June 2019 08:43:30 PM

The schedule for the 2019 Linux Security Summit North America (LSS-NA) is published.

This year, there are some changes to the format of LSS-NA. The summit runs for three days instead of two, which allows us to relax the schedule somewhat while also adding new session types.  In addition to refereed talks, short topics, BoF sessions, and subsystem updates, there are now also tutorials (one each day), unconference sessions, and lightning talks.

The tutorial sessions are:

These tutorials will be 90 minutes in length, and they’ll run in parallel with unconference sessions on the first two days (when the space is available at the venue).

The refereed presentations and short topics cover a range of Linux security topics including platform boot security, integrity, container security, kernel self protection, fuzzing, and eBPF+LSM.

Some of the talks I’m personally excited about include:

The schedule last year was pretty crammed, so with the addition of the third day we’ve been able to avoid starting early, and we’ve also added five minute transitions between talks. We’re hoping to maximize collaboration via the more relaxed schedule and the addition of more types of sessions (unconference, tutorials, lightning talks).  This is not a conference for simply consuming talks, but to also participate and to get things done (or started).

Thank you to all who submitted proposals.  As usual, we had many more submissions than can be accommodated in the available time.

Also thanks to the program committee, who spent considerable time reviewing and discussing proposals, and working out the details of the schedule. The committee for 2019 is:

  • James Morris (Microsoft)
  • Serge Hallyn (Cisco)
  • Paul Moore (Cisco)
  • Stephen Smalley (NSA)
  • Elena Reshetova (Intel)
  • John Johansen (Canonical)
  • Kees Cook (Google)
  • Casey Schaufler (Intel)
  • Mimi Zohar (IBM)
  • David A. Wheeler (Institute for Defense Analyses)

And of course many thanks to the event folk at Linux Foundation, who handle all of the logistics of the event.

LSS-NA will be held in San Diego, CA on August 19-21. To register, click here. Or you can register for the co-located Open Source Summit and add LSS-NA.

Linux Plumbers Conference: Real-Time Microconference Accepted into 2019 Linux Plumbers Conference

Wednesday 19th of June 2019 10:55:46 PM

We are pleased to announce that the Real-Time Microconference has been accepted into the 2019 Linux Plumbers Conference! The PREEMPT_RT patch set (aka “The Real-Time Patch”) was created in 2004 in an effort to make Linux into a hard real-time operating system. Over the years, much of the RT patch has made it into mainline Linux, including mutexes, lockdep, high-resolution timers, Ftrace, RCU_PREEMPT, priority inheritance, threaded interrupts and much more. There’s just a little left to get RT fully into mainline, and the light at the end of the tunnel is finally in view. It is expected that the RT patch will be in mainline within a year, which changes the topics of discussion: once it is in Linus’s tree, a whole new set of issues must be handled. The focus of this year’s Plumbers event will include:

Come and join us in the discussion of making the LWN prediction of RT coming into mainline “this year” a reality!

We hope to see you there!

Linux Plumbers Conference: Testing and Fuzzing Microconference Accepted into 2019 Linux Plumbers Conference

Wednesday 19th of June 2019 02:29:13 AM

We are pleased to announce that the Testing and Fuzzing Microconference has been accepted into the 2019 Linux Plumbers Conference! Testing and fuzzing are crucial to the stability that the Linux kernel demands.

Last year’s microconference brought about a number of discussions; for example, syzkaller evolved into syzbot, which keeps track of fuzzing efforts and the resulting fixes. The closing ceremony pointed out all the work that still has to be done: there are a number of overlapping efforts, and those need to be consolidated. The use of KASAN should be increased. Where is fuzzing going next? With real-time moving from “if” to “when” in mainline, how does RT test coverage increase? The unit-testing frameworks may need some unification. Also, KernelCI will be announced as an LF project this time around. Stay around for the KernelCI hackathon after the conference to help further those efforts.

Come and join us for the discussion!

We hope to see you there!

Linux Plumbers Conference: Toolchains Microconference Accepted into 2019 Linux Plumbers Conference

Monday 17th of June 2019 06:10:02 PM

We are pleased to announce that the Toolchains Microconference has been accepted into the 2019 Linux Plumbers Conference! The Linux kernel may be one of the most powerful systems around, but it takes a powerful toolchain to make that happen. The kernel takes advantage of any feature that the toolchains provide, and collaboration between kernel and toolchain developers will make that much more seamless.

Toolchains topics will include:

  • Header harmonization between kernel and glibc
  • Wrapping syscalls in glibc
  • eBPF support in toolchains
  • Potential impact/benefit/detriment of recently developed GCC optimizations on the kernel
  • Kernel hot-patching and GCC
  • Online debugging information: CTF and BTF
  • Development and parity between GCC and LLVM

Come and join us in the discussion of what makes it possible to build the most robust and flexible kernel in the world!

We hope to see you there!

Greg Kroah-Hartman: Linux stable tree mirror at github

Saturday 15th of June 2019 08:10:07 PM

As everyone seems to like to put kernel trees up on github for random projects (based on the crazy notifications I get all the time), I figured it was time to put up a semi-official mirror of all of the stable kernel releases on github.com

It can be found at: https://github.com/gregkh/linux and I will try to keep it up to date with the real source of all kernel stable releases at https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/

It differs from Linus’s tree at: https://github.com/torvalds/linux in that it contains all of the different stable tree branches and stable releases and tags, which many devices end up building on top of.

So, mirror away!

Also note, this is a read-only mirror, any pull requests created on it will be gleefully ignored, just like happens on Linus’s github mirror.

If people think this is needed on any other git hosting site, just let me know and I will be glad to push to other places as well.

This notification was also cross-posted on the new http://people.kernel.org/ site, go follow that for more kernel developer stuff.

Linux Plumbers Conference: Open Printing Microconference Accepted into 2019 Linux Plumbers Conference

Friday 14th of June 2019 03:18:38 PM

We are pleased to announce that the Open Printing Microconference has been accepted into the 2019 Linux Plumbers Conference! In today’s world much is done online, but getting a hardcopy is still very much needed, and so is the reverse: having a hardcopy and wanting to scan it to make it digital. All of this has to work on Linux to keep Linux-based and open source operating systems relevant. With the progress in technology, using modern printers and scanners has become simple: the driverless concept has made printing and scanning easier, getting the job done with a few clicks and without requiring the user to install any driver software. The Open Printing organization has been tasked with getting this job done. This microconference will focus on what needs to be accomplished to keep Linux and open source operating systems a leader in today’s market.

Topics for this Microconference include:

Come and join us in the discussion of keeping your printers working.

We hope to see you there!
