Kernel Planet - http://planet.kernel.org

Matthew Garrett: It's time to talk about post-RMS Free Software

Saturday 14th of September 2019 11:57:34 AM
Richard Stallman has once again managed to demonstrate incredible insensitivity[1]. There's an argument that in a pure technical universe this is irrelevant and we should instead only consider what he does in free software[2], but free software isn't a purely technical topic - the GNU Manifesto is nakedly political, and while free software may result in better technical outcomes it is fundamentally focused on individual freedom and will compromise on technical excellence if otherwise the result would be any compromise on those freedoms. And in a political movement, there is no way that we can ignore the behaviour and beliefs of that movement's leader. Stallman is driving away our natural allies. It's inappropriate for him to continue as the figurehead for free software.

But I'm not calling for Stallman to be replaced. If the history of social movements has taught us anything, it's that tying a movement to a single individual is a recipe for disaster. The FSF needs a president, but there's no need for that person to be a leader - instead, we need to foster an environment where any member of the community can feel empowered to speak up about the importance of free software. A decentralised movement about returning freedoms to individuals can't also be about elevating a single individual to near-magical status. Heroes will always end up letting us down. We fix that by removing the need for heroes in the first place, not attempting to find increasingly perfect heroes.

Stallman was never going to save us. We need to take responsibility for saving ourselves. Let's talk about how we do that.

[1] There will doubtless be people who will leap to his defense with the assertion that he's neurodivergent and all of these cases are consequences of that.

(A) I am unaware of a formal diagnosis of that, and I am unqualified to make one myself. I suspect that basically everyone making that argument is similarly unqualified.
(B) I've spent a lot of time working with him to help him understand why various positions he holds are harmful. I've reached the conclusion that it's not that he's unable to understand, he's just unwilling to change his mind.

[2] This argument is, obviously, bullshit


Davidlohr Bueso: Linux v5.2: Performance Goodies

Tuesday 10th of September 2019 07:26:59 PM
locking/rwsem: optimize trylocking for the uncontended case
This applies the idea that in most cases an rwsem will be uncontended (single threaded); for example, experimentation showed that page fault paths really expect this. The change itself makes the code avoid re-reading a cacheline in a tight loop over and over. Note, however, that this can be a double-edged sword, as microbenchmarks have shown performance deterioration with high task counts, albeit mainly in pathological workloads. [Commit ddb20d1d3aed a338ecb07a33]

lib/lockref: limit number of cmpxchg loop retries
Unbounded loops are rather frowned upon, especially ones doing CAS operations. As such, Linus suggested adding an arbitrary upper bound to the loop to force the slowpath (spinlock fallback), which was seen to improve performance on an ad-hoc test case on hardware that ends up playing the loop-retry game. [Commit 893a7d32e8e0]
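
The shape of the change is easy to sketch outside the kernel. Below is a purely illustrative C version (not the kernel's actual lockref code, whose structure and retry bound differ): a CAS fastpath that gives up after a fixed number of attempts and lets the caller fall back to the spinlock slowpath.

#include <stdatomic.h>
#include <stdbool.h>

#define CMPXCHG_MAX_RETRIES 100   /* arbitrary upper bound, the point of the change */

struct lockref_sketch {
    _Atomic unsigned long count;  /* stand-in for the real lockref count */
};

/* Try to bump the count locklessly; stop after a bounded number of failed
 * CAS attempts so the caller can fall back to taking the spinlock. */
static bool lockref_get_fast(struct lockref_sketch *lr)
{
    unsigned long old = atomic_load(&lr->count);

    for (int retry = 0; retry < CMPXCHG_MAX_RETRIES; retry++) {
        if (atomic_compare_exchange_weak(&lr->count, &old, old + 1))
            return true;          /* fastpath won */
        /* the failed CAS refreshed 'old'; spin again, but only so many times */
    }
    return false;                 /* give up: slowpath (spinlock) time */
}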
 
rcu: avoid unnecessary softirqs when system is idle
On an idle system with no pending callbacks, RCU softirqs to process callbacks were being triggered repeatedly. Specifically, the mismatch between cpu_no_qs and core_needs_qs was addressed. [Commit 671a63517cf9]

rcu: fix potential cond_resched() slowdowns
When using the jiffies_till_sched_qs kernel boot parameter, a bug left jiffies_to_sched_qs uninitialized as zero, which negatively impacts cond_resched(). [Commit 6973032a602e]

mm: improve vmap allocation
Doing a vmalloc can be quite slow at times, and since it is done with preemption disabled it can affect workloads that are sensitive to this. The problem lies in the fact that a new VA area is found by iterating over the busy list until a suitable hole between two busy areas turns up. The changes introduce the ever-reliable red-black tree to keep blocks sorted by their offsets, along with a list keeping the free space in order of increasing addresses. [Commit 68ad4a330433 68571be99f32]
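
A rough sketch of the data-structure shape being described, in kernel style (struct and field names here are illustrative, not the actual vmap code):

#include <linux/rbtree.h>
#include <linux/list.h>

/* Each free block sits in an rbtree keyed by its start address and, in
 * parallel, on a list kept in increasing address order, so a suitable
 * hole can be found without walking every busy area. */
struct free_block_sketch {
	unsigned long    va_start;   /* start of the free range     */
	unsigned long    va_end;     /* end of the free range       */
	struct rb_node   rb_node;    /* sorted by va_start          */
	struct list_head list;       /* neighbours in address order */
};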


mm/gup: safe usage of get_user_pages_fast() with DAX
Users of get_user_pages_fast() gain potential performance benefits over its non-fast cousin by avoiding mmap_sem. However, drivers such as RDMA can pin these pages for a significant amount of time, which raises a number of issues with the filesystem, as referenced pages block a number of critical operations and are known to mess up DAX. A new FOLL_LONGTERM flag is added and checked accordingly, which also means that other users such as XDP can now be converted to gup_fast. [Commit 932f4a630a69 b798bec4741b 73b0140bf0fe 7af75561e171 9fdf4aa15673 664b21e717cf f3b4fdb18cb5]

lib/sort: faster and smaller
Because CONFIG_RETPOLINE has made indirect calls much more expensive, these changes reduce the number made by the library sort functions, lib/sort and lib/list_sort. A number of optimizations and clever tricks are used, such as a more efficient bottom-up heapsort and playing nicer with store buffers. [Commit 37d0ec34d111 22a241ccb2c1 8fb583c4258d 043b3f7b6388 b5c56e0cdd62]

ipc/mqueue: make msg priorities truly O(1)
By keeping a pointer to the tree's rightmost node, consuming a message can be done in constant time instead of logarithmic. [Commit a5091fda4e3c]
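
The trick is easy to illustrate in kernel style (a sketch only, assuming <linux/rbtree.h>; this is not the actual ipc/mqueue code):

#include <linux/rbtree.h>

/* Cache the rightmost (highest-priority) node so the receive path no longer
 * needs a tree walk; the cache is maintained at insert/erase time, which
 * already touches the tree anyway. */
struct msg_tree_sketch {
	struct rb_root  root;        /* messages keyed by priority   */
	struct rb_node *rightmost;   /* cached highest-priority node */
};

/* Before: O(log n) on every receive. */
static struct rb_node *peek_highest_slow(struct msg_tree_sketch *t)
{
	return rb_last(&t->root);
}

/* After: O(1). */
static struct rb_node *peek_highest_fast(struct msg_tree_sketch *t)
{
	return t->rightmost;
}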

x86/fpu: load FPU registers on return to userland
This is a large, 27-patch cleanup and optimization to load FPU registers only on return to userspace, instead of on every context switch. This means that tasks that remain in kernel space do not load the registers. Accessing the FPU registers in the kernel requires disabling preemption and bottom halves, for the scheduler and softirqs respectively. [Commit 2722146eb784 ... a5eff7259790]

x86/hyper-v: implement EOI optimization
Avoid a vmexit on EOI. This was seen to slightly improve IOPS when testing NVMe disks with RAID and ext4. [Commit ba696429d290]

btrfs: improve performance on fsync of files with multiple hardlinks
A fix for a performance regression seen in pgbench, where fsync could be turned into a full transaction commit in order to avoid losing hard links and new ancestors of the fsynced inode. [Commit b8aa330d2acb]

fsnotify: fix unlink performance regression
This restores an unlink performance optimization that avoids take_dentry_name_snapshot(). [Commit 4d8e7055a405]

block/bfq: do not merge queues on flash storage with queueing
Disable queue merging on non-rotational devices with internal queueing, thus boosting throughput on interleaved IO. [Commit 8cacc5ab3eac]

James Bottomley: The Mythical Economic Model of Open Source

Sunday 8th of September 2019 09:35:47 AM

It has become fashionable today to study open source through the lens of economic benefits to developers and sometimes draw rather alarming conclusions. It has also become fashionable to assume a business model tie and then berate the open source community, or their licences, for lack of leadership when the business model fails. The purpose of this article is to explain, in the first part, the fallacy of assuming any economic tie in open source at all and, in the second part, go on to explain how economics in open source is situational and give an overview of some of the more successful models.

Open Source is a Creative Intellectual Endeavour

All the creative endeavours of humanity, like art, science or even writing code, are often viewed as activities that produce societal benefit. Logically, therefore, the people who engage in them are seen as benefactors of society, but assuming people engage in these endeavours purely to benefit society is mostly wrong. People engage in creative endeavours because it satisfies some deep need within themselves to exercise creativity and solve problems often with little regard to the societal benefit. The other problem is that the more directed and regimented a creative endeavour is, the less productive its output becomes. Essentially to be truly creative, the individual has to be free to pursue their own ideas. The conundrum for society therefore is how do you harness this creativity for societal good if you can’t direct it without stifling the very creativity you want to harness? Obviously society has evolved many models that answer this (universities, benefactors, art incubation programmes, museums, galleries and the like) with particular inducements like funding, collaboration, infrastructure and so on.

Why Open Source development is better than Proprietary

Simply put, the Open Source model, involving huge freedoms to developers to decide direction and great opportunities for collaboration stimulates the intellectual creativity of those developers to a far greater extent than when you have a regimented project plan and a specific task within it. The most creatively deadening job for any engineer is to find themselves strictly bound within the confines of a project plan for everything. This, by the way, is why simply allowing a percentage of paid time for participating in Open Source seems to enhance input to proprietary projects: the liberated creativity has a knock on effect even in regimented development. However, obviously, the goal for any Corporation dependent on code development should be to go beyond the knock on effect and actually employ open source methodologies everywhere high creativity is needed.

What is Open Source?

Open Source has its origin in code-sharing models, permissive from BSD and reciprocal from GNU. However, one of its great values is that the reasons people do open source today aren’t the same reasons the framework was created in the first place. Today Open Source is a framework which stimulates creativity among developers and helps them create communities, provides economic benefits to corporations (provided they understand how to harness them) and produces a great societal good in general in terms of published reusable code.

Economics and Open Source

As I said earlier, the framework of Open Source has no tie to economics, in the same way things like artistic endeavour don’t. It is possible for a great artist to make money (as Picasso did), but it’s equally possible for a great artist to live all their lives in penury (as van Gogh did). The demonstration of the analogy is that trying to measure the greatness of the art by the income of the artist is completely wrong and shortsighted. Developing the ability to exploit your art for commercial gain is an additional skill an artist can develop (or not, as they choose); it’s also an ability they could fail at, and in all cases it bears no relation to the societal good their art produces. In precisely the same way, finding an economic model that allows you to exploit open source (either individually or commercially) is firstly a matter of choice (if you have other reasons for doing Open Source, there’s no need to bother) and secondly not a guarantee of success because not all models succeed. Perhaps the easiest way to appreciate this is through the lens of personal history.

Why I got into Open Source

As a physics PhD student, I’d always been interested in how operating systems functioned, but thanks to the BSD lawsuit and being in the UK I had no access to the actual source code. When Linux came along as a distribution in 1992, it was a revelation: not only could I read the source code but I could have a fully functional UNIX like system at home instead of having to queue for time to write up my thesis in TeX on the limited number of department terminals.

After completing my PhD I was offered a job looking after computer systems in the department and my first success was shaving a factor of ten off the computing budget by buying cheap pentium systems running Linux instead of proprietary UNIX workstations. This success was nearly derailed by an NFS bug in Linux but finding and fixing the bug (and getting it upstream into the 1.0.2 kernel) cemented the budget savings and proved to the department that we could handle this new technology for a fraction of the cost of the old. It also confirmed my desire to poke around in the Operating System which I continued to do, even as I moved to America to work on Proprietary software.

In 2000 I got my first Open Source break when the product I’d been working on got sold to a silicon valley startup, SteelEye, whose business plan was to bring High Availability to Linux. As the only person on the team with an Open Source track record, I became first the Architect and later CTO of the company, with my first job being to make the somewhat eccentric Linux SCSI subsystem work for the shared SCSI clusters LifeKeeper then used. Getting SCSI working led to fun interactions with the Linux community, an invitation to present on fixing SCSI at the 2002 Kernel Summit, and the SCSI maintainership in 2003. From that point, working on upstream open source became a fixture of my job requirements, and as I progressed through Novell, Parallels and now IBM it also became a quality sought by employers.

I have definitely made some money consulting on Open Source, but it’s been dwarfed by my salary which does get a boost from my being an Open Source developer with an external track record.

The Primary Contributor Economic Models

Looking at the active contributors to Open Source, the primary model is that either your job description includes working on designated open source projects, so you’re paid to contribute as your day job, or you were hired because of what you’ve already done in open source and contributing more is a tolerated use of your employer’s time. A third, and far smaller, group is people who work full-time on Open Source but fund themselves either by shared contributions like Patreon or Tidelift or by actively consulting on their projects. However, these models cover existing contributors and they’re not really a route to becoming a contributor, because employers like certainty: they’re unlikely to hire someone with no track record to work on open source, and are probably not going to tolerate use of their time for developing random open source projects. This means that the route to becoming a contributor, like the route to becoming an artist, is to begin in your own time.

Users versus Developers

Open Source, by its nature, is built by developers for developers. This means that although the primary consumers of open source are end users, they get pretty much no say in how the project evolves. This lack of user involvement has been lamented over the years, especially in projects like the Linux Desktop, but no real community solution has ever been found. The bottom line is that users often don’t know what they want and even if they do they can’t put it in technical terms, meaning that all user driven product development involves extensive and expensive product research which is far beyond any open source project. However, this type of product research is well within the ability of most corporations, who can also afford to hire developers to provide input and influence into Open Source projects.

Business Model One: Reflecting the Needs of Users

In many ways, this has become the primary business model of open source. The theory is simple: develop a traditional customer-focussed business strategy and execute it by connecting the gathered opinions of customers to the open source project in exchange for revenue from subscription, support or even early shipped product. The business value to the end user is simple: it’s the business value of the product tuned to their needs, given that they wouldn’t be prepared to develop the skills to interact with the open source developer community themselves. This business model starts to break down if the end users acquire developer sophistication, as happens with Red Hat and Enterprise users. However, this can still be combatted by making sure it’s economically unfeasible for a single end user to match the breadth of the offering (the entire distribution). In this case, the ability of the end user to become involved in individual open source projects which matter to them is actually a better and cheaper way of doing product research and feeds back into the synergy of this business model.

This business model entirely breaks down when, as in the case of the cloud service provider, the end user becomes big enough and technically sophisticated enough to run their own distributions and sees doing this as a necessary adjunct to their service business. This means that you can no longer escape the technical sophistication of the end user by pursuing a breadth-of-offerings strategy.

Business Model Two: Drive Innovation and Standardization

Although venture capitalists (VCs) pay lip service to the idea of constant innovation, this isn’t actually what they do as a business model: they tend to take an innovation and then monetize it. The problem is this model doesn’t work for open source: retaining control of an open source project requires a constant stream of innovation within the source tree itself. Single innovations get attention but unless they’re followed up with another innovation, they tend to give the impression your source tree is stagnating, encouraging forks. However, the most useful property of open source is that by sharing a project and encouraging contributions, you can obtain a constant stream of innovation from a well managed community. Once you have a constant stream of innovation to show, forking the project becomes much harder, even for a cloud service provider with hundreds of developers, because they must show they can match the innovation stream in the public tree. Add to that standardization, which in open source simply means getting your project adopted for use by multiple consumers (say two different clouds, or a range of industries). Further, if the project is largely run by a single entity and properly managed, seeing the incoming innovations allows you to recruit the best innovators, thus giving you direct ownership of most of the innovation stream. In the early days, you make money simply by offering user connection services as in Business Model One, but the ultimate goal is likely acquisition for the talent possessed, which is a standard VC exit strategy.

All of this points to the hypothesis that the current VC model is wrong. Instead of investing in people with the ideas, you should be investing in people who can attract and lead others with ideas.

Other Business Models

Although the models listed above have proven successful over time, they’re by no means the only possible ones. As the space of potential business models gets explored, it could turn out they’re not even the best ones, meaning the potential innovation a savvy business executive might bring to open source is newer and better business models.

Conclusions

Business models are optional extras with open source, and just because you have a successful open source project does not mean you’ll have an equally successful business model unless you put sufficient thought into constructing and maintaining it. Thus a successful open source start-up requires three elements: a sound business model (or someone who can evolve one), a solid community leader and manager, and someone with technical ability in the problem space.

If you like working in Open Source as a contributor, you don’t necessarily have to have a business model at all and you can often simply rely on recognition leading to opportunities that provide sufficient remuneration.

Although there are several well known business models for exploiting open source, there’s no reason you can’t create your own different one but remember: a successful open source project in no way guarantees a successful business model.

Linux Plumbers Conference: LPC waiting list closed; just a few days until the conference

Wednesday 4th of September 2019 09:31:53 PM

The waiting list for this year’s Linux Plumbers Conference is now closed. All of the spots available have been allocated, so anyone who is not registered at this point will have to wait for next year. There will be no on-site registration. We regret that we could not accommodate everyone. The good news is that all of the microconferences, refereed talks, Kernel summit track, and Networking track will be recorded on video and made available as soon as possible after the conference. Anyone who could not make it to Lisbon this year will at least be able to catch up with what went on. Hopefully those who wanted to come will make it to a future LPC.

For those who are attending, we are just a few days away; you should have received an email with more details. Beyond that, the detailed schedule is available. There are also some tips on using the metro to get to the venue. As always, please send any questions or comments to “contact@linuxplumbersconf.org”.

Pete Zaitcev: Docker Block Storage... say what again?

Friday 30th of August 2019 05:52:04 PM

Found an odd job posting at the website of Rancher:

What you will be doing

  • Design and implement a block storage solution for Docker containers
  • Working on development of various aspects of the storage stack: consistency, reliability, replication and performance
  • Using Go for product development

Okay. Since they talk about consistency and replication together, this thing probably provides actual service, in addition to the necessary orchestration. Kind of like the ill-fated Sheepdog. They may under-estimate the amount of work necessary, sure. Look no further than Ceph RBD. Remember how much work it took for a genius like Sage? But a certain arrogance is essential in a start-up, and Rancher only employs 150 people.

Also, nobody is dumb enough to write orchestration in Go, right? So this probably is not just a layer on top of Ceph or whatever.

Well, it's still possible that it's merely an in-house equivalent of OpenStack Cinder, and they want it in Go because they are a Go house and if you have a hammer everything looks like a nail.

Either way, here's the main question: what does block storage have to do with Docker?

Docker, as far as I know, is a container runtime. And containers do not consume block storage. They plug into a Linux kernel that presents POSIX to them, only namespaced. Granted, certain applications consume block storage through Linux, that is why we have O_DIRECT. But to roll out a whole service like this just for those rare applications... I don't think so.

Why would anyone want block storage for (Docker) containers? Isn't it absurd? What am I missing and what is Rancher up to?

UPDATE:

The key to remember here is that while running containers aren't using block storage, Docker containers are distributed as disk images, and they get their own root filesystem by default. Therefore, any time anyone adds a Docker container, they have to allocate a block device and dump the application image into it. So, yes, it is some kind of Docker Cinder they are trying to build.

See Red Hat docs about managing Docker block storage in Atomic Host (h/t penguin42).

Pete Zaitcev: POST, PUT, and CRUD

Thursday 15th of August 2019 07:10:08 PM

Anyone who ever worked with object storage knows that PUT creates, GET reads, POST updates, and DELETE deletes. Naturally, right? POST is such a strange verb with oddball encodings that it's perfect to update, while GET and PUT are matching twins like read(2) and write(2). Imagine my surprise, then, when I found that the official definition of RESTful makes POST create objects and PUT update them. There is even a FAQ, which uses sophistry and appeals to the authority of RFCs in order to justify this.

So, in the world of RESTful solipsism, you would upload an object foo into a bucket buk by issuing "POST /buk?obj=foo" [1], while "PUT /buk/foo" applies to pre-existing resources. Although, they had to admit that RFC-2616 assumes that PUT creates.
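
To make the contrast concrete, here is a hypothetical exchange for bucket buk and object foo (illustrative requests only, not any particular product's API):

# Object-storage convention: the client names the resource, PUT creates it
PUT /buk/foo HTTP/1.1

# "Official" RESTful convention: POST to the collection, the server picks the name
POST /buk HTTP/1.1
...
HTTP/1.1 201 Created
Location: /buk/generated-id-42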

All this goes to show, too much dogma is not good for you.

[1] It's worse, actually. They want you to do "POST /buk", and receive a resource ID, generated by the server, and use that ID to refer to the resource.

Greg Kroah-Hartman: Patch workflow with mutt - 2019

Wednesday 14th of August 2019 12:37:35 PM

Given that the main development workflow for most kernel maintainers is with email, I spend a lot of time in my email client. For the past few decades I have used mutt, but every once in a while I look around to see if there is anything else out there that might work better.

One project that looks promising is aerc, which was started by Drew DeVault. It is a terminal-based email client written in Go, and relies on a lot of other Go libraries to handle a lot of the “grungy” work in dealing with imap clients, email parsing, and other fun things when it comes to the free-flow text parsing that emails require.

aerc isn’t in a usable state for me just yet, but Drew asked if I could document exactly how I use an email client for my day-to-day workflow to see what needs to be done to aerc to have me consider switching.

Note, this isn’t a criticism of mutt at all. I love the tool, and spend more time using that userspace program than any other. But as anyone who knows email clients will tell you, they all suck; it’s just that mutt sucks less than everything else (that’s literally their motto).

I did a basic overview of how I apply patches to the stable kernel trees quite a few years ago, but my workflow has evolved over time, so instead of just writing a private email to Drew, I figured it was time to post something showing others just how the sausage really is made.

Anyway, my email workflow can be divided up into 3 different primary things that I do:

  • basic email reading, management, and sorting
  • reviewing new development patches and applying them to a source repository.
  • reviewing potential stable kernel patches and applying them to a source repository.

Given that all stable kernel patches need to already be in Linus’s kernel tree first, the workflow of the how to work with the stable tree is much different from the new patch workflow.

Basic email reading

All of my email ends up in one of two “inboxes” on my local machine. One is for everything that is sent directly to me (either with To: or Cc:), as well as a number of mailing lists for which I make sure to read every message because I am a maintainer of those subsystems (like USB, or stable). The second inbox consists of other mailing lists that I do not read all messages of, but review as needed, and can be referenced when I need to look something up. Those mailing lists are the “big” linux-kernel mailing list, to ensure I have a local copy to search from when I am offline (due to traveling), as well as other “minor” development mailing lists that I like to keep a copy of locally, like linux-pci, linux-fsdevel, and a few other smaller vger lists.

I get these maildir folders synced with the mail server using mbsync, which works really well and is much faster than using offlineimap, which I used for many, many years and which ends up being really slow when you do not live on the same continent as the mail server. Luis’s recent post about switching to mbsync finally pushed me to take the time to configure it all properly and I am glad that I did.

Let’s ignore my “lists” inbox, as that should be able to be read by any email client by just pointing it at it. I do this with a simple alias:

alias muttl='mutt -f ~/mail_linux/'

which allows me to type muttl at any command line to instantly bring it up:

What I spend most of the time in is my “main” mailbox, and that is in a local maildir that gets synced when needed in ~/mail/INBOX/. A simple mutt on the command line brings this up:

Yes, everything just ends up in one place. In handling my mail, I prune relentlessly. Everything ends up in one of 3 states for what I need to do next:

  • not read yet
  • read and left in INBOX as I need to do something “soon” about it
  • read and it is a patch to do something with

Everything that does not require a response, or I’ve already responded to it, gets deleted from the main INBOX at that point in time, or saved into an archive in case I need to refer back to it again (like mailing list messages).

That last state makes me save the message into one of two local maildirs, todo and stable. Everything in todo is a new patch that I need to review, comment on, or apply to a development tree. Everything in stable is something that has to do with patches that need to get applied to the stable kernel tree.

Side note, I have scripts that run frequently that email me any patches that need to be applied to the stable kernel trees, when they hit Linus’s tree. That way I can just live in my email client and have everything that needs to be applied to a stable release in one place.

I sweep my main INBOX every few hours, and sort things out by either quickly responding, deleting, archiving, or saving into the todo or stable directory. I don’t achieve a constant “inbox zero”, but if I only have 40 or so emails in there, I am doing well.

So, for this main workflow, I need an easy way to:

  • filter the INBOX by a pattern so that I only see one “type” of message at a time (more below)
  • read an email
  • write an email
  • respond to existing email, and use vim as an editor as my hands have those key bindings burned into them.
  • delete an email
  • save an email to one of two mboxes with a press of a few keystrokes
  • bulk delete/archive/save emails all at once

These are all tasks that I bet almost everyone needs to do all the time, so a tool like aerc should be able to do that easily.

A note about filtering. As everything comes into one inbox, it is easier to filter that mbox based on things so I can process everything at once.

As an example, I want to read all of the messages sent to the linux-usb mailing list right now, and not see anything else. To do that, in mutt, I press l (limit) which brings up a prompt for a filter to apply to the mbox. This ability to limit messages to one type of thing is really powerful and I use it in many different ways within mutt.
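
For instance, the limit prompt takes patterns like these (illustrative examples; the full list is in the PATTERNS section of ‘man muttrc’):

l ~C linux-usb        limit to messages addressed (To: or Cc:) to linux-usb
l ~N ~C linux-api     only unread messages addressed to linux-api
l ~s staging          messages whose subject contains “staging”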

Here’s an example of me just viewing all of the messages that are sent to the linux-usb mailing list, and saving them off after I have read them:

This isn’t that complex, but it has to work quickly and well on mailboxes that are really really big. As an example, here’s me opening my “all lists” mbox and filtering on the linux-api mailing list messages that I have not read yet. It’s really fast as mutt caches lots of information about the mailbox and does not require reading all of the messages each time it starts up to generate its internal structures.

All messages that I want to save to the todo directory I can save with a two-keystroke sequence, .t, which saves the message there automatically.

Again, that’s a binding I set up years ago: , jumps to the specific mbox, and . copies the message to that location.

Now you see why using mutt is not exactly obvious, those bindings are not part of the default configuration and everyone ends up creating their own custom key bindings for whatever they want to do. It takes a good amount of time to figure this out and set things up how you want, but once you are over that learning curve, you can do very complex things easily. Much like an editor (emacs, vim), you can configure them to do complex things easily, but getting to that level can take a lot of time and knowledge. It’s a tool, and if you are going to rely on it, you should spend the time to learn how to use your tools really well.

Hopefully aerc can get to this level of functionality soon. Odds are everyone else does something much like this, as my use-case is not unusual.

Now let’s get to the unusual use cases, the fun things:

Development Patch review and apply

When I decide it’s time to review and apply patches, I do so by subsystem (as I maintain a number of different ones). As all pending patches are in one big maildir, I filter the messages by the subsystem I care about at the moment, and save all of the messages out to a local mbox file that I call s (hey, naming is hard, it gets worse, just wait…)

So, in my linux/work/ local directory, I keep the development trees for different subsystems like usb, char-misc, driver-core, tty, and staging.

Let’s look at how I handle some staging patches.

First, I go into my ~/linux/work/staging/ directory, which I will stay in while doing all of this work. I open the todo mbox with a quick ,t pressed within mutt (a macro I picked from somewhere long ago, I don’t remember where…), and then filter all staging messages, and save them to a local mbox with the following keystrokes:

mutt ,t l staging T s ../s

Yes, I could skip the l staging step, and just do T staging instead of T, but it’s nice to see what I’m going to save off first before doing so:

Now all of those messages are in a local mbox file that I can open with a single keystroke, ’s’ on the command line. That is an alias:

alias s='mutt -f ../s'

I then dig around in that mbox, sort patches by driver type to see everything for that driver at once by filtering on the name and then save those messages to another mbox called ‘s1’ (see, I told you the names got worse.)

s l erofs T s ../s1

I have lots of local mbox files all “intuitively” named ‘s1’, ‘s2’, and ‘s3’. Of course I have aliases to open those files quickly:

alias s1='mutt -f ../s1' alias s2='mutt -f ../s2' alias s3='mutt -f ../s3'

I have a number of these mbox files as sometimes I need to filter even further by patch set, or other things, and saving them all to different mboxes makes things go faster.

So, all the erofs patches are in one mbox, let’s open it up and review them, and save the patches that look good enough to apply to another mbox:

Turns out that not all patches need to be dealt with right now (moving erofs out of the staging directory requires other people to review it), so I just save those messages back to the todo mbox:

Now I have a single patch that I want to apply, but I need to add some acks that the maintainers of erofs provided. I do this by editing the “raw” message directly from within mutt. I open the individual messages from the maintainers, cut their Reviewed-by lines, and then edit the original patch and add those lines to the patch:

Some kernel maintainers right now are screaming something like “Automate this!”, “Patchwork does this for you!”, “Are you crazy?” Yeah, this is one place that I need to work on, but the time involved to do this is not that much and it’s not common that others actually review patches for subsystems I maintain, unfortunately.

The ability to edit a single message directly within my email client is essential. I end up having to fix up changelog text, editing the subject line to be correct, fixing the mail headers to not do foolish things with text formats, and in some cases, editing the patch itself for when it is corrupted or needs to be fixed (I want a Linkedin skill badge for “can edit diff files by hand and have them still work”)

So one hard requirement I have is “editing a raw message from within the email client.” If an email client can not do this, it’s a non-starter for me, sorry.

So we now have a single patch that needs to be applied to the tree. I am already in the ~/linux/work/staging/ directory, and on the correct git branch for where this patch needs to go (how I handle branches and how patches move between them deserve a totally different blog post…)

I can apply this patch in one of two different ways, using git am -s ../s1 on the command line, piping the whole mbox into git and applying the patches directly, or I can apply them within mutt individually by using a macro.

When I have a lot of patches to apply, I just pipe the mbox file to git am -s as I’m comfortable with that, and it goes quick for multiple patches. It also works well as I have lots of different terminal windows open in the same directory when doing this and I can quickly toggle between them.

But we are talking about email clients at the moment, so here’s me applying a single patch to the local git tree:

All it took was hitting the L key. That key is set up as a macro in my mutt configuration file with a single line:

macro index L '| git am -s'\n

This macro pipes the output of the current message to git am -s.

The ability of mutt to pipe the current message (or messages) to external scripts is essential for my workflow in a number of different places. Not having to leave the email client but being able to run something else with that message, is a very powerful functionality, and again, a hard requirement for me.

So that’s it for applying development patches. It’s a bunch of the same tasks over and over:

  • collect patches by a common theme
  • filter the patches by a smaller subset
  • review them manually and respond if there are problems
  • save “good” patches off to apply
  • apply the good patches
  • jump back to the first step

Doing that all within the email program and being able to quickly get in, and out of the program, as well as do work directly from the email program, is key.

Of course I do a “test build and sometimes test boot and then push git trees and notify author that the patch is applied” set of steps when applying patches too, but those are outside of my email client workflow and happen in a separate terminal window.

Stable patch review and apply

The process of reviewing patches for the stable tree is much like the development patch process, but it differs in that I never use ‘git am’ for applying anything.

The stable kernel tree, while under development, is kept as a series of patches that need to be applied to the previous release. This series of patches is maintained by using a tool called quilt. Quilt is very powerful and handles sets of patches that need to be applied on top of a moving base very easily. The tool was based on a crazy set of shell scripts written by Andrew Morton a long time ago, is currently maintained by Jean Delvare, and has been rewritten in perl to make it more maintainable. It handles thousands of patches easily and quickly and is used by many developers to handle kernel patches for distributions as well as other projects.

I highly recommend it as it allows you to reorder, drop, and add patches in the middle of the series, and manipulate patches in all sorts of ways, as well as create new patches directly. I do this for the stable tree as lots of times we end up dropping patches from the middle of the series when reviewers say they should not be applied, adding new patches where needed as prerequisites of existing patches, and making other changes that, with git, would require lots of rebasing.
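
For anyone who has not used it, the day-to-day quilt commands map directly onto those operations (the patch and file names below are made up for illustration):

quilt import ../fix-foo.patch   # add an existing patch to the series
quilt push -a                   # apply the whole series on top of the base
quilt pop                       # unwind the topmost patch
quilt new tweak-bar.patch       # start a brand new patch
quilt add drivers/foo/bar.c     # track a file before editing it
quilt refresh                   # record the edits into the current patch
quilt delete bad-change.patch   # drop a patch from the series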

Rebasing a git tree does not work when you have developers working “down” from your tree. We usually have the rule with kernel development that if you have a public tree, it never gets rebased, otherwise no one can use it for development.

Anyway, the stable patches are kept in a quilt series in a repository that is kept under version control in git (complex, yeah, sorry). That queue can always be found here.

I do create a linux-stable-rc git tree that is constantly rebased based on the stable queue for those who run test systems that can not handle quilt patches. That tree is found here and should not ever be used by anyone for anything other than automated testing. See this email for a bit more explanation of how these git trees should, and should not, be used.

With all that background information behind us, let’s look at how I take patches that are in Linus’s tree, and apply them to the current stable kernel queues:

First I open the stable mbox. Then I filter by everything that has upstream in the subject line. Then I filter again by alsa to only look at the alsa patches. I look at the individual patches, looking at the patch to verify that it really is something that should be applied to the stable tree and determine what order to apply the patches in based on the date of the original commit.

I then hit F to pipe the message to a script that looks up the Fixes: tag in the message to determine which stable trees, if any, contain the commit that this patch fixes.

In this example, the patch only should go back to the 4.19 kernel tree, so when I apply it, I know to stop at that place and not go further.

To apply the patch, I hit A, which is another macro that I define in my mutt configuration:

macro index A |'~/linux/stable/apply_it_from_email'\n macro pager A |'~/linux/stable/apply_it_from_email'\n

It is defined “twice” as you can have different key bindings for when you are looking at a mailbox’s index of all messages versus when you are looking at the contents of a single message.

In both cases, I pipe the whole email message to my apply_it_from_email script.

That script digs through the message, finds the git commit id of the patch in Linus’s tree, then runs a different script that takes that commit id, exports the patch associated with it, edits the message to add my signed-off-by to the patch, and drops me into my editor to make any tweaks that might be needed (sometimes files get renamed so I have to do that by hand, and it gives me one final chance to review the patch in my editor, which is usually easier than in the email client directly as I have better syntax highlighting and can search and review the text better).

If all goes well, I save the file and the script continues and applies the patch to a bunch of stable kernel trees, one after another, adding the patch to the quilt series for that specific kernel version. To do all of this I had to spawn a separate terminal window as mutt does fun things to standard input/output when piping messages to a script, and I couldn’t ever figure out how to do this all without doing the extra spawn process.
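
A rough sketch of what such a script can look like is below (hypothetical paths and heavily simplified logic; this is not the actual apply_it_from_email script):

#!/bin/bash
# Sketch only: find the upstream commit id in the message piped on stdin,
# export that commit from Linus's tree, allow manual tweaks, then queue it
# into each stable quilt series.
set -e

msg=$(cat)

# Stable-tagged backports reference the upstream commit with a 40-char sha1.
commit=$(echo "$msg" | grep -oE '[0-9a-f]{40}' | head -n1)
[ -n "$commit" ] || { echo "no upstream commit id found" >&2; exit 1; }

cd ~/linux/upstream                         # assumed clone of Linus's tree
git format-patch -1 --stdout "$commit" > /tmp/stable.patch

${EDITOR:-vim} /tmp/stable.patch            # add Signed-off-by, fix renames, etc.

for queue in ~/linux/stable/queue-*; do     # assumed per-release quilt queues
    (cd "$queue" && quilt import /tmp/stable.patch)
done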

Here it is in action, as a video, since asciinema can’t show multiple windows at the same time.

Once I have applied the patch, I save it away as I might need to refer to it again, and I move on to the next one.

This sounds like a lot of different steps, but I can process a lot of these relatively quickly. The patch review step is the slowest one here, as that of course can not be automated.

I later take those new patches that have been applied and run kernel build tests and other things before sending out emails saying they have been applied to the tree. But like with development patches, that happens outside of my email client workflow.

Bonus, sending email from the command line

In writing this up, I remembered that I do have some scripts that use mutt to send email out. I don’t normally use mutt for this for patch reviews, as I use other scripts for that (ones that eventually got turned into git send-email), so it’s not a hard requirement, but it is nice to be able to do a simple:

mutt -s "${subject}" "${address}" < ${msg} >> error.log 2>&1

from within a script when needed.

Thunderbird also can do this, I have used:

thunderbird --compose "to='${address}',subject='${subject}',message=${msg}"

at times in the past when dealing with email servers that mutt can not connect to easily (i.e. gmail when using oauth tokens).

Summary of what I need from an email client

So, to summarize it all for Drew, here’s my list of requirements for me to be able to use an email client for kernel maintainership roles:

  • work with local mbox and maildir folders easily
  • open huge mbox and maildir folders quickly.
  • custom key bindings for any command. Sane defaults are always good, but everyone is used to a previous program and training fingers can be hard.
  • create new key bindings for common tasks (like save a message to a specific mbox)
  • easily filter messages based on various things. Full regexes are not needed, see the PATTERNS section of ‘man muttrc’ for examples of what people have come up with over the years as being needed by an email client.
  • when sending/responding to an email, bring it up in the editor of my choice, with full headers. I know aerc already uses vim for this, which is great as that makes it easy to send patches or include other files directly in an email body.
  • edit a message directly from the email client and then save it back to the local mbox it came from
  • pipe the current message to an external program

That’s what I use for kernel development.

Oh, I forgot:

  • handle gpg encrypted email. Some mailing lists I am on send everything encrypted with a mailing list key which is needed to both decrypt the message and to encrypt messages sent to the list. SMIME could be used if GPG can’t work, but both of them are probably equally horrid to code support for, so I recommend GPG as that’s probably used more often.

Bonus things that I have grown to rely on when using mutt is:

  • handle displaying html email by piping to an external tool like w3m
  • send a message from the command line with a specific subject and a specific attachment if needed.
  • specify the configuration file to use as a command line option. It is usually easier to have one configuration file for a “work” account, and another one for a “personal” one, with different email servers and settings provided for both.
  • configuration files that can include other configuration files. Mutt allows me to keep all of my “core” keybindings in one config file and just specific email server options in separate config files, allowing me to make configuration management easier.

If you have made it this far, and you aren’t writing an email client, that’s amazing, it must be a slow news day, which is a good thing. I hope this writeup helps others to show them how mutt can be used to handle developer workflows easily, and what is required of a good email client in order to be able to do all of this.

Hopefully other email clients can get to state where they too can do all of this. Competition is good and maybe aerc can get there someday.

Pete Zaitcev: Comment to 'On the digital economy and the global problems of humanity' by omega_hyperon

Saturday 10th of August 2019 12:46:26 PM
I hear all of this every day. These people take the well-established trend of slowing scientific and technological progress and say: we know better what humanity needs. Take the money away from the undeserving, give it to people as smart as me, and progress will pick up again. And we'll protect nature while we're at it! And capitalism is always to blame.

That civilization is treading water is not in dispute. But here are a couple of things this scumbag keeps quiet about.

First, if you take the money away from Disney and hand it to a research institute, there will be no money. He seems to speak Russian, yet failed to draw even that simple lesson from the collapse of the USSR. American science routed Soviet science during the scientific-technical revolution above all because the capitalist economy provided the economic base for that science, while the socialist economy was a failure.

In general, if you compare the budgets of Apple and Disney with the budget of Housing and Urban Development and similar agencies, the difference is two orders of magnitude. If someone wants to give science more money, the answer is not to rob Disney but to stop handing out free housing to freeloaders. The slowdowns of science and of capitalism go hand in hand and are caused by government policy, not by some bitcoin.

Second, who even believes these charlatans? We were fed stories about the children in Africa for decades, yet over the span of the terrible famine in Ethiopia its population grew from 38 million to 75 million. The same thing happened with the polar bears. The planet's forest area is growing. Suppose some forests in Brazil really were cut down for pasture... but who is going to believe it?

This crisis of expertise is no joke. Fear of vaccines was created not by capitalism and bitcoin, but by the rot and decay of the system of scientific research as a whole. He did not name the institute whose budget he compared with Disney's, but one wonders how many idlers are on its staff.

The collapse of science shows not only in how the public has lost faith in scientists. The objective indicators have sagged too. The reproducibility of published results is very poor, and it keeps going down. Is bitcoin to blame for that as well?

On the whole, most of this whining strikes me as extremely harmful. If he cannot diagnose the causes of the crisis, the solutions he proposes will give us nothing, and biodiversity will not increase either.


Michael Kerrisk (manpages): man-pages-5.02 is released

Friday 2nd of August 2019 09:04:53 AM
I've released man-pages-5.02. The release tarball is available on kernel.org. The browsable online pages can be found on man7.org. The Git repository for man-pages is available on kernel.org.

This release resulted from patches, bug reports, reviews, and comments from 28 contributors. The release includes around 120 commits that change more than 50 pages.

The most notable of the changes in man-pages-5.02 are the following:
  • Thanks to Matthew Bobrowski, extensive documentation of FAN_REPORT_FID and directory modification events has been added to the fanotify(7), fanotify_init(2), and fanotify_mark(2) pages.
  • I have added several details on the dlopen API to the dlopen(3) page.
  • I've corrected a few longstanding errors and omissions in the pivot_root(2) manual page, and much more extensive changes for this page are in the pipeline for the next release.

Pete Zaitcev: Swift is 2 to 4 times faster than any competitor

Thursday 25th of July 2019 02:38:23 AM

Or so they say, at least for certain workloads.

In January of 2015 I led a project to evaluate and select a next-generation storage platform that would serve as the central storage (sometimes referred to as an active archive or tier 2) for all workflows. We identified the following features as being key to the success of the platform:

  • Utilization of erasure coding for data/failure protection (no RAID!)
  • Open hardware and the ability to mix and match hardware (a.k.a. support heterogeneous environments)
  • Open source core (preferred, but not required)
  • Self-healing in response to failures (no manual processes required, like replacing a drive)
  • Expandable online to exabyte-scale (no downtime for expansions or upgrades)
  • High availability / fault tolerance (no single point of failure)
  • Enterprise-grade support (24/7/365)
  • Visibility (dashboards to depict load, errors, etc.)
  • RESTful API access (S3/Swift)
  • SMB/NFS access to the same data (preferred, but not required)

In hindsight, I wish we would have included two additional requirements:

  • Transparently tier and migrate data to and from public cloud storage
  • Span multiple geographic regions while maintaining a single global namespace

We spent the next ~1.5 years evaluating the following systems:

  • SwiftStack
  • Ceph (InkTank/RedHat/Canonical)
  • Scality
  • Cloudian
  • Caringo
  • Dell/EMC ECS
  • Cleversafe / IBM COS
  • HGST/WD ActiveScale
  • NetApp StorageGRID
  • Nexenta
  • Qumulo
  • Quantum Lattus
  • Quobyte
  • Hedvig
  • QFS (Quantcast File System)
  • AWS S3
  • Sohonet FileStore

SwiftStack was the only solution that literally checked every box on our list of desired features, but that’s not the only reason we selected it over the competition.

The top three reasons behind our selection of SwiftStack were as follows:

  • Speed – SwiftStack was—by far—the highest-performing object storage platform — capable of line speed and 2-4x faster than competitors. The ability to move assets between our “tier 1 NAS” and “tier 2 object” with extremely high throughput was paramount to the success of the architecture.
  • [...]

Note: While SwiftStack 1space was not a part of the SwiftStack platform at the time of our evaluation and purchase, it would have been an additional deciding factor in favor of SwiftStack if it had been.

Interesting. It should be noted that performance of Swift is a great match for some workloads, but not for others. In particular, Swift is weak on small-file workloads, such as Gnocchi, which writes a ton of 16-byte objects again and again. The overhead is a killer there, and not just on the wire: Swift has to update its accounting databases each and every time a write is done, so that "swift stat" shows things like quotas. Swift is also not particularly good at HPC-style workloads, which benefit from a great bisection bandwidth, because we transfer all user data through so-called "proxy" servers. Unlike e.g. Ceph, Swift keeps the cluster topology hidden from the client, while a Ceph client actually tracks the ring changes, placement groups and their leaders, etc. But as we can see, once the object sizes start climbing and the number of clients increases, Swift rapidly approaches the wire speed.

I cannot help noticing that the architecture in question has a front-facing cache pool (tier 1), which is what the ultimate clients see instead of Swift. Most of the time, Swift is selected for its ability to serve tens of thousands of clients simultaneously, but not in this case. Apparently, the end-user invented ProxyFS independently.

There's no mention of Red Hat selling Swift in the post. Either it was not part of the evaluation at all, or the author forgot about it with the passing of time. He did list a bunch of rather weird and obscure storage solutions, though.

Kees Cook: security things in Linux v5.2

Thursday 18th of July 2019 12:07:36 AM

Previously: v5.1.

Linux kernel v5.2 was released last week! Here are some security-related things I found interesting:

page allocator freelist randomization
While the SLUB and SLAB allocator freelists have been randomized for a while now, the overarching page allocator itself wasn’t. This meant that anything doing allocation outside of the kmem_cache/kmalloc() would have deterministic placement in memory. This is bad both for security and for some cache management cases. Dan Williams implemented this randomization under CONFIG_SHUFFLE_PAGE_ALLOCATOR now, which provides additional uncertainty to memory layouts, though at a rather low granularity of 4MB (see SHUFFLE_ORDER). Also note that this feature needs to be enabled at boot time with page_alloc.shuffle=1 unless you have direct-mapped memory-side-cache (you can check the state at /sys/module/page_alloc/parameters/shuffle).

stack variable initialization with Clang
Alexander Potapenko added support via CONFIG_INIT_STACK_ALL for Clang’s -ftrivial-auto-var-init=pattern option that enables automatic initialization of stack variables. This provides even greater coverage than the prior GCC plugin for stack variable initialization, as Clang’s implementation also covers variables not passed by reference. (In theory, the kernel build should still warn about these instances, but even if they exist, Clang will initialize them.) Another notable difference between the GCC plugins and Clang’s implementation is that Clang initializes with a repeating 0xAA byte pattern, rather than zero. (Though this changes under certain situations, like for 32-bit pointers which are initialized with 0x000000AA.) As with the GCC plugin, the benefit is that the entire class of uninitialized stack variable flaws goes away.
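
A toy userspace example of the class of bug this addresses (not kernel code; build it with clang -ftrivial-auto-var-init=pattern to see the effect):

#include <stdio.h>

int main(void)
{
	long secret;	/* deliberately never assigned */

	/* Bug: read before initialization. Without the option this prints
	 * whatever stale data happened to be on the stack; with it, the
	 * variable is filled with the repeating 0xAA pattern instead. */
	printf("%lx\n", (unsigned long)secret);
	return 0;
}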

Kernel Userspace Access Prevention on powerpc
Like SMAP on x86 and PAN on ARM, Michael Ellerman and Russell Currey have landed support for disallowing access to userspace without explicit markings in the kernel (KUAP) on Power9 and later PPC CPUs under CONFIG_PPC_RADIX_MMU=y (which is the default). This is the continuation of the execute protection (KUEP) in v4.10. Now if an attacker tries to trick the kernel into any kind of unexpected access from userspace (not just executing code), the kernel will fault.

Microarchitectural Data Sampling mitigations on x86
Another set of cache memory side-channel attacks came to light, and were consolidated together under the name Microarchitectural Data Sampling (MDS). MDS is weaker than other cache side-channels (less control over target address), but memory contents can still be exposed. Much like L1TF, when one’s threat model includes untrusted code running under Symmetric Multi Threading (SMT: more logical cores than physical cores), the only full mitigation is to disable hyperthreading (boot with “nosmt“). For all the other variations of the MDS family, Andi Kleen (and others) implemented various flushing mechanisms to avoid cache leakage.

unprivileged userfaultfd sysctl knob
Both FUSE and userfaultfd provide attackers with a way to stall a kernel thread in the middle of memory accesses from userspace by initiating an access on an unmapped page. While FUSE is usually behind some kind of access controls, userfaultfd hadn’t been. To avoid various heap grooming and heap spraying techniques for exploiting Use-after-Free flaws, Peter Xu added the new “vm.unprivileged_userfaultfd” sysctl knob to disallow unprivileged access to the userfaultfd syscall.
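
Flipping the knob is a one-liner, for example (run as root; drop it into a sysctl.d file to make it persistent across reboots):

sysctl vm.unprivileged_userfaultfd=0    # refuse userfaultfd(2) to unprivileged callers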

temporary mm for text poking on x86
The kernel regularly performs self-modification with things like text_poke() (during stuff like alternatives, ftrace, etc). Before, this was done with fixed mappings (“fixmap”) where a specific fixed address at the high end of memory was used to map physical pages as needed. However, this resulted in some temporal risks: other CPUs could write to the fixmap, or there might be stale TLB entries on removal that other CPUs might still be able to write through to change the target contents. Instead, Nadav Amit has created a separate memory map for kernel text writes, as if the kernel is trying to make writes to userspace. This mapping ends up staying local to the current CPU, and the poking address is randomized, unlike the old fixmap.
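
The overall shape of the approach looks roughly like the sketch below. This is a conceptual illustration only: every helper name is hypothetical, and this is not the kernel's actual text_poke() implementation, just the sequence of steps it performs. Switch to a private poking mm, map the target page at a randomized temporary address, write through that CPU-local mapping, then unmap it and flush only the local TLB so no other CPU ever sees a writable alias of kernel text.

    /* Conceptual sketch with hypothetical helpers; not real kernel code. */
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    struct mm_struct;                                   /* opaque here */

    /* Hypothetical stand-ins for kernel internals. */
    extern struct mm_struct *switch_to_poking_mm(void);
    extern void restore_mm(struct mm_struct *prev);
    extern uint8_t *map_page_at_random_addr(void *kernel_addr);
    extern void unmap_poking_page(uint8_t *tmp);
    extern void flush_local_tlb(void);
    extern size_t offset_in_page_of(void *addr);

    static void sketch_text_poke(void *kernel_addr, const void *opcode, size_t len)
    {
            struct mm_struct *prev = switch_to_poking_mm();
            uint8_t *tmp = map_page_at_random_addr(kernel_addr);

            /* The write goes through the temporary, CPU-local mapping. */
            memcpy(tmp + offset_in_page_of(kernel_addr), opcode, len);

            unmap_poking_page(tmp);
            flush_local_tlb();          /* only this CPU ever had the mapping */
            restore_mm(prev);
    }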

ongoing: implicit fall-through removal
Gustavo A. R. Silva is nearly done with marking (and fixing) all the implicit fall-through cases in the kernel. Based on the pull request from Gustavo, it looks very much like v5.3 will see -Wimplicit-fallthrough added to the global build flags and then this class of bug should stay extinct in the kernel.
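
The marking itself is just a comment that the compiler's -Wimplicit-fallthrough warning recognizes, so only unannotated (and therefore possibly accidental) fall-throughs get reported. A minimal example of the annotation style in a standalone program:

    /* Build with: gcc -Wimplicit-fallthrough example.c */
    #include <stdio.h>

    static const char *describe(int level)
    {
            switch (level) {
            case 0:
                    return "quiet";
            case 1:
                    printf("verbose mode\n");
                    /* fall through */
            case 2:
                    return "noisy";
            default:
                    return "unknown";
            }
    }

    int main(void)
    {
            printf("%s\n", describe(1));
            return 0;
    }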

CLONE_PIDFD added
Christian Brauner added the new CLONE_PIDFD flag to the clone() system call, which complements the pidfd work in v5.1 so that programs can now gain a handle for a process ID right at fork() (actually clone()) time, instead of needing to get the handle from /proc after process creation. With signals and forking now enabled, the next major piece (already in linux-next) will be adding P_PIDFD to the waitid() system call, and common process management can be done entirely with pidfd.
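
A minimal sketch of the new flow is below. Raw syscalls are used because glibc wrappers were not yet available; the clone() argument order shown is the x86-64 one, and the fallback flag/syscall numbers are taken from the v5.1/v5.2 UAPI headers in case older userspace headers lack them.

    /* Sketch: get a pidfd at clone() time, then signal the child via it. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <sys/wait.h>

    #ifndef CLONE_PIDFD
    #define CLONE_PIDFD 0x00001000          /* from linux/sched.h, v5.2 */
    #endif
    #ifndef SYS_pidfd_send_signal
    #define SYS_pidfd_send_signal 424       /* arch-unified syscall number */
    #endif

    int main(void)
    {
            int pidfd = -1;
            pid_t pid;

            /* x86-64 order: clone(flags, stack, parent_tid, child_tid, tls);
             * with CLONE_PIDFD the new pidfd is stored via parent_tid. */
            pid = syscall(SYS_clone, CLONE_PIDFD | SIGCHLD, NULL, &pidfd, NULL, 0);
            if (pid < 0) {
                    perror("clone");
                    return 1;
            }
            if (pid == 0) {                 /* child: wait to be signalled */
                    pause();
                    _exit(0);
            }

            printf("child %d, pidfd %d\n", (int)pid, pidfd);

            /* Signal the child through its pidfd, not its (reusable) PID. */
            if (syscall(SYS_pidfd_send_signal, pidfd, SIGTERM, NULL, 0) < 0)
                    perror("pidfd_send_signal");

            waitpid(pid, NULL, 0);
            close(pidfd);
            return 0;
    }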

Edit: added CLONE_PIDFD notes, as reminded by Christian Brauner. :)

That’s it for now; let me know if you think I should add anything here. We’re almost to -rc1 for v5.3!

© 2019, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

Linux Plumbers Conference: System Boot and Security Microconference Accepted into 2019 Linux Plumbers Conference

Wednesday 17th of July 2019 04:57:44 PM

We are pleased to announce that the System Boot and Security Microconference has been accepted into the 2019 Linux Plumbers Conference! Computer-system security is a topic that has received a lot of serious attention over the years, but nowhere near as much attention has been paid to the system firmware, even though firmware is just as much a target for those looking to wreak havoc on our systems. Firmware is now being developed with security in mind, but the solutions it provides are still incomplete. This microconference will focus on the security of the system, especially from the time the system is powered on.

Expected topics for this year include:

Come and join us in the discussion of keeping your system secure even at boot up.

We hope to see you there!

Linux Plumbers Conference: Power Management and Thermal Control Microconference Accepted into 2019 Linux Plumbers Conference

Wednesday 10th of July 2019 10:29:04 PM

We are pleased to announce that the Power Management and Thermal Control Microconference has been accepted into the 2019 Linux Plumbers Conference! Power management and thermal control are important areas in the Linux ecosystem to help improve the environment of the planet. In recent years, computer systems have become more complex and more thermally challenged at the same time, while the energy-efficiency expectations placed on them have kept growing. This trend is likely to continue in the foreseeable future, and despite the progress made in the power-management and thermal-control problem space since last year's Linux Plumbers Conference, progress that includes (but is not limited to) the merging of the energy-aware scheduling patch series and CPU idle-time management improvements, there will be more work to do in those areas. This gathering will focus on continuing to have Linux meet the power-management and thermal-control challenge.

Topics for this year include:

  • CPU idle-time management improvements
  • Device power management based on platform firmware
  • DVFS in Linux
  • Energy-aware and thermal-aware scheduling
  • Consumer-producer workloads, power distribution
  • Thermal-control methods
  • Thermal-control frameworks

Come and join us in the discussion of how to extend the battery life of your laptop while keeping it cool.

We hope to see you there!

Linux Plumbers Conference: Android Microconference Accepted into 2019 Linux Plumbers Conference

Tuesday 9th of July 2019 11:29:19 PM

We are pleased to announce that the Android Microconference has been accepted into the 2019 Linux Plumbers Conference! Android has a long history at Linux Plumbers and has continually made progress as a direct result of these meetings. This year’s focus will be a fairly ambitious goal to create a Generic Kernel Image (GKI) (or one kernel to rule them all!). Having a GKI will allow silicon vendors to be independent of the Linux kernel running on the device. As such, kernels could be easily upgraded without requiring any rework of the initial hardware porting efforts. This microconference will also address areas that have been discussed in the past.

The proposed topics include:

Come and join us in the discussion of improving what is arguably the most popular operating system in the world!

We hope to see you there!

Matthew Garrett: Bug bounties and NDAs are an option, not the standard

Tuesday 9th of July 2019 09:15:21 PM
Zoom had a vulnerability that allowed users on MacOS to be connected to a video conference with their webcam active simply by visiting an appropriately crafted page. Zoom's response has largely been to argue that:

a) There's a setting you can toggle to disable the webcam being on by default, so this isn't a big deal,
b) When Safari added a security feature requiring that users explicitly agree to launch Zoom, this created a poor user experience and so they were justified in working around this (and so introducing the vulnerability), and,
c) The submitter asked whether Zoom would pay them for disclosing the bug, and when Zoom said they'd only do so if the submitter signed an NDA, they declined.

(a) and (b) are clearly ludicrous arguments, but (c) is the interesting one. Zoom go on to mention that they disagreed with the severity of the issue, and in the end decided not to change how their software worked. If the submitter had agreed to the terms of the NDA, then Zoom's decision that this was a low severity issue would have led to them being given a small amount of money and never being allowed to talk about the vulnerability. Since Zoom apparently have no intention of fixing it, we'd presumably never have heard about it. Users would have been less informed, and the world would have been a less secure place.

The point of bug bounties is to provide people with an additional incentive to disclose security issues to companies. But what incentive are they offering? Well, that depends on who you are. For many people, the amount of money offered by bug bounty programs is meaningful, and agreeing to sign an NDA is worth it. For others, the ability to publicly talk about the issue is worth more than whatever the bounty may award - being able to give a presentation on the vulnerability at a high profile conference may be enough to get you a significantly better paying job. Others may be unwilling to sign an NDA on principle, refusing to trust that the company will ever disclose the issue or fix the vulnerability. And finally there are people who can't sign such an NDA - they may have discovered the issue on work time, and employer policies may prohibit them doing so.

Zoom are correct that it's not unusual for bug bounty programs to require NDAs. But when they talk about this being an industry standard, they come awfully close to suggesting that the submitter did something unusual or unreasonable in rejecting their bounty terms. When someone lets you know about a vulnerability, they're giving you an opportunity to have the issue fixed before the public knows about it. They've done something they didn't need to do - they could have just publicly disclosed it immediately, causing significant damage to your reputation and potentially putting your customers at risk. They could potentially have sold the information to a third party. But they didn't - they came to you first. If you want to offer them money in order to encourage them (and others) to do the same in future, then that's great. If you want to tie strings to that money, that's a choice you can make - but there's no reason for them to agree to those strings, and if they choose not to then you don't get to complain about that afterwards. And if they make it clear at the time of submission that they intend to publicly disclose the issue after 90 days, then they're acting in accordance with widely accepted norms. If you're not able to fix an issue within 90 days, that's very much your problem.

If your bug bounty requires people sign an NDA, you should think about why. If it's so you can control disclosure and delay things beyond 90 days (and potentially never disclose at all), look at whether the amount of money you're offering for that is anywhere near commensurate with the value the submitter could otherwise gain from the information and compare that to the reputational damage you'll take from people deciding that it's not worth it and just disclosing unilaterally. And, seriously, never ask for an NDA before you're committing to a specific $ amount - it's never reasonable to ask that someone sign away their rights without knowing exactly what they're getting in return.

tl;dr - a bug bounty should only be one component of your vulnerability reporting process. You need to be prepared for people to decline any restrictions you wish to place on them, and you need to be prepared for them to disclose on the date they initially proposed. If they give you 90 days, that's entirely within industry norms. Remember that a bargain is being struck here - you offering money isn't being generous, it's you attempting to provide an incentive for people to help you improve your security. If you're asking people to give up more than you're offering in return, don't be surprised if they say no.

comments

Linux Plumbers Conference: Update on LPC 2019 registration waiting list

Tuesday 9th of July 2019 01:36:52 AM

Here is an update regarding the registration situation for LPC2019.

The considerable interest in participating this year meant that the conference sold out earlier than ever before.

Instead of a small release of late-registration spots, the LPC planning committee has decided to run a waiting list, which will be used as the exclusive method for additional registrations. The planning committee will reach out to individuals on the waiting list and invite them to register at the regular rate of $550 as spots become available.

With the majority of the Call for Proposals (CfP) process still open, it is not yet possible to release passes. The planning committee and microconference leads are working together to allocate the passes earmarked for microconferences. The Networking Summit and Kernel Summit speakers also have yet to be confirmed.

The planning committee understands that many of those who added themselves to the waiting list wish to find out soon whether they will be issued a pass. We anticipate the first passes to be released on July 22nd at the earliest.

Please follow us on social media, or here on this blog for further updates.

Linux Plumbers Conference: VFIO/IOMMU/PCI Microconference Accepted into 2019 Linux Plumbers Conference

Monday 8th of July 2019 05:56:09 PM

We are pleased to announce that the VFIO/IOMMU/PCI Microconference has been accepted into the 2019 Linux Plumbers Conference!

The PCI interconnect specification and the devices implementing it are incorporating more and more features aimed at high-performance systems. This requires the kernel to coordinate the PCI devices, the IOMMUs they are connected to, and the VFIO layer used to manage them (for userspace access and device pass-through) so that users (and virtual machines) can use them effectively. The kernel interfaces that control PCI devices have to be designed in sync across all three subsystems, which means there are many intersections in the design of their kernel control paths, requiring design discussions that involve all three subsystems at once.

A successful VFIO/IOMMU/PCI Microconference was held at Linux Plumbers in 2017, where:

  • The ground work for paravirtualized IOMMU (virtio-iommu) was defined
  • The kernel policy for surprise removal was debated and a solution was brought forward
  • Advanced Error Reporting (AER) user space interface was proposed and agreed upon
  • Configuration Request/Retry Status (CRS) handling and kernel implementation were defined
  • Shared Virtual Address (SVA) solutions from x86 and ARM were highlighted and the microconference discussions laid the path towards a unified solution

This year, the microconference will follow up on the previous agendas and focus on ongoing patch review and design work aimed at the VFIO/IOMMU/PCI subsystems.

Topics for this year include:

  • VFIO
  • IOMMU
    • DMA-API layer interactions and how to move towards generic dma-ops for IOMMU drivers
  • PCI
    • Resources claiming/assignment consolidation
    • Peer-to-Peer
    • PCI error management
    • prefetchable vs non-prefetchable BAR address mappings (cacheability)
    • Kernel NoSnoop TLP attribute handling
    • CCIX and accelerators management

Come and join us in the discussion in helping Linux keep up with the new features being added to the PCI interconnect specification.

We hope to see you there!

Matthew Garrett: Creating hardware where no hardware exists

Sunday 7th of July 2019 08:15:09 PM
The laptop industry was still in its infancy back in 1990, but it already faced a core problem that we still have today - power and thermal management are hard, but also critical to a good user experience (and potentially to the lifespan of the hardware). This was in the days when DOS and Windows had no memory protection, so handling these problems at the OS level would have been an invitation for someone to overwrite your management code and potentially kill your laptop. The safe option was pushing all of this out to an external management controller of some sort, but vendors in the 90s were the same as vendors now and would do basically anything to avoid having to drop an extra chip on the board. Thankfully(?), Intel had a solution.

The 386SL was released in October 1990 as a low-powered mobile-optimised version of the 386. Critically, it included a feature that let vendors ensure that their power management code could run without OS interference. A small window of RAM was hidden behind the VGA memory[1] and the CPU configured so that various events would cause the CPU to stop executing the OS and jump to this protected region. It could then do whatever power or thermal management tasks were necessary and return control to the OS, which would be none the wiser. Intel called this System Management Mode, and we've never really recovered.

Step forward to the late 90s. USB is now a thing, but even the operating systems that support USB usually don't in their installers (and plenty of operating systems still didn't have USB drivers). The industry needed a transition path, and System Management Mode was there for them. By configuring the chipset to generate a System Management Interrupt (or SMI) whenever the OS tried to access the PS/2 keyboard controller, the CPU could then trap into some SMM code that knew how to talk to USB, figure out what was going on with the USB keyboard, fake up the results and pass them back to the OS. As far as the OS was concerned, it was talking to a normal keyboard controller - but in reality, the "hardware" it was talking to was entirely implemented in software on the CPU.

Since then we've seen even more stuff get crammed into SMM, which is annoying because in general it's much harder for an OS to do interesting things with hardware if the CPU occasionally stops in order to run invisible code to touch hardware resources you were planning on using, and that's even ignoring the fact that operating systems in general don't really appreciate the entire world stopping and then restarting some time later without any notification. So, overall, SMM is a pain for OS vendors.

Change of topic. When Apple moved to x86 CPUs in the mid 2000s, they faced a problem. Their hardware was basically now just a PC, and that meant people were going to try to run their OS on random PC hardware. For various reasons this was unappealing, and so Apple took advantage of the one significant difference between their platforms and generic PCs. x86 Macs have a component called the System Management Controller that (ironically) seems to do a bunch of the stuff that the 386SL was designed to do on the CPU. It runs the fans, it reports hardware information, it controls the keyboard backlight, it does all kinds of things. So Apple embedded a string in the SMC, and the OS tries to read it on boot. If it fails, so does boot[2]. Qemu has a driver that emulates enough of the SMC that you can provide that string on the command line and boot OS X in qemu, something that's documented further here.

What does this have to do with SMM? It turns out that you can configure x86 chipsets to trap into SMM on arbitrary IO port ranges, and older Macs had SMCs in IO port space[3]. After some fighting with Intel documentation[4] I had Coreboot's SMI handler responding to writes to an arbitrary IO port range. With some more fighting I was able to fake up responses to reads as well. And then I took qemu's SMC emulation driver and merged it into Coreboot's SMM code. Now, accesses to the IO port range that the SMC occupies on real hardware generate SMIs, trap into SMM on the CPU, run the emulation code, handle writes, fake up responses to reads and return control to the OS. From the OS's perspective, this is entirely invisible[5]. We've created hardware where none existed.
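
For a rough idea of what the SMM side of this looks like, here is a conceptual sketch rather than Coreboot's actual code: every function and structure name below is hypothetical, and the exact SMC port range is an assumption. The point is just the dispatch: the chipset raises an SMI for IO accesses in the trapped range, the handler decodes the access from the saved CPU state, asks the SMC emulation (qemu's AppleSMC model in the real project) for an answer, and patches the result back before the OS resumes.

    /* Conceptual sketch with hypothetical names; not Coreboot's real code. */
    #include <stdbool.h>
    #include <stdint.h>

    #define SMC_PORT_BASE 0x300     /* assumed base of the SMC's IO range */
    #define SMC_PORT_LEN  0x20      /* assumed length of that range */

    /* Decoded from the saved CPU state by the SMI entry code (hypothetical). */
    struct io_trap {
            uint16_t port;
            bool     is_write;
            uint8_t  value;         /* value written, or slot for a read result */
    };

    /* Hypothetical wrappers around the borrowed SMC emulation logic. */
    extern uint8_t smc_emul_read(uint16_t port);
    extern void smc_emul_write(uint16_t port, uint8_t value);

    /* Returns true if this trap belonged to the emulated SMC. */
    bool handle_io_trap_smi(struct io_trap *trap)
    {
            if (trap->port < SMC_PORT_BASE ||
                trap->port >= SMC_PORT_BASE + SMC_PORT_LEN)
                    return false;   /* not ours; let other SMI handlers run */

            if (trap->is_write)
                    smc_emul_write(trap->port, trap->value);
            else    /* fake up the read before returning to the OS */
                    trap->value = smc_emul_read(trap->port);

            return true;
    }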

The tree where I'm working on this is here, and I'll see if it's possible to clean this up in a reasonable way to get it merged into mainline Coreboot. Note that this only handles the SMC - actually booting OS X involves a lot more, but that's something for another time.

[1] If the OS attempts to access this range, the chipset directs it to the video card instead of to actual RAM.
[2] It's actually more complicated than that - see here for more.
[3] IO port space is a weird x86 feature where there's an entire separate IO bus that isn't part of the memory map and which requires different instructions to access. It's low performance but also extremely simple, so hardware that has no performance requirements is often implemented using it.
[4] Some current Intel hardware has two sets of registers defined for setting up which IO ports should trap into SMM. I can't find anything that documents what the relationship between them is, but if you program the obvious ones nothing happens and if you program the ones that are hidden in the section about LPC decoding ranges things suddenly start working.
[5] Eh technically a sufficiently enthusiastic OS could notice that the time it took for the access to occur didn't match what it should on real hardware, or could look at the CPU's count of the number of SMIs that have occurred and correlate that with accesses, but good enough

comments

Linux Plumbers Conference: Scheduler Microconference Accepted into 2019 Linux Plumbers Conference

Sunday 7th of July 2019 02:51:00 PM

We are pleased to announce that the Scheduler Microconference has been accepted into the 2019 Linux Plumbers Conference! The scheduler determines what runs on the CPU at any given time; the lag of your desktop, for example, is affected by the scheduler. There are a few different scheduling classes for a user to choose from, such as the default class (SCHED_OTHER) and the real-time classes (SCHED_FIFO, SCHED_RR and SCHED_DEADLINE). The deadline scheduler is the newest and allows the user to control the amount of CPU bandwidth received by a task or group of tasks. With cloud computing becoming popular these days, controlling the bandwidth of containers or virtual machines is becoming more important. The Real-Time patch set is also destined to become mainline, which will add more strain on the scheduling of tasks to make sure that real-time tasks make their deadlines (although this microconference will focus on the non-real-time aspects of the scheduler; please defer real-time topics to the Real-Time Microconference). All of this requires verification techniques to ensure the scheduler is properly designed.
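
As a small illustration of the bandwidth control the deadline class offers, the hedged sketch below asks for 10ms of CPU time in every 100ms period for the calling task. The struct layout follows the sched_setattr(2) UAPI (there is no glibc wrapper, hence the raw syscall), the runtime/deadline/period values are arbitrary examples, and the call needs root or CAP_SYS_NICE.

    /* Sketch: put the calling task into SCHED_DEADLINE with a 10% budget. */
    #define _GNU_SOURCE
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    #ifndef SCHED_DEADLINE
    #define SCHED_DEADLINE 6
    #endif

    struct sched_attr {                     /* mirrors the kernel UAPI layout */
            uint32_t size;
            uint32_t sched_policy;
            uint64_t sched_flags;
            int32_t  sched_nice;
            uint32_t sched_priority;
            uint64_t sched_runtime;         /* all three in nanoseconds */
            uint64_t sched_deadline;
            uint64_t sched_period;
    };

    int main(void)
    {
            struct sched_attr attr;

            memset(&attr, 0, sizeof(attr));
            attr.size           = sizeof(attr);
            attr.sched_policy   = SCHED_DEADLINE;
            attr.sched_runtime  =  10 * 1000 * 1000;    /* 10 ms of CPU time */
            attr.sched_deadline = 100 * 1000 * 1000;    /* due within 100 ms */
            attr.sched_period   = 100 * 1000 * 1000;    /* every 100 ms      */

            if (syscall(SYS_sched_setattr, 0, &attr, 0) < 0) {
                    perror("sched_setattr");
                    return 1;
            }

            printf("running with a 10%% deadline-scheduled CPU budget\n");
            /* ... periodic work would go here ... */
            return 0;
    }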

Topics for this year include:

Come and join us in the discussion of controlling what tasks get to run on your machine and when.

We hope to see you there!

Linux Plumbers Conference: RDMA Microconference Accepted into 2019 Linux Plumbers Conference

Wednesday 3rd of July 2019 10:54:07 PM

We are pleased to announce that the RDMA Microconference has been accepted into the 2019 Linux Plumbers Conference! RDMA has been a microconference at Plumbers for the last three years and will be continuing its productive work for a fourth year. The RDMA meetings at the previous Plumbers have been critical in getting improvements to the RDMA subsystem merged into mainline. These include a new user API, container support, testability/syzkaller, system bootup, Soft iWarp, and more. There are still difficult open issues that need to be resolved, and this year’s Plumbers RDMA Microconference is sure to come up with answers to these tough problems.

Topics for this year include:

  • RDMA and PCI peer to peer for GPU and NVMe applications, including HMM and DMABUF topics
  • RDMA and DAX (carry over from LSF/MM)
  • Final pieces to complete the container work
  • Contiguous system memory allocations for userspace (unresolved from 2017)
  • Shared protection domains and memory registrations
  • NVMe offload
  • Integration of HMM and ODP

And new developing areas of interest:

  • Multi-vendor virtualized ‘virtio’ RDMA
  • Non-standard driver features and their impact on the design of the subsystem
  • Encrypted RDMA traffic
  • Rework and simplification of the driver API

Come and join us in the discussion of improving Linux’s ability to access direct memory across high-speed networks.

We hope to see you there!

More in Tux Machines

Chromium/Mozilla Firefox: Chrome 78 Beta, Keygen Setback and iframes

  • Chrome 78 Beta: a new Houdini API, native file system access and more

    Unless otherwise noted, changes described below apply to the newest Chrome Beta channel release for Android, Chrome OS, Linux, macOS, and Windows. Find more information about the features listed here through the provided links or from the list on ChromeStatus.com. Chrome 78 is beta as of September 19, 2019.

  • Chrome 78 Hits Beta With Native File System API, Much Faster WebSockets

    Google on Friday released the Chrome 78 web-browser beta following last week's release of Chrome 77. Chrome 78 Beta comes with a new Houdini API, more formally known as the CSS Properties and Values API Level 1, which lets developers register variables as fully custom CSS properties and can better handle animations and other use-cases.

  • Firefox 69 dropped support for <keygen>

    With version 69, firefox removed support for the <keygen> feature for easily deploying TLS client certificates. It's kind of sad how used I've become to firefox giving me fewer and fewer reasons to use it...

  • [Mozilla] Restricting third-party iframe widgets using the sandbox attribute, referrer policy and feature policy

    Adding third-party embedded widgets on a website is a common but potentially dangerous practice. Thankfully, the web platform offers a few controls that can help mitigate the risks. While this post uses the example of an embedded SurveyMonkey survey, the principles can be used for all kinds of other widgets. Note that this is by no means an endorsement of SurveyMonkey's proprietary service. If you are looking for a survey product, you should consider a free and open source alternative like LimeSurvey.

DM-Clone Target Added To Linux 5.4 For Efficient Remote Replication Of A Block Device

Added to the device mapper (DM) code with the Linux 5.4 kernel is an interesting addition that benefits those wanting to carry out some interesting use-cases around remote replication of block devices. As explained in the original patch proposal for dm-clone, "dm-clone produces a one-to-one copy of an existing, read-only device (origin) into a writable device (clone): It presents a virtual block device which makes all data appear immediately, and redirects reads and writes accordingly. The main use case of dm-clone is to clone a potentially remote, high-latency, read-only, archival-type block device into a writable, fast, primary-type device for fast, low-latency I/O. The cloned device is visible/mountable immediately and the copy of the origin device to the clone device happens in the background, in parallel with user I/O." Read more

Devices: One Laptop Per Child (OLPC) XO-1.75, PiCAN3 CAN-Bus Board and BeagleBoard

IBM, Red Hat and Fedora

  • OpenShift Commons Gathering in Milan 2019 – Recap [Slides]

    On September 18th, 2019, the first OpenShift Commons Gathering Milan brought together over 300 experts to discuss container technologies, operators, the operator framework and the open source software projects that support the OpenShift ecosystem. This was the first OpenShift Commons Gathering to take place in Italy. The standing-room-only event hosted 11 talks in a whirlwind day of discussions. Of particular interest to the community was Christian Glombek’s presentation updating the status and roadmap for OKD4 and CoreOS. Highlights from the Gathering included an OpenShift 4 Roadmap Update, customer stories from Amadeus, the leading travel technology company, and local stories from Poste Italiane and SIA S.p.A. In addition to the technical updates and customer talks, there was plenty of time to network during the breaks and enjoy the famous Italian coffee.

  • Powering the hybrid cloud on next-generation hardware: Red Hat Enterprise Linux on IBM System Z and LinuxONE

    For more than five years we have been driving our technology strategy around the idea that the future of enterprise IT does not reside solely in an enterprise datacenter or in the public cloud. Instead the next wave of computing is built on a blend of these technologies and infrastructure: in short, the future is hybrid. The value of hybrid clouds comes from the choice it delivers, pairing the control of the corporate datacenter alongside the scale and flexibility of public clouds. We strongly feel, however, that the most valuable hybrid clouds are those that offer not only a choice of deployment type and location, but also a choice of the underlying architecture and the capacity to run on multiple public clouds. [....] With RHEL available on Z15 and LinuxONE III, this helps pave the way for the rest of Red Hat’s hybrid cloud portfolio, including Red Hat OpenShift, to emerge on IBM enterprise platforms. We’re pleased to continue our work with IBM in bringing the world’s leading enterprise Linux platform to their next-generation systems.

  • Red Hat OpenStack Platform 15 Marks End of Short-Term Support

    Red Hat OpenStack Platform 15 is the last release that will only be supported for a year, as the company moves to a new model to support the open-source cloud platform.

  • Fedora rawhide – fixed bugs 2019/07