Planet Debian - https://planet.debian.org/

Jonathan Dowland: Bose on-ear wireless headphones

Wednesday 10th of July 2019 09:27:47 AM

Azoychka modelling the headphones

Earlier this year, after about five years of service, I had to accept that my beloved AKG K451 foldable headphones had finally died, despite the best efforts of a friendly colleague in the Newcastle Red Hat office, who had replaced and re-soldered all the wiring through the headband, and disassembled the left ear-cup to remove a stray metal ring that had got jammed in the jack, most likely snapped from one of the several headphone wires I'd gone through.

The K451s were really good phones. They didn't sound quite as good as my much larger, much less portable Sennheisers, but they came close, and their portability gave them fantastic utility. They remained comfortable to wear and listen to for hours on end, and were surprisingly low-leaking. I became convinced that on-ear was a good form factor for portable headphones.

To replace them, I decided to finally give wireless headphones a try. There are not many on-ear, smaller form-factor wireless headphone models. I really wanted to like the Sony WH-H800s, which (I thought) looked stylish, and reviews for their bigger brother (the 1000 series over-ear) are fantastic. The 800s proved very hard to audition. I could only find one vendor in Newcastle with a pair for evaluation, Currys PC World, and the circumstances were very poor: a noisy store; the headphones tethered to a security frame on a very short leash, so I had to stoop to put them on; no way to try my own music through the headset. The demo unit also seemed poorly constructed, its hard plastic ill-fitting enough that the headset rattled when I picked it up.

I therefore ended up buying the Bose on-ear wireless headphones. I was able to audition them in several different environments, using my own music, both over Bluetooth and via a cable. They are very comfortable, which is important for the use-case. I was a little nervous about reports on Bose sound quality, which is described as more sculpted than true to the source material, but I was happy with what I could hear in my demonstrations. What clinched it was a few other circumstances (that I won't elaborate on here) which brought the price down to comparable to what I paid for the AKG K451s.

A few months in, the only criticism I have of the Bose headphones is some mild discomfort on my helix if I have positioned them poorly. This has not turned out to be a big problem. One consequence of having wireless headphones, aside from increased convenience in the same listening circumstances as wired ones, is all the situations in which I can now use them where I wouldn't have bothered before, including a far wider range of housework chores, going up and down ladders, DIY jobs, etc. I'm finding myself consuming a lot more podcasts and programmes from BBC Radio, and experimenting more with streaming music.

Matthew Garrett: Bug bounties and NDAs are an option, not the standard

Tuesday 9th of July 2019 09:15:21 PM
Zoom had a vulnerability that allowed users on MacOS to be connected to a video conference with their webcam active simply by visiting an appropriately crafted page. Zoom's response has largely been to argue that:

a) There's a setting you can toggle to disable the webcam being on by default, so this isn't a big deal,
b) When Safari added a security feature requiring that users explicitly agree to launch Zoom, this created a poor user experience and so they were justified in working around this (and so introducing the vulnerability), and,
c) The submitter asked whether Zoom would pay them for disclosing the bug, and when Zoom said they'd only do so if the submitter signed an NDA, they declined.

(a) and (b) are clearly ludicrous arguments, but (c) is the interesting one. Zoom go on to mention that they disagreed with the severity of the issue, and in the end decided not to change how their software worked. If the submitter had agreed to the terms of the NDA, then Zoom's decision that this was a low severity issue would have led to them being given a small amount of money and never being allowed to talk about the vulnerability. Since Zoom apparently have no intention of fixing it, we'd presumably never have heard about it. Users would have been less informed, and the world would have been a less secure place.

The point of bug bounties is to provide people with an additional incentive to disclose security issues to companies. But what incentive are they offering? Well, that depends on who you are. For many people, the amount of money offered by bug bounty programs is meaningful, and agreeing to sign an NDA is worth it. For others, the ability to publicly talk about the issue is worth more than whatever the bounty may award - being able to give a presentation on the vulnerability at a high profile conference may be enough to get you a significantly better paying job. Others may be unwilling to sign an NDA on principle, refusing to trust that the company will ever disclose the issue or fix the vulnerability. And finally there are people who can't sign such an NDA - they may have discovered the issue on work time, and employer policies may prohibit them doing so.

Zoom are correct that it's not unusual for bug bounty programs to require NDAs. But when they talk about this being an industry standard, they come awfully close to suggesting that the submitter did something unusual or unreasonable in rejecting their bounty terms. When someone lets you know about a vulnerability, they're giving you an opportunity to have the issue fixed before the public knows about it. They've done something they didn't need to do - they could have just publicly disclosed it immediately, causing significant damage to your reputation and potentially putting your customers at risk. They could potentially have sold the information to a third party. But they didn't - they came to you first. If you want to offer them money in order to encourage them (and others) to do the same in future, then that's great. If you want to tie strings to that money, that's a choice you can make - but there's no reason for them to agree to those strings, and if they choose not to then you don't get to complain about that afterwards. And if they make it clear at the time of submission that they intend to publicly disclose the issue after 90 days, then they're acting in accordance with widely accepted norms. If you're not able to fix an issue within 90 days, that's very much your problem.

If your bug bounty requires people sign an NDA, you should think about why. If it's so you can control disclosure and delay things beyond 90 days (and potentially never disclose at all), look at whether the amount of money you're offering for that is anywhere near commensurate with the value the submitter could otherwise gain from the information and compare that to the reputational damage you'll take from people deciding that it's not worth it and just disclosing unilaterally. And, seriously, never ask for an NDA before you're committing to a specific $ amount - it's never reasonable to ask that someone sign away their rights without knowing exactly what they're getting in return.

tl;dr - a bug bounty should only be one component of your vulnerability reporting process. You need to be prepared for people to decline any restrictions you wish to place on them, and you need to be prepared for them to disclose on the date they initially proposed. If they give you 90 days, that's entirely within industry norms. Remember that a bargain is being struck here - you offering money isn't being generous, it's you attempting to provide an incentive for people to help you improve your security. If you're asking people to give up more than you're offering in return, don't be surprised if they say no.


Sean Whitton: Upload to Debian with just 'git tag' and 'git push'

Tuesday 9th of July 2019 08:49:41 PM

At a sprint over the weekend, Ian Jackson and I designed and implemented a system to make it possible for Debian Developers to upload new versions of packages by simply pushing a specially formatted git tag to salsa (Debian’s GitLab instance). That’s right: the only thing you will have to do to cause new source and binary packages to flow out to the mirror network is sign and push a git tag.

It works like this:

  1. DD signs and pushes a git tag containing some metadata. The tag is placed on the commit you want to release (which is probably the commit where you ran dch -r).

  2. This triggers a GitLab webhook, which passes the public clone URI of your salsa project and the name of the newly pushed tag to a cloud service called tag2upload.

  3. tag2upload verifies the signature on the tag against the Debian keyring[1], produces a .dsc and .changes, signs these, and uploads the result to ftp-master.[2]

    (tag2upload does not have, nor need, push access to anyone’s repos on salsa. It doesn’t make commits to the maintainer’s branch.)

  4. ftp-master and the autobuilder network push out the source and binary packages in the usual way.

The first step of this should be as easy as possible, so we’ve produced a new script, git debpush, which just wraps git tag and git push to sign and push the specially formatted git tag.
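
In rough terms the effect is something like this (a sketch only – the real script also embeds metadata in the tag, elided here, which tag2upload parses):

% git tag -s debian/1.23 -m '<tag2upload metadata>'
% git push origin debian/1.23

with the tag name following the debian/<version> pattern used in the example further down.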

We’ve fully implemented tag2upload, though it’s not running in the cloud yet. However, you can try out this workflow today by running tag2upload on your laptop, as if in response to a webhook. We did this ourselves for a few real uploads to sid during the sprint.

  1. First get the tools installed. tag2upload reuses code from dgit and dgit-infrastructure, and lives in bin:dgit-infrastructure. git debpush is in a completely independent binary package which does not make any use of dgit.[3]

    % apt-get install git-debpush dgit-infrastructure dgit debian-keyring

    (you need version 9.1 of the first three of these packages, in Debian testing, unstable and buster-backports at the time of writing).

  2. Prepare a source-only upload of some package that you normally push to salsa. When you are ready to upload this, just type git debpush.

    If the package is non-native, you will need to pass a quilt option to inform tag2upload what git branch layout you are using—it has to know this in order to produce a .dsc. See the git-debpush(1) manpage for the supported quilt options.

    The quilt option you select gets stored in the newly created tag, so for your next upload you won't need it, and git debpush alone will be enough (see the sketch after this list).

    See the git-debpush(1) manpage for more options, but we’ve tried hard to ensure most users won’t need any.

  3. Now you need to simulate salsa’s sending of a webhook to the tag2upload service. This is how you can do that:

    % mkdir -p ~/tmp/t2u
    % cd ~/tmp/t2u
    % DGIT_DRS_EMAIL_NOREPLY=myself@example.org dgit-repos-server \
        debian . /usr/share/keyrings/debian-keyring.gpg,a --tag2upload \
        https://salsa.debian.org/dgit-team/dgit-test-dummy.git debian/1.23

    … substituting your own service admin e-mail address, salsa repo URI and new tag name.

    Check the file ~/tmp/t2u/overall.log to see what happened, and perhaps take a quick look at Debian’s upload queue.
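
As promised above, here is a sketch of the quilt option in use, assuming a git-buildpackage-style patches-unapplied branch; the gbp mode name is borrowed from dgit's quilt modes, so check git-debpush(1) for the exact spelling and accepted values:

% git debpush --quilt=gbp   # first upload: records the branch layout in the tag
% git debpush               # later uploads: the stored quilt option is reused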

A few other notes about trying this out:

  • tag2upload will delete various files and directories in your working directory, so be sure to invoke it in an empty directory like ~/tmp/t2u.

  • You won’t see any console output, and the command might feel a bit slow. Neither of these will matter when tag2upload is running as a cloud service, of course. If there is an error, you’ll get an e-mail.

  • Running the script like this on your laptop will use your default PGP key to sign the .dsc and .changes. The real cloud service will have its own PGP key.

  • The shell invocation given above is complicated, but once the cloud service is deployed, no human is going to ever need to type it!

    What’s important to note is the two pieces of user input the command takes: your salsa repo URI, and the new tag name. The GitLab web hook will provide the tag2upload service with (only) these two parameters.

For some more discussion of this new workflow, see the git-debpush(1) manpage. We hope you have fun trying it out.

  1. Unfortunately, DMs can’t try tag2upload out on their laptops, though they will certainly be able to use the final cloud service version of tag2upload.
  2. Only source-only uploads are supported, but this is by design.
  3. Do not be fooled by the string ‘dgit’ appearing in the generated tags! We are just reusing a tag metadata convention that dgit also uses.

Thomas Lange: Talks, articles and a podcast in German

Tuesday 9th of July 2019 02:31:36 PM

In April I gave a talk about FAI at the GUUG-Frühjahrsfachgespräch in Karlsruhe (Slides). At this nice meeting, Ingo from RadioTux interviewed me (https://www.radiotux.de/index.php?/archives/8050-RadioTux-Sendung-April-2019.html) (from 0:35:30).

Then I found an article in the iX Special 2019 magazine about automation in the data center which mentioned FAI. Nice. But I was very surprised and happy when I saw a whole article about FAI in Linux Magazin 7/2019. A very good article with some focus on networking, but the class system and installing other distributions are also described. And they will publish another article about the FAI.me service in a few months. I'm excited!

In a few days, I'm going to DebConf19 in Curitiba for two weeks. I will work on Debian web stuff, check my other packages (rinse, dracut, tcsh) and hope to meet a lot of friendly people.

And in August I'm giving a talk at FrOSCon about FAI.


Steve Kemp: Upgraded my first host to buster

Tuesday 9th of July 2019 09:01:00 AM

I upgraded the first of my personal machines to Debian's new stable release, buster, yesterday. So far I've hit two minor niggles, but nothing major.

My hosts are controlled, sometimes, by puppet. The puppet-master is running stretch and has puppet 4.8.2 installed. After upgrading my test-host to the new stable I discovered it has puppet 5.5 installed:

root@git ~ # puppet --version
5.5.10

I was not sure if there would be compatibility problems, but after reading the release notes nothing jumped out. Things seemed to work, once I fixed this immediate problem:

# puppet agent --test
Warning: Unable to fetch my node definition, but the agent run will continue:
Warning: SSL_connect returned=1 errno=0 state=error: dh key too small
Info: Retrieving pluginfacts
..

This error-message was repeated multiple times:

SSL_connect returned=1 errno=0 state=error: dh key too small

To fix this, comment out the line in /etc/ssl/openssl.cnf which reads:

CipherString = DEFAULT@SECLEVEL=2
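
If you would rather script that than edit the file by hand, a one-liner along these lines should do it (a sketch; the -i.bak keeps a backup of the original file):

sed -i.bak 's/^CipherString = DEFAULT@SECLEVEL=2$/#&/' /etc/ssl/openssl.cnf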

The second problem was that I use borg to run backups, once per day on most systems, and twice per day on others. I have an invocation which looks like this:

borg create ${flags} --compression=zlib --stats \
    ${dest}${self}::$(date +%Y-%m-%d-%H:%M:%S) \
    --exclude=/proc \
    --exclude=/swap.file \
    --exclude=/sys \
    --exclude=/run \
    --exclude=/dev \
    --exclude=/var/log \
    /

That started to fail:

borg: error: unrecognized arguments: /

I fixed this by re-ordering the arguments so that the invocation ends with "destination path", and changing --exclude=x to --exclude x:

borg create ${flags} --compression=zlib --stats \
    --exclude /proc \
    --exclude /swap.file \
    --exclude /sys \
    --exclude /run \
    --exclude /dev \
    --exclude /var/log \
    ${dest}${self}::$(date +%Y-%m-%d-%H:%M:%S) /

That approach works on my old and new hosts.

I'll leave this single system updated for a few more days to see what else is broken, if anything. Then I'll upgrade them in turn.

Good job!

Daniel Kahn Gillmor: DANE OPENPGPKEY for debian.org

Tuesday 9th of July 2019 04:00:00 AM
DANE OPENPGPKEY for debian.org

I recently announced the publication of Web Key Directory for @debian.org e-mail addresses. This blog post announces another way to fetch OpenPGP certificates for @debian.org e-mail addresses, this time using only the DNS. These two mechanisms are complementary, not in competition. We want to make sure that whatever certificate lookup scheme your OpenPGP client supports, you will be able to find the appropriate certificate.

The additional mechanism we're now supporting (since a few days ago) is DANE OPENPGPKEY, specified in RFC 7929.

How does it work?

DANE OPENPGPKEY works by storing a minimized OpenPGP certificate in the DNS, in a subdomain, at a label based on a hashed version of the local part of the e-mail address.

With modern GnuPG, if you're interested in retrieving the OpenPGP certificate for dkg as served by the DNS, you can do:

gpg --auto-key-locate clear,nodefault,dane --locate-keys dkg@debian.org
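
If you are curious, you can also query the underlying record directly. Per RFC 7929 the label is the SHA2-256 hash of the local part truncated to 28 octets (56 hex digits); this sketch assumes a dig recent enough to know the OPENPGPKEY mnemonic (older versions accept TYPE61 instead):

dig +short $(printf '%s' dkg | sha256sum | cut -c1-56)._openpgpkey.debian.org OPENPGPKEY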

If you're interested in how this DNS zone is populated, take a look at the code that handles it. Please request improvements if you see ways that this could be improved.

Unfortunately, GnuPG does not currently do DNSSEC validation on these records, so the cryptographic protections offered by this client are not as strong as those provided by WKD (which at least checks the X.509 certificate for a given domain name against the list of trusted root CAs).

Why offer both DANE OPENPGPKEY and WKD?

I'm hoping that the Debian project can ensure that no matter what sensible mechanism any OpenPGP client implements for certificate lookup, it will be able to find the appropriate OpenPGP certificate for contacting someone within the @debian.org domain.

A clever OpenPGP client might even consider these two mechanisms -- DANE OPENPGPKEY and WKD -- as corroborative mechanisms, since an attacker who happens to compromise one of them may find it more difficult to compromise both simultaneously.

How to update?

If you are a Debian developer and you want your OpenPGP certificate updated in the DNS, please follow the normal procedures for Debian keyring maintenance like you always have. When a new debian-keyring package is released, we will update these DNS records at the same time.

Thanks

Setting this up would not have been possible without help from weasel on the Debian System Administration team, and Noodles from the keyring-maint team providing guidance.

DANE OPENPGPKEY was documented and shepherded through the IETF by Paul Wouters.

Thanks to all of these people for making it possible.

Rodrigo Siqueira: Status Update, June 2019

Tuesday 9th of July 2019 03:00:00 AM

For a long time, I have wanted to make a habit of writing monthly status updates. Drew DeVault's blog posts and Martin Peres's advice nudged me in this direction. So, here I am! I have decided to embrace the challenge of composing a report per month. I hope this new habit helps me to improve my writing and communication skills but, most importantly, helps me keep track of my work. I want to start this update by describing my work conditions and then focus on the technical stuff.

In the last two months, I've been facing infrastructure problems in getting work done. I'm dealing with obstacles such as restricted Internet access and long hours of public transportation from my home to my workplace. Unfortunately, I can't work in my house due to the lack of space, and the best place to work is a public library at the University of Brasilia (UnB). Going to UnB every day means I spend around 3 hours per day on a bus. The library is a great environment, but it also has many internet restrictions. The fact that I can't access websites with the '.me' domain or connect to my IRC bouncer is an example of that. In summary: it's been hard to work these days. So let's stop talking about non-technical stuff and get to the heart of the matter.

I really like working on VKMS. I know this is not news to anybody, and in June most of my efforts were dedicated to VKMS. One of my main endeavours was finding and fixing a bug in vkms that makes kms_cursor_crc and kms_pipe_crc_basic fail. I had been chasing this bug for a long time, as can be seen here [1]. After many hours of debugging I sent a patch to handle this issue [2]; however, after Daniel's review, I realized that my patch didn't fix the problem correctly. So Daniel decided to dig into this issue to find the root of the problem, and later sent a final fix. If you want to see the solution, take a look at [3]. One day, I want to write a post about this fix since it is an interesting subject to discuss.

Daniel also noticed some concurrency problems in the CRC code and sent a patchset of 10 patches to tackle the issue. These patches focused on better framebuffer manipulation and on avoiding race conditions. It took me around 4 days to review and test this series. During my review, I asked many things related to concurrency and other clarifications about DRM. Daniel always replied with a very nice and detailed explanation. If you want to learn a little bit more about locks, I recommend you take a look at [4]. Seriously, it is really nice!

I also worked on adding writeback support to vkms; ever since XDC2018 I could not stop thinking about the idea of adding a writeback connector to vkms because of the benefits it could bring, such as new tests and helping developers with visual output. I made some clumsy attempts to implement it in January, but I really dove into this issue in the middle of April, and in June I was focused on making it work. It was tough for me to implement these features for the following reasons:

  1. There is no i-g-t test for writeback in the main repository; I had to use a WIP patchset made by Brian and Liviu.
  2. I was not familiar with framebuffers, connectors, and their manipulation.

As a result of the above limitations, I had to invest many hours reading the documentation and the DRM/IGT code. In the end, I think that adding writeback connectors paid off well for me, since I now feel much more comfortable with many things related to DRM. The writeback support has not landed yet; at this moment the patch is under review (v3) and has changed a lot since the first version. For details about this series, take a look at [5]. I'll write a post about this feature after it gets merged.

After getting the writeback connectors working in vkms, I felt so grateful to Brian, Liviu, and Daniel for all the assistance they provided me. In particular, I was thrilled that Brian and Liviu had written the kms_writeback test, which worked as an implementation guideline for me. In return, I updated their patchsets to work with the latest version of IGT and made some tiny fixes. My goal was to help them upstream kms_writeback. I submitted the series hoping to see it land in IGT [9].

In parallel to my work on writeback, I was trying to figure out how I could expose vkms configuration to userspace via configfs. After much effort, I submitted the first version of configfs support; in this patchset I exposed the virtual and writeback connectors. Take a look at [6] for more information about this feature; I'll definitely write a post about it after it lands.

Finally, I’m still trying to upstream a patch that makes drm_wait_vblank_ioctl return EOPNOTSUPP instead of EINVAL if the driver does not support vblank get landed. Since this change is in the DRM core and also change the userspace, it is not easy to make this patch get landed. For the details about this patch, you can take a look here [7]. I also implemented some changes in the kms_flip to validate the changes that I made in the function drm_wait_vblank_ioctl and it got landed [8].

July Aims

In June, I was totally dedicated to vkms; now I want to slow down a little bit and study more about userspace. I want to take a step back and write some tiny programs using libdrm with the goal of understanding the interaction between userspace and kernel space. I also want to look at the theoretical side of computer graphics.

I want to put some effort into improving a tool named kw that helps me in my work on the Linux kernel. I also want to look at real overlay plane support in vkms. I noticed that I have to find a "contribution protocol" (review/write code) that works for me in my current work conditions; otherwise, work will become painful for my relatives and me. Finally, and most importantly, I want to take some days off to enjoy my family.

Info: If you find any problem with this text, please let me know. I will be glad to fix it.

References

[1] “First discussion in the Shayenne’s patch about the CRC problem”. URL: https://lkml.org/lkml/2019/3/10/197

[2] “Patch fix for the CRC issue”. URL: https://patchwork.freedesktop.org/patch/308617/

[3] “Daniel final fix for CRC”. URL: https://patchwork.freedesktop.org/patch/308881/?series=61703&rev=1

[4] “Rework crc worker”. URL: https://patchwork.freedesktop.org/series/61737/

[5] “Introduces writeback support”. URL: https://patchwork.freedesktop.org/series/61738/

[6] “Introduce basic support for configfs”. URL: https://patchwork.freedesktop.org/series/63010/

[7] “Change EINVAL by EOPNOTSUPP when vblank is not supported”. URL: https://patchwork.freedesktop.org/patch/314399/?series=50697&rev=7

[8] “Skip VBlank tests in modules without VBlank”. URL: https://gitlab.freedesktop.org/drm/igt-gpu-tools/commit/2d244aed69165753f3adbbd6468db073dc1acf9a

[9] “Add support for testing writeback connectors”. URL: https://patchwork.freedesktop.org/series/39229/

Jonathan Wiltshire: Too close?

Monday 8th of July 2019 08:39:06 PM

At times of stress I’m prone to topical nightmares, but they are usually fairly mundane – last night, for example, I dreamed that I’d mixed up bullseye and bookworm in one of the announcements of future code names.

But Saturday night was a whole different game. Imagine taking a rucksack out of the cupboard under the stairs, and thinking it a bit too heavy for an empty bag. You open the top and it’s full of small packages tied up with brown paper and string. As you take each one out and set it aside you realise, with mounting horror, that these are all packages missing from buster and which should have been in the release. But it’s too late to do anything about that now; you know the press release went out already because you signed it off yourself, so you can’t do anything else but get all the packages out of the bag and see how many were forgotten. And you dig, and count, and dig, and it’s like Mary Poppins’ carpet bag, and still they keep on coming…

Sometimes I wonder if I get too close to stuff!

Andy Simpkins: Buster Release Party – Cambridge, UK

Monday 8th of July 2019 02:21:17 PM

With the release of Debian GNU/Linux 10.0.0 "Buster" completing in the small hours of yesterday morning (02:00 UTC or thereabouts), most of the 'release parties' had already been and gone… Not so for the Cambridge contingent, who had scheduled a get-together for the Sunday [0], knowing that various attendees would be working on the release until the end.

Sunday afternoon saw a gathering in the Haymakers pub to celebrate a successful release. We would like to publicly thank the Raspberry Pi Foundation [1] and Mythic Beasts [2], who between them picked up the tab for our bar bill – cheers and thank you!


Ian and Sean also managed to join us, taking time out from the dgit sprint that had been running since Friday [3].

For most of the afternoon we had the inside of the pub all to ourselves.   

This was a friendly and low-key event as we decompressed from the previous day's activities, but we were also joined by many other DDs, users and supporters, mostly hanging out in the garden, though I confess to staying mostly indoors, in the shade and with a breeze through the pub…

Laptops are still needed even at a party

The start of the next project…


[0]    http://www.chiark.greenend.org.uk/pipermail/debian-uk/2019-June/000481.html
       https://wiki.debian.org/ReleasePartyBuster/UK/Cambridge

[1]   https://www.raspberrypi.org/

[2]    https://www.mythic-beasts.com/

[3]    https://wiki.debian.org/Sprints/2019/DgitDebrebase

Matthew Garrett: Creating hardware where no hardware exists

Sunday 7th of July 2019 07:46:01 PM
The laptop industry was still in its infancy back in 1990, but it still faced a core problem that we do today - power and thermal management are hard, but also critical to a good user experience (and potentially to the lifespan of the hardware). This was in the days when DOS and Windows had no memory protection, so handling these problems at the OS level would have been an invitation for someone to overwrite your management code and potentially kill your laptop. The safe option was pushing all of this out to an external management controller of some sort, but vendors in the 90s were the same as vendors now and would do basically anything to avoid having to drop an extra chip on the board. Thankfully(?), Intel had a solution.

The 386SL was released in October 1990 as a low-powered mobile-optimised version of the 386. Critically, it included a feature that let vendors ensure that their power management code could run without OS interference. A small window of RAM was hidden behind the VGA memory[1] and the CPU configured so that various events would cause the CPU to stop executing the OS and jump to this protected region. It could then do whatever power or thermal management tasks were necessary and return control to the OS, which would be none the wiser. Intel called this System Management Mode, and we've never really recovered.

Step forward to the late 90s. USB is now a thing, but even the operating systems that support USB usually don't in their installers (and plenty of operating systems still didn't have USB drivers). The industry needed a transition path, and System Management Mode was there for them. By configuring the chipset to generate a System Management Interrupt (or SMI) whenever the OS tried to access the PS/2 keyboard controller, the CPU could then trap into some SMM code that knew how to talk to USB, figure out what was going on with the USB keyboard, fake up the results and pass them back to the OS. As far as the OS was concerned, it was talking to a normal keyboard controller - but in reality, the "hardware" it was talking to was entirely implemented in software on the CPU.

Since then we've seen even more stuff get crammed into SMM, which is annoying because in general it's much harder for an OS to do interesting things with hardware if the CPU occasionally stops in order to run invisible code to touch hardware resources you were planning on using, and that's even ignoring the fact that operating systems in general don't really appreciate the entire world stopping and then restarting some time later without any notification. So, overall, SMM is a pain for OS vendors.

Change of topic. When Apple moved to x86 CPUs in the mid 2000s, they faced a problem. Their hardware was basically now just a PC, and that meant people were going to try to run their OS on random PC hardware. For various reasons this was unappealing, and so Apple took advantage of the one significant difference between their platforms and generic PCs. x86 Macs have a component called the System Management Controller that (ironically) seems to do a bunch of the stuff that the 386SL was designed to do on the CPU. It runs the fans, it reports hardware information, it controls the keyboard backlight, it does all kinds of things. So Apple embedded a string in the SMC, and the OS tries to read it on boot. If it fails, so does boot[2]. Qemu has a driver that emulates enough of the SMC that you can provide that string on the command line and boot OS X in qemu, something that's documented further here.
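
As an aside, this is roughly what that looks like from qemu's side (a sketch: the OSK string itself is Apple's and deliberately not reproduced here, and the disk and firmware options you would also need are omitted):

qemu-system-x86_64 -machine q35 \
    -device isa-applesmc,osk="<the 64-character OSK string>"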

What does this have to do with SMM? It turns out that you can configure x86 chipsets to trap into SMM on arbitrary IO port ranges, and older Macs had SMCs in IO port space[3]. After some fighting with Intel documentation[4] I had Coreboot's SMI handler responding to writes to an arbitrary IO port range. With some more fighting I was able to fake up responses to reads as well. And then I took qemu's SMC emulation driver and merged it into Coreboot's SMM code. Now, accesses to the IO port range that the SMC occupies on real hardware generate SMIs, trap into SMM on the CPU, run the emulation code, handle writes, fake up responses to reads and return control to the OS. From the OS's perspective, this is entirely invisible[5]. We've created hardware where none existed.

The tree where I'm working on this is here, and I'll see if it's possible to clean this up in a reasonable way to get it merged into mainline Coreboot. Note that this only handles the SMC - actually booting OS X involves a lot more, but that's something for another time.

[1] If the OS attempts to access this range, the chipset directs it to the video card instead of to actual RAM.
[2] It's actually more complicated than that - see here for more.
[3] IO port space is a weird x86 feature where there's an entire separate IO bus that isn't part of the memory map and which requires different instructions to access. It's low performance but also extremely simple, so hardware that has no performance requirements is often implemented using it.
[4] Some current Intel hardware has two sets of registers defined for setting up which IO ports should trap into SMM. I can't find anything that documents what the relationship between them is, but if you program the obvious ones nothing happens and if you program the ones that are hidden in the section about LPC decoding ranges things suddenly start working.
[5] Eh technically a sufficiently enthusiastic OS could notice that the time it took for the access to occur didn't match what it should on real hardware, or could look at the CPU's count of the number of SMIs that have occurred and correlate that with accesses, but good enough


Debian GSoC Kotlin project blog: Week 4 & 5 Update

Sunday 7th of July 2019 11:51:23 AM
Finished downgrading the project to be buildable by gradle 4.4.1

I have finished downgrading the project to be buildable using gradle 4.4.1. The project still needed a part of gradle 4.8, which I have successfully patched into sid's gradle. Here is the link to the changes that I have made.

Now we are officially done with making the project build with our gradle, so we can finally move on to mapping out and packaging the dependencies.

Packaging dependencies for Kotlin-1.3.30

I split this task into two subtasks that can be done independently. The two subtasks are as follows:
-> part 1: make the entire project build successfully without :buildSrc:prepare-deps:intellij-sdk:build
   -> part 1.1: package these dependencies
-> part 2: package the dependencies in :buildSrc:prepare-deps:intellij-sdk:build; i.e. try to recreate whatever is in it.

The task has been split into this exact model because this folder holds a variety of jars that the project uses, and we'll have to minimize it and package only the needed jars from it. The project also uses plugins and jars beyond this one main folder, which we can map out and package simultaneously.

I have now successfully mapped out the dependencies needed by part 1; all that remains is to package them. I have copied the dependencies from the original cache (the one created when I built the project using ./gradlew -Pteamcity=true dist) to /usr/share/maven-repo, so some of these dependencies still need their own dependencies clearly defined, as in which of their dependencies we can omit and which we need. I have marked such dependencies with a *. So here are the dependencies:

jengeleman:shadow:4.0.3 --> https://github.com/johnrengelman/shadow (DONE: https://salsa.debian.org/m36-guest/jengelman-shadow)
trove4j 1.x --> https://github.com/JetBrains/intellij-deps-trove4j (DONE: https://salsa.debian.org/java-team/libtrove-intellij-java)
proguard:6.0.3 in jdk8 (DONE: released as libproguard-java 6.0.3-2)
io.javaslang:2.0.6 --> https://github.com/vavr-io/vavr/tree/javaslang-v2.0.6 (DONE: https://salsa.debian.org/m36-guest/javaslang)
jline 3.0.3 --> https://github.com/jline/jline3/tree/jline-3.3.1
protobuf-2.6.1 in jdk8 (DONE: https://salsa.debian.org/java-team/protobuf-2)
*com.jcabi:jcabi-aether:1.0
*org.sonatype.aether:aether-api:1.13.1

!!NOTE: Please note that I might have missed some; I'll add them to the list once I get them mapped out properly!!

So if any of you kind souls want to help me out, please take on any of these and package them.

!!NOTE: ping me if you want to build kotlin on your system and are stuck!!

Here is a link to the work I have done so far. You can find me as m36 or m36[m] on #debian-mobile and #debian-java in OFTC.

I'll try to maintain this blog and post the major updates weekly.

Eriberto Mota: Debian: repository changed its ‘Suite’ value from ‘testing’ to ‘stable’

Sunday 7th of July 2019 03:44:52 AM

Debian 10 (Buster) was released two hours ago. \o/

When running 'apt-get update', I see this message:

E: Repository 'http://security.debian.org/debian-security buster/updates InRelease' changed its 'Suite' value from 'testing' to 'stable'
N: This must be accepted explicitly before updates for this repository can be applied. See apt-secure(8) manpage for details.

Solution:

# apt-get --allow-releaseinfo-change update

Enjoy!

Bits from Debian: Debian 10 "buster" has been released!

Sunday 7th of July 2019 01:25:00 AM

You've always dreamt of a faithful pet? He is here, and his name is Buster! We're happy to announce the release of Debian 10, codenamed buster.

Want to install it? Choose your favourite installation media and read the installation manual. You can also use an official cloud image directly on your cloud provider, or try Debian prior to installing it using our "live" images.

Already a happy Debian user and you only want to upgrade? You can easily upgrade from your current Debian 9 "stretch" installation; please read the release notes.

Do you want to celebrate the release? We provide some buster artwork that you can share or use as base for your own creations. Follow the conversation about buster in social media via the #ReleasingDebianBuster and #Debian10Buster hashtags or join an in-person or online Release Party!

Andy Simpkins: Debian Buster Release

Sunday 7th of July 2019 12:30:49 AM

I spent all day smoke-testing the install images for yesterday's (well, this morning's – just after midnight local time, so we still had 11 hours to spare) Debian GNU/Linux 10.0.0 "Buster" release.

This year we had our "Biglyest test matrix ever" [0]: 111 tests had been completed by the time the release images were signed and the ftp team pushed the button. Although more tests were reported to have taken place in IRC, we had a total of 9 people completing tests called for in the wiki test matrix.

We also had a large number of people in IRC during the image test phase of the release – peaking at 117…

Steve kindly hosted a few of us in his front room – using a local mirror [1] of the images, so that our VM tests have the image available as an NFS share, really speeds things up! Between the 4 of us here in Cambridge we were testing on a total of 14 different machines: mainly AMD64, a couple of "only i386 capable" laptops, and 3 entirely different ARM64 machines! (Mustang, Synquacer, & MacchiatoBin).

Room heaters on a hot day – photo RandomBird

In the photo are Sledge, Codehelp, Isy and myself…[2]

A special thanks should also go to Schweer, who spent time testing the Debian Edu images, and to Zdykstra for testing on ppc64el [2].


Finally, sledge posted a link to the bandwidth utilization of the distribution network servers. Last week it looked like this:

Debian primary distribution network last week

That is a daily peak of 500 MBytes/s of traffic.

Pushing Buster today, the graph looks like this:

Guess when Buster release went live?

1.5 GBytes/second – that is some network load :-)

Anyway – time to head home. I have a release party to attend later this afternoon and would like *some* sleep first.

[0] https://wiki.debian.org/Teams/DebianCD/ReleaseTesting/Buster_r0

[1] Local mirror is another ARM Synquacer ‘Server’ – 24 core Cortex A53  (A53 series is targeted at Mobile Devices)

[2] irc nicks have been used to protect the identity of the guilty :-)

François Marier: SIP Encryption on VoIP.ms

Saturday 6th of July 2019 11:00:00 PM

My VoIP provider recently added support for TLS/SRTP-based call encryption. Here's what I did to enable this feature on my Asterisk server.

First of all, I changed the registration line in /etc/asterisk/sip.conf to use the "tls" scheme:

[general]
register => tls://mydid:mypassword@servername.voip.ms

then I enabled incoming TCP connections:

tcpenable=yes

and TLS:

tlsenable=yes
tlscapath=/etc/ssl/certs/

Finally, I changed my provider entry in the same file to:

[voipms]
type=friend
host=servername.voip.ms
secret=mypassword
username=mydid
context=from-voipms
allow=ulaw
allow=g729
insecure=port,invite
transport=tls
encryption=yes

(Note the last two lines.)

The dialplan didn't change and so I still have the following in /etc/asterisk/extensions.conf:

[pstn-voipms]
exten => _1NXXNXXXXXX,1,Set(CALLERID(all)=Francois Marier <5551234567>)
exten => _1NXXNXXXXXX,n,Dial(SIP/voipms/${EXTEN})
exten => _1NXXNXXXXXX,n,Hangup()
exten => _NXXNXXXXXX,1,Set(CALLERID(all)=Francois Marier <5551234567>)
exten => _NXXNXXXXXX,n,Dial(SIP/voipms/1${EXTEN})
exten => _NXXNXXXXXX,n,Hangup()
exten => _011X.,1,Set(CALLERID(all)=Francois Marier <5551234567>)
exten => _011X.,n,Authenticate(1234) ; require password for international calls
exten => _011X.,n,Dial(SIP/voipms/${EXTEN})
exten => _011X.,n,Hangup(16)

Server certificate

The only thing I still need to fix is to make this error message go away in my logs:

asterisk[8691]: ERROR[8691]: tcptls.c:966 in __ssl_setup: TLS/SSL error loading cert file. <asterisk.pem>

It appears to be related to the fact that I didn't set tlscertfile in /etc/asterisk/sip.conf and that it's using its default value of asterisk.pem, a non-existent file.

Since my Asterisk server is only acting as a TLS client, and not a TLS server, there's probably no harm in not having a certificate. That said, it looks pretty easy to use a Let's Encrypt cert with Asterisk.
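
As a sketch of that route – assuming certbot already manages a certificate for a hypothetical myserver.example.com, and relying on chan_sip reading both the certificate and the private key from the tlscertfile option – something like this:

cat /etc/letsencrypt/live/myserver.example.com/privkey.pem \
    /etc/letsencrypt/live/myserver.example.com/fullchain.pem \
    > /etc/asterisk/keys/asterisk.pem

followed by setting tlscertfile=/etc/asterisk/keys/asterisk.pem in /etc/asterisk/sip.conf, should make the error go away.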

Jonathan Wiltshire: Daisy and George Help Debian

Saturday 6th of July 2019 07:33:53 PM

Daisy and George have decided to get stuck into testing images for #ReleasingDebianBuster.

George is driving the keyboard while Daisy takes notes about their test results.

This test looks like a success. Next!

Jonathan Wiltshire: Testing in Teams

Saturday 6th of July 2019 03:46:44 PM

The Debian CD images are subjected to a battery of tests before release, even more so when the release is a major new version. Debian has volunteers all over the world testing images as they come off the production line, but it can be a lonely task.

Getting together in a group and having a bit of competitive fun always makes the day go faster:

Debian CD testers in Cambridge, GB (photo: Jo McIntyre)

And, of course, they’re a valuable introduction to the life-cycle of a Debian release for future Debian Developers.

Niels Thykier: A decline in the use of hints in the release team

Saturday 6th of July 2019 10:16:11 AM

While we were working on the release, I had a look at how many hints we had deployed during buster, as I did for wheezy a few years back. It seemed we were using a lot fewer hints than previously, so I decided to take a detour into "stats-land".

Jonathan Wiltshire: What to expect on buster release day

Saturday 6th of July 2019 07:19:37 AM

The ‘buster’ release day is today! This is mostly a re-hash of previous checklists, since we’ve done this a few times now and we have a pretty good rhythm.

There have been some preparations going on in advance:

  1. Last week we imposed a “quiet period” on migrations. That’s about as frozen as we can get; it means that even RC bugs need to be exceptional if they aren’t to be deferred to the first point release. Only late-breaking documentation (like the install guide) was accepted.
  2. The security team opened buster-updates for business and carried out a test upload.
  3. The debian-installer team made a final release.
  4. Final debtags data was updated.
  5. Yesterday the testing migration script britney and other automatic maintenance scripts that the release team run were disabled for the duration.
  6. We made final preparations of things that can be done in advance, such as drafting the publicity announcements. These have to be done in advance so translators get a chance to do their work overnight (translations are starting to arrive right now!).

The following checklist makes the release actually happen:

  1. Once dinstall is completed at 07:52, archive maintenance is suspended – the FTP masters will do manual work for now.
  2. Very large quantities of coffee will be prepared all across Europe.
  3. Release managers carry out consistency checks of the buster index files, and confirm to FTP masters that there are no last-minute changes to be made. RMs get a break to make more coffee.
  4. While they’re away FTP masters begin the process of relabelling stretch as oldstable and buster as stable. If an installer needs to be, er, installed as well, that happens at this point. Old builds of the installer are removed.
  5. A new suite for bullseye (Debian 11) is initialised and populated, and labelled testing.
  6. Release managers check that the newly-generated suite index files look correct and consistent with the checks made earlier in the day. Everything is signed off – both in logistical and cryptographic terms.
  7. FTP masters trigger a push of all the changes to the CD-building mirror so that production of images can begin. As each image is completed, several volunteers download and test it in as many ways as they can dream up (booting and choosing different paths through the installer to check integrity).
  8. Finally a full mirror push is triggered by FTP masters, and the finished CD images are published.
  9. Announcements are sent by the publicity team to various places, and by the release team to the developers at large.
  10. Archive maintenance scripts are re-enabled.
  11. The release team take a break for a couple of weeks before getting back into the next cycle.

During the day much of the coordination happens in the #debian-release, #debian-ftp and #debian-cd IRC channels. You’re welcome to follow along if you’re interested in the process, although we ask that you are read-only while people are still concentrating (during the Squeeze release, a steady stream of people turned up to say “congratulations!” at the most critical junctures; it’s not particularly helpful while the process is still going on). The publicity team will be tweeting and denting progress as it happens, so that makes a good overview too.

If everything goes to plan, enjoy the parties!

(Disclaimer: inaccuracies are possible since so many people are involved and there’s a lot to happen in each step; all errors and omissions are entirely mine.)

Mike Gabriel: My Work on Debian LTS/ELTS (June 2019)

Friday 5th of July 2019 09:02:22 PM

In June 2019, I did not at all reach my goal of LTS/ELTS hours, unfortunately. (At this point, I could tell a long story about our dog'ish family member, the infectious diseases he caught, the vet visits we made and the daily care and attention he needed, but I won't...).

I have worked on the Debian LTS project for 9.75 hours (of 17 hours planned) and on the Debian ELTS project for just 1 hour (of 12 hours planned) as a paid contributor.

LTS Work
  • LTS: Setup physical box running Debian jessie (for qemu testing)
  • LTS: Bug hunting mupdf regarding my CVE-2018-5686 patch backport
  • LTS: Upload to jessie-security: mupdf (DLA-1838-1), 3 CVEs [1]
  • LTS: Glib2.0: request CVE Id (CVE-2019-13012) + email communication with upstream [2] (minor issue for Glib2.0 << 2.60)
  • LTS: cfengine3: triage CVE-2019-9929, email communication with upstream (helping out security team) [3]
ELTS Work
  • Upload to wheezy-lts: expat (ELA 136-1), 1 CVE [4]
References
