Planet Debian - https://planet.debian.org/

Steve McIntyre: DebConf in Brazil again!

Saturday 10th of August 2019 06:33:00 PM

I was lucky enough to meet up with my extended Debian family again this year. We went back to Brazil for the first time since 2004, this time in Curitiba. And this time I didn't lose anybody's clothes! :-)

I had a very busy time, as usual - lots of sessions to take part in, and lots of conversations with people from all over. As part of the Community Team (ex-AH Team), I had a lot of things to catch up on too, and a sprint report to send. Despite all that, I even managed to do some technical things too!

I ran sessions about UEFI Secure Boot, the Arm ports and the Community Team. I was meant to be running a session for the web team too, but the dreaded DebConf 'flu took me out for a day. It's traditional - bring hundreds of people together from all over the world, mix them up with too much alcohol and not enough sleep and many people get ill... :-( Once I'm back from vacation, I'll be doing my usual task of sending session summaries to the Debian mailing lists to describe what happened in my sessions.

Maddog showed a group of us round the micro-brewery at Hop'n'Roll which was extra fun. I'm sure I wasn't the only experienced guy there, but it's always nice to listen to geeky people talking about their passion.

Of course, I couldn't get to all the sessions I wanted to - there are just too many things going on in DebConf week, and sessions clash at the best of times. So I have a load of videos on my laptop to watch while I'm away. Heartfelt thanks to our always-awesome video team for their efforts to make that possible. And I know that I had at least one follower at home watching the live streams too!

Daniel Lange: Cleaning a broken GnuPG (gpg) key

Saturday 10th of August 2019 03:38:55 PM

I've long said that the main tools in the Open Source security space, OpenSSL and GnuPG (gpg), are broken and only a complete re-write will solve this. And that is still pending as nobody came forward with the funding. It's not a sexy topic, so it has to get really bad before it'll get better.

Gpg has a UI that is close to useless. That won't substantially change with more bolted-on improvements.

Now Robert J. Hansen and Daniel Kahn Gillmor had somebody add ~50k signatures (read 1, 2, 3, 4 for the g{l}ory details) to their keys and - oops - they say that breaks gpg.

But does it?

I downloaded Robert J. Hansen's key off the SKS-Keyserver network. It's a nice 45MB file when de-ascii-armored (gpg --dearmor broken_key.asc ; mv broken_key.asc.gpg broken_key.gpg).

Now a friendly:

$ /usr/bin/time -v gpg --no-default-keyring --keyring ./broken_key.gpg --batch --quiet --edit-key 0x1DCBDC01B44427C7 clean save quit

pub  rsa3072/0x1DCBDC01B44427C7
     created: 2015-07-16  expires: never     usage: SC
     trust: unknown      validity: unknown
sub  ed25519/0xA83CAE94D3DC3873
     created: 2017-04-05  expires: never     usage: S
sub  cv25519/0xAA24CC81B8AED08B
     created: 2017-04-05  expires: never     usage: E
sub  rsa3072/0xDC0F82625FA6AADE
     created: 2015-07-16  expires: never     usage: E
[ unknown ] (1). Robert J. Hansen <rjh@sixdemonbag.org>
[ unknown ] (2)  Robert J. Hansen <rob@enigmail.net>
[ unknown ] (3)  Robert J. Hansen <rob@hansen.engineering>

User ID "Robert J. Hansen <rjh@sixdemonbag.org>": 49705 signatures removed
User ID "Robert J. Hansen <rob@enigmail.net>": 49704 signatures removed
User ID "Robert J. Hansen <rob@hansen.engineering>": 49701 signatures removed

pub  rsa3072/0x1DCBDC01B44427C7
     created: 2015-07-16  expires: never     usage: SC
     trust: unknown      validity: unknown
sub  ed25519/0xA83CAE94D3DC3873
     created: 2017-04-05  expires: never     usage: S
sub  cv25519/0xAA24CC81B8AED08B
     created: 2017-04-05  expires: never     usage: E
sub  rsa3072/0xDC0F82625FA6AADE
     created: 2015-07-16  expires: never     usage: E
[ unknown ] (1). Robert J. Hansen <rjh@sixdemonbag.org>
[ unknown ] (2)  Robert J. Hansen <rob@enigmail.net>
[ unknown ] (3)  Robert J. Hansen <rob@hansen.engineering>

        Command being timed: "gpg --no-default-keyring --keyring ./broken_key.gpg --batch --quiet --edit-key 0x1DCBDC01B44427C7 clean save quit"
        User time (seconds): 3911.14
        System time (seconds): 2442.87
        Percent of CPU this job got: 99%
        Elapsed (wall clock) time (h:mm:ss or m:ss): 1:45:56
        Average shared text size (kbytes): 0
        Average unshared data size (kbytes): 0
        Average stack size (kbytes): 0
        Average total size (kbytes): 0
        Maximum resident set size (kbytes): 107660
        Average resident set size (kbytes): 0
        Major (requiring I/O) page faults: 1
        Minor (reclaiming a frame) page faults: 26630
        Voluntary context switches: 43
        Involuntary context switches: 59439
        Swaps: 0
        File system inputs: 112
        File system outputs: 48
        Socket messages sent: 0
        Socket messages received: 0
        Signals delivered: 0
        Page size (bytes): 4096
        Exit status: 0
 

And the result is a nicely usable 3835-byte file containing the clean public key. If you supply a keyring instead of --no-default-keyring, it will also keep the non-self signatures that are useful to you (as you apparently know the signing party).
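For comparison, the same clean run can be applied to a key sitting in your normal keyring; a minimal sketch reusing the command from above (substitute the key ID you want to clean):

$ gpg --batch --quiet --edit-key 0x1DCBDC01B44427C7 clean save quit

Run this way, gpg operates on your default keyring and, as noted above, keeps the cross-signatures made by keys you actually have.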

So it does not break gpg. It does break things that call gpg at runtime and not asynchronously. I heard Enigmail is affected, quelle surprise.

Now the main problem here is the runtime. 1h45min is just ridiculous. As Filippo Valsorda puts it:

Someone added a few thousand entries to a list that lets anyone append to it. GnuPG, software supposed to defeat state actors, suddenly takes minutes to process entries. How big is that list you ask? 17 MiB. Not GiB, 17 MiB. Like a large picture. https://dev.gnupg.org/T4592

If I were a gpg / SKS keyserver developer, I'd

  • speed this up so the edit-key run above completes in less than 10 s (just getting rid of the lseek/read dance and deferring all time-based decisions should get close)
  • (ideally) make the drop-sig import-filter syntax useful (date-ranges, non-reciprocal signatures, ...)
  • clean affected keys on the SKS keyservers (needs coordination of sysops, drop servers from unreachable people)
  • (ideally) use the opportunity to clean all keyserver filesystems of the "message board over pgp keyservers" keys, too
  • only accept new keys and new signatures on keys extending the strong set (rather small change to the existing codebase)

That way another key can only be added to the keyserver network if it contains at least one signature from a previously known strong-set key. Attacking the keyserver network would become at least non-trivial. And the web-of-trust thing may make sense again.

Updates

09.07.2019

GnuPG 2.2.17 has been released with another set of quickly bolted together fixes:

* gpg: Ignore all key-signatures received from keyservers. This change is required to mitigate a DoS due to keys flooded with faked key-signatures. The old behaviour can be achieved by adding keyserver-options no-self-sigs-only,no-import-clean to your gpg.conf. [#4607]
* gpg: If an imported keyblocks is too large to be stored in the keybox (pubring.kbx) do not error out but fallback to an import using the options "self-sigs-only,import-clean". [#4591]
* gpg: New command --locate-external-key which can be used to refresh keys from the Web Key Directory or via other methods configured with --auto-key-locate.
* gpg: New import option "self-sigs-only".
* gpg: In --auto-key-retrieve prefer WKD over keyservers. [#4595]
* dirmngr: Support the "openpgpkey" subdomain feature from draft-koch-openpgp-webkey-service-07. [#4590]
* dirmngr: Add an exception for the "openpgpkey" subdomain to the CSRF protection. [#4603]
* dirmngr: Fix endless loop due to http errors 503 and 504. [#4600]
* dirmngr: Fix TLS bug during redirection of HKP requests. [#4566]
* gpgconf: Fix a race condition when killing components. [#4577]

Bug T4607 shows that these changes are anything but well thought-out. They introduce artificial limits, like 64kB for WKD-distributed keys or 5MB for local signature imports (Bug T4591), which weaken the web-of-trust further.

I recommend not running gpg 2.2.17 in production environments without extensive testing, as these limits and the unverified network traffic may bite you. Do validate your upgrade with valid and broken keys that have segments (packet groups) surpassing the above-mentioned limits. You may be surprised what gpg does. On the upside: you can now refresh keys (sans signatures) via WKD. So if your buddies still believe in limiting their subkey validities, you can more easily update them while bypassing the SKS keyserver network. NB: I have not tested that functionality. So test before deploying.
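For illustration, refreshing a single key via WKD with 2.2.17 should look roughly like this; an untested sketch, and the mail address is a placeholder that assumes the key owner's domain publishes a Web Key Directory:

$ gpg --locate-external-key someone@example.org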

10.08.2019

Christopher Wellons (skeeto) has released his pgp-poisoner tool. It is a Go program that can add thousands of malicious signatures to a GnuPG key per second. He comments "[pgp-poisoner is] proof that such attacks are very easy to pull off. It doesn't take a nation-state actor to break the PGP ecosystem, just one person and couple evenings studying RFC 4880. This system is not robust." He also hints at the next likely attack vector: public subkeys can be bound to a primary key of choice.

Petter Reinholdtsen: Legal to share more than 16,000 movies listed on IMDB?

Saturday 10th of August 2019 10:00:00 AM

The recent announcement from the New York Public Library on its results in identifying books published in the USA that are now in the public domain inspired me to update the scripts I use to track down movies that are in the public domain. This involved updating the script used to extract lists of movies believed to be in the public domain, to work with the latest version of the source web sites. In particular, the new edition of the Retro Film Vault web site now seems to list all the films available from that distributor, bringing the films identified there to more than 12,000 movies, and I was able to connect 46% of these to IMDB titles.

The new total is 16307 IMDB IDs (aka films) in the public domain or creative commons licensed, and unknown status for 31460 movies (possibly duplicates of the 16307).

The complete data set is available from a public git repository, including the scripts used to create it.

Anyway, this is the summary of the 28 collected data sources so far:

 2361 entries ( 50 unique) with and 22472 without IMDB title ID in free-movies-archive-org-search.json
 2363 entries ( 146 unique) with and 0 without IMDB title ID in free-movies-archive-org-wikidata.json
  299 entries ( 32 unique) with and 93 without IMDB title ID in free-movies-cinemovies.json
   88 entries ( 52 unique) with and 36 without IMDB title ID in free-movies-creative-commons.json
 3190 entries ( 1532 unique) with and 13 without IMDB title ID in free-movies-fesfilm-xls.json
  620 entries ( 24 unique) with and 283 without IMDB title ID in free-movies-fesfilm.json
 1080 entries ( 165 unique) with and 651 without IMDB title ID in free-movies-filmchest-com.json
  830 entries ( 13 unique) with and 0 without IMDB title ID in free-movies-icheckmovies-archive-mochard.json
   19 entries ( 19 unique) with and 0 without IMDB title ID in free-movies-imdb-c-expired-gb.json
 7410 entries ( 7101 unique) with and 0 without IMDB title ID in free-movies-imdb-c-expired-us.json
 1205 entries ( 41 unique) with and 0 without IMDB title ID in free-movies-imdb-pd.json
  163 entries ( 22 unique) with and 88 without IMDB title ID in free-movies-infodigi-pd.json
  158 entries ( 103 unique) with and 0 without IMDB title ID in free-movies-letterboxd-looney-tunes.json
  113 entries ( 4 unique) with and 0 without IMDB title ID in free-movies-letterboxd-pd.json
  182 entries ( 71 unique) with and 0 without IMDB title ID in free-movies-letterboxd-silent.json
  248 entries ( 85 unique) with and 0 without IMDB title ID in free-movies-manual.json
  158 entries ( 4 unique) with and 64 without IMDB title ID in free-movies-mubi.json
   85 entries ( 1 unique) with and 23 without IMDB title ID in free-movies-openflix.json
  520 entries ( 22 unique) with and 244 without IMDB title ID in free-movies-profilms-pd.json
  343 entries ( 14 unique) with and 10 without IMDB title ID in free-movies-publicdomainmovies-info.json
  701 entries ( 16 unique) with and 560 without IMDB title ID in free-movies-publicdomainmovies-net.json
   74 entries ( 13 unique) with and 60 without IMDB title ID in free-movies-publicdomainreview.json
  698 entries ( 16 unique) with and 118 without IMDB title ID in free-movies-publicdomaintorrents.json
 5506 entries ( 2941 unique) with and 6585 without IMDB title ID in free-movies-retrofilmvault.json
   16 entries ( 0 unique) with and 0 without IMDB title ID in free-movies-thehillproductions.json
  110 entries ( 2 unique) with and 29 without IMDB title ID in free-movies-two-movies-net.json
   73 entries ( 20 unique) with and 131 without IMDB title ID in free-movies-vodo.json
16307 unique IMDB title IDs in total, 12509 only in one list, 31460 without IMDB title ID

New this time is a list of all the identified IMDB titles, with title, year and running time, provided in free-complete.json. This file also indicates which source was used to conclude the video is free to distribute.
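For a quick look at the data, something along these lines should work, assuming the file is a JSON array and jq is installed (the exact field layout may differ, so check the file first):

$ jq length free-complete.json      # number of identified titles
$ jq '.[0]' free-complete.json      # inspect the first entry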

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Andy Simpkins: gov.uk paperwork

Friday 9th of August 2019 01:11:36 PM

[Edit: Removed accusation of non UK hosting – thank you to Richard Mortimer & Philipp Edelmann for pointing out I had incorrectly looked up the domain “householdresponce.com” in place of  “householdresponse.com”.  Learn to spell…]

I live in England, where the government keeps an Electoral Roll, a list of people registered to vote.  This list needs to be maintained, so once a year we are required to update the database.  To make sure we don't forget, we get sent a handy letter through the door looking like this:

Is it a scam?

Well that’s the first page anyway.   Correctly addressed to the “Current Occupier”.   So why am I posting about this?

Phishing emails land in our inbox all the time (hopefully only a few, because our spam filters eat the rest).  These are unsolicited emails trying to trick us into doing something.  Usually they look like something official and warn us about something we should take action on, for example an email that looks like it has come from your bank warning about suspicious activity in your account.  They then ask you to follow a link to the 'bank's website' where you can log in and confirm whether the activity is genuine – obviously taking you through a 'man in the middle' website that harvests your account credentials.

The government is justifiably concerned about this (as are banks and other businesses that are impersonated in this way) and so runs media campaigns to educate the public about the dangers of such scams and what to look out for.

So back to the “Household Enquiry” form…

How do I know that this is genuine?  Well, I don't.  I can't easily verify the letter, I can't be sure who sent it, and it arrived through my letterbox unbidden.  Even if I was expecting it, wouldn't the perfect time to send such a scam letter be exactly when the genuine letters are being distributed?

All I can do is read the letter carefully and apply the same rational tests that I would to any unsolicited (e)mail.

1) Does it claim to come from a source I would have dealings with?  (Bulk mailing is so cheap that sending to huge numbers of people is still effective even if most of the recipients will know it is a scam because they wouldn't have dealings with the alleged sender.)  Yes, it claims to have been sent by South Cambridgeshire District Council; they are my district council and would send me this letter.

2) Do all the communication links point to the sender?  No.  Stop: this is probably a scam.

 

Alarm bells should now be ringing – their preferred method of communication is for me to visit the website www.householdresponse.com/southcambs.  Sure, they have a gov.uk website mentioned, they claim to be South Cambridgeshire District Council, and they have an email address elections@scambs.gov.uk, but all the fake emails claiming to come from my bank look like they come from my bank as well – the only thing that doesn't is the link they want you to follow.  Just like this letter…

OK, time for a bit of detective work.

:~$ whois householdresponse.com
Domain Name: HOUSEHOLDRESPONSE.COM
Registry Domain ID: 2036860356_DOMAIN_COM-VRSN
Registrar WHOIS Server: whois.easyspace.com
Registrar URL: http://www.easyspace.com
Updated Date: 2018-05-23T05:56:38Z
Creation Date: 2016-06-22T09:24:15Z
Registry Expiry Date: 2020-06-22T09:24:15Z
Registrar: EASYSPACE LIMITED
Registrar IANA ID: 79
Registrar Abuse Contact Email: abuse@easyspace.com
Registrar Abuse Contact Phone: +44.3707555066
Domain Status: clientTransferProhibited https://icann.org/epp#clientTransferProhibited
Domain Status: clientUpdateProhibited https://icann.org/epp#clientUpdateProhibited
Name Server: NS1.NAMECITY.COM
Name Server: NS2.NAMECITY.COM
DNSSEC: unsigned
URL of the ICANN Whois Inaccuracy Complaint Form: https://www.icann.org/wicf/
>>> Last update of whois database: 2019-08-09T17:05:57Z <<<
<snip>

Really?  Just a hosting company's details for a domain claiming to belong to local government?

:~$ nslookup householdresponse.com
<snip>
Name: householdresponse.com
Address: 62.25.101.164

:~$ whois 62.25.101.164
<snip>
% Information related to '62.25.64.0 - 62.25.255.255'
% Abuse contact for '62.25.64.0 - 62.25.255.255' is 'ipabuse@vodafone.co.uk'

inetnum: 62.25.64.0 - 62.25.255.255
netname: UK-VODAFONE-WORLDWIDE-20000329
country: GB
org: ORG-VL225-RIPE
admin-c: GNOC4-RIPE
tech-c: GNOC4-RIPE
status: ALLOCATED PA
mnt-by: RIPE-NCC-HM-MNT
mnt-by: VODAFONE-WORLDWIDE-MNTNER
mnt-lower: VODAFONE-WORLDWIDE-MNTNER
mnt-domains: VODAFONE-WORLDWIDE-MNTNER
mnt-routes: VODAFONE-WORLDWIDE-MNTNER
created: 2017-10-18T09:50:20Z
last-modified: 2017-10-18T09:50:20Z
source: RIPE # Filtered

organisation: ORG-VL225-RIPE
org-name: Vodafone Limited
org-type: LIR
address: Vodafone House, The Connection
address: RG14 2FN
address: Newbury
address: UNITED KINGDOM
phone: +44 1635 33251
admin-c: GSOC-RIPE
tech-c: GSOC-RIPE
abuse-c: AR40377-RIPE
mnt-ref: CW-EUROPE-GSOC
mnt-by: RIPE-NCC-HM-MNT
mnt-by: CW-EUROPE-GSOC
created: 2017-05-11T14:35:11Z
last-modified: 2018-01-03T15:48:36Z
source: RIPE # Filtered

role: Cable and Wireless IP GNOC Munich
remarks: Cable&Wireless Worldwide Hostmaster
address: Smale House
address: London SE1
address: UK
admin-c: DOM12-RIPE
admin-c: DS3356-RIPE
admin-c: EJ343-RIPE
admin-c: FM1414-RIPE
admin-c: MB4
tech-c: AB14382-RIPE
tech-c: MG10145-RIPE
tech-c: DOM12-RIPE
tech-c: JO361-RIPE
tech-c: DS3356-RIPE
tech-c: SA79-RIPE
tech-c: EJ343-RIPE
tech-c: MB4
tech-c: FM1414-RIPE
abuse-mailbox: ipabuse@vodafone.co.uk
nic-hdl: GNOC4-RIPE
mnt-by: CW-EUROPE-GSOC
created: 2004-02-03T16:44:58Z
last-modified: 2017-05-25T12:03:34Z
source: RIPE # Filtered

% Information related to '62.25.64.0/18AS1273'

route: 62.25.64.0/18
descr: Vodafone Hosting
origin: AS1273
mnt-by: ENERGIS-MNT
created: 2019-02-28T08:50:03Z
last-modified: 2019-02-28T08:57:04Z
source: RIPE

% Information related to '62.25.64.0/18AS2529'

route: 62.25.64.0/18
descr: Energis UK
origin: AS2529
mnt-by: ENERGIS-MNT
created: 2014-03-26T16:21:40Z
last-modified: 2014-03-26T16:21:40Z
source: RIPE

Is this a scam…
I only wish it was :-(

A quick search of https://www.scambs.gov.uk/elections/electoral-registration-faqs/ and the very first thing on the webpage is a link to www.householdresponse.com/southcambs…

A phone call to the council, just to confirm that they haven't been hacked, and I am told that yes, this is for real.

OK, let's look at the privacy statement (on the same letter).

Right, a link to a UK government website… https://www.scambs.gov.uk/privacynotice

A copy of this page as of 2019-08-09 (because websites have a habit of changing) can be found here:
http://koipond.org.uk/photo/Screenshot_2019-08-09_CustomerPrivacyNotice.png

[Edit
I originally thought that this was being hosted outside the UK (on a US-based server), which would be outside of GDPR.  I am still pissed off that this looks and feels 'spammy' and that the site is being hosted outside of a gov.uk server, but this is not the righteous rage that I previously felt]

Summary Of Issue
  1. UK Government, District and Local Councils should be exemplars of best practice.  Any correspondence from any part of UK government should only use websites within the subdomain gov.uk (fraud prevention)
Actions taken
  • 2019-08-09
    • I spoke with South Cambridgeshire District Council and confirmed that this was genuine
    • Spoke with South Cambridgeshire District Council Electoral Services Team and made them aware of both issues (and sent follow up email)
    • Spoke with the ICO and asked for advice.  They will take up the issue if South Cambs do not resolve this within 20 working days.
    • Spoke again with the ICO – even though I had mistakenly believed this was being hosted outside the UK and this is not the case, they are still interested in pushing for a move to a .gov.uk domain

 

Mike Gabriel: Kudos to the Rspamd developers

Friday 9th of August 2019 01:02:23 PM

I just migrated the first mail server site – a customer's – away from Amavis+SpamAssassin to Rspamd. The main reasons for the migration were speed, and the setup needed a polish-up anyway. People on site had been complaining about too much SPAM for quite a while. Plus, it is always good to dive into something new. Mission accomplished.

Implemented functionalities:

  • Sophos AV (savdi) antivirus checks backend
  • Clam AV antivirus backend as fallback
  • Auto-Learner CRON Job for SPAM mails published by https://artinvoice.hu
  • Work-around for the lacking HTTP proxy support

Unfortunately, I could not enable the full scope of Rspamd features, as that specific site I worked on is on a private network, behind a firewall, etc. Some features don't make sense there (e.g. greylisting) or are hard-disabled in Rspamd once it detects that the mail host is on some local network infrastructure (local as in RFC-1918, or the corresponding fd00:: RFC for IPv6 I currently can't remember).

Kudos + Thanks!

Rspamd is just awesome!!! I am really really pleased with the result (and so is the customer, I heard). Thanks to the upstream developers, thanks to the Debian maintainers of the rspamd Debian package. [1]

Credits + Thanks for sharing your Work

The main part of the work had already been documented in a blog post [2] by someone with the nick "zac" (no real name found). Thanks for that!

The Sophos AV integration was a little tricky at the start, but worked out well, after some trial and error, log reading, Rspamd code studies, etc.

About halfway through, one tricky part popped up that could be avoided by the Rspamd upstream maintainers in future releases. As far as I can tell from [3], Rspamd lacks support for retrieving its map files and such (hosted on *.rspamd.com, or at other 3rd party providers) via an HTTP proxy server. This was nearly a full blocker for my last project, as the customer's mail gateway is part of a larger infrastructure and hosted inside a double ring of firewalls. The only access to the internet leads through a non-transparent Squid proxy server (one which I don't have control over).

To work around this, I set up a transparent https proxy on "localhost", using a neat Python script [4]. Thanks for sharing this script.

I love all the sharing we do in FLOSS

Working on projects like this is just pure fun, and deeply interesting as well. This project has been 98% FLOSS and 100% in the spirit of FLOSS and the associated sharing mentality. I love this spirit of sharing one's work with the rest of the world, whether someone finds what I have to share useful or not.

I invite everyone to join in with sharing and in fact, for the IT business, I dearly recommend it.

I did not post config snippets here and such (as some of them are really customer specific), but if you stumble over similar issues when setting up your anti-SPAM gateway mail site using Rspamd, feel free to poke me and I'll see how I can help.

light+love
Mike (aka sunweaver at debian.org)

References

Dirk Eddelbuettel: RQuantLib 0.4.10: Pure maintenance

Wednesday 7th of August 2019 02:32:00 PM

A new version 0.4.10 of RQuantLib just got onto CRAN; a Debian upload will follow in due course.

QuantLib is a very comprehensive free/open-source library for quantitative finance; RQuantLib connects it to the R environment and language.

This version does two things related to the new upstream QuantLib release 1.16. First, it updates the Windows build script in two ways: it uses binaries for the brand new 1.16 release as prepared by Jeroen, and it sets win-builder up for the current and “prospective next version”, also set up by Jeroen. I also updated the Dockerfile used for CI to pick QuantLib 1.16 from Debian’s unstable repo as it is too new to have moved to testing (which the r-base container we build on defaults to). The complete set of changes is listed below:

Changes in RQuantLib version 0.4.10 (2019-08-07)
  • Changes in RQuantLib build system:

    • The src/Makevars.win and tools/winlibs.R file get QuantLib 1.16 for either toolchain (Jeroen in #136).

    • The custom Docker container now downloads QuantLib from Debian unstable to get release 1.16 (from yesterday, no less)

Courtesy of CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the RQuantLib page. Questions, comments etc. should go to the new rquantlib-devel mailing list. Issue tickets can be filed at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Jonas Meurer: debian lts report 2019.07

Tuesday 6th of August 2019 10:17:04 AM
Debian LTS report for July 2019

This month I was allocated 17 hours. I also had 2 hours left over from June, which makes a total of 19 hours. I spent all of them on the following tasks/issues.

  • DLA-1843-1: Fixed CVE-2019-10162 and CVE-2019-10163 in pdns.
  • DLA-1852-1: Fixed CVE-2019-9948 in python3.4. Also found, debugged and fixed several further regressions in the former CVE-2019-9740 patches.
  • Improved testing of LTS uploads: We had some internal discussion in the Debian LTS team on how to improve the overall quality of LTS security uploads by doing more (semi-)automated testing of the packages before uploading them to jessie-security. I tried to summarize the internal discussion, bringing it to the public debian-lts mailinglist. I also did a lot of testing and worked on Jessie support in Salsa-CI. Now that salsa-ci-team/images MR !74 and ci-team/debci MR !89 got merged, we only have to wait for a new debci release in order to enable autopkgtest Jessie support in Salsa-CI. Afterwards, we can use the Salsa-CI pipeline for (semi-)automatic testing of packages targeted at jessie-security.
Links

Louis-Philippe Véronneau: Paying for The Internet

Tuesday 6th of August 2019 04:00:00 AM

For a while now, I've been paying for The Internet. Not the internet connection provided by my ISP, mind you, but for the stuff I enjoy online and the services I find useful.

Most of the Internet as we currently know it is funded by ads. I hate ads and I take a vicious pride in blocking them with the help of great projects like uBlock Origin and NoScript. More fundamentally, I believe the web shouldn't be funded via ads:

  • they control your brain (that alone should be enough to ban ads)
  • they create morally wrong economic incentives towards consumerism
  • they create important security risks and make websites gather data on you

I could go on like this, but I feel those are pretty strong arguments. Feel free to disagree.

So I've started paying. Paying for my emails. Paying for the comics I enjoy online 1. Paying for the few YouTube channels I like. Paying for the newspapers I read.

At the moment, The Internet costs me around 260 USD per year. Luckily for me, I'm privileged enough that it doesn't have a significant impact on my finances. I also pay for a lot of the software I use and enjoy by making patches and spending time working on them. I feel that's a valid way to make The Internet a more sustainable place.

I don't think individual actions like this one have a very profound impact on how things work, but like riding your bike to work or eating locally produced organic food, it opens a window into a possible future. A better future.

  1. I currently like these comics enough to pay for them:

Dirk Eddelbuettel: #23: Debugging with Docker and Rocker – A Concrete Example helping on macOS

Tuesday 6th of August 2019 01:48:00 AM

Welcome to the 23rd post in the rationally reasonable R rants series, or R4 for short. Today’s post was motivated by an exchange on the r-devel list earlier in the day, and a few subsequent off-list emails.

Roger Koenker posted a question: how best to debug an issue arising only with gfortran-9, which is difficult to get hold of on his macOS development platform. Some people followed up, and I mentioned that I had good success using Docker, and particularly our Rocker containers—and outlined a quick mini-tutorial (which had one mini-typo lacking the important slash in -w /work). Roger and I followed up over a few more off-list emails, and by and large this worked for him.

So what follows below is a jointly written / edited ‘mini HOWTO’ of how to deploy Docker on macOS for debugging under particular toolchains more easily available on Linux. Windows and Linux use should be very similar, albeit differing in the initial install. In fact, I frequently debug or test in Docker sessions when I do not want to install on my Linux host system. Roger sent one version (which I had also edited) back to the list. What follows is my final version.

Debugging with Docker: Getting Hold of Particular Compilers

Context: The quantreg package was seen exhibiting errors when compiled with gfortran-9. The following shows how to use gfortran-9 on macOS by virtue of Docker. It is written in Roger Koenker’s voice, but authored by Roger and myself.

With extensive help from Dirk Eddelbuettel I have installed docker on my mac mini from

https://hub.docker.com/editions/community/docker-ce-desktop-mac

which installs from a dmg in quite standard fashion. This has allowed me to simulate running R in a Debian environment with gfortran-9 and begin the process of debugging my ancient rqbr.f code.

Some further details:

Step 0: Install Docker and Test

Install Docker for macOS following this Docker guide. Do some initial testing, e.g.

docker --version
docker run hello-world

Step 1: Download r-base and test OS

We use the plainest Rocker container, rocker/r-base, in its aliased form as the official Docker container for R, i.e. r-base. We first ‘pull’, then test the version and drop into bash as a second test.

docker pull r-base                      # downloads r-base for us
docker run --rm -ti r-base R --version  # to check we have the R we want
docker run --rm -ti r-base bash         # now in shell, Ctrl-d to exit

Step 2: Setup the working directory

We tell Docker to run from the current directory and access the files therein. For the work on the quantreg package this is projects/rq for Roger.

cd projects/rq
docker run --rm -ti -v ${PWD}:/work -w /work r-base bash

This puts the contents of projects/rq into the /work directory, and starts the session in /work (as can be seen from the prompt).

Next, we update the package information inside the container:

root@90521904fa86:/work# apt-get update
Get:1 http://cdn-fastly.deb.debian.org/debian sid InRelease [149 kB]
Get:2 http://cdn-fastly.deb.debian.org/debian testing InRelease [117 kB]
Get:3 http://cdn-fastly.deb.debian.org/debian sid/main amd64 Packages [8,385 kB]
Get:4 http://cdn-fastly.deb.debian.org/debian testing/main amd64 Packages [7,916 kB]
Fetched 16.6 MB in 4s (4,411 kB/s)
Reading package lists... Done

Step 3: Install gcc-9 and gfortran-9

root@90521904fa86:/work# apt-get install gcc-9 gfortran-9
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  cpp-9 gcc-9-base libasan5 libatomic1 libcc1-0 libgcc-9-dev libgcc1 libgfortran-9-dev libgfortran5 libgomp1 libitm1 liblsan0 libquadmath0 libstdc++6 libtsan0 libubsan1
Suggested packages:
  gcc-9-locales gcc-9-multilib gcc-9-doc libgcc1-dbg libgomp1-dbg libitm1-dbg libatomic1-dbg libasan5-dbg liblsan0-dbg libtsan0-dbg libubsan1-dbg libquadmath0-dbg gfortran-9-multilib gfortran-9-doc libgfortran5-dbg libcoarrays-dev
The following NEW packages will be installed:
  cpp-9 gcc-9 gfortran-9 libgcc-9-dev libgfortran-9-dev
The following packages will be upgraded:
  gcc-9-base libasan5 libatomic1 libcc1-0 libgcc1 libgfortran5 libgomp1 libitm1 liblsan0 libquadmath0 libstdc++6 libtsan0 libubsan1
13 upgraded, 5 newly installed, 0 to remove and 71 not upgraded.
Need to get 35.6 MB of archives.
After this operation, 107 MB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Get:1 http://cdn-fastly.deb.debian.org/debian testing/main amd64 libasan5 amd64 9.1.0-10 [390 kB]
Get:2 http://cdn-fastly.deb.debian.org/debian testing/main amd64 libubsan1 amd64 9.1.0-10 [128 kB]
Get:3 http://cdn-fastly.deb.debian.org/debian testing/main amd64 libtsan0 amd64 9.1.0-10 [295 kB]
Get:4 http://cdn-fastly.deb.debian.org/debian testing/main amd64 gcc-9-base amd64 9.1.0-10 [190 kB]
Get:5 http://cdn-fastly.deb.debian.org/debian testing/main amd64 libstdc++6 amd64 9.1.0-10 [500 kB]
Get:6 http://cdn-fastly.deb.debian.org/debian testing/main amd64 libquadmath0 amd64 9.1.0-10 [145 kB]
Get:7 http://cdn-fastly.deb.debian.org/debian testing/main amd64 liblsan0 amd64 9.1.0-10 [137 kB]
Get:8 http://cdn-fastly.deb.debian.org/debian testing/main amd64 libitm1 amd64 9.1.0-10 [27.6 kB]
Get:9 http://cdn-fastly.deb.debian.org/debian testing/main amd64 libgomp1 amd64 9.1.0-10 [88.1 kB]
Get:10 http://cdn-fastly.deb.debian.org/debian testing/main amd64 libgfortran5 amd64 9.1.0-10 [633 kB]
Get:11 http://cdn-fastly.deb.debian.org/debian testing/main amd64 libcc1-0 amd64 9.1.0-10 [47.7 kB]
Get:12 http://cdn-fastly.deb.debian.org/debian testing/main amd64 libatomic1 amd64 9.1.0-10 [9,012 B]
Get:13 http://cdn-fastly.deb.debian.org/debian testing/main amd64 libgcc1 amd64 1:9.1.0-10 [40.5 kB]
Get:14 http://cdn-fastly.deb.debian.org/debian testing/main amd64 cpp-9 amd64 9.1.0-10 [9,667 kB]
Get:15 http://cdn-fastly.deb.debian.org/debian testing/main amd64 libgcc-9-dev amd64 9.1.0-10 [2,346 kB]
Get:16 http://cdn-fastly.deb.debian.org/debian testing/main amd64 gcc-9 amd64 9.1.0-10 [9,945 kB]
Get:17 http://cdn-fastly.deb.debian.org/debian testing/main amd64 libgfortran-9-dev amd64 9.1.0-10 [676 kB]
Get:18 http://cdn-fastly.deb.debian.org/debian testing/main amd64 gfortran-9 amd64 9.1.0-10 [10.4 MB]
Fetched 35.6 MB in 6s (6,216 kB/s)
debconf: delaying package configuration, since apt-utils is not installed
(Reading database ... 17787 files and directories currently installed.)
Preparing to unpack .../libasan5_9.1.0-10_amd64.deb ...
Unpacking libasan5:amd64 (9.1.0-10) over (9.1.0-8) ...
Preparing to unpack .../libubsan1_9.1.0-10_amd64.deb ...
Unpacking libubsan1:amd64 (9.1.0-10) over (9.1.0-8) ...
Preparing to unpack .../libtsan0_9.1.0-10_amd64.deb ...
Unpacking libtsan0:amd64 (9.1.0-10) over (9.1.0-8) ...
Preparing to unpack .../gcc-9-base_9.1.0-10_amd64.deb ...
Unpacking gcc-9-base:amd64 (9.1.0-10) over (9.1.0-8) ...
Setting up gcc-9-base:amd64 (9.1.0-10) ...
(Reading database ... 17787 files and directories currently installed.)
Preparing to unpack .../libstdc++6_9.1.0-10_amd64.deb ...
Unpacking libstdc++6:amd64 (9.1.0-10) over (9.1.0-8) ...
Setting up libstdc++6:amd64 (9.1.0-10) ...
(Reading database ... 17787 files and directories currently installed.)
Preparing to unpack .../0-libquadmath0_9.1.0-10_amd64.deb ...
Unpacking libquadmath0:amd64 (9.1.0-10) over (9.1.0-8) ...
Preparing to unpack .../1-liblsan0_9.1.0-10_amd64.deb ...
Unpacking liblsan0:amd64 (9.1.0-10) over (9.1.0-8) ...
Preparing to unpack .../2-libitm1_9.1.0-10_amd64.deb ...
Unpacking libitm1:amd64 (9.1.0-10) over (9.1.0-8) ...
Preparing to unpack .../3-libgomp1_9.1.0-10_amd64.deb ...
Unpacking libgomp1:amd64 (9.1.0-10) over (9.1.0-8) ...
Preparing to unpack .../4-libgfortran5_9.1.0-10_amd64.deb ...
Unpacking libgfortran5:amd64 (9.1.0-10) over (9.1.0-8) ...
Preparing to unpack .../5-libcc1-0_9.1.0-10_amd64.deb ...
Unpacking libcc1-0:amd64 (9.1.0-10) over (9.1.0-8) ...
Preparing to unpack .../6-libatomic1_9.1.0-10_amd64.deb ...
Unpacking libatomic1:amd64 (9.1.0-10) over (9.1.0-8) ...
Preparing to unpack .../7-libgcc1_1%3a9.1.0-10_amd64.deb ...
Unpacking libgcc1:amd64 (1:9.1.0-10) over (1:9.1.0-8) ...
Setting up libgcc1:amd64 (1:9.1.0-10) ...
Selecting previously unselected package cpp-9.
(Reading database ... 17787 files and directories currently installed.)
Preparing to unpack .../cpp-9_9.1.0-10_amd64.deb ...
Unpacking cpp-9 (9.1.0-10) ...
Selecting previously unselected package libgcc-9-dev:amd64.
Preparing to unpack .../libgcc-9-dev_9.1.0-10_amd64.deb ...
Unpacking libgcc-9-dev:amd64 (9.1.0-10) ...
Selecting previously unselected package gcc-9.
Preparing to unpack .../gcc-9_9.1.0-10_amd64.deb ...
Unpacking gcc-9 (9.1.0-10) ...
Selecting previously unselected package libgfortran-9-dev:amd64.
Preparing to unpack .../libgfortran-9-dev_9.1.0-10_amd64.deb ...
Unpacking libgfortran-9-dev:amd64 (9.1.0-10) ...
Selecting previously unselected package gfortran-9.
Preparing to unpack .../gfortran-9_9.1.0-10_amd64.deb ...
Unpacking gfortran-9 (9.1.0-10) ...
Setting up libgomp1:amd64 (9.1.0-10) ...
Setting up libasan5:amd64 (9.1.0-10) ...
Setting up libquadmath0:amd64 (9.1.0-10) ...
Setting up libatomic1:amd64 (9.1.0-10) ...
Setting up libgfortran5:amd64 (9.1.0-10) ...
Setting up libubsan1:amd64 (9.1.0-10) ...
Setting up cpp-9 (9.1.0-10) ...
Setting up libcc1-0:amd64 (9.1.0-10) ...
Setting up liblsan0:amd64 (9.1.0-10) ...
Setting up libitm1:amd64 (9.1.0-10) ...
Setting up libtsan0:amd64 (9.1.0-10) ...
Setting up libgcc-9-dev:amd64 (9.1.0-10) ...
Setting up gcc-9 (9.1.0-10) ...
Setting up libgfortran-9-dev:amd64 (9.1.0-10) ...
Setting up gfortran-9 (9.1.0-10) ...
Processing triggers for libc-bin (2.28-10) ...
root@90521904fa86:/work# pwd

Here filenames and versions reflect the Debian repositories as of today, August 5, 2019. While minor details may change at a future point in time, the key fact is that we get the components we desire via a single call, as the Debian system has a well-honed package system.
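As a quick sanity check, one can confirm that the new toolchain is now available inside the container:

root@90521904fa86:/work# gcc-9 --version
root@90521904fa86:/work# gfortran-9 --version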

Step 4: Prepare Package

At this point Roger removed some dependencies from the package quantreg that he knew were not relevant to the debugging problem at hand.

Step 5: Set Compiler Flags

Next, set compiler flags as follows:

root@90521904fa86:/work# mkdir ~/.R; vi ~/.R/Makevars

adding the values

CC=gcc-9
FC=gfortran-9
F77=gfortran-9

to the file. Alternatively, one can find the settings of CC, FC, CXX, … in /etc/R/Makeconf (which for the Debian package is a softlink to R’s actual Makeconf) and alter them there.
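If you prefer not to open an editor inside the container, the same three values can be written in one go; a small sketch equivalent to the vi session above:

root@90521904fa86:/work# mkdir -p ~/.R
root@90521904fa86:/work# printf 'CC=gcc-9\nFC=gfortran-9\nF77=gfortran-9\n' > ~/.R/Makevars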

Step 6: Install the Source Package

Now run

R CMD INSTALL quantreg_5.43.tar.gz

which uses the gfortran-9 compiler, and this version did reproduce the error initially reported by the CRAN maintainers.

Step 7: Debug!

With the tools in place and the bug reproduced, it is (just!) a matter of finding the bug and fixing it.

And that concludes the tutorial.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Romain Perier: Free software activities in May, June and July 2019

Monday 5th of August 2019 06:32:06 PM
Hi Planet, it has been a long time since my last post.
Here is an update covering what I have been doing in my free software activities during May, June and July 2019.
May

Only contributions related to Debian were done in May:
  •  linux: Update to 5.1 (including porting of all debian patches to the new release)
  • linux: Update to 5.1.2
  • linux: Update to 5.1.3
  • linux: Update to 5.1.5
  • firmware-nonfree: misc-nonfree: Add GV100 signed firmwares (Closes: #928672)
June

Debian
  • linux: Update to 5.1.7
  • linux: Update to 5.1.8
  • linux: Update to 5.1.10
  • linux: Update to 5.1.11
  • linux: Update to 5.1.15
  • linux: [sparc64] Fix device naming inconsistency between sunhv_console and sunhv_reg (Closes: #926539)
  • raspi3-firmware:  New upstream version 1.20190517
  • raspi3-firmware: New upstream version 1.20190620+1
Kernel Self Protection Project

I have recently joined the Kernel Self Protection Project, which basically intends to harden the mainline Linux kernel as much as possible by adding subsystems that improve security or make internal subsystems more robust against common errors that might lead to security issues.

As a first contribution, Kees Cook asked me to check all the NLA_STRING attributes for non-nul-terminated strings. Internal functions for NLA attributes expect standard nul-terminated strings and use standard string functions like strcmp() or equivalent. A few drivers were using non-nul-terminated strings in some cases, which might lead to buffer overflows. I have checked all the NLA_STRING uses in all drivers and forwarded a status report for each of them. Everything was already fixed in linux-next (hopefully).
July

Debian
  • linux: Update to 5.1.16
  • linux: Update to 5.2-rc7 (including porting of all debian patches to the new release)
  • linux: Update to 5.2
  • linux: Update to 5.2.1
  • linux: [rt] Update to 5.2-rt1
  • linux: Update to 5.2.4
  • ethtool: New upstream version 5.2
  • raspi3-firmware: Fixed lintian warnings about the binary blobs for the Raspberry Pi 4
  • raspi3-firmware: New upstream version 1.20190709
  • raspi3-firmware: New upstream version 1.20190718
The following CVEs are for buster-security:
  • linux: [x86] x86/insn-eval: Fix use-after-free access to LDT entry (CVE-2019-13233)
  • linux: [powerpc*] mm/64s/hash: Reallocate context ids on fork (CVE-2019-12817)
  • linux: nfc: Ensure presence of required attributes in the deactivate_target handler (CVE-2019-12984)
  • linux: binder: fix race between munmap() and direct reclaim (CVE-2019-1999)
  • linux: scsi: libsas: fix a race condition when smp task timeout (CVE-2018-20836)
  • linux: Input: gtco - bounds check collection indent level (CVE-2019-13631)
Kernel Self Protection Project

I am currently improving the API of the internal kernel subsystem "tasklet". This is an old API and, like "timer", it has several limitations regarding the way information is passed to the callback handler. A future patch set will be sent upstream; I will probably write a blog post about it.

Reproducible Builds: Reproducible Builds in July 2019

Monday 5th of August 2019 04:06:30 PM

Welcome to the July 2019 report from the Reproducible Builds project!

In these reports we outline the most important things that we have been up to over the past month. As a quick recap, whilst anyone can inspect the source code of free software for malicious flaws, almost all software is distributed to end users as pre-compiled binaries.

The motivation behind the reproducible builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.
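In practice that consensus check boils down to independent rebuilders comparing checksums of the artifacts they produced from the same source; a minimal sketch (paths and package name are hypothetical):

$ sha256sum rebuilder-a/hello_1.0-1_amd64.deb rebuilder-b/hello_1.0-1_amd64.deb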

In July’s report, we cover:

  • Front page: Media coverage, upstream news, etc.
  • Distribution work: Shenanigans at DebConf19
  • Software development: Software transparency, yet more diffoscope work, etc.
  • On our mailing list: GNU tools, education and buildinfo files
  • Getting in touch… and how to contribute

If you are interested in contributing to our project, we enthusiastically invite you to visit our Contribute page on our website.

Front page

Nico Alt wrote a detailed and well-researched article titled “Trust is good, control is better” which discusses Reproducible Builds in F-Droid, the alternative application repository for Android mobile phones. In contrast to the bigger commercial app stores, F-Droid only offers apps that are free and open source software. The post not only demonstrates using diffoscope but talks more generally about how reproducible builds can prevent single developers or other important centralised infrastructure becoming targets for toolchain-based attacks.

Later in the month, F-Droid’s aforementioned reproducibility status was mentioned on episode 68 of the Late Night Linux podcast. (direct link to 14:12)

Morten (“Foxboron”) Linderud published his academic thesis “Reproducible Builds: break a log, good things come in trees” which investigates and describes how transparency log overlays can provide additional security guarantees for computers automatically producing software packages. The thesis was part of Morten’s studies at the University of Bergen, Norway and is an extension of the work New York University Tandon School of Engineering has been doing with package rebuilder integration in APT.

Mike Hommey posted to his blog about Reproducing the Linux builds of Firefox 68 which leverages that builds shipped by Mozilla should be reproducible from this version. He discusses the problems caused by the builds being optimised with Profile-Guided Optimisation (PGO) but armed with the now-published profiling data, Mike provides Docker-based instructions how to reproduce the published builds yourself.

Joel Galenson has been making progress on a reproducible Rust compiler which includes support for a --remap-path-prefix argument, related to the concepts and problems involved in the BUILD_PATH_PREFIX_MAP proposal to fix issues with build paths being embedded in binaries.
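The idea behind such a flag is to rewrite the absolute build directory before it gets embedded in the binary; a rough sketch of a rustc invocation (the file name and replacement prefix are placeholders):

$ rustc -g --remap-path-prefix "$PWD=/remapped/src" main.rs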

Lastly, Alessio Treglia posted to their blog about Cosmos Hub and Reproducible Builds which describes the reproducibility work happening in the Cosmos Hub, a network of interconnected blockchains. Specifically, Alessio talks about work being done on the Gaia development kit for the Hub.


Distribution work

Bernhard M. Wiedemann posted his monthly Reproducible Builds status update for the openSUSE distribution. Enabling Link Time Optimization (LTO) in this distribution’s “Tumbleweed” branch caused multiple issues due to the number of cores on the build host being added to the CFLAGS variable. This affected, for example, a debuginfo/rpm header, and also resulted in CFLAGS appearing in built binaries such as fldigi, gmp, haproxy, etc.

As highlighted in last month’s report, the OpenWrt project (a Linux operating system targeting embedded devices such as wireless network routers) hosted a summit in Hamburg, Germany. Their full summit report and roundup is now available that covers many general aspects within that distribution, including the work on reproducible builds that was done during the event.

Debian

It was an extremely productive time in Debian this month in and around DebConf19, the 20th annual conference for both contributors and users, which was held at the Federal University of Technology in Paraná (UTFPR) in Curitiba, Brazil, from July 21 to 28. The conference was preceded by “DebCamp” from the 14th until the 19th, with an additional “Open Day” targeted at the more-general public on the 20th.

There were a number of talks touching on the topic of reproducible builds and secure toolchains throughout the conference, including:

There were naturally countless discussions regarding Reproducible Builds in and around the conference on the questions of tooling, infrastructure and our next steps as a project.

The release of Debian 10 buster has also meant the release cycle for the next release (codenamed “bullseye”) has just begun. As part of this, the Release Team recently announced that Debian will no longer allow binaries built and uploaded by maintainers on their own machines to be part of the upcoming release. This is great news not only for toolchain security in general but also in that it will ensure that all binaries that will form part of this release will likely have a .buildinfo file and thus metadata that could be used by others to reproduce and verify the builds.

Holger Levsen filed a bug against the underlying tool that maintains the Debian archive (“dak”) after he noticed that .buildinfo metadata files were not being automatically propagated if packages had to be manually approved or processed in the so-called “NEW queue”. After it was pointed out that the files were being retained in a separate location, Benjamin Hof proposed a potential patch for the issue which is pending review.

David Bremner posted to his blog about “Yet another buildinfo database” that provides a SQL interface for querying .buildinfo attestation documents, particularly focusing on identifying packages that were built with a specific — and possibly buggy — build-dependency. Later at DebConf, David demonstrated his tool live (starting at 36:30).

Ivo de Decker (“ivodd”) scheduled rebuilds of over 600 packages that last experienced an upload to the archive in December 2016 or earlier. This was so that they would be built using a version of the low-level dpkg package build tool that supports the generation of reproducible binary packages. The effect of this on the main archive will be deliberately staggered and thus visible throughout the upcoming weeks, potentially resulting in some of these packages now failing to build.

Joaquin de Andres posted an update regarding the work being done on continuous integration on Debian’s GitLab instance at DebConf19 in which he mentions, inter alia, a tool called atomic-reprotest. This is a relatively new utility to help debug failures logged by our reprotest tool which attempts to test whether a build is reproducible or not. This tool was also mentioned in a subsequent lightning talk.

Chris Lamb filed two bugs to drop the test jobs for both strip-nondeterminism (#932366) and reprotest (#932374) after modifying them to build on the Salsa server’s own continuous integration platform and Holger Levsen shortly resolved them.

Lastly, 63 reviews of Debian packages were added, 72 were updated and 22 were removed this month, adding to our large body of knowledge about identified issues. Chris Lamb added and categorised four new issue types: umask_in_java_jar_file, built_by-in_java_manifest_mf, timestamps_in_manpages_generated_by_lopsubgen and codadef_coda_data_files.

Software development

The goal of Benjamin Hof’s Software Transparency effort is to improve on the cryptographic signatures of the APT package manager by introducing a Merkle tree-based transparency log for package metadata and source code, in a similar vein to certificate transparency. This month, he pushed a number of repositories to our revision control system for further future development and review.

In addition, Bernhard M. Wiedemann updated his (deliberately) unreproducible demonstration project to add support for floating point variations as well as changes in the project’s copyright year.

Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Neal Gompa, Michael Schröder & Miro Hrončok responded to Fedora’s recent change to rpm-config with some new developments within rpm to fix an unreproducible “Build Date” and reverted a change to the Python interpreter to switch back to unreproducible/time-based compile caches.

Lastly, kpcyrd submitted a pull request for Alpine Linux to add SOURCE_DATE_EPOCH support to the abuild build tool in this operating system.
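The SOURCE_DATE_EPOCH convention itself is simple: the build is handed a fixed timestamp (typically that of the latest commit or changelog entry) to use in place of the current time. A sketch of the idea, with the build command being a placeholder:

$ export SOURCE_DATE_EPOCH=$(git log -1 --pretty=%ct)
$ ./build.sh    # any tool honouring SOURCE_DATE_EPOCH embeds this timestamp instead of "now"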


diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. It is run countless times a day on our testing infrastructure and is essential for identifying fixes and causes of non-deterministic behaviour.
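A typical invocation compares two supposedly identical artifacts and renders a report; a minimal sketch with placeholder file names:

$ diffoscope --html report.html build1/foo_1.0_amd64.deb build2/foo_1.0_amd64.deb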

This month, Chris Lamb made the following changes:

  • Add support for Java .jmod modules (#60). However, not all versions of file(1) support detection of these files yet, so we perform a manual comparison instead [].
  • If a command fails to execute but does not print anything to standard error, try and include the first line of standard output in the message we include in the difference. This was motivated by readelf(1) returning its error messages on standard output. (#59) []
  • Add general support for file(1) 5.37 (#57) but also adjust the code to not fail in tests when, eg, we do not have sufficiently newer or older version of file(1) (#931881).
  • Factor out the ability to ignore the exit codes of zipinfo and zipinfo -v in the presence of non-standard headers [], but only override the exit code from our special-cased calls to zipinfo(1) if they are 1 or 2 to avoid potentially masking real errors [].
  • Cease ignoring test failures in stable-backports. []
  • Add missing textual DESCRIPTION headers for .zip and “Mozilla”-optimised .zip files. []
  • Merge two overlapping environment variables into a single DIFFOSCOPE_FAIL_TESTS_ON_MISSING_TOOLS. []
  • Update some reporting:
    • Re-add “return code” noun to “Command foo exited with X” error messages. []
    • Use repr(..)-style output when printing DIFFOSCOPE_TESTS_FAIL_ON_MISSING_TOOLS in skipped test rationale text. []
    • Skip the extra newline in Output:\nfoo. []
  • Add some explicit return values to appease Pylint, etc. []
  • Also include the python3-tlsh in the Debian test dependencies. []
  • Released and uploaded versions 116, 117, 118, 119 & 120. [][][][][]

In addition, Marc Herbert provided a patch to catch failures to disassemble ELF binaries. []


Project website

There was yet more effort put into our website this month, including:

  • Bernhard M. Wiedemann:
    • Update multiple works to use standard (or at least consistent) terminology. []
    • Document an alternative Python snippet in the SOURCE_DATE_EPOCH examples. []
  • Chris Lamb:
    • Split out our non-fiscal sponsors with a description [] and make them non-display three-in-a-row [].
    • Correct references to 1&1 IONOS (née Profitbricks). []
    • Reduce ambiguity in our environment names. []
    • Recreate the badge image, saving the .svg alongside it. []
    • Update our fiscal sponsors. [][][]
    • Tidy the weekly reports section on the news page [], fixup the typography on the documentation page [] and make all headlines stand out a bit more [].
    • Drop some old CSS files and fonts. []
    • Tidy news page a bit. []
    • Fixup a number of issues in the report template and previous reports. [][][][][][]

Holger Levsen also added explanations on how to install diffoscope on OpenBSD [] and FreeBSD [] to its homepage and Arnout Engelen added a preliminary and work-in-progress idea for a badge or “shield” program for upstream projects. [][][].

A special thank you to Alexander Borkowski [], Georg Faerber [], and John Scott [] for their individual fixes. To err is human; to reproduce, divine.


strip-nondeterminism

strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build. This month, Niko Tyni provided a patch to use the Perl Sub::Override library for some temporary workarounds for issues in Archive::Zip instead of Monkey::Patch which was due for deprecation. [].
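For illustration, the tool is normally pointed at individual build artifacts once the build has finished; a minimal sketch with a placeholder file name:

$ strip-nondeterminism build/libexample.jar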

In addition, Chris Lamb made the following changes:

  • Identify data files from the COmmon Data Access (CODA) framework as being .zip files. []
  • Support OpenJDK “.jmod” files. []
  • Pass --no-sandbox if necessary to bypass seccomp-enabled version of file(1) which was causing a huge number of regressions in our testing framework.
  • Don’t just run the tests but build the Debian package instead using Salsa’s centralised scripts so that we get code coverage, Lintian, autopkgtests, etc. [][]
  • Update tests:
    • Don’t build release Git tags on salsa.debian.org. []
    • Merge the debian branch into the master branch to simplify testing and deployment [] and update debian/gbp.conf to match [].
  • Drop misleading and outdated MANIFEST and MANIFEST.SKIP files as they are not used by our release process. []


Test framework

We operate a comprehensive Jenkins-based testing framework that powers tests.reproducible-builds.org. The following changes were performed in the last month:

  • Holger Levsen:
    • Debian-specific changes:
      • Make a large number of adjustments to support the new Debian bullseye distribution and the release of buster. [][][][][][][] [][][][]
      • Fix the colours for the five suites now being built. []
      • Make a number of code improvements to the calculation of our “metapackage” sets, including refactoring and changes of email address, etc. [][][][][]
      • Add the “http-proxy” variable to the displayed node info. []
    • Alpine changes:
      • Rebuild the webpages every two hours (instead of twice per hour). []
    • Reproducible tooling:
      • Fix the detection of version number in Arch Linux. []
      • Drop reprotest and strip-nondeterminism jobs as we run that via Salsa CI now. [][]
      • Add a link to current SQL database schema. []
  • Mattia Rizzolo:
    • Make a number of adjustments to support the new Debian bullseye distribution. [][][][]
    • Ensure that our arm64 hosts always trust the Debian archive keyring. []
    • Enable the backports repositories on the arm64 build hosts. []

Holger Levsen [][][] and Mattia Rizzolo [][][] performed the usual node maintenance and lastly, Vagrant Cascadian added support to generate a reproducible-tracker.json metadata file for the next release of Debian (bullseye). []

On the mailing list

Chris Lamb cross-posted his reply to the “Re: file(1) now with seccomp support enabled” thread that was originally started on the debian-devel Debian list. In his post, he refers to strip-nondeterminism not being able to accommodate the additional security hardening in file(1), and to the changes made to the tool in order to fix this issue, which was causing a huge number of regressions in our testing framework.

Matt Bearup wrote about his experience when he generated different checksums for the libgcrypt20 package, which resulted in some pointers that one should use the corresponding .buildinfo post-build certificate when attempting to reproduce any particular build.

Vagrant Cascadian posted a request for comments regarding a potential proposal to the GNU Tools “Cauldron” gathering to be held in Montréal, Canada during September 2019 and Bernhard M. Wiedemann posed a query about using consistent terms on our webpages and elsewhere.

Lastly, in a thread titled “Reproducible Builds - aiming for bullseye: comments and a purpose” Jathan asked about whether we had considered offering “101”-like beginner sessions to fix packages that are not currently reproducible.


Getting in touch

If you are interested in contributing to the Reproducible Builds project, please visit the Contribute page on our website. You can also get in touch with us via:

This month’s report was written by Benjamin Hof, Bernhard M. Wiedemann, Chris Lamb, Holger Levsen and Vagrant Cascadian. It was subsequently reviewed by a bunch of Reproducible Builds folks on IRC and the mailing list.

Thorsten Alteholz: My Debian Activities in July 2019

Sunday 4th of August 2019 06:30:16 PM

FTP master

After the release of Buster I could start with real work in NEW again. Even the temperature could not keep me from rejecting something. So this month I accepted 279 packages and rejected 15. The overall number of packages that got accepted was 308.

Debian LTS

This was my sixty-first month of doing work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 18.5h. During that time I did LTS uploads of:

  • [DLA 1849-1] zeromq3 security update for one CVE
  • [DLA 1833-2] bzip2 regression update for one patch
  • [DLA 1856-1] patch security update for one CVE
  • [DLA 1859-1] bind9 security update for one CVE
  • [DLA 1864-1] patch security update for one CVE

I am glad that I could finish the bind9 upload this month.
I also started to work on ruby-mini-magick and python2.7. Unfortunately, when building both packages (even without new patches), their test suites fail. So I first have to fix that as well.

Last but not least I did ten days of frontdesk duties. This was more than a week as everybody was at DebConf and I seemed to be the only one at home …

Debian ELTS

This month was the fourteenth ELTS month.

During my allocated time I uploaded:

  • ELA-132-2 of bzip2 for an upstream regression
  • ELA-144-1 of patch for one CVE
  • ELA-147-1 of patch for one CVE
  • ELA-148-1 of bind9 for one CVE

I also did some days of frontdesk duties.

Other stuff

This month I re-uploaded some Go packages that would not migrate because they had been binary uploads.

I also filed rm bugs to remove all alljoyn packages. Upstream is dead, no one is using this software anymore and bugs won’t be fixed.

Emmanuel Kasper: Debian 9 -> 10 Upgrade report

Sunday 4th of August 2019 03:23:13 PM
I upgraded my laptop and VPS to Debian 10, as usual in Debian everything worked out of the box, the necessary daemons restarted without problems.
I followed my usual upgrade approach, which involves upgrading a backup of the root FS of the server in a container, to test the upgrade path, followed by a config file merge.

I had one major problem, though, connecting to my PHP-based Dokuwiki website at subsole.org, which displayed a rather unwelcoming screen after the upgrade:




I was a bit unsure at first, as I thought I would need to fight my way through the nine different config files of the dokuwiki Debian package in /etc/dokuwiki.

However, the issue was not so complicated: as the apache2 PHP module was disabled, apache2 was outputting the source code of dokuwiki instead of executing it. As you can see, I don't do PHP that often.

A simple
a2enmod php7.3
systemctl restart apache2


fixed the issue.

I understood the problem after noticing that a simple phpinfo() would not get executed by the server.

I would have expected the upgrade to automatically enable the new php7.3 module, since the oldstable php7.0 apache module was removed as part of the upgrade, but I am not sure what the Debian policy would recommend here, or if I am missing something else.
If I can reproduce the issue in an upgrade scenario, I'll probably submit a bug to the PHP package maintainers.
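For anyone debugging the same symptom, here is a rough way to check which PHP module Apache has enabled on Debian, and to enable it if it is missing; the php7.3 name matches buster and should be adjusted for other releases:

```
# list enabled Apache modules and look for a PHP one
a2query -m | grep -i php

# compare with what is available but not enabled
ls /etc/apache2/mods-available/ | grep '^php'

# enable the buster PHP module and restart Apache (same fix as above)
a2enmod php7.3 && systemctl restart apache2
```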

Debian GSoC Kotlin project blog: Packaging Dependencies Part 2; and plan on how to.

Sunday 4th of August 2019 11:52:26 AM
Mapping and packaging dependencies part 1.

Hey all, I had my exams during weeks 8 and 9 so I couldn't update my blog nor get much accomplished; but last week was completely free, so I managed to finish packaging all the dependencies from packaging dependencies part 1. Since some of you may not remember how I planned to tackle packaging dependencies, I'll mention it here one more time.

"I split this task into two sub tasks that can be done independently. The 2 subtasks are as follows:
->part 1: make the entire project build successfully without :buildSrc:prepare-deps:intellij-sdk:build
--->part1.1:package these dependencies
->part 2: package the dependencies in :buildSrc:prepare-deps:intellij-sdk:build ; i.e try to recreate whatever is in it."

This is taken from my last blog, which was specifically on packaging dependencies in part 1. Now I am happy to tell all of you that packaging dependencies for part 1 is complete and all the needed packages are either in the NEW queue or already in the sid archive as of 04 August 2019. I would like to thank ebourg, seamlik and andrewsh for helping me with this.

How to build kotlin 1.3.30 after dependency packaging part 1 and design choices.

Before I go into how to build the project as it is now I'll briefly talk of some of the choices I made while packaging dependencies in part 1 and general things you should know.

Two dependencies in part 1 were Jcabi-aether and sonatype-aether. Both of these are incompatible with maven-3, and they were only used in one single file in the entire dist task graph. Considering the time it would take to migrate these dependencies to maven-3, I chose to patch out the one file that needed both of them; that change is denoted by this commit. Also, it must be noted that so far we are only trying to build the dist task, which only builds the basic Kotlin compiler; it doesn't build the maven artifacts with poms, nor does it build the kotlin-gradle-plugin. Those things are built and installed in the local maven repository (the .m2 directory in the source project when you invoke debuild) using the install task, which I am planning to tackle once we successfully finish building the dist task. Invoking the install task in our master as of Aug 04 2019 will build and install all available maven artifacts into the local maven repo, but this again will not have kotlin-gradle-plugin or such, since I have removed those subprojects as they aren't needed by the dist task. Keeping them would mean that I would have to convert and patch them to Groovy if they are written in .kts, since they are evaluated during the initialization phase.

Now we are ready to build the project. I have written a simple makefile which copies all the needed bootstrap jars and prebuilts to their proper places. All you need to do to build the project is:

1. git clone https://salsa.debian.org/m36-guest/kotlin-1.3.30.git
2. cd kotlin-1.3.30
3. git checkout buildv1
4. debian/pseudoBootstrap bootstrap
5. debuild -b -rfakeroot -us -uc

Note that steps 1 through 4 are only needed the very first time you build this project; every time after that, just invoke step 5.

Packaging dependencies part 2.

Now packaging dependencies part 2 involves packaging the dependencies in :buildSrc:prepare-deps:intellij-sdk:build. This is the folder that is taking up the most space in Kotlin-1.3.30-temp-requirements. The sole purpose of this task is to reduce the jars in this folder and substitute them with jars from the Debian environment. I have managed to map out which of these jars are needed for the dist task graph, and they are:

```
saif@Hope:/srv/chroot/KotlinCh/home/kotlin/kotlin-1.3.30-debian-maintained/buildSrc/prepare-deps/intellij-sdk/repo/kotlin.build.custom.deps/183.5153.4$ ls -R
.:
intellij-core  intellij-core.ivy.xml  intellijUltimate  intellijUltimate.ivy.xml  jps-standalone  jps-standalone.ivy.xml

./intellij-core:
asm-all-7.0.jar  intellij-core.jar  java-compatibility-1.0.1.jar

./intellijUltimate:
lib

./intellijUltimate/lib:
asm-all-7.0.jar  guava-25.1-jre.jar  jna.jar  log4j.jar  openapi.jar  picocontainer-1.2.jar  platform-impl.jar  trove4j.jar
extensions.jar  jdom.jar  jna-platform.jar  lz4-1.3.0.jar  oro-2.0.8.jar  platform-api.jar  streamex-0.6.7.jar  util.jar

./jps-standalone:
jps-model.jar
```

This folder is treated as an ant repository and the code for that is here. Build.gradle files use this via methods like this, which tell the project to take only the needed jars from the collection. I am planning on replacing this with plain old maven repository resolution using a format like compile(groupID:artifactId:version), but we will need the jars to be in our system anyway; at least now we know that this particular file structure can be avoided.

Please note that these jars listed above by me are only needed for the dist task and the ones needed for other subprojects in the original install task can still be found here.

The following are the dependencies needed for part 2. * denotes what I am not sure of. Contact me before you attempt to package any of the intellij dependencies, as we only need parts of those and I have a script to tell what we need.

1.->java-compatibility-1.0.1 -> https://github.com/JetBrains/intellij-deps-java-compatibility (DONE: here)
2.->jps-model -> https://github.com/JetBrains/intellij-community/tree/master/jps
3.->intellij-core, open-api -> https://github.com/JetBrains/intellij-community/tree/183.5153
4.->streamex-0.6.7 -> https://github.com/amaembo/streamex/tree/streamex-0.6.7 (DONE: here)
5.->guava-25.1 -> https://github.com/google/guava/tree/v25.1 ([WIP-Saif])
6.->lz4-java -> https://github.com/lz4/lz4-java/blob/1.3.0/build.xml(DONE:here)
7.->libjna-java & libjna-platform-java recompiled in jdk 8. -> https://salsa.debian.org/java-team/libjna-java (DONE : commit)
8.->liboro-java recompiled in jdk8 -> https://salsa.debian.org/java-team/liboro-java (DONE : commit)
9.->picocontainer-1.3 refining -> https://salsa.debian.org/java-team/libpicocontainer-1-java (DONE: here)
10.-> * platform-api -> https://github.com/JetBrains/intellij-community/tree/183.5153/platform
11.-> * util -> https://github.com/JetBrains/intellij-community/tree/183.5153/platform
12.-> * platform-impl -> https://github.com/JetBrains/intellij-community/tree/183.5153/platform

So if any of you want to help please kindly take on any of these and package them.

!!NOTE-ping me if you want to build kotlin in your system and are stuck!!

Here is a link to the work I have done so far. You can find me as m36 or m36[m] on #debian-mobile and #debian-java in OFTC.

I'll try to maintain this blog and post the major updates weekly.

Mike Gabriel: MATE 1.22 landed in Debian unstable

Sunday 4th of August 2019 10:55:13 AM

Last week, I did a bundle upload of (nearly) all MATE 1.22 related components to Debian unstable. Packages should have been built by now for most of the 24 architectures supported by Debian (I just fixed an FTBFS of mate-settings-daemon on non-Linux host archs). The current/latest build status can be viewed on the DDPO page of the Debian+Ubuntu MATE Packaging Team [1].

Credits

Again a big thanks goes to the packaging team and also to the upstream maintainers of the MATE desktop environment. Martin Wimpress and I worked on most parts of the packaging for the 1.22 release series this time. On the upstream side, a big thanks goes to all developers, esp. Vlad Orlov and Wolfgang Ulbrich for fixing / reviewing many many issues / merge requests. Good work, folks!!! plus Big Thanks!!!

References


light+love,
Mike Gabriel (aka sunweaver)

Andy Simpkins: Debconf19: Curitiba, Brazil – AV Setup

Saturday 3rd of August 2019 06:37:48 PM

I write this on Monday whilst sat in the airport in São Paulo awaiting my onward flight back to the UK and the fun of the change of personnel in Downing street that has been something I have fortunately been able to ignore whilst at DebConf.  [Edit: and finishing writing the Saturday after getting home after much sleep]

Arriving on the first Sunday of DebCamp meant that I was one of the first people to arrive; however most of the video team were either arriving about the same time or had landed before me.  We spent most of our daytime during DebCamp setting up for the following week's conference.

Step one was getting a network operational.  We had been offered space for our servers in a university machine room, but chose instead to occupy the two ‘green’ rooms below the main auditorium stage, using one as a makeshift V/NOC and the other as our machine room, as this gave us continuous and easy access [0] to our servers whilst separating us from the fan noise.  I ran additional network cable between the back of the stage and our makeshift machine room; routing the cable around the back of the stage and into the ceiling void to just outside the V/NOC was relatively simple.  Routing into the V/NOC needed a bit of help to get the cable through a small gap we found where some other cables ran through the ‘fire break’.  Getting a cable between the two ‘green rooms’, however, was a PITA.  Many people, including myself, eventually gave up before I finally returned to the problem and, with the aid of a fully extended server rail gaffer-taped to a clothing rail to make a 4m long pole, I was eventually able to deliver a cable through the 3 floor supports / fire breaks that separated the two rooms (and before someone suggests I should have used a ‘fish’ wire, that was what we tried first).  The university were providing us with the backbone network, but it did take a couple of meetings to get our video network in its own separate VLAN and get it to pass traffic unmolested between nodes.

The final network setup (for video that is – the conference was piggy-backing on the university WiFi network and there was also a DebConf network in the National Inn) was to make live the fibre links that had been installed prior to our arrival.  Two links had been pulled through so that we could join the ‘Video Confrencia’ room and the ‘Front Desk’ to the rest of the university network; however, when we came to commission them we discovered that the wrong media converters had been supplied: they should have been for single-mode fibre, but multi-mode converters had been delivered.  Nothing that the university IT department couldn't solve, and indeed they did as soon as we pointed out the mistake.  They provided us with replacement media converters capable of driving a signal down *both* single- and multi-mode fibre, something I have not seen before.

For the rest of the week Paddatrapper and myself spent most of our time running cables and setting up the three talk rooms that were to be filmed.  Phls had been able to provide us with details of the venue’s AV system AND scale plans of the three talk rooms; this, along with the photos provided by the local team and Tumbleweed’s visit to the site, enabled us to plan the cable runs right down to the location of power sockets.

I am going to add scale plans & photos to the things that we request for all future DebConfs.  They made planning and setup so much easier and faster.  Of course we still ended up running more cables than we originally expected – we ran Ethernet front to back in all three rooms when we originally intended to only do this in Video Confrencia (the small BoF room), because it turned out that the sockets at different ends of the room were on differing packet switches that in turn fed into the university backbone.  We were informed that the backbone is 1Gb/s, which meant that the video LAN would have consumed the entire bandwidth of the backbone with nothing left over.

We have 200Mb/s streams from the OPSIS frame grabbers and a second 200Mb/s output stream from each room.  That equates to exactly 1Gb/s (the Video Confrencia BoF room is small enough that we were always going to run a front/back cable), and that is before any backups of recordings to our server.  As it turned out that wasn’t true, but by then we had already run the cables and got things working…

I won’t blog about the software setup of the servers, our back-end CDN or the review process – this is not my area of expertise.  You need to thank Olasd, Tumbleweed & Ivo for the on-site system setup and Walter for the review process.  Actually, there are also Carlfk, Ubec, Valhalla and I am sure numerous other people that I am too tired to remember; I apologise for forgetting you…

So back to physical setup.  The main auditorium was operational.  I had re-patched the mixing desk to give a setup as close as possible in all three talk rooms – we are most interested in audio for the stream/recording, and so use the main mix output for this and move the room PA onto a sub-group output.  Unusually for a DebConf, I found that I needed to ensure that there *was* a ground connection at the desk for all output feeds – it appears that there was NO earth in the entire auditorium; well, there was at some point back in time, but it had been systematically removed, either by cutting off the earth pin on power plugs or, unfortunately for us, by cutting and removing cables from any bonding points, behind sockets etc.   Done, probably, because RCDs kept tripping and clearly the problem is that there is an earth present to leak into and not that there is a leak in the first place, or just long cable runs into inductive loads that mean that a different ‘trip curve’ needed to be selected <sigh>.

We still had significant mains hum on the PA system (slightly less than was present before I started room setup, so nothing I had done).  The venue AV team pointed out that they had a magnetic coupler AND an audio DSP unit in front of the PA amplifier stack – telling me that this was to reduce the hum.  Fortunately for us the venue had 4 equalisers that I could use, one for each of the mics, so I was able to knock out 60Hz, 120Hz and dip higher harmonics, and this again made an improvement.  Apparently we were getting the best results in living memory of the on-site AV team, so at this point I stopped tweaking the setup – “It was good enough”, we could live with the remaining hum.

The other two talk rooms were pretty much the same setup, only the rooms are smaller.  The exception being that whilst we do have a small portable PA in the Video Confrencia room, we only use it for audio from the presenter’s laptop – the room was so small there was no point in amplifying presenters…

Right, I could now move on to ‘lighting’.  We were supposed to use the flood ‘work’ lights above the stage, but quite a few of the halogen lamps were blown.  This meant that there were far too many ‘dark’ patches along the stage.  Additionally the colour temperatures of the different work lights were all over the place, and this would cause havoc with white balance; still, we could have lived with this…  I asked about getting the lamps replaced.  Initially I was told no, but once I pointed out the problem to a more senior member of staff they agreed that the lamps could be replaced and that it would be done the following day.  It wasn’t.  I offered that we could replace the lamps but was then told that they would now be doing this as part of a service in a few weeks’ time.  I was however told that instead, if I was prepared to rig them myself, we could use the stage lights on the dimmers.  Win!  This would have been my preferred option all along, and I suspect we were only offered this having started to build a reasonable working relationship with the site AV team.  I was able to sign out a bunch of lamps from the stores and rig them as I saw fit.  I was given a large wooden step ladder, and shown how to access the catwalk.  I could now rig lights where I wanted them.

Two overhead floods and two spots were used to light the lectern from different angles.  Three overhead floods and three focused cans were used to light the discussion table.  I also hung two forward-facing spots to illuminate someone stood at the question mic, and finally 4 cans (2 focus cans and a pair of 1kW par cans sharing the same plug) to add some light to the front 5 or 6 rows of the audience.  The venue AV team repaired the DMX cable to the lighting dimmers and we were good to go…  well, just as soon as I had worked out the DMX addressing / cable patching at the dimmer banks, and then there was a short time whilst I read the instructions for the desk – enough to apply ‘soft patches’ so I could allocate a fader to each dimmer channel we were using.  I then read the instructions a bit further and came back the following day and programmed appropriate scenes so that the table could be lit using one ‘slider’, the lectern by another and so on.  JMW came back later in the week and updated the program again to add a timed fade up or down, and we also set a maximum level on the audience lights to stop us from ‘blinding’ people in the first couple of rows (we set the maximum value of that particular scene to be 20% available intensity).

Lighting in the mini auditorium was from simple overhead ‘domestic’ lamps; I still needed to get some bulbs replaced, and then move / adjust them to best light a speaker stood at the lectern or a discussion panel sat at the table.   Finally, we had no control of lighting in Video Confrencia (about normal for a DebConf).

Later in the week we revisited the hum problem.  We confirmed that the hum was no longer being emitted out of the desk, so it must have been on the cable run to the stack or in the stack itself.  The hum was still annoying and Kyle wanted to confirm that the DSP at the top of the amp stack was correctly set up – could we improve things?  It took a little persuasion but eventually we were granted permission, and the password, to access the DSP.  The DSP had not been configured properly at all.  Kyle applied a 60Hz notch filter, and this made some difference.  I suggested a comb filter, which Kyle then applied for 60Hz and 5 or 6 orders of harmonics, and that did the trick (thanks Kyle – I wouldn’t have had a clue how to drive the DSP).  There was no longer any perceivable noise coming out of the left hand speakers, but there was still a noticeable, though much lower, hum from the right.  We removed the input cable to the amp stack and yes, the hum was still there, so we were picking up noise between the amps and the speakers!  A quick check of turning off the lighting dimmers and the noise dropped again.  I started chasing the right hand speaker cables – they run up and over the stage along the catwalk, in the same bundle as all the unearthed lighting AND permanent power cables.  We were inducing mains noise directly onto the speaker cables.  The only fix for this would be to properly screen AND separate the speaker feed cables.  Better yet, send a balanced audio feed, separated from the power cables, to the right hand side of the stage and move the right hand amplifiers to that side of the stage.  Nothing we could do – but something that we could point out to the venue AV team who, strangely, hadn’t considered this before…

 

 

[0] Where continuous access meant “whilst we had access to the site” (the whole campus is closed overnight)

Dirk Eddelbuettel: RcppCCTZ 0.2.6

Saturday 3rd of August 2019 12:45:00 PM

A shiny new release 0.2.6 of RcppCCTZ is now at CRAN.

RcppCCTZ uses Rcpp to bring CCTZ to R. CCTZ is a C++ library for translating between absolute and civil times using the rules of a time zone. In fact, it is two libraries: one for dealing with civil time (human-readable dates and times), and one for converting between absolute and civil times via time zones. And while CCTZ is made by Google(rs), it is not an official Google product. The RcppCCTZ page has a few usage examples and details. This package was the first CRAN package to use CCTZ; by now at least three others do—using copies in their packages, which remains less than ideal.

This version updates to CCTZ release 2.3 from April, plus changes accrued since then. It also switches to tinytest which, among other benefits, permits continued testing of the installed package.

Changes in version 0.2.6 (2019-08-03)
  • Synchronized with upstream CCTZ release 2.3 plus commits accrued since then (Dirk in #30).

  • The package now uses tinytest for unit tests (Dirk in #31).

We also have a diff to the previous version thanks to CRANberries. More details are at the RcppCCTZ page; code, issue tickets etc at the GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Bits from Debian: New Debian Developers and Maintainers (May and June 2019)

Saturday 3rd of August 2019 08:00:00 AM

The following contributors got their Debian Developer accounts in the last two months:

  • Jean-Philippe Mengual (jpmengual)
  • Taowa Munene-Tardif (taowa)
  • Georg Faerber (georg)
  • Kyle Robbertze (paddatrapper)
  • Andy Li (andyli)
  • Michal Arbet (kevko)
  • Sruthi Chandran (srud)
  • Alban Vidal (zordhak)
  • Denis Briand (denis)
  • Jakob Haufe (sur5r)

The following contributors were added as Debian Maintainers in the last two months:

  • Bobby de Vos
  • Jongmin Kim
  • Bastian Germann
  • Francesco Poli

Congratulations!

Elana Hashman: My favourite bash alias for git

Saturday 3rd of August 2019 04:00:00 AM

I review a lot of code. A lot. And an important part of that process is getting to experiment with said code so I can make sure it actually works. As such, I find myself with a frequent need to locally run code from a submitted patch.

So how does one fetch that code? Long ago, when I was a new maintainer, I would add the remote repository I was reviewing to my local repo so I could fetch that whole fork and target branch. Once downloaded, I could play around with that on my local machine. But this was a lot of overhead! There was a lot of clicking, copying, and pasting involved in order to figure out the clone URL for the remote repo, and a bunch of commands to set it up. It felt like a lot of toil that could be easily automated, but I didn't know a better way.

One day, when a coworker of mine saw me struggling with this, he showed me the better way.

Turns out, most hosted git repos with pull request functionality will let you pull down a read-only version of the changeset from the upstream fork using git, meaning that you don't have to set up additional remote tracking to fetch and run the patch or use platform-specific HTTP APIs.

Using GitHub's git references for pull requests

I first learned how to do this on GitHub.

GitHub maintains a copy of pull requests against a particular repo at the pull/NUM/head reference. (More documentation on refs here.) This means that if you have set up a remote called origin and someone submits a pull request #123 against that repository, you can fetch the code by running

$ git fetch origin pull/123/head
remote: Enumerating objects: 3, done.
remote: Counting objects: 100% (3/3), done.
remote: Total 4 (delta 3), reused 3 (delta 3), pack-reused 1
Unpacking objects: 100% (4/4), done.
From github.com:ehashman/hack_the_planet
 * branch            refs/pull/123/head -> FETCH_HEAD

$ git checkout FETCH_HEAD
Note: checking out 'FETCH_HEAD'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b <new-branch-name>

HEAD is now at deadb00 hack the planet!!!

Woah.

Using pull request references for CI

As a quick aside: This is also handy if you want to write your own CI scripts against users' pull requests. Even better—on GitHub, you can fetch a tree with the pull request already merged onto the top of the current master branch by fetching pull/NUM/merge. (I'm not sure if this is officially documented somewhere, and I don't believe it's widely supported by other hosted git platforms.)
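As a sketch of how that can be wired into a CI job (the names and values here are placeholders, and pull/NUM/merge is GitHub-specific):

```
#!/bin/bash
set -euo pipefail

PR_NUM="$1"          # pull request number handed over by the CI system

# fetch the PR pre-merged onto the current master, then check it out
git fetch origin "pull/${PR_NUM}/merge"
git checkout FETCH_HEAD

# run the project's test entry point (placeholder)
make test
```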

If you also specify the --depth flag in your fetch command, you can fetch code even faster by limiting how much upstream history you download. It doesn't make much difference on small repos, but it is a big deal on large projects:

elana@silverpine:/tmp$ time git clone https://github.com/kubernetes/kubernetes.git
Cloning into 'kubernetes'...
remote: Enumerating objects: 295, done.
remote: Counting objects: 100% (295/295), done.
remote: Compressing objects: 100% (167/167), done.
remote: Total 980446 (delta 148), reused 136 (delta 128), pack-reused 980151
Receiving objects: 100% (980446/980446), 648.95 MiB | 12.47 MiB/s, done.
Resolving deltas: 100% (686795/686795), done.
Checking out files: 100% (20279/20279), done.

real    1m31.035s
user    1m17.856s
sys     0m7.782s

elana@silverpine:/tmp$ time git clone --depth=10 https://github.com/kubernetes/kubernetes.git kubernetes-shallow
Cloning into 'kubernetes-shallow'...
remote: Enumerating objects: 34305, done.
remote: Counting objects: 100% (34305/34305), done.
remote: Compressing objects: 100% (22976/22976), done.
remote: Total 34305 (delta 17247), reused 19060 (delta 10567), pack-reused 0
Receiving objects: 100% (34305/34305), 34.22 MiB | 10.25 MiB/s, done.
Resolving deltas: 100% (17247/17247), done.

real    0m31.495s
user    0m3.941s
sys     0m1.228s

Writing the pull alias

So how can one harness all this as a bash alias? It takes just a little bit of code:

pull() {
    git fetch "$1" pull/"$2"/head && git checkout FETCH_HEAD
}

alias pull='pull'

Then I can check out a PR locally with the short command pull <remote> <num>:

$ pull origin 123
remote: Enumerating objects: 4, done.
remote: Counting objects: 100% (4/4), done.
remote: Total 5 (delta 4), reused 4 (delta 4), pack-reused 1
Unpacking objects: 100% (5/5), done.
From github.com:ehashman/hack_the_planet
 * branch            refs/pull/123/head -> FETCH_HEAD
Note: checking out 'FETCH_HEAD'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b <new-branch-name>

HEAD is now at deadb00 hack the planet!!!

You can even add your own commits, save them on a local branch, and push that to your collaborator's repository to build on their PR if you're so inclined... but let's not get too ahead of ourselves.

Changeset references on other git platforms

These pull request refs are not a special feature of git itself, but rather a per-platform implementation detail using an arbitrary git ref format. As far as I'm aware, most major git hosting platforms implement this, but they all use slightly different ref names.

GitLab

At my last job I needed to figure out how to make this work with GitLab in order to set up CI pipelines with our Jenkins instance. Debian's Salsa platform also runs GitLab.

GitLab calls user-submitted changesets "merge requests" and that language is reflected here:

git fetch origin merge-requests/NUM/head

They also have some nifty documentation for adding a git alias to fetch these references. They do so in a way that creates a local branch automatically, if that's something you'd like—personally, I check out so many patches that I would not be able to deal with cleaning up all the extra branch mess!
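If you prefer the throwaway detached-HEAD workflow of the pull() function above over GitLab's branch-creating alias, a near-identical helper works; the name mrpull is my own invention:

```
mrpull() {
    # fetch a GitLab merge request read-only and check it out at FETCH_HEAD
    git fetch "$1" merge-requests/"$2"/head && git checkout FETCH_HEAD
}

# usage: mrpull origin 42
```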

BitBucket

Bad news: as of the time of publication, this isn't supported on bitbucket.org, even though a request for this feature has been open for seven years. (BitBucket Server supports this feature, but that's standalone and proprietary, so I won't bother including it in this post.)

Gitea

While I can't find any official documentation for it, I tested and confirmed that Gitea uses the same ref names for pull requests as GitHub, and thus you can use the same bash/git aliases on a Gitea repo as those you set up for GitHub.

Saved you a click?

Hope you found this guide handy. No more excuses: now that it's just one short command away, go forth and run your colleagues' code locally!

Sven Hoexter: From 30 to 230 docker containers per host

Friday 2nd of August 2019 02:44:25 PM

I could not find much information on the interwebs about how many containers you can run per host. So here are mine, and the issues we ran into along the way.

The Beginning

In the beginning there were virtual machines running with 8 vCPUs and 60GB of RAM. They started to serve around 30 containers per VM. Later on we managed to squeeze around 50 containers per VM.

Initial orchestration was done with swarm, later on we moved to nomad. Access was initially fronted by nginx with consul-template generating the config. When it did not scale anymore nginx was replaced by Traefik. Service discovery is managed by consul. Log shipping was initially handled by logspout in a container, later on we switched to filebeat. Log transformation is handled by logstash. All of this is running on Debian GNU/Linux with docker-ce.

At some point it did not make sense anymore to use VMs. We've no state inside the containerized applications anyway. So we decided to move to dedicated hardware for our production setup. We settled with HPe DL360G10 with 24 physical cores and 128GB of RAM.

THP and Defragmentation

When we moved to the dedicated bare metal hosts we were running Debian/stretch + Linux from stretch-backports, at that time Linux 4.17. These machines were sized to run 95+ containers. Once we were above 55 containers we started to see occasional hiccups. First occurrences lasted only for 20s, then 2min, and suddenly some lasted for around 20min. Our system metrics, as collected by prometheus-node-exporter, could only provide vague hints. The metric export did work, so processes were executed. But the CPU usage and subsequently the network throughput went down to close to zero.

I've seen similar hiccups in the past with Postgresql running on a host with THP (Transparent Huge Pages) enabled. So a good bet was to look into that area. By default /sys/kernel/mm/transparent_hugepage/enabled is set to always, so THP are enabled. We stick to that, but changed the defrag mode /sys/kernel/mm/transparent_hugepage/defrag (since Linux 4.12) from the default madvise to defer+madvise.

This moves page reclaims and compaction for pages which were not allocated with madvise to the background, which was enough to get rid of those hiccups. See also the upstream documentation. Since there is no sysctl like facility to adjust sysfs values, we're using the sysfsutils package to adjust this setting after every reboot.
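For reference, the sysfsutils snippet for this could look roughly like the following (the path is relative to /sys and the file name is arbitrary):

```
# /etc/sysfs.d/thp.conf -- applied by the sysfsutils init script on boot
kernel/mm/transparent_hugepage/enabled = always
kernel/mm/transparent_hugepage/defrag = defer+madvise
```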

Conntrack Table

Since the default docker networking setup involves a shitload of NAT, it shouldn't be surprising that nf_conntrack will start to drop packets at some point. We're currently fine with setting the sysctl tunable

net.netfilter.nf_conntrack_max = 524288

but that's very much up to your network setup and traffic characteristics.
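To persist such a value across reboots, the usual route is a drop-in under /etc/sysctl.d; the file name below is arbitrary and the value is simply the one quoted above:

```
# /etc/sysctl.d/90-conntrack.conf
net.netfilter.nf_conntrack_max = 524288

# apply immediately without rebooting:
#   sysctl --system
```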

Inotify Watches and Cadvisor

Along the way cadvisor refused to start at one point. Turned out that the default settings (again sysctl tunables) for

fs.inotify.max_user_instances = 128
fs.inotify.max_user_watches = 8192

are too low. We increased to

fs.inotify.max_user_instances = 4096
fs.inotify.max_user_watches = 32768
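A rough way to see how many inotify instances are actually open on a host (it counts inotify file descriptors across all processes, so run it as root to see everything):

```
# each match is one inotify instance held by some process
find /proc/[0-9]*/fd -lname 'anon_inode:inotify' 2>/dev/null | wc -l
```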

Ephemeral Ports

We didn't run into an issue with running out of ephemeral ports directly, but dockerd has a constant issue of keeping track of ports in use and we already see collisions appear regularly. Very unscientifically we set the sysctl

net.ipv4.ip_local_port_range = 11000 60999

NOFILE limits and Nomad

Initially we restricted nomad (via systemd) with

LimitNOFILE=65536

which apparently is not enough for our setup once we were crossing the 100 container per host limit. Though the error message we saw was hard to understand:

[ERROR] client.alloc_runner.task_runner: prestart failed: alloc_id=93c6b94b-e122-30ba-7250-1050e0107f4d task=mycontainer error="prestart hook "logmon" failed: Unrecognized remote plugin message:

This was solved by following the official recommendation and setting

LimitNOFILE=infinity
LimitNPROC=infinity
TasksMax=infinity

The main lead here was looking into the "hashicorp/go-plugin" library source and understanding that it tries to read the stdout of some other process, which sounded roughly like someone would, at some point, have to open a file.
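Assuming the unit is called nomad.service, a systemd drop-in carrying those limits could look like this (created for example with systemctl edit nomad):

```
# /etc/systemd/system/nomad.service.d/override.conf
[Service]
LimitNOFILE=infinity
LimitNPROC=infinity
TasksMax=infinity

# afterwards: systemctl daemon-reload && systemctl restart nomad
```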

Running out of PIDs

Once we were close to 200 containers per host (test environment with 256GB RAM per host), we started to experience failures of all kinds because processes could no longer be forked. Since that was also true for completely fresh user sessions, it was clear that we were hitting some global limitation and not something bound to the session via a PAM module.

It's important to understand that most of our workloads are written in Java, and a lot of the other software we use is written in Go. So we have a lot of threads, which in Linux are presented as "Lightweight Processes" (LWP), and every LWP still exists with a distinct PID out of the global PID space.

With /proc/sys/kernel/pid_max defaulting to 32768 we actually ran out of PIDs. We increased that limit vastly, probably way beyond what we currently need, to 500000. The actual limit on 64-bit systems is 2^22 (4194304) according to man 5 proc.
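Two quick checks help here: counting the lightweight processes currently holding a PID, and raising the limit at runtime (500000 being the value mentioned above):

```
# number of threads/LWPs currently occupying PIDs (minus the ps header line)
ps -eLf | tail -n +2 | wc -l

# current limit, then raise it; persist via /etc/sysctl.d as with the other tunables
cat /proc/sys/kernel/pid_max
sysctl -w kernel.pid_max=500000
```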

More in Tux Machines

Audiocasts/Shows: Jupiter (Linux Academy) and TLLTS

Android Leftovers

KMyMoney 5.0.6 released

The KMyMoney development team today announces the immediate availability of version 5.0.6 of its open source Personal Finance Manager. Another maintenance release is ready: KMyMoney 5.0.6 comes with some important bugfixes. As usual, problems have been reported by our users and the development team fixed some of them in the meantime. The result of this effort is the brand new KMyMoney 5.0.6 release. Despite even more testing we understand that some bugs may have slipped past our best efforts. If you find one of them, please forgive us, and be sure to report it, either to the mailing list or on bugs.kde.org. Read more

Games: Don't Starve Together, Cthulhu Saves the World, EVERSPACE 2 and Stadia

  • Don't Starve Together has a big free update adding in boats and a strange island

    Klei Entertainment have given the gift of new features to their co-op survival game Don't Starve Together, with the Turn of Tides update now available. Taking a little inspiration from the Shipwrecked DLC available for the single-player version Don't Starve, this new free update enables you to build a boat to carry you and other survivors across the sea. Turn of Tides is the first part of a larger update chain they're calling Return of Them, so I'm excited to see what else is going to come to DST.

  • Cthulhu Saves the World has an unofficial Linux port available

    In response to an announcement of a sequel to Cthulhu Saves the World, Ethan Lee AKA flibitijibibo has made an unofficial port for the original and a few other previously Windows-only games. As a quick reminder, FNA is a reimplementation of the proprietary XNA API created by Microsoft, and quite a few games were made with that technology. We’ve gotten several ports thanks to FNA over the years, though Ethan himself has mostly moved on to other projects like working on FAudio and Steam Play.

  • EVERSPACE 2 announced, with more of a focus on exploration and it will release for Linux

    EVERSPACE is probably one of my absolute favourite space shooters from the last few years, so I'm extremely excited to see EVERSPACE 2 be announced and confirmed for Linux. For the Linux confirmation, I reached out on Twitter where the developer replied with "#Linux support scheduled for full release in 2021!".

  • Google reveal more games with the latest Stadia Connect, including Cyberpunk 2077

    Today, Google went back to YouTube to show off an impressive list of games coming to their Stadia game streaming service, which we already know is powered by Debian Linux and Vulkan. As a reminder, Google said not to see Stadia as if it was the "Netflix of games", as it's clearly not. Stadia Base requires you to buy all your games as normal, with Stadia Pro ($9.99 monthly) giving you a trickle of free games to access on top of 4K and surround sound support.