Planet Debian - https://planet.debian.org/

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, June 2019

Thursday 18th of July 2019 12:08:50 PM

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In June, 201 work hours have been dispatched among 14 paid contributors. Their reports are available:

  • Abhijith PA did 7 hours (out of 14 hours allocated plus 7 extra hours from May, thus carrying over 14h to July).
  • Adrian Bunk did 6 hours (out of 8 hours allocated plus 8 extra hours from May, thus carrying over 10h to July).
  • Ben Hutchings did 17 hours (out of 17 hours allocated).
  • Brian May did 10 hours (out of 10 hours allocated).
  • Chris Lamb did 17 hours (out of 17 hours allocated plus 0.25 extra hours from May, thus carrying over 0.25h to July).
  • Emilio Pozuelo Monfort did not provide his June report yet. (He got 17 hours allocated and carried over 0.25h from May).
  • Hugo Lefeuvre did 4.25 hours (out of 17 hours allocated and he gave back 12.75 hours to the pool, thus he’s not carrying over any hours to July).
  • Jonas Meurer did 16.75 hours (out of 17 hours allocated plus 1.75h extra hours from May, thus he is carrying over 2h to July).
  • Markus Koschany did 17 hours (out of 17 hours allocated).
  • Mike Gabriel did 9.75 hours (out of 17 hours allocated, thus carrying over 7.25h to July).
  • Ola Lundqvist did 4.5 hours (out of 8 hours allocated plus 6h from May, then he gave back 1.5h to the pool, thus he is carrying over 8h to July).
  • Roberto C. Sanchez did 8 hours (out of 8 hours allocated).
  • Sylvain Beucler did 17 hours (out of 17 hours allocated).
  • Thorsten Alteholz did 17 hours (out of 17 hours allocated).
DebConf sponsorship

Thanks to the Extended LTS service, Freexian has been able to invest some money in DebConf sponsorship. This year, DebConf attendees should have Debian LTS stickers and a flyer in their welcome bag. And while we were thinking of marketing, we also opted to create a promotional video explaining LTS and Freexian’s offer. This video will be premiered at DebConf19!

Evolution of the situation

We are still looking for new contributors. Please contact Holger if you are interested in becoming a paid LTS contributor.

The security tracker (now for oldoldstable, as Buster has been released and thus Jessie became oldoldstable) currently lists 41 packages with a known CVE and the dla-needed.txt file has 43 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


Kees Cook: security things in Linux v5.2

Thursday 18th of July 2019 12:07:36 AM

Previously: v5.1.

Linux kernel v5.2 was released last week! Here are some security-related things I found interesting:

page allocator freelist randomization
While the SLUB and SLAB allocator freelists have been randomized for a while now, the overarching page allocator itself wasn’t. This meant that anything doing allocation outside of the kmem_cache/kmalloc() would have deterministic placement in memory. This is bad both for security and for some cache management cases. Dan Williams implemented this randomization under CONFIG_SHUFFLE_PAGE_ALLOCATOR now, which provides additional uncertainty to memory layouts, though at a rather low granularity of 4MB (see SHUFFLE_ORDER). Also note that this feature needs to be enabled at boot time with page_alloc.shuffle=1 unless you have direct-mapped memory-side-cache (you can check the state at /sys/module/page_alloc/parameters/shuffle).
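For reference, a quick way to check and enable this on a Debian-style system; the sysfs path and boot parameter come from the paragraph above, while the grub file path is the usual Debian one and may differ elsewhere:

cat /sys/module/page_alloc/parameters/shuffle   # shows whether randomization is active on the running kernel
# to force it on, add page_alloc.shuffle=1 to the kernel command line, e.g. in /etc/default/grub:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet page_alloc.shuffle=1"
sudo update-grub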

stack variable initialization with Clang
Alexander Potapenko added support via CONFIG_INIT_STACK_ALL for Clang’s -ftrivial-auto-var-init=pattern option that enables automatic initialization of stack variables. This provides even greater coverage than the prior GCC plugin for stack variable initialization, as Clang’s implementation also covers variables not passed by reference. (In theory, the kernel build should still warn about these instances, but even if they exist, Clang will initialize them.) Another notable difference between the GCC plugins and Clang’s implementation is that Clang initializes with a repeating 0xAA byte pattern, rather than zero. (Though this changes under certain situations, like for 32-bit pointers which are initialized with 0x000000AA.) As with the GCC plugin, the benefit is that the entire class of uninitialized stack variable flaws goes away.
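A rough sketch of turning this on in a kernel tree and building with Clang; the exact make invocation for Clang builds varies between kernel versions, so treat this as an outline rather than a recipe:

cd linux/
make CC=clang defconfig                          # or start from your existing .config
./scripts/config --enable CONFIG_INIT_STACK_ALL  # option is only offered when the compiler supports the flag
make CC=clang olddefconfig
make CC=clang HOSTCC=clang -j"$(nproc)"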

Kernel Userspace Access Prevention on powerpc
Like SMAP on x86 and PAN on ARM, Michael Ellerman and Russell Currey have landed support for disallowing access to userspace without explicit markings in the kernel (KUAP) on Power9 and later PPC CPUs under CONFIG_PPC_RADIX_MMU=y (which is the default). This is the continuation of the execute protection (KUEP) in v4.10. Now if an attacker tries to trick the kernel into any kind of unexpected access from userspace (not just executing code), the kernel will fault.

Microarchitectural Data Sampling mitigations on x86
Another set of cache memory side-channel attacks came to light, and were consolidated together under the name Microarchitectural Data Sampling (MDS). MDS is weaker than other cache side-channels (less control over target address), but memory contents can still be exposed. Much like L1TF, when one’s threat model includes untrusted code running under Symmetric Multi Threading (SMT: more logical cores than physical cores), the only full mitigation is to disable hyperthreading (boot with “nosmt“). For all the other variations of the MDS family, Andi Kleen (and others) implemented various flushing mechanisms to avoid cache leakage.
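To see where a given machine stands, the kernel exports its verdict in sysfs, and the mitigation can be tuned on the kernel command line (parameter names are from the kernel’s admin guide):

cat /sys/devices/system/cpu/vulnerabilities/mds
# command line choices, depending on your threat model:
#   mds=full          # enable the flushing mitigations, keep SMT on
#   mds=full,nosmt    # flushing mitigations plus hyperthreading disabled
#   nosmt             # disable hyperthreading outright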

unprivileged userfaultfd sysctl knob
Both FUSE and userfaultfd provide attackers with a way to stall a kernel thread in the middle of memory accesses from userspace by initiating an access on an unmapped page. While FUSE is usually behind some kind of access controls, userfaultfd hadn’t been. To avoid things like Use-After-Free heap grooming, Peter Xu added the new “vm.unprivileged_userfaultfd” sysctl knob to disallow unprivileged access to the userfaultfd syscall.
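A minimal sketch of flipping the new knob; the drop-in file name here is my own choice, not anything mandated:

sudo sysctl vm.unprivileged_userfaultfd=0
echo 'vm.unprivileged_userfaultfd = 0' | sudo tee /etc/sysctl.d/50-userfaultfd.conf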

temporary mm for text poking on x86
The kernel regularly performs self-modification with things like text_poke() (during stuff like alternatives, ftrace, etc). Before, this was done with fixed mappings (“fixmap”) where a specific fixed address at the high end of memory was used to map physical pages as needed. However, this resulted in some temporal risks: other CPUs could write to the fixmap, or there might be stale TLB entries on removal that other CPUs might still be able to write through to change the target contents. Instead, Nadav Amit has created a separate memory map for kernel text writes, as if the kernel is trying to make writes to userspace. This mapping ends up staying local to the current CPU, and the poking address is randomized, unlike the old fixmap.

ongoing: implicit fall-through removal
Gustavo A. R. Silva is nearly done with marking (and fixing) all the implicit fall-through cases in the kernel. Based on the pull request from Gustavo, it looks very much like v5.3 will see -Wimplicit-fallthrough added to the global build flags and then this class of bug should stay extinct in the kernel.
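The same warning flag can be tried on ordinary user-space code; a small, hypothetical demo (the file name and example are mine, not from the kernel tree):

cat > fallthrough.c <<'EOF'
#include <stdio.h>
void classify(int c)
{
        switch (c) {
        case 0:
                puts("zero");   /* no break here, so GCC warns about the implicit fall-through */
        case 1:
                puts("one");
                break;
        }
}
int main(void) { classify(0); return 0; }
EOF
gcc -Wimplicit-fallthrough -c fallthrough.c
# silencing it means documenting the intent, which is what the kernel-wide cleanup does:
#         puts("zero");
#         /* fall through */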

That’s it for now; let me know if you think I should add anything here. We’re almost to -rc1 for v5.3!

© 2019, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

Steve Kemp: Building a computer - part 2

Wednesday 17th of July 2019 06:45:37 PM

My previous post on the subject of building a Z80-based computer briefly explained my motivation, and the approach I was going to take.

This post describes my progress so far:

  • On the hardware side, zero progress.
  • On the software-side, lots of fun.

To recap, I expect to wire a Z80 microprocessor to an Arduino (Mega). The Arduino will generate a clock signal which will make the processor "tick". It will also react to read/write attempts that the processor makes to access RAM and I/O devices.

The Z80 has a neat system for requesting I/O, via the use of the IN and OUT instructions which allow the processor to read/write a single byte to one of 255 connected devices.

To experiment, and to refresh my memory, I found a Z80 assembler and a Z80 disassembler, both packaged for Debian. I also found a Z80 emulator, which I forked and lightly modified.

With the appropriate tools available I could write some simple code. I implemented two I/O routines in the emulator, one to read a character from STDIN, and one to write to STDOUT:

IN A, (1)    ; Read a character from STDIN, store in A-register.
OUT (1), A   ; Write the character in A-register to STDOUT

With those primitives implemented I wrote a simple script:

;
; Simple program to upper-case a string
;
        org 0
        ; show a prompt.
        ld a, '>'
        out (1), a
start:
        ; read a character
        in a,(1)
        ; eof?
        cp -1
        jp z, quit
        ; is it lower-case? If not just output it
        cp 'a'
        jp c,output
        cp 'z'
        jp nc, output
        ; convert from lower-case to upper-case. yeah. math.
        sub a, 32
output:
        ; output the character
        out (1), a
        ; repeat forever.
        jr start
quit:
        ; terminate
        halt

With that written it could be compiled:

$ z80asm ./sample.z80 -o ./sample.bin

Then I could execute it:

$ echo "Hello, world" | ./z80emulator ./sample.bin Testing "./sample.bin"... >HELLO, WORLD 1150 cycle(s) emulated.

And that's where I'll leave it for now. When I have the real hardware I'll hook up some fake RAM containing this program, and code a similar I/O handler to allow reading/writing to the Arduino's serial console. That will allow the same code to run, unchanged. That'd be nice.

I've got a simple Z80-manager written, but since I don't have the chips yet I can only compile-test it. We'll see how well I did soon enough.

John Goerzen: Tips for Upgrading to, And Securing, Debian Buster

Wednesday 17th of July 2019 05:41:03 PM

Wow.  Once again, a Debian release impresses me — a guy that’s been using Debian for more than 20 years.  For the first time I can ever recall, buster not only supported suspend-to-disk out of the box on my laptop, but it did so on an encrypted volume atop LVM.  Very impressive!

For those upgrading from previous releases, I have a few tips to enhance the experience with buster.

AppArmor

AppArmor is a new line of defense against malicious software.  The release notes indicate it’s now enabled by default in buster.  For desktops, I recommend installing apparmor-profiles-extra apparmor-notify.  The latter will provide an immediate GUI indication when something is blocked by AppArmor, so you can diagnose strange behavior.  You may also need to add yourself to the adm group with adduser username adm.
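A minimal sketch of the commands involved; aa-status just prints a summary of loaded profiles, and the group change is the one mentioned above:

sudo apt install apparmor-profiles-extra apparmor-notify
sudo adduser "$USER" adm    # lets aa-notify read the logs and pop up notifications
sudo aa-status              # summary of loaded profiles and their enforcement mode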

Security

I recommend installing these packages and taking note of these items, some of which are different in buster:

  • unattended-upgrades will automatically install security updates for you.  New in buster, the default config file will also apply stable updates in addition to security updates.
  • needrestart will detect what processes need a restart after a library update and, optionally, restart them. Beginning in buster, it will not automatically restart them when in noninteractive (unattended-upgrades) mode. This can be changed by editing /etc/needrestart/needrestart.conf (or, better, putting a .conf file in /etc/needrestart/conf.d) and setting $nrconf{restart} = 'a'. A sketch of such a drop-in follows this list. Edit: If you have an Intel CPU, installing iucode-tool intel-microcode will let needrestart also check on your CPU microcode.
  • debian-security-support will warn you of gaps in security support for packages you are installing or running.
  • package-update-indicator is useful for desktops that won’t be running unattended-upgrades. I believe Gnome 3 has this built in, but for other desktops, this adds an icon when updates are available.
  • You can harden apt with seccomp.
  • You can enable UEFI secure boot.
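A minimal sketch of the needrestart drop-in mentioned above, plus a quick way to exercise unattended-upgrades without changing anything; the drop-in file name is my own choice:

echo "\$nrconf{restart} = 'a';" | sudo tee /etc/needrestart/conf.d/50-autorestart.conf
sudo unattended-upgrade --dry-run    # show what would be upgraded, without doing it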

Tuning

If you hadn’t noticed, many of these items are links into the buster release notes. It’s a good document to read over, even for a new buster install.

Jonathan Dowland: Nadine Shah

Wednesday 17th of July 2019 03:45:24 PM

ticket and cuttings from gig

On July 8 I went to see Nadine Shah perform at the Whitley Bay Playhouse as part of the Mouth Of The Tyne Festival. It was a fantastic gig!

I first saw Nadine Shah — as a solo artist — supporting the Futureheads in the same venue, back in 2013. At that point, she had either just released her debut album, Love Your Dum and Mad, or was just about to (it came out sometime in the same month), but this was the first we heard of her. If memory serves, she played with a small backing band (possibly just a drummer, likely co-writer Ben Hillier) and she handled keyboards. It's a pretty small venue. My friends and I loved that show, and as we talked about how good it was, what it reminded us of (I think we said stuff like "that was nice and gothy, I haven't heard stuff like that for ages"), we hadn't realised that she was sat right behind us, with a grin on her face!

Since then she's put out two more albums: Fast Food, which got a huge amount of airplay on 6 Music (and was the point at which I bought into her), and the Mercury-nominated Holiday Destination, a really compelling evolution of her art and a strong political statement.

Kinevil 7 inch

It turns out, though, that I think we saw her before that, too: A local band called Kinevil (now disbanded) supported Ladytron at Digital in Newcastle in 2008. I happen to have their single "Everything's Gone Black" on vinyl (here it is on bandcamp) and noticed years later that the singer is credited as Nadine Shar.

This year's gig was my first gig of 2019, and it was a real blast. The sound mix was fantastic, and loud. The performance was very confident: Nadine now exclusively sings; all the instrument work is done by her band, which is now five-strong. The saxophonist made some incredible noises that reminded me of some synth stuff from mid-90s Nine Inch Nails records. I've never heard a saxophone played that way before. Apparently Shah has been on hiatus for a while for personal reasons and this was her comeback gig. Under those circumstances, it was very impressive. I hope the reception was what she hoped for.

Holger Levsen: 20190716-wanna-work-on-lts

Tuesday 16th of July 2019 03:56:02 PM
Wanna work on Debian LTS (and get funded)?

If you are in Curitiba and are interested in working on Debian LTS (and getting paid for that work), please come and talk to me: Debian LTS is still looking for more contributors! Also, if you want a bigger challenge, Extended LTS also needs more contributors, though I'd suggest you start with regular LTS.

On Thursday, July 25th, there will also be a talk titled "Debian LTS, the good, the bad and the better" where we plan to present what we think works nicely and what doesn't work so nicely yet and where we also want to gather your wishes and requests.

If you cannot make it to Curitiba, there will be a video stream (and the possibility to ask questions via IRC), and you can always send me an email or ping me on IRC if you want to work on LTS.

Russ Allbery: DocKnot 3.01

Monday 15th of July 2019 04:15:00 AM

The last release of DocKnot failed a whole bunch of CPAN tests that didn't fail locally or on Travis-CI, so this release cleans that up and adds a few minor things to the dist command (following my conventions to run cppcheck and Valgrind tests). The test failures are moderately interesting corners of Perl module development that I hadn't thought about, so they seem worth blogging about.

First, the more prosaic one: as part of the tests of docknot dist, the test suite creates a new Git repository because the release process involves git archive and needs a repository to work from. I forgot to use git config to set user.email and user.name, so that broke on systems without Git global configuration. (This would have been caught by the Debian package testing, but sadly I forgot to add git to the build dependencies, so that test was being skipped.) I always get bitten by this each time I write a test suite that uses Git; someday I'll remember the first time.
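For the curious, the fix amounts to giving the freshly created repository its own identity before git archive runs; a sketch with placeholder values:

git init /tmp/docknot-test
cd /tmp/docknot-test
git config user.email 'docknot-test@example.com'
git config user.name 'DocKnot test suite'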

Second, the build system runs perl Build.PL to build a tiny test package using Module::Build, and it was using system Perl. Slaven Rezic pointed out that this fails if Module::Build isn't installed system-wide or if system Perl doesn't work for whatever reason. Using system Perl is correct for normal operation of docknot dist, but the test suite should use the same Perl version used to run the test suite. I added a new module constructor argument for this, and the test suite now passes in $^X for that argument.

Finally, there was a more obscure problem on Windows: the contents of generated and expected test files didn't match because the generated file content was supposedly just the file name. I think I fixed this, although I don't have Windows on which to test. The root of the problem is another mistake I've made before with Perl: File::Temp->new() does not return a file name, but it returns an object that magically stringifies to the file name, so you can use it that way in many situations and it appears to magically work. However, on Windows, it was not working the way that it was on my Debian system. The solution was to explicitly call the filename method to get the actual file name and use it consistently everywhere; hopefully tests will now pass on Windows.

You can get the latest version from CPAN or from the DocKnot distribution page. A Debian package is also available from my personal archive. I'll probably upload DocKnot to Debian proper during this release cycle, since it's gotten somewhat more mature, although I'd like to make some backward-incompatible changes and improve the documentation first.

François Marier: Installing Debian buster on a GnuBee PC 2

Sunday 14th of July 2019 10:30:00 PM

Here is how I installed Debian 10 / buster on my GnuBee Personal Cloud 2, a free hardware device designed as a network file server / NAS.

Flashing the LibreCMC firmware with Debian support

Before we can install Debian, we need a firmware that includes all of the necessary tools.

On another machine, do the following:

  1. Download the latest librecmc-ramips-mt7621-gb-pc1-squashfs-sysupgrade_*.bin.
  2. Mount a vfat-formatted USB stick.
  3. Copy the file onto it and rename it to gnubee.bin.
  4. Unmount the USB stick.

Then plug a network cable between your laptop and the black network port and plug the USB stick into the GnuBee before rebooting the GnuBee via ssh:

ssh 192.168.10.0 reboot

If you have a USB serial cable, you can use it to monitor the flashing process:

screen /dev/ttyUSB0 57600

otherwise keep an eye on the LEDs and wait until they are fully done flashing.

Getting ssh access to LibreCMC

Once the firmware has been updated, turn off the GnuBee manually using the power switch and turn it back on.

Now enable SSH access via the built-in LibreCMC firmware:

  1. Plug a network cable between your laptop and the black network port.
  2. Open web-based admin panel at http://192.168.10.0.
  3. Go to System | Administration.
  4. Set a root password.
  5. Disable ssh password auth and root password logins.
  6. Paste in your RSA ssh public key.
  7. Click Save & Apply.
  8. Go to Network | Firewall.
  9. Select "accept" for WAN Input.
  10. Click Save & Apply.

Finally, go to Network | Interfaces and note the IPv4 address of the WAN port since that will be needed in the next step.

Installing Debian

The first step is to install Debian jessie on the GnuBee.

Connect the blue network port into your router/switch and ssh into the GnuBee using the IP address you noted earlier:

ssh root@192.168.1.xxx

and the root password you set in the previous section.

Then use fdisk /dev/sda to create the following partition layout on the first drive:

Device       Start        End    Sectors    Size  Type
/dev/sda1     2048    8390655    8388608      4G  Linux swap
/dev/sda2  8390656  234441614  226050959  107.8G  Linux filesystem

Note that I used a 120GB solid-state drive as the system drive in order to minimize noise levels.

Then format the swap partition:

mkswap /dev/sda1

and download the latest version of the jessie installer:

wget --no-check-certificate https://raw.githubusercontent.com/gnubee-git/GnuBee_Docs/master/GB-PCx/scripts/jessie_3.10.14/debian-jessie-install

(Yes, the --no-check-certificate is really unfortunate. Please leave a comment if you find a way to work around it.)

The stock installer fails to bring up the correct networking configuration on my network and so I have modified the install script by changing the eth0.1 blurb to:

auto eth0.1
iface eth0.1 inet static
    address 192.168.10.1
    netmask 255.255.255.0

Then you should be able to run the installer successfully:

sh ./debian-jessie-install

and reboot:

reboot

Restore ssh access in Debian jessie

Once the GnuBee has finished booting, login using the serial console:

  • username: root
  • password: GnuBee

and change the root password using passwd.

Look for the IPv4 address of eth0.2 in the output of the ip addr command and then ssh into the GnuBee from your desktop computer:

ssh root@192.168.1.xxx     # type password set above
mkdir .ssh
vim .ssh/authorized_keys   # paste your ed25519 ssh pubkey

Finish the jessie installation

With this in place, you should be able to ssh into the GnuBee using your public key:

ssh root@192.168.1.172

and then finish the jessie installation:

wget --no-check-certificate https://raw.githubusercontent.com/gnubee-git/gnubee-git.github.io/master/debian/debian-modules-install
bash ./debian-modules-install
reboot

After rebooting, I made a few tweaks to make the system more pleasant to use:

update-alternatives --config editor   # choose vim.basic
dpkg-reconfigure locales              # enable the locale that your desktop is using

Upgrade to stretch and then buster

To upgrade to stretch, put this in /etc/apt/sources.list:

deb http://httpredir.debian.org/debian stretch main
deb http://httpredir.debian.org/debian stretch-updates main
deb http://security.debian.org/ stretch/updates main

Then upgrade the packages:

apt update
apt full-upgrade
apt autoremove
reboot

To upgrade to buster, put this in /etc/apt/sources.list:

deb http://httpredir.debian.org/debian buster main
deb http://httpredir.debian.org/debian buster-updates main
deb http://security.debian.org/debian-security buster/updates main

and upgrade the packages:

apt update
apt full-upgrade
apt autoremove
reboot

Next steps

At this point, my GnuBee is running the latest version of Debian stable, however there are two remaining issues to fix:

  1. openssh-server doesn't work and I am forced to access the GnuBee via the serial interface.

  2. The firmware is running an outdated version of the Linux kernel though this is being worked on by community members.

I hope to resolve these issues soon, and will update this blog post once I do, but you are more than welcome to leave a comment if you know of a solution I may have overlooked.

Benjamin Mako Hill: Hairdressers with Supposedly Funny Pun Names I’ve Visited Recently

Sunday 14th of July 2019 10:08:14 PM

Mika and I recently spent two weeks biking home to Seattle from our year in Palo Alto. The route was ~1400 kilometers and took us past 10 volcanoes and 4 hot springs.

Route of our bike trip from Davis, CA to Oregon City, OR. An elevation profile is also shown.

To my delight, the route also took us past at least 8 hairdressers with supposedly funny pun names! Plus two in Oakland on our way out.

As a result of this trip, I’ve now made 24 contributions to the Hairdressers with Supposedly Funny Pun Names Flickr group photo pool.

Daniel Silverstone: A quarter in review - Halfway to 2020

Sunday 14th of July 2019 03:54:00 PM
The 2019 plan - Second-quarter review

At the start of the year I blogged about my plans for 2019. For those who don't want to go back to read that post, in summary they are:

  1. Continue to lose weight and get fit. I'd like to reach 80kg during the year if I can
  2. Begin a couch to 5k and give it my very best
  3. Focus my software work on finishing projects I have already started
  4. Where I join in other projects be a net benefit
  5. Give back to the @rustlang community because I've gained so much from them already
  6. Be better at tidying up
  7. Save up lots of money for renovations
  8. Go on a proper holiday

At the point that I posted that, I promised myself to do quarterly reviews and so here is the second of those. The first can be found here.

1. Weight loss

So when I wrote in April, I was around 88.6kg and worried about how my body seemed to really like roughly 90kg. This is going to be a similar report. Despite managing to lose 10kg in the first quarter, the second quarter has been harder, and with me focussed on running rather than my full gym routine, loss has been less. I've recently started to push a bit lower though and I'm now around 83kg.

I could really shift my focus back to all-round gym exercise, but honestly I've been enjoying a lot of the spare time returned to me by switching back to my cycling and walking, plus now running a bit. I imagine as the weather returns to its more usual wet mess the gym will return to prominence for me, and with that maybe I'll shed a bit more of this weight.

I continue to give myself a solid "B" for this, though if I were generous, given everything else, I might consider a "B+".

2. Couch to 5k

Last time I wrote, I'd just managed a 5k run for the first time. Since then I completed the couch-to-5k programme and have now done eight parkruns. I missed one week due to awful awful weather, but otherwise I've managed to be consistent and attended one parkrun per week. They've all been at the same course apart from one which was in Southampton. This gives me a clean ability to compare runs.

My first parkrun was 30m32s, though I remain aware that the course at Platt Fields is a smidge under 5k really, and I was really pleased with that. However as a colleague explained to me, It never gets easier… Each parkrun is just as hard, if not harder, than the previous one. However to continue his quote, …you just get faster. and I have. Since that first run, I have improved my personal record to 27m34s which is, to my mind at least, bloody brilliant. Even when this week I tried to force myself to go slower, aiming to pace out a 30m run, I ended up at 27m49s.

I am currently trying to convince myself that I can run a bit more slowly and thus increase my distance, but for now I think 5k is a stuck record for me. I'll continue to try and improve that time a little more.

I said last review that I'd be adjusting my goals in the light of how well I'd done with couch-2-5k at that point. Since I've now completed it, I'll be renaming this section the 'Fitness' section and hopefully next review I'll be able to report something other than running in it.

So far, so good, I'm continuing with giving myself an "A+"

3. Finishing projects

I did a bunch more on NetSurf this quarter. We had an amazing long-weekend where we worked on a whole bunch of NS stuff, and I've even managed to give up some of my other spare time to resolve bugs. I'm very pleased with how I've done with that.

Rob and I failed to do much with the pub software, but Lars and I continue to work on the Fable project.

So over-all, this one doesn't get better than the "C" from last time - still satisfactory but could do a lot better.

4. Be a net benefit

My efforts for Debian continue to be restricted, though I hope it continues to just about be a net benefit to the project. My efforts with the Lua community have not extended again, so pretty much the same.

I remain invested in Rust stuff, and have managed (just about) to avoid starting in on any other projects, so things are fairly much the same as before.

I remain doing "okay" here, and I want to be a little more positive than last review, so I'm upgrading to a "B".

5. Give back to the Rust community

My work with Rustup continues, though in the past month or so I've been pretty lax because I've had to travel a lot for work. I continue to be as heavily involved in Rust as I can be -- I've stepped up to the plate to lead the Rustup team, and that puts me into the Rust developer tools team proper. I attended a conference, in part to represent the Rust developer community, and I have some followup work on that which I still need to complete.

I still hang around on the #wg-rustup Discord channel and other channels on that server, helping where I can, and I've been trying to teach my colleagues about Rust so that they might also contribute to the community.

Previously I gave myself an 'A' but thought I could manage an 'A+' if I tried harder. Since I've been a little lax recently I'm dropping myself to an 'A-'.

6. Be better at tidying up

Once again, I came out of the previous review fired up to tidy more. Once again, that energy ebbed after about a week. Every time I feel like I might have the mental space to begin building a cleaning habit, something comes along to knock the wind out of my sails. Sometimes that's a big work related thing, but sometimes it's something as small as "Our internet connection is broken, so suddenly instead of having time to clean, I have no time because it's broken and so I can't do anything, even things which don't need an internet connection."

This remains an "F" for fail, sadly.

7. Save up money for renovations

The savings process continues. I've not managed to put quite as much away in this quarter as I did the quarter before, but I have been doing as much as I can. I've finally consolidated most of my savings into one place which also makes them look a little healthier.

The renovations bills continue to loom, but we're doing well, so I think I get to keep the "A" here.

8. Go on a proper holiday

Well, I had that week "off" but ended up doing so much stuff that it doesn't count as much of a holiday. Rob is now in Japan, but I've not managed to take the time as a holiday because my main project at work needs me there since our project manager and his usual stand-in are both also away in Japan.

We have made a basic plan to take some time around the August Bank Holiday to perhaps visit family etc, so I'm going to upgrade us to "C+" since we're making inroads, even if we've not achieved a holiday yet.

Summary

Last quarter, my scores were B, A+, C, B-, A, F, A, C, which, if we ignore the F is an average of A, though the F did ruin things a little.

This quarter I have a B+, A+, C, B, A-, F, A, C+, which ignoring the F is a little better, though still not great. I guess here's to another quarter.

Ben Hutchings: Talk: What goes into a Debian package?

Sunday 14th of July 2019 02:05:55 PM

Some months ago I gave a talk / live demo at work about how Debian source and binary packages are constructed.

Yesterday I repeated this talk (with minor updates) for the Chicago LUG. I had quite a small audience, but got some really good questions at the end. I have now put the notes up on my talks page.

No, I'm not in Chicago. This was a trial run of giving a talk remotely, which I'll also be doing for DebConf this year. I set up an RTMP server in the cloud (nginx) and ran OBS Studio on my laptop to capture and transmit video and audio. I'm generally very impressed with OBS Studio, although the X window capture source could do with improvement. I used the built-in camera and mic, but the mic picked up a fair amount of background noise (including fan noise, since the video encoding keeps the CPU fairly busy). I should probably switch to a wearable mic in future.
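A rough sketch of one way to set up such an RTMP ingest point on Debian (assuming the libnginx-mod-rtmp package; the application name and port are arbitrary, and details will differ from setup to setup):

sudo apt install nginx libnginx-mod-rtmp
sudo tee -a /etc/nginx/nginx.conf >/dev/null <<'EOF'
# RTMP ingest for OBS (top-level block, outside http{})
rtmp {
    server {
        listen 1935;
        application live {
            live on;
            record off;
        }
    }
}
EOF
sudo systemctl reload nginx
# OBS Studio then streams to rtmp://your-server/live with a stream key of your choosing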

Martin Pitt: Lightweight i3 developer desktop with OSTree and chroots

Sunday 14th of July 2019 12:00:00 AM
Introduction

I’ve always liked a clean, slim, lightweight, and robust OS on my laptop (which is my only PC) – I’ve been running the i3 window manager for years, with some custom configuration to enable the Fn keys and set up my preferred desktop session layout. Initially on Ubuntu, and for the last two and a half years under Fedora (since I moved to Red Hat). I started with a minimal server install and then had a post-install script that installed the packages that I need, restored my /etc files from git, and did some other minor bits.

Jonathan Carter: My Debian 10 (buster) Report

Friday 12th of July 2019 04:58:38 PM

In the early hours of Sunday morning (my time), Debian 10 (buster) was released. It’s amazing to be a part of an organisation where so many people work so hard to pull together and make something like this happen. Creating and supporting a stable release can be tedious work, but it’s essential for any kind of large-scale or long-term deployments. I feel honored to have had a small part in this release.

Debian Live

My primary focus area for this release was to get the Debian live images in good shape. It’s not perfect yet, but I think we made some headway. The out-of-the-box experiences for the desktop environments on live images are better, and we added a new graphical installer that makes Debian easier to install for the average laptop/desktop user. For the bullseye release I intend to ramp up quality efforts and have a bunch of ideas to make that happen, but more on that another time.

Calamares installer on Cinnamon live image.

Other new stuff I’ve been working on in the Buster cycle

Gamemode

Gamemode is a library and tool that changes your computer’s settings for maximum performance when you launch a game. Some new games automatically invoke Gamemode when they’re launched, but for most games you have to do it manually, check their GitHub page for documentation.
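A couple of hedged ways to invoke it manually; gamemoderun is a wrapper shipped by newer gamemode releases, the LD_PRELOAD form is the older documented approach, and the game binary name is a placeholder:

gamemoderun ./some-game
LD_PRELOAD="$LD_PRELOAD:libgamemodeauto.so" ./some-game
# for Steam titles, set the game's launch options to: gamemoderun %command%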

Innocent de Marchi Packages

I was sad to learn about the passing of Innocent de Marchi, a math teacher who was also a Debian contributor for whom I’ve sponsored a few packages before. I didn’t know him personally but learned that he was really loved in his community. I’m continuing to maintain some of his packages that I also had an interest in:

  • calcoo – generic lightweight graphical calculator app that can be useful on desktop environments that don’t have one
  • connectagram – a word unscrambling game that gets its words from wiktionary
  • fracplanet – fractal planet generator
  • fractalnow – fast, advanced fractal generator
  • gnubik – 3D Rubik’s cube game
  • tanglet – single player word finding game based on Boggle
  • tetzle – jigsaw puzzle game (was also Debian package of the Day #44)
  • xabacus – simulation of the ancient calculator

Powerline Goodies

I wrote a blog post on vim-airline and powerlevel9k shortly after packaging those: New powerline goodies in Debian.

Debian Desktop

I helped co-ordinate the artwork for the Buster release, although Laura Arjona did most of the heavy lifting on that. I updated some of the artwork in the desktop-base package and in debian-installer. Working on the artwork packages exposed me to some of their bugs but not in time to fix them for buster, so that will be a goal for bullseye. I also packaged the font that’s widely used in the buster artwork called quicksand (Debian package: fonts-quicksand). This allows SVG versions of the artwork in the system to display with the correct font.

Bundlewrap

Bundlewrap is a configuration management system written in Python. If you’re familiar with bcfg2 and Ansible, the concepts in Bundlewrap will look very familiar to you. It’s not as featureful as either of those systems, but what it lacks in advanced features it more than makes up for in ease of use and how easy it is to learn. It’s immediately useful for the large amount of cases where you want to install some packages and manage some config files based on conditions with templates. For anything else you might need you can write small Python modules.

Catimg

catimg is a tool that converts jpeg, png, ico and gif files to terminal output. This was also Debian Package of the day #26.
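A quick usage example (the image path is a placeholder):

catimg ~/Pictures/example.png
catimg -w 80 ~/Pictures/example.png   # constrain the output to 80 terminal columns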

Gnome Shell Extensions

  • gnome-shell-extension-dash-to-panel: dash-to-panel is an essential shell extension for me, and does more to make Gnome 3 feel like Gnome 2.x for me than the classic mode does. It’s the easiest way to get a nice single panel on the top of the screen that contains everything that’s useful.
  • gnome-shell-extension-hide-veth: If you use LXC or Docker (or similar), you’ll probably be somewhat annoyed at all the ‘veth’ interfaces you see in network manager. This extension will hide those from the GUI.
  • gnome-shell-extension-no-annoyance: No annoyance fixes something that should really be configurable in Gnome by default. It removes all those nasty “Window is ready” notifications that are intrusive and distracting.

Other

That’s a wrap for my new Debian packages I maintain in Buster. There’s a lot more that I’d like to talk about that happened during this cycle, like that crazy month when I ran for DPL! And also about DebConf stuff, but I’m all out of time and on that note, I’m heading to DebCamp/DebConf in around 12 hours and look forward to seeing many of my Debian colleagues there :-)

Jonathan McDowell: Burn it all

Friday 12th of July 2019 11:17:06 AM

I am generally positive about my return to Northern Ireland, and decision to stay here. Things are much better than when I was growing up and there’s a lot more going on here these days. There’s an active tech scene and the quality of life is pretty decent. That said, this time of year is one that always dampens my optimism. TLDR: This post brings no joy. This is the darkest timeline.

First, we have the usual bonfire issues. I’m all for setting things on fire while having a drink, but when your bonfire is so big it leads to nearby flat residents being evacuated to a youth hostel for the night or you decide that adding 1800 tyres to your bonfire is a great idea, it’s time to question whether you’re celebrating your cultural identity while respecting those around you, or just a clampit (thanks, @Bolster). If you’re starting to displace people from their homes, or releasing lots of noxious fumes that are a risk to your health and that of your local community you need to take a hard look at the message you’re sending out.

Secondly, we have the House of Commons vote on Tuesday to amend the Northern Ireland (Executive Formation) Bill to require the government to bring forward legislation to legalise same-sex marriage and abortion in Northern Ireland. On the face of it this is a good thing; both are things the majority of the NI population want legalised and it’s an area of division between us and the rest of the UK (and, for that matter, Ireland). Dig deeper and it doesn’t tell a great story about the Northern Ireland Assembly. The bill is being brought in the first place because (at the time of writing) it’s been 907 days since Northern Ireland had a government. The current deadline for forming an executive is August 25th, or another election must be held. The bill extends this to October 21st, with an option to extend it further to January 13th. That’ll be 3 years since the assembly sat. That’s not what I voted for; I want my elected officials to actually do their jobs - I may not agree with all of their views, but it serves NI much more to have them turning up and making things happen than failing to do so. Especially during this time of uncertainty about borders and financial stability.

It’s also important to note that the amendments only kick in if an executive is not formed by October 21st - if there’s a functioning local government it’s expected to step in and enact the appropriate legislation to bring NI into compliance with its human rights obligations, as determined by the Supreme Court. It’s possible that this will provide some impetus to the DUP to re-form the assembly in NI. Equally it’s possible that it will make it less likely that Sinn Fein will rush to re-form it, as both amendments cover issues they have tried to resolve in the past.

Equally while I’m grateful to Stella Creasy and Conor McGinn for proposing these amendments, it’s a rare example of Westminster appearing to care about Northern Ireland at all. The ‘backstop’ has been bandied about as a political football, with more regard paid to how many points Tory leadership contenders can score off each other than what the real impact will be upon the people in Northern Ireland. It’s the most attention anyone has paid to us since the Good Friday Agreement, but it’s not exactly the right sort of attention.

I don’t know what the answer is. Since the GFA politics in Northern Ireland has mostly just got more polarised rather than us finding common ground. The most recent EU elections returned an Alliance MEP, Naomi Long, for the first time, which is perhaps some sign of a move to non-sectarian politics, but the real test would be what a new Assembly election would look like. I don’t hold out any hope that we’d get a different set of parties in power.

Still, I suppose at least it’s a public holiday today. Here’s hoping the pub is open for lunch.

Markus Koschany: My Free Software Activities in June 2019

Thursday 11th of July 2019 08:32:12 PM

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

First of all I want to thank Debian’s Release Team. Whenever there was something to unblock for Buster, I always got feedback within hours and in almost all cases the package could just migrate to testing. Good communication and clear rules helped a lot to make the whole freeze a great experience.

Debian Games
  • I reviewed and sponsored a couple of packages again this month.
  • Reiner Herrmann provided a complete overhaul of xbill, so that we all can fight those Wingdows Viruses again.
  • He also prepared a new upstream release of Supertuxkart, which is currently sitting in experimental but will hopefully be uploaded to unstable within the next days.
  • Bernhard Übelacker fixed two annoying bugs in Freeorion (#930417) and Warzone2100 (#930942). Unfortunately it was too late to include the fixes in Debian 10, but I will prepare an update for the next point release.
  • Well, the freeze is over now (hooray) and I intend to upgrade a couple of games in the warm (if you live in the northern hemisphere) month of July again.
Debian Java
  • I prepared another security update for jackson-databind to fix CVE-2019-12814 and CVE-2019-12384 (#930750).
  • I worked on a security update for Tomcat 8 but have not finished it yet.
Debian LTS

This was my fortieth month as a paid contributor and I have been paid to work 17 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 10.06.2019 until 16.06.2019 and from 24.06.2019 until 30.06.2019 I was in charge of our LTS frontdesk. I investigated and triaged CVEs in wordpress, ansible, libqb, radare2, lemonldap-ng, irssi, libapache2-mod-auth-mellon and openjpeg2.
  • DLA-1827-1. Issued a security update for gvfs fixing 1 CVE.
  • DLA-1831-1. Issued a security update for jackson-databind fixing 2 CVE.
  • DLA-1822-1. Issued a security update for php-horde-form fixing 1 CVE.
  • DLA-1839-1. Issued a security update for expat fixing 1 CVE.
  • DLA-1845-1.  Issued a security update for dosbox fixing 2 CVE.
  • DLA-1846-1.  Issued a security update for unzip fixing 1 CVE.
  • DLA-1851-1. Issued a security update for openjpeg2 fixing 2 CVE.
ELTS

Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project but all Debian users benefit from it without cost. The current ELTS release is Debian 7 „Wheezy“. This was my thirteenth month and I have been paid to work 22 hours on ELTS (15 hours were allocated + 7 hours from last month).

  • ELA-133-1. Issued a security update for linux fixing 9 CVE.
  • ELA-137-1. Issued a security update for libvirt fixing 1 CVE.
  • ELA-139-1. Issued a security update for bash fixing 1 CVE.
  • ELA-140-1. Issued a security update for glib2.0 fixing 3 CVE.
  • ELA-141-1. Issued a security update for unzip fixing 1 CVE.
  • ELA-142-1. Issued a security update for libxslt fixing 2 CVE.

Thanks for reading and see you next time.

Vincent Sanders: We can make it better than it was. Better...stronger...faster.

Thursday 11th of July 2019 05:15:09 PM
It is not a novel observation that computers have become so powerful that a reasonably recent system has a relatively long life before obsolescence. This is in stark contrast to the period between the nineties and the teens where it was not uncommon for users with even moderate needs from their computers to upgrade every few years.

This upgrade cycle was mainly driven by huge advances in processing power, memory capacity and ballooning data storage capability. Of course the software engineers used up more and more of the available resources and with each new release ensured users needed to update to have a reasonable experience.
And then sometime in the early teens this cycle slowed almost as quickly as it had begun as systems had become "good enough". I experienced this at a time I was relocating for a new job and had moved most of my computer use to my laptop which was just as powerful as my desktop but was far more flexible.

As a software engineer I used to have a pretty good computer for myself but I was never prepared to spend the money on "top of the range" equipment because it would always be obsolete and generally I had access to much more powerful servers if I needed more resources for a specific task.
To illustrate, the system specification of my desktop PC at the opening of the millennium was:
  • Single core Pentium 3 running at 500Mhz
  • Socket 370 motherboard with 100 Mhz Front Side Bus
  • 128 Megabytes of memory
  • A 25 Gigabyte Deskstar hard drive
  • 150 Mhz TNT 2 graphics card
  • 10 Megabit network card
  • Unbranded 150W PSU
But by 2013 the specification had become:
  • Quad core i5-3330S Processor running at 2700Mhz
  • FCLGA1155 motherboard running memory at 1333 Mhz
  • 8 Gigabytes of memory
  • Terabyte HGST hard drive
  • 1,050 Mhz Integrated graphics
  • Integrated Intel Gigabit network
  • OCZ 500W 80+ PSU
The performance change between these systems was more than tenfold in fourteen years with an upgrade roughly once every couple of years.

I recently started using that system again in my home office mainly for Computer Aided Design (CAD), Computer Aided Manufacture (CAM) and Electronic Design Automation (EDA). The one addition was to add a widescreen monitor as there was not enough physical space for my usual dual display setup.
To my surprise I increasingly returned to this setup for programming tasks. Firstly, because being at my desk acts as an indicator to family members that I am concentrating, whereas the laptop no longer had that effect. Secondly, I really like the ultra-wide display for coding; it has become my preferred display and I had been saving for a UWQHD monitor.
Alas, last month the system started freezing: sometimes it would be stable for several days and then, without warning, the mouse pointer would stop, my music would cease and a power cycle was required. I tried several things to rectify the situation: replacing the thermal compound and the CPU cooler, and trying different memory, all to no avail.
As fixing the system cheaply appeared unlikely I began looking for a replacement, and was immediately troubled by the size of the task. Somewhere in the last six years, while I was not paying attention, the world had moved on; after a great deal of research I managed to come to an answer.
AMD have recently staged something of a comeback with their Ryzen processors after almost a decade of very poor offerings when compared to Intel. The value for money when considering the processor and motherboard combination is currently very much weighted towards AMD.
My timing also seems fortuitous as the new Ryzen 2 processors have just been announced which has resulted in the current generation being available at a substantial discount. I was also encouraged to see that the new processors use the same AM4 socket and are supported by the current motherboards allowing for future upgrades if required.

I purchased a complete new system for under five hundred pounds, comprising:
  • Hex core Ryzen 5 2600X Processor 3600Mhz
  • MSI B450 TOMAHAWK AMD Socket AM4 Motherboard
  • 32 Gigabytes of PC3200 DDR4 memory
  • Aero Cool Project 7 P650 80+ platinum 650W Modular PSU
  • Integrated RTL Gigabit networking
  • Lite-On iHAS124 DVD Writer Optical Drive
  • Corsair CC-9011077-WW Carbide Series 100R Silent Mid-Tower ATX Computer Case
to which I added some recycled parts:
  • 250 Gigabyte SSD from laptop upgrade
  • GeForce GT 640 from a friend
I installed a fresh copy of Debian and all my CAD/CAM applications and have been using the system for a couple of weeks with no obvious issues.

An example of the performance difference is compiling NetSurf: a clean build with an empty ccache used to take 36 seconds and now takes 16, which is a nice improvement. However, a clean build with the results cached has gone from 6 seconds to 3, which is far less noticeable, and during development a normal edit, build, debug cycle affecting only a small number of files has gone from 400 milliseconds to 200, which simply feels instant in both cases.
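For anyone wanting to repeat this kind of comparison, a rough sketch assuming ccache is already wrapping the compiler and the project builds with plain make:

ccache -C                  # empty the cache so the first build is genuinely cold
make clean && time make
make clean && time make    # repeat: object files now come straight out of ccache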

My conclusion is that the new system is completely stable but that I have gained very little in common usage. Objectively the system is over twice as fast as its predecessor but aside from compiling large programs or rendering huge CAD drawings this performance is not utilised. Given this I anticipate this system will remain unchanged until it starts failing and I can only hope that will be at least another six years away.

Arturo Borrero González: Netfilter workshop 2019 Malaga summary

Thursday 11th of July 2019 12:00:00 PM

This week we had the annual Netfilter Workshop. This time the venue was in Malaga (Spain). We had the hotel right in downtown Malaga and the meeting room was at the University of Malaga's ETSII. We had plenty of talks, sessions, discussions and debates, and I will try to summarize in this post what it was about.

Florian Westphal, Linux kernel hacker, Netfilter coreteam member and engineer from Red Hat, started with a talk related to some work being done in the core of the Netfilter code in the kernel to convert packet processing to lists. He shared an overview of current problems and challenges. Processing in a list rather than per packet seems to have several benefits: code can be smarter and faster, so this seems like a good improvement. On the other hand, Florian thinks some of the pain of refactoring all the code may not be worth it. Other approaches may be considered to introduce even more fast forwarding paths (apart from the flow table mechanism which is already available).

Florian also followed up with the next topic: testing. We are starting to have a lot of duplicated code to do testing. Suggestion by Pablo is to introduce some dedicated tools to ease in maintenance and testing itself. Special mentions to nfqueue and tproxy, 2 mechanisms that require quite a bit of code to be well tested (and could be hard to setup anyway).

Ahmed Abdelsalam, an engineer from Cisco, gave a talk on SRv6 network programming. This protocol makes it possible to simplify some interesting use cases from the network engineering point of view. For example, SRv6 aims to eliminate some tunneling and overlay protocols (VXLAN and friends), and increase native multi-tenancy support in IPv6 networks. Network Services Chaining is one of the main use cases, which is really interesting in cloud environments. He mentioned that some Kubernetes CNI mechanisms are going to implement SRv6 soon. This protocol looks interesting not only for cloud use cases, but also from a general network engineering point of view. By the way, Ahmed shared some really interesting numbers and graphs regarding global IPv6 adoption. He also shared the work that has been done in Linux in general and in nftables in particular to support such setups. I had the opportunity to talk more personally with Ahmed during the workshop to learn more about this mechanism and the different use cases and applications it has.

Fernando, a GSoC student, gave us an overview of the OSF functionality in nftables, which identifies different operating systems from inside the ruleset. He shared some of the improvements he has been working on, and some of them are great, like version matching and wildcards.

Brett, engineer from Untangle, shared some plans to use a new nftables expression (nft_dict) to arbitrarily match on metadata. The proposed design is interesting because it covers some use cases from new perspectives. This triggered a debate on different design approaches and solutions to the issues presented.

Next day, Pablo Neira, head of the Netfilter project, started by opening a debate about extra features for nftables, like the ones provided via xtables-addons for iptables. The first one we evaluated was GeoIP. I suggested having some generic infrastructure to be able to write/query external metadata from nftables, given we have more and more use cases looking for this (OSF, the dict expression, GeoIP). Other exotic extensions were discussed, like TARPIT, DELUDE, DHCPMAC, DNETMAP, ECHO, fuzzy, gradm, iface, ipp2p, etc.

A talk on connection tracking support for the linux bridge followed, led by Pablo. A status update on latest work was shared, and some debate happened regarding different use cases for ebtables & nftables.

Next topic was a complex one with no easy solutions: hosting of the Netfilter project infrastructure: git repositories, mailing lists, web pages, wiki deployments, bugzilla, etc. Right now the project has a couple of physical servers housed in a datacenter in Seville. But nobody has time to properly maintain them, upgrade them, and such. Also, part of our infra is getting old, for example the webpage. Some other stuff is mostly unmaintained, like project twitter accounts. Nobody actually has time to keep things updated, and this is probably the base problem. Many options were considered, including moving to github, gitlab, or other hosting providers.

After lunch, Pablo followed up with a status update on hardware flow offload capabilities for nftables. He started with an overview of the current status of ethtool_rx and tc offloads, capabilities and limitations. It should be possible for most commodity hardware to support some variable amount of offload capabilities, but apparently the code was not in very good shape. The new flow block API should improve this situation, while also giving support for nftables offload. There is a related article in LWN.
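For context, the software flow table mechanism already looks roughly like this from the nftables side; a minimal sketch with placeholder interface names, where adding flags offload; inside the flowtable is what requests hardware offload when the driver supports it:

nft add table inet filter
nft add flowtable inet filter ft '{ hook ingress priority 0; devices = { eth0, eth1 }; }'
nft add chain inet filter forward '{ type filter hook forward priority 0; policy accept; }'
nft add rule inet filter forward ip protocol tcp flow offload @ft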

Next talk was by Phil, engineer at Red Hat. He commented on user-defined strings in nftables, which presents some challenges. Some debate happened, mostly to get to an agreement on how to proceed.

Next day, Phil was the one to continue with the workshop talks. This time the talk was about his TODO list for iptables-nft, a presentation and discussion of planned work. This triggered a discussion on how to handle certain bugs in Debian Buster, which have a lot of patch dependencies (so we cannot simply cherry-pick a single patch for stable). It turns out I maintain most of the Debian Netfilter packages, and Sebastian Delafond, who is also a Debian Developer, was attending the workshop too. We provided some Debian-side input on how to better proceed with fixes for specific bugs in Debian. Phil continued by pointing out several improvements that we require in nftables in order to support some rather exotic use cases in both iptables-nft and ebtables-nft.

Yi-Hung Wei, an engineer working on OpenVSwitch, shared some interesting features related to using the conntrack engine in certain scenarios. OVS is really useful in cloud environments. Specifically, the open discussion was around the zone-based timeout policy support for other Netfilter use cases. It was pointed out by Pablo that nftables already supports this. By the way, the Wikimedia Cloud Services team plans to use OVS in the near future by means of Neutron (a VXLAN+OVS setup).

Phil gave another talk related to nftables undefined behaviour situations. He has been working lately on polishing the last gaps between the -legacy and -nft flavors of iptables and friends. Mostly what we have yet to solve are some corner cases, plus a weird ICMP situation. Thanks to Phil for taking care of this. Actually, Phil has been contributing a lot to the Netfilter project in the past few years.

Stephen, an engineer from secunet, followed up after lunch to bring up a couple of topics about improvements to the kernel datapath using XDP. He also commented on partitioning the system into control and dataplane CPUs. The nftables flow table infra is doing exactly this, as pointed out by Florian.

Florian continued with some open-for-discussion topics on pending features in nftables. It looks like every day we get more requests for different setups and use cases with nftables. We need to define use cases as well as possible, and also try to avoid reinventing the wheel.

Laura, an engineer from Zevenet, followed up with a really interesting talk on load balancing and clustering using nftables. The amount of features and improvements added to nftlb since last year is amazing: stateless DNAT topologies, L7 helpers support, more topologies for virtual services and backends, improvements for affinities, security policies, different clustering architectures, etc. We had an interesting conversation about how we integrate with etcd in the Wikimedia Foundation for sharing information between load balancers and for pooling/depooling backends. They are also spearheading a proposal to include support for nftables into Kubernetes kube-proxy.

Abdessamad El Abbassi, also an engineer from Zevenet, shared the project that this company is developing to create an nft-based L7 proxy capable of offloading. They showed some metrics in which this new L7 proxy outperforms HAProxy for some setups. Quite interesting. Some debate also happened around SSL termination and how to better handle that situation.

That very afternoon the core team of the Netfilter project had a meeting in which some internal topics were discussed. Among other things, we decided to invite Phil Sutter to join the Netfilter core team.

I really enjoyed this round of the Netfilter workshop, and very much enjoyed the time with all the folks, old friends and new ones.

Wouter Verhelst: DebConf Video player

Thursday 11th of July 2019 10:14:46 AM

Last weekend, I sat down to learn a bit more about Angular, a TypeScript-based programming environment for rich client webapps. According to their website, "TypeScript is a typed superset of JavaScript that compiles to plain JavaScript", which makes the programming environment slightly easier to work with. Additionally, since TypeScript compiles to whatever subset of JavaScript you want to target, it compiles to something that should work on almost every browser (and if it doesn't, in most cases the fix is to just tweak the compatibility settings a bit).

Since I think learning about a new environment is best done by actually writing a project that uses it, and since I think it was something we could really use, I wrote a video player for the DebConf video team. It makes use of the metadata archive that Stefano Rivera has been working on the last few years (or so). It's not quite ready yet (notably, I need to add routing so you can deep-link to a particular video), but I think it's gotten to a state where it is useful for more general consumption.

We'll see where this gets us...

Steve Kemp: Building a computer - part 1

Thursday 11th of July 2019 10:01:00 AM

I've been tinkering with hardware for a couple of years now, most of this is trivial stuff if I'm honest, for example:

  • Wiring a display to a WiFi-enabled ESP8266 device.
    • Making it fetch data over the internet and display it.
  • Hooking up a temperature/humidity sensor to a device.
    • Submitting readings to an MQ bus.

Off-hand I think the most complex projects I've built have been complex in terms of software. For example I recently hooked up a 933MHz radio receiver to an ESP8266 device, then had to reverse engineer the protocol of the device I wanted to listen for. I recorded a radio burst using an SDR dongle on my laptop, broke the transmission down into 1s and 0s manually, worked out the payload and then ported that code to the ESP8266 device.

Anyway, I've decided I should do something more complex: I should build "a computer". Going old-school, I'm going to stick to what I know best, the Z80 microprocessor. I started programming as a child with a ZX Spectrum, which is built around a Z80.

Initially I started with BASIC, later I moved on to assembly language mostly because I wanted to hack games for infinite lives. I suspect the reason I don't play video-games so often these days is because I'm just not very good without cheating ;)

Anyway, the Z80 is a reasonably simple processor, available in a 40-pin DIP package. There are the obvious connections for power, ground, and a clock source to make the thing tick. After that there are pins for the address bus, and pins for the data bus. Wiring up a standalone Z80 seems to be pretty trivial.

Of course making the processor "go" doesn't really give you much. You can wire it up, turn on the power, and barring explosions what do you have? A processor executing NOP instructions with no way to prove it is alive.

So to make a computer I need to interface with it. There are two obvious things that are required:

  • The ability to get your code on the thing.
    • i.e. It needs to read from memory.
  • The ability to read/write externally.
    • i.e. Light an LED, or scan for keyboard input.

I'm going to keep things basic at the moment, no pun intended. Because I have no RAM, because I have no ROM, because I have no keyboard I'm going to .. fake it.

The Z80 has 40 pins, of which I reckon we need to cable up over half. Only the Arduino Mega has enough pins for that, so if I use a Mega I can wire it to the Z80 and then use the Arduino to drive it:

  • That means the Arduino will generate a clock-signal to make the Z80 tick.
  • The Arduino will monitor the address-bus
    • When the Z80 makes a request to read the RAM at address 0x0000 it will return something from its memory.
    • When the Z80 makes a request to write to the RAM at address 0xffff it will store it away in its memory.
  • Similarly I can monitor for requests for I/O and fake that.

In short the Arduino will run a sketch with a 1024-byte array, which the Z80 will believe is its memory. Via the serial console I can read/write to that RAM, or have the contents hardcoded.
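To make the idea concrete, here is a minimal, untested sketch of what that Arduino side could look like. Everything here is an assumption on my part: the pin assignments are hypothetical, the clock is just toggled from the main loop, and real Z80 bus timing (M1 cycles, refresh, wait states, I/O requests) is glossed over. It only shows the basic trick of serving reads and writes from a 1024-byte array:

// Hypothetical wiring: Z80 address lines A0..A9 on Mega pins 30..39,
// data lines D0..D7 on pins 22..29, /MREQ, /RD, /WR on pins 2..4,
// and the Z80 clock input driven from pin 5.
const int ADDR_PINS[10] = {30, 31, 32, 33, 34, 35, 36, 37, 38, 39};
const int DATA_PINS[8]  = {22, 23, 24, 25, 26, 27, 28, 29};
const int PIN_MREQ = 2, PIN_RD = 3, PIN_WR = 4, PIN_CLK = 5;

byte ram[1024];                      // the "memory" the Z80 will see

void setup() {
  Serial.begin(115200);
  for (int i = 0; i < 10; i++) pinMode(ADDR_PINS[i], INPUT);
  for (int i = 0; i < 8; i++)  pinMode(DATA_PINS[i], INPUT);
  pinMode(PIN_MREQ, INPUT);
  pinMode(PIN_RD, INPUT);
  pinMode(PIN_WR, INPUT);
  pinMode(PIN_CLK, OUTPUT);
  ram[0] = 0x00;                     // a NOP at address 0, so there is something to fetch
}

unsigned int readAddress() {
  // Rebuild the address from the (only) ten address lines we bother to wire up.
  unsigned int addr = 0;
  for (int i = 0; i < 10; i++)
    if (digitalRead(ADDR_PINS[i]) == HIGH) addr |= (1u << i);
  return addr % 1024;                // only 1K of fake RAM
}

void loop() {
  // One clock pulse per loop iteration - far slower than a real Z80 clock.
  digitalWrite(PIN_CLK, HIGH);
  digitalWrite(PIN_CLK, LOW);

  bool mreq = digitalRead(PIN_MREQ) == LOW;   // bus signals are active-low
  if (mreq && digitalRead(PIN_RD) == LOW) {
    // Memory read: put the requested byte on the data bus.
    byte value = ram[readAddress()];
    for (int i = 0; i < 8; i++) {
      pinMode(DATA_PINS[i], OUTPUT);
      digitalWrite(DATA_PINS[i], (value >> i) & 1);
    }
  } else if (mreq && digitalRead(PIN_WR) == LOW) {
    // Memory write: sample the data bus and store the byte in the fake RAM.
    byte value = 0;
    for (int i = 0; i < 8; i++) {
      pinMode(DATA_PINS[i], INPUT);
      if (digitalRead(DATA_PINS[i]) == HIGH) value |= (1 << i);
    }
    ram[readAddress()] = value;
  } else {
    // Nothing addressed to us: release the data bus.
    for (int i = 0; i < 8; i++) pinMode(DATA_PINS[i], INPUT);
  }
}

The serial side (peeking and poking the fake RAM from the console) and the handling of I/O requests would bolt on top of the same loop.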

I thought I was being creative with this approach, but it seems like it has been done before, numerous times. For example:

  • http://baltazarstudios.com/arduino-zilog-z80/
  • https://forum.arduino.cc/index.php?topic=60739.0
  • https://retrocomputing.stackexchange.com/questions/2070/wiring-a-zilog-z80

Anyway I've ordered a bunch of Z80 chips, and an Arduino Mega (since I own only one Arduino, I moved on to ESP8266 devices pretty quickly), so once the order arrives I'll document the process further.

Once it works I'll need to slowly remove the Arduino stuff - I guess I'll start by trying to build an external RAM/ROM interface, or an external I/O circuit. But basically:

  • Hook the Z80 up to the Arduino such that I can run my own code.
  • Then replace the Arduino over time with standalone stuff.

The end result? I guess I have no illusions I can connect a full-sized keyboard to the chip, and drive a TV. But I bet I could wire up four buttons and an LCD panel. That should be enough to program a game of Tetris in Z80 assembly, and declare success. Something like that anyway :)

Expect part two to appear after my order of parts arrives from China.

Sven Hoexter: Frankenstein JVM with flavour - jlink your own JVM with OpenJDK 11

Wednesday 10th of July 2019 02:31:27 PM

While you can find a lot of information regarding the Java "Project Jigsaw", I could not really find a good example of "assembling" your own JVM. So I took a few minutes to figure that out. My use case here is that someone would like to use Instana (a non-free tracing solution), which requires the java.instrument and jdk.attach modules to be available. From an operations perspective we do not want to ship the whole JDK in our production Docker images, so we have to ship a modified JVM. Currently we base our images on the builds provided by AdoptOpenJDK.net, so my examples are based on those builds. You can just download and untar them to any directory to follow along.

You can check the available modules of your JVM by running:

$ jdk-11.0.3+7-jre/bin/java --list-modules | grep -E '(instrument|attach)'
java.instrument@11.0.3

As you can see only the java.instrument module is available. So let's assemble a custom JVM which includes all the modules provided by the default AdoptOpenJDK.net JRE builds and the missing jdk.attach module:

$ jdk-11.0.3+7/bin/jlink --module-path jdk-11.0.3+7 --add-modules $(jdk-11.0.3+7-jre/bin/java --list-modules|cut -d'@' -f 1|tr '\n' ',')jdk.attach --output myjvm
$ ./myjvm/bin/java --list-modules | grep -E '(instrument|attach)'
java.instrument@11.0.3
jdk.attach@11.0.3

Size-wise the increase is, as expected, rather minimal:

$ du -hs myjvm jdk-11.0.3+7-jre jdk-11.0.3+7
141M    myjvm
121M    jdk-11.0.3+7-jre
310M    jdk-11.0.3+7

For the fun of it you could also add the compiler so you can execute source files directly:

$ jdk-11.0.3+7/bin/jlink --module-path jdk-11.0.3+7 --add-modules $(jdk-11.0.3+7-jre/bin/java --list-modules|cut -d'@' -f 1|tr '\n' ',')jdk.compiler --output myjvm2
$ ./myjvm2/bin/java HelloWorld.java
Hello World!

KMyMoney 5.0.6 released

The KMyMoney development team today announces the immediate availability of version 5.0.6 of its open source Personal Finance Manager. Another maintenance release is ready: KMyMoney 5.0.6 comes with some important bugfixes. As usual, problems have been reported by our users and the development team fixed some of them in the meantime. The result of this effort is the brand new KMyMoney 5.0.6 release. Despite even more testing we understand that some bugs may have slipped past our best efforts. If you find one of them, please forgive us, and be sure to report it, either to the mailing list or on bugs.kde.org.

Games: Don't Starve Together, Cthulhu Saves the World, EVERSPACE 2 and Stadia

  • Don't Starve Together has a big free update adding in boats and a strange island

    Klei Entertainment have given the gift of new features to their co-op survival game Don't Starve Together, with the Turn of Tides update now available. Taking a little inspiration from the Shipwrecked DLC available for the single-player version Don't Starve, this new free update enables you to build a boat to carry you and other survivors across the sea. Turn of Tides is the first part of a larger update chain they're calling Return of Them, so I'm excited to see what else is going to come to DST.

  • Cthulhu Saves the World has an unofficial Linux port available

    In response to the announcement of a sequel to Cthulhu Saves the World, Ethan Lee AKA flibitijibibo has made an unofficial port of the original and a few other previously Windows-only games. As a quick reminder, FNA is a reimplementation of the proprietary XNA API created by Microsoft, and quite a few games were made with that technology. We’ve gotten several ports thanks to FNA over the years, though Ethan himself has mostly moved on to other projects like working on FAudio and Steam Play.

  • EVERSPACE 2 announced, with more of a focus on exploration and it will release for Linux

    EVERSPACE is probably one of my absolute favourite space shooters from the last few years, so I'm extremely excited to see EVERSPACE 2 be announced and confirmed for Linux. For the Linux confirmation, I reached out on Twitter where the developer replied with "#Linux support scheduled for full release in 2021!".

  • Google reveal more games with the latest Stadia Connect, including Cyberpunk 2077

    Today, Google went back to YouTube to show off an impressive list of games coming to their Stadia game streaming service, which we already know is powered by Debian Linux and Vulkan. As a reminder, Google said not to see Stadia as if it was the "Netflix of games", as it's clearly not. Stadia Base requires you to buy all your games as normal, with Stadia Pro ($9.99 monthly) giving you a trickle of free games to access on top of 4K and surround sound support.