Planet Debian - https://planet.debian.org/

Bits from Debian: Debian welcomes its GSoC 2019 and Outreachy interns

Friday 31st of May 2019 12:15:00 PM

We're excited to announce that Debian has selected seven interns to work with us over the coming months: two people for Outreachy, and five for the Google Summer of Code.

Here is the list of projects and the interns who will work on them:

Android SDK Tools in Debian

Package Loomio for Debian

Debian Cloud Image Finder

Debian Patch Porting System

Continuous Integration

Congratulations and welcome to all the interns!

The Google Summer of Code and Outreachy programs are possible in Debian thanks to the efforts of Debian developers and contributors who dedicate part of their free time to mentoring interns and to outreach tasks.

Join us and help extend Debian! You can follow the interns' weekly reports on the debian-outreach mailing list, or chat with us on our IRC channel or on each project's team mailing list.

Chris Lamb: Free software activities in May 2019

Friday 31st of May 2019 11:01:02 AM

Here is my monthly update covering what I have been doing in the free software world during May 2019 (previous month):

  • As part of my duties on the board of directors of the Open Source Initiative, I attended our biannual face-to-face board meeting in New York and the OSI's local event organised by Open Source NYC in order to support my colleagues who were giving talks, and participated in various licensing discussions, advocacy activities, etc. throughout the rest of the month over the internet.

  • For the Tails privacy-oriented operating system, I attended an online "remote sprint" where we worked collaboratively on issues, features and adjacent concerns regarding the move to Debian buster. I particularly worked on a regression in Fontconfig to ensure the cache filenames remain deterministic [...] as well as reviewing/testing release candidates and others' patches.

  • Gave a few informal talks to Microsoft employees on Reproducible Builds in Seattle, Washington.

  • Opened a pull request against the django-markdown2 utility to correct the template tag name in a documentation example. [...]

  • Hacking on the Lintian static analysis tool for Debian packages.

Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, almost all software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.
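
In practice, that consensus can be checked by having independent parties build the same source and compare checksums of the resulting artifacts. A minimal sketch (the package name is illustrative):

    # Two independent builders compile the same source package, then:
    sha256sum builder-a/hello_1.0-1_amd64.deb builder-b/hello_1.0-1_amd64.deb
    # Identical hashes mean neither party needs to trust the other's
    # build infrastructure.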

The initiative is proud to be a member project of the Software Freedom Conservancy, a not-for-profit 501(c)(3) charity focused on ethical technology and user freedom.

Conservancy acts as a corporate umbrella, allowing projects to operate as non-profit initiatives without managing their own corporate structure. If you like the work of the Conservancy or the Reproducible Builds project, please consider becoming an official supporter.

This month, I:

  • Gave a number of informal talks to Microsoft employees on Reproducible Builds in Seattle, Washington.

  • Drafted, published and publicised our monthly report.

  • Authored and submitted 5 patches to fix reproducibility issues in fonts-ipaexfont, ghmm, liblopsub, ndpi & xorg-gtest.

  • I spent some time on our website this month, adding various fixes for larger/smaller screens [...] and adding a logo suitable for printing physical pin badges [...]. I also refreshed the text on our SOURCE_DATE_EPOCH page (see the example after this list).

  • Categorised a huge number of packages and issues in the Reproducible Builds "notes" repository, kept isdebianreproducibleyet.com up to date [...] and posted some branded merchandise to other core team members.
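
For readers unfamiliar with it, SOURCE_DATE_EPOCH is the Reproducible Builds convention for pinning timestamps that would otherwise be embedded at build time: tools that honour the variable use its value instead of the current time. A typical usage sketch, following the specification's recommendation:

    # Pin all embedded timestamps to the date of the last git commit:
    export SOURCE_DATE_EPOCH=$(git log -1 --pretty=%ct)
    make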

I also made the following changes to diffoscope, our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues:

  • Support the latest PyPI package repository upload requirements by using real reStructuredText comments instead of the raw directive [...] and by stripping out manpage-only parts of the README rather than using the only directive [...].

  • Fix execution of symbolic links that point to the bin/diffoscope entry point in a checked-out version of our Git repository by fully resolving the location as part of dynamically calculating Python's module include path. [...]

  • Add a Dockerfile [...] with various subsequent fixups [...][...][...].

  • Published the resulting Docker image in the diffoscope container registry and updated the diffoscope homepage to provide "quick start" instructions on how to use diffoscope via this image.
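
By way of illustration, invoking diffoscope via such an image generally looks something like the following (a sketch; the image path is an assumption here, so check the quick-start instructions on the homepage for the current one):

    # Compare two artifacts with the containerised diffoscope, mounting
    # the current directory so the container can read the input files:
    docker run --rm -t -w "$(pwd)" -v "$(pwd)":"$(pwd)" \
        registry.salsa.debian.org/reproducible-builds/diffoscope a.deb b.deb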

Finally, I made a large number of changes to my web-based ("no installation required") version of the diffoscope tool, try.diffoscope.org.

Debian

Debian LTS

This month I have worked 18 hours on Debian Long Term Support (LTS) and 12 hours on its sister Extended LTS project.

  • Investigated and triaged CVE-2019-12217, CVE-2019-12219, CVE-2019-12220, CVE-2019-12221 and CVE-2019-12222 in libsdl1.2/libsdl2, simplesamlphp, freeimage & firefox-esr for jessie LTS, and capstone (CVE-2016-7151), sysdig (CVE-2019-8339), enigmail (CVE-2019-12269), firefox-esr (CVE-2019-1169) & sdl-image1.2 (CVE-2019-12218) for wheezy LTS.

  • Frontdesk duties, responding to user/developer questions, reviewing others' packages, etc.

  • Issued DLA 1793-1 for the dhcpcd5 network management protocol client to fix a read overflow vulnerability.

  • Issued DLA 1805-1 to fix a use-after-free vulnerability in minissdpd, a network device discovery daemon where a remote attacker could abuse this to crash the process.

  • Issued ELA-119-1 and DLA 1801-1 for zookeeper (a distributed co-ordination server) where users who were not authorised to read any data were still able to view the access control list.

  • For minissdpd, I filed an appropriate tracking bug for its outstanding CVE (#929297) and then fixed it in the current Debian stable distribution, proposing its inclusion in the next point release via #929613.


Uploads
  • redis (5:5.0.5-1) — New upstream release.

  • python-django (2:2.2.1-1) — New upstream release.

  • bfs (1.4.1-1) — New upstream release.

I also made the following non-maintainer uploads (NMUs) to fix release-critical bugs in Debian buster:

  • coturn (4.5.1.1-1.1) — Don't ship the (empty) /var/lib/turn/turndb SQLite database and generate it on-demand in the post-installation script to avoid overwriting it on upgrade/reinstall. (#929269) (A sketch of the general pattern appears after this list.)

  • libzorpll (7.0.1.0~alpha1-1.1) — Apply a patch from Andreas Beckmann to add suitable Breaks for smoother upgrades from stretch. (#928883)

  • mutt (1.10.1-2.1) — Prevent undefined behaviour when parsing invalid Content-Disposition mail headers. (#929017)
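
The coturn fix above uses a common maintainer-script pattern: create state files in postinst only when absent, so they survive upgrades and reinstalls. A hypothetical sketch of that pattern (the database path and schema file are assumptions for illustration, not the actual coturn script):

    #!/bin/sh
    # postinst sketch: generate the SQLite database on demand rather than
    # shipping it in the package, so an existing database is never clobbered.
    set -e

    case "$1" in
        configure)
            if [ ! -e /var/lib/turn/turndb ]; then
                sqlite3 /var/lib/turn/turndb < /usr/share/coturn/schema.sql
            fi
            ;;
    esac

    #DEBHELPER#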

FTP Team

As a Debian FTP assistant I ACCEPTed 16 packages: cc-tool, gdal, golang-github-joyent-gosign, golang-github-mgutz-str, golang-github-mgutz-to, golang-github-ovh-go-ovh, golang-github-src-d-gcfg, golang-golang-x-xerrors, golang-gopkg-ldap.v3, libgit2, nodejs, opensbi, openzwave, rustc, u-boot & websocketd.

Russ Allbery: Review: Bad Blood

Friday 31st of May 2019 03:43:00 AM

Review: Bad Blood, by John Carreyrou

Publisher: Alfred A. Knopf
Copyright: 2018
ISBN: 1-5247-3166-8
Format: Kindle
Pages: 302

Theranos was a Silicon Valley biotech startup founded by Elizabeth Holmes in 2003. She was a sophomore chemical engineering major at Stanford University when she dropped out to start the company. Theranos's promised innovation was a way to perform blood tests quickly and easily with considerably less blood than was used by normal testing methods. Their centerpiece product was supposed to be a sleek, compact, modern-looking diagnostic device that could use a finger-stick and a small ampule of blood to run multiple automated tests and provide near-immediate results.

Today, Holmes and former Theranos president Ramesh "Sunny" Balwani are facing federal charges of wire fraud. Theranos, despite never producing a working product, burned through $700 million of venture capital funding. Most, possibly all, public demonstrations of their device were faked. Most of their partnerships and contracts fell through. For the rare ones where Theranos actually did testing, they either used industry-standard equipment (not their own products) or sent the samples to other labs.

John Carreyrou is the Wall Street Journal reporter who first broke the story of Theranos's fraud in October of 2015. This book is an expansion of his original reporting. It's also, in the last third or so, the story of that reporting itself, including Theranos's aggressive attempts to quash his story, via both politics and targeted harassment, which were orchestrated by Theranos legal counsel and board member David Boies. (If you had any respect for David Boies due to his association with the Microsoft anti-trust case or Bush v. Gore, this book, along with the similar tactics his firm appears to have used in support of Harvey Weinstein, should relieve you of it. It's depressing, if predictable, that he's not facing criminal charges alongside Holmes and Balwani.)

Long-form investigative journalism about corporate malfeasance is unfortunately a very niche genre and deserves to be celebrated whenever it appears, but even putting that aside, Bad Blood is an excellent book. Carreyrou provides a magnificent and detailed account of the company's growth, internal politics, goals, and strangely unstoppable momentum even while their engineering faced setback after setback. This is a thorough, detailed, and careful treatment that draws boundaries between what Carreyrou has sources for and what he has tried to reconstruct. Because the story of the reporting itself is included, the reader can also draw their own conclusions about Carreyrou's sources and their credibility. And, of course, all the subsequent legal cases against the company have helped him considerably by making many internal documents part of court records.

Silicon Valley is littered with failed startups with too-ambitious product ideas that were not practical. The unusual thing about Theranos is that they managed to stay ahead of the money curve and the failure to build a working prototype for surprisingly long, clawing their way to a $10 billion valuation and biotech unicorn status on the basis of little more than charisma, fakery, and a compelling story. It's astonishing, and rather scary, just how many high-profile people like Boies they managed to attract to a product that never worked and is probably scientifically impossible as described in their marketing, and just how much effort it took to get government agencies like the CMS and FDA to finally close them down.

But, at the same time, I found Bad Blood oddly optimistic because, in the end, the system worked. Not as well as it should have, and not as fast as it should have: Theranos did test actual patients (badly), and probably caused at least some medical harm. But while the venture capital money poured in and Holmes charmed executives and negotiated partnerships, other companies kept testing Theranos's actual results and then quietly backing away. Theranos was forced to send samples to outside testing companies to receive proper testing, and to set up a lab using traditional equipment. And they were eventually shut down by federal regulatory agencies, albeit only after Carreyrou's story broke.

As someone who works in Silicon Valley, I also found the employment dynamics at Theranos fascinating. Holmes, and particularly Balwani when he later joined, ran the company in silos, kept secrets between divisions, and made it very hard for employees to understand what was happening. But, despite that, the history of the company is full of people joining, working there for a year or two, realizing that something wasn't right, and quietly leaving. Theranos management succeeded in keeping enough secrets that no one was able to blow the whistle, but the engineers they tried to hire showed a lot of caution and willingness to cut their losses and walk away. It's not surprising that the company seemed to shift, in its later years, towards new college grads or workers on restrictive immigration visas who had less experience and confidence or would find it harder to switch companies. There's a story here about the benefits of a tight job market and employees who feel empowered to walk off a job. (I should be clear that, while a common theme, this was not universal, and Theranos arguably caused one employee suicide from the stress.)

But if engineers, business partners, a reporter, and eventually regulatory agencies saw through Theranos's fraud, if murkily and slowly, this is also a story of the people who did not. If you are inclined to believe that the prominent conservative Republican figures of the military and foreign policy establishment are wise and thoughtful people, Bad Blood is going to be uncomfortable reading. James Mattis, who served as Trump's Secretary of Defense, was a Theranos booster and board member, and tried to pressure the Department of Defense into using the company's completely untested and fraudulent product for field-testing blood samples from soldiers. One of Carreyrou's main sources was George Shultz's grandson, who repeatedly tried to warn his grandfather of what was going on at Theranos while the elder Republican statesman was on Theranos's board and recruiting other board members from the Hoover Institution, including Henry Kissinger. Apparently the documentary film version of Bad Blood is somewhat kinder to Shultz, but the book is methodically brutal. He comes across as a blithering idiot who repeatedly believed Holmes and Theranos management over his grandson on the basis of his supposed ability to read and evaluate people.

If you are reading this book, I do recommend that you search for video of Elizabeth Holmes speaking. Carreyrou mentions her personal charisma, but it's worth seeing first-hand, and makes some of Theranos's story more believable. She has a way of projecting sincerity directly into the camera that's quite remarkable and is hard to describe in writing, and she tells a very good story about the benefits of easier and less painful (and less needle-filled) blood testing. I have nothing but contempt for people like Boies, Mattis, and Shultz who abdicated their ethical responsibility as board members to check the details and specifics regardless of personal impressions. In a just world with proper legal regulation of corporate boards they would be facing criminal charges along with Holmes. But I can see how Holmes convinced the media and the public that the company was on to something huge. It's very hard to believe that someone who touts a great advancement in human welfare with winning sincerity may be simply lying. Con artists have been exploiting this for all of human history.

I've lived in or near Palo Alto for 25 years and work in Silicon Valley, which made some of the local details of Carreyrou's account fascinating, such as the mention of the Old Pro bar as a site for after-work social meetings. There were a handful of places where Carreyrou got some details wrong, such as his excessive emphasis on the required non-disclosure agreements for visitors to Theranos's office. (For better or ill, this is completely routine for Silicon Valley companies and regularly recommended by corporate counsel, not a sign of abnormal paranoia around secrecy.) But the vast majority of the account rang true, including the odd relationship between Stanford faculty and startups, and between Stanford and the denizens of the Hoover Institution.

Bad Blood is my favorite piece of long-form journalism since Bethany McLean and Peter Elkind's The Smartest Guys in the Room about Enron, and it is very much in the same mold. I've barely touched on all the nuances and surprising characters in this saga. This is excellent, informative, and fascinating work. I'm still thinking about what went wrong and what went right, how we as a society can do better, and the ways in which our regulatory and business system largely worked to stop the worst of the damage, no thanks to people like David Boies and George Shultz.

Highly recommended.

Rating: 9 out of 10

Jonathan Dowland: Multi-architecture OpenShift containers

Thursday 30th of May 2019 08:56:17 AM

Following the initial release of RHEL8-based OpenJDK OpenShift container images, we have now pushed PPC64LE and Aarch64 architecture variants to the Red Hat Container Registry. This is the first time I've pushed Aarch64 images in particular, and I'm excited to work on Aarch64-related issues, should any crop up!

Sean Whitton: Debian Policy call for participation -- May 2019

Wednesday 29th of May 2019 11:35:20 PM

There has been very little activity in recent weeks (preparing the Debian buster release is more urgent than the Policy Manual for most contributors), so the list of bugs I posted in February is still valid.

Bits from Debian: Ask anything you ever wanted to know about Debian Edu!

Wednesday 29th of May 2019 03:30:00 PM

You have heard about Debian Edu or Skolelinux, but do you know exactly what we are doing?

Join us on the #debian-meeting channel on the OFTC IRC network on 03 June 2019 at 12:00 UTC for an introduction to Debian Edu, a Debian pure blend created to fit the requirements of schools and similar institutions.

You will meet Holger Levsen, who has been contributing to Debian Edu since 2005 and is a member of the development team. Ask him anything you ever wanted to know about Debian Edu!

Your IRC nick needs to be registered in order to join the channel. Refer to the Register your account section on the OFTC website for more information on how to register your nick.
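
From most IRC clients, registration boils down to a single command sent to NickServ (a sketch; substitute your own password and email address, and see OFTC's page for the authoritative syntax):

    /msg NickServ REGISTER <password> <email>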

You can always refer to the debian-meeting wiki page for the latest information and up to date schedule.

Michal Čihař: Spring cleanup

Wednesday 29th of May 2019 08:15:36 AM

As you can probably tell from past posts on my blog, my open source contributions are heavily focused on Weblate, and I've phased out many other activities. The main reason is the reduced amount of free time that comes with a growing family, which leads me to focus on the project I like most. It's fun to develop, and it seems like it will work out business-wise as well, but that is still something to be shown in the future.

Anyway, it's time to admit that I will not spend much time on other things in the near future.

Earlier this year, I resigned as phpMyAdmin project admin. I was in this role for three years, and I've been contributing to the project for 18 years. It has been a long time, but I haven't contributed significantly in the last few months. I will stay with the project for a few more months to handle a smooth transition, but it's time to say goodbye there.

On the Debian project I want to stay active, but I've reduced my involvement and I'm looking for maintainers for some of my packages (mostly RPM related). The special case is the phpMyAdmin package, for which I had been looking for help since 2017; even so, the package became heavily outdated, with security issues that led to its removal from Buster. It seems that this has triggered enough attention to resurrect work on updated packages.

Today I've gone through my personal repos on GitHub and archived a bunch of them. They have not received any attention for years (many of them were already dead by the time I imported them to GitHub), and it's good to make that clear to random visitors.

I'm still the main developer behind Gammu, but I'm not really doing more there than occasionally reviewing and merging pull requests. I don't want to abandon the project without handing it over to somebody else, but the problem is that there is nobody else right now.

Filed under: Debian English Gammu SUSE

Russ Allbery: Review: Nimona

Wednesday 29th of May 2019 04:28:00 AM

Review: Nimona, by Noelle Stevenson

Publisher: HarperTeen
Copyright: 2015
ISBN: 0-06-227822-3
Format: Graphic novel
Pages: 266

Ballister Blackheart is a supervillain, the most notorious supervillain in the kingdom. He used to be a knight, in training at the Institute alongside his friend Goldenloin. But then he defeated Goldenloin in a joust and Goldenloin blew his arm off with a hidden weapon. Now, he plots against the Institute and their hero Sir Goldenloin, although he still follows certain rules.

Nimona, on the other hand, is not convinced by rules. She shows up unexpectedly at Ballister's lair, declaring herself to be his sidekick, winning him over to the idea when she shows that she's also a shapeshifter. And Ballister certainly can't argue with her effectiveness, but her unconstrained enthusiasm for nefarious schemes is rather disconcerting. Ballister, Goldenloin, and the Institute have spent years in a careful dance with unspoken rules that preserved a status quo. Nimona doesn't care about the status quo at all.

Nimona is the collected form of a web comic published between 2012 and 2014. It has the growth curve of a lot of web comics: the first few chapters are lightweight and tend more towards gags, the art starts off fairly rough, and there is more humor than plot. But by chapter four, Stevenson is focusing primarily on the fascinating relationship between Ballister and Nimona, and there are signs that Nimona's gleeful enthusiasm for villainy is hiding something more painful. Meanwhile, the Institute, Goldenloin's employer, quickly takes a turn for the sinister. They're less an organization of superheroes than a shadow government with some dubious goals, and Ballister starts looking less like a supervillain and more like a political revolutionary.

Nimona has some ideas about revolution, most of them rather violent.

At the start of this collection, I wasn't sure how much I'd like it. It's mildly amusing in a gag sort of way while playing with cliches and muddling together fantasy, science fiction, faux-medieval politics, sinister organizations, and superheroes. But the story deepens as it continues. Ballister starts off caring about Nimona because he's a fundamentally decent person, but she becomes a much-needed friend. Nimona's villain-worship, to coin a phrase, turns into something more nuanced. And while that's happening, the Institute becomes increasingly sinister, and increasingly dangerous. By the second half of the collection, despite the somewhat excessive number of fight scenes, it was very hard to put down.

Sadly, I didn't think that Stevenson landed the ending. It's not egregiously bad, and the last page partly salvages it, but it wasn't the emotionally satisfying catharsis that I was looking for. The story got surprisingly dark, and I wanted a bit more of a burst of optimism and happiness at the end.

I thought the art was good but not great. The art gets more detailed and more nuanced as the story deepens, but Stevenson stays with a flat, stylized appearance to her characters. The emotional weight comes mostly from the dialogue and from Nimona's expressive transformations rather than the thin and simple faces. But there's a lot of energy in the art, a lot of drama when appropriate, and some great transitions from human scale to the scale of powerful monsters.

That said, I do have one major complaint: the lettering. It's hand-lettered (so far as I can tell) in a way that adds a distinctive style, but the lettering is also small, wavers a bit, and is sometimes quite hard to read. Standard comic lettering is, among other things, highly readable in small sizes; Stevenson's more individual lettering is not, and I occasionally struggled with it.

Overall, this isn't in my top tier of graphic novels, but it was an enjoyable afternoon's reading that hooked me thoroughly and that I was never tempted to put down. I think it's a relatively fast read, since there are a lot of fight scenes and not a lot of detail that invites lingering over the page. I wish the lettering were more uniform and I wasn't entirely happy with the ending, but if slowly-developing unexpected friendship, high drama, and an irrepressible shapeshifter who is more in need of a friend than she appears sounds like something you'd like, give this a try.

Rating: 7 out of 10

Jonathan McDowell: More Yak Shaving: Moving to nftables to secure Home Assistant

Tuesday 28th of May 2019 08:17:56 PM

When I set up Home Assistant last year, one of my niggles was that it wanted an entire subdomain, rather than being able to live under a subdirectory. I had a desire to stick various things behind a single SSL host on my home network (my UniFi controller is the other main one), rather than having to mess about with either SSL proxies in every container running a service, or a bunch of separate host names (in particular one for the backend and one for the SSL certificate, for each service) in order to proxy in a single host.

I’ve recently done some reorganisation of my network, including building a new house server (which I’ll get round to posting about eventually) and decided to rethink the whole SSL access thing. As a starting point I had:

  • Services living in their own containers
  • Another container already running Apache, with SSL enabled + a valid external Let’s Encrypt certificate

And I wanted:

  • SSL access to various services on the local network
  • Not to have to run multiple copies of Apache (or any other TLS proxy)
  • Valid SSL certs that would validate correctly on browsers without kludges
  • Not to have to have things like hass-host as the front end name and hass-backend-host as the actual container name.

It dawned on me that all access to the services was already being directed through the server itself, so there was a natural redirection point. I hatched a plan to do a port level redirect there, sending all HTTPS traffic to the service containers to the container running Apache. It would then be possible to limit access to the services (e.g. port 8123 for Home Assistant) to the Apache host, tightening up access, and the actual SSL certificate would have the service name in it.

First step was to figure out how to do the appropriate redirection. I was reasonably sure this would involve some sort of DNAT in iptables, but I couldn’t find a clear indication that it was possible (there was a lot of discussion about how you also ended up needing SNAT, and I needed multiple redirections to 443 on the Apache container, so that wasn’t going to fly). Having now solved the problem I think iptables could have done it just fine, but I ended up being steered down the nftables route. This is long overdue; it’s been available since Linux 3.13 but lacking a good reason to move beyond iptables I hadn’t yet done so (in the same way I clung to ipfwadm and ipchains until I had to move).

There’s a neat tool, iptables-restore-translate, which can take the output of iptables-save and provide a simple translation to nftables. That was a good start, but what was neater was moving to the inet filter instead of ip which then mean I could write one set of rules which applied to both IPv4 and IPv6 services. No need for rule duplication! The ability to write a single configuration file was nicer than the sh script I had to configure iptables as well. I expect to be able to write a cleaner set of rules as I learn more, and although it’s not relevant for the traffic levels I’m shifting I understand the rule parsing is generally more efficient if written properly.Finally there’s an nftables systemd service in Debian, so systemctl enable nftables turned on processing of /etc/nftables.conf on restart rather than futzing with a pre-up in /etc/network/interfaces.

With all the existing config moved over the actual redirection was easy. I added the following block to the end of nftables.conf (I had no NAT previously in place), which redirects HTTPS traffic directed at 192.168.2.3 towards 192.168.2.2 instead.

nftables dnat configuration

    table ip nat {
        chain prerouting {
            type nat hook prerouting priority 0

            # Redirect incoming HTTPS to Home Assistant to Apache proxy
            iif "enp24s0" ip daddr 192.168.2.3 tcp dport https \
                dnat 192.168.2.2
        }
        chain postrouting {
            type nat hook postrouting priority 100
        }
    }

I think the key here is I can guarantee that any traffic coming back from the Apache proxy is going to pass through the host doing the DNAT; each container has a point-to-point link configured rather than living on a network bridge. If there was a possibility traffic from the proxy could go direct to the requesting host (e.g. they were on a shared LAN) then you’d need to do SNAT as well so the proxy would return the traffic to the NAT host which would then redirect to the requesting host.
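
In that shared-LAN case, a hypothetical fix would be to masquerade the redirected traffic in the postrouting chain above, forcing replies back through the NAT host (untested here, since my point-to-point container links make it unnecessary):

    chain postrouting {
        type nat hook postrouting priority 100
        # After the DNAT above, traffic for the proxy has daddr 192.168.2.2;
        # masquerade it so replies return via this host.
        ip daddr 192.168.2.2 tcp dport https masquerade
    }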

Apache was then configured as a reverse proxy, with my actual config ending up as follows. For now I’ve restricted access to within my house; I’m still weighing up the pros and cons of exposing access externally without the need for a tunnel. The domain I used on my internal network is a proper registered thing, so although I don’t expose any IP addresses externally I’m able to use Mythic Beasts’ DNS validation instructions and have a valid cert.

Apache proxy config for Home Assistant

    <VirtualHost *:443>
        ServerName hass-host
        ProxyPreserveHost On
        ProxyRequests off
        RewriteEngine on

        # Anything under /local/ we serve, otherwise proxy to Home Assistant
        RewriteCond %{REQUEST_URI} '/local/.*'
        RewriteRule .* - [L]

        RewriteCond %{HTTP:Upgrade} =websocket [NC]
        RewriteRule /(.*) ws://hass-host:8123/$1 [P,L]
        ProxyPassReverse /api/websocket ws://hass-host:8123/api/websocket

        RewriteCond %{HTTP:Upgrade} !=websocket [NC]
        RewriteRule /(.*) http://hass-host:8123/$1 [P,L]
        ProxyPassReverse / http://hass-host:8123/

        SSLEngine on
        SSLCertificateFile /etc/ssl/le.crt
        SSLCertificateKeyFile /etc/ssl/private/le.key
        SSLCertificateChainFile /etc/ssl/lets-encrypt-x3-cross-signed.crt

        # Static files can be hosted here instead of via Home Assistant
        Alias /local/ /srv/www/hass-host/
        <Directory /srv/www/hass-host/>
            Options -Indexes
        </Directory>

        # Only allow access from inside the house
        ErrorDocument 403 "Not for you."
        <Location />
            Order Deny,Allow
            Deny from all
            Allow from 192.168.1.0/24
        </Location>
    </VirtualHost>

I’ve done the same for my UniFi controller; the DNAT works exactly the same, while the Apache reverse proxy config is slightly different - a change in some of the paths and config to ignore the fact there’s no valid SSL cert on the controller interface.

Apache proxy config for Unifi Controller

    <VirtualHost *:443>
        ServerName unifi-host
        ProxyPreserveHost On
        ProxyRequests off

        SSLProxyEngine on
        SSLProxyVerify off
        SSLProxyCheckPeerCN off
        SSLProxyCheckPeerName off
        SSLProxyCheckPeerExpire off

        AllowEncodedSlashes NoDecode

        ProxyPass /wss/ wss://unifi-host:8443/wss/
        ProxyPassReverse /wss/ wss://unifi-host:8443/wss/
        ProxyPass / https://unifi-host:8443/
        ProxyPassReverse / https://unifi-host:8443/

        SSLEngine on
        SSLCertificateFile /etc/ssl/le.crt
        SSLCertificateKeyFile /etc/ssl/private/le.key
        SSLCertificateChainFile /etc/ssl/lets-encrypt-x3-cross-signed.crt

        # Only allow access from inside the house
        ErrorDocument 403 "Not for you."
        <Location />
            Order Deny,Allow
            Deny from all
            Allow from 192.168.1.0/24
        </Location>
    </VirtualHost>

(It's worth pointing out that one of my other Home Assistant niggles has also been fixed - there's now the ability to set up multiple users and separate out API access to OAuth, rather than a single password providing full access. It still needs more work in terms of ACLs for users, but that's a bigger piece of work.)

More in Tux Machines


Games: CHOP, LeClue - Detectivu, Nantucket, MOTHERGUNSHIP

  • Brutal local co-op platform brawler CHOP has released

    CHOP, a brutal local co-op platform brawler, recently left Early Access on Steam. If you like fast-paced fighters with a great style and chaotic gameplay this is for you. There's multiple game modes, up to four players in the standard modes, and there's bots as well if you don't have people over often. Speaking about the release, the developer told me they felt "many local multiplayer games fall into a major pitfall : they often lack impact and accuracy, they don't have this extra oomph that ensure players will really be into the game and hang their gamepad like their life depends on it." and that "CHOP stands out in this regard". I've actually quite enjoyed this one, the action in CHOP is really satisfying overall.

  • Mystery adventure game Jenny LeClue - Detectivu is releasing this week

    Developer Mografi has confirmed that their adventure game Jenny LeClue - Detectivu is officially releasing on September 19th. The game was funded on Kickstarter way back in 2014 thanks to the help of almost four thousand backers raising over one hundred thousand dollars.

  • Seafaring strategy game Nantucket just had a big patch and Masters of the Seven Seas DLC released

    Ahoy mateys! Are you ready to set sail? Anchors aweigh! Seafaring strategy game Nantucket is now full of even more content for you to play through. Picaresque Studio and Fish Eagle just released a big new patch adding in "100+" new events, events that can be triggered by entering a city, a Resuscitation command that can now heal even if someone isn't dead during combat, the ability to rename crew to really make your play-through personal, better rewards from minor quests, and more. Quite a hefty free update!

  • MOTHERGUNSHIP, a bullet-hell FPS where you craft your guns works great on Linux with Steam Play

    Need a fun new FPS to try? MOTHERGUNSHIP is absolutely nuts and it appears to run very nicely on Linux thanks to Steam Play. There's a few reasons why I picked this one to test recently: the developers have moved on to other games so it's not too likely it will suddenly break, there's not a lot of new and modern first-person shooters on Linux that I haven't finished, and it was in the recent Humble Monthly.

GNU community announces ‘Parallel GCC’ for parallelism in real-world compilers

Yesterday, the team behind the GNU project announced Parallel GCC, a research project aiming to parallelize a real-world compiler. Parallel GCC can be used on machines with many cores where GNU make cannot provide enough parallelism. A parallel GCC can also be used to design a parallel compiler from scratch. Read more

today's leftovers

  • 3 Ways to disable USB storage devices on Linux
  • Fedora Community Blog: Fedocal and Nuancier are looking for new maintainers

    Recently the Community Platform Engineering (CPE) team announced that we need to focus on key areas and thus let some of our applications go. So we started Friday with Infra to find maintainers for some of those applications. Unfortunately the first few occurrences did not seem to raise as much interest as we had hoped. As a result we are still looking for new maintainers for Fedocal and Nuancier.

  • Artificial Intelligence Confronts a 'Reproducibility' Crisis

    Lo and behold, the system began performing as advertised. The lucky break was a symptom of a troubling trend, according to Pineau. Neural networks, the technique that’s given us Go-mastering bots and text generators that craft classical Chinese poetry, are often called black boxes because of the mysteries of how they work. Getting them to perform well can be like an art, involving subtle tweaks that go unreported in publications. The networks also are growing larger and more complex, with huge data sets and massive computing arrays that make replicating and studying those models expensive, if not impossible for all but the best-funded labs.

    “Is that even research anymore?” asks Anna Rogers, a machine-learning researcher at the University of Massachusetts. “It’s not clear if you’re demonstrating the superiority of your model or your budget.”

  • When Biology Becomes Software

    If this sounds to you a lot like software coding, you're right. As synthetic biology looks more like computer technology, the risks of the latter become the risks of the former. Code is code, but because we're dealing with molecules -- and sometimes actual forms of life -- the risks can be much greater.

    [...]

    Unlike computer software, there's no way so far to "patch" biological systems once released to the wild, although researchers are trying to develop one. Nor are there ways to "patch" the humans (or animals or crops) susceptible to such agents. Stringent biocontainment helps, but no containment system provides zero risk.

  • Why you may have to wait longer to check out an e-book from your local library

    Gutierrez says the Seattle Public Library, which is one of the largest circulators of digital materials, loaned out around three million e-books and audiobooks last year and spent about $2.5 million to acquire those rights. “But that added 60,000 titles, about,” she said, “because the e-books cost so much more than their physical counterpart. The money doesn’t stretch nearly as far.”

  • Libraries are fighting to preserve your right to borrow e-books

    Libraries don't just pay full price for e-books -- we pay more than full price. We don't just buy one book -- in most cases, we buy a lot of books, trying to keep hold lists down to reasonable numbers. We accept renewable purchasing agreements and limits on e-book lending, specifically because we understand that publishing is a business, and that there is value in authors and publishers getting paid for their work. At the same time, most of us are constrained by budgeting rules and high levels of reporting transparency about where your money goes. So, we want the terms to be fair, and we'd prefer a system that wasn't convoluted.

    With print materials, book economics are simple. Once a library buys a book, it can do whatever it wants with it: lend it, sell it, give it away, loan it to another library so they can lend it. We're much more restricted when it comes to e-books. To a patron, an e-book and a print book feel like similar things, just in different formats; to a library they're very different products. There's no inter-library loan for e-books. When an e-book is no longer circulating, we can't sell it at a book sale. When you're spending the public's money, these differences matter.

  • Nintendo's ROM Site War Continues With Huge Lawsuit Against Site Despite Not Sending DMCA Notices

    Roughly a year ago, Nintendo launched a war between itself and ROM sites. Despite the insanely profitable NES Classic retro-console, the company decided that ROM sites, which until recently almost single-handedly preserved a great deal of console gaming history, need to be slayed. Nintendo extracted huge settlements out of some of the sites, which led to most others shutting down voluntarily. While this was probably always Nintendo's strategy, some sites decided to stare down the company's legal threats and continue on.

  • The Grey Havens | Coder Radio 375

    We say goodbye to the show by taking a look back at a few of our favorite moments and reflect on how much has changed in the past seven years.

  • 09/16/2019 | Linux Headlines

    A new Linux Kernel is out; we break down the new features, PulseAudio goes pro and the credential-stealing LastPass flaw. Plus the $100 million plan to rid the web of ads, and more.

  • Powering Docker App: Next Steps for Cloud Native Application Bundles (CNAB)

    Last year at DockerCon and Microsoft Connect, we announced the Cloud Native Application Bundle (CNAB) specification in partnership with Microsoft, HashiCorp, and Bitnami. Since then the CNAB community has grown to include Pivotal, Intel, DataDog, and others, and we are all happy to announce that the CNAB core specification has reached 1.0. We are also announcing the formation of the CNAB project under the Joint Development Foundation, a part of the Linux Foundation that’s chartered with driving adoption of open source and standards. The CNAB specification is available at cnab.io. Docker is working hard with our partners and friends in the open source community to improve software development and operations for everyone.

  • CNAB ready for prime time, says Docker

    Docker announced yesterday that CNAB, a specification for creating multi-container applications, has come of age. The spec has made it to version 1.0, and the Linux Foundation has officially accepted it into the Joint Development Foundation, which drives open-source development.

    The Cloud Native Application Bundle specification is a multi-company effort that defines how the different components of a distributed cloud-based application are bundled together. Docker announced it last December along with Microsoft, HashiCorp, and Bitnami. Since then, Intel has joined the party along with Pivotal and DataDog.

    It solves a problem that DevOps folks have long grappled with: how do you bolt all these containers and other services together in a standard way? It's easy to create a Docker container with a Dockerfile, and you can pull lots of them together to form an application using Docker Compose. But if you want to package other kinds of container or cloud results into the application, such as Kubernetes YAML, Helm charts, or Azure Resource Manager templates, things become more difficult. That's where CNAB comes in.