Latest From Mozilla

Filed under
Moz/FF
  • Password dos and don’ts

    So many accounts, so many passwords. That’s online life. The average person with a typical online presence is estimated to have close to 100 online accounts, and that figure is rising. If you’re reading this, you’re probably in that category. You have a collection of primary accounts that you care the most about because they’re important and you access them frequently, like your email, social media, bank, media subscriptions, streaming services, etc.

    Then you most likely also have a handful of lower-priority accounts you set up without much thought, and some that you have forgotten about. Since those accounts are low priority, maybe you weren't careful about password hygiene and slipped into bad habits like password reuse, which can put your other accounts at risk should there be a data breach. (A short sketch of generating unique passwords follows this list.)

  • Mozilla Open Policy & Advocacy Blog: A Year in Review: Fighting Online Disinformation

    A year ago, Mozilla signed the first ever Code of Practice on Disinformation, brokered in Europe as part of our commitment to an internet that elevates critical thinking, reasoned argument, shared knowledge, and verifiable facts. The Code set a wide range of commitments for all the signatories, from transparency in political advertising to the closure of fake accounts, to address the spread of disinformation online. And we were hopeful that the Code would help to drive change in the platform and advertising sectors.

    Since then, we’ve taken proactive steps to help tackle this issue, and today our self assessment of this work was published by the European Commission. Our assessment covers the work we’ve been doing at Mozilla to build tools within the Firefox browser to fight misinformation, empower users with educational resources, support research on disinformation and lead advocacy efforts to push the ecosystem to live up to their own commitments within the Code of Practice.

  • A Year with Spoke: Announcing the Architecture Kit

    Spoke, our 3D editor for creating environments for Hubs, is celebrating its first birthday with a major update. Last October, we released the first version of Spoke, a compositing tool for mixing 2D and 3D content to create immersive spaces. Over the past year, we’ve made a lot of improvements and added new features to make building scenes for VR easier than ever. Today, we’re excited to share the latest feature that adds to the power of Spoke: the Architecture Kit!

    We first talked about the components of the Architecture Kit back in March. With the Architecture Kit, creators now have an additional way to build custom content for their 3D scenes without using an external tool. Specifically, we wanted to make it easy to take existing components that have already been optimized for VR and configure those pieces to create original models and scenes. The Architecture Kit contains over 400 different pieces that are designed to be used together to create buildings - the kit includes wall, floor, ceiling, and roof pieces, as well as windows, trim, stairs, and doors.

  • Auditing For Accessibility Problems With Firefox Developer Tools

    Since its debut in Firefox 61, the Accessibility Inspector in the Firefox Developer Tools has evolved from a low-level tool showing the accessibility structure of a page. In Firefox 70, the Inspector has become an auditing facility to help identify and fix many common mistakes and practices that reduce site accessibility. In this post, I will offer an overview of what is available in this latest release.
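
The password item above warns against reusing passwords across accounts. Here is a minimal sketch of the alternative, generating a unique random password per account with Python's standard secrets module; the length and character set are arbitrary illustrative choices, not a recommendation from the article.

    import secrets
    import string

    def generate_password(length: int = 20) -> str:
        """Return a random password drawn from letters, digits and punctuation."""
        alphabet = string.ascii_letters + string.digits + string.punctuation
        return "".join(secrets.choice(alphabet) for _ in range(length))

    # One unique password per account, ideally stored in a password manager.
    for account in ("email", "bank", "streaming"):
        print(account, generate_password())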

Web Software: Rclone Browser, WordPress, Kiwi TCMS

Filed under
Software
Web
  • Cloud Storage GUI Rclone Browser 1.6.0 Adds New Options, Fixes

    The Rclone Browser fork I was telling you about a while back keeps improving, with the latest release adding new options in the application preferences, as well as an important fix on Windows that gets mounting/unmounting to work properly.

    Rclone Browser is a cross-platform Qt5 GUI for Rclone, a command line tool to synchronize (and mount) files from remote cloud storage services like Google Drive, OneDrive, Nextcloud, Dropbox, Amazon Drive and S3, Mega, and others. Use it to copy a file from one cloud storage service to another, from cloud storage to your system or the other way around, and to mount cloud storage on your system with a single click. (A minimal command-line usage sketch follows this list.)

    Since the original Rclone Browser hasn't been updated in almost 3 years, a new developer has forked it, fixing some issues that started happening with new Rclone versions, while also adding new functionality.

  • WordPress 5.3 RC3

    The third release candidate for WordPress 5.3 is now available!

    WordPress 5.3 is currently scheduled to be released on November 12 2019, but we need your help to get there—if you haven’t tried 5.3 yet, now is the time!

  • Kiwi TCMS: Kiwi TCMS 7.1

    We're happy to announce Kiwi TCMS version 7.1! This is a small improvement update which includes database schema and API changes, several other improvements, internal refactoring and updated translations.
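
To make the copy and mount workflows described in the Rclone Browser item concrete, here is a minimal sketch that drives the rclone command line from Python. It assumes rclone is installed and that remotes named "gdrive" and "dropbox" have already been set up with rclone config; the remote names, paths and mount point are illustrative assumptions.

    import subprocess

    # Copy a file from one configured cloud remote to another
    # ("gdrive" and "dropbox" are assumed example remote names).
    subprocess.run(
        ["rclone", "copy", "gdrive:reports/summary.pdf", "dropbox:backups/"],
        check=True,
    )

    # Mount a remote locally in the background; the mount point must already exist.
    subprocess.run(
        ["rclone", "mount", "gdrive:", "/mnt/gdrive", "--daemon"],
        check=True,
    )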

Games: Humble Day, RetroArch, Eight Dragons, ULTRAKILL, The Masquerade

Filed under
Gaming
  • The Humble Day of the Devs Bundle 2019 is out, get 'Minit' plus 'ToeJam & Earl' cheap

    Humble have another bundle! The Humble Day of the Devs Bundle 2019 just recently went live with a small selection of games and some good picks have Linux support too.

    In the $1+ tier you get The Haunted Island, a Frog Detective Game and ART SQOOL, although sadly neither offers Linux support.

    When you pay more than the average you get Flipping Death and Battle Chef Brigade, the latter of which does actually have Linux support; it's just not advertised as such on Steam.

  • Play classic games using RetroArch Emulator in Ubuntu/Linux Mint

    Do you love to play classic games? If yes, then you are on the right page. RetroArch is a frontend utility for emulators, game engines and media players. You can play a wide variety of classic computer and console games. It is free, open source and cross-platform, running on Linux, most Windows versions and Mac OS X. On top of all that, RetroArch also runs on iOS and Android for tablets and phones, as well as on game consoles like PS2, PS3, PSP, PS Vita, Wii, Wii U, 2DS, 3DS, Switch, and more! If you have a device that is not mentioned here, or you simply don't want to install it on your system, or you just want to give it a shot, you can run RetroArch online in your web browser.

  • Eight Dragons brings retro-inspired beat 'em up action to you and a bunch of friends

    Eight Dragons? Does that mean it's like four times as good as the classic Double Dragon? Asking the important questions here today on GOL.

    You might be able to find the answer to that yourself, as the developer sent word to us on Twitter that their new beat 'em up, currently in Early Access, recently added Linux support. Extend Mode have ported it over from their in-house engine to Unity, which has helped make it more cross-platform.

  • ULTRAKILL is a first-person shooter for fans of super speed and lots of blood

    ULTRAKILL, as the name might suggest, is a pretty over-the-top game. It's an upcoming first-person shooter from Hakita that now has a free Prelude build out.

  • Vampire: The Masquerade - Coteries of New York shows off some gameplay, releasing December 4

    Readying for release on December 4, Vampire: The Masquerade - Coteries of New York now actually has some in-game footage available.

    Not to be mixed up with Bloodlines 2, which is not coming to Linux (as far as we know), Vampire: The Masquerade - Coteries of New York is coming to Linux: the Steam store page mentions it, and it was also clearly stated in the trailer announcement on Steam.

What OSI Affiliates Are Doing For Open Source

Filed under
OSS

The Open Source Initiative has seventy affiliate members. They represent a broad swath of the open source community, including educational institutions, projects, and communities. We're especially proud of our affiliates' excellent work: thought leadership in open source philosophy; forward-thinking, community-building initiatives; and the work they do as part of fulfilling their missions to develop, innovate, and encourage the adoption of open source technology.

We wanted to take a moment to share the work of some OSI affiliate organizations and their stellar leadership across the greater open source community in community, design, and technology. Our goal is to offer just a few examples of how some of our affiliates are working, which may inspire and inform your own efforts.

Brandeis University recently launched a program in Open Source Technology Management, to help train those seeking leadership roles in companies and communities, giving them a foundation in the value and necessity of open source software and philosophy. The program at Brandeis also creates a space for students to work directly with individuals active in the open source movement.

Creative Commons completely revolutionized licensing for content and media through the creation of the Creative Commons suite of open licenses. Their optimism and dedication to building a cultural commons have inspired countless people around the world to adopt open licenses and share their creative works.

Read more

Fedora Workstation, Server 31 Released. Here's What's New

Filed under
Linux

Fedora Linux, the Linux distribution developed by the community-supported Fedora Project and sponsored by Red Hat, lands yet another milestone with the release of Fedora 31. This release brings many exciting new changes and features across its Workstation and Server editions, alongside its "spins". Here's what's new.

Read more

Security Leftovers

Filed under
Security
  • 3 quick ways to reduce your attack surface on Linux
  • DNS Hijacking: How to Diagnose a DNS Hijack and Stop It

    DNS hijacking sounds scary, but understanding the risks and installing a VPN are effective countermeasures to help secure you online. In today's guide, we'll teach you everything you need to know about DNS hijacking attacks, and how to fix the problem if it arises. (A small resolver-check sketch follows this list.)

  • Security updates for Tuesday

    Security updates have been issued by Debian (php7.0, php7.3, ruby-loofah, and spip), Fedora (proftpd), openSUSE (lz4 and sysstat), Red Hat (chromium-browser, jss, kernel, kernel-alt, kpatch-patch, pango, polkit, sudo, systemd, and thunderbird), SUSE (graphite-web, python3, and samba), and Ubuntu (php5, php7.0, php7.2, php7.3, and samba).
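
As a first diagnostic step for the DNS hijacking item above, it helps to confirm which resolvers your system is actually configured to use. Below is a minimal, Linux-specific sketch; the "expected" set is a placeholder you would replace with your router or chosen resolver, and on systems using systemd-resolved the file will typically list only the local stub 127.0.0.53.

    # Compare the nameservers in /etc/resolv.conf against an expected set.
    EXPECTED = {"192.168.1.1", "1.1.1.1"}  # placeholder values, not a recommendation

    def configured_nameservers(path="/etc/resolv.conf"):
        servers = set()
        with open(path) as f:
            for line in f:
                parts = line.split()
                if len(parts) >= 2 and parts[0] == "nameserver":
                    servers.add(parts[1])
        return servers

    unexpected = configured_nameservers() - EXPECTED
    if unexpected:
        print("Unexpected resolvers (possible hijack or misconfiguration):", unexpected)
    else:
        print("Configured resolvers match the expected set.")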

Red Hat: OpenShift, RHEL and More

Filed under
Red Hat
  • A PodPreset Based Webhook Admission Controller

    One of the fundamental principles of cloud native applications is the ability to consume assets that are externalized from the application itself during runtime. This feature affords portability across different deployment targets as properties may differ from environment to environment. This pattern is also one of the principles of the Twelve Factor app and is supported through a variety of mechanisms within Kubernetes. Secrets and ConfigMaps are implementations in which assets can be stored, whereas the injection points within an application can include environment variables or volume mounts. As Kubernetes and cloud native technologies have matured, there has been an increasing need to dynamically configure applications at runtime even though Kubernetes makes use of a declarative configuration model. Fortunately, Kubernetes contains a pluggable model that enables the validation and modification of applications submitted to the platform as pods, known as admission controllers. These controllers can accept, reject, or accept with modifications the pod that is being created.

    The ability to modify pods at creation time gives both application developers and platform managers the ability to offer capabilities that surpass any limitation imposed by strict declarative configurations. One such implementation of this feature is a concept called PodPresets, which enables the injection of ConfigMaps, Secrets, volumes, volume mounts, and environment variables at creation time into pods matching a set of labels. Kubernetes has supported enabling the use of this feature since version 1.6, and the OpenShift Container Platform (OCP) made it available in the 3.6 release. However, due to a perceived direction change for dynamically injecting these types of resources into pods, the feature became deprecated in version 3.7 and was removed in 3.11, which left a void for users attempting to take advantage of the provided capabilities.

  • Verifying signatures of Red Hat container images

    Security-conscious organizations are accustomed to using digital signatures to validate application content from the Internet. A common example is RPM package signing. Red Hat Enterprise Linux (RHEL) validates signatures of RPM packages by default.

    In the container world, a similar paradigm should be adhered to. In fact, all container images from Red Hat have been digitally signed and have been for several years. Many users are not aware of this because early container tooling was not designed to support digital signatures.

    In this article, I’ll demonstrate how to configure a container engine to validate signatures of container images from the Red Hat registries for increased security of your containerized applications.

    In the absence of widely accepted standards, Red Hat designed a simple approach to provide security to its customers. This approach is based on detached signatures served by a standard HTTP server. The Linux container tools (Podman, Skopeo, and Buildah) have built-in support for detached signatures, as does the CRI-O container engine from Kubernetes and the Red Hat OpenShift Container Platform. (A minimal trust-policy sketch follows this list.)

  • Advanced telco services and better customer experience need modern support systems

    It seems nearly everything we do these days involves the internet – communication, commerce, entertainment, banking, filing taxes, home security, even monitoring our health – creating a wealth of opportunity for communications service providers (CSPs) to deliver innovative and advanced services, increasing and expanding their revenue streams. But it’s a significant challenge to do so using the traditional, proprietary and monolithic infrastructures in place for decades. To achieve success, it’s critical to modernize business and network systems with open source, cloud-native solutions, and move operations support systems (OSS) and business support systems (BSS) to microservices-based architectures.

    Red Hat believes that by transforming OSS/BSS to a more modern architecture, service providers will be in a better position to improve customer experience and create new revenue and business models, and operate more efficiently. But moving to a modern OSS/BSS architecture isn’t without challenges.

  • Red Hat Customer Success Stories: Automating management and improving communications security

    Datacom is an IT-based service provider in Asia Pacific with more than 5,000 staff and a vision of designing, building, and running IT systems and processes that are aligned to its clients' business goals. As a Red Hat Advanced Business Partner, Datacom provides solutions to its market across Red Hat's product lines.

    Because Ansible was getting the attention of many Datacom customers, the company chose to focus on using Ansible as the orchestration glue for automation. Datacom constructed the platform which made it easily consumable while allowing customers to leverage the automation elements. Datacom is witnessing application developers use the infrastructure stack to deploy the apps on different technologies.

    Joseph Tejal is Datacom’s Red Hat Certified Specialist in Ansible Automation based in Wellington. Tejal explained that it wasn’t by chance that Datacom standardized on Red Hat Ansible Automation.
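
Returning to the article above on verifying signatures of Red Hat container images: the tools it names (Podman, Skopeo, Buildah, CRI-O) read their signature trust policy from /etc/containers/policy.json. The sketch below writes a policy of that shape which requires GPG-signed images from one Red Hat registry; the registry name and key path are illustrative assumptions, and the article itself documents the exact configuration, including where the detached signatures are served from.

    import json

    # Sketch of a containers-image trust policy (normally /etc/containers/policy.json).
    # Registry name and GPG key path are illustrative assumptions; review against
    # Red Hat's documentation before using anything like this in production.
    policy = {
        "default": [{"type": "insecureAcceptAnything"}],
        "transports": {
            "docker": {
                "registry.access.redhat.com": [
                    {
                        "type": "signedBy",
                        "keyType": "GPGKeys",
                        "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release",
                    }
                ]
            }
        },
    }

    with open("policy.json", "w") as f:
        json.dump(policy, f, indent=2)

    print("Wrote sample policy.json; review it before installing to /etc/containers/")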

More in Tux Machines

Why We Can't Teach Cybersecurity

By Dr. Andy Farnell

I teach cybersecurity. It's something I really believe in, but it's hard work for all the wrong reasons. First day homework for students is watching Brazil, No Country for Old Men, Chinatown, The Empire Strikes Back, or any other film where evil triumphs and the bad guys win. This establishes the right mindset - like the medics at the Omaha beach landing in Saving Private Ryan. Not to be pessimistic, but cybersecurity is a lost cause, at least as things stand today. If we define computer security to be the combination of confidentiality, integrity, and availability for data, and as resilience, reliability and safety for systems, then we are failing terribly on all points.

As a "proof" after a fashion, my students use a combination of Blotto analysis from military game theory, and Lubarsky's law ("there's always one more bug"). It is a dispiriting exercise to see how logic stacks up against the defenders, according to which "the terrorists always win". Fortunately, game theory frequently fails to explain a reality where we are not all psychopathically selfish Bayesian utility maximisers (unlike corporations which are programmed to be). Occasionally hope, compassion, gratitude, and neighbourly love win out.

Could things be worse than having mathematics against you? Actually yes. You could live in a duplicitous culture antithetical to security but favouring a profitable facsimile of it. Perhaps that's a means for obsolete power hierarchies to preserve themselves, or because we don't really understand what "security" is yet. Regardless, that's the culture we have, and it's a more serious problem than you might think, much more so than software complexity or the simple greed of criminals.

My optimism is that if we can face up to facts, we can start to change and progress. It is in the nature of teachers, doctors and drill instructors that we must believe people can change. So I'll try to explore here how this mess happened, how Linux, BSD and free open source software with transparent standards are a plausible even necessary way out of the present computer security crisis, and why the cybersecurity courses at most universities are not helping.

We have to keep faith that complexity, bad language design and reckless software engineering practices are surmountable by smart people. Maybe one day we'll build computers that are 99.9% secure. But that is unlikely to happen for reasons recently explained by Edward Snowden, who describes an Insecurity Industry. Indeed, I was a little disappointed by Snowden's essay which does not go nearly far enough in my opinion.

For me, the Insecurity Industry is not located in a few commercial black-hat operations like Israel's NSO Group, but within the attitudes and practices running through every vein of mainstream computing. As with its leaders, a society gets the technology it deserves. As we revel in cheap imported goods, surveillance capitalism, greed, convenience, manipulation, and disempowerment of users, we reap the security we deserve.

Blaming the cyber-arms trade, the NSO or NSA for answering the demands of cops and criminals alike, is distracting. Without doubt what they are doing is wrong and harmful to everyone, but we can't have secure computing while those who want it are an educated minority. That situation will not change so long as powerful and fundamentally untrustworthy corporations with business models founded on ignorance dominate our digital lives.

Projects of digital literacy started in the nineteen eighties. They kick-started Western tech economies, but faltered in the mid nineties. Programming and "computer studies" which attempted to explain technological tools were replaced by training in Microsoft Word and Excel spreadsheets. Innovation tailed-off. A generation taught to be dependent on tech, not masters of it, are fit only for what David Graeber described as Bullshit Jobs Graeber18.

Into this vacuum rushed "Silicon Valley values" of rent seeking, piggybacking upon established standards and protocols. With a bit of spit, polish, and aggressive marketing, old lamps could be foisted upon consumers. Twenty years later we have a culture of depressed, addicted, but disenfranchised technology users Lanier11.

We have moved from "It's more fun to compute" to "If you've nothing to fear you've nothing to hide". In other words, we've transformed digital technology from a personally empowering choice into systems of near-mandatory social command and control (see Neil Postman's Technopoly postman93). What advantage would any group have in securing their own chains and the weapons ranged against them? A sentiment only half-disguised in young people today is utter ambivalence toward tech.

As states move to reclaim control from social media platforms, public debate has been framed around whether Facebook and suchlike are threats to democracy and ought to be regulated. But this is merely a fragment of a larger problem and of a discussion that has never been properly widened to examine the general dangers of information technology in all its manifest forms, in the hands of governments, businesses, rogue groups and individuals alike.

For me, an elephant in the room is the colossal distance between what we teach and what we practice. Twice convicted monopolists Microsoft set back computing by decades, and in particular their impact on security has been devastating. Yet their substandard wares are still pushed into schools, hospitals and safety-critical transport roles. Even as embarrassing new holes in their products are exposed daily, lobbying and aggressive misinformation from Microsoft and other Big-Tech companies, all of which suffer from appalling privacy and security faults, continues unabated.

Big-tech corporations are insinuating themselves into our public education and health systems without any proper discussion around their place. It is left to well educated individuals to opt-out, reject their systems, and insist on secure, interoperable choices. Advisories like the European Interoperability Framework (EIF is part of Communication COM134 of the European Commission March 2017) recognise that tech is set to become a socially divisive equality issue. The technical poverty of the future will not separate into "haves and have-nots", but "will and the will-nots", those who will trade their privacy and freedom for access and those who eschew convenience for digital dignity.

As the word "infrastructure" (really vertical superstructure) has slyly replaced ICT (a horizontal service) battles have raged between tech monopolies and champions of open standards for control of government, education and health. The idea of public code (see the commentary of David A Wheeler and Richard Stallman) as the foundation of an interoperable technological society, has been vigorously attacked by tech giants. Germany fought Microsoft tooth and nail to replace Windows systems with 20,000 Linux PCs in 2015, only to have Microsoft lobby their way back in, replacing 30,000 desktops with Windows 10 in 2017. Now the Germans seem poised to switch again, this time taking back all public services by mandating support for LibreOffice.

In the UK, several institutions at which I teach are 'Microsoft customers'. I hesitate to use the term "Microsoft Universities", but they may as well be. Entirely in the pocket of a single corporation, all email, storage networks, and "Teams" communication are supplied by the giant. Due to de-skilling of the sector, the ICT staff, while nice enough people, lack advanced IT skills. They can use off-the-shelf corporate tools, but anything outside lockstep conformity allowed by check-box webmin interfaces is both terrifying and "not supported". I met a secondary school headmaster who seemed proud to tell me that they were not in the pockets of Microsoft, because they had "become a Google Academy". I responded that "as a Linux child", my daughter wouldn't be using any of that rubbish either.

Here's a problem; I don't use Microsoft or Google products. At one level it's an ethical decision, not to enrich aggressive bullies who won't pay proper taxes in my country. It's also a well informed technical position based on my knowledge of computer security. For me to teach Microsoft to cybersecurity students would bring professional disgrace. I won't be the first or last person to lose work for putting professional integrity first. They say "Nobody ever got fired for choosing Microsoft". At some institutions that is not merely advice, it's a threat. Security in the shadow of Big-Tech now means job-security, as in the iron rice bowl from which the compliant may feed, but educated independent thinkers must abstain.

A more serious problem is not just that companies like Google and Microsoft are expensive, controlling foreign corporations supplying buggy software, or that university administrators have given away control of our networks and systems; it's that commercial products are increasingly incompatible with teaching and research. They inject inbuilt censorship and ideological micromanagement into academies and schools.

Another is that "choice" is something of an illusion. Whatever the appearance of competition between, say, Apple and Facebook, Big-Tech companies collude to maintain interlocking systems of control that enforce each other's shared values, including sabotaging interoperability and security and inviting regulation upon themselves to better keep down smaller competitors. Big-Tech comes with its own value system that it imposes on our culture. It restricts the learning opportunities of our kids, limits workplace innovation and diversity, and intrudes into our private activities of commerce and health.

In such a hostile environment for teaching cybersecurity (which is to teach empowering knowledge, and why we call it "Ethical Hacking" 1) one may employ one of two possible methods. First, we can buy in teaching packages reliant entirely on off-site resources. These are the "official" versions of what computer security is. Two commonly available versions come from Cisco and the EC Council. Though slickly presented, these resources suffer the same problems as textbooks in fast-changing disciplines. They very quickly go out of date. They only cover elementary material of the "Cyber Essentials" flavour, which ultimately is more about assurance than reality. And they are partial, perhaps even parochial versions of the subject arrived at by committee.

Online courses also suffer link-rot and patchy VM service that breaks lessons. Unlike in-house setups, professors or students cannot debug or change the system, itself an important opportunity for learning. Besides, the track record of Cisco with respect to backdoors no longer inspires much confidence.

The other method is to create "suitcase data-centres". A box of Raspberry Pi single board computers saves the day! The Raspberry Pi Foundation, perhaps modelled after the early digital literacy drives of Acorn/BBC has done more for British education than any dozen edu-tech companies by promoting (as much as it can) openness of hardware and GNU/Linux/Unix software.

Junk laptops running Debian (Parrot Linux) and SBCs make a great teaching setup because a tangle of real network cables, wifi antennas and flashing lights helps visualise real hacking scenarios. We professors often have to supply this equipment with our own money; I rescued a pile of 1.2GHz Intel Atom netbooks from the garbage. Because we are not allowed to connect to university networks, 3/4G hotspots are necessary, again using bandwidth paid for out of my own pocket in order to run classes. Teaching cybersecurity feels like a "forbidden" activity that we sneakily have to do despite, not because of, university support.

Teaching cybersecurity reflects a cultural battle going on right inside our classrooms. It is a battle between two versions of a technological society, two different futures: one an empowering vision of technology, the other a dystopian trap of managed dependency. Dan Geer, speaking in 2014, described cybersecurity as a manifestation of Realpolitik. Nowhere does the issue come so clearly to a head as in the schism between camps of Snowden or Assange supporters and the US State, each of which can legitimately claim the other a "traitor" to some ideal of "security".

At the everyday level there is a tension between what we might call real versus fake security. The latter is a festival of form over function, a circus of phones, apps and gizmos where appearance triumphs over reality. It's a racket of productised solutionism, assurance, certification and compliance that's fast supplanting actual security efforts. By contrast, the former is a quiet anathema to "security industry" razzle. It urges thoughtful, modest simplicity, slow and cautious change. It's about what you don't do.

So, in our second lesson we analyse the word "security" itself. Security is both a reality and a feeling. There are perhaps masculine and feminine flavours of security, one following a military metaphor of perimeters, penetration and targets, the other, as Eve Ensler Ensler06 and Brene Brown Brown12 allude, an inner security that includes the right to be insecure and be free from patrician security impositions "for your own good". Finally, there is the uncomfortable truth that security is often a zero-sum affair - your security means my insecurity. While "good" security is a tide that raises all ships, some people misuse security as a euphemism for wielding power.

None of these social and psychological realities fit well into the lacklustre, two-dimensional models of textbook computer security. Fortunately a mature discipline of Security Engineering, which does not dodge social and political factors, has emerged in the UK. Ross Anderson Anderson08 is part of a team leading such work at the Cambridge Cybercrime Centre. One take-away from lesson two is that the word "security" may not be used as a bare, abstract noun. One must ask: security for whom? Security from whom or what? Security to what end?

Once we begin to examine the deeper issues around device ownership, implied (but infirm) trust models, forced updates, security theatre, and conflicting cyber-laws, we see that in every important respect tech is anarchy. It's a de facto "might is right" free-for-all where much of what passes for "security" for our smartphones, online banking and personal information is "ignorant bullshit" (in the strict academic sense of bullshit according to Harry Frankfurt; that vendors and politicians don't know that they do not know what they are talking about - and care even less Frankfurt05).

Consequently, much of what we teach - the canonical script of "recon, fingerprinting, vulnerability analysis, vector and payload, clean-up, pivoting, escalation, keeping root…", and the corresponding canon of blue team defence (backup, intrusion detection, defence in depth, etc…) - has no context or connection to a bigger picture. It is ephemeral pop that will evaporate as technology changes, leaving students with no deeper understanding of what we are trying to do by testing, protecting and repairing systems and data, or why that even matters.

We create more guards for the castles of tech-feudalism - obedient, unthinking security guards employed to carry out the whims of the management class. Leveraged by the unspoken carrot of preferential technical privilege and enforced by the stick of threatened removal of their "security status", they become administrators of new forms of political force. Challenges to grey-area behaviours beyond the legal remit of managers, are proclaimed "security breaches" unless pursued through intractable administrative routes or through appeals that can be deflected with allusions to "policy" or the abstract "security" of unseen authorities. Some of our smartest people are ultimately paid well to shut up and never to think for themselves.

There is a very serious concern that our "Ethical Hacking" courses (which contain no study of ethics whatsoever) are just creating fresh cyber-criminals. Despite the narrative that "we are desperately short of cybersecurity graduates and there are great jobs for everyone", the reality is that students graduate into an extremely competitive environment where recruitment is often hostile and arbitrary. It doesn't take them long to figure that their newfound skills are valuable elsewhere.

Years ago, it became clear to me that we must switch to a model of "Civic Cyber Security". I became interested in the work of Bruce Schneier not as a cryptographer but as an advocate of Technology In The Public Interest. National security is nothing more than the sum-total of individual earned and learned security. That means teaching children as young as five foundational attitudes that would horrify industry.

There is no room to lie back and hope Apple or Google can protect us. Organisations like the UK's National Cyber Security Centre, or the US National Security Agency, which have conflicted remits, might wish to be seen as benevolent guardians. Their output has been likened by comedian Stewart Lee to "Mr Fox's guide to hen-house security". Cybersecurity can never be magically granted by those who have a deep and lasting interest in withholding it.

The business of "personal computing" has become ugly. Cybersecurity, in as much as it exists, is a conflicted and unreliable story we tell ourselves about power and tribal allegiances. We can't put on a "good guys" hat and beat "cyber-criminals" so long as we are competing with them for the same thing, exploitable clueless users.

The only questions are whether they are to be sucked dry by ransomware or "legitimate" advertising, or manipulated for political ends. If we are to engage in sincere, truthful education, then we have to call out Big-Tech for what it is: more a part of the problem than a solution. Those of us who want to explore and teach must still circumvent, improvise and overcome within institutions that pay only lip service to authentic cybersecurity because they are captured by giant corporations.

Bibliography

  • [Graeber18] David Graeber, Bullshit Jobs: A Theory, Simon and Schuster (2018).
  • [Lanier11] Jaron Lanier, You Are Not a Gadget, Vintage (2011).
  • [postman93] Neil Postman, Technopoly: The Surrender of Culture to Technology, Vintage Books. N.Y. (1993).
  • [Ensler06] Eve Ensler, Insecure at Last, Villard (2006).
  • [Brown12] Brené Brown, The Power of Vulnerability: Teachings on Authenticity, Connection and Courage, Sounds True (2012).
  • [Anderson08] Ross Anderson, Security Engineering: A Guide to Building Dependable Distributed Systems, Wiley (2008).
  • [Frankfurt05] Harry Frankfurt, On Bullshit, Princeton University Press (2005).

Footnotes:

1 A term made up to attract young students to cybersecurity while assuring parents and politicians.
About the author

Dr. Andy Farnell is a computer scientist, author and visiting professor in signals, systems and cybersecurity at a range of European universities. His recent book "Digital Vegan" uses a dietary metaphor to examine technology dependency and over-consumption.

today's leftovers

  • Linux Weekly Roundup #158

    Welcome to this week's Linux Roundup. It was a full week of Linux releases with Endless OS 4.0.0 and Deepin 20.3. We hope that you had a good week, and may the one ahead be a great one!

  • Putting the Open back into Open Source

    Agility has never been more important than it is in today’s disrupted digital world. Leaders in every industry are trying to find a balance between the stability that allows them to plan for the future, while creating highly agile organisations that can quickly respond to new challenges and opportunities. This agility only comes from the ability to innovate at speed, which is why open source communities are as vital as ever. According to SUSE’s recently commissioned Insight Avenue report, Why Today’s IT Leaders are Choosing Open, 84% now see open source as a way to cost-effectively drive this innovation.

  • How To Install Snap on Ubuntu 20.04 LTS - idroot

    In this tutorial, we will show you how to install Snap on Ubuntu 20.04 LTS. For those of you who didn't know, Snap, also known as Snappy, is an alternative package management tool and program package format developed by Canonical, the company behind Ubuntu Linux. All the snaps are usually stored in a central repository called the Snap Store, from where snaps can be downloaded and installed using the snap command. Snaps work across a range of Linux distributions, which makes them a distro-agnostic upstream software deployment solution. This article assumes you have at least basic knowledge of Linux, know how to use the shell, and, most importantly, host your site on your own VPS. The installation is quite simple and assumes you are running in the root account; if not, you may need to add 'sudo' to the commands to get root privileges. I will show you the step-by-step installation of the Snap Package Manager on Ubuntu 20.04 (Focal Fossa). You can follow the same instructions for Ubuntu 18.04, 16.04, and any other Debian-based distribution like Linux Mint. (A short non-interactive sketch of these steps follows this list.)

  • The European Commission To Force Apple To Allow App Sideloading (NASDAQ:AAPL) | Seeking Alpha [Ed: It is NOT called SIDELOADING… it’s called INSTALLING!]

    Investors are ignoring significant regulatory developments for Apple while distracted by Apple Car speculation.

  • Unboxing Busybox: Claroty and JFrog uncover 14 vulnerabilities

    Embedded devices with limited memory and storage resources are likely to leverage a tool such as BusyBox, which is marketed as the Swiss Army Knife of embedded Linux. BusyBox is a software suite of Unix utilities, known as applets, that are packaged as a single executable file. Within BusyBox you can find a full-fledged shell, a DHCP client/server, and small utilities such as cp, ls, grep, and others. You’re also likely to find many OT and IoT devices running BusyBox, including popular programmable logic controllers (PLCs), human-machine interfaces (HMIs), and remote terminal units (RTUs)—many of which now run on Linux. As part of our commitment to improving open-source software security, Claroty’s Team82 and JFrog collaborated on a vulnerability research project examining BusyBox. Using static and dynamic techniques, Claroty’s Team82 and JFrog discovered 14 vulnerabilities affecting the latest version of BusyBox. In most cases, the expected impact of these issues is denial of service (DoS). However, in rarer cases, these issues can also lead to information leaks and possibly remote code execution.

  • Perl Weekly Challenge 140: Multiplication Tables
  • Leaky Rakudo

    Yesterday the discord-bridge-bot refused to perform its 2nd job: EVAL All The Things! The EVALing is done via shell-out and requires a fair bit of RAM (Rakudo is about as slim as Santa). After about 3 weeks the fairly simple bot had grown from about half a GB to one and a half – while mostly waiting for the intertubes to deliver small pieces of text. I complained on IRC and was advised to take heap snapshots. Since I didn’t know how to make heaps of snapshots, I had to take timo’s directions towards use Telemetry. As snap(:heap) wasn’t willing to surrender the filename of the snapshot (I want to compress the file, it is going to get big over time) I had a look at the source. I also requested a change to Rakudo so I don’t have to hunt down the filename, which was fulfilled by lizmat 22 minutes later. Since you may not have a very recent Rakudo, the following snippet might be useful.

  • Nemiver debugger now in devx SFS

    Mike (mikewalsh) responded with a great link that lists lots of Linux debugger/trace tools. The EasyOS devx SFS already has the 'gdb' CLI utility, and I saw on that link, there are 'ddd' and 'nemiver' GUI frontends for gdb. I went for nemiver and compiled it. The project seems to be almost dead, but then, if it works, perhaps no need for more commits to the git repository. I chose version 0.8.2, not the latest, but it suited me to stay with a gtk+2 based app rather than gtk+3. Two dependencies, 'gtkmm' and 'libgtop', are compiled in OpenEmbedded. I compiled 'libgtksourceviewmm' and 'nemiver' in a running EasyOS and made them into PETs.
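
Distilling the Snap tutorial above: the essential steps are installing the snapd package and then installing a snap from the Snap Store. A minimal, non-interactive sketch of those steps driven from Python, assuming Ubuntu/Debian; the hello-world snap is just an example, and the commands need sudo or root.

    import subprocess

    def run(cmd):
        """Run a command, echoing it first; raise if it fails."""
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Install the snapd daemon from the distribution archive (Ubuntu/Debian).
    run(["sudo", "apt-get", "update"])
    run(["sudo", "apt-get", "install", "-y", "snapd"])

    # Install an example snap from the Snap Store and list installed snaps.
    run(["sudo", "snap", "install", "hello-world"])
    run(["snap", "list"])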

Kernel and Graphics: Linux Stuff and GPUs

  • Facebook/Meta Tackling Transparent Page Placement For Tiered-Memory Linux Systems - Phoronix

    Back during the Linux 5.15 cycle Intel contributed an improvement for tiered memory systems where less used memory pages could be demoted to slower tiers of memory storage. But once demoted, that kernel infrastructure didn't have a means of promoting those demoted pages back to the faster memory tiers should they become hot again, though now Facebook/Meta engineers have been working on such functionality. Prior to the Linux 5.15 kernel, the behaviour during memory reclaim, when the system RAM was under memory pressure, was to simply toss out cold pages. However, with Linux 5.15 came the ability to shift those cold pages to slower memory tiers - in particular on modern and forthcoming servers with Optane DC persistent memory or CXL-enabled memory, etc. The pages therefore remain accessible if needed, but no longer occupy precious system DRAM when they aren't being used, which avoids simply flushing them out or swapping them to disk.

  • Linux 5.17 To Boast Latency Optimization For AF_UNIX Sockets - Phoronix

    Net-next has been queuing a number of enticing performance optimizations ahead of the Linux 5.17 merge window kicking off around the start of the new year. Covered already was a big TCP optimization and a big improvement for csum_partial() that is used in the network code for checksum computation. The latest optimization improves the AF_UNIX code path for those using AF_UNIX sockets for local inter-process communication. A new patch series was queued up on Friday in net-next for improving the AF_UNIX code. That patch series by Kuniyuki Iwashima of Amazon Japan is ultimately about replacing AF_UNIX sockets' single big lock with per-hash locks; it also includes a speed-up to the autobind behavior. (A minimal AF_UNIX example follows this list.)

  • Nvidia Pascal GPU, DX12 and VKD3D: Slideshow time! - Boiling Steam

    So Horizon Zero Dawn had a sale recently on Fanatical, and I thought… OK I’ll grab it! It’s time. I first installed it on my workstation that only has a GTX1060 3GB GPU – not a workhorse but a decent card nonetheless for low-to-medium end gaming. I knew very well that Horizon Zero Dawn is a DX12 game and that Pascal architecture (Nvidia 10xx basically) and earlier versions do not play very well with DX12 games running through vkd3d-proton, the DX12 to Vulkan translation layer. Still, I could imagine getting somewhere around 30 FPS on low-to-medium settings, and use FSR if necessary to get to better framerates. Nothing prepared me for the performance I was about to experience.
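
For readers unfamiliar with the AF_UNIX sockets targeted by the patch series mentioned above, here is a minimal illustration of the kind of local inter-process communication they provide. This is plain standard-library Python and says nothing about the kernel patches themselves.

    import socket

    # AF_UNIX (unix-domain) sockets carry data between processes on the same
    # machine without going through the network stack; socketpair() returns
    # two already-connected endpoints.
    parent, child = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

    parent.sendall(b"ping over a unix-domain socket")
    print(child.recv(1024).decode())

    parent.close()
    child.close()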

Linux 5.16-rc3

So rc3 is usually a bit larger than rc2 just because people had some
time to start finding things.

So too this time, although it's not like this is a particularly big
rc3. Possibly partly due to the past week having been Thanksgiving
week here in the US. But the size is well within the normal range, so
if that's a factor, it's not been a big one.

The diff for rc3 is mostly drivers, although part of that is just
because of the removal of a left-over MIPS Netlogic driver which makes
the stats look a bit wonky, and is over a third of the whole diff just
in itself.

If you ignore that part, the statistics look a bit more normal, but
drivers still dominate (network drivers, sound and gpu are the big
ones, but there is noise all over). Other than that there's once again
a fair amount of selftest (mostly networking), along with core
networking, some arch updates - the bulk of it from a single arm64
uaccess patch, although that's mostly because it's all pretty small -
and random other changes.

Full shortlog below.

Please test,

             Linus
Read more

Also: Linux 5.16-rc3 Released With Alder Lake ITMT Fix, Other Driver Fixes - Phoronix