Linux Journal


Raspberry Pi 4 on Sale Now, SUSE Linux Enterprise 15 Service Pack 1 Released, Instaclustr Service Broker Now Available, Steam for Linux to Drop Support for Ubuntu 19.10 and Beyond, and Linux 5.2-rc6 Is Out

Monday 24th of June 2019 03:13:41 PM

News briefs for June 24, 2019.

Raspberry Pi 4 is on sale now, starting at $35. The Raspberry Pi blog post notes that "this is a comprehensive upgrade, touching almost every element of the platform. For the first time we provide a PC-like level of performance for most users, while retaining the interfacing capabilities and hackability of the classic Raspberry Pi line". This version also comes with different memory options (1GB for $35, 2GB for $45 or 4GB for $55). You can order one from approved resellers here.

SUSE releases SUSE Linux Enterprise 15 Service Pack 1 on the one-year anniversary of launching the world's first multimodal OS. From the SUSE blog: "SUSE Linux Enterprise 15 SP1 advances the multimodal OS model by enhancing the core tenets of common code base, modularity and community development while hardening business-critical attributes such as data security, reduced downtime and optimized workloads." Some highlights include a faster and easier transition from community Linux to enterprise Linux, enhanced support for edge-to-HPC workloads and improved hardware-based security. Go here for release notes and download links.

Instaclustr announces the availability of its Instaclustr Service Broker. This release "enables customers to easily integrate their containerized applications, or cloud native applications, with open source data-layer technologies provided by the Instaclustr Managed Platform—including Apache Cassandra and Apache Kafka. Doing so enables organizations' cloud-native applications to leverage key capabilities of the Instaclustr platform such as automated service discovery, provisioning, management, and deprovisioning of data-layer clusters." Go here for more details.

A Valve developer announces that Steam for Linux will drop support for the upcoming Ubuntu 19.10 release and future Ubuntu releases. Softpedia News reports that "Valve's harsh announcement comes just a few days after Canonical's announcement that they will drop support for 32-bit (i386) architectures in Ubuntu 19.10 (Eoan Ermine). Pierre-Loup Griffais said on Twitter that Steam for Linux won't be officially supported on Ubuntu 19.10, nor any future releases. The Steam developer also added that Valve will focus their efforts on supporting other Linux-based operating systems for Steam for Linux. They will be looking for a GNU/Linux distribution that still offers support for 32-bit apps, and that they will try to minimize the breakage for Ubuntu users."

Linux 5.2-rc6 was released on Saturday. Linus Torvalds writes, "rc6 is the biggest rc in number of commits we've had so far for this 5.2 cycle (obviously ignoring the merge window itself and rc1). And it's not just because of trivial patches (although admittedly we have those too), but we obviously had the TCP SACK/fragmentation/mss fixes in there, and they in turn required some fixes too." He also noted that he's "still reasonably optimistic that we're on track for a calm final part of the release, and I don't think there is anything particularly bad on the horizon."

News Raspberry Pi SUSE Instaclustr Containers cloud native Valve Steam Ubuntu kernel

Python's Mypy--Advanced Usage

Monday 24th of June 2019 11:30:00 AM
by Reuven M. Lerner

Mypy can check more than simple Python types.

In my last article, I introduced Mypy, a package that enforces type checking in Python programs. Python itself is, and always will remain, a dynamically typed language. However, Python 3 supports "annotations", a feature that allows you to attach an object to variables, function parameters and function return values. These annotations are ignored by Python itself, but they can be used by external tools.

Mypy is one such tool, and it's an increasingly popular one. The idea is that you run Mypy on your code before running it. Mypy looks at your code and makes sure that your annotations correspond with actual usage. In that sense, it's far stricter than Python itself, but that's the whole point.
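For instance, here's a minimal sketch of that workflow (the file name, function and values are illustrative, not from the original article):

# double.py -- an illustrative file, not from the article
def double(n: int) -> int:
    return n * 2

# Python itself ignores the annotations and happily prints "abcabc";
# running "mypy double.py" instead reports something like:
#   error: Argument 1 to "double" has incompatible type "str"; expected "int"
print(double("abc"))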

In my last article, I covered some basic uses for Mypy. Here, I want to expand upon those basics and show how Mypy really digs deeply into type definitions, allowing you to describe your code in a way that lets you be more confident of its stability.

Type Inference

Consider the following code:

x: int = 5
x = 'abc'
print(x)

This first defines the variable x, giving it a type annotation of int. It also assigns it the integer 5. On the next line, it assigns x the string abc. And on the third line, it prints the value of x.

The Python language itself has no problems with the above code. But if you run mypy against it, you'll get an error message:

mytest.py:5: error: Incompatible types in assignment (expression has type "str", variable has type "int")

As the message says, the code declared the variable to have type int, but then assigned a string to it. Mypy can figure this out because, despite what many people believe, Python is a strongly typed language. That is, every object has one clearly defined type. Mypy notices this and then warns that the code is assigning values that are contrary to what the declarations said.
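That strong typing is easy to demonstrate at run time, even without Mypy; here's a minimal sketch (the values are illustrative):

x = 5
try:
    x + "abc"        # no implicit coercion between int and str
except TypeError as err:
    print(err)       # unsupported operand type(s) for +: 'int' and 'str'

Every object keeps its one well-defined type; Mypy simply moves the discovery of such mismatches from run time to check time.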

In the annotated example, you can see that I declared x to be of type int at definition time, but then assigned a string to it, and then I got an error. What if I don't add the annotation at all? That is, what if I run the following code via Mypy:

Go to Full Article

GNOME 3.33.3 Released, Kernel Security Updates for RHEL and CentOS, Wine Developers Concerned with Ubuntu 19.10 Dropping 32-Bit Support, Bzip2 to Get an Update and OpenMandriva Lx 4.0 Now Available

Friday 21st of June 2019 02:52:00 PM

News briefs for June 21, 2019.

GNOME 3.33.3 was released yesterday. Note that this release is development code and is intended for testing purposes. Go here to see the list of modules and changes, get the BuildStream project snapshot here or get the source packages here.

Red Hat Enterprise Linux and CentOS Linux have received new kernel security updates to address the recent TCP vulnerabilities. Softpedia News reports that "The new Linux kernel security updates patch an integer overflow flaw (CVE-2019-11477), discovered by Jonathan Looney, in the way the Linux kernel's networking subsystem processed TCP Selective Acknowledgment (SACK) segments, which could allow a remote attacker to cause a so-called SACK Panic attack (denial of service) by sending malicious sequences of SACK segments on a TCP connection that has a small TCP MSS value." Update immediately.

Wine developers are concerned with Ubuntu's decision to drop 32-bit support with Ubuntu 19.10. From Linux Uprising: "The Wine developers are concerned with this news because many 64-bit Windows applications still use a 32-bit installer, or some 32-bit components." See the wine-devel mailing list for the discussion.

Bzip2 is about to get its first update since September 2010. According to Phoronix, the new version will include new build systems and security fixes, among other things. See Federico's blog post for details.

OpenMandriva Lx 4.0 was released recently. One major change for OM Lx 4 is switching from rpm5/URPMI to rpm.org/DNF for package management, a change that requires command-line users to learn the new DNF commands. See the OpenMandriva wiki for all the details and go here to install.

News GNOME Red Hat RHEL CentOS Security Wine Ubuntu Bzip2 OpenMandriva

Understanding Public Key Infrastructure and X.509 Certificates

Friday 21st of June 2019 12:30:00 PM
by Jeff Woods

An introduction to PKI, TLS and X.509, from the ground up.

Public Key Infrastructure (PKI) provides a framework of encryption and data communications standards used to secure communications over public networks. At the heart of PKI is a trust built among clients, servers and certificate authorities (CAs). This trust is established and propagated through the generation, exchange and verification of certificates.

This article focuses on understanding the certificates used to establish trust between clients and servers. These certificates are the most visible part of the PKI (especially when things break!), so understanding them will help to make sense of—and correct—many common errors.

As a brief introduction, imagine you want to connect to your bank to schedule a bill payment, but you want to ensure that your communication is secure. "Secure" in this context means not only that the content remains confidential, but also that the server with which you're communicating actually belongs to your bank.

Without protecting your information in transit, someone located between you and your bank could observe the credentials you use to log in to the server, your account information, or perhaps the parties to which your payments are being sent. Without being able to confirm the identity of the server, you might be surprised to learn that you are talking to an impostor (who now has access to your account information).

Transport layer security (TLS) is a suite of protocols used to negotiate a secured connection using PKI. TLS builds on the SSL standards of the late 1990s, and using it to secure client to server connections on the internet has become ubiquitous. Unfortunately, it remains one of the least understood technologies, with errors (often resulting from an incorrectly configured website) becoming a regular part of daily life. Because those errors are inconvenient, users regularly click through them without a second thought.
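To make that concrete, here's a minimal sketch of negotiating a TLS connection with Python's standard library (the host name is just an example):

import socket
import ssl

# assumes outbound HTTPS access to example.com
context = ssl.create_default_context()  # trusts the system's CA bundle
with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version())                 # negotiated protocol, e.g., TLSv1.3
        print(tls.getpeercert()["subject"])  # identity asserted by the certificate

If the server's certificate can't be validated against a trusted CA, wrap_socket() raises an error instead (ssl.SSLCertVerificationError on Python 3.7 and later), which is the programmatic counterpart of the browser warnings described above.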

Understanding the X.509 certificate, which is fully defined in RFC 5280, is key to making sense of those errors. Unfortunately, these certificates have a reputation for being opaque and difficult to manage, and with the multitude of formats used to encode them, that reputation is well deserved.

An X.509 certificate is a structured, binary record composed of key and value pairs. Keys represent field names, while values range from simple types (numbers, strings) to more complex structures (lists). The encoding from the key/value pairs to the structured binary record is done using a standard known as ASN.1 (Abstract Syntax Notation One), which is a platform-agnostic encoding format.
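For a hands-on look at those key/value fields, here's a minimal sketch using recent versions of the third-party cryptography package (the file name is an assumption):

from cryptography import x509

# assumes server.pem holds a PEM-encoded X.509 certificate
with open("server.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

# a few of the fields decoded from the ASN.1 record
print(cert.subject)           # whom the certificate identifies
print(cert.issuer)            # the CA that signed it
print(cert.serial_number)
print(cert.not_valid_before)  # start of the validity window
print(cert.not_valid_after)   # expiry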

Go to Full Article

Episode 21: From Mac to Linux

Thursday 20th of June 2019 08:58:20 PM
Reality 2.0 - Episode 21: From Mac to Linux

Katherine Druckman and Doc Searls talk to Linux Journal Editor at Large, Petros Koutoupis, about moving from Mac to Linux.


Kubernetes 1.15 Released, Offensive Security Reveals the 2019-2020 Roadmap for Kali Linux, Canonical Releases a New Kernel Live Patch for Ubuntu 18.04 and 16.04 LTS, Vivaldi 2.6 Now Available, and Mathieu Parent Announces GitLabracadabra

Thursday 20th of June 2019 03:35:36 PM

News briefs for June 20, 2019.

Kubernetes 1.15 was released yesterday. This is the second release of the year and contains 25 enhancements. The two main themes of the release are continuous improvement and extensibility. See the Kubernetes blog post for all the details.

Offensive Security yesterday revealed much of the 2019–2020 roadmap for the open-source Kali Linux project. The press release claims that "The strategy behind much of the roadmap is opening up Kali Linux even more to the community for contributions and helping speed the process of updates and improvements." See the blog post for more details on upcoming changes and new features for Kali Linux.

Canonical released a new kernel live patch for Ubuntu 18.04 LTS and 16.04 LTS to address the recently discovered TCP DoS vulnerabilities. From Softpedia News: "Canonical urges all users of the Ubuntu 18.04 LTS (Bionic Beaver) and Ubuntu 16.04 LTS (Xenial Xerus) operating system series who use the Linux kernel live patch to update their installations as soon as possible to the new kernel versions. These are rebootless kernel updates, so you won't need to restart your computer to apply them."

Vivaldi 2.6 was released today. This new version blocks abusive ads, improves security, and adds new options for quicker navigation and customization. You can download Vivaldi from here.

Mathieu Parent today announces GitLabracadabra 0.2.1. He started working on the tool, written in Python, to create and update projects in GitLab. He notes that "This tool is still very young and documentation is sparse, but following the 'release early, release often' motto I think it is ready for general usage."

News Kubernetes Kali Linux Security Canonical Ubuntu Vivaldi GitLab

Getting Started with Rust: Working with Files and Doing File I/O

Thursday 20th of June 2019 11:30:00 AM
by Mihalis Tsoukalos

How to develop command-line utilities in Rust.

This article demonstrates how to perform basic file and file I/O operations in Rust, and also introduces Rust's ownership concept and the Cargo tool. If you are seeing Rust code for the first time, this article should provide a pretty good idea of how Rust deals with files and file I/O, and if you've used Rust before, you still will appreciate the code examples in this article.

Ownership

It would be unfair to start talking about Rust without first discussing ownership. Ownership is Rust's way of giving the developer control over the lifetime of a variable while keeping the language safe. Ownership means that passing a variable also passes the ownership of its value to the new variable.

Another Rust feature related to ownership is borrowing. Borrowing is about taking control of a variable for a while and then giving that ownership back. Although borrowing allows you to have multiple references to a variable, only one reference can be mutable at any given time.

Instead of continuing to talk theoretically about ownership and borrowing, let's look at a code example called ownership.rs:

fn main() {
    // Part 1
    let integer = 321;
    let mut _my_integer = integer;
    println!("integer is {}", integer);
    println!("_my_integer is {}", _my_integer);
    _my_integer = 124;
    println!("_my_integer is {}", _my_integer);

    // Part 2
    let a_vector = vec![1, 2, 3, 4, 5];
    let ref _a_correct_vector = a_vector;
    println!("_a_correct_vector is {:?}", _a_correct_vector);

    // Part 3
    let mut a_var = 3.14;
    {
        let b_var = &mut a_var;
        *b_var = 3.14159;
    }
    println!("a_var is now {}", a_var);
}

So, what's happening here? In the first part, you define an integer variable (integer) and create a mutable variable based on integer. Rust performs a full copy for primitive data types because they are cheap to copy, so in this case, the integer and _my_integer variables are independent of each other.

However, for other types, such as a vector, you aren't allowed to change a variable after you have assigned it to another variable. Additionally, you should use a reference for the _a_correct_vector variable of Part 2 in the above example, because Rust won't make a copy of a_vector.

Go to Full Article

Docker Is Porting Its Container Platform to Microsoft Windows Subsystem for Linux 2, Ubuntu 19.10 Will Drop 32-Bit Builds, Children of Morta Still Coming to Linux and Vulnerabilities Discovered in the Linux TCP System

Wednesday 19th of June 2019 02:05:45 PM

News briefs for June 19, 2019.

The development team over at Docker is porting their container platform to Microsoft's Windows Subsystem for Linux 2 (WSL 2). It looks as if, pretty soon, Docker containers will be managed across both Linux and Windows. See ZDNet for details.

Canonical and the community behind Ubuntu announced that Ubuntu 19.10 will officially drop 32-bit (i386) builds. There has been talk of this for a while, but now it's official. See OMG! Ubuntu! for more information.

Dead Mage, the studio behind Children of Morta, posted an update stating that even after all the delays, they still will be bringing the game to Linux, GamingOnLinux reports. The project originally was funded via Kickstarter in 2015.

Security researchers over at Netflix uncovered some troubling security vulnerabilities inside the Linux (and FreeBSD) TCP subsystem, the worst of which is being called SACK Panic. It can permit remote attackers to induce a kernel panic from within your Linux operating system. Patches are available for affected Linux distributions. See Beta News for details.

News Docker Microsoft Containers Ubuntu games Security

Study the Elements with KDE's Kalzium

Wednesday 19th of June 2019 12:00:00 PM
by Joey Bernard

I've written about a number of chemistry packages in the past and all of the computational chemistry that you can do in a Linux environment. But, what is fundamental to chemistry? Why, the elements, of course. So in this article, I focus on how you can learn more about the elements that make up everything around you with Kalzium. KDE's Kalzium is kind of like a periodic table on steroids. Not only does it have information on each of the elements, it also has extra functionality to do other types of calculations.

Kalzium should be available within the package repositories for most distributions. In Debian-based distributions, you can install it with the command:

sudo apt-get install kalzium

When you start it, you get a simplified view of the classical periodic table.

Figure 1. The default view is of the classical ordering of the elements.

You can change this overall view either by clicking the drop-down menu in the top-left side of the window or via the View→Tables menu item. You can select from five different display formats. Clicking one of the elements pops open a new window with detailed information.

Figure 2. Kalzium provides a large number of details for each element.

The default detail pane is an overview of the various physical characteristics of the given element. This includes items like the melting point, electron affinity or atomic mass. Five other information panes also are available. The atom model provides a graphical representation of the electron orbitals around the nucleus of the given atom. The isotopes pane shows a table of values for each of the known isotopes for the selected element, ordered by neutron number. This includes things like the atomic mass or the half-life for radioactive isotopes. The miscellaneous detail pane includes some of the extra facts and trivia that might be of interest. The spectrum detail pane shows the emission and absorption spectra, both as a graphical display and a table of values. The last detail pane provides a list of external links where you can learn more about the selected element. This includes links to Wikipedia, the Jefferson Lab and the Webelements sites.

Figure 3. For those elements that are stable enough, you even can see the emission and absorption spectra.

Go to Full Article

Slimbook Launches New "Apollo" Linux PC, First Beta for Service Pack 5 of SUSE Linux Enterprise 12 Is Out, NVIDIA Binary Drivers for Ubuntu Growing Stale, DragonFly BSD v 5.6 Released and Qt v. 5.12.4 Now Available

Tuesday 18th of June 2019 03:16:57 PM

News briefs for June 18, 2019.

Slimbook, the Spanish Linux computer company, just unveiled a brand-new all-in-one Linux PC called the "Apollo". It has a 23.6-inch IPS LED display with 1920x1080 resolution and a choice between Intel i5-8500 and i7-8700 processors. It comes with up to 32GB of RAM and integrated Intel UHD 630 4K graphics. Pricing starts at $799.

The first beta for service pack 5 of SUSE Linux Enterprise 12 is out and available. It contains updated drivers, a new version of the OpenJDK, support for Intel Optane memory and more.

NVIDIA binary drivers for Ubuntu have grown a bit stale, which is pushing developers to update the drivers for Ubuntu 19.10.

DragonFly BSD version 5.6 is officially released with improvements in the management of virtual memory, updates and bug fixes to both the DRM code and especially to the HAMMER2 filesystem and much more.

Qt version 5.12.4 is available with support for OpenSSL version 1.1.1 and about 250 bug fixes.

News Slimbook Hardware SUSE NVIDIA Ubuntu DragonFly BSD qt

Android Low-Memory Killer--In or Out?

Tuesday 18th of June 2019 12:00:00 PM
by Zack Brown

One of the jobs of the Linux kernel—and all operating system kernels—is to manage the resources available to the system. When those resources get used up, what should it do? If the resource is RAM, there's not much choice. It's not feasible to take over the behavior of any piece of user software, understand what that software does, and make it more memory-efficient. Instead, the kernel has very little choice but to try to identify the software that is most responsible for using up the system's RAM and kill that process.

The official kernel does this with its OOM (out-of-memory) killer. But, Linux descendants like Android want a little more—they want to perform a similar form of garbage collection, but while the system is still fully responsive. They want a low-memory killer that doesn't wait until the last possible moment to terminate an app. The unspoken assumption is that phone apps are not so likely to run crucial systems like heart-lung machines or nuclear fusion reactors, so one running process (more or less) doesn't really matter on an Android machine.

A low-memory killer did exist in the Linux source tree until recently. It was removed, partly because of the overlap with the existing OOM code, and partly because the same functionality could be provided by a userspace process. And, one element of Linux kernel development is that if something can be done just as well in userspace, it should be done there.

Sultan Alsawaf recently threw open his window, thrust his head out, and shouted, "I'm mad as hell, and I'm not gonna take this anymore!" And, he re-implemented a low-memory killer for the Android kernel. He felt the userspace version was terrible and needed to be ditched. Among other things, he said, it killed too many processes and was too slow. He felt that the technical justification of migrating to the userspace dæmon had not been made clear, and an in-kernel solution was really the way to go.

In Sultan's implementation, the algorithm was simple—if a memory request failed, then the process was killed—no fuss, no muss and no rough stuff.

There was a unified wall of opposition to this patch. So much so that it became clear that Sultan's main purpose was not to submit the patch successfully, but to light a fire under the asses of the people maintaining the userspace version, in hopes that they might implement some of the improvements he wanted.

Michal Hocko articulated his opposition to Sultan's patch very clearly—the Linux kernel would not have two separate OOM killers sitting side by side. The proper OOM killer would be implemented as well as could be, and any low-memory killers and other memory finaglers would have to exist in userspace for particular projects like Android.

Go to Full Article

Linux on things that don't normally have Linux

Tuesday 18th of June 2019 05:18:13 AM

Please support Linux Journal by subscribing or becoming a patron.

FreeBSD 11.3-RC1 Available, Lenovo ThinkPad P To Come With Ubuntu Pre-Installed, Star Labs Now Offers Zorin OS On Laptops, Remote Monitoring Software Pulseway v6.3.3 Released, PCLinuxOS KDE Full Edition 2019.06, Linux Kernel Update

Monday 17th of June 2019 02:11:34 PM

FreeBSD 11.3-RC1 is now officially available with installation images for amd64, i386, aarch64, armv6 and more. This release contains mostly bug fixes.

If you are looking for a new laptop with Linux support out of the box, the Lenovo ThinkPad P series will have Ubuntu 18.04 pre-installed. The laptops will go on sale later this month in the US.

Speaking of laptops, the folks over at Zorin OS are teaming up with UK-based Star Labs to produce a beautiful computing experience. Starting on June 21st, Star Labs will be offering Zorin OS 15 as a pre-installed option on a variety of their laptops.

Pulseway, the real-time remote monitoring and management software, released version 6.3.3. Key updates include a large number of additional third-party titles, the ability to export reports in CSV format, and remote desktop file transfer.

PCLinuxOS KDE Full Edition 2019.06 is now out boasting a Linux 5.1.10 kernel, KDE Applications 19.04.2, KDE Frameworks 5.59.0, KDE Plasma 5.16.0 and more.

With the release of the latest Linux kernel release candidate, 5.2-rc5, Linus sees a light at the end of the tunnel: "But the good news is that we're getting to the later parts of the rc series, and things do seem to be calming down. I was hoping rc5 would end up smaller than rc4, and so it turned out." You can view a complete list of changes here.

News

Filesystem Hierarchy Standard

Monday 17th of June 2019 11:00:00 AM
by Kyle Rankin

What are these weird directories, and why are they there?

If you are new to the Linux command line, you may find yourself wondering why there are so many unusual directories, what they are there for, and why things are organized the way they are. In fact, if you aren't accustomed to how Linux organizes files, the directories can seem downright arbitrary with odd truncated names and, in many cases, redundant names. It turns out there's a method to this madness based on decades of UNIX convention, and in this article, I provide an introduction to the Linux directory structure.

Although each Linux distribution has its own quirks, the majority conform (for the most part) with the Filesystem Hierarchy Standard (FHS). The FHS project began in 1993, and the goal was to come to a consensus on how directories should be organized and which files should be stored where, so that distributions could have a single reference point from which to work. A lot of decisions about directory structure were based on traditional UNIX directory structures with a focus on servers and with an assumption that disk space was at a premium, so machines likely would have multiple hard drives.

/bin and /sbin

The /bin and /sbin directories are intended for storing binary executable files. Both directories store executables that are considered essential for booting the system (such as the mount command). The main difference between these directories is that the /sbin directory is intended for system binaries, or binaries that administrators will use to manage the system.

/boot

This directory stores all the bootloader files (these days, this is typically GRUB), kernel files and initrd files. It's often treated as a separate, small partition, so that the bootloader can read it more easily. With /boot on a separate partition, your root filesystem can use more sophisticated features that require kernel support whether that's an exotic filesystem, disk encryption or logical volume management.

/etc

The /etc directory is intended for storing system configuration files. If you need to configure a service on a Linux system, or change networking or other core settings, this is the first place to look. This is also a small and easy-to-back-up directory that contains most of the customizations you might make to your computer at the system level.

/home

The /home directory is the location on Linux systems where users are given directories for storing their own files. Each directory under /home is named after a particular user's user name and is owned by that user. On a server, these directories might store users' email, their SSH keys, or sometimes even local services users are running on high ports.
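As a quick way to tour the layout described so far, here's a minimal sketch (the one-line descriptions paraphrase this article):

import os

# the directories covered above, with their FHS roles
fhs_dirs = {
    "/bin":  "essential user binaries",
    "/sbin": "essential system-administration binaries",
    "/boot": "bootloader, kernel and initrd files",
    "/etc":  "system configuration files",
    "/home": "per-user home directories",
}

for path, role in fhs_dirs.items():
    status = "present" if os.path.isdir(path) else "missing"
    print(f"{path:6} {status:8} {role}")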

Go to Full Article

Webinar: Operationalizing DevSecOps

Sunday 16th of June 2019 01:42:13 PM
by Carlie Fairchild

In this webinar, Twistlock's James Jones and Linux Journal's Katherine Druckman discuss hardening your DevOps environments and processes. Topics covered:

  • The keys to DevSecOps success
  • Tangible benefits of DevSecOps
  • Steps and tools involved with building, shipping, and running containers
  • DevSecOps creates a feedback loop
  • Seven steps to containers
  • And more

Register to watch this webinar on-demand: 

https://zoom.us/webinar/register/WN_h6Z3aGxtQzSdHIa2kFv_VA

Go to Full Article

Canonical Announces Embedded Computer Manifold 2 for Drone Developers, Request For Help Testing Snap Package, PHP v7.4.0 Available, PyCharm 2019.2 EAP3 Released, Talks To Port Over Microsoft's Chromium-Based Edge browser To Linux

Friday 14th of June 2019 12:51:30 PM

Yesterday, Canonical, the company behind Ubuntu, announced the availability of Manifold 2, a high-performance embedded computer offered by DJI, the leading enterprise drone manufacturer. This will give developers access to containerized software packages (e.g., Snaps), allowing for infinite evolution and functionality changes.

It looks as if Ubuntu is transitioning the Chromium Debian package to a Snap one. The community behind this effort is asking for assistance in testing the Snap package.

The first alpha release of PHP version 7.4.0 is now available. And while it contains a large list of bug fixes and feature enhancements, remember, it is an unstable build and should not be used in production.

PyCharm 2019.2 EAP3 is officially released with support for Python Positional-Only Parameters (PEP-570), Restart Kernel Action and more.

There are talks, but at the same time no firm plans, to port Microsoft's Chromium-based Edge browser to Linux. Its developers say that it may happen in the near future, but they are too busy to do it today.

News

Wickr: Redefining the Messaging Platform, an Interview with Co-Founder, Chris Howell

Friday 14th of June 2019 11:30:00 AM
by Petros Koutoupis

In the modern era, messaging applications are a constant target for attackers, exposing vulnerabilities and disclosing the sensitive information of nation states and inappropriate insider-employee behaviors or practices. There is a constant need to prioritize one's cybersecurity and upgrade one's infrastructure to the latest and greatest of defensive technologies. However, the messaging tools that these same organizations tend to rely on often are the last to be secured, if at all. This is where Wickr comes in. Wickr is an instant-messaging application and platform offering end-to-end encryption and content-expiring messages. Its parent company of the same name takes security seriously and has built a product to showcase that. I was able to chat with co-founder and CTO, Chris Howell, who was gracious enough to provide me with more information on what Wickr can achieve, how it works and who would benefit from it.

Petros Koutoupis: Please introduce yourself and tell us about your role at Wickr.

Chris Howell: I'm co-founder/CTO and responsible for technical strategy, security and product design. You can read my full bio here.

Petros: What do you see as a weak point in today's messaging apps?

Chris: By far, at least when it comes to security, the weak point of virtually all messaging apps to date (and all other apps and services, really) is that they're built with the assumption that users will have to trust the service. The problem with that way of thinking is can we really trust the service? That's not to say there are bad people running them, necessarily, but how many breaches (for example, Equifax 2017) or abuses (for example, Snapchat 2019) do we need to see to answer that question? Once the service is built that way, messaging users generally suffer in two ways. First, at some key point on their way to the recipient, messages are readable by some number of folks beyond the recipient. Now, the service typically will point to various security certifications and processes to make us feel okay about that, but in most cases where there are humans involved, what can happen will happen, and whatever controls are put in place to limit access to user data amount to little more than a pinky promise—which when broken, of course, leaves the user with a loss of privacy and security. Second, having been so trusted, the service typically prioritizes "virility" and its own growth over the users' need to control their own data, leading to behavior like scanning message content for marketing purposes, retaining messages longer than necessary, and abusing contacts to aid the growth of the service.

Petros: How does Wickr help address that?

Go to Full Article

Atari Opens Pre-Orders to VCS Retro Gaming Console, Gimp v2.10.12 Released, Distro-Specific Store Pages For Snap Apps Launches, Preview For Built-In Linux Kernel for Windows 10 Available in WSL 2, fs-verity Module MAY Merge Into Mainline Kernel

Thursday 13th of June 2019 02:44:29 PM

Atari has officially opened up pre-orders to the VCS retro gaming console for $250. New orders are expected to be fulfilled by March 2020.

Gimp version 2.10.12 has officially been released and it mostly contains bug fixes, most of which were introduced in the large release of version 2.10.10. There are also some noteworthy improvements which include an improved Curves tool, layers support for TIFF exporting and more.

While the Snap format is intended to run on many other Linux distributions, the Snapcraft team is creating a more inviting and improved experience [for non-Ubuntu users] by launching distro-specific store pages for Snap apps.

The preview for the built-in Linux kernel for Windows 10 is officially available in the new Windows Subsystem for Linux 2 (WSL 2). WSL 2 was announced back in May during Microsoft's Build developer conference and is based on version 4.19 of the Linux kernel.

A new version of the fs-verity module MAY eventually find its way merged into the mainline kernel. The purpose of this module is to make individual files read-only and enable the kernel to detect modifications made to them, whether online or offline. The new patch set was posted on May 23 and the story behind it can be found here.

News

FOSS Project Spotlight: OpenNebula

Thursday 13th of June 2019 12:00:00 PM
by Michael Abdou

OpenNebula recently released its latest version, 5.8 "Edge", which now offers pivotal capabilities to allow users to extend their cloud infrastructure to the Edge easily and effectively.

Why OpenNebula?

For anyone looking for an open-source, enterprise solution to orchestrate data-center virtualization and cloud management with ease and flexibility, OpenNebula is a fine candidate that includes:

  • On-demand provisioning of virtual data centers.
  • Features like capacity management, resource optimization, high availability and business continuity.
  • The ability to create a multi-tenant cloud layer on various types of newly built or existing infrastructure management solutions (such as VMware vCenter).
  • The flexibility to create federated clouds across disparate geographies, as well as hybrid cloud solutions integrating with public cloud providers like AWS and Microsoft Azure.

And, it's lightweight, easy to install, infrastructure-agnostic and thoroughly extensible.

Figure 1. High-Level Features

Check here for a more detailed look at OpenNebula features.

New Features in 5.8 "Edge"

With the current conversation shifting away from centralized cloud infrastructure and refocusing toward bringing computing power closer to the users in a concerted effort to reduce latency, OpenNebula's 5.8 "Edge" release is a direct response to evolving computing and infrastructure needs, and it offers fresh capabilities to extend one's cloud functionality to the edge. Gaming companies, among others using OpenNebula, were among the first to push for these features (yet they don't have to be the only ones to benefit from them).

LXD Container Support

In addition to supporting KVM hypervisors and offering a cloud management platform for VMware vCenter server components, OpenNebula now provides native support for LXD containers. This support allows users and organizations to benefit from:

  • A smaller storage and memory footprint.
  • Lack of virtualized hardware.
  • Faster workloads.
  • Faster deployment times.

From a compatibility perspective, OpenNebula 5.8 and LXD provide the following:

Go to Full Article

Endless OS 3.6.0 Released, Wind River Announces Enhancements to Wind River Linux, Arch Linux 2019.06.01 Is Out, NGD Systems Announces the Newport M.2 SSD and IBM Launches AutoAI for Watson Studio

Wednesday 12th of June 2019 01:31:21 PM

News briefs for June 12, 2019.

Endless OS 3.6.0 has been released. This release has "updated the base OS packages to the latest versions from Debian 'buster' (the forthcoming stable release), most desktop components to the versions from GNOME 3.32, and Linux kernel 5.0." It also includes many new features, performance improvements and bug fixes. Go here to download.

Wind River announces the latest enhancements to Wind River Linux: "This release delivers technology to ease adoption of containers in embedded systems. It provides resources such as pre-built containers, tools, and documentation as well as support for frameworks such as Docker and Kubernetes, all of which can help embedded system developers in their journey to leverage or deploy cloud-native development approaches, especially relevant for appliances at the network edge. Wind River Linux is freely available for download."

Arch Linux 2019.06.01 has been released, marking the first ISO snapshot to ship with a kernel from the 5.1 series. Go here for download/update instructions. Softpedia News reports that the updated kernel means "more preparations for the year 2038, more scalable and faster asynchronous I/O, support for configuring Zstd compression levels in the Btrfs file system, better file system monitorization, and a new cpuidle governor called TEO."

NGD Systems announces the Newport M.2 SSD. According to Blocks and Files, "the Newport M.2 offers 4TB or 8TB of storage in the M.2 22110 form factor — 22mm by 110mm. NGD claims this is twice the capacity of the next largest available M.2 NVMe SSDs, with an average power consumption of less than 1w per TB. The host interface is NVMe 1.3 PCIe Gen 3.0 x4." NGD claims that "The Newport M.2 provides high-performance, high-capacity, low-latency processing for edge computing applications that cannot afford a cluster of 1U or 2U servers to do their processing, whether due to size, power, or compute performance."

IBM adds new automation capabilities to Watson Studio with AutoAI. The press release states that AutoAI is "a new set of capabilities for Watson Studio designed to automate many of the often complicated and laborious tasks associated with designing, optimizing and governing AI in the enterprise. As a result, data scientists can be freed up to dedicate more time to designing, testing and deploying machine learning (ML) models — the work of AI."

News Endless OS Distributions Wind River Arch Linux Storage NVMe IBM AI

More in Tux Machines

LWN: Spectre, Linux and Debian Development

  • Grand Schemozzle: Spectre continues to haunt

    The Spectre v1 hardware vulnerability is often characterized as allowing array bounds checks to be bypassed via speculative execution. While that is true, it is not the full extent of the shenanigans allowed by this particular class of vulnerabilities. For a demonstration of that fact, one need look no further than the "SWAPGS vulnerability" known as CVE-2019-1125 to the wider world or as "Grand Schemozzle" to the select group of developers who addressed it in the Linux kernel.

    Segments are mostly an architectural relic from the earliest days of x86; to a great extent, they did not survive into the 64-bit era. That said, a few segments still exist for specific tasks; these include FS and GS. The most common use for GS in current Linux systems is for thread-local or CPU-local storage; in the kernel, the GS segment points into the per-CPU data area. User space is allowed to make its own use of GS; the arch_prctl() system call can be used to change its value.

    As one might expect, the kernel needs to take care to use its own GS pointer rather than something that user space came up with. The x86 architecture obligingly provides an instruction, SWAPGS, to make that relatively easy. On entry into the kernel, a SWAPGS instruction will exchange the current GS segment pointer with a known value (which is kept in a model-specific register); executing SWAPGS again before returning to user space will restore the user-space value. Some carefully placed SWAPGS instructions will thus prevent the kernel from ever running with anything other than its own GS pointer. Or so one would think.

  • Long-term get_user_pages() and truncate(): solved at last?

    Technologies like RDMA benefit from the ability to map file-backed pages into memory. This benefit extends to persistent-memory devices, where the backing store for the file can be mapped directly without the need to go through the kernel's page cache. There is a fundamental conflict, though, between mapping a file's backing store directly and letting the filesystem code modify that file's on-disk layout, especially when the mapping is held in place for a long time (as RDMA is wont to do). The problem seems intractable, but there may yet be a solution in the form of this patch set (marked "V1,000,002") from Ira Weiny.

    The problems raised by the intersection of mapping a file (via get_user_pages()), persistent memory, and layout changes by the filesystem were the topic of a contentious session at the 2019 Linux Storage, Filesystem, and Memory-Management Summit. The core question can be reduced to this: what should happen if one process calls truncate() while another has an active get_user_pages() mapping that pins some or all of that file's pages? If the filesystem actually truncates the file while leaving the pages mapped, data corruption will certainly ensue. The options discussed in the session were to either fail the truncate() call or to revoke the mapping, causing the process that mapped the pages to receive a SIGBUS signal if it tries to access them afterward. There were passionate proponents for both options, and no conclusion was reached.

    Weiny's new patch set resolves the question by causing an operation like truncate() to fail if long-term mappings exist on the file in question. But it also requires user space to jump through some hoops before such mappings can be created in the first place. This approach comes from the conclusion that, in the real world, there is no rational use case where somebody might want to truncate a file that has been pinned into place for use with RDMA, so there is no reason to make that operation work. There is ample reason, though, for preventing filesystem corruption and for informing an application that gets into such a situation that it has done something wrong.

  • Hardening the "file" utility for Debian

    In addition, he had already encountered problems with file running in environments with non-standard libraries that were loaded using the LD_PRELOAD environment variable. Those libraries can (and do) make system calls that the regular file binary does not make; the system calls were disallowed by the seccomp() filter.

    Building a Debian package often uses FakeRoot (or fakeroot) to run commands in a way that appears that they have root privileges for filesystem operations—without actually granting any extra privileges. That is done so that tarballs and the like can be created containing files with owners other than the user ID running the Debian packaging tools, for example. Fakeroot maintains a mapping of the "changes" made to owners, groups, and permissions for files so that it can report those to other tools that access them. It does so by interposing a library ahead of the GNU C library (glibc) to intercept file operations. In order to do its job, fakeroot spawns a daemon (faked) that is used to maintain the state of the changes that programs make inside of the fakeroot. The libfakeroot library that is loaded with LD_PRELOAD will then communicate to the daemon via either System V (sysv) interprocess communication (IPC) calls or by using TCP/IP.

    Biedl referred to a bug report in his message, where Helmut Grohne had reported a problem with running file inside a fakeroot.

Flameshot is a brilliant screenshot tool for Linux

The default screenshot tool in Ubuntu is alright for basic snips, but if you want a really good one, you need to install a third-party screenshot app. Shutter is probably my favorite, but I decided to give Flameshot a try. Packages are available for various distributions including Ubuntu, Arch, openSUSE and Debian. You can find installation instructions on the official project website. Read more

Android Leftovers

IBM/Red Hat and Intel Leftovers

  • Troubleshooting Red Hat OpenShift applications with throwaway containers

    Imagine this scenario: Your cool microservice works fine from your local machine but fails when deployed into your Red Hat OpenShift cluster. You cannot see anything wrong with the code or anything wrong in your services, configuration maps, secrets, and other resources. But, you know something is not right. How do you look at things from the same perspective as your containerized application? How do you compare the runtime environment from your local application with the one from your container? If you performed your due diligence, you wrote unit tests. There are no hard-coded configurations or hidden assumptions about the runtime environment. The cause should be related to the configuration your application receives inside OpenShift. Is it time to run your app under a step-by-step debugger or add tons of logging statements to your code? We’ll show how two features of the OpenShift command-line client can help: the oc run and oc debug commands.

  • What piece of advice had the greatest impact on your career?

    I love learning the what, why, and how of new open source projects, especially when they gain popularity in the DevOps space. Classification as a "DevOps technology" tends to mean scalable, collaborative systems that go across a broad range of challenges—from message bus to monitoring and back again. There is always something new to explore, install, spin up, and explore.

  • How DevOps is like auto racing

    When I talk about desired outcomes or answer a question about where to get started with any part of a DevOps initiative, I like to mention NASCAR or Formula 1 racing. Crew chiefs for these race teams have a goal: finish in the best place possible with the resources available while overcoming the adversity thrown at you. If the team feels capable, the goal gets moved up a series of levels to holding a trophy at the end of the race. To achieve their goals, race teams don’t think from start to finish; they flip the table to look at the race from the end goal to the beginning. They set a goal, a stretch goal, and then work backward from that goal to determine how to get there. Work is delegated to team members to push toward the objectives that will get the team to the desired outcome. [...] Race teams practice pit stops all week before the race. They do weight training and cardio programs to stay physically ready for the grueling conditions of race day. They are continually collaborating to address any issue that comes up. Software teams should also practice software releases often. If safety systems are in place and practice runs have been going well, they can release to production more frequently. Speed makes things safer in this mindset. It’s not about doing the “right” thing; it’s about addressing as many blockers to the desired outcome (goal) as possible and then collaborating and adjusting based on the real-time feedback that’s observed. Expecting anomalies and working to improve quality and minimize the impact of those anomalies is the expectation of everyone in a DevOps world.

  • Deep Learning Reference Stack v4.0 Now Available

    Artificial Intelligence (AI) continues to represent one of the biggest transformations underway, promising to impact everything from the devices we use to cloud technologies, and reshape infrastructure, even entire industries. Intel is committed to advancing the Deep Learning (DL) workloads that power AI by accelerating enterprise and ecosystem development. From our extensive work developing AI solutions, Intel understands how complex it is to create and deploy applications for deep learning workloads. That's why we developed an integrated Deep Learning Reference Stack, optimized for Intel Xeon Scalable processors, and released the companion Data Analytics Reference Stack. Today, we're proud to announce the next Deep Learning Reference Stack release, incorporating customer feedback and delivering an enhanced user experience with support for expanded use cases.

  • Clear Linux Releases Deep Learning Reference Stack 4.0 For Better AI Performance

    Intel's Clear Linux team on Wednesday announced their Deep Learning Reference Stack 4.0 during the Linux Foundation's Open-Source Summit North America event taking place in San Diego. Clear Linux's Deep Learning Reference Stack continues to be engineered for showing off the most features and maximum performance for those interested in AI / deep learning and running on Intel Xeon Scalable CPUs. This optimized stack allows developers to more easily get going with a tuned deep learning stack that should already be offering near optimal performance.