
Linux Journal


Ubuntu 20.10 “Groovy Gorilla” Arrives With Linux 5.8, GNOME 3.38, Raspberry Pi 4 Support

Thursday 22nd of October 2020 06:21:51 PM

Just two days ago, Ubuntu marked the 16th anniversary of its first-ever release, Ubuntu 4.10 “Warty Warthog,” which showed that Linux could be a user-friendly operating system.

Back to the present: after a six-month development cycle following the current long-term release, Ubuntu 20.04 “Focal Fossa,” Canonical has announced a new version, Ubuntu 20.10 “Groovy Gorilla,” along with its seven official flavors: Kubuntu, Lubuntu, Ubuntu MATE, Ubuntu Kylin, Xubuntu, Ubuntu Budgie, and Ubuntu Studio.

Ubuntu 20.10 is a short-term (non-LTS) release, which means it will be supported for nine months, until July 2021. Though v20.10 is not a major release, it does come with a lot of exciting new features. So, let’s see what Ubuntu 20.10 “Groovy Gorilla” has to offer:

New Features in Ubuntu 20.10 “Groovy Gorilla”

Ubuntu desktop for Raspberry Pi 4

Starting with one of the most important enhancements, Ubuntu 20.10 is the first Ubuntu release to feature desktop images for the Raspberry Pi 4. Yes, you can now download and run the Ubuntu 20.10 desktop on Raspberry Pi models with at least 4GB of RAM.

Both the Server and Desktop images also support the new Raspberry Pi Compute Module 4. The 20.10 images may still boot on earlier models, but the new Desktop images are built only for the arm64 architecture and officially support only the Pi 4 variants with 4GB or 8GB of RAM.

Linux Kernel 5.8

Replacing the previous Linux kernel 5.4, the latest Ubuntu 20.10 ships the new Linux kernel 5.8, dubbed “the biggest release of all time” by Linus Torvalds, as it contains over 17,595 commits, the highest of any release so far.

So it’s no surprise that Linux 5.8 brings numerous updates, new features, and hardware support: for instance, the Kernel Event Notification Mechanism, extended IPv6 Multi-Protocol Label Switching (MPLS) support, inline encryption hardware support, Thunderbolt support for Intel Tiger Lake and non-x86 systems, and initial support for booting POWER10 processors.

GNOME 3.38 Desktop Environment

Another key change in Ubuntu 20.10 is the latest version of the GNOME desktop environment, which enhances Ubuntu’s visual appearance, performance, and user experience.

One of my favorite features that GNOME 3.38 introduces is a much-needed separate “Restart” button in the System menu.

Among other enhancements, GNOME 3.38 also includes:

  • Better multi-monitor support
  • Revamped GNOME Screenshot app
  • Customizable App Grid with no “Frequent Apps” tab
  • Battery percentage indicator
  • New Welcome Tour app written in Rust
  • Core GNOME apps improvements

Share Wi-Fi Hotspot Via QR Code

If you like to share your system’s Internet connection with other devices wirelessly, the ability to share a Wi-Fi hotspot through a QR code will definitely please you.

Thanks to GNOME 3.38, you can now turn your Linux system into a portable Wi-Fi hotspot and share access via a QR code with devices such as laptops, tablets, and phones.

Add events in GNOME Calendar app

Tend to forget events? The pre-installed GNOME Calendar app now lets you add new events (birthdays, meetings, reminders, releases), which are displayed in the message tray. Instead of adding new events manually, you can also sync your events from Google, Microsoft, or Nextcloud calendars after adding your online accounts in Settings.

Active Directory Support

In the Ubiquity installer, Ubuntu 20.10 has also added an optional feature to enable Active Directory (AD) integration. If you check the option, you’ll be directed to configure the AD by giving information about the domain, administrator, and password.

Tools and Software upgrade

Ubuntu 20.10 also updates many tools, software packages, and subsystems to new versions. These include:

  • glibc 2.32, GCC 10, LLVM 11
  • OpenJDK 11
  • rustc 1.41
  • Python 3.8.6, Ruby 2.7.0, PHP 7.4.9
  • perl 5.30
  • golang 1.13
  • Firefox 81
  • LibreOffice 7.0.2
  • Thunderbird 78.3.2
  • BlueZ 5.55
  • NetworkManager 1.26.2

Other enhancements to Ubuntu 20.10:
  • nftables replaces iptables as the default backend for the firewall
  • Better support for fingerprint login
  • Cloud images with KVM kernels boot without an initramfs by default
  • Snap pre-seeding optimizations for boot time improvements

The full release notes for Ubuntu 20.10 are also available to read.

How To Download Or Upgrade To Ubuntu 20.10

If you’re looking for a fresh installation of Ubuntu 20.10, download the ISO image, which is available for several platforms such as Desktop, Server, Cloud, and IoT.

But if you’re already using a previous version of Ubuntu, you can easily upgrade your system to Ubuntu 20.10. To upgrade, you must be running Ubuntu 20.04 LTS, as you cannot reach 20.10 directly from 19.10, 19.04, 18.10, 18.04, 17.04, or 16.04. You should first hop to v20.04 and then to the latest v20.10.

As Ubuntu 20.10 is a non-LTS version, and by design Ubuntu only notifies you of new LTS releases, you need to upgrade manually, either with a GUI method using the built-in Software Updater tool or with a command-line method using the terminal.

For the command-line method, open a terminal and run the following commands:

sudo apt update && sudo apt upgrade

sudo do-release-upgrade -d -m desktop

Or else, if you’re not a terminal-centric person, there’s an official upgrade guide using the GUI Software Updater tool.

Enjoy Groovy Gorilla!


Btrfs on CentOS: Living with Loopback

Tuesday 20th of October 2020 03:24:25 PM
by Charles Fisher

Introduction

The btrfs filesystem has taunted the Linux community for years, offering a stunning array of features and capability but never earning universal acclaim. Btrfs is perhaps more deserving of patience, as its promised capabilities dwarf all peers, earning it vocal proponents with great influence. Still, none can deny that btrfs is unfinished: many features are very new, and stability concerns remain for common functions.

Most of the intended goals of btrfs have been met. However, Red Hat famously cut continued btrfs support from their 7.4 release, and has allowed the code to stagnate in their backported kernel since that time. The Fedora project announced their intention to adopt btrfs as the default filesystem for variants of their distribution, in a seeming juxtaposition. SUSE has maintained btrfs support for their own distribution and the greater community for many years.

For users, the most desirable features of btrfs are transparent compression and snapshots; these features are stable, and relatively easy to add as a veneer to stock CentOS (and its peers). Administrators are further compelled by adjustable checksums, scrubs, and the ability to enlarge as well as (surprisingly) shrink filesystem images, while some advanced btrfs topics (e.g., deduplication, RAID, ext4 conversion) aren't really germane for minimal loopback usage. The systemd init package also has dependencies upon btrfs, among them machinectl and systemd-nspawn. Despite these features, there are many usage patterns that are not directly appropriate for use with btrfs. It is hostile to most databases and many other programs with incompatible I/O, and should be approached with some care.


How to Secure Your Website with OpenSSL and SSL Certificates

Friday 16th of October 2020 04:10:47 PM
by Tedley Meralus

The Internet has become the number one resource for news, information, events, and all things social. As most people know, there are many ways to create a website of your own and capture your own piece of the internet to share your stories, ideas, or even things you like with others. When doing so, it is important to make sure you stay protected on the internet the same way you would in the real world. There are many steps to take in the real world to stay safe; in this article, however, we will be talking about staying secure on the web with an SSL certificate.

OpenSSL is a command-line tool we can use as a type of "bodyguard" for our web servers and applications. It can be used for a variety of tasks related to HTTPS, such as generating private keys and CSRs (certificate signing requests). This article will break down what OpenSSL is, what it does, and give examples of how to use it to keep your website secure. Most online web/domain platforms provide SSL certificates for a fixed yearly price. This method, although it takes a bit of technical knowledge, can save you some money and keep you secure on the web.

* For example purposes, we will use testmastersite.com for commands and examples

How this guide may help you:

  • Using OpenSSL to generate and configure CSRs
  • Understanding SSL certificates and their importance
  • Learn about certificate signing requests (CSRs)
  • Learn how to create your own CSR and private key
  • Learn about OpenSSL and its common use cases

Requirements

OpenSSL

The first thing to do is generate a 2048-bit RSA key pair on your machine. The pair I'm referring to consists of your private and public keys. There are many tools online to do this, but for this example we will be working with OpenSSL.
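A minimal sketch of that key-pair and CSR generation with OpenSSL, using the example domain testmastersite.com from above (the filenames and the subject fields other than the domain are just examples):

```shell
# Generate a 2048-bit RSA private key (keep this file secret)
openssl genrsa -out testmastersite.key 2048

# Create a CSR for the domain; -subj fills in the fields non-interactively
openssl req -new -key testmastersite.key -out testmastersite.csr \
    -subj "/C=US/O=Test Master Site/CN=testmastersite.com"

# Sanity-check the CSR before sending it to a certificate authority
openssl req -in testmastersite.csr -noout -verify -subject
```

The resulting `.csr` file is what you submit to a certificate authority; the `.key` file never leaves your server.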

What are SSL certificates and who cares?

According to GlobalSign.com, an SSL certificate is a small data file that digitally binds a cryptographic key to an organization's details. When installed on a web server, it activates the padlock and the https protocol, and allows secure connections from a web server to a browser. Let me break that down for you. An SSL certificate is like a bodyguard for your website. To confirm that a site is using SSL, you can typically check that the site's URL starts with https rather than http. The "s" stands for Secure.

  • Example SECURE Site: https://www.testmastersite.com/


Pretty Good Privacy (PGP) and Digital Signatures

Wednesday 14th of October 2020 03:29:57 PM
by Ankur Kothiwal

If you have ever sent plaintext confidential emails to someone (most likely you have), have you ever wondered whether the mail could be tampered with or read by anyone during transit? If not, you should!

Any unencrypted email is like a postcard. It can be seen by anyone (crackers/security hackers, corporations, governments, or anyone with the required skills), during its transit.

In 1991, Phil Zimmermann, a free-speech activist and anti-nuclear pacifist, developed Pretty Good Privacy (PGP), the first software available to the general public that utilized RSA (a public-key cryptosystem, discussed later) for email encryption and signing. After having a friend post the program on the worldwide Usenet, Zimmermann became the target of a U.S. government criminal investigation for illegal weapons export, because encryption tools were considered as such (the investigation was eventually dropped without charges). Zimmermann later founded PGP Inc., which is now part of Symantec Corporation.

In 1997 PGP Inc. submitted a standardization proposal to the Internet Engineering Task Force. The standard was called OpenPGP and was defined in 1998 in the IETF document RFC 2440. The latest version of the OpenPGP standard is described in RFC 4880, published in 2007.

Nowadays there are many OpenPGP-compliant products: the most widespread is probably GnuPG (GNU Privacy Guard, or GPG for short) which has been developed since 1999 by Werner Koch. GnuPG is free, open-source, and available for several platforms. It is a command-line only tool.

PGP is used for digital signatures, encryption (and decryption, obviously; nobody would use software that only encrypts!), compression, and Radix-64 conversion.
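As a quick, concrete taste of an OpenPGP tool in action, here is a sketch of passphrase-based (symmetric) encryption and decryption with GnuPG; the filenames and passphrase are examples, and the --pinentry-mode loopback flag assumes GnuPG 2.1 or later:

```shell
# Encrypt a file with a passphrase using AES256 (symmetric mode)
echo "meet me at noon" > message.txt
gpg --batch --yes --pinentry-mode loopback --passphrase "example-passphrase" \
    --symmetric --cipher-algo AES256 -o message.txt.gpg message.txt

# Decrypt it again; the recovered plaintext is written to stdout
gpg --batch --quiet --pinentry-mode loopback --passphrase "example-passphrase" \
    -d message.txt.gpg
```

Public-key encryption and signing work similarly but use `--encrypt -r <recipient>` and `--sign` against keys in your keyring rather than a shared passphrase.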

In this article, we will explain encryption and digital signatures.

So what is encryption, how does it work, and how does it benefit us?

Encryption (Confidentiality)

Encryption is the process of converting information into ciphertext, an unreadable form. A very simple example of encrypting text:

Hello this is Knownymous and this is a ciphertext.

Uryyb guvf vf Xabjalzbhf naq guvf vf n pvcuregrkg.

If you read it carefully, you will notice that every letter of the English alphabet is replaced by the letter 13 positions ahead of it in the alphabet, so 13 is the key here, needed to decrypt it. This is known as the Caesar cipher (yes, the method is named after Julius Caesar).
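This shift-by-13 transformation (ROT13) is easy to reproduce on any Linux box with tr; a minimal sketch:

```shell
# ROT13: map each letter to the letter 13 positions ahead, wrapping around
echo "Hello this is Knownymous and this is a ciphertext." | tr 'A-Za-z' 'N-ZA-Mn-za-m'
# prints: Uryyb guvf vf Xabjalzbhf naq guvf vf n pvcuregrkg.
```

Because the alphabet has 26 letters, ROT13 is its own inverse: piping the ciphertext through the same command recovers the plaintext.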

Since then, many encryption techniques (cryptography) have been developed, such as Diffie–Hellman key exchange (DH) and RSA.

The techniques can be used in two ways:


Mark Text vs. Typora: Best Markdown Editor For Linux?

Tuesday 13th of October 2020 05:22:34 PM
by Sarvottam Kumar

Markdown is a widely used markup language, which is now not only used for creating documentation or notes but also for creating static websites (using Hugo or Jekyll). It is supported by major sites like GitHub, Bitbucket, GitLab, Stack Exchange, and Reddit.

Markdown follows a simple, easy-to-read and easy-to-write plain-text formatting syntax. By just using non-alphabetic characters like the asterisk (*), hash (#), backtick (`), or dash (-), you can format text as bold, italics, lists, headings, tables, and so on.
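For instance, here is a small illustrative snippet (the text itself is just an example):

```markdown
# A Heading

Some **bold** text, some *italic* text, and `inline code`.

- A list item
- Another list item
```

An editor such as Mark Text or Typora renders this live as a heading, styled text, and a bulleted list while you type.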

Now, to write in Markdown, you can choose any of the Markdown applications available for Windows, macOS, and Linux desktops. You can even use web-based, in-browser Markdown editors like StackEdit. But if you’re specifically looking for the best Markdown editor for the Linux desktop, I present two Markdown editors: Mark Text and Typora.

I’ve also tried other popular Markdown apps available for Linux, such as Joplin, Remarkable, ReText, and Mark My Words. But the reason I chose Mark Text and Typora is their seamless live preview feature and distraction-free user interface. Unlike other Markdown editors, these two do not have a dual-panel (writing and preview window) interface, which is what makes them stand out from the others.

Before I start discussing the extensive dissimilarities between Typora and Mark Text, let me briefly tell you the common features that both of them offer.

Similarities Between Mark Text And Typora
  • Real time preview
  • Export to HTML and PDF
  • GitHub Flavored Markdown
  • Inline styles
  • Code and Math Blocks
  • Support for Flowchart, Sequence diagram
  • Light and Dark Themes
  • Source Code, Typewriter, and Focus mode
  • Auto save
  • Paste images directly from clipboard
  • Available for Linux, macOS, and Windows

Differences Between Mark Text And Typora

Installation

If you’re a beginner using a non-Debian Linux distribution, you may find it difficult to install Typora. This is because Typora is packaged and tested only on Ubuntu; hence, you can install it easily on Debian-based distros like Ubuntu and Linux Mint using commands or Debian packages, but not on distros like Arch or Void, where you have to install from binary packages and no official command is available.


Quick Tutorial on How to Use Shell Scripting in Linux: Coin Toss App

Monday 12th of October 2020 07:08:19 PM
by Nawaz Abbasi

Simply put, a Shell Script is a program that is run by a UNIX/Linux shell. It is a file that contains a series of commands which are executed sequentially as if they were entered on the command line interface (CLI) or terminal.

In this quick tutorial on Shell Scripting, we will write a simple program to toss a coin. Basically, the output of our program should be either HEADS or TAILS (of course, randomly).

To start with, the first line of a shell script should indicate which interpreter/shell is to be used to execute the script. In this tutorial we will be using /bin/bash and it will be denoted as #!/bin/bash which is called a shebang!

Next, we will be using an internal Bash shell variable named RANDOM. It returns a random (actually, pseudorandom) integer in the range 0-32767. We will use this variable to get one of two values: either 0 (for HEADS) or 1 (for TAILS). This is done via a simple arithmetic operation in the shell using % (the modulus operator, which returns the remainder): $((RANDOM%2)), and the result is stored in a variable. So, the second line of our program becomes Result=$((RANDOM%2)). Note that there should be no spaces around = (the assignment operator) when assigning a value to a variable in shell scripts.

At last, we just need to print HEADS if we got 0 or TAILS if we got 1, in the Result variable. Perhaps you guessed it by now, we will use if conditional statements for this. Within the conditions, we will compare the value of Result variable with 0 and 1; and print HEADS or TAILS accordingly. For this, the operator for integer comparison -eq (is equal to) is used to check if the value of two operands are equal or not.

Ergo, our shell script looks like the following:

 

#!/bin/bash
Result=$((RANDOM%2))
if [[ ${Result} -eq 0 ]]; then
    echo HEADS
elif [[ ${Result} -eq 1 ]]; then
    echo TAILS
fi

 

Let’s say we name the script cointoss.sh. Note that the .sh extension only makes it identifiable to users as a shell script; Linux itself is an extensionless system.

Finally, to run the script we need to make it executable, which can be done using the chmod command: chmod +x cointoss.sh

A few sample script executions:

 

$ ./cointoss.sh
TAILS
$ ./cointoss.sh
HEADS
$ ./cointoss.sh
HEADS
$ ./cointoss.sh
TAILS

 

 

To wrap up, in this quick tutorial about writing shell scripts, we learned about shebang, RANDOM, variable assignment, an arithmetic operation using Modulus operator %, if conditional statements, integer comparison operator -eq and executing a shell script.


How To Kill Zombie Processes on Linux

Thursday 8th of October 2020 07:30:00 PM
by Nawaz Abbasi

Killing Zombies!

Also known as a “defunct” or “dead” process: in simple words, a zombie process is one that is dead but still present in the system’s process table. Ideally, it should have been cleaned from the process table once it completed its job, but for some reason its parent process didn’t clean it up properly after execution.

In a just (Linux) world, a process notifies its parent process once it has completed its execution and has exited. The parent process then removes the process from the process table. If the parent process is unable to read the status of its completed child, it won’t be able to remove the process from memory, and thus the dead process continues to exist in the process table: hence, a zombie!

In order to kill a Zombie process, we need to identify it first. The following command can be used to find zombie processes:

$ ps aux | egrep "Z|defunct"

A Z in the STAT column and/or [defunct] in the last (COMMAND) column of the output identifies a zombie process.

Now, practically, you can’t kill a zombie because it is already dead! What can be done is to notify its parent process explicitly so that it can retry reading the child (dead) process’s status and eventually clean it from the process table. This is done by sending a SIGCHLD signal to the parent process. The following command can be used to find the parent process ID (PPID):

$ ps -o ppid= -p <zombie-PID>

Once you have the Zombie’s parent process ID, you can use the following command to send a SIGCHLD signal to the parent process:

$ kill -s SIGCHLD <parent-PID>

However, if this does not help clear out the zombie process, you will have to kill or restart its parent process. Or, in case of a huge surge in zombie processes causing (or heading toward) a system outage, you will have no choice but to go for a system reboot. The following command can be used to kill the parent process:

$ kill -9 <parent-PID>

Note that killing a parent process will affect all of its child processes, so a quick double-check is helpful to be safe. Alternatively, if a few lingering zombie processes are not consuming much CPU/memory, it’s better to kill the parent process, or reboot the system, during the next scheduled system maintenance window.


Linux Command Line Interface Introduction: A Guide to the Linux CLI

Tuesday 6th of October 2020 09:07:28 PM
by Antonio Riso

Let’s get to know the Linux Command Line Interface (CLI).

Introduction

The Linux command line is a text interface to your computer.

Also known as the shell, terminal, console, or command prompt (among other names), it is a computer program intended to interpret commands.

It allows users to execute commands by typing them manually at the terminal, or to execute commands automatically from programs written as “shell scripts.”

A bit of history

The Bourne Shell (sh) was originally developed by Stephen Bourne while working at Bell Labs.

It was released in 1979 in the Version 7 Unix release, which was distributed to colleges and universities.

The Bourne Again Shell (bash) was written as a free and open source replacement for the Bourne Shell.

Given the open nature of Bash, over time it has been adopted as the default shell on most Linux systems.

First look at the command line

Now that we have covered some basics, let’s open a terminal window and see how it looks!

When a terminal is open, it presents you with a prompt.

Let's analyze an example prompt:

Line 1: The shell prompt, it is composed by username@hostname:location$

  • Username: our username is called “john”
  • Hostname: The name of the system we are logged on to
  • Location: the working directory we are in
  • $: Delimits the end of prompt

After the $ sign, we can type a command and press Enter for this command to be executed.

Line 2: After the prompt, we typed the command whoami, which stands for “who am I,” and pressed [Enter] on the keyboard.
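The anatomy above can be reconstructed with standard commands; a small sketch (the actual username, hostname, and directory will of course differ on your machine):

```shell
# Rebuild the username@hostname:location$ prompt pieces by hand
# (uname -n prints the hostname, pwd the working directory)
echo "$(whoami)@$(uname -n):$(pwd)\$"

# The command from Line 2, as typed at such a prompt
whoami
```

In Bash, the real prompt is controlled by the PS1 variable, which uses escape sequences like \u, \h, and \w for the same three pieces.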


How To Upgrade From Fedora 32 To Fedora 33 [CLI & Graphical Methods]

Monday 5th of October 2020 07:16:39 PM
by Sarvottam Kumar

Last week, a Red Hat-sponsored community project, Fedora, announced the availability of Fedora 33 Beta. It is a prerelease version of the upcoming Fedora 33 Linux distribution, whose final stable version will arrive in the last week of October.

Fedora 33 is one of the more exciting releases, as it contains a fundamental shift of the default filesystem from ext4 to btrfs for all Fedora desktop editions and spins, along with other new features and visual changes.

Here are some of the key updates that Fedora 33 Beta includes:

  • GNOME 3.38 desktop environment
  • Linux Kernel 5.8
  • GNU Nano as default terminal text editor
  • earlyOOM enabled by default in Fedora 33 KDE
  • Fedora IoT as an official edition
  • Package update like Ruby, Python, and Perl

For complete details of all features, you can check out the Fedora 33 change set.

Coming to the main topic: you can upgrade your current Fedora system to the Fedora 33 Beta now, and then upgrade further to the final stable release by simply updating your system once it arrives at the end of October.

So, if you want to test all the new features of the upcoming Fedora 33, come along with me and upgrade your Fedora 32 Workstation to the Fedora 33 Beta Workstation using either of the two methods below.

If you’re comfortable playing with the terminal, you can upgrade Fedora 32 to 33 using the command line method or else follow the upgrade process using the graphical Software Center app.

What You Need To Do Before Upgrading Fedora Linux

Before you follow the steps to upgrade your Fedora Workstation, I would highly recommend backing up your data. I didn’t encounter any problems while upgrading, but if your data is very important, prevention is better than cure.

After backing up your data, keep in mind that upgrading the system takes time. So, before you start this operation, set aside enough time to finish the upgrade process properly. Needless to say, you should also have a stable internet connection to download all the update data.

Lastly, I also want to mention that the new release may halt some of the functions that worked perfectly in your previous version. For example, I was using Dash to Dock GNOME extension, which was broken in GNOME 3.38. So, I needed to re-install it manually.

Now, let’s begin the migration to Fedora 33.

Upgrade Fedora Linux To New Release Using Terminal

First, open the terminal and run the following command to update your system by getting the latest software packages for Fedora 32.

$ sudo dnf upgrade --refresh


Linux Mint 20.1 “Ulyssa” Will Arrive In Mid-December With Chromium, WebApp Manager

Thursday 1st of October 2020 06:37:39 PM

As the Linux Mint team progresses toward releasing the first point version of the Linux Mint 20 series, its founder and project leader Clement Lefebvre has finally revealed the codename for Linux Mint 20.1: “Ulyssa.” He has also announced that Mint 20.1 will most probably arrive in mid-December (just before Christmas).

While you wait for its beta release to test, Clement has also shared some great news regarding the new updates and features that you’ll get in Mint 20.1.

First, the open-source Chromium web browser and its updates will now be packaged directly in the official Mint repositories. As the team noticed delays between official Chromium releases and the versions available in Linux distros, it has decided to set up its own packaging and build the Chromium package from upstream code, along with some patches from Debian and Ubuntu.

As a result, the first test build of Chromium is already available to download.

In last month's blog post, the Mint team introduced a new WebApp Manager, inspired by Peppermint OS and its SSB (Site-Specific Browser) application manager, ICE. It is a WebApp management system that will debut in Linux Mint 20.1, turning a website into a standalone desktop application.

However, the Debian package of WebApp Manager v1.0.5 is already available to download; it comes with UI improvements, bug fixes, and better translations.

Another feature that you’ll be thrilled to see in Linux Mint 20.1 is hardware video acceleration enabled by default in the Celluloid video player. Hardware-accelerated playback brings smoother video, better performance, and reduced CPU usage.

 

 

Besides the confirmed features, the Linux Mint team is also looking for feedback on a side project by Stephen Collins: “Sticky Notes,” a note-taking app that is still in alpha. But if all goes well, who knows, you may see the Sticky Notes app in an upcoming Linux Mint release.

The Linux Mint team has also asked for opinions on IPTV (Internet Protocol Television). If you use M3U IPTV on your phone, tablet, or smart TV, you can let them know. The team seems interested in developing an IPTV solution for the Linux desktop: as a side project if the audience is small, or as an official Linux Mint project if demand is great enough.


The Preservation and Continuation of the Iconic Linux Journal

Wednesday 30th of September 2020 10:19:09 PM
by Matthew R. Higgins

Editor's note: Thank you to returning contributor Matthew Higgins for these reflections on what the return and preservation of Linux Journal means.

As we welcome the return of Linux Journal, it’s worth recognizing the impact of the September 22nd announcement of the magazine’s return and how it sparked feelings of nostalgia and excitement in thousands among the Linux community. That said, it is also worth noting how journalism has changed since Linux Journal’s first publication in 1994. The number of printed magazines has significantly decreased, and exclusively digital content has become the norm in most cases. Linux Journal experienced this change in 2011, when the print version of the magazine was discontinued. Although many resented the change, it is far from the only magazine to embrace this trend. Despite the bitterness from some, embracing the digital version of Linux Journal allowed its writers and publishers to focus on taking full advantage of what the internet had to offer.

Despite the several advantages of an online publishing format, one concern that was becoming increasingly pressing for Linux Journal until September 22nd, 2020 was the survival of the Linux Journal website. If the website had shut down, the community would have potentially lost access to hundreds (or thousands) of articles and documents that were published only on the Linux Journal website and were not collectively available anywhere else. Even if an individual possessed the archive of the monthly issues of the journal, an attempt to republish it would be legally problematic and would certainly show a lack of consideration for the rights of the authors who originally wrote the articles.

Thanks to Slashdot Media, however, the Linux community no longer needs to express concern over the potential loss of the official Linux Journal archive of publications for the foreseeable future. Given its recent return, it seems like an appropriate time to emphasize the important role that Linux Journal played (and will continue to play) in the Linux community since 1994 and the opportunity to continue this role as the number of Linux users and enthusiasts continues to grow. The journal provides readers with access to several decades of articles and content that date back to the earliest days of Linux. Furthermore, Linux Journal preserves this content as an archive that tells a fascinating history of the kernel and the community built around it.


Installing Ubuntu with Two Hard Drives

Tuesday 29th of September 2020 11:12:39 PM
by Tedley Meralus

Many computers these days come with two hard drives: a small SSD for fast boot speeds, and a larger drive for storage. My Dell G5 gaming laptop is a great example, with a 128GB NAND SSD and a 1TB SSD. When building out a Linux installation, I have a few options. Option 1: follow the usual steps and install Ubuntu on one SSD for quick boot times and better speed when opening files or moving data, then mount the second drive and copy files to it whenever I want to back up files or move them off the first drive. Or, Option 2: install Ubuntu on an older hard drive with more storage but slower start-up speeds, and use the 128GB drive as a small mount point.

However, as most Linux users are aware, solid state drives are much faster, and files, folders, and drives on a Linux system all have mount points that can be setup with ease.

In this article, we’ll go over how to install Ubuntu Linux with separate root (/) and /home directories on two separate drives: the root partition on the SSD and the home partition on the 1TB hard drive. This lets me leverage the boot times and speed of the 128GB SSD while still having plenty of space to install Steam games or large applications.
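Once the two partitions exist, the end result is just two mount entries; a sketch of what /etc/fstab might look like afterwards (the UUIDs are placeholders, so substitute the real ones reported by `blkid` on your system):

```
# /etc/fstab sketch: root on the small SSD, home on the large drive.
# The UUIDs below are placeholders; list your real ones with `blkid`.
UUID=<uuid-of-ssd-partition>   /      ext4  defaults  0  1
UUID=<uuid-of-hdd-partition>   /home  ext4  defaults  0  2
```

The Ubuntu installer writes these entries for you when you assign the / and /home mount points to the two partitions during "Something else" (manual) partitioning.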

This guide can also serve other use cases. An example would be older or cheaper laptops that don't have SSDs, only slow spinning hard drives. If your computer is a bit on the older side (and has an SD card slot) but you want faster boot times, you can go out and buy an SD card and install the root (/) partition onto that, keeping the /home partition on the main drive for storage. This guide, like Linux, can be adapted to many other use cases as well.


Linux Journal is Back

Tuesday 22nd of September 2020 06:15:04 PM
by Webmaster

As of today, Linux Journal is back, and operating under the ownership of Slashdot Media.

As Linux enthusiasts and long-time fans of Linux Journal, we were disappointed to hear about Linux Journal closing its doors last year. It took some time, but fortunately we were able to get a deal done that allows us to keep Linux Journal alive now and indefinitely. It's important that amazing resources like Linux Journal never disappear.

If you're a former Linux Journal contributor or a Linux enthusiast who would like to get involved, please contact us and let us know the capacity in which you'd like to contribute. We're looking for people to cover Linux news, create Linux guides, and moderate the community and comments. We'd also appreciate any other ideas or feedback you might have. Right now, we don't have any immediate plans to resurrect the subscription/issue model, and will be publishing exclusively on LinuxJournal.com free of charge. Our immediate goal is to familiarize ourselves with the Linux Journal website and ensure it never gets shut down again.

Many of you are probably already aware of Slashdot Media, but for those who aren't, we own and operate Slashdot and SourceForge: two iconic open source software and technology websites that have been around for decades. We didn't always own SourceForge; we acquired it in 2016, immediately began improving it, and have since come a long way in restoring and growing one of the most important resources in open source. We'd like to do the same here. We're ecstatic to be able to take the helm at Linux Journal, and ensure that this legendary Linux resource and community not only stays alive forever, but continues to grow and improve.

Reach out if you'd like to get involved!

Update Wednesday, September 23rd @ 3:43pm PST: Thanks for the great response to Linux Journal being revived! We're overwhelmed with the thousands of emails so it may take a bit of time to get back to you. This came together last minute as a way to avoid losing 25+ years of Linux history so bear with us as we get organized.


Newest IPFire Release Includes Security Fixes and Additional Hardware Support (IPFire 2.25 - Core Update 147)

Friday 24th of July 2020 11:55:00 PM
Image

Michael Tremer, maintainer of the IPFire project, announced IPFire 2.25 Core Update 147 today. This is the newest IPFire release since Core Update 146 on June 29th.

IPFire 2.25 Core Update 147 ships important security fixes, among them a newer version of the Squid web proxy that patches recent vulnerabilities.

Beyond the security updates, IPFire 2.25 Core Update 147 adds support for additional hardware and enhances support for existing hardware, as the new release ships with version 20200519 of the Linux firmware package.

IPFire 2.25 Core Update 147 also fixes a recurring issue with forwarding GRE connections.

In addition, the update improved IPFire on AWS configurations.

IPFire 2.25 Core Update 147 includes these updated packages: bind 9.11.20, dhcpcd 9.1.2, GnuTLS 3.6.14, gmp 6.2.0, iproute2 5.7.0, libassuan 2.5.3, libgcrypt 1.8.5, libgpg-error 1.38, OpenSSH 8.3p1, squidguard 1.6.0.

You can download IPFire 2.25 Core Update 147 here.

Releases

More in Tux Machines

Devices/Embedded: Arduino and More

       
  • Arduino Blog » Driving a mini RC bumper car with a Nintendo Wii Balance Board

    Taking inspiration from Colin Furze’s 600cc bumper car built a few years ago, Henry Forsyth decided to build his own miniature RC version. His device features a 3D-printed, nicely painted body, along with a laser-cut chassis that holds the electrical components. The vehicle is driven by a single gearmotor and a pair of 3D-printed wheels, with another caster-style wheel that’s turned left and right by a steering servo. An Arduino Uno and a Bluetooth shield, together with a motor driver, provide overall control. The Bluetooth functionality allows for control via a PS4 controller, or even (after a bit of programming) a Wii Balance Board. In the end, the PS4 remote seems to be the better control option, but who knows where else this type of balance technique could be employed?

  • Intel Elkhart Lake COM’s offer up to 3x 2.5GbE, SIL2 functional safety
  • E3K all-in-one wireless bio-sensing platform supports EMG, ECG, and EEG sensors (Crowdfunding)

    Over the years, the maker community has designed several platforms to monitor vital signs, with boards like Healthy Pi v4 or HeartyPatch, both of which are powered by an ESP32 WiFi & Bluetooth wireless SoC. WallySci has designed another all-in-one wireless bio-sensing platform, called E3K, that also happens to be powered by Espressif Systems’ ESP32 chip. It can be connected to an electromyography (EMG) sensor to capture muscle movements, an electrocardiography (ECG) sensor to measure heart activity, and/or an electroencephalography (EEG) sensor to capture brain activity. The board also has an extra connector for a 9-axis IMU to capture motion.

  • Coffee Lake system can expand via M.2, mini-PCIe, PCIe, and Xpansion

    MiTac’s fanless, rugged “MX1-10FEP” embedded computer has an 8th or 9th Gen Coffee Lake Core or Xeon CPU plus 3x SATA bays, 4x USB 3.1 Gen 2, 2x M.2, 2x mini-PCIe, and optional PCIe x16 and x1. MiTac recently introduced a Coffee Lake based MX1-10FEP computer that is also being distributed by ICP Germany. This month, ICP announced that the MX1-10FEP-D model with PCIe x16 and PCIe x1 slots has been tested and classified by Nvidia as “NGC Ready” for Nvidia GPU Cloud graphics boards such as the Nvidia T4 and Tesla P4. [...] The MX1-10FEP has an Intel C246 chipset and defaults to Windows 10 with Linux on request.

Wine 5.20 Released

The Wine development release 5.20 is now available.

What's new in this release (see below for details):
  - More work on the DSS cryptographic provider.
  - A number of fixes for windowless RichEdit.
  - Support for FLS callbacks.
  - Window resizing in the new console host.
  - Various bug fixes.

The source is available from the following locations:

  https://dl.winehq.org/wine/source/5.x/wine-5.20.tar.xz
  http://mirrors.ibiblio.org/wine/source/5.x/wine-5.20.tar.xz

Binary packages for various distributions will be available from:

  https://www.winehq.org/download

You will find documentation on https://www.winehq.org/documentation

You can also get the current source directly from the git
repository. Check https://www.winehq.org/git for details.

Wine is available thanks to the work of many people. See the file
AUTHORS in the distribution for the complete list.
Read more

Also: Wine 5.20 Released With Various Improvements For Running Windows Software On Linux

PostmarketOS update brings HDMI support for the PinePhone and PineTab

When the PinePhone postmarketOS Community Edition smartphone began shipping to customers in September, it came with a version of the operating system that was missing one important feature: HDMI output. So when my phone arrived a few weeks ago I was able to spend some time familiarizing myself with the operating system, and I could plug in the included Convergence Dock to use USB accessories including a keyboard, mouse, and storage. But I wasn’t able to connect an external display. Now I can. Read more

today's howtos

  • How To Install Ubuntu 20.10 Groovy Gorilla

    This tutorial explains how to install Ubuntu 20.10 Groovy Gorilla on your computer. You will prepare at least two disk partitions, and the whole process takes about twenty minutes. Let's start right now.

  • How to install Ubuntu 20.10 - YouTube

    In this video, I am going to show how to install Ubuntu 20.10.

  • How To Install Webmin on Ubuntu 20.04 LTS - idroot

    In this tutorial we will show you how to install Webmin on Ubuntu 20.04 LTS, as well as some extra packages required by the Webmin control panel.

  • Running Ironic Standalone on RHEL | Adam Young’s Web Log

    This is only going to work if you have access to the OpenStack code. If you are not an OpenStack customer, you are going to need an evaluation entitlement. That is beyond the scope of this article.

  • Introduction to Ironic

    The sheer number of projects and problem domains covered by OpenStack was overwhelming. I never learned several of the other projects under the big tent. One project that is getting relevant to my day job is Ironic, the bare metal provisioning service. Here are my notes from spelunking the code.

  • Adding Nodes to Ironic

    TheJulia was kind enough to update the docs for Ironic to show me how to include IPMI information when creating nodes.

  • Secure NTP with NTS

    Many computers use the Network Time Protocol (NTP) to synchronize their system clocks over the internet. NTP is one of the few unsecured internet protocols still in common use. An attacker that can observe network traffic between a client and server can feed the client with bogus data and, depending on the client’s implementation and configuration, force it to set its system clock to any time and date. Some programs and services might not work if the client’s system clock is not accurate. For example, a web browser will not work correctly if the web servers’ certificates appear to be expired according to the client’s system clock.

    Use Network Time Security (NTS) to secure NTP. Fedora 33 is the first Fedora release to support NTS. NTS is a new authentication mechanism for NTP. It enables clients to verify that the packets they receive from the server have not been modified while in transit. The only thing an attacker can do when NTS is enabled is drop or delay packets. See RFC 8915 for further details about NTS.

    NTP can be secured well with symmetric keys. Unfortunately, the server has to have a different key for each client and the keys have to be securely distributed. That might be practical with a private server on a local network, but it does not scale to a public server with millions of clients.

    NTS includes a Key Establishment (NTS-KE) protocol that automatically creates the encryption keys used between the server and its clients. It uses Transport Layer Security (TLS) on TCP port 4460. It is designed to scale to very large numbers of clients with a minimal impact on accuracy. The server does not need to keep any client-specific state. It provides clients with cookies, which are encrypted and contain the keys needed to authenticate the NTP packets.

    Privacy is one of the goals of NTS. The client gets a new cookie with each server response, so it doesn’t have to reuse cookies. This prevents passive observers from tracking clients migrating between networks.
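    On Fedora 33 the NTP client is chrony, and NTS can be requested per server with the nts option. A minimal sketch (the server below is one public NTS-capable server at the time of writing; substitute your own):

    ```
    # /etc/chrony.conf -- authenticate NTP packets with NTS
    server time.cloudflare.com iburst nts

    # where chronyd stores NTS keys and cookies between restarts
    ntsdumpdir /var/lib/chrony
    ```

    You can check that NTS authentication is actually in effect with `chronyc -N authdata`.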

  • Comfortable Motion: Absolutely Cursed Vim Scrolling - YouTube

    Have you ever felt like Vim was too useful and thought, hey, let's change that? Well, that's what this dev thought, and now we have a plugin called comfortable-motion that adds physics-based scrolling to Vim. What's physics-based scrolling, you ask? It's scrolling that continues based on how long you hold down the scroll key.
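    For anyone curious enough to try it, a minimal sketch assuming the vim-plug plugin manager (the plugin lives at yuttie/comfortable-motion.vim on GitHub; the tuning values are arbitrary examples):

    ```vim
    " ~/.vimrc -- physics-based scrolling
    call plug#begin()
    Plug 'yuttie/comfortable-motion.vim'
    call plug#end()

    " optional tuning: higher friction makes the scroll stop sooner
    let g:comfortable_motion_friction = 80.0
    let g:comfortable_motion_air_drag = 2.0
    ```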

  • Running Cassandra on Fedora 32 | Adam Young’s Web Log

    This is not a tutorial. These are my running notes from getting Cassandra to run on Fedora 32. The debugging steps are interesting in their own right. I’ll provide a summary at the end for anyone sane enough not to read through the rest.

  • Recovering Audio off an Old Tape Using Audacity | Adam Young’s Web Log

    One of my friends wrote a bunch of music back in high school. The only remaining recordings are on a cassette tape that he produced. Time has not been kind to the recordings, but they are audible…barely. He has a device that produces MP3s from the tape. My job has been to try to get them to the point where we can understand them well enough to recover the original songs. I have the combined recording in a single MP3. I’ve gone through and noted the times where each song starts and stops. I am going to go through the steps I’ve been using to go from that single long MP3 to individual recordings.

  • Role of Training and Certification at the Linux Foundation

    Open source allows anyone to dip their toes in the code, read up on the documentation, and learn everything on their own. That’s how most of us did it, but that’s just the first step. Those who want to have successful careers in building, maintaining, and managing IT infrastructures of companies need more structured hands-on learning with real-life experience. That’s where Linux Foundation’s Training and Certification unit enters the picture. It helps not only greenhorn developers but also members of the ecosystem who seek highly trained and certified engineers to manage their infrastructure. Swapnil Bhartiya sat down with Clyde Seepersad, SVP and GM of Training and Certification at the Linux Foundation, to learn more about the Foundation’s efforts to create a generation of qualified professionals.

  • Hetzner build machine

    This is part of a series of posts on compiling a custom version of Qt5 in order to develop for both amd64 and a Raspberry Pi. Building Qt5 takes a long time. The build server I was using had CPUs and RAM, but was very slow on I/O. I was very frustrated by that, and I started evaluating alternatives. I ended up setting up scripts to automatically provision a throwaway cloud server at Hetzner.