Linux Journal
Linux in Healthcare - Cutting Costs & Adding Safety
The healthcare domain deals directly with our health and lives. Healthcare is the prevention, diagnosis, and treatment of disease, injury, illness, or other physical and mental impairments in humans, and the sector handles emergency situations very frequently. With immense scope for innovation, the thriving healthcare domain ranges from telemedicine to insurance, and from inpatient hospitals to outpatient clinics. With practitioners working in areas as varied as medicine, chiropractic, nursing, dentistry, pharmacy, and allied health, it is an industry with complex processes and data-heavy record-keeping systems that are difficult to manage manually with paperwork.
Necessity is the mother of invention, and so people across the world have built software and systems to manage:
- Patients’ data, or rather medical history
- Bills and claims for their own and third-party services
- Inventory management
- Communication channels among various departments like reception, doctor’s room, investigation rooms, wards, operating theaters, etc.
- Controlled medical equipment, and much more
These systems save our precious time, make life easier, and minimize human error.
Healthcare integrated with Linux: With high availability, support for critical workloads, low power consumption, and reliability, Linux has established itself alongside Windows and macOS. With a “stripped-down” graphical interface and a minimal OS footprint, it delivers a strong performance boost by restricting many services from running and giving direct control over hardware. By integrating Linux with the latest technological solutions in healthcare (check out Elinext healthcare solutions, as an example), businesses are saving a lot while gaining enhanced security.
A few of the drivers promoting Linux in healthcare are:
Open Source: One of the foremost benefits of Linux is that it is open source, saving license costs for healthcare organizations. Most of the software and programs running on Linux are largely open source too. Anyone can modify the Linux kernel under its open source license, allowing customization to your needs. Using open source, there is no need to request additional resources or sign additional agreements, and it gives you vendor independence. With a credible Linux community backed by various organizations, you have satisfactory support.
Go to Full Article
MuseScore Created New Font in Memory of Original SCORE Program Creator
MuseScore is free notation software for Windows, macOS, and Linux. It is designed for music teachers, students, and both amateur and professional composers. MuseScore is released as FOSS under the GNU GPL license and is accompanied by the freemium MuseScore.com sheet music catalogue, with a mobile score viewer, a playback app, and an online score sharing platform. In 2018, the MuseScore company was acquired by Ultimate Guitar, which added full-time paid developers to the open source team. Since 2019 the MuseScore design team has been led by Martin Keary, known as the blogger Tantacrul, who has consistently criticized composition software for its design and usability. From that moment on, a qualitative change was set in motion in MuseScore.
Historically, the engraving quality in MuseScore has not been entirely satisfactory. After a review by Martin Keary, MuseScore product owner (previously MuseScore head of design), and Simon Smith, an engraving expert who has produced multiple detailed reports on the engraving quality of MuseScore 3.5, it became apparent that some key engraving issues had to be resolved immediately, as doing so would have a significant impact on the overall quality of scores. These changes considerably improve the quality of scores published in the sheet music catalog, MuseScore.com.
MuseScore 3.6 was called the 'engraving release': it addressed many of the biggest issues affecting sheet music's layout and appearance, and resulted from a massive collaboration between the community and the internal team.
Two of the most notable additions in this release are Leland, our new notation font and Edwin, our new typeface.
Leland is a highly sophisticated notation style created by Martin Keary & Simon Smith. Leland aims to provide a classic notation style that feels 'just right' with a balanced, consistent weight and a finessed appearance that avoids overly stylized quirks.
The new typeface, Edwin, is based on New Century Schoolbook, which has long been the typeface of choice for some of the world's leading publishers, and was explicitly chosen as a complementary companion to Leland. We have also provided new default style settings (margins, line thickness, etc.) to complement Leland and Edwin, which match conventions used by the world's leading publishing houses.
“Then there's our new typeface, Edwin, which is an open license version of new Century Schoolbook - long a favourite of professional publishers, like Boosey and Hawkes. But since there is no music written yet, you'll be forgiven for missing the largest change of all: our new notation font: Leland, which is named after Leland Smith, the creator of a now abandoned application called SCORE, which was known for the amazing quality of its engraving. We have spent a lot of time finessing this font to be a world beater.”
— Martin Keary, product owner of MuseScore
Equally as important as the new notation style is the new vertical layout system. It is switched on by default for new scores and can be activated on older scores too. It is a tremendous improvement to how staves are vertically arranged and will save composers hours of work by significantly reducing their reliance on vertical spacers and manual adjustment.
MuseScore 3.6 developers also created a system for automatically organizing the instruments on your score to conform with a range of common conventions (orchestral, marching band, etc.). Besides, newly created scores will also be accurately bracketed by default. A user can even specify soloists, which will be arranged and bracketed according to your chosen convention. These three new systems result from a collaboration between Simon Smith and the MuseScore community member, Niek van den Berg.
The MuseScore team has also greatly improved how the software displays the notation fonts Emmentaler and Bravura, which now more accurately match the original designers' intentions, and has included a new jazz font called 'Petaluma', designed by Anthony Hughes at Steinberg.
Lastly, MuseScore has made some beneficial improvements to the export process, including a new dialog containing lots of practical and time-saving settings. This work was implemented by one more community member, Casper Jeukendrup.
The team's current plans are to improve the engraving capabilities of MuseScore further, including substantial overhauls of the horizontal spacing and beaming systems. MuseScore 3.6 is a massive step, but there is a great deal of work ahead.
Links
Official release notes: MuseScore 3.6
Martin Keary’s video: “How I Designed a Free Music Font for 5 Million Musicians (MuseScore 3.6)”
Official video: “MuseScore 3.6 - A Massive Engraving Overhaul!”
Download MuseScore for free: MuseScore.org
Virtual Machine Startup Shells Closes the Digital Divide One Cloud Computer at a Time
Shells (shells.com), a new entrant in the virtual machine and cloud computing space, has launched a new product that gives users the freedom to code and create on nearly any device with an internet connection. Flexibility, ease, and competitive pricing are a focus for Shells, which makes it easy for a user to start up their own virtual cloud computer in minutes. The company also offers multiple Linux distros (and continues to add more) to ensure that users can have the computer they “want” and are most comfortable with.
The US-based startup Shells turns idle screens, including smart TVs, tablets, older or low-spec laptops, gaming consoles, smartphones, and more, into fully-functioning cloud computers. The company utilizes real computers, with Intel processors and top-of-the-line components, to send processing power into your device of choice. When a user accesses their Shell, they are essentially seeing the screen of the computer being hosted in the cloud - rather than relying on the processing power of the device they’re physically using.
Shells was designed to run seamlessly on a number of devices that most users likely already own, as long as the device can open an internet browser or run one of Shells’ dedicated applications for iOS or Android. Shells are always on and always up to date, ensuring speed and security while avoiding the need to constantly upgrade or buy new hardware.
Shells offers four tiers (Lite, Basic, Plus, and Pro) catering to casual users and professionals alike. Shells Pro targets the latter, and offers a quad-core virtual CPU, 8GB of RAM, 160GB of storage, and unlimited access and bandwidth which is a great option for software engineers, music producers, video editors, and other digital creatives.
Using your Shell for testing eliminates the worry associated with tasks or software that could potentially break the development environment on your main computer or laptop. Because Shells are running round the clock, users can compile on any device without overheating - and allow large compile jobs to complete in the background or overnight. Shells also enables snapshots, so a user can revert their system to a previous date or time. In the event of a major error, simply reinstall your operating system in seconds.
“What Dropbox did for cloud storage, Shells endeavors to accomplish for cloud computing at large,” says CEO Alex Lee. “Shells offers developers a one-stop shop for testing and deployment, on any device that can connect to the web. With the ability to use different operating systems, both Windows and Linux, developers can utilize their favorite IDE on the operating system they need. We also offer the added advantage of being able to utilize just about any device for that preferred IDE, giving devs a level of flexibility previously not available.”
“Shells is hyper focused on closing the digital divide as it relates to fair and equal access to computers - an issue that has been unfortunately exacerbated by the ongoing pandemic,” Lee continues. “We see Shells as more than just a cloud computing solution - it’s leveling the playing field for anyone interested in coding, regardless of whether they have a high-end computer at home or not.”
Follow Shells for more information on service availability, new features, and the future of “bring your own device” cloud computing:
Website: https://www.shells.com
Twitter: @shellsdotcom
Facebook: https://www.facebook.com/shellsdotcom
Instagram: https://www.instagram.com/shellscom
An Introduction to Linux Gaming thanks to ProtonDB
In this article, the newest compatibility feature for gaming will be introduced and explained for all you dedicated video game fanatics.
Valve has released a new compatibility feature to advance Linux gaming, complete with its own community of play testers and reviewers.
In recent years we have made leaps and strides in making Linux and Unix systems more accessible for everyone. Now we come to a commonly asked question: can we play games on Linux? Well, of course! And, almost. Let me explain.
Proton compatibility layer for Steam client
With the rising popularity of Linux systems, Valve is going ahead of the crowd yet again with Proton for their Steam client (the computer program that runs your purchased games from Steam). Proton is a compatibility layer based on Wine and DXVK that lets Microsoft Windows games run on Linux operating systems. Proton is backed by Valve itself and can easily be enabled on any Steam account for Linux gaming, through an integration called "Steam Play."
Lately, there has been a lot of controversy, as Microsoft is rumored to someday release its own app store and disable downloading software online. In response, many companies and software developers feel pressured to find a new "haven" to share content with the internet. Proton might be Valve's response to this, as it works to make more of its games accessible to Linux users.
Activating Proton with Steam Play
Proton is integrated into the Steam client with "Steam Play." To activate Proton, go into your Steam client and click on Steam in the upper-left corner. Then click on Settings to open a new window.
Steam Client's settings window
From here, click on the Steam Play button at the bottom of the panel. Check "Enable Steam Play for Supported Titles." Steam will then ask you to restart; click yes, and you are ready to play after the restart.
Your computer will now play all of Steam's whitelisted games seamlessly. But if you would like to try other games that are not guaranteed to work on Linux, then check "Enable Steam Play for All Other Titles."
What Happens if a Game has Issues?
Don't worry; this can and will happen for games that are not in Steam's whitelisted games archive. But there is help for you online on Steam and in Proton's growing community. Be patient and don't give up! There will always be a solution out there.
Go to Full Article
How To Use GUI LVM Tools
LVM is a powerful storage management module included in all Linux distributions today. It provides users with a variety of valuable features to fit different requirements. The management tools that come with LVM are based on the command line interface, which is very powerful and suitable for automated/batch operations, but LVM's operations and configuration are quite complex in their own right. So several software companies, including Red Hat, have launched GUI-based LVM tools to help users manage LVM more easily. Let’s review them here to see the similarities and differences between the individual tools.
system-config-lvm (alternate name: LVM GUI)
Provider: Red Hat
system-config-lvm is the first GUI LVM tool; it was originally released as part of Red Hat Linux, and it is also called LVM GUI because it was the first. Later, Red Hat created standalone installation packages for it, so system-config-lvm can be used in other Linux distributions as well. The installation packages include RPM and DEB packages.
The main panel of system-config-lvm
system-config-lvm supports LVM-related operations only. Its user interface is divided into three parts. The left part is a tree view of disk devices and LVM devices (VGs); the middle part is the main view, which shows VG usage divided into LV and PV columns.
There are zoom in/zoom out buttons in the main view to control the display ratio, but this is not enough for displaying complex LVM information. The right part displays details of the selected objects (PV/LV/VG).
The different versions of system-config-lvm are not completely consistent in how they organize devices. Some show both LVM devices and non-LVM devices (disks); others show LVM devices only. I have tried two versions: one shows only the LVM devices existing in the system, namely PV/VG/LV, and no other devices; the other can also display non-LVM disks, and a PV can be removed in its disk view.
The version which shows non-lvm disks
Supported operations
The tool covers the following LVM operations; rough command-line equivalents are shown after the list.
PV Operations
- Delete PV
- Migrate PV
VG Operations
- Create VG
- Append PV to VG/Remove PV from VG
- Delete VG (Delete last PV in VG)
LV Operations
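For reference, these GUI operations wrap the standard LVM command-line tools. A minimal sketch of the rough equivalents (the device and volume names here are hypothetical):

    # initialize a partition as a physical volume (PV)
    sudo pvcreate /dev/sdb1
    # create a volume group (VG) on that PV
    sudo vgcreate vg_data /dev/sdb1
    # append a PV to a VG / remove a PV from a VG
    sudo vgextend vg_data /dev/sdc1
    sudo vgreduce vg_data /dev/sdc1
    # migrate data off a PV (e.g. before removing it)
    sudo pvmove /dev/sdb1
    # create a logical volume (LV) in the VG
    sudo lvcreate -n lv_home -L 10G vg_data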
Go to Full Article
Boost Up Productivity in Bash - Tips and Tricks
When spending most of your day around bash shell, it is not uncommon to waste time typing the same commands over and over again. This is pretty close to the definition of insanity.
Luckily, bash gives us several ways to avoid repetition and increase productivity.
Today, we will explore the tools we can leverage to optimize what I love to call “shell time”.
Aliases
Bash aliases are one of the methods to define custom commands or to override the default ones.
You can consider an alias as a “shortcut” to your desired command with options included.
Many popular Linux distributions come with a set of predefined aliases.
Let’s see the default aliases of Ubuntu 20.04. To do so, simply type “alias” and press [ENTER].
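On a stock Ubuntu 20.04 installation, the output looks roughly like the following (the exact list may vary):

    alias egrep='egrep --color=auto'
    alias fgrep='fgrep --color=auto'
    alias grep='grep --color=auto'
    alias l='ls -CF'
    alias la='ls -A'
    alias ll='ls -alF'
    alias ls='ls --color=auto'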
By simply issuing the command “l”, behind the scenes, bash will execute “ls -CF”.
It's as simple as that.
This is definitely nice, but what if we could specify our own aliases for the most used commands?! The answer is, of course we can!
One of the commands I use extremely often is “cd ..” to change the working directory to the parent folder. I have spent so much time hitting the same keys…
One day I decided it was enough and I set up an alias!
To create a new alias, type “alias ”, then the alias name (in my case I have chosen “..”), followed by “=” and finally the command we want an alias for, enclosed in single quotes.
Here is an example below.
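    alias ..='cd ..'
To make the alias permanent, append that line to your ~/.bashrc; otherwise it lasts only for the current shell session.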
Functions
Sometimes you will have the need to automate a complex command, perhaps accepting arguments as input. Under these constraints, aliases will not be enough to accomplish your goal, but no worries. There is always a way out!
Functions give you the ability to create complex custom commands which can be called directly from the terminal like any other command.
For instance, there are two consecutive actions I do all the time: creating a folder and then cd-ing into it. To avoid the hassle of typing “mkdir newfolder” and then “cd newfolder”, I have created a bash function called “mkcd”, which takes the name of the folder to be created as an argument, creates the folder, and cds into it.
To declare a new function, we need to type the function name “mkcd ”, followed by “()” and our complex command enclosed in curly brackets: “{ mkdir -vp "$@" && cd "$@"; }”.
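Assembled, the declaration and a quick test look like this (the folder name is just an example):

    # create a directory (and any missing parents, verbosely) and cd into it
    mkcd () { mkdir -vp "$@" && cd "$@"; }
    mkcd projects/demo    # creates projects/demo and changes into it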
Go to Full Article
Case Study: Success of Pardus GNU/Linux Migration
Eyüpsultan Municipality decided to use an open source operating system in desktop computers in 2015.
The most important goal of the project was to ensure information security and reduce foreign dependency.
As a result of the research and analyses, a detailed migration plan was prepared.
As a first step, the licensed office software installed on all computers was removed, and LibreOffice was installed instead.
Later, LibreOffice training was given to the municipal staff.
Meanwhile, preparations were made for the operating system migration.
It was decided to replace the existing licensed operating system with Pardus GNU/Linux, a distribution developed in Turkey.
The applications on the Pardus GNU/Linux operating system were examined in detail and unnecessary applications were removed.
A new ISO file was then created with the applications used in Eyüpsultan municipality.
This process automated the setup steps and reduced setup time.
While the project continued at full speed, the staff were again trained on LibreOffice and Pardus GNU/Linux.
After the training, the users took an exam.
The Pardus GNU/Linux operating system was installed on the computers of those who passed.
Those who failed were retrained and took the exam again.
As of 2016, the operating system migration had been completed on 25% of the computers.
Migration Project Implementation Steps
Analysis
A detailed inventory of all software and hardware products used in the institution was created. The analysis should go down to department, unit, and personnel details.
It should be evaluated whether extra costs will arise in the migration project.
Planning
A migration plan should be prepared and migration targets should be determined.
The duration of the migration should be calculated and the team that will carry out the migration should be determined.
Production
You can use an existing Linux distribution.
Or you can customize the distribution you will use according to your own preferences.
Making a customized ISO file will give you speed and flexibility.
It also helps you compensate for time lost to incorrect entries during installation.
Test
Start using the ISO file you have prepared in a lab environment consisting of the hardware you use.
Look for solutions, noting any problems encountered during and after installation.
Go to Full Article
BPF For Observability: Getting Started Quickly
BPF is a powerful component in the Linux kernel and the tools that make use of it are vastly varied and numerous. In this article we examine the general usefulness of BPF and guide you on a path towards taking advantage of BPF’s utility and power. One aspect of BPF, like many technologies, is that at first blush it can appear overwhelming. We seek to remove that feeling and to get you started.
What is BPF?
BPF is the name, and no longer an acronym: it was originally Berkeley Packet Filter, then eBPF for Extended BPF, and now just BPF. BPF is a kernel and user-space observability scheme for Linux.
One description is that BPF is a verified-to-be-safe, fast-to-switch-to mechanism for running code in Linux kernel space to react to events such as function calls, function returns, and tracepoints in kernel or user space.
To use BPF, one runs a program that is translated to instructions that will be run in kernel space. Those instructions may be interpreted or translated to native instructions. For most users the exact nature doesn't matter.
While in the kernel, the BPF code can perform actions for events, like creating stack traces, counting the events, or collecting counts into buckets for histograms.
Through this, BPF programs provide both a fast and an immensely powerful and flexible means for deep observability of what is going on in the Linux kernel or in user space. Observability into user space from kernel space is possible, of course, because the kernel can control and observe code executing in user mode.
Running BPF programs amounts to having a user program make BPF system calls, which are checked for appropriate privileges and verified to execute within limits. For example, in Linux kernel version 5.4.44, the BPF system call checks for privilege with:
    if (sysctl_unprivileged_bpf_disabled && !capable(CAP_SYS_ADMIN))
            return -EPERM;
The BPF system call checks a sysctl-controlled value and a capability. The sysctl variable can be set to one with the command
    sysctl kernel.unprivileged_bpf_disabled=1
but to set it to zero you must reboot, and make sure your system is not configured to set it to one at boot time.
Because BPF does the work in kernel space, significant time and overhead are saved by avoiding context switches and by not transferring large amounts of data back to user space.
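As a quick taste, the bcc collection (packaged as bpfcc-tools on Debian and Ubuntu) includes funccount, which counts matching kernel function calls. A minimal sketch, assuming the package is installed:

    # count calls to kernel VFS functions for 10 seconds
    sudo funccount-bpfcc -d 10 'vfs_*'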
Not all kernel functions can be traced, however. For example, if you were to try funccount-bpfcc '*_copy_to_user' you may get output like:
    cannot attach kprobe, Invalid argument
    Failed to attach BPF program b'trace_count_3' to kprobe b'_copy_to_user'
This is kind of mysterious. If you check the output from dmesg you would see something like:
Go to Full Article
A Linux Survey For Beginners
So you have decided to give the Linux operating system a try. You have heard it is a good, stable operating system with lots of free software, and you are ready to give it a shot. It is downloadable for free, so you get on the net and search for a copy, and you are in for a shock: there isn’t one “Linux”, there are many. Now you feel like a deer in the headlights. You want to make a wise choice but have no idea where to start. Unfortunately, this is where a lot of new Linux users give up. It is just too confusing.
The many versions of Linux are often referred to as “flavors” or distributions. Imagine yourself in an ice cream shop displaying 30+ flavors. They all look delicious, but it’s hard to pick one and try it. You may find yourself confused by the many choices but you can be sure you will leave with something delicious. Picking a Linux flavor should be viewed in the same way.
As with ice cream lovers, Linux users have their favorites, so you will hear people profess which is the “best”. Of course, the best is the one that you conclude, will fit your needs. That might not be the first one you try. According to linuxquestions.org there are currently 481 distributions, but you don’t need to consider every one. The same source lists these distributions as “popular”: Ubuntu, Fedora, Linux Mint, OpenSUSE, PCLinuxOS, Debian, Mageia, Slackware, CentOS, Puppy, Arch. Personally I have only tried about five of these and I have been a Linux user for more than 20 years. Today, I mostly use Fedora.
Many of these also have derivatives that are made for special purpose uses. For example, Fedora lists special releases for Astronomy, Comp Neuro, Design Suite, Games, Jam, Python Classroom, Security Lab, Robotics Suite. All of these are still Fedora, but the installation includes a large quantity of programs for the specific purpose. Often a particular set of uses can spawn a whole new distribution with a new name. If you have a special interest, you can still install the general one (Workstation) and update later.
Very likely one of these systems will suit you. Even within these there are subtypes and “window treatments” to customize your operating system. Gnome, Xfce, LXDE, and so on are different window treatments available in all of the Linux flavors. Some try to look like MS Windows, some try to look like a Mac. Some try to be original, lightweight, or graphically awesome. But that is best left for another article. You are running Linux no matter which of those you choose. If you don’t like the one you choose, you can try another without losing anything. You also need to know that some of these distributions are related, so that can help simplify your choice.
Go to Full Article
Terminal Vitality
Ever since Douglas Engelbart flipped over a trackball and discovered a mouse, our interactions with computers have shifted from linguistics to hieroglyphics. That is, instead of typing commands at a prompt in what we now call a Command Line Interface (CLI), we click little icons and drag them to other little icons to guide our machines to perform the tasks we desire.
Apple led the way to commercialization of this concept we now call the Graphical User Interface (GUI), replacing its pioneering and mostly keyboard-driven Apple // microcomputer with the original GUI-only Macintosh. After quickly responding with an almost unusable Windows 1.0 release, Microsoft piled on in later versions with the Start menu and push button toolbars that together solidified mouse-driven operating systems as the default interface for the rest of us. Linux, along with its inspiration Unix, had long championed many users running many programs simultaneously through an insanely powerful CLI. It thus joined the GUI party late with its likewise insanely powerful yet famously insecure X-Windows framework and the many GUIs such as KDE and Gnome that it eventually supported.
GUI Linux
But for many years the primary role for X-Windows on Linux was gratifyingly appropriate given its name - to manage a swarm of xterm windows, each running a CLI. It's not that Linux is in any way incompatible with the Windows / Icon / Mouse / Pointer style of program interaction - the acronym this time being left as an exercise for the discerning reader. It's that we like to get things done. And in many fields where the progeny of Charles Babbage's original Analytic Engine are useful, directing the tasks we desire is often much faster through linguistics than by clicking and dragging icons.
A tiling window manager makes xterm overload more manageable
A GUI certainly made organizing many terminal sessions more visual on Linux, although not necessarily more practical. During one stint of my lengthy engineering career, I was building much software using dozens of computers across a network, and discovered the charms and challenges of managing them all through GNU's screen tool. Not only could a single terminal or xterm contain many command line sessions from many computers across the network, but I could also disconnect from them all as they went about their work, drive home, and reconnect to see how the work was progressing. This was quite remarkable in the early 1990s, when Windows 2 and Mac OS 6 ruled the world. It's rather remarkable even today.
Bashing GUIs
Go to Full Article
Building A Dashcam With The Raspberry Pi Zero W
I've been playing around with the Raspberry Pi Zero W lately and having so much fun on the command line. For those uninitiated, it's a tiny Arm computer running Raspbian, a derivative of Debian. It has a 1 GHz processor that has the ability to be overclocked, and 512 MB of RAM, in addition to wireless g and Bluetooth.
A few weeks ago I built a garage door opener with video and accessible via the net. I wanted to do something a bit different and settled on a dashcam for my brother-in-law's SUV.
I wanted the camera and Pi Zero W mounted on the dashboard and easy to remove. On boot it should autostart the RamDashCam (RDC), and there should also be 4 desktop scripts: dashcam.sh, startdashcam.sh, stopdashcam.sh, and shutdown.sh. I also created a folder named video on the Desktop for the older video files. In addition, I needed a way to power the RDC when there is no power to the vehicle's USB ports. Lastly, I wanted its data accessible on the local LAN when the vehicle is at home.
Here is the parts list:
- Raspberry Pi Zero W kit (I got mine from Vilros.com)
- Raspberry Pi official camera
- Micro SD card, at least 32 gigs
- A 3D printed case from thingiverse.com
- Portable charger, usually used to charge cell phones and tablets on the go
- Command strips (like double-sided tape that's easy to remove) or velcro strips
First I flashed the SD card with Raspbian, powered it up and followed the setup menu. I also set a static IP address.
Now to the fun stuff. Let's create a service so we can start and stop RDC via systemd. Using your favorite editor, navigate to "/etc/systemd/system/", create "dashcam.service", and add the following:
    [Unit]
    Description=dashcam service
    After=network.target
    StartLimitIntervalSec=0

    [Service]
    Type=forking
    Restart=on-failure
    RestartSec=1
    User=pi
    WorkingDirectory=/home/pi/Desktop
    ExecStart=/bin/bash /home/pi/Desktop/startdashcam.sh

    [Install]
    WantedBy=multi-user.target
Now that that's complete, let's enable the service by running the following:
    sudo systemctl enable dashcam
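If you'd rather not reboot to test it, you can start the service and check its state immediately:

    sudo systemctl start dashcam
    systemctl status dashcam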
I added these scripts to start and stop RDC on the Desktop so my brother-in-law doesn't have to mess around in the menus or command line. Remember to "chmod +x" these 4 scripts.
startdashcam.sh
    #!/bin/bash

    # remove files older than 3 days
    find /home/pi/Desktop/video -type f -iname '*.flv' -mtime +3 -exec rm {} \;

    # start dashcam service
    sudo systemctl start dashcam
stopdashcam.sh
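Judging by the naming and the service unit above, a minimal stopdashcam.sh would presumably be the mirror image:

    #!/bin/bash

    # stop dashcam service
    sudo systemctl stop dashcam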
Go to Full Article
SeaGL - Seattle GNU/Linux Conference Happening This Weekend!
This Friday, November 13th and Saturday, November 14th, from 9am to 4pm PST the 8th annual SeaGL will be held virtually. This year features four keynotes, and a mix of talks on FOSS tech, community and history. SeaGL is absolutely free to attend and is being run with free software!
Additionally, we are hosting a pre-event career expo on Thursday, November 12th from 1pm to 5pm. Counselors will be available for 30 minute video sessions to provide resume reviews and career guidance.
Mission
The Seattle GNU/Linux conference (SeaGL) is a free, as in freedom and tea, grassroots technical summit dedicated to spreading awareness and knowledge about free/libre/open source software, hardware, and culture.
SeaGL strives to be welcoming, enjoyable, and informative for professional technologists, newcomers, enthusiasts, and all other users of free software, regardless of their background knowledge; providing a space to bridge these experiences and strengthen the free software movement through mentorship, collaboration, and community.
Dates/Times
- November 13th and 14th
- Friday and Saturday
- Main Event: 9am-4:30pm
- TeaGL: 1-2:45pm, both days
- Friday Social: 4:30-6pm
- Saturday Party: 6-10pm
- Pre-event Career Expo: 1-5pm, Thursday November 12th
- All times in Pacific Timezone
- `#SeaGL2020`
- `#TeaGLtoasts`
Social Media Reference Links
Best contact: press@seagl.org
Go to Full Article
Hot Swappable Filesystems, as Smooth as Btrfs
Filesystems, like file cabinets or drawers, control how your operating system stores data. They also hold metadata like filetypes, what is attached to data, and who has access to that data.
Quite honestly, not enough people consider which filesystem to use for their computers.
Windows and macOS users have little reason to look into filesystems, because each has had one in wide use since its inception: for Windows that's NTFS, and for macOS HFS+. For Linux users, there are plenty of different filesystem options to choose from. The current default in the Linux field is known as the Fourth Extended Filesystem, or ext4.
Currently there is discussion about changes in the filesystem space of Linux. Much like the change of default init systems and the switch to systemd a few years ago, there has been a push to change the default Linux filesystem to Btrfs. No, I'm not using slang or trying to insult you: Btrfs stands for the B-Tree filesystem. Many Linux users and sysadmins were not too happy with its initial changes. That could be because people are generally hesitant to change, or because the change may have been too abrupt. A friend once said, "I've learned that fear limits you and your vision. It serves as blinders to what may be just a few steps down the road for you." In this article I want to help ease the understanding of Btrfs and make the transition as smooth as butter. Let's go over a few things first.
What do Filesystems do?
Just to be clear, we can summarize what filesystems do and what they are used for. As mentioned before, filesystems are used for controlling how data is stored after a program is no longer using it, how that data is accessed, where that data is located, and what is attached to the data itself. As a sysadmin, among your many tasks and responsibilities is maintaining backups and managing filesystems. Partitioning filesystems helps separate different areas in business environments and is common practice for data retention. An example would be taking a 3TB hard disk and partitioning 1TB for your production environment, 1TB for your development environment, and 1TB for company-related documents and files. When accidents happen to a specific partition, only the data stored in that partition is affected, instead of the entire 3TB drive. A fun example would be a user testing a script in a development application that begins filling up disk space in the dev partition. Filling up a filesystem accidentally, whether from an application, a user's script, or anything else on the system, could cause an entire system to stop functioning. If data is split across separate partitions, only the data in the affected partition will be full, so the production and company data partitions stay safe.
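To make the 3TB example concrete, here is a minimal sketch using parted (the device name /dev/sdb, the GPT label, and the partition names are all assumptions):

    # label a hypothetical 3TB disk and split it into three ~1TB partitions
    sudo parted /dev/sdb mklabel gpt
    sudo parted /dev/sdb mkpart prod ext4 0% 33%
    sudo parted /dev/sdb mkpart dev ext4 33% 66%
    sudo parted /dev/sdb mkpart docs ext4 66% 100%
    # then create a filesystem on each partition, e.g. the first one
    sudo mkfs.ext4 /dev/sdb1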
Go to Full Article
How to Try Linux Without a Classical Installation
For many different reasons, you may not be able to install Linux on your computer.
Maybe you are not familiar with words like partitioning and bootloader, maybe you share the PC with your family, maybe you don’t feel comfortable wiping out your hard drive and starting over, or maybe you just want to see how it looks before proceeding with a full installation.
I know, it feels frustrating, but no worries, we have got you covered!
In this article, we will explore several ways to try Linux out without the hassle of a classical installation.
Choosing a distribution
In the Linux world, there are several distributions, and they are quite different from one another.
Some are general purpose operating systems; others are created with a specific use case in mind. That being said, I know how confusing this can be for a beginner.
If you are taking your first steps with Linux and are still not sure how and why to pick one distribution over another, there are several resources available online to help you.
A perfect example of these resources is the website https://distrochooser.de/, which walks you through a questionnaire to understand your needs and advises on which distribution could be a good fit for your use case.
Once you have chosen your distribution, there is a high chance it will have a live CD image available for testing before installation. If this is the case, below you can find several ways to “boot” your live CD ISO image.
MobaLiveCD
MobaLiveCD is an amazing open source application which lets you run a live Linux on Windows with nearly zero effort.
Download the application from the download page on the official site and run it.
It will present a screen where you can choose either a Linux Live CD ISO file or a bootable USB drive.
Click on "Run the LiveCD", select your ISO file, and select "No" when asked whether you want to create a hard disk.
Your Linux virtual machine will boot up “automagically”.
Go to Full Article
How to Create EC2 Duplicate Instance with Ansible
Many companies like mine use AWS infrastructure as a service (IaaS) heavily. Sometimes we want to perform a potentially risky operation on an EC2 instance. As long as we do not work with immutable infrastructure, it is imperative to be prepared for an instant revert.
One solution is a script that performs instance duplication, but in modern environments, where unification is essential, it is wiser to use widely known software instead of making up a custom script.
Here comes the Ansible!
Ansible is a simple automation tool. It handles configuration management, application deployment, cloud provisioning, ad-hoc task execution, network automation, and multi-node orchestration. It is marketed as a tool for making complex changes like zero-downtime rolling patching; here we have used it for this straightforward snapshotting task.
Requirements
For this example we will only need Ansible; in my case it was version 2.9. Subsequent releases introduced a major change with collections, so let's stick with this one for simplicity.
Since we are working with AWS, we require a minimal set of permissions, which includes permissions to:
- Create AWS snapshots
- Register images (AMI)
- Start and stop EC2 instances
Since I am forced to work on Windows, I have utilized Vagrant instances. Please find the Vagrantfile content below.
We are launching a virtual machine with CentOS 7 and Ansible installed.
For security reasons, Ansible by default does not read configuration from a mounted location, so we have to explicitly indicate the path to /vagrant/ansible.cfg.
Listing 1. Vagrantfile for our research
Vagrant.configure("2") do |config| config.vm.box = "geerlingguy/centos7" config.vm.hostname = "awx" config.vm.provider "virtualbox" do |vb| vb.name = "AWX" vb.memory = "2048" vb.cpus = 3 end config.vm.provision "shell", inline: "yum install -y git python3-pip" config.vm.provision "shell", inline: "pip3 install ansible==2.9.10" config.vm.provision "shell", inline: "echo 'export ANSIBLE_CONFIG=/vagrant/ansible.cfg' >> /home/vagrant/.bashrc" end First tasksIn the first lines of the Ansible we specify few meta values. Most of them, like name, hosts and tasks are mandatory. Others provide auxiliary functions.
Listing 2. duplicate_ec2.yml playbook first lines
    ---
    - name: yolo
      hosts: localhost
      connection: local
      gather_facts: false
      become: false
      vars:
        instance_id: i-deadbeef007
      tasks:
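Once the tasks are filled in, the playbook would be run from inside the Vagrant box like any other (the credential values below are placeholders; the EC2 modules also need boto/boto3 installed):

    # export AWS credentials for the ec2 modules
    export AWS_ACCESS_KEY_ID='REPLACE_ME'
    export AWS_SECRET_ACCESS_KEY='REPLACE_ME'
    # run the playbook
    ansible-playbook duplicate_ec2.yml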
Go to Full Article
TCP Analysis with Wireshark
The Transmission Control Protocol (TCP) is an essential aspect of network activity and governs the behavior of many services we take for granted. When sending your emails or just browsing the web, you are relying on TCP to send and receive your packets in a reliable fashion. Thanks to two DARPA scientists, Vinton Cerf and Bob Kahn, who developed TCP/IP in the 1970s, we have a specific set of rules that define how we communicate over a network. When Vinton and Bob first conceptualized TCP/IP, they set up a basic network topology and a device that can interface between two other hosts.
In Figure 1 we have two networks connected by a single gateway. The gateway plays an essential role in the development of any network and bears the responsibility of routing data properly between these two networks.
Since the gateway must understand the addresses of each host on the network, it is necessary to have a standard format in every packet that arrives. Vint and Bob called this the internetwork header, prefixed to the packet by the source host.
The source and destination entries, along with the IP address, uniquely identify every host on the network so that the gateway can accurately forward packets.
The sequence number and byte count identify each packet sent from the source and account for all of the text within the segment. The receiver can use these to determine whether it has already seen the packet, and discard it if necessary.
The checksum is used to validate each packet being sent, to ensure error-free transmission. This checksum uses a pseudo header and encapsulates the data of the original TCP header, such as the source/destination entries, header length, and byte count.
Go to Full Article
How to Add a Simple Progress Bar in Shell Script
At times, we need to write shell scripts that are interactive, and the user executing them needs to monitor the progress. For such requirements, we can implement a simple progress bar that gives an idea of how much of the task the script has completed.
To implement it, we only need to use the “echo” command with the following options and a backslash-escaped character.
-n : do not append a newline
-e : enable interpretation of backslash escapes
\r : carriage return (go back to the beginning of the line without printing a newline)
For the sake of understanding, we will use the “sleep 2” command to represent an ongoing task or a step in our shell script. In a real scenario, this could be anything like downloading files, creating backups, or validating user input. Also, for this example we assume only four steps in our script below, which is why we use 20, 40, 60, 80 (%) as the progress indicator. This can be adjusted to match the number of steps in a script. For instance, a script with three steps can be represented by 33, 66, 99 (%), while a script with ten steps can be represented by 10-90 (%) as the progress indicator.
The implementation looks like the following:
    echo -ne '>>>                       [20%]\r'
    # some task
    sleep 2
    echo -ne '>>>>>>>                   [40%]\r'
    # some task
    sleep 2
    echo -ne '>>>>>>>>>>>>>>            [60%]\r'
    # some task
    sleep 2
    echo -ne '>>>>>>>>>>>>>>>>>>>>>>>   [80%]\r'
    # some task
    sleep 2
    echo -ne '>>>>>>>>>>>>>>>>>>>>>>>>>>[100%]\r'
    echo -ne '\n'
In effect, every time the “echo” command executes, it replaces the output of the previous “echo” command in the terminal, thus representing a simple progress bar. The last “echo” command simply enters a newline (\n) in the terminal to resume the prompt for the user.
The execution looks like the following:
Go to Full Article
Ubuntu 20.10 “Groovy Gorilla” Arrives With Linux 5.8, GNOME 3.38, Raspberry Pi 4 Support
Just two days ago, Ubuntu marked the 16th anniversary of its first ever release, Ubuntu 4.10 “Warty Warthog,” which showed Linux could be a more user friendly operating system.
Back to the present: after a six-month development cycle following the current long-term release, Ubuntu 20.04 “Focal Fossa,” Canonical has announced a new version called Ubuntu 20.10 “Groovy Gorilla,” along with its seven official flavors: Kubuntu, Lubuntu, Ubuntu MATE, Ubuntu Kylin, Xubuntu, Ubuntu Budgie, and Ubuntu Studio.
Ubuntu 20.10 is a short-term or non-LTS release, which means it will be supported for 9 months, until July 2021. Though v20.10 may not seem like a major release, it does come with a lot of exciting new features. So, let’s see what Ubuntu 20.10 “Groovy Gorilla” has to offer:
New Features in Ubuntu 20.10 “Groovy Gorilla”
Ubuntu desktop for Raspberry Pi 4
Starting with one of the most important enhancements, Ubuntu 20.10 is the first Ubuntu release to feature desktop images for the Raspberry Pi 4. Yes, you can now download and run the Ubuntu 20.10 desktop on Raspberry Pi models with at least 4GB of RAM.
Both Server and Desktop images also support the new Raspberry Pi Compute Module 4. The 20.10 images may still boot on earlier models, but the new Desktop images are built only for the arm64 architecture and officially support only the Pi 4 variants with 4GB or 8GB of RAM.
Linux Kernel 5.8
Upgrading from the previous Linux kernel 5.4, the latest Ubuntu 20.10 ships the new Linux kernel 5.8, which Linus Torvalds dubbed “the biggest release of all time” as it contains over 17,595 commits.
So it’s obvious that Linux 5.8 brings numerous updates, new features, and hardware support. For instance: the Kernel Event Notification Mechanism, Thunderbolt support for Intel Tiger Lake and non-x86 systems, extended IPv6 Multi-Protocol Label Switching (MPLS) support, inline encryption hardware support, and initial support for booting POWER10 processors.
GNOME 3.38 Desktop Environment
Another key change in Ubuntu 20.10 is the latest version of the GNOME desktop environment, which enhances the visual appearance, performance, and user experience of Ubuntu.
One of my favorite features that GNOME 3.38 introduces is a much-needed separate “Restart” button in the System menu.
Among other enhancements, GNOME 3.38 also includes:
- Better multi-monitor support
- Revamped GNOME Screenshot app
- Customizable App Grid with no “Frequent Apps” tab
- Battery percentage indicator
- New Welcome Tour app written in Rust
- Core GNOME apps improvements
If you’re someone who wants to share the system’s Internet with other devices wirelessly, this feature of sharing a Wi-Fi hotspot through a QR code will definitely please you.
Thanks to GNOME 3.38, you can now turn your Linux system into a portable Wi-Fi hotspot that devices like laptops, tablets, and mobiles can join by scanning a QR code.
Add events in GNOME Calendar app
Tend to forget events? The pre-installed GNOME Calendar app now lets you add new events (birthdays, meetings, reminders, releases), which are displayed in the message tray. Instead of adding new events manually, you can also sync your events from Google, Microsoft, or Nextcloud calendars after adding online accounts in the settings.
Active Directory Support
In the Ubiquity installer, Ubuntu 20.10 has also added an optional feature to enable Active Directory (AD) integration. If you check the option, you’ll be directed to configure AD by providing information about the domain, administrator, and password.
Tools and Software upgrade
Ubuntu 20.10 also updates its tools, software, and subsystems to new versions. These include:
- glibc 2.32, GCC 10, LLVM 11
- OpenJDK 11
- rustc 1.41
- Python 3.8.6, Ruby 2.7.0, PHP 7.4.9
- perl 5.30
- golang 1.13
- Firefox 81
- LibreOffice 7.0.2
- Thunderbird 78.3.2
- BlueZ 5.55
- NetworkManager 1.26.2
- Nftables replaces iptables as default backend for the firewall
- Better support for fingerprint login
- Cloud images with KVM kernels boot without an initramfs by default
- Snap pre-seeding optimizations for boot time improvements
The full release notes for Ubuntu 20.10 are also available to read here.
How To Download Or Upgrade To Ubuntu 20.10
If you’re looking for a fresh installation of Ubuntu 20.10, download the ISO image, available for several platforms such as Desktop, Server, Cloud, and IoT.
But if you’re already using a previous version of Ubuntu, you can also easily upgrade your system to Ubuntu 20.10. To upgrade, you must be on Ubuntu 20.04 LTS, as you cannot reach 20.10 directly from 19.10, 19.04, 18.10, 18.04, 17.04, or 16.04. You should first hop to v20.04 and then to the latest v20.10.
As Ubuntu 20.10 is a non-LTS version, and by design Ubuntu only notifies you of new LTS releases, you need to upgrade manually, either via a GUI method using the built-in Software Updater tool or via a command line method using the terminal.
For command line method, open terminal and run the following commands:
sudo apt update && sudo apt upgrade
sudo do-release-upgrade -d -m desktop
Or else, if you’re not a terminal-centric person, here’s an official upgrade guide using a GUI Software Updater.
Enjoy Groovy Gorilla!
Btrfs on CentOS: Living with Loopback
The btrfs filesystem has taunted the Linux community for years, offering a stunning array of features and capability, but never earning universal acclaim. Btrfs is perhaps more deserving of patience, as its promised capabilities dwarf all peers, earning it vocal proponents with great influence. Still, none can deny that btrfs is unfinished, many features are very new, and stability concerns remain for common functions.
Most of the intended goals of btrfs have been met. However, Red Hat famously cut continued btrfs support from their 7.4 release, and has allowed the code to stagnate in their backported kernel since that time. The Fedora project announced their intention to adopt btrfs as the default filesystem for variants of their distribution, in a seeming juxtaposition. SUSE has maintained btrfs support for their own distribution and the greater community for many years.
For users, the most desirable features of btrfs are transparent compression and snapshots; these features are stable, and relatively easy to add as a veneer to stock CentOS (and its peers). Administrators are further compelled by adjustable checksums, scrubs, and the ability to enlarge as well as (surprisingly) shrink filesystem images, while some advanced btrfs topics (i.e. deduplication, RAID, ext4 conversion) aren't really germane for minimal loopback usage. The systemd init package also has dependencies upon btrfs, among them machinectl and systemd-nspawn. Despite these features, there are many usage patterns that are not directly appropriate for use with btrfs. It is hostile to most databases and many other programs with incompatible I/O, and should be approached with some care.
Go to Full Article
How to Secure Your Website with OpenSSL and SSL Certificates
The Internet has become the number one resource for news, information, events, and all things social. As most people know, there are many ways to create a website of your own and capture your own piece of the internet to share your stories, ideas, or things you like with others. When doing so, it is important to make sure you stay as protected on the internet as you would in the real world. There are many steps to take to stay safe, but in this article we will talk about staying secure on the web with an SSL certificate.
OpenSSL is a command line tool we can use as a type of "bodyguard" for our webservers and applications. It can be used for a variety of things related to HTTPS, such as generating private keys and CSRs (certificate signing requests). This article will break down what OpenSSL is, what it does, and give examples of how to use it to keep your website secure. Most online web/domain platforms provide SSL certificates for a fixed yearly price. The method shown here, although it takes a bit of technical knowledge, can save you some money and keep you secure on the web.
* For example purposes we will use testmastersite.com for commands and examples
How this guide may help you:
- Using OpenSSL to generate and configure CSRs
- Understanding SSL certificates and their importance
- Learn about certificate signing requests (CSRs)
- Learn how to create your own CSR and private key
- Learn about OpenSSL and its common use cases
Requirements
- A Linux-based OS
- Comfort with command line tools
The first thing to do is generate a 2048-bit RSA key pair on your machine. The pair I'm referring to is your private and public key. You can use a number of tools to do so, but for this example we will be working with OpenSSL.
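A minimal sketch of that step with OpenSSL (the file names simply reuse the article's example domain):

    # generate a new 2048-bit RSA private key and a CSR for it in one step
    openssl req -new -newkey rsa:2048 -nodes \
        -keyout testmastersite.key -out testmastersite.csr
    # review the CSR before submitting it to a certificate authority
    openssl req -text -noout -in testmastersite.csr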
What are SSL certificates and who cares?
According to GlobalSign.com, an SSL certificate is a small data file that digitally binds a cryptographic key to an organization's details. When installed on a webserver, it activates the padlock and the https protocol and allows secure connections from a web server to a browser. Let me break that down for you. An SSL certificate is like a bodyguard for your website. To confirm that a site is using an SSL certificate, you can typically check that the site has https in the URL rather than http. The "s" stands for Secure.
- Example SECURE site: https://www.testmastersite.com/
More in Tux Machines
Cluster Server R2 2U rack cluster server ships with up to 72 Rockchip RK3399/RK3328 SoMs
Rockchip RK3399 and RK3328 are typically used in Chromebooks, single board computers, TV boxes, and all sorts of AIoT devices, but if you ever wanted to create a cluster based on those processors, the Firefly Cluster Server R2 leverages the company’s RK3399, RK3328, or even RK1808 NPU SoMs to bring 72 modules to a 2U rack cluster server enclosure, for a total of up to 432 Arm Cortex-A72/A53 cores, 288 GB RAM, and 18 3.5-inch hard drives.
Firefly says the cluster can run Android, Ubuntu, or some other Linux distributions. Typical use cases include “cloud phone”, virtual desktop, edge computing, cloud gaming, cloud storage, blockchain, multi-channel video decoding, app cloning, etc. When fitted with the AI accelerators, it looks similar to Solidrun Janux GS31 Edge AI server designed for real-time inference on multiple video streams for the monitoring of smart cities & infrastructure, intelligent enterprise/industrial video surveillance, object detection, recognition & classification, smart visual analysis, and more. There’s no Wiki for Cluster Server R2 just yet, but you may find some relevant information on the Wiki for an earlier generation of the cluster server.
How to Install Docker On Ubuntu 20.04 LTS
Docker is an open source technology that allows you to install and run applications in several containers (machines) without interfering with the host or other containers. The technology is similar to virtualization, but it is more portable and easy to use.
What types of Docker are available?
There are two types of Docker available: Docker CE (Community Edition) and Docker EE (Enterprise Edition).
today's howtos
Contributing to KDE is easier than you think – Bug triaging
Today, 2021-01-28, is the Plasma Beta Review Day for Plasma 5.21, that is to say, Plasma 5.20.90. Right now it’s a bit after 2 a.m., so after this I’m going to bed so I can be present later.
This month I’ve mostly been enjoying my post-job vacation, as last year I was bordering on burnout. As such I didn’t help much.
Before bed I’ll be providing a few things I’ve learned about triaging, though. While this blog post isn’t specifically about the Beta Review Day, this should make the general bug triaging process clearer for you, making it quite timely.