Linux.com

News For Open Source Professionals

Linux Foundation Public Health Joins The Fight Against COVID-19 Pandemic

Monday 25th of January 2021 08:10:31 PM

Brian Behlendorf is one of the most respected luminaries of the open-source world. He has been heading the Linux Foundation’s Hyperledger project since its inception and recently took on additional responsibility for Linux Foundation Public Health. With the new administration sworn in, there will be an increased focus on science-backed public health efforts, and the foundation is well positioned to help the public sector tackle serious health crises by making open-source technologies strategically available. In this interview, we dive deep into the scope of the Linux Foundation Public Health project. It will look beyond the COVID-19 pandemic and address many other public health issues, including those we may see due to climate change.




The Maple Tree, A Modern Data Structure for a Complex Problem

Thursday 21st of January 2021 02:00:00 AM

In recent years, processors have seen growing core counts, which has pushed software to become multi-threaded and increased contention on the virtual memory data structures. The memory management subsystem uses the mmap_sem lock for write protection of the VMAs. Turning mmap_sem into a read-write semaphore helped with contention but did not solve the underlying issue. Even with a single-threaded program and a well-intentioned system administrator, contention arises through proc file accesses for application monitoring.

In this blog, we introduce a new data structure that can track gaps, store ranges, and be implemented in an RCU compatible manner. This is the Maple Tree.

Announcing the Unbreakable Enterprise Kernel Release 5 Update 4 for Oracle Linux

Wednesday 20th of January 2021 10:18:23 PM

A summary of what’s new in Unbreakable Enterprise Kernel Release 5 Update 4.

NVMe over TCP

Wednesday 20th of January 2021 10:18:22 PM

This post describes how to set up Oracle Linux for NVMe over Fabrics to use a standard Ethernet network, without having to purchase special RDMA-capable network hardware.

An inside look at CVE-2020-10713, a.k.a. the GRUB2 “BootHole”

Wednesday 20th of January 2021 10:18:22 PM

The inside story of how CVE-2020-10713, a.k.a. the GRUB2 “BootHole” vulnerability, was reported and resolved.

Extracting kernel stack function arguments from Linux x86-64 kernel crash dumps

Wednesday 20th of January 2021 10:18:21 PM

This blog post covers in detail how to extract stack function arguments from kernel crash dumps.

Migrate NFS to GlusterFS and nfs-ganesha

Wednesday 20th of January 2021 10:18:20 PM

This article covers how to migrate an NFS server from kernel space to user space, based on GlusterFS and nfs-ganesha.

struct page, the Linux physical page frame data structure

Wednesday 20th of January 2021 10:18:19 PM

Gain insight into the Linux physical page frame data structure struct page and how to safely use various fields in the structure.

Check out the Oracle talks at KVM Forum 2020

Wednesday 20th of January 2021 10:18:18 PM

The annual KVM forum conference is next week. It brings together the world’s leading experts on Linux virtualization technology to present their latest work. The conference is virtual this year, with live attendance from October 28-30, or check out the recordings once they are available! https://events.linuxfoundation.org/kvm-forum. We have a good number of engineers from the Oracle Linux kernel development team who will be presenting their work…

Multithreaded Struct Page Initialization

Wednesday 20th of January 2021 10:18:17 PM

Oracle Linux kernel developer Daniel Jordan contributes this post on the initial support for multithreaded jobs in padata.

The last padata blog described unbinding padata jobs from specific CPUs. This post will cover padata’s initial support for multithreading CPU-intensive kernel paths, which takes us to the memory management system.

The Bottleneck

During boot, the kernel needs to…

QEMU Live Update

Wednesday 20th of January 2021 10:18:16 PM

In this blog, Oracle Linux kernel engineers Steve Sistare and Mark Kanda present QEMU live update.

The ability to update software with critical bug fixes and security mitigations while minimizing downtime is extremely important to customers and cloud service providers. In this blog post, we present QEMU Live Update, a new method for updating a running QEMU instance to a new…

How to set up WireGuard on Oracle Linux

Wednesday 20th of January 2021 10:18:15 PM

Oracle Linux engineer William Kucharski provides an introduction to the VPN protocol WireGuard.

WireGuard has received a lot of attention of late as a new, easier-to-use VPN mechanism, and it has now been added to Unbreakable Enterprise Kernel 6 Update 1 as a technology preview. But what is it, and how do I use it? What is…

Blacks In Technology and The Linux Foundation Partner to Offer up to $100,000 in Training & Certification to Deserving Individuals

Tuesday 19th of January 2021 03:27:13 PM

Program will provide verifiable, respected industry credentials to help promising individuals start an IT career

SAN FRANCISCO, January 19, 2021 – The Linux Foundation, the nonprofit organization enabling mass innovation through open source, and The Blacks In Technology Foundation, the largest community of Black technologists globally, today announced the launch of a new scholarship program to help more Black individuals get started with an IT career.

Blacks in Technology will award 50 scholarships per quarter to promising individuals. The Linux Foundation will provide each of these recipients with a voucher to register for any Linux Foundation administered certification exam at no charge, such as the Linux Foundation Certified IT Associate, Certified Kubernetes Administrator, Linux Foundation Certified System Administrator and more. Associated online training courses will also be provided at no cost when available for the exam selected. Each recipient will additionally receive one-on-one coaching with a Blacks In Technology mentor each month to help them stay on track in preparing for their exam. 

All Linux Foundation certification exams are conducted online with a proctor monitoring virtually via webcam and screen sharing. Scholarship recipients will have six months to sit for their exam, and should they fail to pass on the first attempt, one retake will be provided. Upon passing a certification exam, they will receive a PDF certificate and a digital badge which can be displayed on digital resumes and social media profiles, and which can be independently verified by potential employers. 

“We are extremely pleased to expand our partnership with Blacks in Technology to make quality open source education and certification more accessible to aspiring Black IT professionals,” said Linux Foundation SVP & GM of Training & Certification Clyde Seepersad. “While we have taken steps at The Linux Foundation to increase diversity in the open source community, there is a long way yet to go. There is so much potential talent out there, but without the resources and opportunities to nurture it, much will remain unfulfilled. We hope this program will help scholarship recipients start on the path to becoming successful IT professionals who can go on to mentor the next generation.”

“By removing the financial barrier to entry for our members, The Linux Foundation has empowered a new wave of diverse technical experts,” said Dennis Schultz, Executive Director of the Blacks In Technology Foundation. “By offering training and certification options for all experience levels, we can meet people where they are in their technical journey and provide support along the way for long-term success.”

Those interested in applying for a Blacks in Technology/Linux Foundation scholarship can do so by visiting https://foundation.blacksintechnology.net/programs/

About Blacks in Technology

The Blacks In Technology Foundation is a 501(c)(3) non-profit and the largest global community of Black technologists with a combined membership and social media reach of over 50,000. Membership in Blacks In Technology is free. The Blacks In Technology (BIT) Foundation’s goal and mission is to “stomp the divide” between Black workers and the rest of the tech industry and to fundamentally influence and effect change. BIT intends to level the playing field through training, education, networking, and mentorship with the support of allies, partners, sponsors, and members. For more information please visit blacksintechnology.net

About the Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation’s projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, and more. The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

# # #


Review of Container-to-Container Communications in Kubernetes

Friday 15th of January 2021 04:15:34 PM

This article was originally posted at TheNewStack.

By Matt Zand and Jim Sullivan

Kubernetes is a container orchestration solution. It provides virtualized runtime environments called Pods, each of which houses one or more containers. An important aspect of Kubernetes is container communication within the Pod, and an important part of managing the Kubernetes network is forwarding container ports, internally and externally, to make sure containers within a Pod can communicate with one another properly. To manage such communications, Kubernetes offers the following four networking models:

  • Container-to-Container communications
  • Pod-to-Pod communications
  • Pod-to-Service communications
  • External-to-Internal communications

In this article, we dive into Container-to-Container communications by showing ways in which containers within a Pod can network and communicate.

Communication Between Containers in a Pod

Having multiple containers in a single Pod makes it relatively straightforward for them to communicate with each other. They can do this using several different methods. In this article, we discuss two methods in more detail: i- Shared Volumes and ii- Inter-Process Communications.

I- Shared Volumes in a Kubernetes Pod

In Kubernetes, you can use a shared Kubernetes Volume as a simple and efficient way to share data between containers in a Pod. For most cases, it is sufficient to use a directory on the host that is shared with all containers within a Pod.

A Kubernetes Volume enables data to survive container restarts, but it has the same lifetime as the Pod. This means that the volume (and the data it holds) exists exactly as long as the Pod exists. If the Pod is deleted for any reason, even if an identical replacement is created, the shared Volume is destroyed and created from scratch.

A standard use case for a multicontainer Pod with a shared Volume is when one container writes logs or other files to the shared directory, and the other container reads from the shared directory. For example, we can create a Pod like so:
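A minimal sketch of such a Pod, matching the description below (the Pod and container names mc-pod, web and helper are illustrative choices, not anything mandated by Kubernetes):

apiVersion: v1
kind: Pod
metadata:
  name: mc-pod
spec:
  volumes:
  - name: html
    emptyDir: {}
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  - name: helper
    image: debian
    volumeMounts:
    - name: html
      mountPath: /html
    # Append the current date and time to index.html every second.
    command: ["/bin/sh", "-c"]
    args:
    - while true; do date >> /html/index.html; sleep 1; done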

In this example, we define a volume named html. Its type is emptyDir, which means that the Volume is first created when a Pod is assigned to a node and exists as long as that Pod is running on that node; as the name says, it is initially empty. The first container runs the Nginx server and has the shared Volume mounted to the directory /usr/share/nginx/html. The second container uses the Debian image and has the shared Volume mounted to the directory /html. Every second, the second container adds the current date and time into the index.html file, which is located in the shared Volume. When the user makes an HTTP request to the Pod, the Nginx server reads this file and transfers it back to the user in response to the request.

You can check that the pod is working either by exposing the nginx port and accessing it using your browser, or by checking the shared directory directly in the containers:
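For example, a quick sketch using the mc-pod names from the manifest above:

$ kubectl port-forward mc-pod 8080:80 &
$ curl http://localhost:8080
$ kubectl exec mc-pod -c helper -- tail /html/index.html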

II- Inter-Process Communications (IPC)

Containers in a Pod share the same IPC namespace, which means they can communicate with each other using standard inter-process communication mechanisms such as SystemV semaphores or POSIX shared memory. Because containers in a Pod also share the same network namespace, they can reach one another via the localhost hostname as well.

In the following example, we define a Pod with two containers. We use the same Docker image for both. The first container is a producer that creates a standard Linux message queue, writes a number of random messages, and then writes a special exit message. The second container is a consumer which opens that same message queue for reading and reads messages until it receives the exit message. We also set the restart policy to “Never”, so the Pod stops after the termination of both containers.
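A minimal sketch of such a Pod, assuming a hypothetical image mq-demo whose entrypoint takes producer or consumer as an argument:

apiVersion: v1
kind: Pod
metadata:
  name: mq-pod
spec:
  restartPolicy: Never       # the Pod stops after both containers terminate
  containers:
  - name: producer
    image: mq-demo           # hypothetical image: creates the message queue and writes messages
    args: ["producer"]
  - name: consumer
    image: mq-demo           # same hypothetical image: reads until the exit message arrives
    args: ["consumer"]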

To check this out, create the pod using kubectl create and watch the Pod status:
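For example, assuming the sketch above is saved as mq-pod.yaml:

$ kubectl create -f mq-pod.yaml
$ kubectl get pod mq-pod --watch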

Now you can check logs for each container and verify that the second container received all messages from the first container, including the exit message:
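Again assuming the container names from the sketch:

$ kubectl logs mq-pod -c producer
$ kubectl logs mq-pod -c consumer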

There is one major problem with this Pod, however, and it has to do with how containers start up.

Conclusion

The primary reason that Pods can have multiple containers is to support helper applications that assist a primary application. Typical examples of helper applications are data pullers, data pushers and proxies. An example of this pattern is a web server with a helper program that polls a git repository for new updates.

The Volume in this exercise provides a way for containers to communicate during the life of the Pod. If the Pod is deleted and recreated, any data stored in the shared Volume is lost. In this article, we also discussed Inter-Process Communications among containers within a Pod, which is an alternative to the shared Volume approach. Now that you have learned how containers inside a Pod can communicate and exchange data, you can move on to learn other Kubernetes networking models, such as Pod-to-Pod or Pod-to-Service communications.

About the Authors

Matt Zand is a serial entrepreneur and the founder of three successful tech startups: DC Web Makers, Coding Bootcamps and High School Technology Services. He is a leading author of the Hands-on Smart Contract Development with Hyperledger Fabric book by O’Reilly Media.

Jim Sullivan has a bachelor’s degree in Electrical Engineering, a master’s degree in Computer Science, and an MBA. He has been a practicing software engineer for 18 years. Currently, he leads an expert team in blockchain development, DevOps, cloud, application development, and the SAFe Agile methodology. Jim is an IBM Master Instructor.


How to Create and Manage Archive Files in Linux

Friday 15th of January 2021 04:15:33 PM

By Matt Zand and Kevin Downs

In a nutshell, an archive is a single file that contains a collection of other files and/or directories. Archive files are typically used to transfer files (locally or over the internet) or to make a backup copy of a collection of files and directories, which lets you work with only one file (which, if compressed, is smaller than the combined size of the files within it) instead of many. Likewise, archives are used for packaging software applications. This single file can be easily compressed for ease of transfer, while the files in the archive retain the structure and permissions of the original files.

We can use the tar tool to create, list, and extract files from archives. Archives made with tar are normally called “tar files,” “tar archives,” or—since all the archived files are rolled into one—“tarballs.”

This tutorial shows how to use tar to create an archive, list the contents of an archive, and extract the files from an archive. Two common options used with all three of these operations are ‘-f’ and ‘-v’: to specify the name of the archive file, use ‘-f’ followed by the file name; use the ‘-v’ (“verbose”) option to have tar output the names of files as they are processed. While the ‘-v’ option is not necessary, it lets you observe the progress of your tar operation.

For the remainder of this tutorial, we cover 3 topics: 1- Create an archive file, 2- List contents of an archive file, and 3- Extract contents from an archive file. We conclude this tutorial by surveying 6 practical questions related to archive file management. What you take away from this tutorial is essential for performing tasks related to cybersecurity and cloud technology.

1- Creating an Archive File

To create an archive with tar, use the ‘-c’ (“create”) option, and specify the name of the archive file to create with the ‘-f’ option. It’s common practice to use a name with a ‘.tar’ extension, such as ‘my-backup.tar’. Note that unless specifically mentioned otherwise, all commands and command parameters used in the remainder of this article are used in lowercase. Keep in mind that while typing commands in this article on your terminal, you need not type the $ prompt sign that comes at the beginning of each command line.

Give as arguments the names of the files to be archived; to create an archive of a directory and all of the files and subdirectories it contains, give the directory’s name as an argument.

 To create an archive called ‘project.tar’ from the contents of the ‘project’ directory, type:

$ tar -cvf project.tar project

This command creates an archive file called ‘project.tar’ containing the ‘project’ directory and all of its contents. The original ‘project’ directory remains unchanged.

Use the ‘-z’ option to compress the archive as it is being written. This yields the same output as creating an uncompressed archive and then using gzip to compress it, but it eliminates the extra step.
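For comparison, the equivalent two-step approach looks like this (using the same example names as the rest of this tutorial):

$ tar -cvf project.tar project
$ gzip project.tar

gzip replaces ‘project.tar’ with the compressed file ‘project.tar.gz’, the same result the single ‘-z’ command below produces in one step.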

 To create a compressed archive called ‘project.tar.gz’ from the contents of the ‘project’ directory, type:

$ tar -zcvf project.tar.gz project

This command creates a compressed archive file, ‘project.tar.gz’, containing the ‘project’ directory and all of its contents. The original ‘project’ directory remains unchanged.

NOTE: While using the ‘-z’ option, you should specify the archive name with a ‘.tar.gz’ extension and not a ‘.tar’ extension, so the file name shows that the archive is compressed. Although not required, it is a good practice to follow.

Gzip is not the only form of compression. There are also bzip2 and xz. When we see a file with a ‘.xz’ extension, we know it has been compressed using xz; when we see a file with a ‘.bz2’ extension, we can infer it was compressed using bzip2. We are going to steer away from bzip2, as it is becoming unmaintained, and focus on xz. Compressing with xz takes longer, but it is typically worth the wait: the compression is much more effective, meaning the resulting file will usually be smaller than with other compression methods. Even better, decompression speed does not differ much between the various compression methods. Below we see an example of how to utilize xz when compressing a file using tar:

  $ tar -Jcvf project.tar.xz project

We simply switch -z for gzip to uppercase -J for xz. Here are some outputs to display the differences between the forms of compression:

As you can see, xz takes the longest to compress. However, it does the best job of reducing file size, so it is worth the wait. The larger the file, the better the compression becomes, too!

2- Listing Contents of an Archive File

To list the contents of a tar archive without extracting them, use tar with the ‘-t’ option.

 To list the contents of an archive called ‘project.tar’, type:

$ tar -tvf project.tar  

This command lists the contents of the ‘project.tar’ archive. Using the ‘-v’ option along with the ‘-t’ option causes tar to output the permissions and modification time of each file, along with its file name—the same format used by the ls command with the ‘-l’ option.

 To list the contents of a compressed archive called ‘project.tar.gz’, type:

$ tar -ztvf project.tar.gz

 3- Extracting Contents from an Archive File

To extract (or unpack) the contents of a tar archive, use tar with the ‘-x’ (“extract”) option.

 To extract the contents of an archive called ‘project.tar’, type:

$ tar -xvf project.tar

This command extracts the contents of the ‘project.tar’ archive into the current directory.

If an archive is compressed, which usually means it will have a ‘.tar.gz’ or ‘.tgz’ extension, include the ‘-z’ option.

 To extract the contents of a compressed archive called ‘project.tar.gz’, type:

$ tar -zxvf project.tar.gz

NOTE: If there are files or subdirectories in the current directory with the same name as any of those in the archive, those files will be overwritten when the archive is extracted. If you don’t know what files are included in an archive, consider listing the contents of the archive first.

Another reason to list the contents of an archive before extracting them is to determine whether the files in the archive are contained in a directory. If not, and the current directory contains many unrelated files, you might confuse them with the files extracted from the archive.

To extract the files into a directory of their own, make a new directory, move the archive to that directory, and change to that directory, where you can then extract the files from the archive.
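A sketch of that workflow, using the ‘project.tar’ example from above (the directory name is an arbitrary choice):

$ mkdir project-archive
$ mv project.tar project-archive
$ cd project-archive
$ tar -xvf project.tar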

Now that we have learned how to create an archive file and list/extract its contents, we can move on to discuss the following 6 practical questions that are frequently asked by Linux professionals.

  • Can we add content to an archive file without unpacking it?

Unfortunately, once a file has been compressed there is no way to add content to it. You would have to “unpack” it or extract the contents, edit or add content, and then compress the file again. If it’s a small file this process will not take long. If it’s a larger file then be prepared for it to take a while.
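One variant of that round trip for a gzip-compressed archive: since a plain (uncompressed) tar archive can have files appended with tar’s ‘-r’ flag, you can strip the compression layer, append, and recompress:

$ gunzip project.tar.gz          # remove the compression layer
$ tar -rvf project.tar newfile   # append newfile to the uncompressed archive
$ gzip project.tar               # compress it again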

  • Can we delete content from an archive file without unpacking it?

This depends on the version of tar being used. Newer versions of tar support a --delete option.

For example, let’s say we have files file1 and file2. They can be removed from file.tar with the following:

$ tar -vf file.tar --delete file1 file2

To remove a directory dir1:

$ tar -f file.tar --delete dir1/*

  • What are the differences between compressing a folder and archiving it?

The simplest way to look at the difference between archiving and compressing is to look at the end result. When you archive files, you are combining multiple files into one. So if we archive ten 100KB files, we will end up with one 1000KB file. On the other hand, if we compress those files, we could end up with a file that is only a few KB or close to 100KB.
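One way to see the difference on your own system (a sketch; ‘project’ is the example directory used earlier):

$ du -sh project          # size of the original directory
$ tar -cvf project.tar project
$ du -h project.tar       # the archive: roughly the sum of its files
$ gzip project.tar
$ du -h project.tar.gz    # the compressed archive: usually much smaller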

  • How to compress archive files?

As we saw above, you can create archive files using the tar command with the cvf options. To compress the archive file we made, there are two options: run the archive file through a compression utility such as gzip, or use a compression flag when using the tar command. The most common compression flags are -z for gzip, -j for bzip2, and -J for xz. We can see the first method below:

$ gzip file.tar

Or we can just use a compression flag when using the tar command, here we’ll see the gzip flag “z”:

$ tar -cvzf file.tar.gz /some/directory

  • How to create archives of multiple directories and/or files at one time?

It is not uncommon to be in situations where we want to archive multiple files or directories at once. And it’s not as difficult as you think to tar multiple files and directories at one time. You simply supply which files or directories you want to tar as arguments to the tar command:

$ tar -cvzf file.tar.gz file1 file2 file3

or

$ tar -cvzf file.tar.gz /some/directory1 /some/directory2

  • How to skip directories and/or files when creating an archive?

You may run into a situation where you want to archive a directory or file but you don’t need certain files to be archived. To avoid archiving those files, or to “exclude” them, use the --exclude option with tar:

$ tar --exclude '/some/directory' -cvf file.tar /home/user

In this example, /home/user would be archived, but /some/directory would be excluded if it was under /home/user. It’s important to put the --exclude option before the source and destination, and to enclose the file or directory being excluded in single quotation marks.

Summary

The tar command is useful for creating backups or compressing files you no longer need. It’s good practice to back up files before changing them. If something doesn’t work as intended after the change, you will always be able to revert to the old file. Compressing files no longer in use helps keep systems clean and lowers disk space usage. There are other utilities available, but tar has reigned supreme for its versatility, ease of use, and popularity.

Resources

If you would like to learn more about Linux, the following articles and tutorials are highly recommended:

About the Authors

Matt Zand is a serial entrepreneur and the founder of three tech startups: DC Web Makers, Coding Bootcamps and High School Technology Services. He is a leading author of the Hands-on Smart Contract Development with Hyperledger Fabric book by O’Reilly Media. He has written more than 100 technical articles and tutorials on blockchain development for the Hyperledger, Ethereum and Corda R3 platforms. At DC Web Makers, he leads a team of blockchain experts for consulting on and deploying enterprise decentralized applications. As chief architect, he has designed and developed blockchain courses and training programs for Coding Bootcamps. He has a master’s degree in business management from the University of Maryland. Prior to blockchain development and consulting, he worked as a senior web and mobile app developer and consultant, angel investor, and business advisor for a few startup companies. You can connect with him on LinkedIn: https://www.linkedin.com/in/matt-zand-64047871

Kevin Downs is a Red Hat Certified System Administrator (RHCSA). In his current job as a sysadmin at IBM, he is in charge of administering hundreds of servers running different Linux distributions. He is a Lead Linux Instructor at Coding Bootcamps, where he has authored five self-paced courses.


Prepr Partners with the Linux Foundation to Provide Digital Work-Integrated Learning through the F.U.N.™ Program

Friday 15th of January 2021 04:15:31 PM

December 14th, 2020 – Toronto, Canada – Prepr is excited to announce a new partnership with The Linux Foundation, the nonprofit organization enabling mass innovation through open source, that will give work-integrated learning experiences to youth facing employment barriers. The new initiative, the Flexible Upskilling Network (F.U.N.) program, launches in collaboration with the Magnet Network and the Network for the Advancement of Black Communities (NABC). The F.U.N. program is a blended learning program, where participants receive opportunities to combine valuable work experience with digital skill development over a 16-week journey. The objective of the F.U.N. program is to support youth, with a focus on women and visible minority groups who are involuntarily not in employment, education, or training (NEET) in Ontario, by helping them gain employability skills, including soft skills like communication, collaboration, and problem-solving.

Caitlin McDonough, Chief Education Officer at Prepr, says of the F.U.N. program: “Digital skills are essential for the workforce of the future. We at Prepr are looking forward to the opportunity to support youth capacity development for the future of work.”

With The Linux Foundation, Prepr is committed to supporting over 180 youth participants in enrolling in and completing the F.U.N. program between July 2020 and March 2021. Prepr will be using its signature PIE® method to train the participants in Project Leadership, Innovation, and Entrepreneurship to expose them to real-world business challenges. The work-integrated learning experience Prepr provides will support participants in developing both soft and hard skills, with a focus on digital skills to help them secure gainful employment for the uncertain future of work.

“In this day and age, it is essential to have a good educational foundation in technology to maximize your chances of career success,” said Clyde Seepersad, SVP and GM, Training & Certification at The Linux Foundation. “We are thrilled to partner with Prepr to bring The Linux Foundation’s vendor-neutral, expert training in the open source technologies that serve as the backbone of modern technologies to communities that will truly benefit from it. I look forward to seeing how these promising students perform and hope to partner with Prepr on future initiatives to train even more in the future.”

The program will explore digital career pathways through multiple work-related challenges. These work challenges will bring creative approaches to gaining innovative skills that are invaluable in today’s new normal of remote work and learning, while allowing individuals to become more competitive in today’s digital workforce.

Stephen Crawford, MPP for Oakville, said this about the government’s commitment to supporting youth facing employment barriers: “This government is committed to supporting our youth, notably visible minorities, as they prepare to enter the workforce. The youth of today will be the leaders of tomorrow.” The Ontario government funding for the F.U.N. program is part of a $37 million investment in training initiatives across the province.

Through the program’s blended learning approach, participants will learn how to use Prepr’s signature PIE® tool, which addresses three essential skills gaps facing the business services sector today: expertise in innovation, project management, and business development (entrepreneurship, sales, and commercialization). At the end of the program, participants will gain a certification, along with 12 weeks of hands-on work experience, which will foster valuable, future-proof skills to secure gainful employment.

The Linux Foundation will also support participants through an introductory course to Linux and related tools: LFS101x: Introduction to Linux. The program will help to develop the digital skills essential for our new normal of work, with beginner-level challenges to fill obvious skills gaps and foster a mentality of problem-solving. With the support of open Linux Foundation resources, these challenges will be an opportunity for participants to ideate and create project solutions ready for real-world implementation.

About Prepr

Prepr provides the tools, resources, and technology to empower individuals to become lifelong problem solvers. Through triangular cooperation between the public and private sectors as well as government, Prepr aims to strengthen the collaboration on challenges that affect individuals, communities, businesses, and infrastructure to create a more sustainable future for everyone.

About The Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation’s projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, and more. The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users, and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.


Open Source Jobs Remain Secure During COVID-19 Pandemic and More Findings From Linux Foundation and Laboratory for Innovation Science at Harvard Report

Friday 15th of January 2021 04:15:30 PM

A new report from The Linux Foundation and Laboratory for Innovation Science at Harvard (LISH) has found that 56% of survey respondents reported involvement in open source projects was important in getting their current job, and 55% feel that participating in open source projects has increased their salary or otherwise improved their job prospects. The “Report on the 2020 FOSS Contributor Survey” compiled the answers of 1,196 contributors to free and open source software (FOSS), and also found that 81% stated the skills and knowledge gained by working on open source were valuable to their employer.

One highlight of the report was the finding that, “[d]espite the survey being administered during the economic downturn resulting from the COVID-19 pandemic, very few respondents were out of the workforce.” This aligns with our 2020 Open Source Jobs Report from earlier this year, in which only 4% of hiring managers reported they have laid off open source professionals due to the pandemic, and a further 2% furloughed open source staff.

In terms of why these individuals contribute to open source projects, respondents were, unsurprisingly, most likely to say it is because they use open source software and need certain features added, so they build and add said features. The next top answers provided some more insight into what motivates these open source professionals, though. Those were “I enjoy learning” and “Contributing allows me to fulfill a need for creative, challenging, and/or enjoyable work”. This also aligns with the recent jobs report, in which open source pros reported they decided to work in the open source community because “Open source runs everything” and “I am passionate about open source”. Both reports suggested that compensation, while important, is not a dominant source of motivation.

Focusing more on what open source projects can do to be successful, the new report goes on to suggest that, “FOSS projects could also provide some educational materials (such as tutorials or getting started guides) about their projects to help those motivated by a desire to learn.” This gets to the heart of our mission at LF Training & Certification – to make quality training materials about open source technologies accessible to everyone. 

One area of opportunity for projects, employers and open source pros, according to the report, is secure development practices. The survey respondents overwhelmingly reported that they spend little time focusing on security issues, despite both the quantity and sophistication of attacks increasing year after year. The report goes on to suggest that “a free online course on how to develop secure software as a desirable contribution from an external source” may help. LF Training & Certification recently released just such a training program in the form of our Secure Software Development Fundamentals Professional Certificate program, created in partnership with the Open Source Security Foundation and hosted by the non-profit learning platform edX. The program consists of three courses, which can all be audited for free; those who wish to obtain the Professional Certificate may do so by paying a fee and passing a series of tests aligned to each course. Employers concerned about software development security issues should consider mandating that staff take training like this, and projects should consider requiring it of maintainers as well.

This is just the tip of the iceberg in terms of the findings of the FOSS Contributor Survey; we encourage you to download and review the full document for even more insight and recommendations.


Tips for Starting Your New IT Career in 2021!

Friday 15th of January 2021 04:15:29 PM

2020 was a difficult year for all of us, and for many the difficulties continue into 2021. Jobs have been lost, and whole industries have been forced to revamp their entire business models, leaving many out of work or facing new ways of working. While significant challenges remain, think of this as an opportunity to consider a new career in the new year.

Pick the right path for you

The first thing to consider when looking at moving into an IT career is deciding which area of IT to pursue. The 2020 Open Source Jobs Report found the most in-demand position to be DevOps practitioner, followed by developer. The top areas of expertise being sought by hiring managers are Linux, cloud, and security. While it’s good to consider which skills are in demand, it’s just as important to figure out which subject areas will interest you most. If you find a role that not only offers great career opportunities but that you will also enjoy, you are that much more likely to be successful. Our Career Path Quiz is a great place to start, and can point you in the direction of a technology focus that aligns with your existing interests.

Start with free training to ensure there’s a fit

Before jumping head first into a training and/or certification program, take advantage of free training courses to gain baseline knowledge and to ensure this path is really one you want to pursue. Our Plan Your Training page outlines suggested courses and certifications depending on the subject area you’ve chosen to pursue. Many paths, including System Administration, Cloud & Containers, and DevOps & Site Reliability Engineering, all start with LFS101 – Introduction to Linux, which is a good starting point for just about anyone looking to start an IT career. Other popular free courses include LFS151 – Introduction to Cloud Infrastructure Technologies, LFS158 – Introduction to Kubernetes, and LFS162 – Introduction to DevOps & Site Reliability Engineering.

Begin learning about intermediate and advanced topics

Once you’ve selected a path and taken some free courses to confirm it’s right for you, it’s now time to move into intermediate and advanced training courses. The Plan Your Training page is still a great resource as it lists the courses that will be most beneficial to learn about a particular topic area. Keep in mind that you typically will not need to complete every single course in a given area to be ready to begin working; concentrate on ensuring that you have the basic skills needed and you can always come back later in your career to pursue more advanced courses.

Think about certifications

While planning the training courses you wish to complete, keep certifications top of mind as well. Especially for those who are new to IT and do not have past experience to fall back on, holding a certification gives potential employers confidence that you have the skills needed to succeed in a given role. Many Linux Foundation training courses complement and help prepare for specific certification exams, so work both into your learning plan. We offer certifications for those just starting out, like the Linux Foundation Certified IT Associate (LFCA), in addition to more specialized certifications like the Certified Kubernetes Administrator (CKA). Be sure to take advantage of the digital badges awarded for successfully completing a certification, which can be linked to social media profiles like LinkedIn and can be independently verified, giving employers confidence in your skills. The Open Source Jobs Report also found that a majority of hiring managers give preference to certified candidates, so these certifications really can open doors.

More structured options

For those who want a bit more structure and support in achieving their learning goals, we also offer two bootcamps. If you’re just getting started and are interested in pursuing a cloud career, the Cloud Engineer Bootcamp meets all your training and certification needs in one organized package. One major benefit of the bootcamps is that they include instructor office hours five days per week, enabling you to speak directly with one of our expert instructors to get your questions answered and pick up tips on how to be most successful.

As we move forward into 2021, countless new career opportunities will be available for those who take the steps to pursue them. Get started today and enroll in training to gain the skills you need to be successful in an IT career, then take those skills and gain the certification to prove it!


New, Free Training Course Covering the Basics of WebAssembly Now Available

Friday 15th of January 2021 04:15:28 PM

Introduction to WebAssembly is the newest training course from The Linux Foundation! This course, offered on the non-profit edX learning platform, can be audited by anyone at no cost. The course is designed for web developers, Dweb, cloud, and blockchain developers, architects, and CTOs interested in learning about the strengths and limitations of WebAssembly, the fourth “official” language of the web (alongside JavaScript, HTML and CSS), and its potential applications in blockchain, serverless, edge/IoT, and more. WebAssembly has been rapidly growing in popularity thanks to its security, simplicity and the lightweight nature of the runtime. It is also language-agnostic, being a suitable compilation target for a wide range of modern languages.

The six-hour course uses video content, written material and hands-on labs to delve into how WebAssembly runs ‘under the hood’ and how you can leverage its capabilities in and beyond the browser. It also explores a series of potential applications in different industries, and takes a quick peek at upcoming features. Enrollees will walk away from the course with an understanding of what the WebAssembly runtime is and how it provides a secure, fast and efficient compilation target for a wide range of modern programming languages, allowing them to target the browser and beyond.

The course was developed by Colin Eberhardt, the Technology Director at Scott Logic, a UK-based software consultancy which creates complex applications for financial services clients. Colin is an avid technology enthusiast, spending his evenings contributing to open source projects, writing blog posts and learning as much as he can.

“WebAssembly is one of the most exciting technologies I have come across for years,” said Eberhardt. “Its initial promise was a fast and efficient multi-language runtime for the web, but it has the potential to be so much more. We are already seeing this runtime being used for numerous applications beyond the browser, including serverless and blockchain, with more novel uses and applications appearing each week!”

The course is available for immediate enrollment. Those requiring a verified certificate of completion may upgrade their enrollment for $149. Start gaining skills in WebAssembly today!


More in Tux Machines

GNOME, Arch and FreeBSD

  • Phaedrus Leeds: Cleaning Up Unused Flatpak Runtimes

    Despite having been a contributor to the GNOME project for almost 5 years now (first at Red Hat and now at Endless), I’ve never found the time to blog about my work. Fortunately, in many cases collaborators have made posts or the work was otherwise announced. Now that Endless is a non-profit foundation and we are working hard at advocating for our solutions to technology access barriers in upstream projects, I think it’s an especially good time to make my first blog post announcing a recent feature in Flatpak, which I worked on with a lot of help from Alex Larsson.

    On many low-end computers, persistent storage space is quite limited. Some Endless hardware for example has only 32 GB. And we want to fill much of it with useful content in the form of Flatpak apps so that the computers are useful even offline. So often in the past we have shipped computers that are already quite full before the user stores any files. Ideally we want that limited space to be used as efficiently as possible, and Flatpak and OSTree already have some neat mechanisms to that end, such as de-duplicating any identical files across all apps and their runtimes (and, in the case of Endless OS, including the OS files as well).

  • Outreachy Progress Report

    I’m halfway gone into my Outreachy internship at the GNOME Foundation. Time flies so fast right? I’m a little emotional cuz I don’t want this fun adventure to end soo soon. Just roughly five weeks to go!! Oh well, let’s find out what I’ve been able to achieve over the past eight weeks and what my next steps are… My internship project is to complete the integration between the GNOME Translation Editor (previously known as Gtranslator) and Damned Lies(DL). This integration involves enabling users to reserve a file for translation directly from the Translation Editor and permitting them to upload po files to DL.

  • Kubernetes on Hetzner in 2021

    Hello and welcome to my little Kubernetes on Hetzner tutorial for the first half of 2021. This tutorial will help you bootstrap a Kubernetes cluster on Hetzner with KubeOne. I am writing this small tutorial because I had some trouble bootstrapping a cluster on Hetzner with KubeOne. But first of all, let us dive into the question of why we even need KubeOne and how KubeOne helps. KubeOne is a small wrapper around kubeadm. Kubeadm is the official tool for installing Kubernetes on VMs or bare-metal nodes, but it has one major disadvantage: it is very toilsome. KubeOne tries to solve this by providing a wrapper around kubeadm and various other provisioning tools like Terraform. Terraform lets you manage your infrastructure as code. The advantage is that you can easily destroy, deploy or enhance your infrastructure via a few config file changes. You may ask yourself why you even need this tutorial. There is already at least one tutorial that guides you through the process of setting up a Kubernetes cluster on Hetzner. This is correct, but I felt it is unnecessarily complicated, takes too many manual steps and is not really automatable (although there are solutions like kubespray that intend to solve this).

  • FreeBSD Desktop – Part 22 – Configuration – Aero Snap Extended

    I like to post new articles and solutions when I think they are ready. Production tested and stable. Well thought out and tested … or at least trying to make things as good as possible in the available time window. Perfectionism definitely does not help with posting articles to the blog often.

    Today’s solution is not perfect but I will ‘ship it’ anyway because good and done is better than perfect. I wanted to rework it so many times that I stopped counting … and I really would like to continue the series – thus I have made a conscious decision to finally release it and hope that maybe someone else will have better ideas to make it better. I really wanted to provide a pixel-perfect solution with as much screen space used as possible, but to deliver it as it is, I tested it only on the resolution I use the most – the FullHD one with 1920×1080 pixels.

    You may want to check other articles in the FreeBSD Desktop series on the FreeBSD Desktop – Global Page, where you will find links to all episodes of the series along with a table of contents for each episode.

Oracle, Red Hat, and CloudLinux

  • Cloud Native Patterns: a free ebook for developers

    Building cloud native applications is a challenging undertaking, especially considering the rapid evolution of cloud native computing. But it’s also very liberating and rewarding. You can develop new patterns and practices where the limitations of hardware dependent models, geography, and size no longer exist. This approach to technology can make cloud application developers more agile and efficient, even as it reduces deployment costs and increases independence from cloud service providers. Oracle is one of the few cloud vendors to also have a long history of providing enterprise software. Wearing both software developer and cloud service provider hats, we understand the complexity of transforming on-premises applications into cloud native applications. Removing that complexity for customers is a guiding tenet at Oracle.

  • Red Hat extends certification expiration dates and expands remote offerings

    In 2020, remote exams became the standard experience for certificate-hopefuls across many fields. Red Hat worked quickly to release four of our most in-demand exams in this format. We have seen remote exams grow rapidly in popularity with our candidates. As we roll into 2021, our list has expanded with even more offerings. Now, you can take advantage of more remote exams to validate your skills in Red Hat’s most in-demand technologies, including OpenShift, Ansible, Containers and Kubernetes, and more.

  • CloudLinux Expands Its Extended Lifecycle Support Services to Cover More End-of-Life Linux Distributions
  • CloudLinux to Offer Lifecycle Support Services for Expired Linux Distributions

    CloudLinux on Monday announced the expansion of its affordable Extended Lifecycle Support (ELS) services for Linux distributions, by providing its own updates and security patches for several years after expiration of the products’ end-of-life date.

Sharing and Free Software Leftovers

  • 10 fabulous free apps for working with audio, video, and images

    You want Photoshop-like features without the Photoshop-like price tag, and, for that, there’s GIMP. Free, open source, and available for Windows, Mac, and Linux, this powerful tool can be used by graphic designers, photographers, and illustrators alike.

  • Gnuastro 0.14 released
    Dear all,
    
    I am happy to announce the availability of Gnuastro 0.14. For the full
    list of added and changed/improved features, see the excerpt of the
    NEWS file for this release in [1] below.
    
    Gnuastro is an official GNU package, consisting of various
    command-line programs and library functions for the manipulation and
    analysis of (astronomical) data. All the programs share the same basic
    command-line user interface (modeled on GNU Coreutils). For the full
    list of Gnuastro's library, programs, and a comprehensive general
    tutorial (recommended place to start using Gnuastro), please see the
    links below respectively:
    
    https://www.gnu.org/s/gnuastro/manual/html_node/Gnuastro-library.html
    https://www.gnu.org/s/gnuastro/manual/html_node/Gnuastro-programs-list.html
    https://www.gnu.org/s/gnuastro/manual/html_node/General-program-usage-tutorial.html
    
    The most prominent new feature may be the new Query program (called
    with 'astquery'). It allows you to directly query many large
    astronomical data centers (currently VizieR, NED, ESA and ASTRON) and
    only download your selected columns/rows. For example with the command
    below you can download the RA, Dec and Parallax of all stars in the
    Gaia eDR3 dataset (from VizieR) that overlap with your
    'image.fits'. You just have to change '--dataset' to access any of the
    +20,000 datasets within VizieR for example! You can also search in the
    dataset metadata from the command-line, and much more.
    
      astquery vizier --dataset=gaiaedr3 --overlapwith=image.fits \
               --column=RAJ2000,DEJ2000,Plx
    
    See the new "Query" section in the Gnuastro book for more:
    
    https://www.gnu.org/software/gnuastro/manual/html_node/Query.html
    
    Here is the compressed source and the GPG detached signature for this
    release. To uncompress Lzip tarballs, see [2]. To check the validity
    of the tarballs using the GPG detached signature (*.sig) see [3]:
    
      https://ftp.gnu.org/gnu/gnuastro/gnuastro-0.14.tar.lz    (3.6MB)
      https://ftp.gnu.org/gnu/gnuastro/gnuastro-0.14.tar.gz    (5.6MB)
      https://ftp.gnu.org/gnu/gnuastro/gnuastro-0.14.tar.gz.sig (833B)
      https://ftp.gnu.org/gnu/gnuastro/gnuastro-0.14.tar.lz.sig (833B)
    
    Here are the MD5 and SHA1 checksums:
    
    30d77e2ad1c03d4946d06e4062252969  gnuastro-0.14.tar.gz
    f3ddbc4b5763ec2742f9080d42b69ed3  gnuastro-0.14.tar.lz
    cfbcd4b9ae1c5c648c5dc266d638659f0117c816  gnuastro-0.14.tar.gz
    4e4c6b678095d2838f77b2faae584ea51df2d33c  gnuastro-0.14.tar.lz
    
    I am very grateful to (in alphabetic order) Pedram Ashofteh Ardakani,
    Thérèse Godefroy, Raúl Infante-Sainz, Sachin Kumar Singh, Samane Raji
    and Zahra Sharbaf for directly contributing to the source of Gnuastro
    since the last alpha-release. It is great that in this release we have
    an equal gender balance in the contributors. I sincerely hope this can
    continue in the next release :-).
    
    I am also very grateful to (in alphabetic order) Antonio Diaz Diaz,
    Paul Eggert, Andrés García-Serra Romero, Thérèse Godefroy, Bruno
    Haible, Martin Kuemmel, Javier Licandro, Alireza Molaeinezhad, Javier
    Moldon, Sebastian Luna Valero, Samane Raji, Alberto Madrigal, Carlos
    Morales Socorro, Francois Ochsenbein, Joanna Sakowska, Zahra Sharbaf,
    Sachin Kumar Singh, Ignacio Trujillo and Xiuqin Wu for their very
    useful comments, suggestions and bug fixes that have now been
    implemented in Gnuastro since the last alpha-release.
    
    If any of Gnuastro's programs or libraries are useful in your work,
    please cite _and_ acknowledge them. For citation and acknowledgment
    guidelines, run the relevant programs with a `--cite' option (it can
    be different for different programs, so run it for all the programs
    you use). Citations _and_ acknowledgments are vital for the continued
    work on Gnuastro, so please don't forget to support us by doing so.
    
    This tarball was bootstrapped (created) with the tools below. Note
    that you don't need these to build Gnuastro from the tarball; these
    are the tools that were used to make the tarball itself. They are
    only mentioned here so that this tarball can be reproduced/recreated
    later.
      Texinfo 6.7
      Autoconf 2.70
      Automake 1.16.2
      Help2man 1.47.17
      ImageMagick 7.0.10-59
      Gnulib v0.1-4396-g3b732e789
      Autoconf archives v2019.01.06-98-gefa6f20
    
    The dependencies to build Gnuastro from this tarball on your system
    are described here:
      https://www.gnu.org/s/gnuastro/manual/html_node/Dependencies.html
    
    Best wishes,
    Mohammad
    
  • LibreOffice Community Member Monday: Felipe Viggiano and Zhenghua Fong

    In the future, I would like to start contributing more with other teams, and with TDF, in order to help increase LibreOffice’s success. In my opinion, LibreOffice needs to be better known – we have a great free office solution that meets the majority of the requirements of the general public, but, at least in Brazil, many people are not aware of this!

  • ISA2 Launches New Open Source Bug Bounties

    Awards of up to EUR 5000 are available for finding security vulnerabilities in Element, Moodle, and Zimbra, open source solutions used by public services across the European Union. Researchers earn a 20% bonus if they also provide a code fix for the bugs they discover.

  • Amazon Creates ALv2-Licensed Fork of Elasticsearch

    Amazon states that their forks of Elasticsearch and Kibana will be based on the latest ALv2-licensed codebases, version 7.10. “We will publish new GitHub repositories in the next few weeks. In time, both will be included in the existing Open Distro distributions, replacing the ALv2 builds provided by Elastic. We’re in this for the long haul, and will work in a way that fosters healthy and sustainable open source practices—including implementing shared project governance with a community of contributors,” the announcement says.

  • Elasticsearch and Kibana are now business risks

    In a play to convert users of its open source projects into paying customers, Elastic today announced that it is changing the license of both Elasticsearch and Kibana from the open source Apache v2 license to the Server Side Public License (SSPL). If your organisation uses the open source versions of either Elasticsearch or Kibana in its products or projects, it is now at risk of being forced to release its intellectual property under terms dictated by another company.

  • Wikipedia Turns Twenty

    If there is a modern equivalent to Encyclopédie for cultural impact, scale of content, and controversy, it’s surely Wikipedia, the free open-source online encyclopedia run by the not-for-profit Wikimedia Foundation. Started by entrepreneurs Jimmy Wales and Larry Sanger on January 15th, 2001, it has since grown to become one of the world’s top 15 websites with a vast database of 55 million articles in 317 languages, as well as a family of related projects covering everything from travel guides to recipes. Beloved of geeks, friend to lazy students and journalists alike, and bane to procrastinators, it celebrates its 20th birthday this month.

    It’s hard to overstate just how much information is on Wikipedia. You can instantly find the average July temperature in Lisbon, the difference between an ale and a lager, the historical background to the Fifth Amendment of the United States Constitution, or the full list of 10 ways a batsman can be out in cricket. The illustrated article on aguaxima includes far more information than Diderot’s effort, and readers can find a far more accurate article on religion in Sweden. These articles all link to their sources, so a reader can do their own fact-checking.

    There is one more crucial difference between Encyclopédie and Wikipedia, though. Encyclopédie’s subscribers needed to pay 280 livres for it, far beyond the wages of an ordinary person. But anyone who can afford a device with an Internet connection can access Wikipedia wherever they go. This accessibility was game-changing.

Programming Leftovers

  • An Introduction to Bash Brace Expansion

    The Bourne Again Shell (Bash) has a lot of great features that it borrows from other shells and even from some programming languages. It was created in the late 1980s for GNU, the predecessor to Linux, in response to shortcomings in the shells then available on Berkeley Software Distribution (BSD) systems. Bash offers numerous built-in capabilities, including in-line scripting features such as brace expansion, which we are going to examine today.
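
    A quick, minimal taste of what brace expansion does (the file and directory names below are invented for illustration):

      # Generate a numeric sequence without a loop:
      $ echo file{1..3}.txt
      file1.txt file2.txt file3.txt

      # Expand a comma-separated list (hypothetical project layout):
      $ mkdir -p project/{src,docs,tests}

      # Empty-string trick: back up a file without typing its name twice.
      $ cp config.yaml{,.bak}    # expands to: cp config.yaml config.yaml.bak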

  • Rakudo Weekly News: 2021.04 Grant Reporting
  • The Trouble with Reference Counting

    Perl uses a simple form of garbage collection (GC) called reference counting. Every variable created by a Perl program has a refcnt (reference count) associated with it. If the program creates a reference to the variable, Perl increments its refcnt. Whenever Perl exits a block, it reclaims any variables that belong to the block scope. If any of those are references, the refcnt of each referenced value is decremented, and any value left with no remaining references is reclaimed as well.
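
    The classic trouble the title hints at is a reference cycle, which refcounting alone can never reclaim. A minimal Perl sketch (the variable names are invented; Scalar::Util's weaken is the usual remedy):

      #!/usr/bin/perl
      use strict;
      use warnings;
      use Scalar::Util qw(weaken);

      {
          my $node = { name => 'leaky' };
          $node->{self} = $node;    # cycle: the hash's refcnt never drops
      }                             # to 0, so refcounting never frees it

      {
          my $node = { name => 'fine' };
          $node->{self} = $node;
          weaken($node->{self});    # a weak reference doesn't bump the
      }                             # refcnt, so the hash is freed here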

  • Dustin J. Mitchell: The Horrors of Partial-Identity Encodings -- or -- URL Encoding Is Hard

    URL encoding is a pretty simple thing, and has been around forever. Yet, it is associated with a significant fraction of bugs in web frameworks, libraries, and applications. Why is that? Is there a larger lesson here?
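
    One hedged illustration of how such bugs arise (Node.js; the values are invented): encode or decode a string one time too many, and data turns into structure.

      const name  = 'a&b=c';                    // raw user input
      const once  = encodeURIComponent(name);   // 'a%26b%3Dc'     -- correct
      const twice = encodeURIComponent(once);   // 'a%2526b%253Dc' -- double-encoded

      console.log(decodeURIComponent(twice));   // 'a%26b%3Dc' -- still encoded!
      // A layer that "helpfully" decodes once more now sees '&' and '='
      // as query-string structure rather than as data.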

  • Enrico Zini: nspawn-runner: support for image selection

    .gitlab-ci.yml supports an 'image' keyword for selecting the environment in which the script gets run. The documentation says it is "Used to specify a Docker image to use for the job", but it's clearly a bug in the documentation, because we can do it with nspawn-runner, too. It turns out that most of the environment variables available to CI runs are also available to custom runner scripts. In this case, the value passed as image can be found as $CUSTOM_ENV_CI_JOB_IMAGE in the custom runner script's environment.
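
    A minimal sketch of how a custom runner script might use that variable (the fallback image name and directory layout are assumptions, not nspawn-runner's actual code):

      #!/bin/sh
      # GitLab's custom executor exports each CI variable with a
      # CUSTOM_ENV_ prefix, so the job's 'image:' value shows up as:
      IMAGE="${CUSTOM_ENV_CI_JOB_IMAGE:-default}"

      # Boot the matching container tree and run the job command in it:
      exec systemd-nspawn \
           --directory="/var/lib/nspawn-runner/$IMAGE" \
           "$@"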

  • Introduction to Making GraphQL APIs and Apps in Node.js – Linux Hint

    The communication and data transfer between the front end and back end of any application occur through APIs (Application Programming Interfaces). There are many different types of APIs used to communicate between front-end and back-end applications, such as RESTful APIs, SOAP APIs, and GraphQL APIs. GraphQL is a relatively new technology, and fetching data from a database through a GraphQL API is often faster than through a REST API: the client controls exactly which fields it fetches instead of receiving every detail, so less data is queried and transferred.
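
    To make that concrete, here is a minimal sketch using the graphql-js reference implementation (npm package 'graphql'; the v16-style call signature and all schema and field names are assumptions for illustration):

      const { graphql, buildSchema } = require('graphql');

      // The schema declares exactly what clients may ask for.
      const schema = buildSchema(`
        type Query { user(id: ID!): User }
        type User { id: ID! name: String email: String }
      `);

      // Resolvers supply the data; a real app would query a database here.
      const rootValue = {
        user: ({ id }) => ({ id, name: 'Ada', email: 'ada@example.com' }),
      };

      // The client asks only for 'name', so 'email' is never sent over the wire.
      graphql({ schema, source: '{ user(id: "1") { name } }', rootValue })
        .then((res) => console.log(JSON.stringify(res.data)));
      // -> {"user":{"name":"Ada"}}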

  • Issue with phpMyAdmin and PHP: "Warning in ./libraries/sql.lib.php#613 count(): Parameter must be an array or an object that implements Countable"

    Today, I installed PHP 7.3 and phpMyAdmin on an Ubuntu 18.04 LTS system, using MariaDB as the database server running on the same instance. When I tried to access data in tables using phpMyAdmin, I got the following error message on screen.
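
    The warning itself is stock PHP 7.2+ behavior: count() now complains when handed null or anything non-countable. A minimal sketch of the symptom and the usual null-coalescing guard (the variable name is invented; in phpMyAdmin the offending count() call sits at sql.lib.php line 613):

      <?php
      $select_expr = null;              // a value that ends up being counted

      // PHP 7.2+ emits: Warning: count(): Parameter must be an array or
      // an object that implements Countable
      // $n = count($select_expr);

      $n = count($select_expr ?? []);   // guard: count an empty array instead
      echo $n;                          // prints 0, without the warning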

  • C++ Access Specifiers – Linux Hint

    In C++, a class is a set of variables and functions that have been configured to work together. When the variables of the class are given values, an object is obtained. An object has the same variables and functions as its class, but this time, the variables have values. Many objects can be created from one class, and one object differs from another according to the different set of values assigned to its variables. Even if two different objects have the same values for their variables, they are still different entities, identified by different names in the program. Creating an object from a class is called instantiating the object.

    The variables of an object and its corresponding class are called data members. The functions of an object and its corresponding class are called member functions. Data members and member functions together are called members. The word “access” means to read or change the value of a variable, and it also means to use a function.

    C++ access specifiers are the keywords “private,” “protected,” and “public.” They decide whether a member can access other members of its class, whether a function or operator outside the class (and not belonging to it) can access any member of the class, and whether a member of a derived (child) class can access a member of a parent class. Basic knowledge of C++ is required to understand this article and to test the code provided.
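
    A minimal sketch of the three specifiers in action (class and member names invented for illustration):

      #include <iostream>
      #include <string>

      class Account {
      private:
          double balance = 0.0;             // readable/writable only inside Account
      protected:
          std::string owner = "anon";       // also visible to derived classes
      public:
          void deposit(double amount) {     // callable from anywhere
              if (amount > 0) balance += amount;
          }
          double getBalance() const { return balance; }
      };

      class SavingsAccount : public Account {
      public:
          std::string ownerName() const { return owner; }  // protected: OK here
      };

      int main() {
          Account a;
          a.deposit(50.0);                  // public member: fine
          // a.balance = 1e6;               // error: 'balance' is private
          std::cout << a.getBalance() << "\n";
      }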

  • Compiling Code in Parallel using Make – Linux Hint

    Whoever you ask how to build software properly will come up with Make as one of the answers. On GNU/Linux systems, GNU Make [1] is the open-source version of the original Make, released more than 40 years ago, in 1976. Make works with a Makefile, a structured plain-text file of that name that is best described as the construction manual for the software building process. The Makefile contains a number of labels (called targets) and the specific instructions that need to be executed to build each target.

    Simply speaking, Make is a build tool. It follows the recipe of tasks from the Makefile and allows you to repeat the steps in an automated fashion rather than typing them into a terminal (and probably making mistakes while typing). Listing 1 shows an example Makefile with the two targets “e1” and “e2,” as well as the two special targets “all” and “clean.” Running “make e1” executes the instructions for target “e1” and creates the empty file one. Running “make e2” does the same for target “e2” and creates the empty file two. The call “make all” executes the instructions for target e1 first and e2 next. To remove the previously created files one and two, simply run “make clean.”
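
    Listing 1 itself is not reproduced here, so the following is a guess at its shape based on the description above (note that recipe lines must start with a tab character):

      all: e1 e2

      e1:
              touch one

      e2:
              touch two

      clean:
              rm -f one two

      .PHONY: all clean

    With this file, "make all" builds e1 and then e2 one after the other, while "make -j2 all" lets Make build both targets in parallel, which is the jumping-off point for the rest of the article.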

  • Zeal – simple offline documentation browser

    Zeal is billed as a simple offline documentation browser. It offers easy access to a huge database of documentation, API manuals, and code snippets. The main purpose of the software is to enable you to have reference documentation at your fingertips. Let’s see how it fares.