Server: Kubeflow, Sysdig, Mesosphere and More

Filed under
  • Issue #2019.08.05 – Kubeflow 0.6 Release

    Kubeflow v0.6: support for artifact tracking, data versioning & multi-user – version 0.6 includes several enterprise features to support multiple users and better model training pipelines. For multiple users, Kubeflow v0.6 provides a flexible architecture for user isolation and single sign-on. For data, enhancements have been added to Kubeflow Pipelines and Jupyter. In total, over 250 merged pull requests!

  • State Of Cloud Native Landscape: Sysdig Founder Loris Degioanni

    In this Takeaway segment, Loris Degioanni, founder and CTO of Sysdig, talks about the evolution and state of the cloud-native world.

  • Kubernetes Orchestrates Name Change For Mesosphere, It’s Called D2IQ Now

    Mesosphere, one of the earliest players to offer a container orchestration platform, is re-tuning its focus with the name change. The company is now called D2iQ.

  • Mesosphere Becomes D2IQ, Moves Into Kubernetes, Big Data

    The jargonized new name means "Day 2 IQ," with Day 2 being a DevOps term that refers to the operations part of the software development lifecycle and with IQ equating to "smart."

  • My Favorite Infrastructure

    Working at a startup has many pros and cons, but one of the main benefits over a traditional established company is that a startup often gives you an opportunity to build a completely new infrastructure from the ground up. When you work on a new project at an established company, you typically have to account for legacy systems and design choices that were made for you, often before you even got to the company. But at a startup, you often are presented with a truly blank slate: no pre-existing infrastructure and no existing design choices to factor in.

    Brand-new, from-scratch infrastructure is a particularly appealing prospect if you are at a systems architect level. One of the distinctions between a senior-level systems administrator and architect level is that you have been operating at a senior level long enough that you have managed a number of different high-level projects personally and have seen which approaches work and which approaches don't. When you are at this level, it's very exciting to be able to build a brand-new infrastructure from scratch according to all of the lessons you've learned from past efforts without having to support any legacy infrastructure.

Server/IBM/Red Hat

  • Talking High Bandwidth With IBM’s Power10 Architect

    As the lead engineer on the Power10 processor, Bill Starke already knows what most of us have to guess about Big Blue’s next iteration in a processor family that has been in the enterprise market in one form or another for nearly three decades. Starke knows the enterprise grade variants of the Power architecture designed by IBM about as well as anyone on Earth does, and is acutely aware of the broad and deep set of customer needs that IBM always has to address with each successive Power chip generation.

    It seems to be getting more difficult over time, not less so, as the diversifying needs of customers run up against the physical reality of the Moore’s Law process shrink wall and the economics of designing and manufacturing server processors in the second and soon to be the third decade of the 21st century. But all of these challenges are what get hardware and software engineers out of bed in the morning. Starke started out at IBM in 1990 as a mainframe performance analysis engineer in the Poughkeepsie, New York lab and made the jump to the Austin lab, where development for the AIX variant of Unix and the Power processors that run it is centered, first focusing on the architecture and technology of future systems, then on Power chip performance, and then shifting to become one of the Power chip architects a decade ago. Now, Starke has steered the development of the Power10 chip after being heavily involved in Power9, is well on the way to mapping out what Power11 might look like, and way off in the distance has some ideas about what Power12 might hold.

  • IBM: Better Cash Flows Together

    On Friday, International Business Machines (IBM) finally provided detailed financial projections on the Red Hat merger. The company had always provided an indication that the deal was immediately cash flow accretive while not EPS accretive until the end of year two. The headlines spooked investors, but the details should bring investors back with a smile.

  • Using Metrics to Guide Container Adoption, Part I

    Earlier this year, I wrote about a new approach my team is pursuing to inform our Container Adoption Program. We are using software delivery metrics to help keep organizations aligned and focused, even when those organizations are engaging in multiple workstreams spanning infrastructure, release management, and application onboarding. I talked about starting with a set of four core metrics identified in Accelerate: Building and Scaling High Performance Technology Organizations (by Nicole Forsgren, Jez Humble, and Gene Kim) that act as drivers of both organizational and noncommercial performance.

    Let’s start to highlight how those metrics can inform an adoption program at the implementation team level. The four metrics are: Lead Time for Change, Deployment Frequency, Mean Time to Recovery, and Change Failure Rate. Starting with Lead Time and Deployment Frequency, here are some suggestions for activities that each metric can guide in initiatives to adopt containers, with special thanks to Eric Sauer, Prakriti Verma, Simon Bailleux, and the rest of the Metrics-Driven Transformation working group at Red Hat.

  • OPA Gatekeeper: Policy and Governance for Kubernetes

    The Open Policy Agent Gatekeeper project can be leveraged to help enforce policies and strengthen governance in your Kubernetes environment. In this post, we will walk through the goals, history, and current state of the project.

Hardware/Servers: SUSE Enterprise Storage, SysAdmin Courses, Industrial Computer, Storage Accelerator Card, vGPUs in OpenStack Stein

  • The Data Center is Changing, so is SUSE Enterprise Storage: Say Hello to Version 6

    Data growth is explosive – a challenge for all, regardless of business sector. There’s the vast quantity of data from mobiles – all that data on your phone (and everyone else’s). There’s the data from the IoT – with just about everything having a sensor these days – the much-celebrated fridge that orders groceries or adds to your shopping list has actually become a reality. If you’re in the medical sector, there’s the epic growth in X-ray and MRI data – with each new wave of improvements in scans bringing new data, and new processing requirements. Ordinary businesses selling on and offline have ever-increasing volumes of transactional history – things bought, and things nearly bought. Then of course there is video – reams of footage from stores, or the emergency services, or even field engineers – which all needs a home. And let’s not forget about email… which just so happens to have all those other sources of data in attachments. We’ve got lots of data, we’re going to have loads more, and a lot of it is unstructured.

  • Linux Academy Monthly Update for August

    During July, we started creating transcripts for many of our course videos. During August, we are working on adding even more transcripts to our courses! These transcripts will make it even easier to follow along with the course authors, and allow you to pause and re-read a section instead of having to rewind.

  • Fanless industrial Apollo Lake computer has dual mini-PCIe slots

    Ibase’s compact, rugged “CSB200-818” industrial computer is equipped with an Apollo Lake SoC, removable SATA, 2x GbE, 4x USB 3.0, 4x COM, 2x mini-PCIe, and HDMI.

    Ibase has launched a fanless, 172 x 111.6 x 53mm computer built around its 3.5-inch IB818 SBC aimed at industrial automation and intelligent transportation applications. No OS support was listed for the Intel Apollo Lake based CSB200-818, but the IB818 supports Windows and Ubuntu Linux.

  • Xilinx Expands Alveo Portfolio with Industry's First Adaptable Compute, Network and Storage Accelerator Card Built for Any Server, Any Cloud
  • OpenStack Stein feature highlights: edge deployments, storage and single node deployments

    Recently, we looked at how OpenStack’s use of vGPUs enables new technology use cases such as time series forecasting and autonomous vehicle image recognition. Now let’s examine the deployment options that can enable those applications. 

    Red Hat and the OpenStack community recognize that to serve the needs of today’s providers of telecommunications services, IoT, retail apps and other workloads, centralized infrastructure alone may not be a feasible approach. Instead, applications and their underlying infrastructure likely need to move out to the edge to be as close to the client or data source as possible in order to deliver processing and insights in near real time.

    Let’s look at some of the new capabilities that are available in OpenStack's Stein release or may come to future versions of Red Hat OpenStack.

Server: Kubernetes, SUSE Cloud Application Platform, Mesosphere/D2IQ and Kubernetes Adoption Drivers

  • Charmed Kubernetes update for upstream API server vulnerability

    An upstream Kubernetes vulnerability (CVE-2019-11247) has been identified where the API server mistakenly allows access to a cluster-scoped custom resource if the request is made as if the resource were namespaced. Authorisations for the resource accessed in this manner are enforced using roles and role bindings within the namespace. This means that a user with access only to a resource in one namespace could create, view, update or delete the cluster-scoped resource (according to their namespace role privileges).

    Charmed Kubernetes has already been patched to mitigate against this vulnerability. Patched builds of the 1.13.8, 1.14.4 and 1.15.1 kube-apiserver snap have also been published.

    The vulnerability, of medium severity, has also been patched in the following upstream versions of Kubernetes – 1.13.9, 1.14.5 and 1.15.2. Users are encouraged to update to one of these versions now.
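    The core of the flaw is the difference between cluster-scoped and namespaced API paths. As a rough illustration (the `widgets` resource and `example.com` group here are hypothetical, not part of any real cluster), a pre-patch API server would answer a cluster-scoped custom resource requested through a namespaced-style URL, applying only namespace-level RBAC:

```python
# Sketch of the two request styles involved in CVE-2019-11247.
# "widgets"/"example.com" is a hypothetical custom resource, for illustration only.

def cluster_scoped_path(group: str, version: str, plural: str, name: str) -> str:
    """The correct path for a cluster-scoped custom resource."""
    return f"/apis/{group}/{version}/{plural}/{name}"

def namespaced_style_path(group: str, version: str, plural: str,
                          namespace: str, name: str) -> str:
    """A namespaced-style path. Pre-patch API servers wrongly served
    cluster-scoped resources through this form, so namespace-level
    role bindings were applied to a cluster-level object."""
    return f"/apis/{group}/{version}/namespaces/{namespace}/{plural}/{name}"

print(cluster_scoped_path("example.com", "v1", "widgets", "w1"))
# /apis/example.com/v1/widgets/w1
print(namespaced_style_path("example.com", "v1", "widgets", "dev", "w1"))
# /apis/example.com/v1/namespaces/dev/widgets/w1
```

    The patched versions reject the second form for cluster-scoped resources, so only cluster-level role bindings grant access.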

  • Why you might want to build your own custom buildpack (And how to!)

    A PaaS can be viewed as a process that takes different streams of data and combines them into a working application. For SUSE Cloud Application Platform, we take the application code, buildpack, environment variables, and service descriptions, and output a configured and running container. Each of these pieces can come from a different person or team with a different focus, creating a quickly iterable but still secure process.

    In this list, the buildpack is likely the least understood. Simply put, it is the part of the build system that takes the code provided by the developers and builds it into a full application ready to run.

    There are several buildpacks that come standard as part of the default installation of SUSE Cloud Application Platform. That said, one of my favorite “features” is the ability to customize the platform to fit your needs while still coming with sane defaults. It’s opinionated in a way that you can change its mind!
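    The assembly the article describes can be modeled in miniature. This toy sketch is not the actual SUSE Cloud Application Platform API – every name in it is illustrative – but it shows the idea of independent inputs, each owned by a different team, combined into one runnable unit:

```python
# Toy model of PaaS staging: code, buildpack, environment variables and
# service bindings are combined into one container description.
# All names are illustrative, not a real platform API.

def stage(app_code: str, buildpack: str, env: dict, services: list) -> dict:
    """Combine the inputs each team owns into a 'container' description."""
    return {
        "image": f"{buildpack}:{app_code}",   # the buildpack turns code into an image
        "env": dict(env),                     # operator-supplied configuration
        "bindings": list(services),           # service credentials injected at run time
    }

container = stage("my-app", "python-buildpack", {"PORT": "8080"}, ["postgres"])
print(container["image"])  # python-buildpack:my-app
```

    Swapping in a custom buildpack changes only the first input; the rest of the pipeline stays untouched, which is what makes the process quickly iterable.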

  • Mesosphere changes name to D2IQ, shifts focus to Kubernetes, cloud native

    Mesosphere was born as the commercial face of the open-source Mesos project. It was surely a clever solution to make virtual machines run much more efficiently, but times change and companies change. Today the company announced it was changing its name to Day2IQ, or D2IQ for short, and fixing its sights on Kubernetes and cloud native, which have grown quickly in the years since Mesos appeared on the scene.

    D2IQ CEO Mike Fey says that the name reflects the company’s new approach. Instead of focusing entirely on the Mesos project, it wants to concentrate on helping more mature organizations adopt cloud native technologies.

  • Survey Identifies Myriad Kubernetes Adoption Drivers

    One of the assumptions made about key drivers of Kubernetes adoption is that organizations are trying to accelerate the rate at which software is built by embracing microservices based on containers. However, a survey of 130 attendees of three recent container conferences published by Replex, a provider of governance and cost management tools for Kubernetes, finds the top two drivers of Kubernetes adoption are improving scalability (61%) and resource utilization (46%), followed by shortening development and deployment times (42%) and a desire to adopt a cloud-native stack (37%).

    Only 24% identified avoiding lock-in as a reason for adopting Kubernetes, which suggests portability is not yet a major factor in driving Kubernetes adoption.

    The surveys were conducted at the KubeCon Europe conference in Barcelona; a VelocityConf event in San Jose, California; and ContainerDays Hamburg in the second quarter of 2019. The survey finds 65% of respondents indicated that they are using Kubernetes in production. Nearly 40% of respondents not yet in production indicated they are planning on going to production within a year, the survey finds.

Server: Cloud Native Computing Foundation (CNCF), IBM/Red Hat and Hyperledger

  • Kubernetes shows promise for managing martech compatibility across platforms

    The system was originally developed by Google, given to the Cloud Native Computing Foundation (CNCF), and is now becoming the containerizing standard for most cloud-based business apps. The goal of the platform is to allow more concise, customized management of assets and services across a multitude of environments. In an increasingly globalized business model, this ability is imperative to remaining viable.

    The architecture is constructed in layers, with the main server acting as the master among a cluster of machines that make up the infrastructure of that particular network. Each machine within this grouping is assigned a specific function, with the master server acting as the main control and switchboard. It receives requests from end-users, exposes the API, performs health checks on nodes within the network of servers, and schedules tasks to whichever server is designated to complete them.

  • Open source stories: Farming for the future film

    Hear from Dorn Cox, Melanie Shimano and Peter Webb in this video of their discussion at the “Farming for the Future” film premiere at Red Hat Summit 2019.

  • With Zowe, open source and DevOps are democratizing the mainframe computer

    The venerable mainframe computer is experiencing a surprising but well-deserved resurgence, as the organizations that depend on these systems realize how important they are for digital initiatives and for hybrid information technology strategies in general.

    IBM Corp. – the sole remaining purveyor of mainframe systems – continues to invest in the platform, and Big Blue’s latest iteration, the z14 (pictured), is a masterpiece of scalability, reliability and security, as is its core operating system, z/OS.

  • Hyperledger Adds 11 New Members

    Hyperledger, an open source collaborative effort created to advance cross-industry blockchain technologies, today welcomed 11 new members to its expanding enterprise blockchain community. The announcement comes as Hyperledger members from around the world are meeting in Tokyo, Japan, at the annual Hyperledger Member Summit, a two-day event dedicated to community-driven planning, training and networking. 

Databases and Data With FOSS (Ish)

  • Redgate acquires (but commits to widening) open source Flyway

    Database development company Redgate has been to the shops.

    The Cambridge, UK-based firm has bought eggs, fresh bloomers (no, the bread kind) and, direct from the meat counter, a US$10 million portion (i.e. all of it) of cross-platform database migrations tool, Flyway.

    Redgate’s mission in life is to enable the database to be included in DevOps, whatever database its customers are working on.

  • NuoDB 4.0 beats drum for cloud-native cloud-agnosticism

    Distributed SQL database company NuoDB has reached its version 4.0 iteration… and aligned further to core open source cloud platform technologies.

  • Neo4j charts tighter grip on graph data-at-rest

    A graph database is a database designed to treat the relationships between data as equally important to the data itself — it is intended to hold data without constricting it to a pre-defined model… instead, the data is stored showing how each individual entity connects with or is related to others.
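    The relationship-first model described above can be sketched with a minimal adjacency structure. This is an illustration of the concept only, not Neo4j’s actual storage format or query API:

```python
# Minimal sketch of the graph data model: relationships are stored as
# first-class records rather than reconstructed through table joins.

graph = {"nodes": {}, "edges": []}

def add_node(node_id, **props):
    """Store an entity with free-form properties (no fixed schema)."""
    graph["nodes"][node_id] = props

def relate(src, rel, dst):
    """Store a typed relationship as its own record."""
    graph["edges"].append((src, rel, dst))

def neighbors(node_id, rel):
    """Traverse outgoing relationships of one type from a node."""
    return [d for s, r, d in graph["edges"] if s == node_id and r == rel]

add_node("alice", kind="Person")
add_node("acme", kind="Company")
relate("alice", "WORKS_AT", "acme")

print(neighbors("alice", "WORKS_AT"))  # ['acme']
```

    Because each edge is a stored fact, traversal is a direct lookup; a relational database would derive the same answer by joining tables at query time.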

  • Open source databases are not just about the licensing dollar

    The majority of organizations now use technology in ways that are quite different from ten years ago. The concept of using pre-built solutions or platforms hosted remotely is nothing new: mainframes and thin terminals dominated the enterprise from the 1970s until the arrival of the trusty desktop PC.

    The cloud has also shifted our notions of how and why we pay for technology solutions. Commercial and proprietary platforms designed to be installed on-premises were, just a few years ago, accompanied by a hefty up-front bill for licenses. Today, paying per seat, per gigabyte throughput, or even per processor cycle, is becoming standard.

  • Cloudera Update: Open source route seeks to keep big data alive

    Cloudera has had a busy 2019. The vendor started off the year by merging with its primary rival Hortonworks to create a new Hadoop big data juggernaut. However, in the ensuing months, the newly merged company has faced challenges as revenue has come under pressure and the Hadoop market overall has shown signs of weakness. Against that backdrop, Cloudera said July 10 that it would be changing its licensing model, taking a fully open source approach. The Cloudera open source route is a new strategy for the vendor. In the past, Cloudera had supported and contributed to open source projects as part of the larger Hadoop ecosystem but had kept its high-end product portfolio under commercial licenses.

Servers: Capsule8, SUSE and More

  • Capsule8 Announces New Investigations Capability
  • Capsule8 'Investigations' To Provide More Proactive Prevention for Linux-Based Environments

    Brooklyn, N.Y.-based Capsule8 today announced new "full endpoint detection and response (EDR)-like investigations functionality for cloud workloads"...

  • A Pen Plotter Powered by Artificial Intelligence

    As you can see, to process a picture of this size, which contains only one short mathematical question, the time consumed is around 11 minutes. It is very likely that the time consumed for the entire process could be reduced by 50 percent if the code were changed to send the text detection job to the cloud instead of to the native Raspberry Pi 3, or if you used a Raspberry Pi 3 with Neural Compute Stick(s) to accelerate the inference. But this assumption would still have to be proven.

  • From 30 to 230 docker containers per host

    In the beginning there were virtual machines running with 8 vCPUs and 60GB of RAM. They started to serve around 30 containers per VM. Later on we managed to squeeze around 50 containers per VM.

    Initial orchestration was done with swarm, later on we moved to nomad. Access was initially fronted by nginx with consul-template generating the config. When it did not scale anymore nginx was replaced by Traefik. Service discovery is managed by consul. Log shipping was initially handled by logspout in a container, later on we switched to filebeat. Log transformation is handled by logstash. All of this is running on Debian GNU/Linux with docker-ce.

    At some point it did not make sense anymore to use VMs. We've no state inside the containerized applications anyway. So we decided to move to dedicated hardware for our production setup. We settled on HPe DL360G10 servers with 24 physical cores and 128GB of RAM.

Red Hat/IBM: EPEL, Ceph, OpenShift and Call for Code Challenge

  • Kevin Fenzi: epel8-playground

    We have been working away at getting epel8 ready (short status: we have builds and are building fedpkg and bodhi and all the other tools maintainers need to deal with packages and hope to have some composes next week), and I would like to introduce a new thing we are trying with epel8: The epel8-playground.

    epel8-playground is another branch for all epel8 packages. By default, when a package is set up for epel8, both branches are made, and when maintainers do builds in the epel8 branch, fedpkg will build for _both_ epel8 and epel8-playground. epel8 will use the bodhi updates system with an updates-testing and stable repo. epel8-playground will compose every night and use only one repo.

  • Red Hat OpenStack Platform with Red Hat Ceph Storage: MySQL Database Performance on Ceph RBD

    In Part 1 of this series, we detailed the hardware and software architecture of our testing lab, as well as benchmarking methodology and Ceph cluster baseline performance. In this post, we’ll take our benchmarking to the next level by drilling down into the performance evaluation of MySQL database workloads running on top of Red Hat OpenStack Platform backed by persistent block storage using Red Hat Ceph Storage.

  • OpenShift Persistent Storage with a Spring Boot Example

    One of the great things about Red Hat OpenShift is the ability to develop both cloud-native and traditional applications. Oftentimes, when thinking about traditional applications, the first thing that comes to mind is the ability to store things on the file system. This could be media, metadata, or any type of content that your application relies on but isn’t stored in a database or other system.

    To illustrate the concept of persistent storage (i.e. storage that will persist even when a container is stopped or recreated), I created a sample application for tracking my electronic books that I have in PDF format. The library of PDF files can be stored on the file system, and the application relies on this media directory to present the titles to the user. The application is written in Java using the Spring Boot framework and scans the media directory for PDF files. Once a suitable title is found, the application generates a thumbnail image of the book and also determines how many pages it contains. This can be seen in the following image:
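    The application itself is Java with Spring Boot, but the directory-scanning step it describes is easy to sketch. This Python illustration only mirrors that first step – finding PDF files in a media directory – and is not the article’s actual code:

```python
import os

def find_pdfs(media_dir: str) -> list:
    """Scan a media directory for PDF files, as the book-library app
    does before generating thumbnails and counting pages. The check is
    case-insensitive so 'Book.PDF' is found too."""
    return sorted(
        entry.name
        for entry in os.scandir(media_dir)
        if entry.is_file() and entry.name.lower().endswith(".pdf")
    )
```

    In the OpenShift version, `media_dir` would point at a mounted PersistentVolume, so the library survives the pod being stopped or recreated.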

  • IBM and Linux Foundation Call on Developers to Make Natural Disasters Less Deadly

    On a stormy Tuesday in July, a group of 30 young programmers gathered in New York City to take on natural disasters. The attendees—most of whom were current college students and alumnae of the nonprofit Girls Who Code—had signed up for a six-hour hackathon in the middle of summer break.

    Flash floods broke out across the city, but the atmosphere in the conference room remained upbeat. The hackathon was hosted in the downtown office of IBM as one of the final events in this year’s Call for Code challenge, a global competition sponsored by IBM and the Linux Foundation. The challenge focuses on using technology to assist survivors of catastrophes including tropical storms, fires, and earthquakes.

    Recent satellite hackathon events in the 2019 competition have recruited developers in Cairo to address Egypt’s national water shortage; in Paris to brainstorm AI solutions for rebuilding the Notre Dame cathedral; and in Bayamón, Puerto Rico, to improve resilience in the face of future hurricanes.

    Those whose proposals follow Call for Code’s guidelines are encouraged to submit to the annual international contest for a chance to win IBM membership and Linux tech support, meetings with potential mentors and investors, and a cash prize of US $200,000. But anyone who attends one of these optional satellite events also earns another reward: the chance to poke around inside the most prized software of the Call for Code program’s corporate partners.

SUSE and IBM/Red Hat Leftovers

  • No More Sleepless Nights and Long Weekends Doing Maintenance

    Datacenter maintenance – you dread it, right? Staying up all night to make sure everything runs smoothly and nothing crashes, or possibly losing an entire weekend to maintenance if something goes wrong. Managing your datacenter can be a real drag. But it doesn’t have to be that way.

    At SUSECON 2019, Raine and Stephen discussed how SUSE can help ease your pain with SUSE Manager, a little Salt and a few best practices for datacenter management and automation.

  • Fedora Has Formed A Minimization Team To Work On Shrinking Packaged Software

    The newest initiative within the Fedora camp is a "Minimization Team" seeking to reduce the size of packaged applications, run-times, and other software available on Fedora Linux.

    The hope of the Fedora Minimization Team is that they can lead to smaller containers, eliminating package dependencies where not necessary, and reducing the patching foot-print.

  • DevNation Live: Easily secure your cloud-native microservices with Keycloak

    DevNation Live tech talks are hosted by the Red Hat technologists who create our products. These sessions include real solutions and code and sample projects to help you get started. In this talk, you’ll learn about Keycloak from Sébastien Blanc, Principal Software Engineer at Red Hat.

    This tutorial will demonstrate how Keycloak can help you secure your microservices. Regardless of whether it’s a Node.js REST Endpoint, a PHP app, or a Quarkus service, Keycloak is completely agnostic of the technology being used by your services. Learn how to obtain a JWT token and how to propagate this token between your different secured services. We will also explain how to add fine-grained authorizations to these services.
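    The token-propagation idea is framework-agnostic, as the talk emphasizes. A minimal sketch of the mechanics, using the standard OAuth2 bearer-header format (the function names here are illustrative, not a Keycloak client library):

```python
# Sketch of bearer-token propagation between secured services.
# The Authorization header format is standard OAuth2/OIDC; the helper
# names are illustrative, not part of any Keycloak SDK.

def auth_header(jwt: str) -> dict:
    """Build the Authorization header for a call carrying a JWT
    obtained from the identity provider (e.g. Keycloak)."""
    return {"Authorization": f"Bearer {jwt}"}

def propagate(incoming_headers: dict) -> dict:
    """Forward the caller's token unchanged to the next secured service,
    so the whole call chain is authorized as the same identity."""
    return {"Authorization": incoming_headers["Authorization"]}

headers = auth_header("my.jwt.token")          # first hop
downstream = propagate(headers)                # each further hop reuses it
```

    Each service validates the token locally (signature, expiry, roles), which is why the pattern works identically for a Node.js endpoint, a PHP app, or a Quarkus service.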

Server: 'Cloud', virtualisation and IBM/Red Hat

  • Cloud Native Applications in AWS supporting Hybrid Cloud – Part 1

    Let us first talk about what cloud native is and the benefits of SUSE Cloud Application Platform and AWS when building cloud native applications.

  • Cloud Native Applications in AWS supporting Hybrid Cloud – Part 2

    In my previous post, I wrote about using SUSE Cloud Application Platform on AWS for cloud native application delivery. In this follow-up, I’ll discuss two ways to get SUSE Cloud Application Platform installed on AWS and configure the service broker:

  • 10 Top Data Virtualization Tools

    With the continuing expansion of data mining by enterprises, it's no longer possible or advisable for an organization to keep all data in a single location or silo. Yet having disparate data analytics stores of both structured and unstructured data, as well as Big Data, can be complex and seemingly chaotic.

    Data virtualization is one increasingly common approach for dealing with the challenge of ever-expanding data. Data virtualization integrates data from disparate big data software and data warehouses – among other sources – without copying or moving the data. Most helpfully, it provides users with a single virtual layer that spans multiple applications, formats, and physical locations, making data more useful and easier to manage.
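    The “single virtual layer” can be pictured as a thin dispatcher: queries go to one interface, which delegates to the live sources rather than copying their data. A toy sketch (source names and the query protocol are illustrative, not any vendor’s API):

```python
# Toy sketch of a data-virtualization layer: one query interface that
# delegates to several live sources instead of moving data into a silo.

class VirtualLayer:
    def __init__(self):
        self.sources = {}

    def register(self, name, query_fn):
        """Attach a source by its query function; no data is copied."""
        self.sources[name] = query_fn

    def query(self, key):
        """Ask every registered source and merge answers into one view."""
        return {name: fn(key) for name, fn in self.sources.items()}

layer = VirtualLayer()
layer.register("warehouse", lambda k: f"warehouse:{k}")
layer.register("lake", lambda k: f"lake:{k}")
print(layer.query("orders"))
```

    Real products add query pushdown, caching and schema mapping on top, but the principle is the same: the consumer sees one layer, while the data stays where it lives.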

  • Running MongoDB with OCS3 and using different types of AWS storage options (part 3)

    In the previous post I explained how to performance test MongoDB pods on Red Hat OpenShift with OpenShift Container Storage 3 volumes as the persistent storage layer and Yahoo! Cloud System Benchmark (YCSB) as the workload generator.

    The cluster I’ve used in the prior posts was based on the AWS EC2 m5 instance series and used EBS storage of type gp2. In this blog I will compare these results with a similar cluster based on the AWS EC2 i3 instance family, which uses locally attached storage (sometimes referred to as "instance storage" or "local instance store").

  • OpenShift 4.1 Bare Metal Install Quickstart

    In this blog we will go over how to get you up and running with a Red Hat OpenShift 4.1 Bare Metal install on pre-existing infrastructure. Although this quickstart focuses on the bare metal installer, this can also be seen as a “manual” way to install OpenShift 4.1. Moreover, this is also applicable to installing to any platform which doesn’t have the ability to provide ignition pre-boot. For more information about using this generic approach to install on untested platforms, please see this knowledge base article.
