

IBM, Red Hat, and SUSE

Filed under
Red Hat
SUSE
  • IBM Research open-sources SysFlow to tackle cloud threats

    IBM Corp.’s research division today announced the release of SysFlow, an open-source security toolkit for hunting breaches in cloud and container environments.

    SysFlow is designed to tackle a common problem in network protection. Modern security monitoring tools capture system activity with a high degree of granularity, often down to individual events such as file changes.

    That’s useful to a point but also creates a large amount of noise that makes spotting threats harder. IBM researchers Frederico Araujo and Teryl Taylor described looking for breaches under such circumstances as “akin to searching for a needle in an extremely large haystack.”

  • Red Hat DevSecOps Strategy Centers on Quay

    Red Hat is moving toward putting the open source Quay container registry at the center of its DevSecOps strategy for securing containers.

    The latest 3.2 version of Quay adds support for Container Security Operator, which integrates Quay’s image vulnerability scanning capabilities with Kubernetes. Dirk Herrmann, senior principal product manager for Red Hat, says that capability will make it possible to leverage the open source Clair vulnerability scanning tool developed by CoreOS. Red Hat acquired CoreOS in 2018.

    [...]

    The latest release of Quay also makes it easier to extend DevSecOps processes across multiple instances of the container registry. Version 3.2 of Quay includes a mirroring capability that makes it possible to replicate instances of Quay container registries across multiple locations. In fact, Herrmann says one of the things that differentiates Quay most from other container registries is its ability to scale.

    Other capabilities added to Quay include support for OpenShift Container Storage 4, which is enabled via NooBaa Operator for data management, based on the S3 application programming interface (API) for cloud storage developed by Amazon Web Services (AWS).

  • 2020 Red Hat Women in Open Source Award Nominations Now Open

    Red Hat, Inc., the world's leading provider of open source solutions, today announced that it is accepting nominations for the 2020 Women in Open Source Award program. Now in its sixth year, the Women in Open Source Award program was created and is sponsored by Red Hat to honor women who make important contributions to open source projects and communities, or those making innovative use of open source methodology.

    Nominations for this year's awards will be accepted for two categories: Academic, open to women who are enrolled full-time, earning 12 or more credit hours, in college or university; and Community, open to all other women contributing to projects related to open source.

  • Melissa Di Donato, CEO, SUSE: On cloud journeys, hyperscaler complexity, and daring to be different

    When Melissa Di Donato joined SAP in 2017, having counted Salesforce, IBM and Oracle among her previous employers, she told this publication it was like ‘coming home.’ Now, as chief executive of Linux enterprise software provider SUSE, it is more a step into the unknown.

    Yet it is not a complete step. Working with a proprietary software company means your experience is primarily in selling it, implementing it and aligning it to others’ business needs. With SUSE, Di Donato knows far more acutely what customers want.

    [...]

    Not unlike other organisations, SUSE’s customer base is split into various buckets. You have traditionalists, which comprise about 80% of customers, hybrid beginners, cloud adopters and cloud-native; the latter three all moving in ever decreasing circles. Regardless of where you are in your cloud journey, SUSE argues, the journey itself is the same. You have to simplify, before you modernise, and then accelerate.

    Di Donato argues that cloud and containers are ‘very, very overused words’, and that getting to grips with the technology which holds the containers is key – but all journey paths are valid. “Whether cloud means modernising, or container means modernising, VMs, open source… [customers’] version of modernising is really important, and they want to simplify and modernise to then get to a point where they can accelerate,” she says. “Regardless of what persona you are, what customer type you are, everyone wants to accelerate.”

    These days, pretty much everyone is on one of the hyperscale cloud providers as well. SUSE has healthy relationships with all the major clouds – including AWS, which is a shot in the arm for its occasionally-criticised stance on open source – aiming to offer partnerships and value-adds aplenty.

Remembering Brad Childs

Filed under
Red Hat
Obits

Earlier this year, the Kubernetes family lost one of its own. Brad Childs was a SIG Storage chair and long time contributor to the project. Brad worked on a number of features in storage and was known as much for his friendliness and sense of humor as for his technical contributions and leadership.

Read more

Red Hat on Servers

Filed under
Red Hat
  • How much education do you need to be a Linux sysadmin?

    A recent Enable Sysadmin article by Kevin Casey asks the question, Do I need a college degree to be a sysadmin? I won't spoil his conclusions and findings, but my personal opinion is that all professionals should have a degree, even for entry-level positions. Certainly, this "should" can mean an Associate's degree with the intent of obtaining a Bachelor's degree during job tenure. Many companies provide tuition assistance or complete reimbursement, so there's no reason not to work toward a degree.

    Of course, a degree is no guarantee of employment or insulation from layoffs, but it can be a deciding factor in both instances. If you have two sysadmins and one has a degree and the other doesn't, the manager is more likely to retain the degreed individual with all other aspects of the two being equal or comparable. That assertion comes from observations from working in large enterprise organizations for more than twenty years, through dozens of layoff events.

    At least until an industry standard is created, I doubt that any educational requirements will ever be set. What currently happens is that educational requirements are company-specific and not job-specific.

  • 5 questions to ask before choosing a public cloud provider

    In today’s digital economy, companies are looking for an edge. They want to be more efficient, outpace the competition, and deliver a customer experience that builds loyalty and increases revenue. And they’re exploring hybrid cloud and multicloud strategies to accelerate that transformation.

    In doing so, more and more companies are realizing they’ll need to welcome public clouds into their IT environment. Of course, some might argue public clouds are not a requirement in either a hybrid cloud or multicloud approach—we’ll leave that debate off the table. But as a refresher, multicloud refers to the presence of more than one cloud deployment of the same type (public or private), sourced from different vendors, while hybrid cloud refers to the presence of multiple deployment types (public or private) with some form of integration or orchestration between them.

    Hybrid cloud and multicloud approaches are not necessarily mutually exclusive, but because they always involve more than one cloud deployment and/or cloud type, it’s likely that at some point, a public cloud will be added to the mix. After all, public cloud promises greater elasticity, scalability, and speed of deployment.

  • Architecting messaging solutions with Apache ActiveMQ Artemis

    Over the years, I have hardly seen two messaging architectures that are absolutely the same. Every organization has something unique in the way they manage their infrastructure and organize their teams, and that inevitably ends up reflected in the resulting architectures. Your job as a consultant or architect is to find the most suitable architecture within the current constraints, and educate and guide the customer towards the best possible outcome. There is no right or wrong architecture, only deliberate trade-off commitments made in a given context.

    In this article, I tried to cover as many areas of Artemis as possible from an architecturally significant point of view. But by doing so, I had to be opinionated, ignore other areas, and emphasize what I think is significant based on my experience. I hope you found it useful and learned something from it. If that is the case, say something on Twitter and spread the word.

Red Hat and IBM Leftovers

Filed under
Red Hat
  • ‘API First’ paves the way for agile integration

    Sameer Parulkar: We started talking about agile integration at Red Hat Summit in 2017. We were looking at the integration space and the capabilities that we offer, as well as some of the challenges customers face in adopting these integration capabilities and in delivering faster, competitive solutions. And then we spoke with a lot of our customers, and there was consensus that integration should be more agile and align with DevOps. One of our key motivations with agile integration was to essentially position integration as a key business capability, enabling differentiated services for customers.

  • Triangle Business Journal - Sneak Peek: Inside Red Hat's new 'open studio'

    Red Hat's Chief People Officer DeLisa Alexander describes [the space] as Red Hat's in-house "marketing agency." And the new space – 9,000 square feet directly adjacent to its lobby – is designed for them to collaborate, and to do so publicly.

  • Using IBM POWER9 PowerVM Virtual Persistent Memory for SAP HANA with SUSE Linux

    SAP HANA uses in-memory database technology that allows much faster access to data than was ever possible with hard disk technology on a conventional database – access times of 5 nanoseconds versus 5 milliseconds. SAP HANA customers can also use the same database for real-time analysis and decision-making that is used for transaction processing.

    The combination of faster access speeds and better access for analytics has resulted in strong customer demand for SAP HANA. More than 1,600 customers have adopted SAP HANA on Power since it became available in 2015.

  • OpenShift Authentication Integration with ArgoCD

    GitOps is a pattern that has gained a fair share of popularity in recent times as it emphasizes declaratively expressing infrastructure and application configuration within Git repositories. When using Kubernetes, the concepts that GitOps employs align well, as each of the resources (Deployments, Services, ConfigMaps) that comprise not only an application but also the platform itself can be stored in Git. While the management of these resources can be handled manually, a number of tools have emerged to aid not only in the GitOps space generally, but specifically with the integration with Kubernetes.

    ArgoCD is one such tool that emphasizes Continuous Delivery (CD) practices to repeatedly deliver changes to Kubernetes environments.

    Note: ArgoCD has recently joined forces with Flux, a Cloud Native Computing Foundation (CNCF) sandbox project, to create gitops-engine as the solution that will combine the benefits of each standalone project.

    ArgoCD accomplishes CD methodologies by using Git repositories as a source of truth for Kubernetes manifests that can be specified in a number of ways, including plain YAML files, kustomize applications, as well as Helm Charts, and applies them to targeted clusters. When working with multiple teams and, in particular, enterprise organizations, it is imperative that each individual using the tool is authorized to do so in line with the principle of least privilege. ArgoCD features a fully functional Role Based Access Control (RBAC) system that can be used to implement this requirement.
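
    On the configuration side, ArgoCD's RBAC rules live as CSV policy lines in the argocd-rbac-cm ConfigMap. A minimal sketch follows; the role name deployer and the group ops-team are illustrative, not from the article:

    ```yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: argocd-rbac-cm
      namespace: argocd
    data:
      # Users with no matching rule fall back to read-only access
      policy.default: role:readonly
      policy.csv: |
        # Members of the ops-team group may sync applications, nothing more
        p, role:deployer, applications, sync, */*, allow
        g, ops-team, role:deployer
    ```

    When ArgoCD is integrated with an external identity provider such as OpenShift authentication, group names come from that provider, so the same policy file can map existing groups to ArgoCD roles.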

Announcing Oracle Linux 7 Update 8 Beta Release

Filed under
Red Hat
Server

We are pleased to announce the availability of the Oracle Linux 7 Update 8 Beta release for the 64-bit Intel and AMD (x86_64) and 64-bit Arm (aarch64) platforms. Oracle Linux 7 Update 8 Beta is an updated release that includes bug fixes, security fixes and enhancements.

Read more

Fedora: Copr, fstrim and RPM of PHP releases/bugfixes

Filed under
Red Hat
  • Copr: review of 2019 and vote for features in 2020

    I want to sum up what happened in Copr during 2019. At the end of this post, you can see our TODO list and cast your vote on what we should focus on in 2020.

  • Fedora and fstrim

    A proposal to periodically run the fstrim command on Fedora 32 systems was discussed recently on the Fedora devel mailing list. fstrim is used to cause a filesystem to inform the underlying storage of unused blocks, which can help SSDs and other types of block devices perform better. There were a number of questions and concerns raised, including whether to change the behavior of earlier versions of the distribution when they get upgraded and if the kernel should be responsible for handling the whole problem.

    The proposal for a Fedora 32 system-wide change to "enable fstrim.timer by default" was posted by program manager Ben Cotton on behalf of its owner, Chris Murphy. The fstrim.timer systemd unit file simply runs fstrim.service (which runs fstrim) weekly on mounted filesystems.
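
    For reference, the timer half of that pairing is only a few lines of systemd configuration; the following is abridged from a typical util-linux packaging, and exact fields may differ by distribution:

    ```ini
    # /usr/lib/systemd/system/fstrim.timer (abridged)
    [Unit]
    Description=Discard unused blocks once a week

    [Timer]
    OnCalendar=weekly
    AccuracySec=1h
    Persistent=true

    [Install]
    WantedBy=timers.target
    ```

    The proposed change amounts to shipping this timer enabled by default (the effect of systemctl enable --now fstrim.timer) rather than leaving it off.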

  • Remi Collet: PHP version 7.3.14RC1 and 7.4.2RC1

    Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS) to allow more people to test them. They are available as Software Collections for parallel installation, a perfect solution for such tests, and also as base packages.

    RPM of PHP version 7.4.2RC1 are available as SCL in remi-test repository and as base packages in the remi-php74-test repository for Fedora 29-31 and Enterprise Linux 7-8.

    RPM of PHP version 7.3.14RC1 are available as SCL in remi-test repository and as base packages in the remi-test repository for Fedora 30-31 or remi-php73-test repository for Fedora 29 and Enterprise Linux.
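
    For readers who want to try the release candidates, the commands look roughly like this; the repository names are taken from the post, but verify them against Remi's repository configuration instructions before use:

    ```shell
    # Update to the 7.4 RC as base packages, enabling the testing repo one-off
    yum --enablerepo=remi-php74-test update 'php*'

    # Or install the php74 Software Collection for a parallel installation
    yum --enablerepo=remi-test install php74
    ```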

Red Hat and SUSE Leftovers

Filed under
Red Hat
SUSE
  • Debugging applications within Red Hat OpenShift containers

    There are debugging tools that can be used within containers but are not preinstalled in container base images. Tools such as strace or Valgrind must be included in a container during the container image build process.

    In order to add a debugging tool to a container, the container image build process must be configured to perform additional package installation commands. Whether or not package installation is permitted during the image build process depends on the method being used to build the container image. OpenShift provides several methods of building container images. These methods are called build strategies. Currently, OpenShift supports the Dockerfile, Source-to-Image (S2I), Pipeline, and Custom build strategies. Not all build strategies allow package installation: Of the most commonly-used strategies, the Dockerfile strategy permits package installation but the S2I strategy does not, because an S2I build process builds the container image in an unprivileged environment. A build process within an unprivileged environment lacks the ability to invoke package installation commands.
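
    With the Dockerfile strategy, adding a debugging tool is one extra layer in the image build. A minimal sketch, with an illustrative UBI base image:

    ```dockerfile
    FROM registry.access.redhat.com/ubi8/ubi

    # Install debugging tools at image build time; an unprivileged S2I build
    # could not run this privileged package installation step later
    RUN yum install -y strace valgrind && yum clean all
    ```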

  • Fedora 33 To Finally Kill Off Python 2.6 Support

    Python 2.6 has been end-of-life since late 2013. However, Python 2.6 packaging has been kept up in Fedora in order to maintain some compatibility with RHEL/EPEL 6, which ships Python 2.6. But now, with EPEL 6 (the extra packages for Red Hat Enterprise Linux 6 / CentOS 6) reaching end-of-life, Fedora will drop its Python 2.6 support for anyone still using it outside of the EPEL building/testing use-case. EPEL 6 is being retired in November 2020, around the expected release of Fedora 33.

  • SUSE Manager 4 Content Lifecycle Management Deep Dive

    SUSE® Manager 4 is a best-in-class open source infrastructure management solution that lowers costs, enhances availability and reduces complexity for lifecycle management of Linux systems in large, complex and dynamic IT landscapes. You can use SUSE Manager to configure, deploy and administer thousands of Linux systems running on hypervisors, as containers, on bare metal systems, IoT devices and third-party cloud platforms. SUSE Manager also allows you to manage virtual machines.

  • Transformation – Simplify First

    While a bit of a stretch, there is some similarity to the dilemma that many companies are facing in this rapidly changing business environment. In my last blog, I talked about how companies are looking at the digital transformation of their business in order to stay competitive in a rapidly changing world. In a 2019 report by 451 Research commissioned by SUSE, 89% of survey respondents are considering, evaluating or executing their digital transformation strategy.

Red Hat and Servers Leftovers

Filed under
Red Hat
Server
  • Satellite Host Configuration with RHEL System Roles Powered by Ansible

    Most of the Red Hat Enterprise Linux (RHEL) system administrators I talk to are looking for ways to further automate tasks in order to save time and make their systems more consistent; this can lead to better reliability and improved security in the environment.

    RHEL System Roles Powered by Ansible is a feature introduced in RHEL 7.4 as a technology preview, and became a supported feature in RHEL 7.6. These system roles allow you to configure several aspects of RHEL: SELinux, kdump, network configuration, and time synchronization. As of RHEL 7.7, a Postfix system role is also available as a technology preview.

    Using RHEL System Roles Powered by Ansible allows you to automate these configurations across your environment. In addition, system roles provide a consistent configuration interface across major RHEL versions. You can use the same system roles to automate the configuration on RHEL 6.10 or later, RHEL 7 and RHEL 8 systems, even when the underlying technologies change between versions.

    For example, for time synchronization, rather than having to learn how to configure ntp on RHEL 6 and how to configure chrony on RHEL 7 and RHEL 8, you just need to know how to use the time synchronization system role. The system role will automatically translate that configuration to ntp on RHEL 6 and chrony on RHEL 7 and 8. This makes management easier and saves time, especially in environments with a mixture of RHEL 6, RHEL 7, and RHEL 8.
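
    A playbook using the time synchronization role might look like the following sketch; the NTP server names are placeholders:

    ```yaml
    ---
    - name: Configure time synchronization across mixed RHEL versions
      hosts: all
      vars:
        timesync_ntp_servers:
          - hostname: 0.rhel.pool.ntp.org
            iburst: yes
          - hostname: 1.rhel.pool.ntp.org
            iburst: yes
      roles:
        # The role configures ntp or chrony as appropriate for the target host
        - rhel-system-roles.timesync
    ```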

  • 6 requirements of cloud-native software

    For many years, monolithic applications were the standard enterprise architecture for achieving business requirements. But that changed significantly once cloud infrastructure began enabling business acceleration at scale and speed. Application architectures have also transformed into cloud-native applications and the microservices, serverless, and event-driven services that run on immutable infrastructure across hybrid and multi-cloud platforms.

  • Cumulus's Linux to Run Networks for Large HPE Storage Clusters
  • Software Development, Microservices & Container Management – Part IV – About making Choices – CaaSP 4 as SUSE’s empowering of Kubernetes

    Together with my colleague Bettina Bassermann and SUSE partners, we will be running a series of blogs and webinars from SUSE (Software Development, Microservices & Container Management, a SUSE webinar series on modern Application Development), and try to address the aforementioned questions and doubts about K8s and Cloud Native development, and how it does not compromise quality and control.

  • Peter Czanik: Keeping syslog-ng portable

    I define syslog-ng as an “Enhanced logging daemon with a focus on portability and high-performance central log collection”. One of the original goals of syslog-ng was portability: running the same application on a wide variety of architectures and operating systems. After one of my talks mentioning syslog-ng, I was asked how we ensure that syslog-ng stays portable when all the CI infrastructure focuses on the 64-bit x86 architecture and Linux.

    [...]

    Less often, I also test syslog-ng git snapshots on FreeBSD. Mostly on AMD64, but sometimes also on Aarch64. Just to make sure that one more operating system outside of Linux and OS X is regularly tested. Why FreeBSD? First of all, I have been using FreeBSD almost from the day it was born, even a few months before I started to use Linux. And it is also the largest platform outside Linux where syslog-ng is used, including some appliances built around FreeBSD.

    Travis announced support for ARM just recently: https://blog.travis-ci.com/2019-10-07-multi-cpu-architecture-support. It needed some extra work on the syslog-ng side, but now each pull request is also tested on ARM before merging. This is not just a simple compile test – as I do most of the time – but it includes unit tests as well.
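
    In Travis CI, opting into the additional architecture is a small build-matrix change. A simplified sketch (not syslog-ng's actual .travis.yml):

    ```yaml
    os: linux
    dist: bionic
    arch:
      - amd64
      - arm64   # the same build and unit tests now also run on ARM
    ```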

    Does this approach work? Yes, it seems to work. For example, syslog-ng compiles on all architectures supported by Debian. That also includes MIPS, which I only tested with syslog-ng once. And I learned about a new architecture just by checking which CPU architecture the BMW i3 uses to run syslog-ng: it is the SuperH.

IBM, Red Hat and Fedora Leftovers

Filed under
Red Hat
  • OpenShift under the hood: How global systems integrators like DXC Technology are using enterprise Kubernetes to build on the promise of PaaS

    Anwar Belayachi is a senior partner global solutions architect for the DXC Technology Alliance at Red Hat.

    Accelerating application development and deployment processes for the cloud and digital era doesn’t just happen. Building the next generation of enterprise apps, which are likely going to be cloud-native, requires a new set of tools, practices and platforms, and these are a vital component of the success of these transformation initiatives.

    The variety, scope and scale of these new demands can seem overwhelming, but Red Hat and our global systems integrator partners like DXC Technology are ready to help.

  • Red Hat support for Node.js

    For the past two years, Red Hat Middleware has provided a supported Node.js runtime on Red Hat OpenShift as part of Red Hat Runtimes. Our goal has been to provide rapid releases of the upstream Node.js core project, example applications to get developers up and running quickly, Node.js container images, integrations with other components of Red Hat’s cloud-native stack, and (of course) provide world-class service and support for customers. Earlier this year, the team behind Red Hat’s distribution and support of Node.js even received a “Devie” award from DeveloperWeek for this work, further acknowledging Red Hat’s role in supporting the community and ecosystem.

  • IBM acquisitions may be best path back to the top

    Like many tech companies in the early 2000s, IBM has stumbled a few times trying to find its footing traveling from the civilized world of proprietary hardware to the wild west that is the cloud-based software and services market.

    It hasn't all been a series of stumbles and bumbles, and IBM appears to have finally found the right path forward. With IBM's acquisition of Red Hat in late 2018, the company now owns a solid foundation on which to build a hybrid-cloud strategy, which many feel IBM should have pursued several years ago.

    Instead, the company spent too much time focusing on its private cloud strategy, catering to what it felt its longtime corporate customers preferred.

    [...]

    Since 2001, IBM has made over 160 acquisitions of software, hardware and communication companies spending anywhere from less than a million to multiple billions for each of them. The vast majority of those deals proved strategically inconsequential and/or IBM buried the acquired technologies so deeply into its own existing products they became either invisible or lost their value to the intended user base.

    If IBM hopes to grow its revenues again, it will have to do a better job of selecting acquisition targets. Since the end of 2011, IBM's top-line number has steadily slid from $106.9 billion to what financial analysts project will be in the neighborhood of $77 billion for fiscal 2019. Despite the enormous amount of money IBM and IBM Research have spent on developing innovative products the past 20 years, the resulting products have contributed little to top-line growth.

  • Martin Stransky: Fedora Firefox team at 2019

    I think the last year was the strongest one in the whole Fedora Firefox team history. We have always contributed to Mozilla, but in 2019 we finished some major outstanding projects upstream and also shipped them in Fedora.

    The first finished project I’d like to mention is the system titlebar being disabled by default on GNOME. Firefox UI on Linux finally matches Windows/macOS and provides a similar user experience. We also implemented various tweaks like styled and HiDPI titlebar button rendering and left/right button placement.

    A change rather small in code but high in impact was GCC optimization with PGO/LTO. In cooperation with Jakub Jelinek and the SUSE guys, we managed to match and even slightly outperform the default Mozilla Firefox binaries, which are built with Clang. I’m going to post more accurate numbers in a follow-up post, as was already published by a Czech Linux magazine.

    Firefox GNOME search provider is another small but useful feature we introduced last year. It’s not integrated upstream yet because it needs an update for an upcoming async history lookup API on the Firefox side, but we ship it as a tech preview to get more user feedback.

Red Hat and Fedora

Filed under
Red Hat
  • Events: The life force of open source

    The people who make free and open source software often come together in time and space to collaborate on projects that matter to them. These gatherings, both large and small, are very important to the ongoing growth and success of open source communities.

    Open Source events can include regular local meetups and large international conventions, but they serve the same purpose. They give members of these communities an opportunity to meet, collaborate, and form lasting friendships.

    Most development and collaboration in open source software projects happens online, in mailing lists, chat channels, and issue trackers. It can be easy to misinterpret the written word, and misunderstandings lead to unnecessary conflict. The relationships formed at open source events go a long way toward mitigating this kind of friction.

    Biella Coleman is an anthropologist who studied the world of hackers. She wrote about the "lifeworld" of hacker conferences and documented the celebration of community that goes on there, and this also applies to most open source community events.

  • Building and running SAP Commerce in OpenShift

    Given that the OpenShift Container Platform leverages container images as the packaging model, a layered file system is in use which allows for a common base to be used regardless of the number of applications. Since images are atomic in nature, there is a guarantee that the same base can be replicated across all of the applications. In addition, a container delivery pipeline can be created that allows for applications to be rebuilt automatically whenever the base is updated, such as when updates are installed or a security vulnerability is discovered.

  • Simplifying OpenShift Case Information Gathering Workflow: Must-Gather Operator

    Collecting debugging information from a large set of nodes (such as when creating SOS reports) can be a time consuming task to perform manually. Additionally, in the context of Red Hat OpenShift 4.x and Kubernetes, it is considered a bad practice to ssh into a node and perform debugging actions. To better accomplish this type of operation in OpenShift Container Platform 4, there is a new command: oc adm must-gather, which will collect debugging information across the entire cluster (nodes and control plane). More detailed information on the must-gather command can be found in the platform documentation.

    While using the must-gather command is fairly straightforward, the full end-to-end process to facilitate all of the available tasks can be time consuming. This process involves issuing the command, waiting for the associated tasks to complete, and then uploading the resulting information to the Red Hat case management system.
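
    In practice, the manual workflow amounts to something like the following; the destination path is illustrative:

    ```shell
    # Collect debugging data from the entire cluster into a local directory
    oc adm must-gather --dest-dir=/tmp/must-gather

    # Compress the result for attachment to a Red Hat support case
    tar -czf must-gather.tar.gz -C /tmp must-gather
    ```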

    A way to further streamline the process is to automate these actions.

  • Re-evaluating systemd

    First developed by Lennart Poettering and Kay Sievers of Red Hat, systemd began as a replacement for init, the first process started during bootup. However, it quickly developed into an overall manager for system resources and services, acting as an intermediary between applications and the kernel and providing a common administrative utility for major distributions. Objections to systemd ranged from the claims that it was overly complex and that it was contrary to the Unix tradition of tinkering, to the conspiracy theory that systemd was part of Red Hat’s long-range plans to seize control of Linux. In the years since, most developers and users grew to accept systemd, but the objections have never wholly gone away. DistroWatch’s search function reveals that 98 of 277 active distributions are built without systemd, a figure that indicates that a significant minority resistance continues to this day.

    According to a posting by Debian Project Leader Sam Hartman, the reconsideration of systemd was brought about by the proposal to include in Debian elogind, a fork of the systemd-logind daemon that provides support for systemd’s D-Bus-based login system but runs without the whole of systemd. This setup opened the possibility of offering Debian builds with alternatives like init and SysVinit. With no consensus among elogind developers emerging after a lengthy discussion, Hartman concluded, “we’ve reached a point where in order to respect the people trying to get the work done, we need to figure out where we are as a project. We can either decide that this is work we want to facilitate, or work that we as a project decide is not important.” Although Hartman did not say, one reason for revisiting the issue was that in 2014 Debian specifically declined to take an official position on init alternatives.

    [...]

    Whether any other distribution might re-evaluate systemd five years later is uncertain. Perhaps only such a famously diverse distribution as Debian would even consider doing so. Ubuntu, for example, tends not to poll contributors about technical directions. Neither does Fedora. But considering the lingering controversies, perhaps Debian’s re-evaluation should be copied by other distros -- if only to confirm that systemd is here to stay.

  • An introduction to VoIP for sysadmins

    Voice over IP (VoIP) is a technology that allows phone calls to traverse regular IP-based networks (such as the internet). You might associate phone systems with arcane, difficult to use technology. For many years, this was certainly the case. However, building a modern VoIP system doesn’t have to be difficult. While telephony is a complex discipline, building a simple phone network to place inbound and outbound calls is within the reach of anyone who takes the time to understand VoIP technology. With the foundation that you will build in this article, you can approach your next voice project with confidence in your knowledge of the basic protocols that make VoIP networks run.

  • 4 lessons for sysadmins from The Unicorn Project

    Most people practicing DevOps are familiar with the work of Gene Kim. You might have been introduced to DevOps and "The Three Ways" through The Phoenix Project, or you might rely on The DevOps Handbook as a guide to change your team’s culture and increase their productivity. Kim is back with a new volume from the perspective of a lead developer and architect in The Unicorn Project.

    The book is described as "a novel about developers, digital disruption, and thriving in the age of data." I was hooked on the storyline from the beginning when the lead character, Maxine, was exiled from a great engineering position in the fictional company Parts Unlimited because of a mistake that was made in production.

    Maxine took on her new role in the "Phoenix Project" with a smile (and a fair bit of skepticism). Her first challenge was to get the developer build environment up and running on her laptop. This proved to be no easy task as she ran into issue after issue and kept creating tickets with the internal help desk requesting access to certain share folders or license keys. She was getting nowhere fast, but she was determined.

    Maxine’s persistence got the attention of others. While instructed to "lay low" by management, that was not her style. She had quite the adventure throughout the rest of the story as she was invited to join "The Rebellion"—a group of the best and brightest engineers in the company training in secret to become a learning organization.

  • Fedora 32 Aiming To Ship With The Latest Mono 6 For Microsoft .NET On Linux

    A late change proposal for Fedora 32 would jump the shipped Mono package from version 5.20 to version 6.6.

    Mono 6.6 could make it into this next Fedora Linux release for delivering the latest Microsoft .NET capabilities on Linux. Mono 6, originally introduced in July 2019, defaulted the C# compiler version to 8.0, deemed the Mono Interpreter feature complete, and brought debugger improvements and other enhancements. Mono 6.6 was released in December with continued work on their WebAssembly support, better CoreFX compatibility, and other work.


More in Tux Machines

Today in Techrights

Android Leftovers

Canonical Outs New Major Kernel Update for All Supported Ubuntu Releases

Available for the Ubuntu 19.10 (Eoan Ermine), Ubuntu 18.04 LTS (Bionic Beaver), and Ubuntu 16.04 LTS (Xenial Xerus) operating system series, the new Linux kernel security update is here to fix a vulnerability (CVE-2019-14615) affecting systems with Intel Graphics Processing Units (GPUs), which could allow a local attacker to expose sensitive information.

It also addresses a race condition (CVE-2019-18683) discovered in the Virtual Video Test Driver (VIVID), which could allow an attacker with write access to /dev/video0 to gain administrative privileges, as well as a flaw (CVE-2019-19241) in the Linux kernel’s IO uring implementation that could also allow a local attacker to gain administrative privileges.

Another race condition (CVE-2019-19602) was fixed on x86 platforms, which could let a local attacker cause a denial of service (memory corruption) or gain administrative privileges. Moreover, issues (CVE-2019-18786 and CVE-2019-19947) discovered in the Renesas Digital Radio Interface (DRIF) and Kvaser CAN/USB drivers could allow local attackers to expose sensitive information (kernel memory).

Read more

10 Best Linux Terminal Emulators [2020 Edition]

Do you prefer terminal emulators over GUI? But there are times when the terminal’s decent styling seems boring. In such cases, you look for more options to customize the terminal just like we do while choosing Linux distros. If that’s the case, your wait is over as we bring the list of best terminal emulators for Linux that you can use to refresh your monotonous daily work. Along with the styling, you can also turn the single terminal into a multigrid, observing the activity of each terminal simultaneously. Read more