Red Hat on Middleware, RHEL AUDITD, and More Security Issues

Filed under
Red Hat
Server
Security
  • Open Outlook: Middleware (part 1)

    Middleware, both as a term and as a concept, has been around for decades. As a term, like other terms in the Darwinian world of IT jargon, it has followed a typical fashion lifecycle and is perhaps somewhat past its apogee of vogue. As a concept, however, middleware is more relevant than ever, and while a memetic new label hasn't quite displaced the traditional term, the capabilities themselves are still very much at the heart of enterprise application development.

    Middleware is about making both developers and operators more productive. Analogous to standardized, widely-used, proven subassemblies in the manufacture of physical goods such as cars, middleware relieves developers from "reinventing the wheel" so that they can compose and innovate at higher levels of abstraction. For the staff responsible for operating applications in production, at scale, with high reliability and performance, the more such applications use standardized middleware components and services, the more efficient and reliable the running of the application can be.

  • RHEL AUDITD
  • Security updates for Tuesday

Servers: Container Mythbusters, OpenShift (Red Hat) and IBM

Filed under
Server
SUSE
  • Video: Container Mythbusters

    Michael Jennings has been a UNIX/Linux sysadmin and software engineer for over 20 years. He has been the author of or a contributor to numerous open source software projects, including Charliecloud, Mezzanine, Eterm, RPM, Warewulf/PERCEUS, and TORQUE. Additionally, he co-founded the Caos Foundation, creators of CentOS, and has been the lead developer on 3 separate Linux distributions. He currently serves as the Platforms Team Lead in the HPC Systems group at Los Alamos National Laboratory, responsible for managing some of our nation’s most powerful supercomputers and is the primary author/maintainer for the LBNL Node Health Check (NHC) project. He is also the Vice President of HPCXXL, the extreme-scale HPC Users group.

  • Assessing App Portfolios for Onboarding to OpenShift

    Most professionals who’ve spent enough time in the IT industry have seen organizational silos in action. The classic silos are the ones created by Development and Operations organizations; silos we aim to break down through DevOps-style collaboration. But how many organizations pursuing digital transformation are continuing that siloed thinking when it comes to evaluating the application portfolio for cloud migration and modernization?

    Application Development, Database Operations, Infrastructure, and the various lines of business have portions of the application portfolio for which they take responsibility. When organizations think about modernization, they need to deemphasize the silos and develop a comprehensive approach that evaluates the entire portfolio, and the teams that support those applications. Otherwise, they’re leaving money on the table in the form of missed opportunities for cost savings and application improvements that generate revenue and increase customer engagement.

    A comprehensive approach takes into account the full range of workloads supported by the IT organization and starts making tough decisions: which workloads can or should be modernized, which should be rehosted to take advantage of more efficient cloud platforms, and which should be left as is or even retired because they've outlived their usefulness.

  • Big Blue Finally Brings IBM i To Its Own Public Cloud

    Well, that took quite a long time. After what seems like eons of nudging and cajoling and pushing, IBM is making the IBM i operating system and its integrated database management system, as well as the application development tools and other systems software, available on its self-branded IBM Cloud public cloud.

    Big Blue previewed its plans to bring both IBM i and AIX to the IBM Cloud at its annual Think conference in Las Vegas, on scale out machines aimed at small and medium businesses as well as to customers who want to run clusters of machines, and on scale up systems that have NUMA electronics that more tightly cluster them into shared memory systems.

Linux Foundation and Servers: LF Edge, Open Mainframe Project, CNCF and Kubernetes

Filed under
Server
  • ETSI MEC Creates Its First Working Group

    The group will be led by Walter Featherstone, a principal research engineer at Viavi.

    ETSI formed the MEC industry specification group (ISG) with 24 companies in December 2014. The group now boasts around 85 members. It set out to create a standardized, open environment for the integration of applications across multi-vendor MEC platforms.

    MEC will enable operators and vendors to provide cloud computing as well as an IT service environment at the edge of the network, which is characterized by low latency and high bandwidth. The technology is a rapidly developing application for 5G and IoT use cases.

    [...]

    The Linux Foundation, earlier this year, launched an edge computing initiative called LF Edge. The initiative will serve as an umbrella organization for five edge projects. The group has set out to build an open, interoperable framework for edge computing that is independent of hardware, silicon, cloud, or operating systems.

  • Open Mainframe Project: Zowe Ready for Prime Time

    There is a lot of interest in updating mainframe technology/interfaces across traditional enterprises. As development environments and toolsets have evolved outside the mainframe, there is a struggle to keep up—partially because backward compatibility requirements make wild changes difficult and partly because the very architecture of mainframes is different.

  • These Are Not The Containers You're Looking For

    It is a well-documented fact that the rise of cloud and open source has been connected, which also brings some interesting tensions, as I explored in my previous article. In containers, this synergy seems stronger than ever. The juggernaut behind Kubernetes and many related open source projects, the Cloud Native Computing Foundation (CNCF), is part of the Linux Foundation. The CNCF charter is clear about the intentions of the foundation: it seeks to foster and sustain an ecosystem of open source, vendor-neutral projects. Consequently, since the CNCF's inception in 2015, it has become increasingly feasible to manage a complex cloud-native stack with a large mix of these open source projects (some interesting data in the foundation's annual report). The more you get into container-native methodologies, the more open source you will use.

  • What is Knative, and What Can It Do for You?

    Kubernetes is great, as it is. But with Knative, a new, open source platform spearheaded by Google, Kubernetes can be even better.

    If you haven’t yet taken a look at what Knative is or how it can save developers time and headaches, you could be missing out on some powerful features that help you get more out of Kubernetes (and containers in general) with less effort.

    Keep reading for an overview of what Knative is and how it can help you double down on microservices and containers.
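To give a flavor of what Knative abstracts away, a serving deployment can be described in a single manifest. The sketch below is illustrative only: it uses Knative's public hello-world sample image, the service name is invented, and the API version shown (`serving.knative.dev/v1`) is the one Knative Serving eventually stabilized on, which may differ from the version current when this article was written.

```yaml
# Hypothetical minimal Knative Service; names and image are placeholders.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello            # example service name
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go  # Knative's sample image
          env:
            - name: TARGET
              value: "World"
```

Applied with `kubectl apply -f`, a manifest like this is enough for Knative to build out the route, revision, and autoscaling (including scale-to-zero) that you would otherwise wire up by hand with plain Kubernetes objects.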

Databases: DigitalOcean, InfluxData and SQLite

Filed under
Server
OSS
  • DigitalOcean launches its managed database service

    DigitalOcean started as an affordable but basic virtual private server offering with a pleasant user interface. Over the last few years, the company started adding features like object and block storage, load balancers and a container service. Today, it’s expanding its portfolio once again by launching a feature that was sorely missing in its lineup: a managed database service.

    The first edition of these DigitalOcean Managed Databases only supports PostgreSQL, the popular open-source relational database. Later this year, it’ll add MySQL and Redis support (likely in Q2 or Q3). As for other databases, the company says that it’ll listen to customer feedback and use that to prioritize other offerings.

  • InfluxData Secures $60 Million in Series D Funding to Bring the Value of Time Series to the Enterprise Mainstream
  • InfluxData raises $60 million for time-series database software

    The amount of data generated today boggles the mind — U.S. companies alone produce 2.5 quintillion bytes daily, enough to fill ten thousand Libraries of Congress in a year — and much of it is of the time-series variety (i.e., data points indexed in time order). Given the sheer volume, it’s no wonder that only 12 percent of companies say they’re analyzing the data they have, according to Forrester Research.

    That’s one of the reasons Paul Dix — who’s helped to build software for startups, large companies, and organizations like Microsoft, Google, McAfee, Thomson Reuters, and Air Force Space Command — founded Y Combinator- and Bloomberg Beta-backed InfluxData (formerly Errplane) in 2012. The San Francisco startup develops an open source time series platform, InfluxDB, that is optimized to handle metrics and events in DevOps, internet of things (IoT), and real-time analytics domains. And after a banner year that saw revenue double, InfluxDB 2.0 launch in alpha, and Flux — a functional language for both querying and processing data — debut in technical preview, the startup is gearing up for growth.

  • Why you should use SQLite

    Lift the hood on most any business application, and you’ll reveal some way to store and use structured data. Whether it’s a client-side app, an app with a web front-end, or an edge-device app, chances are it needs an embedded database of some kind.

    SQLite is an embeddable open source database, written in C and queryable with conventional SQL, that is designed to cover those use cases and more. SQLite is designed to be fast, portable, and reliable, whether you’re storing only kilobytes of data or multi-gigabyte blobs.
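As a rough illustration of how little ceremony an embedded database demands, here is a minimal sketch using Python's standard-library sqlite3 bindings (the table and names are invented for the example; pass a file path instead of ":memory:" to persist to disk):

```python
import sqlite3

# An in-memory database: nothing to install, no server process to run.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE clients (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO clients (name) VALUES (?)", [("Ada",), ("Grace",)])
conn.commit()

# Query back with conventional SQL.
rows = conn.execute("SELECT name FROM clients ORDER BY name").fetchall()
print([r[0] for r in rows])  # prints ['Ada', 'Grace']
conn.close()
```

The same handful of calls works whether the database lives in memory, in a small client-side file, or holds multi-gigabyte blobs, which is precisely the range of use cases the article describes.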

Bare-Metal Kubernetes Servers and SUSE Servers

Filed under
Server
SUSE
  • The Rise of Bare-Metal Kubernetes Servers

    While most instances of Kubernetes today are deployed on virtual machines running in the cloud or on-premises, there is a growing number of instances of Kubernetes being deployed on bare-metal servers.

    The two primary reasons for opting to deploy Kubernetes on a bare-metal server rather than a virtual machine are usually performance and reliance on hardware accelerators. In the first instance, an application deployed at the network edge might be too latency-sensitive to tolerate the overhead created by a virtual machine. AT&T, for example, is working with Mirantis to deploy Kubernetes on bare-metal servers to drive 5G wireless networking services.

  • If companies can run SAP on Linux, they can run any application on it: Ronald de Jong

    "We have had multiple situations with respect to security breaches in the last couple of years, albeit all the open source companies worked together to address the instances. As the source code is freely available even if something goes wrong, SUSE work closely with open source software vendors to mitigate the risk", Ronald de Jong, President of -Sales, SUSE said in an interview with ET CIO.

  • SUSE Public Cloud Image Life-cycle

    It has been a while since we published the original image life-cycle guidelines, SUSE Image Life Cycle for Public Cloud Deployments. Much has been learned since, technology has progressed, and the life-cycle of products has changed. Therefore, it is time to refresh things, update our guidance, and clarify items that have led to questions over the years. This new document serves as the guideline going forward, starting February 15th, 2019, and supersedes the original guideline. Any images with a date stamp later than v20190215 fall under the new guideline. The same basic principle as in the original guideline applies: the image life-cycle is aligned with the product life-cycle of the product in the image, meaning a SLES image generally aligns with the SUSE Linux Enterprise Server life-cycle and a SUSE Manager image generally aligns with the SUSE Manager life-cycle.

Server: Network Function Virtualization, Little Backup Box, Oracle and Red Hat

Filed under
Red Hat
Server
  • NFV, virtualized central offices, and the Need for VNF Data Protection

    Network Function Virtualization (NFV) is designed to provide value around modularity and flexibility. NFV can allow different radio access networks and customer applications to run on one physical network so that the 5G revolution becomes a reality. Critical enterprise compliance requirements, including data protection and disaster recovery, must still be met during this race to modernization.

  • Little Backup Box: A Handful of Improvements and a Dash of PHP

    Every one of my Little Backup Box improvement projects starts with the same thought: it does the job, but... This time around, I wanted to fix and improve several things. Firstly, since the DLNA feature wasn't working at all, I removed it altogether a while ago; since then, I have missed the ability to browse and view freshly backed-up photos on many occasions. Secondly, I'm not a big fan of Python. There is no particular reason for that; I just never really warmed up to the language. PHP, on the other hand, has always been my personal favorite and go-to scripting language, no matter what some professional developers think of it. So I wanted to replace the Python-based Little Backup Box web interface with a simpler, and arguably more elegant, version written in PHP. Finally, Little Backup Box can theoretically be installed on any machine running a Debian-based Linux distribution. But due to some values hard-wired in the scripts, deploying Little Backup Box on any system other than Raspbian requires some manual tweaking. This is something I wanted to fix as well.

  • What is Oracle Linux? And where to Download it

    Oracle Linux is based on, and fully compatible with, Red Hat Enterprise Linux, in both source code and binaries. It ships the same packages as the corresponding version of Red Hat Enterprise Linux and is built from the same source code as the Red Hat distribution; there are approximately 1,000 packages in the distribution. Even comparing the source code of the two byte by byte reveals no difference. The only change is the removal of trademark and copyright information. That is why it can reasonably be called Oracle's own enterprise Linux.

    Oracle released the first version of Oracle Linux in early 2006 to better support Oracle software and hardware. Because of the enterprise-level support plan Oracle provides, UBL (Unbreakable Linux), many people have called it an indestructible Linux.

  • Linux chops are crucial in containerized world, says Red Hat executive

    How are companies in 2019 going to make multicloud a practical reality? The jury seems to have selected containers (a virtualized method for running distributed applications). This is why legacy vendors and startups alike are flooding the market with container products. Which should companies choose?

    Ever see those Red Hat Inc. T-shirts that say “Containers Are Linux”? That pretty much sums up Red Hat’s bid for the containerization championship.

    “As you move into that space of Kubernetes, and containers and orchestration, you really want someone who knows Linux,” said Stefanie Chiras (pictured), vice president and general manager of the Red Hat Enterprise Linux business unit, known as RHEL, at Red Hat.

    Chiras spoke with Dave Vellante (@dvellante) and Stu Miniman (@stu), co-hosts of theCUBE, SiliconANGLE Media’s mobile livestreaming studio, during the IBM Think event in San Francisco. They discussed RHEL 8 and the crucial importance of Linux for containers. (* Disclosure below.)

  • Red Hat Delivers Unified Integration Platform for Cloud-Native Application Development
  • Red Hat Extends Datacenter Infrastructure Control, Automation with Latest Version of Red Hat CloudForms

SUSE and Red Hat Server Software

Filed under
Red Hat
Server
SUSE
  • SUSE OpenStack Cloud 9 Release Candidate 1 is here!
  • The New News on OpenShift 3.11

    Greetings, fellow OpenShift enthusiasts! Not too long ago, Red Hat announced that OKD v3.11, the last release in the 3.x stream, is now generally available. The latest release of OpenShift enhances a number of current features that we know and love and adds a number of interesting updates and technology previews for features that may or may not be included in OpenShift 4.0. Let's take a look at one of the more exciting releases that may be part of The Great Updates coming in OpenShift 4.0.

  • Red Hat Satellite 6.4.2 has just been released

    Red Hat Satellite 6.4.2 is now generally available. The main drivers for the 6.4.2 release are upgrade and stability fixes. Eighteen bugs have been addressed in this release; the complete list is at the end of the post. The most notable change is support for cloning in Satellite 6.4.

    Cloning allows you to copy your Satellite installation to another host to facilitate testing or upgrading the underlying operating system, for example when moving a Satellite installation from Red Hat Enterprise Linux 6 to Red Hat Enterprise Linux 7. An overview of this feature is available on Red Hat's Customer Portal.

Server: UNIX, Server Virtualization, Red Hat and Fedora, Networking and PostgreSQL

Filed under
Server
  • The long, slow death of commercial Unix [Ed: Microsoft propagandist Andy Patrizio should also do an article about the death of Windows Server.]

    In the 1990s and well into the 2000s, if you had mission-critical applications that required zero downtime, resiliency, failover and high performance, but didn’t want a mainframe, Unix was your go-to solution.

    If your database, ERP, HR, payroll, accounting, and other line-of-business apps weren’t run on a mainframe, chances are they ran on Unix systems from four dominant vendors: Sun Microsystems, HP, IBM and SGI. Each had its own flavor of Unix and its own custom RISC processor. Servers running an x86 chip were at best used for file and print or maybe low-end departmental servers.

  • What is Server Virtualization: Is It Right For Your Business?

    In the modern world of IT application deployment, server virtualization is a commonly used term. But what exactly is server virtualization and is it right for your business?

    Server virtualization in 2019 is a more complicated and involved topic than it was when the concept first started to become a popular approach nearly two decades ago. However, the core basic concepts and promises remain the same.

  • Transitioning Red Hat SSO to a highly-available hybrid cloud deployment

    About two years ago, Red Hat IT finished migrating our customer-facing authentication system to Red Hat Single Sign-On (Red Hat SSO). We have been quite pleased with the performance and flexibility of the new platform. However, due to some architectural decisions made to optimize for uptime using the technologies at our disposal, we were unable to take full advantage of Red Hat SSO's robust feature set until now. This article describes how we're now addressing database and session replication between global sites.

  • Red Hat named to Fortune’s 100 Best Companies to Work For list

    People come to work at Red Hat for our brand, but they stay for the people and the culture. It's integral to our success as an organization. It's what makes the experience of being a Red Hatter and working with other Red Hatters different. And it's what makes us so passionate about our customers’ and Red Hat’s success. In recognition of that, Red Hat has been ranked No. 50 on Fortune Magazine's list of 100 Best Companies to Work For! Hats off--red fedoras, of course--to all Red Hatters!

  • News from Fedora Infrastructure

    One of the first tasks we completed was moving as many of the applications we maintain as possible to CentOS CI for our Continuous Integration pipeline. CentOS CI provides us with a Jenkins instance running in an OpenShift cluster; you can have a look at this instance here.

    Since a good majority of our applications are developed in Python, we agreed on using tox to execute our CI tests. Adopting tox in our applications gives us a really convenient way to configure the CI pipeline in Jenkins. In fact, we only needed to create a .cico.pipeline file in the application repository with the following.
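The .cico.pipeline file itself is not reproduced in this excerpt. As a purely hypothetical illustration of the tox side of such a setup (environment names, Python version, and commands are assumptions, not the Fedora Infrastructure team's actual configuration), a minimal tox.ini might look like:

```ini
; tox.ini -- hypothetical sketch, not the actual Fedora Infrastructure config
[tox]
envlist = py37, lint

[testenv]
; Run the unit test suite in an isolated virtualenv.
deps = -rrequirements.txt
       pytest
commands = pytest tests/

[testenv:lint]
; A separate environment for style checks.
deps = flake8
commands = flake8 .
```

With a file like this in place, the Jenkins pipeline only needs to invoke `tox`, and the same environments run identically on a developer's laptop and in CI.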

  • Mirantis to Help Build AT&T's Edge Computing Network for 5G On Open Source

    The two companies hope other telcos will follow AT&T's lead in building their 5G networks on open source software.

  • The Telecom Industry Has Moved to Open Source

    The telecom industry is at the heart of the fourth industrial revolution. Whether it’s connected IoT devices or mobile entertainment, the modern economy runs on the Internet.

    However, the backbone of networking has been running on legacy technologies. Some telecom companies are centuries old, and they have a massive infrastructure that needs to be modernized.

    The great news is that this industry is already at the forefront of emerging technologies. Companies such as AT&T, Verizon, China Mobile, DTK, and others have embraced open source technologies to move faster into the future. And LF Networking is at the heart of this transformation.

    “2018 has been a fantastic year,” said Arpit Joshipura, General Manager of Networking at Linux Foundation, speaking at Open Source Summit in Vancouver last fall. “We have seen a 140-year-old telecom industry move from proprietary and legacy technologies to open source technologies with LF Networking.”

  • Monroe Electronics Releases Completely Redesigned HALO Version 2.0

    With improvements including a new web-based interface and its shift to a unified web-server platform, HALO V2.0 simplifies and streamlines all of these critical processes. The new web-based interface for HALO V2.0 allows users to work with their preferred web browser (e.g., Chrome, Firefox, Safari). The central HALO server now runs on a Linux OS (Ubuntu and CentOS 7) using a PostgreSQL database.

  • PostgreSQL 11.2, 10.7, 9.6.12, 9.5.16, and 9.4.21 released

    The PostgreSQL project has put out updated releases for all supported versions. "This release changes the behavior in how PostgreSQL interfaces with 'fsync()' and includes fixes for partitioning and over 70 other bugs that were reported over the past three months."

Server: Kiwi TCMS, Kubernetes Operators, OpenFabrics Alliance and Linux Watch Command

Filed under
Server
  • Kiwi TCMS 6.5.3

    We're happy to announce Kiwi TCMS version 6.5.3! This is a security, improvement and bug-fix update that includes new versions of Django, includes several database migrations and fixes several bugs. You can explore everything at https://demo.kiwitcms.org!

  • How to explain Kubernetes Operators in plain English
  • The State of High-Performance Fabrics: A Chat with the OpenFabrics Alliance

    The global high-performance computing (HPC) market is growing and its applications are constantly evolving. These systems rely on networks, often referred to as fabrics, to link servers together forming the communications backbone of modern HPC systems. These fabrics need to be high speed and highly scalable to efficiently run advanced computing applications. Often, there is also a requirement that the software that runs these fabrics be open source. It turns out that this description of high-performance fabrics is increasingly applicable to environments outside classical HPC, even as HPC continues to serve as the bellwether for the future of commercial and enterprise computing. Fortunately, the mission of the OpenFabrics Alliance (OFA) has recently been updated to include accelerating the development of advanced fabrics and importantly to further their adoption in fields beyond traditional HPC.

  • Linux Watch Command

Server/Linux Foundation: Sleepy Sysadmins, Academy Software Foundation, and Cloud Native Computing Foundation (CNCF)

Filed under
Server
  • When I was sleepy

    One day I came back from lunch (a good one) feeling a bit sleepy. I had taken down the Tomcat server, pushed the changes to the application, and then wanted to start the server up again.

    [...]

    From that day on, before doing any kind of destructive operation, I double-check the command prompt for any typos. I make sure that I don’t remove anything by accident and that I have my backups in place.

  • Sony Pictures Has Open-Sourced Software Used to Make ‘Into the Spider-Verse’

    Sony Pictures Imageworks has contributed a software tool used to create movies like "Spider-Man: Into the Spider-Verse," "Hotel Transylvania 3," "Alice in Wonderland" and "Cloudy with a Chance of Meatballs" to the open source community.

    OpenColorIO, a tool used for color management during the production process, has become the second software project of the Academy Software Foundation, an industry-wide open source association spearheaded by the Linux Foundation.

    [...]

    The Academy Software Foundation was founded in August of 2018 as an industry-wide effort to advance the development and use of open source software in Hollywood. Founding members include Autodesk, Cisco, DreamWorks, Epic Games, Foundry, Google Cloud, Intel, Walt Disney Studios and others. Sony Pictures Entertainment/Sony Pictures Imageworks, Warner Bros., the Blender Foundation and the Visual Effects Society (VES) joined the group last fall.

  • The CNCF 2018 annual report
  • Decipher Technology Studios Announces Silver Membership with the Cloud Native Computing Foundation (CNCF)

    Decipher Technology Studios, the leader in cognitive service mesh operations for the enterprise, announced it is now a silver member of the Cloud Native Computing Foundation (CNCF), a sub-foundation of the Linux Foundation.
