
Servers: Twitter Moves to Kubernetes, Red Hat/IBM News and Tips

Filed under
Red Hat
Server
  • Twitter Announced Switch from Mesos to Kubernetes

    On the 2nd of May at 7:00 PM (PST), Twitter held a technical release conference and meetup at its headquarters in San Francisco. At the conference, David McLaughlin, Product and Technical Head of Twitter Computing Platform, announced that Twitter's infrastructure would completely switch from Mesos to Kubernetes.

    For a bit of background history, Mesos was released in 2009, and Twitter was one of the early companies to support and use it. As one of the most successful social media giants in the world, Twitter has received much attention for its large production clusters (tens of thousands of nodes). In 2010, Twitter started developing the Aurora project on top of Mesos to make it more convenient to manage both its online and offline business as it gradually adopted Mesos.

  • Linux Ending Support for the Floppy Drive, Unity 2019.2 Launches Today, Purism Unveils Final Librem 5 Smartphone Specs, First Kernel Security Update for Debian 10 "Buster" Is Out, and Twitter Is Switching from Mesos to Kubernetes

    Twitter is switching from Mesos to Kubernetes. Zhang Lei, Senior Technical Expert on Alibaba Cloud Container Platform and Co-maintainer of Kubernetes Project, writes "with the popularity of cloud computing and the rise of cloud-based containerized infrastructure projects like Kubernetes, this traditional Internet infrastructure starts to show its age—being a much less efficient solution compared with that of Kubernetes". See Zhang's post for some background history and more details on the move.

  • Three ways automation can help service providers digitally transform

    As telecommunication service providers (SPs) look to stave off competitive threats from over the top (OTT) providers, they are digitally transforming their operations to greatly enhance customer experience and relevance by automating their networks, applying security, and leveraging infrastructure management. According to EY’s "Digital transformation for 2020 and beyond" study, process automation can help smooth the path for SP IT teams to reach their goals, with 71 percent of respondents citing process automation as "most important to [their] organization’s long-term operational excellence."

    There are thousands of virtual and physical devices that comprise business, consumer, and mobile services in an SP’s environment, and automation can help facilitate and accelerate the delivery of those services.

    [...]

    Some SPs are turning to Ansible and other tools to embark on their automation journey. Red Hat Ansible Automation, including Red Hat Ansible Engine and Red Hat Ansible Tower, simplifies software-defined infrastructure deployment and management, operations, and business processes to help SPs more effectively deliver consumer, business, and mobile services.

    Red Hat Process Automation Manager (formerly Red Hat JBoss BPM Suite) combines business process management, business rules management, business resource optimization, and complex event processing technologies in a platform that also includes tools for creating user interfaces and decision services. 

  • Deploy your API from a Jenkins Pipeline

    In a previous article, 5 principles for deploying your API from a CI/CD pipeline, we covered the main steps required to deploy your API from a CI/CD pipeline and saw that this can prove to be a tremendous amount of work. Fortunately, the latest release of Red Hat Integration greatly improved this situation by adding new capabilities to the 3scale CLI. In 3scale toolbox: Deploy an API from the CLI, we discovered how the 3scale toolbox strives to automate the delivery of APIs. In this article, we will discuss how the 3scale toolbox can help you deploy your API from a Jenkins pipeline on Red Hat OpenShift/Kubernetes.

  • How to set up Red Hat CodeReady Studio 12: Process automation tooling

    The release of the latest Red Hat developer suite, version 12, included a name change from Red Hat JBoss Developer Studio to Red Hat CodeReady Studio. The focus here is not on Red Hat CodeReady Workspaces, a cloud and container development experience, but on the locally installed developer studio. Given that, you might have questions about how to get started with the various Red Hat integration, data, and process automation product toolsets that are not installed out of the box.

    In this series of articles, we’ll show how to install each set of tools and explain the various products they support. We hope these tips will help you make informed decisions about the tooling you might want to use on your next development project.

SUSE displaces Red Hat @ Istanbul Technical University

Filed under
Red Hat
SUSE

Did you know the third-oldest engineering sciences university in the world is in Turkey? Founded in 1773, Istanbul Technical University (ITU) is one of the oldest universities in Turkey. It trains more than 40,000 students in a wide range of science, technology and engineering disciplines.

The third-oldest engineering sciences university selected the oldest Enterprise Linux company. Awesome match of experience! The university ditched the half-closed/half-open Red Hat products and went for truly open, open source solutions from SUSE.

Read more

Red Hat/IBM Leftovers

Filed under
Red Hat
  • 3scale toolbox: Deploy an API from the CLI

    Deploying your API from a CI/CD pipeline can be a tremendous amount of work. The latest release of Red Hat Integration greatly improved this situation by adding new capabilities to the 3scale CLI. The 3scale CLI is named 3scale toolbox and strives to help API administrators operate their services as well as automate the delivery of their APIs through Continuous Delivery pipelines.

    Having a standard CLI is a great advantage for our customers since they can use it in the CI/CD solution of their choice (Jenkins, GitLab CI, Ansible, Tekton, etc.). It is also a means for Red Hat to capture customer needs as much as possible and offer the same feature set to all our customers. A minimal sketch of wrapping the toolbox in a pipeline script appears after this list of items.

  • Red Hat Universal Base Image: How it works in 3 minutes or less
  • Guidelines for instruction encoding in the NOP space
  • Edge computing: 6 things to know

    As more and more things get smart – from thermostats and toothbrushes to utility grids and industrial machines – data is being created nearly everywhere, making it increasingly urgent for IT leaders to determine how and where that data will be processed.

    Enter the edge. There are perhaps as many ways to define edge computing as there are ways to apply it. At its core, edge computing is the practice of processing data close to where it is generated.
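
Relating to the 3scale toolbox item above: the article describes driving API delivery from a CI/CD pipeline, and one common pattern is to wrap the toolbox CLI in a small script that the pipeline calls. The sketch below is only an illustration of that idea in Python; the exact subcommand and flags ("import openapi", "-d") and the destination URL format are assumptions based on the linked toolbox article, not verified here, so check the toolbox documentation for your release.

    # Hypothetical sketch of calling the 3scale toolbox from a CI/CD job step.
    # The subcommand/flags and the destination format are assumptions; adjust
    # them to whatever your installed toolbox version actually provides.
    import subprocess
    import sys

    def deploy_api(openapi_spec: str, destination: str) -> None:
        """Push an OpenAPI spec to a 3scale tenant via the toolbox CLI."""
        cmd = ["3scale", "import", "openapi", "-d", destination, openapi_spec]
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            # Fail the pipeline stage if the toolbox reports an error.
            sys.exit(f"3scale toolbox failed: {result.stderr}")
        print(result.stdout)

    if __name__ == "__main__":
        # Placeholder values; a real pipeline would inject these from its
        # credential store or environment variables.
        deploy_api("openapi-spec.yaml", "https://ACCESS_TOKEN@TENANT-admin.3scale.net")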

Red Hat and IBM

Filed under
Red Hat
Server
  • 16 essentials for sysadmin superheroes

    You know you're a sysadmin if you are knee-deep in system logs, constantly handling user errors, or carving out time to document it all along the way. Yesterday was Sysadmin Appreciation Day and we want to give a big "thank you" to our favorite IT pros. We've pulled together the ultimate list of tasks, resources, tools, commands, and guides to help you become a sysadmin superhero.

  • Kubernetes by the numbers: 13 compelling stats

    Fast-forward to the dog days of summer 2019 and a fresh look at various stats in and around the Kubernetes ecosystem, and the story’s sequel plays out a lot like the original: Kubernetes is even more popular. It’s tough to find a buzzier platform in the IT world these days. Yet Kubernetes is still quite young; it just celebrated its fifth “birthday,” and version 1.0 of the open source project was released just over four years ago. So there’s plenty of room for additional growth.

  • Vendors not contributing to open source will fall behind says John Allessio, SVP & GM, Red Hat Global Services
  • IBM open-sources AI algorithms to help advance cancer research

    IBM Corp. has open-sourced three artificial intelligence projects focused on cancer research.

  • IBM Just Made its Cancer-Fighting AI Projects Open-Source

    IBM just announced that it was making three of its artificial intelligence projects designed to help doctors and cancer researchers open-source.

  • IBM Makes Its Cancer-Fighting AI Projects Open Source

    IBM has launched three new AI projects to help researchers and medical experts study cancer and find better treatments for the disease in the future.

  • New Open-Source AI Machine Learning Tools to Fight Cancer

    In Basel, Switzerland at this week’s 18th European Conference on Computational Biology (ECCB) and 27th Conference on Intelligent Systems for Molecular Biology (ISMB), IBM will share three novel artificial intelligence (AI) machine learning tools called PaccMann, INtERAcT, and PIMKL, that are designed to assist cancer researchers.

    [...]

    “There have been a plethora of works focused on prediction of drug sensitivity in cancer cells, however, the majority of them have focused on the analysis of unimodal datasets such as genomic or transcriptomic profiles of cancer cells,” wrote the IBM researchers in their study. “To the best of our knowledge, there have not been any multi-modal deep learning solutions for anticancer drug sensitivity prediction that combine a molecular structure of compounds, the genetic profile of cells and prior knowledge of protein interactions.”

  • IBM offering cancer researchers 3 open-source AI tools

    Researchers and data scientists at IBM have developed three novel algorithms aimed at uncovering the underlying biological processes that cause tumors to form and grow.

    And the computing behemoth is making all three tools freely available to clinical researchers and AI developers.

    The offerings are summarized in a blog post written by life sciences researcher Matteo Manica and data scientist Joris Cadow, both of whom work at an IBM research lab in Switzerland.

  • Red Hat CTO says no change to OpenShift, conference swag plans after IBM buy

    Red Hat’s CTO took to Reddit this week to reassure fans that the company would stick to its open source knitting after the firm was absorbed by IBM earlier this month, and that their Red Hat swag could be worth a packet in future.

    The first question to hit in Chris Wright’s Reddit AMA regarded the effect on Red Hat’s OpenShift strategy. The short answer was “no effect”.

    “First, Red Hat is still Red Hat, and we are focused on delivering the industry’s most comprehensive enterprise Kubernetes platform,” Wright answered. “Second, upstream first development in Kubernetes and community ecosystem development in OKD are part of our product development process. Neither of those change. The IBM acquisition can help accelerate the adoption of OpenShift given the increased scale and reach in sales and services that IBM has.”

IBM, Red Hat, Fedora Leftovers

Filed under
Red Hat
  • 5 principles for deploying your API from a CI/CD pipeline

    With companies generating more and more revenue through their APIs, these APIs have also become even more critical. Quality and reliability are key goals sought by companies looking for large-scale use of their APIs, and those goals are usually supported through well-crafted DevOps processes. Figures from the tech giants make us dizzy: Amazon is deploying code to production every 11.7 seconds, Netflix deploys thousands of times per day, and Fidelity saved $2.3 million per year with their new release framework. So, if you have APIs, you might want to deploy your API from a CI/CD pipeline.

    Deploying your API from a CI/CD pipeline is a key activity of the “Full API Lifecycle Management.” Sitting between the “Implement” and “Secure” phases, the “Deploy” activity encompasses every process needed to bring the API from source code to the production environment. To be more specific, it covers Continuous Integration and Continuous Delivery.

  • DevNation Live: Subatomic reactive systems with Quarkus

    DevNation Live tech talks are hosted by the Red Hat technologists who create our products. These sessions include real solutions, code, and sample projects to help you get started. In this talk, Clement Escoffier, Principal Software Engineer at Red Hat, will dive into the reactive side of Quarkus.

    Quarkus provides a supersonic development experience and a subatomic execution environment thanks to its integration with GraalVM. But, that’s not all. Quarkus also unifies the imperative and reactive paradigm.

    This discussion is about the reactive side of Quarkus and how you can use it to implement reactive and data streaming applications. From WebSockets to Kafka integration and reactive streams, you will learn how to build a reactive system with Quarkus.

  • What does it mean to be a sysadmin hero?

    Sysadmins spend a lot of time preventing and fixing problems. There are certainly times when a sysadmin becomes a hero, whether to their team, department, company, or the general public, though the people they "saved" from trouble may never even know.

    Enjoy these two stories from the community on sysadmin heroics. What does it mean to you?

  • What’s The Future Of Red Hat At IBM

    IBM has a long history of working with the open source community. Way back in 1999, IBM announced a $1 billion investment in Linux. IBM is also credited with creating one of the most innovative advertisements about Linux. But IBM’s acquisition of Red Hat raised some serious and genuine questions about IBM’s commitment to open source and the future of Red Hat at Big Blue.

    Red Hat CTO Chris Wright took it upon himself to address some of these concerns and answer people’s questions in an AMA (Ask Me Anything) on Reddit. Wright has evolved from being a Linux kernel developer to becoming the CTO of the world’s largest open source company. He has his finger on the pulse of both the business and community sides of the open source world.

  • Financial industry leaders talk open source and modernization at Red Hat Summit 2019

    IT leaders at traditional financial institutions seem poised to become the disruptors rather than the disrupted in what has become a dynamic industry. And they’re taking advantage of enterprise open source technology to do it, building applications in exciting and innovative ways, and even adopting the principles and culture of startup technology companies themselves.

  • FPgM report: 2019-30

    Here’s your report of what has happened in Fedora Program Management this week. The mass rebuild is underway.

    I have weekly office hours in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else.

Fedora's ARM SIG Is Looking At Making An AArch64 Xfce Desktop Spin

Filed under
Red Hat

Another late change proposal being talked about for this autumn's Fedora 31 release is introducing a 64-bit ARM (AArch64) Xfce desktop spin.

Fedora's ARM special interest group already maintains an AArch64 minimal spin, a server spin, and Fedora Workstation complete with the GNOME Shell desktop. This proposed Xfce desktop image for 64-bit Arm SoCs would cater to lighter-weight SBCs/systems that are not capable of, or whose users are not interested in, running a full workstation desktop.

Read more

Also: Now available: The user preview release of Fedora CoreOS

Red Hat CTO Chris Wright talks about Red Hat's future with IBM

Filed under
Red Hat

Many people are still waiting for the other shoe to drop now that IBM has acquired Red Hat. In a Reddit Ask Me Anything (AMA), Red Hat CTO and Linux kernel developer Chris Wright reassured everyone that Red Hat would be staying its open-source and product course.

Question number one was about the plans for Red Hat's Kubernetes offering, OpenShift. Kubernetes is vital for the modern-day hybrid cloud. Indeed, one of the big reasons why IBM bought Red Hat was for its hybrid-cloud expertise. That said, IBM has its own native Kubernetes offering, IBM Cloud Kubernetes Service, for use on its private cloud offerings.

Read more

IBM and Servers

Filed under
Red Hat
Server
  • Controlling Red Hat OpenShift from an OpenShift pod

    This article explains how to configure a Python application running within an OpenShift pod to communicate with the Red Hat OpenShift cluster via openshift-restclient-python, the OpenShift Python client. A small client sketch appears after this list of items.

  • 24 sysadmin job interview questions you should know

    As a geek who always played with computers, a career after my master's in IT was a natural choice. So, I decided the sysadmin path was the right one. Over the course of my career, I have grown quite familiar with the job interview process. Here is a look at what to expect, the general career path, and a set of common questions and my answers to them.

  • How to transition into a career as a DevOps engineer

    DevOps engineering is a hot career with many rewards. Whether you're looking for your first job after graduating or seeking an opportunity to reskill while leveraging your prior industry experience, this guide should help you take the right steps to become a DevOps engineer.

    [...]

    If you have prior experience working in technology, for example as a software developer, systems engineer, systems administrator, network operations engineer, or database administrator, you already have broad insights and useful experience for your future role as a DevOps engineer. If you're just starting your career after finishing your degree in computer science or any other STEM field, you have some of the basic stepping-stones you'll need in this transition.

  • Getting Started with Knative on Ubuntu

    Serverless computing is a style of computing that simplifies software development by separating code development from code packaging and deployment. You can think of serverless computing as synonymous with function as a service (FaaS). 

    Serverless has at least three parts, and consequently can mean something different depending on your persona and which part you look at – the infrastructure used to run your code, the framework and tools (middleware) that hide the infrastructure, and your code which might be coupled with the middleware. In practice, serverless computing can provide a quicker, easier path to building microservices. It will handle the complex scaling, monitoring, and availability aspects of cloud native computing.
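
Following up on the OpenShift Python client item above, here is a minimal sketch of what such a pod-internal client might look like, assuming the pod's service account has RBAC permission to list the resources being queried. The namespace name and the choice of the Route resource are placeholders picked only for illustration.

    # Minimal sketch (assumptions noted above): query OpenShift from inside a pod
    # using openshift-restclient-python on top of the official kubernetes client.
    from kubernetes import client, config
    from openshift.dynamic import DynamicClient

    # Inside a pod, authenticate with the mounted service account token.
    config.load_incluster_config()
    dyn_client = DynamicClient(client.ApiClient())

    # Look up the Route resource type and list routes in a placeholder namespace;
    # the pod's service account needs permission for this call to succeed.
    routes = dyn_client.resources.get(api_version="route.openshift.io/v1", kind="Route")
    for route in routes.get(namespace="my-project").items:
        print(route.metadata.name, route.spec.host)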
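
For the Knative item directly above: Knative Services are Kubernetes custom resources, so one way to deploy a serverless application programmatically is through the generic custom-objects API of the official kubernetes Python client. This is only an illustrative sketch, not taken from the article; the sample image, namespace, and the "serving.knative.dev/v1" API version are assumptions and may differ depending on the Knative release installed on your cluster.

    # Illustrative sketch: declare a Knative Service (a custom resource) from Python.
    # Image, namespace, and API version are placeholders/assumptions.
    from kubernetes import client, config

    config.load_kube_config()  # or load_incluster_config() when running in-cluster

    service = {
        "apiVersion": "serving.knative.dev/v1",
        "kind": "Service",
        "metadata": {"name": "hello", "namespace": "default"},
        "spec": {
            "template": {
                "spec": {
                    # Knative scales these containers up and down (even to zero)
                    # based on incoming request demand.
                    "containers": [{"image": "gcr.io/knative-samples/helloworld-go"}]
                }
            }
        },
    }

    client.CustomObjectsApi().create_namespaced_custom_object(
        group="serving.knative.dev",
        version="v1",
        namespace="default",
        plural="services",
        body=service,
    )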

Red Hat Leftovers

Filed under
Red Hat
  • Knative’s first year: Where it’s at and what's next in serverless

    Today we celebrate the one year anniversary since the Knative project came to the world of Kubernetes. Red Hat is one of the top vendor contributors focused on bringing the project to enterprises looking to enable portability of serverless applications in hybrid environments.

    Knative helps developers build and run serverless applications anywhere Kubernetes runs—on-premise or on any cloud. It was originally started by Google, but is maintained by the community, which includes companies like Red Hat, Google, IBM and SAP and a great ecosystem of startups. The project aims to extend Kubernetes to provide a set of components for deploying, running and managing modern applications running serverless. Serverless computing means building and running applications that do not require server management and that scale up and down (even to zero) based on demand, which usually happens through incoming events. Knative was announced last year with a number of goals to help make it easier for developers to focus on their applications rather than the underlying infrastructure, and our work together has coalesced and consolidated into this initiative as a community rather than each vendor attempting to handle it alone.

  • The business value of a Red Hat Technical Account Manager

    As organizations pursue technologies to help them digitally transform their operations, it becomes increasingly clear how important the human side of this process is. With virtualization, cloud computing, containers and many other tools available to speed innovation, there is pressure to change while maintaining a stable and secure environment--which takes a special kind of expertise. It highlights for us the importance of Red Hat’s variety of support and service offerings.

  • Red Hat Certificate System achieves Common Criteria certification

    With cybersecurity front and center for CIOs across the public and private sectors, providing infrastructure technologies that meet the stringent security needs for sensitive production applications is critical. Today, we’re pleased to expand Red Hat’s offerings of open technologies to power the world’s most critical workloads with the Common Criteria certification of Red Hat Certificate System.

  • FTC Announces $5 Billion Settlement with Facebook, First Preview Release of Fedora CoreOS Now Available, Red Hat Certificate System Achieves Common Criteria Certification, GNOME 3.33.4 Released and Summer Update on /e/

    The Fedora CoreOS team announces the first preview release of Fedora CoreOS, "a new Fedora edition built specifically for running containerized workloads securely and at scale". From the announcement: "It's designed specifically for running containerized workloads without regular maintenance, automatically updating itself with the latest OS improvements, bug fixes, and security updates. It provisions itself with Ignition, runs containers with Podman and Moby, and updates itself atomically and automatically with rpm-ostree." Note that only the testing stream is available at this time. You can download the Fedora CoreOS preview release here.

  • Fedora CoreOS Sees Its First Preview Release

    It was a year and a half ago that Red Hat acquired CoreOS, and today they are announcing the first preview release of Fedora CoreOS.

    Fedora CoreOS is the successor to Fedora Atomic Host and CoreOS Container Linux as a new distribution flavor for running containerized workloads with an emphasis on security and scalability.

deepin 15.11 GNU/Linux Release and Fedora's Plan to Adopt Its Desktop

Filed under
GNU
Linux
Red Hat
  • deepin 15.11 GNU/Linux Released with Download Links, Mirrors, and Torrents

    deepin 15.11 was released this July with the slogan "Better Never Stops", just three months after the previous 15.10 release last April. Here are official direct download links from the official server, SourceForge, OSDN, several mirrors, and of course torrents provided by the community. As usual, I strongly recommend you use BitTorrent instead and then verify that your ISO is identical to the official one, so you can safely burn it to DVD or USB and run deepin GNU/Linux. Happy downloading!

  • Deepin 15.11 Desktop Could Be On The Way To Fedora 31

    Released last week was Deepin 15.11 with various desktop improvements for this popular third-party desktop option. It could now be on its way to Fedora 31's package repository to replace the existing Deepin 5.9 packaging.

    Deepin 15.11 has many bug fixes to its KWin integration code, disc burning functionality has been added to its file manager, a more useful battery icon on the desktop, improved screen preview from the dock, Cloud Sync functionality, and a wide variety of fixes.


More in Tux Machines

Linux Kernel and Linux Foundation Leftovers

  • Improve memset
    
    since the merge window is closing in and y'all are on a conference, I
    thought I should take another stab at it. It being something which Ingo,
    Linus and Peter have suggested in the past at least once.
    
  • An Improved Linux MEMSET Is Being Tackled For Possibly Better Performance

    Borislav Petkov has taken on improving the Linux kernel's memset function, an area previously criticized by Linus Torvalds and other prominent developers. Petkov this week published his initial patch for better optimizing the memset function that is used for filling memory with a constant byte.

  • Kernel Address Space Isolation Still Baking To Limit Data Leaks From Foreshadow & Co

    In addition to the work being led by DigitalOcean on core scheduling to make Hyper Threading safer in light of security vulnerabilities, IBM and Oracle engineers continue working on Kernel Address Space Isolation to help prevent data leaks during attacks. Complementing the "Core Scheduling" work, Kernel Address Space Isolation was also talked about at this week's Linux Plumbers Conference in Lisbon, Portugal. The address space isolation work for the kernel was RFC'ed a few months ago as a feature to prevent leaking sensitive data during attacks like L1 Terminal Fault and MDS. The focus on this Kernel ASI is for pairing with hypervisors like KVM as well as being a generic address space isolation framework.

  • The Linux Kernel Is Preparing To Enable 5-Level Paging By Default

    While Intel CPUs aren't yet shipping with 5-level paging support, they are expected to soon, and distribution kernels are preparing to enable the kernel's functionality for this feature to extend the addressable memory supported. With that, the mainline kernel is also looking at flipping on 5-level paging in its default kernel configuration. Intel's Linux developers have been working for several years on the 5-level paging support for increasing the virtual/physical address space for supporting large servers with vast amounts of RAM. The 5-level paging increases the virtual address space from 256 TiB to 128 PiB and the physical address space from 64 TiB to 4 PiB. Intel's 5-level paging works by extending the size of virtual addresses to 57 bits from 48 bits. A quick arithmetic check of these figures appears after this list of items.

  • Interview with the Cloud Foundry Foundation CTO

    In this interview, Chip Childers, the CTO of the Cloud Foundry Foundation talks about some hot topics.

  • Research Shows Open Source Program Offices Improve Software Practices

    Using open source software is commonplace, with only a minority of companies preferring a proprietary-first software policy. Proponents of free and open source software (FOSS) have moved to the next phases of open source adoption, widening FOSS usage within the enterprise as well as gaining the “digital transformation” benefits associated with open source and cloud native best practices. Companies, as well as FOSS advocates, are determining the best ways to promote these business goals, while at the same time keeping alive the spirit and ethos of the non-commercial communities that have embodied the open source movement for years.

  • Linux Foundation Survey Proves Open-Source Offices Work Better
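
To tie the 5-level paging numbers above together: the address-space sizes follow directly from the address widths, since an n-bit address reaches 2**n bytes. The short check below also derives the implied physical address widths (46 and 52 bits), which are not stated explicitly in the excerpt.

    # Quick arithmetic check of the 5-level paging figures quoted above.
    TiB = 2 ** 40
    PiB = 2 ** 50

    print(2 ** 48 // TiB)  # 256 -> 48-bit virtual addresses cover 256 TiB
    print(2 ** 57 // PiB)  # 128 -> 57-bit virtual addresses cover 128 PiB
    print(2 ** 46 // TiB)  # 64  -> 64 TiB physical corresponds to 46-bit addresses
    print(2 ** 52 // PiB)  # 4   -> 4 PiB physical corresponds to 52-bit addresses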

Releasing Slax 9.11.0

A new school year has started again and the next version of Slax is here too :) this time it is 9.11.0. This release includes all bug fixes and security updates from Debian 9.11 (code name Stretch), and adds a boot parameter to disable console blanking (console blanking is disabled by default). You can get the newest version at the project's home page; there are options to purchase Slax on DVD or USB device, as well as links for free download. Surprisingly for me we skipped 9.10, I am not sure why :) I also experimented with the newly released series of Debian 10 (code name Buster) and noticed several differences which need addressing, so Slax based on Debian 10 is in progress, but not ready yet. Considering my current workload and other circumstances, it will take some more time to get it ready, a few weeks at least.

Read more

Also: Slax 9.11 Released While Re-Base To Debian 10 Is In Development

today's howtos

KDE Frameworks 5.62.0 and Reports From Akademy 2019 in Milan

  • KDE Frameworks 5.62.0

    KDE Frameworks are over 70 addon libraries to Qt which provide a wide variety of commonly needed functionality in mature, peer reviewed and well tested libraries with friendly licensing terms. For an introduction see the KDE Frameworks web page. This release is part of a series of planned monthly releases making improvements available to developers in a quick and predictable manner.

  • KDE Frameworks 5.62 Released With KWayland Additions & Other Improvements

    KDE Frameworks 5.62 is out today as the latest monthly update to this collection of KDE libraries complementing the Qt5 tool-kit offerings.

  • Back from Akademy 2019 in Milan

    Last week I was in Milan with my wife Aiswarya to attend Akademy 2019, the yearly event of the KDE community. Once again it was a great experience, with lots of interesting conferences and productive BoF sessions (“Birds of a Feather”, a common name for a project meeting during a conference). On Sunday, we presented our talk “GCompris in Kerala, part 2”. First, Aiswarya told some bits of Free-Software history in Kerala, gave examples of how GCompris is used there, and explained her work to localize the new version of GCompris in Malayalam (the language of this Indian state). Then I gave a quick report of what has happened in GCompris over the last 2 years, and talked about the things to come for our next release.

  • Akademy was a blast!

    I attended my first ever Akademy! The event was held at the University of Milano-Bicocca in Milan, Italy this year. And the experience was splendid. During the 2 day conference, I had the opportunity to talk at the Student Showcase, where all of the SoC students presented their work to the community. There were about 8 students, and everyone gave a good briefing on their project.

    My project this summer was with Kdenlive, the open source non-linear professional video editor. I proposed to revamp one of the frequently used tools in the editor, called the Titler tool, which is used to create title clips. Title clips are video clips that contain text and/or images that are composited or appended to your video (eg: subtitles). The problem with the titler tool as it is, is that it uses QGraphicsView to describe a title clip, and QGraphicsView has been deprecated since the release of Qt5. This obviously leads to problems: upstream bugs affecting the functionality of the tool and an overall degradation in the ease of maintaining the codebase. Moreover, adding new features to the existing code base was no easy task, and therefore a complete revamp was something in the sights of the developer community in Kdenlive for a long time now.

    I proposed to rework the backend over the period of GSoC, replacing the use of XML with QML and using a new rendering backend with QQuickRenderControl, along with a new MLT module to handle the QML frames. I was able to cover most of the proposed work; I plan to continue working on it and finish evolving the titler tool.