Continuous Integration/Continuous Development with FOSS Tools

Filed under
Development
Server

One of the hottest topics in the DevOps space is Continuous Integration and Continuous Deployment (CI/CD). This attention has drawn plenty of investment dollars, and a vast array of proprietary Software as a Service (SaaS) tools has been created in a space traditionally dominated by free open-source software (FOSS) tools. Is FOSS still the right choice given the low cost of many of these SaaS options?

It depends. In many cases, the cost of self-hosting these FOSS tools will be greater than the cost to use a non-FOSS SaaS option. However, even in today's cloud-centric and SaaS-saturated world, you may have good reasons to self-host FOSS. Whatever those reasons may be, just don't forget that "Free" isn't free when it comes to keeping a service running reliably 24/7/365. If you're looking at FOSS as a means to save money, make sure you account for those costs.

Even with those costs accounted for, FOSS still delivers a lot of value, especially to small and medium-sized organizations that are taking their first steps into DevOps and CI/CD. Starting with a commercialized FOSS product is a great middle ground. It gives a smooth growth path into the more advanced proprietary features, allowing you to pay for those only once you need them. Often called Open Core, this approach isn't universally loved, but when applied well, it has allowed for a lot of value to be created for everyone involved.

Read more

Servers ('Cloud'), IBM, and Fedora

Filed under
Red Hat
Server
  • Is the cloud right for you?

    Corey Quinn opened his lightning talk at the 17th annual Southern California Linux Expo (SCaLE 17x) with an apology. Corey is a cloud economist at The Duckbill Group, writes Last Week in AWS, and hosts the Screaming in the Cloud podcast. He's also a funny and engaging speaker. Enjoy the video of his talk, "The cloud is a scam," to learn why he wants to apologize and how to find out if the cloud is right for you.

  • Google Cloud to offer VMware data-center tools natively

    Google this week said it would for the first time natively support VMware workloads in its Cloud service, giving customers more options for deploying enterprise applications.

    The hybrid cloud service, called Google Cloud VMware Solution by CloudSimple, will use VMware software-defined data center (SDDC) technologies including VMware vSphere, NSX, and vSAN software deployed on a platform administered by CloudSimple for GCP.

  • Get started with reactive programming with creative Coderland tutorials

    The Reactica roller coaster is the latest addition to Coderland, our fictitious amusement park for developers. It illustrates the power of reactive computing, an important architecture for working with groups of microservices that communicate with each other through asynchronous data.

    In this scenario, we need to build a web app to display the constantly updated wait time for the coaster.
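
    The Coderland tutorials build the Reactica demo on Red Hat's own reactive stack; the code below is not from the tutorial, just a minimal Python asyncio sketch of the underlying idea: the display reacts to rider events as they arrive instead of polling for them. The event rate, the queue, and the wait-time formula are invented for illustration.

        import asyncio
        import random

        async def ride_events(queue: asyncio.Queue) -> None:
            """Simulate riders joining the Reactica line at random intervals."""
            while True:
                await asyncio.sleep(random.uniform(0.1, 0.5))
                await queue.put(1)  # one rider joins the line

        async def wait_time_display(queue: asyncio.Queue, riders_per_minute: int = 10) -> None:
            """React to each event as it arrives and publish an updated wait time."""
            riders_in_line = 0
            while True:
                riders_in_line += await queue.get()  # wake only when new data arrives
                wait_minutes = riders_in_line / riders_per_minute
                print(f"Estimated wait: {wait_minutes:.1f} minutes ({riders_in_line} riders in line)")

        async def main() -> None:
            queue: asyncio.Queue = asyncio.Queue()
            await asyncio.gather(ride_events(queue), wait_time_display(queue))

        if __name__ == "__main__":
            asyncio.run(main())  # runs until interrupted; Ctrl+C to stop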

  • Fedora Has Deferred Its Decision On Stopping Modular/Everything i686 Repositories

    The recent proposal to drop Fedora's i686 Modular and Everything repositories for the upcoming Fedora 31 release remains undecided after being deferred at this week's Fedora Engineering Steering Committee (FESCo) meeting.

    The proposal covers ending the i686 Modular and Everything repositories beginning with the Fedora 31 cycle later this year. This isn't about ending multi-lib support, so 32-bit packages will continue to work on Fedora x86_64 installations. Still, as is the trend now, if you are running a pure i686 (32-bit x86) Linux distribution, your days are numbered. Separately, Fedora is already looking to drop its i686 kernels, and it is not the only Linux distribution pushing for the long-overdue retirement of x86 32-bit operating system support.

Servers: Twitter Moves to Kubernetes, Red Hat/IBM News and Tips

Filed under
Red Hat
Server
  • Twitter Announced Switch from Mesos to Kubernetes

    On the 2nd of May at 7:00 PM (PST), Twitter held a technical release conference and meetup at its headquarters in San Francisco. At the conference, David McLaughlin, Product and Technical Head of Twitter Computing Platform, announced that Twitter's infrastructure would completely switch from Mesos to Kubernetes.

    For a bit of background history, Mesos was released in 2009, and Twitter was one of the early companies to support and use it. As one of the most successful social media giants in the world, Twitter has received much attention for the scale of its production clusters (tens of thousands of nodes). In 2010, Twitter started developing the Aurora project on top of Mesos to make it more convenient to manage both its online and offline business as it gradually adopted Mesos.

  • Linux Ending Support for the Floppy Drive, Unity 2019.2 Launches Today, Purism Unveils Final Librem 5 Smartphone Specs, First Kernel Security Update for Debian 10 "Buster" Is Out, and Twitter Is Switching from Mesos to Kubernetes

    Twitter is switching from Mesos to Kubernetes. Zhang Lei, Senior Technical Expert on Alibaba Cloud Container Platform and Co-maintainer of Kubernetes Project, writes "with the popularity of cloud computing and the rise of cloud-based containerized infrastructure projects like Kubernetes, this traditional Internet infrastructure starts to show its age—being a much less efficient solution compared with that of Kubernetes". See Zhang's post for some background history and more details on the move.

  • Three ways automation can help service providers digitally transform

    As telecommunication service providers (SPs) look to stave off competitive threats from over the top (OTT) providers, they are digitally transforming their operations to greatly enhance customer experience and relevance by automating their networks, applying security, and leveraging infrastructure management. According to EY’s "Digital transformation for 2020 and beyond" study, process automation can help smooth the path for SP IT teams to reach their goals, with 71 percent of respondents citing process automation as "most important to [their] organization’s long-term operational excellence."

    There are thousands of virtual and physical devices that comprise business, consumer, and mobile services in an SP’s environment, and automation can help facilitate and accelerate the delivery of those services.

    [...]

    Some SPs are turning to Ansible and other tools to embark on their automation journey. Red Hat Ansible Automation, including Red Hat Ansible Engine and Red Hat Ansible Tower, simplifies software-defined infrastructure deployment and management, operations, and business processes to help SPs more effectively deliver consumer, business, and mobile services.

    Red Hat Process Automation Manager (formerly Red Hat JBoss BPM Suite) combines business process management, business rules management, business resource optimization, and complex event processing technologies in a platform that also includes tools for creating user interfaces and decision services. 

  • Deploy your API from a Jenkins Pipeline

    In a previous article, 5 principles for deploying your API from a CI/CD pipeline, we discovered the main steps required to deploy your API from a CI/CD pipeline, and this can prove to be a tremendous amount of work. Fortunately, the latest release of Red Hat Integration greatly improves this situation by adding new capabilities to the 3scale CLI. In 3scale toolbox: Deploy an API from the CLI, we discovered how the 3scale toolbox strives to automate the delivery of APIs. In this article, we will discuss how the 3scale toolbox can help you deploy your API from a Jenkins pipeline on Red Hat OpenShift/Kubernetes.
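
    The article itself drives the deployment from a Jenkins pipeline; purely as a hedged sketch of the same shape of automation, the Python snippet below shells out to the 3scale toolbox from a pipeline step. The "3scale import openapi" subcommand, its "-d" destination flag, and every name and URL here are assumptions based on the toolbox documentation rather than code taken from the article.

        # Hedged sketch: invoke the 3scale toolbox CLI from a pipeline step.
        # The subcommand and flags below are assumptions; check "3scale help" on your installation.
        import subprocess

        OPENAPI_SPEC = "openapi-spec.yaml"  # hypothetical spec produced earlier in the pipeline
        DESTINATION = "https://ACCESS_TOKEN@tenant-admin.example.net"  # hypothetical 3scale admin portal

        def deploy_api(spec: str, destination: str) -> None:
            """Import the OpenAPI spec into 3scale, creating or updating the API definition."""
            subprocess.run(
                ["3scale", "import", "openapi", "-d", destination, spec],
                check=True,  # fail this pipeline stage if the import fails
            )

        if __name__ == "__main__":
            deploy_api(OPENAPI_SPEC, DESTINATION)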

  • How to set up Red Hat CodeReady Studio 12: Process automation tooling

    The release of the latest Red Hat developer suite, version 12, included a name change from Red Hat JBoss Developer Studio to Red Hat CodeReady Studio. The focus here is not on Red Hat CodeReady Workspaces, a cloud and container development experience, but on the locally installed developer studio. Given that, you might have questions about how to get started with the various Red Hat integration, data, and process automation product toolsets that are not installed out of the box.

    In this series of articles, we’ll show how to install each set of tools and explain the various products they support. We hope these tips will help you make informed decisions about the tooling you might want to use on your next development project.

Kubernetes News/Views

Filed under
Server
  • Cloud Foundry and Kubernetes – The Blending Continues [Ed: Cloud Foundry Foundation dominated by proprietary software firms]

    At the recent Cloud Foundry Summit in Philadelphia, Troy Topnik of SUSE participated in the latest iteration of a panel discussing how the community continues to blend Cloud Foundry and Kubernetes. There is some interesting and insightful discussion among the panel members from Google, IBM, Microsoft, Pivotal, SAP, and Swarna Podila of the Cloud Foundry Foundation.

    Cloud Foundry Foundation has posted all recorded talks from CF Summit on YouTube.

  • Don’t Throw Your Kubernetes Away

    The adoption of Kubernetes is growing at an unprecedented rate. Companies of all sizes are running it in production. Almost all of these companies were early adopters of Kubernetes, with different dev teams bringing Kubernetes into the organization on their own.

    Kubernetes is a very engineer-driven technology. Unlike virtualization and other infrastructure components, which are managed by a central IT team that offers them to different development groups, Kubernetes is something that developers bring into the organization.

  • Issue #2019.07.29 – Kubeflow Releases so far (0.5, 0.4, 0.3)

    Kubeflow 0.5 simplifies model development with an enhanced UI and the Fairing library. The 2019 Q1 release of Kubeflow goes broader and deeper with release 0.5. Give your Jupyter notebooks a boost with the redesigned notebook app. Get nerdy with the new kfctl command-line tool. Power to the people: use your favourite Python IDE and send your model to a Kubeflow cluster using the Fairing Python library. More training tools have been added as well, with an example of XGBoost and Fairing.

Server: IBM, Amazon, Elastic, Cloudera and YugaByte

Filed under
Server
  • IBM CTO: ‘Open Tech Is Our Cloud Strategy’

    IBM may not be as splashy as some of the other tech giants that make big code contributions to open source. But as Chris Ferris, CTO for open technology at IBM, says, “we’ve been involved in open source before open source was cool.”

    By Ferris’ estimation, IBM ranks among the top three contributors in terms of code commits to open source projects and contributors to the various open source communities. “It’s really significant,” he said. “We don’t run around with the vanity metrics the way some others do, but it’s really important to us.”

  • TurboSched Is A New Linux Scheduler Focused On Maximizing Turbo Frequency Usage

    TurboSched is a new Linux kernel scheduler that IBM has been developing to maximize the use of turbo frequencies for the longest possible periods of time. Rather than balancing the load across all available CPU cores, it tries to keep priority tasks on a select group of cores while keeping the other cores idle, so that the power allowance can be used by those few turbo-capable cores handling the high-priority work.

    TurboSched aims to place low-utilization tasks on already active cores rather than waking new cores from their idle/power-saving states. This keeps the busy cores in their turbo state for longer while saving power by not waking extra cores for brief periods to handle various background/jitter tasks.
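
    TurboSched itself is kernel C code under review; purely to illustrate the placement idea described above, here is a toy Python sketch that packs small background tasks onto already-active cores and wakes an idle core only as a last resort. The core list, utilization numbers, and task cost are invented for illustration.

        # Toy illustration of the TurboSched placement idea, not the kernel implementation.
        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class Core:
            cpu_id: int
            active: bool = False          # an idle core stays in its power-saving state
            utilization: float = 0.0      # fraction of capacity in use
            tasks: List[str] = field(default_factory=list)

        def place_jitter_task(cores: List[Core], task: str, cost: float = 0.05) -> Core:
            """Prefer an already-active core with spare capacity; wake an idle core only as a last resort."""
            candidates = [c for c in cores if c.active and c.utilization + cost <= 1.0]
            if candidates:
                target = min(candidates, key=lambda c: c.utilization)
            else:
                target = next(c for c in cores if not c.active)  # forced to wake an idle core
                target.active = True
            target.utilization += cost
            target.tasks.append(task)
            return target

        cores = [Core(0, active=True, utilization=0.6), Core(1, active=True, utilization=0.3), Core(2), Core(3)]
        for t in ["irq-balance", "log-flush", "telemetry"]:
            print(f"{t} -> CPU{place_jitter_task(cores, t).cpu_id}")
        print("still idle (power budget preserved for turbo):", [c.cpu_id for c in cores if not c.active])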

  • AWS Turbocharges new Linux Kernel Releases in its Extras Catalogue

    Amazon says it has added AWS-optimised variants of new Linux kernel releases to the extras catalogue in Amazon Linux 2, its Linux server operating system (OS), and that the boost results in higher bandwidth with lower latency on smaller instance types.

    Amazon Linux is an OS distribution supported and updated by AWS and made available for use with Elastic Compute Cloud (EC2) instances. Amazon Linux users will now be able to update the operating system to Linux Kernel 4.19, as released in October 2018.

  • Elastic Cloud Enterprise 2.3 turns admins into bouncers

    Version 2.3 of Elastic Cloud Enterprise (ECE) is now available for download, finally bringing role-based access control (RBAC) to its general user base and letting admins decide who gets to see what. ECE allows the deployment of Elastic’s search-based software as a service offerings on a company’s infrastructure of choice (public cloud, private cloud, virtual machines, bare metal).

    The new version is the first to come with four pre-configured roles to help admins control deployment access and management privileges. This is only the first step in the product’s RBAC journey, though. Customisable deployment-level permissions and greater abilities to separate users by teams are on the ECE roadmap.

  • Cloudera open source route seeks to keep big data alive

    Cloudera has had a busy 2019. The vendor started off the year by merging with its primary rival Hortonworks to create a new Hadoop big data juggernaut. However, in the ensuing months, the newly merged company has faced challenges as revenue has come under pressure and the Hadoop market overall has shown signs of weakness.

    Against that backdrop, Cloudera said July 10 that it would be changing its licensing model, taking a fully open source approach. The Cloudera open source route is a new strategy for the vendor. In the past, Cloudera had supported and contributed to open source projects as part of the larger Hadoop ecosystem but had kept its high-end product portfolio under commercial licenses.

    The new open source approach is an attempt to emulate the success that enterprise Linux vendor Red Hat has achieved with its open source model. Red Hat was acquired by IBM for $34 billion in a deal that closed in July. In the Red Hat model, the code is all free and organizations pay a subscription fee for support services.

  • YugaByte goes 100% open under Apache

    Open source distributed SQL database company YugaByte has confirmed that its eponymously named YugaByte DB is now 100 percent open source under the Apache 2.0 license.

    This further commitment to open source means that previously commercial features now move into the open source core.

    YugaByte says it hopes that this will directly create more opportunities for open collaboration between users, who will have their hands on 100% open tools.

Cautionary Tales About Hosting With Microsoft

Filed under
Server
Microsoft
  • GitHub confirms it has blocked developers in Iran, Syria and Crimea [Ed: Microsoft wants us to believe that all companies need to do what GitHub did. That’s a lie. But Microsoft knows that it needs to lick Trump’s and Bolton’s boots to keep getting those government contracts that ‘bail it out’. Microsoft made its choice [1, 2].]

    The impact of U.S. trade restrictions is trickling down to the developer community. GitHub, the world’s largest host of source code, is preventing users in Iran, Syria, Crimea, and potentially other sanctioned nations from accessing portions of the service, the chief executive of the Microsoft-owned firm said.

  • Migrating an Exchange Server to the Cloud? What could possibly go wrong?

    As users stared at useless login screens, Ben and his team floundered for a few hours, trying to work out how to restore access.

    The clue was in the word "restore" as one bright spark remembered there was a user account named "backup" used, well, to do backups.

    It had been missed in the Exchange account purge and so was still active.

    And the Linux connection? The Microsoft Certified Partner used a server running the open-source operating system to perform backup duties.

    The backup software used that Active Directory account, which just so happened to have enough privileges to re-enable the Windows users via Linux LDAP tools.

    After all, these days Microsoft just loves open source, right?

Red Hat and IBM

Filed under
Red Hat
Server
  • 16 essentials for sysadmin superheroes

    You know you're a sysadmin if you are knee-deep in system logs, constantly handling user errors, or carving out time to document it all along the way. Yesterday was Sysadmin Appreciation Day, and we want to give a big "thank you" to our favorite IT pros. We've pulled together the ultimate list of tasks, resources, tools, commands, and guides to help you become a sysadmin superhero.

  • Kubernetes by the numbers: 13 compelling stats

    Fast-forward to the dog days of summer 2019 and a fresh look at various stats in and around the Kubernetes ecosystem, and the story’s sequel plays out a lot like the original: Kubernetes is even more popular. It’s tough to find a buzzier platform in the IT world these days. Yet Kubernetes is still quite young; it just celebrated its fifth “birthday,” and version 1.0 of the open source project was released just over four years ago. So there’s plenty of room for additional growth.

  • Vendors not contributing to open source will fall behind says John Allessio, SVP & GM, Red Hat Global Services
  • IBM open-sources AI algorithms to help advance cancer research

    IBM Corp. has open-sourced three artificial intelligence projects focused on cancer research.

  • IBM Just Made its Cancer-Fighting AI Projects Open-Source

    IBM just announced that it was making three of its artificial intelligence projects designed to help doctors and cancer researchers open-source.

  • IBM Makes Its Cancer-Fighting AI Projects Open Source

    IBM launches three new AI projects to help researchers and medical experts study cancer and find better treatments for the disease in the future.

  • New Open-Source AI Machine Learning Tools to Fight Cancer

    In Basel, Switzerland at this week’s 18th European Conference on Computational Biology (ECCB) and 27th Conference on Intelligent Systems for Molecular Biology (ISMB), IBM will share three novel artificial intelligence (AI) machine learning tools called PaccMann, INtERAcT, and PIMKL, that are designed to assist cancer researchers.

    [...]

    “There have been a plethora of works focused on prediction of drug sensitivity in cancer cells, however, the majority of them have focused on the analysis of unimodal datasets such as genomic or transcriptomic profiles of cancer cells,” wrote the IBM researchers in their study. “To the best of our knowledge, there have not been any multi-modal deep learning solutions for anticancer drug sensitivity prediction that combine a molecular structure of compounds, the genetic profile of cells and prior knowledge of protein interactions.”

  • IBM offering cancer researchers 3 open-source AI tools

    Researchers and data scientists at IBM have developed three novel algorithms aimed at uncovering the underlying biological processes that cause tumors to form and grow.

    And the computing behemoth is making all three tools freely available to clinical researchers and AI developers.

    The offerings are summarized in a blog post written by life sciences researcher Matteo Manica and data scientist Joris Cadow, both of whom work at an IBM research lab in Switzerland.

  • Red Hat CTO says no change to OpenShift, conference swag plans after IBM buy

    Red Hat’s CTO took to Reddit this week to reassure fans that the company would stick to its open source knitting after the firm was absorbed by IBM earlier this month, and that their Red Hat swag could be worth a packet in future.

    The first question to hit in Chris Wright’s Reddit AMA regarded the effect on Red Hat’s OpenShift strategy. The short answer was “no effect”.

    “First, Red Hat is still Red Hat, and we are focused on delivering the industry’s most comprehensive enterprise Kubernetes platform,” Wright answered. “Second, upstream-first development in Kubernetes and community ecosystem development in OKD are part of our product development process. Neither of those change. The IBM acquisition can help accelerate the adoption of OpenShift given the increased scale and reach in sales and services that IBM has.”

Server: So-called 'DevOps' (Buzzword) and SysAdmin Day

Filed under
Server
  • Q&A: CircleCI CTO Explains Why DevOps Is a Growing Enterprise

    The CTO of DevOps platform vendor CircleCI shares insights on how the market has changed as his company raises new funds to power ahead.

  • Have you thanked a sysadmin today?

    Sysadmins are the heartbeat of many open source projects around the world. What would we do without them?

    So, once a year—or more if you're working on a team with a great outlook on life and positive culture—we take time out of our busy lives to say thank you.

  • Happy SysAdmin Day!

    The Purism team enjoys celebrating across all time zones. So far this year we’ve posted in celebration of Women’s Day, Pi Day, and Towel Day, and today we’re celebrating System Administrator Appreciation Day!

    Because behind every network, big or small, system administrators are working hard to make sure that servers are secure, updates are painless and metaphorical fires are quickly put out. They frequently go beyond their job description to provide additional support to individual users on the network.

    One big, well-kept secret is that most of the Internet runs on free software. The other big secret is that all of the Internet runs on SysAdmins.

    So today we’d like to thank our SysOps team for their tireless work, juggling the demands of company resources, our shop and various websites, as well as our Librem One services. Your laptop, your services, and soon your phone will make their way to you in large part thanks to the infrastructure they maintain.

IBM and Servers

Filed under
Red Hat
Server
  • Controlling Red Hat OpenShift from an OpenShift pod

    This article explains how to configure a Python application running within an OpenShift pod to communicate with the Red Hat OpenShift cluster via openshift-restclient-python, the OpenShift Python client.
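
    The article has the full walkthrough; as a rough sketch of the end result, the snippet below uses openshift-restclient-python together with the kubernetes client from inside a pod. It assumes the pod's service account is allowed to list Projects; the resource queried is just an example, not necessarily the one used in the article.

        # Minimal sketch: talk to the OpenShift API from inside a pod.
        # Assumes the pod's service account may list Projects.
        from kubernetes import client, config
        from openshift.dynamic import DynamicClient

        # Inside a pod, credentials come from the mounted service account token.
        config.load_incluster_config()
        dyn_client = DynamicClient(client.ApiClient())

        # Look up the OpenShift Project resource; any API group/kind is fetched the same way.
        projects = dyn_client.resources.get(api_version="project.openshift.io/v1", kind="Project")
        for project in projects.get().items:
            print(project.metadata.name)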

  • 24 sysadmin job interview questions you should know

    As a geek who has always played with computers, a career in IT after my master's degree was a natural choice, so I decided the sysadmin path was the right one. Over the course of my career, I have grown quite familiar with the job interview process. Here is a look at what to expect, the general career path, and a set of common questions and my answers to them.

  • How to transition into a career as a DevOps engineer

    DevOps engineering is a hot career with many rewards. Whether you're looking for your first job after graduating or seeking an opportunity to reskill while leveraging your prior industry experience, this guide should help you take the right steps to become a DevOps engineer.

    [...]

    If you have prior experience working in technology, such as a software developer, systems engineer, systems administrator, network operations engineer, or database administrator, you already have broad insights and useful experience for your future role as a DevOps engineer. If you're just starting your career after finishing your degree in computer science or any other STEM field, you have some of the basic stepping-stones you'll need in this transition.

  • Getting Started with Knative on Ubuntu

    Serverless computing is a style of computing that simplifies software development by separating code development from code packaging and deployment. You can think of serverless computing as synonymous with function as a service (FaaS). 

    Serverless has at least three parts, and consequently can mean something different depending on your persona and which part you look at: the infrastructure used to run your code, the framework and tools (middleware) that hide the infrastructure, and your code, which might be coupled with the middleware. In practice, serverless computing can provide a quicker, easier path to building microservices. It will handle the complex scaling, monitoring, and availability aspects of cloud native computing.
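
    To make the "your code" part concrete: a function deployed on a FaaS platform such as Knative is typically just a small HTTP server that listens on the port the platform injects (Knative passes it in the PORT environment variable) and leaves packaging, scaling, and monitoring to the middleware. The handler below is a hypothetical example, not taken from the tutorial.

        # Hypothetical minimal "function" for a Knative-style platform: the platform
        # builds, deploys, and scales it; the code only handles requests on $PORT.
        import os
        from http.server import BaseHTTPRequestHandler, HTTPServer

        class Handler(BaseHTTPRequestHandler):
            def do_GET(self):
                body = b"Hello from a serverless function!\n"
                self.send_response(200)
                self.send_header("Content-Type", "text/plain")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)

        if __name__ == "__main__":
            port = int(os.environ.get("PORT", "8080"))  # platform-provided port; 8080 for local runs
            HTTPServer(("", port), Handler).serve_forever()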

Servers: cloud-init 19.2 Released and Red Hat on 'Cloud' and OpenStack with Kuryr

Filed under
Server
  • cloud-init 19.2 Released

    Version 19.1 is already available in Ubuntu Eoan. Stable release updates (SRU) to Ubuntu 18.04 LTS (Bionic) and Ubuntu 16.04 LTS (Xenial) will start in the next week.

  • Considering Cloud Repatriation? Don’t Forget Your Data!

    Organizations should consider complementing their object storage initiatives with an abstraction layer that combines storage from multiple clouds into a single virtual storage unit. Enterprises shouldn’t migrate data unless absolutely necessary. An abstraction layer can make it easier to manage data wherever it resides.

    The end result of all of this is an IT strategy that eliminates or reduces discontinuity between different cloud platforms. Enterprises can choose to use the public cloud based on their unique business needs, not their technical bandwidth. Or, they can opt to use a combination of public and private clouds. Either way, with the appropriate storage infrastructure, they can get rid of the remorse and rest assured that their data will always be available.

  • Accelerate your OpenShift Network Performance on OpenStack with Kuryr

    Overall, Kuryr provides a significant boost in pod-to-pod network performance. As an example, we went from 0.5 Gbps pod-to-pod to 5 Gbps on a 25 Gigabit link for the common case of 1024B TCP packets when worker nodes were spread across separate OpenStack hypervisors. With Kuryr, we are able to achieve higher throughput, satisfying application needs for better bandwidth while at the same time achieving better utilization of our high-bandwidth NICs.
