Servers: Kubernetes, Red Hat, USENET and Solaris

Filed under
Server
  • HPE launches container platform, aims to be 100% open source Kubernetes

    Hewlett Packard Enterprise launched its HPE Container Platform, a Kubernetes container system designed to run both cloud and on-premises applications.

    On the surface, the HPE Container Platform faces an uphill climb: all the top cloud providers offer Kubernetes management tools and instances, and IBM, with Red Hat, has a big foothold in hybrid cloud deployments and the container management that goes with them.

    HPE, which recently outlined a plan to make everything a service, is betting that the HPE Container Platform can differentiate itself based on two themes. First, HPE is pledging that its container platform will be 100% open source Kubernetes compared to other systems that have altered Kubernetes. In addition, HPE Container Platform will be able to run across multiple environments and provide one management layer.

  • Virtio-networking: first series finale and plans for 2020

    Let's take a short recap of the Virtio-networking series that we've been running the past few months. We've covered a lot of ground! Looking at this series from a high level, let's revisit some of the topics we covered:

    [...]

    For those who didn't crack and made it all the way here, we hope this series helped clarify the dark magic of virtio and low-level networking, both in the Linux kernel and in DPDK.

  • Inside the Book of Red Hat

    Shared stories are the cornerstone of community. And in open organizations like Red Hat—where community is paramount—shared stories are especially important to the collective identity that binds participants together.

    At Red Hat, we're quite fond of the stories that inform our shared history, purpose, and culture. We've just collected some of them in a new version of the Book of Red Hat, which is available now.

    Here are just three of the community-defining moments the book recounts.

  • The Early History of Usenet, Part III: File Format

    When we set out to design the over-the-wire file format, we were certain of one thing: we wouldn't get it perfectly right. That led to our first decision: the very first character of the transmitted file would be the letter "A" for the version. Why not a number on the first line, including perhaps a decimal point? If we ever considered that, I have no recollection of it.
    A more interesting question is why we didn't use email-style headers, a style later adopted for HTTP. The answer, I think, is that few, if any, of us had any experience with those protocols at that time. My own personal awareness of them started when I requested and received a copy of the Internet Protocol Transition Workbook a couple of years later — but I was only aware of it because of Usenet. (A few years earlier, I gained a fair amount of knowledge of the ARPANET from the user level, but I concentrated more on learning Multics.)

    Instead, we opted for the minimalist style epitomized by 7th Edition Unix. In fact, even if we had known of the Internet (in those days, ARPANET) style, we may have eschewed it anyway. Per a later discussion of implementation, the very first version of our code was a shell script. Dealing with entire lines as single units, and not trying to parse headers that allowed arbitrary case, optional white space, and continuation lines was certainly simpler!

    [...]

    Sending a date and an article title were obvious enough that these didn't even merit much discussion. The date and time line used the format generated by the ctime() or asctime() library routines. I do not recall if we normalized the date and time to UTC or just ignored the question; clearly, the former would have been the proper choice. (There is an interesting discrepancy here. A reproduction of the original announcement clearly shows a time zone. Neither the RFC nor the ctime() routine had one. I suspect that announcement was correct.) The most interesting question, though, was about what came to be called newsgroups.

    We decided, from the beginning, that we needed multiple categories of articles — newsgroups. For local use, there might be one for academic matters ("Doctoral orals start two weeks from tomorrow"), social activities ("Reminder: the spring picnic is Sunday!"), and more. But what about remote sites? The original design had one relayed newsgroup: NET. That is, there would be no distinction between different categories of non-local articles.

  • From humble Unix sysadmin to brutal separatist suppressor to president of Sri Lanka

    A former Unix sysadmin has been elected the new president of Sri Lanka, giving hope to all those IT workers who fear they are trapped in a role where the smallest of decisions can have catastrophic consequences when they go wrong.

    Gotabaya Rajapaksa, younger brother of former president Mahinda, won the popular vote in an election held on Saturday (16 November). He is notable to The Register's readership for his stint working in America as a Solaris system integrator and later as a Unix sysadmin for a Los Angeles university.

Supercomputing Articles

Filed under
Server
  • Exascale meets hyperscale: How high-performance computing is transitioning to cloud-like environments

    Twice a year the high-performance computing (HPC) community anxiously awaits the announcement of the latest edition of the Top500 list, cataloging the most powerful computers on the planet. The excitement of a supercomputer breaking the coveted exascale barrier and moving into the top position typically overshadows the question of which country will hold the record. As it turned out, the top 10 systems on the November 2019 Top500 list are unchanged from the previous revision, with Summit and Sierra still holding the #1 and #2 positions, respectively. Despite the natural uncertainty around the composition of the Top500 list, there is little doubt about the software technologies that are helping to reshape the HPC landscape. Since the International Supercomputing Conference earlier this year, containerization has been one of the technologies leading this charge, lending further credence to the idea that traditional enterprise technologies are influencing the next generation of supercomputing applications.

    Containers were born of Linux, the operating system underpinning Top500 systems. Because of that, adoption of container technologies has gained momentum, and many supercomputing sites already have some portion of their workflows containerized. As more supercomputers are used to run artificial intelligence (AI) and machine learning (ML) applications to solve complex problems in science, including disciplines like astrophysics, materials science, systems biology, weather modeling, and cancer research, the focus of the research is transitioning from purely computational methods to AI-accelerated approaches. This often requires repackaging applications and restaging data for easier consumption, which is where containerized deployments are becoming more and more important.

  • Exploring AMD’s Ambitious ROCm Initiative

    Three years ago, AMD released the innovative ROCm hardware-accelerated, parallel-computing environment [1] [2]. Since then, the company has continued to refine its bold vision for an open source, multiplatform, high-performance computing (HPC) environment. Over the past three years, ROCm developers have contributed many new features and components to the ROCm open software platform.

    ROCm is a universal platform for GPU-accelerated computing. A modular design lets any hardware vendor build drivers that support the ROCm stack [3]. ROCm also integrates multiple programming languages and makes it easy to add support for other languages. ROCm even provides tools for porting vendor-specific CUDA code into a vendor-neutral ROCm format, which makes the massive body of source code written for CUDA available to AMD hardware and other hardware environments.

  • High-Performance Python – GPUs

    When GPUs became available, C code via CUDA, a parallel computing platform and programming model developed by Nvidia for GPUs, was the logical language of choice. Since then, Python has become the tool of choice for machine learning, deep learning, and, to some degree, scientific code in general.

    Not long after the release of CUDA, the Python world quickly created tools for use with GPUs. As happens with new technologies, a plethora of tools emerged to integrate Python with GPUs. For some time, these tools and libraries were adequate, but soon they started to show their age. The biggest problem was incompatibility.

    If you used a tool to write code for the GPU, no other tools could read or use the data on the GPU. After making computations on the GPU with one tool, the data had to be copied back to the CPU. Then a second tool had to copy the data from the CPU to the GPU before commencing its computations. The data movement between the CPU and the GPU really affected overall performance. However, these tools and libraries allowed people to write functions that worked with Python.

    In this article, I discuss the Python GPU tools that are being actively developed and, more importantly, likely to interoperate. Some tools don’t need to know CUDA for GPU code, and other tools do need to know CUDA for custom Python kernels.

  • Porting CUDA to HIP

    You’ve invested money and time in writing GPU-optimized software with CUDA, and you’re wondering if your efforts will have a life beyond the narrow, proprietary hardware environment supported by the CUDA language.

    Welcome to the world of HIP, the HPC-ready universal language at the core of AMD’s all-open ROCm platform [1]. You can use HIP to write code once and compile it for either the Nvidia or AMD hardware environment. HIP is the native format for AMD’s ROCm platform, and you can compile it seamlessly using the open source HIP/Clang compiler. Just add CUDA header files, and you can also build the program with CUDA and the NVCC compiler stack (Figure 1).

  • OpenMP – Coding Habits and GPUs

    When first using a new programming tool or programming language, it’s always good to develop some good general habits. Everyone who codes with OpenMP directives develops their own habits – some good and some perhaps not so good. As this three-part OpenMP series finishes, I highlight best practices from the previous articles that can lead to good habits.

    Enamored with new things, especially those that drive performance and scalability, I can’t resist throwing a couple more new directives and clauses into the mix. After covering these new directives and clauses, I will briefly discuss OpenMP and GPUs. This pairing is fairly recent, and compilers are still catching up to the newer OpenMP standards, but it is important for you to understand that you can run OpenMP code on targeted offload devices (e.g., GPUs).

  • News and views on the GPU revolution in HPC and Big Data:

    Exploring AMD's Ambitious ROCm Initiative
    Porting CUDA to HIP
    Python with GPUs
    OpenMP – Coding Habits and GPUs

Bringing PostgreSQL to Government

Filed under
Server
OSS
  • Crunchy Data, ORock Technologies Form Open Source Cloud Partnership for Federal Clients

    Crunchy Data and ORock Technologies have partnered to offer a database-as-a-service platform by integrating the former's open source database with the latter's managed offering designed to support deployment of containers in multicloud or hybrid computing environments.

    The partnership aims to deliver PostgreSQL as a service within ORock's Secure Containers as a Service, which is certified for government use under the Federal Risk and Authorization Management Program, Crunchy Data said Tuesday.

  • Crunchy Data and ORock Technologies Partnership Brings Trusted Open Source Cloud Native PostgreSQL to Federal Government

    Crunchy Data and ORock Technologies, Inc. announced a partnership to bring Crunchy PostgreSQL for Kubernetes to ORock’s FedRAMP authorized container application Platform as a Service (PaaS) solution. Through this collaboration, Crunchy Data and ORock will offer PostgreSQL-as-a-Service within ORock’s Secure Containers as a Service with Red Hat OpenShift environment. The combined offering provides a fully managed Database as a Service (DBaaS) solution that enables the deployment of containerized PostgreSQL in hybrid and multi-cloud environments.

    Crunchy PostgreSQL for Kubernetes has achieved Red Hat OpenShift Operator Certification and provides Red Hat OpenShift users with the ability to provision trusted open source PostgreSQL clusters, elastic workloads, high availability, disaster recovery, and enterprise authentication systems. By integrating with the Red Hat OpenShift platform within ORock’s cloud environments, Crunchy PostgreSQL for Kubernetes leverages the ability of the Red Hat OpenShift Container Platform to unite developers and IT operations on a single FedRAMP-compliant platform to build, deploy, and manage applications consistently across hybrid cloud infrastructures.

Red Hat and Containers

Filed under
Red Hat
Server
OSS
  • Queensland government looks to open source for single sign-on project

    Red Hat Single Sign-On, which is based on the open source Keycloak project, and the Apollo GraphQL API Gateway platform will be the two key software components underpinning a Queensland effort to deliver a single login for access to online government services.

    Queensland is implementing single sign-on capabilities for state government services, including ‘tell us once’ capabilities that will allow basic personal details of individuals to be, where consent is given by an individual, shared between departments and agencies.

  • Red Hat Releases Open Source Project Quay Container Registry
  • Red Hat open sources Project Quay container registry

    Yesterday, Red Hat introduced the open source Project Quay container registry, which is the upstream project representing the code that powers Red Hat Quay and Quay.io. Open-sourced as a Red Hat commitment, Project Quay “represents the culmination of years of work around the Quay container registry since 2013 by CoreOS, and now Red Hat,” the official post reads.

    The Red Hat Quay container image registry provides storage and enables users to build, distribute, and deploy containers. It also helps users secure their image repositories with automation, authentication, and authorization systems. It is compatible with most container environments and orchestration platforms and is available as a hosted service or on premises.

  • Red Hat declares Quay code open

    Red Hat has open sourced the code behind Project Quay, the six-year-old container registry it inherited through its purchase of CoreOS.

    The code in question powers both Red Hat Quay and Quay.io. It also includes Clair, the open source security project developed by the Quay team and integrated with the registry back in 2015.

    In the blog post announcing the move, Red Hat principal software engineer – and CoreOS alumnus – Joey Schorr, wrote, “We believe together the projects will benefit the cloud-native community to lower the barrier to innovation around containers, helping to make containers more secure and accessible.”

  • New Open Source Offerings Simplify Securing Kubernetes

    In advance of the upcoming KubeCon 2019 (CyberArk booth S55), the flagship event for all things Kubernetes and Cloud Native Computing Foundation, CyberArk is adding several new Kubernetes offerings to its open source portfolio to improve the security of application containers within Kubernetes clusters running enterprise workloads.

  • Java Applications Go Cloud-Native with Open-Source Quarkus Framework

    "With Quarkus, Java developers are able to continue to work in Java, the language they are proficient in, even when they are working with new, cloud-native technologies," John Clingan, senior principal product manager of middleware at Red Hat, told IT Pro Today. "With memory utilization measured in 10s of MB and startup time measured in 10s of milliseconds, Quarkus enables organizations to continue with their significant Java investments for both microservices and serverless."

    Many organizations have been considering alternative runtimes to Java, like Node.js and Go, due to high memory utilization of Java applications, according to Clingan. In addition, Java’s startup times are generally too slow to be an effective solution for serverless environments. As such, Clingan said that even if an organization decided to stick with Java for microservices, it would be forced to switch to an alternative runtime for serverless, or functions-as-a-service (FaaS), deployment.

  • Styra Secures $14M in Funding Led by Accel to Expand Open Source and Commercial Solutions for Kubernetes/Cloud-native Security

    New technologies such as Kubernetes, containers, service meshes, and CI/CD automation speed application delivery and development. However, they lack a common framework for authorization to determine where access should be allowed and where it should be denied. Styra's commercial and open source solutions, purpose-built for the scale of cloud-native development, provide this authorization layer to mitigate risk across cloud application components, as well as the infrastructure they are built upon.

The Secrets of Docker Secrets

Filed under
Server
HowTos

Most web apps need login credentials of some kind, and it is a bad idea to put them in your source code, where they get saved to a Git repository that everyone can see. Usually these are handled by environment variables, but Docker offers an alternative it calls Docker secrets. The idea is deceptively simple in retrospect, but while you are figuring it out it can seem arcane, and it is difficult to parse what is going on.

Essentially, the secrets function creates in-memory files inside the Docker container that hold the secret data. The data can come from local files or from a Docker swarm.
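To make the two sources concrete, here is a minimal Compose-file sketch for swarm mode; the service, image, and secret names are invented for illustration, and the secret's data is sourced from a local file:

```yaml
version: "3.7"

services:
  web:
    image: example/webapp:latest   # hypothetical image that reads /run/secrets/db_password
    secrets:
      - db_password                # mounted read-only at /run/secrets/db_password

secrets:
  db_password:
    file: ./db_password.txt        # secret data sourced from a local file
```

Deployed with docker stack deploy, the swarm stores the secret and mounts it as an in-memory file inside each task's container; the application never sees it in an environment variable.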

The first thing to know is that the application running in the Docker image must be written to take advantage of Docker secrets. Instead of reading a password from an environment variable, it reads it from the filesystem at /run/secrets/secretname. Not all available images use this functionality; if an image's documentation doesn't describe how to use Docker secrets, they won't work. The files will be created in the container, but the application won't read them.
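As a minimal sketch of the convention described above (the helper name and the environment-variable fallback are my own, not part of Docker's API), an application can look for the secret file first and fall back to an environment variable:

```python
import os


def read_secret(name, secrets_dir="/run/secrets"):
    """Return a secret's value from Docker's in-memory file,
    falling back to an environment variable of the same name."""
    path = os.path.join(secrets_dir, name)
    try:
        with open(path) as f:
            # Secrets are plain files; strip the trailing newline.
            return f.read().strip()
    except FileNotFoundError:
        # Not running under Docker secrets; fall back to the environment,
        # using the conventional upper-case variable name.
        return os.environ.get(name.upper())
```

With this pattern, the same image works both under a swarm (where Docker mounts /run/secrets/db_password) and in plain docker run, where you would pass DB_PASSWORD in the environment instead.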

Read more

Server Leftovers

Filed under
Server
  • Knative at 1: New Changes, New Opportunities

    This summer marked the one-year anniversary of Knative, an open-source project that provides the fundamental building blocks for serverless workloads in Kubernetes. In its relatively short life (so far), Knative is already delivering on its promise to boost organizations’ ability to leverage serverless and FaaS (functions as a service).

    Knative isn’t the only serverless offering for Kubernetes, but it has become a de facto standard because it arguably has a richer set of features and can be integrated more smoothly than the competition. And the Knative project continues to evolve to address businesses’ changing needs. In the last year alone, the platform has seen many improvements, giving organizations looking to expand their use of Kubernetes through serverless new choices, new considerations and new opportunities.

  • Redis Labs Leverages Kubernetes to Automate Database Recovery

    Redis Labs today announced it has enhanced the Operator software for deploying its database on Kubernetes clusters to include an automatic cluster recovery that enables customers to manage a stateful service as if it were stateless.

    Announced at Redis Day, the latest version of Kubernetes Operator for Redis Enterprise makes it possible to spin up a new instance of a Redis database in minutes.

    Howard Ting, chief marketing officer for Redis Labs, says as Kubernetes has continued to gain traction, it became apparent that IT organizations need tools to provision Redis Enterprise for Kubernetes clusters. That requirement led Redis Labs to embrace Operator software for Kubernetes developed by CoreOS, which has since been acquired by Red Hat. IT teams can either opt to recover databases manually using Kubernetes Operator or configure the tool to recover databases automatically anytime a database goes offline. In either case, he says, all datasets are loaded and balanced across the cluster without any need for manual workflows.

  • Dare to Transform IT with SUSE Global Services

Mirantis acquires Docker Enterprise

Filed under
Server
OSS

Docker, the technology, is famous: it kick-started the container revolution. Docker, the company, is famous for failing to profit from its technology. Now, in a move indicating that Docker CEO Rob Bearden wasn't able to obtain badly needed capital, Mirantis, a prominent OpenStack and Kubernetes cloud company, has acquired the Docker Enterprise product line, developers, and business.

The deal is effective immediately. Mirantis CEO and co-founder Adrian Ionel, said in an e-mail interview, "We are not disclosing the terms of the deal. The deal closes Wednesday [Nov. 12, 2019] morning."

Read more

WordPress 5.3 “Kirk”

Filed under
Server
OSS

5.3 expands and refines the block editor with more intuitive interactions and improved accessibility. New features in the editor increase design freedom, providing additional layout options and style variations that give designers more control over the look of a site.

This release also introduces the Twenty Twenty theme, giving users more design flexibility and integration with the block editor. Creating beautiful web pages and advanced layouts has never been easier.

Read more

20 Essential Things to Know if You’re on Nginx Web Server

Filed under
Server
HowTos

Nginx is one of the two most widely used web servers in the world, and it has cemented that position since its inception 15 years ago. Nginx is known for its superior performance, which in some benchmarks is roughly 2.5 times faster than Apache. It is best suited to websites that serve a large number of static assets, but it can also be used for general-purpose websites. It outgrew its intended use long ago and is now used for a plethora of tasks like reverse proxying, caching, load balancing, media streaming, and more.

Read more

Where To Try Nextcloud Service for Free?

Filed under
Server
OSS

Nextcloud is free, server-based, all-in-one software that serves as an alternative to Google Drive, with a plugin system, and you can install it on your own server. With it you can create a file server for your office, and you can add plugins such as LibreOffice Online ("Collabora") and video conferencing ("Talk") as you wish. But if you don't have a server to install it on, where can you test a Nextcloud service for free? There is good news: the Nextcloud project already provides a gratis demo server at try.nextcloud.com that everybody can use. By creating a free account, you can instantly test Nextcloud with your friends. Enjoy!

Read more


More in Tux Machines

Google outlines plans for mainline Linux kernel support in Android

This is an extremely long journey that results in every device shipping millions of lines of out-of-tree kernel code. Every shipping device kernel is different and device specific—basically no device kernel from one phone will work on another phone.

The mainline kernel version for a device is locked in at the beginning of an SoC's initial development, so it's typical for a brand-new device to ship with a Linux kernel that is two years old. Even Google's latest and, uh, greatest device, the Pixel 4, shipped in October 2019 with Linux kernel 4.14, an LTS release from November 2017. It will be stuck on kernel 4.14 forever, too. Android devices do not get kernel updates, probably thanks to the incredible amount of work needed to produce just a single device kernel, and the chain of companies that would need to cooperate to do it. Thanks to kernel updates never happening, this means every new release of Android usually has to support the last three years of LTS kernel releases (the minimum for Android 10 is 4.9, a 2016 release). Google's commitments to support older versions of Android with security patches means the company is still supporting kernel 3.18, which is five years old now.

Google's band-aid solution for this so far has been to team up with the Linux community and support mainline Linux LTS releases for longer, and they're now up to six years of support. Last year, at Linux Plumbers Conference 2018, Google announced its initial investigation into bringing the Android kernel closer to mainline Linux. This year it shared a bit more detail on its progress so far, but it's definitely still a work in progress.

"Today, we don't know what it takes to be added to the kernel to run on a [specific] Android device," Android Kernel Team lead Sandeep Patil told the group at LPC 2019. "We know what it takes to run Android but not necessarily on any given hardware. So our goal is to basically find all of that out, then upstream it and try to be as close to mainline as possible."

Read more

Thin clients showcase new Gemini Lake Refresh chips

The Futro S9010, S7010, and S5010, which we saw on Fanless Tech, are intended to run the proprietary, Linux-based eLux RP 6.7.0 CR, although they also support Windows 10 IoT Enterprise.

Read more

Audiocasts/Shows/Screencasts: Linux in the Ham Shack, Linux Headlines, LibreOffice 6.4 Alpha Quick Look and OpenIndiana 2019.10 Overview

Announcing coreboot 4.11

The coreboot project is proud to announce the release of coreboot 4.11. This release cycle was a bit shorter, to get closer to our regular schedule of releasing in spring and autumn. Since 4.10 there have been 1630 new commits by over 130 developers. Of these, about 30 contributed to coreboot for the first time. Thank you to all contributors who made 4.11 what it is, and welcome to the project to all new contributors!

Read more

Also: Coreboot 4.11 Brings Many Intel Improvements, New Support For Supermicro / Lenovo Boards