
Kubernetes: KubeInvaders, CSI Ephemeral Inline Volumes and Reviewing 2019 in Docs

Filed under
Server
OSS
  • KubeInvaders - Gamified Chaos Engineering Tool for Kubernetes

    Some months ago, I released my latest project called KubeInvaders. The first time I shared it with the community was during an OpenShift Commons Briefing session. KubeInvaders is a gamified chaos engineering tool for Kubernetes and OpenShift that helps you test how resilient your Kubernetes cluster is, in a fun way.

  • CSI Ephemeral Inline Volumes

    Typically, volumes provided by an external storage driver in Kubernetes are persistent, with a lifecycle that is completely independent of pods or (as a special case) loosely coupled to the first pod which uses a volume (late binding mode). The mechanisms for requesting and defining such volumes in Kubernetes are Persistent Volume Claim (PVC) and Persistent Volume (PV) objects. Originally, volumes that are backed by a Container Storage Interface (CSI) driver could only be used via this PVC/PV mechanism.

    But there are also use cases for data volumes whose content and lifecycle is tied to a pod. For example, a driver might populate a volume with dynamically created secrets that are specific to the application running in the pod. Such volumes need to be created together with a pod and can be deleted as part of pod termination (ephemeral). They get defined as part of the pod spec (inline).

    Since Kubernetes 1.15, CSI drivers can also be used for such ephemeral inline volumes. The CSIInlineVolume feature gate had to be set to enable it in 1.15 because support was still in alpha state. In 1.16, the feature reached beta state, which typically means that it is enabled in clusters by default.

    CSI drivers have to be adapted to support this because, although two existing CSI gRPC calls are used (NodePublishVolume and NodeUnpublishVolume), the way they are used is different and not covered by the CSI spec: for ephemeral volumes, kubelet invokes only NodePublishVolume when asking the CSI driver for a volume. All other calls (like CreateVolume, NodeStageVolume, etc.) are skipped. The volume parameters are provided in the pod spec and copied from there into the NodePublishVolumeRequest.volume_context field. There are currently no standardized parameters; even common ones like size must be provided in a format defined by the CSI driver. Likewise, only NodeUnpublishVolume gets called after the pod has terminated and the volume needs to be removed.
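    As a rough sketch, such an ephemeral volume is declared inline in the pod spec rather than through a PVC. The driver name and attributes below are hypothetical placeholders; each real CSI driver defines its own volumeAttributes:

    ```yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: csi-inline-demo
    spec:
      containers:
        - name: app
          image: busybox
          command: ["sleep", "3600"]
          volumeMounts:
            - name: scratch
              mountPath: /data
      volumes:
        - name: scratch
          csi:                              # inline CSI volume: no PVC/PV objects involved
            driver: example.csi.vendor.io   # hypothetical driver name
            volumeAttributes:               # free-form, driver-defined parameters
              size: 1Gi
    ```

    The volume is created with the pod and removed as part of pod termination; kubelet hands the volumeAttributes to the driver via the volume_context of the NodePublishVolume call.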

  • Reviewing 2019 in Docs

    Hi, folks! I’m one of the co-chairs for the Kubernetes documentation special interest group (SIG Docs). This blog post is a review of SIG Docs in 2019. Our contributors did amazing work last year, and I want to highlight their successes.

    Although I review 2019 in this post, my goal is to point forward to 2020. I observe some trends in SIG Docs, some good, others troubling. I want to raise visibility before those challenges increase in severity.

What Must be Considered Before Choosing a Container Platform?

Filed under
Server
OSS

An increasing number of IT groups are adopting development tools, such as containers, to create cloud-native apps that operate consistently across public, private, and hybrid clouds.

However, the trickiest part is finding the best container platform for the organization. It is hard to make the right decisions about container orchestration for managing container lifecycles at scale and accelerating innovation.

Containers are Linux

Since containers always run on a Linux host, every containerized application ultimately runs on Linux.

Container orchestration tools, which manage container lifecycles, also work best with Linux. Kubernetes, today’s most popular container orchestration platform, was built on Linux concepts and makes use of Linux tooling and application programming interfaces (APIs) to manage containers.

Companies are advised to choose a Linux distribution they know and trust before deciding on the OS for their container platform. Red Hat Enterprise Linux (RHEL) is well suited to running a company’s containers, providing stability and security features while allowing developers to stay agile.

Read more

Kubernetes: KubeDR, Elastic and Bug Bounty

Filed under
Server
OSS
  • Catalogic Software Announces KubeDR – Open Source Kubernetes Disaster Recovery

    Catalogic Software, a developer of innovative data protection solutions, today announced the introduction of its Catalogic open source utility, KubeDR, built to provide backup and disaster recovery for Kubernetes cluster configuration, certificates and metadata. Kubernetes is the fastest growing and most popular platform for managing containerized workloads in hybrid cloud environments. Catalogic is also launching cLabs to support new products, open source initiatives and innovations, such as KubeDR.

    Kubernetes stores cluster data in etcd, a distributed key-value store that holds configuration data for distributed systems. While there are solutions focused on protecting persistent volumes, cluster configuration data is often overlooked by existing industry solutions. There is a market need for backup and recovery of Kubernetes cluster data stored in etcd. Catalogic’s new KubeDR is a user-friendly, secure, scalable, open source solution for backup and disaster recovery designed specifically for Kubernetes applications.

  • Elastic Brings Observability Platform to Kubernetes

    Elastic N.V. announced this week that Elastic Cloud, a subscription instance of an observability platform based on the open source Elasticsearch engine, is generally available on Kubernetes.

    According to Anurag Gupta, principal product manager for Elastic Cloud, deploying Elastic Cloud on Kubernetes (ECK) eliminates the need to invoke an instance of the platform running outside the Kubernetes environment.

  • Kubernetes Launches Bug Bounty

    Kubernetes, the open-source container management system, has opened up its formerly private bug bounty program and is asking hackers to look for bugs not just in the core Kubernetes code, but also in the supply chain that feeds into the project.

    The new bounty program is supported by Google, which originally wrote Kubernetes, and it’s an extension of what had until now been an invitation-only program. Google has lent financial support and security expertise to other bug bounty programs for open source projects. The range of rewards is from $100 to $10,000 and the scope of what’s considered a valid target is unusual.

  • Google Partners With CNCF, HackerOne on Kubernetes Bug Bounty
  • CNCF, Google, and HackerOne launch Kubernetes bug bounty program

    Bug bounty programs motivate individuals and hacker groups to not only find flaws but disclose them properly, instead of using them maliciously or selling them to parties that will. Originally designed by Google and now run by the CNCF, Kubernetes is an open source container orchestration system for automating application deployment, scaling, and management. Given the hundreds of startups and enterprises that use Kubernetes in their tech stacks, it’s significantly cheaper to proactively plug security holes than to deal with the aftermath of breaches.

Enterprise Insights: Red Hat And The Public Cloud

Filed under
Red Hat
Server

Open source projects are the epicenter of technology innovation today. Docker and Kubernetes are revolutionizing cloud-native computing, along with data-focused projects like Mongo and Redis and many others. Even as open source projects drive innovation, however, sponsoring companies face a growing existential threat from hyper-scale cloud providers.

Red Hat is the recognized leader in enterprise open source support. It's a successful public company with a track record of growth, so it was somewhat puzzling to understand why the Red Hat board decided to sell to IBM this past year.

Read more

16 Open Source Cloud Storage Software for Linux in 2020

Filed under
Server
OSS

As the name suggests, the cloud is something vast and spread over a large area. In technical terms, the cloud is a virtual resource that provides services to end users, such as storage, application hosting, and the virtualization of physical infrastructure. Today, cloud computing is used by small and large organizations alike, for data storage and for offering customers the advantages listed above.

Three main types of service are associated with the cloud: SaaS (Software as a Service), which lets users access publicly available applications run by large organizations for storing their data, such as Gmail; PaaS (Platform as a Service), which hosts users’ apps or software on a public cloud, such as Google App Engine; and IaaS (Infrastructure as a Service), which virtualizes physical machines and makes them available to customers as if they were real machines.

Read more

Edge AI server packs in a 16-core Cortex-A72 CPU plus up to 32 i.MX8M SoCs and 128 NPUs

Filed under
GNU
Linux
Server
Hardware

SolidRun’s “Janux GS31 AI Inference Server” runs Linux on its CEx7 LX2160A Type 7 module equipped with NXP’s 16-core Cortex-A72 LX2160A. The system also supplies up to 32 i.MX8M SoCs for video and up to 128 Gyrfalcon Lightspeeur 2803 NPUs via multiple “Snowball” modules.

When people talk about edge AI servers, they might be referring to some of the high-end embedded systems we regularly cover here at LinuxGizmos or perhaps something more server-like such as SolidRun’s rackmount form factor Janux GS31 AI Inference Server. The system would generally exceed the upper limits of our product coverage, but it’s a particularly intriguing beastie. The Janux GS31 is based on a SolidRun CEx7 LX2160A COM Express Type 7 module, which also powers the SolidRun HoneyComb LX2K networking board that we covered in June.

Read more

Also: Google Cloud Now Offering IBM Power Systems

Kubernetes on MIPS

Filed under
Server
Hardware
OSS

Background

MIPS (Microprocessor without Interlocked Pipelined Stages) is a reduced instruction set computer (RISC) instruction set architecture (ISA) that appeared in 1981 and was developed by MIPS Technologies. The MIPS architecture is now widely used in many electronic products.

Kubernetes officially supports a variety of CPU architectures, such as x86, arm/arm64, ppc64le, and s390x. However, it does not yet support MIPS. With the widespread use of cloud-native technology, users on the MIPS architecture have an urgent need for Kubernetes on MIPS.

Achievements

For many years, to enrich the ecosystem of the open-source community, we have been working on adapting the MIPS architecture for Kubernetes use cases. With continuous iterative optimization and improvements in MIPS CPU performance, we have made breakthrough progress on the mips64el platform.

Over the years, we have been actively participating in the Kubernetes community and have gained rich experience in using and optimizing Kubernetes technology. Recently, we adapted Kubernetes to the MIPS architecture platform and reached a new stage on that journey. The team completed the migration and adaptation of Kubernetes and related components, built a stable and highly available MIPS cluster, and passed the conformance tests for Kubernetes v1.16.2.

Read more

Kubernetes: Looking for Bugs, New Study and SUSE's Stake

Filed under
Server
OSS
  • Announcing the Kubernetes bug bounty program

    We aimed to set up this bug bounty program as transparently as possible, with an initial proposal, evaluation of vendors, and working draft of the components in scope. Once we onboarded the selected bug bounty program vendor, HackerOne, these documents were further refined based on the feedback from HackerOne, as well as what was learned in the recent Kubernetes security audit. The bug bounty program has been in a private release for several months now, with invited researchers able to submit bugs and help us test the triage process. After almost two years since the initial proposal, the program is now ready for all security researchers to contribute!

    What’s exciting is that this is rare: a bug bounty for an open-source infrastructure tool. Some open-source bug bounty programs exist, such as the Internet Bug Bounty, but that program mostly covers core components that are consistently deployed across environments; most bug bounties are still for hosted web apps. In fact, with more than 100 certified distributions of Kubernetes, the bug bounty program needs to apply to the Kubernetes code that powers all of them. By far, the most time-consuming challenge has been ensuring that the program provider (HackerOne) and the researchers who do the first line of triage understand Kubernetes and can easily test the validity of a reported bug. As part of the bootstrapping process, HackerOne had their team pass the Certified Kubernetes Administrator (CKA) exam.

  • Kubernetes: a secure, flexible and automated edge for IoT developers

    Cloud native software such as containers and Kubernetes, together with IoT/edge, is playing a prominent role in the digital transformation of enterprise organisations. It is particularly critical to DevOps teams that are focused on faster software releases and more efficient IT operations through collaboration and automation. Most cloud native software is open source, which broadens the pool of developers contributing to and customising the software. This has led to streamlined versions of Kubernetes with small footprints that are suited to IoT/edge workloads.

  • What’s New with SUSE CaaS Platform?

    SUSE CaaS Platform continues its steady pace of advancement, delivering new capabilities targeted at improving the Kubernetes platform operator experience. In addition to updating to Kubernetes 1.16, the SUSE CaaS Platform also now enables operators to consolidate operations across multi-cluster, multi-cloud, and multi-platform environments; to simplify cluster and application management with a web-based console; and to optimize system performance with powerful monitoring and management capabilities.

    Customer centricity was once again at the heart of feature considerations and enhancements for SUSE CaaS Platform. Over the past couple of weeks, we heard an increasing desire from our customers for key capabilities like the need for a unified management console and the need for more powerful data visualization. We listened to you, and your needs, and let that be our guide for development.

Kubernetes in the News

Filed under
Server
OSS
  • Should You Be Using Kubernetes?

    Like most people, I was only vaguely familiar with Kubernetes until my company started working with it. Since then, I’ve gained a deep appreciation for what it brings to cloud application management.

    For those unfamiliar, Kubernetes is a container-orchestration framework developed in 2014, originally as an internal project at Google. The framework automates much of the operational work involved in running software, including deployment, management, and scaling. The Cloud Native Computing Foundation currently manages Kubernetes as an open source project, distributed under the Apache 2.0 license.

    When we started our project, I understood only the basics of this framework. But as I dove deeper into the infrastructure and logic of Kubernetes, I discovered its distinct advantages when it came to integrating hardware, vendors and clouds onto a single platform.

  • Kubernetes Gets a Runtime Security Tool
  • 4 Ways Kubernetes Could Be Improved

    Kubernetes is good, but it could be improved. Here are some things that could be better.

    Like almost everyone else these days, I think Kubernetes is the best container orchestration solution. But that doesn’t mean Kubernetes is without its flaws. For my money, there are a number of things that Kubernetes could do better, and needs to, if it is going to remain the de facto open source container orchestrator.

    Indeed, some days I think the best thing I can say about Kubernetes is that it has fewer shortcomings than its competitors (I’m looking at you, Docker Swarm) rather than that it truly stands apart for its strengths.

    Here’s a list of areas where Kubernetes can be improved.

  • VMware Eyes Storage Options for Kubernetes
  • What Do Customers Want From The Kubernetes Ecosystem In 2020
  • Why the Air Force put Kubernetes in an F-16

Server: SysAdmins, So-called 'Ops', Infrastructure-as-Code (More Buzzwords) and Kubernetes Hype

Filed under
Server
  • 5 ops hacks for sysadmins

    As a sysadmin, every day I am faced with problems I need to solve quickly because there are users and managers who expect things to run smoothly. In a large environment like the one I manage, it's nearly impossible to know all of the systems and products from end to end, so I have to use creative techniques to find the source of the problems and (hopefully) come up with solutions.

    This has been my daily experience for well over 20 years, and I love it! Coming to work each day, I never quite know what will happen. So, I have a few quick and dirty tricks that I default to when a problem lands on my lap, but I don't know where to start.

  • Are you being the right person for DevOps?

    What does it mean to be the "right" person in a DevOps environment? That's the question that Josh Atwell, senior tech advocate at Splunk, tried to answer in his Lightning Talk at All Things Open 2019.

    "Being the right person for DevOps is being more than just your ops/dev role," says Josh. "In order to be the right person for DevOps, you have to be improving yourself, and you have to be working to improve for others."

    Watch Josh's Lightning Talk, "Are you being the right person for DevOps?" to learn why you should add communication, selflessness, self-care, and celebration to your list of core DevOps traits.

  • Infrastructure-as-Code mistakes and how to avoid them

    Two industry trends point to a gap in the DevOps tooling chosen by many. Operations teams need more than an Infrastructure-as-Code approach; they need a complete model-driven operations mentality. Learn how Canonical has addressed these concerns by developing Juju, an open source DevOps tool, which has allowed it to create multiple world-leading products.

    [...]

    Juju is simple, secure DevOps tooling built to manage today’s complex applications wherever you run your software. Compute, storage, networking, service discovery, and health monitoring come for free and work with Kubernetes, the cloud, and the laptop.

    Juju allows your software infrastructure to maintain an always-optimal configuration. As your deployment changes, every application’s configuration operations are dynamically adjusted by charms. Charms are software packages that run alongside your applications and encode business rules for adapting to environmental changes.

    Using a model-driven mentality means raising the level of abstraction. Users of Juju quickly get used to a flexible, declarative syntax that is substrate-agnostic. Juju interacts with the infrastructure provider, but the operations code remains the same across substrates. Focusing on creating a software model of your product’s infrastructure increases productivity and reduces complexity.
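    As an illustrative sketch of that declarative style, a Juju bundle models applications and their relations in YAML; the charm names and relation below are assumptions for illustration, not taken from the article:

    ```yaml
    # Hypothetical bundle: a software model of a two-application deployment.
    applications:
      web:
        charm: nginx       # the charm encodes the operations code for this app
        num_units: 2
      db:
        charm: postgresql  # scaling or relating apps changes the model, not scripts
        num_units: 1
    relations:
      - ["web", "db"]      # Juju wires the applications together via their charms
    ```

    Deploying the same bundle against a different substrate (a cloud, Kubernetes, or a laptop) leaves this model unchanged; only the underlying provider differs.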

    By automating infrastructure at a low level of abstraction, DevOps has bought the industry some breathing space. But that breathing space is running out.

  • 5 Kubernetes trends to watch in 2020

    “As more and more organizations continue to expand on their usage of containerized software, Kubernetes will increasingly become the de facto deployment and orchestration target moving forward,” says Josh Komoroske, senior DevOps engineer at StackRox.

    Indeed, some of the same or similar catalysts of Kubernetes interest to this point – containerization among them – are poised to continue in 2020. The shift to microservices architecture for certain applications is another example.
