Server: Kubernetes, Containers, and Microsoft Downtime

Filed under
Server
  • Kubernetes is the new operating environment (Part 1)

    This is the first in a series of articles that consider the role of Kubernetes and application servers. Do application servers need to exist? Where does the current situation leave developers trying to choose the right path forward for their applications?

    Why Kubernetes is the new application server

    By now you’ve likely read “Why Kubernetes is The New Application Server” and you might be wondering what that means for you. How does it impact Java EE or Jakarta EE and Eclipse MicroProfile? What about application servers or fat JARs? Is it the end as we’ve known it for nearly two decades?

    In reality, it doesn’t impact the worldview for most. It’s in line with the efforts of a majority of vendors around Docker and Kubernetes deployments over the last few years. In addition, there’s greater interest in service mesh infrastructures, such as Istio, and how they can further assist with managing Kubernetes deployments.

    All these factors drive the current trend in development: pushing concerns traditionally handled during application development down into lower layers of the stack, into the infrastructure or operating environment the application runs on.

    Throughout the series, we will see that there is no need for doom and gloom. Although the mechanisms might change, there’s still a place for application servers and fat JARs when developing applications.

  • Understanding the State of Container Networking

    Container networking is a fast-moving space with lots of different pieces. In a session at the Open Source Summit, Frederick Kautz, principal software engineer at Red Hat, outlined the state of container networking today and where it is headed in the future.

    Containers have become increasingly popular in recent years, particularly the use of Docker containers, but what exactly are containers?

    Kautz explained that containers make use of the Linux kernel's ability to provide multiple isolated user-space areas. That isolation is enabled by two core elements: control groups (cgroups) and namespaces. Cgroups limit and isolate the resource usage of groups of processes, while namespaces partition key kernel structures for processes, hostnames, users and networking (a minimal sketch of creating such namespaces appears after this list).

  • Lightning strikes put Microsoft Azure data centre offline

    Microsoft's Azure cloud platform has suffered a massive outage that affected customers in various parts of the world, with cooling problems being identified at about 2.30am Pacific Time on Tuesday (7.30pm AEST Tuesday).
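
To make the cgroups/namespaces point above concrete, here is a minimal sketch in Go of starting a shell inside new UTS, PID and mount namespaces. It assumes a Linux host and root privileges, and it only exercises the namespace side; cgroup limits would be configured separately through the cgroup filesystem.

    package main

    // Minimal namespace sketch: run /bin/sh with its own hostname (UTS),
    // process ID and mount namespaces. Requires Linux and root privileges.
    import (
        "os"
        "os/exec"
        "syscall"
    )

    func main() {
        cmd := exec.Command("/bin/sh")
        cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
        // Ask the kernel for new namespaces when cloning the child process.
        cmd.SysProcAttr = &syscall.SysProcAttr{
            Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
        }
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }

Inside that shell, "echo $$" reports PID 1 and hostname changes stay local to the child, which is exactly the isolation property container runtimes build on.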

UCS 4.3-2 Published! New: Maintenance Mode for Release Updates …

Filed under
Server
Debian

UCS 4.3-2 now offers a maintenance mode for installing release updates via the Univention Management Console (UMC). UMC is the web-based, graphical user interface for administering the entire domain. In the past, short-term failures of the UMC could occur while a release update was being installed, for example because the updated services were restarted. The new maintenance mode significantly improves reliability when installing release updates via UMC. In addition, you can now track the progress of the update.

Read more

Servers: Load Balancing and Failover, Telcos, Google and Beyond Kubernetes

Filed under
Server
  • Improving the Standards of Linux Load Balancing and Failover

    Oracle supports both simple and weighted round-robin load balancing of requests from its web components, with the aim of improving high availability and load distribution (a minimal weighted round-robin sketch appears after this list). Pinning Linux remote direct memory access (RDMA) traffic to a specific path and port raises both performance and security concerns. In an LDAP environment, load balancing writes of user and group data can produce undesirable behavior because of replication: LDAP replication does not guarantee transaction integrity, a limitation inherent to the replication model itself.

    Segmenting user and group data can be effective for distributing load when separate user populations live in distinct branches of the Directory Information Tree (DIT). Maintaining different primary LDAP servers for read and write operations allows that load to be balanced efficiently. Selecting a suitable network interface card can also be beneficial, as it determines which network device is appropriate for transporting the data. RDMA over IP (RDMAIP) adds resilience by creating a high-availability bonding group among adapter ports: if a port is lost, traffic is automatically moved to the other ports in the group. This can be achieved by using Oracle's Reliable Datagram Sockets (RDS).

  • Linux Foundation maps out the telco’s future with edge and AI platforms

    The mobile operator no longer has the luxury of dealing with a relatively closed and well-defined set of technologies and partners. The mobile network is increasingly intertwined with fixed line connections, and also with broad virtualized, programmable platforms, which will be essential to enable new business models and justify the investment in 5G. That sees operators getting deeply involved in a host of new technologies and standards, and increasingly emerging from the secrecy of inhouse labs and working through open source projects. Two important areas of effort are edge computing and machine learning (ML). Both are the focus of several open initiatives, in which certain operators, notably AT&T, are prominent. Both are starting to be deployed, often starting with the…

  • Google infrastructure chief Urs Hölzle: This is the future of software and the cloud

    Look at the history of open source. Twenty years ago there was nothing that was relevant to an enterprise that was open source. Maybe BSD [Berkeley Software Distribution version of Unix], but basically nothing. Five years later, 2003, Linux and the LAMP stack [Linux, the Apache HTTP Server, the MySQL relational database management system and the PHP programming language] was pretty common already. Java wasn’t quite open source, but I’ll throw it in there. Basically, every five years afterwards, the amount of IT where open source was relevant was bigger.

  • Beyond Kubernetes - 5 Promising Cloud-Native Technologies To Watch
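
The Oracle item above mentions simple and weighted round-robin balancing; here is a minimal, generic sketch in Go of the weighted variant. The backend names and weights are made up for illustration, and this is not Oracle's implementation.

    package main

    // Weighted round-robin: each backend is picked in proportion to its weight.
    // This simple expansion approach trades memory for clarity; smoother
    // interleaving schemes exist but are omitted here.
    import "fmt"

    type backend struct {
        addr   string
        weight int
    }

    func weightedRoundRobin(backends []backend) func() string {
        var expanded []string
        for _, b := range backends {
            for i := 0; i < b.weight; i++ {
                expanded = append(expanded, b.addr)
            }
        }
        i := 0
        return func() string {
            addr := expanded[i%len(expanded)]
            i++
            return addr
        }
    }

    func main() {
        next := weightedRoundRobin([]backend{
            {addr: "ldap1.example.com:389", weight: 3},
            {addr: "ldap2.example.com:389", weight: 1},
        })
        // ldap1 is returned three times as often as ldap2.
        for i := 0; i < 8; i++ {
            fmt.Println(next())
        }
    }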

Server: 'Serverless', Mainframes and OpenStack

Filed under
Server
  • Is Serverless the Future of Open Source and Software Development?

    Is serverless computing the next evolution of open source? And, more broadly, is serverless the key to opening up software development to the masses?

    Those two questions were the crux of a short keynote address by Austen Collins, CEO of Serverless Inc., at this week’s Open Source Summit event in Vancouver, British Columbia.

  • Mainframes Get GUI, With Zowe Project

    For the last 50 years, mainframes have literally been the big iron systems that have helped to power critical elements of IT infrastructure. Yet despite the core role that mainframes have held, the primary interface to the mainframe throughout its history has been the 'green screen' command line.

    At the Open Source Summit, the Linux Foundation's Open Mainframe project announced the new Zowe effort which for the first time brings a real graphical user interface to the mainframe. The Open Mainframe project itself was first announced at Linux Foundation's LinuxCon 2015 event in Seattle.

  • OpenStack Bare Metal Clouds, Fast Forward Upgrades and Hardware Accelerators Take Center Stage in Latest Release, ‘Rocky’

    18th release of OpenStack addresses new demands for infrastructure driven by modern use cases like AI, machine learning, NFV and edge computing, by starting with a bare metal foundation and enabling containers, VMs and GPUs

Virtual Servers and Containers News

Filed under
Server
  • Developers’ corner: Top 9 open source projects spawned by Kubernetes

    Kubernetes is at the center of the container revolution today. What began with Docker has gone beyond the confines of a single organization or tool. The container movement has brought the entire IT industry to consolidate around open standards that benefit all organizations, not just a few powerful vendors. This is what Kubernetes represents — a world of software delivery that is built on an open foundation.

  • Despite What VMware Says, Not Everyone Wants to Deploy Containers in VMs [Ed: So the company wants containers to be placed within its proprietary VMs/hypervisors with back doors]

    For its 18th code release, issued today, the OpenStack community is making the software easier to deploy on bare metal. The release, named “Rocky,” takes advantage of the way physical servers are managed to make it as fast and easy to deploy OpenStack on physical servers as in virtual machines (VMs).

    Jonathan Bryce, executive director with the OpenStack Foundation, said, “We see an environment where people want the right building blocks and to be able to pick a physical server or a VM and have the ability to manage it all in a single platform.”

  • VMware Claims Greater Scalability With Open-Source Blockchain Project [Ed: Openwashing]

    Cloud computing and virtualization firm VMware said Tuesday that it has developed an open-source blockchain infrastructure designed to be both scalable and energy efficient.

    Dubbed Project Concord, VMware's blockchain aims to provide a base for blockchain implementations which can solve certain scaling issues by modifying the Byzantine Fault Tolerance consensus algorithm commonly found in blockchain networks.

    Senior researcher Guy Golan Gueta wrote in a company blog post that the project's algorithm uses a different communication procedure than existing consensus protocols that "exploits optimism to provide a common case fast-path execution" and utilizes new cryptographic algorithms.

  • [Podcast] PodCTL #47 – VM Admin vs Container Admin

    This week, we were watching as fall trade show season got started and we noticed that one of the Container 101 sessions had a packed room. This led to a discussion about how many people were still at the 101 stages of container knowledge. TL;DR – it’s still a lot! So we thought it would be useful to do a basic-level show about what a VM-Admin would need to know in order to be a Container Admin. We walked a mile in that admin’s shoes, and laid out a map for how to think about their world in a container-centric way.

New OpenStack cloud release embraces bare metal

Filed under
Server
OSS

OpenStack is getting bigger than ever. It now powers more than 75 public cloud data centers and thousands of private clouds at a scale of more than 10 million compute cores. But it has always been hard to upgrade from one version of OpenStack to another, and hard to deploy on bare metal. With OpenStack's 18th release, Rocky, both problems are much easier to deal with.

Read more

Server: Kubernetes, Hummingbird at Rackspace, Containers

Filed under
Server
  • Kubernetes Development Infrastructure Moving Out of Google Control

    Google helped to create the Linux Foundation's Cloud Native Computing Foundation in July 2015 with the contribution of the Kubernetes container orchestration system. Although Google contributed Kubernetes, it was still running the core infrastructure for building, developing and testing Kubernetes—until now.

    On Aug. 29 at the Linux Foundation's Open Source Summit here, the CNCF and Google announced that Kubernetes development will be moving to the CNCF's control in an effort to further enable multicloud development. Alongside the move, Google announced that it is donating $9 million in Google Cloud Platform credits to enable the CNCF to run Kubernetes development infrastructure for the next three years.

  • The shutdown of the project Hummingbird at Rackspace

    On reflection, I suspect their chances would be better if they were serious about interoperating with Swift. The performance gains that they demonstrated were quite impressive. But their paymasters at RAX weren't into this community development and open-source toys (note that RAX went through a change of ownership while Hummingbird was going on).

  • The container future is here. It’s just not evenly distributed

    Science fiction writer William Gibson once said, “The future is already here -- it’s just not evenly distributed.” He was explaining that things we once thought of as futuristic already were a reality for some people, but not everyone.

    He may as well have been talking about adoption of Linux containers within the federal government.

    While evidence suggests that the public sector’s interest in Linux containers continues to grow, many agencies remain on the fence. Whether due to budget, lack of information or other constraints, government adoption of Linux containers has been slower than it has been in the commercial space. Many agencies continue to view containers as exclusively for the cool kids in Silicon Valley.

Oracle Solaris 11.4

Filed under
OS
Server
  • Oracle Solaris 11.4 Released for General Availability

    I'm pleased to announce the release of Oracle Solaris 11.4. Of the four releases of Oracle Solaris that I've been involved in, this is the best one yet!

    Oracle Solaris is the trusted business platform that you depend on. Oracle Solaris 11 gives you consistent compatibility, is simple to use and is designed to always be secure.

  • Solaris 11.4 released

    Congrats to my colleagues in the Solaris team who released Solaris 11.4 today. Despite the 11.x moniker, this is actually a major Solaris release; Oracle has just decided to go down the perpetual macOS X / Windows 10 version numbering route from now on. (This development is unlikely to faze Solaris veterans, who have been using SunOS 5.x since 1992.)

  • Oracle Solaris 11.4 Officially Released

    Two years after Solaris 11.3 and Oracle opting for a "continuous delivery" model of 11.next updates instead of a "Solaris 12", Solaris 11.4 is out the door today.

    Oracle is talking up Solaris 11.4 with its general availability release as "the trusted business platform", "consistent compatibility, is simple to use and is designed to always be secure", "more than 3,000 applications certified to run on it", and "the only operating system that has completed UNIX V7 certification."

Is Kubernetes free as open source software?

Filed under
Server
OSS

So, is Kubernetes free?

Yes, but also no.

Pure open source Kubernetes is free and can be downloaded from its repository on GitHub. Administrators must build and deploy the Kubernetes release to a local system or cluster or to a system or cluster in a public cloud, such as AWS, Google Cloud Platform (GCP) or Microsoft Azure.

While the pure Kubernetes distribution is free to download, there are always costs involved with open source software. Without professional support, Kubernetes adopters need to pay in-house staff for help or contract someone knowledgeable. The Kubernetes admin needs a detailed working knowledge of Kubernetes software build creation and deployment within a Linux environment.

In effect, users need to know what they're getting into before they adopt open source software in the enterprise.

Read more

Containers: Aqua Security and a Primer

Filed under
Server
  • Aqua Security Open Sources Container Pen Test

    Aqua Security is trying to level the container security playing field by releasing, as an open source project, an edition of a penetration testing tool designed specifically for container clusters.

    Rani Osnat, vice president of product marketing for Aqua Security, says kube-hunter is an automated penetration testing tool that developers and cybersecurity teams can employ to discover vulnerabilities within containers.

    That tool is designed to be run in two modes. Passive hunters run by default and are designed to execute a series of tests that probe for potential access points within your cluster. An active hunting mode then can be employed to execute additional tests against any weaknesses found with the passive hunter. Results from those tests are then shown on a website hosted by Aqua Security.

  • Getting started with Linux containers

    A major drawback of an OS-based model is that it is slow, and to deploy a new application, IT administrators might need to install a new server, which incurs operational costs and requires time.

    When every application has its own copy of the OS, operations are often inefficient. For example, to guarantee security, every application needs its own dedicated server, which results in lots of under-utilized hardware in the data center.

    A container is an isolated environment where the OS uses namespaces to create barriers. Linux containers have all the necessary components to run an application and make it easy to run a container on top of an operating system.

    From a hardware standpoint, containers utilize resources more efficiently. If there is still hardware capacity available, containers can use that and admins won't need to install a new server.


More in Tux Machines

LAS 2018

  • LAS 2018
    This month I was at my second Libre Application Summit in Denver. It was a smaller event than GUADEC, but personally it was my favorite conference so far. One of the main goals of LAS has been to be a place for multiple platforms to discuss the desktop space, not just a GNOME event. This year two KDE members, @aleixpol and Albert Astals Cid, attended and spoke about the release cycle of KDE Applications, Plasma, and the history of Qt. It is always interesting to see how another project solves the same problems and where there is overlap. The elementary folks were there too, since this is @cassidyjames' home turf; he gave a great "It's Not Always Technical" talk as well as a talk with @danrabbit about AppCenter, both covering areas the GNOME Project needs to improve in. I also enjoyed meeting a few other community members such as @Philip-Scott and talking about their use of elementary's platform.
  • Developer Center Initiative – Meeting Summary 21st September
    Since the last blog post there have been two Developer Center meetings, held in coordination with LAS GNOME on Sunday the 9th of September and again on Friday the 21st of September. Unfortunately I couldn't attend the LAS GNOME meeting, but I'll cover the general progress made here.

The "Chinese EPYC" Hygon Dhyana CPU Support Still Getting Squared Away For Linux

Back in June is when the Linux kernel patches appeared for the Hygon Dhyana, the new x86 processors based on AMD Zen/EPYC technology licensed by Chengdu Haiguang IC Design Co for use in Chinese data centers. While the patches have been out for months, they haven't reached the mainline kernel quite yet, but that might change next cycle.

The Hygon Dhyana Linux kernel patches have gone through several revisions, and the code mostly adapts existing AMD Linux kernel code paths for Zen/EPYC to do the same on these new processors. While these initial Hygon CPUs appear to basically be re-branded EPYC CPUs, the identifiers are different: rather than AMD Family 17h, it's now Family 18h, the CPU vendor ID is "HygonGenuine", and there is a new PCI Express device vendor ID, etc. So different areas of the kernel, from CPUFreq to KVM/Xen virtualization to Spectre V2 mitigations, had to be updated for the correct behavior.

Read more
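
As an aside, the vendor and family identifiers mentioned above are visible from userspace as well; here is a minimal, illustrative Go sketch (not the kernel patches themselves) that reads /proc/cpuinfo and reports whether the machine identifies as Hygon:

    package main

    // Read /proc/cpuinfo and report the CPU vendor and family.
    // Hygon Dhyana reports vendor_id "HygonGenuine" and cpu family 24 (0x18);
    // AMD EPYC reports "AuthenticAMD" and family 23 (0x17).
    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/proc/cpuinfo")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        info := map[string]string{}
        s := bufio.NewScanner(f)
        for s.Scan() {
            parts := strings.SplitN(s.Text(), ":", 2)
            if len(parts) == 2 {
                info[strings.TrimSpace(parts[0])] = strings.TrimSpace(parts[1])
            }
        }

        fmt.Printf("vendor_id=%s cpu family=%s\n", info["vendor_id"], info["cpu family"])
        if info["vendor_id"] == "HygonGenuine" {
            fmt.Println("Hygon Dhyana (family 18h) detected")
        }
    }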

Good Support For Wayland Remote Desktop Handling On Track For KDE Plasma 5.15

The KDE Plasma 5.15 release due out next year will likely be in good shape for Wayland remote desktop handling. The KDE Plasma/KWin developers have been pursuing Wayland remote desktop support along a similar route to the GNOME Shell camp, making use of PipeWire and the XDG-Desktop-Portal. Bits are already in place for KDE Plasma 5.13 and the upcoming 5.14 release, but 5.15 is the release where the support sounds like it may be in good shape for end users.

Read more

Linux developers threaten to pull “kill switch”

Linux powers the internet, the Android in your pocket, and perhaps even some of your household appliances. A controversy over politics now has some of its developers threatening to withdraw the license to all of their code, potentially destroying the whole Linux kernel or making it unusable for a very long time.

Read more