Server

Kubernetes 1.16 available from Canonical

Filed under
Server
OSS
Ubuntu

Canonical announces full enterprise support for Kubernetes 1.16, with support covering Charmed Kubernetes, MicroK8s and kubeadm.

MicroK8s will be updated with Kubernetes 1.16, giving users access to the latest upstream release with a single command in under 60 seconds. In addition, MicroK8s gets new add-ons with one-line installs of Helm and Cilium, as well as enhancements, upgrades and bug fixes. Cilium adds enhanced networking features, including Kubernetes Network Policy support. With MicroK8s 1.16, users can develop and deploy enterprise-grade Kubernetes on any Linux desktop, server or VM across 42 Linux distros.

Canonical’s Charmed Kubernetes 1.16 will come with exciting changes like support for Kata Containers, AWS IAM, SSL passthrough and more. Using Kata Containers, insecure or untrusted pods can be run safely in isolation without disrupting trusted pods in deployments. Identity Access Management on AWS can be used to log in to your Charmed Kubernetes cluster. Users get more control over their deployments while benefitting from reduced complexity, thanks to improved LXD support and enhanced Prometheus and OpenStack integration.

“At Canonical, we enable enterprises by reducing the complexity of their Kubernetes deployments. We are actively involved in the Kubernetes community to ensure we listen to, and support our users’ and partners’ needs. Staying on top of security flaws, community issues and features to improve Kubernetes is critical to us. We keep the Ubuntu ecosystem updated with the latest Kubernetes, as soon as it becomes available upstream,” commented Ammar Naqvi, Product Manager at Canonical.

Read more

Did Lilu Ransomware Really Infect Linux Servers

Filed under
Linux
Server
Security

Note that the domain name of this folder has been hidden from view, making it impossible for us to verify whether these files were actually on a Linux server. The article goes on to note that “Lilocked doesn't encrypt system files, but only a small subset of file extensions, such as HTML, JS, CSS, PHP, INI, and various image file formats. This means infected servers continue to run normally.”

This limitation raises the obvious question of whether the core of the Linux server itself has been compromised or whether merely applications connected to the core have been hacked. There are many very insecure website-building applications, such as WordPress, and many insecure mail server applications, such as Exim, that have been repeatedly hacked over the years. Both WordPress and Exim have suffered from dozens of major security problems that have nothing to do with the security of the Linux operating system at the core of all Linux servers. All of the file formats mentioned in the article are files used on WordPress websites and files that can be transmitted via the Exim mail server.
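Since the listed extensions are exactly the file types a web application serves, a quick first triage on a suspect server is to inventory files by extension under the web root. A minimal sketch (the directory layout and file names below are made up purely for illustration):

```python
import os
import tempfile
from collections import Counter

# Extensions the article says Lilocked targets: web-app content, not system files.
TARGETED = {".html", ".js", ".css", ".php", ".ini", ".png", ".jpg"}

def targeted_files(root):
    """Walk `root` and count files whose extension is in the targeted set."""
    hits = Counter()
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            ext = os.path.splitext(name)[1].lower()
            if ext in TARGETED:
                hits[ext] += 1
    return hits

# Demo against a throwaway directory standing in for a web root.
with tempfile.TemporaryDirectory() as root:
    for name in ("index.html", "app.js", "style.css", "config.ini", "server.log"):
        open(os.path.join(root, name), "w").close()
    counts = targeted_files(root)
# counts covers only the web-content files; server.log is ignored.
```

A scan like this says nothing about system binaries or the kernel, which is exactly the point the article makes: the affected files live in the application layer.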

[...]

So instead of 6000 websites on 6000 servers being infected, it looks more like 6000 files on fewer than 1000 websites were infected. And many of these websites could have been on the same server – meaning that perhaps only a couple dozen out of the world's 10 million Linux servers had infected files – and none of the files were actually in the core of any Linux servers.

[...]

Many of these articles were exact copies of the ZDNet article. Thus far, not a single so-called “security expert” has bothered to look into the evidence provided, much less challenge or disagree with this silly claim.

Instead, some make even more extreme claims, noting that there are millions of Linux servers running outdated, unpatched and insecure versions of the Exim software. This is a fact. But given how many holes have been found in Exim, the problem is not with the Linux servers; it is with the Exim software. In my humble opinion, the design of Exim is not secure, and the design of Postfix is more secure.

The solution to this Exim problem is to demand that cPanel support Postfix and to ask Debian to also switch from Exim to Postfix (something Ubuntu has already done for very obvious reasons). This is the benefit of the diversity of free open source software: if one program has problems, there is quite often a more secure alternative that can be installed with just the click of a button. This problem has been going on for years, but it can be fixed in a matter of minutes.

Read more

CentOS 8 To Be Released Next Week

Filed under
Red Hat
Server

The CentOS Project has announced that CentOS 8.0 will be available for download beginning Tuesday, September 24. The release was deferred so that work on CentOS 7.7 could be completed, which means that CentOS 7.7 will be out shortly as well (and 7.7 is already beginning to appear in mirrors and repos). This comes 20 weeks to the day from the release of Red Hat Enterprise Linux 8.

Read more

Kubernetes Leftovers

Filed under
Server
OSS
  • With its Kubernetes bet paying off, Cloud Foundry doubles down on developer experience

    More than 50% of the Fortune 500 companies are now using the open-source Cloud Foundry Platform-as-a-Service project — either directly or through vendors like Pivotal — to build, test and deploy their applications. Like so many other projects, including the likes of OpenStack, Cloud Foundry went through a bit of a transition in recent years as more and more developers started looking to containers — and especially the Kubernetes project — as a platform on which to develop. Now, however, the project is ready to focus on what always differentiated it from its closed- and open-source competitors: the developer experience.

  • Kubernetes in the Enterprise: A Primer

    As Kubernetes moves deeper into the enterprise, its growth is having an impact on the ecosystem at large.

    When Kubernetes came on the scene in 2014, it made an impact and continues to shape the way companies build software. Large companies have backed it, causing a ripple effect in the industry and impacting open source and commercial systems. To understand how K8s will continue to affect the industry and change the traditional enterprise data center, we must first understand the basics of Kubernetes.

  • Google Cloud rolls out Cloud Dataproc on Kubernetes

    Google Cloud is trialling alpha availability of a new platform for data scientists and engineers through Kubernetes.

    Cloud Dataproc on Kubernetes combines open source, machine learning and cloud to help modernise big data resource management.

    The alpha availability will first start with workloads on Apache Spark, with more environments to come.

  • Google announces alpha of Cloud Dataproc for Kubernetes

    Not surprisingly, Google, the company that created K8s, thinks the answer to that question is yes. And so, today, the company is announcing the Alpha release of Cloud Dataproc for Kubernetes (K8s Dataproc), allowing Spark to run directly on Google Kubernetes Engine (GKE)-based K8s clusters. The service promises to reduce the complexity of open-source data components' inter-dependencies and to improve the portability of Spark applications. That should allow data engineers, analytics experts and data scientists to run their Spark workloads in a streamlined way, with fewer integration and versioning hassles.

Databases: MariaDB, ScyllaDB, Percona, Cassandra

Filed under
Server
  • MariaDB opens US headquarters in California

    MariaDB Corporation, the database company born as a result of forking the well-known open-source MySQL database...

  • ScyllaDB takes on Amazon with new DynamoDB migration tool

    There are a lot of open-source databases out there, and ScyllaDB, a NoSQL variety, is looking to differentiate itself by attracting none other than Amazon users. Today, it announced a DynamoDB migration tool to help Amazon customers move to its product.

  • ScyllaDB Announces Alternator, an Open Source Amazon DynamoDB-Compatible API

    ScyllaDB today announced the Alternator project, open-source software that will enable application- and API-level compatibility between Scylla and Amazon’s NoSQL cloud database, Amazon DynamoDB. Scylla’s DynamoDB-compatible API will be available for use with Scylla Open Source, supporting the majority of DynamoDB use cases and features.

  • ScyllaDB Secures $25 Million to Open Source Amazon DynamoDB-compatible API

    Fast-growing NoSQL database company raises funds to extend operations and bring new deployment flexibility to users of Amazon DynamoDB.

  • ScyllaDB powers up Alternator: an open Amazon DynamoDB API

    Companies normally keep things pretty quiet in the run-up to their annual user conferences, so they can pepper the press with a bag of announcements designed to show how much market momentum and traction they have going.

    Not so with ScyllaDB, the company has been dropping updates in advance of its Scylla Summit event in what is perhaps an unusually vocal kind of way.

    [...]

    Scylla itself is a real-time big data database that is fully compatible with Apache Cassandra and is known for its ‘shared-nothing’ approach (a distributed-computing architecture in which each update request is satisfied by a single node, i.e. a processor/memory/storage unit, to increase throughput and storage capacity).

  • Percona Announces Full Conference Schedule for Percona Live Open Source Database Conference Europe 2019

    The Percona Live Open Source Database Conference Europe 2019 is the premier open source database event. Percona Live conferences provide the open source database community with an opportunity to discover and discuss the latest open source trends, technologies and innovations. The conference includes the best and brightest innovators and influencers in the open source database industry.

  • Thwarting Digital Ad Fraud at Scale: An Open Source Experiment with Anomaly Detection

    Our experiment assembles Kafka, Cassandra, and our anomaly detection application in a Lambda architecture, in which Kafka and our streaming data pipeline are the speed layer, and Cassandra acts as the batch and serving layer. In this configuration, Kafka makes it possible to ingest streaming digital ad data in a fast and scalable manner, while taking a “store and forward” approach so that Kafka can serve as a buffer to protect the Cassandra database from being overwhelmed by major data surges. Cassandra’s strength is in storing high-velocity streams of ad metric data in its linearly scalable, write-optimized database. In order to handle automation for provisioning, deploying, and scaling the application, the anomaly detection experiment relies on Kubernetes on AWS EKS.
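As a rough illustration of the speed-layer logic described above (the Kafka ingestion and Cassandra persistence are omitted here, and the window size, warm-up length and threshold are arbitrary choices, not values from the experiment), a rolling z-score detector over a stream of ad metrics might look like:

```python
from collections import deque

def make_detector(window=50, threshold=3.0, warmup=10):
    """Return a closure that flags values deviating from the rolling mean
    by more than `threshold` standard deviations. Stands in for the
    'speed layer' logic; messaging and storage plumbing are omitted."""
    history = deque(maxlen=window)

    def is_anomaly(value):
        flagged = False
        if len(history) >= warmup:  # need a baseline before flagging
            mean = sum(history) / len(history)
            var = sum((x - mean) ** 2 for x in history) / len(history)
            std = var ** 0.5
            flagged = std > 0 and abs(value - mean) > threshold * std
        history.append(value)
        return flagged

    return is_anomaly

# A steady stream of click counts, then an obvious surge.
detect = make_detector()
stream = [100, 102, 99, 101, 100, 98, 103, 100, 101, 99, 100, 500]
flags = [detect(v) for v in stream]  # only the final surge is flagged
```

In the architecture the article describes, a consumer in the streaming pipeline would call such a detector per message and write both raw metrics and flagged anomalies onward for storage.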

Server: Kubeflow + OpenShift Container Platform, SUSE's SLES and More

Filed under
Server

Red Hat: Flask on Red Hat Enterprise Linux, OpenShift and SAN vs. NAS

Filed under
Red Hat
Server
  • Develop with Flask and Python 3 in a container on Red Hat Enterprise Linux

    In my previous article, Run Red Hat Enterprise Linux 8 in a container on RHEL 7, I showed how to start developing with the latest versions of languages, databases, and web servers available with Red Hat Enterprise Linux 8 even if you are still running RHEL 7. In this article, I’ll build on that base to show how to get started with the Flask microframework using the current RHEL 8 application stream version of Python 3.

    From my perspective, using Red Hat Enterprise Linux 8 application streams in containers is preferable to using software collections on RHEL 7. While you need to get comfortable with containers, all of the software installs in the locations you’d expect. There is no need to use scl commands to manage the selected software versions. Instead, each container gets an isolated user space. You don’t have to worry about conflicting versions.

    In this article, you’ll create a Red Hat Enterprise Linux 8 Flask container with Buildah and run it with Podman. The code will be stored on your local machine and mapped into the container when it runs. You’ll be able to edit the code on your local machine as you would any other application. Since it is mapped via a volume mount, the changes you make to the code will be immediately visible from the container, which is convenient for dynamic languages that don’t need to be compiled. While this approach isn’t the way to do things for production, you get the same development inner loop as you’d have when developing locally without containers. The article also shows how to use Buildah to build a production image with your completed application.

  • IBM brings Cloud Foundry and Red Hat OpenShift together

    At the Cloud Foundry Summit in The Hague, IBM today showcased its Cloud Foundry Enterprise Environment on Red Hat’s OpenShift container platform.

    For the longest time, the open-source Cloud Foundry Platform-as-a-Service ecosystem and Red Hat’s Kubernetes-centric OpenShift were mostly seen as competitors, with both tools vying for enterprise customers who want to modernize their application development and delivery platforms. But a lot of things have changed in recent times. On the technical side, Cloud Foundry started adopting Kubernetes as an option for application deployments and as a way of containerizing and running Cloud Foundry itself.

  • SAN vs. NAS: Comparing two approaches to data storage

    For a new sysadmin, storage can be one of the more confusing aspects of infrastructure. This confusion can be caused by a lack of exposure to new or different technologies, often because storage needs may be managed by another team. Without a specific interest in storage, an admin might find themselves with a number of misconceptions, questions, or concerns about how or why to implement different solutions.

    When discussing enterprise storage, two concepts are at the core of most conversations: storage area networks (SAN) and network-attached storage (NAS). Both options provide storage to clients across a network, which offers the huge benefit of removing individual servers as single points of failure. Using one of these options also reduces the cost of individual clients, as there is no longer a need to have large amounts of local storage.

Servers: "Docker Not Doomed?" and Some IBM/Red Hat Leftovers

Filed under
Red Hat
Server
  • Docker Not Doomed?

    Modern application development essentially consists of composing an application from a variety of services. These services aren't just infrastructure components that live on a server any more. They're delivered via an API and could be almost anything underneath as the abstractions start to pile up.

    COBOL code at the other end of a message bus with a lambda-function frontend? Okay. Ephemeral container running a Spring Boot service that connects to an RDBMS on a physical Unix server on the other side of the country? Sure, why not? Modern applications don't really care, because it's all about getting the job done. The name of the game is loosely-coupled modular components.

    This is why Docker has joined forces with Microsoft, Bitnami, HashiCorp, and a few others to create the Cloud Native Application Bundle (CNAB) specification. Docker uses this spec as part of its Docker App tool, which behaves a lot like docker-compose to collect a variety of services together into a single application bundle that can be shared around. It's a lot like a container collection, and brings the same easy portability of containers to composed applications.

    "[Docker App] allows you to describe not just containers, but other services around which the app is dependent," says Johnston. "And it allows you to do things that enterprises care about, such as signing the bundle, verifying that signature, and automatically promoting it based on that signature and things like that."

  • Red Hat OpenShift Service Mesh is now available: What you should know

    As Kubernetes and Linux-based infrastructure take hold in digitally transforming organizations, modern applications frequently run in a microservices architecture and therefore can have complex route requests from one service to another. With Red Hat OpenShift Service Mesh, we’ve gone beyond routing the requests between services and included tracing and visualization components that make deploying a service mesh more robust. The service mesh layer helps us simplify the connection, observability and ongoing management of every application deployed on Red Hat OpenShift, the industry’s most comprehensive enterprise Kubernetes platform.

    Red Hat OpenShift Service Mesh is available through the OpenShift Service Mesh Operator, and we encourage teams to try it out on Red Hat OpenShift 4.

  • Catching up with Red Hat at Sibos 2019

    Red Hat is excited to once again be attending Sibos, an annual financial services industry conference exhibition and networking event that is hosted by SWIFT. This year, the event is being held in London, England from September 23rd through 26th. Red Hat will be attending to sponsor a number of activities and discuss how and why enterprise open source technologies offer innovative capabilities that can help firms thrive in their digital journeys.

Server: Red Hat, Intel and SUSE

Filed under
Linux
Red Hat
Server
SUSE
  • Introduction to virtio-networking and vhost-net

    In this post we have scratched the surface of the virtio-networking ecosystem, introducing you to the basic building blocks of virtualization and networking used by virtio-networking. We have briefly covered the virtio spec and the vhost protocol, reviewed the frontend and backend architecture used for implementing the virtio interface and have taken you through the vhost-net/virtio-net architecture of vhost-net (host kernel) communicating with virtio-net (guest kernel).

    A fundamental challenge we had when trying to explain things was the historical overloading of terms. As one example, virtio-net refers both to the virtio networking device implementation in the virtio specification and also to the guest kernel front end described in the vhost-net/virtio-net architecture. We attempted to address this by explaining the context of terms and using virtio-net to only describe the guest kernel frontend.

    As will be explained in later posts, there are other implementations of the virtio spec networking device based on DPDK and different hardware offloading techniques, all under the umbrella of virtio-networking.

    The next two posts are intended to provide a deeper understanding of the vhost-net/virtio-net architecture. One post, intended for architects, will provide a technical deep dive into vhost-net/virtio-net and explain how the data plane and control plane are implemented in practice. The other, intended for developers, will be a hands-on session including Ansible scripts to enable experimenting with the vhost-net/virtio-net architecture.

    If you prefer high level overviews we recommend you keep an eye out for the virtio-networking and DPDK introductions, to be published in the upcoming weeks.

  • Intel Issues Second Release Of Its Rust-Written Cloud-Hypervisor For Modern Linux VMs

    Intel's open-source crew has released version 0.2 of its primarily Rust-developed Cloud Hypervisor and associated firmware also in Rust.

    The Intel Cloud Hypervisor is their experimental VMM running atop KVM designed for modern Linux distributions and VirtIO para-virtualized devices without any legacy device support.

  • Announcing SUSE CaaS Platform 4

    SUSE CaaS Platform 4 raises the bar for robust Kubernetes platform operations with enhancements that expand platform scalability options, strengthen application security, and make it easier to keep pace with technology advancements. Integrating the latest releases of Kubernetes and SUSE Linux Enterprise, SUSE CaaS Platform 4 continues to provide industry leading application delivery capabilities as an enterprise-ready solution.

  • A new era in Cloud Native Application Delivery is here
  • 3 Infrastructure Compliance Best Practices for DevOps

    For most IT organizations, the need for compliance goes without saying. Internal corporate policies and external regulations like HIPAA and Sarbanes Oxley require compliance. Businesses in heavily regulated industries like healthcare, financial services, and public service are among those with the greatest need for strong compliance programs.

Linux Foundation and Cloud Native Computing Foundation (CNCF)

Filed under
Linux
Server
  • The Linux Kernel Mentorship is Life Changing

    My name is Kelsey Skunberg and I am starting the senior year of my undergraduate degree in Computer Science at Colorado State University. This summer, I had the honor of participating in the Linux Kernel Mentorship Program through CommunityBridge. Throughout the mentorship, I grew very fond of working on open source projects, learned to work with the open source communities, and my confidence as a developer grew tremendously.

    Since the beginning, I have found the Linux kernel community to be very welcoming and willing to help. Many of the developers and maintainers have taken time to answer questions, review patches, and provide advice. I’ve come to learn that contributing is not quite as scary as I first anticipated. It’s OK to make mistakes; just be open to learning and new ideas. There are a lot of resources for learning, and developers willing to invest time in mentoring and helping new contributors.

    [...]

    I chose to work on PCI Utilities and Linux PCI with Bjorn Helgaas as my mentor. Bjorn has been an incredible mentor who provided me with a great amount of advice and has introduced me to several tools which make the development process easier.

  • Sysdig Makes Container Security Case for Falco

    Sysdig is doubling down on its efforts to make its open source Falco project the de facto means for pulling security metrics for runtime security and intrusion detection. The company has already contributed Falco to the Cloud Native Computing Foundation (CNCF) and has hired Kris Nova, a CNCF ambassador who worked for Heptio (now part of VMware) and Deis (now part of Microsoft). Nova is also credited with developing kubicorn, an infrastructure management tool for Kubernetes.

  • Software Development, Microservices & Container Management – Part I – Microservices – Is it the Holy Grail?

    Together with my colleague Bettina Bassermann and SUSE partners, we will be running a series of blogs and webinars from SUSE (Software Development, Microservices & Container Management, a SUSE webinar series on modern Application Development), and try to break the ice about Microservices Architecture (MSA) and Cloud Native Application Development (CNA) in the software development field.

More in Tux Machines

Desktop GNU/Linux: Rick and Morty, Georges Basile Stavracas Neto on GNOME and Linux Format on Eoan Ermine

  • We know where Rick (from Rick and Morty) stands on Intel vs AMD debate

    For one, it appears Rick is running a version of Debian with a very old Linux kernel (3.2.0) — one dating back to 2012. He badly needs to install some frickin’ updates. “Also his partitions are real weird. It’s all Microsoft based partitions,” a Redditor says. “A Linux user would never do [this] unless they were insane since NTFS/Exfat drivers on Linux are not great.”

  • Georges Basile Stavracas Neto: Every shell has a story

    … a wise someone once muttered while walking on a beach, as they picked up a shell lying on the sand. Indeed, every shell began somewhere, crossed a unique path with different goals and driven by different motivations. Some shells were created to optimize for mobility; some, for lightness; some, for speed; some were created to just fit whoever is using it and do their jobs efficiently. It’s statistically close to impossible to not find a suitable shell, one could argue. So, is this a blog about muttered shell wisdom? In some way, it actually is. It is, indeed, about Shell, and about Mutter. And even though “wisdom” is perhaps a bit of an overstatement, it is expected that whoever reads this blog doesn’t leave it less wise, so the word applies to a certain degree. Evidently, the Shell in question is composed of bits and bytes; its protection is more about the complexities of a kernel and command lines than sea predators, and the Mutter is actually more about compositing the desktop than barely audible uttering.

  • Adieu, 32

    The tenth month of the year arrives and so does a new Ubuntu 19.10 (Eoan Ermine) update. Is it a portent that this is the 31st release of Ubuntu and with the 32nd release next year, 32-bit x86 Ubuntu builds will end?

Linux Kernel and Linux Foundation

  • Linux's Crypto API Is Adopting Some Aspects Of Zinc, Opening Door To Mainline WireGuard

    Mainlining of the WireGuard secure VPN tunnel was being held up by its use of the new "Zinc" crypto API developed in conjunction with this network tech. But with obstacles in getting Zinc merged, WireGuard was going to resort to targeting the existing kernel crypto interfaces. Instead, however, it turns out the upstream Linux crypto developers were interested in and willing to incorporate some elements of Zinc into the existing kernel crypto implementation. Back in September, Jason Donenfeld decided that porting WireGuard to the existing Linux crypto API was the best path forward for getting this secure networking functionality into the mainline kernel in a timely manner. But since then, other upstream kernel developers working on the crypto subsystem have ended up with patches incorporating some elements of Zinc's design.

  • zswap: use B-tree for search
    The current zswap implementation uses red-black trees to store
    entries and to perform lookups. Although this algorithm obviously
    has complexity of O(log N) it still takes a while to complete
    lookup (or, even more for replacement) of an entry, when the amount
    of entries is huge (100K+).
    
    B-trees are known to handle such cases more efficiently (i. e. also
    with O(log N) complexity but with way lower coefficient) so trying
    zswap with B-trees was worth a shot.
    
    The implementation of B-trees that is currently present in Linux
    kernel isn't really doing things in the best possible way (i. e. it
    has recursion) but the testing I've run still shows a very
    significant performance increase.
    
    The usage pattern of B-tree here is not exactly following the
    guidelines but it is due to the fact that pgoff_t may be both 32
    and 64 bits long.
    
    
  • Zswap Could See Better Performance Thanks To A B-Tree Search Implementation

    For those using Zswap as a compressed RAM cache for swapping on Linux systems, performance could soon see a measurable improvement. Developer Vitaly Wool has posted a patch that switches the Zswap code from red-black trees to a B-tree for searching. Particularly when searching a large number of entries, the B-tree implementation should do so much more efficiently.
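The patch's "lower coefficient" point is easy to see with a back-of-the-envelope comparison: a red-black tree guarantees a height of at most roughly 2·log2(n), while a B-tree with branching factor b has height on the order of log_b(n). A small sketch (the 128-way branching factor is an arbitrary illustrative choice, not what the kernel B-tree implementation uses):

```python
import math

def rb_worst_height(n):
    """Red-black trees guarantee height <= 2 * log2(n + 1)."""
    return 2 * math.log2(n + 1)

def btree_height(n, branching):
    """Rough height bound for a B-tree: log base `branching` of n."""
    return math.log(n, branching)

n = 100_000          # the "100K+" entry count mentioned in the patch
rb = rb_worst_height(n)      # ~33 levels in the worst case
bt = btree_height(n, 128)    # under 3 levels with 128-way branching
```

Both bounds are O(log n), but the far shallower B-tree means far fewer node visits (and cache misses) per lookup, which is exactly the coefficient difference the commit message describes.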

  • AT&T Finally Opens Up dNOS "DANOS" Network Operating System Code

    One and a half years late, the "DANOS" (formerly known as "dNOS") network operating system is now open source under the Linux Foundation. AT&T and the Linux Foundation originally announced their plan in early 2018, pushing for this network operating system to be used on more mobile infrastructure. At the time they expected it to happen in H2'2018, but on 15 November 2019 the goal finally came to fruition.

Security Patches and FUD/Drama

Android Leftovers