
Server

Servers: SUSE, Red Hat and Google

Filed under
Server
  • Tumbleweed Snapshots Deliver Curl, Salt, FFmpeg Package Updates

    Mozilla Firefox had a minor release, version 66.0.3, in the latest Tumbleweed 20190415 snapshot. The browser addressed performance issues with some HTML5 games and provided a Baidu search plugin for Chinese users and China’s Internet space. curl 7.64.1, the command-line tool for transferring data using various protocols, fixed many bugs and added library checks for Lightweight Directory Access Protocol (LDAP) support. The update of libvirt 5.2.0 dropped a few patches and added several new features, such as Storage Pool Capabilities, which provides more detailed XML list output for the virConnectGetStoragePoolCapabilities Application Programming Interface (API); libvirt also enabled firmware autoselection for the open-source emulator QEMU. The newest salt 2019.2.0 package in Tumbleweed enhanced network automation, broadened support for a variety of network operating systems, and added features for configuration manipulation and operational command execution. Salt 2019.2.0 also gained the ability to run playbooks via its playbooks function, and it includes an ansible playbooks state module, which can be used on a targeted host to run ansible playbooks or used in an orchestration state runner. The snapshot was trending at a 95 rating at the time of publishing this article, according to the Tumbleweed snapshot reviewer.

  • My Kind of SUSE Support

    Another SUSECON has come and gone, and what an event it was! If you follow us on social media, you’ve seen the amazing, informative and inspiring keynote speakers, pictures of the great parties and videos of people from all over the globe who share our enthusiasm for open source.

  • Operating SUSE Cloud Application Platform for the Swiss Federal Government

    The recent SUSECON in Nashville had hundreds of great sessions, including 11 on SUSE Cloud Application Platform. The one I was most looking forward to was by our partner, Adfinis SyGroup AG, telling their early adopter’s story about implementing SUSE Cloud Application Platform for the Swiss Federal Government. In addition to attending their session, I was able to sit down with Nicolas Christener (CEO/CTO) and Lucas Bickel (Software Engineer) and have a long conversation about it.

  • Red Hat Summit 2019 Labs: Emerging technology roadmap

    Red Hat Summit 2019 is rocking Boston, MA, May 7-9 in the Boston Convention and Exhibition Center. Everything you need to know about the current state of open source enterprise-ready software can be found at this event. You’ll find customers talking about their experiences leveraging open source in their solutions, creators of open source technologies you’re using, and hands-on lab experiences relating to these technologies.

    This hands-on appeal is what this series of articles is about. In previous articles, we looked at labs focusing on Red Hat Enterprise Linux, Integration and APIs, and cloud-native app development. In this article, we’ll look at labs in the “Emerging Technology” track.

    The following labs can be found in the session catalog online, by searching on title or filtering on “instructor-led labs” and “emerging technology.”

  • Red Hat Summit 2019 Track Guide: Emerging Technology

    From optimizing existing IT infrastructure to executing on their digital transformation goals, enterprises have a lot to think about - especially when emerging technologies may soon be disrupting entire industries. In our annual Red Hat Global Customer Tech Outlook, we asked customers what they are keeping an eye on in 2019. Blockchain, edge computing and developer productivity tools are the top three. In other words, it’s all about security, data insights and application development in a world where developers rule the school.

  • Red Hat: Give Everyone the Data They Need, When They Need It
  • Khmer Translation Sprint 3
  • Understanding Anthos: Google’s multi-cloud bid to define the next 20 years of enterprise IT

    In response, Google has brought to market a multi-cloud-enabling platform called Anthos, which it claims can help enterprises containerise their applications so they can run in the Amazon and Microsoft public clouds, as well as traditional on-premise datacentre environments, with minimal modifications.

    [...]

    “Once a platform is high quality and open and gets traction, it actually stays around for a long time,” he said. “So Linux will last another 20 years because, unless [the underlying] hardware deeply changes, it’s a good solution and continues to be a good solution. If Anthos is high quality and broadly adopted, it will last for 20 years.”

    Hölzle added: “This is a natural successor to Linux. If you pick Linux as your operating system, you can pick any hardware below and any software above because pretty much any software runs on this.”

Kubernetes: Deploying Services in Kubernetes, Future of Cloud Providers in Kubernetes, Pod Priority and Preemption in Kubernetes

Filed under
Server
OSS
  • Deploying Services in Kubernetes

    In my opinion, services are the most potent resource provided in Kubernetes. A service is essentially a front-end for your application that automatically re-routes traffic to available pods in an evenly distributed way. This automation is a relief for administrators because you no longer have to specify exact IP addresses or hostnames of the server in the client’s configuration files. Having to maintain this while containers are being moved, shifted and deleted would be a nightmare.
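    The front-end role described above comes down to label selection: the Service matches pods by label and routes traffic to whichever of them are ready. A minimal sketch, built as a Python dict mirroring the manifest structure (the app name "web" and the ports are hypothetical, chosen for illustration):

```python
# Sketch of a Kubernetes Service manifest as a Python dict.
# The "web" name and port numbers are assumptions, not from the article.
service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "web"},
    "spec": {
        # The selector matches pod labels; the Service distributes traffic
        # across every ready pod carrying these labels, so clients never
        # need to track individual pod IPs or hostnames.
        "selector": {"app": "web"},
        "ports": [{"port": 80, "targetPort": 8080}],
    },
}
```

    Because routing is driven by labels rather than addresses, pods can be moved, shifted or deleted without touching any client configuration — which is exactly the maintenance nightmare the excerpt describes.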

  • The Future of Cloud Providers in Kubernetes

    Approximately 9 months ago, the Kubernetes community agreed to form the Cloud Provider Special Interest Group (SIG). The justification was to have a single governing SIG to own and shape the integration points between Kubernetes and the many cloud providers it supported. A lot has been in motion since then and we’re here to share with you what has been accomplished so far and what we hope to see in the future.

  • Pod Priority and Preemption in Kubernetes

    Kubernetes is well-known for running scalable workloads. It scales your workloads based on their resource usage. When a workload is scaled up, more instances of the application get created. When the application is critical for your product, you want to make sure that these new instances are scheduled even when your cluster is under resource pressure. One obvious solution to this problem is to over-provision your cluster resources to have some amount of slack resources available for scale-up situations. This approach often works, but costs more as you would have to pay for the resources that are idle most of the time.

    Pod priority and preemption is a scheduler feature made generally available in Kubernetes 1.14 that allows you to achieve high levels of scheduling confidence for your critical workloads without overprovisioning your clusters. It also provides a way to improve resource utilization in your clusters without sacrificing the reliability of your essential workloads.
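    Mechanically, the feature has two pieces: a cluster-level PriorityClass object, and a pod spec that references it by name. A minimal sketch as Python dicts (the class name, priority value and pod details are assumptions for illustration):

```python
# PriorityClass is in the scheduling.k8s.io/v1 API group, GA in Kubernetes 1.14.
# All names and values below are hypothetical.
priority_class = {
    "apiVersion": "scheduling.k8s.io/v1",
    "kind": "PriorityClass",
    "metadata": {"name": "critical-service"},
    # Higher values schedule first; when no node has room, the scheduler
    # may preempt (evict) lower-priority pods to make space.
    "value": 1000000,
    "globalDefault": False,
    "description": "For workloads that must schedule even under resource pressure.",
}

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "payments-api"},
    "spec": {
        # Opting the pod into the class above by name.
        "priorityClassName": priority_class["metadata"]["name"],
        "containers": [{"name": "app", "image": "example/payments:1.0"}],
    },
}
```

    This is how the feature avoids over-provisioning: instead of paying for idle slack, you declare which workloads may displace others when the cluster is full.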

  • A Gardener To Manage Kubernetes At Scale

Do We Have More Kubernetes Distributions Than We Need?

Filed under
Server
OSS

Kubernetes itself—meaning the source code you can download from kubernetes.io—is not very useful on its own. Setting up a Kubernetes cluster using the source code would require you to compile the code and set up a server environment (or, in most cases, a cluster of servers) to host it, install it, configure it, set up tools to manage it and update it all on your own.

That’s a lot of work, and it’s not a realistic way for most people to use Kubernetes. That’s why a number of companies have created Kubernetes distributions. The distributions provide not just a preconfigured version of Kubernetes itself, but also other important tools for installing and working with Kubernetes. Many distributions also include host operating systems. Some even give you hosting infrastructure in the form of IaaS in a public cloud.

Kubernetes is not unique in spawning an ecosystem of distributions. The Linux kernel has done the same thing. So have other complex software platforms, including Spark, Hadoop and OpenStack.

Read more

Servers: Red Hat, OpenStack, Kubernetes, ForsaOS and SUSE

Filed under
Server
  • Building a DNS-as-a-service with OpenStack Designate

    Designate is a multi-tenant DNS-as-a-service that includes a REST API for domain and record management, a framework for integration with Neutron, and integration support for Bind9.
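    To give a feel for that REST API, here is a hedged sketch of what a zone-creation request might carry; the endpoint host, port and zone name are assumptions for illustration, and a real deployment would authenticate through Keystone:

```python
import json

# Hypothetical Designate v2 endpoint; real deployments sit behind Keystone auth.
designate_url = "http://designate.example.com:9001/v2/zones"

# Body for creating a primary zone; DNS zone names are fully qualified,
# hence the trailing dot.
zone_request = {
    "name": "example.org.",
    "email": "admin@example.org",
    "ttl": 3600,
}
body = json.dumps(zone_request)
# One would POST `body` to designate_url with an X-Auth-Token header,
# e.g. requests.post(designate_url, data=body, headers={...}).
```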

  • Build your Kubernetes armory with Minikube, Kail, and Kubens

    Kubernetes has grown to be a de facto development platform for building cloud-native applications. As developers, we want to be productive from the word go, or, shall we say, from the word code. But to be productive, we must be armed with the right set of tools. In this article, I will take a look at three important tools that should become part of your Kubernetes tool chest, or armory.

  • Introduction to Kubernetes: From container to containers

    After being introduced to Linux containers and running a simple application, the next step seems obvious: How to get multiple containers running in order to put together an entire system. Although there are multiple solutions, the clear winner is Kubernetes. In this article, we’ll look at how Kubernetes facilitates running multiple containers in a system.

    For this article, we’ll be running a web app that uses a service to determine your location based on your IP address. We’ll run both apps in containers using Kubernetes, and we’ll see how to make RESTful calls inside your cluster from one service (the web app) to another (the location app).

  • What Is Server Management?

    Server management, an essential activity for data center administrators, is a challenging topic. This is because the term server management can be used to refer to managing physical server hardware, virtual machines or many types of application servers and database servers. All of these need to be managed – constantly.

    Adding complication, there are many types of server management tools and server management services that can help administrators to keep servers of all types working properly. The best server monitoring software applications provide system management application capabilities that serve an array of different use-cases. Let's look at server management in-depth.

  • Formulus Black software stores data in persistent memory

    Persistent memory is one of the hottest topics in the data storage industry, and startup Formulus Black has launched a new Linux-based software stack designed to use it.

    [...]

    The ForsaOS software stack uses patented Formulus Bit Marker (FbM) algorithms to eliminate redundant data, so it can pack more data into the server's main memory. Rickard said ForsaOS typically reduces a data set stored in DIMMs by three to four times.

  • Getting an Edge on Point of Service

    A Point of Service (POS) network is a highly specialized environment that requires specialized tools. Traditional solutions depend on proprietary hardware running proprietary software that limits flexibility and enforces vendor lock-in. An alternative approach is to build the retail environment around general-purpose, PC-based client systems and general-purpose management tools. Implementing a retail environment through general-purpose systems solves the lock-in problem, but it fails to provide the security, efficiency, and convenience of a system specifically designed for the needs of the retail industry. If you’re looking for a solution that is specifically designed for retail, but avoids the complications and added expense of proprietary tools, try SUSE Manager for Retail. SUSE Manager for Retail is an open source infrastructure management solution that is powerful, flexible, secure, and easy to customize.

  • How CIOs can improve business results through SAP and cloud management

Server: Red Hat, Kubernetes, Ceph, SUSE and More

Filed under
Server
  • Red Hat Enterprise Linux with Intel's newest Xeon processors posts record performance results across a wide range of industry benchmarks

    The new CPUs include updated hardware-based security features, increased memory capacity and feature enhanced compute cores with Intel Advanced Vector Extensions 512 (AVX-512). Moreover, the added support for Intel Optane DC persistent memory modules enables additional performance improvements at the system level.

  • Why Learn-by-Doing Matters

    I founded Linux Academy over seven years ago to help people learn by doing. The idea was and continues to be that learning by doing helps us learn faster and retain more information. With the pace of change in our multi-cloud world, learning quickly with a higher retention rate has become more important in our everyday lives. Over the years, we at Linux Academy have taken very seriously the responsibility to empower our students to learn by doing. It’s in our DNA.

    But while Linux Academy is just seven years old, learning by doing is a bit older. John Dewey coined the term Learn by Doing and its philosophy in the early 20th century. His initial applications of experiential learning were in childhood development, but he wrote some pretty inspirational things that have much broader implications for multi-cloud training at Linux Academy.

  • Process ID Limiting for Stability Improvements in Kubernetes 1.14

    Have you ever seen someone take more than their fair share of the cookies? The one person who reaches in and grabs a half dozen fresh baked chocolate chip chunk morsels and skitters off like Cookie Monster exclaiming “Om nom nom nom.”

    In some rare workloads, a similar occurrence was taking place inside Kubernetes clusters. With each Pod and Node, there comes a finite number of possible process IDs (PIDs) for all applications to share. While it is rare for any one process or pod to reach in and grab all the PIDs, some users were experiencing resource starvation due to this type of behavior. So in Kubernetes 1.14, we introduced an enhancement to mitigate the risk of a single pod monopolizing all of the PIDs available.
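    The mitigation is applied at the kubelet level (behind the SupportPodPidsLimit feature gate in 1.14): a per-pod PID cap in the kubelet's configuration. A sketch of the relevant KubeletConfiguration fragment as a Python dict — the limit of 4096 is an illustrative assumption, not a recommended value:

```python
# Fragment of a KubeletConfiguration; the podPidsLimit value is hypothetical.
kubelet_config = {
    "apiVersion": "kubelet.config.k8s.io/v1beta1",
    "kind": "KubeletConfiguration",
    # Cap every pod on this node at 4096 process IDs, so no single pod
    # can grab the whole node's PID space like the cookie thief above.
    "podPidsLimit": 4096,
}
```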

  • Want to learn all about Ceph?

    Join us in Barcelona, Spain, for the second annual Cephalocon international conference, May 19 & 20. This is Ceph Day on steroids with more than 800 technologists and adopters from across the globe showcasing Ceph’s history and future. It is the largest community event focused on Ceph. SUSE, a founding member of the newly formed Ceph Foundation, will be there as a Platinum Sponsor. We will have experts working in our booth as well as presenting in several sessions. Please go here to learn more about Cephalocon and register.

  • SUSECON 2019 – What a great experience!

    For those of you who made it to SUSECON 2019 in Nashville, I hope you had as great a time as I did. If you were not able to make it please consider coming next year. Without a doubt, this was one of the best events I have ever attended. And I am not just saying that because I work for SUSE.

  • The World has Changed Since Tiger Woods First Won the Masters!

    “The open source movement is in some ways the spiritual core of the Internet, encompassing much of the hardware, software, and protocols that make up the global communications infrastructure — as well as championing openness, transparency, and the power of collaborative development.”[4]

    That’s what SUSE is all about. We pride ourselves on being a truly open, open source company. We are at the forefront of delivering open source software and solutions that are increasingly essential in this new interconnected world.

The future's hiring - but is the tech sector ready?

Filed under
GNU
Linux
Server

We’ve witnessed a lot of success with our Linux Essentials program and are broadening that out to include security, Internet of Things (IoT)/embedded and web development topics.

Read more

Free Software in Telecom

Filed under
Server
OSS
  • ONS 2019: the balance is shifting from telco thinking to open source
  • New group pushes open disaggregation to chip level, with 5G in its sights
  • The ONF and P4.Org Complete Combination to Accelerate Innovation in Operator-Led Open Source
  • Opening Up for 5G and Beyond: Open Source and White Box Will Support New Data Demands

    As much as some people might think it’s just a question of bolting some new radios to towers and calling it a day, the truth is that 5G requires an entirely new approach to designing and building networks.

  • Why the mobile edge needs open source to overcome its pitfalls (Reader Forum)

    Edge computing dominated MWC 2019 along with 5G and all the robots at the show. In fact, according to some analysts, edge computing could be worth almost $7 billion within the next three years. Much of the new architecture’s advantages stem from the capacity offered by 5G to deploy scalable, typically cloud-based, compute platforms at the edge of the network. However, a growing number of operators are coming across a challenge when they look to scale services to the edge – portability is a headache.

  • Q&A: T-Systems' Clauberg says industry needs more collaboration

    At last week's Open Networking Summit in San Jose, California, Axel Clauberg spoke about the need for collaboration between the open source groups and SDOs ahead of a Friday morning panel comprising many of the leaders of those organizations.

    At the start of this year, Clauberg slid over from his role as Deutsche Telekom's vice president, aggregation, transport, IP (TI-ATI) and infrastructure cloud architecture, to Deutsche Telekom's enterprise division, T-Systems. At T-Systems, Clauberg holds the title of vice president, strategic portfolio management and CTO of telecommunications services.

    Clauberg serves as the chairman of the Telecom Infra Project (TIP) and he also worked at Cisco for 13 years. All in all, Clauberg has seen the industry from various points of view over the years, which validates his call for more industry collaboration.

  • 10 operators, including AT&T and Verizon, align around creating task force for NFVi

    There are numerous attempts afoot to wrestle NFV into a more manageable and workable approach to virtualization.

    Last week at the Open Networking Summit, some of the carrier members of a new effort around simplifying network functions virtualization infrastructure (NFVi) presented their approach on a panel.

    The group, which is called the Common NFVi Telco Task Force, comprises AT&T, Bell Canada, China Mobile, Deutsche Telekom, Jio, Orange, SK Telecom, Telstra, Verizon and Vodafone.

    Currently, there are too many types of NFVi floating around, which means virtual network functions (VNFs) vendors need to create multiple versions of their VNFs to work with the different flavors of NFVi. The Common NFVi Telco Task Force is taking aim at reducing the number of NFVi implementations down to three or four versions, according to AT&T's Amy Wheelus, vice president of network cloud.

  • Ericsson and AT&T give network slicing an open source boost

    The Linux Foundation’s annual Open Networking Summit (ONS) has become of rising interest to the mobile and telco community as the open source organization has become increasingly focused on telecoms networks. There will be coverage of the highlights in next week’s edition of Wireless Watch, but one development caught our eye even before the event started on Wednesday. This was a demonstration of network slicing, harnessing the capabilities of the open source ONAP (Open Network Automation Platform) software, which handles the management and orchestration (MANO) of all the components in a virtualized network.

  • Telcos need to take ownership of open source or risk losing a golden opportunity

    Of the 14 keynote sessions at last week’s Open Networking Summit (ONS) North America in San Jose, only two featured communications service providers. AT&T CTO Andre Fuetsch spoke about open source’s role in 5G, and China Mobile Chief Scientist Junlan Feng spoke about open source for network-based AI. This is by no means a criticism of organisers The Linux Foundation and its LF Networking group, but it is a reflection of how the broader telco community has yet to fully accept the strategic importance of open source. Yes, many CSPs are involved in various open source projects, and some are heavily invested and supportive, but as yet there has been a reluctance to step up and take more control over the direction and scope of these projects. Whether it is fear or ignorance that is holding them back, CSPs must do more. After all, the majority of these projects are specifically aimed at, or relevant for, telecoms networks – ONAP, OPNFV, Akraino, OpenDaylight, etc – with many others about to become essential, such as Kubernetes and the work of the CNCF. And there are many other open source foundations and groups focused on telecoms to consider.

  • Telco white-box switches receive a boost as ONF takes on P4

    AT&T, which has been leading the use of white box switch and routers and seeding much of the source code to the open source community, developed its own home-rolled dNOS network operating system, which has now become the DANOS project within The Linux Foundation. But there is a second option available, which has been developed by the P4.org group. The eponymously named P4 programming language describes how switches, routers and NICs process packets across white box hardware.

Servers: Hadoop, Amazon Rivals, Red Hat/IBM, Kubernetes, OpenStack and More

Filed under
Server
  • Breaking Out of the Hadoop Cocoon

    The announcement last fall that top Hadoop vendors Cloudera and Hortonworks were coming together in a $5.2 billion merger – and reports about the financial toll that their competition took on each other in the quarters leading up to the deal – revived questions that have been raised in recent years about the future of Hadoop in an era where more workloads are moving into public clouds like Amazon Web Services (AWS), which offer a growing array of services that do many of the jobs that the open-source technology already does.

    Hadoop gained momentum over the past several years as an open-source platform to collect, store and analyze various types of data, arriving as data was becoming the coin of the realm in the IT industry, something that has only steadily grown since. As we’ve noted here at The Next Platform, Hadoop has evolved over the years, with such capabilities as Spark in-memory processing and machine learning being added. But in recent years more workloads and data have moved to the cloud, and the top cloud providers, including Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform all offer their own managed services, such as AWS’ Elastic Map Reduce (EMR). Being in the cloud, these services also offer lower storage costs and easier management – the management of the infrastructure is done by the cloud provider themselves.

  • A guide for database as a service providers: How to stand your ground against AWS – or any other cloud

    NoSQL database platform MongoDB followed suit in October 2018 announcing a Server Side Public License (SSPL) to protect “open source innovation” and stop “cloud vendors who have not developed the software to capture all of the value while contributing little back to the community.” Event streaming company, Confluent issued its own Community License in December 2018 to make sure cloud providers could no longer “bake it into the cloud offering, and put all their own investments into differentiated proprietary offerings.”

  • The CEO of DigitalOcean explains how its 'cult following' helped it grow a $225 million business even under the shadow of Amazon Web Services

    DigitalOcean CEO Mark Templeton first taught himself to code at a small hardwood business. He wanted to figure out how to use the lumber in the factory most efficiently, and spreadsheets only got him so far.

    "I taught myself to write code to write a shop floor control and optimization system," Templeton told Business Insider. "That allowed us to grow, to run the factory 24 hours a day, all these things that grow in small business is new. As a self-taught developer, that's what launched me into the software industry."

    And now, Templeton is learning to embrace these developer roots again at DigitalOcean, a New York-based cloud computing startup. It's a smaller, venture-backed alternative to mega-clouds like Amazon Web Services, but has found its niche with individual programmers and smaller teams.

  • IBM’s Big-Ticket Purchase of Red Hat Gets a Vote of Confidence From Wall Street
  • How Monzo built a bank with open infrastructure

    When challenger bank Monzo began building its platform, the team decided it would get running with container orchestration platform Kubernetes "the hard way". The result is that the team now has visibility into outages or other problems, and Miles Bryant, platform engineer at Monzo, shared some observations at the bank at the recent Open Infrastructure Day event in London.

    Finance is, of course, a heavily regulated industry - and at the same time customer expectations are extremely exacting. If people can't access their money, they tend to get upset.

  • Kubernetes Automates Open-Source Deployment

    Whether for television broadcast and video content creation, delivery or transport of streamed media, all of these share a common element: the technology supporting this industry is moving rapidly, consistently and definitively toward software and networking. The movement isn’t new by any means. What now seems like ages ago, every implementation required customized software on a customized hardware platform; that has now changed to open platforms running open-source solution sets, often developed for open architectures and collectively created using cloud-based services.

  • Using EBS and EFS as Persistent Volume in Kubernetes

    If your Kubernetes cluster is running in the cloud on Amazon Web Services (AWS), it comes with Elastic Block Store (EBS); alternatively, Elastic File System (EFS) can be used for storage.

    Pods are ephemeral, and in most cases we need to persist the data in them. To facilitate this, we can mount folders into our pods that are backed by EBS volumes on AWS using AWSElasticBlockStore, a volume plugin provided by Kubernetes.

    We can also use EFS as storage by using efs-provisioner. Efs-provisioner runs as a pod in the Kubernetes cluster that has access to an AWS EFS resource.
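    For the EBS case above, a minimal sketch of a PersistentVolume that uses the AWSElasticBlockStore plugin, built as a Python dict mirroring the manifest (the PV name, capacity and volume ID are hypothetical placeholders):

```python
# Sketch of a PersistentVolume backed by an EBS volume.
# Name, size and volumeID below are assumptions for illustration.
pv = {
    "apiVersion": "v1",
    "kind": "PersistentVolume",
    "metadata": {"name": "ebs-pv"},
    "spec": {
        "capacity": {"storage": "10Gi"},
        # An EBS volume attaches to a single node at a time,
        # hence the ReadWriteOnce access mode.
        "accessModes": ["ReadWriteOnce"],
        "awsElasticBlockStore": {
            "volumeID": "vol-0abc123de456f7890",  # hypothetical ID
            "fsType": "ext4",
        },
    },
}
```

    A pod would then claim this storage through a PersistentVolumeClaim rather than referencing the EBS volume directly, which keeps the pod spec portable across environments.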

  • Everything You Want To Know About Anthos - Google's Hybrid And Multi-Cloud Platform

    Google's big bet on Anthos will benefit the industry, open source community, and the cloud native ecosystem in accelerating the adoption of Kubernetes.

  • Raise a Stein for OpenStack: Latest release brings faster containers, cloud resource management

    The latest OpenStack release is out in the wilds. Codenamed Stein, the platform update is said to allow for much faster Kubernetes deployments, adds new IP and bandwidth management features, and introduces a software module focused on cloud resource management – Placement.

    In keeping with the tradition, the 19th version of the platform was named Stein after Steinstraße or "Stein Street" in Berlin, where the OpenStack design summit for the corresponding release took place in 2018.

    OpenStack is not a single piece of software, but a framework consisting of an integration engine and nearly 50 interdependent modules or projects, each serving a narrowly defined purpose, like Nova for compute, Neutron for networking and Magnum for container orchestration, all linked together using APIs.

  • OpenStack Stein launches with improved Kubernetes support

    The OpenStack project, which powers more than 75 public and thousands of private clouds, launched the 19th version of its software this week. You’d think that after 19 updates to the open-source infrastructure platform, there really isn’t all that much new the various project teams could add, given that we’re talking about a rather stable code base here. There are actually a few new features in this release, though, as well as all the usual tweaks and feature improvements you’d expect.

    While the hype around OpenStack has died down, we’re still talking about a very active open-source project. On average, there were 155 commits per day during the Stein development cycle. As far as development activity goes, that keeps OpenStack on the same level as the Linux kernel and Chromium.

  • Community pursues tighter Kubernetes integration in Openstack Stein

    The latest release of open source infrastructure platform Openstack, called 'Stein', was released today with updates to container functionality, edge computing and networking upgrades, as well as improved bare metal provisioning and tighter integration with popular container orchestration platform Kubernetes - led by super-user science facility CERN.

    It also marks roughly a year since the Openstack Foundation pivoted towards creating a more all-encompassing brand that covers under-the-bonnet open source in general, with a new umbrella organisation called the Open Infrastructure Foundation. Openstack itself had more than 65,000 code commits in 2018, with an average of 155 per day during the Stein cycle.

  • Why virtualisation remains a technology for today and tomorrow

    The world is moving from data centres to centres of data. In this distributed world, virtualisation empowers customers to secure business-critical applications and data regardless of where they sit, according to Andrew Haschka, Director, Cloud Platforms, Asia Pacific and Japan, VMware.

    “We think of server and network virtualisation as being able to enable three fundamental things: a cloud-centric networking fabric, with intrinsic security, and all of it delivered in software. This serves as a secure, consistent foundation that drives businesses forward,” said Haschka in an email interview with Networks Asia. “We believe that virtualisation offers our customers the flexibility and control to bring things together and choose which way their workloads and applications need to go – this will ultimately benefit their businesses the most.”

  • Happy 55th birthday mainframe

    7 April marked the 55th birthday of the mainframe. It was on that day in 1964 that the System/360 was announced and the modern mainframe was born. IBM’s Big Iron, as it came to be called, took a big step ahead of the rest of the BUNCH (Burroughs, UNIVAC, NCR, Control Data Corporation, and Honeywell). The big leap of imagination was to have software that was architecturally compatible across the entire System/360 line.

  • Red Hat strategy validated as open hybrid cloud goes mainstream

    “Any products, anything that would release to the market, the first filter that we run through is: Will it help our customers with their open hybrid cloud journey?” said Ranga Rangachari (pictured), vice president and general manager of storage and hyperconverged infrastructure at Red Hat.

    Rangachari spoke with Dave Vellante (@dvellante) and Stu Miniman (@stu), co-hosts of theCUBE, SiliconANGLE Media’s mobile livestreaming studio, during the Google Cloud Next event. They discussed adoption of open hybrid cloud and how working as an ecosystem is critical for success in solving storage and infrastructure problems (see the full interview with transcript here). (* Disclosure below.)

Google Openwashing of Its Surveillance 'Cloud'

Filed under
Server
Google
OSS

GAFAM Competing Over Who's Friendliest to Free/Open Source Software

  • Google Takes a Friendlier Path to Open Source Than Amazon

    Google recently announced partnerships with MongoDB, Redis Labs, and several other open-source data management companies. The crux of the partnership is that these companies' offerings will be more tightly integrated into Google's Cloud Platform. Customers will be able to use these select applications from one unified Google Cloud interface, rely on Google's technical support for these apps, and receive a unified bill for all.

    Financials were not disclosed, though TechCrunch suggested some sort of profit-sharing arrangement. While these open-source companies probably don't like giving away part of their revenue, Google is also taking care of associated customer support costs; in addition, some revenue from wider distribution is certainly better than nothing, which is what these companies receive when a user opts for Amazon's in-house imitations.

  • Google Cloud challenges AWS with new open-source integrations

    Google today announced that it has partnered with a number of top open-source data management and analytics companies to integrate their products into its Google Cloud Platform and offer them as managed services operated by its partners.

More in Tux Machines

NomadBSD 1.2 released!

We are pleased to announce the release of NomadBSD 1.2! We would like to thank all the testers who sent us feedback and bug reports.

Review: Alpine Linux 3.9.2

Alpine Linux is different in some important ways compared to most other distributions. It uses different libraries, it uses a different service manager (than most), it has different command line tools and a custom installer. All of this can, at first, make Alpine feel a bit unfamiliar, a bit alien. But what I found was that, after a little work had been done to get the system up and running (and after a few missteps on my part), I began to greatly appreciate the distribution.

Alpine is unusually small and requires few resources. Even the larger Extended edition I was running required less than 100MB of RAM and less than a gigabyte of disk space after all my services were enabled. I also appreciated that Alpine ships with some security features, like PIE, and does not enable any services it does not need to run.

I believe it is fair to say this distribution requires more work to set up. Installing Alpine is not a point-and-click experience; it is more manual and requires a bit of typing. Not as much as setting up Arch Linux, but still more work than average. Setting up services requires a little more work and, in some cases, more reading, since Alpine works a little differently than mainstream Linux projects. I repeatedly found it was a good idea to refer to the project's wiki to learn which steps were different on Alpine.

What I came away thinking at the end of my trial, and I probably sound old (or at least old fashioned), is that Alpine Linux reminds me of what got me into running Linux in the first place, about 20 years ago. Alpine is fast, light and transparent. It offered very few surprises and does almost nothing automatically. This results in a little more effort on our part, but it means that Alpine does not do things unless we ask it to perform an action. It is lean, efficient and does not go around changing things or trying to guess what we want to do. These are characteristics I sometimes miss these days in the Linux ecosystem.
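The manual service setup the reviewer describes comes from Alpine using OpenRC rather than systemd: services are installed, enabled and started by hand with `rc-update` and `rc-service`. A minimal sketch of that workflow follows; the package and service names (`openssh`/`sshd`) are illustrative examples, not taken from the review, and the commands are guarded so they only run on a system that actually has the OpenRC tools:

```shell
# Sketch of Alpine's per-service setup steps (OpenRC, not systemd).
if command -v rc-update >/dev/null 2>&1; then
    apk add openssh               # install a package with Alpine's apk
    rc-update add sshd default    # enable sshd in the "default" runlevel
    rc-service sshd start         # start the service immediately
else
    echo "OpenRC tools not found; run these commands on Alpine Linux"
fi
ALPINE_SKETCH_DONE=yes            # marker showing the sketch ran through
```

On a fresh system, the `setup-alpine` script handles the initial installation questions; the per-service steps above are the kind of extra typing and wiki-reading the review alludes to.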

today's howtos

Linux v5.1-rc6

It's Easter Sunday here, but I don't let little things like random major religious holidays interrupt my kernel development workflow. The occasional scuba trip? Sure. But everybody sitting around eating traditional foods? No. You have to have priorities. There's only so much memma you can eat even if your wife had to make it from scratch because nobody eats that stuff in the US.

Anyway, rc6 is actually larger than I would have liked, which made me go back and look at history, and for some reason that's not all that unusual. We recently had similar rc6 bumps in both 4.18 and 5.0. So I'm not going to worry about it. I think it's just random timing of pull requests, and almost certainly at least partly due to the networking pull request in here (with just over a third of the changes being networking-related, either in drivers or core networking).

Also: Linux 5.1-rc6 Kernel Released In Linus Torvalds' Easter Day Message