Server Administration

Filed under
Server
  • Big Blue Aims For The Sky With Power9

    Intel has the kind of control in the datacenter that only one vendor in the history of data processing has ever enjoyed. That other company is, of course, IBM, and Big Blue wants to take back some of the real estate it lost in the datacenters of the world in the past twenty years.

    The Power9 chip, unveiled at the Hot Chips conference this week, is the best chance the company has had to make some share gains against X86 processors since the Power4 chip came out a decade and a half ago and set IBM on the path to dominance in the RISC/Unix market.

    IBM laid out a roadmap that runs past 2020 for its Power family of processors back at the OpenPower Summit in early April, demonstrating its commitment to the CPU market with chips that offer a brawny alternative to Intel's Xeon and Xeon Phi processors, to the relatively less brawny chips from ARM server chip makers such as Applied Micro and Cavium, and to the expected products from AMD, Broadcom, and Qualcomm. We pondered IBM’s prospects in the datacenter in the wake of some details coming out about next year’s Power9 processors, which IBM said at the time would come in two flavors: one aimed at scale-out machines with one or two sockets and another aimed at scale-up machines with NUMA architectures, lots of sockets, and shared memory.

  • ARM Announces ARM v8-A with Scalable Vector Extensions: Aiming for HPC and Data Center

    Today ARM is announcing an update to their line of architecture license products. With the goal of moving ARM more into the server, the data center, and high-performance computing, the new license add-on tackles a fundamental data center and HPC issue: vector compute. ARM v8-A with Scalable Vector Extensions won’t be part of any ARM microarchitecture license today, but for the semiconductor companies that build their own cores with the instruction set, this could see ARM move up into the HPC markets. Fujitsu is the first public licensee on board, with plans to include ARM v8-A cores with SVE in the Post-K RIKEN supercomputer in 2020.

  • The Sad State of Docker

    I have always been a big fan of Docker. This is very visible if you regularly read this blog. However, I have been very disappointed lately with how Docker handled the 1.12 release. I like to think of version 1.12 as a great proof of concept that should not have received the amount of attention that it already has. Let’s dive deep into what I found wrong.

    First, I do not think a company should market and promote exciting new features that have not been tested well. Every time Docker makes an announcement, the news spreads like a virus to blogs and news sites all over the globe. Tech blogs will basically copy and paste the exact same procedure that Docker discussed into a new blog post as if they were creating original content. This cycle repeats over and over again and becomes annoying because I am seeing the same story a million times. What I hate most about these recent redundant articles is that the features do not work as well as what is written about them.

  • Containers debunked: DevOps, security and why containers will not replace virtual machines

    The tech industry is full of exciting trends that promise to change the face of the industry and business as we know it, but one that is gaining a huge amount of focus is containers.

    However, problems threaten to take root in the mythology that has grown up around the technology, namely misconceptions over what it is, what can be done with it, and the idea that containers replace virtual machines.

    Lars Herrmann, GM of Integrated Solutions at Red Hat, spoke to CBR about five common misconceptions, but first the benefits.

    Herrmann said: “Containerisation can be an amazingly efficient way to do DevOps, so it’s a very practical way to get into a DevOps methodology and process inside an organisation, which is highly required in a lot of organisations because of the benefits in agility to be able to release software faster, better, and deliver more value.”

  • Rackspace Going Private after $4.3 Billion Buyout

    The company released Rackspace Private Cloud powered by Red Hat in February. Using the Red Hat Enterprise Linux OpenStack Platform, the product helped extend Rackspace's OpenStack-as-a-service product slate.

  • SoylentNews' Folding@Home Team is Now in the Top 500 in the World

    It has only been six short months since SoylentNews' Folding@Home team was founded, and we've reached a major milestone: our team is now one of the top 500 teams in the world! We've already surpassed some heavy hitters like /. and several universities, including MIT. (But now is not the time to rest on our laurels. A certain Redmond-based software producer currently occupies #442.)

    In case you aren't familiar with folding@home, it's a distributed computing project that simulates protein folding in an attempt to better understand diseases such as Alzheimer's and Huntington's and thereby help to find a cure. To that end, SoylentNews' team has completed nearly 16,000 work units.

Servers/Networks

Filed under
Server
  • Rackspace to be Acquired for $4.3B

    Rackspace announced that it is being acquired in an all-cash deal valued at $4.3B. Pending regulatory anti-trust approval, the firm will be taken private by a group of investors led by Apollo Global Management in Q4 of 2016.

    This valuation equates to a price of $32/share. The 38% premium cited in the announcement is calculated against a base share price from August 3, as the news about the pending acquisition began increasing the company's stock price as early as August 4.

    For historical context, this valuation falls considerably below the company’s peak market capitalization in January 2013 when Rackspace was worth $10.9B. This means that the company’s current valuation – including the premium – is less than 40% of what it was at its highest point.
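
    As a quick sanity check, those figures can be reproduced from the numbers quoted above; a back-of-the-envelope calculation in Python (values taken directly from the announcement, rounding is approximate):

      # Back-of-the-envelope check of the figures quoted above.
      offer_per_share = 32.00      # all-cash offer, USD per share
      premium = 0.38               # 38% premium cited in the announcement
      deal_value = 4.3e9           # total deal value, USD
      peak_market_cap = 10.9e9     # peak market capitalization, January 2013

      base_price = offer_per_share / (1 + premium)
      print("Implied August 3 base price: $%.2f" % base_price)      # roughly $23.19

      ratio = deal_value / peak_market_cap
      print("Deal value vs. 2013 peak: %.0f%%" % (ratio * 100))     # roughly 39%, i.e. under 40%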

  • More on Open Source Tools for Data Science

    Open source tools are having a transformative impact on the world of data science. In a recent guest post here on OStatic, Databricks' Kavitha Mariappan, Vice President of Marketing, discussed some of the most powerful open source solutions for use in the data science arena. Databricks was founded by the creators of the popular open source Big Data processing engine Apache Spark, which is itself transforming data science.

    Here are some other open source tools in this arena to know about.

    As Mariappan wrote: "Apache Spark, a project of the Apache Software Foundation, is an open source platform for distributed in-memory data processing. Spark supports complete data science pipelines with libraries that run on the Spark engine, including Spark SQL, Spark Streaming, Spark MLlib and GraphX. Spark SQL supports operations with structured data, such as queries, filters, joins, and selects. In Spark 2.0, released in July 2016, Spark SQL comprehensively supports the SQL 2003 standard, so users with experience working with SQL on relational databases can learn how to work with Spark quickly."
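
    For readers who want a concrete feel for those Spark SQL operations, here is a minimal PySpark 2.0 sketch; the file names, columns and query are invented for illustration and are not from the Databricks post:

      from pyspark.sql import SparkSession

      # Spark 2.0 entry point (replaces the older SQLContext/HiveContext).
      spark = SparkSession.builder.appName("spark-sql-demo").getOrCreate()

      # Hypothetical input data, purely for illustration.
      orders = spark.read.json("orders.json")
      customers = spark.read.json("customers.json")

      # Register the DataFrames as temporary views so they can be queried with SQL.
      orders.createOrReplaceTempView("orders")
      customers.createOrReplaceTempView("customers")

      # A query combining a filter, a join and a select, as described above.
      totals = spark.sql("""
          SELECT c.name, SUM(o.amount) AS total_spent
          FROM orders o
          JOIN customers c ON o.customer_id = c.id
          WHERE o.amount > 100
          GROUP BY c.name
      """)
      totals.show()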

  • SDN, open source nexus to accelerate service creation

    What's new in the SDN blog world? One expert says SDN advancements will be accelerated, thanks to SDN and open source convergence, while another points out the influence SDN has in the cloud industry.

  • Platform9 & ZeroStack Make OpenStack a Little More VMware-Friendly

    Platform9 and ZeroStack are adding VMware high availability to their prefab cloud offerings, part of the ongoing effort to make OpenStack better accepted by enterprises.

    OpenStack is a platform, an archipelago of open source projects that help you run a cloud. But some assembly is required. Both Platform9 and ZeroStack are operating on the theory that OpenStack will better succeed if it’s turned into more of a shrink-wrapped product.

  • Putting Ops Back in DevOps

    What Agile means to your typical operations staff member is, “More junk coming faster that I will get blamed for when it breaks.” There always is tension between development and operations when something goes south. Developers are sure the code worked on their machine; therefore, if it does not work in some other environment, operations must have changed something that made it break. Operations sees the same code perform differently on the same machine with the same config, which means if something broke, the most recent change must have caused it … i.e. the code did it. The finger-pointing squabbles are epic (no pun intended). So how do we get Ops folks interested in DevOps without promising them only a quantum order of magnitude more problems—and delivered faster?

  • Cloud chronicles

    How open-source software and cloud computing have set up the IT industry for a once-in-a-generation battle

Hosting, Servers, VMs and Containers

Filed under
Server
  • Open Source, Containers and the Cloud: News from ContainerCon and LinuxCon

    LinuxCon and ContainerCon, events focused on Linux, containers and open source software, wrapped up this week in Toronto. Here's a round-up of the announcements and insights related to cloud computing that emerged from the meeting.

    LinuxCon and ContainerCon are co-located events. That made for an interesting combination this year because Linux is an established technology, which is celebrating its twenty-fifth anniversary. In contrast, containers remain a new and emerging enterprise technology. (Yes, containers themselves are much older, but it has only been in the past three years, with the launch of Docker, that containers have become a big deal commercially.)

    The two events thus paired discussion of a very entrenched platform, Linux, with one that is still very much in development. But open source, the coding and licensing model behind both Linux and container platforms like Docker, tied everything together.

  • Citrix Enables NetScaler for Containers and Micro-Services

    At the LinuxCon ContainerCon event here, a core topic of discussion is how to enable enterprises to embrace containers. Citrix has a few ideas on how to help and is announcing enhancements to its NetScaler networking gear to enable load balancing for containers and micro-services.

  • Want to Work for a Cloud Company? Here’s the Cream of the Crop

    What do Asana, Greenhouse Software, WalkMe, Chef Software, and Sprout Social have in common? They’ve been deemed the very best privately held “cloud” companies to work for, according to new rankings compiled by Glassdoor and venture capital firm Battery Ventures.

    For “The 50 Highest Rated Private Cloud Computing Companies,” Glassdoor and Battery worked with Mattermark to come up with a list of non-public companies that offer cloud-based services, and then culled them, making sure that each entry had at least 30 Glassdoor reviews, Neeraj Agrawal, Battery Ventures general partner, told Fortune.

  • Red Hat Updates its Kernel-based Virtual Machine

    Red Hat updated its Kernel-based Virtual Machine (KVM)-powered virtualization platform for both Linux- and Windows-based workloads.

  • Red Hat Virtualization 4 Takes on Proprietary Competition

    Red Hat continues to move well beyond its core enterprise Linux-based roots with a string of new releases. The company has announced the general availability of Red Hat Virtualization 4, the latest release of its Kernel-based Virtual Machine (KVM)-powered virtualization platform. It fully supports OpenStack’s Neutron – the networking project leveraged in SDNs.

    The company emphasizes that Red Hat Virtualization 4 challenges the economics and complexities of proprietary virtualization solutions by providing a fully-open, high-performing, more secure, and centrally managed platform for both Linux- and Windows-based workloads. It combines an updated hypervisor, advanced system dashboard, and centralized networking for users’ evolving workloads.

Linux on Servers

Filed under
Server
  • Kontena Launches Container Platform, Banks Seed Funding

    Startup Kontena has launched a container and microservices platform that, it claims, is designed to be developer friendly, easy to install and able to run at any scale -- attributes that, Kontena says, differentiate it from the current crop of container platforms.

    The Menlo Park, Calif.-based company, founded in March 2015, has also raised $2 million seed funding from Helsinki-based Lifeline Ventures. It also has a clever name: Say it out loud -- cute, huh?

    According to the team at Kontena Inc., the startup's container and microservices platform requires zero maintenance, is designed for automatic updates, and runs on any infrastructure, including on-premises, cloud and hybrid. Combined, those attributes make it an easy-to-use alternative to platforms such as Docker, Kubernetes, Heroku and Mesosphere, the company says.

  • Stabilizing the world of hot and fast containers

    Containers are moving targets in multiple ways. With multiple tools, frameworks, implementations, and use cases to accomplish any task, it can be a fast-moving chaotic container world, which is a natural consequence of being young and popular.

    The good news is that all of this creative incubation is hugely productive, and because it's all open source everyone gets to share the benefits of all of this fabulous creativity. The bad news is that it's a giant energized cat herd. How do we know what direction to take? Must we plan for the work we do today to be obsolete in a few months? And, what about portability?

    I'd like to provide a few insights into the future of containers, and the direction we can expect the state of the art technology to take.

  • How DIGIT Created High Availability on the Public Cloud to Keep Its Games Running

    Emmanuel & Ross: HA is achievable on the public cloud. In our case, we couple redundancy across Availability Zones (AZs) with monitoring and autonomous systems to ensure our games can keep running. Using only one AZ will not ensure HA, as that entire zone could fail for a short time. Each of our applications runs in multiple containers at the same time. They're all monitored to handle the current load. When one container is down, another takes its place. The same applies to all parts of our infrastructure. All services are autoscaling and behind a service discovery system. On top of this, nodes in our cluster are deployed across multiple AZs, each of which is an isolated network with its own NAT gateway. This way we can survive a whole zone going down.
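
    DIGIT leans on autoscaling and service discovery rather than hand-rolled scripts, but the basic "replace a dead container" idea can be illustrated with a toy watchdog built on the Docker SDK for Python; the check interval and restart behavior here are invented for the sketch:

      import time

      import docker

      # Toy watchdog: restart any container that is not in the "running" state.
      # A real deployment would let an orchestrator or autoscaler do this instead.
      client = docker.from_env()

      while True:
          for container in client.containers.list(all=True):
              if container.status != "running":
                  print("container %s is %s, restarting" % (container.name, container.status))
                  container.restart()
          time.sleep(30)  # arbitrary check interval for this sketch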

  • Citrix Gives Away Netscaler Containers for Free

    Netscaler CPX Express, a developer version of the CPX container, is available for free download, the company announced yesterday at LinuxCon North America in Toronto. There’s even a catchy URL for it: microloadbalancer.com

  • LinuxCon: How Facebook Monitors Hundreds of Thousands of Servers with Netconsole

    The original kernel documentation for the feature explains that the netconsole module logs kernel printk messages over UDP, allowing debugging of problems where disk logging fails and serial consoles are impractical.

    Many organizations will choose to use syslog as a way to track potential server errors, but Owens said kernel bugs can crash a machine, so it doesn't help nearly as much as netconsole.

    He added that Facebook had a system in the past for monitoring that used syslog-ng, but it was less than 60 percent reliable. In contrast, Owens stated netconsole is highly scalable and can handle enormous log volume with greater than 99.99 percent reliability.

    "Netconsole is fanatically easy to deploy," Owens said. "Configuration is independent of the hardware and by definition you already have a network."

Servers/Networks

Filed under
Server
  • PLUMgrid Advances SDN with CloudSecure

    Software Defined Networking (SDN) vendor PLUMgrid is helping to secure its product portfolio and its customers with a new technology it calls CloudSecure. The goal with CloudSecure is to help provide policy and structure for organizations to build secure micro-segmented networking in the cloud.

  • Networking, Security & Storage with Docker & Containers: A Free eBook Covers the Essentials
  • How Hardware Can Boost NFV Adoption
  • Datera’s Elastic Data Fabric Integrates With Kubernetes

    Today Datera announced a new integration with Google’s Kubernetes system. Datera states that its intent-defined universal data fabric complements the Kubernetes operational model well. An integration of the two enables automatic provisioning and deployment of stateful applications at scale. According to Datera, this integration with Kubernetes will let it translate application service level objectives, such as performance, durability, security, and capacity, into its universal data fabric. Datera goes on to claim that the integration will allow enterprise and service provider clouds to seamlessly and cost-effectively scale applications of any kind.
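
    The announcement does not spell out the mechanics, but the general Kubernetes pattern for provisioning storage to a stateful application looks roughly like the sketch below, using the official Python client; the "datera" storage class name and the volume size are assumptions made for illustration:

      from kubernetes import client, config

      # Load credentials from the local kubeconfig (e.g. ~/.kube/config).
      config.load_kube_config()
      core_v1 = client.CoreV1Api()

      # Request a 10 GiB volume from an assumed "datera" storage class; the storage
      # backend is then expected to provision and attach the volume automatically.
      pvc = client.V1PersistentVolumeClaim(
          metadata=client.V1ObjectMeta(name="app-data"),
          spec=client.V1PersistentVolumeClaimSpec(
              access_modes=["ReadWriteOnce"],
              storage_class_name="datera",  # assumed name, for illustration only
              resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
          ),
      )
      core_v1.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)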

  • Huawei Launches a Kubernetes-based Container Engine

    Joining an increasing number of companies, Asian telecommunications giant Huawei Technologies has released its own container orchestration engine, the Cloud Container Engine (CCE).

Create modular server-side Java apps direct from mvn modules with diet4j instead of an app server

Filed under
Server

In the latest release, the diet4j module framework for Java has learned to run modular Java apps using the Apache jsvc daemon (best known for running Tomcat on many Linux distros).

Servers News

Filed under
Server
  • Cloud-Based Systems Can Accelerate the Benefits of Big Data
  • The Linux Foundation Announces Big Data Platform for Network Analytics

    The Linux Foundation, the nonprofit advancing professional open source management for mass collaboration, today announced that the Platform for Network Data Analytics (PNDA) is now a Linux Foundation project. PNDA provides an open source, scalable platform for next-generation network analytics. The project has also announced the availability of its initial platform release.

  • Amazon Web Services Introduces Load Balancing for Containers

    The load balancing news comes as part of AWS’s move to make it easier for its customers to use containers. To do that, it’s in the process of integrating capabilities of its Elastic Compute Cloud (EC2) platform into ECS, Amazon’s system that allows customers to run containerized applications.
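
    In practice, hooking an ECS service up to the new load balancing support comes down to pointing the service at a target group. A hedged boto3 sketch follows; every name and ARN in it is a placeholder rather than a value from the announcement:

      import boto3

      ecs = boto3.client("ecs")

      # Create a service whose tasks are registered with an existing target group,
      # so the load balancer spreads traffic across the running containers.
      ecs.create_service(
          cluster="demo-cluster",            # placeholder cluster name
          serviceName="web",                 # placeholder service name
          taskDefinition="web-task:1",       # placeholder task definition
          desiredCount=2,
          loadBalancers=[{
              "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web/0123456789abcdef",
              "containerName": "web",
              "containerPort": 80,
          }],
      )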

  • Performance Improvement For Virtual NVMe Devices

    Helen Koike of Collabora has been one of the developers looking to optimize the performance of virtual NVMe devices, such as those used by Google's Compute Engine.

  • Keynote: Making Data Accessible - Ashish Thusoo, Co-founder & CEO, Qubole
  • OpenStack Community Challenged By Dearth Of Talent, Complexity

    The OpenStack community has grown at breakneck pace since the open-source cloud orchestration technology burst on the scene in 2010, a product of NASA and Rackspace Hosting.

    As envisioned by its developers, OpenStack provided a welcome alternative to proprietary IaaS solutions and an opportunity for independent service providers to build robust public and hybrid clouds with distributed computing resources that had the functionality and power to compete with the big boys, including industry-dominating Amazon Web Services.

  • How to Avoid Pitfalls in Doing Your OpenStack Deployment

    How fast is the OpenStack global cloud management market growing? Research and Markets analysts are out with a new report that forecasts the global OpenStack cloud management market to grow at a CAGR of 30.49% during the period 2016-2020.

    According to the report: "Cloud brokerage services that provide management and maintenance services to enterprises will be a key trend for market growth." However, this report and others forecast that technical issues and difficulties surrounding OpenStack deployments will be on the increase. In this post, you'll find resources that can help you avoid the pitfalls present in doing an OpenStack deployment.

    "OpenStack talent is a rarified discipline," Josh McKenty, who helped develop the platform, has told CRN, adding, "to be good with OpenStack, you need to be a systems engineer, a great programmer but also really comfortable working with hardware."

SUSE, IBM, and Servers

Filed under
GNU
Linux
Server
SUSE

More on SUSE, Mirantis, Red Hat, and OpenStack

Filed under
Red Hat
Server
OSS
SUSE

More in Tux Machines

Opera Data Breach, Security of Personal Data

  • Opera User? Your Stored Passwords May Have Been Stolen
    Barely a week passes without another well-known web company suffering a data breach or hack of some kind. This week it is Opera’s turn. Opera Software, the company behind the web browser, recently sold to a Chinese consortium for $600 million, reported a ‘server breach incident’ on its blog this weekend.
  • When it comes to protecting personal data, security gurus make their own rules
    Marcin Kleczynski, CEO of a company devoted to protecting people from hackers, has safeguarded his Twitter account with a 14-character password and by turning on two-factor authentication, an extra precaution in case that password is cracked. But Cooper Quintin, a security researcher and chief technologist at the Electronic Frontier Foundation, doesn’t bother running an anti-virus program on his computer. And Bruce Schneier? The prominent cryptography expert and chief technology officer of IBM-owned security company Resilient Systems won’t even risk talking about what he does to secure his devices and data.

Android Leftovers

FOSS and Linux Events

  • On speaking at community conferences
    Many people reading this have already suffered me talking to them about Prometheus. In personal conversation, or in the talks I gave at DebConf15 in Heidelberg, the Debian SunCamp in Lloret de Mar, BRMlab in Prague, and even at a talk on a different topic at the RABS in Cluj-Napoca.
  • TPM Microconference Accepted into LPC 2016
    Although trusted platform modules (TPMs) have been the subject of some controversy over the years, it is quite likely that they have important roles to play in preventing firmware-based attacks, protecting user keys, and so on. However, some work is required to enable TPMs to successfully play these roles, including getting TPM support into bootloaders, securely distributing known-good hashes, and providing robust and repeatable handling of upgrades. In short, given the ever-more-hostile environments that our systems must operate in, it seems quite likely that much help will be needed, including from TPMs. For more details, see the TPM Microconference wiki page.
  • More translations added to the SFD countdown
    Software Freedom Day is celebrated all around the world and as usual our community helps us to provide marketing materials in their specific languages. While the wiki is rather simple to translate, the Countdown remains a bit more complicated and time consuming to localize. One needs to edit the SVG file and generate roughly 100 pictures, then upload them to the wiki. Still, this doesn’t scare the SFD teams around the world, and we are happy to announce that three more languages are ready to be used: French, Chinese and German!
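
    For the curious, the "edit the SVG and generate roughly 100 pictures" step is easy to script; a sketch using cairosvg follows, where the template file name and the DAYS_LEFT placeholder are invented for the example and are not the SFD team's actual tooling:

      import cairosvg

      # Hypothetical countdown template containing a "DAYS_LEFT" placeholder.
      with open("countdown-template.svg", encoding="utf-8") as f:
          template = f.read()

      # Render one PNG per remaining day, roughly 100 images in total.
      for days_left in range(100, -1, -1):
          svg = template.replace("DAYS_LEFT", str(days_left))
          cairosvg.svg2png(bytestring=svg.encode("utf-8"),
                           write_to="countdown-%03d.png" % days_left)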

Second FreeBSD 11.0 Release Candidate Restores Support for 'nat global' in IPFW

Glen Barber from the FreeBSD project announced the availability of the second RC (Release Candidate) development build of the upcoming FreeBSD 11.0 operating system.