Containers: Docker Enterprise 2.1 and VMware Acquiring Heptio (for Kubernetes)

Filed under
Server
  • Docker Enterprise 2.1 Accelerates Application Migration to Containers

    Docker Inc. announced the release of Docker Enterprise 2.1 on Nov. 8, providing new features and services for containers running on both Windows and Linux servers.

    Among the capabilities that Docker is highlighting is the ability to migrate legacy applications, specifically Windows Server 2008, into containers, in an attempt to help with the challenge of end-of-life support issues. The release also provides enterprises with the new Docker Application Convertor, which identifies applications on Windows and Linux systems and then enables organizations to easily convert them into containerized applications. In addition, Docker is boosting security in the new release, with support for FIPS 140-2 (Federal Information Processing Standards) and SAML (Security Assertion Markup Language) 2.0 authentication.

    "We've added support for additional versions of Windows Server, and we're the only container platform that actually supports Windows Server today," Banjot Chanana, vice president of product at Docker Inc., told eWEEK. "All in all, this really puts Windows containers at parity with Linux counterparts."

  • Why VMware Is Acquiring Heptio and Going All In for Kubernetes

    VMware is the company that did more than perhaps any other to help usher in the era of enterprise server virtualization that has been the cornerstone of the last decade of computing. Now VMware once again is positioning itself to be a leader, this time in the emerging world of Kubernetes-based, cloud-native application infrastructure.

    On Nov. 6, VMware announced that it is acquiring privately held Kubernetes startup Heptio, in a deal that could help further cement VMware's position as a cloud-native leader. Heptio was launched in 2016 by Kubernetes co-founders Craig McLuckie and Joe Beda in an effort to make Kubernetes easier for enterprises to use. Financial terms of the deal have not been publicly disclosed, though Heptio has raised $33.5 million in venture funding.

    VMware's acquisition of Heptio comes a week after IBM announced its massive $34 billion deal for Red Hat. While Heptio is a small startup, the core of what IBM was after in Red Hat is similar to what VMware is seeking with Heptio, namely a leg up in the Kubernetes space to enable the next generation of the cloud.

  • The Kubernetes World: VMware Acquires Heptio

    One week ago, a one-hundred-and-seven-year-old technology company bet its future, at least in part, on an open source project that turned four this past June. It shouldn't come as a total surprise, therefore, that a twenty-year-old, six-hundred-pound gorilla of virtualization paid a premium for one of the best-regarded collections of talent from that same open source project, the fact that containers are disruptive to classic virtualization notwithstanding.

    But just because it shouldn't come as a surprise in a rapidly consolidating and Kubernetes-obsessed market doesn't mean the rationale or the implications are immediately obvious. To explore why VMware paid an undisclosed but reportedly substantial sum for Heptio, then, let's examine what it means for the market, for Heptio, and for VMware, in that order.

More in Tux Machines

Deepin 15.8 - Attractive and Efficient, Excellent User Experience

Deepin is an open source GNU/Linux operating system, based on the Linux kernel and desktop applications, supporting laptops, desktops, and all-in-ones. deepin preinstalls the Deepin Desktop Environment (DDE) and nearly 30 deepin native applications, as well as several applications from the open source community, to meet users' daily learning and work needs. In addition, about a thousand applications are offered in the Deepin Store. deepin is developed by a professional operating system R&D team and the deepin technical community (www.deepin.org); the name "deepin" comes from that community and signifies a deep pursuit and exploration of life and the future.

Compared with deepin 15.7, the ISO size of deepin 15.8 has been reduced by 200MB. The new release features a newly designed control center, dock tray, and boot theme, as well as improved deepin native applications, aiming to bring users a more beautiful and efficient experience. Read more

Kernel: Zinc and 4.20 Merge Window

  • Zinc: a new kernel cryptography API
    We looked at the WireGuard virtual private network (VPN) back in August and noted that it is built on top of a new cryptographic API being developed for the kernel, called Zinc. There has been some controversy about Zinc and why a brand-new API was needed when the kernel already has an extensive crypto API. A recent talk by lead WireGuard developer Jason Donenfeld at Kernel Recipes 2018 would appear to be a serious attempt to reach out, engage with that question, and explain the what, how, and why of Zinc.

    WireGuard itself is small and, according to Linus Torvalds, a work of art. Two of its stated objectives are maximal simplicity and high auditability. Donenfeld initially tried to implement WireGuard using the existing kernel cryptography API, but found it impossible to do in any sane way. That led him to question whether it was even possible to meet those objectives using the existing API.

    By way of a case study, he considered big_key.c. This is kernel code designed to take a key, store it encrypted on disk, and then return the key to anyone asking for it who is allowed access. Donenfeld had taken a look at it and found that the crypto was totally broken. For a start, it used ciphers in Electronic Codebook (ECB) mode, which is known to leave gross structure in ciphertext (an ECB-encrypted image of Tux remains perceptible to the eye) and so is not recommended for any serious cryptographic use. Furthermore, according to Donenfeld, it was missing authentication tags (allowing ciphertext to be undetectably modified), it didn't zero keys out of memory after use, and it didn't use its sources of randomness correctly; there were many CVEs associated with it. So he set out to rewrite it using the crypto API, hoping to better learn the API with a view to using it for WireGuard.

    The first step with the existing API is to allocate an instance of a cipher "object". The syntax for doing so is arguably confusing; for example, you pass the argument CRYPTO_ALG_ASYNC to indicate that you don't want the instance to be asynchronous. When you've got it set up and want to encrypt something, you can't simply pass data by address. You must use scatter/gather lists, which in turn means that data in the vmalloc() area or on the stack can't just be encrypted with this API. The key you're using ends up attached not to the object you just allocated, but to the global instance of the algorithm in question, so if you want to set the key you must take a mutex lock first, to be sure that someone else isn't changing the key underneath you at the same time. This complexity has an associated resource cost: the memory requirements for a single key can approach a megabyte, and some platforms just can't spare that much. Normally one would use kvmalloc() to get around this, but the crypto API doesn't permit it. Although this was eventually addressed, the fix was not trivial.
  • 4.20 Merge window part 2
    At the end of the 4.20 merge window, 12,125 non-merge changesets had been pulled into the mainline kernel repository; 6,390 came in since last week's summary was written. As is often the case, the latter part of the merge window contained a larger portion of cleanups and fixes, but there were a number of new features in the mix as well.
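The ECB weakness Donenfeld flagged in big_key.c can be illustrated without any kernel code. The sketch below uses a deliberately fake block cipher (XOR with a fixed key) purely to show the mode-level problem: in ECB, identical plaintext blocks always produce identical ciphertext blocks, which is exactly what leaves "gross structure" visible in an encrypted image.

```python
# Toy demonstration of why ECB mode leaks structure: identical plaintext
# blocks encrypt to identical ciphertext blocks. The "cipher" here is a
# stand-in (XOR with a fixed key), not real cryptography.

BLOCK = 8

def toy_encrypt_block(block: bytes, key: bytes) -> bytes:
    # Placeholder for a real block cipher; any deterministic block cipher
    # exhibits the same ECB-mode behavior.
    return bytes(b ^ k for b, k in zip(block, key))

def ecb_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # ECB: encrypt each block independently, with no chaining or nonce.
    assert len(plaintext) % BLOCK == 0
    return b"".join(
        toy_encrypt_block(plaintext[i:i + BLOCK], key)
        for i in range(0, len(plaintext), BLOCK)
    )

key = b"8bytekey"
# Two identical blocks followed by a different one.
pt = b"AAAAAAAA" + b"AAAAAAAA" + b"BBBBBBBB"
ct = ecb_encrypt(pt, key)

# The repetition in the plaintext is plainly visible in the ciphertext:
print(ct[0:8] == ct[8:16])   # True: the repeated block leaks through
print(ct[0:8] == ct[16:24])  # False
```

A mode with chaining or a per-block counter (CBC, CTR, or an AEAD mode with authentication tags, which big_key.c also lacked) would make those two ciphertext blocks differ.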

Limiting the power of package installation in Debian

There is always at least a small risk when installing a package for a distribution. By its very nature, package installation is an invasive process; some packages require the ability to make radical changes to the system, changes that users surely would not want other packages to take advantage of. Packages that are made available by distributions are vetted for problems of this sort, though, of course, mistakes can be made. Third-party packages are an even bigger potential problem because they lack this vetting, as was discussed in early October on the debian-devel mailing list. Solutions in this area are not particularly easy, however.

Lars Wirzenius brought up the problem: "when a .deb package is installed, upgraded, or removed, the maintainer scripts are run as root and can thus do anything." Maintainer scripts are included in a .deb file to be run before and after installation or removal. As he noted, maintainer scripts for third-party packages (e.g. Skype, Chrome) sometimes add entries to the lists of package sources and signing keys; they do so in order to get security updates to their packages safely, but it may still be surprising or unwanted. Even packages released by Debian might, through simple mistakes, contain unwelcome surprises of various sorts.

He suggested that there could be a set of "profiles" that describe the kinds of changes a package installation might make. He gave a few examples, such as a "default" profile that only allows file installation in /usr, a "kernel" profile that can install in /boot and trigger rebuilds of the initramfs, or a "core" profile that can do anything. Packages would then declare which profile they required, and the dpkg command could arrange that a package's maintainer scripts could only make the kinds of changes allowed by its profile. Read more
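Wirzenius's profile idea can be sketched as a simple path check. The profile names below follow his examples, but the table format and the checking function are invented here for illustration; nothing like this exists in dpkg today.

```python
# Hypothetical sketch of the proposed "profiles": each profile names the
# path prefixes a package's installation may touch. Profile names follow
# Wirzenius's examples; the data structure is invented for illustration.

PROFILES = {
    "default": ["/usr"],           # only file installation under /usr
    "kernel":  ["/usr", "/boot"],  # may also install kernels in /boot
    "core":    ["/"],              # may do anything
}

def disallowed_paths(profile: str, paths: list[str]) -> list[str]:
    """Return the paths a package with this profile may NOT write to."""
    prefixes = PROFILES[profile]
    return [
        p for p in paths
        if not any(p == pre or p.startswith(pre.rstrip("/") + "/")
                   for pre in prefixes)
    ]

# A "default" package quietly adding an APT source would be flagged:
violations = disallowed_paths(
    "default",
    ["/usr/bin/foo", "/etc/apt/sources.list.d/foo.list"],
)
print(violations)  # ['/etc/apt/sources.list.d/foo.list']
```

A real implementation would have to confine the maintainer scripts themselves (which run arbitrary code as root), not merely audit file paths after the fact, which is part of why solutions here are not easy.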

SpamAssassin is back

The SpamAssassin 3.4.2 release was the first from that project in well over three years. At the 2018 Open Source Summit Europe, Giovanni Bechis talked about that release and those that will be coming in the near future. It would seem that, after an extended period of quiet, the SpamAssassin project is back and has rededicated itself to the task of keeping junk out of our inboxes.

Bechis started by noting that spam filtering is hard because everybody's spam is different. It varies depending on which languages you speak, what your personal interests are, which social networks you use, and so on. People vary, so results vary; he knows a lot of Gmail users who say that its spam filtering works well, but his own Gmail account is full of spam. Since Google knows little about him, it is unable to train itself to properly filter his mail. Just like Gmail, SpamAssassin isn't the perfect filter for everybody right out of the box; it's really a framework that can be used to create that filter. Getting the best out of it can involve spending some time writing rules, for example. Read more
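As a taste of the rule-writing Bechis mentions, here is a minimal custom rule of the kind one might add to a site's local.cf; the rule name, pattern, and score are invented for illustration, but the body/score/describe syntax is standard SpamAssassin:

```
# Add 2.0 points when the message body matches an invented spammy phrase
body     LOCAL_DEMO_PHRASE   /\breplica watches\b/i
score    LOCAL_DEMO_PHRASE   2.0
describe LOCAL_DEMO_PHRASE   Body mentions "replica watches"
```

Tuning a personal filter is largely a matter of accumulating rules like this, alongside training the Bayesian classifier on one's own ham and spam.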