
Servers: Containers, MapR, 'Serverless', Bonitasoft

Filed under
Server
  • Containers versus Operating Systems

    The most popular docker base container image is either busybox or scratch. This is driven by a movement that is equal parts puritanical and pragmatic. The puritan asks “Why do I need to run init(1) just to run my process?” The pragmatist asks “Why do I need a 700 meg base image to deploy my application?” And both, seeking immutable deployment units, ask “Is it a good idea that I can ssh into my container?” But let’s step back for a second and look at the history of how we got to the point where questions like this are even a thing.

    In the very beginning, there were no operating systems. Programs ran one at a time with the whole machine at their disposal. While efficient, this created a problem for the keepers of these large and expensive machines. To maximise their investment, the time between one program finishing and another starting had to be kept to an absolute minimum; hence monitor programs and batch processing were born.

  • MapR: How Next-Gen Applications Will Change the Way We Look at Data

    MapR is a Silicon Valley-based big data company. Its founders realized that data was going to become increasingly important, and that existing technologies, including open source Apache Hadoop, fell short of being able to support things like real-time transactional operational applications. So they spent years building out core technologies that resulted in the MapR products, including the flagship Converged Data Platform, platform-agnostic software that’s designed for the multicloud environment. It can even run on embedded Edge devices.

  • 7 Open-Source Serverless Frameworks Providing Functions as a Service

    With virtualization, organizations began to realize greater utilization of physical hardware. That trend continued with the cloud, as organizations moved their machines to pay-as-you-go services. Cloud computing further evolved when Amazon Web Services (AWS) launched its Lambda service in 2014, introducing a new paradigm in cloud computing that has become commonly referred to as serverless computing. In the serverless model, organizations pay for functions as a service without the need to pay for an always-on, stateful virtual machine.
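    The model is easiest to see in code. Below is a minimal sketch of a function-as-a-service handler in the style of AWS Lambda's Python runtime; the `lambda_handler(event, context)` signature matches Lambda's convention, but the event fields and greeting logic are illustrative assumptions, not any particular service's API.

```python
import json

def lambda_handler(event, context):
    # Invoked once per event; there is no always-on VM to manage or pay for.
    # The platform supplies `event` (the trigger payload) and `context`
    # (runtime metadata); any persistent state must live outside the function.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

    Billing then follows invocations rather than uptime, which is the economic shift the article describes.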

  • Bonitasoft Offers Open Source, Low-Code Platform on AWS Cloud

    Bonitasoft, a specialist in open source business process management and digital transformation software, is partnering with the Amazon Web Services Inc. (AWS) cloud to broaden the reach of its low-code development platform.

    That platform, just released in a new version called Bonita 7.6, comes in an open source version and a subscription version with professional support and advanced features.

Servers: Concurrency, Purism, InSpec, Kubernetes, Docker/Containers

Filed under
Server
  • Thinking Concurrently: How Modern Network Applications Handle Multiple Connections

    The idea behind a process is fairly simple. A running program consists of not only executing code, but also data and some context. Because the code, data and context all exist in memory, the operating system can switch from one process to another very quickly. This combination of code + data + context is known as a "process", and it's the basis for how Linux systems work.

    When you start your Linux box, it has a single process. That process then "forks" itself, such that two identical processes are running. The second ("child") process reads new code, data and context ("exec"), and thus starts running a new process. This continues throughout the time that a system is running. When you execute a new program on the command line with & at the end of the line, you're forking the shell process and then exec'ing your desired program in its place.
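    The fork-then-exec dance described above can be sketched directly with Python's POSIX bindings (a minimal illustration for Unix-like systems; the helper name is ours):

```python
import os

def run_in_child(argv):
    """Fork, exec a new program in the child, and return its exit status."""
    pid = os.fork()          # two identical processes now exist
    if pid == 0:
        # Child: replace our code, data and context with a new program.
        os.execvp(argv[0], argv)
        os._exit(127)        # reached only if exec failed
    # Parent: wait for the child, just as the shell does.
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)
```

    Running a command with & at the end of the line is the same pattern, except the shell simply declines to wait before printing its next prompt.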

  • New Purist Services – Standard Web Services Done Ethically

    When you sign up for a communication service, you are typically volunteering to store your personal, unencrypted data on someone else’s remote server farm. You have no way of ensuring that your data is safe or how it is being used by the owner of the server. However, online services are incredibly convenient especially when you have multiple devices.

  • Automated compliance testing with InSpec

    Don't equate compliance through certification with security, because compliance and security are not the same. We look at automated compliance testing with InSpec for the secure operation of enterprise IT.

  • How the Kubernetes Certification Ensures Interoperability

    Dan Kohn, executive director of the Cloud Native Computing Foundation, has called the launch of the new Kubernetes service provider certification program the most significant announcement yet made by the Foundation around the open source container orchestration engine.

    On this new episode of The New Stack Makers from KubeCon + CloudNativeCon 2017, we’ll learn more from Kohn and William Denniss, a product manager at Google, about how the program can help ensure interoperability and why that’s so important.

  • Container Structure Tests: Unit Tests for Docker Images

    Usage of containers in software applications is on the rise, and with their increasing usage in production comes a need for robust testing and validation. Containers provide great testing environments, but actually validating the structure of the containers themselves can be tricky. The Docker toolchain provides us with easy ways to interact with the container images themselves, but no real way of verifying their contents. What if we want to ensure a set of commands runs successfully inside of our container, or check that certain files are in the correct place with the correct contents, before shipping?
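    As a sketch of what such checks look like, the Container Structure Tests project expresses them declaratively in a YAML config; the schema shape is the tool's, but the specific paths and commands below are illustrative assumptions, not a real application:

```yaml
schemaVersion: 2.0.0
fileExistenceTests:
  - name: 'app config present'
    path: '/app/config.yaml'
    shouldExist: true
commandTests:
  - name: 'interpreter available'
    command: 'python3'
    args: ['--version']
    exitCode: 0
```

    A config like this is then run against a built image with the project's CLI before the image is shipped.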

  • Prometheus vs. Heapster vs. Kubernetes Metrics APIs

    In this blog post, I will try to explain the relationship between Prometheus, Heapster, and the Kubernetes metrics APIs, and conclude with the recommended way to autoscale workloads on Kubernetes.

  • Google Introduces Open Source Framework For Testing Docker Images

    Google has announced a new framework designed to help developers conduct unit tests on Docker container images. 

    The Container Structure Test gives enterprises a way to verify the structure and contents of individual containers to ensure that everything is as it should be before shipping to production, the company said in its Open Source blog on Jan. 9. 

    Google has been using the framework to test containers internally for more than a year and has released it publicly because it offers an easier way to validate the structure of Docker containers than other approaches, the company said.

Raspberry Pi: Hands-On with the Pi Server tool

Filed under
Linux
Server

When the Raspberry Pi Foundation announced Raspbian (Debian) Stretch for x86 and Macs, there was a very brief mention of something called PiServer to manage multiple Pi clients on a network, with a promise to cover it in more detail later.

Well, 'later' has now arrived, in the form of a new Raspberry Pi Blog post titled The Raspberry Pi PiServer Tool. In simple terms, the PiServer package allows you to manage multiple Raspberry Pi clients from a single PC or Mac server. Here are the key points:

Read more

Servers: Private Servers, Kubernetes Highlights

Filed under
Server
  • Explore private cloud platform options: Paid and open source

    An open source private cloud platform, Apache CloudStack offers a comprehensive management system that features usage metering and image deployment. It supports hypervisors including VMware ESXi, Microsoft Hyper-V, Citrix XenServer and KVM.

    CloudStack also handles features like tiered storage, Active Directory integration and some software-defined networking. As with other open source platforms, it takes a knowledgeable IT staff to install and support CloudStack.

  • 7 systems engineering and operations trends to watch in 2018

    Kubernetes domination

    Kubernetes came into its own in 2017 and its popularity will only grow in 2018. Edward Muller, engineering manager at Salesforce, predicts that building tools on top of Kubernetes is going to be more prevalent next year. “Previously, most tooling targeted one or more cloud infrastructure APIs,” says Muller. “Recent announcements of Kubernetes as a Service (KaaS?) from major cloud providers are likely to only hasten the shift.”

  • 2018: The Year of Kubernetes and Interoperability

    On its own, Kubernetes is a great story. What makes it even better is the soaring interoperability movement it’s fueling. An essential part of enabling interoperable cloud-native apps on Kubernetes is the Open Service Broker API. OSBAPI enables portability of cloud services across offerings and vendors. A collaborative project across multiple organizations, including Fujitsu, Google, IBM, Pivotal, Red Hat and SAP, it enables developers, ISVs, and SaaS vendors to deliver services to applications running within cloud-native platforms. In 2017, we saw adoption of the API by Microsoft and Google. Late in the year, Amazon and Pivotal partnered to expose Amazon’s services via the broker as well. Red Hat uses it to support the OpenShift marketplace.

Why I Find Nginx Practically Better Than Apache

Filed under
Server

According to the latest web server survey by Netcraft, carried out towards the end of 2017 (in November, to be precise), Apache and Nginx are the most widely used open source web servers on the Internet.

Apache is a free, open-source HTTP server for Unix-like operating systems and Windows. It was designed to be a secure, efficient and extensible server that provides HTTP services in sync with the prevailing HTTP standards.

Apache has been the most popular web server on the Internet since 1996. It is the de facto standard for web servers in the Linux and open source ecosystem. New Linux users normally find it easier to set up and use.

Nginx (pronounced ‘Engine-x’) is a free, open-source, high-performance HTTP server, reverse proxy, and an IMAP/POP3 proxy server. Just like Apache, it also runs on Unix-like operating systems and Windows.

Well known for its high performance, stability, simple configuration, and low resource consumption, Nginx has over the years become increasingly popular, and its usage on the Internet continues to climb. It is now the web server of choice among experienced system administrators and webmasters of top sites.

Read more

Servers: Five Linux Server Distributions to Consider in 2018, Spinnaker, 'Serverless', and Linux 2

Filed under
Server
  • Five Linux Server Distributions to Consider in 2018

    These five tried-and-tested Linux server distributions top our list for distros to consider for the data center or server room.

  • Get Started with Spinnaker on Kubernetes

    In the previous installment of the series, we introduced Spinnaker as a multicloud deployment tool. We will explore how to set up Spinnaker on the Kubernetes open source container orchestration engine and deploy your first application through it.

    In this tutorial, I will walk you through how to set up and configure Spinnaker on Minikube. Once it is up and running, we will deploy and scale a containerized application running in Kubernetes.

    Spinnaker is usually installed in a VM running Ubuntu 14.04 LTS. Thanks to the Helm community, it is now available as a Chart to install with just one command.

  • Know when to implement serverless vs. containers

    Serverless computing is either the perfect answer to an application deployment problem or an expensive disaster waiting to happen.

    VMs, containers and serverless architecture all have distinct pros and cons, but serverless might break everything if the applications aren't suited for that deployment architecture. To prevent an implosion in IT, give developers an educated assessment of serverless vs. containers for new deployments.

  • Amazon counters hybrid cloud model with Linux 2: Amazon launches next Linux server OS

    Amazon Web Services (AWS) recently launched Linux 2, with access to the latest 4.9 LTS kernel. According to the company, the newest version “provides a high performance, stable, and secure execution environment for cloud and enterprise applications.” The system includes five years of long-term security support and access to software packages through the Amazon Linux Extras repository. It is currently available for all AWS regions.

Servers: Twistlock, Linux 2, Hyperledger

Filed under
Server
  • Twistlock 2.3 Advances Container Security with Serverless Support

    Container security vendor Twistlock released version 2.3 of its container security platform on Jan. 3, including new features to help protect container workloads.

    Among the new features in the Twistlock 2.3 release are an improved Cloud Native App Firewall (CNAF), per-layer vulnerability analysis functionality, application-aware system call defense and new serverless security capabilities.

  • Amazon launches its own open-source OS 'Linux 2' for enterprise clients

    In a deviation from its earlier policy of not permitting users of its cloud services to run its operating systems on their own servers, Amazon has now launched its own version of the Linux OS, according to a report in VCCircle. The move by Amazon Web Services is seen as a response to rivals Oracle and Microsoft, which have been offering what is known as hybrid technology to their clients, in which clients of their cloud services can use the open platform Linux OS to run many other programs on their own servers as well as in the cloud.

    Up to now, Amazon did not provide this facility to its clients directly. Only the Amazon-owned data centers were permitted to run these OSs.

  • Hyperledger 3 years later: That's the sound of the devs... working on the chain ga-a-ang

    The Linux Foundation’s Hyperledger project was announced in December 2015. When Apache Web server daddy Brian Behlendorf took the helm five months later, the Foundation’s blockchain baby was still embryonic. He called it “day zero.”

    Driving Hyperledger was the notion of a blockchain, a distributed ledger whose roots are in digital currency Bitcoin, for the Linux ecosystem - a reference technology stack that those comfortable with a command line could experiment with and build their own blockchain systems and applications.

    Behlendorf, the project’s executive director, said upon assuming command in May 2016: “There are lots of things that we want to see built on top.”

Meltdown And Spectre CPU Flaws Put Computers, Laptops, Phones At Risk

Filed under
Linux
Server

Today, Google's security blog posted about two vulnerabilities that put many computers, phones and laptops using Intel, AMD and ARM CPUs at risk. Using the two major flaws, hackers can gain read access to system memory that may include sensitive data such as passwords and encryption keys.

Read more

Servers With GNU/Linux and Microsoft's Continuing Strategy of Gaming the Numbers by Taking Over Parked Domains

Filed under
GNU
Linux
Server
Microsoft
  • Amazon has quietly released a game changer for its cloud: Linux software that runs on corporate servers

    Amazon's cloud business quietly just took a big step outside the cloud.

    Last month, soon after Amazon Web Services' giant tech conference, the company started offering its enterprise customers a new version of the Linux operating system it calls Linux 2. The new product marks a departure for the cloud-computing juggernaut, as the software can be installed on customers' servers rather than run from Amazon's data centers.

    Amazon will rent access to Linux 2 to its cloud customers. But it's also making the software available for companies to install on their servers. There they can use it to run many of the most popular server software programs and technologies, including Microsoft's Hyper-V, VMware, Oracle's VM VirtualBox, Docker, and Amazon's Docker alternative, Amazon Machine Image.

  • December 2017 Web Server Survey

    The noticeable spike in Apache-powered domains in May 2013 was caused by the largest hosting company of the time, GoDaddy, switching a large number of its domains from Microsoft IIS to Apache Traffic Server (ATS). GoDaddy switched back to using IIS 7.5 a few months later.

    Today, Apache still has the largest market share by number of domains, with 81.4 million giving it a market share of 38.2%. It also saw the largest gain this month, increasing its total by 1.53 million. This growth was closely followed by nginx, with a gain of 1.09 million domains increasing its total to 47.5 million. While Microsoft leads by overall number of hostnames, it lags in 3rd position when considering the number of unique domains those sites run on, with a total of 22.8 million.

Servers: UCS App Center and Tips to Help Your Company Succeed in the Server Side

Filed under
Server
  • Install Range of Enterprise Applications in Few Clicks with UCS App Center

    Since the rise of smartphones, digital distribution platforms for computer software have multiplied, and with them the use of applications, or “apps”. Major players in this field are Apple and Google, offering all kinds of apps that are easy to download and integrate on people’s mobiles.

    But what about server and business applications for an organization that can be used both on-premise and in the cloud? How about being able to install a whole range of enterprise applications and integrate them in your IT environment with just a few clicks?

  • 7 Tips to Help Your Company Succeed in the Cloud

    That statement is a reflection of the state of our industry: companies and investors are looking to focus more on delivering and developing a product, and to spend less time and investment on maintaining infrastructure. The needs of our products have not changed - but how we create and maintain them has. As Linux and open source professionals of all types, we are at the center of this revolution. Not only is Linux the “foundation” for most public cloud providers; studies show a steady dominance of Linux deployments in the cloud, and the growth of container technologies such as Docker further grows the number of active Linux installs.

    The Linux Foundation and Dice Open Source Jobs Report echoes the importance of open source in companies today, with 60 percent looking for full-time professionals with open source experience. Plus, nearly half (47 percent) of hiring managers said they’ll pay for certifications just to bring employees up to speed on open source projects.


More in Tux Machines

Linux Kernel Development

  • New Sound Drivers Coming In Linux 4.16 Kernel
    Because longtime SUSE developer Takashi Iwai is going on holiday for the next few weeks, he has already sent in the sound driver feature updates targeting the upcoming Linux 4.16 kernel cycle. The sound subsystem in Linux 4.16 sees continued changes to the ASoC code, clean-ups to the existing drivers, and a number of new drivers.
  • Varlink: a protocol for IPC
    One of the motivations behind projects like kdbus and bus1, both of which have fallen short of mainline inclusion, is to have an interprocess communication (IPC) mechanism available early in the boot process. The D-Bus IPC mechanism has a daemon that cannot be started until filesystems are mounted and the like, but what if the early boot process wants to perform IPC? A new project, varlink, was recently announced; it aims to provide IPC from early boot onward, though it does not really address the longtime D-Bus performance complaints that also served as motivation for kdbus and bus1.

    The announcement came from Harald Hoyer, but he credited Kay Sievers and Lars Karlitski with much of the work. At its core, varlink is simply a JSON-based protocol that can be used to exchange messages over any connection-oriented transport. No kernel "special sauce" (such as kdbus or bus1) is needed to support it, as TCP or Unix-domain sockets will provide the necessary functionality. The messages can be used as a kind of remote procedure call (RPC) using an API defined in an interface file.
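    Since varlink is at heart just JSON over a connection-oriented transport, the mechanics fit in a few lines of Python. This sketch assumes the NUL-byte message framing and the `method`/`parameters` call shape of the announced protocol; the interface name is invented for illustration, and a socketpair stands in for a real Unix-domain or TCP connection.

```python
import json
import socket

def encode_call(method, parameters=None):
    # A varlink-style call: a JSON object naming the fully qualified
    # method, delimited from the next message by a NUL byte.
    msg = {"method": method}
    if parameters is not None:
        msg["parameters"] = parameters
    return json.dumps(msg).encode() + b"\0"

def read_message(sock, buf=b""):
    # Accumulate bytes until a NUL delimiter, then decode one message.
    while b"\0" not in buf:
        buf += sock.recv(4096)
    raw, _, rest = buf.partition(b"\0")
    return json.loads(raw), rest

# Demo: "client" sends one call, "server" reads and decodes it.
client, server = socket.socketpair()
client.sendall(encode_call("org.example.ping.Ping", {"payload": "hi"}))
msg, _ = read_message(server)
```

    Nothing here needs kernel support beyond ordinary sockets, which is exactly the point the article makes.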
  • Statistics for the 4.15 kernel
    The 4.15 kernel is likely to require a relatively long development cycle as a result of the post-rc5 merge of the kernel page-table isolation patches. That said, it should be in something close to its final form, modulo some inevitable bug fixes. The development statistics for this kernel release look fairly normal, but they do reveal an unexpectedly busy cycle overall. This development cycle was supposed to be relatively calm after the anticipated rush to get work into the 4.14 long-term-support release. But, while 4.14 ended up with 13,452 non-merge changesets at release, 4.15-rc6 already has 14,226, making it one of the busiest releases in the kernel project's history. Only 4.9 (16,214 changesets) and 4.12 (14,570) brought in more work, and 4.15 may exceed 4.12 by the time it is finished. So far, 1,707 developers have contributed to this kernel; they added 725,000 lines of code while removing 407,000, for a net growth of 318,000 lines of code.
  • A new kernel polling interface
    Polling a set of file descriptors to see which ones can perform I/O without blocking is a useful thing to do — so useful that the kernel provides three different system calls (select(), poll(), and epoll_wait() — plus some variants) to perform it. But sometimes three is not enough; there is now a proposal circulating for a fourth kernel polling interface. As is usually the case, the motivation for this change is performance.

    On January 4, Christoph Hellwig posted a new polling API based on the asynchronous I/O (AIO) mechanism. This may come as a surprise to some, since AIO is not the most loved of kernel interfaces and it tends not to get a lot of attention. AIO allows for the submission of I/O operations without waiting for their completion; that waiting can be done at some other time if need be. The kernel has had AIO support since the 2.5 days, but it has always been somewhat incomplete. Direct file I/O (the original use case) works well, as does network I/O. Many other types of I/O are not supported for asynchronous use, though; attempts to use the AIO interface with them will yield synchronous behavior. In a sense, polling is a natural addition to AIO; the whole point of polling is usually to avoid waiting for operations to complete.
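    For contrast with the proposed AIO-based API, the classic readiness model the article describes looks like this through Python's wrapper around poll(2) (a minimal sketch using a pipe as the watched file descriptor):

```python
import os
import select

r, w = os.pipe()                     # a pipe gives us a pollable fd pair
poller = select.poll()
poller.register(r, select.POLLIN)    # watch the read end for data

assert poller.poll(0) == []          # nothing readable yet

os.write(w, b"x")                    # make the read end ready
events = poller.poll(1000)           # list of (fd, eventmask) pairs
```

    The AIO-based proposal inverts this model: instead of repeatedly asking which descriptors are ready, the application submits operations and later collects their completions.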

Security: OpenSSL, IoT, and LWN Coverage of 'Intelpocalypse'

  • Another Face to Face: Email Changes and Crypto Policy
    The OpenSSL OMC met last month for a two-day face-to-face meeting in London, and like previous F2F meetings, most of the team was present and we addressed a great many issues. This blog post talks about some of them, and most of the others will get their own blog posts, or notices, later. Red Hat graciously hosted us for the two days, and both Red Hat and Cryptsoft covered the costs of their employees who attended.

    One of the overall threads of the meeting was about increasing the transparency of the project. By default, everything should be done in public. We decided to try some major changes to email and such.
  • Some Basic Rules for Securing Your IoT Stuff

    Throughout 2016 and 2017, attacks from massive botnets made up entirely of hacked [sic] IoT devices had many experts warning of a dire outlook for Internet security. But the future of IoT doesn’t have to be so bleak. Here’s a primer on minimizing the chances that your IoT things become a security liability for you or for the Internet at large.

  • A look at the handling of Meltdown and Spectre
    The Meltdown/Spectre debacle has, deservedly, reached the mainstream press and, likely, most of the public that has even a remote interest in computers and security. It only took a day or so from the accelerated disclosure date of January 3—it was originally scheduled for January 9—before the bugs were making big headlines. But Spectre has been known for at least six months and Meltdown for nearly as long—at least to some in the industry. Others that were affected were completely blindsided by the announcements and have joined the scramble to mitigate these hardware bugs before they bite users. Whatever else can be said about Meltdown and Spectre, the handling (or, in truth, mishandling) of this whole incident has been a horrific failure.

    For those just tuning in, Meltdown and Spectre are two types of hardware bugs that affect most modern CPUs. They allow attackers to cause the CPU to do speculative execution of code, while timing memory accesses to deduce what has or has not been cached, to disclose the contents of memory. These disclosures can span various security boundaries such as between user space and the kernel or between guest operating systems running in virtual machines. For more information, see the LWN article on the flaws and the blog post by Raspberry Pi founder Eben Upton that well describes modern CPU architectures and speculative execution to explain why the Raspberry Pi is not affected.
  • Addressing Meltdown and Spectre in the kernel
    When the Meltdown and Spectre vulnerabilities were disclosed on January 3, attention quickly turned to mitigations. There was already a clear defense against Meltdown in the form of kernel page-table isolation (KPTI), but the defenses against the two Spectre variants had not been developed in public and still do not exist in the mainline kernel. Initial versions of proposed defenses have now been disclosed. The resulting picture shows what has been done to fend off Spectre-based attacks in the near future, but the situation remains chaotic, to put it lightly.

    First, a couple of notes with regard to Meltdown. KPTI has been merged for the 4.15 release, followed by a steady trickle of fixes that is undoubtedly not yet finished. The X86_BUG_CPU_INSECURE processor bit is being renamed to X86_BUG_CPU_MELTDOWN now that the details are public; there will be bug flags for the other two variants added in the near future. 4.9.75 and 4.4.110 have been released with their own KPTI variants. The older kernels do not have mainline KPTI, though; instead, they have a backport of the older KAISER patches that more closely matches what distributors shipped. Those backports have not fully stabilized yet either. KPTI patches for ARM are circulating, but have not yet been merged.
  • Is it time for open processors?
    The disclosure of the Meltdown and Spectre vulnerabilities has brought a new level of attention to the security bugs that can lurk at the hardware level. Massive amounts of work have gone into improving the (still poor) security of our software, but all of that is in vain if the hardware gives away the game. The CPUs that we run in our systems are highly proprietary and have been shown to contain unpleasant surprises (the Intel management engine, for example). It is thus natural to wonder whether it is time to make a move to open-source hardware, much like we have done with our software. Such a move may well be possible, and it would certainly offer some benefits, but it would be no panacea. Given the complexity of modern CPUs and the fierceness of the market in which they are sold, it might be surprising to think that they could be developed in an open manner. But there are serious initiatives working in this area; the idea of an open CPU design is not pure fantasy. A quick look around turns up several efforts; the following list is necessarily incomplete.
  • Notes from the Intelpocalypse
    Rumors of an undisclosed CPU security issue have been circulating since before LWN first covered the kernel page-table isolation patch set in November 2017. Now, finally, the information is out — and the problem is even worse than had been expected. Read on for a summary of these issues and what has to be done to respond to them in the kernel.

    All three disclosed vulnerabilities take advantage of the CPU's speculative execution mechanism. In a simple view, a CPU is a deterministic machine executing a set of instructions in sequence in a predictable manner. Real-world CPUs are more complex, and that complexity has opened the door to some unpleasant attacks.

    A CPU is typically working on the execution of multiple instructions at once, for performance reasons. Executing instructions in parallel allows the processor to keep more of its subunits busy at once, which speeds things up. But parallel execution is also driven by the slowness of access to main memory. A cache miss requiring a fetch from RAM can stall the execution of an instruction for hundreds of processor cycles, with a clear impact on performance. To minimize the amount of time it spends waiting for data, the CPU will, to the extent it can, execute instructions after the stalled one, essentially reordering the code in the program. That reordering is often invisible, but it occasionally leads to the sort of fun that caused Documentation/memory-barriers.txt to be written.

US Sanctions Against Chinese Android Phones, LWN Report on Eelo

  • A new bill would ban the US government from using Huawei and ZTE phones
    US lawmakers have long worried about the security risks posed by the alleged ties between Chinese companies Huawei and ZTE and the country’s government. To that end, Texas Representative Mike Conaway introduced a bill last week called the Defending U.S. Government Communications Act, which aims to ban US government agencies from using phones and equipment from the companies.

    Conaway’s bill would prohibit the US government from purchasing and using “telecommunications equipment and/or services” from Huawei and ZTE. In a statement on his site, he says that technology coming from the country poses a threat to national security, and that use of this equipment “would be inviting Chinese surveillance into all aspects of our lives.” He cites US intelligence and counterintelligence officials who say that Huawei has shared information with state leaders, and that its business in the US is growing, representing a further security risk.
  • U.S. lawmakers urge AT&T to cut commercial ties with Huawei - sources
    U.S. lawmakers are urging AT&T Inc, the No. 2 wireless carrier, to cut commercial ties to Chinese phone maker Huawei Technologies Co Ltd and oppose plans by telecom operator China Mobile Ltd to enter the U.S. market because of national security concerns, two congressional aides said. The warning comes after the administration of U.S. President Donald Trump took a harder line on policies initiated by his predecessor Barack Obama on issues ranging from Beijing’s role in restraining North Korea to Chinese efforts to acquire U.S. strategic industries. Earlier this month, AT&T was forced to scrap a plan to offer its customers Huawei [HWT.UL] handsets after some members of Congress lobbied against the idea with federal regulators, sources told Reuters.
  • Eelo seeks to make a privacy-focused phone
    A focus on privacy is a key feature being touted by a number of different projects these days—from KDE to Tails to Nextcloud. One of the biggest privacy leaks for most people is their phone, so it is no surprise that there are projects looking to address that as well. A new entrant in that category is eelo, which is a non-profit project aimed at producing not only a phone, but also a suite of web services. All of that could potentially replace the Google or Apple motherships, which tend to collect as much personal data as possible.

today's howtos