

10 skills every Linux system administrator should have

Filed under
Server

I know what you're saying. You're saying, "Oh, great, someone else telling me that I need soft skills." Yes, that's what I'm telling you. Honing your interviewing skills can determine not only whether you get a particular job, but also the salary you're offered. It's true. Let's say, for example, that the salary range for a mid-level SA job is $56k to $85k per year. You might be fully qualified for the top of the range, but the company offers you $70k instead, mentioning some nonsense about growth potential or telling you that they'll bring you along when the time is right.

You need to practice answering questions. Answer the question that's asked. Don't give so much information that you see eyes glazing over, but giving answers that are too short will make you appear arrogant or flippant. Give enough examples of your work to let the interviewer(s) know that you know what you're talking about. They can ask for more details if they want to.

You have to learn to watch other people's behaviors. Are they listening to you? Are they focused on you and the interview? Do they look as though you haven't said enough when you pause to allow them to speak or ask another question? Watch and learn. Practice with other system administrators in your group. Do mock interviews with the group. I know it might sound silly, but it's important to be able to speak to other people about what you do. This practice can also be good for you in speaking with managers. Don't get too deep into the weeds with non-technical people. Keep your answers concise and friendly, and offer examples to illustrate your points.

Read more

ARM Linux on AWS

Filed under
GNU
Linux
Server
Hardware
  • Amazon Talks Up Big Performance Gains For Their 7nm Graviton2 CPUs

    If Amazon's numbers are accurate, Graviton2 should deliver a big performance boost for Amazon's ARM Linux cloud potential. Graviton2 processors are 7nm designs making use of Arm Neoverse cores. Amazon says they can deliver up to seven times the performance of current A1 instances, twice the FP performance, and support more memory channels as well as doubling the per-core cache.

  • AWS announces new ARM-based instances with Graviton2 processors

    AWS has been working with operating system vendors and independent software vendors to help them release software that runs on ARM. ARM-based EC2 instances support Amazon Linux 2, Ubuntu, Red Hat, SUSE, Fedora, Debian and FreeBSD. They also work with multiple container services (Docker, Amazon ECS, and Amazon Elastic Kubernetes Service). (A launch sketch follows this list.)

  • Coming Soon – Graviton2-Powered General Purpose, Compute-Optimized, & Memory-Optimized EC2 Instances

    We launched the first generation (A1) of Arm-based, Graviton-powered EC2 instances at re:Invent 2018. Since that launch, thousands of our customers have used them to run many different types of scale-out workloads including containerized microservices, web servers, and data/log processing.

  • AWS EC2 6th Gen Arm Instances are 7x Faster thanks to Graviton 2 Arm Neoverse N1 Custom Processor

    Last year Amazon introduced their first 64-bit Arm-based EC2 “A1” instances, which were found to deliver up to 45% cost savings over x86 instances for the right workloads.

  • AWS launches Braket, its quantum computing service

    With Braket, developers can get started on building quantum algorithms and basic applications and then test them in simulations on AWS, as well as the quantum hardware from its partners. That’s a smart move on AWS’s part, as it’s hedging its bets without incurring the cost of trying to build a quantum computer itself. And for its partners, AWS provides them with the kind of reach that would be hard to achieve otherwise. Developers and researchers, on the other hand, get access to all of these tools through a single interface, making it easier for them to figure out what works best for them.
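
Picking up on the ARM instance support noted above: launching a Graviton-based instance is the same run-instances call as any other instance type. A minimal sketch, assuming a configured AWS CLI, the Graviton2-based m6g.large instance type, and a placeholder AMI ID standing in for a real arm64 image:

    # Launch an arm64 instance; the AMI ID below is a placeholder for a
    # real arm64 image (e.g. Amazon Linux 2 for ARM), and my-key is an
    # existing EC2 key pair in your account.
    aws ec2 run-instances \
        --image-id ami-0123456789abcdef0 \
        --instance-type m6g.large \
        --count 1 \
        --key-name my-key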

News About Servers (SUSE, Ubuntu, Red Hat and More)

Filed under
Server
SUSE
  • What is Cloud Native?

    Cloud native is more than just a buzzword, though. It's an approach used by some of the largest organizations on the planet, including Walmart, Visa, JP Morgan Chase, China Mobile, Verizon and Target, among others. Cloud native is an approach that enables developers and organizations to be more agile, providing workload portability and scalability.

  • What is Kata Containers and why should I care?

    Kata Containers can significantly improve the security and isolation of your container workloads. It combines the benefits of using a hypervisor, such as enhanced security, with the container orchestration capabilities provided by Kubernetes. (A RuntimeClass sketch follows this list.)

    Together with Eric Erns from Intel, we recently presented a webinar on the benefits of using Kata Containers in a Charmed Kubernetes environment. In this blog, we aim to highlight the key outcomes from that webinar.

  • An idiot's guide to Kubernetes, low-code developers, and other industry trends

    As part of my role as a senior product marketing manager at an enterprise software company with an open source development model, I publish a regular update about open source community, market, and industry trends for product marketers, managers, and other influencers. Here are five of my and their favorite articles from that update.

  • A blueprint for OpenStack and bare metal

    The bare metal cloud is an abstraction layer for the pools of dedicated servers with different capabilities (processing, networking or storage) that can be provisioned and consumed with cloud-like ease and speed. It embraces the orchestration and automation of the cloud and applies them to bare metal workload use cases.

    The benefit to end users is that they get access to the direct hardware processing power of individual servers and are able to provision workloads without the overhead of the virtualization layer—providing the ability to provision environments in an Infrastructure-as-code methodology with separation of tenants and projects.

  • Software Development, Microservices & Container Management – Part III – Why Kubernetes? A Deep Dive into Kubernetes world

    Together with my colleague Bettina Bassermann and SUSE partners, we will be running a series of blogs and webinars from SUSE (Software Development, Microservices & Container Management, a SUSE webinar series on modern application development) to address common questions and doubts about K8s and cloud-native development, and to show how it need not compromise quality and control.

  • Epic Performance with New Tuning Guide – SUSE Linux Enterprise Server on AMD EPYC* 7002 Series Processors

    EPYC is AMD’s flagship line of mainstream server microprocessors and supports 1-way and 2-way multiprocessing. The first generation was originally announced back in May 2017, replacing the previous Opteron server family and introducing the Zen microarchitecture to the mainstream market.

  • Content Lifecycle Management in SUSE Manager

    Content lifecycle management means managing how patches flow through your infrastructure in a staged manner. In an ideal infrastructure, the latest patches are always applied to development servers first. If everything is good there, those patches are then applied to QA servers and lastly to production servers. This lets sysadmins catch any issues early and prevents patching production systems in a way that could cause downtime in live environments.

    SUSE Manager gives you this control via its content lifecycle feature. You create custom channels in SUSE Manager, for example dev, qa, and prod, then register your systems to those channels according to their criticality. Whenever a channel receives new patches, they become available for installation on the systems registered to it. So if you control the channels, you control patch availability.

    In content lifecycle management, SUSE Manager lets you promote patches to channels manually. On the first deployment, all the latest patches become available to the dev channel and hence to dev systems. At this stage, update commands (zypper up, yum update) will show the latest patches only on dev servers; QA and prod servers won't show any new patches. (A staged-rollout sketch follows this list.)

  • The Early History of Usenet, Part VII: Usenet Growth and B-News

    For quite a while, it looked like my prediction — one to two articles per day — was overly optimistic. By summer, there were only four new sites: Reed College, University of Oklahoma (at least, I think that that's what uucp node uok is), vax135, another Bell Labs machine — and, crucially, U.C. Berkeley, which had a uucp connection to Bell Labs Research and was on the ARPANET.

    In principle, even a slow rate of exponential growth can eventually take over the world. But that assumes that there are no "deaths" that will drive the growth rate negative. That isn't a reasonable assumption, though. If nothing else, Jim Ellis, Tom Truscott, Steve Daniel, and I all planned to graduate. (We all succeeded in that goal.) If Usenet hadn't shown its worth to our successors by then, they'd have let it wither. For that matter, university faculty or Bell Labs management could have pulled the plug, too. Usenet could easily have died aborning. But the right person at Berkeley did the right thing.

    Mary Horton was then a PhD student there. (After she graduated, she joined Bell Labs; she and I were two of the primary people who brought TCP/IP to the Labs, where it was sometimes known as the "datagram heresy". The phone network was, of course, circuit-switched…) Known to her but unknown to us, there were two non-technical ARPANET mailing lists that would be of great interest to many potential Usenet users, HUMAN-NETS and SF-LOVERS. She set up a gateway that relayed these mailing lists into Usenet groups; these were at some point moved to the fa ("From ARPANET") hierarchy. (For a more detailed telling of this part of the story, see Ronda Hauben's writings.) With an actual traffic source, it was easy to sell folks on the benefits of Usenet. People would have preferred a real ARPANET connection but that was rarely feasible and never something that a student could set up: ARPANET connections were restricted to places that had research contracts with DARPA. The gateway at Berkeley was, eventually, bidirectional for both Usenet and email; this enabled Usenet-style communication between the networks.
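
Returning to the Kata Containers item above: from the cluster's point of view, Kata plugs in through Kubernetes' standard RuntimeClass mechanism. A minimal sketch, assuming the node's container runtime has already been configured with a Kata handler (the handler name "kata" is an assumption that depends on that configuration); save it as kata-demo.yaml and run "kubectl apply -f kata-demo.yaml":

    # Register a RuntimeClass that points at the Kata runtime...
    apiVersion: node.k8s.io/v1beta1
    kind: RuntimeClass
    metadata:
      name: kata
    handler: kata
    ---
    # ...then run a pod inside a lightweight VM by selecting it.
    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx-kata
    spec:
      runtimeClassName: kata
      containers:
      - name: nginx
        image: nginx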
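
And on the SUSE Manager item above: the staged flow it describes can be sketched with plain shell. This is only an illustration of the dev-to-QA-to-prod idea using the commands the article mentions, not SUSE Manager's own tooling, and the host names are placeholders:

    #!/bin/sh
    # Patch dev first; promote the same update run to qa (and later
    # prod) only after dev has been validated.
    DEV_HOSTS="dev1 dev2"
    QA_HOSTS="qa1 qa2"

    for h in $DEV_HOSTS; do
        ssh "$h" 'zypper --non-interactive up'
    done

    # ...run your test suite against the dev servers here...

    for h in $QA_HOSTS; do
        ssh "$h" 'zypper --non-interactive up'
    done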

Kubernetes: Helm and Gardener Projects

Filed under
Server
OSS
  • Helm Package Manager for Kubernetes Moves Forward

    The official release of version 3.0 of the Helm package manager for Kubernetes is designed to make it easier for IT organizations to discover and securely deploy software on Kubernetes clusters.

    Taylor Thomas, a core contributor to Helm who is also a software developer for Nike, says for the last year the committee that oversees the development of Helm under the auspices of the Cloud Native Computing Foundation (CNCF) has been structuring the package manager to rely more on the application programming interfaces (APIs) that Kubernetes exposes to store records of installation. Helm Charts, which are collections of YAML files describing a related set of Kubernetes resources, now can be rendered on the client, eliminating the need for the Tiller resource management tool resident in the previous release of Helm that ran on the Kubernetes cluster.

    In addition to providing a more secure way to render Helm Charts, Thomas says this approach provides a more streamlined mechanism for packaging software using Helm. Helm 3.0 also updates Helm Charts and associated libraries.
    Additionally, a revamped Helm Go software development kit (SDK) is designed to make Helm more accessible, with the aim of sharing and reusing code the Helm community has open-sourced with the broader Go community, says Thomas. (A short CLI sketch follows this list.)

  • Gardener Project Update

    Last year, we introduced Gardener in the Kubernetes Community Meeting and in a post on the Kubernetes Blog. At SAP, we have been running Gardener for more than two years, and are successfully managing thousands of conformant clusters in various versions on all major hyperscalers as well as in numerous infrastructures and private clouds that typically join an enterprise via acquisitions.

    We are often asked why a handful of dynamically scalable clusters would not suffice. We started our journey into Kubernetes with a similar mindset. But we realized that when applying the architecture and principles of Kubernetes to production scenarios, our internal and external customers very quickly required a rational separation of concerns and ownership, which in most circumstances led to the use of multiple clusters. Therefore, a scalable and managed Kubernetes-as-a-service solution is often also the basis for adoption. Particularly when a larger organization runs multiple products on different providers and in different regions, the number of clusters will quickly rise to the hundreds or even thousands.

    Today, we want to give an update on what we have implemented in the past year regarding extensibility and customizability, and what we plan to work on for our next milestone.
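
As promised above, a short sketch of the Helm 3 CLI flow. The release and chart names are illustrative, and "helm template" demonstrates the client-side chart rendering that made Tiller unnecessary:

    # Add a chart repository, render a chart locally, then install it.
    helm repo add stable https://kubernetes-charts.storage.googleapis.com
    helm template my-db stable/mysql   # render on the client, print YAML
    helm install my-db stable/mysql    # create the release in the cluster
    helm ls                            # list installed releases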

Kubernetes, IBM and Red Hat

Filed under
Red Hat
Server

FHIR and Free Software

Filed under
Server
OSS
  • Building FHIR Applications with MongoDB Atlas

    After a vigorous competition, the team at Asymmetrik was selected to build the reference implementation of a secure open source FHIR server based on MongoDB. For a deeper dive, the source code is available to developers and architects under the MIT license. (A sample request follows this list.)

  • AMIA encourages NIH to fund FHIR for interoperability and clinical research

    While the FHIR standard is not a cure-all for interoperability challenges, the protocol has seen big momentum in recent years, and is seen as an important bridge between newer mobile devices and hospital networks.

    As a web-based spec that has seen a significant amount of buy-in, the standard could have a large impact on the ability of researchers to access better data.

  • AMIA: FHIR is not suitable for research, needs NIH R&D funding

    According to AMIA, it is critical that NIH assume a leadership position to coordinate a research and development strategy for using FHIR for research and that the agency devote “substantial resources” to the effort.

    Specifically, AMIA recommended that NIH directly fund FHIR research and development through grants; indirectly fund FHIR through special emphasis notices and project requirements that prioritize projects that will use FHIR; and educate the research community and help represent it in activities supported by HL7, the Office of the National Coordinator for Health IT and other standards developing organizations that have an interest in FHIR.
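
As an illustration of the FHIR REST interface such servers expose, reading a resource is a single HTTP GET against the server's base URL; the host and resource ID here are hypothetical:

    # Fetch one Patient resource as FHIR-flavored JSON.
    curl -H "Accept: application/fhir+json" \
        https://fhir.example.com/Patient/12345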

Life as a Linux system administrator

Filed under
GNU
Linux
Server

Linux system administration is a job. It can be fun, frustrating, mentally challenging, tedious, and often a great source of accomplishment and an equally great source of burnout. That is to say, it's a job like any other with good days and with bad. Like most system administrators, I have found a balance that works for me. I perform my regular duties with varying levels of automation and manual manipulation, and I also do a fair amount of research, which usually ends up as articles. There are two questions I'm going to answer for you in this article: first, "How does one become a system administrator?" and second, "What does a Linux system administrator do?"

Read more

The 20 Best Control Panels for Hassle-Free Server Management

Filed under
Server
Software

Managing web servers isn’t very hard for most Linux power users. However, it’s certainly not child’s play, and new site owners often find it extremely difficult to manage their servers properly. Thankfully, there’s a huge list of robust control panels that make server management hassle-free even for beginners. They can also be useful for experienced server owners who’re looking for convenient hosting management solutions. That’s why our editors have curated this guide outlining the 20 best admin panels for modern web servers.

Read more

Servers: SysAdmins, Kubernetes, OpenShift

Filed under
Red Hat
Server
  • Tales From The Sysadmin: Dumped Into The Grub Command Line

    Today I have a tale of mystery, of horror, and of hope. The allure of a newer kernel and packages was too much to resist, so I found myself upgrading to Fedora 30. All the packages had downloaded, all that was left was to let DNF reboot the machine and install all the new packages. I started the process and meandered off to find a cup of coffee: black, and darker than the stain this line of work leaves on the soul. After enough time had elapsed, I returned, expecting the warming light of a newly upgraded desktop. Instead, all that greeted me was the harsh darkness of a grub command line. Something was amiss, and it was bad.

    (An aside to the reader, I had this experience on two different machines, stemming from two different root problems. One was a wayward setting, and the other an unusual permissions problem.)

    How does the fledgling Linux sysadmin recover from such a problem? The grub command line is an inscrutable mystery to the uninitiated, but once you understand the basics, it’s not terribly difficult to boot your system and try to restore the normal boot process. This depends on what has broken, of course. If the disk containing your root partition has crashed, then sorry, this article won’t help. (A minimal recovery sketch follows this list.)

  • Top Kubernetes Operators advancing across the Operator Capability Model

    At KubeCon North America 2019 we highlighted what it means to deliver a mature Kubernetes Operator. A Kubernetes Operator is a method of packaging, deploying and managing a Kubernetes application. The key attribute of an Operator is the active, ongoing management of the application, including failover, backups, upgrades and autoscaling, just like a cloud service.

    These capabilities are ranked into five levels, which are used to gauge maturity. We refer to this as the Operator Capability Model, which outlines a set of possible capabilities that can be applied to an application. Of course, if your app doesn’t store stateful data, a backup might not be applicable to you, but log processing or alerting might be important. The user experience the Operator model aims for is that cloud-like, self-managing experience with expert knowledge baked in. (A sample Subscription follows this list.)

  • Red Hat simplifies transition to open source Kafka with new service registry and HTTP bridge

    Red Hat continues to increase the features available for users looking to implement a 100% open source, event-driven architecture (EDA) through running Apache Kafka on Red Hat OpenShift and Red Hat Enterprise Linux. The Red Hat Integration Q4 release provides new features and capabilities, including ones aimed at simplifying usage and deployment of the AMQ streams distribution of Apache Kafka.

    [...]

    In addition to the registry itself, users can leverage the included custom Kafka serializers and deserializers (SerDes). These SerDes Java classes allow Kafka applications to pull relevant schemas from the Service Registry instead of requiring the schemas to be bundled with the applications.

    Correspondingly, the registry has its own REST API to create, update, and delete artifacts, as well as to manage global and per-artifact rules. The registry API is compatible with another Kafka provider’s schema registry to facilitate a seamless migration to AMQ Streams as a drop-in replacement.
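
As for the grub rescue tale above, a minimal recovery session looks something like the following, assuming the root filesystem is intact and /boot lives on its own partition. The device, kernel, and initramfs names are examples; tab completion at the grub prompt helps you find yours.

    grub> ls                       # list the disks and partitions grub sees
    grub> ls (hd0,1)/              # poke around until you find /boot's files
    grub> set root=(hd0,1)
    grub> linux /vmlinuz-5.0.9-301.fc30.x86_64 root=/dev/sda2
    grub> initrd /initramfs-5.0.9-301.fc30.x86_64.img
    grub> boot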
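
And for the Operator item: installing an Operator through the Operator Lifecycle Manager comes down to a single Subscription object. The operator name and catalog source below are illustrative; save as subscription.yaml and run "kubectl apply -f subscription.yaml":

    # Subscribe the cluster to an Operator from a catalog; OLM then
    # installs it and keeps it upgraded on the chosen channel.
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: example-operator
      namespace: operators
    spec:
      channel: stable
      name: example-operator
      source: operatorhubio-catalog
      sourceNamespace: olm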

PHP Web Server GUI - Version 1.0.0 Released

Filed under
Server
Software

PHP's built-in web server is a CLI feature; as such, it requires a specific command to use, one which is easy to forget and which gets buried in your terminal's history. While writing a script can help, it too gets buried in your terminal's history, or is often located in an inconvenient place on the filesystem, requiring you to browse to the script before you can use it. This basic GTK+ GUI solves these issues. It's as easy to use as any other app on your system.

It's also a great tool for teaching PHP or the fundamentals of how web servers work. It's an easy tool for students to use, for learning programming, in Raspberry Pi projects, robotics, or anything else that requires a web-based interface or centralized server communication. Many of these things are true of PHP's built-in web server itself, this GUI just makes it easier to use for people who are not comfortable using the command line.
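
For reference, the command the GUI wraps is the one that's so easy to forget:

    # Serve the current directory on localhost port 8000.
    php -S localhost:8000

    # Or serve a specific document root, reachable from other machines.
    php -S 0.0.0.0:8080 -t /path/to/project/public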

Read more

More in Tux Machines

Mozilla: GFX, JavaScript, DeepSpeech and RFC Process

  • Mozilla GFX: moz://gfx newsletter #49

    By way of introduction, I invite you to read Markus’ excellent post on this blog about CoreAnimation integration yielding substantial improvements in power usage, if you haven’t already. Next steps in this OS compositor integration saga include taking advantage of CoreAnimation with WebRender’s picture caching infrastructure (rendering tiles directly into CoreAnimation surfaces), as well as rendering via a similar mechanism on Windows with DirectComposition surfaces. Markus, Glenn and Sotaro are making good progress on all of these fronts.

  • JSConf JP 2019 - Tokyo, Japan

    I do not often set foot in JavaScript conferences. The language is not my cup of tea. I go through minified, obfuscated, broken code every day for webcompat work. JavaScript switched from a language that "makes Web pages inaccessible and non-performant" to "a waste of energy and CPU, and a nightmare to debug". But this last weekend, I decided to participate in JSConf JP 2019 and I had a good time. I met cool and passionate people. I also felt old. You will understand later why.

  • DeepSpeech 0.6: Mozilla’s Speech-to-Text Engine Gets Fast, Lean, and Ubiquitous

    The Machine Learning team at Mozilla continues work on DeepSpeech, an automatic speech recognition (ASR) engine which aims to make speech recognition technology and trained models openly available to developers. DeepSpeech is a deep learning-based ASR engine with a simple API. We also provide pre-trained English models. Our latest release, version v0.6, offers the highest quality, most feature-packed model so far. In this overview, we’ll show how DeepSpeech can transform your applications by enabling client-side, low-latency, and privacy-preserving speech recognition capabilities. (A quick command-line sketch follows this list.)

  • AiC: Improving the pre-RFC process

    I want to write about an idea that Josh Triplett and I have been iterating on to revamp the lang team RFC process. I have written a draft of an RFC already, but this blog post aims to introduce the idea and some of the motivations. The key idea of the RFC is to formalize the steps leading up to an RFC, as well as to capture the lang team operations around project groups. The hope is that, if this process works well, it can apply to teams beyond the lang team as well. [...]

    In general, you can think of the RFC process as a kind of “funnel” with a number of stages. We’ve traditionally thought of the process as beginning at the point where an RFC with a complete design is opened, but of course the design process really begins much earlier. Moreover, a single bit of design can often span multiple RFCs, at least for complex features, and, at least in our current process, changes to the design often occur during the implementation stage as well. This can sometimes be difficult to keep up with, even for lang-team members.

    This post describes a revision to the process that aims to “intercept” proposals at an earlier stage. It also proposes to create “project groups” for design work and a dedicated repository that can house documents. For smaller designs, these groups and repositories might be small and simple, but for larger designs they offer a space to include a lot more in the way of design notes and other documents.

    Assuming we adopt this process, one of the things I think we should be working on is developing “best practices” around these repositories. For example, I think that for every non-trivial design decision, we should be creating a summary document that describes the pros/cons and the eventual decision (along with, potentially, comments from people who disagreed with that decision, outlining their reasoning).
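
As noted in the DeepSpeech item above, a quick way to try the engine is its command-line client. A sketch using the file names shipped in the 0.6 model archive; the WAV file is your own 16 kHz, 16-bit mono recording:

    pip3 install deepspeech          # or deepspeech-gpu for CUDA systems
    deepspeech --model deepspeech-0.6.0-models/output_graph.pbmm \
               --lm deepspeech-0.6.0-models/lm.binary \
               --trie deepspeech-0.6.0-models/trie \
               --audio my_recording.wav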

Red Hat: Ceph Storage, RHEL, OpenShift and More

  • Comparing Red Hat Ceph Storage 3.3 BlueStore/Beast performance with Red Hat Ceph Storage 2.0 Filestore/Civetweb

    This post is the sequel to the object storage performance testing we did two years back based on the Red Hat Ceph Storage 2.0 FileStore OSD backend and Civetweb RGW frontend. In this post, we compare the performance of the latest available (at the time of writing) Ceph Storage, i.e. version 3.3 (BlueStore OSD backend & Beast RGW frontend), with Ceph Storage 2.0 (mid-2017; FileStore OSD backend & Civetweb RGW frontend). We are conscious that results from these two performance studies are not scientifically comparable. However, we believe that comparing the two should provide you with significant performance insights and enable you to make an informed decision when it comes to architecting your Ceph storage clusters.

    As expected, Ceph Storage 3.3 outperformed Ceph Storage 2.0 for all the workloads that we tested. We believe that the Ceph Storage 3.3 performance improvements are attributable to a combination of several things: the BlueStore OSD backend, the Beast web frontend for RGW, the use of Intel Optane SSDs for the BlueStore WAL and block.db, and the latest generation Intel Cascade Lake processors.

  • Red Hat: Leading the enterprise Linux server market

    Red Hat has long believed that the operating system should do more than simply exist as part of a technology stack; it should be the catalyst for innovation. Underpinning almost every enterprise IT advancement, from cloud services and Kubernetes to containers and serverless, is the operating system; frequently, this operating system is Linux. Red Hat is proud of the leadership position we have long maintained in the enterprise operating system market, providing the Linux foundation to drive enterprise IT innovation forward. Today, we’re pleased to continue this leadership with a new report from IDC that includes data showing Red Hat as the leading choice for paid Linux in the worldwide server operating environment market, as well as a powerful player in server operating systems at large. According to the report, "Worldwide Server Operating Environments Market Shares, 2018: Overall Market Growth Accelerates:"

  • Microservices-Based Application Delivery with Citrix and Red Hat OpenShift

    Citrix is thrilled to have recently achieved Red Hat OpenShift Operator Certification (Press Release). This new integration simplifies the deployment and control of the Citrix Application Delivery Controller (ADC) to a few clicks through an easy-to-use Operator. Before we dive into how you can use Citrix Operators to speed up implementation and control in OpenShift environments, let me cover the benefits of using the Citrix Cloud Native Stack and how it solves the challenges of integrating ingress in Kubernetes.

  • Wavefront Automates and Unifies Red Hat OpenShift Observability, Full Stack

    Red Hat OpenShift is an enterprise Kubernetes platform intended to make the process of developing, deploying and managing cloud-native applications easier, scalable and more flexible. Wavefront by VMware provides enterprise-grade observability and analytics for OpenShift environments across multiple clouds. Wavefront ingests, analyzes and visualizes OpenShift telemetry – metrics, histograms, traces, and span logs – across the full-stack, including distributed applications, containers, microservices, and cloud infrastructure. As a result of Wavefront’s collaboration with Red Hat, you can now get automated enterprise observability for OpenShift that’s full stack, through the Red Hat OpenShift Certified Wavefront Operator for OpenShift 4.1 and later. This Operator is available in Operator Hub embedded in OpenShift, a registry for finding Kubernetes Operator-backed services.

  • RHEL 8.1: A minor release with major new container capabilities

    The release of Red Hat Enterprise Linux 8.1 is a minor update to RHEL, but a major step forward with containers. The container-tools:rhel8 application stream has been updated with new versions of Podman, Buildah, Skopeo, runc, container selinux policies and other libraries. The core set of base images in Red Hat Universal Base Image (UBI) have been updated to 8.1, and UBI has expanded to include Go 1.11.5 as a developer use case. There are now 37 images released as part of UBI - they can all be seen on the UBI product page. Finally, we have released some really good updated documentation covering rootless, and other new features in the container-tools module. [...] When we launched Red Hat Universal Base Image at Red Hat Summit in 2019, we got a lot of great feedback. One of the first requests we received was for Golang. It is a popular programming language in the Cloud Native space, and we immediately recognized the value of adding it (also, I know what you’re thinking! Stay tuned and you might see OpenJDK images soon). With the update to RHEL 8.1, we have added the ubi8/go-toolset container to the UBI family. This gives users the ability to compile Go applications using a pre-packaged container with Go 1.11.5. (A pull-and-build sketch follows this list.)

  • Red Hat’s CTO sees open-source as driver of choice and consistency in hybrid environments

    A case can certainly be made that Red Hat Inc. and the open-source movement have commoditized portions of the information technology infrastructure. A much wider range of tools and systems are now available to enterprises than ever before. This trend is just part of the open-source journey, one that Chris Wright (pictured), as the senior vice president and chief technology officer of Red Hat and a veteran Linux developer, has seen evolve over more than 20 years as a software engineer. “What we’re experiencing in the Linux space is, it’s driving a commoditization of infrastructure,” Wright said. “It’s switching away from the traditional vertically integrated stack of a [reduced instruction set computer]/Unix environment to providing choice. As infrastructure changes, it’s not just hardware, it’s virtualized data centers, it’s public clouds.”

  • Introduction to the Red Hat OpenShift deployment extension for Microsoft Azure DevOps
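
Following up on the RHEL 8.1 item above: a minimal sketch of the new Go use case, assuming podman is installed and the current directory holds a Go project; the output binary name is illustrative.

    # Pull the UBI Go toolchain image and compile the current directory.
    podman pull registry.access.redhat.com/ubi8/go-toolset
    podman run --rm -v "$PWD":/src:Z -w /src \
        registry.access.redhat.com/ubi8/go-toolset \
        go build -o hello .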

today's howtos

Polo – A Modern Light-weight File Manager for Linux

Polo is a modern, lightweight and advanced file manager for Linux that comes with a number of advanced features not present in many commonly used file managers or file browsers on Linux distributions. It offers multiple panes with multiple tabs in each pane, support for archive creation, extraction and browsing, support for cloud storage, support for running KVM images, support for modifying PDF documents and image files, support for writing ISO files to USB drives, and much more.

Read more