
10 skills every Linux system administrator should have

Filed under
Server

I know what you're saying. You're saying, "Oh, great, someone else telling me that I need soft skills." Yes, that's what I'm telling you. Honing your interviewing skills can not only determine whether you get a particular job, but can also be a major factor in the salary you're offered. It's true. Let's say, for example, that the salary range for a mid-level SA job is $56k to $85k per year. You might be fully qualified for the top of the range, but the company offers you $70k instead and mentions some nonsense about growth potential, or they tell you that they'll bring you along when the time is right.

You need to practice answering questions. Answer the question that's asked. Don't give so much information that you see eyes glazing over, but giving answers that are too short will make you appear arrogant or flippant. Give enough examples of your work to let the interviewer(s) know that you know what you're talking about. They can ask for more details if they want to.

You have to learn to watch other people's behaviors. Are they listening to you? Are they focused on you and the interview? Do they look as though you haven't said enough when you pause to allow them to speak or ask another question? Watch and learn. Practice with other system administrators in your group. Do mock interviews with the group. I know it might sound silly, but it's important to be able to speak to other people about what you do. This practice can also be good for you in speaking with managers. Don't get too deep into the weeds with non-technical people. Keep your answers concise and friendly, and offer examples to illustrate your points.

Read more

ARM Linux on AWS

Filed under
GNU
Linux
Server
Hardware
  • Amazon Talks Up Big Performance Gains For Their 7nm Graviton2 CPUs

    If Amazon's numbers are accurate, Graviton2 should deliver a big performance boost for Amazon's ARM Linux cloud potential. Graviton2 processors are 7nm designs making use of Arm Neoverse cores. Amazon says they can deliver up to seven times the performance of current A1 instances, twice the floating-point performance, and more memory channels, as well as double the per-core cache.

  • AWS announces new ARM-based instances with Graviton2 processors

    AWS has been working with operating system vendors and independent software vendors to help them release software that runs on ARM. ARM-based EC2 instances support Amazon Linux 2, Ubuntu, Red Hat, SUSE, Fedora, Debian and FreeBSD. They also work with multiple container services (Docker, Amazon ECS, and Amazon Elastic Kubernetes Service).

  • Coming Soon – Graviton2-Powered General Purpose, Compute-Optimized, & Memory-Optimized EC2 Instances

    We launched the first generation (A1) of Arm-based, Graviton-powered EC2 instances at re:Invent 2018. Since that launch, thousands of our customers have used them to run many different types of scale-out workloads including containerized microservices, web servers, and data/log processing.

  • AWS EC2 6th Gen Arm Instances are 7x Faster thanks to Graviton 2 Arm Neoverse N1 Custom Processor

    Last year Amazon introduced their first 64-bit Arm-based EC2 “A1” instances, which were found to deliver up to 45% cost savings over x86 instances for the right workloads.

  • AWS launches Braket, its quantum computing service

    With Braket, developers can get started on building quantum algorithms and basic applications and then test them in simulations on AWS, as well as the quantum hardware from its partners. That’s a smart move on AWS’s part, as it’s hedging its bets without incurring the cost of trying to build a quantum computer itself. And for its partners, AWS provides them with the kind of reach that would be hard to achieve otherwise. Developers and researchers, on the other hand, get access to all of these tools through a single interface, making it easier for them to figure out what works best for them.

News About Servers (SUSE, Ubuntu, Red Hat and More)

Filed under
Server
SUSE
  • What is Cloud Native?

    Cloud native is more than just a buzzword, though. It's an approach used by some of the largest organizations on the planet, including Walmart, Visa, JP Morgan Chase, China Mobile, Verizon and Target, among others. It enables developers and organizations to be more agile, providing workload portability and scalability.

  • What is Kata Containers and why should I care?

    Kata Containers can significantly improve the security and isolation of your container workloads. It combines the benefits of using a hypervisor, such as enhanced security, with the container orchestration capabilities provided by Kubernetes.

    Together with Eric Ernst from Intel, we recently presented a webinar on the benefits of using Kata Containers in a Charmed Kubernetes environment. In this blog, we aim to highlight the key outcomes from that webinar. (A minimal configuration sketch follows this list.)

  • An idiot's guide to Kubernetes, low-code developers, and other industry trends

    As part of my role as a senior product marketing manager at an enterprise software company with an open source development model, I publish a regular update about open source community, market, and industry trends for product marketers, managers, and other influencers. Here are five of my and their favorite articles from that update.

  • A blueprint for OpenStack and bare metal

    The bare metal cloud is an abstraction layer for the pools of dedicated servers with different capabilities (processing, networking or storage) that can be provisioned and consumed with cloud-like ease and speed. It embraces the orchestration and automation of the cloud and applies them to bare metal workload use cases.

    The benefit to end users is that they get access to the direct hardware processing power of individual servers and are able to provision workloads without the overhead of a virtualization layer, giving them the ability to provision environments in an infrastructure-as-code methodology with separation of tenants and projects. (A provisioning sketch follows this list.)

  • Software Development, Microservices & Container Management – Part III – Why Kubernetes? A Deep Dive into Kubernetes world

    Together with my colleague Bettina Bassermann and SUSE partners, we will be running a series of blogs and webinars from SUSE (Software Development, Microservices & Container Management, a SUSE webinar series on modern application development) to address the aforementioned questions and doubts about K8s and cloud-native development, and to show how it does not compromise quality and control.

  • Epic Performance with New Tuning Guide – SUSE Linux Enterprise Server on AMD EPYC* 7002 Series Processors

    EPYC is AMD’s flagship line of mainstream server microprocessors and supports 1-way and 2-way multiprocessing. The first generation was announced in May 2017 and replaced the previous Opteron server family, introducing the Zen microarchitecture to the mainstream market.

  • Content Lifecycle Management in SUSE Manager

    Content lifecycle management is the practice of managing how patches flow through your infrastructure in a staged manner. Ideally, the latest patches are always applied to development servers first. If everything is good there, those patches are applied to QA servers and, lastly, to production servers. This lets sysadmins catch issues early and prevents a bad patch from causing downtime in live production environments.

    SUSE Manager gives you this control via its content lifecycle feature. You create custom channels in SUSE Manager, for example dev, qa, and prod, and register your systems to those channels according to their criticality. Whenever a channel receives new patches, they become available to install on the systems registered to that channel. If you control the channels, you control patch availability to systems.

    With content lifecycle management, SUSE Manager lets you promote patches to channels manually. On first deployment, all the latest patches become available to the dev channel and hence to dev systems. At this stage, running update commands (zypper up, yum update) will show the latest patches only on dev servers; QA and prod servers won't see any new patches yet. (A client-side sketch follows this list.)

  • The Early History of Usenet, Part VII: Usenet Growth and B-News

    For quite a while, it looked like my prediction — one to two articles per day — was overly optimistic. By summer, there were only four new sites: Reed College, University of Oklahoma (at least, I think that that's what uucp node uok is), vax135, another Bell Labs machine — and, crucially, U.C. Berkeley, which had a uucp connection to Bell Labs Research and was on the ARPANET.

    In principle, even a slow rate of exponential growth can eventually take over the world. But that assumes that there are no "deaths" that will drive the growth rate negative. That isn't a reasonable assumption, though. If nothing else, Jim Ellis, Tom Truscott, Steve Daniel, and I all planned to graduate. (We all succeeded in that goal.) If Usenet hadn't shown its worth to our successors by then, they'd have let it wither. For that matter, university faculty or Bell Labs management could have pulled the plug, too. Usenet could easily have died aborning. But the right person at Berkeley did the right thing.

    Mary Horton was then a PhD student there. (After she graduated, she joined Bell Labs; she and I were two of the primary people who brought TCP/IP to the Labs, where it was sometimes known as the "datagram heresy". The phone network was, of course, circuit-switched…) Known to her but unknown to us, there were two non-technical ARPANET mailing lists that would be of great interest to many potential Usenet users, HUMAN-NETS and SF-LOVERS. She set up a gateway that relayed these mailing lists into Usenet groups; these were at some point moved to the fa ("From ARPANET") hierarchy. (For a more detailed telling of this part of the story, see Ronda Hauben's writings.) With an actual traffic source, it was easy to sell folks on the benefits of Usenet. People would have preferred a real ARPANET connection but that was rarely feasible and never something that a student could set up: ARPANET connections were restricted to places that had research contracts with DARPA. The gateway at Berkeley was, eventually, bidirectional for both Usenet and email; this enabled Usenet-style communication between the networks.
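
Picking up the Kata Containers item above: a minimal configuration sketch, assuming a cluster whose nodes already have the Kata runtime installed and whose container runtime exposes a handler named kata. The handler name, pod name, and image are illustrative, not taken from the webinar.

    # Register a RuntimeClass pointing at the (assumed) "kata" handler.
    kubectl apply -f - <<'EOF'
    apiVersion: node.k8s.io/v1beta1
    kind: RuntimeClass
    metadata:
      name: kata
    handler: kata
    EOF

    # Launch a test pod whose container runs inside a lightweight VM.
    kubectl run kata-test --image=nginx --restart=Never \
      --overrides='{"apiVersion":"v1","spec":{"runtimeClassName":"kata"}}'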
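
For the bare metal blueprint item: a hedged provisioning sketch of what cloud-like, infrastructure-as-code access to dedicated servers can look like from the OpenStack CLI, assuming an Ironic-backed deployment. The flavor, image, network, and key names are placeholders.

    # Provision a dedicated server through the regular compute API;
    # every name below is an example for an assumed Ironic-backed cloud.
    openstack server create \
      --flavor baremetal-general \
      --image ubuntu-18.04-baremetal \
      --network provisioning-net \
      --key-name ops-key \
      bm-worker-01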
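
And the client-side sketch promised in the content lifecycle item: what the staged channels look like from registered systems, using the update commands the article itself mentions. It assumes new patches have reached the dev channel but have not yet been promoted.

    # On a dev server (registered to the dev channel):
    zypper refresh        # sync metadata from the assigned channel
    zypper list-patches   # newly built patches show up here first
    zypper up             # apply them and start testing

    # On a QA or prod server, the same commands report nothing new until
    # the content is promoted to the qa/prod stage in SUSE Manager.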

Kubernetes: Helm and Gardener Projects

Filed under
Server
OSS
  • Helm Package Manager for Kubernetes Moves Forward

    The official release of version 3.0 of the Helm package manager for Kubernetes is designed to make it easier for IT organizations to discover and securely deploy software on Kubernetes clusters.

    Taylor Thomas, a core contributor to Helm who is also a software developer for Nike, says for the last year the committee that oversees the development of Helm under the auspices of the Cloud Native Computing Foundation (CNCF) has been structuring the package manager to rely more on the application programming interfaces (APIs) that Kubernetes exposes to store records of installation. Helm Charts, which are collections of YAML files describing a related set of Kubernetes resources, now can be rendered on the client, eliminating the need for the Tiller resource management tool resident in the previous release of Helm that ran on the Kubernetes cluster.

    In addition to providing a more secure way to render Helm Charts, Thomas says this approach provides a more streamlined mechanism for packaging software using Helm. Helm 3.0 also updates Helm Charts and associated libraries.
    Additionally, a revamped Helm Go software development kit (SDK) is designed to make Helm more accessible, with the aim of sharing and reusing code the Helm community has open-sourced with the broader Go community, says Thomas. (A brief command-line sketch follows this list.)

  • Gardener Project Update

    Last year, we introduced Gardener in the Kubernetes Community Meeting and in a post on the Kubernetes Blog. At SAP, we have been running Gardener for more than two years, and are successfully managing thousands of conformant clusters in various versions on all major hyperscalers as well as in numerous infrastructures and private clouds that typically join an enterprise via acquisitions.

    We are often asked why a handful of dynamically scalable clusters would not suffice. We also started our journey into Kubernetes with a similar mindset. But we realized that, when applying the architecture and principles of Kubernetes to production scenarios, our internal and external customers very quickly required the rational separation of concerns and ownership, which in most circumstances led to the use of multiple clusters. Therefore, a scalable and managed Kubernetes-as-a-service solution is often also the basis for adoption. Particularly when a larger organization runs multiple products on different providers and in different regions, the number of clusters will quickly rise to the hundreds or even thousands.

    Today, we want to give an update on what we have implemented in the past year regarding extensibility and customizability, and what we plan to work on for our next milestone.
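
Here is the command-line sketch promised in the Helm item above, showing the Tiller-less Helm 3 workflow. The repository URL and chart name are examples rather than details from the article; note that the target namespace must already exist.

    # Add a chart repository and refresh its index (URL is an example).
    helm repo add stable https://kubernetes-charts.storage.googleapis.com
    helm repo update

    # Render the chart entirely on the client to inspect the manifests.
    helm template my-ingress stable/nginx-ingress > rendered.yaml

    # Install it; release records are stored via the Kubernetes API,
    # with no Tiller component running in the cluster.
    helm install my-ingress stable/nginx-ingress --namespace web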

Kubernetes, IBM and Red Hat

Filed under
Red Hat
Server

FHIR and Free Software

Filed under
Server
OSS
  • Building FHIR Applications with MongoDB Atlas

    After a vigorous competition, the team at Asymmetrik was named the winner and built the reference implementation of a secure open source FHIR server based on MongoDB. For a deeper dive, the source code is available to developers and architects under the MIT license.

  • AMIA encourages NIH to fund FHIR for interoperability and clinical research

    While the FHIR standard is not a cure-all for interoperability challenges, the protocol has seen big momentum in recent years, and is seen as an important bridge between newer mobile devices and hospital networks.

    As a web-based spec that has seen a significant amount of buy-in, the standard could have a large impact on the ability of researchers to access better data.

  • AMIA: FHIR is not suitable for research, needs NIH R&D funding

    According to AMIA, it is critical that NIH assume a leadership position to coordinate a research and development strategy for using FHIR for research and that the agency devote “substantial resources” to the effort.

    Specifically, AMIA recommended that NIH directly fund FHIR research and development through grants; indirectly fund FHIR through special emphasis notices and project requirements that prioritize projects that will use FHIR; and educate the research community and help represent it in activities supported by HL7, the Office of the National Coordinator for Health IT and other standards developing organizations that have an interest in FHIR.

Life as a Linux system administrator

Filed under
GNU
Linux
Server

Linux system administration is a job. It can be fun, frustrating, mentally challenging, tedious, and often a great source of accomplishment and an equally great source of burnout. That is to say, it's a job like any other with good days and with bad. Like most system administrators, I have found a balance that works for me. I perform my regular duties with varying levels of automation and manual manipulation, and I also do a fair amount of research, which usually ends up as articles. There are two questions I'm going to answer for you in this article. The first is, "How does one become a system administrator?" and the second is, "What does a Linux system administrator do?"

Read more

The 20 Best Control Panels for Hassle-Free Server Management

Filed under
Server
Software

It’s not very hard for most Linux power users to manage web servers. However, it’s certainly not child’s play, and new site owners often find it extremely difficult to manage their servers properly. Thankfully, there’s a huge list of robust control panels that make server management hassle-free even for beginners. They can also be useful for experienced server owners who are looking for convenient hosting management solutions. That’s why our editors have curated this guide outlining the 20 best admin panels for modern web servers.

Read more

Servers: SysAdmins, Kubernetes, OpenShift

Filed under
Red Hat
Server
  • Tales From The Sysadmin: Dumped Into The Grub Command Line

    Today I have a tale of mystery, of horror, and of hope. The allure of a newer kernel and packages was too much to resist, so I found myself upgrading to Fedora 30. All the packages had downloaded, all that was left was to let DNF reboot the machine and install all the new packages. I started the process and meandered off to find a cup of coffee: black, and darker than the stain this line of work leaves on the soul. After enough time had elapsed, I returned, expecting the warming light of a newly upgraded desktop. Instead, all that greeted me was the harsh darkness of a grub command line. Something was amiss, and it was bad.

    (An aside to the reader, I had this experience on two different machines, stemming from two different root problems. One was a wayward setting, and the other an unusual permissions problem.)

    How does the fledgling Linux sysadmin recover from such a problem? The grub command line is an inscrutable mystery to the uninitiated, but once you understand the basics, it’s not terribly difficult to boot your system and try to restore the normal boot process. This depends on what has broken, of course. If the disk containing your root partition has crashed, then sorry, this article won’t help. (A minimal recovery sketch follows this list.)

  • Top Kubernetes Operators advancing across the Operator Capability Model

    At KubeCon North America 2019 we highlighted what it means to deliver a mature Kubernetes Operator. A Kubernetes Operator is a method of packaging, deploying and managing a Kubernetes application. The key attribute of an Operator is the active, ongoing management of the application, including failover, backups, upgrades and autoscaling, just like a cloud service.

    These capabilities are ranked into five levels, which are used to gauge maturity. We refer to this as the Operator Capability Model, which outlines a set of possible capabilities that can be applied to an application. Of course, if your app doesn’t store stateful data, a backup might not be applicable to you but log processing or alerting might be important. The important user experience that the Operator model aims for is getting that cloud-like, self-managing experience with knowledge baked in from the experts.

  • Red Hat simplifies transition to open source Kafka with new service registry and HTTP bridge

    Red Hat continues to increase the features available for users looking to implement a 100% open source, event-driven architecture (EDA) through running Apache Kafka on Red Hat OpenShift and Red Hat Enterprise Linux. The Red Hat Integration Q4 release provides new features and capabilities, including ones aimed at simplifying usage and deployment of the AMQ streams distribution of Apache Kafka.

    [...]

    In addition to the registry itself, users can leverage the included custom Kafka serializers and deserializers (SerDes). These SerDes Java classes allow Kafka applications to pull relevant schemas from the Service Registry instead of requiring the schemas to be bundled with the applications.

    Correspondingly, the registry has its own REST API to create, update, and delete artifacts, as well as to manage global and per-artifact rules. The registry API is compatible with another Kafka provider’s schema registry to facilitate a seamless migration to AMQ Streams as a drop-in replacement. (A sketch of an API call follows this list.)
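
Here is the minimal recovery sketch promised in the first item: booting a system by hand from the grub> prompt. The partition numbers, kernel, and initramfs file names below are assumptions; use ls at the prompt to discover your own layout.

    grub> ls                   # list the disks and partitions grub can see
    grub> ls (hd0,1)/          # look for the partition holding vmlinuz-*
    grub> set root=(hd0,1)
    grub> linux /vmlinuz-5.3.7-301.fc31.x86_64 root=/dev/sda2 ro
    grub> initrd /initramfs-5.3.7-301.fc31.x86_64.img
    grub> boot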
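
And for the registry item: a sketch of creating a schema artifact through the registry's REST API. The host, endpoint path, and header names are assumptions modeled on common registry layouts, so consult the product documentation for the exact API.

    # Register an Avro schema as a new artifact (endpoint and headers assumed).
    curl -X POST http://registry.example.com:8080/api/artifacts \
      -H 'Content-Type: application/json' \
      -H 'X-Registry-ArtifactType: AVRO' \
      -H 'X-Registry-ArtifactId: orders-value' \
      -d '{"type":"record","name":"Order","fields":[{"name":"id","type":"string"}]}'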

PHP Web Server GUI - Version 1.0.0 Released

Filed under
Server
Software

PHP's built-in web server is a CLI feature; as such, it requires a specific command to use, one which is easy to forget and gets buried in your terminal's history. While writing a script can help, it too gets buried in your terminal history, or is often located in an inconvenient place on the filesystem, requiring you to browse to the script before you can use it. This basic GTK+ GUI solves these issues. It's as easy to use as any other app on your system.

It's also a great tool for teaching PHP or the fundamentals of how web servers work. It's an easy tool for students to use, for learning programming, in Raspberry Pi projects, robotics, or anything else that requires a web-based interface or centralized server communication. Many of these things are true of PHP's built-in web server itself; this GUI just makes it easier to use for people who are not comfortable using the command line.
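
For reference, the command the GUI wraps is PHP's built-in development server, started with the -S flag; the ports and document roots below are just examples.

    # Serve the current project on localhost:8000 with ./public as docroot.
    php -S localhost:8000 -t public

    # Bind to all interfaces so other devices on the LAN (say, a Raspberry
    # Pi project's clients) can reach it.
    php -S 0.0.0.0:8080 -t public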

Read more


More in Tux Machines

Linux on the MAG1 8.9 inch mini-laptop (Ubuntu and Fedora)

The Magic Ben MAG1 mini-laptop is a 1.5 pound notebook computer that measures about 8.2″ x 5.8″ x 0.7″ and features an 8.9 inch touchscreen display and an Intel Core m3-8100Y processor. As I noted in my MAG1 review, the little computer also has one of the best keyboards I’ve used on a laptop this small and a tiny, but responsive trackpad below the backlit keyboard. Available from GeekBuying for $630 and up, the MAG1 ships with Windows 10, but it’s also one of the most Linux-friendly mini-laptops I’ve tested to date.

[...]

I did not install either operating system to local storage, so I cannot comment on sleep, battery life, fingerprint authentication, or other features that you’d only be able to truly test by fully installing Ubuntu, Fedora, or another GNU/Linux-based operating system. But running from a live USB is a good way to kick the tires and see if there are any obvious pain points before installing an operating system, and for the most part the two operating systems I tested look good to go.

Booting from a flash drive is also pretty easy. Once you’ve prepared a bootable drive using Rufus, UNetbootin, or a similar tool, just plug it into the computer’s USB port and hit the Esc key during startup to bring up the UEFI/setup utility. (A command-line sketch for preparing the drive follows below.)

Read more

Also: Top 10 technical skills that will get you hired in 2020
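
If you would rather prepare the bootable drive from a Linux terminal than with Rufus or UNetbootin, a minimal sketch follows. The ISO name is a placeholder, and /dev/sdX must be replaced with the flash drive's real device node, since dd overwrites its target without confirmation.

    lsblk    # identify the flash drive first (e.g. /dev/sdb)
    sudo dd if=Fedora-Workstation-Live.iso of=/dev/sdX \
        bs=4M status=progress conv=fsync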

Android Leftovers

An Extensive Look At The AMD Naples vs. Rome Power Efficiency / Performance-Per-Watt

Since the AMD EPYC 7002 "Rome" series launch in August, we have continued to be captivated by the raw performance of AMD's Zen 2 server processors across many different workloads, as covered in countless articles. The performance-per-dollar / TCO is also extremely competitive against Intel's Xeon Scalable line-up, but how is the power efficiency of these 7nm EPYC processors? We waited to deliver those numbers until having a retail Rome board for carrying out those tests. Now, after several weeks of benchmarking, here is an extensive exploration of the AMD EPYC 7002 series power efficiency, as well as a look at the peak clock frequencies achieved in various workloads, to provide some performance-per-clock metrics compared to Naples.

Read more

Firefox Picture in Picture is Sweet, Here’s How to Use it on Linux

Picture in picture (PIP) is a novel feature that makes it a doddle to watch a video while you’re busy doing something else (like reading blog posts). How? It allows video content to “pop out” of a web page and play in a separate floating window (with mouse-over player controls, where possible). With PIP you no longer need to tear out a browser tab, resize it narrowly, and try and fit it in somewhere on your screen. And Firefox 72, which is currently in beta, supports this handy feature on the Linux desktop.

Read more