EU turns from American public clouds to Nextcloud private clouds

Filed under
Server
OSS
Security

Just like their American counterparts, more than half of European businesses with over 1,000 employees now use a public cloud platform. But European governments aren't so sure that they should trust their data to Amazon Web Services (AWS), Azure, Google Cloud, or the IBM Cloud. They worry that the US CLOUD Act enables US law enforcement to unilaterally demand access to EU citizens' cloud data -- even when it's stored outside the States. So, they're turning to private European-based clouds, such as those running on Nextcloud, a popular open-source, self-hosted file sync and collaboration platform.

Read more

The birth of the Bash shell

Filed under
Development
GNU
Server
OSS

Shell scripting is an essential discipline for anyone in a sysadmin type of role, and the predominant shell in which people write scripts today is Bash. Bash is the default on nearly all Linux distributions and modern macOS versions, and it is slated to be a native part of Windows Terminal soon enough. Bash, you could say, is everywhere.

So how did it get to this point? This week's Command Line Heroes podcast dives deeply into that question by asking the very people who wrote the code.

Read more

Server: Cilium, Unix at 50, SUSE and HPC

Filed under
Server
  • Thomas Graf on Cilium, the 1.6 Release, eBPF Security, & the Road Ahead

    Cilium is open source software for transparently securing the network connectivity between application services deployed using Linux container management platforms like Docker and Kubernetes. It is a CNI plugin that offers layer 7 features typically seen with a service mesh. On this week’s podcast, Thomas Graf (one of the maintainers of Cilium and co-founder of Isovalent) discusses the recent 1.6 release, some of the security questions/concerns around eBPF, and the future roadmap for the project.

  • Unix at 50: The OS that powered smartphones started from failure

    UNIX was born 50 years ago from the failure of an ambitious project that involved titans like Bell Labs, GE, and MIT. This OS powers nearly all smartphones sold worldwide. The story of UNIX began with a meeting on the top floor of an unremarkable annex at the Bell Labs complex in Murray Hill, New Jersey.

  • We offer enterprise-grade open source solutions from edge to core to cloud: Brent Schroeder, Global CTO, SUSE

    The open source market is taking an interesting turn of its own. With IBM acquiring Red Hat for $34 billion, the wheels of competition and innovation have truly been set into motion in the open source market.

    In such interesting times, Brent Schroeder, Global CTO of SUSE, took over from Thomas Di Giacomo, now the company's president of engineering. In an exclusive interview with ETCIO, Schroeder talks about how SUSE intends to power digital transformation so companies can innovate and compete.

  • Julita Inca: Building a foundation of HPC knowledge

    The curriculum for each course is arranged in advance by the teachers and teaching assistants and published on the intranet a week beforehand. It consists of theoretical materials and practical exercises to support the theory. Reinforcing workshops were also held in order to address questions and concerns.

Servers: Databases, Microservices, StackRox, Docker Block Storage and UNIX Turning 50

Filed under
Server
  • Open source databases: Today’s viable alternative for enterprise computing

    There was a time when proprietary solutions from well-capitalized software companies could be expected to provide superior solutions to those produced by a community of dedicated and talented developers. Just as Linux destroyed the market for expensive UNIX versions, open source database management systems like EDB Postgres are forcing Oracle, Microsoft, SAP, and other premium database management products to justify their pricing. With so many large, critical applications running reliably on open source products, it’s a hard case to make.

  • 5 questions everyone should ask about microservices

    The basis of the question is uncertainty about what's going to happen once teams start decomposing existing monolithic applications in favor of microservices where possible. What we need to understand is that the goal of splitting out these services is to favor deployment speed over API invocation speed.

    The main reason to split a microservice out of an existing monolith should be to isolate development of the service within a team, completely separate from the application development team. The service engineering team can then operate at its own intervals, deploying changes weekly, daily, or even hourly if a noteworthy Common Vulnerabilities and Exposures (CVE) entry applies.

    The penalty of unknown network invocations is the trade-off for escaping your monolith's highly regimented deployment requirements, which hold it to two- to three-month release intervals. With microservice teams, you can react more quickly to business, competition, and security demands through faster delivery intervals. Equally critical for network invocations is to look closely at how coarse-grained your network calls become in this new distributed architecture.

  • StackRox Launches Kubernetes Security Platform Version 2.5

    StackRox, the container and Kubernetes security company, announced the general availability of version 2.5 of the StackRox Kubernetes Security Platform. The new release incorporates enhanced deployment and runtime controls that enable organizations to seamlessly enforce security policies across use cases including threat detection, network segmentation, configuration management, and vulnerability management.

  • Pete Zaitcev: Docker Block Storage... say what again?

    Okay. Since they talk about consistency and replication together, this thing probably provides an actual service, in addition to the necessary orchestration. Kind of like the ill-fated Sheepdog. They may underestimate the amount of work necessary, sure. Look no further than Ceph RBD. Remember how much work it took for a genius like Sage? But a certain arrogance is essential in a start-up, and Rancher only employs 150 people.

    Also, nobody is dumb enough to write orchestration in Go, right? So this probably is not just a layer on top of Ceph or whatever.

    Well, it's still possible that it's merely an in-house equivalent of OpenStack Cinder, and they want it in Go because they are a Go house and if you have a hammer everything looks like a nail.

    Either way, here's the main question: what does block storage have to do with Docker?

  • Changing the face of computing: UNIX turns 50

    In the late 1960s, a small team of programmers was aspiring to write a multi-tasking, multi-user operating system. Then in August 1969 Ken Thompson, a programmer at AT&T Bell Laboratories, started development of the first-ever version of the UNIX operating system (OS).

    Over the next few years, he and his colleagues Dennis Ritchie, Brian Kernighan, and others developed both this and the C programming language. As the UNIX OS celebrates its 50th birthday, let’s take a moment to reflect on its impact on the world we live in today.

  • The Legendary OS once kicked by many big companies turns 50. The Story.

    Maybe its pervasiveness has long obscured its roots. But Unix, the legendary OS that, in one derivative or another, powers nearly all smartphones sold worldwide, emerged 50 years ago from the failure of an ambitious project involving titans like GE, Bell Labs, and MIT.

    [...]

    Still, it was something to work on, and as long as Bell Labs was working on Multics, they would also have a $7 million mainframe computer to play around with in their spare time. Dennis Ritchie, one of the programmers working on Multics, later said they all felt some stake in the victory of the project, even though they knew the odds of that success were exceedingly remote.

    Cancellation of Multics meant the end of the only project that the programmers in the computer science department had to work on—and it also meant the loss of the only computer in the computer science department. After the GE 645 mainframe was taken apart and hauled off, the computer science department’s resources were reduced to little more than office supplies and a few terminals.

Announcing etcd 3.4

Filed under
Server
OSS

etcd v3.4 includes a number of performance improvements for large scale Kubernetes workloads.

In particular, etcd experienced performance issues with a large number of concurrent read transactions even when there were no writes (e.g. “read-only range request ... took too long to execute”). Previously, the storage backend's commit operation on pending writes blocked incoming read transactions, even when there was no pending write. Now the commit does not block reads, which improves the performance of long-running read transactions.

We further made backend read transactions fully concurrent. Previously, ongoing long-running read transactions blocked writes and incoming reads. With this change, write throughput is increased by 70% and P99 write latency is reduced by 90% in the presence of long-running reads. We also ran the Kubernetes 5,000-node scalability test on GCE with this change and observed similar improvements. For example, at the very beginning of the test, where there are many long-running “LIST pods” requests, the P99 latency of “POST clusterrolebindings” was reduced by 97.4%. This non-blocking read transaction is now used for compaction, which, combined with the reduced compaction batch size, reduces the P99 server request latency during compaction.

More improvements have been made to lease storage. We enhanced lease expire/revoke performance by storing lease objects more efficiently, and made lease look-ups non-blocking with respect to concurrent lease grant/revoke operations. etcd v3.4 also introduces lease checkpointing as an experimental feature to persist remaining time-to-live values through consensus. This ensures short-lived lease objects are not auto-renewed after a leadership election. It also prevents lease object pile-up when the time-to-live value is relatively large (e.g. a 1-hour TTL that never expires in the Kubernetes use case).

Read more

Unix at 50, Tectonic Shifts and Servers

Filed under
Server
  • Celebrating 50 years of the Unix operating system

    Towards the end of the 1960s, a small group of programmers were embarking upon a project which would transform the face of computing forever.

  • Unix at 50: How the OS that powered smartphones started from failure

    Today, Unix powers iOS and Android—its legend begins with a gator and a trio of researchers.

  • To Be Always Surfing On Tectonic Shifts

    If you think about it for a minute, it is amazing that any of the old-time IT suppliers, like IBM and Hewlett Packard, and to a certain extent now Microsoft and Dell, have persisted in the datacenter for decades or, in the case of Big Blue, for more than a century. It is difficult to be constantly adapting to new conditions, but to their great credit, they still do as the world is changing – sometimes tumultuously – both around them and underneath their feet.

    So it is with HPE, which is going through its umpteenth restructuring and refocusing since we entered IT publishing more than three decades ago, this time under Antonio Neri, its relatively new president and chief executive officer. The current Hewlett Packard Enterprise is a very different animal from the one that sold proprietary minicomputers and then Unix systems in the 1980s and 1990s, and it is in many ways more of a successor to the systems businesses of Compaq and Digital Equipment, which the company absorbed two decades ago.

  • Cloud providers and telemetry via Qt MQTT

    First, the focus is on getting devices connected to the cloud. Being able to send and receive messages is the prime target. This post will not talk about services, features, or costs by the cloud providers themselves once messages are in the cloud.

    Furthermore, the idea is to only use Qt and/or Qt MQTT to establish a connection. Most, if not all, vendors provide SDKs for either devices or monitoring (web and native) applications. However, using these SDKs extends the amount of additional dependencies, leading to higher requirements for storage and memory.

  • SUSE Enterprise Storage and Veeam go great together

    Whether you’re new to the popular Windows-based backup tool Veeam or an old pro, you know that ever-growing demands on your storage resources are a true challenge. The flexibility of Ceph makes it a good choice for a back-up target, and SUSE Enterprise Storage makes it easy.

Servers: Puppet, OpenStack and OpenPOWER

Filed under
Server
  • [Older] Why choose Puppet for DevOps?

    If you’re like most in the DevOps world, you’re always interested in automating tasks and securing your infrastructure. But it’s important to find ways that won’t sacrifice the quality or lose efficiency. Enter Puppet for DevOps. Forty-two percent of all DevOps businesses currently use this handy tool, for good reason.

    Puppet for DevOps is unique because it allows you to enforce automation, enhance organization, boost security measures, and ramp up the overall speed across an entire infrastructure. Puppet’s special abilities are clearly game-changing. And a big part of this sharp setup is due to the initialization of the module authoring process.

  • BT bets big on Canonical for core 5G network

    The foundations for the future of BT's 5G network will be open source, with practically every virtualised aspect of the future infrastructure to be delivered and managed with Canonical's Charmed OpenStack distribution.

  • OpenPOWER opens further

    In what was to prove something of a theme throughout the morning, Hugh Blemings said that he had been feeling a bit like a kid waiting for Christmas recently, but that the day when the presents can be unwrapped had finally arrived. He is the executive director of the OpenPOWER Foundation and was kicking off the keynotes for the second day of the 2019 OpenPOWER Summit North America; the keynotes would reveal the "most significant and impressive announcements" in the history of the project, he said. Multiple presentations outlined a major change in the openness of the OpenPOWER instruction set architecture (ISA), along with various related hardware and software pieces; in short, OpenPOWER can be used by compliant products without paying royalties and with a grant of the patents that IBM holds on it. In addition, the foundation will be moving under the aegis of the Linux Foundation.

    Blemings also wrote about the changes in a blog post at the foundation web site. To set the stage for the announcements to come, he played a promotional video (which can be found in the post) that gave an overview of the foundation and the accomplishments of the OpenPOWER architecture, which includes underlying the two most powerful supercomputers in the world today.

Red Hat/IBM Servers and Databases

Filed under
Red Hat
Server
  • Themes driving digital transformation and leadership in financial services

    Incumbent banks should know they have to modernize their organization to compete in a world where customers want better and more personalized digital experiences. Eager to realize the cost-savings and increased revenue that can result from micro-targeting products and services, they can adopt next-generation technologies to transform their businesses to lead their market.

    Digital leaders are focused on end-to-end customer experiences. Processes, policies, and procedures defined for branch networks are being reimagined to support new digital customer engagement. By modernizing the back office and business processes, banks have an opportunity to streamline, codify, and thereby automate - which, in turn, can reduce friction caused by manual checks and inconsistent policies. This can enable more seamless customer experiences and speedier customer service, with transparency into servicing while reducing operational costs.

  • Introducing Red Hat OpenShift 4.2 in Developer Preview: Releasing Nightly Builds

    You might have read about the architectural changes and enhancements in Red Hat OpenShift 4 that resulted in operational and installation benefits. Or maybe you read about how OpenShift 4 assists with developer innovation and hybrid cloud deployments. I want to draw attention to another part of OpenShift 4 that we haven’t exposed to you yet…until today.

    When Red Hat acquired CoreOS, and had the opportunity to blend Container Linux with RHEL and Tectonic with OpenShift, the innovation did not remain only in the products we brought to market.

    An exciting part about working on new cloud-native technology is the ability to redefine how you work. Redefine how you hammer that nail with your hammer. These Red Hat engineers were building a house, and sometimes the tools they needed simply did not exist.

  • IBM POWER Instruction Set Architecture Now Open Source

    IBM has open sourced the POWER Instruction Set Architecture (ISA), which is used in its Power Series chips and in many embedded devices by other manufacturers. In addition, the OpenPOWER Foundation will become part of The Linux Foundation to further open governance.

    IBM created the OpenPOWER Foundation in 2013 with the aim of making it easier for server vendors to build customized servers based on the IBM Power architecture. By joining the OpenPOWER Foundation, vendors had access to processor specifications, firmware, and software, and were allowed to manufacture POWER processors or related chips under a liberal license. With IBM's latest announcement, vendors can create chips using the POWER ISA without paying any royalties and have full access to the ISA definition. As IBM's OpenPOWER general manager Ken King highlights, open-sourcing the POWER ISA enables the creation of computers that are completely open source, from the foundations of the hardware (the processor instruction set, firmware, boot code, and so on) up to the software stack.

  • Julien Danjou: The Art of PostgreSQL is out!

    If you remember well, a couple of years ago, I wrote about Mastering PostgreSQL, a fantastic book written by my friend Dimitri Fontaine.

    Dimitri is a long-time PostgreSQL core developer — for example, he wrote the extension support in PostgreSQL — no less. He is featured in my book Serious Python, where he advises on using databases and ORM in Python.

    Today, Dimitri comes back with the new version of this book, named The Art of PostgreSQL.

  • Surf’s Up! Riding The Second Wave Of Open Source

    I have never surfed before, but I am told it is incredibly exciting and great exercise, which as we all know is very good for you. For some it may sound daunting, because it is so unlike any other sport, but for those prepared to take up the challenge it can be hugely rewarding. Stretching yourself – perhaps literally – and taking your body out of its comfort zone is a proven way of staying healthy. I would argue there are similarities for IT departments as they evaluate how to get their database architectures fit to support businesses that want to become more agile and responsive to customers.

    Making sure that IT systems are fit-for-purpose, robust and reliable enables companies to embrace new markets, innovative products and re-engineered processes: all are typical of organisations which are looking to survive and thrive in an increasingly fraught business environment.

7 Best SNMP Monitoring Tools For Linux

Filed under
GNU
Linux
Server
Software

SNMP monitoring is by far the most common type of network monitoring technology. It allows administrators of networks of any size to stay informed about the status and utilization of the networks they manage. Likewise, Linux is also a very common platform to which many network administrators have turned. Although it is not yet as common in the desktop world as the commercial offerings from some mega-vendors, it is very common in the server world. Even IBM has made it the OS of choice on many of its higher-range systems.

Read more

Kali Linux Team has Renamed their Meta-packages to be More Meaningful

Filed under
GNU
Linux
Server
Security
Debian

The Kali Linux team has renamed its meta-packages to make them more meaningful and easier to understand.

This change will optimize Kali, reduce the ISO size, and organize meta-packages in a better way.

Some of you may already know about this; however, I will give you an overview of meta-packages before discussing the topic further.

What’s a Meta-package?

Meta-packages are specialized packages: they do not contain any of the files usually found in packages.

A meta-package is a way to collect and group related software packages; it simply depends on other packages being installed.

It allows entire sets of software to be installed by selecting only the appropriate meta-package.

For example, each Linux desktop environment comes with a wide range of applications, which can all be installed by running a single command because they are already grouped together.

This also reduces download effort; that is, all of the GNOME packages can be obtained in one download.
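
For illustration only (the package name and the tools it depends on are invented here, not actual Kali meta-packages), a Debian-style meta-package is little more than a control-file stanza that declares dependencies and ships no files of its own:

```
Package: kali-tools-example
Version: 2020.1.0
Architecture: all
Depends: nmap, hydra, sqlmap
Description: example meta-package grouping related tools
 Installing this one package pulls in everything listed
 in Depends; removing it leaves the dependencies installed.
```

Installing such a package with apt then resolves and downloads the whole group in a single step.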

Read more


More in Tux Machines

Analyzing Distrowatch Trends

Free software is so diverse that its trends are hard to follow. How can information be gathered without tremendous effort and expense? Recently, it occurred to me that a very general sense of free software trends can be had by using the search page on Distrowatch. Admittedly, it is not a very exact sense — it is more like the sparklines on a spreadsheet that show general trends rather than the details. Still, the results are suggestive.

As you probably know, Distrowatch has been tracking Linux distributions since 2002. It is best known for its page-hit rankings for distributions. These rankings do not show how many people are actually using each distro, but the interest in each distro. Still, this interest often does seem to be a broad indicator. For instance, in the last few years Ubuntu has slipped from the top ranking that it held for years to its current position of fifth, which does seem to bear some resemblance to its popularity today.

However, Distrowatch's search page for distributions is less well known. Hidden in the home page header, the search function includes filters for such useful information as the version of packages, init software, and what derivatives a distro might have, and lists matching distros in order of popularity. Although I have heard complaints that Distrowatch can be slow to add or update the distros listed, it occurs to me that the number of results indicates general trends. The results could not plausibly be used to suggest that a difference of one or two results was significant, but greater differences are likely to be more significant.

Read more

4MLinux 31.0 STABLE released.

The status of the 4MLinux 31.0 series has been changed to STABLE. Edit your documents with LibreOffice 6.3.4.2 and GNOME Office (AbiWord 3.0.2, GIMP 2.10.14, Gnumeric 1.12.44), share your files using Dropbox 85.4.155, surf the Internet with Firefox 71.0 and Chromium 78.0.3904.108, send emails via Thunderbird 68.3.0, enjoy your music collection with Audacious 3.10.1, watch your favorite videos with VLC 3.0.8 and mpv 0.29.1, and play games powered by Mesa 19.1.5 and Wine 4.21. You can also set up the 4MLinux LAMP Server (Linux 4.19.86, Apache 2.4.41, MariaDB 10.4.10, PHP 5.6.40 and PHP 7.3.12). Perl 5.30.0, Python 2.7.16, and Python 3.7.3 are also available.

Read more

Programming: C, Perl, Python and More

  • C, what the fuck??!

    A trigraph is only a trigraph when the ?? is followed by one of nine specific characters. So in this case, the C preprocessor will replace the code above with the following: [...]

  • Rewriting Perl Code for Raku IV: A New Hope

    Back in Part III of our series on Raku programming, we talked about some of the basics of OO programming. This time we’ll talk about another aspect of OO programming. Perl objects can be made from any kind of reference, although the most common is a hash. I think Raku objects can do the same, but in this article we’ll just talk about hash-style Perl objects.

    Raku objects let you superclass and subclass them, instantiate them, run methods on them, and store data in them. In previous articles we’ve talked about all but storing data. It’s time to remedy that, and talk about attributes.

  • Mike Driscoll: PyDev of the Week: Ted Petrou

    I graduated with a masters degree in statistics from Rice University in Houston, Texas in 2006. During my degree, I never heard the phrase “machine learning” uttered even once, and it was several years before the field of data science became popular. I had entered the program pursuing a Ph.D with just six other students. Although statistics was a highly viable career at the time, it wasn’t nearly as popular as it is today. After limping out of the program with a masters degree, I looked into the field of actuarial science, became a professional poker player, taught high school math, and built reports with SQL and Excel VBA as a financial analyst before becoming a data scientist at Schlumberger. During my stint as a data scientist, I started the meetup group Houston Data Science, where I gave tutorials on various Python data science topics. Once I accumulated enough material, I started my company Dunder Data, teaching data science full time.

  • Authorized Google API access from Python (part 2 of 2)

    In this final installment of a (currently) two-part series introducing Python developers to building on Google APIs, we'll extend the simple API example from the first post (part 1) just over a month ago. Those first snippets showed some skeleton code and a short working sample that demonstrated accessing a public (Google) API with an API key (it queried public Google+ posts). An API key, however, does not grant applications access to authorized data.

    Authorized data, including user information such as personal files on Google Drive and YouTube playlists, requires additional security steps before access is granted. Sharing or hardcoding credentials such as usernames and passwords is not only insecure, it's also a thing of the past. A more modern approach leverages token exchange, authenticated API calls, and standards such as OAuth2. In this post, we'll demonstrate how to use Python to access authorized Google APIs using OAuth2, specifically listing the files (and folders) in your Google Drive.

    In order to better understand the example, we strongly recommend you check out the OAuth2 guides (general OAuth2 info, OAuth2 as it relates to Python and its client library) in the documentation to get started. The docs describe the OAuth2 flow: making a request for authorized access, having the user grant access to your app, and obtaining a(n access) token with which to sign and make authorized API calls. The steps you need to take to get started begin nearly the same way as for simple API access. The process diverges when you arrive at the Credentials page while following the steps below.
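
To make the first step of that flow concrete, here is a minimal hand-rolled Python sketch of building the authorization URL a user visits to grant access. This is a simplification: in practice the Google client libraries construct this URL for you, and the client ID and redirect URI below are placeholders, not real values:

```python
from urllib.parse import urlencode

# Google's OAuth2 authorization endpoint; real values for client_id and
# redirect_uri come from your project's credentials in the API Console.
AUTH_ENDPOINT = "https://accounts.google.com/o/oauth2/v2/auth"

def build_auth_url(client_id, redirect_uri, scopes):
    """Return the URL that starts the OAuth2 authorization-code flow."""
    params = {
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "response_type": "code",    # ask for an authorization code
        "scope": " ".join(scopes),  # e.g. read-only Drive metadata
        "access_type": "offline",   # also request a refresh token
    }
    return AUTH_ENDPOINT + "?" + urlencode(params)

url = build_auth_url(
    "YOUR_CLIENT_ID.apps.googleusercontent.com",   # placeholder
    "http://localhost:8080/",                      # placeholder
    ["https://www.googleapis.com/auth/drive.metadata.readonly"],
)
print(url)
```

After the user grants access, the app exchanges the returned authorization code for an access token at the token endpoint, which is the part the client library's flow objects automate.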

  • Friendly Mu
  • Announcing Google Summer of Code 2020!

    Are you a university student interested in learning how to prepare for the 2020 GSoC program? It’s never too early to start thinking about your proposal or about what type of open source organization you may want to work with. You should read the student guide for important tips on preparing your proposal and what to consider if you wish to apply for the program in mid-March. You can also get inspired by checking out the 200+ organizations that participated in Google Summer of Code 2019, as well as the projects that students worked on.

  • Decentralised SMTP is for the greater good

    In August, I published a small article titled “You should not run your mail server because mail is hard” which was basically my opinion on why people keep saying it is hard to run a mail server. Unexpectedly, the article became very popular, reached 100K reads and still gets hits and comments several months after publishing.


    As a follow-up to that article, I published in September a much lengthier article titled “Setting up a mail server with OpenSMTPD, Dovecot and Rspamd”, which described how you could set up a complete mail server. I started from scratch and went all the way up to inboxing at various Big Mailer Corps, using an unused domain of mine with a neutral reputation, and describing precisely for each step what was done and why. The article became fairly popular (nowhere near the first one, which wasn't so technical), but it reached 40K reads and also still gets hits and comments several months after publishing.


    The content you’re about to read was part of the second article, but it didn’t belong there; it was too (geo-)political to be part of a technical article, so I decided to remove it and make it a dedicated piece. I don’t want the tech stack to get in the way of the message; this is not about OpenSMTPD.

today's howtos