Server

CentOS 8 To Be Released Next Week

Filed under
Red Hat
Server

The CentOS Project has announced that CentOS 8.0 will be available for download beginning Tuesday, September 24. This release was deferred so that work to release CentOS 7.7 could be completed, which means that CentOS 7.7 will be out shortly as well (and it is already beginning to appear in mirrors and repos). This comes 20 weeks to the day from the release of Red Hat Enterprise Linux 8.

Kubernetes Leftovers

Filed under
Server
OSS
  • With its Kubernetes bet paying off, Cloud Foundry doubles down on developer experience

    More than 50% of the Fortune 500 companies are now using the open-source Cloud Foundry Platform-as-a-Service project — either directly or through vendors like Pivotal — to build, test and deploy their applications. Like so many other projects, including the likes of OpenStack, Cloud Foundry went through a bit of a transition in recent years as more and more developers started looking to containers — and especially the Kubernetes project — as a platform on which to develop. Now, however, the project is ready to focus on what always differentiated it from its closed- and open-source competitors: the developer experience.

  • Kubernetes in the Enterprise: A Primer

    As Kubernetes moves deeper into the enterprise, its growth is having an impact on the ecosystem at large.

    When Kubernetes came on the scene in 2014, it made an immediate impact and continues to shape the way companies build software. Large companies have backed it, causing a ripple effect across the industry, in open source and commercial systems alike. To understand how K8s will continue to affect the industry and change the traditional enterprise data center, we must first understand the basics of Kubernetes.

  • Google Cloud rolls out Cloud Dataproc on Kubernetes

    Google Cloud is trialling alpha availability of a new platform for data scientists and engineers through Kubernetes.

    Cloud Dataproc on Kubernetes combines open source, machine learning and cloud to help modernise big data resource management.

    The alpha availability will first start with workloads on Apache Spark, with more environments to come.

  • Google announces alpha of Cloud Dataproc for Kubernetes

    Not surprisingly, Google, the company that created K8s, thinks the answer to that question is yes. And so, today, the company is announcing the alpha release of Cloud Dataproc for Kubernetes (K8s Dataproc), allowing Spark to run directly on Google Kubernetes Engine (GKE)-based K8s clusters. The service promises to reduce the complexity of open source data components' inter-dependencies and to improve the portability of Spark applications. That should allow data engineers, analytics experts and data scientists to run their Spark workloads in a streamlined way, with fewer integration and versioning hassles.

Databases: MariaDB, ScyllaDB, Percona, Cassandra

Filed under
Server
  • MariaDB opens US headquarters in California

    MariaDB Corporation, the database company born as a result of forking the well-known open-source MySQL database...

  • ScyllaDB takes on Amazon with new DynamoDB migration tool

    There are a lot of open-source databases out there, and ScyllaDB, a NoSQL variety, is looking to differentiate itself by attracting none other than Amazon users. Today, it announced a DynamoDB migration tool to help Amazon customers move to its product.

  • ScyllaDB Announces Alternator, an Open Source Amazon DynamoDB-Compatible API

    ScyllaDB today announced the Alternator project, open-source software that will enable application- and API-level compatibility between Scylla and Amazon’s NoSQL cloud database, Amazon DynamoDB. Scylla’s DynamoDB-compatible API will be available for use with Scylla Open Source, supporting the majority of DynamoDB use cases and features.
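Since Alternator exposes the DynamoDB wire API, the announcement implies that an existing DynamoDB client should only need a different endpoint, not new code. A minimal, hypothetical sketch of that idea (the host, port and region are made up; passing these keyword arguments to a client constructor such as boto3's is an assumption, not something the announcement shows):

```python
# Sketch only: with an API-compatible server, moving a DynamoDB client to a
# self-hosted Scylla Alternator node should reduce to changing the endpoint.

def dynamodb_client_kwargs(endpoint=None):
    """Build client settings; no endpoint means the managed AWS service."""
    kwargs = {"service_name": "dynamodb", "region_name": "us-east-1"}
    if endpoint:
        # Point the same client at an Alternator node instead of AWS.
        kwargs["endpoint_url"] = endpoint
    return kwargs

# AWS-hosted DynamoDB:
aws = dynamodb_client_kwargs()
# Self-hosted Scylla speaking the same API (hypothetical address):
scylla = dynamodb_client_kwargs("http://scylla.example.internal:8000")
```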

  • ScyllaDB Secures $25 Million to Open Source Amazon DynamoDB-compatible API

    Fast-growing NoSQL database company raises funds to extend operations and bring new deployment flexibility to users of Amazon DynamoDB.

  • ScyllaDB powers up Alternator: an open Amazon DynamoDB API

    Companies normally keep things pretty quiet in the run up to their annual user conferences, so they can pepper the press with a bag of announcements designed to show how much market momentum and traction they have going.

    Not so with ScyllaDB, the company has been dropping updates in advance of its Scylla Summit event in what is perhaps an unusually vocal kind of way.

    [...]

    Scylla itself is a real-time big data database that is fully compatible with Apache Cassandra and is known for its ‘shared-nothing’ approach (a distributed-computing architecture in which each update request is satisfied by a single node, i.e. a processor/memory/storage unit, to increase throughput and storage capacity).

  • Percona Announces Full Conference Schedule for Percona Live Open Source Database Conference Europe 2019

    The Percona Live Open Source Database Conference Europe 2019 is the premier open source database event. Percona Live conferences provide the open source database community with an opportunity to discover and discuss the latest open source trends, technologies and innovations. The conference includes the best and brightest innovators and influencers in the open source database industry.

  • Thwarting Digital Ad Fraud at Scale: An Open Source Experiment with Anomaly Detection

    Our experiment assembles Kafka, Cassandra, and our anomaly detection application in a Lambda architecture, in which Kafka and our streaming data pipeline are the speed layer, and Cassandra acts as the batch and serving layer. In this configuration, Kafka makes it possible to ingest streaming digital ad data in a fast and scalable manner, while taking a “store and forward” approach so that Kafka can serve as a buffer to protect the Cassandra database from being overwhelmed by major data surges. Cassandra’s strength is in storing high-velocity streams of ad metric data in its linearly scalable, write-optimized database. In order to handle automation for provisioning, deploying, and scaling the application, the anomaly detection experiment relies on Kubernetes on AWS EKS.
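The detector itself isn't reproduced in the excerpt, so as an illustration only, here is the simplest kind of check such a pipeline might run over each batch of ad metrics: a z-score threshold rule. The data and threshold are invented:

```python
import statistics

def detect_anomalies(values, threshold=3.0):
    """Return indices of points more than `threshold` population standard
    deviations from the mean -- a simple z-score rule."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # a flat series has no outliers
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > threshold]

# A surge in one ad metric stands out against a steady baseline:
clicks_per_second = [10.0] * 20 + [1000.0]
print(detect_anomalies(clicks_per_second))  # [20] -- the suspicious point
```

In the architecture described above, a function like this would consume batches from the Kafka speed layer, with Cassandra holding the historical baselines.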

Server: Kubeflow + OpenShift Container Platform, SUSE's SLES and More

Filed under
Server

Red Hat: Flask on Red Hat Enterprise Linux, OpenShift and SAN vs. NAS

Filed under
Red Hat
Server
  • Develop with Flask and Python 3 in a container on Red Hat Enterprise Linux

    In my previous article, Run Red Hat Enterprise Linux 8 in a container on RHEL 7, I showed how to start developing with the latest versions of languages, databases, and web servers available with Red Hat Enterprise Linux 8 even if you are still running RHEL 7. In this article, I'll build on that base to show how to get started with the Flask microframework using the current RHEL 8 application stream version of Python 3.

    From my perspective, using Red Hat Enterprise Linux 8 application streams in containers is preferable to using software collections on RHEL 7. While you need to get comfortable with containers, all of the software installs in the locations you'd expect. There is no need to use scl commands to manage the selected software versions. Instead, each container gets an isolated user space. You don't have to worry about conflicting versions.

    In this article, you'll create a Red Hat Enterprise Linux 8 Flask container with Buildah and run it with Podman. The code will be stored on your local machine and mapped into the container when it runs. You'll be able to edit the code on your local machine as you would any other application. Since it is mapped via a volume mount, the changes you make to the code will be immediately visible from the container, which is convenient for dynamic languages that don't need to be compiled. While this approach isn't the way to do things for production, you get the same development inner loop as you'd have when developing locally without containers. The article also shows how to use Buildah to build a production image with your completed application.
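The Flask code itself is in the linked article; as a dependency-free stand-in, here is a minimal WSGI application (the callable interface Flask implements under the hood), plus a driver that simulates one request in-process, no container or network needed. The greeting text is made up:

```python
from wsgiref.util import setup_testing_defaults

def application(environ, start_response):
    """Minimal WSGI app; Flask apps present this same callable interface."""
    start_response("200 OK", [("Content-Type", "text/plain; charset=utf-8")])
    return [b"Hello from the container\n"]

def call_app(app):
    """Simulate one request in-process, returning (status, body)."""
    environ = {}
    setup_testing_defaults(environ)  # fill in a plausible WSGI environ
    captured = {}

    def start_response(status, headers):
        captured["status"] = status

    body = b"".join(app(environ, start_response))
    return captured["status"], body

print(call_app(application))  # ('200 OK', b'Hello from the container\n')
```

Because the code is volume-mounted as described above, edits to a module like this are visible inside the container on the next request.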

  • IBM brings Cloud Foundry and Red Hat OpenShift together

    At the Cloud Foundry Summit in The Hague, IBM today showcased its Cloud Foundry Enterprise Environment on Red Hat's OpenShift container platform.

    For the longest time, the open-source Cloud Foundry Platform-as-a-Service ecosystem and Red Hat's Kubernetes-centric OpenShift were mostly seen as competitors, with both tools vying for enterprise customers who want to modernize their application development and delivery platforms. But a lot of things have changed in recent times. On the technical side, Cloud Foundry started adopting Kubernetes as an option for application deployments and as a way of containerizing and running Cloud Foundry itself.

  • SAN vs. NAS: Comparing two approaches to data storage

    For a new sysadmin, storage can be one of the more confusing aspects of infrastructure. This confusion can be caused by a lack of exposure to new or different technologies, often because storage needs are managed by another team. Without a specific interest in storage, an admin might find themselves with a number of misconceptions, questions, or concerns about how or why to implement different solutions.

    When discussing enterprise storage, two concepts are at the core of most conversations: storage area networks (SAN) and network-attached storage (NAS). Both options provide storage to clients across a network, which offers the huge benefit of removing individual servers as single points of failure. Using one of these options also reduces the cost of individual clients, as there is no longer a need to have large amounts of local storage.

Servers: "Docker Not Doomed?" and Some IBM/Red Hat Leftovers

Filed under
Red Hat
Server
  • Docker Not Doomed?

    Modern application development essentially consists of composing an application from a variety of services. These services aren't just infrastructure components that live on a server any more. They're delivered via an API and could be almost anything underneath as the abstractions start to pile up.

    COBOL code at the other end of a message bus with a lambda-function frontend? Okay. Ephemeral container running a Spring Boot service that connects to an RDBMS on a physical Unix server on the other side of the country? Sure, why not? Modern applications don't really care, because it's all about getting the job done. The name of the game is loosely-coupled modular components.

    This is why Docker has joined forces with Microsoft, Bitnami, HashiCorp, and a few others to create the Cloud Native Application Bundle (CNAB) specification. Docker uses this spec as part of its Docker App tool, which behaves a lot like docker-compose to collect a variety of services together into a single application bundle that can be shared around. It's a lot like a container collection, and brings the same easy portability of containers to composed applications.

    "[Docker App] allows you to describe not just containers, but other services around which the app is dependent," says Johnston. "And it allows you to do things that enterprises care about, such as signing the bundle, verifying that signature, and automatically promoting it based on that signature and things like that."
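For a sense of what CNAB looks like, a minimal bundle descriptor might resemble the following sketch. The field names follow the CNAB specification, but the name, version, description and image reference are placeholders, and real bundles usually declare more (parameters, credentials, and so on):

```json
{
  "schemaVersion": "v1.0.0",
  "name": "example-app",
  "version": "0.1.0",
  "description": "Hypothetical bundle: one invocation image installs the app",
  "invocationImages": [
    {
      "imageType": "docker",
      "image": "example/example-app-cnab:0.1.0"
    }
  ]
}
```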

  • Red Hat OpenShift Service Mesh is now available: What you should know

    As Kubernetes and Linux-based infrastructure take hold in digitally transforming organizations, modern applications frequently run in a microservices architecture and therefore can have complex route requests from one service to another. With Red Hat OpenShift Service Mesh, we’ve gone beyond routing the requests between services and included tracing and visualization components that make deploying a service mesh more robust. The service mesh layer helps us simplify the connection, observability and ongoing management of every application deployed on Red Hat OpenShift, the industry’s most comprehensive enterprise Kubernetes platform.

    Red Hat OpenShift Service Mesh is available through the OpenShift Service Mesh Operator, and we encourage teams to try it out on Red Hat OpenShift 4.

  • Catching up with Red Hat at Sibos 2019

    Red Hat is excited to once again be attending Sibos, an annual financial services industry conference, exhibition and networking event hosted by SWIFT. This year, the event is being held in London, England from September 23rd through 26th. Red Hat will be attending to sponsor a number of activities and discuss how and why enterprise open source technologies offer innovative capabilities that can help firms thrive in their digital journeys.

Server: Red Hat, Intel and SUSE

Filed under
Linux
Red Hat
Server
SUSE
  • Introduction to virtio-networking and vhost-net

    In this post we have scratched the surface of the virtio-networking ecosystem, introducing you to the basic building blocks of virtualization and networking used by virtio-networking. We have briefly covered the virtio spec and the vhost protocol, reviewed the frontend and backend architecture used for implementing the virtio interface and have taken you through the vhost-net/virtio-net architecture of vhost-net (host kernel) communicating with virtio-net (guest kernel).

    A fundamental challenge we had when trying to explain things was the historical overloading of terms. As one example, virtio-net refers both to the virtio networking device implementation in the virtio specification and also to the guest kernel front end described in the vhost-net/virtio-net architecture. We attempted to address this by explaining the context of terms and using virtio-net to only describe the guest kernel frontend.

    As will be explained in later posts, there are other implementations of the virtio spec networking device, based on DPDK and on different hardware offloading techniques, which all fall under the umbrella of virtio-networking.

    The next two posts are intended to provide a deeper understanding of the vhost-net/virtio-net architecture. One post, intended for architects, will provide a technical deep dive into vhost-net/virtio-net and explain how the data plane and control plane are implemented in practice. The other post, intended for developers, will be a hands-on session including Ansible scripts to enable experimenting with the vhost-net/virtio-net architecture.

    If you prefer high level overviews we recommend you keep an eye out for the virtio-networking and DPDK introductions, to be published in the upcoming weeks.

  • Intel Issues Second Release Of Its Rust-Written Cloud-Hypervisor For Modern Linux VMs

    Intel's open-source crew has released version 0.2 of its primarily Rust-developed Cloud Hypervisor and associated firmware also in Rust.

    The Intel Cloud Hypervisor is their experimental VMM running atop KVM designed for modern Linux distributions and VirtIO para-virtualized devices without any legacy device support.

  • Announcing SUSE CaaS Platform 4

    SUSE CaaS Platform 4 raises the bar for robust Kubernetes platform operations with enhancements that expand platform scalability options, strengthen application security, and make it easier to keep pace with technology advancements. Integrating the latest releases of Kubernetes and SUSE Linux Enterprise, SUSE CaaS Platform 4 continues to provide industry leading application delivery capabilities as an enterprise-ready solution.

  • A new era in Cloud Native Application Delivery is here
  • 3 Infrastructure Compliance Best Practices for DevOps

    For most IT organizations, the need for compliance goes without saying. Internal corporate policies and external regulations like HIPAA and Sarbanes Oxley require compliance. Businesses in heavily regulated industries like healthcare, financial services, and public service are among those with the greatest need for strong compliance programs.

Linux Foundation and Cloud Native Computing Foundation (CNCF)

Filed under
Linux
Server
  • The Linux Kernel Mentorship is Life Changing

    My name is Kelsey Skunberg and I am starting the senior year of my undergraduate degree in Computer Science at Colorado State University. This summer, I had the honor of participating in the Linux Kernel Mentorship Program through CommunityBridge. Throughout the mentorship, I grew very fond of working on open source projects, learned to work with open source communities, and my confidence as a developer has grown tremendously.

    Since the beginning, I found the Linux kernel community to be very welcoming and willing to help. Many of the developers and maintainers have taken time to answer questions, review patches, and provide advice. I’ve come to learn contributing is not quite as scary as I first anticipated. It’s ok to make mistakes, just be open to learning and new ideas. There are a lot of resources for learning, and developers willing to invest time in mentoring and helping new contributors.

    [...]

    I chose to work on PCI Utilities and Linux PCI with Bjorn Helgaas as my mentor. Bjorn has been an incredible mentor who provided me with a great amount of advice and has introduced me to several tools which make the development process easier.

  • Sysdig Makes Container Security Case for Falco

    Sysdig is doubling down on its efforts to make its open source Falco project the de facto means for pulling security metrics for runtime security and intrusion detection. The company has already contributed Falco to the Cloud Native Computing Foundation (CNCF) and has hired Kris Nova, a CNCF ambassador who worked for Heptio (now part of VMware) and Deis (now part of Microsoft). Nova is also credited with developing kubicorn, an infrastructure management tool for Kubernetes.

  • Software Development, Microservices & Container Management – Part I – Microservices – Is it the Holy Grail?

    Together with my colleague Bettina Bassermann and SUSE partners, we will be running a series of blogs and webinars from SUSE (Software Development, Microservices & Container Management, a SUSE webinar series on modern Application Development), and try to break the ice about Microservices Architecture (MSA) and Cloud Native Application Development (CNA) in the software development field.

4 Open source alternatives to Slack and...

Filed under
Server
OSS

Within this segment, the strongest contender is Matrix, an interesting open and decentralized communication standard designed for interoperability in much the same way that e-mail is interoperable, enabling real-time communication between users regardless of the clients or servers they use.

Currently, the standard and all its development are maintained by the Matrix.org Foundation, a non-profit organization based in the United Kingdom.

Matrix has been developed with privacy and security in mind, and with federation between servers, so that a user can communicate securely in any existing room, with end-to-end encryption, regardless of the server where their account is registered, and using any client of their choice.

There are also gateways for participating through messaging programs such as Telegram, Discord or Slack, among others.

Matrix supports communication between users via text chat, audio calls and video calls, along with other possibilities.

In addition, it aims to surpass the limited success of the SIP, XMPP and RCS standards by circumventing the obstacles that have kept those standards from going further.

Among the clients, the best known is Riot, which is also open source. Those who do not want to run their own self-hosted Matrix servers can sign up for one of Modular.im's plans to create a server in a few clicks, depending on their needs.
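To make the openness of the standard concrete, here is a sketch of the HTTP request a Matrix client issues to post a plain-text message, built with the standard library only. The `r0` client-server API prefix and the `m.room.message`/`m.text` types come from the Matrix spec; the room ID and transaction ID are placeholders, and the homeserver host and access token are omitted entirely:

```python
from urllib.parse import quote

MATRIX_PREFIX = "/_matrix/client/r0"  # client-server API, r0 revision

def send_message_request(room_id, txn_id, text):
    """Build (method, path, json_body) for posting a plain-text message.

    Only the request shape is shown; a real client would prepend the
    homeserver URL and add an Authorization header.
    """
    path = (f"{MATRIX_PREFIX}/rooms/{quote(room_id, safe='')}"
            f"/send/m.room.message/{quote(str(txn_id), safe='')}")
    body = {"msgtype": "m.text", "body": text}
    return "PUT", path, body

method, path, body = send_message_request("!room:example.org", 1, "hi")
```

Any conforming homeserver, whichever one the account lives on, accepts the same request, which is what federation and client choice rest on.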

Also: Sparky Linux: Riot

Server: Microsoft Ripoff, Open Infrastructure Summit, Edge [and Fog] Computing, Hyperledger Fabric

Filed under
Server
  • Microsoft set to close licensing loopholes, leave cloud rivals high and dry

    Microsoft this fall will begin closing loopholes in its licensing rules that have let customers bring their own licenses for Windows, Windows Server, SQL Server and other software to rival cloud providers like Google and Amazon.

    The Redmond, Wash. company laid down the new law in an Aug. 1 announcement, the same day it previewed Azure Dedicated Host, a new service that runs Windows virtual machines (VMs) on dedicated, single-tenant physical servers.

  • Schedule for Open Infrastructure Shanghai now released

    It may feel like summer is still in full swing, but before you know it, we’ll be facing those shorter days that autumn (or fall, depending on your geographic location and/or linguistic preference) brings. To brighten up these shorter days, many in the open source community will be looking forward to the Open Infrastructure Summit (sometimes shortened to OIS) in Shanghai. The first of these summits to be held in mainland China, this is an exciting event as it will bring together some of the finest minds in open source from around the world in one location.

  • What is Edge [and Fog] Computing and How is it Redefining the Data Center?

    Some of you may have noticed that a hot new buzzword is circulating the Internet: Edge Computing. Truth be told, this is probably a buzzword you should be paying attention to. It is creating enough of a hype for the Linux Foundation to define edge computing and its associated concepts in an Open Glossary of Edge Computing. So, what is edge computing? And how does it redefine the way in which we process data? In order to answer this, we may need to take a step backwards and explain the problem edge computing solves.
    We have all heard of the Cloud. In its most general terms, cloud computing enables companies, service providers and individuals to provision the appropriate amount of computing resources dynamically (compute nodes, block or object storage and so on) for their needs. These application services are accessed over a network—and not necessarily a public network. Three distinct types of cloud deployments exist: public, private and a hybrid of both.

    The public cloud differentiates itself from the private cloud in that the private cloud typically is deployed in the data center and under the proprietary network using its cloud computing technologies—that is, it is developed for and maintained by the organization it serves. Resources for a private cloud deployment are acquired via normal hardware purchasing means and through traditional hardware sales channels. This is not the case for the public cloud. Resources for the public cloud are provisioned dynamically to its user as requested and may be offered under a pay-per-usage model or for free (e.g. AWS, Azure, et al). As the name implies, the hybrid model allows for seamless access and transitioning between both public and private (or on-premise) deployments, all managed under a single framework.

  • An introduction to Hyperledger Fabric

    One of the biggest projects in the blockchain industry, Hyperledger, is comprised of a set of open source tools and subprojects. It's a global collaboration hosted by The Linux Foundation and includes leaders in different sectors who are aiming to build a robust, business-driven blockchain framework.

    There are three main types of blockchain networks: public blockchains, consortiums or federated blockchains, and private blockchains. Hyperledger is a blockchain framework that aims to help companies build private or consortium permissioned blockchain networks where multiple organizations can share the control and permission to operate a node within the network.

More in Tux Machines

Happy 10th birthday, TAILS -- the real Paranoid Linux!

In my 2008 novel Little Brother, the underground resistance uses a secure operating system called "Paranoid Linux" that is designed to prevent surveillance and leave no evidence of its use; that was fiction, but there's a real Paranoid Linux out there: Tails, The Amnesic Incognito Live System, and it turns 10 today. Tails is a fork of Debian, a popular GNU/Linux operating system, stripped down and re-engineered so that you can boot most PCs from a Tails thumbdrive, use the web securely and anonymously, and shut the system down again without leaving any trace behind.

Analyzing Distrowatch Trends

Free software is so diverse that its trends are hard to follow. How can information be gathered without tremendous effort and expense? Recently, it occurred to me that a very general sense of free software trends can be had by using the search page on Distrowatch. Admittedly, it is not a very exact sense — it is more like the sparklines on a spreadsheet that show general trends rather than the details. Still, the results are suggestive.

As you probably know, Distrowatch has been tracking Linux distributions since 2002. It is best-known for its page hit rankings for distributions. These rankings do not show how many people are actually using each distro, but the interest in each distro. Still, this interest often does seem to be a broad indicator. For instance, in the last few years Ubuntu has slipped from the top ranking that it held for years to its current position of fifth, which does seem to bear some resemblance to its popularity today.

However, Distrowatch's search page for distributions is less well-known. Hidden in the home page header, the search function includes filters for such useful information as the version of packages, init software, and what derivatives a distro might have, and lists matching distros in order of popularity. Although I have heard complaints that Distrowatch can be slow to add or update the distros listed, it occurs to me that the number of results indicates general trends. The results could not plausibly be used to suggest that a difference of one or two results was significant, but greater differences are likely to be more significant.

4MLinux 31.0 STABLE released.

The status of the 4MLinux 31.0 series has been changed to STABLE. Edit your documents with LibreOffice 6.3.4.2 and GNOME Office (AbiWord 3.0.2, GIMP 2.10.14, Gnumeric 1.12.44), share your files using DropBox 85.4.155, surf the Internet with Firefox 71.0 and Chromium 78.0.3904.108, send emails via Thunderbird 68.3.0, enjoy your music collection with Audacious 3.10.1, watch your favorite videos with VLC 3.0.8 and mpv 0.29.1, and play games powered by Mesa 19.1.5 and Wine 4.21. You can also set up the 4MLinux LAMP Server (Linux 4.19.86, Apache 2.4.41, MariaDB 10.4.10, PHP 5.6.40 and PHP 7.3.12). Perl 5.30.0, Python 2.7.16, and Python 3.7.3 are also available.

Programming: C, Perl, Python and More

  • C, what the fuck??!

    A trigraph is only a trigraph when the ??s are followed by one of nine specific characters. So in this case, the C preprocessor will replace the code above with the following: [...]
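For reference, the nine trigraphs and their replacements can be mimicked in a few lines of Python. This is a sketch of the substitution rule only, not of the real preprocessor, which performs this in translation phase 1, before tokenization:

```python
# The nine C trigraphs (ISO C, translation phase 1).
TRIGRAPHS = {
    "??=": "#", "??(": "[", "??/": "\\",
    "??)": "]", "??'": "^", "??<": "{",
    "??!": "|", "??>": "}", "??-": "~",
}

def expand_trigraphs(src):
    """Replace trigraph sequences left to right; anything else passes through."""
    out, i = [], 0
    while i < len(src):
        chunk = src[i:i + 3]
        if chunk in TRIGRAPHS:
            out.append(TRIGRAPHS[chunk])
            i += 3
        else:
            out.append(src[i])
            i += 1
    return "".join(out)

print(expand_trigraphs("x = a ??! b;"))  # x = a | b;
print(expand_trigraphs("huh??"))         # unchanged: no trigraph char follows
```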

  • Rewriting Perl Code for Raku IV: A New Hope

    Back in Part III of our series on Raku programming, we talked about some of the basics of OO programming. This time we’ll talk about another aspect of OO programming. Perl objects can be made from any kind of reference, although the most common is a hash. I think Raku objects can do the same, but in this article we’ll just talk about hash-style Perl objects.

    Raku objects let you superclass and subclass them, instantiate them, run methods on them, and store data in them. In previous articles we’ve talked about all but storing data. It’s time to remedy that, and talk about attributes.

  • Mike Driscoll: PyDev of the Week: Ted Petrou

    I graduated with a masters degree in statistics from Rice University in Houston, Texas in 2006. During my degree, I never heard the phrase “machine learning” uttered even once and it was several years before the field of data science became popular. I had entered the program pursuing a Ph.D with just six other students. Although statistics was a highly viable career at the time, it wasn’t nearly as popular as it is today. After limping out of the program with a masters degree, I looked into the fields of actuarial science, became a professional poker player, taught high school math, and built reports with SQL and Excel VBA as a financial analyst before becoming a data scientist at Schlumberger. During my stint as a data scientist, I started the meetup group Houston Data Science where I gave tutorials on various Python data science topics. Once I accumulated enough material, I started my company Dunder Data, teaching data science full time.

  • Authorized Google API access from Python (part 2 of 2)

    In this final installment of a (currently) two-part series introducing Python developers to building on Google APIs, we'll extend from the simple API example from the first post (part 1) just over a month ago. Those first snippets showed some skeleton code and a short real working sample that demonstrate accessing a public (Google) API with an API key (that queried public Google+ posts). An API key, however, does not grant applications access to authorized data. Authorized data, including user information such as personal files on Google Drive and YouTube playlists, requires additional security steps before access is granted.

    Sharing of and hardcoding credentials such as usernames and passwords is not only insecure, it's also a thing of the past. A more modern approach leverages token exchange, authenticated API calls, and standards such as OAuth2. In this post, we'll demonstrate how to use Python to access authorized Google APIs using OAuth2, specifically listing the files (and folders) in your Google Drive.

    In order to better understand the example, we strongly recommend you check out the OAuth2 guides (general OAuth2 info, OAuth2 as it relates to Python and its client library) in the documentation to get started. The docs describe the OAuth2 flow: making a request for authorized access, having the user grant access to your app, and obtaining a(n access) token with which to sign and make authorized API calls. The steps you need to take to get started begin nearly the same way as for simple API access. The process diverges when you arrive on the Credentials page when following the steps below.
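The first leg of that flow is sending the user to an authorization URL, whose query string can be assembled with the standard library alone. The endpoint and parameter names below are standard OAuth2/Google values, while the client ID, redirect URI and scope are placeholders:

```python
from urllib.parse import urlencode

AUTH_ENDPOINT = "https://accounts.google.com/o/oauth2/v2/auth"

def authorization_url(client_id, redirect_uri, scopes):
    """Build the URL for the first leg of the OAuth2 authorization-code flow."""
    params = {
        "response_type": "code",    # authorization-code grant
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": " ".join(scopes),  # scopes are space-delimited
        "access_type": "offline",   # also request a refresh token
    }
    return f"{AUTH_ENDPOINT}?{urlencode(params)}"

url = authorization_url(
    "my-client-id",                    # placeholder
    "http://localhost:8080/oauth2cb",  # placeholder
    ["https://www.googleapis.com/auth/drive.metadata.readonly"],
)
```

In practice the Google client libraries build this URL (and handle the subsequent code-for-token exchange) for you; the sketch just shows what travels over the wire.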

  • Friendly Mu
  • Announcing Google Summer of Code 2020!

    Are you a university student interested in learning how to prepare for the 2020 GSoC program? It’s never too early to start thinking about your proposal or about what type of open source organization you may want to work with. You should read the student guide for important tips on preparing your proposal and what to consider if you wish to apply for the program in mid-March. You can also get inspired by checking out the 200+ organizations that participated in Google Summer of Code 2019, as well as the projects that students worked on.

  • Decentralised SMTP is for the greater good

    In August, I published a small article titled “You should not run your mail server because mail is hard” which was basically my opinion on why people keep saying it is hard to run a mail server. Unexpectedly, the article became very popular, reached 100K reads and still gets hits and comments several months after publishing.


    As a follow up to that article, I published in September a much lengthier article titled “Setting up a mail server with OpenSMTPD, Dovecot and Rspamd” which described how you could set up a complete mail server. I went from scratch and up to inboxing at various Big Mailer Corps using an unused domain of mine with a neutral reputation and describing precisely for each step what was done and why it was done. The article became fairly popular, nowhere near the first one which wasn’t so technical, but reached 40K reads and also still gets hits and comments several months after publishing.


    The content you’re about to read was part of the second article but it didn’t belong there, it was too (geo-)political to be part of a technical article, so I decided to remove it from there and make it a dedicated one. I don’t want the tech stack to go in the way of the message, this is not about OpenSMTPD.