
Server

4 Open source alternatives to Slack and...

Filed under
Server
OSS

Within this segment, the strongest contender is Matrix, an open and decentralized standard for communication designed for interoperability, much as e-mail is interoperable, enabling real-time communication between users regardless of the clients or servers they use.

Currently, the standard and all its development are maintained by the Matrix.org Foundation, a non-profit organization based in the United Kingdom.

Matrix has been developed with privacy and security in mind, with federation between servers, so that a user can communicate securely in any existing room, with end-to-end encryption, regardless of the server where their account is registered, and using any client of their choice.

There are also gateways for participating through messaging programs such as Telegram, Discord, or Slack, among others.

Matrix allows communication between users primarily via text chat, audio calls, and video calls, along with other possibilities.
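As a concrete illustration of the client-server model, here is a minimal sketch of the HTTP request a Matrix client composes to post a plain-text message. The endpoint shape follows the Matrix client-server API, but the homeserver URL, token, and room ID below are placeholders, not working credentials:

```python
import json

# Hypothetical homeserver and token -- placeholders only, not real credentials.
HOMESERVER = "https://matrix.example.org"
ACCESS_TOKEN = "SECRET_TOKEN"

def build_send_message_request(room_id: str, txn_id: int, text: str):
    """Compose the request a client makes to post a plain-text message
    (client-server API: PUT /rooms/{roomId}/send/m.room.message/{txnId})."""
    url = (f"{HOMESERVER}/_matrix/client/r0/rooms/{room_id}"
           f"/send/m.room.message/{txn_id}?access_token={ACCESS_TOKEN}")
    body = json.dumps({"msgtype": "m.text", "body": text})
    return "PUT", url, body

method, url, body = build_send_message_request("!abc123:example.org", 1,
                                               "Hello, federated world!")
print(method)                        # PUT
print(json.loads(body)["msgtype"])   # m.text
```

Because the API is plain HTTP and JSON, any client can talk to any homeserver, which is what makes the federation model work in practice.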

In addition, it aims to surpass the limited success of standards such as SIP, XMPP, and RCS by circumventing the obstacles that have kept those standards from achieving wider adoption.

Among the clients, the best known is Riot, also open source. Those who do not want to run their own self-hosted Matrix servers can choose one of Modular.im’s plans to create a server in a few clicks, depending on their needs.

Read more

Also: Sparky Linux: Riot

Server: Microsoft Ripoff, Open Infrastructure Summit, Edge [and Fog] Computing, Hyperledger Fabric

Filed under
Server
  • Microsoft set to close licensing loopholes, leave cloud rivals high and dry

    Microsoft this fall will begin closing loopholes in its licensing rules that have let customers bring their own licenses for Windows, Windows Server, SQL Server and other software to rival cloud providers like Google and Amazon.

    The Redmond, Wash. company laid down the new law in an Aug. 1 announcement, the same day it previewed Azure Dedicated Host, a new service that runs Windows virtual machines (VMs) on dedicated, single-tenant physical servers.

  • Schedule for Open Infrastructure Shanghai now released

    It may feel like summer is still in full swing, but before you know it, we’ll be facing those shorter days that autumn (or fall, depending on your geographic location and/or linguistic preference) brings. To brighten up these shorter days, many in the open source community will be looking forward to the Open Infrastructure Summit (sometimes shortened to OIS) in Shanghai. The first of these summits to be held in mainland China, this is an exciting event as it will bring together some of the finest minds in open source from around the world in one location.

  • What is Edge [and Fog] Computing and How is it Redefining the Data Center?

    Some of you may have noticed that a hot new buzzword is circulating the Internet: Edge Computing. Truth be told, this is probably a buzzword you should be paying attention to. It is creating enough of a hype for the Linux Foundation to define edge computing and its associated concepts in an Open Glossary of Edge Computing. So, what is edge computing? And how does it redefine the way in which we process data? In order to answer this, we may need to take a step backwards and explain the problem edge computing solves.
    We have all heard of the cloud. In its most general terms, cloud computing enables companies, service providers and individuals to provision the appropriate amount of computing resources dynamically (compute nodes, block or object storage and so on) for their needs. These application services are accessed over a network—and not necessarily a public network. Three distinct types of cloud deployments exist: public, private and a hybrid of both.

    The public cloud differentiates itself from the private cloud in that the private cloud typically is deployed in the data center, within the organization's own network and using its own cloud computing technologies—that is, it is developed for and maintained by the organization it serves. Resources for a private cloud deployment are acquired via normal hardware purchasing means and through traditional hardware sales channels. This is not the case for the public cloud. Resources for the public cloud are provisioned dynamically to its user as requested and may be offered under a pay-per-usage model or for free (e.g. AWS, Azure, et al). As the name implies, the hybrid model allows for seamless access and transitioning between both public and private (or on-premise) deployments, all managed under a single framework.

  • An introduction to Hyperledger Fabric

    One of the biggest projects in the blockchain industry, Hyperledger, is comprised of a set of open source tools and subprojects. It's a global collaboration hosted by The Linux Foundation and includes leaders in different sectors who are aiming to build a robust, business-driven blockchain framework.

    There are three main types of blockchain networks: public blockchains, consortiums or federated blockchains, and private blockchains. Hyperledger is a blockchain framework that aims to help companies build private or consortium permissioned blockchain networks where multiple organizations can share the control and permission to operate a node within the network.

EU turns from American public clouds to Nextcloud private clouds

Filed under
Server
OSS
Security

Just like their American counterparts, more than half of European businesses with over 1,000 employees now use a public cloud platform. But European governments aren't so sure that they should trust their data on Amazon Web Services (AWS), Azure, Google Cloud, or the IBM Cloud. They worry that the US CLOUD Act enables US law enforcement to unilaterally demand access to EU citizens' cloud data -- even when it's stored outside the States. So, they're turning to private European-based clouds, such as those running on Nextcloud, a popular open-source Infrastructure-as-a-Service (IaaS) cloud.

Read more

The birth of the Bash shell

Filed under
Development
GNU
Server
OSS

Shell scripting is an essential discipline for anyone in a sysadmin type of role, and the predominant shell in which people write scripts today is Bash. Bash comes as the default on nearly all Linux distributions and modern macOS versions and is slated to be a native part of Windows Terminal soon enough. Bash, you could say, is everywhere.

So how did it get to this point? This week's Command Line Heroes podcast dives deeply into that question by asking the very people who wrote the code.

Read more

Server: Cilium, Unix at 50, SUSE and HPC

Filed under
Server
  • Thomas Graf on Cilium, the 1.6 Release, eBPF Security, & the Road Ahead

    Cilium is open source software for transparently securing the network connectivity between application services deployed using Linux container management platforms like Docker and Kubernetes. It is a CNI plugin that offers layer 7 features typically seen with a service mesh. On this week’s podcast, Thomas Graf (one of the maintainers of Cilium and co-founder of Isovalent) discusses the recent 1.6 release, some of the security questions/concerns around eBPF, and the future roadmap for the project.

  • Unix at 50: The OS that powered smartphones started from failure

    UNIX was born 50 years ago from the failure of an ambitious project that involved titans like Bell Labs, GE, and MIT. This OS powers nearly all smartphones sold worldwide. The story of UNIX began with a meeting on the top floor of an unremarkable annex at the Bell Labs complex in Murray Hill, New Jersey.

  • We offer enterprise-grade open source solutions from edge to core to cloud: Brent Schroeder, Global CTO, SUSE

    The open source market is taking an interesting turn of its own. With IBM acquiring Red Hat for $34 billion, the wheels of competition and innovation have truly been set into motion in the open source market.

    In such interesting times, Brent Schroeder, Global CTO, SUSE took over from Thomas Di Giacomo, the now president for engineering at the company. In an exclusive interview with ETCIO, Schroeder talks about how SUSE intends to power digital transformation for companies to innovate and compete.

  • Julita Inca: Building a foundation of HPC knowledge

    The curriculum for the courses is arranged in advance by the teachers and teaching assistants and published on the intranet one week ahead. It consists of theoretical materials and practical exercises to support the theory. Reinforcing workshops were also held to address questions and concerns.

Servers: Databases, Microservices, Stackrox, Docker Block Storage and UNIX Turning 50

Filed under
Server
  • Open source databases: Today’s viable alternative for enterprise computing

    There was a time when proprietary solutions from well-capitalized software companies could be expected to provide superior solutions to those produced by a community of dedicated and talented developers. Just as Linux destroyed the market for expensive UNIX versions, open source database management systems like EDB Postgres are forcing Oracle, Microsoft, SAP, and other premium database management products to justify their pricing. With so many large, critical applications running reliably on open source products, it’s a hard case to make.

  • 5 questions everyone should ask about microservices

    The basis of the question is uncertainty about what's going to happen once they start decomposing existing monolithic applications in favor of microservices where possible. What we need to understand is that the goal of splitting out these services is to favor deployment speed over API invocation speed.

    The main reason to split off microservices out of an existing monolith should be to isolate the development of the service within a team, completely separate from the application development team. The service engineering team can now operate at their own intervals, deploying changes weekly, daily, or even hourly if a noteworthy Common Vulnerabilities and Exposures (CVE) is applicable.

    The penalty of unknown network invocations is the trade-off for escaping your monolith’s highly regimented deployment requirements, which hold it to two- to three-month deployment intervals. Now, with microservice teams, you can react more quickly to business, competition, and security demands with faster delivery intervals. Equally critical for network invocations is to look closely at how coarse-grained your network calls become in this new distributed architecture.

  • Stackrox Launches Kubernetes Security Platform Version 2.0

    StackRox, the container and Kubernetes security company, announced the general availability of version 2.5 of the StackRox Kubernetes Security Platform. The new release incorporates enhanced deployment and runtime controls that enable organizations to seamlessly enforce security controls across use cases including threat detection, network segmentation, configuration management, and vulnerability management.

  • Pete Zaitcev: Docker Block Storage... say what again?

    Okay. Since they talk about consistency and replication together, this thing probably provides an actual service, in addition to the necessary orchestration. Kind of like the ill-fated Sheepdog. They may underestimate the amount of work necessary, sure. Look no further than Ceph RBD. Remember how much work it took for a genius like Sage? But a certain arrogance is essential in a start-up, and Rancher only employs 150 people.

    Also, nobody is dumb enough to write orchestration in Go, right? So this probably is not just a layer on top of Ceph or whatever.

    Well, it's still possible that it's merely an in-house equivalent of OpenStack Cinder, and they want it in Go because they are a Go house and if you have a hammer everything looks like a nail.

    Either way, here's the main question: what does block storage have to do with Docker?

  • Changing the face of computing: UNIX turns 50

    In the late 1960s, a small team of programmers was aspiring to write a multi-tasking, multi-user operating system. Then in August 1969 Ken Thompson, a programmer at AT&T Bell Laboratories, started development of the first-ever version of the UNIX operating system (OS).

    Over the next few years, he and his colleagues Dennis Ritchie, Brian Kernighan, and others developed both this and the C-programming language. As the UNIX OS celebrates its 50th birthday, let’s take a moment to reflect on its impact on the world we live in today.

  • The Legendary OS once kicked by many big companies turns 50. The Story.

    Maybe its pervasiveness has long obscured its roots. But Unix, the OS which proves to be legendary and, in one derivative or another, powers nearly all smartphones sold worldwide, came 50 years ago from the failure of an ambitious project involving titans like GE, Bell Labs, and MIT.

    [...]

    Still, it was something to work on, and as long as Bell Labs was working on Multics, they would also have a $7 million mainframe computer to play around with in their spare time. Dennis Ritchie, one of the programmers working on Multics, later said they all felt some stake in the victory of the project, even though they knew the odds of that success were exceedingly remote.

    Cancellation of Multics meant the end of the only project that the programmers in the Computer science department had to work on—and it also meant the loss of the only computer in the Computer science department. After the GE 645 mainframe was taken apart and hauled off, the computer science department’s resources were reduced to little more than office supplies and a few terminals.

Announcing etcd 3.4

Filed under
Server
OSS

etcd v3.4 includes a number of performance improvements for large scale Kubernetes workloads.

In particular, etcd had experienced performance issues with a large number of concurrent read transactions even when there were no writes (e.g. “read-only range request ... took too long to execute”). Previously, the storage backend’s commit operation on pending writes blocked incoming read transactions, even when there was no pending write. Now the commit does not block reads, which improves long-running read transaction performance.

We further made backend read transactions fully concurrent. Previously, ongoing long-running read transactions blocked writes and upcoming reads. With this change, write throughput is increased by 70% and P99 write latency is reduced by 90% in the presence of long-running reads. We also ran a Kubernetes 5,000-node scalability test on GCE with this change and observed similar improvements. For example, at the very beginning of the test, where there are a lot of long-running “LIST pods” requests, the P99 latency of “POST clusterrolebindings” is reduced by 97.4%. This non-blocking read transaction is now used for compaction, which, combined with the reduced compaction batch size, reduces the P99 server request latency during compaction.

More improvements have been made to lease storage. We enhanced lease expire/revoke performance by storing lease objects more efficiently, and made the lease look-up operation non-blocking with respect to concurrent lease grant/revoke operations. etcd v3.4 also introduces lease checkpointing as an experimental feature to persist remaining time-to-live values through consensus. This ensures short-lived lease objects are not auto-renewed after a leader election, and prevents lease objects from piling up when the time-to-live value is relatively large (e.g. a 1-hour TTL that never expires in the Kubernetes use case).
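The non-blocking read idea can be pictured with a toy model: readers operate on an immutable point-in-time snapshot, so a concurrent commit never stalls them. This is an illustration of the concept only, not etcd's actual implementation (its real backend is bbolt, written in Go):

```python
# Toy model of a non-blocking read transaction: readers work on an
# immutable snapshot, so a concurrent commit never blocks them.
# (Illustrative sketch only -- etcd's real storage engine differs.)

class Store:
    def __init__(self):
        self._data = {}          # committed state

    def snapshot(self):
        return dict(self._data)  # readers get a cheap point-in-time copy

    def commit(self, pending):
        # Writers swap in a new committed state; existing snapshots
        # held by long-running readers are untouched.
        new = dict(self._data)
        new.update(pending)
        self._data = new

store = Store()
store.commit({"/registry/pods/a": "v1"})

snap = store.snapshot()                   # long-running read begins
store.commit({"/registry/pods/a": "v2"})  # a write commits concurrently

print(snap["/registry/pods/a"])               # v1 -- read sees its snapshot
print(store.snapshot()["/registry/pods/a"])   # v2 -- new reads see the commit
```

The key property is that neither side waits on the other, which is why write throughput and tail latency improve when long-running reads are in flight.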

Read more

Unix at 50, Tectonic Shifts and Servers

Filed under
Server
  • Celebrating 50 years of the Unix operating system

    Towards the end of the 1960s, a small group of programmers were embarking upon a project which would transform the face of computing forever.

  • Unix at 50: How the OS that powered smartphones started from failure

    Today, Unix powers iOS and Android—its legend begins with a gator and a trio of researchers.

  • To Be Always Surfing On Tectonic Shifts

    If you think about it for a minute, it is amazing that any of the old-time IT suppliers, like IBM and Hewlett Packard, and to a certain extent now Microsoft and Dell, have persisted in the datacenter for decades or, in the case of Big Blue, for more than a century. It is difficult to be constantly adapting to new conditions, but to their great credit, they still do as the world is changing – sometimes tumultuously – both around them and underneath their feet.

    So it is with HPE, which is going through its umpteenth restructuring and refocusing since we entered IT publishing more than three decades ago, this time under the helm of Antonio Neri, its relatively new president and chief executive officer. The current Hewlett Packard is a very different animal than the one that sold proprietary minicomputers and then Unix systems in the 1980s and 1990s, and it is in many ways more of a successor to the systems businesses of Compaq and Digital Equipment, which the company absorbed two decades ago.

  • Cloud providers and telemetry via Qt MQTT

    First, the focus is on getting devices connected to the cloud. Being able to send and receive messages is the prime target. This post will not talk about services, features, or costs by the cloud providers themselves once messages are in the cloud.

    Furthermore, the idea is to only use Qt and/or Qt MQTT to establish a connection. Most, if not all, vendors provide SDKs for either devices or monitoring (web and native) applications. However, using these SDKs extends the amount of additional dependencies, leading to higher requirements for storage and memory.

  • SUSE Enterprise Storage and Veeam go great together

    Whether you’re new to the popular Windows-based backup tool Veeam or an old pro, you know that ever-growing demands on your storage resources are a true challenge. The flexibility of Ceph makes it a good choice for a back-up target, and SUSE Enterprise Storage makes it easy.

Servers: Puppet, Openstack and OpenPOWER

Filed under
Server
  • [Older] Why choose Puppet for DevOps?

    If you’re like most in the DevOps world, you’re always interested in automating tasks and securing your infrastructure. But it’s important to find ways that won’t sacrifice the quality or lose efficiency. Enter Puppet for DevOps. Forty-two percent of all DevOps businesses currently use this handy tool, for good reason.

    Puppet for DevOps is unique because it allows you to enforce automation, enhance organization, boost security measures, and ramp up the overall speed across an entire infrastructure. Puppet’s special abilities are clearly game-changing. And a big part of this sharp setup is due to the initialization of the module authoring process.

  • BT bets big on Canonical for core 5G network

    The foundations for the future of BT's 5G network will be open source, with practically every virtualised aspect of the future infrastructure to be delivered and managed with Canonical's Charmed Openstack distro.  

  • OpenPOWER opens further

    In what was to prove something of a theme throughout the morning, Hugh Blemings said that he had been feeling a bit like a kid waiting for Christmas recently, but that the day when the presents can be unwrapped had finally arrived. He is the executive director of the OpenPOWER Foundation and was kicking off the keynotes for the second day of the 2019 OpenPOWER Summit North America; the keynotes would reveal the "most significant and impressive announcements" in the history of the project, he said. Multiple presentations outlined a major change in the openness of the OpenPOWER instruction set architecture (ISA), along with various related hardware and software pieces; in short, OpenPOWER can be used by compliant products without paying royalties and with a grant of the patents that IBM holds on it. In addition, the foundation will be moving under the aegis of the Linux Foundation.

    Blemings also wrote about the changes in a blog post at the foundation web site. To set the stage for the announcements to come, he played a promotional video (which can be found in the post) that gave an overview of the foundation and the accomplishments of the OpenPOWER architecture, which includes underlying the two most powerful supercomputers in the world today.

Red Hat/IBM Servers and Databases

Filed under
Red Hat
Server
  • Themes driving digital transformation and leadership in financial services

    Incumbent banks should know they have to modernize their organization to compete in a world where customers want better and more personalized digital experiences. Eager to realize the cost-savings and increased revenue that can result from micro-targeting products and services, they can adopt next-generation technologies to transform their businesses to lead their market.

    Digital leaders are focused on end-to-end customer experiences. Processes, policies, and procedures defined for branch networks are being reimagined to support new digital customer engagement. By modernizing the back office and business processes, banks have an opportunity to streamline, codify, and thereby automate - which, in turn, can reduce friction caused by manual checks and inconsistent policies. This can enable more seamless customer experiences and speedier customer service, with transparency into servicing while reducing operational costs.

  • Introducing Red Hat OpenShift 4.2 in Developer Preview: Releasing Nightly Builds

    You might have read about the architectural changes and enhancements in Red Hat OpenShift 4 that resulted in operational and installation benefits. Or maybe you read about how OpenShift 4 assists with developer innovation and hybrid cloud deployments. I want to draw attention to another part of OpenShift 4 that we haven’t exposed to you yet…until today.

    When Red Hat acquired CoreOS, and had the opportunity to blend Container Linux with RHEL and Tectonic with OpenShift, the innovation did not remain only in the products we brought to market.

    An exciting part about working on new cloud-native technology is the ability to redefine how you work. Redefine how you hammer that nail with your hammer. These Red Hat engineers were building a house, and sometimes the tools they needed simply did not exist.

  • IBM POWER Instruction Set Architecture Now Open Source

    IBM has open sourced the POWER Instruction Set Architecture (ISA), which is used in its Power Series chips and in many embedded devices by other manufacturers. In addition, the OpenPOWER Foundation will become part of The Linux Foundation to further open governance.

    IBM created the OpenPOWER Foundation in 2013 with the aim of making it easier for server vendors to build customized servers based on the IBM Power architecture. By joining the OpenPOWER Foundation, vendors had access to processor specifications, firmware, and software and were allowed to manufacture POWER processors or related chips under a liberal license. With IBM's latest announcement, vendors can create chips using the POWER ISA without paying any royalties and have full access to the ISA definition. As IBM OpenPOWER general manager Ken King highlights, open sourcing the POWER ISA enables the creation of computers that are completely open source, from the foundation of the hardware, including the processor instruction set, firmware, and boot code, on up to the software stack.

  • Julien Danjou: The Art of PostgreSQL is out!

    If you remember, a couple of years ago I wrote about Mastering PostgreSQL, a fantastic book written by my friend Dimitri Fontaine.

    Dimitri is a long-time PostgreSQL core developer — for example, he wrote the extension support in PostgreSQL — no less. He is featured in my book Serious Python, where he advises on using databases and ORM in Python.

    Today, Dimitri comes back with the new version of this book, named The Art of PostgreSQL.

  • Surf’s Up! Riding The Second Wave Of Open Source

    I have never surfed before, but I am told it is incredibly exciting and great exercise, which as we all know is very good for you. For some it may sound daunting, because it is so unlike any other sport, but for those prepared to take the challenge it can be hugely rewarding. Stretching yourself – perhaps literally – and taking your body out of its comfort zone is a proven way of staying healthy. I would argue there are similarities for IT departments as they evaluate how to get their database architectures fit to support businesses that want to become more agile and responsive to customers.

    Making sure that IT systems are fit-for-purpose, robust and reliable enables companies to embrace new markets, innovative products and re-engineered processes: all are typical of organisations which are looking to survive and thrive in an increasingly fraught business environment.


More in Tux Machines

RedisInsight Revealed and WordPress 5.2.4 Released

  • Redis Labs eases database management with RedisInsight

    The robust market of tools to help users of the Redis database manage their systems just got a new entrant. Redis Labs disclosed the availability of its RedisInsight tool, a graphical user interface (GUI) for database management and operations. Redis is a popular open source NoSQL database that is also increasingly being used in cloud-native Kubernetes deployments as users move workloads to the cloud. Open source database use is growing quickly according to recent reports as the need for flexible, open systems to meet different needs has become a common requirement. Among the challenges often associated with databases of any type is ease of management, which Redis is trying to address with RedisInsight.

  • WordPress 5.2.4 Update

    Late-breaking news on the 5.2.4 short-cycle security release that landed October 14. When we released the news post, I inadvertently missed giving props to Simon Scannell of RIPS Technologies for finding and disclosing an issue where path traversal can lead to remote code execution. Simon has done a great deal of work on the WordPress project, and failing to mention his contributions is a huge oversight on our end. Thank you to all of the reporters for privately disclosing vulnerabilities, which gave us time to fix them before WordPress sites could be attacked.
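Path traversal, the class of bug fixed in this release, is straightforward to sketch: joining unsanitized user input onto a base directory lets `../` components escape it. The guard below is a minimal, hypothetical Python illustration (WordPress's actual fix is in PHP and differs in detail):

```python
import os

BASE = "/var/www/uploads"

def safe_path(user_supplied: str) -> str:
    """Join user input onto BASE and reject anything that escapes it.
    normpath collapses '..' components lexically; production code should
    also resolve symlinks (os.path.realpath) before checking the prefix."""
    candidate = os.path.normpath(os.path.join(BASE, user_supplied))
    if not candidate.startswith(BASE + os.sep):
        raise ValueError("path traversal attempt: " + user_supplied)
    return candidate

print(safe_path("avatar.png"))   # /var/www/uploads/avatar.png
# safe_path("../../etc/passwd") would normalize to /var/etc/passwd and raise
```

When a traversed path can reach an executable or include-able file, the bug escalates from file disclosure to remote code execution, which is why this class of issue is treated as critical.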

Desktop GNU/Linux: Rick and Morty, Georges Basile Stavracas Neto on GNOME and Linux Format on Eoan Ermine

  • We know where Rick (from Rick and Morty) stands on Intel vs AMD debate

    For one, it appears Rick is running a version of Debian with a very old Linux kernel (3.2.0) — one dating back to 2012. He badly needs to install some frickin’ updates. “Also his partitions are real weird. It’s all Microsoft based partitions,” a Redditor says. “A Linux user would never do [this] unless they were insane since NTFS/Exfat drivers on Linux are not great.”

  • Georges Basile Stavracas Neto: Every shell has a story

    … a wise someone once muttered while walking on a beach, as they picked up a shell lying on the sand. Indeed, every shell began somewhere, crossed a unique path with different goals and driven by different motivations. Some shells were created to optimize for mobility; some, for lightness; some, for speed; some were created to just fit whoever is using it and do their jobs efficiently. It’s statistically close to impossible to not find a suitable shell, one could argue. So, is this a blog about muttered shell wisdom? In some way, it actually is. It is, indeed, about Shell, and about Mutter. And even though “wisdom” is perhaps a bit of an overstatement, it is expected that whoever reads this blog doesn’t leave it less wise, so the word applies to a certain degree. Evidently, the Shell in question is composed of bits and bytes; its protection is more about the complexities of a kernel and command lines than sea predators, and the Mutter is actually more about compositing the desktop than barely audible uttering.

  • Adieu, 32

    The tenth month of the year arrives and so does a new Ubuntu 19.10 (Eoan Ermine) update. Is it a portent that this is the 31st release of Ubuntu and with the 32nd release next year, 32-bit x86 Ubuntu builds will end?

Linux Kernel and Linux Foundation

  • Linux's Crypto API Is Adopting Some Aspects Of Zinc, Opening Door To Mainline WireGuard

    Mainlining of the WireGuard secure VPN tunnel was being held up by its use of the new "Zinc" crypto API developed in conjunction with this network tech. But with obstacles in getting Zinc merged, WireGuard was going to be resorting to targeting the existing kernel crypto interfaces. Instead, however, it turns out the upstream Linux crypto developers were interested and willing to incorporate some elements of Zinc into the existing kernel crypto implementation. Back in September is when Jason Donenfeld decided porting WireGuard to the existing Linux crypto API was the best path forward for getting this secure networking functionality into the mainline kernel in a timely manner. But since then other upstream kernel developers working on the crypto subsystem ended up with patches incorporating some elements of Zinc's design.

  • zswap: use B-tree for search
    The current zswap implementation uses red-black trees to store
    entries and to perform lookups. Although this algorithm obviously
    has complexity of O(log N), it still takes a while to complete a
    lookup (or, even longer, a replacement) of an entry when the amount
    of entries is huge (100K+).

    B-trees are known to handle such cases more efficiently (i.e. also
    with O(log N) complexity but with a much lower coefficient), so
    trying zswap with B-trees was worth a shot.

    The implementation of B-trees currently present in the Linux
    kernel isn't really doing things in the best possible way (i.e. it
    uses recursion), but the testing I've run still shows a very
    significant performance increase.

    The usage pattern of the B-tree here does not exactly follow the
    guidelines, but that is due to the fact that pgoff_t may be either
    32 or 64 bits long.
    
  • Zswap Could See Better Performance Thanks To A B-Tree Search Implementation

    For those using Zswap as a compressed RAM cache for swapping on Linux systems, the performance could soon see a measurable improvement. Developer Vitaly Wool has posted a patch that switches the Zswap code from using red-black trees to a B-tree for searching. Particularly for when having to search a large number of entries, the B-trees implementation should do so much more efficiently.
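The data-structure change is easy to picture: a B-tree packs many sorted keys into each wide node, so a lookup touches far fewer nodes than a binary red-black tree while keeping O(log N) complexity. The following is a toy B-tree search in Python for illustration only; the kernel patch uses the in-kernel C B-tree library, keyed by page offsets:

```python
import bisect

class BTreeNode:
    def __init__(self, keys, values, children=None):
        self.keys = keys          # sorted keys packed into one wide node
        self.values = values      # payload, parallel to keys
        self.children = children  # len(keys)+1 subtrees, or None for a leaf

def btree_search(node, key):
    """Iterative O(log N) lookup; each step scans one wide node, so far
    fewer pointer hops are needed than in a binary red-black tree."""
    while node is not None:
        i = bisect.bisect_left(node.keys, key)
        if i < len(node.keys) and node.keys[i] == key:
            return node.values[i]
        node = node.children[i] if node.children else None
    return None

# A tiny hand-built tree: one root with two separator keys, three leaves.
leaves = [BTreeNode([10, 20], ["a", "b"]),
          BTreeNode([40, 50], ["c", "d"]),
          BTreeNode([70, 80], ["e", "f"])]
root = BTreeNode([30, 60], ["s1", "s2"], leaves)

print(btree_search(root, 40))   # c
print(btree_search(root, 65))   # None
```

With a realistic fan-out (dozens of keys per node), a 100K-entry tree needs only three or four node visits per lookup, which is where the lower constant factor comes from.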

  • AT&T Finally Opens Up dNOS "DANOS" Network Operating System Code

    One and a half years late, the "DANOS" (formerly known as "dNOS") network operating system is now open source under the Linux Foundation. AT&T and the Linux Foundation originally announced their plan in early 2018, pushing for this network operating system to be used on more mobile infrastructure. At the time they expected it to happen in H2 2018, but finally, on 15 November 2019, the goal came to fruition.

Security Patches and FUD/Drama