Red Hat Storage and Fedora

Filed under
Red Hat
  • Achieving maximum performance from a fixed size Ceph object storage cluster

    We have tested a variety of configurations, object sizes, and client worker counts in order to maximize the throughput of a seven-node Ceph cluster for small and large object workloads. As detailed in the first post, the Ceph cluster was built with a single OSD (Object Storage Device) configured per HDD, for a total of 112 OSDs per Ceph cluster. In this post, we will look at the top-line performance for different object sizes and workloads.

    Note: The terms "read" and "HTTP GET" are used interchangeably throughout this post, as are "write" and "HTTP PUT." (A minimal throughput-measurement sketch follows the story list below.)

  • File systems unfit as distributed storage backends: lessons from 10 years of Ceph evolution

    For a decade, the Ceph distributed file system followed the conventional wisdom of building its storage backend on top of local file systems. This is a preferred choice for most distributed file systems today because it allows them to benefit from the convenience and maturity of battle-tested code. Ceph's experience, however, shows that this comes at a high price. First, developing a zero-overhead transaction mechanism is challenging. Second, metadata performance at the local level can significantly affect performance at the distributed level. Third, supporting emerging storage hardware is painstakingly slow.

    Ceph addressed these issues with BlueStore, a new backend designed to run directly on raw storage devices. In only two years since its inception, BlueStore outperformed previously established backends and has been adopted by 70% of users in production. By running in user space and fully controlling the I/O stack, it has enabled space-efficient metadata and data checksums, fast overwrites of erasure-coded data, inline compression, and decreased performance variability, while avoiding a series of performance pitfalls of local file systems. Finally, it makes the adoption of backwards-incompatible storage hardware possible, an important trait in a changing storage landscape that is learning to embrace hardware diversity.

  • podman-compose: Review Request

    Want to use docker-compose.yaml files with podman on Fedora?
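The Red Hat benchmark above drives GET/PUT load against the cluster's S3-compatible RADOS Gateway with dedicated load-generation tooling. As a rough illustration of what such a measurement involves, here is a minimal Python sketch that issues concurrent PUT and GET requests with boto3 and reports objects per second and MiB/s; the endpoint URL, credentials, bucket name, object size, and worker count are placeholder assumptions, not values from the tested cluster.

    # Minimal sketch (not the harness used in the Red Hat post) of measuring
    # PUT/GET throughput against a Ceph RGW S3 endpoint.
    # Endpoint, credentials, bucket, object size, and worker count are placeholders.
    import time
    from concurrent.futures import ThreadPoolExecutor

    import boto3

    ENDPOINT = "http://rgw.example.com:8080"   # assumed RGW endpoint
    BUCKET = "bench"                           # assumed pre-created bucket
    OBJECT_SIZE = 64 * 1024                    # 64 KiB "small object" example
    NUM_OBJECTS = 1000
    WORKERS = 32                               # client worker count to vary

    s3 = boto3.client(
        "s3",
        endpoint_url=ENDPOINT,
        aws_access_key_id="ACCESS_KEY",        # placeholder credentials
        aws_secret_access_key="SECRET_KEY",
    )
    payload = b"x" * OBJECT_SIZE

    def put_object(i):
        # One HTTP PUT ("write") per object.
        s3.put_object(Bucket=BUCKET, Key=f"obj-{i}", Body=payload)

    def get_object(i):
        # One HTTP GET ("read") per object.
        s3.get_object(Bucket=BUCKET, Key=f"obj-{i}")["Body"].read()

    def run(op, label):
        start = time.time()
        with ThreadPoolExecutor(max_workers=WORKERS) as pool:
            list(pool.map(op, range(NUM_OBJECTS)))
        elapsed = time.time() - start
        mib = NUM_OBJECTS * OBJECT_SIZE / 2**20
        print(f"{label}: {NUM_OBJECTS / elapsed:.0f} ops/s, {mib / elapsed:.1f} MiB/s")

    run(put_object, "PUT")
    run(get_object, "GET")

Varying OBJECT_SIZE and WORKERS in a sketch like this mirrors the sweep described in the post, where throughput is compared across object sizes and client worker counts.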
