Original Content

This began as a list of original articles written for tuxmachines.org, whether by me or someone else, but it has since grown to also include articles I've had published elsewhere.

  1. Linux Tycoon: Design and Manage Your Own Distribution - Mar 31, 2012
  2. Ubuntu 12.04 Beta 2 Arrives for Testing - Mar 29, 2012
  3. GNOME 3.4 Released with Lots of Improvement - Mar 28, 2012
  4. Greg K-H Updates Tumbleweed Status - Mar 27, 2012
  5. LibreOffice 3.4.6 Released - Mar 22, 2012
  6. openSUSE 12.2 M2, Better Late than Never - Mar 21, 2012
  7. Mitchell Baker Says H.264 is About User Experience - Mar 19, 2012
  8. LibreOffice 3.5.1 Released with Fixes - Mar 18, 2012
  9. Mageia 2 Beta 2, Still No Live Images - Mar 16, 2012
  10. KDE Spark Tablet Renamed to Honor Classical Composer - Mar 15, 2012
  11. Final Debian 5 Update Released - Mar 13, 2012
  12. Arch Turns Ten - Mar 12, 2012
  13. Raspberry Pi Orders Now Being Accepted - Feb 29, 2012
  14. Upcoming GNOME 3.4 Previewed - Feb 28, 2012
  15. Fedora's Beefy Miracle Sizzling with Alpha 1 - Feb 28, 2012
  16. Amnesia, Scariest Game Ever, to Get Sequel - Feb 24, 2012
  17. Intel Joins TDF, Adds LibreOffice to AppUp Center - Feb 23, 2012
  18. Red Hat Enterprise Linux 5.7 to 5.8 Risk Report - Feb 21, 2012
  19. The Document Foundation Incorporated in Germany - Feb 20, 2012
  20. KDE Spark Tablet Pre-Order Registration Open - Feb 16, 2012
  21. LibreOffice 3.5 Released - Feb 14, 2012
  22. Debian GNU/Linux 5.0 Reaches End of Life - Feb 10, 2012
  23. Pardus Future Uncertain, Fork Probable - Feb 7, 2012
  24. PCLinuxOS 2012.2 Released - Feb 2, 2012
  25. openSUSE has a Dream - Jan 31, 2012
  26. Mandriva Bankruptcy Crisis Averted, For Now - Jan 30, 2012
  27. GhostBSD 2.5 - Now with an Easy Graphic Installer - Jan 26, 2012
  28. Gentoo-based Toorox Releases 01.2012 GNOME Edition - Jan 25, 2012
  29. Mandriva Decision Delayed Again - Jan 23, 2012
  30. Xfce's Early April Fool's Joke - Jan 20, 2012
  31. KDE 4.9 to get a New Widgets Explorer - Jan 19, 2012
  32. Meet Bodhi's Bulky Brother: Bloathi - Jan 18, 2012
  33. Mandriva Delays Bankruptcy Decision - Jan 17, 2012
  34. LibreOffice 3.4.5 Released - Jan 16, 2012
  35. Fedora Running Beefy Contest - Jan 13, 2012
  36. Mageia 2 Inches Along with Another Alpha - Jan 12, 2012
  37. Linux Mint 12 KDE Almost Ready - Jan 11, 2012
  38. Greg KH Posts Status of Kernel Tree - Jan 10, 2012
  39. Unused LibreOffice Code Expunged - Jan 9, 2012
  40. Is Mandriva Finished This Time? - Jan 5, 2012
  41. New aptosid Fork, siduction 11.1 Released - Jan 4, 2012
  42. Lefebvre Introduces GNOME 3 Fork - Jan 3, 2012
  43. Gentoo Gets New Year's Release - Jan 2, 2012

More in Tux Machines

LWN: Spectre, Linux and Debian Development

  • Grand Schemozzle: Spectre continues to haunt

    The Spectre v1 hardware vulnerability is often characterized as allowing array bounds checks to be bypassed via speculative execution. While that is true, it is not the full extent of the shenanigans allowed by this particular class of vulnerabilities. For a demonstration of that fact, one need look no further than the "SWAPGS vulnerability" known as CVE-2019-1125 to the wider world or as "Grand Schemozzle" to the select group of developers who addressed it in the Linux kernel.

    Segments are mostly an architectural relic from the earliest days of x86; to a great extent, they did not survive into the 64-bit era. That said, a few segments still exist for specific tasks; these include FS and GS. The most common use for GS in current Linux systems is for thread-local or CPU-local storage; in the kernel, the GS segment points into the per-CPU data area. User space is allowed to make its own use of GS; the arch_prctl() system call can be used to change its value.

    As one might expect, the kernel needs to take care to use its own GS pointer rather than something that user space came up with. The x86 architecture obligingly provides an instruction, SWAPGS, to make that relatively easy. On entry into the kernel, a SWAPGS instruction will exchange the current GS segment pointer with a known value (which is kept in a model-specific register); executing SWAPGS again before returning to user space will restore the user-space value. Some carefully placed SWAPGS instructions will thus prevent the kernel from ever running with anything other than its own GS pointer. Or so one would think.
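
    As a minimal sketch of the user-space side of this dance, the program below uses arch_prctl() to point the GS base at data of its own choosing, then reads it back. It assumes x86-64 Linux with kernel headers installed; the file name and the 0xdeadbeef stand-in value are just illustrative. The kernel must SWAPGS on every entry precisely so that it never dereferences a base chosen this way.

        /* gs_base.c: set and read the user-space GS base via arch_prctl(). */
        #define _GNU_SOURCE
        #include <stdio.h>
        #include <unistd.h>
        #include <sys/syscall.h>
        #include <asm/prctl.h>          /* ARCH_SET_GS, ARCH_GET_GS */

        int main(void)
        {
            unsigned long slot = 0xdeadbeef;    /* stand-in for TLS data */
            unsigned long base = 0;

            /* Point the GS segment base at our own variable... */
            if (syscall(SYS_arch_prctl, ARCH_SET_GS, (unsigned long)&slot) != 0)
                perror("ARCH_SET_GS");

            /* ...and read it back to confirm the kernel accepted it. */
            syscall(SYS_arch_prctl, ARCH_GET_GS, &base);
            printf("GS base is now 0x%lx\n", base);
            return 0;
        }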

  • Long-term get_user_pages() and truncate(): solved at last?

    Technologies like RDMA benefit from the ability to map file-backed pages into memory. This benefit extends to persistent-memory devices, where the backing store for the file can be mapped directly without the need to go through the kernel's page cache. There is a fundamental conflict, though, between mapping a file's backing store directly and letting the filesystem code modify that file's on-disk layout, especially when the mapping is held in place for a long time (as RDMA is wont to do). The problem seems intractable, but there may yet be a solution in the form of this patch set (marked "V1,000,002") from Ira Weiny.

    The problems raised by the intersection of mapping a file (via get_user_pages()), persistent memory, and layout changes by the filesystem were the topic of a contentious session at the 2019 Linux Storage, Filesystem, and Memory-Management Summit. The core question can be reduced to this: what should happen if one process calls truncate() while another has an active get_user_pages() mapping that pins some or all of that file's pages? If the filesystem actually truncates the file while leaving the pages mapped, data corruption will certainly ensue. The options discussed in the session were to either fail the truncate() call or to revoke the mapping, causing the process that mapped the pages to receive a SIGBUS signal if it tries to access them afterward. There were passionate proponents for both options, and no conclusion was reached.

    Weiny's new patch set resolves the question by causing an operation like truncate() to fail if long-term mappings exist on the file in question. But it also requires user space to jump through some hoops before such mappings can be created in the first place. This approach comes from the conclusion that, in the real world, there is no rational use case where somebody might want to truncate a file that has been pinned into place for use with RDMA, so there is no reason to make that operation work. There is ample reason, though, for preventing filesystem corruption and for informing an application that gets into such a situation that it has done something wrong.
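
    The SIGBUS option is easy to see with an ordinary mapping. The sketch below uses plain mmap(), not an actual long-term get_user_pages() pin, so it only illustrates the failure mode the session debated: it maps one page of a file (the name demo.dat is arbitrary), truncates the file underneath the mapping, and then touches the now-vanished page.

        /* sigbus_demo.c: touching a mapped page after the file behind it
         * has been truncated away delivers SIGBUS. */
        #include <fcntl.h>
        #include <sys/mman.h>
        #include <unistd.h>

        int main(void)
        {
            int fd = open("demo.dat", O_RDWR | O_CREAT | O_TRUNC, 0600);
            if (fd < 0 || ftruncate(fd, 4096))  /* one page of backing store */
                return 1;

            char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, 0);
            if (p == MAP_FAILED)
                return 1;

            p[0] = 'x';             /* fine: the page is backed */
            ftruncate(fd, 0);       /* the filesystem drops the layout */
            p[0] = 'y';             /* SIGBUS is delivered here */
            return 0;
        }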

  • Hardening the "file" utility for Debian

    In addition, Biedl had already encountered problems with file running in environments with non-standard libraries that were loaded using the LD_PRELOAD environment variable. Those libraries can (and do) make system calls that the regular file binary does not make; those system calls were disallowed by the seccomp() filter.

    Building a Debian package often uses fakeroot to run commands in a way that makes it appear that they have root privileges for filesystem operations, without actually granting any extra privileges. That is done so that tarballs and the like can be created containing files with owners other than the user ID running the Debian packaging tools, for example. Fakeroot maintains a mapping of the "changes" made to owners, groups, and permissions for files so that it can report those to other tools that access them. It does so by interposing a library ahead of the GNU C library (glibc) to intercept file operations. In order to do its job, fakeroot spawns a daemon (faked) that is used to maintain the state of the changes that programs make inside of the fakeroot. The libfakeroot library that is loaded with LD_PRELOAD will then communicate with the daemon via either System V (sysv) interprocess communication (IPC) calls or by using TCP/IP.

    Biedl referred to a bug report in his message, where Helmut Grohne had reported a problem with running file inside a fakeroot.
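
    The interposition trick itself is easy to sketch. The toy shim below is a minimal illustration in the spirit of libfakeroot, not fakeroot's actual code: it overrides chown() and simply pretends the call succeeded, where a real fakeroot would record the new ownership with faked and replay it later. Build it with gcc -shared -fPIC and set LD_PRELOAD to the resulting .so; note that programs calling fchownat() instead of chown() would slip past this simple version.

        /* fakechown.c: a toy LD_PRELOAD shim that intercepts chown(). */
        #define _GNU_SOURCE
        #include <stdio.h>
        #include <sys/types.h>

        int chown(const char *path, uid_t owner, gid_t group)
        {
            /* A real fakeroot would forward (path, owner, group) to the
             * faked daemon; this sketch just logs the request and lies. */
            fprintf(stderr, "fakechown: %s -> %u:%u\n", path,
                    (unsigned)owner, (unsigned)group);
            return 0;   /* report success without touching anything */
        }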

Flameshot is a brilliant screenshot tool for Linux

The default screenshot tool in Ubuntu is alright for basic snips, but if you want a really good one, you need to install a third-party screenshot app. Shutter is probably my favorite, but I decided to give Flameshot a try. Packages are available for various distributions, including Ubuntu, Arch, openSUSE, and Debian. You'll find installation instructions on the official project website. Read more

Android Leftovers

IBM/Red Hat and Intel Leftovers

  • Troubleshooting Red Hat OpenShift applications with throwaway containers

    Imagine this scenario: Your cool microservice works fine from your local machine but fails when deployed into your Red Hat OpenShift cluster. You cannot see anything wrong with the code or anything wrong in your services, configuration maps, secrets, and other resources. But, you know something is not right. How do you look at things from the same perspective as your containerized application? How do you compare the runtime environment from your local application with the one from your container? If you performed your due diligence, you wrote unit tests. There are no hard-coded configurations or hidden assumptions about the runtime environment. The cause should be related to the configuration your application receives inside OpenShift. Is it time to run your app under a step-by-step debugger or add tons of logging statements to your code? We’ll show how two features of the OpenShift command-line client can help: the oc run and oc debug commands.

  • What piece of advice had the greatest impact on your career?

    I love learning the what, why, and how of new open source projects, especially when they gain popularity in the DevOps space. Classification as a "DevOps technology" tends to mean scalable, collaborative systems that go across a broad range of challenges, from message bus to monitoring and back again. There is always something new to discover, install, spin up, and explore.

  • How DevOps is like auto racing

    When I talk about desired outcomes or answer a question about where to get started with any part of a DevOps initiative, I like to mention NASCAR or Formula 1 racing. Crew chiefs for these race teams have a goal: finish in the best place possible with the resources available while overcoming the adversity thrown at them. If the team feels capable, the goal gets moved up a series of levels to holding a trophy at the end of the race. To achieve their goals, race teams don't think from start to finish; they flip the table to look at the race from the end goal back to the beginning. They set a goal, a stretch goal, and then work backward from that goal to determine how to get there. Work is delegated to team members to push toward the objectives that will get the team to the desired outcome. [...]

    Race teams practice pit stops all week before the race. They do weight training and cardio programs to stay physically ready for the grueling conditions of race day. They are continually collaborating to address any issue that comes up. Software teams should also practice software releases often. If safety systems are in place and practice runs have been going well, they can release to production more frequently. In this mindset, speed makes things safer. It's not about doing the "right" thing; it's about addressing as many blockers to the desired outcome (goal) as possible, then collaborating and adjusting based on the real-time feedback that's observed. Everyone in a DevOps world is expected to anticipate anomalies and to work at improving quality and minimizing their impact.

  • Deep Learning Reference Stack v4.0 Now Available

    Artificial Intelligence (AI) continues to represent one of the biggest transformations underway, promising to impact everything from the devices we use to cloud technologies, reshaping infrastructure and even entire industries. Intel is committed to advancing the Deep Learning (DL) workloads that power AI by accelerating enterprise and ecosystem development. From our extensive work developing AI solutions, Intel understands how complex it is to create and deploy applications for deep learning workloads. That's why we developed an integrated Deep Learning Reference Stack, optimized for Intel Xeon Scalable processors, and released the companion Data Analytics Reference Stack. Today, we're proud to announce the next Deep Learning Reference Stack release, incorporating customer feedback and delivering an enhanced user experience with support for expanded use cases.

  • Clear Linux Releases Deep Learning Reference Stack 4.0 For Better AI Performance

    Intel's Clear Linux team on Wednesday announced their Deep Learning Reference Stack 4.0 during the Linux Foundation's Open Source Summit North America event taking place in San Diego. Clear Linux's Deep Learning Reference Stack continues to be engineered to show off the latest features and maximum performance for those interested in AI / deep learning on Intel Xeon Scalable CPUs. This optimized stack lets developers more easily get going with a tuned deep learning stack that should already offer near-optimal performance.