
GNOME: Getting Real GNOME Back in Ubuntu 18.04, Bug Fix for Memory Leak

  • Getting Real GNOME Back in Ubuntu 18.04 [Quick Tip]

    Ubuntu 18.04 uses a customized version of GNOME and GNOME users might not like those changes. This tutorial shows you how to install vanilla GNOME on Ubuntu 18.04.

    One of the main new features of Ubuntu 18.04 is the customized GNOME desktop. Ubuntu has done some tweaking on GNOME desktop to make it look similar to its Unity desktop.

    So you get a minimize option in the window controls, a Unity-like launcher on the left of the screen, and app indicator support, among other changes.
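If you prefer upstream GNOME to Ubuntu's customized session, the usual route on 18.04 is to install the vanilla session package and pick it at the login screen. A minimal sketch (package names as shipped in the Ubuntu 18.04 archive; verify with `apt show` before relying on them):

```shell
# Install the vanilla GNOME session alongside Ubuntu's customized one.
sudo apt update
sudo apt install gnome-session

# Optional: upstream GNOME wallpapers and the full Adwaita icon set,
# for a look closer to stock GNOME.
sudo apt install gnome-backgrounds adwaita-icon-theme-full
```

After installing, log out, click the gear icon on the login screen, and choose "GNOME" instead of "Ubuntu" before entering your password.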

  • The Infamous GNOME Shell Memory Leak

    At this point, I think it’s safe to assume that many of you have already heard of the memory leak that was plaguing GNOME Shell. Well, as of yesterday, the two GitLab merge requests that help fix that issue were merged, and the fixes will be available in the next GNOME version. They are also being considered for backporting to GNOME 3.28 – after making sure they work as expected and don’t break your computer.

  • The Big GNOME Shell Memory Leak Has Been Plugged, Might Be Backported To 3.28

    The widely talked about "GNOME Shell memory leak" causing excessive memory usage after a while with recent versions of GNOME has now been fully corrected. The changes are currently staged in Git for what will become GNOME 3.30 but might also be backported to 3.28.

    Well-known GNOME developer Georges Stavracas has provided an update on the matter and confirmed that the issue stems from GJS – the GNOME JavaScript component – with the garbage collection process not being triggered as often as it should be.
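The failure mode described here – a garbage collector that exists but is not triggered often enough – can be illustrated with Python's `gc` module as an analogy (this is not GJS code; the mechanics of GJS/SpiderMonkey differ, but the principle is the same): objects that form reference cycles cannot be reclaimed by reference counting alone, so if cycle collection never runs, they pile up.

```python
import gc

class Node:
    def __init__(self):
        self.ref = None

def make_cycles(n):
    # Each pair of nodes references the other, forming a cycle that
    # reference counting alone cannot reclaim.
    for _ in range(n):
        a, b = Node(), Node()
        a.ref, b.ref = b, a

gc.disable()    # simulate a collector that is never triggered
gc.collect()    # start from a clean slate
before = len(gc.get_objects())

make_cycles(1000)
leaked = len(gc.get_objects()) - before   # cyclic garbage accumulates

collected = gc.collect()   # the "fix": actually run the collector
gc.enable()
```

With collection disabled, `leaked` grows with every call to `make_cycles`; a single explicit `gc.collect()` then reclaims all of the cyclic garbage at once – which is, in spirit, what scheduling GC runs at the right moments achieves in GNOME Shell.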

More in Tux Machines

RISC-V and NVIDIA

  • Open-Source RISC-V-Based SoC Platform Enlists Deep Learning Accelerator
    SiFive introduces what it’s calling the first open-source RISC-V-based SoC platform for edge inference applications based on NVIDIA's Deep Learning Accelerator (NVDLA) technology. A demo shown at the Hot Chips conference consists of NVDLA running on an FPGA connected via ChipLink to SiFive's HiFive Unleashed board powered by the Freedom U540, the first Linux-capable RISC-V processor. The complete SiFive implementation is suited for intelligence at the edge, where high performance with improved power and area profiles is crucial. SiFive's silicon design capabilities and innovative business model enable a simplified path to building custom silicon on the RISC-V architecture with NVDLA.
  • SiFive Announces First Open-Source RISC-V-Based SoC Platform With NVIDIA Deep Learning Accelerator Technology
    SiFive, the leading provider of commercial RISC-V processor IP, today announced the first open-source RISC-V-based SoC platform for edge inference applications based on NVIDIA's Deep Learning Accelerator (NVDLA) technology. The demo will be shown this week at the Hot Chips conference and consists of NVDLA running on an FPGA connected via ChipLink to SiFive's HiFive Unleashed board powered by the Freedom U540, the world's first Linux-capable RISC-V processor. The complete SiFive implementation is well suited for intelligence at the edge, where high performance with improved power and area profiles is crucial. SiFive's silicon design capabilities and innovative business model enable a simplified path to building custom silicon on the RISC-V architecture with NVDLA.
  • NVIDIA Unveils The GeForce RTX 20 Series, Linux Benchmarks Should Be Coming
    NVIDIA CEO Jensen Huang has just announced the GeForce RTX 2080 series from his keynote ahead of Gamescom 2018 this week in Cologne, Germany.
  • NVIDIA have officially announced the GeForce RTX 2000 series of GPUs, launching September
    The GPU race continues once again, as NVIDIA have now officially announced the GeForce RTX 2000 series of GPUs, launching in September. This new series will be based on their Turing architecture and their RTX platform. The new RT Cores will "enable real-time ray tracing of objects and environments with physically accurate shadows, reflections, refractions and global illumination", which sounds rather fun.

today's leftovers

GNOME Shell, Mutter, and Ubuntu's GNOME Theme

Benchmarks on GNU/Linux

  • Linux vs. Windows Benchmark: Threadripper 2990WX vs. Core i9-7980XE Tested
    The last chess benchmark we’re going to look at is Crafty, and again we’re measuring performance in nodes per second. Interestingly, the Core i9-7980XE wins out here and saw the biggest performance uplift when moving to Linux: a 5% performance increase, as opposed to just 3% for the 2990WX, which made the Intel CPU 12% faster overall.
  • Which is faster, rsync or rdiff-backup?
    As our data grows (and some filesystems balloon to over 800GB, with many small files), we have started seeing our nighttime backups continue through the morning, causing serious disk I/O problems as our users wake up and regular usage rises. For years we have implemented a conservative backup policy – each server runs the backup twice: once via rdiff-backup to the onsite server, with 10 days of increments kept, and a second time via rsync to our offsite backup servers for disaster recovery.

    Simple, I thought. I will change the rdiff-backup to the onsite server to use the ultra-fast and simple rsync. Then, I'll use borgbackup to create an incremental backup from the onsite backup server to our offsite backup servers. Piece of cake. And with each server only running one backup instead of two, they should complete in record time.

    Except, somehow the rsync backup to the onsite backup server was taking almost as long as the original rdiff-backup to the onsite server and the rsync backup to the offsite server combined. What? I thought nothing was faster than the awesome simplicity of rsync, especially compared to the ancient Python-based rdiff-backup, which hasn't had an upstream release since 2009.
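The original two-backup policy described above can be sketched roughly as follows (hostnames and paths are hypothetical, purely for illustration; flag spellings are from the rdiff-backup and rsync man pages):

```shell
# Hypothetical source path and backup hosts for illustration.
SRC=/srv/data
ONSITE=backup1.example.com
OFFSITE=backup2.example.com

# Onsite: rdiff-backup keeps a current mirror plus reverse increments;
# prune increments older than 10 days, matching the stated retention.
rdiff-backup "$SRC" "$ONSITE"::/backups/$(hostname)
rdiff-backup --remove-older-than 10D --force "$ONSITE"::/backups/$(hostname)

# Offsite: a plain rsync mirror for disaster recovery (no history kept).
rsync -a --delete "$SRC/" "$OFFSITE:/backups/$(hostname)/"
```

The trade-off the author runs into follows from how the tools work: rsync compares the full file tree against the remote copy on every run, while rdiff-backup only has to store reverse deltas against a mirror it already maintains, so with many small files rsync's per-file scanning overhead can dominate.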