Firefox ESR 60 Is Now Available on Ubuntu as a Snap, Here's How to Install It

Filed under
Moz/FF
Ubuntu

Every six weeks, a new major Firefox release hits the streets and soon lands in the Ubuntu repositories. Thanks to Canonical's Snappy technologies, users now also have access to the latest ESR versions of Firefox, which are mostly intended for the company's enterprise partners who want a long-term supported Firefox release.

"The ESR version of Firefox is aimed at corporations who want to have more control over the version of Firefox their employees have installed," said Canonical in a blog post. "Mozilla recommends that users stay on the Rapid Release version if they wish the newest product features offered by Firefox."

Read more

More Snaps

  • Plex arrives in Canonical’s Snap Store

    Canonical, the company behind Ubuntu, today announces Plex as a Snap, bringing the over-the-top (OTT) media service to millions of Linux users via the ever-expanding Snap Store. Plex is a top-rated streaming media company with apps and content customised to fit users’ personal preferences and needs.

  • Fresh Snaps from September 2018

    Another month passes, and we’ve got a collection of interesting applications that came to our attention (Twitter feed) during September 2018. We have a mix of developer tools, languages, password management, productivity tools and some fun too. Take a look down the list, and discover something new today.

Firefox ESR 60 availability on Ubuntu

  • Firefox ESR 60 availability on Ubuntu

    Mozilla Release Engineering created a special customised packaging of the Ubuntu version of Firefox intended for Canonical's enterprise partners. This is particularly useful if partners decide they need to apply policies to Firefox for an install base they administer. This Extended Support Release (ESR) is similar in concept to the Ubuntu LTS releases. This nominated version of Firefox, released on a specific cadence, will be given additional maintenance over and above the regular, more frequent releases. The ESR release will get support for approximately one year. It will have an overlap period with the next ESR of 12 weeks, which gives users a window to upgrade to the latest ESR and ensures they are always on a supported version. Like Ubuntu LTS releases, Mozilla is selective about what patches/fixes/updates get backported to the ESR version.

More in Tux Machines

Kernel: Linux 5.3, DragonFlyBSD Takes Linux Bits, LWN Paywall Expires for Recent Articles

  • Ceph updates for 5.3-rc1
    Hi Linus,
    
    The following changes since commit 0ecfebd2b52404ae0c54a878c872bb93363ada36:
    
    Linux 5.2 (2019-07-07 15:41:56 -0700)
    
    are available in the Git repository at:
    
    https://github.com/ceph/ceph-client.git tags/ceph-for-5.3-rc1
    
    for you to fetch changes up to d31d07b97a5e76f41e00eb81dcca740e84aa7782:
    
    ceph: fix end offset in truncate_inode_pages_range call (2019-07-08 14:01:45 +0200)
    
    There is a trivial conflict caused by commit 9ffbe8ac05db
    ("locking/lockdep: Rename lockdep_assert_held_exclusive() ->
    lockdep_assert_held_write()"). I included the resolution in
    for-linus-merged.
    
  • Ceph Sees "Lots Of Exciting Things" For Linux 5.3 Kernel

    Ceph for Linux 5.3 brings an addition to speed up reads/discards/snap-diffs on sparse images, exposes snapshot creation time to support features like "restore previous versions", adds support for security xattrs (currently limited to SELinux), addresses a missing feature bit so the kernel client's Ceph features are now "luminous", improves consistency with Ceph FUSE, and changes the time granularity from 1us to 1ns. There are also bug fixes and other work as part of the Ceph code for Linux 5.3. As maintainer Ilya Dryomov put it, "Lots of exciting things this time!"

  • The NVMe Patches To Support Linux On Newer Apple Macs Are Under Review

    At the start of the month we reported on out-of-tree kernel work to support Linux on the newer Macs. Those patches were focused on supporting Apple's NVMe drive behavior in the Linux kernel driver. That work has been evolving nicely and is now under review on the kernel mailing list. Volleyed on Tuesday was a set of three patches to the Linux kernel's NVMe code so that Linux can properly handle the Apple drives of the past few years. On 2018 and newer Apple systems, the I/O queue sizing/handling is odd and in other areas does not properly follow the NVMe specification. These patches take care of that while hopefully not regressing existing NVMe controller support.

  • DragonFlyBSD Pulls In The Radeon Driver Code From Linux 4.4

    While the Linux 4.4 kernel is quite old (January 2016), DragonFlyBSD has now re-based its AMD Radeon kernel graphics driver against that release. It is at least a big improvement compared to its Radeon code having been derived previously from Linux 3.19. DragonFlyBSD developer François Tigeot continues doing a good job herding the open-source Linux graphics driver support to this BSD. With the code that landed on Monday, DragonFlyBSD's Radeon DRM is based upon the state found in the Linux 4.4.180 LTS tree.

  • Destaging ION

    The Android system has shipped a couple of allocators for DMA buffers over the years; first came PMEM, then its replacement ION. The ION allocator has been in use since around 2012, but it remains stuck in the kernel's staging tree. The work to add ION to the mainline started in 2013; at that time, the allocator had multiple issues that made inclusion impossible. Recently, John Stultz posted a patch set introducing DMA-BUF heaps, an evolution of ION that is designed to do exactly that: get the Android DMA-buffer allocator into the mainline Linux kernel. Applications interacting with devices often require a memory buffer that is shared with the device driver. Ideally, it would be memory mapped and physically contiguous, allowing direct DMA access and minimal overhead when accessing the data from both sides at the same time. ION's main goal is to support that use case; it implements a unified way of defining and sharing such memory buffers, while taking into account the constraints imposed by the devices and the platform. (A minimal allocation sketch appears at the end of this section.)

  • clone3(), fchmodat4(), and fsinfo()

    The kernel development community continues to propose new system calls at a high rate. Three ideas that are currently in circulation on the mailing lists are clone3(), fchmodat4(), and fsinfo(). In some cases, developers are just trying to make more flag bits available, but there is also some significant new functionality being discussed. Starting with clone3(): the clone() system call creates a new process or thread; it is the actual machinery behind fork(). Unlike fork(), clone() accepts a flags argument to modify how it operates. Over time, quite a few flags have been added; most of these control what resources and namespaces are to be shared with the new child process. In fact, so many flags have been added that, when CLONE_PIDFD was merged for 5.2, the last available flag bit was taken. That puts an end to the extensibility of clone(). (A sketch of a clone3()-style call appears at the end of this section.)

  • Soft CPU affinity

    On NUMA systems with a lot of CPUs, it is common to assign parts of the workload to different subsets of the available processors. This partitioning can improve performance while reducing the ability of jobs to interfere with each other. The partitioning mechanisms available on current kernels might just do too good a job in some situations, though, leaving some CPUs idle while others are overutilized. The soft affinity patch set from Subhra Mazumdar is an attempt to improve performance by making that partitioning more porous. In current kernels, a process can be restricted to a specific set of CPUs with either the sched_setaffinity() system call or the cpuset mechanism. Either way, any process so restricted will only be able to run on the specified CPUs regardless of the state of the system as a whole. Even if the other CPUs in the system are idle, they will be unavailable to any process that has been restricted not to run on them. That is normally the behavior that is wanted; a system administrator who has partitioned a system in this way probably has some other use in mind for those CPUs. But what if the administrator would rather relax the partitioning in cases where the fenced-off CPUs are idle and going to waste? The only alternative currently is to not partition the system at all and let processes roam across all CPUs. One problem with that approach, beyond losing the isolation between jobs, is that NUMA locality can be lost, resulting in reduced performance even with more CPUs available. In theory the AutoNUMA balancing code in the kernel should address that problem by migrating processes and their memory to the same node, but Mazumdar notes that it doesn't seem to work properly when memory is spread out across the system. Its reaction time is also said to be too slow, and the cost of the page scanning required is high.
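
To make the ION/DMA-BUF heaps item above more concrete, here is a minimal, hedged sketch of allocating a shared buffer from user space. It assumes the interface that eventually grew out of this work (<linux/dma-heap.h>, the DMA_HEAP_IOCTL_ALLOC ioctl, and a "system" heap at /dev/dma_heap/system); the heap name and buffer size are illustrative, and the patch set under review may differ in detail.

    /* Hedged sketch: allocate a shareable DMA buffer from a DMA-BUF heap.
     * Uses the uapi that grew out of the DMA-BUF heaps work; the "system"
     * heap name and 1 MiB size are assumptions for illustration. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/dma-heap.h>
    
    int main(void)
    {
    	int heap = open("/dev/dma_heap/system", O_RDONLY | O_CLOEXEC);
    	if (heap < 0) {
    		perror("open /dev/dma_heap/system");
    		return 1;
    	}
    
    	struct dma_heap_allocation_data alloc;
    	memset(&alloc, 0, sizeof(alloc));
    	alloc.len = 1 << 20;                 /* 1 MiB buffer */
    	alloc.fd_flags = O_RDWR | O_CLOEXEC; /* flags for the returned fd */
    
    	if (ioctl(heap, DMA_HEAP_IOCTL_ALLOC, &alloc) < 0) {
    		perror("DMA_HEAP_IOCTL_ALLOC");
    		close(heap);
    		return 1;
    	}
    
    	/* alloc.fd is now a dma-buf fd that can be mmap()ed here or
    	 * passed to a device driver that imports dma-bufs. */
    	printf("got dma-buf fd %d\n", (int)alloc.fd);
    	close(alloc.fd);
    	close(heap);
    	return 0;
    }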
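
For the clone3() discussion, the following sketch shows the general shape of an extensible clone call made through syscall(). The struct layout mirrors the description above, but the field set, the syscall number (435 on x86_64), and the exact behaviour are assumptions based on the proposed patches rather than a settled interface; with no sharing flags set it behaves like fork().

    /* Hedged sketch of a clone3()-style call with an extensible argument
     * block; the kernel learns about the struct's contents via its size. */
    #define _GNU_SOURCE
    #include <signal.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <sys/wait.h>
    #include <unistd.h>
    
    #ifndef __NR_clone3
    #define __NR_clone3 435            /* x86_64 syscall number (assumed) */
    #endif
    
    struct clone_args_sketch {         /* illustrative layout */
    	uint64_t flags;            /* CLONE_* flags, now 64 bits wide */
    	uint64_t pidfd;            /* where to store a pidfd (CLONE_PIDFD) */
    	uint64_t child_tid;
    	uint64_t parent_tid;
    	uint64_t exit_signal;      /* signal sent to the parent on exit */
    	uint64_t stack;
    	uint64_t stack_size;
    	uint64_t tls;
    };
    
    int main(void)
    {
    	struct clone_args_sketch args = {
    		.exit_signal = SIGCHLD,   /* behave like fork() */
    	};
    	long pid = syscall(__NR_clone3, &args, sizeof(args));
    
    	if (pid < 0) {
    		perror("clone3");
    		return 1;
    	}
    	if (pid == 0) {
    		printf("child: pid %d\n", getpid());
    		_exit(0);
    	}
    	waitpid((pid_t)pid, NULL, 0);
    	printf("parent: created child %ld\n", pid);
    	return 0;
    }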
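
As a companion to the soft-affinity item, this is a small sketch of the existing hard-affinity interface the patch set wants to relax: sched_setaffinity() pins the calling process to CPUs 0-3, and it will not run anywhere else even if the remaining CPUs are idle. The CPU range chosen here is arbitrary.

    /* Sketch: restrict the calling process to CPUs 0-3 with the existing
     * hard-affinity interface; the choice of CPUs is illustrative. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    
    int main(void)
    {
    	cpu_set_t set;
    
    	CPU_ZERO(&set);
    	for (int cpu = 0; cpu < 4; cpu++)
    		CPU_SET(cpu, &set);
    
    	/* pid 0 means "the calling process" */
    	if (sched_setaffinity(0, sizeof(set), &set) != 0) {
    		perror("sched_setaffinity");
    		return 1;
    	}
    	printf("now restricted to CPUs 0-3\n");
    	return 0;
    }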

How the Open Source Operating System Has Silently Won Over the World

The current and future potential of Linux-based systems is limitless. The system's flexibility allows the hardware that uses it to be endlessly updated, so functionality can be maintained even as the technology around the devices changes. This flexibility also means that the function of the hardware can be modified to suit an ever-changing workplace. For example, because the INSYS icom OS has been specifically designed for use in routers, it could be optimised to be lightweight and hardened to increase its security. Multipurpose operating systems have large libraries of applications for a diverse range of purposes. That is great for designing new uses, but these libraries can also be exploited by actors with malicious intent. Stripping them down to just what is necessary through a hardening process can drastically improve security by reducing the attack surface. Overall, Windows may have won the desktop OS battle, with only a minority of desktops running Linux. However, desktops are only a minute part of the computing world. The servers, mobile systems and embedded technology that make up the majority predominantly run Linux. Linux has gained this position by being more adaptable, lightweight and portable than its competitors.

Read more

Operating-System-Directed Power-Management (OSPM) Summit

  • The third Operating-System-Directed Power-Management summit

    The third edition of the Operating-System-Directed Power-Management (OSPM) summit was held May 20-22 at the ReTiS Lab of the Scuola Superiore Sant'Anna in Pisa, Italy. The summit is organized to collaborate on ways to reduce the energy consumption of Linux systems, while still meeting performance and other goals. It is attended by scheduler, power-management, and other kernel developers, as well as academics, industry representatives, and others interested in the topics.

  • The future of SCHED_DEADLINE and SCHED_RT for capacity-constrained and asymmetric-capacity systems

    The kernel's deadline scheduling class (SCHED_DEADLINE) enables realtime scheduling where every task is guaranteed to meet its deadlines. Unfortunately, SCHED_DEADLINE's current view of CPU capacity is far too simple. It doesn't take dynamic voltage and frequency scaling (DVFS), simultaneous multithreading (SMT), asymmetric CPU capacity, or any kind of performance capping (e.g. due to thermal constraints) into consideration. In particular, if we consider running deadline tasks in a system with performance capping, the question is "what level of guarantee should SCHED_DEADLINE provide?". An interesting discussion about the pros and cons of different approaches (weak, hard, or mixed guarantees) developed during this presentation. There were many different views, but the discussion didn't really conclude and will have to be continued at the Linux Plumbers Conference later this year. The topic of guaranteed performance will become more important for mobile systems in the future as performance capping is likely to become more common. Defining hard guarantees is almost impossible on real systems since silicon behavior very much depends on environmental conditions. The main pushback on the existing scheme is that the guaranteed bandwidth budget might be too conservative. Hence SCHED_DEADLINE might not allow enough bandwidth to be reserved for use cases with higher bandwidth requirements that can tolerate bandwidth reservations not being honored. (A sketch of how such a bandwidth reservation is declared appears at the end of this section.)

  • Scheduler behavioral testing

    Validating scheduler behavior is a tricky affair, as multiple subsystems both compete and cooperate with each other to produce the task placement we observe. Valentin Schneider from Arm described the approach taken by his team (the folks behind energy-aware scheduling — EAS) to tackle this problem.

  • CFS wakeup path and Arm big.LITTLE/DynamIQ

    "One task per CPU" workloads, as emulated by multi-core Geekbench, can suffer on traditional two-cluster big.LITTLE systems due to the fact that tasks finish earlier on the big CPUs. Arm has introduced a more flexible DynamIQ architecture that can combine big and LITTLE CPUs into a single cluster; in this case, early products apply what's known as phantom scheduler domains (PDs). The concept of PDs is needed for DynamIQ so that the task scheduler can use the existing big.LITTLE extensions in the Completely Fair Scheduler (CFS) scheduler class. Multi-core Geekbench consists of several tests during which N CFS tasks perform an equal amount of work. The synchronization mechanism pthread_barrier_wait() (i.e. a futex) is used to wait for all tasks to finish their work in test T before starting the tasks again for test T+1. The problem for Geekbench on big.LITTLE is related to the grouping of big and LITTLE CPUs in separate scheduler (or CPU) groups of the so-called die-level scheduler domain. The two groups exists because the big CPUs share a last-level cache (LLC) and so do the LITTLE CPUs. This isn't true any more for DynamIQ, hence the use of the "phantom" notion here. The tasks of test T finish earlier on big CPUs and go to sleep at the barrier B. Load balancing then makes sure that the tasks on the LITTLE CPUs migrate to the big CPUs where they continue to run the rest of their work in T before they also go to sleep at B. At this moment, all the tasks in the wake queue have a big CPU as their previous CPU (p->prev_cpu). After the last task has entered pthread_barrier_wait() on a big CPU, all tasks on the wake queue are woken up.

  • I-MECH: realtime virtualization for industrial automation

    The typical systems used in industrial automation (e.g. for axis control) consist of a "black box" executing a commercial realtime operating system (RTOS) plus a set of control design tools meant to be run on a different desktop machine. This approach, besides imposing expensive royalties on the system integrator, often does not offer the desired degree of flexibility for testing/implementing novel solutions (e.g., running both control code and design tools on the same platform).

  • Virtual-machine scheduling and scheduling in virtual machines

    As is probably well known, a scheduler is the component of an operating system that decides which CPU the various tasks should run on and for how long they are allowed to do so. This happens when an OS runs on the bare hardware of a physical host, and it is also the case when the OS runs inside a virtual machine. The only difference is that, in the latter case, the OS scheduler marshals tasks among virtual CPUs. And what are virtual CPUs? Well, on most platforms they are also a kind of special task, and they want to run on some CPUs ... therefore we need a scheduler for that! This is usually called the "double-scheduling" property of systems employing virtualization because, well, there literally are two schedulers: one — let us call it the host scheduler, or the hypervisor scheduler — that schedules the virtual CPUs on the host physical CPUs; and another one — let us call it the guest scheduler — that schedules the guest OS's tasks on the guest's virtual CPUs. Now what are these two schedulers? That depends on the virtualization platform. They are always different, in the sense that it will never happen that, at runtime, a scheduler has to deal with scheduling virtual CPUs and also scheduling tasks that want to run on those same virtual CPUs (well, it can happen, but then you are not doing virtualization). They can be the same, in terms of code, or they can be completely different in that respect as well.

  • Rock and a hard place: How hard it is to be a CPU idle-time governor

    In the opening session of OSPM 2019, Rafael Wysocki from Intel gave a talk about potential problems faced by the designers of CPU idle-time-management governors, which was inspired by his own experience from the timer-events oriented (TEO) governor work done last year. In the first place, he said, it should be noted that "CPU idleness" is defined at the level of logical CPUs, which may be CPU cores or simultaneous multithreading (SMT) threads, depending on the hardware configuration of the processor. In Linux, a logical CPU is idle when there are no runnable tasks in its queue, so it falls back to executing the idle task associated with it (there is one idle task for each logical CPU in the system, but they all share the same code, which is the idle loop). Therefore "CPU idleness" is an OS (not hardware) concept and if the idle loop is entered by a CPU, there is an opportunity to save some energy with a relatively small impact on performance (or even without any impact on performance at all) — if the hardware supports that. The idle loop runs on each idle CPU and it only takes this particular CPU into consideration. As a rule, two code modules are invoked in every iteration of it. The first one, referred to as the CPU idle-time-management governor, is responsible for deciding whether or not to stop the scheduler tick and what to tell the hardware to do; the second one, called the CPU idle-time-management driver, passes the governor's decisions down to the hardware, usually in an architecture- or platform-specific way. Then, presumably, the processor enters a special state in which the CPU in question stops fetching instructions (that is, it does literally nothing at all); that may allow the processor's power draw to be reduced and some energy to be saved as a result. If that happens, the processor needs to be woken up from that state by a hardware event after spending some time, referred to as the idle duration, in it. At that point, the governor is called again so it can save the idle-duration value for future use.
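
To illustrate the bandwidth reservations discussed in the SCHED_DEADLINE session above, here is a hedged sketch of how a task declares a runtime/deadline/period reservation via sched_setattr(). There is no glibc wrapper, so the raw syscall is used; the struct layout follows the documented sched_attr interface, while the syscall number (314 on x86_64) and the 10 ms-every-100 ms budget are illustrative, and root (or CAP_SYS_NICE) is normally required.

    /* Hedged sketch: ask SCHED_DEADLINE for 10 ms of runtime every 100 ms.
     * Struct layout follows the documented sched_attr; values illustrative. */
    #define _GNU_SOURCE
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/syscall.h>
    #include <unistd.h>
    
    #ifndef SCHED_DEADLINE
    #define SCHED_DEADLINE 6
    #endif
    #ifndef __NR_sched_setattr
    #define __NR_sched_setattr 314        /* x86_64 */
    #endif
    
    struct sched_attr_sketch {
    	uint32_t size;
    	uint32_t sched_policy;
    	uint64_t sched_flags;
    	int32_t  sched_nice;
    	uint32_t sched_priority;
    	uint64_t sched_runtime;    /* guaranteed runtime per period, in ns */
    	uint64_t sched_deadline;   /* relative deadline, in ns */
    	uint64_t sched_period;     /* reservation period, in ns */
    };
    
    int main(void)
    {
    	struct sched_attr_sketch attr;
    
    	memset(&attr, 0, sizeof(attr));
    	attr.size = sizeof(attr);
    	attr.sched_policy   = SCHED_DEADLINE;
    	attr.sched_runtime  = 10 * 1000 * 1000;    /*  10 ms */
    	attr.sched_deadline = 100 * 1000 * 1000;   /* 100 ms */
    	attr.sched_period   = 100 * 1000 * 1000;   /* 100 ms */
    
    	/* pid 0 = calling thread; needs CAP_SYS_NICE or root */
    	if (syscall(__NR_sched_setattr, 0, &attr, 0) != 0) {
    		perror("sched_setattr");
    		return 1;
    	}
    	printf("running with a 10ms/100ms deadline reservation\n");
    	return 0;
    }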
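
The Geekbench pattern from the big.LITTLE/DynamIQ item can likewise be modelled in a few lines: worker threads each do a slice of work and then sleep in pthread_barrier_wait() until the last one arrives, which is the mass wakeup moment the discussion centres on. The thread and iteration counts below are arbitrary.

    /* Toy model of the described workload: NTASKS threads do a slice of
     * work per test, then meet at a barrier before the next test starts. */
    #include <pthread.h>
    #include <stdio.h>
    
    #define NTASKS 4
    #define NTESTS 3
    
    static pthread_barrier_t barrier;
    
    static void *worker(void *arg)
    {
    	long id = (long)arg;
    
    	for (int test = 0; test < NTESTS; test++) {
    		/* stand-in for the per-test workload */
    		volatile unsigned long sum = 0;
    		for (unsigned long i = 0; i < 1000000UL; i++)
    			sum += i;
    
    		/* sleep here until every task has finished this test;
    		 * the last arrival wakes all the others at once */
    		pthread_barrier_wait(&barrier);
    		printf("task %ld finished test %d\n", id, test);
    	}
    	return NULL;
    }
    
    int main(void)
    {
    	pthread_t tasks[NTASKS];
    
    	pthread_barrier_init(&barrier, NULL, NTASKS);
    	for (long i = 0; i < NTASKS; i++)
    		pthread_create(&tasks[i], NULL, worker, (void *)i);
    	for (int i = 0; i < NTASKS; i++)
    		pthread_join(tasks[i], NULL);
    	pthread_barrier_destroy(&barrier);
    	return 0;
    }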

Red Hat/IBM and Fedora Leftovers

  • An introduction to cloud-native CI/CD with Red Hat OpenShift Pipelines

    Red Hat OpenShift 4.1 offers a developer preview of OpenShift Pipelines, which enable the creation of cloud-native, Kubernetes-style continuous integration and continuous delivery (CI/CD) pipelines based on the Tekton project. In a recent article on the Red Hat OpenShift blog, I provided an introduction to Tekton and pipeline concepts and described the benefits and features of OpenShift Pipelines. OpenShift Pipelines builds upon the Tekton project to enable teams to build Kubernetes-style delivery pipelines that they fully control, owning the complete lifecycle of their microservices without having to rely on central teams to maintain and manage a CI server, its plugins, and its configurations.

  • IBM's New Open Source Kabanero Promises to Simplify Kubernetes for DevOps

    At OSCON, IBM unveiled a new open source platform that promises to make Kubernetes easier to manage for DevOps teams.

  • MySQL for developers in Red Hat OpenShift

    As a software developer, it’s often necessary to access a relational database—or any type of database, for that matter. If you’ve been held back by that situation where you need to have someone in operations provision a database for you, then this article will set you free. I’ll show you how to spin up (and wipe out) a MySQL database in seconds using Red Hat OpenShift. Truth be told, there are several databases that can be hosted in OpenShift, including Microsoft SQL Server, Couchbase, MongoDB, and more. For this article, we’ll use MySQL. The concepts, however, will be the same for other databases. So, let’s get some knowledge and leverage it.

  • What you need to know to be a sysadmin

    The system administrator of yesteryear jockeyed users and wrangled servers all day, in between mornings and evenings spent running hundreds of meters of hundreds of cables. This is still true today, with the added complexity of cloud computing, containers, and virtual machines. Looking in from the outside, it can be difficult to pinpoint what exactly a sysadmin does, because they play at least a small role in so many places. Nobody goes into a career already knowing everything they need for a job, but everyone needs a strong foundation. If you're looking to start down the path of system administration, here's what you should be concentrating on in your personal or formal training.

  • Building blocks of syslog-ng

    Recently I gave a syslog-ng introductory workshop at Pass the SALT conference in Lille, France. I got a lot of positive feedback, so I decided to turn all that feedback into a blog post. Naturally, I shortened and simplified it, but still managed to get enough material for multiple blog posts.

  • PHP version 7.2.21RC1 and 7.3.8RC1

    Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS) to allow more people to test them. They are available as Software Collections, for a parallel installation, the perfect solution for such tests (for x86_64 only), and also as base packages. RPM of PHP version 7.3.8RC1 are available as SCL in remi-test repository and as base packages in the remi-test repository for Fedora 30 or remi-php73-test repository for Fedora 28-29 and Enterprise Linux. RPM of PHP version 7.2.21RC1 are available as SCL in remi-test repository and as base packages in the remi-test repository for Fedora 28-29 or remi-php72-test repository for Enterprise Linux.

  • QElectroTech version 0.70

    RPM of QElectroTech version 0.70, an application to design electric diagrams, are available in the remi repository for Fedora and Enterprise Linux 7. A bit more than one year after the version 0.60 release, the project has just released a new major version of its electric diagram editor.