
today's leftovers

Filed under
Misc
  • AMDGPU For Linux 5.1 Tweaks The Golden Settings For Vega, Corrects Fiji Power Reading

    The big set of DRM driver changes has been part of the mainline kernel for Linux 5.1 since last week, and now working their way to mainline are a couple of early fixes to the AMDGPU driver.

  • Krita 4.2.0: the First Painting Application to bring HDR Support to Windows

    We’re deep in bug fixing mode now, because in May we want to release the next major version of Krita: Krita 4.2.0. While there will be a host of new features, a plethora of bug fixes and performance improvements, one thing is unique: support for painting in HDR mode. Krita is the very first application, open source or proprietary, that offers this!

    So, today we release a preview version of Krita 4.2.0 with HDR support baked in, so you can give the new functionality a try!

    Of course, at this moment, only Windows 10 supports HDR monitors, and only with some very specific hardware. Your CPU and GPU need to be new enough, and you need to have a monitor that supports HDR. We know that the brave folks at Intel are working on HDR support for Linux, though!

  • Ubuntu Desktop To Auto-Install Necessary VM Tools/Drivers When Running On VMware

    In seeking to improve the out-of-the-box experience when running the Ubuntu desktop as a guest virtual machine within VMware's products, Ubuntu is planning to have the open-vm-tools-desktop package installed automatically to provide a better initial experience.


  • QA Report: February 2019
  • Amazon steps up its open-source game, and Elastic stock falls as a result

    Open-source search software company Elastic saw its stock fall as much as 5 percent on Tuesday after Amazon Web Services announced the launch of a separate library of open-source code for Elasticsearch, a set of technologies that can be used to build search engines for websites, and an important part of Elastic's business.

  • MongoDB backs off unpopular license; MDB +4%

    Key quote: "We continue to believe that the SSPL complies with the Open Source Definition and the four essential software freedoms. However, based on its reception by the members of this list and the greater open source community, the community consensus required to support OSI approval does not currently appear to exist regarding the copyleft provision of SSPL. Thus, in order to be respectful of the time and efforts of the OSI board and this list’s members, we are hereby withdrawing the SSPL from OSI consideration."

  • When "Zoë" !== "Zoë". Or why you need to normalize Unicode strings

    It first hit me many years ago, when I was building an app (in Objective-C) that imported a list of people from a user’s address book and social media graph, and filtered out duplicates. In certain situations, I would see the same person added twice because the names wouldn’t compare as equal strings.

    In fact, while the two strings above look identical on screen, the way they’re represented on disk, the bytes saved in the file, are different. In the first “Zoë”, the ë character (e with umlaut) was represented as a single Unicode code point, while in the second case it was in the decomposed form. If you’re dealing with Unicode strings in your application, you need to take into account that characters could be represented in multiple ways.
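    A minimal sketch of the fix the article describes, using Python's standard-library `unicodedata` module (the strings here are illustrative, not taken from the article's data):

    ```python
    import unicodedata

    # Visually identical strings built from different code points:
    # one uses the precomposed character U+00EB ("ë"), the other the
    # decomposed sequence U+0065 U+0308 ("e" + combining diaeresis).
    composed = "Zo\u00eb"
    decomposed = "Zoe\u0308"

    # A naive comparison fails even though both render as "Zoë".
    print(composed == decomposed)  # False

    # Normalizing both strings to the same form (NFC here) before
    # comparing makes the duplicate detectable.
    nfc = lambda s: unicodedata.normalize("NFC", s)
    print(nfc(composed) == nfc(decomposed))  # True
    ```

    NFC composes characters into single code points where possible; NFD does the opposite. Either form works for deduplication, as long as both sides of the comparison are normalized the same way.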

More in Tux Machines

How to use Spark SQL: A hands-on tutorial

In the first part of this series, we looked at advances in leveraging the power of relational databases "at scale" using Apache Spark SQL and DataFrames. We will now work through a simple tutorial based on a real-world dataset to look at how to use Spark SQL. We will be using Spark DataFrames, but the focus will be more on using SQL. In a separate article, I will cover a detailed discussion of Spark DataFrames and common operations.

I love using cloud services for my machine learning, deep learning, and even big data analytics needs, instead of painfully setting up my own Spark cluster. I will be using the Databricks Platform for my Spark needs. Databricks is a company founded by the creators of Apache Spark that aims to help clients with cloud-based big data processing using Spark.

Read more

Also: Scaling relational databases with Apache Spark SQL and DataFrames

4 questions Uber's open source program office answers with data

It's been said that "software is eating the world," and every company will eventually become a "software company." Since open source is becoming the mainstream path for developing software, the way companies manage their relationships with the open source projects they depend on will be crucial to their success. An open source program office (OSPO) is a company's asset for managing such relationships, and more and more companies are setting them up. Even the Linux Foundation has a project called the TODO Group "to collaborate on practices, tools, and other ways to run successful and effective open source projects and programs".

Read more

Kernel: LWN on Linux 5.1 and More, 'Lake'-named Hardware

  • 5.1 Merge window part 1
    As of this writing, 6,135 non-merge changesets have been pulled into the mainline repository for the 5.1 release. That is approximately halfway through the expected merge-window volume, which is a good time for a summary. A number of important new features have been merged for this release; read on for the details.
  • Controlling device peer-to-peer access from user space
    The recent addition of support for direct (peer-to-peer) operations between PCIe devices in the kernel has opened the door for different use cases. The initial work concentrated on in-kernel support and the NVMe subsystem; it also added support for memory regions that can be used for such transfers. Jérôme Glisse recently proposed two extensions that would allow the mapping of those regions into user space and mapping device files between two devices. The resulting discussion surprisingly led to consideration of the future of core kernel structures dealing with memory management.

    Some PCIe devices can perform direct data transfers to other devices without involving the CPU; support for these peer-to-peer transactions was added to the kernel for the 4.20 release. The rationale behind the functionality is that, if the data is passed between two devices without modification, there is no need to involve the CPU, which can perform other tasks instead. The peer-to-peer feature was developed to allow Remote Direct Memory Access (RDMA) network interface cards to pass data directly to NVMe drives in the NVMe fabrics subsystem. Using peer-to-peer transfers lowers the memory bandwidth needed (it avoids one copy operation in the standard path from device to system memory, then to another device) and CPU usage (the devices set up the DMAs on their own).

    While not considered directly in the initial work, graphics processing units (GPUs) and RDMA interfaces have been able to use that functionality in out-of-tree modules for years. The merged work concentrated on support at the PCIe layer. It included setting up special memory regions and the devices that will export and use those regions. It also allows finding out whether the PCIe topology permits peer-to-peer transfers.
  • Intel Posts Linux Perf Support For Icelake CPUs
    With the core functionality for Intel Icelake CPUs appearing to be in place, Intel's open-source developers have been working on the other areas of hardware enablement for these next-generation processors. The latest Icelake Linux patches made public by Intel concern the "perf" subsystem. Perf, of course, exposes the hardware performance counters and associated instrumentation that can be exercised by user space when profiling the performance of the hardware and other events.
  • What is after Gemini Lake?
    Based on a 10 nm manufacturing process, the Elkhart Lake SoC uses Tremont microarchitectures (Atom) [2] and features Gen 11 graphics similar to the Ice Lake processors [3]. Intel’s Gen 11 solution offers 64 execution units, and it has managed over 1 TFLOP in GPU performance [4]. This can be compared with the Nvidia GeForce GT 1030 which offered a peak throughput of 0.94 TFLOPs [5]. Code has already been added in the Linux mainline kernel [6] suggesting a possible Computex announcement and mid to late 2019 availability [7].

GNOME Desktop: Parental Controls and GNOME Bugzilla

  • Parental controls hackfest
    Various of us have been meeting in the Red Hat offices in London this week (thanks Red Hat!) to discuss parental controls and digital wellbeing. The first two days were devoted to this; today and tomorrow will be dedicated to discussing metered data (which is unrelated to parental controls, but the hackfests are colocated because many of the same people are involved in both).
  • GNOME Bugzilla closed for new bug entry
    As part of GNOME’s ongoing migration from Bugzilla to Gitlab, from today on there are no products left in GNOME Bugzilla which allow the creation of new tickets. The ID of the last GNOME Bugzilla ticket is 797430 (note that there are gaps between 173191–200000 and 274555–299999 as the 2xxxxx ID range was used for tickets imported from Ximian Bugzilla). Since the year 2000, the Bugzilla software had served as GNOME’s issue tracking system. As forges emerged which offer tight and convenient integration of issue tracking, code review of proposed patches, automated continuous integration testing, code repository browsing and hosting and further functionality, Bugzilla’s shortcomings became painful obstacles for modern software development practices. Nearly all products which used GNOME Bugzilla have moved to GNOME Gitlab to manage issues. A few projects (Bluefish, Doxygen, GnuCash, GStreamer, java-gnome, LDTP, NetworkManager, Tomboy) have moved to other places (such as freedesktop.org Gitlab, self-hosted Bugzilla instances, or Github) to track their issues.