RISC-V: Military/Aerospace Designs, Road Ahead, Libre GPU

  • RISC-V Eases Innovation in Military/Aerospace Designs

    The RISC-V Instruction Set Architecture (ISA), and open hardware standards in general, have the potential to be a real boon to military and aerospace designers. “RISC-V is being received with open arms by the military and aerospace sectors,” said Tim Morin, director of strategic marketing in Microchip Technology’s FPGA business unit. “They are very excited about it.”

    From a design perspective, the ISA addresses the need to minimize power consumption, streamline bill of materials (BOM) costs, and optimize board space. “With RISC-V, when you create an integrated circuit, you do exactly what you need,” said Michael Cave, senior director of strategic technology at SiFive, adding that the company is currently bidding on DARPA projects. “The government loves that reality. The government feels like if they don’t do something innovative, China is going to capture the lead.”

  • RISC-V: The Road Ahead

    Now that RISC-V has established a beachhead as a deeply embedded controller in SoCs, it’s time to start asking the next question: Can this open-source instruction-set architecture (ISA) make the next big leap into being an alternative to Arm and the x86 as a host processor?

    The short answer is yes, but it could take several years and there are plenty of pitfalls along the way. Essentially, the freewheeling open-source community behind RISC-V will need to develop and adhere to a wide range of system-level standards.

    So far, Nvidia and Western Digital plan to use RISC-V controllers in their SoCs, and Microsemi will use it in a new FPGA. Andes, Cortus, and startup SiFive sell IP cores, and a handful of startups plan to launch mainly machine-learning accelerators using it.

    RISC-V is in as many as 20 million fitness bands and smartwatches in China. In the U.S., SiFive has shipped more than 2,500 development boards using processors that it aims to sell as IP cores or as SoCs through its design services.

    “The lowest-hanging fruit is the embedded space where the APIs are not exposed to programmers,” said Rick O’Connor, executive director of the non-profit RISC-V Foundation. “That’s the easiest thing to do, but there’s healthy activity in all segments.”

  • Libre RISC-V GPU Aiming For 2.5 Watt Power Draw Continues Being Plotted

    Besides a dedicated Intel GPU to look forward to in 2020, there is the ongoing effort to create an open-source, RISC-V-architecture-based graphics processor, spearheaded by Luke Kenneth Casson Leighton and other libre hardware developers.

    This ambitious effort aims to create, in effect, a RISC-V-based Vulkan accelerator that hopes to achieve 25 FPS @ 720p at 5~6 GFLOPs. Part of how they plan to make a RISC-V-based GPU viable is via their Simple-V extension for RISC-V. While the performance target is incredibly lax by today's standards, they do plan for an aggressive power consumption target of just 2.5 Watts.

More in Tux Machines

How to use Spark SQL: A hands-on tutorial

In the first part of this series, we looked at advances in leveraging the power of relational databases "at scale" using Apache Spark SQL and DataFrames. We will now do a simple tutorial based on a real-world dataset to look at how to use Spark SQL. We will be using Spark DataFrames, but the focus will be more on using SQL. In a separate article, I will cover a detailed discussion around Spark DataFrames and common operations.

I love using cloud services for my machine learning, deep learning, and even big data analytics needs, instead of painfully setting up my own Spark cluster. I will be using the Databricks Platform for my Spark needs. Databricks is a company founded by the creators of Apache Spark that aims to help clients with cloud-based big data processing using Spark.

Read more

Also: Scaling relational databases with Apache Spark SQL and DataFrames
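The core Spark SQL pattern the tutorial walks through is: load tabular data, register it as a view, then query it with plain SQL. Since Spark itself needs a JVM and a cluster, here is a minimal sketch of that register-then-query flow using Python's built-in sqlite3 as a stand-in; the `sales` table and its rows are invented for illustration (in PySpark the equivalent steps would be `df.createOrReplaceTempView("sales")` followed by `spark.sql(...)`):

```python
import sqlite3

# Stand-in for registering a DataFrame as a queryable view:
# an in-memory table we can hit with SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("EMEA", 120.0), ("EMEA", 80.0), ("APAC", 50.0)],
)

# The SQL itself is what carries over to Spark SQL unchanged.
rows = conn.execute(
    "SELECT region, SUM(amount) AS total "
    "FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('APAC', 50.0), ('EMEA', 200.0)]
```

The point of the pattern is that the SQL string is engine-agnostic: the same GROUP BY query runs on a laptop-sized SQLite table or a cluster-sized Spark DataFrame.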

4 questions Uber's open source program office answers with data

It's been said that "Software is eating the world," and every company will eventually become a "software company." Since open source is becoming the mainstream path for developing software, the way companies manage their relationships with the open source projects they depend on will be crucial for their success. An open source program office (OSPO) is a company's asset to manage such relationships, and more and more companies are setting them up. Even the Linux Foundation has a project called the TODO Group "to collaborate on practices, tools, and other ways to run successful and effective open source projects and programs". Read more

Kernel: LWN on Linux 5.1 and More, 'Lake'-named Hardware

  • 5.1 Merge window part 1
    As of this writing, 6,135 non-merge changesets have been pulled into the mainline repository for the 5.1 release. That is approximately halfway through the expected merge-window volume, which is a good time for a summary. A number of important new features have been merged for this release; read on for the details.
  • Controlling device peer-to-peer access from user space
    The recent addition of support for direct (peer-to-peer) operations between PCIe devices in the kernel has opened the door for different use cases. The initial work concentrated on in-kernel support and the NVMe subsystem; it also added support for memory regions that can be used for such transfers. Jérôme Glisse recently proposed two extensions that would allow the mapping of those regions into user space and mapping device files between two devices. The resulting discussion surprisingly led to consideration of the future of core kernel structures dealing with memory management.

    Some PCIe devices can perform direct data transfers to other devices without involving the CPU; support for these peer-to-peer transactions was added to the kernel for the 4.20 release. The rationale behind the functionality is that, if the data is passed between two devices without modification, there is no need to involve the CPU, which can perform other tasks instead. The peer-to-peer feature was developed to allow Remote Direct Memory Access (RDMA) network interface cards to pass data directly to NVMe drives in the NVMe fabrics subsystem. Using peer-to-peer transfers lowers both the memory bandwidth needed (it avoids one copy operation in the standard path from device to system memory, then to another device) and CPU usage (the devices set up the DMAs on their own). While not considered directly in the initial work, graphics processing units (GPUs) and RDMA interfaces have been able to use that functionality in out-of-tree modules for years.

    The merged work concentrated on support at the PCIe layer. It included setting up special memory regions and the devices that will export and use those regions. It also allows finding out whether the PCIe topology allows the peer-to-peer transfers.
  • Intel Posts Linux Perf Support For Icelake CPUs
    With the core functionality for Intel Icelake CPUs appearing to be in place, Intel's open-source developers have been working on the other areas of hardware enablement for these next-generation processors. The latest Icelake Linux patches made public by Intel concern the "perf" subsystem. Perf, of course, exposes the hardware performance counters and associated instrumentation that can be exercised by user space when profiling the performance of the hardware and other events.
  • What is after Gemini Lake?
    Based on a 10 nm manufacturing process, the Elkhart Lake SoC uses the Tremont microarchitecture (Atom) [2] and features Gen 11 graphics similar to the Ice Lake processors [3]. Intel’s Gen 11 solution offers 64 execution units, and it has managed over 1 TFLOP in GPU performance [4]. This can be compared with the Nvidia GeForce GT 1030, which offered a peak throughput of 0.94 TFLOPs [5]. Code has already been added to the mainline Linux kernel [6], suggesting a possible Computex announcement and mid-to-late-2019 availability [7].
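The "over 1 TFLOP" figure for 64 execution units is easy to sanity-check. Assuming the commonly cited Gen 11 numbers (16 FP32 FLOPs per EU per clock, i.e. two 4-wide FMA ALUs counting an FMA as two operations, at a roughly 1.0 GHz clock — neither figure is stated in the excerpt above), the peak works out as:

```python
# Rough peak-FP32 estimate for Intel Gen 11 graphics.
# Assumptions (not from the article): 16 FP32 FLOPs per EU per
# clock (2 FMA ALUs x SIMD4 x 2 ops/FMA) and a ~1.0 GHz clock.
eus = 64
flops_per_eu_per_clock = 16
clock_ghz = 1.0

peak_tflops = eus * flops_per_eu_per_clock * clock_ghz / 1000
print(f"{peak_tflops:.3f} TFLOPS")  # 1.024 TFLOPS
```

At slightly higher boost clocks the figure rises proportionally, which is consistent with the "over 1 TFLOP" claim and the GT 1030 comparison.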

GNOME Desktop: Parental Controls and GNOME Bugzilla

  • Parental controls hackfest
    Several of us have been meeting in the Red Hat offices in London this week (thanks Red Hat!) to discuss parental controls and digital wellbeing. The first two days were devoted to this; today and tomorrow will be dedicated to discussing metered data (which is unrelated to parental controls, but the hackfests are colocated because many of the same people are involved in both).
  • GNOME Bugzilla closed for new bug entry
    As part of GNOME’s ongoing migration from Bugzilla to Gitlab, from today on there are no products left in GNOME Bugzilla which allow the creation of new tickets. The ID of the last GNOME Bugzilla ticket is 797430 (note that there are gaps between 173191–200000 and 274555–299999 as the 2xxxxx ID range was used for tickets imported from Ximian Bugzilla).

    Since the year 2000, the Bugzilla software had served as GNOME’s issue tracking system. As forges emerged which offer tight and convenient integration of issue tracking, code review of proposed patches, automated continuous integration testing, code repository browsing and hosting, and further functionality, Bugzilla’s shortcomings became painful obstacles to modern software development practices.

    Nearly all products which used GNOME Bugzilla have moved to GNOME Gitlab to manage issues. A few projects (Bluefish, Doxygen, GnuCash, GStreamer, java-gnome, LDTP, NetworkManager, Tomboy) have moved to other places (such as Gitlab, self-hosted Bugzilla instances, or Github) to track their issues.