Rust 1.39.0 Release and Beyond

  • Announcing Rust 1.39.0

    The Rust team is happy to announce a new version of Rust, 1.39.0. Rust is a programming language that is empowering everyone to build reliable and efficient software.

    [...]

    The highlights of Rust 1.39.0 include async/.await, shared references to by-move bindings in match guards, and attributes on function parameters (a brief sketch of the latter two features appears after this list). Also, see the detailed release notes for additional information.

  • Rust 1.39.0 released

    Version 1.39.0 of the Rust language is available. The biggest new feature appears to be the async/await mechanism, which is described in this blog post: "So, what is async await? Async-await is a way to write functions that can 'pause', return control to the runtime, and then pick up from where they left off. Typically those pauses are to wait for I/O, but there can be any number of uses." (A runnable sketch of this mechanism appears after this list.)

  • Async-await on stable Rust!

    On this coming Thursday, November 7, async-await syntax hits stable Rust, as part of the 1.39.0 release. This work has been a long time in development -- the key ideas for zero-cost futures, for example, were first proposed by Aaron Turon and Alex Crichton in 2016! -- and we are very proud of the end result. We believe that Async I/O is going to be an increasingly important part of Rust's story.

    While this first release of "async-await" is a momentous event, it's also only the beginning. The current support for async-await marks a kind of "Minimum Viable Product" (MVP). We expect to be polishing, improving, and extending it for some time.

    Already, in the time since async-await hit beta, we've made a lot of great progress, including making some key diagnostic improvements that help to make async-await errors far more approachable. To get involved in that work, check out the Async Foundations Working Group; if nothing else, you can help us by filing bugs about polish issues or by nominating those bugs that are bothering you the most, to help direct our efforts.

  • Support lifecycle for Clang/LLVM, Go, and Rust in Red Hat Enterprise Linux 8

    The Go and Rust languages continue to evolve and add new features with each compiler update, which is why so many users are interested in getting the latest versions of the compilers. At the same time, these compilers are designed to remain compatible with older code. So, even as we advance to newer versions of Go and Rust within the RHEL 8 application streams, you should not need to update your codebase to keep it compilable. Once you’ve compiled your valid code using the Go or Rust application stream, you can assume that it will continue to compile with that stream for the full life of RHEL 8.

    We are excited to continue to bring you the latest and greatest in new compiler technologies. Stay tuned to the Red Hat Developer blog to learn more about what you can do with LLVM, Go, and Rust.
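
For readers who want to see the two smaller 1.39.0 features mentioned in the announcement, here is a minimal sketch; the function and variable names are illustrative only, not taken from the release notes:

    // Rust 1.39.0 allows attributes on function parameters,
    // such as lint attributes or conditional compilation.
    fn checksum(#[allow(unused_variables)] seed: u64, data: &[u8]) -> u64 {
        // `seed` is deliberately unused in this illustrative stub.
        data.len() as u64
    }

    fn main() {
        // Match guards can now take shared references to by-move
        // bindings: `name` is bound by move out of `opt`, yet the
        // guard reads it through a shared reference.
        let opt = Some(String::from("ferris"));
        match opt {
            Some(name) if name.len() > 3 => println!("long name: {}", name),
            Some(name) => println!("short name: {}", name),
            None => println!("no name"),
        }
        println!("checksum: {}", checksum(0, b"bytes"));
    }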
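
And here is a minimal, self-contained async-await sketch of the "pause and resume" behavior quoted above. Stable Rust ships no executor, so this assumes the futures 0.3 crate for block_on; the function names are again illustrative:

    use futures::executor::block_on;

    // An `async fn` returns a future immediately; its body only
    // runs when the future is polled, pausing at `.await` points.
    async fn fetch_len(data: &str) -> usize {
        data.len() // stand-in for genuinely I/O-bound work
    }

    async fn run() -> usize {
        // Execution can pause at each `.await`, returning control
        // to the runtime, then pick up from where it left off.
        fetch_len("hello").await + fetch_len("world").await
    }

    fn main() {
        // Drive the top-level future to completion on this thread.
        println!("total = {}", block_on(run()));
    }

Each `.await` is a potential suspension point; the compiler turns the async function into a state machine, which is what makes these futures zero-cost.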

Rust 1.39 Released With Async-Await Support


More in Tux Machines

Games: Valve, Half-Life, and Counter-Strike: Global Offensive

  • Valve Announcing Half-Life: Alyx VR Game On Thursday

    Valve has confirmed recent rumors that one of their new virtual reality games in development is Half-Life: Alyx. Valve tweeted a short time ago that Half-Life: Alyx will be announced on Thursday; however, the VR game isn't expected to ship until sometime in 2020.

  • Valve has now confirmed Half-Life: Alyx, their new VR flagship title

    Well, that was a little sooner than expected. Valve have now officially confirmed Half-Life is back with their VR title Half-Life: Alyx.

  • Counter-Strike: Global Offensive releases the huge Operation Shattered Web update

    Not content with just announcing Half-Life: Alyx, their new VR flagship title, Valve also updated Counter-Strike: Global Offensive with a big new operation called Shattered Web. I have to admit, I'm really loving the humour from whoever has been running the CS:GO Twitter account lately. Earlier today they put up a poll on Twitter, asking what people preferred between a new Operation and a weapon nerf. They then quickly replied with "Loud and clear, Twitter. We'll get started." and, minutes later, "OK, we're done"—brilliant. Not great for me, mind you: being in the UK, the timings are never great, and it's now gone midnight, but here I am…

Supercomputing Articles

  • Exascale meets hyperscale: How high-performance computing is transitioning to cloud-like environments

    Twice a year, the high-performance computing (HPC) community anxiously awaits the announcement of the latest edition of the Top500 list, cataloging the most powerful computers on the planet. The excitement of a supercomputer breaking the coveted exascale barrier and moving into the top position typically overshadows the question of which country will hold the record. As it turned out, the top 10 systems on the November 2019 Top500 list are unchanged from the previous revision, with Summit and Sierra still holding the #1 and #2 positions, respectively.

    Despite the natural uncertainty around the composition of the Top500 list, there is little doubt about the software technologies that are helping to reshape the HPC landscape. Since the International Supercomputing conference earlier this year, one of the technologies leading this charge has been containerization, lending further credence to how traditional enterprise technologies are influencing the next generation of supercomputing applications. Containers are born out of Linux, the operating system underpinning Top500 systems, and their adoption has gained momentum: many supercomputing sites already have some portion of their workflows containerized.

    As more supercomputers are used to run artificial intelligence (AI) and machine learning (ML) applications to solve complex problems in science, including disciplines like astrophysics, materials science, systems biology, weather modeling, and cancer research, the focus of the research is transitioning from purely computational methods to AI-accelerated approaches. This often requires repackaging applications and restaging data for easier consumption, which is where containerized deployments are becoming more and more important.

  • Exploring AMD’s Ambitious ROCm Initiative

    Three years ago, AMD released the innovative ROCm hardware-accelerated, parallel-computing environment [1] [2]. Since then, the company has continued to refine its bold vision for an open source, multiplatform, high-performance computing (HPC) environment. Over the past three years, ROCm developers have contributed many new features and components to the ROCm open software platform. ROCm is a universal platform for GPU-accelerated computing. A modular design lets any hardware vendor build drivers that support the ROCm stack [3]. ROCm also integrates multiple programming languages and makes it easy to add support for other languages. ROCm even provides tools for porting vendor-specific CUDA code into a vendor-neutral ROCm format, which makes the massive body of source code written for CUDA available to AMD hardware and other hardware environments.

  • High-Performance Python – GPUs

    When GPUs became available, C code via CUDA, a parallel computing platform and programming model developed by Nvidia for GPUs, was the logical language of choice. Since then, Python has become the tool of choice for machine learning, deep learning, and, to some degree, scientific code in general. Not long after the release of CUDA, the Python world quickly created tools for use with GPUs. As with many new technologies, a plethora of tools emerged to integrate Python with GPUs. For some time, the tools and libraries were adequate, but soon they started to show their age. The biggest problem was incompatibility: if you used one tool to write code for the GPU, no other tool could read or use the data on the GPU. After computing on the GPU with one tool, the data had to be copied back to the CPU; then a second tool had to copy the data from the CPU to the GPU before commencing its computations. This data movement between the CPU and the GPU really hurt overall performance. Still, these tools and libraries allowed people to write functions that worked with Python. In this article, I discuss the Python GPU tools that are being actively developed and, more importantly, are likely to interoperate. Some of these tools don't require knowing CUDA to write GPU code; others do require CUDA for custom Python kernels.

  • Porting CUDA to HIP

    You’ve invested money and time in writing GPU-optimized software with CUDA, and you’re wondering if your efforts will have a life beyond the narrow, proprietary hardware environment supported by the CUDA language. Welcome to the world of HIP, the HPC-ready universal language at the core of AMD’s all-open ROCm platform [1]. You can use HIP to write code once and compile it for either the Nvidia or AMD hardware environment. HIP is the native format for AMD’s ROCm platform, and you can compile it seamlessly using the open source HIP/Clang compiler. Just add CUDA header files, and you can also build the program with CUDA and the NVCC compiler stack (Figure 1).

  • OpenMP – Coding Habits and GPUs

    When first using a new programming tool or programming language, it’s always good to develop sound general habits. Everyone who codes with OpenMP directives develops their own habits – some good and some perhaps not so good. As this three-part OpenMP series finishes, I highlight best practices from the previous articles that can lead to good habits. Enamored with new things, especially those that drive performance and scalability, I can’t resist throwing a couple more new directives and clauses into the mix. After covering these new directives and clauses, I will briefly discuss OpenMP and GPUs. This pairing is fairly recent, and compilers are still catching up to the newer OpenMP standards, but it is important to understand that you can run OpenMP code on targeted offload devices (e.g., GPUs).

  • News and views on the GPU revolution in HPC and Big Data:

    Exploring AMD's Ambitious ROCm Initiative
    Porting CUDA to HIP
    Python with GPUs
    OpenMP – Coding Habits and GPUs

IPFire 2.23 - Core Update 138 released

Just days after the last one, we are releasing IPFire 2.23 - Core Update 138. It addresses and mitigates recently announced vulnerabilities in Intel processors.

today's howtos