Nvidia scientist calls for expanded research into parallelism

Filed under: Hardware

Expanded research is needed into techniques for identifying and preserving parallelism in chip applications, according to chip maker Nvidia's chief scientist.

Comparing the futures of general-purpose CPUs and graphics processors (GPUs), David Kirk told the 17th Hot Chips conference on the Stanford University campus here on Tuesday (Aug. 16) that a crisis looms in programming technology. He said this could not only blight the future CPU market but also bring an end to gains in graphics performance despite continued advances in GPU hardware.

"If we look at the situation of general-purpose CPUs," Kirk said, "we see architects moving rapidly to multithreading and multicore designs. But we don't see a lot more threads to run. What parallelism there may be in algorithms is often concealed or lost altogether in the programming process."

Kirk grounded his pessimism in the experiences of game developers trying to exploit new multicore CPU chips. "We are already seeing some games limited by CPU throughput. We can render images faster than the CPU can send us the data," Kirk said. "But when game developers try to use dual-core CPUs to help, we have seen virtually no benefit to the second CPU core. And if the developer doesn't clearly understand the interactions of the cores with the caches, we have seen the application actually run slower on a dual-core machine."
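The slowdown Kirk describes is consistent with effects such as false sharing, where two cores repeatedly invalidate each other's copy of the same cache line. The following C++ sketch is an illustration under that assumption, not code from any game; the struct and function names (Tight, Padded, hammer) are ours, and actual timings depend on the hardware.

#include <atomic>
#include <cstdint>
#include <cstdio>
#include <thread>

// Two per-thread counters that end up on the same cache line.
struct Tight {
    std::atomic<std::uint64_t> a{0};
    std::atomic<std::uint64_t> b{0};   // adjacent to `a`: likely the same line
};

// Aligning each counter to its own cache line removes the conflict.
struct Padded {
    alignas(64) std::atomic<std::uint64_t> a{0};
    alignas(64) std::atomic<std::uint64_t> b{0};
};

template <typename C>
void hammer(C& c) {
    // Each thread writes only its own counter, yet with the tight layout
    // the two cores keep stealing the cache line from each other.
    std::thread t1([&] { for (int i = 0; i < 10'000'000; ++i) c.a.fetch_add(1, std::memory_order_relaxed); });
    std::thread t2([&] { for (int i = 0; i < 10'000'000; ++i) c.b.fetch_add(1, std::memory_order_relaxed); });
    t1.join();
    t2.join();
}

int main() {
    Tight tight;
    Padded padded;
    hammer(tight);    // typically noticeably slower on a dual-core machine
    hammer(padded);   // typically close to ideal two-core scaling
    std::printf("%llu %llu\n",
                static_cast<unsigned long long>(tight.a + tight.b),
                static_cast<unsigned long long>(padded.a + padded.b));
}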

Kirk contrasted this situation against the entirely different structure inside the GPU. "Graphics has been called embarrassingly parallel," he said. "In effect, each stage in our pipeline, each vertex in the scene and each pixel in the image is independent. And we have put a lot of effort into not concealing this parallelism with programming."
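As a rough illustration of what "embarrassingly parallel" means in this per-pixel sense, here is a hypothetical C++ sketch (our own, not GPU shader code): every pixel is computed from its own coordinates alone, never from another pixel's result, so the loop can be handed to any number of workers. The shade and render names are illustrative, and std::execution::par may need a parallel runtime such as TBB.

#include <algorithm>
#include <cstdint>
#include <execution>
#include <numeric>
#include <vector>

// A toy "shader": each output pixel depends only on its own coordinates.
std::uint8_t shade(int x, int y) {
    return static_cast<std::uint8_t>((x * x + y * y) & 0xFF);
}

std::vector<std::uint8_t> render(int width, int height) {
    std::vector<std::uint8_t> image(width * height);
    std::vector<int> rows(height);
    std::iota(rows.begin(), rows.end(), 0);

    // Because no pixel reads another pixel's result, the rows (or the
    // individual pixels) can be spread across as many workers as exist.
    std::for_each(std::execution::par, rows.begin(), rows.end(), [&](int y) {
        for (int x = 0; x < width; ++x)
            image[y * width + x] = shade(x, y);
    });
    return image;
}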

This allows a GPU developer to simply add more vertex processors and shading engines to handle more vertices and more pixels in parallel, as process technology allows. "We are limited by chip area, not by parallelism," Kirk observed.

Full Story.

More in Tux Machines

Intel Dominates The Perf Changes For Linux 4.2

For those using perf for Linux profiling with performance counters, the Linux 4.2 kernel will bring many improvements to benefit Intel customers. Read more

Also: Exciting Features Merged So Far For The Linux 4.2 Kernel

BeagleCore Open Source Internet Of Things Development Board (video)

BeagleCore is a new Internet of Things development board created to be 100 percent open source and to give makers, developers and hobbyists access to all the core features of BeagleBone Black in a miniaturised computer module. Read more

Red Hat CEO Warns About Faux Open Source

Red Hat CEO Jim Whitehurst spent last week’s annual Summit praising the progress made in open source, but during his opening keynote he also warned attendees about companies that claim to be open source without actually encouraging open participation and innovation from a broad group of users. Read more

Check the Ubuntu Touch Wish List for Apps and New Features

If you have any questions about new features and apps that are present, absent, or in the works for the Ubuntu Touch platform, a comprehensive wish list already tracks all of them. Read more

Also: Unity 8 Just Got a Cool 3D App Switcher for the Desktop