For those running an Intel ultrabook, here are some benchmarks using the Linux 3.16 kernel on this portable x86 hardware compared to Linux 3.15. Unfortunately, the results aren't too promising.
As some extra Linux 3.16 kernel benchmarks to share, I compared the stable Linux 3.15 kernel to Linux 3.16 Git on an ASUS Zenbook Prime UX32VDA ultrabook with a Core i7 "Ivy Bridge" processor running Ubuntu 14.04 LTS.
Up for your viewing pleasure today are some quick benchmarks of the next-generation KDE desktop stack compared to the KDE 4.13.0 and Unity 7.2.1 desktops on Ubuntu 14.04 LTS.
To deliver some early preview figures of KDE Frameworks 5 with Plasma-Next, I recently used the Project Neon PPA to test full-screen Linux OpenGL gaming performance and see whether it was affected differently than under KDE4 or Unity. Much more in-depth testing will come once the next-gen KDE stack has stabilized, but this should serve as some interesting preview figures.
During the Big Buck Bunny video playback process, the CPU usage was monitored by the test profile while we also monitored each graphics card's GPU temperature, GPU usage, and the overall AC system power draw (via a WattsUp power meter). The additional sensors can be polled automatically by the Phoronix Test Suite by setting the MONITOR=gpu.usage,gpu.temp,sys.power environment variable. This testing is quite straightforward and mainly intended as a reference for those thinking about an NVIDIA GPU for a Linux HTPC / multimedia PC, so let's get straight to the data.
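Concretely, the sensor polling is requested on the command line before launching a run; a minimal sketch, where the test profile name is just an illustrative choice:

```shell
# Ask the Phoronix Test Suite to poll extra sensors during the run.
# MONITOR is the suite's documented environment variable for this;
# "pts/video-cpu-usage" here is only an example profile name.
export MONITOR=gpu.usage,gpu.temp,sys.power
phoronix-test-suite benchmark pts/video-cpu-usage
```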
Besides the Nouveau driver performance being faster thanks to experimental re-clocking when using the Linux 3.16 kernel, there are also performance improvements to note with some generations of AMD graphics processors.
The changes found within Linux 3.16 to benefit Radeon DRM graphics performance are the GPU VM optimizations and large PTE support. Separate from that performance-related work, this kernel-side open-source AMD update also brings HDMI deep color support, HDMI audio clean-ups, and other bug fixes.
After some interesting discussions last week around KVM and Xen performance improvements over the past years, I decided to do a little research on my own. The last complete set of benchmarks I could find were the Phoronix Haswell tests from 2013. There were some other benchmarks from 2011, but those were hotly debated due to the Xen patches that were headed into kernel 3.0.
Last week Eric Anholt left Intel's Linux graphics driver team to work for Broadcom on developing VC4 DRM/KMS and Gallium3D drivers for the GPU that powers the Raspberry Pi.
Eric ended up making more progress in his first week than he anticipated when starting off this new open-source Linux graphics driver project: he completed work items he had originally expected to take about one month. The basic "hack driver" now runs triangle code on a kernel with a relocations-based GEM interface. By Thursday he had already started on the Broadcom VC4 Gallium3D driver, which in turn is based upon the Freedreno driver for Qualcomm's ARM hardware.
Back in 2012, with the NVIDIA 310 Linux driver series, a threaded OpenGL optimization was added to the proprietary graphics driver. When that driver premiered we tested NVIDIA's Linux threaded OpenGL optimizations with mixed results. We're now back re-testing the OpenGL threaded optimizations to see whether they make any more of a difference with modern Linux games and OpenGL workloads while using the latest 337.25 Linux driver.
NVIDIA's OpenGL threaded optimization feature allows offloading part of the CPU workload to a separate processor thread. This feature is designed to benefit CPU-heavy workloads but can potentially worsen performance depending upon the game/application's particular OpenGL calls. As a result, the threaded optimization feature remains disabled by default even though it's been around for two years. For more information on the threaded optimization feature and how to enable it, see the earlier article.
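As a quick sketch, the feature is toggled per-process through the driver's __GL_THREADED_OPTIMIZATIONS environment variable; the game binary name below is a placeholder:

```shell
# Enable NVIDIA's threaded OpenGL optimization for a single run.
# __GL_THREADED_OPTIMIZATIONS is the driver's documented toggle;
# "./mygame" stands in for whatever OpenGL application you launch.
__GL_THREADED_OPTIMIZATIONS=1 ./mygame
```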
Russia Industry And Trade Ministry To Replace Untrusted Intel And AMD Processors With Their Own ARM Design
Submitted by Roy Schestowitz on Sunday 22nd of June 2014 02:22:39 PM
Support for running Wayland's Weston compositor directly off the DRM kernel driver for the NVIDIA Tegra K1 SoC found within the Jetson TK1 development board has been proposed for mainline Weston.
Covered last week on Phoronix was news that Codethink got a blob-free Linux driver stack working on the Jetson TK1, a fabulous sub-$200 ARM development board that's rather powerful. There's an emerging DRM driver for the Kepler "GK20A" graphics found within this new Cortex-A15 SoC. Codethink managed to get Wayland's Weston compositor working and this week they released their TK1 patch-set.
Our latest Debian GNU/Linux benchmarks, following the recent GNU/kFreeBSD vs. GNU/Linux comparison, are benchmarks of Debian GNU/Linux in its latest testing form for 8.0 "Jessie" compared to Ubuntu 14.04 LTS both stock and with an assortment of updates.
From the same Core i7 3960X Extreme Edition system with 8GB of RAM, 64GB OCZ Vertex solid-state drive, and Radeon HD 4850 graphics, the following configurations were benchmarked:
- Debian GNU/Linux "Testing" of 8.0 Jessie with the Linux 3.14 kernel, X.Org Server 1.15.1, Mesa 10.1.4, GCC 4.8.3, and the default EXT4 file-system. It's worth noting that with the Linux 3.14 kernel in Debian testing the i7-3960X EE system defaulted to the P-State scaling driver with the powersave governor.
- Ubuntu 14.04 LTS with the Linux 3.13 stock kernel, Mesa 10.1.0, X.Org Server 1.15.1, and an EXT4 file-system.
- Ubuntu 14.04 LTS updated to the Linux 3.15 mainline kernel (from the mainline PPA) that besides bumping the kernel version forward also switches over from the ACPI CPUfreq ondemand governor to the Intel P-State performance governor.
- The updated Ubuntu 14.04 LTS + Linux 3.15 stack plus enabling the Oibaf PPA for tapping Mesa 10.3.0-devel.
- The most updated stack (ditto above) plus pulling down GCC 4.9 onto Ubuntu 14.04 to replace GCC 4.8 as the compiler.
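Which scaling driver and governor a given kernel configuration ended up with can be verified from sysfs; a minimal sketch using the standard cpufreq paths:

```shell
# Report the active CPU frequency scaling driver and governor for CPU 0.
# Per the configurations above, expect e.g. intel_pstate/powersave on
# Debian's 3.14 kernel versus acpi-cpufreq/ondemand on stock Ubuntu 14.04.
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
```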
All of these Debian and Ubuntu Linux benchmarks were carried out via the Phoronix Test Suite benchmarking software.
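For anyone wanting to reproduce a similar run, the suite is packaged in Debian and Ubuntu; a minimal sketch, where the benchmark profile chosen is just an example:

```shell
# Install the Phoronix Test Suite and run a single benchmark profile.
# "pts/compress-gzip" is an illustrative profile choice, not the exact
# set of tests used in this article.
sudo apt-get install -y phoronix-test-suite
phoronix-test-suite benchmark pts/compress-gzip
```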
Outside of Logitech itself, many Linux users have come up with several different open-source utilities for supporting Logitech hardware under Linux. For most of these apps the hardware support is limited to the few keyboards/mice the developer owns, but it isn't too hard to reverse-engineer a USB keyboard for others to help out and contribute.
As it's been a while since last delivering any "4K" resolution OpenGL benchmarks at Phoronix, out today -- now that we're done with our massive 60+ GPU open-source testing and 35-way proprietary driver comparison -- are benchmarks of several NVIDIA GeForce and AMD Radeon graphics cards when running an assortment of Linux games and other OpenGL tests at the 4K resolution.
A number of commits have landed within mainline Mesa today for improving the open-source Radeon driver's video encoding support via the recently exposed VCE video encoding engines and the recently introduced OpenMAX state tracker to Gallium3D.
First up, Gallium3D's video layer code and the OpenMAX encode state tracker gained H.264 level support. The H.264 encoding levels exposed are AVC Level 1/1b, 1.1, 1.2, 1.3, 2, 2.1, 2.2, 3, 3.1, 3.2, 4, 4.1, 4.2, 5, and 5.1. H.264 levels are a set of constraints indicating the encoder/decoder performance needed for a given profile, with a conformant implementation able to meet the defined speeds for that level and all lower levels. Details on the H.264 levels are documented on Wikipedia.
The short answer to the question of which is the best filesystem for a MariaDB server is ext4, XFS, or Btrfs. Why those three? All are solid enterprise journaling filesystems that scale nicely from small to very large files and very large storage volumes.
Trying to figure out which filesystem gives the best performance may be fun, but the filesystem won't make a large difference in the performance of your MariaDB server. Your hardware is the most crucial factor in eking out the most speed. Fast hard drives, discrete drive controllers, lots of fast RAM, a multi-core processor, and a fast network have a larger impact on performance than the filesystem. You can also tailor your MariaDB configuration options for best performance for your workloads.
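For example, a few of the commonly tuned server options in my.cnf (the values are purely illustrative and should be sized to the machine's RAM and workload):

```ini
[mysqld]
# Illustrative values only -- size these to your hardware and workload
innodb_buffer_pool_size = 4G        # cache data and indexes in RAM
innodb_log_file_size    = 512M      # a larger redo log smooths write bursts
innodb_flush_method     = O_DIRECT  # skip double-buffering through the OS page cache
```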
First off, Canonical emphasized to Ars multiple times that it is not getting into the hardware business. If you really want to buy one of these things, you can have Tranquil PC build one for you (for £7,575, or about $12,700), but Canonical won’t sell you an Orange Box for your lab—there are too many partner relationships it could jeopardize by wading into the hardware game. But what Canonical does want to do is let you fiddle with an Orange Box. It makes for an amazing demo platform—a cloud-in-a-box that Canonical can use to show off the fancy services and tools it offers.
Inside the custom orange chassis are ten stripped Intel Ivy Bridge D53427RKE NUCs. Each comes with 16GB of RAM and a 120GB SSD, and they’re all connected to a gigabit Ethernet switch. One of the NUCs is the control node; its USB and HDMI ports are wired to the Orange Box’s rear panel, and that particular node also runs Canonical’s MAAS software. Its single unified internal 320W power supply runs on a single 110V outlet—even when all ten nodes are going flat-out, it doesn't require a second power plug.
Last weekend I published 2D performance benchmarks comparing Nouveau to NVIDIA's official driver. To no real surprise, the proprietary NVIDIA driver beat Nouveau in most micro-benchmarks when it comes to 2D (and, separately, 3D) performance. The open-source Radeon stack, however, puts up a much tougher fight against the proprietary Catalyst driver.
With the Linux 3.16 kernel comes the ability to re-clock select NVIDIA GeForce GPUs when using the open-source, reverse-engineered Nouveau driver. Here are my first impressions from trying out this option to maximize the performance of NVIDIA graphics cards on open-source drivers.
As explained previously, the GPUs where Nouveau in Linux 3.16 will support re-clocking are the NV40, NVAA, and NVE0 GPU series. The NV40 chip family is the GeForce 6 and 7 series. The NVAA series meanwhile is part of the NV50 family but consists of just the GeForce 8100/8200/8300 mobile GPUs / nForce 700a series and 8200M G. NVE0 meanwhile is the most interesting of the bunch and consists of the Kepler (GeForce 600/700 series) GPUs. Re-clocking support for other graphics processor generations is still a work-in-progress.
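For the curious, Nouveau's re-clocking in this era is driven through a pstate file exposed by the kernel; a minimal sketch, noting that the exact path and the available level ids vary by kernel version and GPU:

```shell
# List the performance levels Nouveau reports for the first GPU;
# the output marks the currently active level.
cat /sys/class/drm/card0/device/pstate
# Switch to a given level, e.g. id "0f" -- the ids are GPU-specific,
# so read the file first to see what your card offers.
echo 0f | sudo tee /sys/class/drm/card0/device/pstate
```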
After delivering 30-way Intel/AMD/NVIDIA 2D Linux benchmarks last weekend, this weekend I have some results comparing GeForce GPU performance for 2D operations between the open-source Nouveau driver and the closed-source proprietary NVIDIA Linux driver.
All testing happened from the same Intel Core i7 4770K system running Ubuntu 14.04 64-bit. The Nouveau stack was powered by the Linux 3.15 kernel, Mesa 10.3-devel, and xf86-video-nouveau 1.0.10. The proprietary NVIDIA Linux graphics driver stack was the NVIDIA 337.25 proprietary driver running on Linux 3.13.