AMD Athlon 64 4800+ X2 - Dual Core CPU

Filed under: Hardware, Reviews

On the 25th of April AMD announced loads of dual core stuff. Besides the launch of the dual core 8xx series Opteron, it also announced the 2xx series dual core Opteron and the dual core Athlon 64 X2. Today we’re a step closer to the launch of the Athlon 64 X2, but it’s not here quite yet - you’ll have to wait until June for that pleasure. If only there was a large international IT trade show that started at the end of May. Why, that would be the perfect venue to announce a new processor.

Until the official launch happens we won’t be able to get our hands on a fully fledged Athlon 64 X2 PC, so what we have here is a technical preview based on an AMD press kit comprising an Asus A8N SLI Deluxe motherboard, an Athlon 64 X2 4800+ and 1GB of Corsair 3200XL Pro memory.

There are four processors in the Athlon 64 X2 family, and they share a number of features with each other and with existing models of Athlon 64. The Athlon 64 X2 continues to use Socket 939 and is built on a 90nm (.09 micron) SOI (Silicon on Insulator) fabrication process. The 128-bit memory controller is compatible with PC1600, PC2100, PC2700 and PC3200 DDR, although you’d be barking mad to use anything but top notch memory, and there’s one bi-directional 1GHz HyperTransport link. This gives an effective data bandwidth of 14.4GB/sec (8GB/sec from the HyperTransport link plus 6.4GB/sec of memory bandwidth). The X2 has 64KB of L1 instruction cache and 64KB of L1 data cache, just like the Athlon 64.
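
Those bandwidth figures add up neatly. As a rough, back-of-envelope illustration in Python (our own reading of the numbers, not an official AMD calculation): the 8GB/sec comes from a 16-bit, 1GHz double data rate link counted in both directions, and the 6.4GB/sec from two channels of PC3200.

    # Back-of-envelope check of the quoted 14.4GB/sec effective bandwidth.
    # The per-link and per-channel figures come from the specs above; the
    # breakdown itself is an illustration, not an official AMD calculation.

    # One 16-bit (2 byte) HyperTransport link at 1GHz, double data rate,
    # counted in both directions:
    ht_gb_s = 1.0 * 2 * 2 * 2        # 1GHz x DDR x 2 bytes x 2 directions = 8GB/sec

    # 128-bit memory controller = two 64-bit channels of PC3200 DDR:
    mem_gb_s = 2 * 3.2               # 3.2GB/sec per channel = 6.4GB/sec

    print(f"HyperTransport: {ht_gb_s}GB/sec")
    print(f"Memory:         {mem_gb_s}GB/sec")
    print(f"Effective:      {ht_gb_s + mem_gb_s}GB/sec")   # 14.4GB/sec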

The second core raises the transistor count to 233.2 million, but thanks to the 90nm fabrication process the die size is only 199 square millimetres. Compare that to the 130nm SOI Athlon 64 4000+ and Athlon 64 FX-55, whose cores use 105.9 million transistors yet occupy 193 square millimetres, and you’ll see what an effective die shrink can bring to the party.
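
Put another way, a quick division using only the figures quoted above (illustrative, nothing more) shows the 90nm part packing more than twice as many transistors into each square millimetre of die:

    # Transistor density from the figures quoted above (illustrative only).
    chips = {
        "Athlon 64 X2 4800+ (90nm)": (233.2, 199),   # million transistors, die area in mm^2
        "Athlon 64 FX-55 (130nm)":   (105.9, 193),
    }
    for name, (transistors_m, die_mm2) in chips.items():
        print(f"{name}: {transistors_m / die_mm2:.2f} million transistors per mm^2")
    # Roughly 1.17 vs 0.55 million transistors per mm^2.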

The Athlon 64 X2 4800+ has a nominal operating voltage of 1.35-1.40V and a TDP (Thermal Design Power) of 110W, which compares very favourably to the single core FX-55 at 104W and the 4000+ at 89W - only a few watts more despite the second core.

Add in support for SSE3 and a revised memory controller to help compatibility with a broader range of memory modules, and what you’ve effectively got is a pair of the new Venice cores tied together with the dual core Opteron crossbar.
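
On a Linux box both of those traits show up directly in /proc/cpuinfo once the chip is installed: the kernel reports two logical processors, and SSE3 appears under its "pni" flag. A minimal sketch, assuming an x86 Linux system:

    # Minimal sketch: count the logical processors the kernel sees and look for
    # the SSE3 capability, which /proc/cpuinfo reports as the "pni" flag.
    processors = 0
    sse3 = False

    with open("/proc/cpuinfo") as cpuinfo:
        for line in cpuinfo:
            if line.startswith("processor"):
                processors += 1
            elif line.startswith("flags"):
                sse3 = sse3 or "pni" in line.split(":", 1)[1].split()

    print(f"Logical processors: {processors}")
    print(f"SSE3 (pni):         {'yes' if sse3 else 'no'}")

On an Athlon 64 X2 this should report two logical processors with the pni flag present.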

Full Review.

More in Tux Machines

Radisys Contributes Its LTE RAN Software to M-CORD

Linux and Linux Foundation

  • Linux 4.10 Released as First New Kernel of 2017
    After a one week delay, Linus Torvalds released the first new Linux kernel of 2017 on Feb. 19, with the debut of Linux 4.10. The Linux 4.9 kernel (aka 'Roaring Lionus') was released back on Dec. 11. There was some talk in 2016 that seemed to indicate that Linux 4.10 would in fact be re-numbered as Linux 5.0, but that didn't end up happening. "On the whole, 4.10 didn't end up as small as it initially looked," Torvalds wrote in his release announcement. "After the huge release that was 4.9, I expected things to be pretty quiet, but it ended up very much a fairly average release by modern kernel standards." "So we have about 13,000 commits (not counting merges - that would be another 1200+ commits if you count those)," Torvalds added.
  • The Companies That Support Linux and Open Source: Mender.io
    IoT is largely transitioning from hype to implementation with the growth of smart and connected devices spanning all industries, including building automation, energy, healthcare and manufacturing. The automotive industry has given some of the most tangible examples of both the promise and risk of IoT, with Tesla’s ability to deploy over-the-air software updates a prime example of forward-thinking efficiency. On the other side, the Jeep Cherokee hack in July 2015 showed the urgent need for security to be a top priority for embedded devices, as several security lapses made the vehicle vulnerable and gave hackers the ability to control it remotely. One of those lapses was that the firmware update for the head unit (V850) lacked proper authenticity checks; a minimal sketch of that kind of pre-flash check follows this list.
  • Open Source Networking: Disruptive Innovation Ready for Prime Time
    Innovations are much more interesting than inventions. The “laser” is a classic invention and “FedEx” is a classic innovation. Successful innovation disrupts entire industries and ecosystems, as we’ve seen with Uber, AirBnB, and Amazon, to name just a few. The entire global telecommunication industry is at the dawn of a new era of innovation. Innovations should be the rising tide in which everybody wins, except what’s referred to as “laggards.” Who are the laggards going to be in this new era of open communications? You don’t want to be one. [...] It’s clear from this presentation that The Linux Foundation and its Open Source Networking and Orchestration portfolio of projects are driving real innovation in the networking ecosystem. Successful and impactful innovations take time as the disruptive forces ripple throughout the ecosystem. The Linux Foundation is taking on the complex task of coordinating multiple open source initiatives with the goal of eliminating barriers to adoption. Providing end-to-end testing and harmonization will reduce many deployment barriers and accelerate the time required for production deployments. Those interested in the future of open source networking should attend ONS 2017. No one wants to be a “laggard.”
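
Coming back to the Mender.io item above: the head unit lapse boils down to flashing an image whose origin was never verified. Below is a minimal, hypothetical sketch of the sort of pre-flash check an over-the-air updater can make; the file name and expected digest are placeholders, and a production updater would verify an asymmetric signature over the image rather than a bare digest.

    # Hypothetical pre-flash authenticity check for an OTA firmware update:
    # compare the image's SHA-256 digest against a value obtained over a
    # trusted channel before handing the image to the flashing step.
    # Placeholder file name and digest; real deployments verify a
    # cryptographic signature, but the principle is the same.
    import hashlib
    import hmac
    import sys

    FIRMWARE_IMAGE = "head-unit-update.bin"   # placeholder path
    EXPECTED_SHA256 = "0" * 64                # placeholder digest from a trusted source

    def sha256_of(path, chunk_size=1 << 20):
        """Stream the file so large images never have to fit in memory."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    if not hmac.compare_digest(sha256_of(FIRMWARE_IMAGE), EXPECTED_SHA256):
        sys.exit("Firmware image failed the authenticity check - refusing to flash.")
    print("Firmware image verified - handing off to the flashing step.")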

today's howtos

Servers/Networks

  • Of Pies and Platforms: Platform-as-a-Service vs. Containers-as-a-Service
    I’m often asked about the difference between using a platform-as-a-service (PaaS) vs. a containers-as-a-service (CaaS) approach to developing cloud applications. When does it make sense to choose one or the other? One way to describe the difference, and how it affects your development time and resources, is to look at it like the process of baking a pie.
  • Understanding OpenStack's Success
    At the time I got into the data storage industry, I was working with and developing RAID and JBOD (Just a Bunch Of Disks) controllers for 2 Gbit Fibre Channel Storage Area Networks (SAN). This was a time before "The Cloud". Things were different—so were our users. There was comfort in buying from a single source or single vendor. In an ideal world, it should all work together, harmoniously, right? And when things go awry, that single vendor should be able to solve every problem within that entire deployment.
  • KEYNOTE: Mesos + DCOS, Not Mesos versus DCOS