
Calls to end US domination of the internet

Filed under: Web

WHENEVER you surf the web, send emails or download music, an unseen force is at work in the background, making sure you connect to the sites, inboxes and databases you want. The name of this brooding presence? The US government.

Some 35 years after the US military invented the internet, the US Department of Commerce retains overall control of the master computers that direct traffic to and from every web and email address on the planet.

But a group convened by the UN last week to thrash out the future of the net is calling for an end to US domination, proposing instead that a multinational forum of governments, companies and civilian organisations be created to run it.

The UN's Working Group on Internet Governance (WGIG) says US control hinders many developments that might improve it. These range from efforts to give the developing world more affordable net access to coming up with globally agreed and enforceable measures to boost net privacy and fight cybercrime.

US control also means that any changes to the way the net works, including the addition of new domain names such as .mobi for cellphone-accessed sites, have to be agreed by the US, whatever experts in the rest of the world think. The flipside is that the US could make changes without the agreement of the rest of the world.

In a report issued in Geneva, Switzerland, on 14 July, the WGIG seeks to overcome US hegemony. "The internet should be run multilaterally, transparently and democratically. And it must involve all stakeholders," says Markus Kummer, a Swiss diplomat who is executive coordinator of the WGIG.

So why is the internet's overarching technology run by the US? The reason is that the net was developed there in the late 1960s by the Pentagon's Advanced Research Projects Agency (ARPA) in a bid to create a communications medium that would still work if a Soviet nuclear strike took out whole chunks of the network. This medium would send data from node to node in self-addressed "packets" that could take any route they liked around the network, avoiding any damaged parts.
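
That store-and-forward idea is easy to picture with a toy model. The Python sketch below uses a made-up four-node network and a breadth-first search to show how traffic can simply flow around a failed node; the topology, node names and search routine are invented for illustration and have nothing to do with any real internet routing protocol.

```python
from collections import deque

# A toy model of the routing idea described above: packets can take any
# surviving path through the network, so traffic flows around damaged
# nodes. The network below is a made-up example.
links = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C"],
}

def route(src, dst, failed=frozenset()):
    """Breadth-first search for any path from src to dst avoiding failed nodes."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for nxt in links[node]:
            if nxt not in seen and nxt not in failed:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(route("A", "D"))                # a direct-ish path, e.g. ['A', 'B', 'D']
print(route("A", "D", failed={"B"}))  # reroutes around the lost node: ['A', 'C', 'D']
```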

Today the internet has 13 vast computers dotted around the world that translate text-based email and web addresses into numerical internet protocol (IP) node addresses that computers understand. In effect a massive look-up table, the 13 computers are collectively known as the root servers of the Domain Name System (DNS). But the DNS master computer, called the master root server, is based in the US and is ultimately controlled by the Department of Commerce. Because the data it contains is propagated to all the other DNS servers around the world, access to the master root server file is a political hot potato.
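
That translation step can be seen from any networked machine. The short Python sketch below asks the operating system's resolver, which in turn relies on the DNS hierarchy rooted at those 13 servers, to turn a hostname into IP addresses; the hostname used here is just an illustrative placeholder.

```python
import socket

# A minimal sketch of the lookup the DNS performs: a human-readable
# hostname is translated into the numerical IP addresses computers use.
# The hostname is only a placeholder for illustration.
hostname = "www.example.com"

# getaddrinfo asks the operating system's resolver, which walks the DNS
# hierarchy (root servers, then top-level-domain servers, then the
# domain's own name servers) on our behalf.
for family, _, _, _, sockaddr in socket.getaddrinfo(hostname, 80):
    print(family.name, sockaddr[0])
```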

Currently, only the US can make changes to that master file. And that has some WGIG members very worried indeed.

Full Article.

two words

Fuck that.

If they think the US will hand over control after inventing the 'Net in the first place, they're even dumber than most of the politicians in America.

Let them build their own damn Internet.


More in Tux Machines

Games: The Spicy Meatball Saves The Day, Uebergame, DwarfCorp

Android Leftovers

Baidu puts open source deep learning into smartphones

A year after it open sourced its PaddlePaddle deep learning suite, Baidu has dropped another piece of AI tech into the public domain – a project to put AI on smartphones. Mobile Deep Learning (MDL) landed at GitHub under the MIT license a day ago, along with the exhortation “Be all eagerness to see it”. MDL is a convolution-based neural network designed to fit on a mobile device. Baidu said it is suitable for applications such as recognising objects in an image using a smartphone's camera.

Read more
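
For readers unfamiliar with the jargon, "convolution-based" simply means the network is built from convolution operations like the one sketched below in plain NumPy. This is a generic illustration of the operation itself, not MDL's actual code or API.

```python
import numpy as np

# A plain 2D convolution, the basic building block of convolution-based
# networks. Purely illustrative; unrelated to MDL's real implementation.
def conv2d(image, kernel):
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Each output pixel is the weighted sum of a small image patch.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A 3x3 edge-detection kernel applied to a random 8x8 "image".
image = np.random.rand(8, 8)
kernel = np.array([[-1, -1, -1],
                   [-1,  8, -1],
                   [-1, -1, -1]])
print(conv2d(image, kernel).shape)  # (6, 6)
```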

AMD and Linux Kernel

  • Ataribox runs Linux on AMD chip and will cost at least $250
    Atari released more details about its Ataribox game console today, disclosing for the first time that the machine will run Linux on an Advanced Micro Devices processor and cost $250 to $300. In an exclusive interview last week with GamesBeat, Ataribox creator and general manager Feargal Mac (short for Mac Conuladh) said Atari will begin a crowdfunding campaign on Indiegogo this fall and launch the Ataribox in the spring of 2018. The Ataribox will launch with a large back catalog of the publisher’s classic games. The idea is to create a box that makes people feel nostalgic about the past, but it’s also capable of running the independent games they want to play today, like Minecraft or Terraria.
  • Linux 4.14 + ROCm Might End Up Working Out For Kaveri & Carrizo APUs
    It looks like the upstream Linux 4.14 kernel may end up playing nicely with the ROCm OpenCL compute stack, if you are on a Kaveri or Carrizo system. While ROCm is promising as AMD's open-source compute stack complete with OpenCL 1.2+ support, its downside is that not all of the necessary changes to the Linux kernel drivers, LLVM Clang compiler infrastructure, and other components are living in their upstream repositories yet. So for now it can be a bit hairy to set up ROCm compute on your own system, especially if running a distribution without official ROCm packages. AMD developers are working to get all their changes upstreamed in each of the respective sources, but it's not something that will happen overnight and, given the nature of Linux kernel development, it will still take months to complete.
  • Latest Linux kernel release candidate was a sticky mess
    Linus Torvalds is not noted as having the most even of tempers, but after a weekend spent scuba diving, a glitch in the latest Linux kernel release candidate saw the Linux overlord merely label the mess "nasty". The release cycle was following its usual cadence when Torvalds announced Linux 4.14 release candidate 2, just after 5:00PM on Sunday, September 24th.
  • Linus Torvalds Announces the Second Release Candidate of Linux Kernel 4.14 LTS
    Development of the Linux 4.14 kernel series continues with the second Release Candidate (RC) milestone, which Linus Torvalds himself announced this past weekend. The update brings more updated drivers and various improvements. Linus Torvalds kicked off the development of Linux kernel 4.14 last week when he announced the first Release Candidate, and now the second RC is available, packed full of goodies. These include updated networking, GPU, and RDMA drivers, improvements to the x86, ARM, PowerPC, PA-RISC, MIPS, and s390 hardware architectures, and various core networking, filesystem, and documentation changes.