Reinventing the Internet

Filed under: Web

Part of the Internet was broken — for the 76th time that week.

Haugsness was on duty for the Internet Storm Center, the closest thing to a 911 emergency-response system for the global network.

Haugsness wrote up an alert and a suggested solution, and posted it on the Web.

Built by academics when everyone online was assumed to be a "good citizen," the Internet today is buckling under the weight of what is estimated to be nearly a billion diverse users surfing, racing and tripping all over the network.

Hackers, viruses, worms, spam, spyware and phishing sites have proliferated to the point where it's nearly impossible for most computer users to go online without falling victim to them.

Last year, the Carnegie Mellon University CERT Coordination Center logged 3,780 new computer-security vulnerabilities, compared with 1,090 in 2000 and 171 in 1995.

"I'm very pessimistic about it all," said Haugsness, who has worked for the storm center for two years. "There are huge problems and outages all the time, and I see things getting worse."

Now a movement to upgrade the network — to create an Internet 2.0 — is gathering steam. How, or even if, that could be done is a subject of debate. But experts are increasingly convinced that the Internet's potential will never be met unless it is reinvented.

"The Internet is stuck in the flower-power days of the '60s during which people thought the world would be beautiful if you are just nice," said Karl Auerbach, a former Cisco Systems computer scientist who volunteers with several engineering groups trying to improve the Internet.

Many of the bugs in the Internet are part of its top layers of software: the programs that come loaded on new computers or that are sold in retail stores. But some of the most-critical issues were built into the network's core design, written decades ago and invisible to the average user.

"The problem with the Internet is that anything you do with it now is worth a lot of money. It's not just about science anymore. It's about who gets to reap the rewards to bringing safe technologies to people," said Daniel Lynch, 63, who as an engineer at the Stanford Research Institute and the University of Southern California in the 1970s helped develop the Internet's framework.

As the number of users exploded to more than 429 million in 2000 from 45 million in 1995, Lynch remembered watching in horror as hackers defaced popular Web sites and shady marketers began to bombard people's e-mail inboxes with so much spam that real messages couldn't get through.

When the Internet's founders were designing the network in the 1960s and 1970s, they thought a lot about how the network would survive attacks from the outside — threats such as tornadoes, hurricanes, even nuclear war.

What they didn't spend much time thinking about was internal sabotage. Only several hundred people had access to the first version of the Internet, and most knew each other well.

"We were all pals," Lynch said. "So we just built it without security. And the darn thing got out of the barn."

Years passed before the Internet's founders realized what they had created. "All this was an experiment. We were trying to figure out whether this technology would work. We weren't anticipating this would become the telecommunications network of the 21st century."

Internet2, a consortium of mostly academic institutions that has built a network separate from the public Internet, is testing a technology that allows users to identify themselves as belonging to some sort of group.

"You've heard the saying that on the Internet nobody knows you're a dog, and that's of course the problem," said Douglas Van Houweling, president of Internet2 and a professor at the University of Michigan. "Authentication will allow communities to form where people are known and therefore can be trusted."

But there's a trade-off for such security: The network becomes balkanized, with more parts of it closed to most people.

Lynch believes the Internet will never truly be secure, though, because of the diversity of software and devices that run on it. If one has a flaw, others are vulnerable.

And so it is up to people like Haugsness at the Internet Storm Center to keep the network up and running.

Nothing "super bad" so far, Haugsness concluded of this one particular day. All in all, only about a half-dozen documented problems. That might have been considered a disaster a decade ago. But it was a pretty good day for the Internet in 2005.

Full Article.

More in Tux Machines

OpenStack Roundup

  • OpenStack Summit Returns to Austin With Much Fanfare
    Back in July 2010, 75 developers gathered at the Omni hotel in Austin for the very first OpenStack Summit. At the time, OpenStack was in the earliest stages of development. In April 2016, OpenStack returned to Austin in triumph as the de facto standard for private cloud deployment and the platform of choice for a significant share of the Fortune 100 companies. About 7,500 people from companies of all sizes around the world attended the 2016 OpenStack Summit in Austin from April 25 to April 29. In 2010 there were no users, because there wasn't much code running, but by 2016 that had changed. Among the many OpenStack users speaking at the summit were executives from Verizon and Volkswagen Group. While the genesis of OpenStack was a joint effort between NASA and Rackspace, the 2016 summit was sponsored by some of the biggest names in technology today, including IBM, Cisco, Dell, EMC and Hewlett Packard Enterprise. In this slide show, eWEEK takes a look at some highlights of the 2016 OpenStack Summit.
  • A Look Into IBM's OpenStack Meritocracy
    Angel Diaz, IBM vice president of Cloud Architecture and Technology, discusses how Big Blue has earned its place in the OpenStack community.
  • OpenStack cloud’s “killer use case”: Telcos and NFV
    Today, 114 petabytes of data traverse AT&T's network daily, and the carrier predicts a 10x increase in traffic by 2020. To help manage this, AT&T is transitioning from purpose-built appliances to white boxes running open source software. And according to AT&T Senior Vice President of Software Development and Engineering Sarabh Saxena, OpenStack has been a key part of this shift.

Ubuntu 16.04 vs. Clear Linux vs. openSUSE vs. Scientific Linux 7

Here are some extra Linux distribution benchmarks for your viewing pleasure this weekend. Following the release of Ubuntu 16.04 LTS last week, I was running another fresh performance comparison of various Linux distributions on my powerful Xeon E3-1270 v5 Skylake system. I made it a few Linux distributions in before the motherboard faced an untimely death. Not sure of the cause yet, but the motherboard is kaput and thus the testing was ended prematurely. Read more

GhostBSD 10.3 ALPHA1 is now ready for Testing

Yes, we skipped 10.2 for 10.3: since FreeBSD 10.3 was on its way, we thought we should wait for it. This is the first ALPHA development release for testing and debugging of GhostBSD 10.3. Only the MATE edition has been released so far; it is available on SourceForge for the amd64 and i386 architectures. Read more

Leftovers: Ubuntu

  • Ubuntu-based Smartphones And Tablets Sound Good, On Paper, But...Do They Make Any Sense?
    As I previously stated in a recent article, I'm a huge fan of Ubuntu as a desktop operating system. It's friendly, reliable, light on resources and largely virus-free.
  • Elementary OS 0.4 ‘Loki’ expected to be based on Ubuntu 16.04
    Elementary OS 0.4 ‘Loki’ is coming soon; it will be based on Ubuntu 16.04 and bring plenty of new features.
  • BQ Aquaris M10 Ubuntu Edition tablet - The heat is on
    Some investments are financial. Some are emotional. When it comes to Linux on tablets, my motives are mostly of the latter kind. I was super-excited to learn BQ was launching a tablet with Ubuntu, something that I have been waiting for a good solid three years now. We had the phone released last spring, and now there's a tablet. The cycle is almost complete. Now, as you know, I was only mildly pleased with the Ubuntu phone. It is a very neat product, but it is not yet as good as the competitors, across all shades of the usability spectrum. But this tablet promises a lot. Full HD, desktop-touch continuum, seamless usage model, and more. Let us have a look.
  • Kubuntu-16.04 — a review
    The kubuntu implementation of Plasma 5 seems to work quite well. It’s close to what I am seeing in other implementations. It includes LibreOffice rather than the KDE office suite, but most users will prefer that anyway. I’m not a big fan of the default menu, but it can easily be switched to one of the alternative forms. I’ve already done that, and am preferring the “launcher based on cascading popup menus”. If you are trying kubuntu, I suggest you experiment with the alternative formats to see which you prefer.
  • Ubuntu 16.04 LTS Review: Very Stable & Improved, Buggy Software Center, Though
    On almost every occasion that I have tested Ubuntu LTS releases, they have worked better than the non-LTS releases, and this Ubuntu 16.04 LTS, the sixth such release, is no exception. This one is actually even more impressive than the others because it has addressed some security-related issues as well as the subtle, non-critical issues that I mentioned in the review. As far as performance was concerned, the only area where Ubuntu 16.04 LTS clearly fell behind was memory usage, which has increased noticeably. Other than that, those numbers look pretty good to me. The ‘.deb’ file issue with the Software Center is the only major concern I can come up with, but I’m sure it’ll be fixed very soon.