Internet of the future

Security and the Internet have not exactly gone hand-in-hand. One way to stop people from learning the contents of messages would be to disguise the fact that a message was sent at all. The development of so-called ‘chaos communications’ could mean that information is sent as highly irregular waves based on fractals, rather than as the traceable binary code used today. Not only would chaos communications effectively camouflage data, it could also greatly increase the speed at which information is sent.
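
The principle can be sketched in a few lines of code. The toy below hides a message inside a noise-like chaotic carrier generated by the logistic map; a receiver that knows the same map parameters regenerates the carrier and subtracts it. Real chaos-communication systems rely on synchronised analogue oscillators rather than a shared software seed, and nothing here demonstrates the claimed speed advantage; it is purely an illustration of how an irregular signal can camouflage data.

```python
# Toy 'chaotic masking' sketch: the shared parameters (r, x0) stand in for the
# synchronisation that real chaotic transmitters and receivers achieve with
# coupled oscillators.

def logistic_carrier(n, r=3.99, x0=0.6180339887):
    """Generate n samples of a chaotic carrier from the logistic map."""
    samples, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        samples.append(x)
    return samples

def transmit(bits, scale=0.01):
    """Add a small-amplitude version of the message to the chaotic carrier."""
    carrier = logistic_carrier(len(bits))
    return [c + scale * b for c, b in zip(carrier, bits)]

def receive(signal, scale=0.01):
    """Regenerate the same carrier and subtract it to recover the message."""
    carrier = logistic_carrier(len(signal))
    return [round((s - c) / scale) for s, c in zip(signal, carrier)]

message = [1, 0, 1, 1, 0, 0, 1, 0]
masked = transmit(message)          # looks like irregular, aperiodic noise
assert receive(masked) == message   # only a synchronised receiver recovers it
```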

Quantum cryptography, which uses lasers that transmit single photons, or packets of light, one at a time, adds a further layer of security. It takes advantage of one of the principles of quantum mechanics – that any system is irrevocably changed by the act of observing it – to provide a built-in intrusion detection system. What that means is that, even if a hacker gains direct access to the fibre-optic cable carrying a message, he or she cannot examine the data passing through it without changing it – thereby alerting the recipient that the message has been intercepted. In theory, quantum cryptography could also create unbreakable codes.
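
The intrusion-detection property can be illustrated without any quantum hardware. The sketch below simulates a BB84-style exchange in which each bit is encoded in one of two randomly chosen ‘bases’; an eavesdropper forced to measure in a random basis corrupts roughly a quarter of the bits the legitimate parties later compare. It is a statistical toy with made-up parameters, not a cryptographic implementation.

```python
import random

def measure(bit, prep_basis, meas_basis):
    """Return the encoded bit if the bases match, otherwise a random outcome."""
    return bit if prep_basis == meas_basis else random.randint(0, 1)

def error_rate(n_photons=5000, eavesdrop=False):
    errors = compared = 0
    for _ in range(n_photons):
        alice_bit = random.randint(0, 1)
        alice_basis = random.choice("+x")        # two possible encoding bases
        bit, basis = alice_bit, alice_basis
        if eavesdrop:                            # Eve measures in a random basis
            eve_basis = random.choice("+x")      # and resends what she observed
            bit = measure(bit, basis, eve_basis)
            basis = eve_basis
        bob_basis = random.choice("+x")
        bob_bit = measure(bit, basis, bob_basis)
        if bob_basis == alice_basis:             # only matching-basis bits are kept
            compared += 1
            errors += bob_bit != alice_bit
    return errors / compared

print("error rate without eavesdropper:", error_rate())                 # ~0.00
print("error rate with eavesdropper:   ", error_rate(eavesdrop=True))   # ~0.25
```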

Everything will be in the web

The Internet was designed at a time of low bandwidth, when simple data was the only concern and streaming video, voice over IP and other high-bandwidth forms of communication had not been invented. While bandwidth has improved, there is still no way to prioritise data to ensure that voice communications, say, can reserve the majority of the bandwidth. Instead, all data has to take its chances with the rest.

Quality of service (QoS) technology is designed to address this. Currently available only for private networks, QoS ensures that the bandwidth necessary for certain applications is available. The trick is getting it to work on the Internet as well.
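
One common QoS building block on private networks is DiffServ marking: an application tags its packets with a priority value and suitably configured routers treat the marked traffic preferentially. The sketch below shows what that looks like from an application's point of view, assuming a platform that honours the IP_TOS socket option (typically Linux); the address and port are invented. On the public Internet such markings are generally ignored, which is exactly the gap described above.

```python
import socket

EF_DSCP = 46               # 'Expedited Forwarding', conventionally used for voice
TOS_VALUE = EF_DSCP << 2   # the DSCP occupies the top six bits of the ToS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)

# Every datagram sent from this socket now carries the EF marking; routers on
# a QoS-enabled private network can prioritise it, the public Internet will not.
sock.sendto(b"voice payload", ("192.0.2.10", 5004))   # illustrative address/port
```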

Dynamic synchronous transfer mode (DTM), a technology originally developed by Ericsson Telecom and the Swedish Royal Institute of Technology, could provide 100% QoS on existing infrastructure, its inventors claim. But it is not yet a fully approved standard, which could pose equipment-incompatibility problems. There are, however, other standards being developed under the auspices of the Internet Engineering Task Force, which has added QoS features to IPv6 (see box, Six is the Magic Number) as well as to other networking protocols, and which could make technologies such as DTM unnecessary. The race is on to see which technology gets adopted first.


Living life in the fast lane

The Internet of the future will be faster. Researchers at CERN, the European Organisation for Nuclear Research, together with colleagues at Caltech in the US, hold the current Internet speed record: 5.44 gigabits per second while transferring data from Geneva to California – the equivalent of a full-length DVD movie in seven seconds. They achieved this over the same infrastructure the Internet uses today, substituting Fast TCP for the standard TCP.

Both TCP and Fast TCP break messages down into packets and transmit each packet in turn. The main difference lies in how they respond to glitches in the network. Standard TCP has no built-in mechanism for monitoring network performance: it only slows down once packets are actually lost. Fast TCP, on the other hand, continually tracks how long packets take to reach their destination and how long the acknowledgments take to come back. As a result, it can raise or lower transmission speeds far more efficiently to take account of glitches or improved bandwidth.
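
The flavour of that delay-based adjustment can be captured in a few lines. The sketch below follows the general shape of the window update described in the Fast TCP literature, in which the sending window is adjusted using the ratio of the smallest observed round-trip time to the latest one; the constants and numbers are purely illustrative.

```python
# Delay-based window adjustment: the congestion window is tuned from measured
# round-trip times rather than waiting for packet loss.

def update_window(w, base_rtt, current_rtt, alpha=200, gamma=0.5):
    """
    w           -- current congestion window (packets)
    base_rtt    -- smallest RTT observed (an estimate of propagation delay)
    current_rtt -- latest measured RTT (grows as queues build up)
    alpha       -- target number of packets to keep queued in the network
    gamma       -- smoothing factor
    """
    target = (base_rtt / current_rtt) * w + alpha
    return min(2 * w, (1 - gamma) * w + gamma * target)

w = 100.0
# RTTs close to the propagation delay: the window grows quickly...
print(update_window(w, base_rtt=10.0, current_rtt=10.5))   # ~198 packets
# ...but as queueing delay rises, growth backs off without any packet loss.
print(update_window(w, base_rtt=10.0, current_rtt=40.0))   # ~162 packets
```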

The main drawback of Fast TCP is the lack of industry support for it – no operating system or hardware manufacturer has yet committed to adopting it. Whether support eventually comes from vendors building Fast TCP into their operating systems, third parties creating add-on software, or IT managers downloading and installing it themselves remains to be seen.

Six is the magic number

The world’s supply of Internet protocol (IP) addresses is running out. The current standard, version four (IPv4), can cope with only around four billion addresses. But the number of devices that can access the Internet has increased dramatically over the last few years, as web-enabled phones, wireless laptops, PDAs and even IP wristwatches have emerged. And ‘smart’ products, such as fridges that order food when stocks run low and cars that receive information about parking spaces, are coming.

At the current rate, IPv4 addresses will be used up in 2005.

Fortunately, there is a solution: IP version six. As well as increasing the number of IP addresses to about 340 billion billion billion billion – more than the number of grains of sand in the Sahara desert – IPv6 offers a number of other benefits, including built-in security and encryption, self-configuration, and quality of service (QoS) improvements.
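
The arithmetic behind those figures is straightforward: IPv4 addresses are 32 bits long, IPv6 addresses are 128 bits. A quick check using Python's standard ipaddress module (the address shown comes from the range reserved for documentation):

```python
import ipaddress

print(f"{2**32:,}")     # 4,294,967,296 -- the roughly four billion IPv4 addresses
print(f"{2**128:.2e}")  # ~3.40e+38     -- the '340 billion billion billion billion'

addr = ipaddress.ip_address("2001:db8::1")   # 2001:db8::/32 is reserved for documentation
print(addr.version)    # 6
print(addr.exploded)   # 2001:0db8:0000:0000:0000:0000:0000:0001
```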

Most major operating systems now support IPv6, as do a growing number of routers and other pieces of networking equipment. But the standard is not wholly backwards compatible, which has slowed its rate of adoption. Also, certain network configurations will no longer work under IPv6 and will need to be rethought by enterprises. And for the Internet to continue to reach all parts of the world, all parts of the world will need to start using IPv6. But, despite these problems, a choice will need to be made – soon.

The web tailored to the user

Despite developers’ best efforts, the Internet remains largely an impersonal place. Better categorisation of web pages would help to improve the searching process and thus personalise the experience. But with more than three billion web pages now online, even thousands of web-cataloguing computers can take days to rank new ones.

Google says that a ranking system that produced results personalised for the individual web user would take some 5,000 computers about five days to complete.

But as advances in algorithms continue, topic-based searching should at least become possible. What would help Google and other search engine companies is for web pages to carry machine-readable descriptions of themselves. This so-called ‘semantic web’, now being developed by the World Wide Web Consortium (W3C), uses embedded descriptions based on XML and the resource description framework (RDF) to give search engines a clear understanding of a web page beyond just the words used in the text. Armed with this information, search engines could offer far better results, personalised to the user’s preferences. All that is needed now is for the standards to be adopted to the point where search engines recognise semantic web information as readily as they understand current web pages.
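
As a rough illustration of the idea, the sketch below attaches a handful of machine-readable statements to a made-up page address using the third-party rdflib package and the Dublin Core vocabulary, then serialises them as RDF/XML. This is the kind of metadata a semantic-web-aware crawler could index directly instead of guessing a page's topic from its text.

```python
from rdflib import Graph, Literal, Namespace, URIRef

DC = Namespace("http://purl.org/dc/elements/1.1/")   # Dublin Core vocabulary

g = Graph()
page = URIRef("http://example.org/articles/future-internet")   # illustrative URL
g.add((page, DC.title, Literal("The Internet of the future")))
g.add((page, DC.subject, Literal("networking")))
g.add((page, DC.language, Literal("en")))

# Serialised as RDF/XML, these statements say what the page is about in a form
# a search engine can read directly.
print(g.serialize(format="xml"))
```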


What if the web talked back?

The web was never intended to be simply the world’s biggest library. According to its inventor, Tim Berners-Lee, the original vision had been to design a “creative and collaborative space where people could build things together”. While blogging and instant messaging (IM) have captured some of that intent, truly collaborative working remains some way off.

‘Sparrow Web’, which is being developed by researchers at Xerox, promotes a different genre of web page: the community-shared page, which can be modified or added to by any interested contributor as easily as they can read the page.

Another development is virtual whiteboarding, in which different users ‘draw’ on a shared window. This is already available today in certain programs. And improved bandwidth will make audio-based, and potentially video-based, collaboration possible.

But a technology called the session initiation protocol (SIP) has the potential to create true Internet-based collaboration. All SIP users have SIP addresses. When they want to call another SIP user, they send an invite request over the Internet to the recipient’s address. This request contains the caller’s preferred media types and formats. The sender’s and recipient’s computers can then negotiate the best mode of communication for both parties. If the recipient is not available on their usual phone, for example, his or her system will redirect the call to another phone, an IM session or an email, depending on the sender’s preferences.
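
To make that concrete, the sketch below shows roughly what a bare-bones SIP INVITE looks like on the wire and sends it over UDP to a hypothetical proxy address. Every name, tag and address is invented, and a real user agent would add further headers plus an SDP body listing the preferred media types and formats mentioned above.

```python
import socket

# A heavily trimmed INVITE; the tags, Call-ID and addresses are invented.
invite = (
    "INVITE sip:bob@example.com SIP/2.0\r\n"
    "Via: SIP/2.0/UDP alice-pc.example.org:5060\r\n"
    "From: <sip:alice@example.org>;tag=1928301774\r\n"
    "To: <sip:bob@example.com>\r\n"
    "Call-ID: a84b4c76e66710@alice-pc.example.org\r\n"
    "CSeq: 1 INVITE\r\n"
    "Contact: <sip:alice@alice-pc.example.org>\r\n"
    "Content-Length: 0\r\n"
    "\r\n"
)

# Sending it to the recipient's proxy (hypothetical address) starts the
# negotiation: the proxy can forward the call, redirect it to another device,
# or hand it off to IM or email according to the parties' preferences.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(invite.encode("ascii"), ("192.0.2.20", 5060))
```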

Is it here already?

The Internet was originally developed for academics and the US government, but ever since its worldwide adoption, its original users have been developing a better, faster version – which they intend to keep for themselves.

A group of 205 US universities, along with various government agencies and corporate partners, has clubbed together to create ‘Internet2’, a new network and software combination capable of giving its users 100Mbps Internet2 access with assured quality of service (see box, Everything will be in the web) for video, voice and audio traffic.

When Internet2 technologies are mature enough, they will be handed down to ordinary Internet users and a little bit of the future will be theirs.


Quotes

“Right now, we’ve got 3% of what one could put on the web and do with it. So there’s still a lot of work to be done.”
Robert Cailliau, co-inventor of the web

“Even the little network, which will exist in most homes 20 years from now, will just be a part of the Internet.”
Steve Ballmer, president and CEO, Microsoft
