Photonics: IT’s bright future

The idea of using light to transmit data was pioneered by Alexander Graham Bell, the inventor of the telephone. In 1880, Bell and his lab assistant Charles Tainter transmitted the world’s first wireless telephone message over 213 metres, using a thin mirror that vibrated when the user spoke, modulating a beam of sunlight. Bell called the ‘photophone’ his “greatest invention”.

Useful applications of light as a medium for communication emerged 100 years later with the advent of lasers, which could produce a compact beam of light, and optical fibres, fine strands of glass that kept the light confined and prevented the signal dispersing as it would when travelling through open air.

Since then, fibre optics has become the standard for sending data over long distances, because firing photons down glass strands at the speed of light (photonics) requires less energy than sending electrons through copper cables (electronics). The resistance of a copper wire increases with its length, so the longer it is the more energy is wasted as heat. Above certain network transmission speeds, this becomes untenable.
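
A rough, back-of-the-envelope sketch of that trade-off (the symbols here are illustrative, not figures from anyone quoted in this article): the resistance of a uniform copper conductor is

    R = ρ · L / A

where ρ is the resistivity of the metal, L the length of the run and A its cross-sectional area, and the power dissipated as heat for a given current I is

    P_loss = I² · R

so, all else being equal, doubling the length of a copper run roughly doubles the energy wasted as heat. Light travelling in glass fibre is attenuated too, but the loss per kilometre is far lower and there is no comparable resistive heating.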

“As you raise transmission speeds, it takes an increasing amount of power to drive a signal across an electrical line,” explains Karen Liu, telecommunications analyst at Ovum. “To send a signal, you have to make the voltage on that line go up very quickly and come back down again very quickly, and the faster you do that, the more heat you generate.”

As well as increasing the heat emitted and power required to send data, driving up electronic transmission speeds also introduces errors and noise. This degrades the signal as it travels along the electronic channel, meaning that sophisticated signal processing is required at the receiving end to get the data back out.
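
Liu's point can be captured with the standard first-order model for the dynamic power needed to drive an electrical line (a textbook approximation, not a figure supplied by Ovum): treating the line as a capacitance C that must be charged to voltage V and discharged again f times per second gives

    P_dynamic ≈ C · V² · f

so dissipation grows in direct proportion to the switching rate, and faster still if higher voltages are needed to keep the signal clean at the far end.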

Fibre in the data centre

Data centre networks, meanwhile, are crying out for faster transmission speeds as the amount of data passed between servers and storage devices increases, so the use of fibre-optic cable inside data centres seems a logical fit.

However, according to Liu, the need for optical data centre networks has to date been forestalled by innovations in signal processing. “People suggested that we’d have to move to photonics when [data centre network] transmission speeds passed one gigabit (Gb) per second,” she says. “But signal processing has improved with every turn of the crank of Moore’s law, so we’ve ended up using that to compensate.”

Even so, as network speeds increase, the case for fibre in the data centre becomes more compelling. “With each speed increment, the distance you can go with electronic transmission gets shorter, so the distance at which optical connection is favourable gets shorter too,” she says.

Another advantage of optical fibre is that, relative to its data transmission capacity, it occupies less space than copper wire. Achieving high speeds over copper may therefore require more cabling.

Fibre-optic cabling is therefore standard within many high-end data centres, where space is at a premium and management complexity must be minimised. Hosting service provider PEER1, for example, uses fibre-optic interconnects throughout its 40Gb core data centre network.

“Fibre is standard across our infrastructure,” says data centre manager Mike Duncan. “The last thing we need is wasted space or difficult cable management. You can set 72 pairs of fibre into one unit of rack space, and there’s no way you could do that with copper. Once you get up to 10Gb copper and higher, it’s quite challenging to manage, especially with the rack densities that I have to use here.” The costs of optical fibre are also favourable compared to copper, he adds.

Ewan Wilson, managing director of HT Data, which supplies fibre optics to PEER1, underlines the basic space savings available through using fibre rather than copper interconnects. “One of the products that we supply to the PEER1 data centre has got 576 fibres in one unit,” he explains. “If you wanted to do the same in copper, you’d end up using 12 units, taking up space that could otherwise be sold to customers.”

PEER1 has recently launched a new cloud service in partnership with another provider, which Duncan declines to name. The company in question insisted on using copper cable to connect the servers that support the cloud service. This means that the servers have to work harder to send a signal down the cable, increasing their heat output.

It has therefore been necessary to split the servers across three racks – increasing the floor space they occupy – just so that the heat produced by the copper can be managed. “It caused us issues with cabling and meant that we had to split it across three racks to exhaust that heat,” explains Duncan.

Transmission revamp

Meanwhile, fibre-optic technologies continue to deliver unprecedented speeds for transmission networks. One early adopter is Janet, the shared services network for the UK’s higher education sector.

Rob Evans, chief technical adviser at Janet, explains: “As a publicly funded research and education network, one of the roles Janet sees for itself is to be a bit of an early adopter, to use the money we’re given to try new things, test them out before the market has a chance to and report back on that in an open manner.”

Janet is midway through a next-generation network deployment named Janet6. Last year, Evans oversaw the implementation of four entirely optical 100Gb Ethernet circuits, running from London’s Docklands to Reading, based on equipment from networking company Juniper and optical specialist Ciena. The plan is to roll out 36 more 100Gb circuits across the UK.

Evans says that the data transmission speeds made possible by this network will boost collaboration in the scientific community. For example, collaborative research on the vast quantities of data produced by the Large Hadron Collider is currently hamstrung by network transfer speeds between universities. It is not entirely fanciful to suggest that the new network capacity may lead to new discoveries in particle physics.

However, even optical fibres run into a speed barrier. “It’s a problem that we’ve been engaging with the optical vendors on,” Evans says.

Polar express

In order to cram more information through a glass fibre, engineers such as Evans can send light down it at different polarisations – the orientation in which the light waves oscillate as they travel along the cable. At the receiving end, a separate data stream can be read off each polarisation, multiplying the amount of information that can be sent down a single fibre.
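
As an illustration of how this multiplies capacity (the article does not specify the exact modulation used on Janet’s circuits), a typical 100Gb coherent optical link combines two orthogonal polarisations with QPSK modulation, which carries two bits per symbol on each polarisation:

    2 polarisations × 2 bits per symbol × roughly 25 billion symbols per second ≈ 100Gb per second

Doubling up the polarisations therefore doubles the data rate without consuming any extra optical spectrum.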

Using tricks like this, Evans says that future, faster transmission will pack information into light waves more efficiently, using proportionately less of the frequency spectrum to transmit messages. “100Gb fibre uses 50GHz of optical spectrum on fibre currently,” he explains, “but at 400Gb, they’re probably going to be able to transmit it using only 150GHz, and at 1Tb, maybe just 200GHz.”
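
Evans’s figures translate into steadily better spectral efficiency, the number of bits squeezed through each hertz of optical spectrum. The arithmetic below simply restates his estimates:

    100Gb/s over 50GHz  = 2 bits per second per hertz
    400Gb/s over 150GHz ≈ 2.7 bits per second per hertz
    1Tb/s over 200GHz   = 5 bits per second per hertz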

Using optical tweaks to do more with less will keep raw transmission speeds rising for some time, Evans says, but optical speed increases will start to level off as fibre transmission reaches a fundamental physical cut-off known as the Shannon limit.

“The Shannon limit is the absolute physical limit to the amount of data you can get through an information system given a certain signal-to-noise ratio,” Evans says. To get around the limit, he says, the optical fibre will need to be redesigned so it loses less information, giving optics a higher signal-to-noise ratio.
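
For a channel of bandwidth B and signal-to-noise ratio SNR, the Shannon limit on capacity is

    C = B · log₂(1 + SNR)

As a purely illustrative example (these numbers do not describe Janet’s fibre), a 50GHz optical channel with an SNR of 15 – roughly 12dB – could carry at most 50GHz × log₂(16) = 200Gb/s. Pushing the data rate beyond that point requires either more bandwidth or, as Evans suggests, a cleaner fibre that delivers a higher signal-to-noise ratio.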

But 100Gb Ethernet is rare today, and the tricks and tweaks that let optical scientists push more data through the same fibre remain the province of research and development rather than real-world commercial adoption. Evans thinks it will be four or five years before 100Gb Ethernet sees wider uptake.

While fibre optics can move information around the world and within the data centre more quickly and efficiently than copper cabling, using light inside processors remains a distant goal as light is inherently difficult to manipulate.

“Photonics is actually very bad for processing because you can’t define what the photons are doing,” explains Ovum’s Liu. “They just travel, literally at the speed of light, and it’s very difficult to slow them down.”

In today’s systems, data output from electronic components is fed through a transceiver to produce optical signals suitable for fibre-optic transmission. However, the emerging field of silicon photonics seeks to bring electronics and optics closer together.

There are a number of possible ways to do this. Some researchers have attempted to generate light directly on the chip, but so far this has proved challenging, Liu says. Another approach is to use the output of microprocessors to modulate light from miniature lasers outside the chip, encoding data onto the beam.

“Light generation is analogous to the electrical power supply on a normal silicon chip: you don’t generate your 3.3 volts right on the chip, you send it in and then you do something with it,” she says. “Therefore, you could have an external laser, perhaps shared across many interconnects, and modulate the light.”

Successfully integrating optical transmission into silicon systems could deliver huge accelerations in system performance while reducing the required space, power and cooling. This is just one more reason why the future of computing rests with the photon. 
