The evolution of virtualisation

The rapid proliferation of virtualisation in recent years, growing from 40% of server workloads in 2012 to nearly 80% in 2016, makes it seem like a very recent technology. But it’s not quite the overnight success it appears to be.

For instance, did you know the first hypervisor providing virtualisation ran on IBM’s one-off CP-40 research system in 1967? A number of virtual machine technologies followed in the 1970s and 1980s, many of them from IBM, but it was only with the emergence of VMware and the launch of ESX in 2001 that virtualisation began its heady ascent.

During virtualisation’s long gestation period, storage was also evolving. The first disk array with integrated cache for the mainframe arrived in 1990 and kick-started the billion-dollar market for disk-centric block storage.

The versatility of block level storage made it usable for almost any kind of application, including file storage, database storage and virtual machine file system volumes.

With the digitalisation of the masses and the resultant explosion in data from digital cameras, camcorders, MP3 players, laptops and smartphones, file storage became more attractive because most users only needed a simple centralised place to store files and folders.

In addition, NAS devices, which store data at the file level, provide a lot of space at a much lower cost than more complex block storage.

The limitations of block and file storage technologies were exposed by the arrival of server virtualisation as a mainstream technology in the enterprise and the rise of cloud technology.


Designed for a physical world decades before the arrival of virtualisation, block and file storage were ill-equipped to support virtualisation.

Virtual environments generate far more random I/O patterns than physical ones, which can seriously choke hard disk storage.

While virtualised environments can scale to tens of thousands of virtual servers, each generating its own I/O stream, disk-centric storage can’t keep up.
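
To see why, consider the "I/O blender" effect: each VM may read its own virtual disk sequentially, but once the hypervisor multiplexes those streams onto shared storage, the array sees an effectively random access pattern. A minimal sketch of the effect in Python, with hypothetical block counts and stream sizes:

```python
import random

def interleave_vm_streams(num_vms=8, requests_per_vm=5):
    """Simulate the 'I/O blender': each VM reads its own disk region
    sequentially, but the shared array sees the streams interleaved."""
    # Each VM reads sequentially within its own 1,000-block region.
    streams = [
        [(vm, vm * 1000 + offset) for offset in range(requests_per_vm)]
        for vm in range(num_vms)
    ]
    # The hypervisor multiplexes the streams; arrival order is arbitrary.
    mixed = [request for stream in streams for request in stream]
    random.shuffle(mixed)
    return mixed

for vm, block in interleave_vm_streams()[:10]:
    print(f"VM {vm} -> block {block}")
# Each stream is sequential on its own, but the array sees large,
# effectively random jumps between blocks: costly seeks on spinning disk.
```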

Flash storage technology was adopted to meet the demand for higher I/O performance: it could achieve latencies up to 20 times lower than disk and deliver tens of thousands of IOPS, while offering high density and low power consumption.

Adding a flash layer to create a hybrid system helped bypass the storage bottleneck and address the performance issues arising from the increased workloads and demands of virtualisation and cloud computing. But while flash can put a lot of IOPS at an organisation’s disposal, it can only do so if it is put to work in the right places.
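
As a rough illustration of putting flash "in the right places", a hybrid array keeps only the hottest blocks on a small flash tier and serves cold data from disk. A minimal sketch, assuming a simple LRU promotion policy (the block numbers and tier size here are hypothetical):

```python
from collections import OrderedDict

class HybridTier:
    """Toy hybrid array: a small LRU flash cache in front of slow disk.
    Hot blocks are served from flash; cold blocks fall back to disk."""

    def __init__(self, flash_blocks=4):
        self.flash = OrderedDict()          # block -> data, in LRU order
        self.capacity = flash_blocks

    def read(self, block):
        if block in self.flash:             # flash hit: far faster than disk
            self.flash.move_to_end(block)
            return f"flash:{block}"
        data = f"disk:{block}"              # miss: fetch from spinning disk
        self.flash[block] = data            # promote the hot block to flash
        if len(self.flash) > self.capacity:
            self.flash.popitem(last=False)  # evict the least recently used
        return data

tier = HybridTier()
for blk in [1, 2, 1, 3, 4, 5, 1]:           # block 1 stays hot, so it stays on flash
    print(blk, tier.read(blk))
```

Real hybrid arrays use far more sophisticated heat tracking, but the principle is the same: flash capacity is scarce, so the system must keep the working set there.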

In an increasingly virtualised data centre, one way to improve performance in the long term is to have VM-level visibility as well as VM-level manageability.

VM-aware storage (VAS) addresses the mismatch between storage and virtualisation. It offers direct visibility into VMs, enabling VM-level analytics that replace guesswork with precision and automation, while eliminating the root cause of storage pain.

With VM-level visibility, storage admins gain the control, insight and agility to cut out guess-based planning and complex troubleshooting. End-to-end visibility shows how latency breaks down across the host, network and storage, allowing users to solve problems in a few clicks.
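
To make that concrete, here is how a per-VM latency breakdown could be reasoned about; the VM names, layers and figures below are purely illustrative, not Tintri’s actual tooling or API:

```python
# Hypothetical per-VM latency samples in milliseconds, broken down by layer.
vm_latency = {
    "web-01": {"host": 0.4, "network": 0.3, "storage": 1.1},
    "db-02":  {"host": 0.2, "network": 0.5, "storage": 7.9},
    "ci-03":  {"host": 1.6, "network": 0.2, "storage": 0.8},
}

for vm, parts in vm_latency.items():
    total = sum(parts.values())
    worst = max(parts, key=parts.get)       # which layer dominates latency?
    print(f"{vm}: total {total:.1f} ms, bottleneck = {worst} ({parts[worst]:.1f} ms)")
# db-02 is storage-bound while ci-03 is host-bound: a VM-level breakdown
# points straight at the layer that needs fixing.
```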

Sourced from Mark Young, director of systems engineering, Tintri
