Virtualisation

A dictionary definition of virtual reads: “in effect, though not in fact.” Applied in an enterprise storage setting, virtualisation commonly means creating what appears – at least to servers and applications – to be a single, large storage resource from what is in fact a composite of many networked storage devices of different capacities and types. The benefit to organisations of creating this single, virtual resource is that it adds much greater storage system ‘intelligence’, expanding their ability to automate data management operations and make them more responsive to changing business requirements.
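To make the idea concrete, here is a minimal sketch, in Python, of how a virtualisation layer might concatenate several physical arrays into one large volume. The class and field names are invented purely for illustration and do not reflect any vendor's implementation.

```python
# Hypothetical sketch: presenting many physical arrays as one virtual volume.
# Names and structure are illustrative, not any product's actual design.

class PhysicalArray:
    def __init__(self, name: str, capacity_gb: int):
        self.name = name
        self.capacity_gb = capacity_gb

class VirtualVolume:
    """Concatenates several arrays into one contiguous address space."""
    def __init__(self, arrays):
        self.arrays = arrays
        self.capacity_gb = sum(a.capacity_gb for a in arrays)

    def locate(self, virtual_gb_offset: int):
        """Map a virtual offset to (array name, local offset) on the device holding it."""
        if not 0 <= virtual_gb_offset < self.capacity_gb:
            raise ValueError("offset outside virtual volume")
        remaining = virtual_gb_offset
        for array in self.arrays:
            if remaining < array.capacity_gb:
                return array.name, remaining
            remaining -= array.capacity_gb

volume = VirtualVolume([PhysicalArray("array-a", 500),
                        PhysicalArray("array-b", 2000),
                        PhysicalArray("array-c", 750)])
print(volume.capacity_gb)   # 3250 -- servers see one large resource
print(volume.locate(600))   # ('array-b', 100) -- the layer resolves placement
```

Servers address the single virtual capacity; the mapping layer, not the application, decides which physical device actually serves each request.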

In effect, virtualisation makes day-to-day storage management far simpler. However, until recently, embarking on a virtualisation project was an extremely complex business, fraught with difficult choices – and risk – for IT managers. But at least some of the more difficult decisions have become less daunting.

Why go virtual?

  • Reduced costs, through improved use of existing storage technology resources.

  • Investment protection, by incorporating existing storage devices into the new virtualised infrastructure.

  • Enhanced business performance, through decreased downtime and increased data availability.

  • Improved service levels, ensuring data availability and decreasing the likelihood of violations in service-level agreements (SLAs).

  • Decreased administrative burden, reducing management costs and enabling the deployment of IT personnel to key strategic projects instead of infrastructure management.

    Source: InTechnology


The issue, for instance, of whether to invest in ‘in-band’ or ‘out-band’ systems is now more or less resolved. Early virtualisation systems required a dedicated virtualisation server to apply management policies to data traffic as it crossed the storage network. This could be done using a device that sat between the server and the storage fabric, directly manipulating data traffic as it passed through. This in-band method was the most sophisticated, offering the greatest scope for applying management policies, but it also introduced latency and was complex to install.

Out-band systems were simpler (although they did require ‘agents’ to be installed on all participating servers), and they processed only meta-data, reducing the bottleneck effect that in-band systems could create.
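As a rough illustration of the architectural difference (all names here are hypothetical, not any product's design): in the in-band model every I/O flows through the virtualisation device, while in the out-band model a server-side agent consults a meta-data service and then reads the storage directly.

```python
# Hypothetical sketch contrasting in-band and out-band virtualisation.
# Nothing here reflects a real product; it only illustrates the two data paths.

class Disk:
    def __init__(self, blocks):
        self.blocks = blocks            # physical block -> data

    def read(self, physical_block):
        return self.blocks[physical_block]

class InBandAppliance:
    """Sits in the data path: every I/O is inspected and forwarded,
    which allows rich policy enforcement but adds per-I/O latency."""
    def __init__(self, mapping, disks):
        self.mapping = mapping          # virtual block -> (disk name, physical block)
        self.disks = disks

    def read(self, virtual_block):
        disk, physical = self.mapping[virtual_block]
        return self.disks[disk].read(physical)      # data flows through the appliance

class OutBandMetadataService:
    """Holds only the mapping; user data never passes through it."""
    def __init__(self, mapping):
        self.mapping = mapping

    def resolve(self, virtual_block):
        return self.mapping[virtual_block]

class ServerAgent:
    """The per-server 'agent' out-band systems required: it looks up
    placement, then performs the I/O directly against the disk."""
    def __init__(self, metadata, disks):
        self.metadata = metadata
        self.disks = disks

    def read(self, virtual_block):
        disk, physical = self.metadata.resolve(virtual_block)
        return self.disks[disk].read(physical)      # direct path, no middlebox

disks = {"a": Disk({0: "alpha"}), "b": Disk({7: "beta"})}
mapping = {0: ("a", 0), 1: ("b", 7)}
print(InBandAppliance(mapping, disks).read(1))                      # 'beta'
print(ServerAgent(OutBandMetadataService(mapping), disks).read(1))  # 'beta'
```

Both paths return the same data; the trade-off is where the mapping work happens: in the middle of every I/O, or once per lookup on the server itself.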

Today, only a handful of pure out-band vendors remain, and most storage virtualisation pioneers, such as DataCore, FalconStor and Softek, have become pure software vendors.

Most of the major storage players, including IBM, Hitachi Data Systems, EMC and Veritas, now also port their virtualisation products to intelligent switches, and most organisations will probably opt for the comfort that comes with investing in such brand-name technology. However, the ability of switches to manage really large volumes of traffic without compromising performance remains in question, and a new generation of smaller suppliers is emerging with a hybrid solution: network storage servers (NSS).

NSS devices are essentially switches that can manipulate flows of data traffic. They differ from other switches, however, in that while they allow Fibre Channel traffic to pass through, they will only switch SCSI traffic. This means that users will still require a Fibre Channel switch to fan out traffic to disk arrays.
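The routing rule that paragraph describes amounts to a single decision per frame. This toy sketch uses an invented frame representation purely to illustrate it, and is not modelled on any NSS vendor's firmware.

```python
# Toy model of the NSS behaviour described above: SCSI traffic is switched
# (and can be virtualised), while other Fibre Channel traffic passes through
# untouched. The frame format and handlers are invented for illustration.

def handle_frame(frame, switch_scsi, pass_through):
    if frame["payload"] == "SCSI":
        return switch_scsi(frame)       # NSS manipulates the data flow
    return pass_through(frame)          # everything else goes straight through

result = handle_frame({"payload": "SCSI"},
                      switch_scsi=lambda f: "switched",
                      pass_through=lambda f: "passed through")
print(result)   # 'switched'
```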

In the meantime, there is enough stability in the present generation of virtualisation products, and enough maturity in the standards, for users to begin the next phase of storage management development: the evolution from data-centric to information-centric management.
