The main purpose of a data centre is to run applications and store data within an enterprise or the cloud. Video streaming applications, together with video gaming and business applications, have expanded rapidly – reaching levels of adoption beyond any initial expectation.
As a result, the amount of data that is being processed has grown significantly, fuelled by a global economy that requires data access at all times from all locations.
As the volume of data to be processed grows, storage has become the most critical, and most vulnerable, element of the data centre. Indeed, the data that serves today's business and cloud applications needs to be stored securely and accessed efficiently.
Layer on top of this the need for agile response to changing user and application needs and it’s easy to understand why a well-designed storage-area network is essential to the operation of all data centres.
Organisations need to take a closer look at how to build and support the infrastructure to access the most important of digital assets – data.
Storage virtualisation is the amalgamation of multiple network storage devices into what appears to be a single storage unit. However, choosing these storage components for virtual infrastructures can be challenging.
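The idea of presenting multiple devices as one storage unit can be illustrated with a toy sketch. The class and method names below are invented for this example, and mutable byte buffers stand in for real devices; a production virtualisation layer would, of course, sit far lower in the stack.

```python
# Toy illustration of storage virtualisation: several small backing
# stores are presented to the caller as one contiguous volume.
class VirtualVolume:
    def __init__(self, backing_stores):
        # Each backing store is a bytearray standing in for a device.
        self.stores = backing_stores

    def _locate(self, offset):
        """Map a logical offset to (device, local offset)."""
        for store in self.stores:
            if offset < len(store):
                return store, offset
            offset -= len(store)
        raise IndexError("offset beyond end of virtual volume")

    def write(self, offset, data):
        for i, byte in enumerate(data):
            store, local = self._locate(offset + i)
            store[local] = byte

    def read(self, offset, length):
        out = bytearray()
        for i in range(length):
            store, local = self._locate(offset + i)
            out.append(store[local])
        return bytes(out)

    @property
    def capacity(self):
        return sum(len(s) for s in self.stores)

# Two 1 KiB "devices" appear to the client as a single 2 KiB volume:
vol = VirtualVolume([bytearray(1024), bytearray(1024)])
vol.write(1020, b"spans both devices")   # straddles the device boundary
print(vol.capacity)                       # 2048
print(vol.read(1020, 18))                 # b'spans both devices'
```

The client addresses one logical offset space; the `_locate` step hides which physical device actually holds the byte, which is precisely the abstraction a virtualised storage pool provides.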
Traditional direct-attached storage (DAS) deployments have historically been preferred for their low cost of ownership. However, as applications have become more complex and the need for flexibility more pressing, there has been a migration towards more centralised approaches.
For this reason, network-attached storage (NAS) and storage area networks (SAN) have become predominant, as they also help reduce the amount of hardware and cabling infrastructure.
DAS is the most basic level of storage, consisting of a typical storage device that attaches directly to a server or workstation. Although simple to design, DAS carries an increased risk of downtime, low utilisation of storage capacity, and limited scalability – not ideal for any business anticipating rapid data growth.
These centralised architectures also support faster data-transfer speeds than DAS.
A SAN connects storage devices, such as disk arrays and tape libraries, allowing clients and applications running over the network to access the storage area. SAN topologies help increase storage capacity and simplify storage administration, as multiple servers can share space on the same storage disks.
It has become widely adopted due to its scalability, as well as its ability to provide high-speed throughput with low-latency Input/Output (I/O).
Fundamentally, a SAN increases storage capacity, simplifies storage administration and adds flexibility to the network, making planning and implementation easier to achieve. The downside is that, because a SAN presents raw block devices, only one server can safely access a given volume at a time unless a clustered file system is used.
NAS, on the other hand, allows multiple clients to access the same data simultaneously. Essentially a standard server running a slimmed-down operating system, its only purpose is to supply file-based data storage services to other devices on the network.
These services are delivered over file-level protocols such as the network file system (NFS) or server message block (SMB), also known as the common internet file system (CIFS).
NAS is therefore ideal for simple, cost-effective and fast data access for multiple clients. The downside of NAS is that not all applications support it: most clustering solutions are designed to run on a SAN because they require a block-level storage device rather than file-based access.
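The block-level versus file-level distinction above can be sketched in a few lines. This is a minimal illustration, not a real SAN or NAS client: a scratch file stands in for a raw volume (such as a SAN-attached device), and all names here are invented.

```python
import os
import tempfile

BLOCK_SIZE = 512  # a common logical block size

# --- Block-level access (the SAN model): data is addressed by block
# number. A scratch file stands in for a raw volume (hypothetical).
volume = os.path.join(tempfile.mkdtemp(), "volume.img")
with open(volume, "wb") as vol:
    vol.write(b"\x00" * (BLOCK_SIZE * 8))  # an 8-block volume

def write_block(block_no: int, payload: bytes) -> None:
    """Write one fixed-size block at its numbered position."""
    with open(volume, "r+b") as vol:
        vol.seek(block_no * BLOCK_SIZE)
        vol.write(payload.ljust(BLOCK_SIZE, b"\x00"))

def read_block(block_no: int) -> bytes:
    with open(volume, "rb") as vol:
        vol.seek(block_no * BLOCK_SIZE)
        return vol.read(BLOCK_SIZE)

write_block(3, b"addressed by block number, not by name")
print(read_block(3).rstrip(b"\x00").decode())

# --- File-level access (the NAS model): data is addressed by path; the
# server maps names to blocks on the client's behalf.
shared = os.path.join(os.path.dirname(volume), "report.txt")
with open(shared, "w") as f:
    f.write("addressed by file name")
print(open(shared).read())
```

The clustering caveat follows directly: block clients each maintain their own view of how blocks form files, so two uncoordinated servers writing the same volume would corrupt it, whereas a file server arbitrates every request itself.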
In addition to traditional storage architectures, cloud computing is becoming increasingly prevalent and raises important questions and challenges for the storage community.
Computing services, platforms, software and applications that would traditionally have been located on an organisation’s network are migrating from the enterprise to the cloud.
The goal is to enable access to both computing power and applications wherever and whenever they are needed, which represents a fundamental change to the enterprise IT model.
Based on highly virtualised infrastructure, cloud storage has a number of advantages over traditional data storage: it scales to an organisation's needs, offers effectively unlimited capacity, and enables users to access data from any location.
Businesses also pay only for the storage they use. This is not necessarily cheaper, but it shifts investment that was previously part of the capital expenditure (capex) budget into the corresponding operational expenditure (opex) budget.
However, apart from the challenges of making such a fundamental change to the IT operating model, there is one other major concern that cloud service providers need to overcome in order to speed up adoption – data security. Storing data in the cloud raises all sorts of concerns about where it is located, how access is controlled, and what happens if the data is lost.
Ultimately, organisations must consider all criteria when choosing a storage solution. Digital assets will only continue to grow, making it paramount for storage infrastructure to deliver cost-effective expansion and agile scalability.
Structured cabling solutions that allow for easy migration and expansion of the storage investment should be considered to avoid adopting a ‘rip-and-replace’ approach.
Indeed, the main goal is to support all current applications while providing a seamless transition path to future technologies.
Source: Alastair Waite, TE Connectivity