Designing for unanticipated storage requirements

Every IT project starts with a set of requirements. IT architecture decisions and vendor selection are driven by these requirements.

All too often, however, somewhere between the development of initial requirements and the end of the IT lifecycle, unanticipated requirements get added into the mix.

These requirements may include greater capacity, lower I/O response time, higher I/O throughput, or a need for high availability.

The decisions made today will determine whether the chosen architecture can adapt to these changing requirements or will have to be replaced with an entirely new design, and new infrastructure, in the near future.

While price always matters, in times of rapid growth and rapid change, adaptability of the infrastructure may be just as important as initial price. And surprisingly, new approaches that are more adaptable than traditional architectures can provide lower up-front costs as well.

Capacity requirements

Storage capacity requirements continue to grow in almost all organisations.

Capacity planners often estimate 1, 2, or 3-year requirements and purchase enclosures with sufficient drive bays to meet the anticipated future need.

The organisation may fully populate the arrays up front, or buy only enough drives to meet the current capacity requirement and add more to the array as requirements increase.
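To illustrate that planning exercise, here is a minimal sketch of a compound-growth capacity projection; the starting capacity, growth rate, and drive size are assumptions chosen purely for illustration, not figures from any particular deployment.

```python
import math

# Hypothetical illustration: project how many drive bays a planner might
# provision for, given an assumed starting capacity and annual growth rate.

def projected_capacity_tb(current_tb: float, annual_growth: float, years: int) -> float:
    """Compound-growth projection of capacity needed after `years`."""
    return current_tb * (1 + annual_growth) ** years

def bays_required(capacity_tb: float, drive_size_tb: float) -> int:
    """Number of drive bays needed to hold `capacity_tb`, rounding up."""
    return math.ceil(capacity_tb / drive_size_tb)

current_tb = 40.0      # assumed current usable requirement
growth = 0.35          # assumed 35% annual data growth
drive_size_tb = 8.0    # assumed drive size at purchase time

for years in (1, 2, 3):
    need = projected_capacity_tb(current_tb, growth, years)
    print(f"Year {years}: ~{need:.0f} TB -> {bays_required(need, drive_size_tb)} bays")
```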

Buying fully populated arrays avoids the potential disruption of future upgrades, but with rapidly declining drive costs, this approach results in higher up-front cost, lower average utilisation, and higher operating costs.

Buying drives and populating arrays on an as-needed basis can result in the need to write off unamortised assets, as the remaining useful life of the array can be quite short for late-term upgrades.

In addition, with the rapid evolution of drive technologies, availability of 2- and 3-year-old drive types may be limited, and price declines for add-on drives may not keep pace with price declines on drives in new arrays.

Finally, the cost of the integrated array controller and accompanying add-on software can represent a significant portion of up-front cost. When the controller is amortised across a partially populated array, the average cost per GB is greater.
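A hypothetical back-of-the-envelope calculation makes the amortisation point concrete; the controller, drive, and capacity figures below are assumptions for illustration only.

```python
# Hypothetical back-of-the-envelope: how a fixed controller/software cost
# inflates cost per GB when an array is only partially populated.

def cost_per_gb(controller_cost: float, drive_cost: float,
                drives_installed: int, drive_size_gb: float) -> float:
    total_cost = controller_cost + drive_cost * drives_installed
    total_gb = drives_installed * drive_size_gb
    return total_cost / total_gb

CONTROLLER = 25_000.0   # assumed controller + software licence cost
DRIVE_COST = 400.0      # assumed cost per drive
DRIVE_GB = 8_000.0      # assumed 8 TB drives
BAYS = 24

full = cost_per_gb(CONTROLLER, DRIVE_COST, BAYS, DRIVE_GB)
partial = cost_per_gb(CONTROLLER, DRIVE_COST, 8, DRIVE_GB)
print(f"Fully populated ({BAYS} drives):  ${full:.3f}/GB")
print(f"Partially populated (8 drives): ${partial:.3f}/GB")
```

Under these assumed figures, the partially populated array costs roughly two and a half times as much per GB, because the same controller cost is spread over far less capacity.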

I/O response time requirements

Consumers have become increasingly impatient, and they demand to be kept informed.

Shoppers want to complete a transaction as quickly as possible. Transit users want accurate information on traffic conditions and real-time updates on the progress of an airplane, bus, train, or taxi.

In the world of stock trading, decisions are made and transactions completed in microseconds, whereas in the past milliseconds were good enough. In a world that is increasingly self-service, millions of customers are checking inventory, completing purchases, and tracking shipments online.

All of this puts tremendous pressure on response-time requirements for reading, writing, and analysing data.

Throughput requirements

One of the more significant drivers of storage capacity growth is image data.

This can be in the form of higher-definition digital video capture from security cameras or streaming video and image broadcast for marketing applications.

A single retail store may have tens to hundreds of security cameras and a similar number of digital signage displays.

Systems to support these applications may need to scale to tens or hundreds of terabytes, and throughput requirements can reach the Gbps range. A camera upgrade, an increase in frames per second, or an enhanced screen resolution can result in a step-function increase in throughput requirements.
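As a rough illustration, the sketch below estimates aggregate throughput before and after a camera upgrade; the camera count and per-stream bitrates are assumptions, not measured values. Doubling the frame rate or stepping up the resolution multiplies the per-stream bitrate, which is exactly the step-function effect described above.

```python
# Rough illustration of aggregate throughput for video capture.
# Per-stream bitrates are assumed for illustration only.

def aggregate_mbps(cameras: int, mbps_per_stream: float) -> float:
    """Total inbound throughput in megabits per second."""
    return cameras * mbps_per_stream

before = aggregate_mbps(cameras=120, mbps_per_stream=4.0)    # e.g. 1080p at a modest frame rate
after  = aggregate_mbps(cameras=120, mbps_per_stream=16.0)   # e.g. after a 4K / higher-fps upgrade

print(f"Before upgrade: {before:.0f} Mbps (~{before/1000:.1f} Gbps)")
print(f"After upgrade:  {after:.0f} Mbps (~{after/1000:.1f} Gbps)")

# Daily storage footprint at the higher bitrate:
tb_per_day = after / 8 / 1_000_000 * 86_400   # Mbps -> MB/s -> TB over 24 h
print(f"Storage at the higher rate: ~{tb_per_day:.1f} TB/day")
```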

Uptime requirements

As processes become more automated and more integrated, the demand for application uptime increases.

The same is true when organisations decide to virtualise servers and consolidate applications onto fewer physical servers.

With consolidated applications and automated, integrated processes, when a server goes down, everything grinds to a halt.

Many organisations have completely eliminated the manual systems that could serve as a backup in the event of a server failure, and no longer train employees to use them.

Instead, they are designing their infrastructure to be always on. And even when high availability is not a current requirement, new applications developed during the useful life of the IT infrastructure to increase efficiency or create new revenue streams may drive such a requirement.

Designing for adaptability

Modern, software-defined approaches provide greater adaptability in the IT infrastructure.

As an example, software-defined storage solutions that run on general-purpose servers do not require the large up-front investment of a dedicated storage controller.

They can often accommodate a variety of drive types, both internal to the server and externally attached.

As capacity requirements increase, upgrades can often be made in small increments, and advanced caching and tiering capabilities enable companies to meet changing response-time and throughput requirements by adding higher-performing disk drives and memory into the mix.
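Conceptually, tiering comes down to placing frequently accessed data on faster media and demoting cold data to high-capacity drives. The sketch below is a minimal, hypothetical illustration of such a placement policy; the thresholds and tier names are assumptions and do not reflect any specific vendor's implementation.

```python
# Minimal, hypothetical sketch of a tiering decision: place frequently
# accessed extents on faster media, demote cold extents to capacity drives.
# Thresholds and tier names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Extent:
    extent_id: int
    accesses_per_hour: float

def choose_tier(extent: Extent) -> str:
    if extent.accesses_per_hour >= 100:
        return "ssd"           # hot: lowest latency
    if extent.accesses_per_hour >= 10:
        return "fast_hdd"      # warm: higher-performance drives
    return "capacity_hdd"      # cold: large, slower, cheaper drives

workload = [Extent(1, 450.0), Extent(2, 32.0), Extent(3, 0.4)]
for e in workload:
    print(f"extent {e.extent_id}: {choose_tier(e)}")
```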

The incremental investment required to enable high-availability can also be quite small.

In some cases, the hardware infrastructure may already be in place, and all that is required is to turn on data mirroring and advanced server virtualisation capabilities.
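At a conceptual level, synchronous mirroring means a write is acknowledged only after it is durable on two nodes, so either node can serve the data if the other fails. The sketch below is an illustrative simplification under that assumption, not any product's actual replication protocol.

```python
# Illustrative simplification of synchronous mirroring for high availability:
# a write is acknowledged only once both copies are stored, so either node
# can serve the data if the other fails. Not any vendor's actual protocol.

class Node:
    def __init__(self, name: str):
        self.name = name
        self.store = {}   # key -> bytes

    def write(self, key: str, value: bytes) -> bool:
        self.store[key] = value
        return True       # in reality, success only once the write is durable

def mirrored_write(primary: Node, secondary: Node, key: str, value: bytes) -> bool:
    """Acknowledge the client only when both replicas have the data."""
    return primary.write(key, value) and secondary.write(key, value)

a, b = Node("server-a"), Node("server-b")
ok = mirrored_write(a, b, "vm-disk-block-42", b"\x00" * 4096)
print("acknowledged" if ok else "failed")
```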

In other cases, capacity may need to be upgraded and an additional server added, but because previous investments can be leveraged in a software-defined infrastructure, the incremental cost will be less than a full-scale replacement.

In the fast-changing environment in which IT must operate, it is unreasonable to expect that requirements can be accurately predicted over a 4, 3, or even 2-year period.

But IT infrastructure architects can design an approach that will enable them to adapt to unanticipated requirements without requiring mid-term replacements.

Key considerations when making a storage decision include the following questions. Can the storage architecture:

  1. Support non-disruptive capacity upgrades?
  2. Support mixed drive types, taking advantage of rapid price declines?
  3. Support add-on capacity without requiring additional, expensive array controllers?
  4. Support a variety of connectivity performance levels?
  5. Support memory caching, solid state drives, high-performance HDDs and lower-performance, high-capacity drives to deliver a range of performance and cost points within a single architecture?
  6. Support upgrades from non-high-availability to high-availability designs, without requiring replacement of prior investments?

Sourced by Hans O’Sullivan, CEO of StorMagic
