Multi-cloud is the right tool for the job

It’s a multi-cloud world. Why?

In the enterprise, multi-cloud is increasingly the go-to cloud strategy. According to the 2018 State of the Cloud report by RightScale, 81% of organisations have a multi-cloud strategy and are managing workloads in both public and private clouds.

For organisations considering going multi-cloud (or making a strategy out of what has already happened), it’s important to balance the common arguments for it against an objective analysis. In a recent report, Cloud Academy examined the viability of some of the most common arguments for multi-cloud. Here’s what they learned.

Reason 1: Avoiding vendor lock-in

Fear of vendor lock-in is not uncommon in the enterprise. For some companies, avoiding a single provider is a core business requirement; for others, it’s about ensuring portability to another framework or platform. Ultimately, avoiding vendor lock-in at all costs means passing up some unique vendor functionality.

>See also: Organisations need a multi-cloud strategy ‘urgently’ – IDC

“Avoiding lock-in isn’t a binary choice,” the report said, “it’s about degrees of tolerance and design decisions.” One way to manage that tolerance is to abstract away vendor-specific functionality. Here are two simple examples:

• Code level: Accessing functionality such as blob storage through an interface that could be implemented using any storage back-end (local storage, S3, Azure Storage, Google Cloud Storage, among others). In addition to the flexibility it provides during testing, this makes it easier for developers to port to a new platform if needed.
• Containers: Containers and their orchestration tools are additional abstraction layers that can make workloads more flexible and portable.
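The code-level abstraction described above can be sketched as follows. This is a minimal, hypothetical interface, not any vendor's SDK: the `BlobStore` class and its in-memory backend are illustrative stand-ins for adapters over S3, Azure Storage, or Google Cloud Storage.

```python
from abc import ABC, abstractmethod

# Hypothetical blob-storage interface: application code depends only on
# this abstraction, never on a vendor SDK.
class BlobStore(ABC):
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class LocalBlobStore(BlobStore):
    """In-memory backend, handy for tests; an S3, Azure Storage, or GCS
    adapter would implement the same two methods."""
    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data

    def get(self, key: str) -> bytes:
        return self._blobs[key]

def save_report(store: BlobStore, name: str, body: bytes) -> None:
    # Business logic is written against the interface, so porting to a
    # new platform means writing one new adapter, not rewriting callers.
    store.put(f"reports/{name}", body)

store = LocalBlobStore()
save_report(store, "q1.csv", b"revenue,100")
print(store.get("reports/q1.csv"))
```

Swapping the backend then becomes a one-line change at composition time, which is exactly the flexibility during testing the report points to.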

As a best practice, organisations should weigh the pros and cons of depending too heavily on any single platform or tool set.

Reason 2: High availability

With the average cost of downtime estimated at $8,850 per minute, businesses can’t afford to risk system failure. By design, many public cloud services are already replicated across different geographic zones to ensure availability. However, outages happen.

>See also: A better private cloud means a better multi-cloud strategy

Marketing teams would have you believe that running a service on AWS with a failover to Azure, say, is a reasonable way to avoid downtime. However, given the technology available today, operating across multiple clouds is largely unnecessary if teams employ best practices at the single-cloud level.

Take storage failover (which, as some have argued, could have reduced the business impact of the February 2017 AWS S3 outage). Cross-cloud replication would have added significantly more complexity when simple cross-region replication would have worked.
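To illustrate the single-cloud alternative, the sketch below builds an S3 cross-region replication configuration in the shape boto3's `put_bucket_replication` expects. The bucket names and IAM role ARN are placeholders, and in practice both buckets must have versioning enabled; the actual API call is shown only in a comment since it requires live AWS credentials.

```python
# Sketch of S3 cross-region replication, the single-cloud failover the
# report favours over cross-cloud replication. All names are placeholders.

def replication_config(role_arn: str, dest_bucket: str) -> dict:
    """Build a replication configuration for boto3's put_bucket_replication."""
    return {
        "Role": role_arn,
        "Rules": [
            {
                "ID": "failover-copy",
                "Prefix": "",            # replicate every object
                "Status": "Enabled",
                "Destination": {"Bucket": f"arn:aws:s3:::{dest_bucket}"},
            }
        ],
    }

config = replication_config(
    "arn:aws:iam::123456789012:role/replication-role",  # placeholder ARN
    "my-app-data-replica-us-west-2",                    # placeholder bucket
)

# With boto3 installed and credentials configured, you would apply it with:
#   boto3.client("s3").put_bucket_replication(
#       Bucket="my-app-data-us-east-1",
#       ReplicationConfiguration=config,
#   )
print(config["Rules"][0]["Status"])
```

One configuration object and one IAM role is considerably less operational surface than keeping two clouds' storage APIs, credentials, and consistency semantics in sync.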

Reason 3: Selecting the best tool for the job

The multi-cloud implementations having the greatest success in the enterprise are those that take a best-fit approach. Teams working in certain industries (finance, biotech, or healthcare), or with certain compliance demands, may consider specific workloads and applications to be better suited to one platform than another. In your organisation, teams may choose a platform based on their existing experience with it, or strictly based on costs. For sophisticated teams, best fit might mean combining services from multiple platforms in a single app.

Establishing a best-fit framework generally takes two forms: best platform or best APIs.

Application or team-driven

With the right controls and training in place, organisations are giving individual teams greater agency in choosing how to build and run their applications. Here, team experience and ease of development and deployment will greatly influence the choice of platform. Access to innovation and the freedom to choose the best platform for a particular workload are other motivating factors.

>See also: Painting a multi-cloud masterpiece

This sort of flexibility can provide value, but it is not without its challenges. Each new platform:

• Adds to the domain knowledge the company must maintain.
• Increases the overall attack surface that needs to be secured.
• Expands the tool sets needed to build and deploy.
• Increases the overhead that needs to be managed.

Allowing teams to choose the best platform for their application should be paired with a careful evaluation process that considers the entire lifecycle of the application and the skills and experience of the team itself.

Task or API-driven

Increasingly, companies are shifting application logic to the client side and assembling the back-end with the best services available for each task. A handful of technologies have facilitated this change.

New JavaScript frameworks can abstract away complexity, speeding up and simplifying development. Container technologies (lxc, Docker, rkt) enable the use of microservices and serve as the basis for serverless technologies, making it easy to get services into production across different cloud platforms.

>See also: Multi-cloud and application services are fuelling digital transformation

Nimble teams can leverage an increasing availability of specialised, client-side consumable services for tasks such as authentication, machine learning, data storage, and payment processing.

These third-party services are pre-built, enabling an à la carte development process in which the UI sits at the centre of API coordination. UI-centric coordination generally avoids the latency inherent in services communicating directly with one another.
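The UI-centric coordination described above can be sketched as a concurrent fan-out: the client calls each backend service directly and in parallel, rather than chaining service-to-service calls. The service names and simulated latencies below are illustrative stand-ins for real HTTP calls to authentication, payment, and recommendation providers.

```python
import asyncio

async def call_service(name: str, latency: float) -> str:
    # Stands in for an HTTP round trip to a specialised third-party service.
    await asyncio.sleep(latency)
    return f"{name}:ok"

async def render_checkout_page() -> list[str]:
    # Fan out to independent services concurrently: total wait is roughly
    # the slowest single call, not the sum of a service-to-service chain.
    return await asyncio.gather(
        call_service("auth", 0.05),
        call_service("payments", 0.05),
        call_service("recommendations", 0.05),
    )

results = asyncio.run(render_checkout_page())
print(results)
```

Because each service is consumed directly from the client, the app can mix providers across clouds task by task, which is the best-tool-for-the-job pattern the report endorses.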

The bottom line: only one argument really stands up to objective scrutiny, and it’s a very important one. Multi-cloud enables companies to select and deploy using the best tool for the job.


Sourced by Cloud Academy


Nick Ismail

Nick Ismail is a former editor for Information Age (from 2018 to 2022) before moving on to become Global Head of Brand Journalism at HCLTech. He has a particular interest in smart technologies, AI and...
