Hybrid cloud migration requires as much consideration of people and process as of technology. That is why it is important to have a clear idea of the plans and aims behind the move to hybrid cloud, and to discuss those goals with IT teams at all levels (application, storage, network, system, etc.) – building their knowledge into the preparations will prove invaluable.
With these teams, you can develop technical requirements, answering critical questions such as whether applications need to be scalable depending on the future needs of the business, or looking at load balancing (distribution of workloads) for when the demand is greatest.
With this in mind, here are six tips for balancing the technology requirements with the all-important people and process element when migrating to a hybrid cloud environment.
Tier, tier, tier
One of the biggest issues in hybrid cloud integration is deciding where applications should run. Ask the business how important their application is and they will almost certainly want it on tier one infrastructure – however, in reality applications can be tiered according to their criticality to the business, their commercial differentiation or competitive advantage, and the performance they require.
Applications can be ranked by commercial value and competitive advantage. Once this is done, you can list out what can go to public cloud (payroll, accounts, Salesforce.com, etc.), what can go on private cloud (applications that are not particularly performance or security sensitive), and what needs dedicated infrastructure (applications that run the business, such as supply chain, SAP, Oracle, etc.).
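As an illustration, the ranking exercise above could be expressed as a simple scoring model. This is a minimal sketch only – the scores, thresholds, tier names and example applications are all hypothetical, not a method the article prescribes:

```python
# Hypothetical tiering sketch: each application is scored 1 (low) to
# 5 (high) on business criticality, competitive differentiation and
# performance sensitivity; the total maps to a placement tier.

def assign_tier(criticality, differentiation, performance):
    """Map an application's three scores to a placement tier."""
    total = criticality + differentiation + performance
    if total >= 12:
        return "dedicated"   # runs the business, e.g. supply chain, SAP
    if total >= 7:
        return "private"     # not especially performance/security sensitive
    return "public"          # e.g. payroll, accounts, Salesforce.com

# Illustrative scores agreed with the IT teams.
apps = {
    "SAP":     (5, 5, 5),
    "CRM":     (3, 2, 3),
    "Payroll": (2, 1, 1),
}

for name, scores in apps.items():
    print(name, "->", assign_tier(*scores))
```

The value of such a model is less the numbers than the conversation: it forces the business and the IT teams to agree explicitly on what "tier one" actually means for each application.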
SaaS offers great benefits but can also add load to the infrastructure and network. Benchmark before and after trialling SaaS in a development environment to see whether it performs as promised, or whether new, faster hardware and networking will be needed to achieve this.
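A before-and-after benchmark of this kind can be very simple. The sketch below, using only the Python standard library, times an operation repeatedly and reports median and 95th-percentile latency; the operation being timed is a stand-in for whatever calls you would exercise against the existing setup and then against the SaaS trial:

```python
import statistics
import time

def benchmark(operation, runs=50):
    """Time `operation` repeatedly; return (median, p95) latency in ms."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        operation()
        samples.append((time.perf_counter() - start) * 1000)
    samples.sort()
    p95 = samples[int(len(samples) * 0.95) - 1]
    return statistics.median(samples), p95

# Stand-in workload; replace with the real calls in your dev environment.
median_ms, p95_ms = benchmark(lambda: sum(range(10_000)))
print(f"median={median_ms:.2f}ms p95={p95_ms:.2f}ms")
```

Running the same harness before and after the SaaS trial gives you comparable numbers rather than an impression, which is exactly what you need when deciding whether faster hardware or networking is justified.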
Conscript a specialist, but prepare first
Before approaching a third party for help with your migration, you should be equipped with the knowledge of what you have and what you want to achieve. Knowing how your applications are performing and what you want as a guarantee before you approach a third party will stand you in good stead. Third parties will rarely offer an application performance SLA, just availability – yet performance is what you are buying, coupled with availability and security.
For this, the third party needs an Infrastructure Performance Management platform so it can guarantee outsourced application performance in a shared environment. You wouldn't lease a car knowing only that it would be available to you, with no idea whether it was secure or how it would perform.
Design with the Internet of Things in mind
Internet of Things devices all connect to physical infrastructure, so you need to ensure that infrastructure is performing. Like all applications, IoT services perform only as fast as their slowest component. It is critical to ensure the servers, switches, network and storage are all performing as you would expect under load before you design the APIs for the Internet of Things to connect to.
Know your traffic flow
It is critical to know what the incoming traffic is. With unstructured streaming you can't expect a structured infrastructure to cope unless it is massively over-provisioned – and the reason you are considering cloud is to avoid over-provisioning costs.
The way round this is to implement an Infrastructure Performance Management (IPM) platform that shows exactly what the incoming datacentre workload is, end to end, through the servers, switches and storage. By monitoring and planning with applied analytics, historical growth and traffic information, you can right-size the hardware elements to cope.
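The right-sizing arithmetic behind that planning step can be sketched briefly. This is a hypothetical illustration, not IPM-tool output: it takes observed peak throughput, projects it forward by an assumed growth rate, and adds a safety headroom instead of blanket over-provisioning – the growth rate, headroom and sample figures are all assumptions:

```python
def required_capacity(peak_samples, annual_growth=0.25, headroom=0.3, years=1):
    """Project the observed peak forward and add safety headroom.

    peak_samples  -- historical peak throughput figures (e.g. IOPS)
    annual_growth -- assumed yearly growth rate (25% here, illustrative)
    headroom      -- safety margin above the projected peak (30% here)
    """
    observed_peak = max(peak_samples)
    projected = observed_peak * (1 + annual_growth) ** years
    return projected * (1 + headroom)

# Daily peak IOPS observed over a week (illustrative numbers).
weekly_peaks = [8200, 7900, 8600, 9100, 8800, 5400, 5100]
print(round(required_capacity(weekly_peaks)))
```

The point of the exercise is that capacity is derived from measured traffic plus stated assumptions, so when the assumptions change, the sizing can be recalculated rather than guessed.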
Give your team intelligence
IT teams must know how the infrastructure is performing, end-to-end, in real time, across all elements. You can no longer just manage the storage or the server and leave the rest to someone else to worry about. Cloud is about efficiency and cost savings so the IT team needs to monitor the whole infrastructure and be able to react quickly to change.
IT teams need a clear understanding of exactly what is going on within the infrastructure, and given the highly complex nature of datacentre environments – with virtualisation, consolidation and the cloud – this is more critical than ever before. The days of over-provisioning to guarantee application performance, and of each manager, department or supplier blaming the others when there is a latency issue or outage, are gone.
Sourced from Nicholas Dimotakis, regional services director, Virtual Instruments