While cloud adoption driven by IT is believed to date back to 2009, the Covid-19 pandemic has certainly accelerated take-up, with organisations forced to facilitate remote working. With this trend likely to continue post-Covid, demand is growing for tools that enable faster innovation while delivering continuous cost savings, and the cloud modernisation era is sure to see further developments.
Speaking during his keynote speech at AWS re:Invent, AWS CEO Andy Jassy announced a number of new products and enhancements, which were led by the cloud service provider’s usual focus on the customer, and pain points that have been encountered within the past year. But how do these new tools lend themselves to the future of the cloud?
During his talk, Jassy explained that a common pain point cited among customers was that they wanted to orchestrate containers within on-premises environments, as well as move them to the cloud, without compromising performance. It’s no wonder that demand in this area is surging, given that Gartner has forecast the container management software market to more than double in value over the next four years.
In response to the pain point, two new AWS products were announced for this space:
- Amazon Elastic Container Service (ECS) Anywhere, which allows for use of the existing ECS in customers’ own data centres;
- Amazon Elastic Kubernetes Service (EKS) Anywhere, which allows EKS to run in customers’ own data centres, and comes with EKS Distro, the open source Kubernetes distribution used by EKS, which can also be used to create clusters manually.
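To make the idea concrete, here is a minimal sketch of how running a workload on on-premises capacity might look with ECS Anywhere, which registers external machines as EXTERNAL capacity in an ECS cluster. The cluster and task-definition names are placeholders, and the call assumes the boto3 ECS `run_task` API; treat it as an illustration rather than a definitive recipe.

```python
# Hypothetical sketch: launching a task on on-premises instances that have
# been registered with an ECS cluster via ECS Anywhere. Names are placeholders.

def run_on_prem_task(cluster="on-prem-cluster",
                     task_def="my-service:1",
                     ecs_client=None):
    """Build (and optionally send) a run_task request targeting
    externally registered, on-premises capacity.

    ECS Anywhere instances appear as EXTERNAL capacity, so the task uses
    the EXTERNAL launch type rather than EC2 or FARGATE.
    """
    params = {
        "cluster": cluster,
        "taskDefinition": task_def,
        "launchType": "EXTERNAL",  # on-premises machines registered via ECS Anywhere
        "count": 1,
    }
    if ecs_client is not None:
        # e.g. ecs_client = boto3.client("ecs") when AWS credentials are available
        return ecs_client.run_task(**params)
    return params  # allow inspection without calling AWS
```

Returning the parameter dictionary when no client is supplied keeps the sketch testable without AWS credentials.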
Navigating open standards for Kubernetes
Additionally, a new automation service called AWS Proton was announced for developers to automate container and serverless application development and deployment, as well as a new way to share and deploy container software publicly, called Amazon Elastic Container Registry (ECR) Public.
For Brad Campbell, chief technologist at Cloudreach, container image support for Lambda stood out as a capability that looks set to simplify the development process when it comes to proof-of-concept work within microservices.
He said: “Proving out individual services piecemeal using Lambda like this could also represent a significant cost-saving mechanism, as a full Kubernetes cluster is no longer necessary to test an individual service.
“The possibilities that this opens up for K8s-hosted microservice CI/CD pipelines are also promising, as K8s API mocks can take the place of full clusters (which either cost money to be readily available for testing, or slow down pipelines as they are created and destroyed on the fly).”
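The kind of single-service proving-out Campbell describes can be as simple as a plain Lambda handler, which Lambda’s container support allows to be packaged as a container image (built, for example, on an AWS-provided base image such as public.ecr.aws/lambda/python). The handler and event shape below are illustrative assumptions, not taken from any announced example:

```python
# Minimal, hypothetical Lambda handler for a single microservice, of the kind
# that could be packaged as a container image and tested in isolation —
# no Kubernetes cluster required.
import json

def handler(event, context=None):
    """Echo-style endpoint: reads a field from the event and returns
    an API Gateway-style response."""
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Because the handler is an ordinary function, it can be exercised directly in a pipeline (`handler({"name": "dev"})`) before ever being deployed.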
Commenting on EKS Anywhere, Campbell expressed belief that this will speed up adoption of cloud-native architectures for companies that are beginning data centre exits to the cloud: “Giving customers parallel paths for adoption of the platform while also allowing for net-new architectural pattern adoption — even from a data centre that may soon be exited — is a very compelling thought.
“It changes the entire cloud adoption conversation: instead of starting off with customers saying, ‘OK, we need to mass migrate these hundreds (or thousands) of apps/VMs on a compressed timeframe, then let’s think about modernisation because you’ve accomplished your short-term goals, and now have an established enterprise-ready landing zone presence on the platform to facilitate that modernisation’, we can have that modernisation conversation at the same time.”
Kevin Davis, cloud strategist at Cloudreach, added: “When I saw Proton being launched, I thought about the consistent nature of complexity as it exists in managing technology and deployments.
“In the past, we packed a lot of that deployment and management complexity into the servers and software installs. We have since shifted that complexity into the micro-deployable units we build with containers and serverless, reducing the deployment complexity, but introducing new complexity in the need to manage the volume of deployments and to orchestrate the deployment motions.
“I think Proton will help to manage that complexity at scale.”
AWS’s relational database service, Amazon Aurora, was rolled out in 2014 to give customers enterprise-scale performance combined with the cost-effectiveness of open source software. Compatible with MySQL and PostgreSQL, the tool evolved to take advantage of serverless computing with the launch of Aurora Serverless in 2018. Over time, it became apparent that customers wanted to keep scaling up.
With the need to keep speeding up the scaling of operations while saving costs in mind, Jassy announced the launch of Aurora Serverless V2, which can scale an Aurora database almost instantly from hundreds to hundreds of thousands of transactions per second, while allowing for cost savings of up to 90%.
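As a rough illustration of what that capacity-based scaling looks like to a developer, the sketch below builds the request parameters for an Aurora Serverless cluster with a minimum and maximum capacity range, expressed in Aurora capacity units (ACUs). The parameter names follow boto3’s `create_db_cluster` shape as it later became generally available; at the time of the announcement V2 was in preview, so treat this purely as a sketch:

```python
# Hypothetical sketch: parameters for an Aurora cluster that scales between a
# minimum and maximum capacity (in ACUs) instead of being provisioned for peak.
# Identifier and capacity values are illustrative placeholders.

def serverless_v2_cluster_params(cluster_id="demo-aurora",
                                 min_acu=0.5, max_acu=64):
    """Build create_db_cluster parameters for a capacity range; the database
    scales up and down within [min_acu, max_acu] as load changes."""
    return {
        "DBClusterIdentifier": cluster_id,
        "Engine": "aurora-postgresql",
        "ServerlessV2ScalingConfiguration": {
            "MinCapacity": min_acu,
            "MaxCapacity": max_acu,
        },
    }
```

Paying only for capacity consumed within that range, rather than for a peak-sized instance, is where the quoted savings would come from.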
John Loughlin, chief technologist at Cloudreach, said: “If AWS has PostgreSQL available, as they plan to, in the first half of next year, I think [Aurora Serverless V2] could be pretty powerful in terms of being a target for moving some of the commercial databases into PostgreSQL.”
How to scale up with microservices and serverless computing
The use of cloud-powered analytics has been vital for gaining accurate insights into customer behaviour, allowing companies to keep up with changing demands.
To further assist its own customers, AWS launched three new analytics capabilities:
- Advanced Query Accelerator (AQUA) for Amazon Redshift, which aims to simplify and accelerate the process of gaining answers to queries in the cloud;
- AWS Glue Elastic Views, which lets developers easily build materialised views that automatically combine and replicate data across multiple data stores;
- Amazon QuickSight Q, a capability that lets users ask questions about business data in natural language and receive accurate answers in seconds.
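To illustrate the concept behind Glue Elastic Views, the sketch below shows what a materialised view does in miniature: precompute a combined result from source tables so queries don’t repeat the join. This is not the Elastic Views API — names and data shapes are invented — what the service adds is doing this continuously, across multiple data stores, as the sources change:

```python
# Illustrative sketch only (not the Glue Elastic Views API): materialise a
# denormalised view that joins two source "tables". Field names are invented.

def materialise_view(orders, customers):
    """Combine orders with customer names into a single precomputed view."""
    names = {c["id"]: c["name"] for c in customers}
    return [
        {"order_id": o["order_id"],
         "customer": names.get(o["customer_id"])}
        for o in orders
    ]
```

In a real deployment the hard part, which Elastic Views aims to automate, is keeping this view fresh as the underlying stores are updated.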
“Elastic Views is going to change plenty in terms of how we talk about data lakes, if you actually have that kind of flexibility,” said Loughlin.
Davis added: “We have these new levels of compute capability, where they’re going to have to drive that innovation towards making these view and search capabilities perform, especially on the fly.”
How to implement AI and advanced analytics, and observe the technologies’ transformational impact
Partner Network 2021
Later in the week at AWS re:Invent, a Partner Keynote was delivered, recognising the progress that AWS partners have made with the aid of the cloud service provider’s capabilities over the past year, and setting out AWS’s vision for its partner network in 2021.
Cited examples of successful partnerships included Cohesity, which announced the launch of Data Management as a Service (DMaaS) in the form of Cohesity DataProtect, and Southwest Airlines, which has scaled its operations using a cloud-native data lake.
During the keynote, it was announced that third-party professional services will be added to AWS Marketplace, which allows customers to access assessments, implementations, and other services from consulting partners, independent software vendors (ISVs), and managed service providers (MSPs).
With more than 50 new organisations, such as Cloudreach, joining the AWS Partner Network every day, a new ISV Partner Path was launched to help ISVs build and grow successful businesses in the cloud. Also announced was the establishment of AWS SaaS Boost, an open source reference environment dedicated to aiding the rapid migration of applications to a SaaS delivery model.
Additionally, an expansion of the AWS Competency Program was announced, signalling the cloud service provider’s intent to reach further across industries. Four new competency areas were announced: Travel and Hospitality; Energy; Mainframe Migration; and Public Safety and Disaster Response.