Keynote

Werner Vogels: The Enterprise Journey to Cloud Native

In the AWS Summit 2019 keynote presentation, from 19:45, Dr. Werner Vogels describes the evolution of enterprise IT towards Cloud Native computing.

Referencing Amazon.com and AWS customer examples, CTO Werner Vogels explains the shift from monolithic enterprise software to a Serverless and Microservices model, walking through how to build such an architecture on AWS services.

Setting the scene, Werner states that AWS set out to achieve for IT what Amazon.com achieved for e-commerce: to be entirely customer-driven, with customers in control of both the product roadmap and the economic cost model.

They have massively disrupted the traditional vendor supply chain, and now offer over 165 different Cloud services across DevOps, Blockchain, ML and more.

The Cloud has transformed how IT is delivered and costed. Citing S3 Glacier Deep Archive storage, now generally available, as an example, he describes a use case such as a hospital storing MRI and CAT scan images that regulation requires it to keep for 30 years.

Previously no cost-effective storage options were available for this, but the Cloud has now commoditized the resource to such an extent that organizations can easily make use of near-infinite levels of storage for a fraction of the previous cost.

Overall the evolution is one of moving from traditional vendor monolith applications, where you adapt your business model to their functionality, to an era of Cloud Builders – the ability to plug and play a number of different Cloud services, the right tool for each job, composing them into the solution you need.

Monolith to Microservices: Lessons learned from Amazon.com

At Amazon.com they faced the same scaling challenges as their enterprise customers, and to address them they adopted a number of practices now known as DevOps and Microservices.

He describes S3 as one of their own examples, having evolved from eight simple Microservices at launch to over 235 distributed Microservices today.

From 31:15 he explains the essential dynamics of re-engineering monolith software to Microservices, highlighting that each Microservice has very different scaling and reliability requirements.

Therefore the most effective way to design a suitable Microservices architecture is to undertake a functional decomposition – in their case, identifying distinct services such as customer login and the address book.

In the monolith, one is used infrequently while the other is accessed repeatedly, and so the whole application has to scale to meet the demand of just one small service. Similarly the whole application has access to both the credential store and the address book store, a violation of least-privilege security policies.

Therefore the re-engineering process is one of decomposing the application into the smallest possible building blocks for each service and having them scale independently, so that a service like login can utilize all the resources it needs without impacting the rest of the site.
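
To make the decomposition concrete, here is a minimal, hypothetical Python sketch of the idea: two services split out of a monolith, each owning only its own data store, so the heavily used login service can be scaled (and secured) independently of the rarely used address book. The class and store names are illustrative, not from the keynote.

```python
# Hypothetical sketch: functional decomposition of a monolith into two
# independently deployable services. Each service owns ONLY its own store,
# enforcing least privilege, and can be scaled on its own.

class LoginService:
    """Hot path: called on every sign-in, scaled independently."""

    def __init__(self, credential_store):
        self.credentials = credential_store  # no access to addresses

    def authenticate(self, user, password):
        return self.credentials.get(user) == password


class AddressBookService:
    """Cold path: touched rarely, needs far fewer instances."""

    def __init__(self, address_store):
        self.addresses = address_store  # no access to credentials

    def get_addresses(self, user):
        return self.addresses.get(user, [])


# In the monolith, one process held both stores and scaled as a unit.
# After decomposition each service is deployed and scaled separately:
login = LoginService({"alice": "s3cret"})
addresses = AddressBookService({"alice": ["221B Baker Street"]})
print(login.authenticate("alice", "s3cret"))   # True
print(addresses.get_addresses("alice"))        # ['221B Baker Street']
```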

Containers and Microservices

From 35:25 Werner states that containers are an essential technology for implementing Microservices.

Highlighting McDonald's as one example of an enterprise customer using this approach, he describes how they built a new home delivery service as a microservices application on ECS. The use of an API architecture enabled them to integrate with partners like Uber Eats.

From 37:00 Werner walks through the decision process for designing your Microservices container environment – at the orchestration level you can choose between ECS and EKS (Elastic Container Service for Kubernetes), and at the compute level you can manage your own clusters on EC2 or use Fargate, which turns it into a Serverless container service, eliminating the need to manage infrastructure at all.
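
As a hedged illustration of that choice, the sketch below uses boto3 to launch a task on an existing ECS cluster with the Fargate launch type, so no EC2 instances have to be provisioned. The cluster, task definition, and subnet identifiers are placeholders you would have created beforehand.

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Launch a container task on Fargate: AWS provisions the compute,
# so there is no EC2 cluster to manage. All identifiers below are
# placeholders for resources created in advance.
response = ecs.run_task(
    cluster="delivery-cluster",           # hypothetical cluster name
    taskDefinition="order-service:1",     # hypothetical task definition
    launchType="FARGATE",
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],  # placeholder subnet
            "assignPublicIp": "ENABLED",
        }
    },
)
print(response["tasks"][0]["lastStatus"])
```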

Serverless Microservices

From 41:00 Werner describes the adoption of these products as a continuing abstraction of software development, a journey moving upwards from IaaS instances through containers to Lambda, the AWS Serverless layer.

He highlights that the Cloud-first customers of today who are setting out to build an entirely new service now do so on Serverless, as they are completely freed from the hassle of managing the underlying infrastructure and can instead focus immediately on adding value and building new business logic.

At 42:30 he references a case study of HomeAway, an Expedia sharing-economy venture for vacation homes.

Werner describes how they’ve built it entirely at the Serverless layer, making use of other AWS services like DynamoDB, Kinesis and S3 to enable uploading and processing of six million images a month.
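
A minimal sketch of this pattern, assuming a Lambda function subscribed to S3 upload events that records image metadata in a DynamoDB table; the table name and attribute keys are hypothetical, not HomeAway's actual implementation.

```python
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("ImageMetadata")  # hypothetical table name

def handler(event, context):
    """Invoked by S3 on each image upload; stores metadata for
    downstream processing steps."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        size = record["s3"]["object"]["size"]
        table.put_item(Item={
            "ImageKey": key,   # partition key of the hypothetical table
            "Bucket": bucket,
            "SizeBytes": size,
        })
    return {"processed": len(event["Records"])}
```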

He adds that adoption of Serverless is being driven by tech startups, yes, but AWS is seeing large enterprises equally quick to embrace the trend, given it greatly increases developer productivity. He cites another very powerful example in Capital One, who migrated billions of mainframe transactions entirely to a Serverless approach.

From 56:00 Werner makes the key point that Serverless isn’t just about Lambda. Lambda is the developer tool that stitches together multiple AWS services like S3 and DynamoDB, but many of those services can themselves be considered Serverless, because fundamentally they require no management of underlying server infrastructure.

Serverless Application Model

Werner touches on AWS ‘SAM’, the Serverless Application Model (previously known as Project Flourish), which extends AWS CloudFormation to provide a simplified way of defining the Amazon API Gateway APIs, AWS Lambda functions, and Amazon DynamoDB tables needed by your serverless application.

As the developer guide describes:

“A serverless application is a combination of Lambda functions, event sources, and other resources that work together to perform tasks. Note that a serverless application is more than just a Lambda function—it can include additional resources such as APIs, databases, and event source mappings.”

and that

“AWS SAM is an extension for the AWS CloudFormation template language that lets you define serverless applications at a higher level. It abstracts away common tasks such as function role creation, which makes it easier to write templates. AWS SAM is supported directly by AWS CloudFormation, and includes additional functionality through the AWS CLI and AWS SAM CLI.”
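
To make that concrete, here is a hedged sketch of the kind of Python function a SAM template would declare as an AWS::Serverless::Function resource with an API Gateway event source. It mirrors the style of SAM's hello-world starter rather than anything shown in the keynote; the route and message are illustrative.

```python
import json

def lambda_handler(event, context):
    """Handler for an API Gateway proxy event. A SAM template would
    point an AWS::Serverless::Function resource at this handler and
    attach an Api event source; the function name and route are
    illustrative assumptions."""
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": "hello from a serverless app"}),
    }
```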

Continuous Deployment Practices

From 44:40 Werner talks through the heart of building a Microservices application on AWS.

The key challenge is service discovery and communication – how does each microservice find the others and manage their data exchanges?

For this AWS launched App Mesh. This gives a complete view of the distributed system, handles reliability and communication between services, and provides insights into the loads and paths these communications generate.
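
As a hedged sketch of the App Mesh control-plane API via boto3: a mesh is created, then one microservice is registered as a virtual node so the mesh knows how to discover it and route to it. The mesh name, node name, port, and DNS hostname are placeholder assumptions, not from the keynote.

```python
import boto3

appmesh = boto3.client("appmesh")

# Create a mesh: the logical boundary for service-to-service traffic.
appmesh.create_mesh(meshName="orders-mesh")  # hypothetical name

# Register one microservice as a virtual node, telling the mesh how
# to discover it (via DNS) and which port it listens on.
appmesh.create_virtual_node(
    meshName="orders-mesh",
    virtualNodeName="payment-service",       # hypothetical service
    spec={
        "listeners": [{"portMapping": {"port": 8080, "protocol": "http"}}],
        "serviceDiscovery": {
            "dns": {"hostname": "payment.local"}  # placeholder hostname
        },
    },
)
```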

At 58:45 Werner describes how the Microservices approach is seeing development teams move from a single software deployment lifecycle to multiple lifecycles, achieved through autonomous teams that can react individually to changing customer needs. Rather than multiple teams all trying to move one monolith software build through a very infrequent deployment process, they are now all continuously deploying at high frequency.

He describes how AWS offers a complete toolchain for enabling this lifecycle, including CodeCommit, CodeBuild and CodeDeploy, with X-Ray and CloudWatch for monitoring. X-Ray provides a debugging visualization of all the components in a Microservices environment, across containers and Serverless.
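
For instance, instrumenting a Python service with the AWS X-Ray SDK is a small change. This sketch, with an illustrative service name and function, patches supported libraries so downstream AWS calls appear as subsegments in the trace map; it assumes a segment is already open, as it would be under Lambda or a web-framework middleware.

```python
from aws_xray_sdk.core import xray_recorder, patch_all

# Configure the recorder and auto-patch supported libraries (boto3,
# requests, etc.) so their calls show up as subsegments in X-Ray.
xray_recorder.configure(service="order-service")  # hypothetical name
patch_all()

@xray_recorder.capture("charge_customer")  # custom subsegment
def charge_customer(order_id):
    # Business logic goes here; any boto3 call made inside this
    # function is traced automatically once patched.
    return {"order": order_id, "status": "charged"}
```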

Popular development tools can be used for building Serverless microservice applications, including AWS Cloud9 and the AWS Toolkits for PyCharm, IntelliJ and Visual Studio.

Continuous Security

From 1:04:24 Werner focuses on security. He says most if not all of the data breaches that occur today are due to older systems and the security practices associated with them, which are no longer appropriate for the modern world.

In line with the microservices team model, where each individual team takes whole responsibility for the operation as well as development of its services, security needs to become part of this too, rather than being the job of a separate team. “Security is everyone’s job”.

This is achieved by making security an integral part of the continuous deployment pipeline: securing the development pipeline itself, through pipeline access controls and hardened build servers, and securing the software it produces, through artifact validation and static code analysis. Alarms and checks need to fire when, for example, new libraries are added into the build, to ensure they are approved and secure.

As much as possible all of this checking should be automated, using AWS tools such as Inspector and CloudTrail.
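
As one hedged example of automating such a check, this boto3 sketch queries CloudTrail for recent IAM policy attachments, the kind of change a pipeline watchdog might alarm on. AttachRolePolicy is a standard CloudTrail management event; the alerting logic itself is illustrative.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Look up recent IAM policy attachments, a change a pipeline
# security check would typically want to flag for review.
events = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "AttachRolePolicy"}
    ],
    MaxResults=20,
)

for event in events["Events"]:
    # In a real pipeline this would raise an alarm (e.g. via SNS)
    # rather than print.
    print(event["EventTime"], event.get("Username"), event["EventName"])
```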

Similarly, Werner says that nowadays encryption shouldn’t just be selectively applied to some data, but should be ubiquitous throughout the entire application environment.
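
A small sketch of what that can look like in practice: enabling default server-side encryption on an S3 bucket with boto3, so every object is encrypted at rest without developers having to remember per-upload flags. The bucket name and KMS key alias are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Enforce encryption at rest by default: every object written to this
# bucket is encrypted with the given KMS key, with no per-request flags.
s3.put_bucket_encryption(
    Bucket="my-app-data",  # hypothetical bucket
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "alias/my-app-key",  # placeholder alias
                }
            }
        ]
    },
)
```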

Aurora – A Database Designed for the Cloud

From 1:15:00 Werner moves on to exploring the role of the underlying databases. He relates how there has been a big shift away from the traditional enterprise vendors to open source, primarily to escape the prohibitive costing models they employ.

However, open source databases were not especially designed for the Cloud either. The practice of ‘sharding’ can be used to scale them, but even that approach faces many issues, given it was developed in the 90s.

So AWS has built Aurora, their own relational database re-engineered to be Cloud Native: a scaled-out distributed architecture based on a database-aware shared storage model using SSDs. It delivers high performance and availability with up to 15 low-latency read replicas, point-in-time recovery, continuous backup to Amazon S3, and replication across three Availability Zones.
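
For illustration, a hedged boto3 sketch of provisioning an Aurora cluster and attaching instances to it; the identifiers, instance class, and credentials are placeholders, and a real deployment would source the password from Secrets Manager.

```python
import boto3

rds = boto3.client("rds")

# Create the Aurora cluster: its shared, replicated storage layer is
# the distributed architecture described in the talk.
rds.create_db_cluster(
    DBClusterIdentifier="orders-aurora",    # hypothetical identifier
    Engine="aurora-mysql",
    MasterUsername="admin",
    MasterUserPassword="change-me-please",  # placeholder credential
)

# The first instance in the cluster becomes the writer; each additional
# instance is a low-latency read replica (Aurora supports up to 15).
for suffix in ("writer", "reader-1"):
    rds.create_db_instance(
        DBInstanceIdentifier=f"orders-aurora-{suffix}",
        DBClusterIdentifier="orders-aurora",
        DBInstanceClass="db.r5.large",      # placeholder instance class
        Engine="aurora-mysql",
    )
```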

Aurora has been the fastest growing service in the history of AWS.

At 1:23:00 Werner concludes this section by relating it back to a Microservices architecture, highlighting that the design-per-microservice approach means a relational database may not be what a given service needs – perhaps it is best served by a graph database, for example.

Depending on the requirements of that particular service it could use a number of possible AWS options, such as DynamoDB, DocumentDB, ElastiCache, Neptune, Timestream or QLDB.

Data analytics

From 1:25:00 Werner concludes his session with a review of AWS data analytics capabilities.

He begins with the critical insight that IT itself is no longer a competitive differentiator, as there is ubiquitous access to the same tools for everyone; it is now the kind of data you have, and how smartly you use it, that defines your advantage.

Furthermore, where data warehouses used to be a heavy, slow and expensive technology to set up and use, the Cloud has made them lightweight, agile and on demand. Redshift, the AWS data warehouse product, can be spun up and used for just a couple of hours.
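
A hedged sketch of that on-demand pattern with boto3: spin up a small Redshift cluster, run the analysis, then delete the cluster so you only pay for the hours it ran. All identifiers and credentials are placeholders.

```python
import boto3

redshift = boto3.client("redshift")

# Spin up a small warehouse on demand...
redshift.create_cluster(
    ClusterIdentifier="adhoc-analytics",    # hypothetical identifier
    ClusterType="multi-node",
    NodeType="dc2.large",                   # placeholder node type
    NumberOfNodes=2,
    MasterUsername="analyst",
    MasterUserPassword="change-me-please",  # placeholder credential
)

# ...run queries for a couple of hours, then tear it down so the
# meter stops running.
redshift.delete_cluster(
    ClusterIdentifier="adhoc-analytics",
    SkipFinalClusterSnapshot=True,
)
```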

Data warehousing may be thought of as old-style technology, but wrapping up, Werner makes the point that it is actually integral to every modern, cutting-edge application and business model.

Using Fortnite as an example, he highlights that a massive analytics engine underpins the game, enabling core components like service health monitoring, game usage analysis to improve playability, and tournaments.

Werner describes data analytics as having three main pillars: historical reporting, real-time status, and forecasting, where the data provides the foundation for machine learning to better predict the future – leading on to the follow-on presentation on AWS’s AI services.
