Vendor Profile

Orchestrating complex deployments using AWS Developer Tools

In this session, Leo Zhadanovsky, Principal Solution Architect at AWS, talks about how you can orchestrate and manage highly complex deployment scenarios using AWS CodePipeline and AWS Step Functions.
At 1:36, the speaker opens the session by outlining the advantages of AWS developer tools. These tools are well regarded and widely used for developing, building, and deploying applications, and AWS offers a robust set of them designed to support a wide range of use cases.

These tools were built to support modern application development practices such as infrastructure as code and CI/CD (continuous integration and continuous deployment), and technologies such as serverless and containers.

At 3:50, Leo walks through the AWS developer tools for CI/CD: AWS CodeCommit for committing your source code, AWS CodeBuild for building it, AWS CodeBuild plus third-party tools for testing it, AWS CodeDeploy for deploying it, and AWS X-Ray and Amazon CloudWatch for monitoring. He also describes AWS CodePipeline as a continuous delivery service for fast and reliable application updates that models and visualizes the software release process and integrates with third-party tools and other AWS services.
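To make that flow concrete, here is a minimal sketch (not taken from the talk) of wiring these services together with the boto3 SDK: a three-stage pipeline that pulls source from CodeCommit, builds with CodeBuild, and deploys with CodeDeploy. The repository, project, bucket, and role names are placeholders.

```python
import boto3

# Minimal sketch: a three-stage pipeline (source -> build -> deploy).
# All names and ARNs below are placeholders, not values from the talk.
codepipeline = boto3.client("codepipeline")

codepipeline.create_pipeline(
    pipeline={
        "name": "demo-pipeline",
        "roleArn": "arn:aws:iam::123456789012:role/CodePipelineServiceRole",
        "artifactStore": {"type": "S3", "location": "demo-artifact-bucket"},
        "stages": [
            {
                "name": "Source",
                "actions": [{
                    "name": "CodeCommitSource",
                    "actionTypeId": {"category": "Source", "owner": "AWS",
                                     "provider": "CodeCommit", "version": "1"},
                    "configuration": {"RepositoryName": "demo-repo",
                                      "BranchName": "main"},
                    "outputArtifacts": [{"name": "SourceOutput"}],
                }],
            },
            {
                "name": "Build",
                "actions": [{
                    "name": "CodeBuild",
                    "actionTypeId": {"category": "Build", "owner": "AWS",
                                     "provider": "CodeBuild", "version": "1"},
                    "configuration": {"ProjectName": "demo-build-project"},
                    "inputArtifacts": [{"name": "SourceOutput"}],
                    "outputArtifacts": [{"name": "BuildOutput"}],
                }],
            },
            {
                "name": "Deploy",
                "actions": [{
                    "name": "CodeDeploy",
                    "actionTypeId": {"category": "Deploy", "owner": "AWS",
                                     "provider": "CodeDeploy", "version": "1"},
                    "configuration": {"ApplicationName": "demo-app",
                                      "DeploymentGroupName": "demo-deployment-group"},
                    "inputArtifacts": [{"name": "BuildOutput"}],
                }],
            },
        ],
    }
)
```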

At 4:31, the speaker covers the supported sources for CodePipeline. You can use an object or a container image to kick off the build process. For EC2, you can use CodeDeploy, Elastic Beanstalk, and OpsWorks Stacks as deployment targets.

AWS Step Functions

At 5:35, Leo explains Step Functions. He describes AWS Step Functions as a fully managed state machine service in the cloud. It lets you coordinate multiple AWS services into serverless workflows, which helps you build and update applications faster.

The service has built-in error handling, powerful AWS service integrations, an auditable execution history, and visual monitoring. At 7:15, the speaker explains how Step Functions works. Each step of a workflow is called a state, the workflows you build with Step Functions are called state machines, and moving from one state to another is a state transition.

Components can be easily reused and the sequence of steps can be edited. The speaker lists the available state types: Task, Choice, Wait, Parallel, Map, Succeed, Fail, and Pass.
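As a rough illustration of these concepts (assumed, not shown in the talk), the sketch below defines a small state machine in the Amazon States Language using a handful of those state types, then registers it with boto3. The Lambda and IAM role ARNs are placeholders.

```python
import json
import boto3

# Minimal sketch of a state machine using the Task, Choice, Wait,
# Succeed, and Fail state types. ARNs are placeholders.
definition = {
    "StartAt": "RunDeploymentStep",
    "States": {
        "RunDeploymentStep": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:deploy-step",
            "Next": "CheckStatus",
        },
        "CheckStatus": {
            "Type": "Choice",
            "Choices": [
                {"Variable": "$.status", "StringEquals": "SUCCEEDED", "Next": "Done"},
                {"Variable": "$.status", "StringEquals": "IN_PROGRESS", "Next": "WaitAndRetry"},
            ],
            "Default": "Failed",
        },
        "WaitAndRetry": {"Type": "Wait", "Seconds": 30, "Next": "RunDeploymentStep"},
        "Done": {"Type": "Succeed"},
        "Failed": {"Type": "Fail"},
    },
}

sfn = boto3.client("stepfunctions")
sfn.create_state_machine(
    name="demo-deployment-state-machine",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsExecutionRole",
)
```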

Next, at 8:58, the speaker talks about the different types of deployments. Complex deployments usually involve marshaling data across stages and actions and require fine-grained customization. CodePipeline offers two types of custom actions for these scenarios.

One is a Lambda custom action and the other is a Step Functions custom action. The maximum execution time with a Step Functions custom action is 365 days, while the maximum execution time with a Lambda custom action is 15 minutes. Step Functions actions are best for actions that require complex logic, orchestration, and decision making.
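For the Lambda side, here is a hedged sketch of what such a function might look like when invoked by a pipeline action: CodePipeline passes a job ID in the event, and the function must report success or failure back through the CodePipeline API, otherwise the action eventually times out. The check_deployment helper is purely illustrative.

```python
import boto3

codepipeline = boto3.client("codepipeline")

def handler(event, context):
    """Sketch of a Lambda function invoked as a CodePipeline action."""
    job_id = event["CodePipeline.job"]["id"]
    try:
        check_deployment()  # placeholder for the actual custom step
        codepipeline.put_job_success_result(jobId=job_id)
    except Exception as exc:
        codepipeline.put_job_failure_result(
            jobId=job_id,
            failureDetails={"type": "JobFailed", "message": str(exc)},
        )

def check_deployment():
    # Hypothetical validation logic for the custom action.
    pass
```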

Continuous delivery at Amazon

At 12:19, Leo talks about how continuous delivery is achieved at Amazon using the AWS developer tools. The developer builds the source code, which then moves through the pre-production stages. Automated tests are performed at the pre-production stages, starting at the alpha stage.

Once the code changes reach the beta stage, automated integration tests are performed, followed by automated load/performance tests and browser tests. More automated integration tests, synthetic tests, and API smoke tests are performed at the gamma stage. The next phase is production deployment: you perform a canary deploy, then deploy to one Availability Zone, followed by the second Availability Zone. If a failure is detected at any of these stages, the deployment can be stopped; otherwise, you proceed to synthetic monitoring.
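One way to implement that stop-on-failure behavior, sketched here as an assumption rather than the talk's own method, is to call CodePipeline's StopPipelineExecution API when a test stage reports a failure:

```python
import boto3

codepipeline = boto3.client("codepipeline")

def halt_on_failure(pipeline_name: str, execution_id: str, reason: str) -> None:
    """Stop an in-flight pipeline execution when a test stage reports failure."""
    codepipeline.stop_pipeline_execution(
        pipelineName=pipeline_name,
        pipelineExecutionId=execution_id,
        abandon=True,  # abandon in-progress actions instead of letting them finish
        reason=reason,
    )
```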

At 13:54, the speaker briefs on the distributed load test with Step Functions. Here, a Step Functions state machine triggers load tests that run as tasks in an ECS cluster on Fargate. The state machine orchestrates the distributed load tests executed across the Fargate tasks. Once the tasks are complete, the results are parsed and sent to Amazon CloudWatch.
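As an illustrative sketch (the namespace, metric names, and result format are assumptions, not taken from the talk), the parsed results could be published to CloudWatch as custom metrics like this:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

def publish_load_test_results(results: dict) -> None:
    """Push parsed load-test results (e.g. from ECS Fargate tasks) to CloudWatch."""
    cloudwatch.put_metric_data(
        Namespace="DistributedLoadTest",
        MetricData=[
            {"MetricName": "AverageLatency", "Value": results["avg_latency_ms"],
             "Unit": "Milliseconds"},
            {"MetricName": "RequestsPerSecond", "Value": results["rps"],
             "Unit": "Count/Second"},
            {"MetricName": "ErrorCount", "Value": results["errors"],
             "Unit": "Count"},
        ],
    )
```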

Conclusion

In this video, the speaker discusses AWS developer tools and the wide range of benefits they provide to customers. AWS services are built to cater to the needs of developing small-scale as well as large-scale applications.

AWS teams review customer feedback and continue to add new updates and features to the existing developer tools. In real-world scenarios, you may need a genuinely complex architecture involving complex logic and deployment processes. Sometimes a deployment includes a mix of automated and manual steps and may also require coordinating actions between AWS and your own data center. The speaker shows how the different AWS tools can be orchestrated efficiently to perform such complex deployments.
