DOP-C01 : AWS DevOps Engineer Professional : Part 05
-
A DevOps Engineer needs to deploy a scalable three-tier Node.js application in AWS. The application must have zero downtime during deployments and be able to roll back to previous versions. Other applications will also connect to the same MySQL backend database.
The CIO has provided the following guidance for logging:
– Centrally view all current web server access logs.
– Search and filter web and application logs in near-real time.
– Retain log data for three months.
How should these requirements be met?
- Deploy the application using AWS Elastic Beanstalk. Configure the environment type for Elastic Load Balancing and Auto Scaling. Create an Amazon RDS MySQL instance inside the Elastic Beanstalk stack. Configure the Elastic Beanstalk log options to stream logs to Amazon CloudWatch Logs. Set retention to 90 days.
- Deploy the application on Amazon EC2. Configure Elastic Load Balancing and Auto Scaling. Use an Amazon RDS MySQL instance for the database tier. Configure the application to store log files in Amazon S3. Use Amazon EMR to search and filter the data. Set an Amazon S3 lifecycle rule to expire objects after 90 days.
- Deploy the application using AWS Elastic Beanstalk. Configure the environment type for Elastic Load Balancing and Auto Scaling. Create the Amazon RDS MySQL instance outside the Elastic Beanstalk stack. Configure the Elastic Beanstalk log options to stream logs to Amazon CloudWatch Logs. Set retention to 90 days.
- Deploy the application on Amazon EC2. Configure Elastic Load Balancing and Auto Scaling. Use an Amazon RDS MySQL instance for the database tier. Configure the application to load streaming log data using Amazon Kinesis Data Firehose into Amazon ES. Delete and create a new Amazon ES domain every 90 days.
Explanation:
The Amazon EC2 instances in your Elastic Beanstalk environment generate logs that you can view to troubleshoot issues with your application or configuration files. Logs created by the web server, application server, Elastic Beanstalk platform scripts, and AWS CloudFormation are stored locally on individual instances. You can easily retrieve them by using the environment management console or the EB CLI. You can also configure your environment to stream logs to Amazon CloudWatch Logs in real time.
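As a concrete illustration of the log-streaming behavior described above, the snippet below is a minimal boto3 sketch that turns on instance log streaming to CloudWatch Logs with 90-day retention; the environment name is an assumption, not a value from the question.

```python
# Minimal sketch (hypothetical environment name): stream Elastic Beanstalk
# instance logs to CloudWatch Logs and keep them for 90 days.
import boto3

eb = boto3.client("elasticbeanstalk")

eb.update_environment(
    EnvironmentName="node-app-prod",  # assumed environment name
    OptionSettings=[
        {"Namespace": "aws:elasticbeanstalk:cloudwatch:logs",
         "OptionName": "StreamLogs", "Value": "true"},
        {"Namespace": "aws:elasticbeanstalk:cloudwatch:logs",
         "OptionName": "RetentionInDays", "Value": "90"},
        {"Namespace": "aws:elasticbeanstalk:cloudwatch:logs",
         "OptionName": "DeleteOnTerminate", "Value": "false"},
    ],
)
```
-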
An IT team has built an AWS CloudFormation template so others in the company can quickly and reliably deploy and terminate an application. The template creates an Amazon EC2 instance with a user data script to install the application and an Amazon S3 bucket that the application uses to serve static webpages while it is running.
All resources should be removed when the CloudFormation stack is deleted. However, the team observes that CloudFormation reports an error during stack deletion, and the S3 bucket created by the stack is not deleted.
How can the team resolve the error in the MOST efficient manner to ensure that all resources are deleted without errors?
- Add a DeletionPolicy attribute to the S3 bucket resource with the value Delete to force the bucket to be removed when the stack is deleted.
- Add a custom resource backed by an AWS Lambda function, with the DependsOn attribute specifying the S3 bucket, and an IAM role. Write the Lambda function to delete all objects from the bucket when the RequestType is Delete (see the sketch after this list).
- Identify the resource that was not deleted. From the S3 console, empty the S3 bucket and then delete it.
- Replace the EC2 and S3 bucket resources with a single AWS OpsWorks Stacks resource. Define a custom recipe for the stack to create and delete the EC2 instance and the S3 bucket.
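The custom resource option above hinges on emptying the bucket before CloudFormation tries to delete it. A minimal sketch of such a handler is shown below; the BucketName resource property is an assumption, and the response signalling back to CloudFormation (for example via the cfnresponse helper) is omitted.

```python
# Hedged sketch of a custom resource handler that empties the bucket when the
# stack is deleted. Response signalling to CloudFormation is omitted.
import boto3

s3 = boto3.resource("s3")

def handler(event, context):
    if event["RequestType"] == "Delete":
        bucket_name = event["ResourceProperties"]["BucketName"]  # assumed property
        bucket = s3.Bucket(bucket_name)
        # Remove all objects (and versions, if versioning is enabled) so the
        # bucket itself can be deleted by CloudFormation afterwards.
        bucket.objects.all().delete()
        bucket.object_versions.all().delete()
    # Create/Update need no action here; a SUCCESS response must still be sent
    # back to CloudFormation in a complete implementation.
```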
-
A DevOps Engineer just joined a new company that is already running workloads on Amazon EC2 instances. AWS has been adopted incrementally with no central governance. The Engineer must now assess how well the existing deployments comply with the following requirements:
– EC2 instances are running only approved AMIs.
– Amazon EBS volumes are encrypted.
– EC2 instances have an Owner tag.
– Root login over SSH is disabled on EC2 instances.
Which services should the Engineer use to perform this assessment with the LEAST amount of effort? (Choose two.)
- AWS Config
- Amazon GuardDuty
- AWS Systems Manager
- AWS Directory Service
- Amazon Inspector
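For the AWS Config portion of this assessment, the sketch below registers managed rules that map to the first three requirements (the SSH root login check is the kind of host-level finding Amazon Inspector covers). The rule names and the approved AMI ID are placeholders.

```python
# Hedged sketch: AWS Config managed rules for approved AMIs, EBS encryption,
# and the Owner tag. Rule names and the AMI ID are placeholders.
import json
import boto3

config = boto3.client("config")

config.put_config_rule(ConfigRule={
    "ConfigRuleName": "ec2-approved-amis",
    "Source": {"Owner": "AWS", "SourceIdentifier": "APPROVED_AMIS_BY_ID"},
    "InputParameters": json.dumps({"amiIds": "ami-0123456789abcdef0"}),
})
config.put_config_rule(ConfigRule={
    "ConfigRuleName": "ebs-volumes-encrypted",
    "Source": {"Owner": "AWS", "SourceIdentifier": "ENCRYPTED_VOLUMES"},
})
config.put_config_rule(ConfigRule={
    "ConfigRuleName": "ec2-owner-tag-present",
    "Source": {"Owner": "AWS", "SourceIdentifier": "REQUIRED_TAGS"},
    "InputParameters": json.dumps({"tag1Key": "Owner"}),
})
```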
-
A healthcare company has a critical application running in AWS. Recently, the company experienced some downtime. If it happens again, the company needs to be able to recover its application in another AWS Region. The application uses Elastic Load Balancing and Amazon EC2 instances. The company also maintains a custom AMI that contains its application. This AMI is changed frequently.
The workload is required to run in the primary region, unless there is a regional service disruption, in which case traffic should fail over to the new region. Additionally, the cost for the second region needs to be low. The RTO is 2 hours.
Which solution allows the company to fail over to another region in the event of a failure, and also meet the above requirements?
- Maintain a copy of the AMI from the main region in the backup region. Create an Auto Scaling group with one instance using a launch configuration that contains the copied AMI. Use an Amazon Route 53 record to direct traffic to the load balancer in the backup region in the event of failure, as required. Allow the Auto Scaling group to scale out as needed during a failure.
- Automate the copying of the AMI in the main region to the backup region. Generate an AWS Lambda function that will create an EC2 instance from the AMI and place it behind a load balancer. Using the same Lambda function, point the Amazon Route 53 record to the load balancer in the backup region. Trigger the Lambda function in the event of a failure.
- Place the AMI in a replicated Amazon S3 bucket. Generate an AWS Lambda function that can create a launch configuration and assign it to an already created Auto Scaling group. Have one instance in this Auto Scaling group ready to accept traffic. Trigger the Lambda function in the event of a failure. Use an Amazon Route 53 record and modify it with the same Lambda function to point to the load balancer in the backup region.
- Automate the copying of the AMI to the backup region. Create an AWS Lambda function that can create a launch configuration and assign it to an already created Auto Scaling group. Set the Auto Scaling group maximum size to 0 and only increase it with the Lambda function during a failure. Trigger the Lambda function in the event of a failure. Use an Amazon Route 53 record and modify it with the same Lambda function to point to the load balancer in the backup region.
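Several of the options begin by copying the frequently changing AMI into the backup region. A minimal sketch of that step is below; the regions, AMI ID, and name are placeholders rather than values from the question.

```python
# Hedged sketch: copy the latest application AMI from the primary region into
# the backup region. Regions, AMI ID, and name are placeholders.
import boto3

ec2_backup = boto3.client("ec2", region_name="us-west-2")  # assumed backup region

response = ec2_backup.copy_image(
    Name="app-ami-dr-copy",
    SourceImageId="ami-0123456789abcdef0",  # latest AMI in the primary region
    SourceRegion="us-east-1",               # assumed primary region
)
print("AMI ID in backup region:", response["ImageId"])
```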
-
A legacy web application stores access logs in a proprietary text format. One of the security requirements is to search application access events and correlate them with access data from many different systems. These searches should be near-real time.
Which solution offloads the processing load on the application server and provides a mechanism to search the data in near-real time?
- Install the Amazon CloudWatch Logs agent on the application server and use CloudWatch Events rules to search logs for access events. Use Amazon CloudSearch as an interface to search for events.
- Use the third-party tool Logstash with its file input plugin to monitor the application log file, then use a custom dissect filter on the agent to parse the log entries into JSON format. Output the events to Amazon ES to be searched. Use the Elasticsearch API for querying the data.
- Upload the log files to Amazon S3 by using the S3 sync command. Use Amazon Athena to define the structure of the data as a table, with Athena SQL queries to search for access events.
- Install the Amazon Kinesis Agent on the application server, configure it to monitor the log files, and send them to a Kinesis stream. Configure Kinesis to transform the data by using an AWS Lambda function, and forward events to Amazon ES for analysis. Use the Elasticsearch API for querying the data.
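The Kinesis-based option depends on a transformation function that turns each proprietary log line into a searchable JSON document. The sketch below shows one way such a Lambda could parse records arriving from the stream; the log format and field names are assumptions, and the bulk indexing into Amazon ES is omitted.

```python
# Hedged sketch: parse proprietary access-log lines arriving from a Kinesis
# stream into JSON documents. Log format is assumed; indexing into Amazon ES
# is omitted.
import base64

def handler(event, context):
    documents = []
    for record in event["Records"]:
        raw = base64.b64decode(record["kinesis"]["data"]).decode("utf-8")
        # Assumed proprietary format: "<timestamp>|<client-ip>|<user>|<action>"
        timestamp, client_ip, user, action = raw.strip().split("|")
        documents.append({
            "timestamp": timestamp,
            "client_ip": client_ip,
            "user": user,
            "action": action,
        })
    # A full implementation would bulk-index these documents into the Amazon ES
    # domain so they become searchable in near-real time.
    return {"parsed": len(documents)}
```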
-
A company runs a database on a single Amazon EC2 instance in a development environment. The data is stored on separate Amazon EBS volumes that are attached to the EC2 instance. An Amazon Route 53 A record has been created and configured to point to the EC2 instance. The company would like to automate the recovery of the database instance when an instance or Availability Zone (AZ) fails. The company also wants to keep its costs low. The RTO is 4 hours and the RPO is 12 hours.
Which solution should a DevOps Engineer implement to meet these requirements?
- Run the database in an Auto Scaling group with a minimum and maximum instance count of 1 in multiple AZs. Add a lifecycle hook to the Auto Scaling group and define an Amazon CloudWatch Events rule that is triggered when a lifecycle event occurs. Have the CloudWatch Events rule invoke an AWS Lambda function to detach or attach the Amazon EBS data volumes from the EC2 instance based on the event. Configure the EC2 instance UserData to mount the data volumes (retry on failure with a short delay), then start the database and update the Route 53 record.
- Run the database on two separate EC2 instances in different AZs with one active and the other as a standby. Attach the data volumes to the active instance. Configure an Amazon CloudWatch Events rule to invoke an AWS Lambda function on EC2 instance termination. The Lambda function launches a replacement EC2 instance. If the terminated instance was the active node, then the function attaches the data volumes to the standby node. Start the database and update the Route 53 record.
- Run the database in an Auto Scaling group with a minimum and maximum instance count of 1 in multiple AZs. Create an AWS Lambda function that is triggered by a scheduled Amazon CloudWatch Events rule every 4 hours to take a snapshot of the data volume and apply a tag. Have the instance UserData get the latest snapshot, create a new volume from it, and attach and mount the volume. Then start the database and update the Route 53 record.
- Run the database on two separate EC2 instances in different AZs. Configure one of the instances as a master and the other as a standby. Set up replication between the master and standby instances. Point the Route 53 record to the master. Configure an Amazon CloudWatch Events rule to invoke an AWS Lambda function upon the EC2 instance termination. The Lambda function launches a replacement EC2 instance. If the terminated instance was the active node, the function promotes the standby to master and points the Route 53 record to it.
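One of the options above relies on a scheduled Lambda that snapshots the data volume and tags the result so the replacement instance's UserData can find it. A minimal sketch of that function, assuming a hypothetical volume ID and tag, is shown below.

```python
# Hedged sketch: scheduled snapshot of the database data volume, tagged so a
# replacement instance can locate the latest copy. Volume ID and tag values
# are placeholders.
import boto3

ec2 = boto3.client("ec2")

DATA_VOLUME_ID = "vol-0123456789abcdef0"  # assumed data volume

def handler(event, context):
    snapshot = ec2.create_snapshot(
        VolumeId=DATA_VOLUME_ID,
        Description="Scheduled database data-volume backup",
    )
    ec2.create_tags(
        Resources=[snapshot["SnapshotId"]],
        Tags=[{"Key": "Backup", "Value": "database-latest"}],
    )
    return snapshot["SnapshotId"]
```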
-
A consulting company was hired to assess security vulnerabilities within a client company’s application and propose a plan to remediate all identified issues. The architecture is identified as follows: Amazon S3 storage for content, an Auto Scaling group of Amazon EC2 instances behind an Elastic Load Balancer with attached Amazon EBS storage, and an Amazon RDS MySQL database. There are also several AWS Lambda functions that communicate directly with the RDS database using connection string statements in the code.
The consultants identified the top security threat as follows: the application is not meeting its requirement to have encryption at rest.
What solution will address this issue with the LEAST operational overhead and will provide monitoring for potential future violations?
- Enable SSE encryption on the S3 buckets and RDS database. Enable OS-based encryption of data on EBS volumes. Configure Amazon Inspector agents on EC2 instances to report on insecure encryption ciphers. Set up AWS Config rules to periodically check for non-encrypted S3 objects.
- Configure the application to encrypt each file prior to storing on Amazon S3. Enable OS-based encryption of data on EBS volumes. Encrypt data on write to RDS. Run cron jobs on each instance to check for unencrypted data and notify via Amazon SNS. Use S3 Events to call an AWS Lambda function and verify if the file is encrypted.
- Enable Secure Sockets Layer (SSL) on the load balancer, ensure that AWS Lambda is using SSL to communicate to the RDS database, and enable S3 encryption. Configure the application to force SSL for incoming connections and configure RDS to only grant access if the session is encrypted. Configure Amazon Inspector agents on EC2 instances to report on insecure encryption ciphers.
- Enable SSE encryption on the S3 buckets, EBS volumes, and the RDS database. Store RDS credentials in AWS Systems Manager Parameter Store. Enable a policy on the S3 bucket to deny unencrypted puts. Set up AWS Config rules to periodically check for non-encrypted S3 objects and EBS volumes, and to ensure that RDS storage is encrypted.
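One preventive piece of the last option, denying unencrypted object uploads, can be expressed as a bucket policy. The sketch below applies such a policy with boto3; the bucket name is hypothetical, and SSE-KMS would use aws:kms in place of AES256.

```python
# Hedged sketch: deny any PutObject request that does not specify server-side
# encryption. Bucket name is a placeholder.
import json
import boto3

s3 = boto3.client("s3")
bucket = "example-content-bucket"  # hypothetical bucket name

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnencryptedPuts",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
        "Condition": {"StringNotEquals": {"s3:x-amz-server-side-encryption": "AES256"}},
    }],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```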
-
A new zero-day vulnerability was found in OpenSSL requiring the immediate patching of a production web fleet running on Amazon Linux. Currently, OS updates are performed manually on a monthly basis and deployed using updates to the production Auto Scaling Group’s launch configuration.
Which method should a DevOps Engineer use to update packages in-place without downtime?
- Use AWS CodePipeline and AWS CodeBuild to generate new copies of these packages, and update the Auto Scaling group’s launch configuration.
- Use Amazon Inspector to run “yum upgrade” on all running production instances, and manually update the AMI for the next maintenance window.
- Use Amazon EC2 Run Command to issue a package update command to all running production instances, and update the AMI for future deployments.
- Define a new AWS OpsWorks layer to match the running production instances, and use a recipe to issue a package update command to all running production instances.
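For the Run Command option, an in-place patch can be issued to the whole fleet with Systems Manager; the sketch below targets instances by a hypothetical tag and uses the AWS-RunShellScript document.

```python
# Hedged sketch: push an in-place OpenSSL update to all tagged production
# instances via Run Command. The tag key/value are assumptions.
import boto3

ssm = boto3.client("ssm")

ssm.send_command(
    Targets=[{"Key": "tag:Environment", "Values": ["production"]}],
    DocumentName="AWS-RunShellScript",
    Parameters={"commands": ["sudo yum update -y openssl"]},
    Comment="Emergency OpenSSL zero-day patch",
)
```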
-
A company runs a production application workload in a single AWS account that uses Amazon Route 53, AWS Elastic Beanstalk, and Amazon RDS. In the event of a security incident, the Security team wants the application workload to fail over to a new AWS account. The Security team also wants to block all access to the original account immediately, with no access to any AWS resources in the original AWS account, during forensic analysis.
What is the most cost-effective way to prepare to fail over to the second account prior to a security incident?
- Migrate the Amazon Route 53 configuration to a dedicated AWS account. Mirror the Elastic Beanstalk configuration in a different account. Enable RDS Database Read Replicas in a different account.
- Migrate the Amazon Route 53 configuration to a dedicated AWS account. Save/copy the Elastic Beanstalk configuration files in a different AWS account. Copy snapshots of the RDS Database to a different account.
- Save/copy the Amazon Route 53 configurations for use in a different AWS account after an incident. Save/copy Elastic Beanstalk configuration files to a different account. Enable the RDS database read replica in a different account.
- Save/copy the Amazon Route 53 configurations for use in a different AWS account after an incident. Mirror the configuration of Elastic Beanstalk in a different account. Copy snapshots of the RDS database to a different account.
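Two of the options involve copying RDS snapshots to the second account; the first step is sharing a manual snapshot with that account, sketched below with placeholder identifiers.

```python
# Hedged sketch: share a manual RDS snapshot with the standby account so it
# can be copied and restored there after an incident. Identifiers are
# placeholders.
import boto3

rds = boto3.client("rds")

rds.modify_db_snapshot_attribute(
    DBSnapshotIdentifier="prod-db-manual-snapshot",
    AttributeName="restore",
    ValuesToAdd=["123456789012"],  # hypothetical failover account ID
)
```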
-
Two teams are working together on different portions of an architecture and are using AWS CloudFormation to manage their resources. One team administers operating system-level updates and patches, while the other team manages application-level dependencies and updates. The Application team must use the most recent AMI when creating new instances and deploying the application.
What is the MOST scalable method for linking these two teams and processes?
- The Operating System team uses CloudFormation to create new versions of their AMIs and lists the Amazon Resource Names (ARNs) of the AMIs in an encrypted Amazon S3 object as part of the stack output section. The Application team uses a cross-stack reference to load the encrypted S3 object and obtain the most recent AMI ARNs.
- The Operating System team uses a CloudFormation stack to create an AWS CodePipeline pipeline that builds new AMIs, then places the latest AMI ARNs in an encrypted Amazon S3 object as part of the pipeline output. The Application team uses a cross-stack reference within their own CloudFormation template to get that S3 object location and obtain the most recent AMI ARNs to use when deploying their application.
- The Operating System team uses a CloudFormation stack to create an AWS CodePipeline pipeline that builds new AMIs. The team then places the AMI ARNs as parameters in AWS Systems Manager Parameter Store as part of the pipeline output. The Application team specifies a parameter of type SSM in their CloudFormation stack to obtain the most recent AMI ARN from the Parameter Store.
- The Operating System team maintains a nested stack that includes both the operating system and Application team templates. The Operating System team uses a stack update to deploy updates to the application stack whenever the Application team changes the application code.
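The Parameter Store option works by having the OS team's pipeline publish the newest image identifier to a well-known parameter name, which the Application team's template then consumes as a parameter of type AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>. The publishing side might look like the sketch below; the parameter name and AMI ID are placeholders.

```python
# Hedged sketch: publish the latest golden AMI ID to Systems Manager Parameter
# Store for downstream CloudFormation stacks to consume. Name and value are
# placeholders.
import boto3

ssm = boto3.client("ssm")

ssm.put_parameter(
    Name="/platform/golden-ami/latest",
    Value="ami-0123456789abcdef0",
    Type="String",
    Overwrite=True,
)
```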
-
The Development team has grown substantially in recent months and so has the number of projects that use separate code repositories. The current process involves configuring AWS CodePipeline manually. There have been service limit alerts regarding the number of Amazon S3 buckets that exist.
Which pipeline option will reduce S3 bucket sprawl alerts?
- Combine the multiple separate code repositories into a single one, and deploy using an AWS CodePipeline that has logic for each project.
- Create new pipelines by using the AWS API or AWS CLI, and configure them to use a single S3 bucket with separate prefixes for each project.
- Create a new pipeline in a different region for each project to bypass the service limits for S3 buckets in a single region.
- Create a new pipeline and S3 bucket for each project by using the AWS API or AWS CLI to bypass the service limits for S3 buckets in a single account.
-
A startup company is developing a web application on AWS. It plans to use Amazon RDS for persistence and deploy the application to Amazon EC2 with an Auto Scaling group. The company would also like to separate the environments for development, testing, and production.
What is the MOST secure approach to manage the application configuration?
- Create a property file to include the configuration and the encrypted passwords. Check in the property file to the source repository, package the property file with the application, and deploy the application. Create an environment tag for the EC2 instances and tag the instances respectively. The application will extract the necessary property values based on the environment tag.
- Create a property file for each environment to include the environment-specific configuration and an encrypted password. Check in the property files to the source repository. During deployment, use only the environment-specific property file with the application. The application will read the needed property values from the deployed property file.
- Create a property file for each environment to include the environment-specific configuration. Create a private Amazon S3 bucket and save the property files in the bucket. Save the passwords in the bucket with AWS KMS encryption. During deployment, the application will read the needed property values from the environment-specific property file in the S3 bucket.
- Create a property file for each environment to include the environment-specific configuration. Create a private Amazon S3 bucket and save the property files in the bucket. Save the encrypted passwords in the AWS Systems Manager Parameter Store. Create an environment tag for the EC2 instances and tag the instances respectively. The application will read the needed property values from the environment-specific property file in the S3 bucket and the parameter store.
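In the last option, the application resolves its secrets at runtime from Parameter Store rather than from files. A minimal sketch of that lookup, with an assumed parameter naming scheme and environment variable, is shown below.

```python
# Hedged sketch: read the environment-specific database password from a
# SecureString parameter at application startup. Names are assumptions.
import os
import boto3

ssm = boto3.client("ssm")
environment = os.environ.get("APP_ENV", "development")  # e.g. derived from the instance's tag

response = ssm.get_parameter(
    Name=f"/webapp/{environment}/db-password",
    WithDecryption=True,  # decrypts the SecureString with its KMS key
)
db_password = response["Parameter"]["Value"]
```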
-
A DevOps Engineer is using AWS CodeDeploy across a fleet of Amazon EC2 instances in an EC2 Auto Scaling group. The associated CodeDeploy deployment group, which is integrated with EC2 Auto Scaling, is configured to perform in-place deployments with CodeDeployDefault.OneAtATime. During an ongoing new deployment, the Engineer discovers that, although the overall deployment finished successfully, two out of five instances have the previous application revision deployed. The other three instances have the newest application revision.
What is likely causing this issue?
- The two affected instances failed to fetch the new deployment.
- A failed AfterInstall lifecycle event hook caused the CodeDeploy agent to roll back to the previous version on the affected instances.
- The CodeDeploy agent was not installed in two affected instances.
- EC2 Auto Scaling launched two new instances while the new deployment had not yet finished, causing the previous version to be deployed on the affected instances.
-
A company runs a three-tier web application in its production environment, which is built on a single AWS CloudFormation template made up of Amazon EC2 instances behind an ELB Application Load Balancer. The instances run in an EC2 Auto Scaling group across multiple Availability Zones. Data is stored in an Amazon RDS Multi-AZ DB instance with read replicas. Amazon Route 53 manages the application’s public DNS record.
A DevOps Engineer must create a workflow to mitigate a failed software deployment by rolling back changes in the production environment when a software cutover occurs for new application software.
What steps should the Engineer perform to meet these requirements with the LEAST amount of downtime?
- Use CloudFormation to deploy an additional staging environment and configure the Route 53 DNS with weighted records. During cutover, change the Route 53 A record weights to achieve an even traffic distribution between the two environments. Validate the traffic in the new environment and immediately terminate the old environment if tests are successful.
- Use a single AWS Elastic Beanstalk environment to deploy the staging and production environments. Update the environment by uploading the ZIP file with the new application code. Swap the Elastic Beanstalk environment CNAME. Validate the traffic in the new environment and immediately terminate the old environment if tests are successful.
- Use a single AWS Elastic Beanstalk environment and an AWS OpsWorks environment to deploy the staging and production environments. Update the environment by uploading the ZIP file with the new application code into the Elastic Beanstalk environment deployed with the OpsWorks stack. Validate the traffic in the new environment and immediately terminate the old environment if tests are successful.
- Use AWS CloudFormation to deploy an additional staging environment, and configure the Route 53 DNS with weighted records. During cutover, increase the weight distribution to have more traffic directed to the new staging environment as workloads are successfully validated. Keep the old production environment in place until the new staging environment handles all traffic.
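The weighted-record options boil down to adjusting Route 53 record weights during cutover. The sketch below shifts weight toward the new environment; the hosted zone ID, record name, and load balancer DNS name are placeholders.

```python
# Hedged sketch: raise the weight of the new environment's weighted record
# during cutover. Zone ID, record name, and target DNS name are placeholders.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",
    ChangeBatch={
        "Comment": "Shift more traffic to the new environment",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "CNAME",
                "SetIdentifier": "new-environment",
                "Weight": 50,  # raise toward 100 as validation succeeds
                "TTL": 60,
                "ResourceRecords": [{"Value": "new-alb-123.us-east-1.elb.amazonaws.com"}],
            },
        }],
    },
)
```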
-
A company wants to adopt a methodology for handling security threats from leaked and compromised IAM access keys. The DevOps Engineer has been asked to automate the process of acting upon compromised access keys, which includes identifying users, revoking their permissions, and sending a notification to the Security team.
Which of the following would achieve this goal?
- Use the AWS Trusted Advisor generated security report for access keys. Use Amazon EMR to run analytics on the report. Identify compromised IAM access keys and delete them. Use Amazon CloudWatch with an EMR Cluster State Change event to notify the Security team.
- Use AWS Trusted Advisor to identify compromised access keys. Create an Amazon CloudWatch Events rule with Trusted Advisor as the event source, and AWS Lambda and Amazon SNS as targets. Use AWS Lambda to delete compromised IAM access keys and Amazon SNS to notify the Security team.
- Use the AWS Trusted Advisor generated security report for access keys. Use AWS Lambda to scan through the report. Use scan result inside AWS Lambda and delete compromised IAM access keys. Use Amazon SNS to notify the Security team.
- Use AWS Lambda with a third-party library to scan for compromised access keys. Use scan result inside AWS Lambda and delete compromised IAM access keys. Create Amazon CloudWatch custom metrics for compromised keys. Create a CloudWatch alarm on the metrics to notify the Security team.
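The Lambda-based remediation described in the second option needs to revoke the reported key and notify the Security team. A minimal sketch, with an assumed event shape and topic ARN, is shown below.

```python
# Hedged sketch: delete a compromised IAM access key and notify the Security
# team via SNS. The incoming event fields and topic ARN are assumptions.
import boto3

iam = boto3.client("iam")
sns = boto3.client("sns")

TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:security-alerts"  # hypothetical

def handler(event, context):
    user_name = event["user_name"]        # assumed fields extracted from the
    access_key_id = event["access_key"]   # Trusted Advisor finding
    iam.delete_access_key(UserName=user_name, AccessKeyId=access_key_id)
    sns.publish(
        TopicArn=TOPIC_ARN,
        Subject="Compromised IAM access key revoked",
        Message=f"Deleted access key {access_key_id} for user {user_name}.",
    )
```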
-
A company wants to use Amazon ECS to provide a Docker container runtime environment. For compliance reasons, all Amazon EBS volumes used in the ECS cluster must be encrypted. Rolling updates will be made to the cluster instances and the company wants the instances drained of all tasks before being terminated.
How can these requirements be met? (Choose two.)
- Modify the default ECS AMI user data to create a script that executes docker rm -f {id} for all running container instances. Copy the script to the /etc/init.d/rc.d directory and execute chkconfig, enabling the script to run during operating system shutdown.
- Use AWS CodePipeline to build a pipeline that discovers the latest Amazon-provided ECS AMI, then copies the image to an encrypted AMI outputting the encrypted AMI ID. Use the encrypted AMI ID when deploying the cluster.
- Copy the default AWS CloudFormation template that ECS uses to deploy cluster instances. Modify the template resource EBS configuration setting to set ‘Encrypted: True’ and include the AWS KMS alias: ‘aws/ebs’ to encrypt the AMI.
- Create an Auto Scaling lifecycle hook backed by an AWS Lambda function that uses the AWS SDK to mark a terminating instance as DRAINING. Prevent the lifecycle hook from completing until the running tasks on the instance are zero.
- Create an IAM role that allows the action ECS::EncryptedImage. Configure the AWS CLI and a profile to use this role. Start the cluster using the AWS CLI providing the --use-encrypted-image and --kms-key arguments to the create-cluster ECS command.
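The draining behaviour described in the lifecycle-hook option can be implemented with a small Lambda function like the sketch below: it marks the terminating container instance as DRAINING and only completes the hook once no tasks remain. The cluster name is a placeholder, and the retry loop (for example via the hook's SNS topic) is only hinted at.

```python
# Hedged sketch: drain a terminating ECS container instance and complete the
# Auto Scaling lifecycle hook once its task count reaches zero. Cluster name
# is a placeholder.
import boto3

ecs = boto3.client("ecs")
autoscaling = boto3.client("autoscaling")

CLUSTER = "production-cluster"  # hypothetical cluster name

def handler(event, context):
    detail = event["detail"]
    instance_id = detail["EC2InstanceId"]

    # Find the ECS container instance backing the terminating EC2 instance.
    arns = ecs.list_container_instances(cluster=CLUSTER)["containerInstanceArns"]
    described = ecs.describe_container_instances(
        cluster=CLUSTER, containerInstances=arns)["containerInstances"]
    target = next(ci for ci in described if ci["ec2InstanceId"] == instance_id)

    if target["runningTasksCount"] == 0:
        # Nothing left on the instance: allow termination to proceed.
        autoscaling.complete_lifecycle_action(
            LifecycleHookName=detail["LifecycleHookName"],
            AutoScalingGroupName=detail["AutoScalingGroupName"],
            LifecycleActionToken=detail["LifecycleActionToken"],
            LifecycleActionResult="CONTINUE",
        )
    else:
        # Stop new task placement and wait for running tasks to drain before
        # the hook is completed on a later invocation.
        ecs.update_container_instances_state(
            cluster=CLUSTER,
            containerInstances=[target["containerInstanceArn"]],
            status="DRAINING",
        )
```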
-
A government agency has multiple AWS accounts, many of which store sensitive citizen information. A Security team wants to detect anomalous account and network activities (such as SSH brute force attacks) in any account and centralize that information in a dedicated security account. Event information should be stored in an Amazon S3 bucket in the security account, which is monitored by the department’s Security Information and Event Management (SIEM) system.
How can this be accomplished?
- Enable Amazon Macie in every account. Configure the security account as the Macie Administrator for every member account using invitation/acceptance. Create an Amazon CloudWatch Events rule in the security account to send all findings to Amazon Kinesis Data Firehose, which should push the findings to the S3 bucket.
- Enable Amazon Macie in the security account only. Configure the security account as the Macie Administrator for every member account using invitation/acceptance. Create an Amazon CloudWatch Events rule in the security account to send all findings to Amazon Kinesis Data Streams. Write an application using KCL to read data from the Kinesis Data Streams and write to the S3 bucket.
- Enable Amazon GuardDuty in every account. Configure the security account as the GuardDuty Administrator for every member account using invitation/acceptance. Create an Amazon CloudWatch rule in the security account to send all findings to Amazon Kinesis Data Firehose, which will push the findings to the S3 bucket.
- Enable Amazon GuardDuty in the security account only. Configure the security account as the GuardDuty Administrator for every member account using invitation/acceptance. Create an Amazon CloudWatch rule in the security account to send all findings to Amazon Kinesis Data Streams. Write an application using KCL to read data from Kinesis Data Streams and write to the S3 bucket.
-
An AWS CodePipeline pipeline has implemented a code release process. The pipeline is integrated with AWS CodeDeploy to deploy versions of an application to multiple Amazon EC2 instances for each CodePipeline stage.
During a recent deployment, the pipeline failed due to a CodeDeploy issue. The DevOps team wants to improve monitoring and notifications during deployment to decrease resolution times.
What should the DevOps Engineer do to create notifications when issues are discovered?
- Implement AWS CloudWatch Logs for CodePipeline and CodeDeploy, create an AWS Config rule to evaluate code deployment issues, and create an Amazon SNS topic to notify stakeholders of deployment issues.
- Implement AWS CloudWatch Events for CodePipeline and CodeDeploy, create an AWS Lambda function to evaluate code deployment issues, and create an Amazon SNS topic to notify stakeholders of deployment issues.
- Implement AWS CloudTrail to record CodePipeline and CodeDeploy API call information, create an AWS Lambda function to evaluate code deployment issues, and create an Amazon SNS topic to notify stakeholders of deployment issues.
- Implement AWS CloudWatch Events for CodePipeline and CodeDeploy, create an Amazon Inspector assessment target to evaluate code deployment issues, and create an Amazon SNS topic to notify stakeholders of deployment issues.
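The CloudWatch Events approach in the second option can be wired up with a rule that matches failed CodeDeploy deployments and publishes to an SNS topic (optionally via a Lambda function that enriches the event). The rule name and topic ARN below are placeholders.

```python
# Hedged sketch: notify stakeholders when a CodeDeploy deployment fails.
# Rule name and SNS topic ARN are placeholders.
import json
import boto3

events = boto3.client("events")

events.put_rule(
    Name="codedeploy-failure-notifications",
    EventPattern=json.dumps({
        "source": ["aws.codedeploy"],
        "detail-type": ["CodeDeploy Deployment State-change Notification"],
        "detail": {"state": ["FAILURE"]},
    }),
    State="ENABLED",
)
events.put_targets(
    Rule="codedeploy-failure-notifications",
    Targets=[{"Id": "notify-devops",
              "Arn": "arn:aws:sns:us-east-1:123456789012:deployment-issues"}],
)
```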
-
A company runs an application on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones in us-east-1. The application stores data in an Amazon RDS MySQL Multi-AZ DB instance.
A DevOps engineer wants to modify the current solution and create a hot standby of the environment in another region to minimize downtime if a problem occurs in us-east-1.
Which combination of steps should the DevOps engineer take to meet these requirements? (Choose three.)
- Add a health check to the Amazon Route 53 alias record to evaluate the health of the primary region. Use AWS Lambda, configured with an Amazon CloudWatch Events trigger, to promote the Amazon RDS read replica in the disaster recovery region.
- Create a new Application Load Balancer and Amazon EC2 Auto Scaling group in the disaster recovery region.
- Extend the current Amazon EC2 Auto Scaling group to the subnets in the disaster recovery region.
- Enable multi-region failover for the RDS configuration for the database instance.
- Deploy a read replica of the RDS instance in the disaster recovery region.
- Create an AWS Lambda function to evaluate the health of the primary region. If it fails, modify the Amazon Route 53 record to point at the disaster recovery region and promote the RDS read replica.
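Several of the options end with promoting the cross-region read replica when the primary region fails. That single action is sketched below; the replica identifier and region are placeholders.

```python
# Hedged sketch: promote the read replica in the disaster recovery region to
# a standalone primary during failover. Identifier and region are placeholders.
import boto3

rds = boto3.client("rds", region_name="us-west-2")  # assumed DR region

rds.promote_read_replica(DBInstanceIdentifier="app-db-replica-dr")
```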
-
A DevOps Engineer needs to design and implement a backup mechanism for Amazon EFS.
The Engineer is given the following requirements:
– The backup should run on schedule.
– The backup should be stopped if the backup window expires.
– The backup should be stopped if the backup completes before the backup window.
– The backup logs should be retained for further analysis.
– The design should support highly available and fault-tolerant paradigms.
– Administrators should be notified with backup metadata.
Which design will meet these requirements?
- Use AWS Lambda with an Amazon CloudWatch Events rule for scheduling the start/stop of backup activity. Run backup scripts on Amazon EC2 in an Auto Scaling group. Use Auto Scaling lifecycle hooks and the SSM Run Command on EC2 for uploading backup logs to Amazon S3. Use Amazon SNS to notify administrators with backup activity metadata.
- Use Amazon SWF with an Amazon CloudWatch Events rule for scheduling the start/stop of backup activity. Run backup scripts on Amazon EC2 in an Auto Scaling group. Use Auto Scaling lifecycle hooks and the SSM Run Command on EC2 for uploading backup logs to Amazon Redshift. Use CloudWatch Alarms to notify administrators with backup activity metadata.
- Use AWS Data Pipeline with an Amazon CloudWatch Events rule for scheduling the start/stop of backup activity. Run backup scripts on Amazon EC2 in a single Availability Zone. Use Auto Scaling lifecycle hooks and the SSM Run Command on EC2 for uploading the backup logs to Amazon RDS. Use Amazon SNS to notify administrators with backup activity metadata.
- Use AWS CodePipeline with an Amazon CloudWatch Events rule for scheduling the start/stop of backup activity. Run backup scripts on Amazon EC2 in a single Availability Zone. Use Auto Scaling lifecycle hooks and the SSM Run Command on Amazon EC2 for uploading backup logs to Amazon S3. Use Amazon SES to notify admins with backup activity metadata.