Amazon AWS Certified DevOps Engineer - Professional DOP-C01
#81 (Accuracy: 100% / 3 votes)
A DevOps Engineer administers an application that manages video files for a video production company. The application runs on Amazon EC2 instances behind an ELB Application Load Balancer. The instances run in an Auto Scaling group across multiple Availability Zones. Data is stored in an Amazon RDS PostgreSQL Multi-AZ DB instance, and the video files are stored in an Amazon S3 bucket.
On a typical day, 50 GB of new video files are added to the S3 bucket. The Engineer must implement a multi-region disaster recovery plan with the least data loss and the lowest recovery times. The current application infrastructure is already described using AWS CloudFormation.
Which deployment option should the Engineer choose to meet the uptime and recovery objectives for the system?
  • A. Launch the application from the CloudFormation template in the second region, which sets the capacity of the Auto Scaling group to 1. Create an Amazon RDS read replica in the second region. In the second region, enable cross-region replication between the original S3 bucket and a new S3 bucket. To fail over, promote the read replica as master. Update the CloudFormation stack and increase the capacity of the Auto Scaling group.
  • B. Launch the application from the CloudFormation template in the second region, which sets the capacity of the Auto Scaling group to 1. Create a scheduled task to take daily Amazon RDS cross-region snapshots to the second region. In the second region, enable cross-region replication between the original S3 bucket and Amazon Glacier. In a disaster, launch a new application stack in the second region and restore the database from the most recent snapshot.
  • C. Launch the application from the CloudFormation template in the second region, which sets the capacity of the Auto Scaling group to 1. Use Amazon CloudWatch Events to schedule a nightly task to take a snapshot of the database, copy the snapshot to the second region, and replace the DB instance in the second region from the snapshot. In the second region, enable cross-region replication between the original S3 bucket and a new S3 bucket. To fail over, increase the capacity of the Auto Scaling group.
  • D. Use Amazon CloudWatch Events to schedule a nightly task to take a snapshot of the database and copy the snapshot to the second region. Create an AWS Lambda function that copies each object to a new S3 bucket in the second region in response to S3 event notifications. In the second region, launch the application from the CloudFormation template and restore the database from the most recent snapshot.
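For reference, option A's failover sequence can be scripted. The sketch below is a minimal illustration using boto3; the region, replica identifier, Auto Scaling group name, and capacity values are all assumptions, not values from the question.

```python
import boto3

DR_REGION = "us-west-2"           # assumed DR region
REPLICA_ID = "video-app-replica"  # assumed read replica identifier
ASG_NAME = "video-app-asg"        # assumed Auto Scaling group name

rds = boto3.client("rds", region_name=DR_REGION)
asg = boto3.client("autoscaling", region_name=DR_REGION)

# Promote the cross-region read replica to a standalone, writable primary.
rds.promote_read_replica(DBInstanceIdentifier=REPLICA_ID)

# Wait until the promoted instance is available before shifting traffic.
rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier=REPLICA_ID)

# Scale the pilot-light Auto Scaling group up to production capacity.
asg.update_auto_scaling_group(
    AutoScalingGroupName=ASG_NAME,
    MinSize=2,
    MaxSize=6,
    DesiredCapacity=4,
)
```

Promotion makes the replica writable, which is why the Auto Scaling capacity is raised only after the database is available.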
#82 (Accuracy: 100% / 2 votes)
A consulting company was hired to assess security vulnerabilities within a client company's application and propose a plan to remediate all identified issues. The architecture is identified as follows: Amazon S3 storage for content, an Auto Scaling group of Amazon EC2 instances behind an Elastic Load Balancer with attached Amazon EBS storage, and an Amazon RDS MySQL database. There are also several AWS Lambda functions that communicate directly with the RDS database using connection string statements in the code.

The consultants identified the top security threat as follows: the application is not meeting its requirement to have encryption at rest.

What solution will address this issue with the LEAST operational overhead and will provide monitoring for potential future violations?
  • A. Enable SSE encryption on the S3 buckets and RDS database. Enable OS-based encryption of data on EBS volumes. Configure Amazon Inspector agents on EC2 instances to report on insecure encryption ciphers. Set up AWS Config rules to periodically check for non-encrypted S3 objects.
  • B. Configure the application to encrypt each file prior to storing on Amazon S3. Enable OS-based encryption of data on EBS volumes. Encrypt data on write to RDS. Run cron jobs on each instance to check for unencrypted data and notify via Amazon SNS. Use S3 Events to call an AWS Lambda function and verify if the file is encrypted.
  • C. Enable Secure Sockets Layer (SSL) on the load balancer, ensure that AWS Lambda is using SSL to communicate to the RDS database, and enable S3 encryption. Configure the application to force SSL for incoming connections and configure RDS to only grant access if the session is encrypted. Configure Amazon Inspector agents on EC2 instances to report on insecure encryption ciphers.
  • D. Enable SSE encryption on the S3 buckets, EBS volumes, and the RDS database. Store RDS credentials in EC2 Parameter Store. Enable a policy on the S3 bucket to deny unencrypted puts. Set up AWS Config rules to periodically check for non-encrypted S3 objects and EBS volumes, and to ensure that RDS storage is encrypted.
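As a concrete illustration of the monitoring half of option D, the sketch below uses boto3 to deploy the three AWS Config managed rules that flag unencrypted S3 buckets, EBS volumes, and RDS storage. The rule names are arbitrary; only the AWS-managed rule identifiers are fixed.

```python
import boto3

config = boto3.client("config")

# Managed-rule identifiers for the three encryption-at-rest checks.
RULES = {
    "s3-sse-enabled": "S3_BUCKET_SERVER_SIDE_ENCRYPTION_ENABLED",
    "ebs-encrypted-volumes": "ENCRYPTED_VOLUMES",
    "rds-storage-encrypted": "RDS_STORAGE_ENCRYPTED",
}

for name, identifier in RULES.items():
    config.put_config_rule(
        ConfigRule={
            "ConfigRuleName": name,
            "Source": {"Owner": "AWS", "SourceIdentifier": identifier},
        }
    )
```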
#83 (Accuracy: 95% / 8 votes)
A company is deploying a new application that uses Amazon EC2 instances. The company needs a solution to query application logs and AWS account API activity.

Which solution will meet these requirements?
  • A. Use the Amazon CloudWatch agent to send logs from the EC2 instances to Amazon CloudWatch Logs. Configure AWS CloudTrail to deliver the API logs to Amazon S3. Use CloudWatch to query both sets of logs.
  • B. Use the Amazon CloudWatch agent to send logs from the EC2 instances to Amazon CloudWatch Logs. Configure AWS CloudTrail to deliver the API logs to CloudWatch Logs. Use CloudWatch Logs Insights to query both sets of logs.
  • C. Use the Amazon CloudWatch agent to send logs from the EC2 instances to Amazon Kinesis. Configure AWS CloudTrail to deliver the API logs to Kinesis. Use Kinesis to load the data into Amazon Redshift. Use Amazon Redshift to query both sets of logs.
  • D. Use the Amazon CloudWatch agent to send logs from the EC2 instances to Amazon S3. Use AWS CloudTrail to deliver the API logs to Amazon S3. Use Amazon Athena to query both sets of logs in Amazon S3.
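Option B hinges on both log sources landing in CloudWatch Logs so that a single CloudWatch Logs Insights query can span them. A minimal sketch, assuming hypothetical log group names for the agent-shipped application logs and the CloudTrail delivery:

```python
import time
import boto3

logs = boto3.client("logs")

QUERY = "fields @timestamp, @message | sort @timestamp desc | limit 20"

now = int(time.time())
resp = logs.start_query(
    # Both log groups are assumed names, not values from the question.
    logGroupNames=["/app/ec2/application", "/aws/cloudtrail/api-activity"],
    startTime=now - 3600,  # query the last hour
    endTime=now,
    queryString=QUERY,
)

# Poll until the query finishes, then print the matched events.
while True:
    results = logs.get_query_results(queryId=resp["queryId"])
    if results["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(1)

for row in results.get("results", []):
    print({f["field"]: f["value"] for f in row})
```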
#84 (Accuracy: 95% / 4 votes)
A company runs several applications across multiple AWS accounts in an organization in AWS Organizations. Some of the resources are not tagged properly, and the company's finance team cannot determine which costs are associated with which applications. A DevOps engineer must remediate this issue and prevent it from recurring.
Which combination of actions should the DevOps engineer take to meet these requirements? (Choose two.)
  • A. Activate the user-defined cost allocation tags in each AWS account.
  • B. Create and attach an SCP that requires a specific tag.
  • C. Define each line of business (LOB) in AWS Budgets. Assign the required tag to each resource.
  • D. Scan all accounts with Tag Editor. Assign the required tag to each resource.
  • E. Use the budget report to find untagged resources. Assign the required tag to each resource.
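For the preventive half (option B), an SCP can deny resource creation when the required tag is missing. The sketch below registers such a policy with boto3; the tag key CostCenter and the policy scope (EC2 launches only) are illustrative assumptions.

```python
import json
import boto3

# Deny launching EC2 instances unless a CostCenter tag is supplied in the
# request. The Null condition is true when the tag key is absent.
SCP = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {"Null": {"aws:RequestTag/CostCenter": "true"}},
        }
    ],
}

org = boto3.client("organizations")
org.create_policy(
    Name="require-costcenter-tag",
    Description="Deny untagged EC2 launches",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(SCP),
)
```

The policy still has to be attached to the relevant OUs or accounts (organizations.attach_policy) before it takes effect.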
#85 (Accuracy: 100% / 3 votes)
A development team is building a full-stack serverless web application. The serverless application will consist of a backend REST API and a front end that is built with a single-page application (SPA) framework.
The team wants to use a Git-based workflow to develop and deploy the application.
The team has created an AWS CodeCommit repository to store the application code. The team wants to use multiple development branches to test new features. In addition, the team wants to ensure that code changes on the development branches are deployed to the different development environments. Code changes to the main branches must be released automatically to production.
The development deployments must be available as a subdomain of the main application website, which is hosted in an Amazon Route 53 public hosted zone.

What should a DevOps engineer do to meet these requirements?
  • A. Create an application in the AWS Amplify console, and connect the CodeCommit repository. Create a feature branch deployment for each of the environments. Connect the Route 53 domain to the application. Activate the automatic creation of subdomains.
  • B. Create a single AWS CodePipeline pipeline that uses the CodeCommit repository as a source. Configure the pipeline so that it deploys to different environments based on the changed branch. Create an AWS Lambda function that creates a new subdomain based on the source branch name. Invoke the Lambda function in the deployment workflow.
  • C. Create an application in AWS Elastic Beanstalk that uses the CodeCommit repository as a source. Configure Elastic Beanstalk so that it creates a new application environment based on the changed branch. Connect the Route 53 domain to the application. Activate the automatic creation of subdomains.
  • D. Create multiple AWS CodePipeline pipelines that use the CodeCommit repository as a source. Configure each pipeline so that it deploys to a specific environment based on the configured branch. Configure an AWS CodeDeploy step in the pipeline to deploy the application components and to create the Route 53 public hosted zone.
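Option A maps almost directly onto the Amplify API. The sketch below is a rough outline using boto3; the app name, repository URL, domain, and branch patterns are all placeholders, and auto-created subdomains additionally require an IAM role that lets Amplify manage Route 53 records.

```python
import boto3

amplify = boto3.client("amplify")

# Hypothetical names; a service role (iamServiceRoleArn) is normally also
# required so Amplify can read the CodeCommit repository.
app = amplify.create_app(
    name="spa-app",
    repository="https://git-codecommit.us-east-1.amazonaws.com/v1/repos/spa-app",
)
app_id = app["app"]["appId"]

# The main branch releases automatically to production.
amplify.create_branch(appId=app_id, branchName="main")

# Attach the Route 53 domain and auto-create subdomains for branches that
# match the given patterns (option A's "automatic creation of subdomains").
amplify.create_domain_association(
    appId=app_id,
    domainName="example.com",
    enableAutoSubDomain=True,
    autoSubDomainCreationPatterns=["dev", "feature/*"],
    subDomainSettings=[{"prefix": "www", "branchName": "main"}],
)
```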
#86 (Accuracy: 100% / 2 votes)
A company has developed an AWS Lambda function that handles orders received through an API. The company is using AWS CodeDeploy to deploy the Lambda function as the final stage of a CI/CD pipeline.
A DevOps Engineer has noticed there are intermittent failures of the ordering API for a few seconds after deployment. After some investigation, the DevOps Engineer believes the failures are due to database changes not having fully propagated before the Lambda function begins executing.

How should the DevOps Engineer overcome this?
  • A. Add a BeforeAllowTraffic hook to the AppSpec file that tests and waits for any necessary database changes before traffic can flow to the new version of the Lambda function
  • B. Add an AfterAllowTraffic hook to the AppSpec file that forces traffic to wait for any pending database changes before allowing the new version of the Lambda function to respond
  • C. Add a BeforeInstall hook to the AppSpec file that tests and waits for any necessary database changes before deploying the new version of the Lambda function
  • D. Add a ValidateService hook to the AppSpec file that inspects incoming traffic and rejects the payload if dependent services, such as the database, are not yet ready
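A BeforeAllowTraffic hook (option A) is itself a Lambda function that reports its verdict back to CodeDeploy. A minimal Python sketch, where database_changes_propagated is a hypothetical readiness probe you would implement against your own schema-migration mechanism:

```python
import boto3

codedeploy = boto3.client("codedeploy")

def database_changes_propagated():
    # Hypothetical readiness probe: poll a schema-migrations table,
    # a version endpoint, or similar, and return True once changes are live.
    return True

def handler(event, context):
    # CodeDeploy invokes the hook with these two identifiers.
    deployment_id = event["DeploymentId"]
    execution_id = event["LifecycleEventHookExecutionId"]

    status = "Succeeded" if database_changes_propagated() else "Failed"

    # Report the verdict back to CodeDeploy; traffic shifts to the new
    # Lambda version only if the hook reports Succeeded.
    codedeploy.put_lifecycle_event_hook_execution_status(
        deploymentId=deployment_id,
        lifecycleEventHookExecutionId=execution_id,
        status=status,
    )
```

The function's ARN is referenced under Hooks -> BeforeAllowTraffic in the AppSpec file.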
#87 (Accuracy: 100% / 3 votes)
A DevOps engineer notices that all Amazon EC2 instances running behind an Application Load Balancer in an Auto Scaling group are failing to respond to user requests. The EC2 instances are also failing target group HTTP health checks. Upon inspection, the engineer notices the application process was not running on any of the EC2 instances, and there are a significant number of out-of-memory messages in the system logs. The engineer needs to improve the resilience of the application to cope with a potential application memory leak. Monitoring and notifications should be enabled to alert when there is an issue.
Which combination of actions will meet these requirements? (Choose two.)
  • A. Change the Auto Scaling configuration to replace the instances when they fail the load balancer's health checks.
  • B. Change the target group health check HealthCheckIntervalSeconds parameter to reduce the interval between health checks.
  • C. Change the target group health checks from HTTP to TCP to check if the port where the application is listening is reachable.
  • D. Enable the available memory consumption metric within the Amazon CloudWatch dashboard for the entire Auto Scaling group. Create an alarm when the memory utilization is high. Associate an Amazon SNS topic to the alarm to receive notifications when the alarm goes off.
  • E. Use the Amazon CloudWatch agent to collect the memory utilization of the EC2 instances in the Auto Scaling group. Create an alarm when the memory utilization is high and associate an Amazon SNS topic to receive a notification.
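The alarm half of option E can be created with one boto3 call once the CloudWatch agent is publishing memory metrics. This sketch assumes the agent's default CWAgent namespace with the AutoScalingGroupName dimension appended; the group name, threshold, and SNS topic ARN are placeholders.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm on the memory metric published by the CloudWatch agent.
cloudwatch.put_metric_alarm(
    AlarmName="asg-high-memory",
    Namespace="CWAgent",
    MetricName="mem_used_percent",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "video-app-asg"}],
    Statistic="Average",
    Period=60,
    EvaluationPeriods=5,          # five consecutive 1-minute breaches
    Threshold=90.0,               # percent memory used
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
```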
#88 (Accuracy: 93% / 6 votes)
A company recently migrated its legacy application from on-premises to AWS. The application is hosted on Amazon EC2 instances behind an Application Load Balancer, which is behind Amazon API Gateway.
The company wants to ensure users experience minimal disruptions during any deployment of a new version of the application. The company also wants to ensure it can quickly roll back updates if there is an issue.
Which solution will meet these requirements with MINIMAL changes to the application?
  • A. Introduce changes as a separate environment parallel to the existing one. Configure API Gateway to use a canary release deployment to send a small subset of user traffic to the new environment.
  • B. Introduce changes as a separate environment parallel to the existing one. Update the application's DNS alias records to point to the new environment.
  • C. Introduce changes as a separate target group behind the existing Application Load Balancer. Configure API Gateway to route user traffic to the new target group in steps.
  • D. Introduce changes as a separate target group behind the existing Application Load Balancer. Configure API Gateway to route all traffic to the Application Load Balancer, which then sends the traffic to the new target group.
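Option A's canary release is configured on the API Gateway stage at deployment time. A minimal sketch, with a hypothetical REST API ID and stage name; 10% is an arbitrary canary weight:

```python
import boto3

apigw = boto3.client("apigateway")

REST_API_ID = "a1b2c3d4e5"  # assumed REST API ID
STAGE = "prod"              # assumed stage name

# Deploy the new version as a canary receiving 10% of stage traffic.
apigw.create_deployment(
    restApiId=REST_API_ID,
    stageName=STAGE,
    canarySettings={"percentTraffic": 10.0},
)

# Instant rollback: stop routing any traffic to the canary.
apigw.update_stage(
    restApiId=REST_API_ID,
    stageName=STAGE,
    patchOperations=[
        {"op": "replace", "path": "/canarySettings/percentTraffic", "value": "0.0"}
    ],
)
```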
#89 (Accuracy: 100% / 3 votes)
A company develops and maintains a web application using Amazon EC2 instances and an Amazon RDS for SQL Server DB instance in a single Availability Zone.
The resources need to run only when new deployments are being tested using AWS CodePipeline. Testing occurs one or more times a week, and each test takes 2-3 hours to run. A DevOps engineer wants a solution that does not change the architecture components.
Which solution will meet these requirements in the MOST cost-effective manner?
  • A. Convert the RDS database to an Amazon Aurora Serverless database. Use an AWS Lambda function to start and stop the EC2 instances before and after tests.
  • B. Put the EC2 instances into an Auto Scaling group. Schedule scaling to run at the start of the deployment tests.
  • C. Replace the EC2 instances with EC2 Spot Instances and the RDS database with an RDS Reserved Instance.
  • D. Subscribe Amazon CloudWatch Events to CodePipeline to trigger AWS Systems Manager Automation documents that start and stop all EC2 and RDS instances before and after deployment tests.
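Option D wires a CloudWatch Events (now EventBridge) rule for CodePipeline state changes to AWS-owned Systems Manager Automation documents. The sketch below shows the rule pattern and, for clarity, the equivalent direct automation call; the pipeline name and instance ID are placeholders, and a matching rule/target pair would be needed for the stop side and the RDS documents as well.

```python
import json
import boto3

events = boto3.client("events")
ssm = boto3.client("ssm")

# Fire when the test pipeline starts; the pipeline name is a placeholder.
events.put_rule(
    Name="start-test-environment",
    EventPattern=json.dumps({
        "source": ["aws.codepipeline"],
        "detail-type": ["CodePipeline Pipeline Execution State Change"],
        "detail": {"pipeline": ["deployment-tests"], "state": ["STARTED"]},
    }),
)

# The rule's target would be the AWS-owned Automation document; the direct
# call below shows what that automation does for a single (assumed) instance.
ssm.start_automation_execution(
    DocumentName="AWS-StartEC2Instance",
    Parameters={"InstanceId": ["i-0123456789abcdef0"]},
)
```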
#90 (Accuracy: 92% / 8 votes)
A company uses AWS Organizations to manage multiple accounts. Information security policies require that all unencrypted Amazon EBS volumes be marked as non-compliant. A DevOps engineer needs to automatically deploy the solution and ensure that this compliance check is always present.
Which solution will accomplish this?
  • A. Create an AWS CloudFormation template that defines an AWS Inspector rule to check whether EBS encryption is enabled. Save the template to an Amazon S3 bucket that has been shared with all accounts within the company. Update the account creation script pointing to the CloudFormation template in Amazon S3.
  • B. Create an AWS Config organizational rule to check whether EBS encryption is enabled and deploy the rule using the AWS CLI. Create and apply an SCP to prohibit stopping and deleting AWS Config across the organization.
  • C. Create an SCP in Organizations. Set the policy to prevent the launch of Amazon EC2 instances without encryption on the EBS volumes using a conditional expression. Apply the SCP to all AWS accounts. Use Amazon Athena to analyze the AWS CloudTrail output, looking for events that deny an ec2:RunInstances action.
  • D. Deploy an IAM role to all accounts from a single trusted account. Build a pipeline with AWS CodePipeline with a stage in AWS Lambda to assume the IAM role, and list all EBS volumes in the account. Publish a report to Amazon S3.
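Option B's organizational rule can be deployed from the management (or delegated administrator) account in a single API call. A minimal boto3 sketch; the rule name is arbitrary, and ENCRYPTED_VOLUMES is the AWS-managed rule identifier that marks unencrypted EBS volumes as NON_COMPLIANT:

```python
import boto3

config = boto3.client("config")

# Organization-wide managed rule flagging unencrypted EBS volumes in every
# member account. Run from the management or delegated-administrator account.
config.put_organization_config_rule(
    OrganizationConfigRuleName="ebs-volumes-encrypted",
    OrganizationManagedRuleMetadata={
        "RuleIdentifier": "ENCRYPTED_VOLUMES",
    },
)
```

Pairing this with an SCP that denies config:DeleteConfigRule and config:StopConfigurationRecorder, as the option describes, keeps the check from being removed by member accounts.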