Amazon AWS Certified DevOps Engineer - Professional DOP-C01
#41 (Accuracy: 100% / 8 votes)
A company is creating a software solution that executes a specific parallel-processing mechanism. The software can scale to tens of servers in some special scenarios. This solution uses a proprietary library that is license-based, requiring that each individual server have a single, dedicated license installed. The company has 200 licenses and is planning to run 200 server nodes concurrently at most.
The company has requested the following features:
• A mechanism to automate the use of the licenses at scale.
• Creation of a dashboard to use in the future to verify which licenses are available at any moment.

What is the MOST effective way to accomplish these requirements?
  • A. Upload the licenses to a private Amazon S3 bucket. Create an AWS CloudFormation template with a Mappings section for the licenses. In the template, create an Auto Scaling group to launch the servers. In the user data script, acquire an available license from the Mappings section. Create an Auto Scaling lifecycle hook, then use it to update the mapping after the instance is terminated.
  • B. Upload the licenses to an Amazon DynamoDB table. Create an AWS CloudFormation template that uses an Auto Scaling group to launch the servers. In the user data script, acquire an available license from the DynamoDB table. Create an Auto Scaling lifecycle hook, then use it to update the DynamoDB table after the instance is terminated.
  • C. Upload the licenses to a private Amazon S3 bucket. Populate an Amazon SQS queue with the list of licenses stored in S3. Create an AWS CloudFormation template that uses an Auto Scaling group to launch the servers. In the user data script, acquire an available license from SQS. Create an Auto Scaling lifecycle hook, then use it to put the license back in SQS after the instance is terminated.
  • D. Upload the licenses to an Amazon DynamoDB table. Create an AWS CLI script to launch the servers by using the parameter --count, with min:max instances to launch. In the user data script, acquire an available license from the DynamoDB table. Monitor each instance and, in case of failure, replace the instance, then manually update the DynamoDB table.
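For reference, the license-pool mechanism described in option C could be wired up with a short boto3 script in the instance user data. This is a minimal sketch, and the queue URL is a hypothetical placeholder:

    import boto3

    sqs = boto3.client("sqs")
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/license-pool"  # hypothetical

    def acquire_license():
        # Claim one license message; assumes the queue is non-empty because the
        # Auto Scaling group maximum matches the number of licenses (200).
        resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1)
        msg = resp["Messages"][0]
        # Delete the message so no other node can claim the same license.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
        return msg["Body"]

    def release_license(license_key):
        # Invoked from the Auto Scaling termination lifecycle hook to return
        # the license to the pool.
        sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=license_key)

The queue depth (the ApproximateNumberOfMessagesVisible metric) then doubles as the data source for the availability dashboard.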
#42 (Accuracy: 100% / 3 votes)
A business has an application that consists of five independent AWS Lambda functions.
The DevOps Engineer has built a CI/CD pipeline using AWS CodePipeline and AWS CodeBuild that builds, tests, packages, and deploys each Lambda function in sequence.
The pipeline uses an Amazon CloudWatch Events rule to ensure the pipeline execution starts as quickly as possible after a change is made to the application source code.
After working with the pipeline for a few months, the DevOps Engineer has noticed the pipeline takes too long to complete.

What should the DevOps Engineer implement to BEST improve the speed of the pipeline?
  • A. Modify the CodeBuild projects within the pipeline to use a compute type with more available network throughput.
  • B. Create a custom CodeBuild execution environment that includes a symmetric multiprocessing configuration to run the builds in parallel.
  • C. Modify the CodePipeline configuration to execute actions for each Lambda function in parallel by specifying the same runOrder.
  • D. Modify each CodeBuild project to run within a VPC and use dedicated instances to increase throughput.
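To illustrate the idea in option C: CodePipeline runs all actions in a stage that share the same runOrder value in parallel. Below is a minimal sketch of such a stage definition, with hypothetical project and function names, expressed as the Python structure one might pass to boto3's create_pipeline or update_pipeline:

    # Five CodeBuild actions with runOrder 1 execute concurrently in one stage.
    build_stage = {
        "name": "BuildFunctions",
        "actions": [
            {
                "name": f"Build-{fn}",
                "runOrder": 1,  # identical runOrder => parallel execution
                "actionTypeId": {"category": "Build", "owner": "AWS",
                                 "provider": "CodeBuild", "version": "1"},
                "configuration": {"ProjectName": f"build-{fn}"},  # hypothetical projects
                "inputArtifacts": [{"name": "SourceOutput"}],
            }
            for fn in ["fn-a", "fn-b", "fn-c", "fn-d", "fn-e"]
        ],
    }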
#43 (Accuracy: 100% / 3 votes)
A company has provided an externally hosted third-party vendor product with access to the company's AWS account. The vendor product performs various AWS actions in the AWS account and requires various IAM permissions. The company granted this access by creating an IAM user, attaching IAM policies, and inserting the IAM user's credentials into the vendor product.

A security review reveals that the vendor’s access is overly permissive.
The company wants to apply the principle of least privilege and to continue granting the vendor permissions for only the actions that the vendor has performed in the last 6 months.

Which solution will meet these requirements with the LEAST effort?
  • A. Use AWS Identity and Access Management Access Analyzer to generate a new IAM policy based on the IAM user’s AWS CloudTrail history. Replace the IAM user policy with the newly generated policy.
  • B. Use AWS Identity and Access Management Access Analyzer to generate a new IAM policy based on the IAM user’s AWS CloudTrail history. Attach the newly generated policy as a permissions boundary to the IAM user.
  • C. Use AWS Identity and Access Management Access Analyzer to discover the last accessed information for the IAM user and to create a new IAM policy that allows only the services and actions that the last accessed review identified. Replace the IAM user policy with the newly generated policy.
  • D. Use AWS Identity and Access Management Access Analyzer to discover the last accessed information for the IAM user and to create a new IAM policy that allows only the services and actions that the last accessed review identified. Attach the newly generated policy as a permissions boundary to the IAM user.
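As a sketch of how option A's policy generation could be driven from boto3 (all ARNs, the access role, and the time window are placeholders, and the generated policy still has to be reviewed before use):

    import datetime

    import boto3

    aa = boto3.client("accessanalyzer")

    now = datetime.datetime.now(datetime.timezone.utc)
    job = aa.start_policy_generation(
        policyGenerationDetails={
            "principalArn": "arn:aws:iam::123456789012:user/vendor-user"  # hypothetical
        },
        cloudTrailDetails={
            "trails": [{
                "cloudTrailArn": "arn:aws:cloudtrail:us-east-1:123456789012:trail/main",
                "allRegions": True,
            }],
            "accessRole": "arn:aws:iam::123456789012:role/AccessAnalyzerTrailRole",
            # The question's 6-month window; check the service's current limit
            # on how much CloudTrail history it can analyze.
            "startTime": now - datetime.timedelta(days=180),
            "endTime": now,
        },
    )
    # Poll until the job finishes, then retrieve the generated policy document
    # and use it to replace the vendor user's existing policy.
    result = aa.get_generated_policy(jobId=job["jobId"])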
#44 (Accuracy: 100% / 6 votes)
A DevOps engineer is creating a CI/CD pipeline for an Amazon ECS service. The ECS container instances run behind an Application Load Balancer as the web tier of a three-tier application. An acceptance criterion for a successful deployment is the verification that the web tier can communicate with the database and middleware tiers of the application upon deployment.

How can this be accomplished in an automated fashion?
  • A. Create a health check endpoint in the web application that tests connectivity to the data and middleware tiers. Use this endpoint as the health check URL for the load balancer.
  • B. Create an approval step for the quality assurance team to validate connectivity. Reject changes in the pipeline if there is an issue with connecting to the dependent tiers.
  • C. Use an Amazon RDS active connection count and an Amazon CloudWatch ELB metric to alarm on a significant change to the number of open connections.
  • D. Use Amazon Route 53 health checks to detect issues with the web service and roll back the CI/CD pipeline if there is an error.
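A deep health check of the kind option A describes might look like this minimal sketch (a Flask app; the hostnames and ports for the data and middleware tiers are hypothetical):

    import socket

    from flask import Flask

    app = Flask(__name__)

    def tier_reachable(host, port, timeout=2):
        # Basic TCP probe; a production check would run an application-level
        # query (for example SELECT 1) rather than just opening a socket.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    @app.route("/healthz")
    def healthz():
        if tier_reachable("db.internal", 5432) and tier_reachable("mw.internal", 8080):
            return "ok", 200
        return "unhealthy", 503

Pointing the load balancer's health check path at /healthz makes the deployment fail fast whenever either downstream tier is unreachable.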
#45 (Accuracy: 100% / 4 votes)
A DevOps engineer must create a Linux AMI in an automated fashion. The ID of the newly created AMI must be stored in a location where other build pipelines can access it programmatically.

What is the MOST cost-effective way to do this?
  • A. Build a pipeline in AWS CodePipeline to download and save the latest operating system Open Virtualization Format (OVF) image to an Amazon S3 bucket. Customize the image by using the guestfish utility. Use the virtual machine (VM) import command to convert the OVF to an AMI. Store the AMI ID as an AWS Systems Manager Parameter Store parameter.
  • B. Create an AWS Systems Manager Automation runbook with values instructing how the image should be created. Build a pipeline in AWS CodePipeline to execute the runbook to create the AMI. Store the AMI ID as a Systems Manager Parameter Store parameter.
  • C. Build a pipeline in AWS CodePipeline to take a snapshot of an Amazon EC2 instance running the latest version of the application. Start a new EC2 instance from the snapshot and update the running instance by using an AWS Lambda function. Take a snapshot of the updated instance and convert it to an AMI. Store the AMI ID in an Amazon DynamoDB table.
  • D. Launch an Amazon EC2 instance and install Packer. Configure a Packer build with values defining how the image should be created. Build a Jenkins pipeline to invoke the Packer build to create an AMI. Store the AMI ID in an Amazon DynamoDB table.
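The hand-off at the end of option B could be as small as the following boto3 sketch; the parameter name is just an assumed convention:

    import boto3

    ssm = boto3.client("ssm")

    def publish_ami_id(ami_id):
        # Called once the Automation runbook outputs the new AMI ID.
        ssm.put_parameter(
            Name="/golden-ami/linux/latest",  # hypothetical naming convention
            Value=ami_id,
            Type="String",
            Overwrite=True,
        )

    def latest_ami_id():
        # Other build pipelines resolve the current AMI programmatically.
        return ssm.get_parameter(Name="/golden-ami/linux/latest")["Parameter"]["Value"]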
#46 (Accuracy: 100% / 5 votes)
A company is adopting AWS CodeDeploy to automate its application deployments for a Java-Apache Tomcat application with an Apache web server. The development team started with a proof of concept, created a deployment group for a developer environment, and performed functional tests within the application. After completion, the team will create additional deployment groups for staging and production.

The current log level is configured within the Apache settings, but the team wants to change this configuration dynamically when the deployment occurs, so that they can set different log level configurations depending on the deployment group without having a different application revision for each group.

How can these requirements be met with the LEAST management overhead and without requiring different script versions for each deployment group?
  • A. Tag the Amazon EC2 instances depending on the deployment group. Then place a script into the application revision that calls the metadata service and the EC2 API to identify which deployment group the instance is part of. Use this information to configure the log level settings. Reference the script as part of the AfterInstall lifecycle hook in the appspec.yml file.
  • B. Create a script that uses the CodeDeploy environment variable DEPLOYMENT_GROUP_NAME to identify which deployment group the instance is part of. Use this information to configure the log level settings. Reference this script as part of the BeforeInstall lifecycle hook in the appspec.yml file.
  • C. Create a CodeDeploy custom environment variable for each environment Then place a script into the application revision that checks this environment variable to identify which deployment group the instance is part of. Use this information to configure the log level settings. Reference this script as part of the ValidateService lifecycle hook in the appspec.yml file.
  • D. Create a script that uses the CodeDeploy environment variable DEPLOYMENT_GROUP_ID to identify which deployment group the instance is part of to configure the log level settings. Reference this script as part of the Install lifecycle hook in the appspec.yml file.
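CodeDeploy exposes DEPLOYMENT_GROUP_NAME to lifecycle hook scripts, which is what option B relies on. A minimal sketch of such a hook script follows (the group-to-log-level mapping and the Apache config path are assumptions):

    import os

    LOG_LEVELS = {"developer": "debug", "staging": "info", "production": "warn"}

    group = os.environ.get("DEPLOYMENT_GROUP_NAME", "developer")
    level = LOG_LEVELS.get(group, "info")

    # Write the LogLevel directive so Apache picks it up for this deployment group.
    with open("/etc/httpd/conf.d/loglevel.conf", "w") as f:
        f.write(f"LogLevel {level}\n")

One script and one application revision then serve every deployment group.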
#47 (Accuracy: 100% / 5 votes)
A company is implementing a well-architected design for its globally accessible API stack. The design needs to ensure both high reliability and fast response times for users located in North America and Europe.

The API stack contains the following three tiers:

• Amazon API Gateway
• AWS Lambda
• Amazon DynamoDB

Which solution will meet the requirements?
  • A. Configure Amazon Route 53 to point to API Gateway APIs in North America and Europe using health checks. Configure the APIs to forward requests to a Lambda function in that Region. Configure the Lambda functions to retrieve and update the data in a DynamoDB table in the same Region as the Lambda function.
  • B. Configure Amazon Route 53 to point to API Gateway APIs in North America and Europe using latency-based routing and health checks. Configure the APIs to forward requests to a Lambda function in that Region. Configure the Lambda functions to retrieve and update the data in a DynamoDB global table.
  • C. Configure Amazon Route 53 to point to API Gateway in North America, create a disaster recovery API in Europe, and configure both APIs to forward requests to the Lambda functions in that Region. Retrieve the data from a DynamoDB global table. Deploy a Lambda function to check the North America API health every 5 minutes. In the event of a failure, update Route 53 to point to the disaster recovery API.
  • D. Configure Amazon Route 53 to point to the API Gateway API in North America using latency-based routing. Configure the API to forward requests to the Lambda function in the Region nearest to the user. Configure the Lambda function to retrieve and update the data in a DynamoDB table.
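The Route 53 half of option B amounts to one latency record per Region for the same name; here is a sketch with boto3 (the hosted zone ID, domain names, and health check IDs are placeholders):

    import boto3

    r53 = boto3.client("route53")

    def upsert_latency_record(region, target_domain, health_check_id):
        r53.change_resource_record_sets(
            HostedZoneId="Z123EXAMPLE",  # hypothetical
            ChangeBatch={"Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "api.example.com",
                    "Type": "CNAME",
                    "SetIdentifier": region,  # one record set per Region
                    "Region": region,         # enables latency-based routing
                    "TTL": 60,
                    "HealthCheckId": health_check_id,
                    "ResourceRecords": [{"Value": target_domain}],
                },
            }]},
        )

    upsert_latency_record("us-east-1", "abc.execute-api.us-east-1.amazonaws.com", "hc-na")
    upsert_latency_record("eu-west-1", "def.execute-api.eu-west-1.amazonaws.com", "hc-eu")

Each regional API fronts Lambda functions that read and write the same DynamoDB global table, so either Region can serve any user.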
#48 (Accuracy: 100% / 6 votes)
A development team manually builds an artifact locally and then places it in an Amazon S3 bucket. The application has a local cache that must be cleared when a deployment occurs. The team executes a command to do this, downloads the artifact from Amazon S3, and unzips the artifact to complete the deployment.

A DevOps team wants to migrate to a CI/CD process and build in checks to stop and roll back the deployment when a failure occurs.
This requires the team to track the progression of the deployment.

Which combination of actions will accomplish this? (Choose three.)
  • A. Allow developers to check the code into a code repository. Using Amazon CloudWatch Events, on every pull into master, trigger an AWS Lambda function to build the artifact and store it in Amazon S3.
  • B. Create a custom script to clear the cache. Specify the script in the BeforeInstall lifecycle hook in the AppSpec file.
  • C. Create user data for each Amazon EC2 instance that contains the clear cache script. Once deployed, test the application. If it is not successful, deploy it again.
  • D. Set up AWS CodePipeline to deploy the application. Allow developers to check the code into a code repository as a source for the pipeline.
  • E. Use AWS CodeBuild to build the artifact and place it in Amazon S3. Use AWS CodeDeploy to deploy the artifact to Amazon EC2 instances.
  • F. Use AWS Systems Manager to fetch the artifact from Amazon S3 and deploy it to all the instances.
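The cache-clearing hook from option B could be a script this small (the cache path is a hypothetical placeholder), referenced from the BeforeInstall section of the AppSpec file:

    import os
    import shutil

    CACHE_DIR = "/var/app/cache"  # hypothetical local cache location

    # Remove the stale cache before CodeDeploy copies the new revision in,
    # then recreate the directory empty for the incoming deployment.
    if os.path.isdir(CACHE_DIR):
        shutil.rmtree(CACHE_DIR)
    os.makedirs(CACHE_DIR, exist_ok=True)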
#49 (Accuracy: 100% / 3 votes)
A company must collect user consent to a privacy agreement. The company deploys an application in six AWS Regions: two Regions in North America, two Regions in Europe, and two Regions in Asia. The application has a user base of 20 million to 30 million users.

The company needs to read and write data that is related to each user's response.
The company also must ensure that the responses are available in all six Regions.

Which solution will meet these requirements with the LOWEST latency of reads and writes?
  • A. Implement Amazon DocumentDB (with MongoDB compatibility) in each of the six Regions.
  • B. Implement Amazon DynamoDB global tables in each of the six Regions.
  • C. Implement Amazon ElastiCache for Redis replication groups in each of the six Regions.
  • D. Implement Amazon Elasticsearch Service (Amazon ES) in each of the six Regions.
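With option B's global tables, each Region's application code talks only to its local replica; a sketch with hypothetical table and attribute names:

    import boto3

    # Local-Region endpoint; DynamoDB replicates writes to the other five Regions.
    table = boto3.resource("dynamodb", region_name="eu-west-1").Table("user-consent")

    def record_consent(user_id, accepted):
        table.put_item(Item={"user_id": user_id, "consent": accepted})

    def get_consent(user_id):
        return table.get_item(Key={"user_id": user_id}).get("Item")

Reads and writes stay in-Region, which is what keeps latency low for a 20 to 30 million user base.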
#50 (Accuracy: 100% / 3 votes)
A company has a mission-critical application on AWS that uses automatic scaling. The company wants the deployment lifecycle to meet the following parameters:

• The application must be deployed one instance at a time to ensure the remaining fleet continues to serve traffic.

• The application is CPU intensive and must be closely monitored.

• The deployment must automatically roll back if the CPU utilization of the deployment instance exceeds 85%.

Which solution will meet these requirements?
  • A. Use AWS CloudFormation to create an AWS Step Functions state machine and Auto Scaling lifecycle hooks to move one instance at a time into a wait state. Use AWS Systems Manager automation to deploy the update to each instance and move it back into the Auto Scaling group using the heartbeat timeout.
  • B. Use AWS CodeDeploy with Amazon EC2 Auto Scaling. Configure an alarm tied to the CPU utilization metric. Use the CodeDeployDefault.OneAtATime configuration as a deployment strategy. Configure automatic rollbacks within the deployment group to roll back the deployment if the alarm thresholds are breached.
  • C. Use AWS Elastic Beanstalk for load balancing and AWS Auto Scaling. Configure an alarm tied to the CPU utilization metric. Configure rolling deployments with a fixed batch size of one instance. Enable enhanced health to monitor the status of the deployment and roll back based on the alarm previously created.
  • D. Use AWS Systems Manager to perform a blue/green deployment with Amazon EC2 Auto Scaling. Configure an alarm tied to the CPU utilization metric. Deploy updates one at a time. Configure automatic rollbacks within the Auto Scaling group to roll back the deployment if the alarm thresholds are breached.
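Option B's deployment group could be created with boto3 along these lines (the application, group, role, Auto Scaling group, and alarm names are hypothetical):

    import boto3

    cd = boto3.client("codedeploy")

    cd.create_deployment_group(
        applicationName="critical-app",
        deploymentGroupName="production",
        serviceRoleArn="arn:aws:iam::123456789012:role/CodeDeployServiceRole",
        deploymentConfigName="CodeDeployDefault.OneAtATime",  # one instance at a time
        autoScalingGroups=["critical-app-asg"],
        alarmConfiguration={
            "enabled": True,
            # CloudWatch alarm on CPUUtilization > 85% for the deployment instance
            "alarms": [{"name": "cpu-over-85-percent"}],
        },
        autoRollbackConfiguration={
            "enabled": True,
            "events": ["DEPLOYMENT_FAILURE", "DEPLOYMENT_STOP_ON_ALARM"],
        },
    )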