Amazon AWS Certified DevOps Engineer - Professional DOP-C01
#31 (Accuracy: 100% / 4 votes)
A company has developed a Node.js web application which provides REST services to store and retrieve time series data. The web application is built by the development team on company laptops, tested locally, and manually deployed to a single on-premises server, which accesses a local MySQL database. The company is starting a trial in two weeks, during which the application will undergo frequent updates based on customer feedback. The following requirements must be met:

• The team must be able to reliably build, test, and deploy new updates on a daily basis, without downtime or degraded performance.

• The application must be able to scale to meet an unpredictable number of concurrent users during the trial.


Which action will allow the team to quickly meet these objectives?
  • A. Create two Amazon Lightsail virtual private servers for Node.js; one for test and one for production. Build the Node.js application using existing processes and upload it to the new Lightsail test server using the AWS CLI. Test the application, and if it passes all tests, upload it to the production server. During the trial, monitor the production server usage, and if needed, increase performance by upgrading the instance type.
  • B. Develop an AWS CloudFormation template to create an Application Load Balancer and two Amazon EC2 instances with Amazon EBS (SSD) volumes in an Auto Scaling group with rolling updates enabled. Use AWS CodeBuild to build and test the Node.js application and store it in an Amazon S3 bucket. Use user-data scripts to install the application and the MySQL database on each EC2 instance. Update the stack to deploy new application versions.
  • C. Configure AWS Elastic Beanstalk to automatically build the application using AWS CodeBuild and to deploy it to a test environment that is configured to support auto scaling. Create a second Elastic Beanstalk environment for production. Use Amazon RDS to store data. When new versions of the applications have passed all tests, use Elastic Beanstalk 'swap cname' to promote the test environment to production.
  • D. Modify the application to use Amazon DynamoDB instead of a local MySQL database. Use AWS OpsWorks to create a stack for the application with a DynamoDB layer, an Application Load Balancer layer, and an Amazon EC2 instance layer. Use a Chef recipe to build the application and a Chef recipe to deploy the application to the EC2 instance layer. Use custom health checks to run unit tests on each instance with rollback on failure.
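The promotion step in option C maps to the Elastic Beanstalk CNAME swap API. A minimal boto3 sketch, assuming two existing environments with the hypothetical names my-app-test and my-app-prod:

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# Swap the CNAMEs of the test and production environments so the tested
# build starts serving production traffic (a zero-downtime promotion).
eb.swap_environment_cnames(
    SourceEnvironmentName="my-app-test",        # environment holding the tested version
    DestinationEnvironmentName="my-app-prod",   # current production environment
)
```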
#32 (Accuracy: 100% / 4 votes)
A company hosts an application in North America. The application uses an Amazon Aurora PostgreSQL DB cluster. A team of analysts in Europe generates real-time reports by using the DB cluster. The analysts must have access to the most up-to-date data. A DevOps engineer discovers that the generation of reports is much slower for users in Europe than for users in North America.
What should the DevOps engineer do to resolve this issue?
  • A. Create an Amazon DynamoDB table in Europe. Use DynamoDB Accelerator (DAX) to configure replication between the DB cluster and the DynamoDB table. Configure the users' machines to point to the DynamoDB table in Europe.
  • B. Create cross-Region Aurora Replicas in North America, and activate synchronous replication. Configure the users' machines to point to the Aurora reader endpoint in North America.
  • C. Create an Aurora global database. Use the existing DB cluster as the primary cluster, and add a secondary cluster in an AWS Region in Europe. Configure the users' machines to point to the Aurora reader endpoint in Europe.
  • D. Use Amazon DynamoDB global tables in an AWS Region in Europe. Set up continuous replication between the DB cluster and the DynamoDB table by using AWS Database Migration Service (AWS DMS). Configure the users' machines to point to the DynamoDB table in Europe.
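Option C corresponds to Aurora Global Database. A minimal boto3 sketch, assuming the primary cluster lives in us-east-1 and eu-west-1 is chosen as the secondary Region (all identifiers are hypothetical):

```python
import boto3

rds_us = boto3.client("rds", region_name="us-east-1")
rds_eu = boto3.client("rds", region_name="eu-west-1")

# Wrap the existing North America cluster in a global database.
rds_us.create_global_cluster(
    GlobalClusterIdentifier="reports-global",
    SourceDBClusterIdentifier="arn:aws:rds:us-east-1:123456789012:cluster:analytics-cluster",
)

# Add a read-only secondary cluster in Europe; after adding reader instances,
# the analysts' machines point at this cluster's reader endpoint.
rds_eu.create_db_cluster(
    DBClusterIdentifier="analytics-cluster-eu",
    Engine="aurora-postgresql",
    GlobalClusterIdentifier="reports-global",
)
```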
#33 (Accuracy: 100% / 4 votes)
A company wants to use AWS Systems Manager documents to bootstrap physical laptops for developers. The bootstrap code is stored in GitHub. A DevOps engineer has already created a Systems Manager activation and has installed the Systems Manager agent on all the laptops, registering each one with the activation code and activation ID.

Which set of steps should be taken next?
  • A. Configure the Systems Manager document to use the AWS-RunShellScript command to copy the files from GitHub to Amazon S3, then use the aws:downloadContent plugin with a sourceType of S3.
  • B. Configure the Systems Manager document to use the aws:configurePackage plugin with an install action and point to the Git repository.
  • C. Configure the Systems Manager document to use the aws:downloadContent plugin with a sourceType of GitHub and sourceInfo with the repository details.
  • D. Configure the Systems Manager document to use the aws:softwareInventory plugin and run the script from the Git repository.
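The document shape behind option C can be sketched as follows; the repository details below are hypothetical:

```python
import json
import boto3

ssm = boto3.client("ssm")

# Command document that pulls the bootstrap code straight from GitHub
# by using the aws:downloadContent plugin.
doc = {
    "schemaVersion": "2.2",
    "description": "Bootstrap developer laptops from GitHub",
    "mainSteps": [
        {
            "action": "aws:downloadContent",
            "name": "downloadBootstrapCode",
            "inputs": {
                "sourceType": "GitHub",
                # sourceInfo is a JSON string holding the repository details.
                "sourceInfo": json.dumps({
                    "owner": "example-org",            # hypothetical
                    "repository": "laptop-bootstrap",  # hypothetical
                    "getOptions": "branch:main",
                }),
            },
        }
    ],
}

ssm.create_document(
    Content=json.dumps(doc),
    Name="BootstrapDeveloperLaptops",
    DocumentType="Command",
)
```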
#34 (Accuracy: 100% / 3 votes)
A development team wants to use AWS CloudFormation stacks to deploy an application. However, the developer IAM role does not have the required permissions to provision the resources that are specified in the AWS CloudFormation template. A DevOps engineer needs to implement a solution that allows the developers to deploy the stacks. The solution must follow the principle of least privilege.

Which solution will meet these requirements?
  • A. Create an IAM policy that allows the developers to provision the required resources. Attach the policy to the developer IAM role.
  • B. Create an IAM policy that allows full access to AWS CloudFormation. Attach the policy to the developer IAM role.
  • C. Create an AWS CloudFormation service role that has the required permissions. Grant the developer IAM role a cloudformation:* action. Use the new service role during stack deployments.
  • D. Create an AWS CloudFormation service role that has the required permissions. Grant the developer IAM role the iam:PassRole permission. Use the new service role during stack deployments.
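The least-privilege pattern in option D boils down to a developer policy like the sketch below: CloudFormation stack actions plus iam:PassRole scoped to the dedicated service role only. The role name and account ID are hypothetical:

```python
import json

# Attached to the developer IAM role: allow stack operations and allow
# passing ONLY the dedicated CloudFormation service role.
developer_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "cloudformation:CreateStack",
                "cloudformation:UpdateStack",
                "cloudformation:DeleteStack",
                "cloudformation:DescribeStacks",
            ],
            "Resource": "*",
        },
        {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::123456789012:role/cfn-deploy-role",
            # Restrict PassRole so the role can be handed to CloudFormation only.
            "Condition": {
                "StringEquals": {"iam:PassedToService": "cloudformation.amazonaws.com"}
            },
        },
    ],
}
print(json.dumps(developer_policy, indent=2))
```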
#35 (Accuracy: 100% / 4 votes)
A security review has identified that an AWS CodeBuild project is downloading a database population script from an Amazon S3 bucket using an unauthenticated request. The security team does not allow unauthenticated requests to S3 buckets for this project.

How can this issue be corrected in the MOST secure manner?
  • A. Add the bucket name to the AllowedBuckets section of the CodeBuild project settings. Update the build spec to use the AWS CLI to download the database population script.
  • B. Modify the S3 bucket settings to enable HTTPS basic authentication and specify a token. Update the build spec to use cURL to pass the token and download the database population script.
  • C. Remove unauthenticated access from the S3 bucket with a bucket policy. Modify the service role for the CodeBuild project to include Amazon S3 access. Use the AWS CLI to download the database population script.
  • D. Remove unauthenticated access from the S3 bucket with a bucket policy. Use the AWS CLI to download the database population script using an IAM access key and a secret access key.
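Option C amounts to three changes; a boto3 sketch with hypothetical bucket, object, and role names:

```python
import json
import boto3

s3 = boto3.client("s3")
iam = boto3.client("iam")

# 1. Block all public (unauthenticated) access to the bucket.
s3.put_public_access_block(
    Bucket="db-scripts-bucket",  # hypothetical
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# 2. Let the CodeBuild service role read the script (authenticated access).
iam.put_role_policy(
    RoleName="codebuild-project-service-role",  # hypothetical
    PolicyName="AllowScriptDownload",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::db-scripts-bucket/populate.sql",
        }],
    }),
)

# 3. In the buildspec, download with the AWS CLI, which signs the request
#    with the service role's credentials:
#    aws s3 cp s3://db-scripts-bucket/populate.sql .
```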
#36 (Accuracy: 100% / 2 votes)
A company is running a custom-built application that processes records. All the components run on Amazon EC2 instances that run in an Auto Scaling group. Each record's processing is a multistep sequential action that is compute-intensive. Each step is always completed in 5 minutes or less.

A limitation of the current system is that if any steps fail, the application has to reprocess the record from the beginning.
The company wants to update the architecture so that the application must reprocess only the failed steps.

What is the MOST operationally efficient solution that meets these requirements?
  • A. Create a web application to write records to Amazon S3. Use S3 Event Notifications to publish to an Amazon Simple Notification Service (Amazon SNS) topic. Use an EC2 instance to poll Amazon SNS and start processing. Save intermediate results to Amazon S3 to pass on to the next step.
  • B. Perform the processing steps by using logic in the application. Convert the application code to run in a container. Use AWS Fargate to manage the container instances. Configure the container to invoke itself to pass the state from one step to the next.
  • C. Create a web application to pass records to an Amazon Kinesis data stream. Decouple the processing by using the Kinesis data stream and AWS Lambda functions.
  • D. Create a web application to pass records to AWS Step Functions. Decouple the processing into Step Functions tasks and AWS Lambda functions.
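The decoupling in option D might look like the minimal state machine below: each processing step is its own Lambda task with a Retry policy, so a failure re-runs only that step rather than the whole record. Function ARNs and names are hypothetical:

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

definition = {
    "StartAt": "StepOne",
    "States": {
        "StepOne": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:step-one",
            "Retry": [{"ErrorEquals": ["States.TaskFailed"], "MaxAttempts": 3}],
            "Next": "StepTwo",
        },
        "StepTwo": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:step-two",
            "Retry": [{"ErrorEquals": ["States.TaskFailed"], "MaxAttempts": 3}],
            "End": True,
        },
    },
}

sfn.create_state_machine(
    name="record-processing",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/sfn-exec-role",  # hypothetical
)
```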
#37 (Accuracy: 100% / 3 votes)
A company wants to run a proprietary enterprise in-memory data store as a grid system on top of AWS. The system can run on multiple server nodes in any Linux-based distribution. The system must reconfigure the entire cluster every time a node is added or removed. When nodes are added or removed, the /etc/cluster/nodes.config file must be updated to list the IP addresses of the current node members of the cluster.

The company wants to automate the task of adding new nodes to a cluster.


What can a DevOps engineer do to meet these requirements?
  • A. Use AWS OpsWorks Stacks to place the server nodes of the cluster in a layer. Create a Chef recipe that populates the content of the /etc/cluster/nodes.config file and restarts the service by using the current members of the layer. Assign that recipe to the Configure lifecycle event.
  • B. Put the file nodes.config in version control. Create an AWS CodeDeploy deployment configuration and deployment group based on an Amazon EC2 tag value for the cluster nodes. When adding a new node to the cluster, update the file with all tagged instances, and make a commit in version control. Deploy the new file and restart the services.
  • C. Create an Amazon S3 bucket and upload a version of the /etc/cluster/nodes.config file. Create a crontab script that polls for that S3 file and downloads it frequently. Use a process manager, such as Monit or systemd, to restart the cluster services when it detects that the file has been modified. When adding a node to the cluster, update the member list in the file and upload the new file to the S3 bucket.
  • D. Create a user data script that lists all members of the current security group of the cluster and automatically updates the /etc/cluster/nodes.config file whenever a new instance is added to the cluster.
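In option A, the recipe is attached to the layer's Configure lifecycle event, which OpsWorks Stacks runs on every instance in the stack whenever an instance comes online or goes offline, exactly when nodes.config must be rebuilt. A boto3 sketch with hypothetical identifiers, including a hypothetical cookbook::recipe name:

```python
import boto3

opsworks = boto3.client("opsworks", region_name="us-east-1")

# Create the cluster layer and attach a custom Chef recipe to the
# Configure lifecycle event.
opsworks.create_layer(
    StackId="stack-id-from-create-stack",  # hypothetical
    Type="custom",
    Name="datastore-grid",
    Shortname="grid",
    CustomRecipes={
        "Configure": ["cluster::update_nodes_config"],  # hypothetical recipe
    },
)
```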
#38 (Accuracy: 100% / 5 votes)
A company uses Amazon S3 to store proprietary information. The development team creates buckets for new projects on a daily basis. The security team wants to ensure that all existing and future buckets have encryption, logging, and versioning enabled. Additionally, no buckets should ever be publicly read or write accessible.

What should a DevOps engineer do to meet these requirements?
  • A. Enable AWS CloudTrail and configure automatic remediation using AWS Lambda.
  • B. Enable AWS Config rules and configure automatic remediation using AWS Systems Manager documents.
  • C. Enable AWS Trusted Advisor and configure automatic remediation using Amazon CloudWatch Events.
  • D. Enable AWS Systems Manager and configure automatic remediation using Systems Manager documents.
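Option B can be sketched for one of the required checks. The rule below pairs the managed Config rule for S3 default encryption with the managed Automation runbook AWS-EnableS3BucketEncryption as the remediation target; the rule name and role ARN are hypothetical:

```python
import boto3

config = boto3.client("config")

# Flag any bucket that lacks default encryption...
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "s3-encryption-enabled",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "S3_BUCKET_SERVER_SIDE_ENCRYPTION_ENABLED",
        },
    }
)

# ...and auto-remediate it with a Systems Manager Automation runbook.
config.put_remediation_configurations(
    RemediationConfigurations=[{
        "ConfigRuleName": "s3-encryption-enabled",
        "TargetType": "SSM_DOCUMENT",
        "TargetId": "AWS-EnableS3BucketEncryption",
        "Automatic": True,
        "MaximumAutomaticAttempts": 3,
        "RetryAttemptSeconds": 60,
        "Parameters": {
            "BucketName": {"ResourceValue": {"Value": "RESOURCE_ID"}},
            "AutomationAssumeRole": {
                "StaticValue": {"Values": ["arn:aws:iam::123456789012:role/config-remediation"]}  # hypothetical
            },
        },
    }]
)
```

Similar rule/runbook pairs would cover logging, versioning, and public access; one rule per control keeps the remediation targets simple.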
#39 (Accuracy: 100% / 4 votes)
A company using AWS CodeCommit for source control wants to automate its continuous integration and continuous delivery pipeline on AWS in its development environment. The company has three requirements:

1. There must be a legal and a security review of any code change to make sure sensitive information is not leaked through the source code.
2. Every change must go through unit testing.
3. Every change must go through a suite of functional testing to ensure functionality.

In addition, the company has the following requirements for automation:

1. Code changes should automatically trigger the CI/CD pipeline.
2. Any failure in the pipeline should notify devops-admin@xyz.com.
3. There must be an approval to stage the assets to Amazon S3 after tests have been performed.

What should a DevOps engineer do to meet all of these requirements while following CI/CD best practices?
  • A. Commit to the development branch and trigger AWS CodePipeline from the development branch. Make an individual stage in CodePipeline for security review, unit tests, functional tests, and manual approval. Use Amazon CloudWatch metrics to detect changes in pipeline stages and Amazon SES for emailing devops-admin@xyz.com.
  • B. Commit to mainline and trigger AWS CodePipeline from mainline. Make an individual stage in CodePipeline for security review, unit tests, functional tests, and manual approval. Use AWS CloudTrail logs to detect changes in pipeline stages and Amazon SNS for emailing devops-admin@xyz.com.
  • C. Commit to the development branch and trigger AWS CodePipeline from the development branch. Make an individual stage in CodePipeline for security review, unit tests, functional tests, and manual approval. Use Amazon CloudWatch Events to detect changes in pipeline stages and Amazon SNS for emailing devops-admin@xyz.com.
  • D. Commit to mainline and trigger AWS CodePipeline from mainline. Make an individual stage in CodePipeline for security review, unit tests, functional tests, and manual approval. Use Amazon CloudWatch Events to detect changes in pipeline stages and Amazon SES for emailing devops-admin@xyz.com.
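The notification piece of option C can be sketched as an event rule on pipeline failures feeding an SNS topic; the rule and topic names are hypothetical:

```python
import json
import boto3

events = boto3.client("events")
sns = boto3.client("sns")

# Email subscription for the DevOps admin.
topic_arn = sns.create_topic(Name="pipeline-failures")["TopicArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="devops-admin@xyz.com")

# Fire whenever any pipeline execution ends in FAILED.
events.put_rule(
    Name="pipeline-failed",
    EventPattern=json.dumps({
        "source": ["aws.codepipeline"],
        "detail-type": ["CodePipeline Pipeline Execution State Change"],
        "detail": {"state": ["FAILED"]},
    }),
)
# NOTE: the topic's access policy must also allow events.amazonaws.com to publish.
events.put_targets(
    Rule="pipeline-failed",
    Targets=[{"Id": "notify-devops", "Arn": topic_arn}],
)
```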
#40 (Accuracy: 100% / 4 votes)
A company requires an RPO of 2 hours and an RTO of 10 minutes for its data and application at all times. An application uses a MySQL database and Amazon EC2 web servers.
The development team needs a strategy for failover and disaster recovery.
Which combination of deployment strategies will meet these requirements? (Choose two.)
  • A. Create an Amazon Aurora cluster in one Availability Zone across multiple Regions as the data store. Use Aurora's automatic recovery capabilities in the event of a disaster.
  • B. Create an Amazon Aurora global database in two Regions as the data store. In the event of a failure, promote the secondary Region as the master for the application.
  • C. Create an Amazon Aurora multi-master cluster across multiple Regions as the data store. Use a Network Load Balancer to balance the database traffic in different Regions.
  • D. Set up the application in two Regions and use Amazon Route 53 failover-based routing that points to the Application Load Balancers in both Regions. Use health checks to determine the availability in a given Region. Use Auto Scaling groups in each Region to adjust capacity based on demand.
  • E. Set up the application in two Regions and use a multi-Region Auto Scaling group behind Application Load Balancers to manage the capacity based on demand. In the event of a disaster, adjust the Auto Scaling group's desired instance count to increase baseline capacity in the failover Region.
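The Route 53 piece of option D looks roughly like the sketch below: a PRIMARY/SECONDARY failover pair of alias records pointing at the ALBs in each Region. The hosted zone ID, domain, health check ID, and ALB DNS names are all hypothetical:

```python
import boto3

r53 = boto3.client("route53")

r53.change_resource_record_sets(
    HostedZoneId="Z123EXAMPLE",  # hypothetical hosted zone
    ChangeBatch={"Changes": [
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "A",
                "SetIdentifier": "primary-region",
                "Failover": "PRIMARY",
                "HealthCheckId": "hc-primary-id",  # hypothetical health check
                "AliasTarget": {
                    "HostedZoneId": "ZALBZONE1",   # the primary Region's ALB zone ID
                    "DNSName": "primary-alb.us-east-1.elb.amazonaws.com",
                    "EvaluateTargetHealth": True,
                },
            },
        },
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "A",
                "SetIdentifier": "secondary-region",
                "Failover": "SECONDARY",
                "AliasTarget": {
                    "HostedZoneId": "ZALBZONE2",   # the failover Region's ALB zone ID
                    "DNSName": "secondary-alb.eu-west-1.elb.amazonaws.com",
                    "EvaluateTargetHealth": True,
                },
            },
        },
    ]},
)
```

Paired with an Aurora global database (option B) for the data tier, this keeps failover within the stated 10-minute RTO.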