Amazon AWS Certified DevOps Engineer - Professional DOP-C01
#21 (Accuracy: 91% / 5 votes)
A company deploys updates to its Amazon API Gateway API several times a week by using an AWS CodePipeline pipeline. As part of the update process, the company exports the JavaScript SDK for the API from the API Gateway console and uploads the SDK to an Amazon S3 bucket.

The company has configured an Amazon CloudFront distribution that uses the S3 bucket as an origin.
Web clients then download the SDK by using the CloudFront distribution's endpoint. A DevOps engineer needs to implement a solution to make the new SDK available automatically during new API deployments.

Which solution will meet these requirements?
  • A. Create a CodePipeline action immediately after the deployment stage of the API. Configure the action to invoke an AWS Lambda function. Configure the Lambda function to download the SDK from API Gateway, upload the SDK to the S3 bucket, and create a CloudFront invalidation for the SDK path.
  • B. Create a CodePipeline action immediately after the deployment stage of the API. Configure the action to use the CodePipeline integration with API Gateway to export the SDK to Amazon S3. Create another action that uses the CodePipeline integration with Amazon S3 to invalidate the cache for the SDK path.
  • C. Create an Amazon EventBridge (Amazon CloudWatch Events) rule that reacts to UpdateStage events from aws.apigateway. Configure the rule to invoke an AWS Lambda function to download the SDK from API Gateway, upload the SDK to the S3 bucket, and call the CloudFront API to create an invalidation for the SDK path.
  • D. Create an Amazon EventBridge (Amazon CloudWatch Events) rule that reacts to CreateDeployment events from aws.apigateway. Configure the rule to invoke an AWS Lambda function to download the SDK from API Gateway, upload the SDK to the S3 bucket, and call the S3 API to invalidate the cache for the SDK path.
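The mechanics of option A can be sketched with boto3. This is a minimal Lambda handler, assuming placeholder values for the API ID, stage, bucket, distribution, and SDK key:

```python
import time

# Placeholder identifiers -- replace with real values.
API_ID = "a1b2c3d4e5"
STAGE_NAME = "prod"
SDK_BUCKET = "example-sdk-bucket"
DISTRIBUTION_ID = "EDFDVBD6EXAMPLE"
SDK_KEY = "sdk/javascript-sdk.zip"

def invalidation_batch(paths, caller_reference):
    """Build the InvalidationBatch body for cloudfront.create_invalidation."""
    return {
        "Paths": {"Quantity": len(paths), "Items": list(paths)},
        "CallerReference": caller_reference,
    }

def handler(event, context):
    # boto3 ships with the Lambda runtime; imported lazily so the helper
    # above can be exercised without it.
    import boto3
    apigateway = boto3.client("apigateway")
    s3 = boto3.client("s3")
    cloudfront = boto3.client("cloudfront")

    # Export the JavaScript SDK for the freshly deployed stage.
    sdk = apigateway.get_sdk(restApiId=API_ID, stageName=STAGE_NAME, sdkType="javascript")
    # Upload the SDK archive to the CloudFront origin bucket.
    s3.put_object(Bucket=SDK_BUCKET, Key=SDK_KEY, Body=sdk["body"].read())
    # Invalidate the cached SDK path so clients fetch the new version.
    cloudfront.create_invalidation(
        DistributionId=DISTRIBUTION_ID,
        InvalidationBatch=invalidation_batch(["/" + SDK_KEY], str(time.time())),
    )
```

The CallerReference must be unique per invalidation request, which is why a timestamp is used here.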
#22 (Accuracy: 100% / 3 votes)
During the next CodePipeline run, the pipeline exits with a FAILED state during the build stage. The DevOps engineer verifies that the correct Systems Manager parameter path is in place for the environment variable values that were changed. The DevOps engineer also validates that the environment variable type is Parameter.

Why did the pipeline fail?
  • A. The CodePipeline IAM service role does not have the required IAM permissions to use Parameter Store.
  • B. The CodePipeline IAM service role does not have the required IAM permissions to use the aws/ssm KMS key.
  • C. The CodeBuild IAM service role does not have the required IAM permissions to use Parameter Store.
  • D. The CodeBuild IAM service role does not have the required IAM permissions to use the aws/ssm KMS key.
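CodeBuild, not CodePipeline, resolves PARAMETER_STORE environment variables at build time, so the permissions belong on the CodeBuild service role. A sketch of the policy that role needs; the account ID, region, parameter path, and key ID below are placeholders:

```python
import json

# Placeholder ARNs -- the aws/ssm managed key ID is account-specific.
SSM_KEY_ARN = "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
PARAMETER_ARN = "arn:aws:ssm:us-east-1:111122223333:parameter/myapp/*"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadBuildParameters",
            "Effect": "Allow",
            "Action": ["ssm:GetParameter", "ssm:GetParameters"],
            "Resource": [PARAMETER_ARN],
        },
        {
            # SecureString parameters are encrypted with the aws/ssm key,
            # so the role also needs permission to decrypt with it.
            "Sid": "DecryptSecureStrings",
            "Effect": "Allow",
            "Action": ["kms:Decrypt"],
            "Resource": [SSM_KEY_ARN],
        },
    ],
}
print(json.dumps(policy, indent=2))
```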
#23 (Accuracy: 100% / 5 votes)
A company has developed a serverless web application that is hosted on AWS. The application consists of Amazon S3, Amazon API Gateway, several AWS Lambda functions, and an Amazon RDS for MySQL database. The company is using AWS CodeCommit to store the source code. The source code is a combination of AWS Serverless Application Model (AWS SAM) templates and Python code.

A security audit and penetration test reveal that user names and passwords for authentication to the database are hardcoded within CodeCommit repositories.
A DevOps engineer must implement a solution to automatically detect and prevent hardcoded secrets.

What is the MOST secure solution that meets these requirements?
  • A. Enable Amazon CodeGuru Profiler. Decorate the handler function with @with_lambda_profiler(). Manually review the recommendation report. Write the secret to AWS Systems Manager Parameter Store as a secure string. Update the SAM templates and the Python code to pull the secret from Parameter Store.
  • B. Associate the CodeCommit repository with Amazon CodeGuru Reviewer. Manually check the code review for any recommendations. Choose the option to protect the secret. Update the SAM templates and the Python code to pull the secret from AWS Secrets Manager.
  • C. Enable Amazon CodeGuru Profiler. Decorate the handler function with @with_lambda_profiler(). Manually review the recommendation report. Choose the option to protect the secret. Update the SAM templates and the Python code to pull the secret from AWS Secrets Manager.
  • D. Associate the CodeCommit repository with Amazon CodeGuru Reviewer. Manually check the code review for any recommendations. Write the secret to AWS Systems Manager Parameter Store as a string. Update the SAM templates and the Python code to pull the secret from Parameter Store.
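After remediation, the Python code retrieves credentials at runtime instead of hardcoding them. A minimal sketch using AWS Secrets Manager; the secret name and JSON field names are assumptions:

```python
import json

def parse_db_secret(secret_string):
    """Extract the username and password fields from a SecretString payload."""
    secret = json.loads(secret_string)
    return secret["username"], secret["password"]

def get_db_credentials(secret_id="myapp/rds-mysql"):
    # Hypothetical secret name; boto3 ships with the Lambda runtime and is
    # imported lazily so the parser above can be tested without it.
    import boto3
    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId=secret_id)
    return parse_db_secret(response["SecretString"])
```

Secrets Manager returns the secret as a JSON string by default, which is why the parsing step is separate from the API call.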
#24 (Accuracy: 92% / 4 votes)
A developer is maintaining a fleet of 50 Amazon EC2 Linux servers. The servers are part of an Amazon EC2 Auto Scaling group and sit behind an Elastic Load Balancing load balancer.

Occasionally, application servers are terminated after failing ELB HTTP health checks.
The developer would like to perform a root cause analysis, but the instances are terminated before the application logs can be accessed.

How can log collection be automated?
  • A. Use Auto Scaling lifecycle hooks to put instances in a Pending:Wait state. Create an Amazon CloudWatch alarm for EC2 Instance Terminate Successful and trigger an AWS Lambda function that invokes an SSM Run Command script to collect logs, push them to Amazon S3, and complete the lifecycle action once logs are collected.
  • B. Use Auto Scaling lifecycle hooks to put instances in a Terminating:Wait state. Create an AWS Config rule for EC2 instance-terminate Lifecycle Action and trigger a step function that invokes a script to collect logs, push them to Amazon S3, and complete the lifecycle action once logs are collected.
  • C. Use Auto Scaling lifecycle hooks to put instances in a Terminating:Wait state. Create an Amazon CloudWatch subscription filter for EC2 Instance Terminate Successful and trigger a CloudWatch agent that invokes a script to collect logs, push them to Amazon S3, and complete the lifecycle action once logs are collected.
  • D. Use Auto Scaling lifecycle hooks to put instances in a Terminating:Wait state. Create an Amazon EventBridge rule for EC2 Instance-terminate Lifecycle Action and trigger an AWS Lambda function that invokes an SSM Run Command script to collect logs, push them to Amazon S3, and complete the lifecycle action once logs are collected.
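The EventBridge rule in option D delivers an event whose detail carries everything needed to complete the lifecycle action. A sketch of the Lambda function; the bucket name and log path are assumptions, and a production version would wait for the Run Command to finish before continuing:

```python
def lifecycle_params(event):
    """Map an EC2 Instance-terminate Lifecycle Action event to
    complete_lifecycle_action arguments."""
    detail = event["detail"]
    return {
        "LifecycleHookName": detail["LifecycleHookName"],
        "AutoScalingGroupName": detail["AutoScalingGroupName"],
        "LifecycleActionToken": detail["LifecycleActionToken"],
        "InstanceId": detail["EC2InstanceId"],
        "LifecycleActionResult": "CONTINUE",
    }

def handler(event, context):
    import boto3  # available in the Lambda runtime
    ssm = boto3.client("ssm")
    autoscaling = boto3.client("autoscaling")

    # Collect the application logs before the instance disappears.
    ssm.send_command(
        InstanceIds=[event["detail"]["EC2InstanceId"]],
        DocumentName="AWS-RunShellScript",
        Parameters={"commands": [
            "aws s3 cp /var/log/app/ s3://example-log-bucket/logs/$(hostname)/ --recursive"
        ]},
    )
    # Release the instance from the Terminating:Wait state.
    autoscaling.complete_lifecycle_action(**lifecycle_params(event))
```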
#25 (Accuracy: 100% / 3 votes)
A company updated the AWS CloudFormation template for a critical business application. The stack update process failed due to an error in the updated template, and AWS CloudFormation automatically began the stack rollback process. Later, a DevOps engineer discovered that the application was still unavailable and that the stack was in the UPDATE_ROLLBACK_FAILED state.

Which combination of actions should the DevOps engineer perform so that the stack rollback can complete successfully? (Choose two.)
  • A. Attach the AWSCloudFormationFullAccess IAM policy to the AWS CloudFormation role.
  • B. Automatically recover the stack resources by using AWS CloudFormation drift detection.
  • C. Issue a ContinueUpdateRollback command from the AWS CloudFormation console or the AWS CLI.
  • D. Manually adjust the resources to match the expectations of the stack.
  • E. Update the existing AWS CloudFormation stack by using the original template.
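The ContinueUpdateRollback call in option C can also be issued programmatically. This sketch, with a placeholder stack name, shows the optional ResourcesToSkip argument, which pairs with manual fixes when a resource cannot return to its previous state:

```python
def continue_rollback_args(stack_name, resources_to_skip=()):
    """Build arguments for cloudformation.continue_update_rollback."""
    args = {"StackName": stack_name}
    if resources_to_skip:
        # Logical IDs of resources CloudFormation should skip because
        # they were adjusted manually or cannot be rolled back.
        args["ResourcesToSkip"] = list(resources_to_skip)
    return args

def resume_rollback(stack_name, resources_to_skip=()):
    import boto3  # imported lazily so the helper above is testable offline
    cloudformation = boto3.client("cloudformation")
    cloudformation.continue_update_rollback(
        **continue_rollback_args(stack_name, resources_to_skip)
    )
```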
#26 (Accuracy: 100% / 3 votes)
A company requires its internal business teams to launch resources through pre-approved AWS CloudFormation templates only. The security team requires automated monitoring when resources drift from their expected state.

Which strategy should be used to meet these requirements?
  • A. Allow users to deploy CloudFormation stacks using a CloudFormation service role only. Use CloudFormation drift detection to detect when resources have drifted from their expected state.
  • B. Allow users to deploy CloudFormation stacks using a CloudFormation service role only. Use AWS Config rules to detect when resources have drifted from their expected state.
  • C. Allow users to deploy CloudFormation stacks using AWS Service Catalog only. Enforce the use of a launch constraint. Use AWS Config rules to detect when resources have drifted from their expected state.
  • D. Allow users to deploy CloudFormation stacks using AWS Service Catalog only. Enforce the use of a template constraint. Use Amazon EventBridge notifications to detect when resources have drifted from their expected state.
#27 (Accuracy: 100% / 5 votes)
A DevOps engineer is tasked with creating a more stable deployment solution for a web application in AWS. Previous deployments have resulted in user-facing bugs, premature user traffic, and inconsistencies between web servers running behind an Application Load Balancer. The current strategy uses AWS CodeCommit to store the code for the application. When developers push to the main branch of the repository, CodeCommit triggers an AWS Lambda deploy function, which invokes an AWS Systems Manager run command to build and deploy the new code to all Amazon EC2 instances.

Which combination of actions should be taken to implement a more stable deployment solution? (Choose two.)
  • A. Create a pipeline in AWS CodePipeline with CodeCommit as a source provider. Create parallel pipeline stages to build and test the application. Pass the build artifact to AWS CodeDeploy.
  • B. Create a pipeline in AWS CodePipeline with CodeCommit as a source provider. Create separate pipeline stages to build and then test the application. Pass the build artifact to AWS CodeDeploy.
  • C. Create and use an AWS CodeDeploy application and deployment group to deploy code updates to the EC2 fleet. Select the Application Load Balancer for the deployment group.
  • D. Create individual Lambda functions to run all build, test, and deploy actions using AWS CodeDeploy instead of AWS Systems Manager.
  • E. Modify the Lambda function to build a single application package to be shared by all instances. Use AWS CodeDeploy instead of AWS Systems Manager to update the code on the EC2 fleet.
#28 (Accuracy: 100% / 2 votes)
A software-as-a-service (SaaS) company is using AWS Elastic Beanstalk to deploy its primary .NET application. The Elastic Beanstalk environment is configured to use Amazon EC2 Auto Scaling and Elastic Load Balancing (ELB) for its underlying Amazon EC2 instances.

The company is experiencing incidents in which EC2 instances are marked unhealthy and are terminated by Auto Scaling groups after a failed ELB health check.
The company's DevOps team must build a solution that will notify the operations team whenever an Auto Scaling group terminates EC2 instances for any existing client environments.

What should the DevOps team do to meet this requirement?
  • A. Create an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe the email addresses of all operations team members to the SNS topic. Apply a notification configuration for the autoscaling:EC2_INSTANCE_LAUNCH notification type to all the existing Auto Scaling groups.
  • B. Create an Amazon Simple Queue Service (Amazon SQS) queue. Add an AWS Lambda function trigger to the SQS queue. Apply a notification configuration for the autoscaling:EC2_INSTANCE_LAUNCH notification type to all the existing Auto Scaling groups.
  • C. Create an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe the email addresses of all operations team members to the SNS topic. Apply a notification configuration for the autoscaling:EC2_INSTANCE_TERMINATE notification type to all the existing Auto Scaling groups.
  • D. Create an Amazon Simple Queue Service (Amazon SQS) queue. Add an AWS Lambda function trigger to the SQS queue. Apply a notification configuration for the autoscaling:EC2_INSTANCE_TERMINATE notification type to all the existing Auto Scaling groups.
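Applying the notification configuration from option C to every existing Auto Scaling group can be scripted. A sketch, with a placeholder SNS topic ARN:

```python
def notification_config(group_name, topic_arn):
    """Arguments for autoscaling.put_notification_configuration."""
    return {
        "AutoScalingGroupName": group_name,
        "TopicARN": topic_arn,
        "NotificationTypes": ["autoscaling:EC2_INSTANCE_TERMINATE"],
    }

def configure_all_groups(topic_arn="arn:aws:sns:us-east-1:111122223333:ops-notifications"):
    import boto3  # imported lazily so the helper above is testable offline
    autoscaling = boto3.client("autoscaling")
    # Paginate through every existing Auto Scaling group in the account.
    for page in autoscaling.get_paginator("describe_auto_scaling_groups").paginate():
        for group in page["AutoScalingGroups"]:
            autoscaling.put_notification_configuration(
                **notification_config(group["AutoScalingGroupName"], topic_arn)
            )
```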
#29 (Accuracy: 100% / 1 votes)
A DevOps Engineer has several legacy applications that all generate different log formats. The Engineer must standardize the formats before writing them to Amazon S3 for querying and analysis.

How can this requirement be met at the LOWEST cost?
  • A. Have the application send its logs to an Amazon EMR cluster and normalize the logs before sending them to Amazon S3
  • B. Have the application send its logs to Amazon QuickSight, then use the Amazon QuickSight SPICE engine to normalize the logs. Do the analysis directly from Amazon QuickSight
  • C. Keep the logs in Amazon S3 and use Amazon Redshift Spectrum to normalize the logs in place
  • D. Use Amazon Kinesis Agent on each server to upload the logs and have Amazon Kinesis Data Firehose use an AWS Lambda function to normalize the logs before writing them to Amazon S3
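A Firehose transformation Lambda receives base64-encoded records and must return each one with a status. The skeleton below follows that contract; the normalize function is a hypothetical example for a "timestamp level message" legacy format:

```python
import base64
import json

def normalize(raw_line):
    """Hypothetical normalizer: coerce one legacy log line of the form
    '<timestamp> <level> <message>' into a common JSON shape."""
    ts, level, message = raw_line.strip().split(" ", 2)
    return json.dumps({"timestamp": ts, "level": level, "message": message}) + "\n"

def handler(event, context):
    """Kinesis Data Firehose transformation handler."""
    output = []
    for record in event["records"]:
        raw = base64.b64decode(record["data"]).decode("utf-8")
        output.append({
            "recordId": record["recordId"],   # must echo the incoming ID
            "result": "Ok",                   # or "Dropped" / "ProcessingFailed"
            "data": base64.b64encode(normalize(raw).encode("utf-8")).decode("utf-8"),
        })
    return {"records": output}
```

Firehose buffers the transformed records and writes them to S3 in batches, which is what keeps this approach cheap relative to an always-on EMR cluster.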
#30 (Accuracy: 100% / 2 votes)
An ecommerce company has chosen AWS to host its new platform. The company's DevOps team has started building an AWS Control Tower landing zone. The DevOps team has set the identity store within AWS Single Sign-On (AWS SSO) to an external identity provider (IdP) and has configured SAML 2.0.

The DevOps team wants a robust permission model that applies the principle of least privilege.
The model must allow each team to build and manage only that team's own resources.

Which combination of steps will meet these requirements? (Choose three.)
  • A. Create IAM policies that include the required permissions. Include the aws:PrincipalTag condition key.
  • B. Create permission sets. Attach an inline policy that includes the required permissions and uses the aws:PrincipalTag condition key to scope the permissions.
  • C. Create a group in the IdP. Place users in the group. Assign the group to accounts and the permission sets in AWS SSO.
  • D. Create a group in the IdP. Place users in the group. Assign the group to OUs and IAM policies.
  • E. Enable attributes for access control in AWS SSO. Apply tags to users. Map the tags as key-value pairs.
  • F. Enable attributes for access control in AWS SSO. Map attributes from the IdP as key-value pairs.
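The aws:PrincipalTag condition key referenced in the options is the core of attribute-based access control: it scopes permissions to resources whose tag matches the caller's tag. A sketch of such a policy statement, assuming a hypothetical "team" tag key:

```python
import json

# ABAC policy: a principal may act only on resources whose "team" tag
# matches the "team" tag on the principal itself.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowTeamScopedAccess",
            "Effect": "Allow",
            "Action": ["ec2:*"],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/team": "${aws:PrincipalTag/team}"
                }
            },
        }
    ],
}
print(json.dumps(policy, indent=2))
```

Because the condition compares two tags at request time, one policy serves every team without per-team policy copies.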