Amazon AWS Certified DevOps Engineer - Professional DOP-C01
#51 (Accuracy: 100% / 2 votes)
A rapidly growing company wants to scale to meet developer demand for AWS development environments. Development environments are currently created manually in the AWS Management Console. The Networking team uses AWS CloudFormation to manage the networking infrastructure, exporting stack output values for the Amazon VPC and all subnets. The development environments share common standards, such as Application Load Balancers, Amazon EC2 Auto Scaling groups, security groups, and Amazon DynamoDB tables.
To keep up with the demand, the DevOps engineer wants to automate the creation of development environments. Because the infrastructure required to support the application is expected to grow, there must be a way to easily update the deployed infrastructure. CloudFormation will be used to create a template for the development environments.
Which approach will meet these requirements and quickly provide consistent AWS environments for Developers?
  • A. Use Fn::ImportValue intrinsic functions in the Resources section of the template to retrieve Virtual Private Cloud (VPC) and subnet values. Use CloudFormation StackSets for the development environments, using the Count input parameter to indicate the number of environments needed. Use the UpdateStackSet command to update existing development environments.
  • B. Use nested stacks to define common infrastructure components. To access the exported values, use TemplateURL to reference the Networking team's template. To retrieve Virtual Private Cloud (VPC) and subnet values, use Fn::ImportValue intrinsic functions in the Parameters section of the master template. Use the CreateChangeSet and ExecuteChangeSet commands to update existing development environments.
  • C. Use nested stacks to define common infrastructure components. Use Fn::ImportValue intrinsic functions with the resources of the nested stack to retrieve Virtual Private Cloud (VPC) and subnet values. Use the CreateChangeSet and ExecuteChangeSet commands to update existing development environments.
  • D. Use Fn::ImportValue intrinsic functions in the Parameters section of the master template to retrieve Virtual Private Cloud (VPC) and subnet values. Define the development resources in the order they need to be created in the CloudFormation nested stacks. Use the CreateChangeSet and ExecuteChangeSet commands to update existing development environments.
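A point worth pinning down for this question: Fn::ImportValue can consume the Networking team's exported outputs inside a template's Resources section, but CloudFormation does not resolve intrinsic functions in the Parameters section. A minimal sketch in JSON form (the export name network-stack-VpcId is a hypothetical output name from the Networking team's stack):

```python
import json

# Minimal CloudFormation fragment, expressed in JSON form. Fn::ImportValue
# consumes a cross-stack export inside the Resources section; the export name
# "network-stack-VpcId" is hypothetical and would come from the Networking
# team's exported stack outputs.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "AppSecurityGroup": {
            "Type": "AWS::EC2::SecurityGroup",
            "Properties": {
                "GroupDescription": "Development environment security group",
                "VpcId": {"Fn::ImportValue": "network-stack-VpcId"},
            },
        }
    },
}
print(json.dumps(template, indent=2))
```

Updates to an already-deployed environment would then go through the CreateChangeSet and ExecuteChangeSet commands, so proposed changes can be reviewed before execution.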
#52 (Accuracy: 100% / 2 votes)
A DevOps engineer is working on a data archival project that requires the migration of on-premises data to an Amazon S3 bucket. The DevOps engineer develops a script that incrementally archives on-premises data that is older than 1 month to Amazon S3. Data that is transferred to Amazon S3 is deleted from the on-premises location. The script uses the S3 PutObject operation.

During a code review, the DevOps engineer notices that the script does not verify whether the data was successfully copied to Amazon S3.
The DevOps engineer must update the script to ensure that data is not corrupted during transmission. The script must use MD5 checksums to verify data integrity before the on-premises data is deleted.

Which solutions for the script will meet these requirements? (Choose two.)
  • A. Check the returned response for the VersionId. Compare the returned VersionId against the MD5 checksum.
  • B. Include the MD5 checksum within the Content-MD5 parameter. Check the operation call's return status to find out if an error was returned.
  • C. Include the checksum digest within the tagging parameter as a URL query parameter.
  • D. Check the returned response for the ETag. Compare the returned ETag against the MD5 checksum.
  • E. Include the checksum digest within the Metadata parameter as a name-value pair. After upload, use the S3 HeadObject operation to retrieve metadata from the object.
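The options turn on two S3 integrity mechanisms: the Content-MD5 request header (S3 rejects the upload if the body's digest does not match) and the ETag in the response (for a single-part PutObject without SSE-KMS, the ETag is the hex MD5 of the object). A sketch of how the archival script might use both; the client, bucket, and key are assumed to be supplied by the caller:

```python
import base64
import hashlib

def content_md5_header(data: bytes) -> str:
    """Base64-encoded MD5 digest, the format S3 expects in Content-MD5."""
    return base64.b64encode(hashlib.md5(data).digest()).decode("ascii")

def etag_matches_md5(etag: str, data: bytes) -> bool:
    """For single-part, non-KMS uploads the ETag is the hex MD5 of the body."""
    return etag.strip('"') == hashlib.md5(data).hexdigest()

def archive_object(s3_client, bucket: str, key: str, data: bytes) -> bool:
    """Upload with Content-MD5 so S3 verifies the body in transit, then
    double-check the returned ETag. The on-premises copy should be deleted
    only when this returns True."""
    response = s3_client.put_object(
        Bucket=bucket,
        Key=key,
        Body=data,
        ContentMD5=content_md5_header(data),
    )
    return etag_matches_md5(response["ETag"], data)
```

Note that put_object raises a client error outright when the Content-MD5 check fails, so a corrupted transfer never reaches the deletion step.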
#53 (Accuracy: 100% / 4 votes)
A company hosts a multi-tenant application on Amazon EC2 instances behind an Application Load Balancer. The instances run Windows Server and are in an Auto Scaling group. The application uses a license file on the instances that can be updated on the instances without customer disruption. When a new customer purchases access to the application, the company's licensing team adds a new license key to a file in an Amazon S3 bucket. After the license file is updated, the operations team manually updates the EC2 instances.

A DevOps engineer needs to automate the EC2 instance file update process.
The automated process must decrease the time for EC2 instances to get the updated license file and must notify the operations team about success or failure of the update process.

The DevOps engineer creates a resource group in AWS Resource Groups. The resource group uses a tag that the Auto Scaling group applies to the EC2 instances.

What should the DevOps engineer do next to meet the requirements MOST cost-effectively?
  • A. Create an S3 event notification to invoke an AWS Lambda function when the license file is updated in the S3 bucket. Configure the Lambda function to invoke AWS Systems Manager Run Command to run the AWS-RunRemoteScript document to download the updated license file. Specify the command from Lambda to run on the application's resource group with 50% concurrency. Configure Amazon Simple Email Service (Amazon SES) notifications for event notifications of SUCCESS and FAILED to send email notifications to the operations team.
  • B. Create an S3 event notification to invoke an AWS Lambda function when the license file is updated in the S3 bucket. Configure the Lambda function to invoke AWS Systems Manager Run Command to run the AWS-RunPowerShellScript document to download the updated license file. Specify the command from Lambda to run on the application's resource group with 50% concurrency. Configure an Amazon Simple Notification Service (Amazon SNS) topic to send event notifications of SUCCESS and FAILED. Subscribe the email addresses of the operations team members to the SNS topic.
  • C. Create an Amazon EventBridge scheduled rule that runs each hour to invoke an AWS Lambda function. Configure the Lambda function to invoke AWS Systems Manager Run Command to run the AWS-RunPowerShellScript document to download the updated license file. Specify the command from Lambda to run on the application's resource group with 50% concurrency. Configure an Amazon Simple Notification Service (Amazon SNS) topic to send event notifications of SUCCESS and FAILED. Subscribe the email addresses of the operations team members to the SNS topic.
  • D. Create an Amazon EventBridge scheduled rule that runs each hour to invoke an AWS Lambda function. Configure the Lambda function to invoke AWS Systems Manager Run Command to run the AWS-RunRemoteScript document to download the updated license file. Specify the command from Lambda to run on the application's resource group with 50% concurrency. Configure Amazon Simple Email Service (Amazon SES) notifications for event notifications of SUCCESS and FAILED to send email notifications to the operations team.
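The Run Command fan-out that several options describe maps to a single SSM SendCommand call. A sketch of the parameters a Lambda function might build; the resource group name, environment variables, and the download command are hypothetical:

```python
import os

def run_command_args(resource_group: str, topic_arn: str, role_arn: str) -> dict:
    """SendCommand kwargs: target the resource group's instances, roll out to
    at most half of them at a time, and publish Success/Failed to SNS."""
    return {
        "DocumentName": "AWS-RunPowerShellScript",
        "Targets": [{"Key": "resource-groups:Name", "Values": [resource_group]}],
        "MaxConcurrency": "50%",
        "Parameters": {
            # Hypothetical download step; the real command would copy the
            # updated license file from the S3 bucket onto the instance.
            "commands": [
                "Read-S3Object -BucketName license-bucket -Key license.txt "
                "-File C:\\app\\license.txt"
            ]
        },
        "ServiceRoleArn": role_arn,
        "NotificationConfig": {
            "NotificationArn": topic_arn,
            "NotificationEvents": ["Success", "Failed"],
            "NotificationType": "Command",
        },
    }

def handler(event, context):
    """Hypothetical Lambda handler behind the S3 event notification."""
    import boto3  # imported lazily so the kwargs builder stays testable offline
    return boto3.client("ssm").send_command(
        **run_command_args(
            os.environ["RESOURCE_GROUP"],
            os.environ["SNS_TOPIC_ARN"],
            os.environ["SNS_ROLE_ARN"],
        )
    )
```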
#54 (Accuracy: 100% / 2 votes)
A production account has a requirement that any Amazon EC2 instance that has been logged in to manually must be terminated within 24 hours. All applications in the production account are using Auto Scaling groups with the Amazon CloudWatch Logs agent configured.

How can this process be automated?
  • A. Create a CloudWatch Logs subscription to an AWS Step Functions application. Configure an AWS Lambda function to add a tag to the EC2 instance that produced the login event and mark the instance to be decommissioned. Create an Amazon EventBridge rule to invoke a second Lambda function once a day that will terminate all instances with this tag.
  • B. Create an Amazon CloudWatch alarm that will be invoked by the login event. Send the notification to an Amazon Simple Notification Service (Amazon SNS) topic that the operations team is subscribed to, and have them terminate the EC2 instance within 24 hours.
  • C. Create an Amazon CloudWatch alarm that will be invoked by the login event. Configure the alarm to send to an Amazon Simple Queue Service (Amazon SQS) queue. Use a group of worker instances to process messages from the queue, which then schedules an Amazon EventBridge rule to be invoked.
  • D. Create a CloudWatch Logs subscription in an AWS Lambda function. Configure the function to add a tag to the EC2 instance that produced the login event and mark the instance to be decommissioned. Create an Amazon EventBridge rule to invoke a daily Lambda function that terminates all instances with this tag.
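The subscription-based options rely on one implementation detail: CloudWatch Logs delivers subscription payloads to Lambda base64-encoded and gzip-compressed. A sketch of the decode step plus a hypothetical tagging handler (the tag key, and the assumption that log streams are named after instance IDs, are both illustrative):

```python
import base64
import gzip
import json

def decode_subscription_event(data: str) -> dict:
    """CloudWatch Logs subscription payloads arrive base64-encoded and
    gzip-compressed; unpack them back into the JSON log batch."""
    return json.loads(gzip.decompress(base64.b64decode(data)))

def handler(event, context):
    """Hypothetical handler: tag the instance that produced the login event
    so a daily cleanup Lambda can later terminate it."""
    import boto3  # lazy import keeps the decoding sketch testable offline
    batch = decode_subscription_event(event["awslogs"]["data"])
    # Assumes the CloudWatch Logs agent names streams after the instance ID.
    instance_id = batch["logStream"]
    boto3.client("ec2").create_tags(
        Resources=[instance_id],
        Tags=[{"Key": "decommission", "Value": "true"}],  # hypothetical tag
    )
```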
#55 (Accuracy: 92% / 5 votes)
A company requires that its internally facing web application be highly available. The architecture is made up of one Amazon EC2 web server instance and one NAT instance that provides outbound internet access for updates and accessing public data.

Which combination of architecture adjustments should the company implement to achieve high availability? (Choose two.)
  • A. Add the NAT instance to an EC2 Auto Scaling group that spans multiple Availability Zones. Update the route tables.
  • B. Create additional EC2 instances spanning multiple Availability Zones. Add an Application Load Balancer to split the load between them.
  • C. Configure an Application Load Balancer in front of the EC2 instance. Configure Amazon CloudWatch alarms to recover the EC2 instance upon host failure.
  • D. Replace the NAT instance with a NAT gateway in each Availability Zone. Update the route tables.
  • E. Replace the NAT instance with a NAT gateway that spans multiple Availability Zones. Update the route tables.
#56 (Accuracy: 100% / 2 votes)
A company is deploying a container-based application using AWS CodeBuild. The Security team mandates that all containers are scanned for vulnerabilities prior to deployment using a password-protected endpoint. All sensitive information must be stored securely.
Which solution should be used to meet these requirements?
  • A. Encrypt the password using AWS KMS. Store the encrypted password in the buildspec.yml file as an environment variable under the variables mapping. Reference the environment variable to initiate scanning.
  • B. Import the password into an AWS CloudHSM key. Reference the CloudHSM key in the buildspec.yml file as an environment variable under the variables mapping. Reference the environment variable to initiate scanning.
  • C. Store the password in the AWS Systems Manager Parameter Store as a secure string. Add the Parameter Store key to the buildspec.yml file as an environment variable under the parameter-store mapping. Reference the environment variable to initiate scanning.
  • D. Use the AWS Encryption SDK to encrypt the password and embed in the buildspec.yml file as a variable under the secrets mapping. Attach a policy to CodeBuild to enable access to the required decryption key.
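The parameter-store mapping keeps the secret out of source control entirely: CodeBuild resolves the SecureString at build time and exposes it as an environment variable. A sketch of the equivalent buildspec expressed as a Python dict; the parameter name, endpoint, and scan command are hypothetical:

```python
import json

# buildspec.yml equivalent, expressed as a dict. The parameter-store mapping
# makes CodeBuild fetch the SecureString from Systems Manager Parameter Store
# and expose it as $SCAN_PASSWORD; the plaintext never appears in the file.
buildspec = {
    "version": 0.2,
    "env": {
        "parameter-store": {
            "SCAN_PASSWORD": "/ci/scanner/password",  # hypothetical parameter
        }
    },
    "phases": {
        "build": {
            "commands": [
                # Hypothetical scanner invocation against the protected endpoint.
                "scan-containers --endpoint https://scanner.example.com "
                "--password $SCAN_PASSWORD",
            ]
        }
    },
}
print(json.dumps(buildspec, indent=2))
```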
#57 (Accuracy: 100% / 2 votes)
A company is building a web and mobile application that uses a serverless architecture powered by AWS Lambda and Amazon API Gateway. The company wants to fully automate the backend Lambda deployment based on code that is pushed to the appropriate environment branch in an AWS CodeCommit repository.

The deployment must have the following:
• Separate environment pipelines for testing and production
• Automatic deployment that occurs for test environments only

Which steps should be taken to meet these requirements?
  • A. Configure a new AWS CodePipeline service. Create a CodeCommit repository for each environment. Set up CodePipeline to retrieve the source code from the appropriate repository. Set up the deployment step to deploy the Lambda functions with AWS CloudFormation.
  • B. Create two AWS CodePipeline configurations for test and production environments. Configure the production pipeline to have a manual approval step. Create a CodeCommit repository for each environment. Set up each CodePipeline to retrieve the source code from the appropriate repository. Set up the deployment step to deploy the Lambda functions with AWS CloudFormation.
  • C. Create two AWS CodePipeline configurations for test and production environments. Configure the production pipeline to have a manual approval step. Create one CodeCommit repository with a branch for each environment. Set up each CodePipeline to retrieve the source code from the appropriate branch in the repository. Set up the deployment step to deploy the Lambda functions with AWS CloudFormation.
  • D. Create an AWS CodeBuild configuration for test and production environments. Configure the production pipeline to have a manual approval step. Create one CodeCommit repository with a branch for each environment. Push the Lambda function code to an Amazon S3 bucket. Set up the deployment step to deploy the Lambda functions from the S3 bucket.
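The branch-per-environment pattern boils down to two pipelines whose source actions differ only in BranchName. A sketch of the CodeCommit source action structure (repository and branch names are hypothetical):

```python
def codecommit_source_action(repo: str, branch: str) -> dict:
    """CodePipeline source-stage action pulling from one branch of a single
    CodeCommit repository -- one such action per environment pipeline."""
    return {
        "name": "Source",
        "actionTypeId": {
            "category": "Source",
            "owner": "AWS",
            "provider": "CodeCommit",
            "version": "1",
        },
        "configuration": {"RepositoryName": repo, "BranchName": branch},
        "outputArtifacts": [{"name": "SourceOutput"}],
    }

# Hypothetical names: one pipeline per environment, same repository.
test_source = codecommit_source_action("backend", "test")
prod_source = codecommit_source_action("backend", "prod")
```

The production pipeline would additionally insert a Manual approval action before its CloudFormation deploy stage, so only the test pipeline deploys automatically.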
#58 (Accuracy: 100% / 3 votes)
A company wants to migrate a legacy application to AWS and develop a deployment pipeline that uses AWS services only. A DevOps engineer is migrating all of the application code from a Git repository to AWS CodeCommit while preserving the history of the repository. The DevOps engineer has set all the permissions within CodeCommit, installed the Git client and the AWS CLI on a local computer, and is ready to migrate the repository.
Which actions should the DevOps engineer take next?
  • A. Create the CodeCommit repository using the AWS CLI. Clone the Git repository directly to CodeCommit using the AWS CLI. Validate that the files were migrated, and publish the CodeCommit repository.
  • B. Create the CodeCommit repository using the AWS Management Console. Clone both the Git and CodeCommit repositories to the local computer. Copy the files from the Git repository to the CodeCommit repository on the local computer. Commit the CodeCommit repository. Validate that the files were migrated, and share the CodeCommit repository.
  • C. Create the CodeCommit repository using the AWS Management Console. Use the console to clone the Git repository into the CodeCommit repository. Validate that the files were migrated, and publish the CodeCommit repository.
  • D. Create the CodeCommit repository using the AWS Management Console or the AWS CLI. Clone the Git repository with a mirror argument to the local computer and push the repository to CodeCommit. Validate that the files were migrated, and share the CodeCommit repository.
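The --mirror workflow mentioned in the options is what actually preserves every branch, tag, and commit. A sketch of the two git invocations, wrapped for clarity (both URLs are hypothetical; a CodeCommit HTTPS remote looks like https://git-codecommit.&lt;region&gt;.amazonaws.com/v1/repos/&lt;name&gt;):

```python
import subprocess

def mirror_migration_commands(source_url: str, codecommit_url: str,
                              workdir: str) -> list:
    """A --mirror clone carries all refs and the full history; a --mirror
    push replays them into the empty CodeCommit repository."""
    return [
        ["git", "clone", "--mirror", source_url, workdir],
        ["git", "-C", workdir, "push", codecommit_url, "--mirror"],
    ]

def migrate(source_url: str, codecommit_url: str, workdir: str = "repo.git"):
    """Run the migration; assumes git and CodeCommit credentials are set up."""
    for cmd in mirror_migration_commands(source_url, codecommit_url, workdir):
        subprocess.run(cmd, check=True)
```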
#59 (Accuracy: 100% / 5 votes)
A company has deployed a new Amazon API Gateway API that retrieves the cost of items for the company's online store. An AWS Lambda function supports the API and retrieves the data from an Amazon DynamoDB table. The API's latency increases during times of peak usage each day. However, the latency of the DynamoDB table reads is constant throughout the day.

A DevOps engineer configures DynamoDB Accelerator (DAX) for the DynamoDB table, and the API latency decreases throughout the day.
The DevOps engineer then configures Lambda provisioned concurrency with a limit of two concurrent invocations. This change reduces the latency during normal usage. However, the company is still experiencing higher latency during times of peak usage than during times of normal usage.

Which set of additional steps should the DevOps engineer take to produce the LARGEST decrease in API latency?
  • A. Increase the read capacity of the DynamoDB table. Use AWS Application Auto Scaling to manage provisioned concurrency for the Lambda function.
  • B. Enable caching in API Gateway. Stop using provisioned concurrency for the Lambda function.
  • C. Delete the DAX cluster for the DynamoDB table. Use AWS Application Auto Scaling to manage provisioned concurrency for the Lambda function.
  • D. Enable caching in API Gateway. Use AWS Application Auto Scaling to manage provisioned concurrency for the Lambda function.
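Managing provisioned concurrency with Application Auto Scaling, as two of the options suggest, is a register_scalable_target call against the function's alias. A sketch of the kwargs; the function name, alias, and capacity limits are hypothetical:

```python
def provisioned_concurrency_target(function_name: str, alias: str,
                                   min_capacity: int, max_capacity: int) -> dict:
    """Kwargs for ApplicationAutoScaling.register_scalable_target, so the
    function's provisioned concurrency can track peak demand instead of
    staying pinned at a fixed value of two."""
    return {
        "ServiceNamespace": "lambda",
        "ResourceId": f"function:{function_name}:{alias}",
        "ScalableDimension": "lambda:function:ProvisionedConcurrency",
        "MinCapacity": min_capacity,
        "MaxCapacity": max_capacity,
    }

# Hypothetical function, alias, and limits.
target = provisioned_concurrency_target("price-lookup", "prod", 2, 100)
```

A target-tracking policy on the LambdaProvisionedConcurrencyUtilization metric would then raise capacity ahead of the daily peak, while an API Gateway stage cache keeps repeated price lookups from reaching Lambda at all.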
#60 (Accuracy: 100% / 3 votes)
A media company has several thousand Amazon EC2 instances in an AWS account. The company is using Slack and a shared email inbox for team communications and important updates. A DevOps engineer needs to send all AWS-scheduled EC2 maintenance notifications to the Slack channel and the shared inbox. The solution must include the instances' Name and Owner tags.

Which solution will meet these requirements?
  • A. Integrate AWS Trusted Advisor with AWS Config. Configure a custom AWS Config rule to invoke an AWS Lambda function to publish notifications to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe a Slack channel endpoint and the shared inbox to the topic.
  • B. Use Amazon EventBridge to monitor for AWS Health events. Configure the maintenance events to target an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe an AWS Lambda function to the SNS topic to send notifications to the Slack channel and the shared inbox.
  • C. Create an AWS Lambda function that sends EC2 maintenance notifications to the Slack channel and the shared inbox. Monitor EC2 health events by using Amazon CloudWatch metrics. Configure a CloudWatch alarm that invokes the Lambda function when a maintenance notification is received.
  • D. Configure AWS Support integration with AWS CloudTrail. Create a CloudTrail lookup event to invoke an AWS Lambda function to pass EC2 maintenance notifications to Amazon Simple Notification Service (Amazon SNS). Configure Amazon SNS to target the Slack channel and the shared inbox.
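The EventBridge route in this scenario matches AWS Health scheduled-change events for EC2; the pattern below is the standard shape for that event source, and the tag helper sketches how a Lambda subscriber might pull the required Name and Owner values from an EC2 DescribeTags response:

```python
# EventBridge event pattern matching AWS Health scheduled-change events
# for EC2 -- the trigger for the maintenance notifications.
MAINTENANCE_PATTERN = {
    "source": ["aws.health"],
    "detail-type": ["AWS Health Event"],
    "detail": {
        "service": ["EC2"],
        "eventTypeCategory": ["scheduledChange"],
    },
}

def name_and_owner(tags: list) -> dict:
    """Extract the Name and Owner tags (as returned by EC2 DescribeTags) so
    the subscriber can include them in the Slack and email messages."""
    return {t["Key"]: t["Value"] for t in tags if t["Key"] in ("Name", "Owner")}
```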