Amazon AWS Certified Database - Specialty
#191 (Accuracy: 100% / 5 votes)
An information management services company is storing JSON documents on premises. The company is using a MongoDB 3.6 database but wants to migrate to AWS.
The solution must be compatible, scalable, and fully managed. The solution also must result in as little downtime as possible during the migration.
Which solution meets these requirements?
  • A. Create an AWS Database Migration Service (AWS DMS) replication instance, a source endpoint for MongoDB, and a target endpoint of Amazon DocumentDB (with MongoDB compatibility).
  • B. Create an AWS Database Migration Service (AWS DMS) replication instance, a source endpoint for MongoDB, and a target endpoint of a MongoDB image that is hosted on Amazon EC2.
  • C. Use the mongodump and mongorestore tools to migrate the data from the source MongoDB deployment to Amazon DocumentDB (with MongoDB compatibility).
  • D. Use the mongodump and mongorestore tools to migrate the data from the source MongoDB deployment to a MongoDB image that is hosted on Amazon EC2.
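For reference, a minimal boto3 sketch of the AWS DMS setup that options A and B describe: a replication instance, a MongoDB source endpoint, and a DocumentDB target endpoint. All identifiers, hostnames, and credentials below are placeholder assumptions.

    import boto3

    dms = boto3.client("dms", region_name="us-east-1")  # assumed Region

    # Replication instance that will run the migration task.
    dms.create_replication_instance(
        ReplicationInstanceIdentifier="mongo-to-docdb",   # placeholder name
        ReplicationInstanceClass="dms.r5.large",          # assumed size
        AllocatedStorage=100,
    )

    # Source endpoint for the on-premises MongoDB 3.6 deployment.
    dms.create_endpoint(
        EndpointIdentifier="mongodb-source",
        EndpointType="source",
        EngineName="mongodb",
        MongoDbSettings={
            "ServerName": "onprem-mongo.example.com",     # placeholder host
            "Port": 27017,
            "Username": "dmsuser",
            "Password": "example-password",
            "AuthType": "password",
            "AuthSource": "admin",
            "DatabaseName": "appdata",                    # placeholder database
        },
    )

    # Target endpoint for Amazon DocumentDB (with MongoDB compatibility), as in option A.
    dms.create_endpoint(
        EndpointIdentifier="docdb-target",
        EndpointType="target",
        EngineName="docdb",
        ServerName="mycluster.cluster-example.us-east-1.docdb.amazonaws.com",
        Port=27017,
        Username="docdbadmin",
        Password="example-password",
        DatabaseName="appdata",
    )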
#192 (Accuracy: 100% / 3 votes)
A company is using Amazon DocumentDB (with MongoDB compatibility) to manage its complex documents. Users report that an Amazon DocumentDB cluster takes a long time to return query results. A database specialist must investigate and resolve this issue.

Which of the following can the database specialist use to investigate the query plan and analyze the query performance?
  • A. AWS X-Ray deep linking
  • B. Amazon CloudWatch Logs Insights
  • C. MongoDB explain() method
  • D. AWS CloudTrail with a custom filter
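For context, a minimal pymongo sketch of the explain() method named in option C, run against a DocumentDB cluster; the endpoint, credentials, and collection names are placeholder assumptions, and TLS options may differ in a real cluster.

    from pymongo import MongoClient

    # Placeholder Amazon DocumentDB endpoint and credentials.
    client = MongoClient(
        "mongodb://appuser:example-password@"
        "mycluster.cluster-example.us-east-1.docdb.amazonaws.com:27017/"
        "?tls=true&retryWrites=false"
    )
    orders = client["appdb"]["orders"]   # assumed database and collection

    # explain() returns the query plan (for example, COLLSCAN vs. IXSCAN)
    # so the specialist can see whether the query is using an index.
    plan = orders.find({"status": "shipped"}).explain()
    print(plan["queryPlanner"]["winningPlan"])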
#193 (Accuracy: 100% / 4 votes)
A startup company is developing electric vehicles. These vehicles are expected to send real-time data to the AWS Cloud for data analysis. This data will include trip metrics, trip duration, and engine temperature. The database team decides to store the data for 15 days using Amazon DynamoDB.

How can the database team achieve this with the LEAST operational overhead?
  • A. Implement Amazon DynamoDB Accelerator (DAX) on the DynamoDB table. Use Amazon EventBridge (Amazon CloudWatch Events) to poll the DynamoDB table and drop items after 15 days.
  • B. Turn on DynamoDB Streams for the DynamoDB table to push the data from DynamoDB to another storage location. Use AWS Lambda to poll and terminate items older than 15 days.
  • C. Turn on the TTL feature for the DynamoDB table. Use the TTL attribute as a timestamp and set the expiration of items to 15 days.
  • D. Create an AWS Lambda function to poll the list of DynamoDB tables every 15 days. Drop the existing table and create a new table.
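A minimal boto3 sketch of the TTL approach in option C; the table name, key attributes, and the "expires_at" attribute name are placeholder assumptions.

    import time
    import boto3

    dynamodb = boto3.client("dynamodb", region_name="us-east-1")   # assumed Region
    TABLE = "VehicleTelemetry"                                     # placeholder table name

    # Enable TTL on the table, pointing at an epoch-timestamp attribute.
    dynamodb.update_time_to_live(
        TableName=TABLE,
        TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
    )

    # When writing an item, set the TTL attribute 15 days in the future;
    # DynamoDB deletes expired items automatically at no extra cost.
    dynamodb.put_item(
        TableName=TABLE,
        Item={
            "vehicle_id": {"S": "EV-0001"},
            "trip_start": {"S": "2024-01-01T00:00:00Z"},
            "engine_temp_c": {"N": "74"},
            "expires_at": {"N": str(int(time.time()) + 15 * 24 * 3600)},
        },
    )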
#194 (Accuracy: 100% / 2 votes)
A company uses AWS Lambda functions in a private subnet in a VPC to run application logic. The Lambda functions must not have access to the public internet. Additionally, all data communication must remain within the private network. As part of a new requirement, the application logic needs access to an Amazon DynamoDB table.

What is the MOST secure way to meet this new requirement?
  • A. Provision the DynamoDB table inside the same VPC that contains the Lambda functions
  • B. Create a gateway VPC endpoint for DynamoDB to provide access to the table
  • C. Use a network ACL to only allow access to the DynamoDB table from the VPC
  • D. Use a security group to only allow access to the DynamoDB table from the VPC
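A minimal boto3 sketch of the gateway VPC endpoint in option B; the VPC ID, route table ID, and Region are placeholder assumptions.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")    # assumed Region

    # Gateway endpoint for DynamoDB: traffic from the private subnets is routed
    # to DynamoDB over the AWS network instead of the public internet.
    ec2.create_vpc_endpoint(
        VpcEndpointType="Gateway",
        VpcId="vpc-0123456789abcdef0",                    # placeholder VPC
        ServiceName="com.amazonaws.us-east-1.dynamodb",
        RouteTableIds=["rtb-0123456789abcdef0"],          # route table of the Lambda subnets
    )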
#195 (Accuracy: 92% / 5 votes)
A company uses Microsoft SQL Server on Amazon RDS in a Multi-AZ deployment as the database engine for its application. The company was recently acquired by another company. A database specialist must rename the database to follow a new naming standard.
Which combination of steps should the database specialist take to rename the database? (Choose two.)
  • A. Turn off automatic snapshots for the DB instance. Rename the database with the rdsadmin.dbo.rds_modify_db_name stored procedure. Turn on the automatic snapshots.
  • B. Turn off Multi-AZ for the DB instance. Rename the database with the rdsadmin.dbo.rds_modify_db_name stored procedure. Turn on Multi-AZ Mirroring.
  • C. Delete all existing snapshots for the DB instance. Use the rdsadmin.dbo.rds_modify_db_name stored procedure.
  • D. Update the application with the new database connection string.
  • E. Update the DNS record for the DB instance.
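For reference, a sketch of the rename sequence behind options B and D, using boto3 for the Multi-AZ toggle and pyodbc (assumed to be installed along with a SQL Server ODBC driver) for the stored procedure; identifiers, endpoint, and database names are placeholders, and in practice you would wait for each instance modification to finish before the next step.

    import boto3
    import pyodbc

    rds = boto3.client("rds", region_name="us-east-1")     # assumed Region
    INSTANCE = "sqlserver-prod"                            # placeholder DB instance identifier

    # 1. Temporarily turn off Multi-AZ so the rename is allowed.
    rds.modify_db_instance(DBInstanceIdentifier=INSTANCE, MultiAZ=False, ApplyImmediately=True)

    # 2. Rename the database with the RDS-provided stored procedure.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=sqlserver-prod.example.us-east-1.rds.amazonaws.com;"   # placeholder endpoint
        "UID=admin;PWD=example-password",
        autocommit=True,
    )
    conn.cursor().execute("EXEC rdsadmin.dbo.rds_modify_db_name N'OldAppDb', N'NewAppDb'")

    # 3. Turn Multi-AZ back on, then update the application's connection string
    #    to reference the new database name.
    rds.modify_db_instance(DBInstanceIdentifier=INSTANCE, MultiAZ=True, ApplyImmediately=True)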
#196 (Accuracy: 100% / 4 votes)
A company is running a mobile app that has a backend database in Amazon DynamoDB. The app experiences sudden increases and decreases in activity throughout the day. The company’s operations team notices that DynamoDB read and write requests are being throttled at different times, resulting in a negative customer experience.

Which solution will solve the throttling issue without requiring changes to the app?
  • A. Add a DynamoDB table in a secondary AWS Region. Populate the additional table by using DynamoDB Streams.
  • B. Deploy an Amazon ElastiCache cluster in front of the DynamoDB table.
  • C. Use on-demand capacity mode for the DynamoDB table.
  • D. Use DynamoDB Accelerator (DAX).
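A minimal boto3 sketch of switching the table to on-demand capacity, as in option C; the table name and Region are placeholder assumptions, and no application change is needed because the table name and API calls stay the same.

    import boto3

    dynamodb = boto3.client("dynamodb", region_name="us-east-1")   # assumed Region

    # Move the table from provisioned to on-demand capacity so it absorbs
    # sudden spikes without read/write throttling.
    dynamodb.update_table(
        TableName="MobileAppBackend",        # placeholder table name
        BillingMode="PAY_PER_REQUEST",
    )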
#197 (Accuracy: 100% / 3 votes)
A company has branch offices in the United States and Singapore. The company has a three-tier web application that uses a shared database. The database runs on an Amazon RDS for MySQL DB instance that is hosted in the us-west-2 Region. The application has a distributed front end that is deployed in us-west-2 and in the ap-southeast-1 Region. The company uses this front end as a dashboard that provides statistics to sales managers in each branch office.
The dashboard loads more slowly in the Singapore branch office than in the United States branch office.
The company needs a solution so that the dashboard loads consistently for users in each location.
Which solution will meet these requirements in the MOST operationally efficient way?
  • A. Take a snapshot of the DB instance in us-west-2. Create a new DB instance in ap-southeast-2 from the snapshot. Reconfigure the ap-southeast-1 front-end dashboard to access the new DB instance.
  • B. Create an RDS read replica in ap-southeast-1 from the primary DB instance in us-west-2. Reconfigure the ap-southeast-1 front-end dashboard to access the read replica.
  • C. Create a new DB instance in ap-southeast-1. Use AWS Database Migration Service (AWS DMS) and change data capture (CDC) to update the new DB instance in ap-southeast-1. Reconfigure the ap-southeast-1 front-end dashboard to access the new DB instance.
  • D. Create an RDS read replica in us-west-2, where the primary DB instance resides. Create a read replica in ap-southeast-1 from the read replica in us-west-2. Reconfigure the ap-southeast-1 front-end dashboard to access the read replica in ap-southeast-1.
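A minimal boto3 sketch of the cross-Region read replica in option B: the call is made in ap-southeast-1 and references the us-west-2 primary by its ARN. The identifiers, account ID, and instance class are placeholder assumptions.

    import boto3

    rds_sg = boto3.client("rds", region_name="ap-southeast-1")

    rds_sg.create_db_instance_read_replica(
        DBInstanceIdentifier="dashboard-replica-sg",       # placeholder replica name
        SourceDBInstanceIdentifier=(
            "arn:aws:rds:us-west-2:123456789012:db:dashboard-mysql-primary"
        ),
        DBInstanceClass="db.r5.large",                     # assumed instance class
        SourceRegion="us-west-2",  # boto3 uses this to presign the cross-Region request
    )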
#198 (Accuracy: 100% / 4 votes)
A global digital advertising company captures browsing metadata to contextually display relevant images, pages, and links to targeted users. A single page load can generate multiple events that need to be stored individually. The maximum size of an event is 200 KB and the average size is 10 KB. Each page load must query the user's browsing history to provide targeting recommendations. The advertising company expects over 1 billion page visits per day from users in the United States, Europe, Hong Kong, and India.
The structure of the metadata varies depending on the event. Additionally, the browsing metadata must be written and read with very low latency to ensure a good viewing experience for the users.
Which database solution meets these requirements?
  • A. Amazon DocumentDB
  • B. Amazon RDS Multi-AZ deployment
  • C. Amazon DynamoDB global table
  • D. Amazon Aurora Global Database
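A minimal boto3 sketch of a DynamoDB global table (option C) using the current global tables version: create the table with streams enabled, then add replica Regions near each user base. The table name, key schema, and Regions are placeholder assumptions, and in practice you would let the table return to ACTIVE between replica additions.

    import boto3

    use1 = boto3.client("dynamodb", region_name="us-east-1")
    TABLE = "BrowsingEvents"                               # placeholder table name

    use1.create_table(
        TableName=TABLE,
        AttributeDefinitions=[
            {"AttributeName": "user_id", "AttributeType": "S"},
            {"AttributeName": "event_ts", "AttributeType": "S"},
        ],
        KeySchema=[
            {"AttributeName": "user_id", "KeyType": "HASH"},
            {"AttributeName": "event_ts", "KeyType": "RANGE"},
        ],
        BillingMode="PAY_PER_REQUEST",
        StreamSpecification={"StreamEnabled": True, "StreamViewType": "NEW_AND_OLD_IMAGES"},
    )
    use1.get_waiter("table_exists").wait(TableName=TABLE)

    # Add one replica Region at a time (Europe, Hong Kong, India in this sketch).
    for region in ["eu-west-1", "ap-east-1", "ap-south-1"]:
        use1.update_table(
            TableName=TABLE,
            ReplicaUpdates=[{"Create": {"RegionName": region}}],
        )
        use1.get_waiter("table_exists").wait(TableName=TABLE)  # waits for ACTIVE again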
#199 (Accuracy: 100% / 2 votes)
A database specialist needs to move an Amazon RDS DB instance from one AWS account to another AWS account.

Which solution will meet this requirement with the LEAST operational effort?
  • A. Use AWS Database Migration Service (AWS DMS) to migrate the DB instance from the source AWS account to the destination AWS account.
  • B. Create a DB snapshot of the DB instance. Share the snapshot with the destination AWS account. Create a new DB instance by restoring the snapshot in the destination AWS account.
  • C. Create a Multi-AZ deployment for the DB instance. Create a read replica for the DB instance in the source AWS account. Use the read replica to replicate the data into the DB instance in the destination AWS account.
  • D. Use AWS DataSync to back up the DB instance in the source AWS account. Use AWS Resource Access Manager (AWS RAM) to restore the backup in the destination AWS account.
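A minimal boto3 sketch of the snapshot-sharing flow in option B; instance, snapshot, and account identifiers are placeholders, and an encrypted snapshot would also need its KMS key shared with the destination account.

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")      # source account, assumed Region

    # In the source account: take a manual snapshot and share it.
    rds.create_db_snapshot(
        DBInstanceIdentifier="app-db",                      # placeholder instance
        DBSnapshotIdentifier="app-db-transfer",
    )
    rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="app-db-transfer")
    rds.modify_db_snapshot_attribute(
        DBSnapshotIdentifier="app-db-transfer",
        AttributeName="restore",
        ValuesToAdd=["111122223333"],                       # placeholder destination account ID
    )

    # In the destination account (separate credentials), restore from the shared snapshot:
    # rds_dest.restore_db_instance_from_db_snapshot(
    #     DBInstanceIdentifier="app-db",
    #     DBSnapshotIdentifier="arn:aws:rds:us-east-1:444455556666:snapshot:app-db-transfer",
    # )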
#200 (Accuracy: 92% / 5 votes)
An ecommerce company has tasked a Database Specialist with creating a reporting dashboard that visualizes critical business metrics pulled from the core production database running on Amazon Aurora. Data that is read by the dashboard should be available within 100 milliseconds of an update.
The Database Specialist needs to review the current configuration of the Aurora DB cluster and develop a cost-effective solution.
The solution needs to accommodate the unpredictable read workload from the reporting dashboard without any impact on the write availability and performance of the DB cluster.
Which solution meets these requirements?
  • A. Turn on the serverless option in the DB cluster so it can automatically scale based on demand.
  • B. Provision a clone of the existing DB cluster for the new Application team.
  • C. Create a separate DB cluster for the new workload, refresh from the source DB cluster, and set up ongoing replication using AWS DMS change data capture (CDC).
  • D. Add an automatic scaling policy to the DB cluster to add Aurora Replicas to the cluster based on CPU consumption.
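For reference, a minimal boto3 sketch of the Aurora Replica auto scaling policy described in option D, using Application Auto Scaling; the cluster name, capacity bounds, and CPU target are placeholder assumptions.

    import boto3

    aas = boto3.client("application-autoscaling", region_name="us-east-1")   # assumed Region
    RESOURCE_ID = "cluster:reporting-aurora-cluster"       # placeholder Aurora cluster name

    # Register the cluster's Aurora Replica count as a scalable target.
    aas.register_scalable_target(
        ServiceNamespace="rds",
        ResourceId=RESOURCE_ID,
        ScalableDimension="rds:cluster:ReadReplicaCount",
        MinCapacity=1,
        MaxCapacity=5,
    )

    # Target-tracking policy: add or remove Aurora Replicas to hold average
    # reader CPU near the target, keeping reads off the writer instance.
    aas.put_scaling_policy(
        ServiceNamespace="rds",
        ResourceId=RESOURCE_ID,
        ScalableDimension="rds:cluster:ReadReplicaCount",
        PolicyName="reporting-replica-cpu",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
            },
            "TargetValue": 60.0,
        },
    )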