Amazon AWS Certified Database - Specialty
#151 (Accuracy: 100% / 6 votes)
A company is running Amazon RDS for MySQL for its workloads. There is downtime when AWS operating system patches are applied during the Amazon RDS-specified maintenance window.
What is the MOST cost-effective action that should be taken to avoid downtime?
  • A. Migrate the workloads from Amazon RDS for MySQL to Amazon DynamoDB
  • B. Enable cross-Region read replicas and direct read traffic to them when Amazon RDS is down
  • C. Enable a read replica and direct read traffic to it when Amazon RDS is down
  • D. Enable an Amazon RDS for MySQL Multi-AZ configuration
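Note: assuming Multi-AZ (option D) is the intended answer, the sketch below shows how an existing Single-AZ instance could be converted with boto3. With Multi-AZ, RDS patches the standby first, fails over, and then patches the old primary, so maintenance downtime shrinks to the brief failover. The instance identifier is hypothetical.

```python
import boto3

rds = boto3.client("rds")

# Convert an existing Single-AZ instance to Multi-AZ. OS patching then
# happens on the standby first, followed by an automatic failover.
rds.modify_db_instance(
    DBInstanceIdentifier="workload-mysql-prod",  # hypothetical identifier
    MultiAZ=True,
    ApplyImmediately=True,
)
```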
#152 (Accuracy: 93% / 8 votes)
A large financial services company requires that all data be encrypted in transit. A Developer is attempting to connect to an Amazon RDS DB instance using the company VPC for the first time with credentials provided by a Database Specialist. Other members of the Development team can connect, but this user is consistently receiving an error indicating a communications link failure. The Developer asked the Database Specialist to reset the password a number of times, but the error persists.
Which step should be taken to troubleshoot this issue?
  • A. Ensure that the database option group for the RDS DB instance allows ingress from the Developer machine's IP address
  • B. Ensure that the RDS DB instance's subnet group includes a public subnet to allow the Developer to connect
  • C. Ensure that the RDS DB instance has not reached its maximum connections limit
  • D. Ensure that the connection is using SSL and is addressing the port where the RDS DB instance is listening for encrypted connections
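Note: assuming option D is the intended answer, a minimal PyMySQL sketch of a TLS connection follows; the endpoint, credentials, and CA bundle path are hypothetical. A "communications link failure" typically points at the network layer (wrong port, or plaintext where TLS is required), not at bad credentials, which is why password resets did not help.

```python
import pymysql  # third-party driver: pip install pymysql

# Connect over TLS using the RDS CA bundle, on the port the DB instance
# actually listens on for encrypted connections.
conn = pymysql.connect(
    host="mydb.abc123.us-east-1.rds.amazonaws.com",  # hypothetical endpoint
    port=3306,
    user="appuser",
    password="********",
    ssl={"ca": "/opt/certs/global-bundle.pem"},  # RDS CA certificate bundle
)
```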
#153 (Accuracy: 100% / 6 votes)
An ecommerce company migrates an on-premises MongoDB database to Amazon DocumentDB (with MongoDB compatibility). After the migration, a database specialist realizes that encryption at rest has not been turned on for the Amazon DocumentDB cluster.
What should the database specialist do to enable encryption at rest for the Amazon DocumentDB cluster?
  • A. Take a snapshot of the Amazon DocumentDB cluster. Restore the unencrypted snapshot as a new cluster while specifying the encryption option, and provide an AWS Key Management Service (AWS KMS) key.
  • B. Enable encryption for the Amazon DocumentDB cluster on the AWS Management Console. Reboot the cluster.
  • C. Modify the Amazon DocumentDB cluster by using the modify-db-cluster command with the --storage-encrypted parameter set to true.
  • D. Add a new encrypted instance to the Amazon DocumentDB cluster, and then delete an unencrypted instance from the cluster. Repeat until all instances are encrypted.
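Note: assuming the snapshot-and-restore path (option A) is the intended answer, a boto3 sketch follows; cluster names and the KMS key alias are hypothetical. Encryption at rest cannot be switched on in place for an existing Amazon DocumentDB cluster.

```python
import time
import boto3

docdb = boto3.client("docdb")

# 1. Snapshot the unencrypted cluster.
docdb.create_db_cluster_snapshot(
    DBClusterIdentifier="ecommerce-docdb",              # hypothetical names
    DBClusterSnapshotIdentifier="docdb-pre-encryption",
)

# 2. Wait for the snapshot to become available.
while True:
    snap = docdb.describe_db_cluster_snapshots(
        DBClusterSnapshotIdentifier="docdb-pre-encryption"
    )["DBClusterSnapshots"][0]
    if snap["Status"] == "available":
        break
    time.sleep(30)

# 3. Restore as a new cluster; supplying a KMS key makes the restored
#    cluster encrypted at rest even though the snapshot is not.
docdb.restore_db_cluster_from_snapshot(
    DBClusterIdentifier="ecommerce-docdb-encrypted",
    SnapshotIdentifier="docdb-pre-encryption",
    Engine="docdb",
    KmsKeyId="alias/docdb-key",  # hypothetical KMS key alias
)
```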
#154 (Accuracy: 100% / 4 votes)
A company has a 20 TB production Amazon Aurora DB cluster. The company runs a large batch job overnight to load data into the Aurora DB cluster. To ensure the company's development team has the most up-to-date data for testing, a copy of the DB cluster must be available in the shortest possible time after the batch job completes.
How should this be accomplished?
  • A. Use the AWS CLI to schedule a manual snapshot of the DB cluster. Restore the snapshot to a new DB cluster using the AWS CLI.
  • B. Create a dump file from the DB cluster. Load the dump file into a new DB cluster.
  • C. Schedule a job to create a clone of the DB cluster at the end of the overnight batch process.
  • D. Set up a new daily AWS DMS task that will use cloning and change data capture (CDC) on the DB cluster to copy the data to a new DB cluster. Set up a time for the AWS DMS stream to stop when the new cluster is current.
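Note: assuming Aurora fast cloning (option C) is the intended answer, a boto3 sketch follows; identifiers and the instance class are hypothetical. A copy-on-write clone shares storage pages with the source, so even a 20 TB cluster is available in minutes rather than the hours a snapshot restore or dump/load would take.

```python
import boto3

rds = boto3.client("rds")

# Copy-on-write cloning shares storage with the source cluster, so the
# clone is ready shortly after the overnight batch job completes.
rds.restore_db_cluster_to_point_in_time(
    SourceDBClusterIdentifier="prod-aurora",   # hypothetical identifiers
    DBClusterIdentifier="dev-aurora-clone",
    RestoreType="copy-on-write",
    UseLatestRestorableTime=True,
)

# A clone is storage only; add a DB instance so the dev team can connect.
rds.create_db_instance(
    DBInstanceIdentifier="dev-aurora-clone-1",
    DBInstanceClass="db.r6g.large",
    Engine="aurora-mysql",
    DBClusterIdentifier="dev-aurora-clone",
)
```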
#155 (Accuracy: 100% / 5 votes)
A company is migrating a mission-critical 2-TB Oracle database from on premises to Amazon Aurora. The cost for the database migration must be kept to a minimum, and both the on-premises Oracle database and the Aurora DB cluster must remain open for write traffic until the company is ready to completely cut over to Aurora.
Which combination of actions should a database specialist take to accomplish this migration as quickly as possible? (Choose two.)
  • A. Use the AWS Schema Conversion Tool (AWS SCT) to convert the source database schema. Then restore the converted schema to the target Aurora DB cluster.
  • B. Use Oracle's Data Pump tool to export a copy of the source database schema and manually edit the schema in a text editor to make it compatible with Aurora.
  • C. Create an AWS DMS task to migrate data from the Oracle database to the Aurora DB cluster. Select the migration type to replicate ongoing changes to keep the source and target databases in sync until the company is ready to move all user traffic to the Aurora DB cluster.
  • D. Create an AWS DMS task to migrate data from the Oracle database to the Aurora DB cluster. Once the initial load is complete, create an Amazon Kinesis Data Firehose stream to perform change data capture (CDC) until the company is ready to move all user traffic to the Aurora DB cluster.
  • E. Create an AWS Glue job and related resources to migrate data from the Oracle database to the Aurora DB cluster. Once the initial load is complete, create an AWS DMS task to perform change data capture (CDC) until the company is ready to move all user traffic to the Aurora DB cluster.
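Note: assuming options A and C are the intended answers (AWS SCT for the schema, then a single DMS task for data plus ongoing replication), a boto3 sketch of the DMS task follows. All ARNs and names are hypothetical, and the endpoints and replication instance are assumed to exist already.

```python
import boto3

dms = boto3.client("dms")

# One task with MigrationType "full-load-and-cdc" performs the initial copy
# and then keeps replicating ongoing changes until cutover.
dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-aurora",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:oracle-src",
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:aurora-tgt",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:dms-inst",
    MigrationType="full-load-and-cdc",
    TableMappings='{"rules": [{"rule-type": "selection", "rule-id": "1", '
                  '"rule-name": "1", "object-locator": '
                  '{"schema-name": "%", "table-name": "%"}, '
                  '"rule-action": "include"}]}',
)
```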
#156 (Accuracy: 100% / 2 votes)
A company is moving its fraud detection application from on premises to the AWS Cloud and is using Amazon Neptune for data storage. The company has set up a 1 Gbps AWS Direct Connect connection to migrate 25 TB of fraud detection data from the on-premises data center to a Neptune DB instance. The company already has an Amazon S3 bucket and an S3 VPC endpoint, and 80% of the company's network bandwidth is available.
How should the company perform this data load?
  • A. Use an AWS SDK with a multipart upload to transfer the data from on premises to the S3 bucket. Use the Copy command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.
  • B. Use AWS Database Migration Service (AWS DMS) to transfer the data from on premises to the S3 bucket. Use the Loader command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.
  • C. Use AWS DataSync to transfer the data from on premises to the S3 bucket. Use the Loader command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.
  • D. Use the AWS CLI to transfer the data from on premises to the S3 bucket. Use the Copy command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.
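Note: assuming option C is the intended answer (AWS DataSync to S3, then the Neptune bulk loader), the sketch below shows the loader call, which is an HTTP POST to the cluster endpoint's /loader path. The endpoint, bucket, and IAM role are hypothetical; the loader pulls data from S3 through the S3 VPC endpoint rather than streaming it through the client.

```python
import requests

# Submit a bulk-load job to the Neptune loader endpoint. Neptune reads the
# files directly from S3 using the attached IAM role.
resp = requests.post(
    "https://my-neptune.cluster-abc.us-east-1.neptune.amazonaws.com:8182/loader",
    json={
        "source": "s3://fraud-data-bucket/graph/",   # hypothetical bucket
        "format": "csv",                             # Gremlin CSV format
        "iamRoleArn": "arn:aws:iam::123456789012:role/NeptuneLoadFromS3",
        "region": "us-east-1",
        "failOnError": "FALSE",
    },
)
print(resp.json())  # returns a loadId that can be polled for status
```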
#157 (Accuracy: 92% / 8 votes)
A company maintains several databases using Amazon RDS for MySQL and PostgreSQL. Each RDS database generates log files with retention periods set to their default values. The company has now mandated that database logs be maintained for up to 90 days in a centralized repository to facilitate real-time and after-the-fact analyses.
What should a Database Specialist do to meet these requirements with minimal effort?
  • A. Create an AWS Lambda function to pull logs from the RDS databases and consolidate the log files in an Amazon S3 bucket. Set a lifecycle policy to expire the objects after 90 days.
  • B. Modify the RDS databases to publish logs to Amazon CloudWatch Logs. Change the log retention policy for each log group to expire the events after 90 days.
  • C. Write a stored procedure in each RDS database to download the logs and consolidate the log files in an Amazon S3 bucket. Set a lifecycle policy to expire the objects after 90 days.
  • D. Create an AWS Lambda function to download the logs from the RDS databases and publish the logs to Amazon CloudWatch Logs. Change the log retention policy for the log group to expire the events after 90 days.
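Note: assuming option B is the intended answer, a boto3 sketch follows; the instance identifier is hypothetical, and the exportable log types differ per engine (the MySQL types are shown here, while PostgreSQL exports "postgresql" and "upgrade").

```python
import boto3

rds = boto3.client("rds")
logs = boto3.client("logs")

# Publish the MySQL logs to CloudWatch Logs.
rds.modify_db_instance(
    DBInstanceIdentifier="orders-mysql",  # hypothetical identifier
    CloudwatchLogsExportConfiguration={
        "EnableLogTypes": ["error", "general", "slowquery"]
    },
    ApplyImmediately=True,
)

# Keep each exported log group for 90 days; repeat per log group.
logs.put_retention_policy(
    logGroupName="/aws/rds/instance/orders-mysql/error",
    retentionInDays=90,
)
```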
#158 (Accuracy: 100% / 6 votes)
A company is running its line-of-business application on AWS, which uses Amazon RDS for MySQL as the persistent data store. The company wants to minimize downtime when it migrates the database to Amazon Aurora.
Which migration method should a Database Specialist use?
  • A. Take a snapshot of the RDS for MySQL DB instance and create a new Aurora DB cluster with the option to migrate snapshots.
  • B. Make a backup of the RDS for MySQL DB instance using the mysqldump utility, create a new Aurora DB cluster, and restore the backup.
  • C. Create an Aurora Replica from the RDS for MySQL DB instance and promote the Aurora DB cluster.
  • D. Create a clone of the RDS for MySQL DB instance and promote the Aurora DB cluster.
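Note: assuming the Aurora Read Replica approach (option C) is the intended answer, a boto3 sketch follows; all identifiers and ARNs are hypothetical, and networking details such as the subnet group are omitted. The replica stays in sync through continuous replication while the source keeps serving traffic, so downtime is limited to the promotion at cutover.

```python
import boto3

rds = boto3.client("rds")

# Create an Aurora cluster that acts as a Read Replica of the RDS for
# MySQL instance.
rds.create_db_cluster(
    DBClusterIdentifier="lob-aurora",          # hypothetical identifiers
    Engine="aurora-mysql",
    ReplicationSourceIdentifier=(
        "arn:aws:rds:us-east-1:123456789012:db:lob-mysql-prod"
    ),
)
rds.create_db_instance(
    DBInstanceIdentifier="lob-aurora-1",
    DBInstanceClass="db.r6g.large",
    Engine="aurora-mysql",
    DBClusterIdentifier="lob-aurora",
)

# At cutover, once replica lag is zero and writes are paused:
rds.promote_read_replica_db_cluster(DBClusterIdentifier="lob-aurora")
```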
#159 (Accuracy: 100% / 8 votes)
A gaming company wants to deploy a game in multiple Regions. The company plans to save local high scores in Amazon DynamoDB tables in each Region. A Database Specialist needs to design a solution to automate the deployment of the database with identical configurations in additional Regions, as needed. The solution should also automate configuration changes across all Regions.
Which solution would meet these requirements and deploy the DynamoDB tables?
  • A. Create an AWS CLI command to deploy the DynamoDB table to all the Regions and save it for future deployments.
  • B. Create an AWS CloudFormation template and deploy the template to all the Regions.
  • C. Create an AWS CloudFormation template and use a stack set to deploy the template to all the Regions.
  • D. Create DynamoDB tables using the AWS Management Console in all the Regions and create a step-by-step guide for future deployments.
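Note: assuming CloudFormation stack sets (option C) are the intended answer, a boto3 sketch follows; the stack set name, template URL, account, and Region list are hypothetical. A stack set deploys one template to many Regions and pushes any later template update to every stack instance, which covers the "configuration changes across all Regions" requirement.

```python
import boto3

cfn = boto3.client("cloudformation")

# Create the stack set once, then fan it out to each target Region.
cfn.create_stack_set(
    StackSetName="game-high-scores",
    TemplateURL="https://s3.amazonaws.com/templates/dynamodb-scores.yaml",
    PermissionModel="SELF_MANAGED",
)
cfn.create_stack_instances(
    StackSetName="game-high-scores",
    Accounts=["123456789012"],
    Regions=["us-east-1", "eu-west-1", "ap-northeast-1"],
)
```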
#160 (Accuracy: 100% / 6 votes)
A company is running a finance application on an Amazon RDS for MySQL DB instance. The application is governed by multiple financial regulatory agencies. The RDS DB instance is set up with security groups that allow access from certain Amazon EC2 servers only, and AWS KMS is used for encryption at rest.
Which step will provide additional security?
  • A. Set up NACLs that allow the entire EC2 subnet to access the DB instance
  • B. Disable the master user account
  • C. Set up a security group that blocks SSH to the DB instance
  • D. Set up RDS to use SSL for data in transit
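Note: assuming SSL for data in transit (option D) is the intended answer, the sketch below shows one way to make TLS mandatory for RDS for MySQL, by setting require_secure_transport (available on MySQL 5.7 and later) in a custom parameter group. The group name is hypothetical, and the group is assumed to be already attached to the instance.

```python
import boto3

rds = boto3.client("rds")

# Reject any client connection that does not use TLS.
rds.modify_db_parameter_group(
    DBParameterGroupName="finance-mysql-params",  # hypothetical group
    Parameters=[
        {
            "ParameterName": "require_secure_transport",
            "ParameterValue": "1",
            "ApplyMethod": "immediate",  # dynamic parameter, no reboot needed
        }
    ],
)
```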