Amazon AWS Certified Database - Specialty
#11 (Accuracy: 100% / 2 votes)
A company runs an ecommerce application on premises on Microsoft SQL Server. The company is planning to migrate the application to the AWS Cloud. The application code contains complex T-SQL queries and stored procedures.

The company wants to minimize database server maintenance and operating costs after the migration is completed.
The company also wants to minimize the need to rewrite code as part of the migration effort.

Which solution will meet these requirements?
  • A. Migrate the database to Amazon Aurora PostgreSQL. Turn on Babelfish.
  • B. Migrate the database to Amazon S3. Use Amazon Redshift Spectrum for query processing.
  • C. Migrate the database to Amazon RDS for SQL Server. Turn on Kerberos authentication.
  • D. Migrate the database to an Amazon EMR cluster that includes multiple primary nodes.
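A quick sketch of how the Babelfish setup in option A could be provisioned with boto3; the identifiers, parameter group family, and credentials below are placeholders, not values from the question. Babelfish for Aurora PostgreSQL is switched on through the cluster parameter rds.babelfish_status, which lets migrated T-SQL code and stored procedures keep using a SQL Server-compatible protocol.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Cluster parameter group with Babelfish turned on (rds.babelfish_status = on).
# The family must match an engine version that supports Babelfish.
rds.create_db_cluster_parameter_group(
    DBClusterParameterGroupName="babelfish-pg",
    DBParameterGroupFamily="aurora-postgresql15",
    Description="Aurora PostgreSQL cluster parameters with Babelfish enabled",
)
rds.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName="babelfish-pg",
    Parameters=[{
        "ParameterName": "rds.babelfish_status",
        "ParameterValue": "on",
        "ApplyMethod": "pending-reboot",
    }],
)

# New Aurora PostgreSQL cluster created with the Babelfish-enabled parameter group.
rds.create_db_cluster(
    DBClusterIdentifier="ecommerce-aurora",
    Engine="aurora-postgresql",
    MasterUsername="babelfish_admin",
    MasterUserPassword="REPLACE_ME",
    DBClusterParameterGroupName="babelfish-pg",
)
```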
#12 (Accuracy: 100% / 3 votes)
An ecommerce company uses an Amazon Aurora MySQL DB cluster to process payments. The company’s database specialist notices that Aurora performs database maintenance actions periodically. The database specialist is concerned because the upcoming maintenance window conflicts with a company sales event.

What should the database specialist do to address this concern with the LEAST operational effort?
  • A. Add a new Aurora Replica so that the maintenance action occurs on the Aurora Replica first.
  • B. Defer the maintenance action in the AWS Management Console or by using the AWS CLI.
  • C. Delete the maintenance action in the AWS Management Console or by using the AWS CLI.
  • D. Add a new Aurora standby DB instance so that the maintenance action occurs on the standby DB instance first.
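For context, pending maintenance actions can be listed and deferred programmatically. A minimal boto3 sketch follows, assuming deferral is allowed: an undo-opt-in cancels a next-maintenance opt-in for optional actions, while mandatory actions still auto-apply after their published deadline.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# List every maintenance action currently pending against RDS/Aurora resources.
pending = rds.describe_pending_maintenance_actions()
for resource in pending["PendingMaintenanceActions"]:
    arn = resource["ResourceIdentifier"]
    for detail in resource["PendingMaintenanceActionDetails"]:
        print(arn, detail["Action"], detail.get("CurrentApplyDate"))

        # Defer the action by undoing any opt-in for the next maintenance window.
        rds.apply_pending_maintenance_action(
            ResourceIdentifier=arn,
            ApplyAction=detail["Action"],
            OptInType="undo-opt-in",
        )
```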
#13 (Accuracy: 100% / 5 votes)
A company has two separate AWS accounts: one for the business unit and another for corporate analytics. The company wants to replicate the business unit data stored in Amazon RDS for MySQL in us-east-1 to its corporate analytics Amazon Redshift environment in us-west-1. The company wants to use AWS DMS with Amazon RDS as the source endpoint and Amazon Redshift as the target endpoint.

Which action will allow AWS DMS to perform the replication?
  • A. Configure the AWS DMS replication instance in the same account and Region as Amazon Redshift.
  • B. Configure the AWS DMS replication instance in the same account as Amazon Redshift and in the same Region as Amazon RDS.
  • C. Configure the AWS DMS replication instance in its own account and in the same Region as Amazon Redshift.
  • D. Configure the AWS DMS replication instance in the same account and Region as Amazon RDS.
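A hedged sketch of the DMS setup this question describes, with the replication instance placed in the analytics account in us-west-1, next to the Amazon Redshift target. All identifiers, the hostname, and credentials are placeholders.

```python
import boto3

# Replication instance in the analytics account, us-west-1 (same account and
# Region as the Amazon Redshift target endpoint).
dms = boto3.client("dms", region_name="us-west-1")

dms.create_replication_instance(
    ReplicationInstanceIdentifier="analytics-replication",
    ReplicationInstanceClass="dms.t3.medium",
    AllocatedStorage=100,
)

# Source endpoint pointing at the RDS for MySQL instance that lives in the
# business-unit account in us-east-1; the Redshift target endpoint would be
# created the same way with EngineName="redshift".
dms.create_endpoint(
    EndpointIdentifier="rds-mysql-source",
    EndpointType="source",
    EngineName="mysql",
    ServerName="business-db.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com",
    Port=3306,
    Username="dms_user",
    Password="REPLACE_ME",
)
```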
#14 (Accuracy: 100% / 3 votes)
A startup company is building a new application to allow users to visualize their on-premises and cloud networking components. The company expects billions of components to be stored and requires responses in milliseconds. The application should be able to identify:
✑ The networks and routes affected if a particular component fails.

✑ The networks that have redundant routes between them.

✑ The networks that do not have redundant routes between them.

✑ The fastest path between two networks.

Which database engine meets these requirements?
  • A. Amazon Aurora MySQL
  • B. Amazon Neptune
  • C. Amazon ElastiCache for Redis
  • D. Amazon DynamoDB
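A highly connected component-and-route model like this maps naturally onto a graph database. As an illustration only, a gremlinpython sketch against a hypothetical Neptune endpoint (the vertex IDs and edge labels are made up) shows how reachability from a failed component and a fewest-hop path between two networks could be queried.

```python
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.process.graph_traversal import __

# Hypothetical Neptune endpoint; networks/components are vertices, routes are edges.
conn = DriverRemoteConnection(
    "wss://my-neptune.cluster-xxxxxxxx.us-east-1.neptune.amazonaws.com:8182/gremlin",
    "g",
)
g = traversal().withRemote(conn)

# Networks reachable from (and therefore affected by) a failing component.
affected = g.V("component-42").repeat(__.out("connects_to")).emit().dedup().toList()

# A fewest-hop path between two networks; weighted "fastest path" queries would
# additionally need edge weights (e.g. latency properties) on the route edges.
path = (
    g.V("network-a")
    .repeat(__.out("route").simplePath())
    .until(__.hasId("network-b"))
    .path()
    .limit(1)
    .toList()
)

conn.close()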
#15 (Accuracy: 100% / 2 votes)
A healthcare company is running an application on Amazon EC2 in a public subnet and using Amazon DocumentDB (with MongoDB compatibility) as the storage layer. An audit reveals that the traffic between the application and Amazon DocumentDB is not encrypted and that the DocumentDB cluster is not encrypted at rest. A database specialist must correct these issues and ensure that the data in transit and the data at rest are encrypted.

Which actions should the database specialist take to meet these requirements? (Choose two.)
  • A. Download the SSH RSA public key for Amazon DocumentDB. Update the application configuration to use the instance endpoint instead of the cluster endpoint and run queries over SSH.
  • B. Download the SSL .pem public key for Amazon DocumentDB. Add the key to the application package and make sure the application is using the key while connecting to the cluster.
  • C. Create a snapshot of the unencrypted cluster. Restore the unencrypted snapshot as a new cluster with the --storage-encrypted parameter set to true. Update the application to point to the new cluster.
  • D. Create an Amazon DocumentDB VPC endpoint to prevent the traffic from going to the Amazon DocumentDB public endpoint. Set a VPC endpoint policy to allow only the application instance's security group to connect.
  • E. Activate encryption at rest using the modify-db-cluster command with the --storage-encrypted parameter set to true. Set the security group of the cluster to allow only the application instance's security group to connect.
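For reference, the two encryption fixes look roughly like this in Python; the endpoint, credentials, snapshot name, and KMS key alias are placeholders. Encryption in transit uses the Amazon DocumentDB CA bundle, and the restored cluster is encrypted at rest because a KMS key is supplied during the restore.

```python
import boto3
import pymongo

# In-transit encryption: connect over TLS using the Amazon DocumentDB CA bundle
# (downloaded separately as global-bundle.pem). retryWrites must stay disabled.
client = pymongo.MongoClient(
    "mongodb://appuser:REPLACE_ME@my-docdb.cluster-xxxxxxxx.us-east-1.docdb.amazonaws.com:27017/"
    "?tls=true&tlsCAFile=global-bundle.pem&replicaSet=rs0&retryWrites=false"
)

# At-rest encryption: restore a snapshot of the unencrypted cluster as a new
# encrypted cluster by supplying a KMS key, then repoint the application to it.
docdb = boto3.client("docdb", region_name="us-east-1")
docdb.restore_db_cluster_from_snapshot(
    DBClusterIdentifier="my-docdb-encrypted",
    SnapshotIdentifier="my-docdb-snapshot",
    Engine="docdb",
    KmsKeyId="alias/aws/rds",
)
```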
#16 (Accuracy: 100% / 3 votes)
A company uses an Amazon RDS for PostgreSQL database in the us-east-2 Region. The company wants to have a copy of the database available in the us-west-2 Region as part of a new disaster recovery strategy.

A database architect needs to create the new database with little to no downtime on the source database. The database architect has decided to use AWS Database Migration Service (AWS DMS) to replicate the database across Regions, starting in full load mode and then switching to change data capture (CDC) mode.

Which parameters must the database architect configure to support CDC mode for the RDS for PostgreSQL database? (Choose three.)
  • A. Set wal_level = logical.
  • B. Set wal_level = replica.
  • C. Set max_replication_slots to 1 or more, depending on the number of DMS tasks.
  • D. Set max_replication_slots to 0 to support dynamic allocation of slots.
  • E. Set wal_sender_timeout to 20,000 milliseconds.
  • F. Set wal_sender_timeout to 5,000 milliseconds.
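On RDS for PostgreSQL these settings are applied through a custom DB parameter group rather than directly in postgresql.conf. A minimal boto3 sketch with illustrative values only; note that on RDS the wal_level = logical behavior is switched on indirectly by setting rds.logical_replication to 1.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-2")

# Custom parameter group attached to the source RDS for PostgreSQL instance.
# rds.logical_replication and max_replication_slots are static (need a reboot);
# wal_sender_timeout is dynamic and can apply immediately. Values are examples.
rds.modify_db_parameter_group(
    DBParameterGroupName="source-postgres-params",
    Parameters=[
        {"ParameterName": "rds.logical_replication",
         "ParameterValue": "1", "ApplyMethod": "pending-reboot"},
        {"ParameterName": "max_replication_slots",
         "ParameterValue": "5", "ApplyMethod": "pending-reboot"},
        {"ParameterName": "wal_sender_timeout",
         "ParameterValue": "5000", "ApplyMethod": "immediate"},
    ],
)
```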
#17 (Accuracy: 100% / 2 votes)
A company is closing one of its remote data centers. This site runs a 100 TB on-premises data warehouse solution. The company plans to use the AWS Schema Conversion Tool (AWS SCT) and AWS DMS for the migration to AWS.
The site network bandwidth is 500 Mbps. A Database Specialist wants to migrate the on-premises data using Amazon S3 as the data lake and Amazon Redshift as the data warehouse. This move must take place during a 2-week period when source systems are shut down for maintenance. The data must remain encrypted at rest and in transit.
Which approach has the least risk and the highest likelihood of a successful data transfer?
  • A. Set up a VPN tunnel for encrypting data over the network from the data center to AWS. Leverage AWS SCT and apply the converted schema to Amazon Redshift. Once complete, start an AWS DMS task to move the data from the source to Amazon S3. Use AWS Glue to load the data from Amazon S3 to Amazon Redshift.
  • B. Leverage AWS SCT and apply the converted schema to Amazon Redshift. Start an AWS DMS task with two AWS Snowball Edge devices to copy data from on-premises to Amazon S3 with AWS KMS encryption. Use AWS DMS to finish copying data to Amazon Redshift.
  • C. Leverage AWS SCT and apply the converted schema to Amazon Redshift. Once complete, use a fleet of 10 TB dedicated encrypted drives using the AWS Import/Export feature to copy data from on-premises to Amazon S3 with AWS KMS encryption. Use AWS Glue to load the data to Amazon Redshift.
  • D. Set up a VPN tunnel for encrypting data over the network from the data center to AWS. Leverage a native database export feature to export the data and compress the files. Use the aws s3 cp command with multipart upload to upload these files to Amazon S3 with AWS KMS encryption. Once complete, load the data to Amazon Redshift using AWS Glue.
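The 2-week constraint can be sanity-checked with quick arithmetic: even at 100% sustained utilization of the 500 Mbps link, the network transfer alone needs roughly 18.5 days.

```python
# Rough transfer-time estimate for moving 100 TB over a 500 Mbps link,
# assuming perfectly sustained utilization (real throughput would be lower).
data_bits = 100e12 * 8            # 100 TB expressed in bits
link_bps = 500e6                  # 500 Mbps in bits per second
days = data_bits / link_bps / 86400
print(f"{days:.1f} days")         # about 18.5 days, beyond the 2-week window
```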
#18 (Accuracy: 100% / 3 votes)
A company has deployed an e-commerce web application in a new AWS account. An Amazon RDS for MySQL Multi-AZ DB instance is part of this deployment with a database-1.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com endpoint listening on port 3306. The company's Database Specialist is able to log in to MySQL and run queries from the bastion host using these details.
When users try to utilize the application hosted in the AWS account, they are presented with a generic error message.
The application servers are logging a `could not connect to server: Connection times out` error message to Amazon CloudWatch Logs.
What is the cause of this error?
  • A. The user name and password the application is using are incorrect.
  • B. The security group assigned to the application servers does not have the necessary rules to allow inbound connections from the DB instance.
  • C. The security group assigned to the DB instance does not have the necessary rules to allow inbound connections from the application servers.
  • D. The user name and password are correct, but the user is not authorized to use the DB instance.
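A connection timeout points at a network-level block rather than an authentication failure. As an illustration only, the kind of rule that resolves it could be added with boto3 as follows; both security group IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Allow MySQL traffic (TCP 3306) into the DB instance's security group from the
# security group attached to the application servers.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # security group on the RDS DB instance
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [
            {"GroupId": "sg-0fedcba9876543210"},  # application servers' group
        ],
    }],
)
```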
#19 (Accuracy: 91% / 7 votes)
A company is planning to close for several days. A Database Specialist needs to stop all applications along with the DB instances to ensure employees do not have access to the systems during this time. All databases are running on Amazon RDS for MySQL.
The Database Specialist wrote and ran a script to stop all the DB instances.
When reviewing the logs, the Database Specialist found that Amazon RDS DB instances with read replicas did not stop.
How should the Database Specialist edit the script to fix this issue?
  • A. Stop the source instances before stopping their read replicas
  • B. Delete each read replica before stopping its corresponding source instance
  • C. Stop the read replicas before stopping their source instances
  • D. Use the AWS CLI to stop each read replica and source instance at the same time
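A minimal boto3 sketch of a stop script that orders read replicas before their source instances; error handling and waiting for each replica to reach the stopped state before moving on are omitted.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")
instances = rds.describe_db_instances()["DBInstances"]

# Split the fleet: a populated ReadReplicaSourceDBInstanceIdentifier means the
# instance is itself a read replica. Stop replicas first, then the sources.
replicas = [i for i in instances if i.get("ReadReplicaSourceDBInstanceIdentifier")]
sources = [i for i in instances if not i.get("ReadReplicaSourceDBInstanceIdentifier")]

for instance in replicas + sources:
    rds.stop_db_instance(DBInstanceIdentifier=instance["DBInstanceIdentifier"])
```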
#20 (Accuracy: 100% / 4 votes)
The Security team for a finance company was notified of an internal security breach that happened 3 weeks ago. A Database Specialist must start producing audit logs out of the production Amazon Aurora PostgreSQL cluster for the Security team to use for monitoring and alerting. The Security team is required to perform real-time alerting and monitoring outside the Aurora DB cluster and wants to have the cluster push encrypted files to the chosen solution.
Which approach will meet these requirements?
  • A. Use pg_audit to generate audit logs and send the logs to the Security team.
  • B. Use AWS CloudTrail to audit the DB cluster and the Security team will get data from Amazon S3.
  • C. Set up database activity streams and connect the data stream from Amazon Kinesis to consumer applications.
  • D. Turn on verbose logging and set up a schedule for the logs to be dumped out for the Security team.
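For reference, a database activity stream is started on an Aurora cluster roughly as follows; the cluster ARN and KMS key alias are placeholders. The activity records are encrypted with the KMS key and delivered to an Amazon Kinesis data stream that external monitoring and alerting consumers can read.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Start a database activity stream on the production Aurora PostgreSQL cluster.
response = rds.start_activity_stream(
    ResourceArn="arn:aws:rds:us-east-1:111122223333:cluster:production-aurora",
    Mode="async",                       # asynchronous auditing to limit performance impact
    KmsKeyId="alias/audit-stream-key",  # KMS key used to encrypt the activity records
    ApplyImmediately=True,
)
print(response["KinesisStreamName"])    # Kinesis stream the Security team consumes
```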