Amazon AWS Certified Database - Specialty
#61 (Accuracy: 100% / 4 votes)
A company is going through a security audit. The audit team has identified a cleartext master user password in the AWS CloudFormation templates for Amazon RDS for MySQL DB instances.
The audit team has flagged this as a security risk to the database team.
What should a database specialist do to mitigate this risk?
  • A. Change all the databases to use AWS IAM for authentication and remove all the cleartext passwords in CloudFormation templates.
  • B. Use an AWS Secrets Manager resource to generate a random password and reference the secret in the CloudFormation template.
  • C. Remove the passwords from the CloudFormation templates so Amazon RDS prompts for the password when the database is being created.
  • D. Remove the passwords from the CloudFormation template and store them in a separate file. Replace the passwords by running CloudFormation using a sed command.
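For reference, the Secrets Manager pattern keeps the credential out of the template entirely: the password is generated at deploy time and pulled in with a dynamic reference. The sketch below is a minimal, hypothetical illustration; the stack name, secret layout, and instance settings are placeholders rather than anything given in the question.

    import json
    import boto3

    # Minimal template: Secrets Manager generates the master password and the
    # RDS resource resolves it with a dynamic reference, so no cleartext
    # password ever appears in the template or in source control.
    template = {
        "Resources": {
            "DBSecret": {
                "Type": "AWS::SecretsManager::Secret",
                "Properties": {
                    "GenerateSecretString": {
                        "SecretStringTemplate": '{"username": "admin"}',
                        "GenerateStringKey": "password",
                        "PasswordLength": 32,
                        "ExcludeCharacters": '"@/\\'
                    }
                }
            },
            "MySQLInstance": {
                "Type": "AWS::RDS::DBInstance",
                "Properties": {
                    "Engine": "mysql",
                    "DBInstanceClass": "db.t3.medium",
                    "AllocatedStorage": "100",
                    "MasterUsername": {"Fn::Sub": "{{resolve:secretsmanager:${DBSecret}:SecretString:username}}"},
                    "MasterUserPassword": {"Fn::Sub": "{{resolve:secretsmanager:${DBSecret}:SecretString:password}}"}
                }
            }
        }
    }

    boto3.client("cloudformation").create_stack(
        StackName="rds-with-generated-secret",  # placeholder stack name
        TemplateBody=json.dumps(template),
    )

Because only the {{resolve:secretsmanager:...}} reference is stored, the generated password never shows up in the template body, the console, or version control.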
#62 (Accuracy: 100% / 3 votes)
A database specialist was alerted that a production Amazon RDS MariaDB instance with 100 GB of storage was out of space. In response, the database specialist modified the DB instance and added 50 GB of storage capacity. Three hours later, a new alert is generated due to a lack of free space on the same DB instance.
The database specialist decides to modify the instance immediately to increase its storage capacity by 20 GB.

What will happen when the modification is submitted?
  • A. The request will fail because this storage capacity is too large.
  • B. The request will succeed only if the primary instance is in active status.
  • C. The request will succeed only if CPU utilization is less than 10%.
  • D. The request will fail as the most recent modification was too soon.
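As background for this scenario, RDS does not accept a second storage modification for roughly six hours (or until storage optimization finishes) after the previous one. A minimal boto3 sketch of the follow-up change, with a placeholder instance identifier:

    import boto3
    from botocore.exceptions import ClientError

    rds = boto3.client("rds")

    # The earlier change grew storage from 100 GB to 150 GB and kicked off
    # storage optimization. Submitting another storage change only three hours
    # later falls inside the cooldown window, so the call is rejected.
    try:
        rds.modify_db_instance(
            DBInstanceIdentifier="prod-mariadb",  # placeholder identifier
            AllocatedStorage=170,                 # 150 GB + 20 GB
            ApplyImmediately=True,
        )
    except ClientError as err:
        print(f"Modification rejected: {err}")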
#63 (Accuracy: 100% / 3 votes)
A company has an on-premises production Microsoft SQL Server with 250 GB of data in one database. A database specialist needs to migrate this on-premises SQL Server to Amazon RDS for SQL Server.
The nightly native SQL Server backup file is approximately 120 GB in size. The application can be down for an extended period of time to complete the migration. Connectivity between the on-premises environment and AWS can be initiated from on-premises only.
How can the database be migrated from on-premises to Amazon RDS with the LEAST amount of effort?
  • A. Back up the SQL Server database using a native SQL Server backup. Upload the backup files to Amazon S3. Download the backup files on an Amazon EC2 instance and restore them from the EC2 instance into the new production RDS instance.
  • B. Back up the SQL Server database using a native SQL Server backup. Upload the backup files to Amazon S3. Restore the backup files from the S3 bucket into the new production RDS instance.
  • C. Provision and configure AWS DMS. Set up replication between the on-premises SQL Server environment to replicate the database to the new production RDS instance.
  • D. Back up the SQL Server database using AWS Backup. Once the backup is complete, restore the completed backup to an Amazon EC2 instance and move it to the new production RDS instance.
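For context, the native backup-and-restore path into RDS for SQL Server runs the rds_restore_database stored procedure against a .bak file that has been uploaded to Amazon S3, and it requires the SQLSERVER_BACKUP_RESTORE option on the instance. A rough sketch using pyodbc; the endpoint, credentials, bucket, and file names are placeholders:

    import pyodbc

    # Connect to the RDS for SQL Server instance (endpoint and credentials are
    # placeholders).
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=sqlserver-prod.abcdefgh.us-east-1.rds.amazonaws.com,1433;"
        "DATABASE=msdb;UID=admin;PWD=placeholder-password"
    )
    conn.autocommit = True

    # Start restoring the native .bak file that was uploaded to S3. The
    # instance must have an option group with SQLSERVER_BACKUP_RESTORE.
    conn.execute(
        "EXEC msdb.dbo.rds_restore_database "
        "@restore_db_name = 'production_db', "
        "@s3_arn_to_restore_from = 'arn:aws:s3:::my-backup-bucket/nightly.bak';"
    )

    # Poll the restore task until it reports SUCCESS.
    for row in conn.execute("EXEC msdb.dbo.rds_task_status @db_name = 'production_db';"):
        print(row)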
#64 (Accuracy: 93% / 8 votes)
A company is developing an application that performs intensive in-memory operations on advanced data structures such as sorted sets. The application requires sub-millisecond latency for reads and writes. The application occasionally must run a group of commands as an ACID-compliant operation. A database specialist is setting up the database for this application. The database specialist needs the ability to create a new database cluster from the latest backup of the production cluster.
Which type of cluster should the database specialist create to meet these requirements?
  • A. Amazon ElastiCache for Memcached
  • B. Amazon Neptune
  • C. Amazon ElastiCache for Redis
  • D. Amazon DynamoDB Accelerator (DAX)
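For context, the workload described here (sorted sets, sub-millisecond access, occasional atomic command groups) maps to Redis data types and MULTI/EXEC transactions. A small sketch with the redis-py client; the endpoint and key names are placeholders:

    import redis

    # Placeholder primary endpoint of an ElastiCache for Redis cluster.
    r = redis.Redis(host="game-cache.abcdef.ng.0001.use1.cache.amazonaws.com", port=6379)

    # Sorted set holding player scores; reads and writes stay in memory.
    r.zadd("leaderboard", {"alice": 1200, "bob": 950})

    # Run a group of commands atomically as a MULTI/EXEC transaction.
    with r.pipeline(transaction=True) as pipe:
        pipe.zincrby("leaderboard", 50, "alice")
        pipe.zincrby("leaderboard", -10, "bob")
        pipe.execute()

    # Top three players, highest score first.
    print(r.zrevrange("leaderboard", 0, 2, withscores=True))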
#65 (Accuracy: 100% / 3 votes)
A company is planning to use Amazon RDS for SQL Server for one of its critical applications. The company's security team requires that users of the RDS for SQL Server DB instance be authenticated with on-premises Microsoft Active Directory credentials.

Which combination of steps should a database specialist take to meet this requirement? (Choose three.)
  • A. Extend the on-premises Active Directory to AWS by using AD Connector.
  • B. Create an IAM user that uses the AmazonRDSDirectoryServiceAccess managed IAM policy.
  • C. Create a directory by using AWS Directory Service for Microsoft Active Directory.
  • D. Create an Active Directory domain controller on Amazon EC2.
  • E. Create an IAM role that uses the AmazonRDSDirectoryServiceAccess managed IAM policy.
  • F. Create a one-way forest trust from the AWS Directory Service for Microsoft Active Directory directory to the on-premises Active Directory.
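For context, once a directory and the trust relationship exist, Windows Authentication is turned on by joining the DB instance to the directory and supplying an IAM role that carries the AmazonRDSDirectoryServiceAccess policy. A minimal boto3 sketch with placeholder identifiers:

    import boto3

    rds = boto3.client("rds")

    # Join the RDS for SQL Server instance to the AWS Managed Microsoft AD
    # directory so it can accept Windows Authentication logins from the
    # trusted on-premises domain.
    rds.create_db_instance(
        DBInstanceIdentifier="sqlserver-prod",            # placeholder
        Engine="sqlserver-se",
        LicenseModel="license-included",
        DBInstanceClass="db.m5.xlarge",
        AllocatedStorage=250,
        MasterUsername="admin",
        MasterUserPassword="placeholder-password",         # resolve from Secrets Manager in practice
        Domain="d-1234567890",                             # AWS Managed Microsoft AD directory ID
        DomainIAMRoleName="rds-directoryservice-access",   # role with AmazonRDSDirectoryServiceAccess
    )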
#66 (Accuracy: 100% / 5 votes)
A company released a mobile game that quickly grew to 10 million daily active users in North America. The game's backend is hosted on AWS and makes extensive use of an Amazon DynamoDB table that is configured with a TTL attribute.
When an item is added or updated, its TTL is set to the current epoch time plus 600 seconds.
The game logic relies on old data being purged so that it can calculate rewards points accurately. Occasionally, items are read from the table that are several hours past their TTL expiry.
How should a database specialist fix this issue?
  • A. Use a client library that supports the TTL functionality for DynamoDB.
  • B. Include a query filter expression to ignore items with an expired TTL.
  • C. Set the ConsistentRead parameter to true when querying the table.
  • D. Create a local secondary index on the TTL attribute.
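For context, DynamoDB's TTL deletion is asynchronous and best effort, so expired items can still be returned for some time; reads typically filter them out explicitly. A boto3 sketch with placeholder table, key, and attribute names:

    import time
    import boto3
    from boto3.dynamodb.conditions import Key, Attr

    table = boto3.resource("dynamodb").Table("GameState")  # placeholder table name
    now = int(time.time())

    # TTL deletion is asynchronous and best effort, so expired items can still
    # be returned for a while. Filter them out so the rewards logic never sees
    # stale data. "expires_at" is the hypothetical TTL attribute name.
    response = table.query(
        KeyConditionExpression=Key("player_id").eq("player-123"),  # placeholder key
        FilterExpression=Attr("expires_at").gt(now),
    )
    items = response["Items"]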
#67 (Accuracy: 100% / 2 votes)
A database specialist is planning to migrate a MySQL database to Amazon Aurora. The database specialist wants to configure the primary DB cluster in the us-west-2 Region and the secondary DB cluster in the eu-west-1 Region. In the event of a disaster recovery scenario, the database must be available in eu-west-1 with an RPO of a few seconds. Which solution will meet these requirements?
  • A. Use Aurora MySQL with the primary DB cluster in us-west-2 and a cross-Region Aurora Replica in eu-west-1
  • B. Use Aurora MySQL with the primary DB cluster in us-west-2 and binlog-based external replication to eu-west-1
  • C. Use an Aurora MySQL global database with the primary DB cluster in us-west-2 and the secondary DB cluster in eu-west-1
  • D. Use Aurora MySQL with the primary DB cluster in us-west-2. Use AWS Database Migration Service (AWS DMS) change data capture (CDC) replication to the secondary DB cluster in eu-west-1
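For context, an Aurora global database attaches a read-only secondary cluster in another Region to the primary cluster and replicates with typically sub-second lag. A boto3 sketch with placeholder identifiers and account number:

    import boto3

    # Create the global database around the existing primary cluster in us-west-2.
    rds_usw2 = boto3.client("rds", region_name="us-west-2")
    rds_usw2.create_global_cluster(
        GlobalClusterIdentifier="app-global",  # placeholder
        SourceDBClusterIdentifier=(
            "arn:aws:rds:us-west-2:123456789012:cluster:primary-cluster"  # placeholder ARN
        ),
    )

    # Attach a read-only secondary cluster in eu-west-1; Aurora replicates the
    # storage volume across Regions with typically sub-second lag.
    rds_euw1 = boto3.client("rds", region_name="eu-west-1")
    rds_euw1.create_db_cluster(
        DBClusterIdentifier="secondary-cluster",  # placeholder
        Engine="aurora-mysql",
        GlobalClusterIdentifier="app-global",
    )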
#68 (Accuracy: 100% / 3 votes)
A company is using an Amazon Aurora PostgreSQL DB cluster for the backend of its mobile application. The application runs continuously. A database specialist is satisfied with the cluster's high availability and fast failover but is concerned about performance degradation after a failover.

How can the database specialist minimize the performance degradation after failover?
  • A. Enable cluster cache management for the Aurora DB cluster and set the promotion priority for the writer DB instance and replica to tier-0
  • B. Enable cluster cache management for the Aurora DB cluster and set the promotion priority for the writer DB instance and replica to tier-1
  • C. Enable Query Plan Management for the Aurora DB cluster and perform a manual plan capture
  • D. Enable Query Plan Management for the Aurora DB cluster and force the query optimizer to use the desired plan
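For context, cluster cache management in Aurora PostgreSQL is controlled by the apg_ccm_enabled cluster parameter and takes effect when the writer and the designated failover replica are both in promotion tier 0. A boto3 sketch with placeholder names:

    import boto3

    rds = boto3.client("rds")

    # Turn on cluster cache management in the custom DB cluster parameter group.
    rds.modify_db_cluster_parameter_group(
        DBClusterParameterGroupName="aurora-pg-ccm",  # placeholder custom group
        Parameters=[{
            "ParameterName": "apg_ccm_enabled",
            "ParameterValue": "1",
            "ApplyMethod": "immediate",
        }],
    )

    # Keep the writer and the designated failover replica in promotion tier 0
    # so the replica's buffer cache is kept warm ahead of a failover.
    for identifier in ("aurora-writer", "aurora-replica-1"):  # placeholder identifiers
        rds.modify_db_instance(
            DBInstanceIdentifier=identifier,
            PromotionTier=0,
            ApplyImmediately=True,
        )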
#69 (Accuracy: 100% / 2 votes)
A database administrator is working on transferring data from an on-premises Oracle instance to an Amazon RDS for Oracle DB instance through an AWS Database Migration Service (AWS DMS) task with ongoing replication only. The database administrator noticed that the migration task failed after running successfully for some time. The logs indicate that there was a generic error. The database administrator wants to know which data definition language (DDL) statement caused this issue.

What should the database administrator do to identify this issue in the MOST operationally efficient manner?
  • A. Export AWS DMS logs to Amazon CloudWatch and identify the DDL statement from the AWS Management Console
  • B. Turn on logging for the AWS DMS task by setting the TARGET_LOAD action with the level of severity set to LOGGER_SEVERITY_DETAILED_DEBUG
  • C. Turn on DDL activity tracing in the RDS for Oracle DB instance parameter group
  • D. Turn on logging for the AWS DMS task by setting the TARGET_APPLY action with the level of severity set to LOGGER_SEVERITY_DETAILED_DEBUG
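For context, DDL applied to the target endpoint is handled by the TARGET_APPLY component of a DMS task, so that is the log component whose severity gets raised. A boto3 sketch with a placeholder task ARN (the task has to be stopped before its settings can be changed):

    import json
    import boto3

    dms = boto3.client("dms")

    # Raise the TARGET_APPLY log component to detailed debug so the DDL
    # statement that broke the task shows up in the task's CloudWatch logs.
    settings = {
        "Logging": {
            "EnableLogging": True,
            "LogComponents": [
                {"Id": "TARGET_APPLY", "Severity": "LOGGER_SEVERITY_DETAILED_DEBUG"}
            ],
        }
    }

    dms.modify_replication_task(
        ReplicationTaskArn="arn:aws:dms:us-east-1:123456789012:task:EXAMPLETASK",  # placeholder
        ReplicationTaskSettings=json.dumps(settings),
    )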
#70 (Accuracy: 100% / 3 votes)
A company migrated an on-premises Oracle database to Amazon RDS for Oracle. A database specialist needs to monitor the latency of the database.

Which solution will meet this requirement with the LEAST operational overhead?
  • A. Publish RDS Performance Insights metrics to Amazon CloudWatch. Add AWS CloudTrail filters to monitor database performance
  • B. Install Oracle Statspack. Enable the performance statistics feature to collect, store, and display performance data to monitor database performance.
  • C. Enable RDS Performance Insights to visualize the database load. Enable Enhanced Monitoring to view how different threads use the CPU
  • D. Create a new DB parameter group that includes the AllocatedStorage, DBInstanceClassMemory, and DBInstanceVCPU variables. Enable RDS Performance Insights
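For context, both Performance Insights and Enhanced Monitoring can be switched on with a single instance modification. A boto3 sketch with placeholder identifiers:

    import boto3

    rds = boto3.client("rds")

    # Turn on Performance Insights (database load, top waits and SQL) and
    # Enhanced Monitoring (per-second OS and thread metrics) in one call.
    rds.modify_db_instance(
        DBInstanceIdentifier="oracle-prod",  # placeholder
        EnablePerformanceInsights=True,
        PerformanceInsightsRetentionPeriod=7,   # days
        MonitoringInterval=1,                   # seconds
        MonitoringRoleArn="arn:aws:iam::123456789012:role/rds-monitoring-role",  # placeholder
        ApplyImmediately=True,
    )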