Amazon AWS Certified Database - Specialty
#221 (Accuracy: 100% / 3 votes)
A company wants to automate the creation of secure test databases with random credentials to be stored safely for later use. The credentials should have sufficient information about each test database to initiate a connection and perform automated credential rotations. The credentials should not be logged or stored anywhere in an unencrypted form.
Which steps should a Database Specialist take to meet these requirements using an AWS CloudFormation template?
  • A. Create the database with the MasterUserName and MasterUserPassword properties set to the default values. Then, create the secret with the user name and password set to the same default values. Add a Secret Target Attachment resource with the SecretId and TargetId properties set to the Amazon Resource Names (ARNs) of the secret and the database. Finally, update the secret's password value with a randomly generated string set by the GenerateSecretString property.
  • B. Add a Mapping property from the database Amazon Resource Name (ARN) to the secret ARN. Then, create the secret with a chosen user name and a randomly generated password set by the GenerateSecretString property. Add the database with the MasterUserName and MasterUserPassword properties set to the user name of the secret.
  • C. Add a resource of type AWS::SecretsManager::Secret and specify the GenerateSecretString property. Then, define the database user name in the SecretStringTemplate property. Create a resource for the database and reference the secret string for the MasterUserName and MasterUserPassword properties. Then, add a resource of type AWS::SecretsManager::SecretTargetAttachment with the SecretId and TargetId properties set to the Amazon Resource Names (ARNs) of the secret and the database.
  • D. Create the secret with a chosen user name and a randomly generated password set by the GenerateSecretString property. Add an SecretTargetAttachment resource with the SecretId property set to the Amazon Resource Name (ARN) of the secret and the TargetId property set to a parameter value matching the desired database ARN. Then, create a database with the MasterUserName and MasterUserPassword properties set to the previously created values in the secret.
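The pattern described in option C can be sketched as a CloudFormation fragment. Logical IDs, the engine choice, and sizing values below are illustrative, not part of the question:

```yaml
Resources:
  DBSecret:
    Type: AWS::SecretsManager::Secret
    Properties:
      GenerateSecretString:
        SecretStringTemplate: '{"username": "admin"}'   # user name chosen here
        GenerateStringKey: password                     # random password generated here
        PasswordLength: 32
        ExcludeCharacters: '"@/\'
  TestDB:
    Type: AWS::RDS::DBInstance
    Properties:
      Engine: mysql
      DBInstanceClass: db.t3.micro
      AllocatedStorage: '20'
      # Dynamic references pull the values at deploy time without logging them
      MasterUsername: !Sub '{{resolve:secretsmanager:${DBSecret}::username}}'
      MasterUserPassword: !Sub '{{resolve:secretsmanager:${DBSecret}::password}}'
  SecretAttachment:
    Type: AWS::SecretsManager::SecretTargetAttachment
    Properties:
      SecretId: !Ref DBSecret
      TargetId: !Ref TestDB
      TargetType: AWS::RDS::DBInstance
```

The attachment resource adds the connection details (host, port, engine) to the secret, which is what enables automated rotation later.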
#222 (Accuracy: 100% / 3 votes)
A Database Specialist migrated an existing production MySQL database from on-premises to an Amazon RDS for MySQL DB instance. However, after the migration, the database needed to be encrypted at rest using AWS KMS. Due to the size of the database, reloading the data into an encrypted database would be too time-consuming, so it is not an option.
How should the Database Specialist satisfy this new requirement?
  • A. Create a snapshot of the unencrypted RDS DB instance. Create an encrypted copy of the unencrypted snapshot. Restore the encrypted snapshot copy.
  • B. Modify the RDS DB instance. Enable the AWS KMS encryption option that leverages the AWS CLI.
  • C. Restore an unencrypted snapshot into a MySQL RDS DB instance that is encrypted.
  • D. Create an encrypted read replica of the RDS DB instance. Promote it to be the master.
#223 (Accuracy: 100% / 5 votes)
A company is about to launch a new product, and test databases must be re-created from production data. The company runs its production databases on an
Amazon Aurora MySQL DB cluster.
A Database Specialist needs to deploy a solution to create these test databases as quickly as possible with the least amount of administrative effort.
What should the Database Specialist do to meet these requirements?
  • A. Restore a snapshot from the production cluster into test clusters
  • B. Create logical dumps of the production cluster and restore them into new test clusters
  • C. Use database cloning to create clones of the production cluster
  • D. Add an additional read replica to the production cluster and use that node for testing
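For reference, Aurora database cloning uses a copy-on-write restore, which can be expressed in CloudFormation as well as the console or CLI. A minimal sketch, with the source cluster name as a placeholder:

```yaml
Resources:
  TestClone:
    Type: AWS::RDS::DBCluster
    Properties:
      Engine: aurora-mysql
      SourceDBClusterIdentifier: production-cluster   # hypothetical source name
      RestoreType: copy-on-write                      # clone instead of full copy
      UseLatestRestorableTime: true
```

Because the clone shares storage pages with the source until either side writes, it is created in minutes regardless of database size.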
#224 (Accuracy: 100% / 5 votes)
A media company is using Amazon RDS for PostgreSQL to store user data. The RDS DB instance currently has a publicly accessible setting enabled and is hosted in a public subnet. Following a recent AWS Well-Architected Framework review, a Database Specialist was given new security requirements.
  • Only certain on-premises corporate network IPs should connect to the DB instance.
  • Connectivity is allowed from the corporate network only.

Which combination of steps does the Database Specialist need to take to meet these new requirements? (Choose three.)
  • A. Modify the pg_hba.conf file. Add the required corporate network IPs and remove the unwanted IPs.
  • B. Modify the associated security group. Add the required corporate network IPs and remove the unwanted IPs.
  • C. Move the DB instance to a private subnet using AWS DMS.
  • D. Enable VPC peering between the application host running on the corporate network and the VPC associated with the DB instance.
  • E. Disable the publicly accessible setting.
  • F. Connect to the DB instance using private IPs and a VPN.
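For context, a security-group rule that admits only a specific corporate range looks like the fragment below. The VPC ID and CIDR are placeholders, and the PostgreSQL port is the default 5432:

```yaml
Resources:
  DBSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow PostgreSQL from the corporate network only
      VpcId: vpc-0123456789abcdef0        # placeholder VPC ID
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 5432
          ToPort: 5432
          CidrIp: 203.0.113.0/24          # example corporate network range
```

Disabling public accessibility is a separate setting on the DB instance itself (PubliclyAccessible: false), not on the security group.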
#225 (Accuracy: 100% / 5 votes)
An ecommerce company recently migrated one of its SQL Server databases to an Amazon RDS for SQL Server Enterprise Edition DB instance. The company expects a spike in read traffic due to an upcoming sale. A database specialist must create a read replica of the DB instance to serve the anticipated read traffic.
Which actions should the database specialist take before creating the read replica? (Choose two.)
  • A. Identify a potential downtime window and stop the application calls to the source DB instance.
  • B. Ensure that automatic backups are enabled for the source DB instance.
  • C. Ensure that the source DB instance is a Multi-AZ deployment with Always On Availability Groups.
  • D. Ensure that the source DB instance is a Multi-AZ deployment with SQL Server Database Mirroring (DBM).
  • E. Modify the read replica parameter group setting and set the value to 1.
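For context, both automatic backups and Multi-AZ are properties of the source DB instance: automatic backups are enabled by any nonzero backup retention period. A hedged CloudFormation sketch (values illustrative):

```yaml
  SourceSqlServer:
    Type: AWS::RDS::DBInstance
    Properties:
      Engine: sqlserver-ee          # Enterprise Edition
      MultiAZ: true                 # Always On Availability Groups on Enterprise Edition
      BackupRetentionPeriod: 7      # any nonzero value enables automatic backups
```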
#226 (Accuracy: 100% / 3 votes)
A company is using Amazon Redshift for its data warehouse. During a review of the last few AWS monthly bills, a database specialist notices charges for Amazon Redshift backup storage. The database specialist needs to decrease the cost of these charges in the future and must create a solution that provides notification if the charges exceed a threshold.

Which combination of actions will meet these requirements with the LEAST operational overhead? (Choose two.)
  • A. Migrate all manual snapshots to the Amazon S3 Standard-Infrequent Access (S3 Standard-IA) storage class
  • B. Use an automated snapshot schedule to take a snapshot once each day
  • C. Create an Amazon CloudWatch billing alarm to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic if the threshold is exceeded
  • D. Create a serverless AWS Glue job to run every 4 hours to describe cluster snapshots and send an email message if the threshold is exceeded
  • E. Delete manual snapshots that are not required anymore
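A billing alarm like the one described in option C can be sketched as follows. The threshold is an example value; note that the AWS/Billing EstimatedCharges metric requires billing alerts to be enabled and is published only in the us-east-1 Region:

```yaml
Resources:
  BillingAlertTopic:
    Type: AWS::SNS::Topic
  BackupChargeAlarm:
    Type: AWS::CloudWatch::Alarm
    Properties:
      Namespace: AWS/Billing
      MetricName: EstimatedCharges
      Dimensions:
        - Name: Currency
          Value: USD
      Statistic: Maximum
      Period: 21600                       # billing metrics update a few times a day
      EvaluationPeriods: 1
      Threshold: 100                      # example threshold in USD
      ComparisonOperator: GreaterThanThreshold
      AlarmActions:
        - !Ref BillingAlertTopic
```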
#227 (Accuracy: 100% / 3 votes)
A global company needs to migrate from an on-premises Microsoft SQL Server database to a highly available database solution on AWS. The company wants to modernize its application and keep operational costs low. The current database includes secondary indexes and stored procedures that need to be included in the migration. The company has limited availability of database specialists to support the migration and wants to automate the process.

Which solution will meet these requirements?
  • A. Use AWS Database Migration Service (AWS DMS) to migrate all database objects from the on-premises SQL Server database to a Multi-AZ deployment of Amazon Aurora MySQL.
  • B. Use AWS Database Migration Service (AWS DMS) and the AWS Schema Conversion Tool (AWS SCT) to migrate all database objects from the on-premises SQL Server database to a Multi-AZ deployment of Amazon Aurora MySQL.
  • C. Rehost the on-premises SQL Server as a SQL Server Always On availability group. Host members of the availability group on Amazon EC2 instances. Use AWS Database Migration Service (AWS DMS) to migrate all database objects.
  • D. Rehost the on-premises SQL Server as a SQL Server Always On availability group. Host members of the availability group on Amazon EC2 instances in a single subnet that extends across multiple Availability Zones. Use SQL Server tools to migrate the data.
#228 (Accuracy: 100% / 3 votes)
A bike rental company operates an application to track its bikes. The application receives location and condition data from bike sensors. The application also receives rental transaction data from the associated mobile app.
The application uses Amazon DynamoDB as its database layer.
The company has configured DynamoDB with provisioned capacity set to 20% above the expected peak load of the application. On an average day, DynamoDB used 22 billion read capacity units (RCUs) and 60 billion write capacity units (WCUs). The application is running well. Usage changes smoothly over the course of the day and is generally shaped like a bell curve. The timing and magnitude of peaks vary based on the weather and season, but the general shape is consistent.
Which solution will provide the MOST cost optimization of the DynamoDB database layer?
  • A. Change the DynamoDB tables to use on-demand capacity.
  • B. Use AWS Auto Scaling and configure time-based scaling.
  • C. Enable DynamoDB capacity-based auto scaling.
  • D. Enable DynamoDB Accelerator (DAX).
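The trade-off here can be made concrete with rough arithmetic. In the sketch below, both the on-demand and provisioned unit prices are assumptions (approximate historical us-east-1 list prices, not current pricing), and provisioned auto scaling is assumed to track the smooth bell curve closely:

```python
# Rough daily cost comparison for the workload in the question:
# 22 billion RCUs and 60 billion WCUs consumed per day.
RCU_PER_DAY = 22e9
WCU_PER_DAY = 60e9
SECONDS_PER_DAY = 86_400

# On-demand: pay per million request units consumed (prices assumed).
OD_PRICE_PER_M_RCU = 0.25   # USD per million read request units
OD_PRICE_PER_M_WCU = 1.25   # USD per million write request units
on_demand_daily = (RCU_PER_DAY / 1e6) * OD_PRICE_PER_M_RCU \
                + (WCU_PER_DAY / 1e6) * OD_PRICE_PER_M_WCU

# Provisioned with auto scaling: pay per capacity-unit-hour (prices assumed).
# Because usage is a smooth, predictable bell curve, assume auto scaling keeps
# provisioned capacity close to the mean consumption rate.
PROV_PRICE_RCU_HOUR = 0.00013   # USD per RCU-hour
PROV_PRICE_WCU_HOUR = 0.00065   # USD per WCU-hour
avg_rcu = RCU_PER_DAY / SECONDS_PER_DAY   # mean RCUs consumed per second
avg_wcu = WCU_PER_DAY / SECONDS_PER_DAY
provisioned_daily = (avg_rcu * PROV_PRICE_RCU_HOUR
                     + avg_wcu * PROV_PRICE_WCU_HOUR) * 24

print(f"on-demand:   ${on_demand_daily:,.0f}/day")
print(f"provisioned: ${provisioned_daily:,.0f}/day")
```

At this scale, on-demand works out roughly an order of magnitude more expensive than well-tracked provisioned capacity, which is why on-demand mode suits spiky or unpredictable workloads rather than smooth high-volume ones.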
#229 (Accuracy: 100% / 3 votes)
A company uses Amazon DynamoDB as a data store for multi-tenant data. Approximately 70% of the reads by the company's application are strongly consistent. The current key schema for the DynamoDB table is as follows:

Partition key: OrgID
Sort key: TenantID#Version

Due to a change in design and access patterns, the company needs to support strongly consistent lookups based on the new schema below:

Partition key: OrgID#TenantID
Sort key: Version

How can the database specialist implement this change?
  • A. Create a global secondary index (GSI) on the existing table with the specified partition and sort key.
  • B. Create a local secondary index (LSI) on the existing table with the specified partition and sort key.
  • C. Create a new table with the specified partition and sort key. Create an AWS Glue ETL job to perform the transformation and write the transformed data to the new table.
  • D. Create a new table with the specified partition and sort key. Use AWS Database Migration Service (AWS DMS) to migrate the data to the new table.
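One detail worth noting: DynamoDB global secondary indexes support only eventually consistent reads, so strongly consistent lookups on the new schema require the keys to be in a base table. Whichever copy mechanism is used, the per-item rekeying itself is mechanical; a sketch in Python, where the attribute names come from the question and everything else is illustrative:

```python
# Sketch of the per-item key transformation an ETL job would apply when
# copying items from the old table (PK=OrgID, SK=TenantID#Version) into
# the new table (PK=OrgID#TenantID, SK=Version).
def transform_item(item: dict) -> dict:
    # Split the old composite sort key into its two components.
    tenant_id, version = item["TenantID#Version"].split("#", 1)
    # Carry over every non-key attribute unchanged.
    new_item = {k: v for k, v in item.items()
                if k not in ("OrgID", "TenantID#Version")}
    # Build the new composite partition key and simple sort key.
    new_item["OrgID#TenantID"] = f'{item["OrgID"]}#{tenant_id}'
    new_item["Version"] = version
    return new_item

old = {"OrgID": "org1", "TenantID#Version": "tenantA#3", "Plan": "pro"}
print(transform_item(old))
# → {'Plan': 'pro', 'OrgID#TenantID': 'org1#tenantA', 'Version': '3'}
```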
#230 (Accuracy: 100% / 3 votes)
A company hosts an internal file-sharing application running on Amazon EC2 instances in VPC_A. This application is backed by an Amazon ElastiCache cluster, which is in VPC_B and peered with VPC_A. The company migrates its application instances from VPC_A to VPC_B. Logs indicate that the file-sharing application no longer can connect to the ElastiCache cluster.
What should a database specialist do to resolve this issue?
  • A. Create a second security group on the EC2 instances. Add an outbound rule to allow traffic from the ElastiCache cluster security group.
  • B. Delete the ElastiCache security group. Add an interface VPC endpoint to enable the EC2 instances to connect to the ElastiCache cluster.
  • C. Modify the ElastiCache security group by adding outbound rules that allow traffic to VPC_B's CIDR blocks from the ElastiCache cluster.
  • D. Modify the ElastiCache security group by adding an inbound rule that allows traffic from the EC2 instances' security group to the ElastiCache cluster.
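An inbound rule that references the instances' security group, as option D describes, can be expressed like this (both group IDs are placeholders, and the port assumes Redis; Memcached listens on 11211):

```yaml
Resources:
  CacheIngressFromApp:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: sg-0cachecluster0000000          # ElastiCache cluster's security group
      IpProtocol: tcp
      FromPort: 6379
      ToPort: 6379
      SourceSecurityGroupId: sg-0appinstances00  # EC2 instances' security group
```

Referencing the source security group rather than a CIDR keeps the rule valid even as instances are replaced or scaled within that group.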