Amazon AWS Certified Database - Specialty
#141 (Accuracy: 93% / 4 votes)
A media company wants to use zero-downtime patching (ZDP) for its Amazon Aurora MySQL database. Multiple processing applications use SSL certificates to connect to the database endpoints and the read replicas.
Which factor will have the LEAST impact on the success of ZDP?
  • A. Binary logging is enabled, or binary log replication is in progress.
  • B. Current SSL connections are open to the database.
  • C. Temporary tables or table locks are in use.
  • D. The value of the lower_case_table_names server parameter was set to 0 when the tables were created.
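These factors can be checked before a patch window. Below is a minimal boto3 (Python) sketch that reads lower_case_table_names from a cluster parameter group; the parameter group name is a hypothetical placeholder, not part of the question.

```python
import boto3

rds = boto3.client("rds")

# Hypothetical cluster parameter group attached to the Aurora MySQL cluster.
PARAMETER_GROUP = "aurora-mysql-prod-cluster-params"

def get_cluster_parameter(group_name, parameter_name):
    """Return the current value of one cluster-level parameter, paging as needed."""
    kwargs = {"DBClusterParameterGroupName": group_name}
    while True:
        page = rds.describe_db_cluster_parameters(**kwargs)
        for param in page["Parameters"]:
            if param["ParameterName"] == parameter_name:
                return param.get("ParameterValue")
        marker = page.get("Marker")
        if not marker:
            return None
        kwargs["Marker"] = marker

# lower_case_table_names set at table-creation time (option D) is a
# configuration-time factor; options A-C describe runtime conditions
# (binary logging, open SSL connections, temporary tables or locks).
print(get_cluster_parameter(PARAMETER_GROUP, "lower_case_table_names"))
```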
#142 (Accuracy: 100% / 7 votes)
A company is load testing its three-tier production web application deployed with an AWS CloudFormation template on AWS. The Application team is making changes to deploy additional Amazon EC2 and AWS Lambda resources to expand the load testing capacity. A Database Specialist wants to ensure that the changes made by the Application team will not change the Amazon RDS database resources already deployed.
Which combination of steps would allow the Database Specialist to accomplish this? (Choose two.)
  • A. Review the stack drift before modifying the template
  • B. Create and review a change set before applying it
  • C. Export the database resources as stack outputs
  • D. Define the database resources in a nested stack
  • E. Set a stack policy for the database resources
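For context, here is a minimal boto3 (Python) sketch of the change-set and stack-policy mechanics referenced in options B and E. The stack name, template URL, and logical resource ID are assumptions for illustration only.

```python
import json
import boto3

cfn = boto3.client("cloudformation")

STACK_NAME = "three-tier-app"  # hypothetical stack name

# Stack policy that allows updates in general but denies any update
# action against the database resource's logical ID.
stack_policy = {
    "Statement": [
        {"Effect": "Allow", "Action": "Update:*", "Principal": "*", "Resource": "*"},
        {
            "Effect": "Deny",
            "Action": "Update:*",
            "Principal": "*",
            "Resource": "LogicalResourceId/ProductionDatabase",  # hypothetical logical ID
        },
    ]
}
cfn.set_stack_policy(StackName=STACK_NAME, StackPolicyBody=json.dumps(stack_policy))

# Create a change set from the Application team's updated template and
# review it before execution; the RDS resources should not appear among
# the proposed changes.
cfn.create_change_set(
    StackName=STACK_NAME,
    ChangeSetName="load-test-capacity",
    TemplateURL="https://example-bucket.s3.amazonaws.com/updated-template.yaml",
    Capabilities=["CAPABILITY_IAM"],
)
```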
#143 (Accuracy: 100% / 3 votes)
A manufacturing company's website uses an Amazon Aurora PostgreSQL DB cluster.
Which configurations will result in the LEAST application downtime during a failover? (Choose three.)
  • A. Use the provided read and write Aurora endpoints to establish a connection to the Aurora DB cluster.
  • B. Create an Amazon CloudWatch alert triggering a restore in another Availability Zone when the primary Aurora DB cluster is unreachable.
  • C. Edit and enable Aurora DB cluster cache management in parameter groups.
  • D. Set TCP keepalive parameters to a high value.
  • E. Set JDBC connection string timeout variables to a low value.
  • F. Set Java DNS caching timeouts to a high value.
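As a sketch of the cluster cache management configuration mentioned in option C, the boto3 (Python) calls below enable the apg_ccm_enabled cluster parameter and align promotion tiers. The parameter group name, replica identifier, and parameter value format are assumptions; apg_ccm_enabled is a static parameter and takes effect only after a reboot.

```python
import boto3

rds = boto3.client("rds")

# Hypothetical cluster parameter group already attached to the Aurora
# PostgreSQL cluster.
CLUSTER_PARAM_GROUP = "aurora-pg-prod-cluster-params"

# Enable cluster cache management so a promoted replica starts with a
# warm buffer cache after failover.
rds.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName=CLUSTER_PARAM_GROUP,
    Parameters=[
        {
            "ParameterName": "apg_ccm_enabled",
            "ParameterValue": "1",
            "ApplyMethod": "pending-reboot",
        }
    ],
)

# Cluster cache management relies on promotion tiers: the designated
# failover target should share tier 0 with the writer.
rds.modify_db_instance(
    DBInstanceIdentifier="aurora-pg-prod-replica-1",  # hypothetical identifier
    PromotionTier=0,
    ApplyImmediately=True,
)
```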
#144 (Accuracy: 100% / 4 votes)
A company is running its production databases in a 3 TB Amazon Aurora MySQL DB cluster. The DB cluster is deployed to the us-east-1 Region. For disaster recovery (DR) purposes, the company's database specialist needs to make the DB cluster rapidly available in another AWS Region to cover the production load with an RTO of less than 2 hours.
What is the MOST operationally efficient solution to meet these requirements?
  • A. Implement an AWS Lambda function to take a snapshot of the production DB cluster every 2 hours, and copy that snapshot to an Amazon S3 bucket in the DR Region. Restore the snapshot to an appropriately sized DB cluster in the DR Region.
  • B. Add a cross-Region read replica in the DR Region with the same instance type as the current primary instance. If the read replica in the DR Region needs to be used for production, promote the read replica to become a standalone DB cluster.
  • C. Create a smaller DB cluster in the DR Region. Configure an AWS Database Migration Service (AWS DMS) task with change data capture (CDC) enabled to replicate data from the current production DB cluster to the DB cluster in the DR Region.
  • D. Create an Aurora global database that spans two Regions. Use AWS Database Migration Service (AWS DMS) to migrate the existing database to the new global database.
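The cross-Region read replica approach described in option B can be sketched with boto3 (Python) as follows. The cluster ARN, identifiers, Region, and instance class are assumptions; an encrypted source cluster would need additional arguments such as KmsKeyId.

```python
import boto3

# Hypothetical identifiers; substitute real values.
SOURCE_CLUSTER_ARN = "arn:aws:rds:us-east-1:123456789012:cluster:prod-aurora-mysql"
DR_REGION = "us-west-2"

rds_dr = boto3.client("rds", region_name=DR_REGION)

# Create a cross-Region read replica cluster in the DR Region.
rds_dr.create_db_cluster(
    DBClusterIdentifier="prod-aurora-mysql-dr",
    Engine="aurora-mysql",
    ReplicationSourceIdentifier=SOURCE_CLUSTER_ARN,
)
rds_dr.create_db_instance(
    DBInstanceIdentifier="prod-aurora-mysql-dr-1",
    DBClusterIdentifier="prod-aurora-mysql-dr",
    Engine="aurora-mysql",
    DBInstanceClass="db.r5.2xlarge",  # match the primary's instance type
)

# During a DR event, detach the replica cluster and make it writable.
rds_dr.promote_read_replica_db_cluster(DBClusterIdentifier="prod-aurora-mysql-dr")
```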
#145 (Accuracy: 100% / 3 votes)
A financial services company has an application deployed on AWS that uses an Amazon Aurora PostgreSQL DB cluster. A recent audit showed that no log files contained database administrator activity. A database specialist needs to recommend a solution to provide database access and activity logs. The solution should use the least amount of effort and have a minimal impact on performance.
Which solution should the database specialist recommend?
  • A. Enable Aurora Database Activity Streams on the database in synchronous mode. Connect the Amazon Kinesis data stream to Kinesis Data Firehose. Set the Kinesis Data Firehose destination to an Amazon S3 bucket.
  • B. Create an AWS CloudTrail trail in the Region where the database runs. Associate the database activity logs with the trail.
  • C. Enable Aurora Database Activity Streams on the database in asynchronous mode. Connect the Amazon Kinesis data stream to Kinesis Data Firehose. Set the Firehose destination to an Amazon S3 bucket.
  • D. Allow connections to the DB cluster through a bastion host only. Restrict database access to the bastion host and application servers. Push the bastion host logs to Amazon CloudWatch Logs using the CloudWatch Logs agent.
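As background on the Database Activity Streams options (A and C), the boto3 (Python) sketch below starts an activity stream in asynchronous mode. The cluster ARN and KMS key are hypothetical placeholders.

```python
import boto3

rds = boto3.client("rds")

# Hypothetical cluster ARN and customer managed KMS key used to encrypt
# the activity stream.
CLUSTER_ARN = "arn:aws:rds:us-east-1:123456789012:cluster:aurora-pg-prod"
KMS_KEY_ID = "arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555"

# Asynchronous mode minimizes the performance impact on the database.
response = rds.start_activity_stream(
    ResourceArn=CLUSTER_ARN,
    Mode="async",
    KmsKeyId=KMS_KEY_ID,
    ApplyImmediately=True,
)

# Activity records are published to an Amazon Kinesis data stream whose
# name is returned here; a Kinesis Data Firehose delivery stream can
# then read from it and deliver to an S3 bucket.
print(response["KinesisStreamName"])
```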
#146 (Accuracy: 93% / 5 votes)
A financial services company uses Amazon RDS for Oracle with Transparent Data Encryption (TDE). The company is required to encrypt its data at rest at all times. The key required to decrypt the data has to be highly available, and access to the key must be limited. As a regulatory requirement, the company must have the ability to rotate the encryption key on demand. The company must be able to make the key unusable if any potential security breaches are spotted. The company also needs to accomplish these tasks with minimum overhead.
What should the database administrator use to set up the encryption to meet these requirements?
  • A. AWS CloudHSM
  • B. AWS Key Management Service (AWS KMS) with an AWS managed key
  • C. AWS Key Management Service (AWS KMS) with server-side encryption
  • D. AWS Key Management Service (AWS KMS) CMK with customer-provided material
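The key-lifecycle operations behind option D (a KMS key with imported, customer-provided material) can be sketched with boto3 (Python) as below. The alias name is hypothetical, the offline wrapping and import_key_material step is omitted, and with imported material "rotation" in practice means standing up a new key with fresh material and repointing the alias the application uses.

```python
import boto3

kms = boto3.client("kms")

# Create a customer managed key whose material will be imported
# (Origin=EXTERNAL). Alias creation and key policy details are omitted.
key = kms.create_key(
    Description="TDE master key for RDS for Oracle",
    Origin="EXTERNAL",
)
key_id = key["KeyMetadata"]["KeyId"]

# Fetch the public wrapping key and import token; the key material is
# wrapped offline and imported with import_key_material (not shown).
kms.get_parameters_for_import(
    KeyId=key_id,
    WrappingAlgorithm="RSAES_OAEP_SHA_256",
    WrappingKeySpec="RSA_2048",
)

# Repoint the alias used by the application when rotating to a new key.
kms.update_alias(AliasName="alias/rds-oracle-tde", TargetKeyId=key_id)

# If a potential breach is spotted, make the key unusable at once.
kms.disable_key(KeyId=key_id)
kms.delete_imported_key_material(KeyId=key_id)
```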
#147 (Accuracy: 100% / 4 votes)
A large automobile company is migrating the database of a critical financial application to Amazon DynamoDB. The company's risk and compliance policy requires that every change in the database be recorded as a log entry for audits. The system is anticipating more than 500,000 log entries each minute. Log entries should be stored in batches of at least 100,000 records in each file in Apache Parquet format.
How should a database specialist implement these requirements with DynamoDB?
  • A. Enable Amazon DynamoDB Streams on the table. Create an AWS Lambda function triggered by the stream. Write the log entries to an Amazon S3 object.
  • B. Create a backup plan in AWS Backup to back up the DynamoDB table once a day. Create an AWS Lambda function that restores the backup in another table and compares both tables for changes. Generate the log entries and write them to an Amazon S3 object.
  • C. Enable AWS CloudTrail logs on the table. Create an AWS Lambda function that reads the log files once an hour and filters DynamoDB API actions. Write the filtered log files to Amazon S3.
  • D. Enable Amazon DynamoDB Streams on the table. Create an AWS Lambda function triggered by the stream. Write the log entries to an Amazon Kinesis Data Firehose delivery stream with buffering and Amazon S3 as the destination.
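A minimal Lambda handler for the streams-to-Firehose pattern in option D might look like the boto3 (Python) sketch below. The delivery stream name is a placeholder, and it assumes the Firehose delivery stream is configured with S3 as the destination, Parquet record-format conversion, and buffering hints large enough to batch at least 100,000 records per file.

```python
import json
import boto3

firehose = boto3.client("firehose")

DELIVERY_STREAM = "dynamodb-audit-log"  # hypothetical delivery stream

def handler(event, context):
    """Triggered by the DynamoDB stream; forwards each change record."""
    records = [
        {"Data": (json.dumps(record["dynamodb"]) + "\n").encode("utf-8")}
        for record in event["Records"]
    ]
    # put_record_batch accepts up to 500 records per call; the Lambda
    # event source batch size is assumed to stay within that limit.
    firehose.put_record_batch(
        DeliveryStreamName=DELIVERY_STREAM,
        Records=records,
    )
```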
#148 (Accuracy: 100% / 6 votes)
A Database Specialist is designing a new database infrastructure for a ride-hailing application. The application data includes a ride tracking system that stores GPS coordinates for all rides.
Real-time statistics and metadata lookups must be performed with high throughput and microsecond latency. The database should be fault tolerant with minimal operational overhead and development effort.
Which solution meets these requirements in the MOST efficient way?
  • A. Use Amazon RDS for MySQL as the database and use Amazon ElastiCache
  • B. Use Amazon DynamoDB as the database and use DynamoDB Accelerator
  • C. Use Amazon Aurora MySQL as the database and use Aurora's buffer cache
  • D. Use Amazon DynamoDB as the database and use Amazon API Gateway
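For the DynamoDB-based options, a metadata lookup is a simple key read, as in the boto3 (Python) sketch below. The table name and key schema are assumptions; with DynamoDB Accelerator (option B), the DAX SDK client acts as a drop-in replacement for the low-level DynamoDB client, so read code like this is unchanged while cache hits are served with microsecond latency.

```python
import boto3

dynamodb = boto3.client("dynamodb")

TABLE_NAME = "ride-tracking"  # hypothetical table name

def get_latest_position(ride_id):
    """Metadata/statistics lookup keyed by ride ID."""
    response = dynamodb.get_item(
        TableName=TABLE_NAME,
        Key={"ride_id": {"S": ride_id}},
        ConsistentRead=False,  # DAX caches eventually consistent reads
    )
    return response.get("Item")
```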
#149 (Accuracy: 100% / 6 votes)
A Database Specialist modified an existing parameter group currently associated with a production Amazon RDS for SQL Server Multi-AZ DB instance. The change is to a static parameter that controls the number of user connections allowed on the company's most critical RDS SQL Server DB instance. The change has been approved for a specific maintenance window to minimize the impact on users.
How should the Database Specialist apply the parameter group change for the DB instance?
  • A. Select the option to apply the change immediately
  • B. Allow the preconfigured RDS maintenance window for the given DB instance to control when the change is applied
  • C. Apply the change manually by rebooting the DB instance during the approved maintenance window
  • D. Reboot the secondary Multi-AZ DB instance
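The mechanics behind a static parameter change can be sketched with boto3 (Python): the parameter group shows pending-reboot until the instance is restarted, and the reboot can be issued manually inside the approved window. The instance identifier is a placeholder, and whether to use ForceFailover on a Multi-AZ instance is a judgment call.

```python
import boto3

rds = boto3.client("rds")

DB_INSTANCE = "prod-sqlserver"  # hypothetical instance identifier

# A static parameter change leaves the group in "pending-reboot" until
# the instance is restarted, so confirm the state first.
info = rds.describe_db_instances(DBInstanceIdentifier=DB_INSTANCE)
status = info["DBInstances"][0]["DBParameterGroups"][0]["ParameterApplyStatus"]
print(status)  # expected: pending-reboot

# During the approved maintenance window, reboot manually. On a Multi-AZ
# SQL Server instance, ForceFailover can shorten the outage by failing
# over to the standby.
rds.reboot_db_instance(DBInstanceIdentifier=DB_INSTANCE, ForceFailover=True)
```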
#150 (Accuracy: 100% / 2 votes)
A company is using Amazon Aurora with Aurora Replicas. A database specialist needs to split up two read-only applications so that each application connects to a different set of DB instances. The database specialist wants to implement load balancing and high availability for the read-only applications.

Which solution meets these requirements?
  • A. Use a different instance endpoint for each application.
  • B. Use the reader endpoint for both applications.
  • C. Use the reader endpoint for one application and an instance endpoint for the other application.
  • D. Use different custom endpoints for each application.
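The custom-endpoint approach in option D can be sketched with boto3 (Python): one reader-type custom endpoint per application, each backed by its own set of Aurora Replicas. The cluster and replica identifiers below are hypothetical.

```python
import boto3

rds = boto3.client("rds")

CLUSTER_ID = "prod-aurora"  # hypothetical cluster identifier

# Custom endpoint for the first read-only application.
rds.create_db_cluster_endpoint(
    DBClusterIdentifier=CLUSTER_ID,
    DBClusterEndpointIdentifier="reporting-readers",
    EndpointType="READER",
    StaticMembers=["prod-aurora-replica-1", "prod-aurora-replica-2"],
)

# Custom endpoint for the second read-only application.
rds.create_db_cluster_endpoint(
    DBClusterIdentifier=CLUSTER_ID,
    DBClusterEndpointIdentifier="analytics-readers",
    EndpointType="READER",
    StaticMembers=["prod-aurora-replica-3", "prod-aurora-replica-4"],
)
```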