Amazon AWS Certified Database - Specialty
There are 231 results

#171 (Accuracy: 100% / 4 votes)
An ecommerce company is using Amazon DynamoDB as the backend for its order-processing application. The steady increase in the number of orders is resulting in increased DynamoDB costs. Order verification and reporting perform many repeated GetItem calls that pull similar datasets, and this read activity is contributing to the increased costs. The company wants to control these costs without significant development effort.
How should a Database Specialist address these requirements?
  • A. Use AWS DMS to migrate data from DynamoDB to Amazon DocumentDB
  • B. Use Amazon DynamoDB Streams and Amazon Kinesis Data Firehose to push the data into Amazon Redshift
  • C. Use Amazon ElastiCache for Redis in front of DynamoDB to boost read performance
  • D. Use DynamoDB Accelerator to offload the reads
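Option D is correct because DynamoDB Accelerator (DAX) is an API-compatible, write-through cache, so repeated GetItem reads are absorbed with minimal code change. A hedged provisioning sketch with the AWS CLI; the cluster name, node type, and IAM role ARN are illustrative placeholders:

```shell
# Provision a DAX cluster in front of the existing DynamoDB tables.
# Names and the role ARN below are placeholders, not values from the question.
aws dax create-cluster \
  --cluster-name orders-dax \
  --node-type dax.r5.large \
  --replication-factor 3 \
  --iam-role-arn arn:aws:iam::123456789012:role/DaxToDynamoDBRole
# The application then swaps its DynamoDB client for the DAX SDK client;
# repeated GetItem calls for the same keys are served from the cache.
```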
#172 (Accuracy: 100% / 5 votes)
A large gaming company is creating a centralized solution to store player session state for multiple online games. The workload requires key-value storage with low latency and will be an equal mix of reads and writes. Data should be written into the AWS Region closest to the user across the games' geographically distributed user base. The architecture should minimize the amount of overhead required to manage the replication of data between Regions.
Which solution meets these requirements?
  • A. Amazon RDS for MySQL with multi-Region read replicas
  • B. Amazon Aurora global database
  • C. Amazon RDS for Oracle with GoldenGate
  • D. Amazon DynamoDB global tables
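Option D fits because DynamoDB global tables give each Region a fully writable replica with service-managed replication. A hedged sketch of adding replica Regions to an existing table (global tables version 2019.11.21); the table name and Regions are placeholders, and the table must already have streams enabled:

```shell
# Add replica Regions to an existing table; replication between Regions
# is then managed entirely by the service.
aws dynamodb update-table \
  --table-name PlayerSessions \
  --replica-updates '[{"Create": {"RegionName": "eu-west-1"}}]'

aws dynamodb update-table \
  --table-name PlayerSessions \
  --replica-updates '[{"Create": {"RegionName": "ap-southeast-1"}}]'
# Each game client writes to the DynamoDB endpoint in its nearest Region.
```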
#173 (Accuracy: 100% / 3 votes)
A financial company wants to store sensitive user data in an Amazon Aurora PostgreSQL DB cluster. The database will be accessed by multiple applications across the company. The company has mandated that all communications to the database be encrypted and that the server identity be validated. Any non-SSL-based connections should be denied access to the database.
Which solution addresses these requirements?
  • A. Set the rds.force_ssl=0 parameter in DB parameter groups. Download and use the Amazon RDS certificate bundle and configure the PostgreSQL connection string with sslmode=allow.
  • B. Set the rds.force_ssl=1 parameter in DB parameter groups. Download and use the Amazon RDS certificate bundle and configure the PostgreSQL connection string with sslmode=disable.
  • C. Set the rds.force_ssl=0 parameter in DB parameter groups. Download and use the Amazon RDS certificate bundle and configure the PostgreSQL connection string with sslmode=verify-ca.
  • D. Set the rds.force_ssl=1 parameter in DB parameter groups. Download and use the Amazon RDS certificate bundle and configure the PostgreSQL connection string with sslmode=verify-full.
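Option D combines server-side enforcement with full client-side verification: rds.force_ssl=1 rejects unencrypted connections, and sslmode=verify-full both encrypts the session and checks that the server certificate matches the hostname. A hedged sketch; the parameter group name, endpoint, and credentials are placeholders:

```shell
# Enforce SSL at the cluster (custom cluster parameter group assumed) ...
aws rds modify-db-cluster-parameter-group \
  --db-cluster-parameter-group-name aurora-pg-custom \
  --parameters "ParameterName=rds.force_ssl,ParameterValue=1,ApplyMethod=immediate"

# ... and verify the server identity from the client using the RDS CA bundle.
psql "host=mydb.cluster-abc123.us-east-1.rds.amazonaws.com \
      dbname=appdb user=appuser \
      sslmode=verify-full sslrootcert=global-bundle.pem"
```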
#174 (Accuracy: 100% / 7 votes)
A database specialist is responsible for an Amazon RDS for MySQL DB instance with one read replica. The DB instance and the read replica are assigned to the default parameter group. The database team currently runs test queries against the read replica. The database team wants to create additional tables on the read replica that will be accessible only from the read replica to support the tests.
Which should the database specialist do to allow the database team to create the test tables?
  • A. Contact AWS Support to disable read-only mode on the read replica. Reboot the read replica. Connect to the read replica and create the tables.
  • B. Change the read_only parameter to false (read_only=0) in the default parameter group of the read replica. Perform a reboot without failover. Connect to the read replica and create the tables using the local_only MySQL option.
  • C. Change the read_only parameter to false (read_only=0) in the default parameter group. Reboot the read replica. Connect to the read replica and create the tables.
  • D. Create a new DB parameter group. Change the read_only parameter to false (read_only=0). Associate the read replica with the new group. Reboot the read replica. Connect to the read replica and create the tables.
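Option D is correct because default parameter groups cannot be modified; a custom group must be created, changed, and attached to the replica only. A hedged sketch; the group name, family, and instance identifier are placeholders:

```shell
# Default parameter groups are immutable, so create a custom one.
aws rds create-db-parameter-group \
  --db-parameter-group-name replica-writable \
  --db-parameter-group-family mysql8.0 \
  --description "Read replica with read_only disabled for test tables"

# read_only requires a reboot to take effect.
aws rds modify-db-parameter-group \
  --db-parameter-group-name replica-writable \
  --parameters "ParameterName=read_only,ParameterValue=0,ApplyMethod=pending-reboot"

# Attach the group to the replica only, then reboot to apply.
aws rds modify-db-instance \
  --db-instance-identifier mydb-replica-1 \
  --db-parameter-group-name replica-writable
aws rds reboot-db-instance --db-instance-identifier mydb-replica-1
```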
#175 (Accuracy: 100% / 3 votes)
A company just migrated to Amazon Aurora PostgreSQL from an on-premises Oracle database. After the migration, the company discovered there is a period of time every day around 3:00 PM when the response time of the application is noticeably slower. The company has narrowed down the cause of this issue to the database and not the application.
Which set of steps should the Database Specialist take to most efficiently find the problematic PostgreSQL query?
  • A. Create an Amazon CloudWatch dashboard to show the number of connections, CPU usage, and disk space consumption. Watch these dashboards during the next slow period.
  • B. Launch an Amazon EC2 instance, and install and configure an open-source PostgreSQL monitoring tool that will run reports based on the output error logs.
  • C. Modify the logging database parameter to log all the queries related to locking in the database and then check the logs after the next slow period for this information.
  • D. Enable Amazon RDS Performance Insights on the PostgreSQL database. Use the metrics to identify any queries that are related to spikes in the graph during the next slow period.
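Option D is the most efficient path: Performance Insights ranks top SQL by database load without any extra infrastructure. A hedged sketch of enabling it; the instance identifier and retention period are placeholders:

```shell
# Enable Performance Insights on the Aurora PostgreSQL instance.
aws rds modify-db-instance \
  --db-instance-identifier aurora-pg-instance-1 \
  --enable-performance-insights \
  --performance-insights-retention-period 7
# During the next 3:00 PM slowdown, the Performance Insights dashboard
# shows top SQL by database load, pointing directly at the slow query.
```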
#176 (Accuracy: 100% / 3 votes)
A company conducted a security audit of its AWS infrastructure. The audit identified that data was not encrypted in transit between application servers and a MySQL database that is hosted in Amazon RDS.
After the audit, the company updated the application to use an encrypted connection. To prevent this problem from occurring again, the company's database team needs to configure the database to require in-transit encryption for all connections.
Which solution will meet this requirement?
  • A. Update the parameter group in use by the DB instance, and set the require_secure_transport parameter to ON.
  • B. Connect to the database, and use ALTER USER to enable the REQUIRE SSL option on the database user.
  • C. Update the security group in use by the DB instance, and remove port 80 to prevent unencrypted connections from being established.
  • D. Update the DB instance, and enable the Require Transport Layer Security option.
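Option A enforces encryption for every connection at the engine level, not per user. A hedged sketch; the parameter group name is a placeholder for the custom group the DB instance already uses (the default group cannot be modified):

```shell
# require_secure_transport is a dynamic MySQL parameter, so it applies
# without a reboot; all plaintext connections are then rejected.
aws rds modify-db-parameter-group \
  --db-parameter-group-name mysql-custom \
  --parameters "ParameterName=require_secure_transport,ParameterValue=ON,ApplyMethod=immediate"
```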
#177 (Accuracy: 100% / 4 votes)
A vehicle insurance company needs to choose a highly available database to track vehicle owners and their insurance details. The persisted data should be immutable in the database, including the complete and sequenced history of changes over time with all the owners and insurance transfer details for a vehicle.
The data should be easily verifiable for the data lineage of an insurance claim.

Which approach meets these requirements with MINIMAL effort?
  • A. Create a blockchain to store the insurance details. Validate the data using a hash function to verify the data lineage of an insurance claim.
  • B. Create an Amazon DynamoDB table to store the insurance details. Validate the data using AWS DMS validation by moving the data to Amazon S3 to verify the data lineage of an insurance claim.
  • C. Create an Amazon QLDB ledger to store the insurance details. Validate the data by choosing the ledger name in the digest request to verify the data lineage of an insurance claim.
  • D. Create an Amazon Aurora database to store the insurance details. Validate the data using AWS DMS validation by moving the data to Amazon S3 to verify the data lineage of an insurance claim.
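Option C is the minimal-effort fit: Amazon QLDB is a managed, immutable ledger with built-in cryptographic verification, so no blockchain network has to be built. A hedged sketch; the ledger name is a placeholder:

```shell
# Request a digest for the ledger; the digest covers the full, sequenced
# history of revisions.
aws qldb get-digest --name vehicle-ledger
# The returned digest and tip address can then be used with
# `aws qldb get-revision` to cryptographically verify a document's lineage.
```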
#178 (Accuracy: 100% / 4 votes)
A company is using an Amazon RDS for MySQL DB instance for its internal applications. A security audit shows that the DB instance is not encrypted at rest. The company's application team needs to encrypt the DB instance.
What should the team do to meet this requirement?
  • A. Stop the DB instance and modify it to enable encryption. Apply this setting immediately without waiting for the next scheduled RDS maintenance window.
  • B. Stop the DB instance and create an encrypted snapshot. Restore the encrypted snapshot to a new encrypted DB instance. Delete the original DB instance, and update the applications to point to the new encrypted DB instance.
  • C. Stop the DB instance and create a snapshot. Copy the snapshot into another encrypted snapshot. Restore the encrypted snapshot to a new encrypted DB instance. Delete the original DB instance, and update the applications to point to the new encrypted DB instance.
  • D. Create an encrypted read replica of the DB instance. Promote the read replica to master. Delete the original DB instance, and update the applications to point to the new encrypted DB instance.
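Option C reflects the RDS constraint that an existing unencrypted instance cannot be encrypted in place, and an unencrypted snapshot can only gain encryption when copied. A hedged sketch; instance and snapshot identifiers and the KMS key are placeholders:

```shell
# 1. Snapshot the unencrypted instance.
aws rds create-db-snapshot \
  --db-instance-identifier legacy-mysql \
  --db-snapshot-identifier legacy-mysql-unencrypted

# 2. Copy the snapshot with a KMS key to produce an encrypted snapshot.
aws rds copy-db-snapshot \
  --source-db-snapshot-identifier legacy-mysql-unencrypted \
  --target-db-snapshot-identifier legacy-mysql-encrypted \
  --kms-key-id alias/aws/rds

# 3. Restore the encrypted snapshot to a new, encrypted instance.
aws rds restore-db-instance-from-db-snapshot \
  --db-instance-identifier legacy-mysql-v2 \
  --db-snapshot-identifier legacy-mysql-encrypted
# Finally, repoint the applications and delete the original instance.
```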
#179 (Accuracy: 100% / 3 votes)
A company migrated one of its business-critical database workloads to an Amazon Aurora Multi-AZ DB cluster. The company requires a very low RTO and needs to improve the application recovery time after database failovers.
Which approach meets these requirements?
  • A. Set the max_connections parameter to 16,000 in the instance-level parameter group.
  • B. Modify the client connection timeout to 300 seconds.
  • C. Create an Amazon RDS Proxy database proxy and update client connections to point to the proxy endpoint.
  • D. Enable the query cache at the instance level.
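Option C is correct because RDS Proxy pools connections and, during a failover, routes them to the new writer instead of forcing clients to re-resolve DNS, which shortens recovery time. A hedged sketch; the ARNs, subnets, cluster identifier, and engine family are placeholders (use MYSQL or POSTGRESQL to match the cluster):

```shell
# Create a proxy in front of the Aurora cluster.
aws rds create-db-proxy \
  --db-proxy-name app-proxy \
  --engine-family MYSQL \
  --auth '[{"AuthScheme": "SECRETS", "SecretArn": "arn:aws:secretsmanager:us-east-1:123456789012:secret:db-creds"}]' \
  --role-arn arn:aws:iam::123456789012:role/ProxySecretsRole \
  --vpc-subnet-ids subnet-aaaa1111 subnet-bbbb2222

# Register the Aurora cluster as the proxy target.
aws rds register-db-proxy-targets \
  --db-proxy-name app-proxy \
  --db-cluster-identifiers my-aurora-cluster
# Clients connect to the proxy endpoint; during failover the proxy holds
# their connections and reconnects them to the promoted writer.
```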
#180 (Accuracy: 100% / 3 votes)
A company is building a web application on AWS. The application requires the database to support read and write operations in multiple AWS Regions simultaneously. The database also needs to propagate data changes between Regions as the changes occur. The application must be highly available and must provide latency of single-digit milliseconds.
Which solution meets these requirements?
  • A. Amazon DynamoDB global tables
  • B. Amazon DynamoDB streams with AWS Lambda to replicate the data
  • C. An Amazon ElastiCache for Redis cluster with cluster mode enabled and multiple shards
  • D. An Amazon Aurora global database