Amazon AWS Certified Database - Specialty
#181 (Accuracy: 100% / 4 votes)
An online advertising website uses an Amazon DynamoDB table with on-demand capacity mode as its data store. The website also has a DynamoDB Accelerator
(DAX) cluster in the same VPC as its web application server.
The application needs to perform infrequent writes and many strongly consistent reads from the data store by querying the DAX cluster.
During a performance audit, a systems administrator notices that the application can look up items by using the DAX cluster.
However, in Amazon CloudWatch, the QueryCacheHits metric for the DAX cluster consistently shows 0 while the QueryCacheMisses metric keeps growing.
What is the MOST likely reason for this occurrence?
  • A. A VPC endpoint was not added to access DynamoDB.
  • B. Strongly consistent reads are always passed through DAX to DynamoDB.
  • C. DynamoDB is scaling due to a burst in traffic, resulting in degraded performance.
  • D. A VPC endpoint was not added to access CloudWatch.
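The metric pattern in this question follows from DAX's documented pass-through rule: strongly consistent reads are served directly from DynamoDB and never populate or hit the DAX caches, so every such read counts as a miss. A minimal toy model of that rule (illustration only — this is not the AWS SDK, and the class and field names are hypothetical):

```python
# Toy model of DAX read routing (not the AWS SDK; names are illustrative).
# Simulates the documented rule: strongly consistent reads are passed
# through DAX to DynamoDB and are never served from the query cache.

class ToyDaxCluster:
    def __init__(self, backing_table):
        self.backing_table = backing_table   # dict standing in for DynamoDB
        self.query_cache = {}
        self.query_cache_hits = 0
        self.query_cache_misses = 0

    def query(self, key, consistent_read=False):
        if consistent_read:
            # Pass-through: strongly consistent reads never touch the cache.
            self.query_cache_misses += 1
            return self.backing_table[key]
        if key in self.query_cache:
            self.query_cache_hits += 1
            return self.query_cache[key]
        self.query_cache_misses += 1
        self.query_cache[key] = self.backing_table[key]
        return self.query_cache[key]

dax = ToyDaxCluster({"ad-123": {"impressions": 42}})
for _ in range(5):
    dax.query("ad-123", consistent_read=True)

print(dax.query_cache_hits, dax.query_cache_misses)  # 0 5
```

With only strongly consistent reads, QueryCacheHits stays at 0 while QueryCacheMisses grows with every request — exactly the CloudWatch pattern described in the question.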
#182 (Accuracy: 100% / 3 votes)
A global company is developing an application across multiple AWS Regions. The company needs a database solution with low latency in each Region and automatic disaster recovery. The database must be deployed in an active-active configuration with automatic data synchronization between Regions.
Which solution will meet these requirements with the LOWEST latency?
  • A. Amazon RDS with cross-Region read replicas
  • B. Amazon DynamoDB global tables
  • C. Amazon Aurora global database
  • D. Amazon Athena and Amazon S3 with S3 Cross Region Replication
#183 (Accuracy: 100% / 5 votes)
A company wants to improve its ecommerce website on AWS. A database specialist decides to add Amazon ElastiCache for Redis in the implementation stack to ease the workload off the database and shorten the website response times. The database specialist must also ensure the ecommerce website is highly available within the company's AWS Region.
How should the database specialist deploy ElastiCache to meet this requirement?
  • A. Launch an ElastiCache for Redis cluster using the AWS CLI with the --cluster-enabled switch.
  • B. Launch an ElastiCache for Redis cluster and select read replicas in different Availability Zones.
  • C. Launch two ElastiCache for Redis clusters in two different Availability Zones. Configure Redis streams to replicate the cache from the primary cluster to the other cluster.
  • D. Launch an ElastiCache cluster in the primary Availability Zone and restore the cluster's snapshot to a different Availability Zone during disaster recovery.
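The high-availability idea behind placing read replicas in different Availability Zones can be sketched with a toy model, assuming the documented behavior that an ElastiCache for Redis replication group with automatic failover promotes a cross-AZ replica when the primary fails (illustration only — not the AWS API; all names are hypothetical):

```python
# Toy model of a Redis replication group with Multi-AZ automatic
# failover (not the ElastiCache API; names are illustrative).

class ToyReplicationGroup:
    def __init__(self, primary_az, replica_azs):
        self.primary_az = primary_az
        self.replicas = list(replica_azs)

    def fail_primary(self):
        # Automatic failover: promote a replica in a *different* AZ.
        candidates = [az for az in self.replicas if az != self.primary_az]
        if not candidates:
            raise RuntimeError("no cross-AZ replica: not highly available")
        self.primary_az = candidates[0]
        self.replicas.remove(candidates[0])

group = ToyReplicationGroup("us-east-1a", ["us-east-1b", "us-east-1c"])
group.fail_primary()
print(group.primary_az)  # us-east-1b
```

If all replicas lived in the primary's AZ, an AZ outage would leave nothing to promote — which is why the replicas must be spread across Availability Zones.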
#184 (Accuracy: 93% / 5 votes)
A banking company recently launched an Amazon RDS for MySQL DB instance as part of a proof-of-concept project. A database specialist has configured automated database snapshots. As a part of routine testing, the database specialist noticed one day that the automated database snapshot was not created.
Which of the following are possible reasons why the snapshot was not created? (Choose two.)
  • A. A copy of the RDS automated snapshot for this DB instance is in progress within the same AWS Region.
  • B. A copy of the RDS automated snapshot for this DB instance is in progress in a different AWS Region.
  • C. The RDS maintenance window is not configured.
  • D. The RDS DB instance is in the STORAGE_FULL state.
  • E. RDS event notifications have not been enabled.
#185 (Accuracy: 100% / 3 votes)
An advertising company is developing a backend for a bidding platform. The company needs a cost-effective datastore solution that will accommodate a sudden increase in the volume of write transactions. The database also needs to make data changes available in a near real-time data stream.

Which solution will meet these requirements?
  • A. Amazon Aurora MySQL Multi-AZ DB cluster
  • B. Amazon Keyspaces (for Apache Cassandra)
  • C. Amazon DynamoDB table with DynamoDB auto scaling
  • D. Amazon DocumentDB (with MongoDB compatibility) cluster with a replica instance in a second Availability Zone
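The "near real-time data stream" requirement maps to a change-capture pattern in the spirit of DynamoDB Streams: each write to the table appends an ordered change record that a downstream consumer can read. A toy sketch of that pattern (illustration only — real code would use boto3 and a stream consumer such as AWS Lambda; the names here are hypothetical):

```python
# Toy sketch of a table that emits change records on every write,
# in the spirit of DynamoDB Streams (not the AWS API).

class ToyTableWithStream:
    def __init__(self):
        self.items = {}
        self.stream = []  # ordered change records, like a stream shard

    def put_item(self, key, value):
        old = self.items.get(key)
        self.items[key] = value
        self.stream.append({
            "eventName": "INSERT" if old is None else "MODIFY",
            "Keys": key,
            "NewImage": value,
            "OldImage": old,
        })

table = ToyTableWithStream()
table.put_item("bid-1", {"amount": 10})
table.put_item("bid-1", {"amount": 12})
print([r["eventName"] for r in table.stream])  # ['INSERT', 'MODIFY']
```

Each record carries the old and new item images, which is what lets a consumer react to data changes in near real time without polling the table.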
#186 (Accuracy: 100% / 5 votes)
A company has a production environment running on Amazon RDS for SQL Server with an in-house web application as the front end. During the last application maintenance window, new functionality was added to the web application to enhance the reporting capabilities for management. Since the update, the application is slow to respond to some reporting queries.
How should the company identify the source of the problem?
  • A. Install and configure Amazon CloudWatch Application Insights for Microsoft .NET and Microsoft SQL Server. Use a CloudWatch dashboard to identify the root cause.
  • B. Enable RDS Performance Insights and determine which query is creating the problem. Request changes to the query to address the problem.
  • C. Use AWS X-Ray deployed with Amazon RDS to track query system traces.
  • D. Create a support request and work with AWS Support to identify the source of the issue.
#187 (Accuracy: 100% / 3 votes)
A database specialist is designing the database for a software-as-a-service (SaaS) version of an employee information application. In the current architecture, the change history of employee records is stored in a single table in an Amazon RDS for Oracle database. Triggers on the employee table populate the history table with historical records.

This architecture has two major challenges.
First, there is no way to guarantee that the records have not been changed in the history table. Second, queries on the history table are slow because of the large size of the table and the need to run the queries against a large subset of data in the table.

The database specialist must design a solution that prevents modification of the historical records.
The solution also must maximize the speed of the queries.

Which solution will meet these requirements?
  • A. Migrate the current solution to an Amazon DynamoDB table. Use DynamoDB Streams to keep track of changes. Use DynamoDB Accelerator (DAX) to improve query performance.
  • B. Write employee record history to Amazon Quantum Ledger Database (Amazon QLDB) for historical records and to an Amazon OpenSearch Service (Amazon Elasticsearch Service) domain for queries.
  • C. Use Amazon Aurora PostgreSQL to store employee record history in a single table. Use Aurora Auto Scaling to provision more capacity.
  • D. Build a solution that uses an Amazon Redshift cluster for historical records. Query the Redshift cluster directly as needed.
#188 (Accuracy: 100% / 3 votes)
A company wants to move its on-premises Oracle database to an Amazon Aurora PostgreSQL DB cluster. The source database includes 500 GB of data, 900 stored procedures and functions, and application source code with embedded SQL statements. The company understands that some database code objects and custom features may not be converted automatically and may need manual intervention. Management would like to complete this migration as fast as possible with minimal downtime.

Which tools and approach should be used to meet these requirements?
  • A. Use AWS DMS to perform data migration and to automatically create all schemas with Aurora PostgreSQL
  • B. Use AWS DMS to perform data migration and use the AWS Schema Conversion Tool (AWS SCT) to automatically generate the converted code
  • C. Use the AWS Schema Conversion Tool (AWS SCT) to automatically convert all types of Oracle schemas to PostgreSQL and migrate the data to Aurora
  • D. Use the dump and pg_dump utilities for both data migration and schema conversion
#189 (Accuracy: 100% / 3 votes)
A large IT hardware manufacturing company wants to deploy a MySQL database solution in the AWS Cloud. The solution should quickly create copies of the company's production databases for test purposes. The solution must deploy the test databases in minutes, and the test data should match the latest production data as closely as possible. Developers must also be able to make changes in the test database and delete the instances afterward.
Which solution meets these requirements?
  • A. Leverage Amazon RDS for MySQL with write-enabled replicas running on Amazon EC2. Create the test copies from a mysqldump backup of the RDS for MySQL DB instances and import them into the new EC2 instances.
  • B. Leverage Amazon Aurora MySQL. Use database cloning to create multiple test copies of the production DB clusters.
  • C. Leverage Amazon Aurora MySQL. Restore previous production DB instance snapshots into new test copies of Aurora MySQL DB clusters to allow them to make changes.
  • D. Leverage Amazon RDS for MySQL. Use database cloning to create multiple developer copies of the production DB instance.
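The reason Aurora database cloning deploys test copies in minutes is its copy-on-write storage protocol: a clone initially shares the source cluster's data pages and copies a page only when either side modifies it. A toy model of copy-on-write (illustration only — not the Aurora storage engine; the class and page names are hypothetical):

```python
# Toy copy-on-write clone, illustrating why Aurora database cloning is
# fast and cheap (not the Aurora storage engine; names are illustrative).

class ToyVolume:
    def __init__(self, pages):
        self.pages = pages

class ToyClone:
    def __init__(self, source):
        self.source = source
        self.local_pages = {}   # only modified pages are copied here

    def read(self, page_id):
        # Unmodified pages are still served from the shared source volume.
        return self.local_pages.get(page_id, self.source.pages[page_id])

    def write(self, page_id, data):
        self.local_pages[page_id] = data  # copy-on-write: source untouched

prod = ToyVolume({"p1": "orders", "p2": "customers"})
clone = ToyClone(prod)
clone.write("p1", "orders-test-data")
print(clone.read("p1"), prod.pages["p1"])  # orders-test-data orders
```

Because creating the clone copies no data up front, developers get a writable copy of the latest production data almost immediately, can change it freely, and can delete it without affecting the source cluster.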
#190 (Accuracy: 100% / 3 votes)
A company plans to migrate a MySQL-based application from an on-premises environment to AWS. The application performs database joins across several tables and uses indexes for faster query response times. The company needs the database to be highly available with automatic failover.
Which solution on AWS will meet these requirements with the LEAST operational overhead?
  • A. Deploy an Amazon RDS DB instance with a read replica.
  • B. Deploy an Amazon RDS Multi-AZ DB instance.
  • C. Deploy Amazon DynamoDB global tables.
  • D. Deploy multiple Amazon RDS DB instances. Use Amazon Route 53 DNS with failover health checks configured.