Amazon AWS Certified Database - Specialty
#21 (Accuracy: 100% / 6 votes)
A Database Specialist is troubleshooting an application connection failure on an Amazon Aurora DB cluster with multiple Aurora Replicas that had been running with no issues for the past 2 months. The connection failure lasted for 5 minutes and corrected itself after that. The Database Specialist reviewed the Amazon RDS events and determined a failover event occurred at that time.
The failover process took around 15 seconds to complete.
What is the MOST likely cause of the 5-minute connection outage?
  • A. After a database crash, Aurora needed to replay the redo log from the last database checkpoint
  • B. The client-side application is caching the DNS data and its TTL is set too high
  • C. After failover, the Aurora DB cluster needs time to warm up before accepting client connections
  • D. There were no active Aurora Replicas in the Aurora DB cluster
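
For context on option B: an Aurora failover repoints the cluster endpoint's DNS CNAME at the promoted replica, so the ~15-second server-side failover can still look like a multi-minute outage to a client stack that caches the old address with a long TTL. A minimal Python sketch (hypothetical endpoint name) that watches the record change during a failover:

```python
import socket
import time

CLUSTER_ENDPOINT = "mydb.cluster-abc123.us-east-1.rds.amazonaws.com"  # hypothetical

# Poll the cluster endpoint. During an Aurora failover the CNAME is
# repointed at the promoted replica, so the resolved address changes.
# A client that caches this record with a high TTL keeps connecting to
# the old writer until its cache expires, extending the apparent outage.
last = None
while True:
    addr = socket.getaddrinfo(CLUSTER_ENDPOINT, 3306)[0][4][0]
    if addr != last:
        print(f"{time.strftime('%H:%M:%S')} endpoint now resolves to {addr}")
        last = addr
    time.sleep(5)
```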
#22 (Accuracy: 100% / 1 vote)
A manufacturing company stores its inventory details in an Amazon DynamoDB table in the us-east-2 Region. According to new compliance and regulatory policies, the company is required to back up all of its tables nightly and store these backups in the us-west-2 Region for disaster recovery for 1 year.
Which solution meets these requirements?
  • A. Convert the existing DynamoDB table into a global table and create a global table replica in the us-west-2 Region.
  • B. Use AWS Backup to create a backup plan. Configure cross-Region replication in the plan and assign the DynamoDB table to this plan.
  • C. Create an on-demand backup of the DynamoDB table and restore this backup in the us-west-2 Region.
  • D. Enable Amazon S3 Cross-Region Replication (CRR) on the S3 bucket where DynamoDB on-demand backups are stored.
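
A minimal boto3 sketch of the AWS Backup approach in option B, using hypothetical vault names, ARNs, and account IDs: a nightly rule backs up the table, copies each recovery point to a vault in us-west-2, and retains both copies for one year.

```python
import boto3

backup = boto3.client("backup", region_name="us-east-2")

# Nightly backup rule with a cross-Region copy action and 1-year retention.
plan = backup.create_backup_plan(BackupPlan={
    "BackupPlanName": "inventory-nightly",
    "Rules": [{
        "RuleName": "nightly-to-us-west-2",
        "TargetBackupVaultName": "inventory-vault",
        "ScheduleExpression": "cron(0 5 * * ? *)",   # nightly
        "Lifecycle": {"DeleteAfterDays": 365},       # keep 1 year
        "CopyActions": [{
            "DestinationBackupVaultArn":
                "arn:aws:backup:us-west-2:123456789012:backup-vault:dr-vault",
            "Lifecycle": {"DeleteAfterDays": 365},
        }],
    }],
})

# Assign the DynamoDB table to the plan.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "inventory-table",
        "IamRoleArn": "arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole",
        "Resources": ["arn:aws:dynamodb:us-east-2:123456789012:table/Inventory"],
    },
)
```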
#23 (Accuracy: 100% / 5 votes)
A company stores critical data for a department in Amazon RDS for MySQL DB instances. The department was closed for 3 weeks and notified a database specialist that access to the RDS DB instances should not be granted to anyone during this time. To meet this requirement, the database specialist stopped all the DB instances used by the department but did not select the option to create a snapshot.
Before the 3 weeks expired, the database specialist discovered that users could connect to the database successfully.
What could be the reason for this?
  • A. When stopping the DB instance, the option to create a snapshot should have been selected.
  • B. When stopping the DB instance, the duration for stopping the DB instance should have been selected.
  • C. Stopped DB instances will automatically restart if the number of attempted connections exceeds the threshold set.
  • D. Stopped DB instances will automatically restart if the instance is not manually started after 7 days.
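
On option D's behavior: RDS automatically starts a stopped instance again after seven days so it does not miss required maintenance. A minimal sketch of the usual workaround, a scheduled job (for example, a daily EventBridge-triggered Lambda) that re-stops the instances; the `department` tag is hypothetical.

```python
import boto3

rds = boto3.client("rds")

def handler(event, context):
    """Re-stop department DB instances that AWS restarted after 7 days."""
    for db in rds.describe_db_instances()["DBInstances"]:
        tags = {t["Key"]: t["Value"] for t in db.get("TagList", [])}
        if tags.get("department") == "closed" and db["DBInstanceStatus"] == "available":
            rds.stop_db_instance(DBInstanceIdentifier=db["DBInstanceIdentifier"])
```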
#24 (Accuracy: 100% / 2 votes)
A company's application team needs to select an AWS managed database service to store application and user data. The application team is familiar with MySQL but is open to new solutions. The application and user data is stored in 10 tables and is denormalized. The application will access this data through an API layer using a unique ID in each table. The company expects the traffic to be light at first, but the traffic will increase to thousands of transactions each second within the first year. The database service must support active reads and writes in multiple AWS Regions at the same time. Query response times need to be less than 100 ms.

Which AWS database solution will meet these requirements?
  • A. Deploy an Amazon RDS for MySQL environment in each Region and leverage AWS Database Migration Service (AWS DMS) to set up a multi-Region bidirectional replication.
  • B. Deploy an Amazon Aurora MySQL global database with write forwarding turned on.
  • C. Deploy an Amazon DynamoDB database with global tables.
  • D. Deploy an Amazon DocumentDB global cluster across multiple Regions.
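
A minimal boto3 sketch of option C, with a hypothetical table and Region pair: create the table, then add a replica, which converts it to a global table (version 2019.11.21) with active-active reads and writes in both Regions.

```python
import boto3

ddb = boto3.client("dynamodb", region_name="us-east-1")

# Key-value access by a unique ID per table fits DynamoDB's model.
ddb.create_table(
    TableName="UserData",                                   # hypothetical
    AttributeDefinitions=[{"AttributeName": "id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",   # absorbs growth to thousands of TPS
    # Streams are required for global table replication.
    StreamSpecification={"StreamEnabled": True, "StreamViewType": "NEW_AND_OLD_IMAGES"},
)
ddb.get_waiter("table_exists").wait(TableName="UserData")

# Adding a replica turns the table into a global table, giving
# active-active, low-latency reads and writes in both Regions.
ddb.update_table(
    TableName="UserData",
    ReplicaUpdates=[{"Create": {"RegionName": "eu-west-1"}}],
)
```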
#25 (Accuracy: 100% / 2 votes)
A Database Specialist is creating Amazon DynamoDB tables, Amazon CloudWatch alarms, and associated infrastructure for an Application team using a development AWS account. The team wants a deployment method that will standardize the core solution components while managing environment-specific settings separately, and wants to minimize rework due to configuration errors.
Which process should the Database Specialist recommend to meet these requirements?
  • A. Organize common and environment-specific parameters hierarchically in the AWS Systems Manager Parameter Store, then reference the parameters dynamically from an AWS CloudFormation template. Deploy the CloudFormation stack using the environment name as a parameter.
  • B. Create a parameterized AWS CloudFormation template that builds the required objects. Keep separate environment parameter files in separate Amazon S3 buckets. Provide an AWS CLI command that deploys the CloudFormation stack directly referencing the appropriate parameter bucket.
  • C. Create a parameterized AWS CloudFormation template that builds the required objects. Import the template into the CloudFormation interface in the AWS Management Console. Make the required changes to the parameters and deploy the CloudFormation stack.
  • D. Create an AWS Lambda function that builds the required objects using an AWS SDK. Set the required parameter values in a test event in the Lambda console for each environment that the Application team can modify, as needed. Deploy the infrastructure by triggering the test event in the console.
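
A minimal sketch of option A, assuming a hypothetical /dbteam/&lt;environment&gt;/... parameter hierarchy: the template resolves environment-specific values from Parameter Store at deploy time, so only the environment name changes between deployments.

```python
import boto3

# Trimmed template: the table's read capacity is resolved from a
# hierarchical Parameter Store path at deploy time, keyed by environment.
# The path /dbteam/<env>/read-capacity is hypothetical.
TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Parameters:
  EnvironmentName:
    Type: String
Resources:
  ScoresTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: !Sub 'scores-${EnvironmentName}'
      AttributeDefinitions:
        - AttributeName: id
          AttributeType: S
      KeySchema:
        - AttributeName: id
          KeyType: HASH
      ProvisionedThroughput:
        ReadCapacityUnits: !Sub '{{resolve:ssm:/dbteam/${EnvironmentName}/read-capacity}}'
        WriteCapacityUnits: 5
"""

cfn = boto3.client("cloudformation")
cfn.create_stack(
    StackName="dbteam-dev",
    TemplateBody=TEMPLATE,
    Parameters=[{"ParameterKey": "EnvironmentName", "ParameterValue": "dev"}],
)
```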
#26 (Accuracy: 100% / 2 votes)
A Database Specialist is migrating an on-premises Microsoft SQL Server application database to Amazon RDS for PostgreSQL using AWS DMS. The application requires minimal downtime when the RDS DB instance goes live.
What change should the Database Specialist make to enable the migration?
  • A. Configure the on-premises application database to act as a source for an AWS DMS full load with ongoing change data capture (CDC)
  • B. Configure the AWS DMS replication instance to allow both full load and ongoing change data capture (CDC)
  • C. Configure the AWS DMS task to generate full logs to allow for ongoing change data capture (CDC)
  • D. Configure the AWS DMS connections to allow two-way communication to allow for ongoing change data capture (CDC)
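
A minimal boto3 sketch of the task side of option A, with hypothetical ARNs. The on-premises SQL Server must be configured so DMS can capture changes from its transaction log; the task then runs a full load followed by ongoing CDC until cutover, keeping downtime minimal.

```python
import boto3

dms = boto3.client("dms")

# MigrationType='full-load-and-cdc' bulk-loads existing data, then streams
# ongoing changes captured from the source until the application cuts over.
dms.create_replication_task(
    ReplicationTaskIdentifier="sqlserver-to-postgres",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SRC",
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TGT",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:RI",
    MigrationType="full-load-and-cdc",
    TableMappings='{"rules":[{"rule-type":"selection","rule-id":"1","rule-name":"1",'
                  '"object-locator":{"schema-name":"%","table-name":"%"},'
                  '"rule-action":"include"}]}',
)
```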
#27 (Accuracy: 100% / 2 votes)
A company needs a data warehouse solution that keeps data in a consistent, highly structured format. The company requires fast responses for end-user queries when looking at data from the current year, and users must have access to the full 15-year dataset when needed. This solution also needs to handle a fluctuating number of incoming queries. Storage costs for the 100 TB of data must be kept low.
Which solution meets these requirements?
  • A. Leverage an Amazon Redshift data warehouse solution using a dense storage instance type while keeping all the data on local Amazon Redshift storage. Provision enough instances to support high demand.
  • B. Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Provision enough instances to support high demand.
  • C. Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Enable Amazon Redshift Concurrency Scaling.
  • D. Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Leverage Amazon Redshift elastic resize.
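
A minimal sketch of option C with hypothetical names: current-year data stays on local cluster storage, historical years are queried in place on S3 through a Spectrum external schema, and Concurrency Scaling absorbs query bursts. The WLM JSON shown is illustrative only; the exact value depends on the existing queue configuration.

```python
import boto3

rsd = boto3.client("redshift-data")

# Historical years live cheaply in S3 and are queried in place via Spectrum.
rsd.execute_statement(
    ClusterIdentifier="warehouse",        # hypothetical cluster
    Database="analytics",
    DbUser="admin",
    Sql="""
        CREATE EXTERNAL SCHEMA spectrum_hist
        FROM DATA CATALOG DATABASE 'historical'
        IAM_ROLE 'arn:aws:iam::123456789012:role/SpectrumRole'
        CREATE EXTERNAL DATABASE IF NOT EXISTS;
    """,
)

# Concurrency Scaling is enabled per WLM queue; a minimal single-queue example.
redshift = boto3.client("redshift")
redshift.modify_cluster_parameter_group(
    ParameterGroupName="warehouse-params",
    Parameters=[{
        "ParameterName": "wlm_json_configuration",
        "ParameterValue": '[{"query_concurrency":5,"concurrency_scaling":"auto"}]',
    }],
)
```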
#28 (Accuracy: 100% / 2 votes)
A city’s weather forecast team is using Amazon DynamoDB in the data tier for an application. The application has several components. The analysis component of the application requires repeated reads against a large dataset. The application has started to temporarily consume all the read capacity in the DynamoDB table and is negatively affecting other applications that need to access the same data.

Which solution will resolve this issue with the LEAST development effort?
  • A. Use DynamoDB Accelerator (DAX)
  • B. Use Amazon CloudFront in front of DynamoDB
  • C. Create a DynamoDB table with a local secondary index (LSI)
  • D. Use Amazon ElastiCache in front of DynamoDB
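
A minimal sketch of option A, assuming the `amazondax` Python client and a hypothetical cluster endpoint and table. Because DAX is API-compatible with DynamoDB, swapping the client is essentially the only code change, which is why it is the least-effort fix: repeated reads are then served from the in-memory cache instead of consuming table read capacity.

```python
from amazondax import AmazonDaxClient  # assumption: the amazondax PyPI package

# The DAX client exposes the same interface as a boto3 DynamoDB resource.
dax = AmazonDaxClient.resource(
    endpoint_url="daxs://weather-dax.abc123.dax-clusters.us-east-1.amazonaws.com",
)
table = dax.Table("WeatherData")                    # hypothetical table
item = table.get_item(Key={"station_id": "KSEA"})   # served from cache on repeat reads
```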
#29 (Accuracy: 100% / 4 votes)
A company's application development team wants to share an automated snapshot of its Amazon RDS database with another team. The database is encrypted with a custom AWS Key Management Service (AWS KMS) key under the "WeShare" AWS account. The application development team needs to share the DB snapshot under the "WeReceive" AWS account.
Which combination of actions must the application development team take to meet these requirements? (Choose two.)
  • A. Add access from the "WeReceive" account to the custom AWS KMS key policy of the sharing team.
  • B. Make a copy of the DB snapshot, and set the encryption option to disable.
  • C. Share the DB snapshot by setting the DB snapshot visibility option to public.
  • D. Make a copy of the DB snapshot, and set the encryption option to enable.
  • E. Share the DB snapshot by using the default AWS KMS encryption key.
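
A minimal boto3 sketch of the RDS side, with hypothetical identifiers and account IDs. An automated snapshot cannot be shared directly, so it is first copied to a manual snapshot, still encrypted with the custom KMS key (whose key policy must also grant the "WeReceive" account access, per option A); the copy is then shared.

```python
import boto3

rds = boto3.client("rds")

# Copy the automated snapshot to a shareable manual snapshot, keeping it
# encrypted under the custom KMS key.
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="rds:weshare-db-2024-05-01-06-10",  # hypothetical
    TargetDBSnapshotIdentifier="weshare-db-manual-copy",
    KmsKeyId="alias/weshare-custom-key",                           # hypothetical key
)

# Grant the "WeReceive" account permission to restore from the copy.
rds.modify_db_snapshot_attribute(
    DBSnapshotIdentifier="weshare-db-manual-copy",
    AttributeName="restore",
    ValuesToAdd=["222222222222"],   # hypothetical WeReceive account ID
)
```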
#30 (Accuracy: 100% / 2 votes)
An online gaming company is using an Amazon DynamoDB table in on-demand mode to store game scores. After an intensive advertisement campaign in South America, the average number of concurrent users rapidly increases from 100,000 to 500,000 in less than 10 minutes every day around 5 PM.

The on-call software reliability engineer has observed that the application logs contain a high number of DynamoDB throttling exceptions caused by game score insertions around 5 PM.
Customer service has also reported that several users are complaining about their scores not being registered.
How should the database administrator remediate this issue at the lowest cost?
  • A. Enable auto scaling and set the target usage rate to 90%.
  • B. Switch the table to provisioned mode and enable auto scaling.
  • C. Switch the table to provisioned mode and set the throughput to the peak value.
  • D. Create a DynamoDB Accelerator cluster and use it to access the DynamoDB table.
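
A minimal boto3 sketch of option B, with a hypothetical table name and illustrative capacity numbers: switch the table to provisioned mode, then let Application Auto Scaling track write utilization up to a ceiling sized for the 5 PM peak.

```python
import boto3

ddb = boto3.client("dynamodb")
aas = boto3.client("application-autoscaling")

# Switch the table from on-demand to provisioned capacity.
ddb.update_table(
    TableName="GameScores",                     # hypothetical
    BillingMode="PROVISIONED",
    ProvisionedThroughput={"ReadCapacityUnits": 1000, "WriteCapacityUnits": 2000},
)

# Register the write dimension as a scalable target with a peak-sized ceiling.
aas.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/GameScores",
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    MinCapacity=2000,
    MaxCapacity=20000,
)

# Target tracking keeps consumed writes near 70% of provisioned capacity;
# a 90% target (option A) would leave little headroom for the daily spike.
aas.put_scaling_policy(
    PolicyName="GameScoresWriteTracking",
    ServiceNamespace="dynamodb",
    ResourceId="table/GameScores",
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBWriteCapacityUtilization"
        },
    },
)
```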