Amazon AWS Certified Database - Specialty

There are 231 results

#1 (Accuracy: 100% / 2 votes)
A database specialist needs to reduce the cost of an application's database. The database is running on a Multi-AZ deployment of an Amazon RDS for Microsoft SQL Server DB instance. The application requires the database to support stored procedures, SQL Server Wire Protocol (TDS), and T-SQL. The database must also be highly available. The database specialist is using AWS Database Migration Service (AWS DMS) to migrate the database to a new data store.

Which solution will reduce the cost of the database with the LEAST effort?
  • A. Use AWS Database Migration Service (DMS) to migrate to an RDS for MySQL Multi-AZ database. Update the application code to use the features of MySQL that correspond to SQL Server. Update the application to use the MySQL port.
  • B. Use AWS Database Migration Service (DMS) to migrate to an RDS for PostgreSQL Multi-AZ database. Turn on the SQL_COMPAT optional extension within the database to allow the required features. Update the application to use the PostgreSQL port.
  • C. Use AWS Database Migration Service (DMS) to migrate to an RDS for SQL Server Single-AZ database. Update the application to use the new database endpoint.
  • D. Use AWS Database Migration Service (DMS) to migrate the database to Amazon Aurora PostgreSQL. Turn on Babelfish for Aurora PostgreSQL. Update the application to use the Babelfish TDS port.
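For context, Babelfish for Aurora PostgreSQL exposes a TDS listener (default port 1433) alongside the normal PostgreSQL port, so an existing SQL Server client only needs its endpoint repointed. A minimal sketch, with hypothetical host names:

```python
# Sketch: repointing a TDS client at Babelfish for Aurora PostgreSQL.
# Host names below are hypothetical; the TDS listener defaults to port 1433,
# so an existing SQL Server driver keeps using the same protocol and port.
sql_server_conn = {
    "host": "legacy-sqlserver.example.internal",  # hypothetical current endpoint
    "port": 1433,
    "protocol": "TDS",
}

# After migration, only the host changes.
babelfish_conn = dict(
    sql_server_conn,
    host="my-aurora-cluster.cluster-abc123.us-east-1.rds.amazonaws.com",
)
```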
#2 (Accuracy: 100% / 2 votes)
A company has more than 100 AWS accounts that need Amazon RDS instances. The company wants to build an automated solution to deploy the RDS instances with specific compliance parameters. The data does not need to be replicated. The company needs to create the databases within 1 day.

Which solution will meet these requirements in the MOST operationally efficient way?
  • A. Create RDS resources by using AWS CloudFormation. Share the CloudFormation template with each account.
  • B. Create an RDS snapshot. Share the snapshot with each account. Deploy the snapshot into each account.
  • C. Use AWS CloudFormation to create RDS instances in each account. Run AWS Database Migration Service (AWS DMS) replication to each of the created instances.
  • D. Create a script by using the AWS CLI to copy the RDS instance into the other accounts from a template account.
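The multi-account pattern described here can be driven by a single CloudFormation template that bakes the compliance parameters into the resource definition, so every account deploys identical, pre-approved settings. A minimal sketch; property values are illustrative, not a complete compliant baseline:

```python
import json

# Sketch: a minimal CloudFormation template for an RDS instance with
# compliance parameters fixed in the template itself. Sharing one template
# across accounts keeps the deployed settings uniform.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "ComplianceDB": {
            "Type": "AWS::RDS::DBInstance",
            "Properties": {
                "Engine": "mysql",
                "DBInstanceClass": "db.t3.medium",
                "AllocatedStorage": "100",
                "StorageEncrypted": True,    # example compliance parameter
                "BackupRetentionPeriod": 7,  # example compliance parameter
                "DeletionProtection": True,  # example compliance parameter
            },
        }
    },
}
print(json.dumps(template, indent=2))
```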
#3 (Accuracy: 100% / 2 votes)
A social media company recently launched a new feature that gives users the ability to share live feeds of their daily activities with their followers. The company has an Amazon RDS for MySQL DB instance that stores data about follower engagement.

After the new feature launched, the company noticed high CPU utilization and high database latency during reads and writes.
The company wants to implement a solution that will identify the source of the high CPU utilization.

Which solution will meet these requirements with the LEAST administrative oversight?
  • A. Use Amazon DevOps Guru insights.
  • B. Use AWS CloudTrail.
  • C. Use Amazon CloudWatch Logs.
  • D. Use Amazon Aurora Database Activity Streams.
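For context, Amazon DevOps Guru surfaces anomalies such as sustained high CPU on an RDS instance as "insights". A sketch of the request shape for its ListInsights API; with boto3 this would be passed to `client("devops-guru").list_insights(...)`, which is omitted here because it needs AWS credentials:

```python
# Sketch: StatusFilter payload for DevOps Guru's ListInsights API.
# Reactive insights flag problems that are already occurring, such as
# an ongoing CPU or latency anomaly on a database resource.
status_filter = {
    "Ongoing": {"Type": "REACTIVE"}
}
```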
#4 (Accuracy: 100% / 2 votes)
A company has an application environment that deploys Amazon Aurora PostgreSQL databases as part of its CI/CD process that uses AWS CloudFormation. The company's database administrator has received reports of performance issues from the resulting database but has no way to investigate the issues.

Which combination of changes must the database administrator make to the database deployment to automate the collection of performance data? (Choose two.)
  • A. Turn on Amazon DevOps Guru for the Aurora database resources in the CloudFormation template.
  • B. Turn on AWS CloudTrail in each AWS account.
  • C. Turn on and configure AWS Config for all Aurora PostgreSQL databases.
  • D. Update the CloudFormation template to enable Amazon CloudWatch monitoring on the Aurora PostgreSQL DB instances.
  • E. Update the CloudFormation template to turn on Performance Insights for Aurora PostgreSQL.
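For context, performance data collection can be switched on declaratively in the template. A sketch of the relevant `AWS::RDS::DBInstance` properties (`EnablePerformanceInsights`, `MonitoringInterval`, and `MonitoringRoleArn` are real properties; the role ARN is a placeholder):

```python
# Sketch: CloudFormation properties that automate performance data
# collection for an Aurora PostgreSQL DB instance.
db_instance_properties = {
    "Engine": "aurora-postgresql",
    "EnablePerformanceInsights": True,
    "PerformanceInsightsRetentionPeriod": 7,  # days; 7 is the free tier
    "MonitoringInterval": 60,                 # Enhanced Monitoring granularity, seconds
    "MonitoringRoleArn": "arn:aws:iam::123456789012:role/rds-monitoring-role",
}
```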
#5 (Accuracy: 100% / 2 votes)
A marketing company is developing an application to track responses to email message campaigns. The company needs a database storage solution that is optimized to work with highly connected data. The database needs to limit connections and programmatic access to the data by using IAM policies.

Which solution will meet these requirements?
  • A. Amazon ElastiCache for Redis cluster
  • B. Amazon Aurora MySQL DB cluster
  • C. Amazon DynamoDB table
  • D. Amazon Neptune DB cluster
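For context, the IAM-gated access the question asks about is expressed as a policy on the data-access action for the database. A sketch using Neptune's `neptune-db:connect` action (a real IAM action); the resource ARN is a placeholder:

```python
# Sketch: an IAM policy statement limiting programmatic access to a
# Neptune DB cluster. The account ID and cluster resource ID are placeholders.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "neptune-db:connect",
            "Resource": "arn:aws:neptune-db:us-east-1:123456789012:cluster-ABC123/*",
        }
    ],
}
```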
#6 (Accuracy: 100% / 2 votes)
A company has a hybrid environment in which a VPC connects to an on-premises network through an AWS Site-to-Site VPN connection. The VPC contains an application that is hosted on Amazon EC2 instances. The EC2 instances run in private subnets behind an Application Load Balancer (ALB) that is associated with multiple public subnets. The EC2 instances need to securely access an Amazon DynamoDB table.

Which solution will meet these requirements?
  • A. Use the internet gateway of the VPC to access the DynamoDB table. Use the ALB to route the traffic to the EC2 instances.
  • B. Add a NAT gateway in one of the public subnets of the VPC. Configure the security groups of the EC2 instances to access the DynamoDB table through the NAT gateway.
  • C. Use the Site-to-Site VPN connection to route all DynamoDB network traffic through the on-premises network infrastructure to access the EC2 instances.
  • D. Create a VPC endpoint for DynamoDB. Assign the endpoint to the route table of the private subnets that contain the EC2 instances.
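For context, a DynamoDB gateway endpoint works by adding a route to the specified route tables, so traffic from the private subnets reaches DynamoDB without traversing the internet. A sketch of the parameters for `ec2_client.create_vpc_endpoint(**params)`; the IDs are placeholders:

```python
# Sketch: request parameters for a DynamoDB gateway VPC endpoint.
# Gateway endpoints are attached to route tables rather than subnets.
params = {
    "VpcEndpointType": "Gateway",
    "VpcId": "vpc-0abc1234",
    "ServiceName": "com.amazonaws.us-east-1.dynamodb",
    "RouteTableIds": ["rtb-0private1", "rtb-0private2"],  # private subnet route tables
}
```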
#7 (Accuracy: 100% / 2 votes)
A global company is creating an application. The application must be highly available. The company requires an RTO and an RPO of less than 5 minutes. The company needs a database that will provide the ability to set up an active-active configuration and near real-time synchronization of data across tables in multiple AWS Regions.

Which solution will meet these requirements?
  • A. Amazon RDS for MariaDB with cross-Region read replicas
  • B. Amazon RDS with a Multi-AZ deployment
  • C. Amazon DynamoDB global tables
  • D. Amazon DynamoDB with a global secondary index (GSI)
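For context, DynamoDB global tables (version 2019.11.21) are formed by adding replica Regions to an existing table, which gives active-active writes with near real-time replication. A sketch of the payload for `dynamodb_client.update_table(**params)`; the table name is a placeholder:

```python
# Sketch: adding a replica Region to an existing DynamoDB table to
# create a global table. Each replica accepts reads and writes.
params = {
    "TableName": "app-data",
    "ReplicaUpdates": [
        {"Create": {"RegionName": "eu-west-1"}}
    ],
}
```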
#8 (Accuracy: 100% / 1 vote)
A company has a reporting application that runs on an Amazon EC2 instance in an isolated developer account on AWS. The application needs to retrieve data during non-peak company hours from an Amazon Aurora PostgreSQL database that runs in the company’s production account. The company's security team requires that access to production resources complies with AWS best security practices.

A database administrator needs to provide the reporting application with access to the production database.
The company has already configured VPC peering between the production account and developer account. The company has also updated the route tables in both accounts with the necessary entries to correctly set up VPC peering.

What must the database administrator do to finish providing connectivity to the reporting application?
  • A. Add an inbound security group rule to the database security group that allows access from the developer account VPC CIDR on port 5432. Add an outbound security group rule to the EC2 security group that allows access to the production account VPC CIDR on port 5432.
  • B. Add an outbound security group rule to the database security group that allows access from the developer account VPC CIDR on port 5432. Add an outbound security group rule to the EC2 security group that allows access to the production account VPC CIDR on port 5432.
  • C. Add an inbound security group rule to the database security group that allows access from the developer account VPC CIDR on all TCP ports. Add an inbound security group rule to the EC2 security group that allows access to the production account VPC CIDR on port 5432.
  • D. Add an inbound security group rule to the database security group that allows access from the developer account VPC CIDR on port 5432. Add an outbound security group rule to the EC2 security group that allows access to the production account VPC CIDR on all TCP ports.
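For context, cross-VPC database access over peering is controlled by an ingress rule on the database side and an egress rule on the client side, both scoped to the peer VPC's CIDR and the database port. A sketch as boto3-style payloads for `authorize_security_group_ingress` and `authorize_security_group_egress`; SG IDs and CIDRs are placeholders, and 5432 is the PostgreSQL port:

```python
# Sketch: production database SG allows inbound 5432 from the developer VPC.
db_sg_ingress = {
    "GroupId": "sg-0db",  # placeholder: production database security group
    "IpPermissions": [{
        "IpProtocol": "tcp",
        "FromPort": 5432,
        "ToPort": 5432,
        "IpRanges": [{"CidrIp": "10.1.0.0/16", "Description": "developer VPC CIDR"}],
    }],
}

# Sketch: developer EC2 SG allows outbound 5432 to the production VPC.
ec2_sg_egress = {
    "GroupId": "sg-0app",  # placeholder: developer EC2 security group
    "IpPermissions": [{
        "IpProtocol": "tcp",
        "FromPort": 5432,
        "ToPort": 5432,
        "IpRanges": [{"CidrIp": "10.2.0.0/16", "Description": "production VPC CIDR"}],
    }],
}
```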
#9 (Accuracy: 100% / 2 votes)
A company performs an audit on various data stores and discovers that an Amazon S3 bucket is storing a credit card number. The S3 bucket is the target of an AWS Database Migration Service (AWS DMS) continuous replication task that uses change data capture (CDC). The company determines that this field is not needed by anyone who uses the target data. The company has manually removed the existing credit card data from the S3 bucket.

What is the MOST operationally efficient way to prevent new credit card data from being written to the S3 bucket?
  • A. Add a transformation rule to the DMS task to ignore the column from the source data endpoint.
  • B. Add a transformation rule to the DMS task to mask the column by using a simple SQL query.
  • C. Configure the target S3 bucket to use server-side encryption with AWS KMS keys (SSE-KMS).
  • D. Remove the credit card number column from the data source so that the DMS task does not need to be altered.
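For context, DMS table mappings support a `remove-column` transformation action that drops a column before it reaches the target, without touching the source. A sketch of the table-mapping JSON; schema, table, and column names are placeholders:

```python
import json

# Sketch: a DMS transformation rule that removes a sensitive column
# from the replicated data. "remove-column" is a real DMS rule-action.
table_mappings = {
    "rules": [{
        "rule-type": "transformation",
        "rule-id": "1",
        "rule-name": "drop-credit-card-column",
        "rule-target": "column",
        "object-locator": {
            "schema-name": "sales",            # placeholder
            "table-name": "payments",          # placeholder
            "column-name": "credit_card_number",
        },
        "rule-action": "remove-column",
    }]
}
print(json.dumps(table_mappings))
```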
#10 (Accuracy: 100% / 4 votes)
A company is using an Amazon RDS Multi-AZ DB instance in its development environment. The DB instance uses General Purpose SSD storage. The DB instance provides data to an application that has I/O constraints and high online transaction processing (OLTP) workloads. The users report that the application is slow.

A database specialist finds a high degree of latency in the database writes.
The database specialist must decrease the database latency by designing a solution that minimizes operational overhead.

Which solution will meet these requirements?
  • A. Eliminate the Multi-AZ deployment. Run the DB instance in only one Availability Zone.
  • B. Recreate the DB instance. Use the default storage type. Reload the data from an automatic snapshot.
  • C. Switch the storage to Provisioned IOPS SSD on the DB instance that is running.
  • D. Recreate the DB instance. Use Provisioned IOPS SSD storage. Reload the data from an automatic snapshot.
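For context, RDS supports changing the storage type of a running instance as an online modification, so no recreate-and-reload is needed. A sketch of the payload for `rds_client.modify_db_instance(**params)`; the identifier and IOPS value are placeholders (provisioned IOPS must satisfy the ratio to allocated storage for the engine):

```python
# Sketch: switching a running RDS instance to Provisioned IOPS SSD storage.
params = {
    "DBInstanceIdentifier": "dev-db",  # placeholder
    "StorageType": "io1",
    "Iops": 12000,                     # placeholder provisioned IOPS
    "ApplyImmediately": True,
}
```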