Amazon AWS Certified Solutions Architect - Associate SAA-C03
#521 (Accuracy: 100% / 2 votes)
A company is migrating its multi-tier on-premises application to AWS. The application consists of a single-node MySQL database and a multi-node web tier. The company must minimize changes to the application during the migration. The company wants to improve application resiliency after the migration.

Which combination of steps will meet these requirements? (Choose two.)
  • A. Migrate the web tier to Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer.
  • B. Migrate the database to Amazon EC2 instances in an Auto Scaling group behind a Network Load Balancer.
  • C. Migrate the database to an Amazon RDS Multi-AZ deployment.
  • D. Migrate the web tier to an AWS Lambda function.
  • E. Migrate the database to an Amazon DynamoDB table.
#522 (Accuracy: 93% / 5 votes)
A company is required to use cryptographic keys in its on-premises key manager. The key manager is outside of the AWS Cloud because of regulatory and compliance requirements. The company wants to manage encryption and decryption by using cryptographic keys that are retained outside of the AWS Cloud and that support a variety of external key managers from different vendors.

Which solution will meet these requirements with the LEAST operational overhead?
  • A. Use AWS CloudHSM key store backed by a CloudHSM cluster.
  • B. Use an AWS Key Management Service (AWS KMS) external key store backed by an external key manager.
  • C. Use the default AWS Key Management Service (AWS KMS) managed key store.
  • D. Use a custom key store backed by an AWS CloudHSM cluster.
#523 (Accuracy: 94% / 6 votes)
A company collects 10 GB of telemetry data daily from various machines. The company stores the data in an Amazon S3 bucket in a source data account.

The company has hired several consulting agencies to use this data for analysis.
Each agency needs read access to the data for its analysts. The company must share the data from the source data account by choosing a solution that maximizes security and operational efficiency.

Which solution will meet these requirements?
  • A. Configure S3 global tables to replicate data for each agency.
  • B. Make the S3 bucket public for a limited time. Inform only the agencies.
  • C. Configure cross-account access for the S3 bucket to the accounts that the agencies own.
  • D. Set up an IAM user for each analyst in the source data account. Grant each user access to the S3 bucket.
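For context on option C: cross-account read access to an S3 bucket is typically granted with a bucket policy in the source data account that names the agencies' AWS accounts as principals. A minimal sketch in Python follows; the bucket name and account ID are hypothetical placeholders, not values from the question.

```python
import json

def build_cross_account_read_policy(bucket_name, agency_account_ids):
    """Build an S3 bucket policy dict granting read-only access to other accounts.

    bucket_name and agency_account_ids are placeholders for illustration;
    real AWS account IDs are 12-digit strings.
    """
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AgencyReadOnly",
                "Effect": "Allow",
                "Principal": {
                    "AWS": [f"arn:aws:iam::{acct}:root" for acct in agency_account_ids]
                },
                # ListBucket applies to the bucket ARN; GetObject to the objects.
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [
                    f"arn:aws:s3:::{bucket_name}",
                    f"arn:aws:s3:::{bucket_name}/*",
                ],
            }
        ],
    }

policy = build_cross_account_read_policy("telemetry-source-data", ["111122223333"])
print(json.dumps(policy, indent=2))
```

Each agency's administrators would then delegate access to their own analysts with IAM policies inside their accounts, which is why this approach scales better than creating per-analyst IAM users in the source account.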
#524 (Accuracy: 100% / 7 votes)
A company is designing the network for an online multi-player game. The game uses the UDP networking protocol and will be deployed in eight AWS Regions. The network architecture needs to minimize latency and packet loss to give end users a high-quality gaming experience.

Which solution will meet these requirements?
  • A. Set up a transit gateway in each Region. Create inter-Region peering attachments between each transit gateway.
  • B. Set up AWS Global Accelerator with UDP listeners and endpoint groups in each Region.
  • C. Set up Amazon CloudFront with UDP turned on. Configure an origin in each Region.
  • D. Set up a VPC peering mesh between each Region. Turn on UDP for each VPC.
#525 (Accuracy: 93% / 6 votes)
An ecommerce application uses a PostgreSQL database that runs on an Amazon EC2 instance. During a monthly sales event, database usage increases and causes database connection issues for the application. The traffic is unpredictable for subsequent monthly sales events, which impacts the sales forecast. The company needs to maintain performance when there is an unpredictable increase in traffic.

Which solution resolves this issue in the MOST cost-effective way?
  • A. Migrate the PostgreSQL database to Amazon Aurora Serverless v2.
  • B. Enable auto scaling for the PostgreSQL database on the EC2 instance to accommodate increased usage.
  • C. Migrate the PostgreSQL database to Amazon RDS for PostgreSQL with a larger instance type.
  • D. Migrate the PostgreSQL database to Amazon Redshift to accommodate increased usage.
#526 (Accuracy: 95% / 7 votes)
A company runs an application that uses Amazon RDS for PostgreSQL. The application receives traffic only on weekdays during business hours. The company wants to optimize costs and reduce operational overhead based on this usage.

Which solution will meet these requirements?
  • A. Use the Instance Scheduler on AWS to configure start and stop schedules.
  • B. Turn off automatic backups. Create weekly manual snapshots of the database.
  • C. Create a custom AWS Lambda function to start and stop the database based on minimum CPU utilization.
  • D. Purchase All Upfront reserved DB instances.
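To illustrate the idea behind option A: a scheduler such as the Instance Scheduler on AWS evaluates a weekday-business-hours window and starts or stops the DB instance accordingly. The decision logic can be sketched in plain Python; the 08:00–18:00 window here is an assumed example, not part of the question.

```python
from datetime import datetime

BUSINESS_START_HOUR = 8   # assumed start of business hours (local time)
BUSINESS_END_HOUR = 18    # assumed end of business hours (local time)

def database_should_run(now: datetime) -> bool:
    """Return True if the DB instance should be running at `now`.

    Weekdays are Monday (0) through Friday (4); outside the assumed
    08:00-18:00 window, or on weekends, the instance is stopped.
    """
    is_weekday = now.weekday() < 5
    in_hours = BUSINESS_START_HOUR <= now.hour < BUSINESS_END_HOUR
    return is_weekday and in_hours

print(database_should_run(datetime(2024, 5, 1, 10)))  # Wednesday 10:00 -> True
print(database_should_run(datetime(2024, 5, 4, 10)))  # Saturday 10:00 -> False
```

In the managed solution this check runs on a schedule, with no custom code to maintain, which is the operational-overhead difference versus the hand-rolled Lambda function in option C.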
#527 (Accuracy: 100% / 4 votes)
A company seeks a storage solution for its application. The solution must be highly available and scalable. The solution also must function as a file system, be mountable by multiple Linux instances in AWS and on premises through native protocols, and have no minimum size requirements. The company has set up a Site-to-Site VPN for access from its on-premises network to its VPC.

Which storage solution meets these requirements?
  • A. Amazon FSx Multi-AZ deployments
  • B. Amazon Elastic Block Store (Amazon EBS) Multi-Attach volumes
  • C. Amazon Elastic File System (Amazon EFS) with multiple mount targets
  • D. Amazon Elastic File System (Amazon EFS) with a single mount target and multiple access points
#528 (Accuracy: 95% / 5 votes)
A company has multiple Windows file servers on premises. The company wants to migrate and consolidate its files into an Amazon FSx for Windows File Server file system. File permissions must be preserved to ensure that access rights do not change.

Which solutions will meet these requirements? (Choose two.)
  • A. Deploy AWS DataSync agents on premises. Schedule DataSync tasks to transfer the data to the FSx for Windows File Server file system.
  • B. Copy the shares on each file server into Amazon S3 buckets by using the AWS CLI. Schedule AWS DataSync tasks to transfer the data to the FSx for Windows File Server file system.
  • C. Remove the drives from each file server. Ship the drives to AWS for import into Amazon S3. Schedule AWS DataSync tasks to transfer the data to the FSx for Windows File Server file system.
  • D. Order an AWS Snowcone device. Connect the device to the on-premises network. Launch AWS DataSync agents on the device. Schedule DataSync tasks to transfer the data to the FSx for Windows File Server file system.
  • E. Order an AWS Snowball Edge Storage Optimized device. Connect the device to the on-premises network. Copy data to the device by using the AWS CLI. Ship the device back to AWS for import into Amazon S3. Schedule AWS DataSync tasks to transfer the data to the FSx for Windows File Server file system.
#529 (Accuracy: 90% / 10 votes)
A company has a web application with sporadic usage patterns. There is heavy usage at the beginning of each month, moderate usage at the start of each week, and unpredictable usage during the week. The application consists of a web server and a MySQL database server running inside the data center. The company would like to move the application to the AWS Cloud, and needs to select a cost-effective database platform that will not require database modifications.

Which solution will meet these requirements?
  • A. Amazon DynamoDB
  • B. Amazon RDS for MySQL
  • C. MySQL-compatible Amazon Aurora Serverless
  • D. MySQL deployed on Amazon EC2 in an Auto Scaling group
#530 (Accuracy: 91% / 12 votes)
A company wants to use the AWS Cloud to make an existing application highly available and resilient. The current version of the application resides in the company's data center. The application recently experienced data loss after a database server crashed because of an unexpected power outage.

The company needs a solution that avoids any single points of failure.
The solution must give the application the ability to scale to meet user demand.

Which solution will meet these requirements?
  • A. Deploy the application servers by using Amazon EC2 instances in an Auto Scaling group across multiple Availability Zones. Use an Amazon RDS DB instance in a Multi-AZ configuration.
  • B. Deploy the application servers by using Amazon EC2 instances in an Auto Scaling group in a single Availability Zone. Deploy the database on an EC2 instance. Enable EC2 Auto Recovery.
  • C. Deploy the application servers by using Amazon EC2 instances in an Auto Scaling group across multiple Availability Zones. Use an Amazon RDS DB instance with a read replica in a single Availability Zone. Promote the read replica to replace the primary DB instance if the primary DB instance fails.
  • D. Deploy the application servers by using Amazon EC2 instances in an Auto Scaling group across multiple Availability Zones. Deploy the primary and secondary database servers on EC2 instances across multiple Availability Zones. Use Amazon Elastic Block Store (Amazon EBS) Multi-Attach to create shared storage between the instances.