Amazon AWS Certified Solutions Architect - Associate SAA-C03
#231 (Accuracy: 100% / 4 votes)
A company is creating an application. The company stores data from tests of the application in multiple on-premises locations.

The company needs to connect the on-premises locations to VPCs in an AWS Region in the AWS Cloud.
The number of accounts and VPCs will increase during the next year. The network architecture must simplify the administration of new connections and must provide the ability to scale.

Which solution will meet these requirements with the LEAST administrative overhead?
  • A. Create a peering connection between the VPCs. Create a VPN connection between the VPCs and the on-premises locations.
  • B. Launch an Amazon EC2 instance. On the instance, include VPN software that uses a VPN connection to connect all VPCs and on-premises locations.
  • C. Create a transit gateway. Create VPC attachments for the VPC connections. Create VPN attachments for the on-premises connections.
  • D. Create an AWS Direct Connect connection between the on-premises locations and a central VPC. Connect the central VPC to other VPCs by using peering connections.
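For reference, the hub-and-spoke pattern described in option C maps to a few EC2 API calls. A minimal sketch of the request parameter shapes (all IDs are hypothetical placeholders; no API call is made here):

```python
# Transit gateway hub-and-spoke sketch. The dicts mirror boto3 EC2
# request parameters for create_transit_gateway, create_transit_gateway_vpc_attachment,
# and create_vpn_connection; IDs are hypothetical.

create_tgw = {
    "Description": "Hub for all VPCs and on-premises sites",
    "Options": {"AutoAcceptSharedAttachments": "enable"},  # eases adding new accounts
}

# One VPC attachment per spoke VPC; a new VPC just adds another attachment.
vpc_attachment = {
    "TransitGatewayId": "tgw-0123456789abcdef0",   # hypothetical ID
    "VpcId": "vpc-0123456789abcdef0",              # hypothetical ID
    "SubnetIds": ["subnet-0123456789abcdef0"],     # hypothetical ID
}

# Each on-premises site connects with a VPN attachment to the same hub.
vpn_connection = {
    "Type": "ipsec.1",
    "CustomerGatewayId": "cgw-0123456789abcdef0",  # hypothetical ID
    "TransitGatewayId": "tgw-0123456789abcdef0",
}
```

Because every VPC and VPN attaches to the one gateway, adding an account or site is a single attachment rather than a new mesh of peerings or tunnels.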
#232 (Accuracy: 100% / 4 votes)
A company has two VPCs named Management and Production. The Management VPC uses VPNs through a customer gateway to connect to a single device in the data center. The Production VPC uses a virtual private gateway with two attached AWS Direct Connect connections. The Management and Production VPCs both use a single VPC peering connection to allow communication between the applications.

What should a solutions architect do to mitigate any single point of failure in this architecture?
  • A. Add a set of VPNs between the Management and Production VPCs.
  • B. Add a second virtual private gateway and attach it to the Management VPC.
  • C. Add a second set of VPNs to the Management VPC from a second customer gateway device.
  • D. Add a second VPC peering connection between the Management VPC and the Production VPC.
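The single customer gateway device serving the Management VPC is the one unmirrored component in this design. A sketch of what adding a second device (option C) would look like as request parameters, with a hypothetical IP from the TEST-NET range and hypothetical IDs:

```python
# Redundant VPN path for the Management VPC: a second customer gateway
# device plus a second VPN connection to the existing virtual private
# gateway. Parameter shapes mirror boto3 EC2 calls; values are hypothetical.

second_customer_gateway = {
    "Type": "ipsec.1",
    "PublicIp": "203.0.113.12",  # hypothetical second on-premises device
    "BgpAsn": 65000,             # hypothetical on-premises ASN
}

second_vpn_connection = {
    "Type": "ipsec.1",
    "CustomerGatewayId": "cgw-0fedcba9876543210",  # hypothetical new gateway
    "VpnGatewayId": "vgw-0123456789abcdef0",       # Management VPC's VGW (hypothetical)
}
```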
#233 (Accuracy: 95% / 6 votes)
A company stores sensitive data in Amazon S3. A solutions architect needs to create an encryption solution. The company needs to fully control the ability of users to create, rotate, and disable encryption keys with minimal effort for any data that must be encrypted.

Which solution will meet these requirements?
  • A. Use default server-side encryption with Amazon S3 managed encryption keys (SSE-S3) to store the sensitive data.
  • B. Create a customer managed key by using AWS Key Management Service (AWS KMS). Use the new key to encrypt the S3 objects by using server-side encryption with AWS KMS keys (SSE-KMS).
  • C. Create an AWS managed key by using AWS Key Management Service (AWS KMS). Use the new key to encrypt the S3 objects by using server-side encryption with AWS KMS keys (SSE-KMS).
  • D. Download S3 objects to an Amazon EC2 instance. Encrypt the objects by using customer managed keys. Upload the encrypted objects back into Amazon S3.
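The distinction the question turns on is who controls the key lifecycle: only a customer managed KMS key lets the company itself create, rotate, and disable keys. A sketch of the parameter shapes involved (bucket name and key ID are hypothetical):

```python
# Customer managed KMS key plus SSE-KMS on upload. Dicts mirror the
# boto3 kms.create_key and s3.put_object parameters; no calls are made.

create_key = {
    "Description": "Customer managed key for sensitive S3 data",
    "KeySpec": "SYMMETRIC_DEFAULT",
    "KeyUsage": "ENCRYPT_DECRYPT",
}

put_object = {
    "Bucket": "example-sensitive-bucket",  # hypothetical bucket
    "Key": "report.csv",                   # hypothetical object
    "ServerSideEncryption": "aws:kms",
    "SSEKMSKeyId": "1234abcd-12ab-34cd-56ef-1234567890ab",  # hypothetical key ID
}
```

With SSE-S3 or an AWS managed key, rotation and disabling are handled by AWS and cannot be controlled by the company.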
#234 (Accuracy: 100% / 9 votes)
A company is building an application that consists of several microservices. The company has decided to use container technologies to deploy its software on AWS. The company needs a solution that minimizes the amount of ongoing effort for maintenance and scaling. The company cannot manage additional infrastructure.

Which combination of actions should a solutions architect take to meet these requirements? (Choose two.)
  • A. Deploy an Amazon Elastic Container Service (Amazon ECS) cluster.
  • B. Deploy the Kubernetes control plane on Amazon EC2 instances that span multiple Availability Zones.
  • C. Deploy an Amazon Elastic Container Service (Amazon ECS) service with an Amazon EC2 launch type. Specify a desired task number level of greater than or equal to 2.
  • D. Deploy an Amazon Elastic Container Service (Amazon ECS) service with a Fargate launch type. Specify a desired task number level of greater than or equal to 2.
  • E. Deploy Kubernetes worker nodes on Amazon EC2 instances that span multiple Availability Zones. Create a deployment that specifies two or more replicas for each microservice.
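With the Fargate launch type there are no EC2 instances or control plane for the company to manage; ECS runs and scales the tasks. A sketch of a service definition in that style (cluster, service, and task names are hypothetical):

```python
# ECS service on Fargate. The dict mirrors boto3 ecs.create_service
# parameters; names and IDs are hypothetical placeholders.

create_service = {
    "cluster": "microservices",        # hypothetical cluster name
    "serviceName": "orders-service",   # hypothetical microservice
    "taskDefinition": "orders:1",      # hypothetical task definition
    "launchType": "FARGATE",
    "desiredCount": 2,                 # two or more tasks for availability
    "networkConfiguration": {
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],  # hypothetical subnet
        }
    },
}
```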
#235 (Accuracy: 100% / 7 votes)
A company has deployed its newest product on AWS. The product runs in an Auto Scaling group behind a Network Load Balancer. The company stores the product’s objects in an Amazon S3 bucket.

The company recently experienced malicious attacks against its systems.
The company needs a solution that continuously monitors for malicious activity in the AWS account, workloads, and access patterns to the S3 bucket. The solution must also report suspicious activity and display the information on a dashboard.

Which solution will meet these requirements?
  • A. Configure Amazon Macie to monitor and report findings to AWS Config.
  • B. Configure Amazon Inspector to monitor and report findings to AWS CloudTrail.
  • C. Configure Amazon GuardDuty to monitor and report findings to AWS Security Hub.
  • D. Configure AWS Config to monitor and report findings to Amazon EventBridge.
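GuardDuty continuously analyzes CloudTrail events, VPC Flow Logs, DNS logs, and S3 data events, and Security Hub aggregates its findings on a dashboard, which is the combination option C describes. Enabling both is essentially a pair of one-line API calls, sketched here as parameter dicts rather than live calls:

```python
# Continuous threat monitoring with findings on a dashboard.
# Dicts mirror boto3 guardduty.create_detector and
# securityhub.enable_security_hub parameters; no calls are made.

create_detector = {
    "Enable": True,
    "DataSources": {"S3Logs": {"Enable": True}},  # S3 access-pattern monitoring
}

enable_security_hub = {
    "EnableDefaultStandards": True,  # baseline checks alongside GuardDuty findings
}
```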
#236 (Accuracy: 100% / 6 votes)
An ecommerce company wants a disaster recovery solution for its Amazon RDS DB instances that run Microsoft SQL Server Enterprise Edition. The company's current recovery point objective (RPO) and recovery time objective (RTO) are 24 hours.

Which solution will meet these requirements MOST cost-effectively?
  • A. Create a cross-Region read replica and promote the read replica to the primary instance.
  • B. Use AWS Database Migration Service (AWS DMS) to create RDS cross-Region replication.
  • C. Use cross-Region replication every 24 hours to copy native backups to an Amazon S3 bucket.
  • D. Copy automatic snapshots to another Region every 24 hours.
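Automated RDS snapshots already exist at no extra compute cost, so copying one to a second Region each day can satisfy a 24-hour RPO and RTO without running standby infrastructure. A sketch of the copy parameters (identifiers and Regions are hypothetical):

```python
# Daily cross-Region snapshot copy. The dict mirrors boto3
# rds.copy_db_snapshot parameters; the ARN and names are hypothetical.

copy_db_snapshot = {
    "SourceDBSnapshotIdentifier": (
        "arn:aws:rds:us-east-1:123456789012:snapshot:rds:mydb-2024-01-01"
    ),                                        # hypothetical automated-snapshot ARN
    "TargetDBSnapshotIdentifier": "mydb-dr-copy",
    "SourceRegion": "us-east-1",              # boto3 uses this to presign the copy
}
```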
#237 (Accuracy: 100% / 8 votes)
A solutions architect observes that a nightly batch processing job is automatically scaled up for 1 hour before the desired Amazon EC2 capacity is reached. The peak capacity is the same every night, and the batch jobs always start at 1 AM. The solutions architect needs to find a cost-effective solution that will allow for the desired EC2 capacity to be reached quickly and allow the Auto Scaling group to scale down after the batch jobs are complete.

What should the solutions architect do to meet these requirements?
  • A. Increase the minimum capacity for the Auto Scaling group.
  • B. Increase the maximum capacity for the Auto Scaling group.
  • C. Configure scheduled scaling to scale up to the desired compute level.
  • D. Change the scaling policy to add more EC2 instances during each scaling operation.
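Because the peak is predictable, a pair of scheduled actions can raise capacity shortly before the 1 AM window and lower it afterward, so instances are ready when the jobs start. A sketch of the two actions (group name, sizes, and the scale-down time are hypothetical assumptions):

```python
# Scheduled scaling for a predictable nightly batch. Dicts mirror boto3
# autoscaling.put_scheduled_update_group_action parameters; the group
# name, capacities, and scale-down hour are hypothetical.

scale_up = {
    "AutoScalingGroupName": "batch-asg",       # hypothetical group
    "ScheduledActionName": "pre-batch-scale-up",
    "Recurrence": "45 0 * * *",                # cron, UTC: 00:45 daily, before 1 AM
    "DesiredCapacity": 20,                     # hypothetical nightly peak
}

scale_down = {
    "AutoScalingGroupName": "batch-asg",
    "ScheduledActionName": "post-batch-scale-down",
    "Recurrence": "0 5 * * *",                 # assumed end of the batch window
    "DesiredCapacity": 2,
}
```

Unlike raising the group minimum, this keeps the fleet small outside the batch window, so no capacity is paid for idle hours.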
#238 (Accuracy: 100% / 10 votes)
A company deploys its applications on Amazon Elastic Kubernetes Service (Amazon EKS) behind an Application Load Balancer in an AWS Region. The application needs to store data in a PostgreSQL database engine. The company wants the data in the database to be highly available. The company also needs increased capacity for read workloads.

Which solution will meet these requirements with the MOST operational efficiency?
  • A. Create an Amazon DynamoDB database table configured with global tables.
  • B. Create an Amazon RDS database with Multi-AZ deployments.
  • C. Create an Amazon RDS database with Multi-AZ DB cluster deployment.
  • D. Create an Amazon RDS database configured with cross-Region read replicas.
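A Multi-AZ DB cluster deployment keeps one writer and two readable standby instances in separate Availability Zones, so it covers both the availability and the read-capacity requirements in one Region. A sketch of the cluster parameters (identifiers and sizes are hypothetical; the password is a placeholder, not a real secret):

```python
# RDS Multi-AZ DB cluster for PostgreSQL. The dict mirrors boto3
# rds.create_db_cluster parameters; all values are hypothetical.

create_db_cluster = {
    "DBClusterIdentifier": "app-postgres",      # hypothetical
    "Engine": "postgres",
    "DBClusterInstanceClass": "db.m6gd.large",  # instance class makes this a Multi-AZ DB cluster
    "AllocatedStorage": 100,
    "StorageType": "io1",
    "Iops": 1000,
    "MasterUsername": "admin_user",
    "MasterUserPassword": "example-only",       # placeholder
}
```

Read workloads then target the cluster's reader endpoint, which load-balances across the two readable standbys.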
#239 (Accuracy: 100% / 5 votes)
A company is developing a real-time multiplayer game that uses UDP for communications between the client and servers in an Auto Scaling group. Spikes in demand are anticipated during the day, so the game server platform must adapt accordingly. Developers want to store gamer scores and other non-relational data in a database solution that will scale without intervention.

Which solution should a solutions architect recommend?
  • A. Use Amazon Route 53 for traffic distribution and Amazon Aurora Serverless for data storage.
  • B. Use a Network Load Balancer for traffic distribution and Amazon DynamoDB on-demand for data storage.
  • C. Use a Network Load Balancer for traffic distribution and Amazon Aurora Global Database for data storage.
  • D. Use an Application Load Balancer for traffic distribution and Amazon DynamoDB global tables for data storage.
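Two mechanics drive this scenario: UDP traffic requires a Network Load Balancer (Application Load Balancers handle only HTTP/HTTPS), and DynamoDB in on-demand mode scales with no capacity management. A sketch of the two key parameter shapes (ARN, port, and table name are hypothetical):

```python
# UDP listener on an NLB plus a DynamoDB on-demand table. Dicts mirror
# boto3 elbv2.create_listener and dynamodb.create_table parameters;
# the ARN, port, and names are hypothetical.

udp_listener = {
    "LoadBalancerArn": (
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
        "loadbalancer/net/game-nlb/50dc6c495c0c9188"
    ),                   # hypothetical NLB ARN
    "Protocol": "UDP",
    "Port": 7777,        # hypothetical game port
}

scores_table = {
    "TableName": "GamerScores",            # hypothetical table
    "BillingMode": "PAY_PER_REQUEST",      # on-demand: scales without intervention
    "AttributeDefinitions": [{"AttributeName": "PlayerId", "AttributeType": "S"}],
    "KeySchema": [{"AttributeName": "PlayerId", "KeyType": "HASH"}],
}
```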
#240 (Accuracy: 100% / 7 votes)
A company needs to transfer 600 TB of data from its on-premises network-attached storage (NAS) system to the AWS Cloud. The data transfer must be complete within 2 weeks. The data is sensitive and must be encrypted in transit. The company’s internet connection can support an upload speed of 100 Mbps.

Which solution meets these requirements MOST cost-effectively?
  • A. Use Amazon S3 multi-part upload functionality to transfer the files over HTTPS.
  • B. Create a VPN connection between the on-premises NAS system and the nearest AWS Region. Transfer the data over the VPN connection.
  • C. Use the AWS Snow Family console to order several AWS Snowball Edge Storage Optimized devices. Use the devices to transfer the data to Amazon S3.
  • D. Set up a 10 Gbps AWS Direct Connect connection between the company location and the nearest AWS Region. Transfer the data over a VPN connection into the Region to store the data in Amazon S3.
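A quick back-of-the-envelope calculation shows why any transfer over the 100 Mbps link cannot meet the 2-week deadline, and roughly how many Snowball Edge Storage Optimized devices (about 80 TB usable each) the job would take:

```python
# Transfer-time check for 600 TB over a 100 Mbps uplink
# (decimal units; real-world throughput would be lower still).

data_bits = 600e12 * 8              # 600 TB expressed in bits
link_bps = 100e6                    # 100 Mbps upload
transfer_days = data_bits / link_bps / 86400
print(round(transfer_days))         # ≈ 556 days, nowhere near 14

# Snowball Edge Storage Optimized offers roughly 80 TB usable per
# device, so the job fits on about eight devices shipped in parallel.
devices = -(-600 // 80)             # ceiling division
print(devices)                      # 8
```

The same arithmetic rules out the VPN option; a 10 Gbps Direct Connect could finish in time but takes weeks to provision and costs far more than shipping devices.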