Amazon AWS Certified Solutions Architect - Associate SAA-C02
#431 (Accuracy: 100% / 1 vote)
A company runs a high performance computing (HPC) workload on AWS. The workload requires low-latency network performance and high network throughput with tightly coupled node-to-node communication. The Amazon EC2 instances are properly sized for compute and storage capacity, and are launched using default options.
What should a solutions architect propose to improve the performance of the workload?
  • A. Choose a cluster placement group while launching Amazon EC2 instances.
  • B. Choose dedicated instance tenancy while launching Amazon EC2 instances.
  • C. Choose an Elastic Inference accelerator while launching Amazon EC2 instances.
  • D. Choose the required capacity reservation while launching Amazon EC2 instances.
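If option A's cluster placement group is the approach taken, it can be requested at launch time. A minimal boto3 sketch, assuming hypothetical names and IDs (the placement group name, AMI ID, and instance count are not from the question):

```python
import boto3

ec2 = boto3.client("ec2")

# A cluster placement group packs instances close together in one
# Availability Zone for low-latency, high-throughput node-to-node traffic.
ec2.create_placement_group(GroupName="hpc-cluster-pg", Strategy="cluster")

# Launch the HPC nodes into the placement group. AMI ID and instance type
# are placeholders.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c5n.18xlarge",
    MinCount=4,
    MaxCount=4,
    Placement={"GroupName": "hpc-cluster-pg"},
)
```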
#432 (Accuracy: 100% / 3 votes)
A solutions architect at a company is designing the architecture for a two-tiered web application. The web application is composed of an internet-facing Application Load Balancer (ALB) that forwards traffic to an Auto Scaling group of Amazon EC2 instances. The EC2 instances must be able to access a database that runs on Amazon RDS.

The company has requested a defense-in-depth approach to the network layout. The company does not want to rely solely on security groups or network ACLs. Only the minimum resources that are necessary should be routable from the internet.

Which network design should the solutions architect recommend to meet these requirements?
  • A. Place the ALB, EC2 instances, and RDS database in private subnets.
  • B. Place the ALB in public subnets. Place the EC2 instances and RDS database in private subnets.
  • C. Place the ALB and EC2 instances in public subnets. Place the RDS database in private subnets.
  • D. Place the ALB outside the VPC. Place the EC2 instances and RDS database in private subnets.
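Option B's layout can be sketched with boto3: the ALB's subnets get a default route to an internet gateway, while the application and database subnets get none. The VPC and gateway IDs below are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")
vpc_id = "vpc-0123456789abcdef0"  # hypothetical existing VPC
igw_id = "igw-0123456789abcdef0"  # hypothetical attached internet gateway

# Public subnet for the ALB: its route table routes 0.0.0.0/0 to the IGW.
public = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.0.0/24")
rt_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rt_id, DestinationCidrBlock="0.0.0.0/0",
                 GatewayId=igw_id)
ec2.associate_route_table(RouteTableId=rt_id,
                          SubnetId=public["Subnet"]["SubnetId"])

# Private subnet for the EC2 instances and RDS database: no internet
# route, so those resources are not reachable from the internet.
ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")
```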
#433 (Accuracy: 100% / 1 vote)
A company has two applications: a sender application that sends messages with payloads to be processed and a processing application intended to receive messages with payloads. The company wants to implement an AWS service to handle messages between the two applications. The sender application can send about 1,000 messages each hour. The messages may take up to 2 days to be processed. If the messages fail to process, they must be retained so that they do not impact the processing of any remaining messages.
Which solution meets these requirements and is the MOST operationally efficient?
  • A. Set up an Amazon EC2 instance running a Redis database. Configure both applications to use the instance. Store, process, and delete the messages, respectively.
  • B. Use an Amazon Kinesis data stream to receive the messages from the sender application. Integrate the processing application with the Kinesis Client Library (KCL).
  • C. Integrate the sender and processor applications with an Amazon Simple Queue Service (Amazon SQS) queue. Configure a dead-letter queue to collect the messages that failed to process.
  • D. Subscribe the processing application to an Amazon Simple Notification Service (Amazon SNS) topic to receive notifications to process. Integrate the sender application to write to the SNS topic.
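Option C's queue pair takes a few API calls. A sketch assuming hypothetical queue names; retention is set above the 2-day processing window (SQS allows up to 14 days), and failing messages move aside to the dead-letter queue instead of blocking the rest:

```python
import json
import boto3

sqs = boto3.client("sqs")

# Dead-letter queue that retains messages that repeatedly fail to process.
dlq_url = sqs.create_queue(QueueName="payloads-dlq")["QueueUrl"]
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Main queue: 4-day retention covers the 2-day processing window; after
# 3 failed receives a message is redirected to the DLQ.
sqs.create_queue(
    QueueName="payloads",
    Attributes={
        "MessageRetentionPeriod": "345600",
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "3"}
        ),
    },
)
```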
#434 (Accuracy: 100% / 1 vote)
A company copies 200 TB of data from a recent ocean survey onto AWS Snowball Edge Storage Optimized devices. The company has a high performance computing (HPC) cluster that is hosted on AWS to look for oil and gas deposits. A solutions architect must provide the cluster with consistent sub-millisecond latency and high-throughput access to the data on the Snowball Edge Storage Optimized devices. The company is sending the devices back to AWS.
Which solution will meet these requirements?
  • A. Create an Amazon S3 bucket. Import the data into the S3 bucket. Configure an AWS Storage Gateway file gateway to use the S3 bucket. Access the file gateway from the HPC cluster instances.
  • B. Create an Amazon S3 bucket. Import the data into the S3 bucket. Configure an Amazon FSx for Lustre file system, and integrate it with the S3 bucket. Access the FSx for Lustre file system from the HPC cluster instances.
  • C. Create an Amazon S3 bucket and an Amazon Elastic File System (Amazon EFS) file system. Import the data into the S3 bucket. Copy the data from the S3 bucket to the EFS file system. Access the EFS file system from the HPC cluster instances.
  • D. Create an Amazon FSx for Lustre file system. Import the data directly into the FSx for Lustre file system. Access the FSx for Lustre file system from the HPC cluster instances.
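Options B and D both lean on Amazon FSx for Lustre, which provides the sub-millisecond, high-throughput file access HPC clusters expect and can lazy-load its contents from a linked S3 bucket. A sketch of creating a file system linked to the import bucket; the bucket name, subnet ID, and sizing are placeholders:

```python
import boto3

fsx = boto3.client("fsx")

# Lustre file system linked to the S3 bucket that received the Snowball
# import. Objects are loaded from S3 into Lustre on first access.
fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=240000,  # GiB; sized above the 200 TB data set
    SubnetIds=["subnet-0123456789abcdef0"],
    LustreConfiguration={
        "DeploymentType": "PERSISTENT_1",
        "PerUnitStorageThroughput": 200,  # MB/s per TiB
        "ImportPath": "s3://ocean-survey-data",  # hypothetical bucket
    },
)
```

The HPC nodes then mount the file system with the open-source Lustre client and read the survey data at file-system latencies.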
#435 (Accuracy: 90% / 4 votes)
A company runs an online marketplace web application on AWS. The application serves hundreds of thousands of users during peak hours. The company needs a scalable, near-real-time solution to share the details of millions of financial transactions with several other internal applications. Transactions also need to be processed to remove sensitive data before being stored in a document database for low-latency retrieval.
What should a solutions architect recommend to meet these requirements?
  • A. Store the transactions data into Amazon DynamoDB. Set up a rule in DynamoDB to remove sensitive data from every transaction upon write. Use DynamoDB Streams to share the transactions data with other applications.
  • B. Stream the transactions data into Amazon Kinesis Data Firehose to store data in Amazon DynamoDB and Amazon S3. Use AWS Lambda integration with Kinesis Data Firehose to remove sensitive data. Other applications can consume the data stored in Amazon S3.
  • C. Stream the transactions data into Amazon Kinesis Data Streams. Use AWS Lambda integration to remove sensitive data from every transaction and then store the transactions data in Amazon DynamoDB. Other applications can consume the transactions data off the Kinesis data stream.
  • D. Store the batched transactions data in Amazon S3 as files. Use AWS Lambda to process every file and remove sensitive data before updating the files in Amazon S3. The Lambda function then stores the data in Amazon DynamoDB. Other applications can consume transaction files stored in Amazon S3.
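A sketch of option C's processing half: a Lambda function subscribed to the Kinesis data stream strips a sensitive field before writing each transaction to DynamoDB. The table and field names are hypothetical:

```python
import base64
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("transactions")  # hypothetical table name

def handler(event, context):
    # Invoked by a Kinesis event source mapping on the data stream;
    # record payloads arrive base64-encoded.
    for record in event["Records"]:
        txn = json.loads(base64.b64decode(record["kinesis"]["data"]))
        txn.pop("card_number", None)  # hypothetical sensitive field
        table.put_item(Item=txn)
```

Other internal applications attach to the same stream as independent consumers, so the sharing requirement costs no extra plumbing.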
#436 (Accuracy: 100% / 5 votes)
A solutions architect is designing the storage architecture for a new web application used for storing and viewing engineering drawings. All application components will be deployed on the AWS infrastructure. The application design must support caching to minimize the amount of time that users wait for the engineering drawings to load. The application must be able to store petabytes of data.
Which combination of storage and caching should the solutions architect use?
  • A. Amazon S3 with Amazon CloudFront
  • B. Amazon S3 Glacier with Amazon ElastiCache
  • C. Amazon Elastic Block Store (Amazon EBS) volumes with Amazon CloudFront
  • D. AWS Storage Gateway with Amazon ElastiCache
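Option A's pairing can be sketched with boto3; the bucket name is a placeholder, and the cache policy ID shown is the AWS managed CachingOptimized policy:

```python
import time
import boto3

cloudfront = boto3.client("cloudfront")

# CloudFront distribution that caches drawings served from an S3 origin.
cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),
        "Comment": "Engineering drawings cache",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [{
                "Id": "drawings-s3-origin",
                "DomainName": "engineering-drawings.s3.amazonaws.com",
                "S3OriginConfig": {"OriginAccessIdentity": ""},
            }],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "drawings-s3-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            "CachePolicyId": "658327ea-f89d-4fab-a63d-7e88639e58f6",
        },
    }
)
```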
#437 (Accuracy: 100% / 1 vote)
A company has primary and secondary data centers that are 500 miles (804.7 km) apart and interconnected with high-speed fiber-optic cable. The company needs a highly available and secure network connection between its data centers and a VPC on AWS for a mission-critical workload. A solutions architect must choose a connection solution that provides maximum resiliency.
Which solution meets these requirements?
  • A. Two AWS Direct Connect connections from the primary data center terminating at two Direct Connect locations on two separate devices
  • B. A single AWS Direct Connect connection from each of the primary and secondary data centers terminating at one Direct Connect location on the same device
  • C. Two AWS Direct Connect connections from each of the primary and secondary data centers terminating at two Direct Connect locations on two separate devices
  • D. A single AWS Direct Connect connection from each of the primary and secondary data centers terminating at one Direct Connect location on two separate devices
#438 (Accuracy: 100% / 1 vote)
A company needs to move data from an Amazon EC2 instance to an Amazon S3 bucket. The company must ensure that no API calls and no data are routed over the public internet. Only the EC2 instance is allowed to upload data to the S3 bucket.
Which solution will meet these requirements?
  • A. Create an interface VPC endpoint for Amazon S3 in the subnet where the EC2 instance is located. Attach a resource policy to the S3 bucket to only allow the EC2 instance's IAM role for access.
  • B. Create a gateway VPC endpoint for Amazon S3 in the Availability Zone where the EC2 instance is located. Attach appropriate security groups to the endpoint. Attach a resource policy to the S3 bucket to only allow the EC2 instance's IAM role for access.
  • C. Run the nslookup tool from inside the EC2 instance to obtain the private IP address of the S3 bucket's service API endpoint. Create a route in the VPC route table to provide the EC2 instance with access to the S3 bucket. Attach a resource policy to the S3 bucket to only allow the EC2 instance's IAM role for access.
  • D. Use the AWS provided, publicly available ip-ranges.json file to obtain the private IP address of the S3 bucket's service API endpoint. Create a route in the VPC route table to provide the EC2 instance with access to the S3 bucket. Attach a resource policy to the S3 bucket to only allow the EC2 instance's IAM role for access.
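Whichever endpoint type is chosen, the "only the EC2 instance" half of the requirement lands in the bucket policy. A sketch that denies every principal except the instance's IAM role; the role ARN and bucket name are placeholders, and a policy this broad would also lock out administrators, so it is an illustration only:

```python
import json
import boto3

s3 = boto3.client("s3")

# Deny all S3 actions on the bucket unless the caller is the instance role.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::upload-bucket",
            "arn:aws:s3:::upload-bucket/*",
        ],
        "Condition": {
            "ArnNotEquals": {
                "aws:PrincipalArn": "arn:aws:iam::111122223333:role/uploader"
            }
        },
    }],
}
s3.put_bucket_policy(Bucket="upload-bucket", Policy=json.dumps(policy))
```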
#439 (Accuracy: 100% / 1 vote)
A company runs a photo processing application that needs to frequently upload and download pictures from Amazon S3 buckets that are located in the same AWS Region. A solutions architect has noticed an increase in data transfer costs and needs to implement a solution to reduce these costs.
How can the solutions architect meet this requirement?
  • A. Deploy Amazon API Gateway into a public subnet and adjust the route table to route S3 calls through it.
  • B. Deploy a NAT gateway into a public subnet and attach an endpoint policy that allows access to the S3 buckets.
  • C. Deploy the application into a public subnet and allow it to route through an internet gateway to access the S3 buckets.
  • D. Deploy an S3 VPC gateway endpoint into the VPC and attach an endpoint policy that allows access to the S3 buckets.
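Option D's endpoint is one create_vpc_endpoint call: a gateway endpoint adds S3 routes to the chosen route tables, so same-Region S3 traffic no longer traverses a NAT gateway, which is where the transfer charges accrue. The IDs and bucket name are placeholders:

```python
import json
import boto3

ec2 = boto3.client("ec2")

# Gateway endpoint for S3 with an endpoint policy scoping access to the
# application's bucket.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::photo-bucket/*",  # hypothetical
        }],
    }),
)
```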
#440 (Accuracy: 90% / 4 votes)
A company has an event-driven application that invokes AWS Lambda functions up to 800 times each minute with varying runtimes. The Lambda functions access data that is stored in an Amazon Aurora MySQL DB cluster. The company is noticing connection timeouts as user activity increases. The database shows no signs of being overloaded. CPU, memory, and disk access metrics are all low.
Which solution will resolve this issue with the LEAST operational overhead?
  • A. Adjust the size of the Aurora MySQL nodes to handle more connections. Configure retry logic in the Lambda functions for attempts to connect to the database.
  • B. Set up Amazon ElastiCache for Redis to cache commonly read items from the database. Configure the Lambda functions to connect to ElastiCache for reads.
  • C. Add an Aurora Replica as a reader node. Configure the Lambda functions to connect to the reader endpoint of the DB cluster rather than to the writer endpoint.
  • D. Use Amazon RDS Proxy to create a proxy. Set the DB cluster as the target database. Configure the Lambda functions to connect to the proxy rather than to the DB cluster.
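Option D's proxy pools and reuses database connections, which is what absorbs bursty Lambda traffic without touching the database tier. A sketch with placeholder ARNs, subnets, and names:

```python
import boto3

rds = boto3.client("rds")

# Create the proxy; database credentials come from a Secrets Manager secret.
rds.create_db_proxy(
    DBProxyName="aurora-proxy",
    EngineFamily="MYSQL",
    Auth=[{
        "AuthScheme": "SECRETS",
        "SecretArn": "arn:aws:secretsmanager:us-east-1:111122223333:secret:db-creds",
        "IAMAuth": "DISABLED",
    }],
    RoleArn="arn:aws:iam::111122223333:role/rds-proxy-role",
    VpcSubnetIds=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
)

# Register the Aurora cluster as the proxy target; the Lambda functions
# then connect to the proxy endpoint instead of the cluster endpoint.
rds.register_db_proxy_targets(
    DBProxyName="aurora-proxy",
    DBClusterIdentifiers=["aurora-mysql-cluster"],
)
```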