Amazon AWS Certified Solutions Architect - Associate SAA-C02
#151 (Accuracy: 100% / 3 votes)
A research company runs experiments that are powered by a simulation application and a visualization application. The simulation application runs on Linux and outputs intermediate data to an NFS share every 5 minutes. The visualization application is a Windows desktop application that displays the simulation output and requires an SMB file system. The company maintains two synchronized file systems. This strategy is causing data duplication and inefficient resource usage. The company needs to migrate the applications to AWS without making code changes to either application.
Which solution will meet these requirements?
  • A. Migrate both applications to AWS Lambda. Create an Amazon S3 bucket to exchange data between the applications.
  • B. Migrate both applications to Amazon Elastic Container Service (Amazon ECS). Configure Amazon FSx File Gateway for storage.
  • C. Migrate the simulation application to Linux Amazon EC2 instances. Migrate the visualization application to Windows EC2 instances. Configure Amazon Simple Queue Service (Amazon SQS) to exchange data between the applications.
  • D. Migrate the simulation application to Linux Amazon EC2 instances. Migrate the visualization application to Windows EC2 instances. Configure Amazon FSx for NetApp ONTAP for storage.
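To make option D concrete: Amazon FSx for NetApp ONTAP can expose the same volume over both NFS and SMB, so neither application needs code changes. Below is a minimal boto3 sketch of provisioning such a file system; all IDs, names, and sizes are placeholder assumptions, not values from the question.

```python
import boto3

fsx = boto3.client("fsx", region_name="us-east-1")

# Create a Multi-AZ FSx for NetApp ONTAP file system.
# Subnet and security group IDs are placeholders.
fs = fsx.create_file_system(
    FileSystemType="ONTAP",
    StorageCapacity=1024,           # GiB
    SubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    OntapConfiguration={
        "DeploymentType": "MULTI_AZ_1",
        "ThroughputCapacity": 256,  # MBps
        "PreferredSubnetId": "subnet-aaaa1111",
    },
)
fs_id = fs["FileSystem"]["FileSystemId"]

# A storage virtual machine (SVM) serves both protocols.
svm = fsx.create_storage_virtual_machine(FileSystemId=fs_id, Name="simdata")

# One volume, mountable via NFS on Linux and via SMB on Windows.
fsx.create_volume(
    VolumeType="ONTAP",
    Name="results",
    OntapConfiguration={
        "StorageVirtualMachineId": svm["StorageVirtualMachine"]["StorageVirtualMachineId"],
        "JunctionPath": "/results",
        "SizeInMegabytes": 512000,
        "StorageEfficiencyEnabled": True,
    },
)
```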
#152 (Accuracy: 100% / 4 votes)
A company hosts an application on AWS. The application uses AWS Lambda functions and stores data in Amazon DynamoDB tables. The Lambda functions are connected to a VPC that does not have internet access. The traffic to access DynamoDB must not travel across the internet. The application must have write access to only specific DynamoDB tables.
Which combination of steps should a solutions architect take to meet these requirements? (Choose two.)
  • A. Attach a VPC endpoint policy for DynamoDB to allow write access to only the specific DynamoDB tables.
  • B. Attach a security group to the interface VPC endpoint to allow write access to only the specific DynamoDB tables.
  • C. Create a resource-based IAM policy to grant write access to only the specific DynamoDB tables. Attach the policy to the DynamoDB tables.
  • D. Create a gateway VPC endpoint for DynamoDB that is associated with the Lambda VPC. Ensure that the Lambda execution role can access the gateway VPC endpoint.
  • E. Create an interface VPC endpoint for DynamoDB that is associated with the Lambda VPC. Ensure that the Lambda execution role can access the interface VPC endpoint.
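Options A and D work together: a gateway VPC endpoint keeps DynamoDB traffic on the AWS network, and an endpoint policy scopes write access. A sketch of both pieces with boto3 follows; the region, VPC, route table, and table ARN are placeholder assumptions.

```python
import json
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Endpoint policy: allow writes only to one specific table.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": ["dynamodb:PutItem", "dynamodb:UpdateItem", "dynamodb:BatchWriteItem"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/Orders",
    }],
}

# Gateway endpoints attach to route tables, not to ENIs, so no
# security group is involved (which is why option B does not apply).
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0abc1234",
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    RouteTableIds=["rtb-0def5678"],
    PolicyDocument=json.dumps(policy),
)
```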
#153 (Accuracy: 100% / 3 votes)
A company's ecommerce website has unpredictable traffic and uses AWS Lambda functions to directly access a private Amazon RDS for PostgreSQL DB instance. The company wants to maintain predictable database performance and ensure that the Lambda invocations do not overload the database with too many connections.
What should a solutions architect do to meet these requirements?
  • A. Point the client driver at an RDS custom endpoint. Deploy the Lambda functions inside a VPC.
  • B. Point the client driver at an RDS proxy endpoint. Deploy the Lambda functions inside a VPC.
  • C. Point the client driver at an RDS custom endpoint. Deploy the Lambda functions outside a VPC.
  • D. Point the client driver at an RDS proxy endpoint. Deploy the Lambda functions outside a VPC.
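Option B relies on RDS Proxy, which pools and reuses database connections in front of the DB instance; the Lambda functions must run inside the VPC to reach the proxy's private endpoint. A sketch of creating the proxy with boto3, where all names, ARNs, and subnet IDs are placeholders:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# The proxy authenticates to PostgreSQL with credentials stored in
# Secrets Manager and multiplexes the Lambda connections.
rds.create_db_proxy(
    DBProxyName="ecommerce-proxy",
    EngineFamily="POSTGRESQL",
    Auth=[{
        "AuthScheme": "SECRETS",
        "SecretArn": "arn:aws:secretsmanager:us-east-1:123456789012:secret:db-creds",
        "IAMAuth": "DISABLED",
    }],
    RoleArn="arn:aws:iam::123456789012:role/rds-proxy-role",
    VpcSubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],
    RequireTLS=True,
)

# Register the DB instance as the proxy's target; the client driver
# then points at the proxy endpoint instead of the instance endpoint.
rds.register_db_proxy_targets(
    DBProxyName="ecommerce-proxy",
    DBInstanceIdentifiers=["ecommerce-postgres"],
)
```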
#154 (Accuracy: 100% / 1 vote)
A company wants to migrate its on-premises application to AWS. The application produces output files that vary in size from tens of gigabytes to hundreds of terabytes. The application data must be stored in a standard file system structure. The company wants a solution that scales automatically, is highly available, and requires minimum operational overhead.
Which solution will meet these requirements?
  • A. Migrate the application to run as containers on Amazon Elastic Container Service (Amazon ECS). Use Amazon S3 for storage.
  • B. Migrate the application to run as containers on Amazon Elastic Kubernetes Service (Amazon EKS). Use Amazon Elastic Block Store (Amazon EBS) for storage.
  • C. Migrate the application to Amazon EC2 instances in a Multi-AZ Auto Scaling group. Use Amazon Elastic File System (Amazon EFS) for storage.
  • D. Migrate the application to Amazon EC2 instances in a Multi-AZ Auto Scaling group. Use Amazon Elastic Block Store (Amazon EBS) for storage.
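Option C fits because Amazon EFS provides a standard POSIX file system that grows automatically into the petabyte range and is reachable from every instance in the Multi-AZ Auto Scaling group. A minimal boto3 sketch, with subnet and security group IDs as placeholder assumptions:

```python
import boto3

efs = boto3.client("efs", region_name="us-east-1")

# EFS scales automatically as files are added; Elastic throughput
# removes the need to provision performance up front.
fs = efs.create_file_system(
    PerformanceMode="generalPurpose",
    ThroughputMode="elastic",
    Encrypted=True,
    Tags=[{"Key": "Name", "Value": "app-output"}],
)

# One mount target per Availability Zone used by the Auto Scaling group.
for subnet in ["subnet-aaaa1111", "subnet-bbbb2222"]:
    efs.create_mount_target(
        FileSystemId=fs["FileSystemId"],
        SubnetId=subnet,
        SecurityGroups=["sg-0123456789abcdef0"],
    )
```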
#155 (Accuracy: 100% / 2 votes)
A company is developing a marketing communications service that targets mobile app users. The company needs to send confirmation messages with Short Message Service (SMS) to its users. The users must be able to reply to the SMS messages. The company must store the responses for a year for analysis.
What should a solutions architect do to meet these requirements?
  • A. Create an Amazon Connect contact flow to send the SMS messages. Use AWS Lambda to process the responses.
  • B. Build an Amazon Pinpoint journey. Configure Amazon Pinpoint to send events to an Amazon Kinesis data stream for analysis and archiving.
  • C. Use Amazon Simple Queue Service (Amazon SQS) to distribute the SMS messages. Use AWS Lambda to process the responses.
  • D. Create an Amazon Simple Notification Service (Amazon SNS) FIFO topic. Subscribe an Amazon Kinesis data stream to the SNS topic for analysis and archiving.
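Option B hinges on Amazon Pinpoint's support for two-way SMS plus its event stream integration: once a journey sends the messages, inbound replies surface as Pinpoint events that can be streamed to Kinesis for retention and analysis. A sketch of wiring the event stream, where the application ID and ARNs are placeholders:

```python
import boto3

pinpoint = boto3.client("pinpoint", region_name="us-east-1")

# Stream all Pinpoint events (including inbound SMS replies) to a
# Kinesis data stream, from which they can be archived for a year.
pinpoint.put_event_stream(
    ApplicationId="a1b2c3d4e5f6",  # placeholder Pinpoint project ID
    WriteEventStream={
        "DestinationStreamArn": "arn:aws:kinesis:us-east-1:123456789012:stream/sms-replies",
        "RoleArn": "arn:aws:iam::123456789012:role/pinpoint-kinesis-role",
    },
)
```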
#156 (Accuracy: 100% / 2 votes)
A company produces batch data that comes from different databases. The company also produces live stream data from network sensors and application APIs. The company needs to consolidate all the data into one place for business analytics. The company needs to process the incoming data and then stage the data in different Amazon S3 buckets. Teams will later run one-time queries and import the data into a business intelligence tool to show key performance indicators (KPIs).
Which combination of steps will meet these requirements with the LEAST operational overhead? (Choose two.)
  • A. Use Amazon Athena for one-time queries. Use Amazon QuickSight to create dashboards for KPIs.
  • B. Use Amazon Kinesis Data Analytics for one-time queries. Use Amazon QuickSight to create dashboards for KPIs.
  • C. Create custom AWS Lambda functions to move the individual records from the databases to an Amazon Redshift cluster.
  • D. Use an AWS Glue extract, transform, and load (ETL) job to convert the data into JSON format. Load the data into multiple Amazon OpenSearch Service (Amazon Elasticsearch Service) clusters.
  • E. Use blueprints in AWS Lake Formation to identify the data that can be ingested into a data lake. Use AWS Glue to crawl the source, extract the data, and load the data into Amazon S3 in Apache Parquet format.
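For the ad hoc side of options A and E, Amazon Athena can query the Parquet data that Lake Formation and AWS Glue landed in S3, with no cluster to operate. A sketch of a one-time query with boto3; the database, table, and bucket names are placeholder assumptions:

```python
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Serverless, pay-per-query SQL over the Glue Data Catalog tables
# that the Lake Formation blueprint ingested into S3 as Parquet.
resp = athena.start_query_execution(
    QueryString="SELECT region, SUM(revenue) FROM sales GROUP BY region",
    QueryExecutionContext={"Database": "analytics_lake"},
    ResultConfiguration={"OutputLocation": "s3://athena-results-bucket/"},
)
print(resp["QueryExecutionId"])
```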
#157 (Accuracy: 100% / 4 votes)
A media company collects and analyzes user activity data on premises. The company wants to migrate this capability to AWS. The user activity data store will continue to grow and will be petabytes in size. The company needs to build a highly available data ingestion solution that facilitates on-demand analytics of existing data and new data with SQL.
Which solution will meet these requirements with the LEAST operational overhead?
  • A. Send activity data to an Amazon Kinesis data stream. Configure the stream to deliver the data to an Amazon S3 bucket.
  • B. Send activity data to an Amazon Kinesis Data Firehose delivery stream. Configure the stream to deliver the data to an Amazon Redshift cluster.
  • C. Place activity data in an Amazon S3 bucket. Configure Amazon S3 to run an AWS Lambda function on the data as the data arrives in the S3 bucket.
  • D. Create an ingestion service on Amazon EC2 instances that are spread across multiple Availability Zones. Configure the service to forward data to an Amazon RDS Multi-AZ database.
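Option B's appeal is that Kinesis Data Firehose is fully managed: it batches the activity data, stages it in S3, and issues the Redshift COPY automatically, leaving on-demand SQL analytics to Redshift. A hedged boto3 sketch follows; the cluster endpoint, bucket, and credentials are placeholders, and in practice the password would come from Secrets Manager rather than code:

```python
import boto3

firehose = boto3.client("firehose", region_name="us-east-1")

# Firehose stages records in S3 and then loads Redshift with COPY;
# both steps are managed, so there is no ingestion fleet to run.
firehose.create_delivery_stream(
    DeliveryStreamName="user-activity",
    DeliveryStreamType="DirectPut",
    RedshiftDestinationConfiguration={
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-redshift-role",
        "ClusterJDBCURL": "jdbc:redshift://analytics.abc123.us-east-1.redshift.amazonaws.com:5439/dev",
        "CopyCommand": {"DataTableName": "user_activity"},
        "Username": "firehose_user",
        "Password": "placeholder-use-secrets-manager",
        "S3Configuration": {
            "RoleARN": "arn:aws:iam::123456789012:role/firehose-redshift-role",
            "BucketARN": "arn:aws:s3:::activity-staging-bucket",
        },
    },
)
```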
#158 (Accuracy: 100% / 1 vote)
A rapidly growing global ecommerce company is hosting its web application on AWS. The web application includes static content and dynamic content. The website stores online transaction processing (OLTP) data in an Amazon RDS database. The website's users are experiencing slow page loads.
Which combination of actions should a solutions architect take to resolve this issue? (Choose two.)
  • A. Configure an Amazon Redshift cluster.
  • B. Set up an Amazon CloudFront distribution.
  • C. Host the dynamic web content in Amazon S3.
  • D. Create a read replica for the RDS DB instance.
  • E. Configure a Multi-AZ deployment for the RDS DB instance.
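Options B and D address the two halves of the slowness separately: CloudFront caches the static content at the edge, while a read replica offloads OLTP read traffic from the primary (Multi-AZ, by contrast, improves availability rather than read performance). A sketch of the read replica half with boto3, with placeholder instance identifiers:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Asynchronous replica; point read-heavy page queries at its
# endpoint so the primary instance handles only writes.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="webapp-db-replica",
    SourceDBInstanceIdentifier="webapp-db",
    DBInstanceClass="db.r6g.large",
)
```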
#159 (Accuracy: 100% / 4 votes)
A company's application integrates with multiple software-as-a-service (SaaS) sources for data collection. The company runs Amazon EC2 instances to receive the data and to upload the data to an Amazon S3 bucket for analysis. The same EC2 instance that receives and uploads the data also sends a notification to the user when an upload is complete. The company has noticed slow application performance and wants to improve the performance as much as possible.
Which solution will meet these requirements with the LEAST operational overhead?
  • A. Create an Auto Scaling group so that EC2 instances can scale out. Configure an S3 event notification to send events to an Amazon Simple Notification Service (Amazon SNS) topic when the upload to the S3 bucket is complete.
  • B. Create an Amazon AppFlow flow to transfer data between each SaaS source and the S3 bucket. Configure an S3 event notification to send events to an Amazon Simple Notification Service (Amazon SNS) topic when the upload to the S3 bucket is complete.
  • C. Create an Amazon EventBridge (Amazon CloudWatch Events) rule for each SaaS source to send output data. Configure the S3 bucket as the rule's target. Create a second EventBridge (CloudWatch Events) rule to send events when the upload to the S3 bucket is complete. Configure an Amazon Simple Notification Service (Amazon SNS) topic as the second rule's target.
  • D. Create a Docker container to use instead of an EC2 instance. Host the containerized application on Amazon Elastic Container Service (Amazon ECS). Configure Amazon CloudWatch Container Insights to send events to an Amazon Simple Notification Service (Amazon SNS) topic when the upload to the S3 bucket is complete.
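In option B, Amazon AppFlow replaces the EC2 transfer code for each SaaS source, and the notification piece becomes a bucket-level event. A sketch of the S3-to-SNS wiring with boto3; the bucket and topic names are placeholder assumptions, and the topic's access policy must allow S3 to publish:

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# Publish an SNS notification whenever AppFlow finishes writing an
# object, removing the EC2 instance from the notification path.
s3.put_bucket_notification_configuration(
    Bucket="saas-landing-bucket",
    NotificationConfiguration={
        "TopicConfigurations": [{
            "TopicArn": "arn:aws:sns:us-east-1:123456789012:upload-complete",
            "Events": ["s3:ObjectCreated:*"],
        }]
    },
)
```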
#160 (Accuracy: 100% / 2 votes)
An ecommerce company stores terabytes of customer data in the AWS Cloud. The data contains personally identifiable information (PII). The company wants to use the data in three applications. Only one of the applications needs to process the PII. The PII must be removed before the other two applications process the data.
Which solution will meet these requirements with the LEAST operational overhead?
  • A. Store the data in an Amazon DynamoDB table. Create a proxy application layer to intercept and process the data that each application requests.
  • B. Store the data in an Amazon S3 bucket. Process and transform the data by using S3 Object Lambda before returning the data to the requesting application.
  • C. Process the data and store the transformed data in three separate Amazon S3 buckets so that each application has its own custom dataset. Point each application to its respective S3 bucket.
  • D. Process the data and store the transformed data in three separate Amazon DynamoDB tables so that each application has its own custom dataset. Point each application to its respective DynamoDB table.
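Option B keeps a single copy of the data: S3 Object Lambda intercepts GET requests from the two non-PII applications and redacts on the fly, so no duplicate datasets are maintained. A sketch of the redacting handler; the `strip_pii` logic is a placeholder assumption, and a real implementation might call Amazon Comprehend's PII detection instead:

```python
import urllib.request
import boto3

s3 = boto3.client("s3")

def strip_pii(text: str) -> str:
    # Placeholder transformation standing in for real PII redaction.
    return text.replace("ssn=", "ssn=REDACTED-")

def handler(event, context):
    ctx = event["getObjectContext"]

    # Fetch the original object via the presigned URL that S3 supplies
    # in the Object Lambda event.
    original = urllib.request.urlopen(ctx["inputS3Url"]).read().decode("utf-8")

    # Return the transformed object to the requesting application.
    s3.write_get_object_response(
        Body=strip_pii(original),
        RequestRoute=ctx["outputRoute"],
        RequestToken=ctx["outputToken"],
    )
    return {"statusCode": 200}
```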