Amazon AWS Certified Solutions Architect - Professional SAP-C01
#561 (Accuracy: 100% / 2 votes)
A company has a web application that allows users to upload short videos. The videos are stored on Amazon EBS volumes and analyzed by custom recognition software for categorization.
The website contains static content that has variable traffic with peaks in certain months.
The architecture consists of Amazon EC2 instances running in an Auto Scaling group for the web application and EC2 instances running in an Auto Scaling group to process an Amazon SQS queue.
The company wants to re-architect the application to reduce operational overhead using AWS managed services where possible and remove dependencies on third-party software.
Which solution meets these requirements?
  • A. Use Amazon ECS containers for the web application and Spot Instances for the Auto Scaling group that processes the SQS queue. Replace the custom software with Amazon Rekognition to categorize the videos.
  • B. Store the uploaded videos in Amazon EFS and mount the file system to the EC2 instances for the web application. Process the SQS queue with an AWS Lambda function that calls the Amazon Rekognition API to categorize the videos.
  • C. Host the web application in Amazon S3. Store the uploaded videos in Amazon S3. Use S3 event notifications to publish events to the SQS queue. Process the SQS queue with an AWS Lambda function that calls the Amazon Rekognition API to categorize the videos.
  • D. Use AWS Elastic Beanstalk to launch EC2 instances in an Auto Scaling group for the application and launch a worker environment to process the SQS queue. Replace the custom software with Amazon Rekognition to categorize the videos.
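
A minimal sketch of the processing step in option C, in Python with boto3. The handler name and message shapes are assumptions; S3 event notifications delivered through SQS arrive as JSON in each record body, and Rekognition Video label detection runs asynchronously:

    import json
    import boto3

    rekognition = boto3.client("rekognition")

    def handler(event, context):
        # Each SQS record wraps an S3 event notification for an uploaded video.
        for record in event["Records"]:
            s3_event = json.loads(record["body"])
            for s3_record in s3_event.get("Records", []):
                bucket = s3_record["s3"]["bucket"]["name"]
                key = s3_record["s3"]["object"]["key"]
                # Start an asynchronous label-detection job; results are
                # fetched later with get_label_detection.
                rekognition.start_label_detection(
                    Video={"S3Object": {"Bucket": bucket, "Name": key}}
                )
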
#562 (Accuracy: 100% / 1 vote)
A Solutions Architect has been asked to look at a company's Amazon Redshift cluster, which has quickly become an integral part of its technology and supports key business processes. The Solutions Architect must increase the reliability and availability of the cluster and provide options to ensure that, if an issue arises, the cluster can either continue to operate or be restored within four hours.
Which of the following solution options BEST addresses the business need in the most cost-effective manner?
  • A. Ensure that the Amazon Redshift cluster has been set up to make use of Auto Scaling groups with the nodes in the cluster spread across multiple Availability Zones.
  • B. Ensure that the Amazon Redshift cluster creation has been templated using AWS CloudFormation so it can easily be launched in another Availability Zone and data populated from the automated Redshift back-ups stored in Amazon S3.
  • C. Use Amazon Kinesis Data Firehose to collect the data ahead of ingestion into Amazon Redshift and create clusters using AWS CloudFormation in another region and stream the data to both clusters.
  • D. Create two identical Amazon Redshift clusters in different regions (one as the primary, one as the secondary). Use Amazon S3 cross-region replication from the primary to secondary region, which triggers an AWS Lambda function to populate the cluster in the secondary region.
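
A hedged sketch of the restore path behind option B, in Python with boto3 (all identifiers are hypothetical; in the option itself this call would be driven from a CloudFormation template rather than issued directly):

    import boto3

    redshift = boto3.client("redshift")

    # Restore a new cluster from an automated snapshot into a different
    # Availability Zone than the one that failed.
    redshift.restore_from_cluster_snapshot(
        ClusterIdentifier="analytics-restored",
        SnapshotIdentifier="rs:analytics-2024-01-01-00-00",
        AvailabilityZone="ap-northeast-1c",
    )
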
#563 (Accuracy: 100% / 3 votes)
A company manages multiple AWS accounts by using AWS Organizations. Under the root OU, the company has two OUs: Research and DataOps.
Because of regulatory requirements, all resources that the company deploys in the organization must reside in the ap-northeast-1 Region.
Additionally, EC2 instances that the company deploys in the DataOps OU must use a predefined list of instance types.
A solutions architect must implement a solution that applies these restrictions.
The solution must maximize operational efficiency and must minimize ongoing maintenance.
Which combination of steps will meet these requirements? (Choose two.)
  • A. Create an IAM role in one account under the DataOps OU. Use the ec2:InstanceType condition key in an inline policy on the role to restrict access to specific instance types.
  • B. Create an IAM user in all accounts under the root OU. Use the aws:RequestedRegion condition key in an inline policy on each user to restrict access to all AWS Regions except ap-northeast-1.
  • C. Create an SCP. Use the aws:RequestedRegion condition key to restrict access to all AWS Regions except ap-northeast-1. Apply the SCP to the root OU.
  • D. Create an SCP. Use the ec2:Region condition key to restrict access to all AWS Regions except ap-northeast-1. Apply the SCP to the root OU, the DataOps OU, and the Research OU.
  • E. Create an SCP. Use the ec2:InstanceType condition key to restrict access to specific instance types. Apply the SCP to the DataOps OU.
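
A hedged sketch of the SCP mechanics behind option C, in Python with boto3 (the policy name is hypothetical, and the NotAction carve-out for global services is an assumption that depends on the workload). The instance-type SCP in option E follows the same pattern with the ec2:InstanceType condition key:

    import json
    import boto3

    org = boto3.client("organizations")

    # Deny every action outside ap-northeast-1, exempting global services
    # that must remain reachable from any Region.
    region_scp = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Deny",
                "NotAction": ["iam:*", "organizations:*", "sts:*"],
                "Resource": "*",
                "Condition": {
                    "StringNotEquals": {"aws:RequestedRegion": "ap-northeast-1"}
                },
            }
        ],
    }

    org.create_policy(
        Name="restrict-to-ap-northeast-1",
        Description="Deny all Regions except ap-northeast-1",
        Type="SERVICE_CONTROL_POLICY",
        Content=json.dumps(region_scp),
    )
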
#564 (Accuracy: 100% / 2 votes)
A company had a third-party audit of its AWS environment. The auditor identified secrets in developer documentation and found secrets that were hardcoded into AWS CloudFormation templates throughout the environment. The auditor also identified security groups that allowed inbound traffic from the internet and outbound traffic to all destinations on the internet.

A solutions architect must design a solution that will encrypt all secrets and rotate the secrets every 90 days.
Additionally, the solutions architect must configure the security groups to prevent resources from being accessible from the internet.

Which solution will meet these requirements?
  • A. Use AWS Secrets Manager to create, store, and access secrets. Create new secrets in AWS CloudFormation by using the AWS::SecretsManager::Secret resource type. Reference the secrets in other templates by using Secrets Manager dynamic references. Configure automatic rotation in Secrets Manager to rotate the secrets every 90 days. Use AWS Firewall Manager to create a policy that identifies all security groups that allow inbound or outbound communications for any protocols to 0.0.0.0/0. Whenever the policy flags a security group in violation, remove the noncompliant rule from security groups.
  • B. Use AWS Systems Manager Parameter Store to create, store, and access secrets. Create new Parameter Store items in AWS CloudFormation by using the AWS::SSM::Parameter resource type. Access these items by using the AWS CLI or AWS APIs. Configure automatic rotation in Parameter Store to rotate the secrets every 90 days. Use AWS Firewall Manager to create a policy that identifies all security groups that allow inbound or outbound communications for any protocols to 0.0.0.0/0. Whenever the policy flags a security group in violation, remove the noncompliant rule from security groups.
  • C. Use AWS Secrets Manager to create, store, and access secrets. Create new secrets in AWS CloudFormation by using the AWS::SecretsManager::Secret resource type. Reference the secrets in other templates by using Secrets Manager dynamic references. Configure automatic rotation in Secrets Manager to rotate the secrets every 90 days. Use AWS Firewall Manager to create a policy that enforces a requirement for all security groups to explicitly deny inbound and outbound communications for all protocols to 0.0.0.0/0.
  • D. Use AWS Systems Manager Parameter Store to create, store, and access secrets. Create new Parameter Store items in AWS CloudFormation by using the AWS::SSM::Parameter resource type. Reference the items in other templates by using Systems Manager dynamic references. Configure automatic rotation in Parameter Store to rotate the secrets every 90 days. Use AWS Firewall Manager to create a policy that enforces a requirement for all security groups to explicitly deny inbound and outbound communications for all protocols to 0.0.0.0/0.
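
A minimal sketch of the rotation piece of options A and C, in Python with boto3 (the secret name and Lambda ARN are hypothetical):

    import boto3

    secretsmanager = boto3.client("secretsmanager")

    # Schedule automatic rotation every 90 days through a rotation Lambda.
    secretsmanager.rotate_secret(
        SecretId="prod/app/db-credentials",
        RotationLambdaARN="arn:aws:lambda:us-east-1:111122223333:function:rotate-db-secret",
        RotationRules={"AutomaticallyAfterDays": 90},
    )

    # Other CloudFormation templates then pull the secret in with a dynamic
    # reference instead of a hardcoded value, e.g.:
    # {{resolve:secretsmanager:prod/app/db-credentials:SecretString:password}}
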
#565 (Accuracy: 100% / 1 vote)
A company has an application that is deployed on Amazon EC2 instances behind an Application Load Balancer (ALB). The instances are part of an Auto Scaling group. The application has unpredictable workloads and frequently scales out and in. The company's development team wants to analyze application logs to find ways to improve the application's performance. However, the logs are no longer available after instances scale in.

Which solution will give the development team the ability to view the application logs after a scale-in event?
  • A. Enable access logs for the ALB. Store the logs in an Amazon S3 bucket.
  • B. Configure the EC2 instances to publish logs to Amazon CloudWatch Logs by using the unified CloudWatch agent.
  • C. Modify the Auto Scaling group to use a step scaling policy.
  • D. Instrument the application with AWS X-Ray tracing.
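
A minimal sketch of the unified CloudWatch agent configuration behind option B, written out from Python (the log file path and log group name are assumptions). Because the agent streams log lines to CloudWatch Logs continuously, they remain available after the instance is terminated by a scale-in event:

    import json

    agent_config = {
        "logs": {
            "logs_collected": {
                "files": {
                    "collect_list": [
                        {
                            "file_path": "/var/log/app/application.log",
                            "log_group_name": "/myapp/application",
                            "log_stream_name": "{instance_id}",
                        }
                    ]
                }
            }
        }
    }

    # The unified agent reads this file on startup.
    with open("/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json", "w") as f:
        json.dump(agent_config, f, indent=2)
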
#566 (Accuracy: 100% / 1 vote)
A company is using IoT devices on its manufacturing equipment. Data from the devices travels to the AWS Cloud through a connection to AWS IoT Core. An Amazon Kinesis data stream sends the data from AWS IoT Core to the company's processing application. The processing application stores data in Amazon S3.

A new requirement states that the company also must send the raw data to a third-party system by using an HTTP API.

Which solution will meet these requirements with the LEAST amount of development work?
  • A. Create a custom AWS Lambda function to consume records from the Kinesis data stream. Configure the Lambda function to call the third-party HTTP API.
  • B. Create an S3 event notification with Amazon EventBridge (Amazon CloudWatch Events) as the event destination. Create an EventBridge (CloudWatch Events) API destination for the third-party HTTP API.
  • C. Create an Amazon Kinesis Data Firehose delivery stream. Configure an HTTP endpoint destination that targets the third-party HTTP API. Configure the Kinesis data stream to send data to the Kinesis Data Firehose delivery stream.
  • D. Create an S3 event notification with an Amazon Simple Queue Service (Amazon SQS) queue as the event destination. Configure the SQS queue to invoke a custom AWS Lambda function. Configure the Lambda function to call the third-party HTTP API.
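
A hedged sketch of option C, in Python with boto3 (ARNs, endpoint URL, and bucket are hypothetical). Firehose consumes the existing Kinesis data stream as its source and delivers the raw records to the third-party HTTP API, with an S3 backup for failed deliveries:

    import boto3

    firehose = boto3.client("firehose")

    firehose.create_delivery_stream(
        DeliveryStreamName="iot-to-partner",
        DeliveryStreamType="KinesisStreamAsSource",
        KinesisStreamSourceConfiguration={
            "KinesisStreamARN": "arn:aws:kinesis:us-east-1:111122223333:stream/iot-data",
            "RoleARN": "arn:aws:iam::111122223333:role/firehose-source-role",
        },
        HttpEndpointDestinationConfiguration={
            "EndpointConfiguration": {"Url": "https://api.partner.example.com/ingest"},
            "RoleARN": "arn:aws:iam::111122223333:role/firehose-delivery-role",
            "S3BackupMode": "FailedDataOnly",
            # Required backup location for records the endpoint rejects.
            "S3Configuration": {
                "RoleARN": "arn:aws:iam::111122223333:role/firehose-delivery-role",
                "BucketARN": "arn:aws:s3:::iot-firehose-backup",
            },
        },
    )
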
#567 (Accuracy: 100% / 2 votes)
A large company has many business units. Each business unit has multiple AWS accounts for different purposes. The CIO of the company sees that each business unit has data that would be useful to share with other parts of the company. In total, there are about 10 PB of data that needs to be shared with users in 1,000 AWS accounts.
The data is proprietary, so some of it should only be available to users with specific job types. Some of the data is used for throughput-intensive workloads, such as simulations. The number of accounts changes frequently because of new initiatives, acquisitions, and divestitures.
A Solutions Architect has been asked to design a system that will allow for sharing data for use in AWS with all of the employees in the company.

Which approach will allow for secure data sharing in a scalable way?
  • A. Store the data in a single Amazon S3 bucket. Create an IAM role for every combination of job type and business unit that allows for appropriate read/write access based on object prefixes in the S3 bucket. The roles should have trust policies that allow the business unit's AWS accounts to assume their roles. Use IAM in each business unit's AWS account to prevent them from assuming roles for a different job type. Users get credentials to access the data by using AssumeRole from their business unit's AWS account. Users can then use those credentials with an S3 client.
  • B. Store the data in a single Amazon S3 bucket. Write a bucket policy that uses conditions to grant read and write access where appropriate, based on each user's business unit and job type. Determine the business unit with the AWS account accessing the bucket and the job type with a prefix in the IAM user's name. Users can access data by using IAM credentials from their business unit's AWS account with an S3 client.
  • C. Store the data in a series of Amazon S3 buckets. Create an application running in Amazon EC2 that is integrated with the company's identity provider (IdP) that authenticates users and allows them to download or upload data through the application. The application uses the business unit and job type information in the IdP to control what users can upload and download through the application. The users can access the data through the application's API.
  • D. Store the data in a series of Amazon S3 buckets. Create an AWS STS token vending machine that is integrated with the company's identity provider (IdP). When a user logs in, have the token vending machine attach an IAM policy to the assumed role that limits the user to downloading and/or uploading only the data the user is authorized to access. Users can get credentials by authenticating to the token vending machine's website or API and then use those credentials with an S3 client.
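
A minimal sketch of the core call a token vending machine in option D might make, in Python with boto3 (the role ARN, bucket, and prefix are hypothetical). The inline session policy scopes the vended credentials down to what the authenticated user's business unit and job type permit:

    import json
    import boto3

    sts = boto3.client("sts")

    session_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject"],
                "Resource": "arn:aws:s3:::bu-finance-data/analyst/*",
            }
        ],
    }

    # The returned credentials can do no more than the intersection of the
    # base role's permissions and this session policy.
    creds = sts.assume_role(
        RoleArn="arn:aws:iam::111122223333:role/data-access-base",
        RoleSessionName="finance-analyst-jdoe",
        Policy=json.dumps(session_policy),
    )["Credentials"]
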
#568 (Accuracy: 100% / 3 votes)
A Solutions Architect is responsible for redesigning a legacy Java application to improve its availability, data durability, and scalability. Currently, the application runs on a single high-memory Amazon EC2 instance. It accepts HTTP requests from upstream clients, adds them to an in-memory queue, and responds with a 200 status. A separate application thread reads items from the queue, processes them, and persists the results to an Amazon RDS MySQL instance. The processing time for each item takes 90 seconds on average, most of which is spent waiting on external service calls, but the application is written to process multiple items in parallel.
Traffic to this service is unpredictable. During periods of high load, items may sit in the internal queue for over an hour while the application processes the backlog.

In addition, the current system has issues with availability and data loss if the single application node fails.
Clients that access this service cannot be modified. They expect to receive a response to each HTTP request within 10 seconds; otherwise, they time out and retry the request.
Which approach would improve the availability and durability of the system while decreasing the processing latency and minimizing costs?
  • A. Create an Amazon API Gateway REST API that uses Lambda proxy integration to pass requests to an AWS Lambda function. Migrate the core processing code to a Lambda function and write a wrapper class that provides a handler method that converts the proxy events to the internal application data model and invokes the processing module.
  • B. Create an Amazon API Gateway REST API that uses a service proxy to put items in an Amazon SQS queue. Extract the core processing code from the existing application and update it to pull items from Amazon SQS instead of an in-memory queue. Deploy the new processing application to smaller EC2 instances within an Auto Scaling group that scales dynamically based on the approximate number of messages in the Amazon SQS queue.
  • C. Modify the application to use Amazon DynamoDB instead of Amazon RDS. Configure Auto Scaling for the DynamoDB table. Deploy the application within an Auto Scaling group with a scaling policy based on CPU utilization. Back the in-memory queue with a memory-mapped file to an instance store volume and periodically write that file to Amazon S3.
  • D. Update the application to use a Redis task queue instead of the in-memory queue. Build a Docker container image for the application. Create an Amazon ECS task definition that includes the application container and a separate container to host Redis. Deploy the new task definition as an ECS service using AWS Fargate, and enable Auto Scaling.
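
A minimal sketch of the decoupled worker in option B, in Python with boto3 (the queue URL is hypothetical and process_item stands in for the extracted core processing code). Because a message is deleted only after successful processing, a worker failure simply returns the item to the queue after its visibility timeout:

    import boto3

    sqs = boto3.client("sqs")
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/work-items"

    def process_item(body):
        # Placeholder for the core processing code extracted from the
        # legacy application.
        ...

    while True:
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL,
            MaxNumberOfMessages=10,
            WaitTimeSeconds=20,  # long polling keeps empty receives cheap
        )
        for msg in resp.get("Messages", []):
            process_item(msg["Body"])
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
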
#569 (Accuracy: 100% / 1 vote)
A company is using Amazon Aurora MySQL for a customer relationship management (CRM) application. The application requires frequent maintenance on the database and the Amazon EC2 instances on which the application runs. For AWS Management Console access, the system administrators authenticate against AWS Identity and Access Management (IAM) using an internal identity provider.
For database access, each system administrator has a user name and password that have previously been configured within the database.
A recent security audit revealed that the database passwords are not frequently rotated.
The company wants to replace the passwords with temporary credentials using the company's existing AWS access controls.
Which set of options will meet the company's requirements?
  • A. Create a new AWS Systems Manager Parameter Store entry for each database password. Enable parameter expiration to invoke an AWS Lambda function to perform password rotation by updating the parameter value. Create an IAM policy allowing each system administrator to retrieve their current password from the Parameter Store. Use the AWS CLI to retrieve credentials when connecting to the database.
  • B. Create a new AWS Secrets Manager entry for each database password. Configure password rotation for each secret using an AWS Lambda function in the same VPC as the database cluster. Create an IAM policy allowing each system administrator to retrieve their current password. Use the AWS CLI to retrieve credentials when connecting to the database.
  • C. Enable IAM database authentication on the database. Attach an IAM policy to each system administrator's role to map the role to the database user name. Install the Amazon Aurora SSL certificate bundle to the system administrators' certificate trust store. Use the AWS CLI to generate an authentication token used when connecting to the database.
  • D. Enable IAM database authentication on the database. Configure the database to use the IAM identity provider to map the administrator roles to the database user. Install the Amazon Aurora SSL certificate bundle to the system administrators' certificate trust store. Use the AWS CLI to generate an authentication token used when connecting to the database.
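
A minimal sketch of the token-generation step shared by options C and D, in Python with boto3 (the hostname and user name are hypothetical). The token is short-lived and replaces the stored password; the connection must use SSL:

    import boto3

    rds = boto3.client("rds")

    token = rds.generate_db_auth_token(
        DBHostname="crm-cluster.cluster-abc123.ap-northeast-1.rds.amazonaws.com",
        Port=3306,
        DBUsername="sysadmin",
    )
    # Pass `token` as the password to the MySQL client over SSL, with the
    # Aurora certificate bundle installed in the trust store.
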
#570 (Accuracy: 100% / 4 votes)
A company has a media catalog with metadata for each item in the catalog. Different types of metadata are extracted from the media items by an application running on AWS Lambda. Metadata is extracted according to a number of rules with the output stored in an Amazon ElastiCache for Redis cluster. The extraction process is done in batches and takes around 40 minutes to complete.
The update process is triggered manually whenever the metadata extraction rules change.

The company wants to reduce the amount of time it takes to extract metadata from its media catalog.
To achieve this, a solutions architect has split the single metadata extraction Lambda function into a Lambda function for each type of metadata.
Which additional steps should the solutions architect take to meet the requirements?
  • A. Create an AWS Step Functions workflow to run the Lambda functions in parallel. Create another Step Functions workflow that retrieves a list of media items and executes a metadata extraction workflow for each one.
  • B. Create an AWS Batch compute environment for each Lambda function. Configure an AWS Batch job queue for the compute environment. Create a Lambda function to retrieve a list of media items and write each item to the job queue.
  • C. Create an AWS Step Functions workflow to run the Lambda functions in parallel. Create a Lambda function to retrieve a list of media items and write each item to an Amazon SQS queue. Configure the SQS queue as an input to the Step Functions workflow.
  • D. Create a Lambda function to retrieve a list of media items and write each item to an Amazon SQS queue. Subscribe the metadata extraction Lambda functions to the SQS queue with a large batch size.
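
A hedged sketch of the inner workflow in option A, in Python with boto3 (the function ARNs, names, and the two branches shown are hypothetical). A Parallel state fans a single media item out to every per-metadata-type Lambda function at once:

    import json
    import boto3

    sfn = boto3.client("stepfunctions")

    definition = {
        "StartAt": "ExtractAllMetadata",
        "States": {
            "ExtractAllMetadata": {
                "Type": "Parallel",
                "End": True,
                "Branches": [
                    {
                        "StartAt": "ExtractLabels",
                        "States": {
                            "ExtractLabels": {
                                "Type": "Task",
                                "Resource": "arn:aws:lambda:us-east-1:111122223333:function:extract-labels",
                                "End": True,
                            }
                        },
                    },
                    {
                        "StartAt": "ExtractCaptions",
                        "States": {
                            "ExtractCaptions": {
                                "Type": "Task",
                                "Resource": "arn:aws:lambda:us-east-1:111122223333:function:extract-captions",
                                "End": True,
                            }
                        },
                    },
                ],
            }
        },
    }

    sfn.create_state_machine(
        name="metadata-extraction",
        definition=json.dumps(definition),
        roleArn="arn:aws:iam::111122223333:role/sfn-exec-role",
    )
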