Amazon AWS Certified Solutions Architect - Professional SAP-C01
#431 (Accuracy: 100% / 2 votes)
A company has a policy that all Amazon EC2 instances that are running a database must exist within the same subnets in a shared VPC. Administrators must follow security compliance requirements and are not allowed to directly log in to the shared account. All company accounts are members of the same organization in AWS Organizations. The number of accounts will rapidly increase as the company grows.
A solutions architect uses AWS Resource Access Manager to create a resource share in the shared account.

What is the MOST operationally efficient configuration to meet these requirements?
  • A. Add the VPC to the resource share. Add the account IDs as principals
  • B. Add all subnets within the VPC to the resource share. Add the account IDs as principals
  • C. Add all subnets within the VPC to the resource share. Add the organization as a principal
  • D. Add the VPC to the resource share. Add the organization as a principal
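A minimal boto3 sketch of what option C describes may make the trade-off concrete: AWS RAM shares subnets rather than whole VPCs, and naming the organization as the principal means every new member account picks up access automatically instead of requiring per-account updates. All IDs and ARNs below are placeholders.

```python
# Hypothetical sketch of option C: share every subnet in the shared VPC
# with the whole organization via AWS RAM. IDs/ARNs are placeholders.
# Note: ram.enable_sharing_with_aws_organization() must have been called
# once in the management account before organization-wide sharing works.
import boto3

ram = boto3.client("ram", region_name="us-east-1")

subnet_arns = [
    "arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0abc1234",
    "arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0def5678",
]

response = ram.create_resource_share(
    name="shared-db-subnets",
    resourceArns=subnet_arns,
    # Sharing with the organization ARN covers current AND future member
    # accounts -- no maintenance as the account count grows.
    principals=["arn:aws:organizations::111122223333:organization/o-exampleorg"],
    allowExternalPrincipals=False,  # keep the share inside the organization
)
print(response["resourceShare"]["resourceShareArn"])
```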
#432 (Accuracy: 100% / 3 votes)
A large company is migrating its on-premises applications to the AWS Cloud. All the company's AWS accounts belong to an organization in AWS Organizations.
Each application is deployed into its own VPC in separate AWS accounts.

The company decides to start the migration process by migrating the front-end web services while keeping the databases on premises.
The databases are configured with local domain names that are specific to the on-premises environment. The local domain names must be resolvable from the migrated web services.
Which solution will meet these requirements with the LEAST operational overhead?
  • A. Create a shared services VPC in a new AWS account. Deploy Amazon Route 53 outbound resolvers. For relevant on-premises domains, use the outbound resolver settings to create forwarding rules that point to the on-premises DNS servers. Share these rules with the other AWS accounts by using AWS Resource Access Manager.
  • B. Deploy Multi-AZ Amazon Route 53 outbound resolvers in each VPC. Create forwarding rules that point to on-premises DNS servers in local outbound resolvers for each VPC.
  • C. Create a shared services VPC in a new AWS account. Deploy Amazon EC2 instances that act as conditional forwarders inside the shared services VPC. Change the DHCP options set in each VPC to point to these forwarders as DNS servers. Create forwarding rules for relevant on-premises domains in these forwarders.
  • D. Create a shared services VPC in a new AWS account. Deploy Amazon Route 53 inbound resolvers. For relevant on-premises domains, create forwarding rules that point to on-premises DNS servers. Share these rules with other AWS accounts by using AWS Resource Access Manager.
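For a concrete picture of option A, here is a hedged boto3 sketch that assumes an outbound Route 53 Resolver endpoint already exists in the shared services VPC; it creates one forwarding rule for an on-premises domain and shares it organization-wide through AWS RAM. The endpoint ID, domain name, DNS server IP, and ARNs are placeholders.

```python
# Sketch of option A: one forwarding rule pointing at on-premises DNS,
# shared with the organization so each application VPC can associate it
# without running its own resolver endpoints. All values are placeholders.
import uuid
import boto3

r53r = boto3.client("route53resolver", region_name="us-east-1")

rule = r53r.create_resolver_rule(
    CreatorRequestId=str(uuid.uuid4()),             # idempotency token
    Name="forward-corp-example-com",
    RuleType="FORWARD",
    DomainName="corp.example.com",                  # on-premises domain
    TargetIps=[{"Ip": "10.0.100.10", "Port": 53}],  # on-premises DNS server
    ResolverEndpointId="rslvr-out-exampleendpoint", # existing outbound endpoint
)

# Share the rule through AWS RAM so other accounts avoid duplicating it.
ram = boto3.client("ram", region_name="us-east-1")
ram.create_resource_share(
    name="shared-dns-forwarding-rules",
    resourceArns=[rule["ResolverRule"]["Arn"]],
    principals=["arn:aws:organizations::111122223333:organization/o-exampleorg"],
)
```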
#433 (Accuracy: 100% / 1 votes)
A multimedia company needs to deliver its video-on-demand (VOD) content to its subscribers in a cost-effective way. The video files range in size from 1-15 GB and are typically viewed frequently for the first 6 months after creation, and then access decreases considerably. The company requires all video files to remain immediately available for subscribers. There are now roughly 30,000 files, and the company anticipates doubling that number over time.
What is the MOST cost-effective solution for delivering the company's VOD content?
  • A. Store the video files in an Amazon S3 bucket using S3 Intelligent-Tiering. Use Amazon CloudFront to deliver the content with the S3 bucket as the origin.
  • B. Use AWS Elemental MediaConvert and store the adaptive bitrate video files in Amazon S3. Configure an AWS Elemental MediaPackage endpoint to deliver the content from Amazon S3.
  • C. Store the video files in Amazon Elastic File System (Amazon EFS) Standard. Enable EFS lifecycle management to move the video files to EFS Infrequent Access after 6 months. Create an Amazon EC2 Auto Scaling group behind an Elastic Load Balancer to deliver the content from Amazon EFS.
  • D. Store the video files in Amazon S3 Standard. Create S3 Lifecycle rules to move the video files to S3 Standard-Infrequent Access (S3 Standard-IA) after 6 months and to S3 Glacier Deep Archive after 1 year. Use Amazon CloudFront to deliver the content with the S3 bucket as the origin.
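As an illustration of the mechanics in option A, the sketch below uploads a video directly into the S3 Intelligent-Tiering storage class, which moves each object between access tiers automatically while keeping it immediately retrievable (unlike the Glacier Deep Archive step in option D, which would break the "immediately available" requirement). Bucket and key names are placeholders.

```python
# Sketch: upload a video straight into Intelligent-Tiering so S3 handles
# the hot-then-cold access pattern automatically. Names are placeholders.
import boto3

s3 = boto3.client("s3")

s3.upload_file(
    Filename="episode-0001.mp4",
    Bucket="vod-content-example",
    Key="videos/episode-0001.mp4",
    ExtraArgs={"StorageClass": "INTELLIGENT_TIERING"},
)
```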
#434 (Accuracy: 100% / 1 votes)
A company is developing a gene reporting device that will collect genomic information to assist researchers with collecting large samples of data from a diverse population. The device will push 8 KB of genomic data every second to a data platform that will need to process and analyze the data and provide information back to researchers. The data platform must meet the following requirements:
✑ Provide near-real-time analytics of the inbound genomic data
✑ Ensure the data is flexible, parallel, and durable
✑ Deliver results of processing to a data warehouse
Which strategy should a solutions architect use to meet these requirements?
  • A. Use Amazon Kinesis Data Firehose to collect the inbound sensor data, analyze the data with Kinesis clients, and save the results to an Amazon RDS instance.
  • B. Use Amazon Kinesis Data Streams to collect the inbound sensor data, analyze the data with Kinesis clients, and save the results to an Amazon Redshift cluster using Amazon EMR.
  • C. Use Amazon S3 to collect the inbound device data, analyze the data from Amazon SQS with Kinesis, and save the results to an Amazon Redshift cluster.
  • D. Use an Amazon API Gateway to put requests into an Amazon SQS queue, analyze the data with an AWS Lambda function, and save the results to an Amazon Redshift cluster using Amazon EMR.
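To make option B's ingestion path concrete, here is a minimal producer sketch: the device writes its 8 KB payload to a Kinesis data stream, whose shards provide ordered, parallel, and durable ingestion for downstream consumers. The stream name and partition key are placeholders.

```python
# Sketch of a Kinesis Data Streams producer for the device's 8 KB/second
# payload. Stream and partition-key names are placeholders.
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

payload = b"\x00" * 8 * 1024  # stand-in for one second of genomic data

kinesis.put_record(
    StreamName="genomic-ingest",
    Data=payload,
    PartitionKey="device-0001",  # keeps one device's records ordered on a shard
)
```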
#435 (Accuracy: 100% / 2 votes)
A company operates a group of imaging satellites. The satellites stream data to one of the company's ground stations where processing creates about 5 GB of images per minute. This data is added to network-attached storage, where 2 PB of data are already stored.
The company runs a website that allows its customers to access and purchase the images over the Internet.
This website also runs at the ground station.
Usage analysis shows that customers are most likely to access images that have been captured in the last 24 hours.

The company would like to migrate the image storage and distribution system to AWS to reduce costs and increase the number of customers that can be served.

Which AWS architecture and migration strategy will meet these requirements?
  • A. Use multiple AWS Snowball appliances to migrate the existing imagery to Amazon S3. Create a 1 Gbps AWS Direct Connect connection from the ground station to AWS, and upload new data to Amazon S3 through the Direct Connect connection. Migrate the data distribution website to Amazon EC2 instances. By using Amazon S3 as an origin, have this website serve the data through Amazon CloudFront by creating signed URLs.
  • B. Create a 1 Gbps Direct Connect connection from the ground station to AWS. Use the AWS Command Line Interface to copy the existing data and upload new data to Amazon S3 over the Direct Connect connection. Migrate the data distribution website to EC2 instances. By using Amazon S3 as an origin, have this website serve the data through CloudFront by creating signed URLs.
  • C. Use multiple Snowball appliances to migrate the existing images to Amazon S3. Upload new data by regularly using Snowball appliances to upload data from the network-attached storage. Migrate the data distribution website to EC2 instances. By using Amazon S3 as an origin, have this website serve the data through CloudFront by creating signed URLs.
  • D. Use multiple Snowball appliances to migrate the existing images to an Amazon EFS file system. Create a 1 Gbps Direct Connect connection from the ground station to AWS, and upload new data by mounting the EFS file system over the Direct Connect connection. Migrate the data distribution website to EC2 instances. By using web servers in EC2 that mount the EFS file system as the origin, have this website serve the data through CloudFront by creating signed URLs.
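Some back-of-the-envelope arithmetic helps separate the options: the 2 PB backlog is impractical to push over a 1 Gbps link, while the ongoing 5 GB/min feed fits on it comfortably, which is why a Snowball-for-bulk plus Direct-Connect-for-new-data split (as in option A) is attractive.

```python
# Rough sizing math (decimal units) behind the Snowball + Direct Connect split.

LINK_BPS = 1e9                        # 1 Gbps Direct Connect link
backlog_bits = 2e15 * 8               # 2 PB of existing imagery
new_data_mbps = 5e9 * 8 / 60 / 1e6    # 5 GB/min of new imagery -> ~667 Mbps

days = backlog_bits / LINK_BPS / 86400
print(f"2 PB over 1 Gbps: ~{days:.0f} days")              # ~185 days -- use Snowball
print(f"New imagery needs ~{new_data_mbps:.0f} Mbps")     # fits the 1 Gbps link
```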
#436 (Accuracy: 90% / 4 votes)
A company has a media metadata extraction pipeline running on AWS. Notifications containing a reference to a file in Amazon S3 are sent to an Amazon Simple Notification Service (Amazon SNS) topic.
The pipeline consists of a number of AWS Lambda functions that are subscribed to the SNS topic. The Lambda functions retrieve the S3 file and write metadata to an Amazon RDS for PostgreSQL DB instance.
Users report that updates to the metadata are sometimes slow to appear or are lost.
During these times, the CPU utilization on the database is high and the number of failed Lambda invocations increases.
Which combination of actions should a solutions architect take to help resolve this issue? (Choose two.)
  • A. Enable message delivery status on the SNS topic. Configure the SNS topic delivery policy to enable retries with exponential backoff.
  • B. Create an Amazon Simple Queue Service (Amazon SQS) FIFO queue and subscribe the queue to the SNS topic. Configure the Lambda functions to consume messages from the SQS queue.
  • C. Create an RDS proxy for the RDS instance. Update the Lambda functions to connect to the RDS instance using the proxy.
  • D. Enable the RDS Data API for the RDS instance. Update the Lambda functions to connect to the RDS instance using the Data API.
  • E. Create an Amazon Simple Queue Service (Amazon SQS) standard queue for each Lambda function and subscribe the queues to the SNS topic. Configure the Lambda functions to consume messages from their respective SQS queue.
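A short boto3 sketch of what option E describes: each Lambda function gets its own standard SQS queue subscribed to the SNS topic, so messages that fail processing are retried rather than lost. (A queue policy allowing SNS to send messages is also required but omitted here; the topic ARN and names are placeholders.) Pairing this with an RDS Proxy, as in option C, pools the database connections that a bursty Lambda fleet would otherwise exhaust.

```python
# Sketch of option E: one standard SQS queue per Lambda function,
# subscribed to the existing SNS topic. All names/ARNs are placeholders.
import boto3

sqs = boto3.client("sqs")
sns = boto3.client("sns")

queue_url = sqs.create_queue(QueueName="metadata-extractor-queue")["QueueUrl"]
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Fan out the SNS notifications into the durable queue; the Lambda function
# then consumes from SQS, gaining retries and backpressure for free.
sns.subscribe(
    TopicArn="arn:aws:sns:us-east-1:111122223333:media-notifications",
    Protocol="sqs",
    Endpoint=queue_arn,
)
```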
#437 (Accuracy: 100% / 1 votes)
A company is currently running a production workload on AWS that is very I/O intensive. Its workload consists of a single tier with 10 c4.8xlarge instances, each with a 2 TB gp2 volume. The number of processing jobs has recently increased, and latency has increased as well. The team realizes that the workload is constrained by IOPS. For the application to perform efficiently, the IOPS must increase by 3,000 for each of the instances.
Which of the following designs will meet the performance goal MOST cost effectively?
  • A. Change the type of Amazon EBS volume from gp2 to io1 and set provisioned IOPS to 9,000.
  • B. Increase the size of the gp2 volumes in each instance to 3 TB.
  • C. Create a new Amazon EFS file system and move all the data to this new file system. Mount this file system on all 10 instances.
  • D. Create a new Amazon S3 bucket and move all the data to this new bucket. Allow each instance to access this S3 bucket and use it for storage.
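The sizing arithmetic behind option B is simple: gp2 volumes earn a baseline of 3 IOPS per GB (up to the per-volume cap), so growing each volume raises its IOPS without paying io1's per-provisioned-IOPS charge.

```python
# gp2 baseline IOPS scale with size at 3 IOPS per GB.
GP2_IOPS_PER_GB = 3

current = 2_000 * GP2_IOPS_PER_GB   # 2 TB gp2 -> 6,000 IOPS baseline
target = current + 3_000            # requirement: +3,000 IOPS per instance
needed_gb = target / GP2_IOPS_PER_GB

print(f"Current: {current} IOPS, target: {target} IOPS")
print(f"Resize each gp2 volume to {needed_gb:.0f} GB (3 TB)")
```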
#438 (Accuracy: 100% / 1 votes)
A company is migrating its three-tier web application from on-premises to the AWS Cloud. The company has the following requirements for the migration process:
✑ Ingest machine images from the on-premises environment.

✑ Synchronize changes from the on-premises environment to the AWS environment until the production cutover.

✑ Minimize downtime when executing the production cutover.

✑ Migrate the virtual machines' root volumes and data volumes.

Which solution will satisfy these requirements with minimal operational overhead?
  • A. Use AWS Server Migration Service (SMS) to create and launch a replication job for each tier of the application. Launch instances from the AMIs created by AWS SMS. After initial testing, perform a final replication and create new instances from the updated AMIs.
  • B. Create an AWS CLI VM Import/Export script to migrate each virtual machine. Schedule the script to run incrementally to maintain changes in the application. Launch instances from the AMIs created by VM Import/Export. Once testing is done, rerun the script to do a final import and launch the instances from the AMIs.
  • C. Use AWS Server Migration Service (SMS) to upload the operating system volumes. Use the AWS CLI import-snapshot command for the data volumes. Launch instances from the AMIs created by AWS SMS and attach the data volumes to the instances. After initial testing, perform a final replication, launch new instances from the replicated AMIs, and attach the data volumes to the instances.
  • D. Use AWS Application Discovery Service and AWS Migration Hub to group the virtual machines as an application. Use the AWS CLI VM Import/Export script to import the virtual machines as AMIs. Schedule the script to run incrementally to maintain changes in the application. Launch instances from the AMIs. After initial testing, perform a final virtual machine import and launch new instances from the AMIs.
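For reference, here is a hedged sketch of the replication job in option A using the (now legacy) AWS SMS API; the server ID, replication interval, and role name are placeholders.

```python
# Sketch of option A: one AWS SMS replication job per VM keeps incremental
# AMIs in sync until the production cutover. Values are placeholders.
from datetime import datetime, timezone
import boto3

sms = boto3.client("sms", region_name="us-east-1")

job = sms.create_replication_job(
    serverId="s-1234567890abcdef0",              # discovered on-premises VM
    seedReplicationTime=datetime.now(timezone.utc),
    frequency=12,                                # replicate every 12 hours
    roleName="sms-replication-role",
    numberOfRecentAmisToKeep=5,
)
print(job["replicationJobId"])
```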
#439 (Accuracy: 100% / 3 votes)
An ecommerce website running on AWS uses an Amazon RDS for MySQL DB instance with General Purpose SSD storage. The developers chose an appropriate instance type based on demand, and configured 100 GB of storage with a sufficient amount of free space.
The website was running smoothly for a few weeks until a marketing campaign launched.
On the second day of the campaign, users reported long wait times and timeouts. Amazon CloudWatch metrics indicated that both reads and writes to the DB instance were experiencing long response times. The CloudWatch metrics show 40% to 50% CPU and memory utilization, and sufficient free storage space is still available. The application server logs show no evidence of database connectivity issues.
What could be the root cause of the issue with the marketing campaign?
  • A. It exhausted the I/O credit balance due to provisioning low disk storage during the setup phase.
  • B. It caused the data in the tables to change frequently, requiring indexes to be rebuilt to optimize queries.
  • C. It exhausted the maximum number of allowed connections to the database instance.
  • D. It exhausted the network bandwidth available to the RDS for MySQL DB instance.
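The numbers behind option A line up with the symptoms: a 100 GB gp2 volume has only a 300 IOPS baseline (3 IOPS/GB) and bursts to 3,000 IOPS from a finite 5.4 million I/O credit bucket, after which throughput collapses back to the baseline even though CPU, memory, and free space all look healthy.

```python
# gp2 burst-credit math for a 100 GB volume.
SIZE_GB = 100
baseline = 3 * SIZE_GB            # 300 IOPS baseline
burst = 3_000                     # gp2 burst ceiling
bucket = 5_400_000                # initial I/O credit balance

minutes_at_full_burst = bucket / (burst - baseline) / 60
print(f"Baseline: {baseline} IOPS; full burst lasts ~{minutes_at_full_burst:.0f} min")
# ~33 minutes of sustained burst -- a marketing campaign drains it quickly.
```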
#440 (Accuracy: 100% / 3 votes)
A manufacturing company is growing exponentially and has secured funding to improve its IT infrastructure and ecommerce presence. The company's ecommerce platform consists of:
✑ Static assets primarily comprised of product images stored in Amazon S3.

✑ Amazon DynamoDB tables that store product information, user information, and order information.

✑ Web servers containing the application's front-end behind Elastic Load Balancers.

The company wants to set up a disaster recovery site in a separate Region.

Which combination of actions should the solutions architect take to implement the new design while meeting all the requirements? (Choose three.)
  • A. Enable Amazon Route 53 health checks to determine if the primary site is down, and route traffic to the disaster recovery site if there is an issue.
  • B. Enable Amazon S3 cross-Region replication on the buckets that contain static assets.
  • C. Enable multi-Region targets on the Elastic Load Balancer and target Amazon EC2 instances in both Regions.
  • D. Enable DynamoDB global tables to achieve a multi-Region table replication.
  • E. Enable Amazon CloudWatch and create CloudWatch alarms that route traffic to the disaster recovery site when application latency exceeds the desired threshold.
  • F. Enable Amazon S3 versioning on the source and destination buckets containing static assets to ensure there is a rollback version available in the event of data corruption.
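As an illustration of action D, the sketch below adds a replica Region to each DynamoDB table, assuming the tables use the current global tables version (2019.11.21); the table names and Regions are placeholders.

```python
# Sketch of action D: add a us-west-2 replica to each table so the DR site
# has a writable, fully replicated copy. Names/Regions are placeholders.
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

for table in ("products", "users", "orders"):
    dynamodb.update_table(
        TableName=table,
        ReplicaUpdates=[{"Create": {"RegionName": "us-west-2"}}],
    )
```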