Amazon AWS Certified Machine Learning - Specialty
#21 (Accuracy: 90% / 8 votes)
A Data Scientist needs to migrate an existing on-premises ETL process to the cloud. The current process runs at regular time intervals and uses PySpark to combine and format multiple large data sources into a single consolidated output for downstream processing.
The Data Scientist has been given the following requirements for the cloud solution:
✑ Combine multiple data sources.
✑ Reuse existing PySpark logic.
✑ Run the solution on the existing schedule.
✑ Minimize the number of servers that will need to be managed.

Which architecture should the Data Scientist use to build this solution?
  • A. Write the raw data to Amazon S3. Schedule an AWS Lambda function to submit a Spark step to a persistent Amazon EMR cluster based on the existing schedule. Use the existing PySpark logic to run the ETL job on the EMR cluster. Output the results to a "processed" location in Amazon S3 that is accessible for downstream use.
  • B. Write the raw data to Amazon S3. Create an AWS Glue ETL job to perform the ETL processing against the input data. Write the ETL job in PySpark to leverage the existing logic. Create a new AWS Glue trigger to trigger the ETL job based on the existing schedule. Configure the output target of the ETL job to write to a "processed" location in Amazon S3 that is accessible for downstream use.
  • C. Write the raw data to Amazon S3. Schedule an AWS Lambda function to run on the existing schedule and process the input data from Amazon S3. Write the Lambda logic in Python and implement the existing PySpark logic to perform the ETL process. Have the Lambda function output the results to a "processed" location in Amazon S3 that is accessible for downstream use.
  • D. Use Amazon Kinesis Data Analytics to stream the input data and perform real-time SQL queries against the stream to carry out the required transformations within the stream. Deliver the output results to a "processed" location in Amazon S3 that is accessible for downstream use.
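The serverless option (B) maps directly onto AWS Glue's PySpark job model, which lets the existing logic run largely unchanged with no servers to manage. A minimal sketch of what the migrated job could look like; the bucket names and the JSON input format are assumptions for illustration:

```python
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard AWS Glue job boilerplate: Glue wraps a SparkContext,
# so existing PySpark logic can be reused largely as-is.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Hypothetical raw input location; format depends on the real sources.
raw = glue_context.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={"paths": ["s3://example-bucket/raw/"]},
    format="json",
)

# Convert to a Spark DataFrame so the existing PySpark logic applies.
df = raw.toDF()
# ... existing combine/format transformations go here ...

# Write the consolidated output to the "processed" location.
df.write.mode("overwrite").parquet("s3://example-bucket/processed/")

job.commit()
```

A Glue trigger of type SCHEDULED can then fire this job on the existing cadence, covering the scheduling requirement without a persistent cluster.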
#22 (Accuracy: 100% / 3 votes)
A data scientist is building a linear regression model. The scientist inspects the dataset and notices that the mode of the distribution is lower than the median, and the median is lower than the mean.

Which data transformation will give the data scientist the ability to apply a linear regression model?
  • A. Exponential transformation
  • B. Logarithmic transformation
  • C. Polynomial transformation
  • D. Sinusoidal transformation
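Mode < median < mean is the signature of a right-skewed (positively skewed) distribution, and a logarithmic transformation compresses the long right tail toward the symmetry that linear regression assumes. A quick sketch of that effect on synthetic data (the lognormal sample is an assumption for illustration):

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)
# Lognormal data is right-skewed: mode < median < mean.
x = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)
print(np.median(x), x.mean())  # median < mean
print(skew(x))                 # strongly positive skew

# The log transform pulls the tail in; the result is roughly normal,
# which better suits linear regression's error assumptions.
print(skew(np.log(x)))         # close to 0
```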
#23 (Accuracy: 100% / 5 votes)
A Machine Learning Specialist is designing a scalable data storage solution for Amazon SageMaker. There is an existing TensorFlow-based model implemented as a train.py script that relies on static training data currently stored as TFRecords.
Which method of providing training data to Amazon SageMaker would meet the business requirements with the LEAST development overhead?
  • A. Use Amazon SageMaker script mode and use train.py unchanged. Point the Amazon SageMaker training invocation to the local path of the data without reformatting the training data.
  • B. Use Amazon SageMaker script mode and use train.py unchanged. Put the TFRecord data into an Amazon S3 bucket. Point the Amazon SageMaker training invocation to the S3 bucket without reformatting the training data.
  • C. Rewrite the train.py script to add a section that converts TFRecords to protobuf and ingests the protobuf data instead of TFRecords.
  • D. Prepare the data in the format accepted by Amazon SageMaker. Use AWS Glue or AWS Lambda to reformat and store the data in an Amazon S3 bucket.
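Option B carries the least overhead because SageMaker script mode runs train.py as-is and passes TFRecord files straight from S3 into the training container. A minimal sketch with the SageMaker Python SDK (the role ARN, bucket path, and framework version are assumptions):

```python
from sagemaker.tensorflow import TensorFlow

# Script mode: the unmodified train.py is the entry point.
estimator = TensorFlow(
    entry_point="train.py",
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # hypothetical role
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    framework_version="2.11",
    py_version="py39",
)

# TFRecords uploaded to S3 are passed as a channel without reformatting;
# inside the container they appear under /opt/ml/input/data/training.
estimator.fit({"training": "s3://example-bucket/tfrecords/"})
```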
#24 (Accuracy: 100% / 3 votes)
An agricultural company is interested in using machine learning to detect specific types of weeds in a 100-acre grassland field. Currently, the company uses tractor-mounted cameras to capture multiple images of the field as 10 × 10 grids. The company also has a large training dataset that consists of annotated images of popular weed classes like broadleaf and non-broadleaf docks.
The company wants to build a weed detection model that will detect specific types of weeds and the location of each type within the field.
Once the model is ready, it will be hosted on Amazon SageMaker endpoints. The model will perform real-time inferencing using the images captured by the cameras.
Which approach should a Machine Learning Specialist take to obtain accurate predictions?
  • A. Prepare the images in RecordIO format and upload them to Amazon S3. Use Amazon SageMaker to train, test, and validate the model using an image classification algorithm to categorize images into various weed classes.
  • B. Prepare the images in Apache Parquet format and upload them to Amazon S3. Use Amazon SageMaker to train, test, and validate the model using an object-detection single-shot multibox detector (SSD) algorithm.
  • C. Prepare the images in RecordIO format and upload them to Amazon S3. Use Amazon SageMaker to train, test, and validate the model using an object-detection single-shot multibox detector (SSD) algorithm.
  • D. Prepare the images in Apache Parquet format and upload them to Amazon S3. Use Amazon SageMaker to train, test, and validate the model using an image classification algorithm to categorize images into various weed classes.
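Detecting both the class and the location of each weed is an object detection task, and SageMaker's built-in Object Detection (SSD) algorithm takes RecordIO input, which is what option C describes. A sketch of the training setup; the region, bucket paths, and hyperparameter values are assumptions:

```python
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput

session = sagemaker.Session()
region = session.boto_region_name

# Built-in Object Detection algorithm image (SSD under the hood).
image_uri = sagemaker.image_uris.retrieve("object-detection", region)

estimator = Estimator(
    image_uri=image_uri,
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # hypothetical
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    output_path="s3://example-bucket/od-output/",
)
estimator.set_hyperparameters(
    base_network="resnet-50",
    num_classes=2,               # e.g. broadleaf vs. non-broadleaf docks
    num_training_samples=10000,  # assumed dataset size
)

# RecordIO channels, as the algorithm requires.
estimator.fit({
    "train": TrainingInput("s3://example-bucket/train.rec",
                           content_type="application/x-recordio"),
    "validation": TrainingInput("s3://example-bucket/val.rec",
                                content_type="application/x-recordio"),
})
```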
#25 (Accuracy: 100% / 3 votes)
A Machine Learning team runs its own training algorithm on Amazon SageMaker. The training algorithm requires external assets. The team needs to submit both its own algorithm code and algorithm-specific parameters to Amazon SageMaker.
What combination of services should the team use to build a custom algorithm in Amazon SageMaker? (Choose two.)
  • A. AWS Secrets Manager
  • B. AWS CodeStar
  • C. Amazon ECR
  • D. Amazon ECS
  • E. Amazon S3
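The two services that fit are Amazon ECR, which hosts the custom training container image, and Amazon S3, which holds the external assets and receives the model artifacts. A minimal sketch of wiring them together; the image URI, role ARN, and parameter values are assumptions:

```python
from sagemaker.estimator import Estimator

estimator = Estimator(
    # Custom algorithm image pushed to Amazon ECR (hypothetical URI).
    image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-algo:latest",
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # hypothetical
    instance_count=1,
    instance_type="ml.m5.xlarge",
    # Model artifacts and outputs land in Amazon S3.
    output_path="s3://example-bucket/output/",
    # Algorithm-specific parameters are submitted as hyperparameters.
    hyperparameters={"epochs": 10, "learning_rate": 0.01},
)

# External assets (e.g. training data) are also read from Amazon S3.
estimator.fit({"training": "s3://example-bucket/assets/"})
```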
#26 (Accuracy: 100% / 9 votes)
An interactive online dictionary wants to add a widget that displays words used in similar contexts. A Machine Learning Specialist is asked to provide word features for the downstream nearest neighbor model powering the widget.
What should the Specialist do to meet these requirements?
  • A. Create one-hot word encoding vectors.
  • B. Produce a set of synonyms for every word using Amazon Mechanical Turk.
  • C. Create word embedding vectors that store edit distance with every other word.
  • D. Download word embeddings pre-trained on a large corpus.
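Pre-trained word embeddings place words used in similar contexts close together in vector space, which is exactly the feature a nearest neighbor model needs, so option D fits with minimal effort. A quick sketch using gensim's downloader (treat the snippet as illustrative; the GloVe model name is one of the corpora gensim distributes):

```python
import gensim.downloader as api

# Embeddings pre-trained on a large corpus (GloVe: Wikipedia + Gigaword).
vectors = api.load("glove-wiki-gigaword-100")

# Nearest neighbors in embedding space are contextually similar words.
print(vectors.most_similar("dictionary", topn=5))
```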
#27 (Accuracy: 100% / 2 votes)
A Machine Learning Specialist is building a logistic regression model that will predict whether or not a person will order a pizza. The Specialist is trying to build the optimal model with an ideal classification threshold.
What model evaluation technique should the Specialist use to understand how different classification thresholds will impact the model's performance?
  • A. Receiver operating characteristic (ROC) curve
  • B. Misclassification rate
  • C. Root Mean Square Error (RMSE)
  • D. L1 norm
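The ROC curve is built by sweeping the classification threshold and plotting the true positive rate against the false positive rate at each setting, which is precisely the threshold analysis the question asks for. A minimal sketch with scikit-learn (the labels and scores are toy data):

```python
from sklearn.metrics import roc_curve, roc_auc_score

# Toy ground truth and predicted probabilities of ordering a pizza.
y_true = [0, 0, 1, 1, 0, 1, 1, 0]
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.9, 0.5]

# Each (fpr, tpr) pair corresponds to one candidate threshold.
fpr, tpr, thresholds = roc_curve(y_true, y_score)
for f, t, th in zip(fpr, tpr, thresholds):
    print(f"threshold={th:.2f}  TPR={t:.2f}  FPR={f:.2f}")

print("AUC:", roc_auc_score(y_true, y_score))
```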
#28 (Accuracy: 100% / 9 votes)
A Machine Learning Specialist is packaging a custom ResNet model into a Docker container so the company can leverage Amazon SageMaker for training. The Specialist is using Amazon EC2 P3 instances to train the model and needs to properly configure the Docker container to leverage the NVIDIA GPUs.
What does the Specialist need to do?
  • A. Bundle the NVIDIA drivers with the Docker image.
  • B. Build the Docker container to be NVIDIA-Docker compatible.
  • C. Organize the Docker container's file structure to execute on GPU instances.
  • D. Set the GPU flag in the Amazon SageMaker CreateTrainingJob request body.
#29 (Accuracy: 100% / 5 votes)
An employee found a video clip with audio on a company's social media feed. The language used in the video is Spanish. English is the employee's first language, and they do not understand Spanish. The employee wants to do a sentiment analysis.
What combination of services is the MOST efficient to accomplish the task?
  • A. Amazon Transcribe, Amazon Translate, and Amazon Comprehend
  • B. Amazon Transcribe, Amazon Comprehend, and Amazon SageMaker seq2seq
  • C. Amazon Transcribe, Amazon Translate, and Amazon SageMaker Neural Topic Model (NTM)
  • D. Amazon Transcribe, Amazon Translate, and Amazon SageMaker BlazingText
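Option A chains three managed services: Amazon Transcribe converts the Spanish audio to text, Amazon Translate converts that text to English, and Amazon Comprehend scores the sentiment. A hedged boto3 sketch of the last two steps (transcription runs as an asynchronous job, so assume spanish_text already holds its output; the sample sentence is invented):

```python
import boto3

# Assume Amazon Transcribe has already produced the Spanish transcript
# (start_transcription_job is asynchronous, so it is elided here).
spanish_text = "Me encanta este producto."

# Step 2: translate Spanish to English with Amazon Translate.
translate = boto3.client("translate")
english_text = translate.translate_text(
    Text=spanish_text,
    SourceLanguageCode="es",
    TargetLanguageCode="en",
)["TranslatedText"]

# Step 3: sentiment analysis with Amazon Comprehend.
comprehend = boto3.client("comprehend")
result = comprehend.detect_sentiment(Text=english_text, LanguageCode="en")
print(result["Sentiment"], result["SentimentScore"])
```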
#30 (Accuracy: 100% / 5 votes)
During mini-batch training of a neural network for a classification problem, a Data Scientist notices that training accuracy oscillates.
What is the MOST likely cause of this issue?
  • A. The class distribution in the dataset is imbalanced.
  • B. Dataset shuffling is disabled.
  • C. The batch size is too big.
  • D. The learning rate is very high.
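Oscillating training accuracy typically means the learning rate is so high that each update overshoots the minimum and bounces back and forth across it. A toy gradient descent on f(x) = x² makes the effect visible (the step sizes are arbitrary choices for illustration):

```python
# Gradient descent on f(x) = x^2, with gradient f'(x) = 2x.
def descend(lr, steps=8, x=1.0):
    path = [x]
    for _ in range(steps):
        x -= lr * 2 * x  # update rule: x <- x - lr * f'(x)
        path.append(x)
    return path

# lr = 0.1 converges smoothly toward 0.
print(descend(0.1))
# lr = 0.9 overshoots every step: the sign of x flips each update,
# the same behavior seen as oscillating accuracy during training.
print(descend(0.9))
```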