Amazon (SAP-C01) Exam Questions And Answers page 36
A media company is hosting a high-traffic news website on AWS. The website's front end is built solely with HTML and JavaScript. The company loads all dynamic content through asynchronous JavaScript requests to a dedicated backend infrastructure.
The front end runs on four Amazon EC2 instances as web servers. The dynamic backend runs in containers on an Amazon Elastic Container Service (Amazon ECS) cluster that uses an Auto Scaling group of EC2 instances. The ECS tasks are behind an Application Load Balancer (ALB).
Which solutions should a solutions architect recommend to optimize costs? (Choose two.)
Deploy an Amazon CloudFront distribution. Configure the distribution to use the ALB endpoint as the origin.
Migrate the front-end services to the ECS cluster. Increase the minimum number of nodes in the Auto Scaling group.
Turn on Auto Scaling for the front-end EC2 instances. Configure a new listener rule on the ALB to serve the front end.
Migrate the front end of the website to an Amazon S3 bucket. Deploy an Amazon CloudFront distribution. Set the S3 bucket as the distribution's origin.
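Since the front end is static HTML and JavaScript, one CloudFront distribution can serve both tiers once the front end sits in S3: the bucket as the default origin, and the ALB as the origin for dynamic paths. A minimal sketch of that distribution layout (bucket, ALB domain, and path pattern are hypothetical):

```python
# Sketch of a CloudFront distribution serving the static front end from S3
# and routing dynamic requests to the ALB. All names are hypothetical.
distribution_config = {
    "Origins": [
        {"Id": "s3-front-end", "DomainName": "news-front-end.s3.amazonaws.com"},
        {"Id": "alb-backend", "DomainName": "news-backend-alb.us-east-1.elb.amazonaws.com"},
    ],
    # Static assets are cached at the edge, removing the four web-server EC2 instances.
    "DefaultCacheBehavior": {
        "TargetOriginId": "s3-front-end",
        "ViewerProtocolPolicy": "redirect-to-https",
    },
    # Dynamic API calls pass through to the ALB-fronted ECS tasks.
    "CacheBehaviors": [
        {"PathPattern": "/api/*", "TargetOriginId": "alb-backend",
         "ViewerProtocolPolicy": "https-only"}
    ],
}

origin_ids = {o["Id"] for o in distribution_config["Origins"]}
assert distribution_config["DefaultCacheBehavior"]["TargetOriginId"] in origin_ids
```

The cost win comes from both answers together: edge caching offloads the static tier entirely, while the ALB origin still reaches the ECS backend.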
Designing enterprise-wide scalable operations on AWS
Implementing cost control strategies
A media company is serving video files stored in Amazon S3 using Amazon CloudFront. The development team needs access to the logs to diagnose faults and perform service monitoring. The log files from CloudFront may contain sensitive information about users.
The company uses a log processing service to remove sensitive information before making the logs available to the development team. The company has the following requirements for the unprocessed logs:
• The logs must be encrypted at rest and must be accessible by the log processing service only.
• Only the data protection team can control access to the unprocessed log files.
• AWS CloudFormation templates must be stored in AWS CodeCommit.
• AWS CodePipeline must be triggered on commit to perform updates made to CloudFormation templates.
• CloudFront is already writing the unprocessed logs to an Amazon S3 bucket, and the log processing service is operating against this S3 bucket.
Which combination of steps should a solutions architect take to meet the company's requirements? (Choose two.)
Create an AWS KMS key that allows the AWS Logs Delivery account to generate data keys for encryption.
Configure S3 default encryption to use server-side encryption with KMS managed keys (SSE-KMS) on the log storage bucket using the new KMS key.
Modify the KMS key policy to allow the log processing service to perform decrypt operations.
Create an AWS KMS key that allows the CloudFront service role to generate data keys for encryption.
Configure S3 default encryption to use server-side encryption with KMS managed keys (SSE-KMS) on the log storage bucket using the new KMS key.
Modify the KMS key policy to allow the log processing service to perform decrypt operations.
Configure S3 default encryption to use AWS KMS managed keys (SSE-KMS) on the log storage bucket using the AWS Managed S3 KMS key.
Modify the KMS key policy to allow the CloudFront service role to generate data keys for encryption.
Modify the KMS key policy to allow the log processing service to perform decrypt operations.
Create a new CodeCommit repository for the AWS KMS key template.
Create an IAM policy to allow commits to the new repository and attach it to the data protection team's users.
Create a new CodePipeline pipeline with a custom IAM role to perform KMS key updates using CloudFormation.
Modify the KMS key policy to allow the CodePipeline IAM role to modify the key policy.
Use the existing CodeCommit repository for the AWS KMS key template.
Create an IAM policy to allow commits to the new repository and attach it to the data protection team's users.
Modify the existing CodePipeline pipeline to use a custom IAM role and to perform KMS key updates using CloudFormation.
Modify the KMS key policy to allow the CodePipeline IAM role to modify the key policy.
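The three encryption requirements all land in the KMS key policy: the data protection team as key administrator, the log delivery service allowed to generate data keys, and only the log processing role allowed to decrypt. A sketch of such a policy (the account ID and role names are hypothetical placeholders):

```python
import json

# Sketch of a KMS key policy for the unprocessed CloudFront logs.
# Account ID and role names are hypothetical.
key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Only the data protection team administers the key.
            "Sid": "AllowKeyAdministration",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/DataProtectionTeam"},
            "Action": "kms:*",
            "Resource": "*",
        },
        {   # The AWS Logs Delivery service generates data keys to encrypt logs at rest.
            "Sid": "AllowLogDeliveryEncrypt",
            "Effect": "Allow",
            "Principal": {"Service": "delivery.logs.amazonaws.com"},
            "Action": ["kms:GenerateDataKey*"],
            "Resource": "*",
        },
        {   # Only the log processing service may decrypt the raw logs.
            "Sid": "AllowLogProcessorDecrypt",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/LogProcessingService"},
            "Action": ["kms:Decrypt"],
            "Resource": "*",
        },
    ],
}

print(json.dumps(key_policy, indent=2))
```

This is also why the AWS managed S3 key (option C) cannot satisfy the requirements: the policy of an AWS managed key cannot be edited, so the data protection team could never control access through it.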
Designing highly available, cost-efficient, fault-tolerant, scalable systems
Designing for security and compliance
A media storage application uploads user photos to Amazon S3 for processing. End users are reporting that some uploaded photos are not being processed properly. The Application Developers trace the logs and find that AWS Lambda is experiencing execution issues when thousands of users are on the system simultaneously. Issues are caused by:
• Limits around concurrent executions.
• The performance of Amazon DynamoDB when saving data.
Which actions can be taken to increase the performance and reliability of the application? (Choose two.)
Evaluate and adjust the read capacity units (RCUs) for the DynamoDB tables.
Evaluate and adjust the write capacity units (WCUs) for the DynamoDB tables.
Add an Amazon ElastiCache layer to increase the performance of Lambda functions.
Configure a dead letter queue that will reprocess failed or timed-out Lambda functions.
Use S3 Transfer Acceleration to provide lower-latency access to end users.
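Evaluating WCUs comes down to item size and write rate: one WCU covers one standard write per second for an item up to 1 KB, and larger items consume one WCU per started kilobyte. A rough sizing helper (the workload numbers in the example are made up for illustration):

```python
import math

def required_wcus(writes_per_second: float, item_size_kb: float) -> int:
    """Estimate provisioned WCUs: 1 WCU = one standard write/sec of up to 1 KB.
    Larger items consume one WCU per started kilobyte."""
    return math.ceil(writes_per_second * math.ceil(item_size_kb))

# e.g. 500 photo-metadata writes/sec at 2.5 KB per item -> 1500 WCUs
assert required_wcus(500, 2.5) == 1500
```

Pairing the adjusted WCUs with a dead letter queue covers both failure modes in the question: the table stops throttling under load, and any invocation that still fails is captured for reprocessing instead of being lost.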
Designing highly available, cost-efficient, fault-tolerant, scalable systems
Designing enterprise-wide scalable operations on AWS
A medical company is building a data lake on Amazon S3. The data must be encrypted in transit and at rest. The data must remain protected even if an S3 bucket is inadvertently made public.
Which combination of steps will meet these requirements? (Choose three.)
Ensure that each S3 bucket has a bucket policy that includes a Deny statement if the aws:SecureTransport condition is not present.
Create a CMK in AWS Key Management Service (AWS KMS). Turn on server-side encryption (SSE) on the S3 buckets, select SSE-KMS for the encryption type, and use the CMK as the key.
Ensure that each S3 bucket has a bucket policy that includes a Deny statement for PutObject actions if the request does not include an s3:x-amz-server-side-encryption: aws:kms condition.
Turn on server-side encryption (SSE) on the S3 buckets and select SSE-S3 for the encryption type.
Ensure that each S3 bucket has a bucket policy that includes a Deny statement for PutObject actions if the request does not include an s3:x-amz-server-side-encryption: AES256 condition.
Turn on AWS Config. Use the s3-bucket-public-read-prohibited, s3-bucket-public-write-prohibited, and s3-bucket-ssl-requests-only AWS Config managed rules to monitor the S3 buckets.
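The two Deny statements from the correct combination fit in a single bucket policy: one rejects any request made without TLS (encryption in transit), and one rejects uploads that are not encrypted with SSE-KMS (encryption at rest that survives an accidental public bucket, since readers still need KMS permissions). A sketch, with a hypothetical bucket name:

```python
# Sketch of a bucket policy combining the two Deny statements.
# The bucket name "medical-data-lake" is a hypothetical placeholder.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Enforce encryption in transit: deny any non-TLS request.
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": ["arn:aws:s3:::medical-data-lake",
                         "arn:aws:s3:::medical-data-lake/*"],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        },
        {   # Enforce SSE-KMS at rest: deny uploads without the KMS encryption header.
            "Sid": "DenyUnencryptedUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::medical-data-lake/*",
            "Condition": {"StringNotEquals":
                          {"s3:x-amz-server-side-encryption": "aws:kms"}},
        },
    ],
}

assert all(s["Effect"] == "Deny" for s in bucket_policy["Statement"])
```

SSE-S3 (AES256) would also encrypt at rest, but it does not protect a bucket that is made public: objects are decrypted transparently for any reader, which is why the SSE-KMS variant is the one that meets the third requirement.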
Designing highly available, cost-efficient, fault-tolerant, scalable systems
Designing for security and compliance
A medical company is running an application in the AWS Cloud. The application simulates the effect of medical drugs in development.
The application consists of two parts: configuration and simulation. The configuration part runs in AWS Fargate containers in an Amazon Elastic Container Service (Amazon ECS) cluster. The simulation part runs on large, compute optimized Amazon EC2 instances. Simulations can restart if they are interrupted.
The configuration part runs 24 hours a day with a steady load. The simulation part runs only for a few hours each night with a variable load. The company stores simulation results in Amazon S3, and researchers use the results for 30 days. The company must store simulations for 10 years and must be able to retrieve the simulations within 5 hours.
Which solution meets these requirements MOST cost-effectively?
Purchase an EC2 Instance Savings Plan to cover the usage for the configuration part. Run the simulation part by using EC2 Spot Instances. Create an S3 Lifecycle policy to transition objects that are older than 30 days to S3 Intelligent-Tiering.
Purchase an EC2 Instance Savings Plan to cover the usage for the configuration part and the simulation part. Create an S3 Lifecycle policy to transition objects that are older than 30 days to S3 Glacier.
Purchase Compute Savings Plans to cover the usage for the configuration part. Run the simulation part by using EC2 Spot Instances. Create an S3 Lifecycle policy to transition objects that are older than 30 days to S3 Glacier.
Purchase Compute Savings Plans to cover the usage for the configuration part. Purchase EC2 Reserved Instances for the simulation part. Create an S3 Lifecycle policy to transition objects that are older than 30 days to S3 Glacier Deep Archive.
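The storage half of the requirement maps cleanly onto an S3 Lifecycle rule: objects stay in the default storage class for the 30 days researchers use them, then move to S3 Glacier (whose standard retrieval completes in 3 to 5 hours, inside the 5-hour requirement, unlike Deep Archive's 12 hours), and expire at the end of the 10-year retention period. A sketch, with a hypothetical prefix:

```python
# Sketch of an S3 Lifecycle configuration for the simulation results.
# The "results/" prefix is a hypothetical placeholder.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "archive-simulation-results",
            "Status": "Enabled",
            "Filter": {"Prefix": "results/"},
            # After the 30-day active window, archive to Glacier
            # (standard retrieval: 3-5 hours, within the 5-hour requirement).
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            # Delete at the end of the ~10-year retention period.
            "Expiration": {"Days": 3650},
        }
    ],
}

rule = lifecycle_configuration["Rules"][0]
assert rule["Transitions"][0]["Days"] < rule["Expiration"]["Days"]
```

On the compute side, a Compute Savings Plan (not an EC2 Instance Savings Plan) covers Fargate, and the restartable nightly simulations are a textbook fit for Spot Instances.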
Designing highly available, cost-efficient, fault-tolerant, scalable systems
Designing enterprise-wide scalable operations on AWS
A mobile app has become very popular, and usage has gone from a few hundred to millions of users. Users capture and upload images of activities within a city, and provide ratings and recommendations. Data access patterns are unpredictable. The current application is hosted on Amazon EC2 instances behind an Application Load Balancer (ALB). The application is experiencing slowdowns and costs are growing rapidly.
Which changes should a solutions architect make to the application architecture to control costs and improve performance?
Create an Amazon CloudFront distribution and place the ALB behind the distribution. Store static content in Amazon S3 in an Infrequent Access storage class.
Store static content in an Amazon S3 bucket using the S3 Intelligent-Tiering storage class. Use an Amazon CloudFront distribution in front of the S3 bucket and the ALB.
Place AWS Global Accelerator in front of the ALB. Migrate the static content to Amazon EFS, and then run an AWS Lambda function to resize the images during the migration process.
Move the application code to AWS Fargate containers and swap out the EC2 instances with the Fargate containers.
Designing highly available, cost-efficient, fault-tolerant, scalable systems
Designing enterprise-wide scalable operations on AWS
A mobile gaming application publishes data continuously to Amazon Kinesis Data Streams. An AWS Lambda function processes records from the data stream and writes to an Amazon DynamoDB table. The DynamoDB table has an auto scaling policy enabled with the target utilization set to 70%.
For several minutes at the start and end of each day, there is a spike in traffic that often exceeds five times the normal load. The company notices the GetRecords.IteratorAgeMilliseconds metric of the Kinesis data stream temporarily spikes to over a minute for several minutes. The AWS Lambda function writes ProvisionedThroughputExceededException messages to Amazon CloudWatch Logs during these times, and some records are redirected to the dead letter queue. No exceptions are thrown by the Kinesis producer on the gaming application.
What change should the company make to resolve this issue?
Use Application Auto Scaling to set a scaling schedule to scale out write capacity on the DynamoDB table during predictable load spikes.
Use Amazon CloudWatch Events to monitor the dead letter queue and invoke a Lambda function to automatically retry failed records.
Reduce the DynamoDB table auto scaling policy's target utilization to 20% to more quickly respond to load spikes.
Increase the number of shards in the Kinesis data stream to increase throughput capacity.
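Because the spikes happen at predictable times of day, a scheduled scaling action can raise the table's minimum write capacity just before the spike rather than waiting for target tracking to react. A sketch of the parameters such an Application Auto Scaling scheduled action takes (the table name, cron time, and capacity figures are hypothetical):

```python
# Sketch of an Application Auto Scaling scheduled action that scales out
# the DynamoDB table's write capacity ahead of the predictable daily spike.
# Table name, schedule, and capacity values are hypothetical.
scheduled_action = {
    "ServiceNamespace": "dynamodb",
    "ResourceId": "table/GameEvents",
    "ScalableDimension": "dynamodb:table:WriteCapacityUnits",
    "ScheduledActionName": "pre-spike-scale-out",
    "Schedule": "cron(45 23 * * ? *)",  # shortly before the end-of-day spike
    # Raising MinCapacity forces capacity up before throttling can occur.
    "ScalableTargetAction": {"MinCapacity": 5000, "MaxCapacity": 20000},
}

action = scheduled_action["ScalableTargetAction"]
assert action["MinCapacity"] <= action["MaxCapacity"]
```

The symptoms in the question (rising iterator age, ProvisionedThroughputExceededException, no producer errors) point at the DynamoDB sink, not the stream, which is why adding shards or retrying from the dead letter queue would not remove the root cause.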
Designing highly available, cost-efficient, fault-tolerant, scalable systems
Designing enterprise-wide scalable operations on AWS
A mobile gaming company is expanding into the global market. The company's game servers run in the us-east-1 Region. The game's client application uses UDP to communicate with the game servers and needs to be able to connect to a set of static IP addresses.
The company wants its game to be accessible on multiple continents. The company also wants the game to maintain its network performance and global availability.
Which solution meets these requirements?
Provision an Application Load Balancer (ALB) in front of the game servers. Create an Amazon CloudFront distribution that has no geographical restrictions. Set the ALB as the origin. Perform DNS lookups for the cloudfront.net domain name. Use the resulting IP addresses in the game's client application.
Provision game servers in each AWS Region. Provision an Application Load Balancer in front of the game servers. Create an Amazon Route 53 latency-based routing policy for the game s client application to use with DNS lookups.
Provision game servers in each AWS Region. Provision a Network Load Balancer (NLB) in front of the game servers. Create an accelerator in AWS Global Accelerator, and configure endpoint groups in each Region. Associate the NLBs with the corresponding Regional endpoint groups. Point the game's client application to the Global Accelerator endpoints.
Provision game servers in each AWS Region. Provision a Network Load Balancer (NLB) in front of the game servers. Create an Amazon CloudFront distribution that has no geographical restrictions. Set the NLB as the origin. Perform DNS lookups for the cloudfront.net domain name. Use the resulting IP addresses in the game's client application.
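Global Accelerator is the only option here that provides static anycast IP addresses for UDP traffic (CloudFront and ALBs are HTTP-oriented, and CloudFront IPs are not static). A sketch of the accelerator layout for that option, with hypothetical ARNs, port, and Regions:

```python
# Sketch of the Global Accelerator layout: one accelerator with a UDP
# listener and one endpoint group per Region, each pointing at that
# Region's NLB. ARNs, port, and Regions are hypothetical placeholders.
accelerator = {
    "Name": "game-accelerator",
    "IpAddressType": "IPV4",  # clients receive two static anycast IP addresses
    "Listener": {"Protocol": "UDP",
                 "PortRanges": [{"FromPort": 3000, "ToPort": 3000}]},
    "EndpointGroups": [
        {"EndpointGroupRegion": "us-east-1",
         "EndpointConfigurations": [
             {"EndpointId": "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/game-nlb/abc"}]},
        {"EndpointGroupRegion": "eu-west-1",
         "EndpointConfigurations": [
             {"EndpointId": "arn:aws:elasticloadbalancing:eu-west-1:111122223333:loadbalancer/net/game-nlb/def"}]},
    ],
}

assert accelerator["Listener"]["Protocol"] == "UDP"
```

Traffic enters the AWS global network at the edge location nearest each player and is routed to the closest healthy Regional endpoint group, which covers both the performance and the availability requirement.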
Designing highly available, cost-efficient, fault-tolerant, scalable systems
Designing enterprise-wide scalable operations on AWS
A multimedia company needs to deliver its video-on-demand (VOD) content to its subscribers in a cost-effective way. The video files range in size from 1-15 GB and are typically viewed frequently for the first 6 months after creation, and then access decreases considerably. The company requires all video files to remain immediately available for subscribers. There are now roughly 30,000 files, and the company anticipates doubling that number over time.
What is the MOST cost-effective solution for delivering the company's VOD content?
Store the video files in an Amazon S3 bucket using S3 Intelligent-Tiering. Use Amazon CloudFront to deliver the content with the S3 bucket as the origin.
Use AWS Elemental MediaConvert and store the adaptive bitrate video files in Amazon S3. Configure an AWS Elemental MediaPackage endpoint to deliver the content from Amazon S3.
Store the video files in Amazon Elastic File System (Amazon EFS) Standard. Enable EFS lifecycle management to move the video files to EFS Infrequent Access after 6 months. Create an Amazon EC2 Auto Scaling group behind an Elastic Load Balancer to deliver the content from Amazon EFS.
Store the video files in Amazon S3 Standard. Create S3 Lifecycle rules to move the video files to S3 Standard-Infrequent Access (S3 Standard-IA) after 6 months and to S3 Glacier Deep Archive after 1 year. Use Amazon CloudFront to deliver the content with the S3 bucket as the origin.
Designing highly available, cost-efficient, fault-tolerant, scalable systems
Implementing cost control strategies
A multimedia company with a single AWS account is launching an application for a global user base. The application storage and bandwidth requirements are unpredictable. The application will use Amazon EC2 instances behind an Application Load Balancer as the web tier and will use Amazon DynamoDB as the database tier. The environment for the application must meet the following requirements:
• Low latency when accessed from any part of the world
• WebSocket support
• End-to-end encryption
• Protection against the latest security threats
• Managed layer 7 DDoS protection
Which actions should the solutions architect take to meet these requirements? (Choose two.)
Use Amazon Route 53 and Amazon CloudFront for content distribution. Use Amazon S3 to store static content.
Use Amazon Route 53 and AWS Transit Gateway for content distribution. Use an Amazon Elastic Block Store (Amazon EBS) volume to store static content.
Use AWS WAF with AWS Shield Advanced to protect the application.
Use AWS WAF and Amazon Detective to protect the application.
Use AWS Shield Standard to protect the application.
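The protection half of the correct combination pairs a CloudFront-scoped WAF web ACL (current threats via managed rules) with Shield Advanced (managed layer 7 DDoS protection, which Shield Standard does not provide). A sketch of such a web ACL; the ACL name is a hypothetical placeholder, and the rule group shown is the AWS managed common rule set:

```python
# Sketch of a WAF web ACL for the CloudFront distribution, using an AWS
# managed rule group. The ACL name is a hypothetical placeholder.
web_acl = {
    "Name": "global-app-acl",
    # A web ACL attached to a CloudFront distribution must use CLOUDFRONT scope.
    "Scope": "CLOUDFRONT",
    "DefaultAction": {"Allow": {}},
    "Rules": [
        {
            "Name": "aws-common-rules",
            "Priority": 0,
            # Managed rule group kept current by AWS against common threats.
            "Statement": {"ManagedRuleGroupStatement": {
                "VendorName": "AWS",
                "Name": "AWSManagedRulesCommonRuleSet"}},
            "OverrideAction": {"None": {}},
        }
    ],
}

assert web_acl["Scope"] == "CLOUDFRONT"
```

CloudFront also supplies the other requirements in the question: global low latency through edge locations, WebSocket support, and HTTPS from viewer to origin for end-to-end encryption.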
Designing highly available, cost-efficient, fault-tolerant, scalable systems
Designing for security and compliance