Amazon (SAP-C01) Exam Questions and Answers, Page 47
A solutions architect is designing a disaster recovery strategy for a three-tier application. The application has an RTO of 30 minutes and an RPO of 5 minutes for the data tier. The application and web tiers are stateless and leverage a fleet of Amazon EC2 instances. The data tier consists of a 50 TB Amazon Aurora database.
Which combination of steps satisfies the RTO and RPO requirements while optimizing costs? (Choose two.)
Deploy a hot standby of the application to another Region.
Create snapshots of the Aurora database every 5 minutes.
Create a cross-Region Aurora Replica of the database.
Create an AWS Backup job to replicate data to another Region.
Designing highly available, cost-efficient, fault-tolerant, scalable systems
Designing for security and compliance
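For reference, a minimal boto3 sketch of the cross-Region Aurora replica option; the cluster identifiers, account ID, ARN, and Region names below are hypothetical placeholders.

    import boto3

    # Create a cross-Region Aurora read replica in the recovery Region (us-west-2 here).
    # All identifiers and the source cluster ARN are hypothetical placeholders.
    rds_dr = boto3.client("rds", region_name="us-west-2")

    rds_dr.create_db_cluster(
        DBClusterIdentifier="app-data-tier-dr",  # replica cluster in the DR Region
        Engine="aurora-mysql",
        ReplicationSourceIdentifier=(
            "arn:aws:rds:us-east-1:123456789012:cluster:app-data-tier"  # primary cluster
        ),
    )

    # During a disaster, the replica is promoted to a standalone cluster to meet the RTO,
    # while asynchronous replication keeps the RPO in the range of seconds to minutes.
    rds_dr.promote_read_replica_db_cluster(DBClusterIdentifier="app-data-tier-dr")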
A Solutions Architect is designing a highly available and reliable solution for a cluster of Amazon EC2 instances.
The Solutions Architect must ensure that any EC2 instance within the cluster recovers automatically after a system failure. The solution must ensure that the recovered instance maintains the same IP address.
How can these requirements be met?
Create an AWS Lambda script to restart any EC2 instances that shut down unexpectedly.
Create an Auto Scaling group for each EC2 instance that has a minimum and maximum size of 1.
Create a new t2.micro instance to monitor the cluster instances. Configure the t2.micro instance to issue an aws ec2 reboot-instances command upon failure.
Create an Amazon CloudWatch alarm for the StatusCheckFailed_System metric, and then configure an EC2 action to recover the instance.
Designing highly available, cost-efficient, fault-tolerant, scalable systems
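For reference, a minimal boto3 sketch of the CloudWatch alarm approach with the built-in EC2 recover action; the instance ID and Region are hypothetical.

    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    # Alarm on the system status check and trigger the built-in EC2 recover action,
    # which moves the instance to healthy hardware while keeping its instance ID,
    # private IP addresses, and any Elastic IP. The instance ID below is a placeholder.
    cloudwatch.put_metric_alarm(
        AlarmName="ec2-system-recover-i-0123456789abcdef0",
        Namespace="AWS/EC2",
        MetricName="StatusCheckFailed_System",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        Statistic="Maximum",
        Period=60,
        EvaluationPeriods=2,
        Threshold=1,
        ComparisonOperator="GreaterThanOrEqualToThreshold",
        AlarmActions=["arn:aws:automate:us-east-1:ec2:recover"],
    )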
A Solutions Architect is designing a multi-account structure that has 10 existing accounts. The design must meet the following requirements:
• Consolidate all accounts into one organization.
• Allow full access to the Amazon EC2 service from the master account and the secondary accounts.
• Minimize the effort required to add additional secondary accounts.
Which combination of steps should be included in the solution? (Choose two.)
Create an organization from the master account. Send invitations to the secondary accounts from the master account. Accept the invitations and create an OU.
Create an organization from the master account. Send a join request to the master account from each secondary account. Accept the requests and create an OU.
Create a VPC peering connection between the master account and the secondary accounts. Accept the request for the VPC peering connection.
Create a service control policy (SCP) that enables full EC2 access, and attach the policy to the OU.
Create a full EC2 access policy and map the policy to a role in each account. Trust every other account to assume the role.
Designing highly available, cost-efficient, fault-tolerant, scalable systems
Designing enterprise-wide scalable operations on AWS
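For reference, a minimal boto3 sketch of the invitation-plus-SCP approach, run from the master (management) account; the account IDs, OU ID, and policy content are hypothetical, and SCPs require the organization to be created with all features enabled.

    import boto3, json

    org = boto3.client("organizations")

    # 1. Create the organization from the master account (all features, so SCPs work).
    org.create_organization(FeatureSet="ALL")

    # 2. Invite each existing secondary account (account IDs are placeholders).
    for account_id in ["111111111111", "222222222222"]:
        org.invite_account_to_organization(Target={"Id": account_id, "Type": "ACCOUNT"})

    # 3. After the invitations are accepted and the accounts are moved into an OU,
    #    attach an SCP allowing full EC2 access to that OU (the OU ID is a placeholder).
    scp = org.create_policy(
        Name="AllowFullEC2",
        Description="Allow full Amazon EC2 access",
        Type="SERVICE_CONTROL_POLICY",
        Content=json.dumps({
            "Version": "2012-10-17",
            "Statement": [{"Effect": "Allow", "Action": "ec2:*", "Resource": "*"}],
        }),
    )
    org.attach_policy(PolicyId=scp["Policy"]["PolicySummary"]["Id"], TargetId="ou-abcd-11111111")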
A solutions architect is designing a network for a new cloud deployment. Each account will need autonomy to modify route tables and make changes. Centralized and controlled egress internet connectivity is also needed. The cloud footprint is expected to grow to thousands of AWS accounts.
Which architecture will meet these requirements?
A centralized transit VPC with a VPN connection to a standalone VPC in each account. Outbound internet traffic will be controlled by firewall appliances.
A centralized shared VPC with a subnet for each account. Outbound internet traffic will be controlled through a fleet of proxy servers.
A shared services VPC to host central assets to include a fleet of firewalls with a route to the internet. Each spoke VPC will peer to the central VPC.
A shared transit gateway to which each VPC will be attached. Outbound internet access will route through a fleet of VPN-attached firewalls.
Designing highly available, cost-efficient, fault-tolerant, scalable systems
Designing enterprise-wide scalable operations on AWS
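For reference, a minimal boto3 sketch of the shared transit gateway option; the ASN, VPC ID, and subnet ID are hypothetical, and in practice the gateway would be shared to member accounts with AWS RAM so each account can attach its own VPCs and manage its own route tables.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Create a shared transit gateway in the central networking account (ASN is a placeholder).
    tgw = ec2.create_transit_gateway(
        Description="Shared hub for all spoke VPCs",
        Options={"AmazonSideAsn": 64512, "DefaultRouteTableAssociation": "enable"},
    )
    tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

    # Attach a spoke VPC (VPC and subnet IDs are placeholders). Each account points its
    # default route at the transit gateway, which forwards egress traffic to the
    # centralized fleet of firewalls for controlled internet access.
    ec2.create_transit_gateway_vpc_attachment(
        TransitGatewayId=tgw_id,
        VpcId="vpc-0abc1234def567890",
        SubnetIds=["subnet-0abc1234def567890"],
    )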
A Solutions Architect is designing a network solution for a company that has applications running in a data center in Northern Virginia. The applications in the company's data center require predictable network performance to applications running in a virtual private cloud (VPC) located in us-east-1 and in a secondary VPC in us-west-2 within the same account. The company's data center is colocated in an AWS Direct Connect facility that serves the us-east-1 Region. The company has already ordered an AWS Direct Connect connection, and a cross-connect has been established.
Which solution will meet the requirements at the LOWEST cost?
Provision a Direct Connect gateway and attach the virtual private gateway (VGW) for the VPC in us-east-1 and the VGW for the VPC in us-west-2. Create a private VIF on the Direct Connect connection and associate it to the Direct Connect gateway.
Create private VIFs on the Direct Connect connection for each of the company's VPCs in the us-east-1 and us-west-2 Regions. Configure the company's data center router to connect directly with the VPCs in those Regions via the private VIFs.
Deploy a transit VPC solution using Amazon EC2-based router instances in the us-east-1 Region. Establish IPsec VPN tunnels between the transit routers and the virtual private gateways (VGWs) located in the us-east-1 and us-west-2 Regions, which are attached to the company's VPCs in those Regions. Create a public VIF on the Direct Connect connection and establish IPsec VPN tunnels over the public VIF between the transit routers and the company's data center router.
Order a second Direct Connect connection to a Direct Connect facility with connectivity to the us-west-2 Region. Work with a partner to establish a network extension link over dark fiber from the Direct Connect facility to the company's data center. Establish private VIFs on the Direct Connect connections for each of the company's VPCs in the respective Regions. Configure the company's data center router to connect directly with the VPCs in those Regions via the private VIFs.
Designing enterprise-wide scalable operations on AWS
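For reference, a minimal boto3 sketch of the Direct Connect gateway option; the gateway name, ASN, VGW IDs, connection ID, and VLAN are all hypothetical placeholders.

    import boto3

    dx = boto3.client("directconnect", region_name="us-east-1")

    # Create a Direct Connect gateway (the Amazon-side ASN is a placeholder).
    gw = dx.create_direct_connect_gateway(
        directConnectGatewayName="corp-dx-gateway",
        amazonSideAsn=64512,
    )
    gw_id = gw["directConnectGateway"]["directConnectGatewayId"]

    # Associate the virtual private gateways of the VPCs in us-east-1 and us-west-2
    # (the VGW IDs are placeholders).
    for vgw_id in ["vgw-0east1111111111111", "vgw-0west2222222222222"]:
        dx.create_direct_connect_gateway_association(
            directConnectGatewayId=gw_id,
            virtualGatewayId=vgw_id,
        )

    # A single private VIF on the existing connection is then tied to the Direct
    # Connect gateway (connection ID, VLAN, and BGP ASN are placeholders).
    dx.create_private_virtual_interface(
        connectionId="dxcon-abc12345",
        newPrivateVirtualInterface={
            "virtualInterfaceName": "corp-private-vif",
            "vlan": 101,
            "asn": 65000,
            "directConnectGatewayId": gw_id,
        },
    )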
A solutions architect is designing a publicly accessible web application that is on an Amazon CloudFront distribution with an Amazon S3 website endpoint as the origin. When the solution is deployed, the website returns an Error 403: Access Denied message.
Which steps should the solutions architect take to correct the issue? (Choose two.)
Remove the S3 block public access option from the S3 bucket.
Remove the requester pays option from the S3 bucket.
Remove the origin access identity (OAI) from the CloudFront distribution.
Change the storage class from S3 Standard to S3 One Zone-Infrequent Access (S3 One Zone-IA).
Disable S3 object versioning.
Designing enterprise-wide scalable operations on AWS
Designing for security and compliance
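For reference, a minimal boto3 sketch of the two bucket-level fixes; the bucket name is hypothetical. An S3 website endpoint origin only supports anonymous requests, so the bucket cannot block public access or require the requester to pay.

    import boto3

    s3 = boto3.client("s3")
    bucket = "example-website-bucket"  # hypothetical bucket name

    # Remove the Block Public Access settings on the bucket so the website endpoint
    # can serve objects anonymously to CloudFront.
    s3.delete_public_access_block(Bucket=bucket)

    # Turn off Requester Pays; website endpoints cannot serve requester-pays buckets.
    s3.put_bucket_request_payment(
        Bucket=bucket,
        RequestPaymentConfiguration={"Payer": "BucketOwner"},
    )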
A Solutions Architect is designing a system that will collect and store data from 2,000 internet-connected sensors. Each sensor produces 1 KB of data every second. The data must be available for analysis within a few seconds of it being sent to the system and stored for analysis indefinitely.
Which is the MOST cost-effective solution for collecting and storing the data?
Put each record in Amazon Kinesis Data Streams. Use an AWS Lambda function to write each record to an object in Amazon S3 with a prefix that organizes the records by hour and hashes the record's key. Analyze recent data from Kinesis Data Streams and historical data from Amazon S3.
Put each record in Amazon Kinesis Data Streams. Set up Amazon Kinesis Data Firehose to read records from the stream and group them into objects in Amazon S3. Analyze recent data from Kinesis Data Streams and historical data from Amazon S3.
Put each record into an Amazon DynamoDB table. Analyze the recent data by querying the table. Use an AWS Lambda function connected to a DynamoDB stream to group records together, write them into objects in Amazon S3, and then delete the records from the DynamoDB table. Analyze recent data from the DynamoDB table and historical data from Amazon S3.
Put each record into an object in Amazon S3 with a prefix that organizes the records by hour and hashes the record's key. Use S3 lifecycle management to transition objects to S3 Standard-Infrequent Access to reduce storage costs. Analyze recent and historical data by accessing the data in Amazon S3.
Implementing cost control strategies
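For reference, a minimal boto3 sketch of the Kinesis Data Streams plus Kinesis Data Firehose option; the stream ARN, IAM role ARNs, bucket ARN, and buffering values are hypothetical.

    import boto3

    firehose = boto3.client("firehose", region_name="us-east-1")

    # Deliver records from an existing Kinesis data stream into S3 as batched objects.
    # The stream ARN, IAM role ARNs, and bucket ARN below are placeholders.
    firehose.create_delivery_stream(
        DeliveryStreamName="sensor-archive",
        DeliveryStreamType="KinesisStreamAsSource",
        KinesisStreamSourceConfiguration={
            "KinesisStreamARN": "arn:aws:kinesis:us-east-1:123456789012:stream/sensor-data",
            "RoleARN": "arn:aws:iam::123456789012:role/firehose-read-stream",
        },
        ExtendedS3DestinationConfiguration={
            "RoleARN": "arn:aws:iam::123456789012:role/firehose-write-s3",
            "BucketARN": "arn:aws:s3:::sensor-archive-bucket",
            # Buffer up to 5 MB or 300 seconds before writing each S3 object.
            "BufferingHints": {"SizeInMBs": 5, "IntervalInSeconds": 300},
        },
    )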
A solutions architect is designing the data storage and retrieval architecture for a new application that a company will be launching soon. The application is designed to ingest millions of small records per minute from devices all around the world. Each record is less than 4 KB in size and needs to be stored in a durable location where it can be retrieved with low latency. The data is ephemeral and the company is required to store the data for 120 days only, after which the data can be deleted.
The solutions architect calculates that, during the course of a year, the storage requirements would be about 10-15 TB.
Which storage strategy is the MOST cost-effective and meets the design requirements?
Design the application to store each incoming record as a single .csv file in an Amazon S3 bucket to allow for indexed retrieval. Configure a lifecycle policy to delete data older than 120 days.
Design the application to store each incoming record in an Amazon DynamoDB table properly configured for the scale. Configure the DynamoDB Time to Live (TTL) feature to delete records older than 120 days.
Design the application to store each incoming record in a single table in an Amazon RDS MySQL database. Run a nightly cron job that executes a query to delete any records older than 120 days.
Design the application to batch incoming records before writing them to an Amazon S3 bucket. Update the metadata for the object to contain the list of records in the batch and use the Amazon S3 metadata search feature to retrieve the data. Configure a lifecycle policy to delete the data after 120 days.
Designing highly available, cost-efficient, fault-tolerant, scalable systems
Designing enterprise-wide scalable operations on AWS
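For reference, a minimal boto3 sketch of the DynamoDB option with Time to Live; the table name, key schema, and attribute names are hypothetical.

    import boto3, time

    dynamodb = boto3.client("dynamodb", region_name="us-east-1")
    table = "device-records"  # hypothetical table name

    # On-demand table keyed by device ID and timestamp (the schema is a placeholder).
    dynamodb.create_table(
        TableName=table,
        AttributeDefinitions=[
            {"AttributeName": "device_id", "AttributeType": "S"},
            {"AttributeName": "recorded_at", "AttributeType": "N"},
        ],
        KeySchema=[
            {"AttributeName": "device_id", "KeyType": "HASH"},
            {"AttributeName": "recorded_at", "KeyType": "RANGE"},
        ],
        BillingMode="PAY_PER_REQUEST",
    )
    dynamodb.get_waiter("table_exists").wait(TableName=table)

    # Enable TTL on an epoch-seconds attribute; items expire about 120 days after write.
    dynamodb.update_time_to_live(
        TableName=table,
        TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
    )

    now = int(time.time())
    dynamodb.put_item(
        TableName=table,
        Item={
            "device_id": {"S": "sensor-0001"},
            "recorded_at": {"N": str(now)},
            "payload": {"S": "<up to 4 KB of record data>"},
            "expires_at": {"N": str(now + 120 * 24 * 3600)},  # delete after 120 days
        },
    )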
A Solutions Architect is designing the storage layer for a data warehousing application. The data files are large, but they have statically placed metadata at the beginning of each file that describes the size and placement of the file's index. The data files are read in by a fleet of Amazon EC2 instances that store the index size, index location, and other category information about the data file in a database. That database is used by Amazon EMR to group files together for deeper analysis.
What would be the MOST cost-effective, high availability storage solution for this workflow?
Store the data files in Amazon S3 and use Range GET for each file's metadata, then index the relevant data.
Store the data files in Amazon EFS mounted by the EC2 fleet and EMR nodes.
Store the data files on Amazon EBS volumes and allow the EC2 fleet and EMR to mount and unmount the volumes where they are needed.
Store the content of the data files in Amazon DynamoDB tables with the metadata, index, and data as their own keys.
Implementing cost control strategies
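For reference, a minimal boto3 sketch of the S3 Range GET option, assuming (hypothetically) that the static metadata occupies the first 1 KB of each file; the bucket and key names are placeholders.

    import boto3

    s3 = boto3.client("s3")

    # Fetch only the fixed-size metadata header at the start of the data file instead
    # of downloading the whole object. The bucket, key, and 1 KB header size are assumptions.
    response = s3.get_object(
        Bucket="warehouse-data-files",
        Key="datafile-0001.bin",
        Range="bytes=0-1023",
    )
    header = response["Body"].read()

    # The header would then be parsed for the index size and location and written to
    # the database that Amazon EMR uses to group files for deeper analysis.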
A Solutions Architect is designing the storage layer for a recently purchased application. The application will be running on Amazon EC2 instances and has the following layers and requirements:
• Data layer: A POSIX file system shared across many systems.
• Service layer: Static file content that requires block storage with more than 100k IOPS.
Which combination of AWS services will meet these needs? (Choose two.)
Data layer: Amazon S3
Data layer: Amazon EC2 Ephemeral Storage
Data layer: Amazon EFS
Service layer: Amazon EBS volumes with Provisioned IOPS
Service layer: Amazon EC2 Ephemeral Storage
Designing highly available, cost-efficient, fault-tolerant, scalable systems
Designing enterprise-wide scalable operations on AWS
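For reference, a minimal boto3 sketch pairing an EFS file system for the data layer with a Provisioned IOPS EBS volume for the service layer; the creation token, Availability Zone, volume size, and IOPS figure are hypothetical.

    import boto3

    efs = boto3.client("efs", region_name="us-east-1")
    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Data layer: a shared POSIX file system that many instances can mount concurrently.
    efs.create_file_system(
        CreationToken="data-layer-shared-fs",  # hypothetical idempotency token
        PerformanceMode="generalPurpose",
    )

    # Service layer: a Provisioned IOPS SSD volume sized for more than 100,000 IOPS
    # (values and Availability Zone are placeholders; sustaining this IOPS level also
    # depends on attaching the volume to an instance type that supports it).
    ec2.create_volume(
        AvailabilityZone="us-east-1a",
        VolumeType="io2",
        Size=500,       # GiB
        Iops=100000,
    )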