AWS Certified Solutions Architect - Professional (SAP-C01) Exam Questions and Answers, Page 3
A company experienced a breach of highly confidential personal information due to permission issues on an Amazon S3 bucket. The Information Security team has tightened the bucket policy to restrict access. Additionally, to be better prepared for future attacks, these requirements must be met:
• Identify remote IP addresses that are accessing the bucket objects.
• Receive alerts when the security policy on the bucket is changed.
• Remediate the policy changes automatically.
Which strategies should the Solutions Architect use?
Use Amazon Athena with S3 access logs to identify remote IP addresses. Use AWS Config rules with AWS Systems Manager Automation to automatically remediate S3 bucket policy changes. Use Amazon SNS with AWS Config rules for alerts.
Use S3 access logs with Amazon Elasticsearch Service and Kibana to identify remote IP addresses. Use an Amazon Inspector assessment template to automatically remediate S3 bucket policy changes. Use Amazon SNS for alerts.
Use Amazon Macie with an S3 bucket to identify access patterns and remote IP addresses. Use AWS Lambda with Macie to automatically remediate S3 bucket policy changes. Use Macie automatic alerting capabilities for alerts.
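For reference only, a minimal boto3 sketch of the Athena-based approach in the first option, assuming S3 server access logging is already enabled and the logs are registered in Athena under a hypothetical database and table (s3_access_logs_db.bucket_logs); the results location is also a placeholder:

```python
import time

import boto3

athena = boto3.client("athena")

# Hypothetical database/table for S3 server access logs; substitute the real names.
DATABASE = "s3_access_logs_db"
QUERY = """
SELECT remoteip, COUNT(*) AS requests
FROM s3_access_logs_db.bucket_logs
WHERE operation = 'REST.GET.OBJECT'
GROUP BY remoteip
ORDER BY requests DESC
"""

def top_remote_ips(output_location="s3://example-athena-results/"):
    # Start the query, poll until it finishes, then return the result rows.
    execution = athena.start_query_execution(
        QueryString=QUERY,
        QueryExecutionContext={"Database": DATABASE},
        ResultConfiguration={"OutputLocation": output_location},
    )
    query_id = execution["QueryExecutionId"]
    while True:
        status = athena.get_query_execution(QueryExecutionId=query_id)
        state = status["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(2)
    if state != "SUCCEEDED":
        raise RuntimeError(f"Athena query ended in state {state}")
    return athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
```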
Designing highly available, cost-efficient, fault-tolerant, scalable systems
Implementing cost control strategies
A company had a security event whereby an Amazon S3 bucket with sensitive information was made public. Company policy is to never have public S3 objects, and the Compliance team must be informed immediately when any public objects are identified.
How can the presence of a public S3 object be detected, set to trigger alarm notifications, and automatically remediated in the future? (Choose two.)
Turn on object-level logging for Amazon S3. Turn on Amazon S3 event notifications to notify by using an Amazon SNS topic when a PutObject API call is made with a public-read permission.
Configure an Amazon CloudWatch Events rule that invokes an AWS Lambda function to secure the S3 bucket.
Use the S3 bucket permissions for AWS Trusted Advisor and configure a CloudWatch event to notify by using Amazon SNS.
Turn on object-level logging for Amazon S3. Configure a CloudWatch event to notify by using an SNS topic when a PutObject API call with public-read permission is detected in the AWS CloudTrail logs.
Schedule a recursive Lambda function to regularly change all object permissions inside the S3 bucket.
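As an illustration of the Lambda-based remediation that the second option describes, a minimal sketch of a function a CloudWatch Events (EventBridge) rule could invoke; the event shape assumes a rule matched against S3 API calls recorded by CloudTrail, and applying a public access block is one possible reading of "secure the S3 bucket":

```python
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # CloudTrail-based CloudWatch Events deliver the API call details under event["detail"].
    bucket = event["detail"]["requestParameters"]["bucketName"]

    # Block every form of public access on the offending bucket.
    s3.put_public_access_block(
        Bucket=bucket,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )
    return {"remediated_bucket": bucket}
```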
Designing highly available, cost-efficient, fault-tolerant, scalable systems
Designing for security and compliance
A company had a tight deadline to migrate its on-premises environment to AWS. It moved over its Microsoft SQL Server and Microsoft Windows Server machines using the VM Import/Export service and rebuilt other applications natively in the cloud. The teams created databases both on Amazon EC2 and on Amazon RDS. Each team in the company was responsible for migrating its applications, and the teams created individual accounts to isolate resources. The company did not have much time to consider costs, but it would now like suggestions on reducing its AWS spend.
Which steps should a Solutions Architect take to reduce costs?
Enable AWS Business Support and review AWS Trusted Advisor's cost checks. Create Amazon EC2 Auto Scaling groups for applications that experience fluctuating demand. Save AWS Simple Monthly Calculator reports in Amazon S3 for trend analysis. Create a master account under Organizations and have teams join for consolidated billing.
Enable Cost Explorer and AWS Business Support. Reserve Amazon EC2 and Amazon RDS DB instances. Use Amazon CloudWatch and AWS Trusted Advisor for monitoring and to receive cost-savings suggestions. Create a master account under Organizations and have teams join for consolidated billing.
Create an AWS Lambda function that changes the instance size based on Amazon CloudWatch alarms. Reserve instances based on AWS Simple Monthly Calculator suggestions. Have an AWS Well-Architected framework review and apply recommendations. Create a master account under Organizations and have teams join for consolidated billing.
Create a budget and monitor for costs exceeding the budget. Create Amazon EC2 Auto Scaling groups for applications that experience fluctuating demand. Create an AWS Lambda function that changes instance sizes based on Amazon CloudWatch alarms. Have each team upload their bill to an Amazon S3 bucket for analysis of team spending. Use Spot Instances on nightly batch processing jobs.
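As a hedged illustration of the Reserved Instance guidance in the second option, a sketch that pulls EC2 reservation purchase recommendations from the Cost Explorer API; Cost Explorer must already be enabled, and the lookback period, term, and payment option shown are assumptions:

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer

def ec2_ri_recommendations():
    # Ask Cost Explorer which EC2 Reserved Instances it would recommend buying.
    response = ce.get_reservation_purchase_recommendation(
        Service="Amazon Elastic Compute Cloud - Compute",
        LookbackPeriodInDays="SIXTY_DAYS",
        TermInYears="ONE_YEAR",
        PaymentOption="NO_UPFRONT",
    )
    for recommendation in response.get("Recommendations", []):
        for detail in recommendation.get("RecommendationDetails", []):
            instance = detail.get("InstanceDetails", {}).get("EC2InstanceDetails", {})
            print(
                detail.get("RecommendedNumberOfInstancesToPurchase"),
                instance.get("InstanceType"),
                detail.get("EstimatedMonthlySavingsAmount"),
            )

if __name__ == "__main__":
    ec2_ri_recommendations()
```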
Implementing cost control strategies
A company has 50 AWS accounts that are members of an organization in AWS Organizations. Each account contains multiple VPCs. The company wants to use AWS Transit Gateway to establish connectivity between the VPCs in each member account. Each time a new member account is created, the company wants to automate the process of creating a new VPC and a transit gateway attachment.
Which combination of steps will meet these requirements? (Choose three.)
From the management account, share the transit gateway with member accounts by using AWS Resource Access Manager.
From the management account, share the transit gateway with member accounts by using an AWS Organizations SCP.
Launch an AWS CloudFormation stack set from the management account that automatically creates a new VPC and a VPC transit gateway attachment in a member account. Associate the attachment with the transit gateway in the management account by using the transit gateway ID.
Launch an AWS CloudFormation stack set from the management account that automatically creates a new VPC and a peering transit gateway attachment in a member account. Share the attachment with the transit gateway in the management account by using a transit gateway service-linked role.
From the management account, share the transit gateway with member accounts by using AWS Service Catalog.
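For reference, a minimal sketch of sharing a transit gateway from the management account through AWS Resource Access Manager, as the first option describes; it assumes sharing with AWS Organizations has been enabled in RAM, and both ARNs are placeholders:

```python
import boto3

ram = boto3.client("ram")

# Placeholder ARNs; substitute the real transit gateway and organization ARNs.
TGW_ARN = "arn:aws:ec2:us-east-1:111111111111:transit-gateway/tgw-0123456789abcdef0"
ORG_ARN = "arn:aws:organizations::111111111111:organization/o-exampleorgid"

share = ram.create_resource_share(
    name="org-transit-gateway-share",
    resourceArns=[TGW_ARN],
    principals=[ORG_ARN],           # share with the whole organization
    allowExternalPrincipals=False,  # keep the share inside the organization
)
print(share["resourceShare"]["resourceShareArn"])
```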
Designing highly available, cost-efficient, fault-tolerant, scalable systems
Designing enterprise-wide scalable operations on AWS
A company has a 24 TB MySQL database in its on-premises data center that grows at the rate of 10 GB per day. The data center is connected to the company's AWS infrastructure with a 50 Mbps VPN connection.
The company is migrating the application and workload to AWS. The application code is already installed and tested on Amazon EC2. The company now needs to migrate the database and wants to go live on AWS within 3 weeks.
Which of the following approaches meets the schedule with LEAST downtime?
1. Use the VM Import/Export service to import a snapshot of the on-premises database into AWS.
2. Launch a new EC2 instance from the snapshot.
3. Set up ongoing database replication from on premises to the EC2 database over the VPN.
4. Change the DNS entry to point to the EC2 database.
5. Stop the replication.
1. Launch an AWS DMS instance.
2. Launch an Amazon RDS Aurora MySQL DB instance.
3. Configure the AWS DMS instance with on-premises and Amazon RDS database information.
4. Start the replication task within AWS DMS over the VPN.
5. Change the DNS entry to point to the Amazon RDS MySQL database.
6. Stop the replication.
1. Create a database export locally using database-native tools.
2. Import that into AWS using AWS Snowball.
3. Launch an Amazon RDS Aurora DB instance.
4. Load the data in the RDS Aurora DB instance from the export.
5. Set up database replication from the on-premises database to the RDS Aurora DB instance over the VPN.
6. Change the DNS entry to point to the RDS Aurora DB instance.
7. Stop the replication.
1. Take the on-premises application offline.
2. Create a database export locally using database-native tools.
3. Import that into AWS using AWS Snowball.
4. Launch an Amazon RDS Aurora DB instance.
5. Load the data in the RDS Aurora DB instance from the export.
6. Change the DNS entry to point to the Amazon RDS Aurora DB instance.
7. Put the Amazon EC2 hosted application online.
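For illustration, a minimal sketch of the replication steps in the AWS DMS option, assuming the replication instance and both endpoints already exist; all ARNs are placeholders:

```python
import json

import boto3

dms = boto3.client("dms")

# Placeholder ARNs for the replication instance and the two endpoints.
REPLICATION_INSTANCE_ARN = "arn:aws:dms:us-east-1:111111111111:rep:EXAMPLE"
SOURCE_ENDPOINT_ARN = "arn:aws:dms:us-east-1:111111111111:endpoint:ONPREM-MYSQL"
TARGET_ENDPOINT_ARN = "arn:aws:dms:us-east-1:111111111111:endpoint:AURORA-MYSQL"

# Replicate every table: full load first, then ongoing change data capture (CDC).
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }
    ]
}

task = dms.create_replication_task(
    ReplicationTaskIdentifier="onprem-mysql-to-aurora",
    SourceEndpointArn=SOURCE_ENDPOINT_ARN,
    TargetEndpointArn=TARGET_ENDPOINT_ARN,
    ReplicationInstanceArn=REPLICATION_INSTANCE_ARN,
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps(table_mappings),
)
task_arn = task["ReplicationTask"]["ReplicationTaskArn"]

# Wait until the task is ready, then start the full load plus CDC replication.
dms.get_waiter("replication_task_ready").wait(
    Filters=[{"Name": "replication-task-arn", "Values": [task_arn]}]
)
dms.start_replication_task(
    ReplicationTaskArn=task_arn,
    StartReplicationTaskType="start-replication",
)
```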
Designing enterprise-wide scalable operations on AWS
A company has a complex web application that leverages Amazon CloudFront for global scalability and performance. Over time, users report that the web application is slowing down.
The company's operations team reports that the CloudFront cache hit ratio has been dropping steadily. The cache metrics report indicates that query strings on some URLs are inconsistently ordered and are specified sometimes in mixed-case letters and sometimes in lowercase letters.
Which set of actions should the solutions architect take to increase the cache hit ratio as quickly as possible?
Deploy a Lambda@Edge function to sort parameters by name and force them to be lowercase. Select the CloudFront viewer request trigger to invoke the function.
Update the CloudFront distribution to disable caching based on query string parameters.
Deploy a reverse proxy after the load balancer to post-process the URLs emitted by the application, forcing the URL strings to be lowercase.
Update the CloudFront distribution to specify case-insensitive query string processing.
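As a sketch only, a Lambda@Edge viewer-request handler (Python runtime) along the lines of the first option, lowercasing the query string and sorting its parameters so equivalent URLs resolve to a single cache key:

```python
from urllib.parse import parse_qsl, urlencode

def handler(event, context):
    # CloudFront passes the viewer request in Records[0].cf.request.
    request = event["Records"][0]["cf"]["request"]
    querystring = request.get("querystring", "")
    if querystring:
        # Lowercase the whole query string (keys and values) and sort the
        # parameters by name so equivalent URLs produce the same cache key.
        params = parse_qsl(querystring.lower(), keep_blank_values=True)
        request["querystring"] = urlencode(sorted(params))
    return request
```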
Designing highly available, cost-efficient, fault-tolerant, scalable systems
Designing enterprise-wide scalable operations on AWS
A company has a data center that must be migrated to AWS as quickly as possible. The data center has a 500 Mbps AWS Direct Connect link and a separate, fully available 1 Gbps ISP connection. A Solutions Architect must transfer 20 TB of data from the data center to an Amazon S3 bucket.
What is the FASTEST way to transfer the data?
Upload the data using an 80 TB AWS Snowball device.
Upload the data to the S3 bucket using S3 Transfer Acceleration.
Upload the data to the S3 bucket using the existing DX link.
Send the data to AWS using the AWS Import/Export service.
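A rough back-of-the-envelope check of the network transfer times involved; the link speeds come from the question, while the roughly 80 percent effective-throughput figure is an assumption:

```python
def transfer_days(terabytes: float, link_mbps: float, efficiency: float = 0.8) -> float:
    """Approximate days to move `terabytes` of data over a `link_mbps` link."""
    bits = terabytes * 8 * 10**12                      # decimal terabytes to bits
    seconds = bits / (link_mbps * 10**6 * efficiency)  # assumed effective throughput
    return seconds / 86_400

print(f"500 Mbps Direct Connect link: ~{transfer_days(20, 500):.1f} days")
print(f"1 Gbps internet connection:   ~{transfer_days(20, 1000):.1f} days")
```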
Designing highly available, cost-efficient, fault-tolerant, scalable systems
Designing enterprise-wide scalable operations on AWS
A company has a data lake in Amazon S3 that needs to be accessed by hundreds of applications across many AWS accounts. The company's information security policy states that the S3 bucket must not be accessed over the public internet and that each application should have the minimum permissions necessary to function.
To meet these requirements, a solutions architect plans to use an S3 access point that is restricted to specific VPCs for each application.
Which combination of steps should the solutions architect take to implement this solution? (Choose two.)
Create an S3 access point for each application in the AWS account that owns the S3 bucket. Configure each access point to be accessible only from the application's VPC. Update the bucket policy to require access from an access point.
Create an interface endpoint for Amazon S3 in each application's VPC. Configure the endpoint policy to allow access to an S3 access point. Create a VPC gateway attachment for the S3 endpoint.
Create a gateway endpoint for Amazon S3 in each application's VPC. Configure the endpoint policy to allow access to an S3 access point. Specify the route table that is used to access the access point.
Create an S3 access point for each application in each AWS account and attach the access points to the S3 bucket. Configure each access point to be accessible only from the application's VPC. Update the bucket policy to require access from an access point.
Create a gateway endpoint for Amazon S3 in the data lake's VPC. Attach an endpoint policy to allow access to the S3 bucket. Specify the route table that is used to access the bucket.
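A minimal sketch of creating a VPC-restricted access point as described in the first option; the account ID, bucket name, and VPC ID are placeholders:

```python
import boto3

s3control = boto3.client("s3control")

ACCOUNT_ID = "111111111111"           # account that owns the data lake bucket
BUCKET = "example-data-lake"          # placeholder bucket name
VPC_ID = "vpc-0123456789abcdef0"      # the application's VPC

# An access point whose network origin is restricted to a single VPC.
response = s3control.create_access_point(
    AccountId=ACCOUNT_ID,
    Name="analytics-app-ap",
    Bucket=BUCKET,
    VpcConfiguration={"VpcId": VPC_ID},
)
print(response["AccessPointArn"])
```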
Designing highly available, cost-efficient, fault-tolerant, scalable systems
Designing for security and compliance
A company has a High Performance Computing (HPC) cluster in its on-premises data center, which runs thousands of jobs in parallel for one week every month, processing petabytes of images. The images are stored on a network file server, which is replicated to a disaster recovery site. The on-premises data center has reached capacity and has started to spread the jobs out over the course of the month in order to better utilize the cluster, causing a delay in the job completion.
The company has asked its Solutions Architect to design a cost-effective solution on AWS to scale beyond the current capacity of 5,000 cores and 10 petabytes of data. The solution must require the least amount of management overhead and maintain the current level of durability.
Which solution will meet the company's requirements?
Create a container in the Amazon Elastic Container Registry with the executable file for the job. Use Amazon ECS with Spot Fleet in Auto Scaling groups. Store the raw data in Amazon EBS SC1 volumes and write the output to Amazon S3.
Create an Amazon EMR cluster with a combination of On Demand and Reserved Instance Task Nodes that will use Spark to pull data from Amazon S3. Use Amazon DynamoDB to maintain a list of jobs that need to be processed by the Amazon EMR cluster.
Store the raw data in Amazon S3, and use AWS Batch with Managed Compute Environments to create Spot Fleets. Submit jobs to AWS Batch Job Queues to pull down objects from Amazon S3 onto Amazon EBS volumes for temporary storage to be processed, and then write the results back to Amazon S3.
Submit the list of jobs that need to be processed to an Amazon SQS queue. Create a diversified cluster of Amazon EC2 worker instances using Spot Fleet that will automatically scale based on the queue depth. Use Amazon EFS to store all the data, sharing it across all instances in the cluster.
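For illustration, a sketch of submitting work to an AWS Batch job queue as in the third option, assuming a managed Spot compute environment, job queue, and job definition already exist; the names and the environment variable are placeholders:

```python
import uuid

import boto3

batch = boto3.client("batch")

JOB_QUEUE = "hpc-image-processing-queue"   # placeholder, existing job queue
JOB_DEFINITION = "hpc-image-processing:1"  # placeholder, existing job definition

def submit_image_job(s3_key: str) -> str:
    """Queue one image-processing job; the container reads its input key from the environment."""
    response = batch.submit_job(
        jobName=f"process-{uuid.uuid4().hex[:12]}",
        jobQueue=JOB_QUEUE,
        jobDefinition=JOB_DEFINITION,
        containerOverrides={
            "environment": [{"name": "INPUT_S3_KEY", "value": s3_key}],
        },
    )
    return response["jobId"]

if __name__ == "__main__":
    print(submit_image_job("raw/2024-06/image-000001.tif"))
```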
Designing highly available, cost-efficient, fault-tolerant, scalable systems
Implementing cost control strategies
A company has a large on-premises Apache Hadoop cluster with a 20 PB HDFS database. The cluster is growing every quarter by roughly 200 instances and 1 PB. The company's goals are to enable resiliency for its Hadoop data, limit the impact of losing cluster nodes, and significantly reduce costs. The current cluster runs 24/7 and supports a variety of analysis workloads, including interactive queries and batch processing.
Which solution would meet these requirements with the LEAST expense and down time?
Use AWS Snowmobile to migrate the existing cluster data to Amazon S3. Create a persistent Amazon EMR cluster initially sized to handle the interactive workload based on historical data from the on-premises cluster. Store the data on EMRFS. Minimize costs using Reserved Instances for master and core nodes and Spot Instances for task nodes, and auto scale task nodes based on Amazon CloudWatch metrics. Create similarly optimized, job-specific clusters for the batch workloads.
Use AWS Snowmobile to migrate the existing cluster data to Amazon S3. Create a persistent Amazon EMR cluster of a similar size and configuration to the current cluster. Store the data on EMRFS. Minimize costs by using Reserved Instances. As the workload grows each quarter, purchase additional Reserved Instances and add to the cluster.
Use AWS Snowball to migrate the existing cluster data to Amazon S3. Create a persistent Amazon EMR cluster initially sized to handle the interactive workloads based on historical data from the on-premises cluster. Store the data on EMRFS. Minimize costs using Reserved Instances for master and core nodes and Spot Instances for task nodes, and auto scale task nodes based on Amazon CloudWatch metrics. Create similarly optimized, job-specific clusters for the batch workloads.
Use AWS Direct Connect to migrate the existing cluster data to Amazon S3. Create a persistent Amazon EMR cluster initially sized to handle the interactive workload based on historical data from the on-premises cluster. Store the data on EMRFS. Minimize costs using Reserved Instances for master and core nodes and Spot Instances for task nodes, and auto scale task nodes based on Amazon CloudWatch metrics. Create similarly optimized, job-specific clusters for the batch workloads.
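A partial sketch of the EMR cluster shape these options describe, with On-Demand master and core nodes (which Reserved Instances would cover) and Spot task nodes; the instance types, counts, and release label are assumptions, and EMR managed scaling is used here as a simpler stand-in for the CloudWatch-metric-based automatic scaling the options mention:

```python
import boto3

emr = boto3.client("emr")

cluster = emr.run_job_flow(
    Name="interactive-analytics",
    ReleaseLabel="emr-6.10.0",
    Applications=[{"Name": "Spark"}, {"Name": "Hive"}],
    ServiceRole="EMR_DefaultRole",
    JobFlowRole="EMR_EC2_DefaultRole",
    Instances={
        "Ec2SubnetId": "subnet-0123456789abcdef0",  # placeholder subnet
        "KeepJobFlowAliveWhenNoSteps": True,        # persistent cluster
        "InstanceGroups": [
            {"Name": "master", "InstanceRole": "MASTER", "Market": "ON_DEMAND",
             "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"Name": "core", "InstanceRole": "CORE", "Market": "ON_DEMAND",
             "InstanceType": "r5.4xlarge", "InstanceCount": 10},
            {"Name": "task-spot", "InstanceRole": "TASK", "Market": "SPOT",
             "InstanceType": "r5.4xlarge", "InstanceCount": 20},
        ],
    },
    # Scale the cluster up and down with demand, keeping only master/core on On-Demand capacity.
    ManagedScalingPolicy={
        "ComputeLimits": {
            "UnitType": "Instances",
            "MinimumCapacityUnits": 11,
            "MaximumCapacityUnits": 100,
            "MaximumOnDemandCapacityUnits": 11,
            "MaximumCoreCapacityUnits": 11,
        }
    },
    VisibleToAllUsers=True,
)
print(cluster["JobFlowId"])
```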
Designing highly available, cost-efficient, fault-tolerant, scalable systems
Designing enterprise-wide scalable operations on AWS