[AWS SAA-C03] Concept Notes and Exam Dump Review [English]

TIENE 2023. 8. 1. 17:29

I'm putting this together for anyone who, like me, plans to take the exam in English.

The question pattern: the company in the scenario has a requirement, and that requirement appears paraphrased in the options.

The options pair those paraphrases with AWS services, and you pick the service that satisfies the requirement.

Below I've organized each service's definition, the requirements that questions frequently ask about, and the related services.

Read it through once, then work the practice questions.

 

Related post: [AWS SAA-C03] Getting SAA-C03 in 2 Weeks [In the End, You Have to Study] (a-gyuuuu.tistory.com)


Amazon EC2 Auto Scaling Groups

- dynamic scaling 

= based on demand or load

= uses CloudWatch alarms to trigger scaling actions when a specified metric crosses a threshold

= can add instances (scale out) or remove instances (scale in) as needed

= can maintain application performance during sudden traffic increases most cost-effectively (see the sketch below)
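A minimal boto3 sketch of one way to set up dynamic scaling with a target tracking policy; the group name and the 50% CPU target are hypothetical placeholders:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Target tracking keeps the average CPU of the group near the target by
# scaling out/in automatically; "my-asg" is a made-up group name.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="my-asg",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```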

 


API

- have the API write data to a queue instead of writing directly to the database

 

AWS Lambda

- Amazon SQS can invoke a Lambda function that writes data from the queue to the database (see the sketch below)

= the queue serves as a buffer between the API and the database
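A hedged sketch of the Lambda side of this pattern; the "orders" DynamoDB table is a hypothetical stand-in for whatever database the scenario uses:

```python
import json
import boto3

# Hypothetical table standing in for "the database".
table = boto3.resource("dynamodb").Table("orders")

def handler(event, context):
    # SQS invokes Lambda with a batch of records; each body is the
    # payload the API originally enqueued.
    for record in event["Records"]:
        item = json.loads(record["body"])
        table.put_item(Item=item)
```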

 


Amazon AppFlow

- a fully managed integration service

- to securely transfer data between SaaS applications like Salesforce, SAP, Zendesk, Slack, and ServiceNow and AWS services like Amazon S3 and Amazon Redshift in just a few clicks.

 

Source: https://aws.amazon.com/ko/blogs/big-data/cross-account-integration-between-saas-platforms-using-amazon-appflow/

 


Amazon Athena

- to run SQL queries on data stored in Amazon S3 (can be used to query JSON in S3)

- you don't need to provision or manage any infrastructure because it is serverless.

- you only pay for the queries you run and the amount of data scanned (see the sketch below)
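A small boto3 sketch of kicking off an Athena query against data in S3; the database, table, and results bucket are made-up names:

```python
import boto3

athena = boto3.client("athena")

# Query JSON data in S3 through an Athena table; results land in the
# hypothetical output bucket.
response = athena.start_query_execution(
    QueryString="SELECT user_id, event_type FROM app_events LIMIT 10",
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
print(response["QueryExecutionId"])
```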

 

 

Amazon Athena with Amazon S3

- can run queries for your report without affecting the performance of your Amazon RDS for MySQL DB instance.

== can export your data from your DB instance to an S3 bucket and use Athena to query the data in the bucket.

== can avoid the overhead and contention of running queries on your DB instance.

 

- don't need to create a read replica or a backup of your DB instance

- don't need to resize your DB instance to accommodate the additional workload

== both would incur additional charges, require maintenance, and increase your operational overhead.

 

- You can store large amounts of data in S3 and query them with Athena without worrying about capacity or performance limitations.

 

Source: https://aws.amazon.com/ko/blogs/big-data/configure-and-optimize-performance-of-amazon-athena-federation-with-amazon-redshift/

 

 


AWS Backup

- allows you to centralize and automate data protection of AWS services across compute, storage, and database.

 

AWS Backup Vault Lock

- optional feature of a backup vault

- enhance the security and control over your backup vaults

- in compliance mode, the vault lock cannot be altered or removed by anyone: not the customer, not the data owner, not even AWS.

 


Amazon DynamoDB

- Provisioned capacity

= if you have relatively predictable application traffic, or run applications whose traffic is consistent and ramps up or down gradually

 

- On-demand capacity mode

= when you have unknown workloads, unpredictable application traffic and only want to pay exactly for what you use

 

It is ideal for bursty, new, or unpredictable workloads whose traffic can spike in seconds or minutes, and when under-provisioned capacity would impact the user experience. (See the sketch below for both modes.)
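A boto3 sketch contrasting the two capacity modes at table-creation time; the table names, key schema, and capacity units are illustrative only:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# On-demand: no capacity planning, pay per request (bursty workloads).
dynamodb.create_table(
    TableName="events-ondemand",
    AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)

# Provisioned: fixed read/write capacity (predictable traffic).
dynamodb.create_table(
    TableName="events-provisioned",
    AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
    BillingMode="PROVISIONED",
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)
```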

 


Amazon DynamoDB with DynamoDB Accelerator (DAX):

- It is designed for low-latency access to frequently accessed data. 

- an in-memory cache for DynamoDB

- can significantly reduce read latency, making it suitable for achieving sub-millisecond read times.

Users use a chat application with a data store based in Amazon DynamoDB and would like new messages to be read with as little latency as possible. = confirmed to appear on the 2023/08/03 SAA-C03 exam
A solutions architect needs to design an optimal solution that requires minimal application changes. (See the sketch below.)
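A sketch using the amazon-dax-client Python package (constructor arguments vary by package version, so treat this as illustrative); the cluster endpoint and table are hypothetical. The DAX client mirrors the low-level DynamoDB API, which is why the application changes stay minimal:

```python
from amazondax import AmazonDaxClient  # pip install amazon-dax-client

# Hypothetical DAX cluster endpoint; reads now hit the in-memory cache
# instead of the DynamoDB table directly.
dax = AmazonDaxClient(
    endpoint_url="daxs://my-cluster.abc123.dax-clusters.us-east-1.amazonaws.com"
)

# Same call shape as the plain DynamoDB client.
resp = dax.get_item(
    TableName="chat-messages",
    Key={"room_id": {"S": "lobby"}, "msg_id": {"S": "42"}},
)
print(resp.get("Item"))
```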

AWS DataSync

- focuses on data transfer and synchronization between storage systems

- includes automatic encryption and data integrity validation to help make sure that one's data arrives securely.

 

AWS DMS

- specializes in migrating databases to AWS database services.

- move active datasets rapidly over the network into Amazon S3.

 

Question 1


A university research laboratory needs to migrate 30 TB of data from an on-premises Windows file server to Amazon FSx for Windows File Server. The laboratory has a 1 Gbps network link that many other departments in the university share.

The laboratory wants to implement a data migration service that will maximize the performance of the data transfer. However, the laboratory needs to be able to control the amount of bandwidth that the service uses to minimize the impact on other departments. The data migration must take place within the next 5 days.

Which AWS solution will meet these requirements?

 

A. AWS Snowcone

B. Amazon FSx File Gateway

C. AWS DataSync

D. AWS Transfer Family

 

 

You can use AWS DataSync to migrate data located on-premises, at the edge, or in other clouds to Amazon S3, Amazon EFS, Amazon FSx for Windows File Server, Amazon FSx for Lustre, Amazon FSx for OpenZFS, and Amazon FSx for NetApp ONTAP.
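A sketch of how the bandwidth cap from this question might be set with boto3; the location ARNs are placeholders, and Options.BytesPerSecond throttles the task (here roughly 500 Mbps, leaving headroom on the shared 1 Gbps link):

```python
import boto3

datasync = boto3.client("datasync")

# Source/destination location ARNs are made-up placeholders; the cap
# keeps the migration from starving other departments on the link.
datasync.create_task(
    SourceLocationArn="arn:aws:datasync:us-east-1:111122223333:location/loc-src",
    DestinationLocationArn="arn:aws:datasync:us-east-1:111122223333:location/loc-dst",
    Name="lab-migration",
    Options={"BytesPerSecond": 62_500_000},  # ~500 Mbps
)
```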

 

Question 2


A company recently created a disaster recovery site in a different AWS Region. The company needs to transfer large amounts of data back and forth between NFS file systems in the two Regions on a periodic basis.

Which solution will meet these requirements with the LEAST operational overhead?

 

A. Use AWS DataSync.

B. Use AWS Snowball devices.

C. Set up an SFTP server on Amazon EC2.

D. Use AWS Database Migration Service (AWS DMS)

 

 

AWS DataSync is a data transfer service optimized for moving large amounts of data between NFS file systems. It can automatically copy files and metadata between your NFS file systems in different AWS Regions.


Amazon FSx for Lustre file system

- uses SSD storage for sub-millisecond latencies and up to 6 GB/s of throughput.

 


 

AWS Glue

- a fully managed ETL service that can extract data from various sources, transform it into the required format, and load it into a target data store. 

 

Question 1


A company uses a legacy application to produce data in CSV format. The legacy application stores the output data in Amazon S3. The company is deploying a new commercial off-the-shelf (COTS) application that can perform complex SQL queries to analyze data that is stored in Amazon Redshift and Amazon S3 only. However, the COTS application cannot process the .csv files that the legacy application produces.

The company cannot update the legacy application to produce data in another format. The company needs to implement a solution so that the COTS application can use the data that the legacy application produces.

Which solution will meet these requirements with the LEAST operational overhead?

 

A. Create an AWS Glue extract, transform, and load (ETL) job that runs on a schedule. Configure the ETL job to process the .csv files and store the processed data in Amazon Redshift.

B. Develop a Python script that runs on Amazon EC2 instances to convert the .csv files to .sql files. Invoke the Python script on a cron schedule to store the output files in Amazon S3.

C. Create an AWS Lambda function and an Amazon DynamoDB table. Use an S3 event to invoke the Lambda function. Configure the Lambda function to perform an extract, transform, and load (ETL) job to process the .csv files and store the processed data in the DynamoDB table.

D. Use Amazon EventBridge to launch an Amazon EMR cluster on a weekly schedule. Configure the EMR cluster to perform an extract, transform, and load (ETL) job to process the .csv files and store the processed data in an Amazon Redshift table.

 

In this case, the ETL job can be configured to read the CSV files from Amazon S3, transform the data into a format that can be loaded into Amazon Redshift, and load it into an Amazon Redshift table.

 


Amazon EC2 instances

- M5 EC2 instances

= general purpose

 

- R5 EC2 instances

= memory optimized

 

 

 


 


A gateway VPC endpoint for Amazon S3

- enables private connections between the VPC and Amazon S3 that do not require an internet gateway or NAT device.

- uses a prefix list as the route target in a VPC route table to route traffic privately to Amazon S3.

The company wants to minimize costs and prevent traffic from traversing the internet whenever possible.

To let an EC2 instance in a private subnet reach Amazon S3 or DynamoDB without traversing the internet, use a gateway VPC endpoint; gateway endpoints support only S3 and DynamoDB (see the sketch below).
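A boto3 sketch of creating the gateway endpoint; the VPC ID, route table ID, and Region in the service name are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Adds an S3 prefix-list route to the given route table, so traffic to
# S3 stays on the AWS network instead of going out through a NAT/IGW.
ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-0123456789abcdef0"],
)
```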

 

 

 


When an EC2 instance in a private subnet needs to reach the internet itself, a NAT gateway is required.

 

Source: https://docs.aws.amazon.com/vpc/latest/privatelink/concepts.html


Amazon CloudFront

- configuring an Amazon CloudFront distribution for the website further improves performance by caching content at edge locations closer to the end users, reducing latency and improving content delivery

- uses a local cache to provide the response

- UDP does not work with CloudFront

- improves performance for both cacheable content (such as images and videos) and dynamic content (such as API acceleration and dynamic site delivery)

 

Question 1


A company wants to restrict access to the content of one of its main web applications and to protect the content by using authorization techniques available on AWS. The company wants to implement a serverless architecture and an authentication solution for fewer than 100 users. The solution needs to integrate with the main web application and serve web content globally. The solution must also scale as the company's user base grows while providing the lowest login latency possible.

Which solution will meet these requirements MOST cost-effectively?

 

A. Use Amazon Cognito for authentication. Use Lambda@Edge for authorization. Use Amazon CloudFront to serve the web application globally.

B. Use AWS Directory Service for Microsoft Active Directory for authentication. Use AWS Lambda for authorization. Use an Application Load Balancer to serve the web application globally.

C. Use Amazon Cognito for authentication. Use AWS Lambda for authorization. Use Amazon S3 Transfer Acceleration to serve the web application globally.

D. Use AWS Directory Service for Microsoft Active Directory for authentication. Use Lambda@Edge for authorization. Use AWS Elastic Beanstalk to serve the web application globally.

 

 

CloudFront = serves content globally

Lambda@Edge = authorization

Cognito = authentication for web apps with the lowest login latency

 

 

AWS Global Accelerator

- provides two global static public IPs that act as a fixed entry point to one's application endpoints, such as NLBs in different AWS Regions.

- for UDP workloads, pair it with Network Load Balancers (Application Load Balancers do not support UDP)

- Network Load Balancers + Global Accelerator support UDP/TCP

- AWS Global Accelerator = TCP/UDP, minimized latency

- always proxies requests through to the application for the response (no edge caching)

- improves performance for a wide range of applications over TCP or UDP by proxying packets at the edge to applications running in one or more AWS Regions.

Source: https://aws.amazon.com/ko/blogs/korea/anti-ddos-for-game/

 

 

Question 1


A company is using Amazon Route 53 latency-based routing to route requests to its UDP-based application for users around the world. The application is hosted on redundant servers in the company's on-premises data centers in the United States, Asia, and Europe. The company's compliance requirements state that the application must be hosted on premises. The company wants to improve the performance and availability of the application. What should a solutions architect do to meet these requirements?

A. Configure three Network Load Balancers (NLBs) in the three AWS Regions to address the on-premises endpoints. Create an accelerator by using AWS Global Accelerator, and register the NLBs as its endpoints. Provide access to the application by using a CNAME that points to the accelerator DNS.

B. Configure three Application Load Balancers (ALBs) in the three AWS Regions to address the on-premises endpoints. Create an accelerator by using AWS Global Accelerator, and register the ALBs as its endpoints. Provide access to the application by using a CNAME that points to the accelerator DNS.

C. Configure three Network Load Balancers (NLBs) in the three AWS Regions to address the on-premises endpoints. In Route 53, create a latency-based record that points to the three NLBs, and use it as an origin for an Amazon CloudFront distribution. Provide access to the application by using a CNAME that points to the CloudFront DNS.

D. Configure three Application Load Balancers (ALBs) in the three AWS Regions to address the on-premises endpoints. In Route 53, create a latency-based record that points to the three ALBs, and use it as an origin for an Amazon CloudFront distribution. Provide access to the application by using a CNAME that points to the CloudFront DNS.

 


AWS Network Load Balancer

- supports both TCP and UDP protocols

- Network Load Balancers + Global accelerator support UDP/TCP

 

AWS Application Load Balancer

- does not support TCP or UDP listeners

- supports only HTTP and HTTPS protocols

 

AWS Transfer Family 

- secure transfer service

- enables you to transfer files into and out of AWS storage services using the SFTP, FTPS, FTP, and AS2 protocols.

- create an SFTP-enabled server with a public endpoint that allows only trusted IP addresses.

 

 


AWS Transit Gateway

- a highly scalable and centralized hub for connecting multiple VPCs, on-premises networks, and remote networks.

- simplifies network connectivity by providing a single entry point and reducing the number of connections required.

- deploying an AWS Transit Gateway in the networking team's AWS account allows for efficient management and control over network connectivity across multiple VPCs.

출처 : https://aws.amazon.com/ko/blogs/architecture/use-a-city-planning-analogy-to-visualize-and-create-your-cloud-architecture/


Amazon S3 Tier

First 30 days, data accessed every morning = S3 Standard

Beyond 30 days, data accessed quarterly = S3 Standard-Infrequent Access

Beyond 1 year, data only retained = S3 Glacier Deep Archive

Question 1


A company stores its data objects in Amazon S3 Standard storage. A solutions architect has found that 75% of the data is rarely accessed after 30 days. The company needs all the data to remain immediately accessible with the same high availability and resiliency, but the company wants to minimize storage costs.

Which storage solution will meet these requirements?

 

A. Move the data objects to S3 Glacier Deep Archive after 30 days.

B. Move the data objects to S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days.

C. Move the data objects to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 30 days.

D. Move the data objects to S3 One Zone-Infrequent Access (S3 One Zone-IA) immediately.

Question 2


What should a solutions architect do to ensure that all objects uploaded to an Amazon S3 bucket are encrypted?

A. Update the bucket policy to deny if the PutObject does not have an s3:x-amz-acl header set.

B. Update the bucket policy to deny if the PutObject does not have an s3:x-amz-acl header set to private.

C. Update the bucket policy to deny if the PutObject does not have an aws:SecureTransport header set to true.

D. Update the bucket policy to deny if the PutObject does not have an x-amz-server-side-encryption header set.

 

 

To encrypt an object at the time of upload, you need to add a header called x-amz-server-side-encryption to the request to tell S3 to encrypt the object using SSE-C, SSE-S3, or SSE-KMS. The following code example shows a Put request using SSE-S3.
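Since the referenced example is not reproduced above, here is a hedged boto3 sketch of both halves: a PutObject that sets the SSE-S3 header, and a bucket policy that denies uploads missing it. The bucket and key names are hypothetical:

```python
import json
import boto3

s3 = boto3.client("s3")

# Upload with SSE-S3; boto3 sends this as the
# x-amz-server-side-encryption: AES256 header.
s3.put_object(
    Bucket="my-bucket",
    Key="report.csv",
    Body=b"col1,col2\n1,2\n",
    ServerSideEncryption="AES256",
)

# Bucket policy that denies any PutObject missing the header.
# "Null": "true" matches requests where the key is absent.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::my-bucket/*",
        "Condition": {"Null": {"s3:x-amz-server-side-encryption": "true"}},
    }],
}
s3.put_bucket_policy(Bucket="my-bucket", Policy=json.dumps(policy))
```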

 


API Gateway

- lets you control who can access your API, and how much and how fast they can access it.

- offers API usage plans and API keys to limit access for users who do not have a subscription = enables you to create different tiers of access for your API and charge users accordingly (see the sketch below)
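A boto3 sketch of one such tier: a usage plan with throttling and a monthly quota, plus an API key attached to it. The API ID, stage, key ID, and limits are all hypothetical:

```python
import boto3

apigw = boto3.client("apigateway")

# The plan throttles request rate and caps monthly usage, forming one
# paid "tier" of access to the prod stage of a hypothetical API.
plan = apigw.create_usage_plan(
    name="basic-tier",
    apiStages=[{"apiId": "a1b2c3d4e5", "stage": "prod"}],
    throttle={"rateLimit": 10.0, "burstLimit": 20},
    quota={"limit": 10_000, "period": "MONTH"},
)

# Attach an existing API key so only subscribed callers get through.
apigw.create_usage_plan_key(
    usagePlanId=plan["id"], keyId="key-id-123", keyType="API_KEY"
)
```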

 

Creating an API server

API Gateway -> Lambda -> DB 

 

API Gateway + Lambda

- the perfect combination for modern applications with a serverless architecture.

 


AWS Lambda

- a serverless compute service that lets you run code without provisioning or managing servers.

 

- it can take some time to initialize new instances of your function if there is a sudden increase in demand

- this may result in high latency (cold starts)

- to keep response latency as low as possible for all requests, you can use provisioned concurrency

 

= AWS Lambda with provisioned concurrency

- lets you set the amount of compute resources that stay initialized for the Lambda function, so that it can handle more requests at once and reduce latency.

- your function is initialized and ready to respond at any time. (See the sketch below.)
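A one-call boto3 sketch; the function name, alias, and the count of 100 are placeholders:

```python
import boto3

lam = boto3.client("lambda")

# Keeps 100 execution environments pre-initialized, removing cold
# starts for up to 100 concurrent requests. Provisioned concurrency
# targets a published version or alias, never $LATEST.
lam.put_provisioned_concurrency_config(
    FunctionName="chat-api",
    Qualifier="live",  # hypothetical alias
    ProvisionedConcurrentExecutions=100,
)
```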

 


Three-tier application on AWS

- Amazon S3 => Amazon Elastic Container Service with AWS Fargate => Amazon RDS cluster

 

 

Amazon S3

- host static content for one's website, such as HTML files, images etc.

 

Amazon Elastic Container Service

- fully managed container orchestration service that allows you to run and scale containerized applications on AWS.

 

AWS Fargate

- a serverless compute engine for containers

- works with both Amazon ECS and Amazon EKS

- makes it easier to focus on building your application by removing the need to provision and manage servers.

 

Amazon Elastic Container Service with AWS Fargate

- provides the compute for your containerized application logic tier.

 

 

Amazon RDS

- a managed relational database service

- makes it easier to set up, operate, and scale a relational database in the cloud.

- can use a managed Amazon RDS cluster for the database tier of one's application

 

This solution simplifies deployment and reduces operational cost for a three-tier application.

 


Goal: improve the website's performance through efficient image storage and content delivery.

 

 

Amazon EFS file system

- provides a scalable and fully managed file storage solution

- highly available and durable file system that is built to scale on demand

- can be accessed concurrently from multiple EC2 instances

- making it convenient for content management systems, development environments, and big data processing.

 

 

Amazon EC2 Auto Scaling group

- maintains a minimum of two instances, ensuring resilience by automatically replacing any unhealthy instances.

 

 

Amazon CloudFront

- further improves the website's performance by caching content at edge locations closer to the end users, reducing latency and improving content delivery.


SQS patterns such as Amazon SQS -> AWS Lambda

(message loss is not acceptable = Amazon SQS / must be processed in order = Amazon SQS FIFO pattern)

 

Question 1


A company is building a game system that needs to send unique events to separate leaderboard, matchmaking, and authentication services concurrently. The company needs an AWS event-driven system that guarantees the order of the events. Which solution will meet these requirements?

A. Amazon EventBridge event bus
B. Amazon Simple Notification Service (Amazon SNS) FIFO topics
C. Amazon Simple Notification Service (Amazon SNS) standard topics
D. Amazon Simple Queue Service (Amazon SQS) FIFO queues

Question 2


A hospital is designing a new application that gathers symptoms from patients. The hospital has decided to use Amazon Simple Queue Service (Amazon SQS) and Amazon Simple Notification Service (Amazon SNS) in the architecture. A solutions architect is reviewing the infrastructure design. Data must be encrypted at rest and in transit. Only authorized personnel of the hospital should be able to access the data. Which combination of steps should the solutions architect take to meet these requirements? (Choose two.)

A. Turn on server-side encryption on the SQS components. Update the default key policy to restrict key usage to a set of authorized principals.

B. Turn on server-side encryption on the SNS components by using an AWS Key Management Service (AWS KMS) customer managed key. Apply a key policy to restrict key usage to a set of authorized principals.

C. Turn on encryption on the SNS components. Update the default key policy to restrict key usage to a set of authorized principals. Set a condition in the topic policy to allow only encrypted connections over TLS.

D. Turn on server-side encryption on the SQS components by using an AWS Key Management Service (AWS KMS) customer managed key. Apply a key policy to restrict key usage to a set of authorized principals. Set a condition in the queue policy to allow only encrypted connections over TLS.

E. Turn on server-side encryption on the SQS components by using an AWS Key Management Service (AWS KMS) customer managed key. Apply an IAM policy to restrict key usage to a set of authorized principals. Set a condition in the queue policy to allow only encrypted connections over TLS.

Question 3


A company has a Java application that uses Amazon Simple Queue Service (Amazon SQS) to parse messages. The application cannot parse messages that are larger than 256 KB in size. The company wants to implement a solution to give the application the ability to parse messages as large as 50 MB.

Which solution will meet these requirements with the FEWEST changes to the code?

A. Use the Amazon SQS Extended Client Library for Java to host messages that are larger than 256 KB in Amazon S3.

B. Use Amazon EventBridge to post large messages from the application instead of Amazon SQS.

C. Change the limit in Amazon SQS to handle messages that are larger than 256 KB.

D. Store messages that are larger than 256 KB in Amazon Elastic File System (Amazon EFS). Configure Amazon SQS to reference this location in the messages.

 

Amazon SQS has a limit of 256 KB for the size of messages. To handle messages larger than 256 KB, the Amazon SQS Extended Client Library for Java can be used. This library allows messages larger than 256 KB to be stored in Amazon S3 and provides a way to retrieve and process them. Using this solution, the application code can remain largely unchanged while still being able to process messages up to 50 MB in size.

 

Question 4


A company is using AWS to design a web application that will process insurance quotes. Users will request quotes from the application. Quotes must be separated by quote type, must be responded to within 24 hours, and must not get lost. The solution must maximize operational efficiency and must minimize maintenance.

Which solution meets these requirements?

A. Create multiple Amazon Kinesis data streams based on the quote type. Configure the web application to send messages to the proper data stream. Configure each backend group of application servers to use the Kinesis Client Library (KCL) to pool messages from its own data stream.

B. Create an AWS Lambda function and an Amazon Simple Notification Service (Amazon SNS) topic for each quote type. Subscribe the Lambda function to its associated SNS topic. Configure the application to publish requests for quotes to the appropriate SNS topic.

C. Create a single Amazon Simple Notification Service (Amazon SNS) topic. Subscribe Amazon Simple Queue Service (Amazon SQS) queues to the SNS topic. Configure SNS message filtering to publish messages to the proper SQS queue based on the quote type. Configure each backend application server to use its own SQS queue.

D. Create multiple Amazon Kinesis Data Firehose delivery streams based on the quote type to deliver data streams to an Amazon OpenSearch Service cluster. Configure the application to send messages to the proper delivery stream. Configure each backend group of application servers to search for the messages from OpenSearch Service and process them accordingly.

 

 

 

Quote types need to be separated: SNS message filtering can be used to publish messages to the appropriate SQS queue based on the quote type, ensuring that quotes are separated by type.

 

Quotes must be responded to within 24 hours and must not get lost: SQS provides reliable and scalable queuing for messages, ensuring that quotes will not get lost and can be processed in a timely manner. Additionally, each backend application server can use its own SQS queue, ensuring that quotes are processed efficiently without any delay.

 

Operational efficiency and minimizing maintenance: Using a single SNS topic and multiple SQS queues is a scalable and cost-effective approach, which can help to maximize operational efficiency and minimize maintenance. Additionally, SNS and SQS are fully managed services, which means that the company will not need to worry about maintenance tasks such as software updates, hardware upgrades, or scaling the infrastructure.
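A boto3 sketch of the filtering pattern from this answer, using a hypothetical quotes topic and an auto-quotes queue; the ARNs and attribute name are placeholders:

```python
import json
import boto3

sns = boto3.client("sns")

# Subscribe a queue with a filter policy so it receives only messages
# whose quote_type attribute matches this queue's type.
sns.subscribe(
    TopicArn="arn:aws:sns:us-east-1:111122223333:quotes",
    Protocol="sqs",
    Endpoint="arn:aws:sqs:us-east-1:111122223333:auto-quotes",
    Attributes={"FilterPolicy": json.dumps({"quote_type": ["auto"]})},
)

# The publisher tags each message with its quote type; SNS routes it
# to the matching queue(s) only.
sns.publish(
    TopicArn="arn:aws:sns:us-east-1:111122223333:quotes",
    Message=json.dumps({"quote_id": "q-1"}),
    MessageAttributes={
        "quote_type": {"DataType": "String", "StringValue": "auto"}
    },
)
```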

 


1. Amazon Kinesis Data Stream 

- a platform for collecting, processing, and analyzing real-time data streams.

- allows you to ingest and store data from a large number of sources

 

2. Amazon Kinesis Firehose

- a service designed for easy and reliable data loading into data stores and analytics tools.

- simplifies the process of ingesting streaming data and delivering it to other AWS services or third-party destinations.

 

3. Amazon Kinesis Analytics

- enables you to process and analyze real-time streaming data using standard SQL queries without requiring you to manage infrastructure or write complex code.

- is well-suited for scenarios where you need to perform real-time data analysis on streaming data without the complexity of managing infrastructure or writing custom code.

 

Amazon Kinesis Workflow

1) Amazon Kinesis Data Stream collects real-time data streams from various sources.

2) Amazon Kinesis Firehose loads and delivers the collected data from data streams to storage or analytics destinations

3) Amazon Kinesis Analytics analyzes the data in real-time using SQL-based queries to derive insights and patterns.
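A minimal boto3 sketch of step 1, writing to a stream; the stream name and record are hypothetical. The partition key is what pins related records to one shard (which also matters for ordering, as the payment-ID question below shows):

```python
import json
import boto3

kinesis = boto3.client("kinesis")

# Records sharing a partition key land on the same shard.
kinesis.put_record(
    StreamName="clickstream",
    Data=json.dumps({"user": "u-1", "event": "click"}).encode(),
    PartitionKey="u-1",
)
```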

 

The company needs an API and a process that transforms data as the data is streamed and a storage solution for the data

- Amazon API Gateway

- Amazon Kinesis Firehose

- Amazon S3

 

 

 

Question 1


A company uses a payment processing system that requires messages for a particular payment ID to be received in the same order that they were sent. Otherwise, the payments might be processed incorrectly. Which actions should a solutions architect take to meet this requirement? (Choose two.)

A. Write the messages to an Amazon DynamoDB table with the payment ID as the partition key.
B. Write the messages to an Amazon Kinesis data stream with the payment ID as the partition key.
C. Write the messages to an Amazon ElastiCache for Memcached cluster with the payment ID as the key.
D. Write the messages to an Amazon Simple Queue Service (Amazon SQS) queue. Set the message attribute to use the payment ID.
E. Write the messages to an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Set the message group to use the payment ID.


1) SQS FIFO queues guarantee that messages are received in the exact order they are sent. Using the payment ID as the message group ensures all messages for a payment ID are received sequentially.

2) Kinesis data streams can also enforce ordering on a per partition key basis. Using the payment ID as the partition key will ensure strict ordering of messages for each payment ID.
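A boto3 sketch of the FIFO send from option E; the queue URL and IDs are placeholders:

```python
import json
import boto3

sqs = boto3.client("sqs")

# FIFO queue names end in ".fifo". MessageGroupId = payment ID keeps
# each payment's messages strictly in order relative to each other.
sqs.send_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/111122223333/payments.fifo",
    MessageBody=json.dumps({"payment_id": "p-42", "amount": 100}),
    MessageGroupId="p-42",
    MessageDeduplicationId="p-42-0001",  # or enable content-based dedup
)
```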

Question 2


 

A company is using a fleet of Amazon EC2 instances to ingest data from on-premises data sources. The data is in JSON format and ingestion rates can be as high as 1 MB/s. When an EC2 instance is rebooted, the data in-flight is lost. The company’s data science team wants to query ingested data in near-real time.

Which solution provides near-real-time data querying that is scalable with minimal data loss?

A. Publish data to Amazon Kinesis Data Streams. Use Kinesis Data Analytics to query the data.

B. Publish data to Amazon Kinesis Data Firehose with Amazon Redshift as the destination. Use Amazon Redshift to query the data.

C. Store ingested data in an EC2 instance store. Publish data to Amazon Kinesis Data Firehose with Amazon S3 as the destination. Use Amazon Athena to query the data.

D. Store ingested data in an Amazon Elastic Block Store (Amazon EBS) volume. Publish data to Amazon ElastiCache for Redis. Subscribe to the Redis channel to query the data.

 

 

Provide near-real-time data ingestion into Kinesis Data Streams with the ability to handle the 1 MB/s ingestion rate. Data would be stored redundantly across shards.

Enable near-real-time querying of the data using Kinesis Data Analytics. SQL queries can be run directly against the Kinesis data stream.

Minimize data loss since data is replicated across shards. If an EC2 instance is rebooted, the data stream is still accessible.

Scale seamlessly to handle varying ingestion and query rates.


AWS Key Management Service (AWS KMS)

- used to create and manage encryption keys for your data

- can create and manage a customer managed key, which is a symmetric encryption key

- you can enable automatic key rotation for a customer managed key, and AWS KMS will then generate new cryptographic material for the key every year. (See the sketch below.)
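A boto3 sketch of creating a symmetric customer managed key and opting in to yearly rotation; the description is arbitrary:

```python
import boto3

kms = boto3.client("kms")

# Create a symmetric customer managed key for encrypt/decrypt use.
key = kms.create_key(
    Description="app data key",
    KeySpec="SYMMETRIC_DEFAULT",
    KeyUsage="ENCRYPT_DECRYPT",
)

# Enable automatic yearly rotation of the key material.
kms.enable_key_rotation(KeyId=key["KeyMetadata"]["KeyId"])
```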


Amazon Macie

- uses machine learning and artificial intelligence (AI) to discover, classify, and protect sensitive data

- identifies and secures sensitive data such as personally identifiable information (PII), financial data, and intellectual property.

 

Amazon Macie with Amazon EventBridge and Amazon SNS

- Amazon Macie can find personally identifiable information (PII)

- Amazon Macie can also send its findings to Amazon EventBridge

- Amazon EventBridge is a serverless event bus that can route those findings to targets such as an Amazon SNS topic

 


Amazon Redshift

- a fully managed, petabyte-scale data warehouse service in the cloud

- can start with just a few hundred gigabytes of data and scale to a petabyte or more

 

Amazon Redshift Cluster

- the first step is to launch a set of nodes, which make up the cluster

- you can then upload your data set and perform data analysis queries.

- Regardless of the size of the data set, Amazon Redshift offers fast query performance using the same SQL-based tools and business intelligence applications that you use today.

 

The user activity data store will continue to grow and will be petabytes in size.
The company needs to build a highly available data ingestion solution that facilitates on-demand analytics of existing data and new data with SQL

 

Amazon RDS question patterns

- Fix for delays and interruptions caused by database read performance

= place Amazon ElastiCache in front of the database = caching frequently accessed data reduces latency and speeds up data access



The application runs on Amazon EC2 instances behind an Application Load Balancer.
The application stores data in an Amazon RDS for MySQL database.
Users are starting to experience long delays and interruptions that are caused by database read performance.



- Fix for database connection latency

= use RDS Proxy with the RDS DB instance to pool and share connections = improves connection efficiency and application scalability

 

 


A company has an on-premises MySQL database used by the global sales team with infrequent access patterns. The sales team requires the database to have minimal downtime. A database administrator wants to migrate this database to AWS without selecting a particular instance type in anticipation of more users in the future.

Which service should a solutions architect recommend?

 

A. Amazon Aurora MySQL

B. Amazon Aurora Serverless for MySQL

C. Amazon Redshift Spectrum

D. Amazon RDS for MySQL

 

without selecting a particular instance type = serverless 

 

 

Amazon RDS Proxy

- allows applications to pool and share connections established with the database, improving database efficiency and application scalability.

- with RDS Proxy, failover times for Aurora and RDS databases are reduced by up to 66%. (See the sketch below.)
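A boto3 sketch of creating a proxy in front of a MySQL instance; the secret ARN, role ARN, and subnet IDs are placeholders:

```python
import boto3

rds = boto3.client("rds")

# The proxy pools and shares connections in front of the database;
# credentials come from a hypothetical Secrets Manager secret.
rds.create_db_proxy(
    DBProxyName="app-proxy",
    EngineFamily="MYSQL",
    Auth=[{
        "AuthScheme": "SECRETS",
        "SecretArn": "arn:aws:secretsmanager:us-east-1:111122223333:secret:db-creds",
        "IAMAuth": "DISABLED",
    }],
    RoleArn="arn:aws:iam::111122223333:role/rds-proxy-role",
    VpcSubnetIds=["subnet-0a1b2c3d", "subnet-4e5f6a7b"],
)
```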

 

Amazon RDS Proxy & read replicas with Aurora

- To reduce application failures resulting from database connection timeouts, the best solution is to enable RDS Proxy on the RDS DB instance

 

 

Read replicas are exact copies of the primary database that can be used to handle read-only traffic, which reduces the load on the primary database and improves the performance of the web application.

 

 

 


AWS Shield Advanced

- provides expanded DDoS attack protection for Amazon EC2 instances.

Source: https://aws.amazon.com/ko/blogs/korea/anti-ddos-for-game/

AWS WAF

- a Web Application Firewall

- monitors the HTTP and HTTPS requests that are forwarded to your protected web application resources.

- helps protect your web applications or APIs against common web exploits that may affect availability

- gives you control over how traffic reaches your application by enabling you to create security rules that block common attack patterns

- use with an ALB to protect the web application from malicious requests.

Source: https://aws.amazon.com/ko/blogs/korea/anti-ddos-for-game/

 

 



Amazon QuickSight

- a data visualization service that allows you to create interactive dashboards and reports from various data sources.

- can connect all the data sources, create new datasets in Amazon QuickSight, and then publish dashboards to visualize the data.

- can share the dashboards with the appropriate users and groups and control their access levels using IAM roles and permissions.

 


 

 

 

