Interview Guide for Amazon Web Services (AWS): CloudFreaks

Amazon Web Services

Today’s world is witnessing a significant change in how businesses and organizations work. Everything is getting digitized, and the introduction of cloud computing platforms has been a major driving force behind this growth.

AWS Scenario Based Interview Questions

1.An application is deployed in multiple Availability Zones in a single region. In the event of failure, the RTO must be less than 3 hours, and the RPO is 15 minutes. Which DR strategy can be used to achieve this RTO and RPO in the event of this kind of failure?

Take hourly DB backups to Amazon S3, with transaction logs stored in S3 every 5 minutes. Hourly backups make restores quick, and because the transaction logs are shipped to S3 every 5 minutes, the application can be restored to a state within the 15-minute RPO. 
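The arithmetic behind the answer is simple enough to state in code: worst-case data loss is bounded by the log-shipping interval, not the full-backup interval.

```python
# Worst-case data loss (RPO) equals the interval between shipped transaction
# logs: everything since the most recent shipped log can be lost, while the
# hourly full backups only affect restore time (RTO), not data loss.
def worst_case_data_loss_minutes(txn_log_interval_min):
    return txn_log_interval_min

assert worst_case_data_loss_minutes(5) <= 15  # meets the 15-minute RPO
```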

2.What is the preferred type of cache to use when developing gaming applications?

ElastiCache for Redis is best suited for gaming applications, since leaderboards, sessions, and player profiles are top use cases for game developers. Its raw event stream can power dashboards and interactive, customized campaigns, while also being consumed by downstream processes for deeper analytics and long-term storage. Redis' sorted-set data structure is also very helpful for ranking results by top performance or score.
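Conceptually, Redis sorted sets (ZADD/ZREVRANGE) back such leaderboards. The stand-in below mimics those semantics in plain Python so it runs without a Redis server; the class, players, and scores are purely illustrative.

```python
# A minimal leaderboard mimicking Redis sorted-set semantics. With a real
# client this would be roughly r.zadd("board", {player: score}) followed by
# r.zrevrange("board", 0, n - 1); this stand-in avoids a live server.
class Leaderboard:
    def __init__(self):
        self.scores = {}  # player -> score, like a sorted set's members

    def zadd(self, player, score):
        self.scores[player] = score

    def top(self, n):
        # Highest score first; ties broken by name for determinism
        return sorted(self.scores, key=lambda p: (-self.scores[p], p))[:n]

board = Leaderboard()
board.zadd("alice", 3200)
board.zadd("bob", 4100)
board.zadd("carol", 2900)
print(board.top(2))  # ['bob', 'alice']
```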

3.Currently, a company uses Redshift to store its analyzed data. They have started with the base configuration. What would they get when they initially start using Redshift?

When initially starting to use Redshift with the base configuration, the company would get benefits such as a scalable data warehousing solution, columnar storage, compression, parallel query execution, automated backups and maintenance, integration with the AWS ecosystem, security features, and cost-effectiveness.

4.What can be done if a company wants to establish a low latency dedicated connection to an S3 public endpoint over the Direct Connect?

You can create a public virtual interface to connect to public AWS resources, or a private virtual interface to connect to your VPC. You can configure multiple virtual interfaces on a single AWS Direct Connect connection, and you need one private virtual interface for each VPC you connect to. Each virtual interface needs a VLAN ID, interface IP address, ASN, and BGP key. For this scenario, create a public virtual interface to reach the S3 public endpoint, and add a BGP route on the on-premises router so that S3-related traffic is routed over the dedicated connection to the AWS region. 

5.An auditor needs read-only access to all AWS resources and logs of all the events that have occurred on AWS. What is the best way for creating this sort of access?

The best way to create read-only access for an auditor to all AWS resources and event logs is by using AWS Identity and Access Management (IAM) with appropriate permissions. By creating a dedicated IAM user or role, you can grant read-only access to the desired resources and enable logging for AWS CloudTrail to capture all the events occurring on the AWS account. This ensures that the auditor can view and monitor the necessary information without having the ability to make any changes to the resources.
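As an illustrative sketch (not the full AWS-managed ReadOnlyAccess policy, which is the usual starting point for auditors), the CloudTrail portion of such a read-only policy might look like:

```python
import json

# A fragment of a read-only auditor policy. The actions shown are real
# CloudTrail read APIs; a production policy would attach the AWS-managed
# ReadOnlyAccess policy and scope resources more tightly.
auditor_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["cloudtrail:LookupEvents", "cloudtrail:GetTrailStatus"],
            "Resource": "*",
        }
    ],
}
print(json.dumps(auditor_policy, indent=2))
```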

6.What can aid the user in comprehending how ELB handles traffic regarding the SSL listener, given that they have set up an SSL listener at ELB and on the back-end instances?

Documentation and resources provided by AWS, such as user guides, documentation on Elastic Load Balancer (ELB), and SSL listener configurations, can assist the user in understanding how ELB handles traffic in relation to the SSL listener setup.

7.How can the user configure CloudFormation to ensure that the creation of ELB and Auto Scaling waits until the EC2 instance is launched and properly configured?

To ensure that the creation of ELB and Auto Scaling waits until the EC2 instance is launched and properly configured, the user can utilize the "Creation Policy" attribute in CloudFormation. By specifying a "Creation Policy" with a "Resource Signal" and using the "cfn-signal" script in the EC2 instance's user data, CloudFormation will wait for a successful signal from the instance before proceeding with the creation of ELB and Auto Scaling resources.
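The CreationPolicy wiring can be sketched as a template fragment; it is expressed here as a Python dict for brevity, with an illustrative resource name and a 15-minute signal timeout:

```python
import json

# An illustrative CloudFormation fragment showing a CreationPolicy that
# makes the stack wait for one cfn-signal before marking the instance
# CREATE_COMPLETE (and thus before dependent ELB/Auto Scaling resources).
template_fragment = {
    "WebServer": {
        "Type": "AWS::EC2::Instance",
        "CreationPolicy": {
            "ResourceSignal": {"Count": 1, "Timeout": "PT15M"}
        },
        # The instance's UserData would end with something like:
        #   /opt/aws/bin/cfn-signal -e $? --stack <stack-name> \
        #     --resource WebServer --region <region>
    }
}
print(json.dumps(template_fragment, indent=2))
```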

8.While hosting a static website with Amazon S3, your static JavaScript code attempts to include resources from another S3 bucket but permission is denied. How might you solve the problem?

Enable CORS Configuration. 

Explanation: Browsers enforce the same-origin policy, so JavaScript served from one bucket's website endpoint cannot fetch resources from another S3 bucket unless that second bucket's CORS configuration explicitly allows the requesting origin. 
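The corresponding CORS rules on the second bucket might look like the following; the origin is a placeholder, and the dict is the CORSConfiguration shape that boto3's put_bucket_cors expects:

```python
import json

# Illustrative CORS rules allowing GET requests from the website bucket's
# origin. The origin URL below is a placeholder for your site's endpoint.
cors_config = {
    "CORSRules": [
        {
            "AllowedOrigins": ["https://my-site.s3-website-us-east-1.amazonaws.com"],
            "AllowedMethods": ["GET"],
            "AllowedHeaders": ["*"],
            "MaxAgeSeconds": 3000,  # how long browsers may cache the preflight
        }
    ]
}
print(json.dumps(cors_config, indent=2))
```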


9.You have been asked to manage your AWS infrastructure in a manner similar to the way you might manage application code. You want to be able to deploy exact copies of different versions of your infrastructure, stage changes into different environments, roll back to previous versions, and identify what versions are running at any particular time (development, test, QA, production). Which approach addresses this requirement?

Use AWS CloudFormation and a version control system like Git to deploy and manage your infrastructure.

Explanation: You can use AWS CloudFormation sample templates or create your own templates to describe the AWS resources, and any associated dependencies or runtime parameters, required to run your application. You don’t need to figure out the order for provisioning AWS services or the subtleties of making those dependencies work. CloudFormation takes care of this for you. After the AWS resources are deployed, you can modify and update them in a controlled and predictable way, in effect applying version control to your AWS infrastructure the same way you do with your software. You can also visualize your templates as diagrams and edit them using a drag-and-drop interface with the AWS CloudFormation Designer. AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion.

10.An organization is planning to use AWS for their production roll out. The organization wants to implement automation for deployment such that it will automatically create a LAMP stack, download the latest PHP installable from S3, and set up the ELB. Which AWS services meet the requirement for making an orderly deployment of the software?

Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services. We can simply upload code, and Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, and auto scaling to application health monitoring. Meanwhile, we retain full control over the AWS resources used in the application and can access the underlying resources at any time.

13.Can you configure message retention periods in SQS? If yes, how?

Yes, you can configure the message retention period in SQS. The default retention period is 4 days, but it can be configured to a minimum of 1 minute and a maximum of 14 days. After the retention period expires, SQS automatically deletes the message.
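As a sketch, the retention bounds can be validated before calling boto3's set_queue_attributes; the helper below is illustrative and makes no AWS call:

```python
# SQS accepts MessageRetentionPeriod in seconds: 60 (1 minute) up to
# 1209600 (14 days). This builds the attributes dict that would be passed
# to boto3's set_queue_attributes, rejecting out-of-range values.
MIN_RETENTION_S = 60
MAX_RETENTION_S = 14 * 24 * 3600  # 1209600

def retention_attributes(seconds):
    if not MIN_RETENTION_S <= seconds <= MAX_RETENTION_S:
        raise ValueError("retention must be between 60 and 1209600 seconds")
    return {"MessageRetentionPeriod": str(seconds)}

print(retention_attributes(4 * 24 * 3600))  # {'MessageRetentionPeriod': '345600'}
```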

14.You are managing the AWS account of a big organization. The organization has more than 1,000 employees, and it wants to provide most of them with access to various AWS services. What is the best possible solution in this case?

The best practice for IAM is to create roles that have specific access to an AWS service, and then grant users permission to the AWS service via the role. Users can be authenticated against the organization’s existing authentication service (identity federation), after which they assume an appropriate IAM role for accessing the AWS services.

15.A customer is running an application in the US-West region and wants to set up disaster recovery failover to the Singapore region. The customer is interested in achieving a low RPO for an RDS multi-AZ DB instance. Which approach is best suited to this need?

Asynchronous replication is the best-suited approach here: cross-region replication for RDS is done asynchronously, since synchronous replication would impose too much overhead across regions. 


16.A newspaper organization has a requirement to store around 20TB of data for their readers. This data comprises newspapers in various languages. They wanted to use a search feature for users to search for articles on the site. Which AWS service can help to fulfill this requirement?

With Amazon CloudSearch, you can quickly add rich search capabilities to your website or application. You don't need to become a search expert or worry about hardware provisioning, setup, and maintenance. With a few clicks in the AWS Management Console, you can create a search domain and upload the data that you want to make searchable, and Amazon CloudSearch will automatically provision the required resources and deploy a highly tuned search index. You can easily change your search parameters, fine-tune search relevance, and apply new settings at any time. As your volume of data and traffic fluctuates, Amazon CloudSearch seamlessly scales to meet your needs.

17.You have multiple instances behind private and public subnets. None of the instances have an EIP assigned to them. How can you securely connect them to the internet just to be able to download system updates?

You can use a Network Address Translation (NAT) instance in a public subnet in your VPC to enable instances in the private subnet to initiate outbound IPv4 traffic to the Internet or other AWS services, while preventing those instances from receiving inbound traffic initiated by someone on the Internet. All the instances in the private (and public) subnets can get system updates via the NAT instance, which is placed in the public subnet, without exposing themselves over the internet. 

18.A user has launched a large EBS-backed EC2 instance in the us-east-1 region. The user wants to achieve Disaster Recovery (DR) for that instance by creating another small instance in Europe. How can the user achieve DR?

Create an AMI of the instance and copy the AMI to the EU region, then launch a new instance from the copied AMI. If you need an AMI in multiple regions, you must copy the AMI across regions; note that by default, AMIs you have created are not available in other regions. 

19.You are designing security inside your VPC. You are considering the options for establishing separate security zones, and enforcing network traffic rules across the different zones to limit which instances can communicate. How would you accomplish these requirements?

1.Use network ACLs (NACLs) to explicitly allow or deny communication between the different IP address ranges, as required for inter-zone communication. NACLs can explicitly allow or deny traffic based on a certain IP address range.

2.Configure a security group for every zone. Configure allow rules only between zones that need to be able to communicate with one another, and rely on the implicit deny-all rule to block any other traffic. A security group in this case acts like a firewall that provides control at the port/protocol level; it has an implicit "deny all" rule, so you only allow what is needed.

Basic AWS Interview Questions

1.What are the different types of queues available in SQS?

SQS offers two types of queues:

  • standard queues 

  • FIFO (First-In-First-Out) queues.

Standard queues provide at-least-once delivery of messages and offer high throughput, while FIFO queues provide exactly once processing and preserve the order in which messages are sent.

2.What is the maximum message size supported by SQS?

The maximum message size supported by SQS is 256 KB for both standard and FIFO queues. If a message exceeds this limit, it must be stored externally (for example, in Amazon S3) and a reference to it sent as the message in the queue.

3.What programming languages are supported by AWS Lambda?

AWS Lambda supports a wide range of programming languages, including Node.js, Python, Java, C#, Ruby, Go, and PowerShell. This allows developers to write Lambda functions in their preferred language. 

4.How does Lambda handle scaling and concurrency?

Lambda automatically scales horizontally to handle incoming request traffic. It provisions new instances of your function as needed and automatically balances the load across these instances. Each instance of a Lambda function can handle multiple requests concurrently, and Lambda manages the concurrency on your behalf.

5.What is the maximum execution duration allowed for a Lambda function?

The maximum execution duration for a Lambda function is 900 seconds (15 minutes). If a function runs beyond this limit, it will be terminated by Lambda. 

6.What are the different event sources that can trigger a Lambda function?

Lambda supports various event sources, including:

  • Object-created events in Amazon S3 buckets

  • DynamoDB stream events

  • Kinesis stream events

  • SNS (Simple Notification Service) messages

  • SQS (Simple Queue Service) messages

  • CloudWatch Events (scheduled events or custom events)

  • API Gateway requests

7.Can you configure VPC (Virtual Private Cloud) access for Lambda functions? If so, how?

Yes, Lambda functions can be configured to access resources within a VPC. You can specify the VPC and subnets in the function configuration, allowing the function to access resources such as RDS databases or resources hosted within the VPC.

8.What is the difference between provisioned concurrency and on-demand concurrency in Lambda?

Provisioned concurrency allows you to allocate a specific number of instances to be kept warm and ready to serve requests. This helps reduce latency caused by cold starts. On the other hand, on-demand concurrency allows Lambda to automatically manage the scaling of function instances based on the incoming request traffic.

9.How can you control the execution environment and dependencies of a Lambda function?

Lambda functions run in a specific execution environment provided by AWS. You can package your function code along with its dependencies, libraries, and custom runtimes in a deployment package or container image. By controlling the package, you can manage the execution environment and dependencies of your Lambda function. 
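Because a handler is just a function of (event, context), it can be exercised locally without any AWS infrastructure; the handler below is a hypothetical minimal example of that pattern:

```python
# A minimal Lambda-style handler. Packaging this file with its dependencies
# (or into a container image) is what fixes the execution environment;
# calling it directly is a common local unit-testing pattern.
def handler(event, context):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"hello, {name}"}

print(handler({"name": "auditor"}, None))
# {'statusCode': 200, 'body': 'hello, auditor'}
```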

10.What are the different storage classes available in Amazon S3?

Amazon S3 offers several storage classes, each designed for different use cases and cost considerations:

  • Standard: The default storage class with high durability and availability.

  • Intelligent Tiering: Automatically moves data between frequent and infrequent access tiers based on access patterns.

  • Standard-IA (Infrequent Access): Designed for data that is accessed less frequently but requires rapid access when needed.

  • One Zone-IA: Similar to Standard-IA but stores data in a single availability zone, reducing costs.

  • Glacier: Suitable for long-term archival storage, offering low storage costs at the price of longer retrieval times and per-retrieval charges.

  • Glacier Deep Archive: Designed for long-term retention of data that is rarely accessed, with the lowest storage cost but longer retrieval time. 

11.How can you control access to objects stored in Amazon S3?

Access to objects in Amazon S3 can be controlled using a combination of bucket policies, access control lists (ACLs), and IAM (Identity and Access Management) policies. Bucket policies and IAM policies are generally recommended for managing access control as they provide more granular control and flexibility. 

12.What is S3 Transfer Acceleration, and how does it work?

S3 Transfer Acceleration is a feature that enables fast, easy, and secure file transfers to Amazon S3 over long distances. It utilizes the AWS Edge Network to optimize transfer speed by leveraging CloudFront's globally distributed network of edge locations. By enabling Transfer Acceleration for a bucket, data is routed through these edge locations, reducing the time it takes to upload or download objects. 

13.Can you share objects publicly in Amazon S3?

Yes, you can make objects publicly accessible in Amazon S3 by configuring the appropriate permissions. This can be done through bucket policies or by setting the object ACL (Access Control List) to grant public read access. 

14.How can you monitor Amazon S3 bucket activity?

Amazon S3 provides various monitoring and logging options to track bucket activity, such as:

  • S3 Server Access Logging: Logs all requests made to your bucket and stores the access logs in a separate bucket.

  • AWS CloudTrail: Captures API activity for your bucket, providing detailed information about the actions taken on your resources.

  • S3 Storage Lens: Provides a comprehensive view of your storage usage, activity trends, and recommendations for cost optimization.

15.How does an Elastic Load Balancer handle session persistence (sticky sessions)?

Elastic Load Balancers support session persistence through cookies. By enabling sticky sessions, the load balancer associates a cookie with a specific target (such as an EC2 instance) and ensures that subsequent requests from the same client are directed to the same target. 
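A toy model of the mechanism (class and target names are illustrative; real ELBs use cookies such as AWSALB/AWSELB rather than raw target IDs):

```python
import itertools

# Cookie-based stickiness in miniature: a first request round-robins to a
# target and the client stores a cookie; later requests carrying the cookie
# are routed back to the same target.
class StickyBalancer:
    def __init__(self, targets):
        self.rr = itertools.cycle(targets)

    def route(self, cookie=None):
        if cookie is not None:
            return cookie       # honor the sticky cookie
        return next(self.rr)    # otherwise pick the next target

lb = StickyBalancer(["i-a", "i-b"])
first = lb.route()                      # client stores this as its cookie
assert lb.route(cookie=first) == first  # subsequent requests stick
```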

16.Can you explain the concept of cross-zone load balancing in Elastic Load Balancers?

Cross-zone load balancing is a feature in Elastic Load Balancers that ensures even distribution of traffic across all registered targets in all Availability Zones. With it enabled, each target receives an equal share of traffic regardless of the Availability Zone it belongs to; it is enabled by default for Application Load Balancers but disabled by default for Network Load Balancers. 

17.How can you configure SSL/TLS encryption with an Elastic Load Balancer?

SSL/TLS encryption can be configured in Elastic Load Balancers through listeners. You can create an HTTPS listener and associate it with an SSL certificate stored in AWS Certificate Manager (ACM) or an Identity and Access Management (IAM) server certificate. This enables secure communication between clients and the load balancer. 

18.How can you configure access logs for a Load Balancer?

Access logs for a Load Balancer can be configured using the Elastic Load Balancing service or the Amazon S3 service. When access logging is enabled, the Load Balancer records detailed information about each request and stores the logs in the specified Amazon S3 bucket.

19.How can you configure a Load Balancer to handle traffic spikes?

A Load Balancer can handle traffic spikes by scaling the number of healthy targets automatically based on the traffic demand. This can be achieved using features such as Auto Scaling, which dynamically adjusts the capacity of the target group based on predefined scaling policies, or by using AWS Lambda functions, which can be triggered by CloudWatch alarms to perform additional tasks. 

20.Can you customize the security configuration of a Load Balancer?

Yes, a Load Balancer can be configured to use various security features, such as SSL/TLS certificates, security groups, and AWS WAF (Web Application Firewall). These features can help protect your applications against common attacks, such as cross-site scripting (XSS) and SQL injection.

21.Explain the concept of long polling in SQS.

Long polling is a feature in SQS that allows the consumers to retrieve messages from a queue with a longer polling duration. Instead of repeatedly polling the queue for new messages, the consumer sends a request to SQS and waits for a response. If messages are available within the specified wait time, SQS immediately returns them. Long polling reduces the number of empty responses and provides more efficient message retrieval. 

22.How does Lambda integrate with other AWS services?

Lambda can be integrated with various AWS services through event sources. Some of the commonly used event sources include Amazon S3, Amazon DynamoDB, Amazon Kinesis, Amazon SNS, Amazon SQS, and AWS CloudWatch Events. These services can trigger the execution of a Lambda function based on specific events.

23.Explain the concept of cold starts in Lambda and how to mitigate them.

Cold starts occur when a Lambda function is invoked for the first time or after a period of inactivity, and AWS needs to provision a new instance to handle the request. This can result in increased latency. To mitigate cold starts, you can use techniques such as enabling provisioned concurrency, which keeps a specified number of instances warm and ready to handle requests.

24.What is the difference between standard queues and FIFO queues in SQS?

The main differences between standard queues and FIFO queues in SQS are:

  • Ordering: Standard queues provide best-effort ordering, while FIFO queues preserve the exact order in which messages are sent.

  • Deduplication: FIFO queues ensure exactly once processing using message deduplication, while standard queues might have occasional duplicate messages.

  • Throughput: Standard queues offer higher throughput and support a nearly unlimited number of transactions per second, while FIFO queues are limited to 300 transactions per second (TPS) per API action without batching. 
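The deduplication behavior can be illustrated with a toy model: within the deduplication window (5 minutes in SQS), a send carrying an already-seen deduplication ID is accepted but not enqueued again. The class below is a plain-Python sketch, not the SQS API.

```python
# A toy FIFO queue: delivery order is preserved, and a duplicate
# deduplication ID within the window is silently dropped.
class FifoQueue:
    def __init__(self):
        self.messages = []  # enqueued bodies, in arrival order
        self.seen = set()   # deduplication IDs observed in the window

    def send(self, body, dedup_id):
        if dedup_id in self.seen:
            return False    # duplicate: dropped, not re-enqueued
        self.seen.add(dedup_id)
        self.messages.append(body)
        return True

q = FifoQueue()
q.send("order-1", "d1")
q.send("order-2", "d2")
q.send("order-1", "d1")  # a producer retry with the same dedup ID
print(q.messages)        # ['order-1', 'order-2']
```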

25.How does SQS handle message visibility and what is the significance of the visibility timeout?

When a consumer receives a message from a queue, SQS makes the message invisible to other consumers for a specific duration called the visibility timeout. During this time, the consumer processes the message. If the consumer successfully processes the message, it deletes it from the queue. If the processing fails, the message becomes visible again after the visibility timeout, allowing another consumer to process it. The visibility timeout provides a mechanism to handle message processing failures and avoid message duplication. 
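The lifecycle above can be sketched as an in-memory model: a received message is hidden until its visibility timeout passes, then becomes receivable again if it was never deleted. Times are passed in explicitly so the sketch runs deterministically.

```python
# A minimal in-memory model of the SQS visibility timeout.
class VisibilityQueue:
    def __init__(self, visibility_timeout):
        self.vt = visibility_timeout
        self.msgs = {}  # id -> (body, invisible_until)

    def send(self, mid, body):
        self.msgs[mid] = (body, 0)

    def receive(self, now):
        for mid, (body, until) in self.msgs.items():
            if now >= until:
                # Hide the message for the visibility-timeout duration
                self.msgs[mid] = (body, now + self.vt)
                return mid, body
        return None

    def delete(self, mid):
        self.msgs.pop(mid, None)

q = VisibilityQueue(visibility_timeout=30)
q.send("m1", "task")
assert q.receive(now=0) == ("m1", "task")   # consumer A gets it
assert q.receive(now=10) is None            # hidden from consumer B
assert q.receive(now=31) == ("m1", "task")  # A failed: visible again
```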

26.How does data consistency work in Amazon S3?

Amazon S3 now provides strong read-after-write consistency for all PUT and DELETE operations on objects, in all AWS Regions. After a successful write, overwrite, or delete, any subsequent read immediately returns the latest version of the object. (Before December 2020, S3 offered only eventual consistency for overwrite PUTs and DELETEs.)

27.How can you optimize costs in Amazon S3?

There are several ways to optimize costs in Amazon S3:

a) Choose the appropriate storage class based on your data access patterns and requirements.

b) Implement lifecycle policies to automatically transition objects to lower-cost storage classes or delete them after a specific period.

c) Use S3 Intelligent-Tiering to automatically optimize storage costs by moving objects between frequent and infrequent access tiers.

d) Enable S3 Requester Pays so that the requester of the data bears the data transfer costs instead of the bucket owner. 
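Option (b) can be sketched concretely; the dict below has the shape boto3's put_bucket_lifecycle_configuration expects, with an illustrative rule name, prefix, and day counts:

```python
import json

# An illustrative lifecycle configuration: move objects under logs/ to
# Standard-IA after 30 days, to Glacier after 90, and expire them after
# a year. No AWS call is made here; this is just the rule structure.
lifecycle = {
    "Rules": [
        {
            "ID": "archive-then-expire",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 365},
        }
    ]
}
print(json.dumps(lifecycle, indent=2))
```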

28.What is auto-scaling in AWS and how does it work?

Auto Scaling is a feature in AWS that allows you to automatically scale your EC2 instances based on predefined conditions. It helps maintain application availability and dynamically adjusts the number of instances based on demand. Auto Scaling uses scaling policies to determine when to launch or terminate instances.
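Target-tracking scaling policies effectively resize the group in proportion to observed load; a simplified version of that arithmetic (the function and numbers are illustrative, not the exact CloudWatch algorithm):

```python
import math

# Keep a metric (say, average CPU) near a target by scaling capacity
# proportionally to the observed load, never dropping below one instance.
def desired_capacity(current, metric_value, target_value):
    return max(1, math.ceil(current * metric_value / target_value))

assert desired_capacity(4, metric_value=90, target_value=60) == 6  # scale out
assert desired_capacity(4, metric_value=30, target_value=60) == 2  # scale in
```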

29.Explain the difference between public, private, and elastic IP addresses in AWS.

  • Public IP address: It is an IP address assigned to an instance that is reachable from the internet.

  • Private IP address: It is an IP address assigned to an instance within a VPC and is only reachable within the VPC or connected networks via private connectivity options.

  • Elastic IP address: It is a static, public IP address that users can allocate to their AWS account and assign to instances. Unlike a public IP, an Elastic IP address can be associated with an instance even if it is stopped and started, providing a consistent IP for the resource.

30.How can you secure data at rest in AWS?

AWS offers several options to secure data at rest:

1.Server-Side Encryption: AWS services like S3, EBS, and RDS provide encryption options to encrypt data at rest using keys managed by AWS.

2.Client-Side Encryption: Data can be encrypted on the client-side before storing it in AWS services.

3.AWS Key Management Service (KMS): It allows users to create and manage encryption keys to encrypt and decrypt data.

4.AWS CloudHSM (Cloud Hardware Security Module): Provides dedicated hardware security modules to securely generate and store encryption keys.

31.Explain the differences between Amazon RDS and Amazon DynamoDB.

Amazon RDS (Relational Database Service) is a managed relational database service that supports multiple database engines like MySQL, PostgreSQL, Oracle, and SQL Server. It simplifies the administration tasks such as backups, software patching, and scaling of the database instances.

Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance at any scale. It is designed for applications that require single-digit millisecond latency and can handle high read and write throughput.

32.What is AWS CloudFormation and how is it useful?

AWS CloudFormation is a service that allows you to provision and manage AWS resources using code templates called CloudFormation templates. It helps automate the deployment and configuration of infrastructure resources in a predictable and repeatable manner. CloudFormation templates are written in JSON or YAML and define the desired state of your infrastructure, including EC2 instances, networking, security groups, and more.
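For illustration, here is a minimal template with one parameter and one resource, expressed as a Python dict; real templates are normally authored directly in YAML or JSON, and the AMI ID below is a placeholder:

```python
import json

# A minimal, illustrative CloudFormation template: one parameterized
# EC2 instance. "ami-EXAMPLE" is a placeholder, not a real image ID.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Parameters": {
        "InstanceType": {"Type": "String", "Default": "t3.micro"}
    },
    "Resources": {
        "AppServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "InstanceType": {"Ref": "InstanceType"},  # resolved at deploy time
                "ImageId": "ami-EXAMPLE",
            },
        }
    },
}
print(json.dumps(template, indent=2))
```

Checking this file into version control alongside application code is what gives the versioned, repeatable deployments described above.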