AWS Lambda Introduction

AWS Lambda is a compute service that lets us run code without provisioning or managing servers. AWS Lambda executes our code only when needed and scales automatically, from a few requests per day to thousands per second. We pay only for the compute time we consume; there is no charge when our code is not running. With AWS Lambda, we can run code for virtually any type of application or backend service with zero administration. AWS Lambda runs our code on a high-availability compute infrastructure and performs all of the administration of the compute resources, including server and operating system maintenance, capacity provisioning and automatic scaling, code monitoring and logging. All we need to do is supply our code in one of the languages that AWS Lambda supports (as of now: Node.js, Java, C#, Go and Python).

When Should we Use AWS Lambda?

AWS Lambda is an ideal compute platform for many application scenarios, provided that we can write our application code in one of the languages AWS Lambda supports (Node.js, Java, C#, Go and Python) and run it within the standard runtime environment and resources provided by Lambda.

When using AWS Lambda, we are responsible only for our code. AWS Lambda manages the compute fleet, which offers a balance of memory, CPU, network and other resources. This comes at the cost of flexibility: we cannot log in to the compute instances or customize the operating system or language runtime. These constraints enable AWS Lambda to perform operational and administrative activities on our behalf, including provisioning capacity, monitoring fleet health, applying security patches, deploying our code, and monitoring and logging our Lambda functions.

If we need to manage our own compute resources, Amazon Web Services also offers other compute services to meet our needs.

The Amazon Elastic Compute Cloud (Amazon EC2) service offers flexibility and a wide range of EC2 instance types to choose from. It gives us the option to customize operating systems, network and security settings, and the entire software stack, but we are responsible for provisioning capacity, monitoring fleet health and performance, and using Availability Zones for fault tolerance.

Elastic Beanstalk offers an easy-to-use service for deploying and scaling applications onto Amazon EC2 in which we retain ownership and full control over the underlying EC2 instances.

What is AWS Lambda?

Amazon describes AWS Lambda (λ) as a ‘serverless’ compute service, which means developers do not have to worry about which AWS resources to launch or how to manage them. They simply put the code on Lambda and it runs; it is as simple as that. This lets us focus on our core competency, i.e. building the app or the code itself.

Where will I use AWS Lambda?

AWS Lambda executes our backend code by automatically managing the AWS resources. Here, ‘managing’ includes launching or terminating instances, health checks, auto scaling, and applying updates and security patches.

So, how does it work?

The code that we want Lambda to run is known as a Lambda function. Now, as we know, a function runs only when it is called. Here, an Event Source is the entity that triggers a Lambda function, and the task is then executed.

Let us take an example to understand it more clearly.

Suppose we have an app for image uploading. Now when we upload an image, there are a lot of tasks involved before storing it, such as resizing, applying filters, compression etc.

So, the upload of an image can be defined as the Event Source, or the ‘trigger’, that calls the Lambda function, and all of these tasks can then be executed via the Lambda function. In this example, the developer just has to define the event source and upload the code.

Let’s understand this example with actual AWS resources.

Over here, we will be uploading images in the form of objects to an S3 bucket. Uploading an image to the S3 bucket becomes the event source, or the ‘trigger’.

The whole process, as we can see in the diagram, is divided into 5 steps. Let us understand each one of them.

  1. The user uploads an image (object) to a source bucket in S3 which has a notification attached to it for Lambda.
  2. S3 reads the notification and decides where to send it.
  3. S3 sends the notification to Lambda; this notification acts as the invoke call for the Lambda function.
  4. An execution role in Lambda, defined using IAM (Identity and Access Management), grants access permissions for the AWS resources involved; for this example, that is S3.
  5. Finally, Lambda invokes the desired function, which works on the object that was uploaded to the S3 bucket.
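The steps above can be sketched as a minimal Lambda handler in Python. The bucket and object names below are hypothetical, but the event layout follows the standard S3 notification format:

```python
# Minimal sketch of a Lambda handler for the S3-upload trigger described
# above. The bucket/key names are illustrative; the event shape follows
# the standard S3 notification format.

def lambda_handler(event, context):
    """Pull the bucket name and object key out of each S3 record.
    A real function would resize, filter and compress the image here."""
    processed = []
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Image-processing work (resize, filters, compression) goes here.
        processed.append(f"{bucket}/{key}")
    return {"processed": processed}

# The handler is a plain function, so we can invoke it locally with a
# hand-built event to see what it does:
sample_event = {"Records": [{"s3": {"bucket": {"name": "photo-uploads"},
                                    "object": {"key": "cat.jpg"}}}]}
print(lambda_handler(sample_event, None))  # → {'processed': ['photo-uploads/cat.jpg']}
```

Because the handler is ordinary code, it can be unit-tested locally before it is ever wired to a real bucket.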

If we were to solve this scenario traditionally, along with development, we would have hired people for managing the following tasks:

  • Size, provision and scale up a group of servers
  • Manage OS updates
  • Apply security patches and
  • Monitor all of this infrastructure for performance and availability.

This would have been an expensive, tedious and tiresome task, so the need for AWS Lambda is justified. AWS Lambda is compatible with Node.js, Java, C#, Go and Python, so we can upload our code in a zip file, define an event source, and we are set.

We now know how Lambda works and what Lambda does.

Now, let us understand:

  1. Where to use Lambda?
  2. What purpose does Lambda serve that other AWS compute services don’t?

If we were to architect a solution to a problem, we should be able to identify where to use Lambda.

So, as an architect we have the following options to execute a task:

  • AWS EC2
  • AWS Elastic Beanstalk
  • AWS OpsWorks
  • AWS Lambda

Let’s take the above use case as an example and understand why we chose Lambda to solve it.

AWS OpsWorks and AWS Elastic Beanstalk are used to deploy an app, but our use case is not to create an app; it is to execute back-end code.

Then why not EC2?

If we were to use EC2, we would have to architect everything ourselves: the load balancer, EBS volumes, software stacks etc. With Lambda, we don’t have to worry about any of that; we just insert our code and AWS manages the rest.

For example, on EC2 we would install the software packages on our virtual machine to support our code, but in Lambda we don’t have to worry about any VM at all; we just insert plain code and Lambda executes it for us.

But if our code will run for hours and we expect a continuous stream of requests, we should probably go with EC2, because Lambda’s architecture is designed for sporadic workloads, with some quiet hours as well as spikes in the number of requests.

For example, consider logging the email activity of a small company: there would be more activity during the day than at night, there could be days with fewer emails to process, and sometimes the whole world could start emailing us. In both cases, Lambda is at our service.

For the same use case at a big social networking company, however, where the emails are never-ending because of the huge user base, Lambda may not be the apt choice.

Limitations of AWS Lambda

Some limitations are hardware specific and some are bound by the architecture.

Hardware limitations include the disk size, which is limited to 512 MB, and the memory, which can vary between 128 MB and 1536 MB. There are others too: the execution timeout is capped at just 5 minutes, and our request body payload can be no more than 6 MB. The request body payload is the data that we send with a “GET” or “PUT” request in HTTP, whereas the rest of the request carries the request type, the headers etc.

Actually, these are not so much limitations as design boundaries set in the architecture of Lambda, so if our use case does not fit them, we always have the other AWS compute services at our disposal.
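A quick way to reason about these boundaries is to check a proposed function configuration against them. A minimal sketch in Python; the function and setting names are illustrative, and the values are the ones listed above (AWS has since raised several of these limits):

```python
# Check a hypothetical Lambda configuration against the design boundaries
# described above. The limit values are the ones listed in this article.

LIMITS = {
    "memory_mb": (128, 1536),   # allocatable memory range
    "timeout_s": (1, 300),      # execution timeout capped at 5 minutes
    "payload_mb": (0, 6),       # request body payload
    "tmp_disk_mb": (0, 512),    # scratch disk space
}

def outside_limits(config):
    """Return the names of settings that fall outside Lambda's boundaries."""
    violations = []
    for name, (low, high) in LIMITS.items():
        value = config.get(name)
        if value is not None and not (low <= value <= high):
            violations.append(name)
    return violations

# A configuration asking for 2 GB of memory exceeds the 1536 MB ceiling:
print(outside_limits({"memory_mb": 2048, "timeout_s": 60}))  # → ['memory_mb']
```

If this check reports violations, the workload is a candidate for EC2 or another compute service rather than Lambda.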

Let us now cover the expense part as well.


Pricing in AWS Lambda

Like most AWS services, AWS Lambda is a pay-per-use service, meaning we pay only for what we use. We are therefore charged on the following parameters:

  1. The number of requests that we make to our lambda function
  2. The duration for which our code executes.

Requests

We are charged for the number of requests that we make across all our lambda functions.

AWS Lambda counts a request each time it starts executing in response to an event source or invoke call, including test invocations from the console.

Let us look at the prices:

The first 1 million requests every month are free.

$0.20 per million requests thereafter.

Duration

Duration is calculated from the moment our code starts executing until the moment it returns or terminates, rounded up to the nearest 100 ms.

The price depends on the amount of memory we allocate to our function; we are charged $0.00001667 for every GB-second used.

* Source: AWS official website
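Putting the two billed dimensions together gives a quick back-of-the-envelope estimate. A sketch in Python using the prices above; the function name and workload numbers are illustrative, and the free GB-second allowance that AWS also grants each month is ignored for simplicity:

```python
import math

PRICE_PER_MILLION_REQUESTS = 0.20   # USD, after the first 1M free requests
PRICE_PER_GB_SECOND = 0.00001667    # USD per GB-second of duration
FREE_REQUESTS = 1_000_000

def monthly_cost(requests, avg_duration_ms, memory_mb):
    """Estimate a monthly Lambda bill from requests and duration."""
    # Request charge: the first million requests each month are free.
    billable = max(requests - FREE_REQUESTS, 0)
    request_cost = billable / 1_000_000 * PRICE_PER_MILLION_REQUESTS

    # Duration charge: each invocation is rounded up to the nearest 100 ms,
    # then weighted by the memory allocated to the function (in GB).
    billed_ms = math.ceil(avg_duration_ms / 100) * 100
    gb_seconds = requests * (billed_ms / 1000) * (memory_mb / 1024)
    duration_cost = gb_seconds * PRICE_PER_GB_SECOND

    return request_cost + duration_cost

# 3 million invocations, 120 ms average (billed as 200 ms), 512 MB memory:
print(round(monthly_cost(3_000_000, 120, 512), 2))  # → 5.4
```

Here the request charge is $0.40 (2 million billable requests) and the duration charge is about $5.00 (300,000 GB-seconds), illustrating that for short-running functions the duration component usually dominates the bill.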

Let us create a Lambda function which will log “An object has been added” once we add an object to a specific bucket in S3.

Step 1: From the AWS Management Console, under the Compute section, select AWS Lambda.

Step 2: On the AWS Lambda console, click on “Create a Lambda function”.

Step 3: On the next page, we have to select a blueprint. For our use case, we will select the blank function.

Step 4: On the next page we will (1) set a trigger; since we are going to work with S3, (2) select the S3 trigger and then (3) click Next.

Step 5: On the configuration page, fill in the details. After that, fill in the handler and role, leave the advanced settings as they are, and click Next.

Step 6: On the next page, review all the information and click on “Create function”.

Step 7: Now that we have created the function for the S3 bucket, the moment we add a file to the bucket we should see a log entry for it in CloudWatch, which is a monitoring service from AWS.
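The function body behind this walkthrough can be as small as the sketch below. It logs “An object has been added” for every object the S3 trigger reports; the object key used in the local invocation is hypothetical. Anything written with print() in a Lambda function ends up in CloudWatch Logs.

```python
# Sketch of the walkthrough's Lambda function: log a message for every
# object reported by the S3 trigger. print() output goes to CloudWatch Logs.

def lambda_handler(event, context):
    messages = []
    for record in event.get("Records", []):
        key = record["s3"]["object"]["key"]
        messages.append(f"An object has been added: {key}")
        print(messages[-1])
    return messages

# Local invocation with a hand-built S3 notification event:
event = {"Records": [{"s3": {"object": {"key": "report.pdf"}}}]}
print(lambda_handler(event, None))  # → ['An object has been added: report.pdf']
```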

What is Amazon Web Services (AWS)?

Amazon Web Services (AWS) is a secure cloud services platform from Amazon. AWS offers compute power, database storage, content delivery and other functionality to help businesses scale and grow. AWS provides services in the form of building blocks which can be used to create and deploy sophisticated, scalable applications that support any workload in the cloud, without any upfront costs or commitments; we pay only for what we use.

The AWS Global Infrastructure is the physical part of AWS, made up of Regions, Availability Zones and Edge locations.

A Region is a geographical area where AWS resources exist. An AWS Region consists of 2 or more Availability Zones. An Availability Zone (AZ) is an AWS data center. Availability Zones are isolated from one another, so issues like a natural calamity in one zone will not affect another. Resources are not replicated across AWS Regions unless we specifically do so. Edge locations are points of presence used as content delivery network (CDN) endpoints for AWS CloudFront. They are used for caching large multimedia content. For example, when a user downloads a video file, the file is cached at the Edge location and reused when someone else requests the same content next time.


The following table lists the regions provided by an AWS account.

The AWS Regions called AWS GovCloud are designed to allow US government agencies and customers to move more sensitive workloads into the cloud. AWS GovCloud addresses the US government’s specific regulatory and compliance requirements. (https://docs.aws.amazon.com/govcloud-us/latest/ug-west/whatis.html)

If we do not explicitly specify an endpoint, the US West (Oregon) endpoint is the default.

AWS services or building blocks are designed to work with each other and result in highly available applications which are sophisticated and scalable.

There are multiple services widely used as mentioned below:

  • Compute
  • Storage
  • Database
  • Migration
  • Network and Content Delivery
  • Management Tools
  • Security & Identity Compliance
  • Messaging

The Compute part of AWS includes services related to compute workloads and has the following services:

  • Elastic Compute Cloud (EC2) – virtual machines in the AWS cloud, similar to VMware.
  • EC2 Container Service (ECS) – highly scalable, high-performance container management that supports Docker containers.
  • Elastic Beanstalk – used for intelligently deploying our applications.
  • Lambda – serverless computing, with no direct usage of hosts or an OS. Code is executed in response to events. E.g. Amazon Echo uses the Lambda service.
  • Amazon Lightsail – out-of-the-box websites, e.g. based on WordPress, deployed automatically.

The Storage domain includes services related to data storage; it includes the following services:

  • S3 (Simple Storage Service) – a virtual disk in the cloud used for storing documents and media files, which are known as objects. E.g. Dropbox uses S3 for storing documents.
  • Glacier – archives files from S3 storage; used for storing files which are no longer in use but must be retained for compliance requirements. It is a low-cost service.
  • EFS (Elastic File System) – file-based storage used for sharing applications and databases among multiple virtual machines.
  • Storage Gateway – used for connecting S3 to our on-premise data center.
  • EBS (Elastic Block Store) – a virtual disk which can be attached to an EC2 instance.

The Database domain is used for database-related workloads; it includes the following services:

  • RDS (Relational Database Service) – MySQL, MariaDB, PostgreSQL, SQL Server, Oracle.
  • DynamoDB – a non-relational (NoSQL) database; scalable and high-performance.
  • Redshift – Amazon’s data warehouse solution for Big Data, used for running reports.
  • ElastiCache – caches frequently used data in the cloud, reducing the load on the database.

The Migration domain is used for transferring data to or from the AWS infrastructure; it includes the following services:

  • Snowball – used for import and export of data; supports transfer of huge amounts of data on physical disks.
  • Database Migration Service (DMS) – migrate an on-premise database to the AWS cloud, or from one region to another.
  • SMS (Server Migration Service) – used for migrating our virtual machines from on-premise to the AWS cloud.

The Networking and Content Delivery domain is used for isolating our network infrastructure and for faster delivery of content. It includes the following services:

  • Virtual Private Cloud (VPC) – like a virtual data center where our assets are deployed.
  • Amazon Route 53 – Amazon’s DNS service, which allows us to register domain names.
  • AWS CloudFront – moved from the storage section to the networking section; consists of Edge locations.
  • Direct Connect – allows us to connect our office or physical data center to the AWS network over a dedicated network connection, for security and reliability.

The Management Tools domain consists of services which are used to manage other services in AWS, it includes the following services:

  • CloudWatch – used to monitor the performance of our AWS environment: disk, RAM and CPU utilization.
  • CloudFormation – turns infrastructure into code. It creates a document which describes our AWS environment and acts as a template that can be used for deploying new servers.
  • CloudTrail – used for auditing our AWS resources. It records user activity and changes made to the environment; e.g. if a new user is created, that activity gets recorded.

The Security & Identity, Compliance domain consists of services used to authenticate users and secure our AWS resources. It consists of the following services:

  • IAM (Identity and Access Management) – fundamental to AWS. This service authenticates users signing in to AWS and lets us set up new users, manage their permissions and group them (e.g. developers, administrators etc.).
  • Inspector – an agent installed on virtual machines which inspects them and reports on their security.
  • Certificate Manager – provides SSL certificates for our domains.
  • Directory Service – connects Microsoft Active Directory to AWS.
  • WAF (Web Application Firewall) – application-level protection for our web applications; can prevent SQL injection and cross-site scripting.

The Messaging domain consists of services which are used for queuing, notifying or emailing messages. It consists of the following services:

  • SNS (Simple Notification Service) – notifies via email, text messages or HTTP endpoints.
  • SQS (Simple Queue Service) – allows us to post jobs to a queue, which are processed asynchronously.
  • SES (Simple Email Service) – allows us to send and receive email from the AWS environment.

Building Applications

To begin with, we should analyze what our application is about. Does it require us to worry about the underlying infrastructure? Does it require a database? Does it need monitoring?

Once we know all the requirements of our application, we can pick the domain and then choose a service.

For example, suppose we want to deploy an application in AWS and our application does not require us to worry about the underlying infrastructure. Which service would we choose?

Well, in the compute section there is this service called Elastic Beanstalk. We just upload our application, and AWS does the rest for us. It is that simple!

Of course, we would not know about any of these services without using them. That is why AWS came up with an amazing free tier option.

Who is eligible for a free tier?

Every customer, from the time they register on AWS, receives the free tier option and remains eligible for it until 1 year from the time of registration.

How shall this help?

We can try every service available in the free tier of AWS and learn about those services. The more we practice, the more we learn about AWS.

So basically, we learn for free!

How do you sign up on AWS?

Step 1: Go to aws.amazon.com and click on Create an AWS Account.


Step 2:

Cloud Computing Job Roles – Azure Cloud

It’s raining jobs in Cloud!

  • Companies of all sizes are moving in greater numbers to the cloud while cloud providers continue to grow their operations to support more and more workloads.
  • An IDC report released in 2012 estimated a worldwide growth of 14 million Cloud-based jobs by the end of 2015.
  • There are about 100 jobs chasing each qualified candidate at this point in time, according to technical recruiters.

With the cloud, many roles will be redefined or replaced with new ones.

  1. Cloud System Engineer / I.T. Professional
    • Responsible for implementing and operating the virtual systems that support a cloud implementation.
    • Builds and configures virtual networks and provisions virtual machines, storage accounts, databases, network load balancers, gateways etc.
    • Responsible for the scale-in/scale-out infrastructure.
    • Should have system engineering experience and a holistic understanding of the Internet and hosting, from the network layer up through the application layer.
    • Should have experience in a 24×7 hosting environment.
    • Should have knowledge of maintenance and monitoring tools, scripting, configuration management tools, network security, firewalls, etc.

  2. Application Developers / Software Engineer
    • Responsible for design and development of different types of software applications that integrate with cloud service providers.
    • Developers can take advantage of managed services such as databases, storages, queues, caches, workflows, and more to bring new applications to market quicker and cheaper than ever before.
    • They need to understand how these managed services can be used to build highly available, fault-tolerant and scalable applications.
    • Increasingly, job requirements for developer opportunities are adding Cloud Computing as a must-have skill.
    • Required credentials: a Computer Science engineering degree with 2+ years of professional experience in software development, and an excellent understanding of at least one language such as C#, Java, PHP or Python.
  3. DevOps Engineers
    • Responsible for Automation of deployment and configuration of applications.
    • DevOps represents a merger between development and operations. It breaks down the barrier of developers and operations engineers with the goal of streamlining the application lifecycle.
    • The role is often responsible for managing the infrastructure through version-controlled source files that can be used to recreate cloud environments in hours or minutes instead of weeks or days under the traditional model.
    • DevOps is more attainable now than it ever has been with the ease of automation for infrastructure and software services, making it a natural choice for developers and/or system administrators with scripting experience.
  4. Cloud Architect
    • Should possess a strong understanding of how to design and build Cloud environments to ensure that systems are scalable, reliable, secure and supportable and that they achieve business performance and budgetary objectives.
    • Their knowledge of a Cloud platform is broad enough to know which services are best suited for any particular situation including whether or not a hybrid environment makes sense.
    • Should have significant experience designing, installing and administrating virtualized environments. They lead migration projects to move companies into the Cloud.
    • They design for disaster recovery and mitigation.
    • They will be required in companies which build applications and/or infrastructure in the Cloud.
    • There are various certifications that, when combined with experience, can help Cloud architects stand out. Additionally, they must stay up-to-date on the latest and greatest features of Cloud platforms to stay competitive in the market.
    • Required credentials: Engineers with 8 to 10 years of experience dealing with large-scale, multiplatform networks, expert level knowledge of Linux and Windows OS. High-level understanding or programming languages. Significant experience designing, installing and administrating virtualized environments.
  5. Cloud Services Developer
  6. Cloud System Administrator
  7. Cloud Sales Executive
  8. Cloud Consultant

Cloud Computing Platforms and Certifications

Deployment Models in Cloud Computing & Advantage and Disadvantages

There are three main deployment models in Cloud Computing.

1. Private Cloud:

  • A private cloud hosting solution, also known as an internal or enterprise cloud, resides on the company’s intranet or hosted data center where all of your data is protected behind a firewall.
  • This can be a great option for companies who already have expensive data centers because they can use their current infrastructure.
  • You go for a private cloud when you have strict security and data privacy requirements.
  • Cons: The main drawback people see with a private cloud is that all management, maintenance, and updating of data centers is the responsibility of the company.

2. Public Cloud:

  • These are the clouds which are open for use by the general public and they exist beyond the firewall of an organization, fully hosted and managed by vendors.
  • Your data is stored in the provider’s data center and the provider is responsible for the management and maintenance of the data center.
  • Because you are sharing computing resources among a network of users, the public cloud offers greater flexibility and cost savings.
  • This is a good option if your demand for computing resources fluctuates. You have to purchase the capacity on the basis of usage and can scale up or scale down server capabilities based on traffic and other dynamic requirements.
  • This type of cloud environment is appealing to many companies because it reduces lead times in testing and deploying new products.
  • Cons: They are more vulnerable than private clouds and there is no control of resources used or who shares them.
  • Note: Even though you don’t control the security of a public cloud, all of your data remains separate from others and security breaches of public clouds are extremely rare.

3. Hybrid Clouds:

  • They consist of external and internal providers, namely a mix of public and private clouds.
  • Secure and critical apps are managed by the organization, and the not-so-critical apps by the third-party vendor. For example, you can use a public cloud to interact with clients but keep their data secured within a private cloud. Most companies are now switching to hybrid clouds.
  • Ideal in situations where you plan to migrate to a complete cloud solution as existing hardware expires, or where you have some applications or hardware that are not ready for the cloud.

Advantages and Disadvantages of Cloud Computing

Advantages of Cloud Computing:

  1. Lower Computer Cost.
  2. Improved Performance.
  3. Reduced Software Cost and Instant Software Updates.
  4. Unlimited Storage Capacity.
  5. Universal Document Access.
  6. Increased data reliability.
  7. Device Independence.

Disadvantages of Cloud Computing:

  1. Requires a constant Internet connection.
  2. Does not work well with low-speed connections.
  3. Features might be limited based on the provider you choose.
  4. Can be slow.
  5. Stored data might not be secure.
  6. If your data is stored abroad, whose policy do you adhere to?