DevOps Tools

To expedite and actualize the DevOps process, beyond adopting it culturally, one also needs various DevOps tools such as Puppet, Jenkins, Git, Chef, Docker, Selenium, and Azure/AWS to achieve automation at various stages. These tools help achieve Continuous Development, Continuous Integration, Continuous Testing, Continuous Deployment, and Continuous Monitoring, so that quality software can be delivered to the customer at a very fast pace.

Tools Required for DevOps

  • Version Control System
  • Configuration Management
  • Ticketing System
  • Resource Monitoring
  • Provisioning

What are CI and CD?

  • Continuous integration is a software development practice where developers regularly merge their code changes into a central repository, after which automated builds and tests are run. The key goals of continuous integration are to find and address bugs quicker, improve software quality, and reduce the time it takes to validate and release new software updates.
  • Continuous delivery is a software development practice where code changes are automatically built, tested, and prepared for a release to production. It expands upon continuous integration by deploying all code changes to a testing environment and/or a production environment after the build stage. When continuous delivery is implemented properly, developers will always have a deployment-ready build artifact that has passed through a standardized test process.

CI and CD enable agile teams to increase deployment frequency and to decrease lead time for changes, change failure rate, and mean time to recovery (the key performance indicators, or KPIs, of software delivery), thereby improving quality and delivering value faster. The only prerequisites are a solid development process, a mindset of quality and accountability for features from ideation to deprecation, and a comprehensive pipeline.
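To make the CI idea concrete, here is a minimal sketch (plain Python, not tied to any particular CI product) of a gate script that a build server could run on every commit. The `src` and `tests` paths and the use of pytest are assumptions for the example:

```python
import subprocess
import sys

def run(cmd: list[str]) -> None:
    """Run one pipeline step and abort the build if it fails."""
    print(f"[ci] running: {' '.join(cmd)}")
    result = subprocess.run(cmd)
    if result.returncode != 0:
        print(f"[ci] step failed with exit code {result.returncode}")
        sys.exit(result.returncode)

if __name__ == "__main__":
    # Typical CI stages on every commit: build, then automated tests.
    run(["python", "-m", "compileall", "src"])  # "build": byte-compile the sources
    run(["python", "-m", "pytest", "tests"])    # run the automated test suite
    print("[ci] all checks passed; the artifact is ready for the CD stage")
```

A real CI server (Jenkins, Azure Pipelines, and so on) does essentially this on every commit, plus source checkout, artifact publishing, and reporting.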

DevOps as a profession – DevOps Engineer

When the company’s management decides to shift to DevOps, the need arises to train IT department specialists to master certain practices and use new tools. In this case, either developers or system administrators need to assume new job responsibilities. A better alternative may be hiring a professional with a clear understanding of the DevOps approach and an ability to set all the necessary processes properly.

After getting software requirements specifications, a DevOps engineer starts setting up the IT infrastructure required for the development. When the IT infrastructure is ready and provided to developers, testers, and other specialists involved in the development cycle, a DevOps engineer ensures that the development and testing environments are aligned with the production environment.

If you ask the DevOps engineer what exactly they do, the answer will likely mention “automation”. What they actually mean is the following:

  • Automating software delivery from the testing environment to the production.
  • Managing physical and virtual servers and their configurations.
  • Monitoring the IT infrastructure’s state and the application’s behavior.
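As a tiny illustration of that last point, here is a hedged sketch of an availability probe of the kind such monitoring automates; the health-check URL and the polling interval are made-up placeholders:

```python
import time
import urllib.request

URL = "https://example.com/health"  # placeholder health endpoint
INTERVAL_SECONDS = 30

def probe(url: str) -> bool:
    """Return True if the application answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=5) as response:
            return response.status == 200
    except OSError:
        return False

if __name__ == "__main__":
    while True:
        status = "UP" if probe(URL) else "DOWN"
        print(f"{time.strftime('%H:%M:%S')} {URL} is {status}")
        time.sleep(INTERVAL_SECONDS)
```

Production monitoring tools (Grafana, Zabbix, Prometheus, mentioned below) do the same thing at scale, with alerting and dashboards on top.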

Things to know to become a DevOps engineer

1. Linux and/or Windows administration
2. Programming skills
3. Cloud management skills
4. Practical experience with containers and orchestration

Below is a more comprehensive list of tools most commonly required to get a job in DevOps:

  • Version control systems (Git, Team Foundation Server (TFS), Apache Subversion, etc.).
  • Continuous integration tools (Jenkins, Travis CI, TeamCity, and others).
  • Software deployment automation platforms (Chef, Puppet, Ansible, etc.).
  • Systems monitoring tools (Grafana, Zabbix, Prometheus, and the like).
  • Cloud infrastructure (AWS, Microsoft Azure, Google Cloud Platform (GCP), Alibaba Cloud, and more).
  • Container orchestration tools (such as Kubernetes, Docker Swarm, Apache Mesos, OpenShift, Google Kubernetes Engine (GKE), Azure Kubernetes Service (AKS), and AWS EKS).

DevOps engineers’ average salary in the US is twice as high as that of a system administrator. The reason is quite simple: a competent DevOps engineer can greatly increase the efficiency of software development and operations.
More: https://www.scnsoft.com/blog/how-to-become-a-devops-engineer

Azure DevOps Features
You can use one or more of the following features based on your business needs:
1. Azure Boards delivers a suite of Agile tools to support planning and tracking work, code defects, and issues using Kanban and Scrum methods.
2. Azure Repos provides Git repositories or Team Foundation Version Control (TFVC) for source control of your code.
3. Azure Pipelines provides build and release services to support continuous integration and delivery of your apps.
4. Azure Test Plans provides several tools to test your apps, including manual/exploratory testing and continuous testing.
5. Azure Artifacts allows teams to share Maven, npm, and NuGet packages from public and private sources and to integrate package sharing into your CI/CD pipelines.

Azure DevOps supports adding extensions and integrating with other popular services, such as Campfire, Slack, Trello, UserVoice, and more, as well as developing your own custom extensions. Azure DevOps provides extensive integration with industry and community tools; it is far from the closed-off, single-vendor solution that the early versions of TFS were. As noted above, there is a marketplace that makes hundreds of extensions available, so if Azure DevOps doesn’t do something out of the box, odds are a tool exists in the marketplace that does.
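Much of this openness comes from the Azure DevOps REST API, which the services and extensions above build on. As a small, hedged sketch, the following lists the projects in an organization; the organization name and the personal access token (PAT) are placeholders you supply yourself:

```python
import base64
import json
import urllib.request

ORGANIZATION = "yourorganization"  # placeholder: your Azure DevOps organization
PAT = "<personal-access-token>"    # placeholder: a PAT with read scope

# Azure DevOps accepts a PAT over HTTP basic auth with an empty user name.
token = base64.b64encode(f":{PAT}".encode()).decode()
url = f"https://dev.azure.com/{ORGANIZATION}/_apis/projects?api-version=6.0"

request = urllib.request.Request(url, headers={"Authorization": f"Basic {token}"})
with urllib.request.urlopen(request) as response:
    projects = json.load(response)

for project in projects["value"]:
    print(project["name"])
```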

Pricing:
https://azure.microsoft.com/en-in/pricing/details/devops/azure-devops-services/

To Create a DevOps Account / Organization:
Sign up with a personal Microsoft account:
1. Visit https://azure.microsoft.com/services/devops/
2. Click Start free.
3. Either log in with an existing Microsoft account or create a new one.
An organization is created based on the account you used to sign in. You can sign in to your organization at any time at https://dev.azure.com/{yourorganization}.

Create a Project to get started

  • If you signed up for Azure DevOps with a newly created Microsoft account (MSA), your project is automatically created and named based on your sign-in.
  • If you signed up for Azure DevOps with an existing MSA or GitHub identity, you’re automatically prompted to create a project. You can create either a public or private project.
  • A public GitHub repository is accessible to everyone, whereas a private repository is accessible to you and the people you share it with. In both cases, only collaborators can commit changes to a GitHub repository.

Create Users
1. Create an Outlook ID.
2. Activate your subscription (a FREE account, an Azure Pass sponsorship, or owner rights on another user’s subscription).
3. Visit https://portal.office.com
4. Go to Azure Active Directory and create the users user1@orgname.onmicrosoft.com and user2@orgname.onmicrosoft.com

Invite team members
1. Create a couple of Outlook IDs.
2. Use the primary email ID and visit https://dev.azure.com/.
3. Click Start free.
4. Create a project to get started:
 a. Project Name = “Demo Project”
 b. Description = “For Demo”
 c. Visibility = “Private”
 d. Expand Advanced; Version Control = Git, Work Item process = “Agile”
 e. Create Project
5. Invite users:
 a. Use the breadcrumb to navigate to the organization.
 b. Select Organization settings.
 c. Select Users > Add new users.
 d. Users = <email ID of another user>, Access level = Basic, Add to project = <project created before>, Azure DevOps Group = Project Contributors
Note: You will now have to log in as the other user and accept the invitation. Do this in a different browser.
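The invitation can also be scripted. Below is a hedged sketch assuming the Member Entitlement Management REST endpoint (vsaex.dev.azure.com) and the "express" license type (the API’s name for the Basic access level); the API version and all credentials are assumptions or placeholders to verify against the official documentation:

```python
import base64
import json
import urllib.request

ORGANIZATION = "yourorganization"  # placeholder
PAT = "<personal-access-token>"    # placeholder: PAT with entitlement scope
NEW_USER_EMAIL = "user2@orgname.onmicrosoft.com"

token = base64.b64encode(f":{PAT}".encode()).decode()
url = (f"https://vsaex.dev.azure.com/{ORGANIZATION}/_apis/userentitlements"
       "?api-version=6.0-preview.3")  # assumed API version

# "express" is assumed to be the license type behind the Basic access level.
body = {
    "accessLevel": {"accountLicenseType": "express"},
    "user": {"principalName": NEW_USER_EMAIL, "subjectKind": "user"},
}

request = urllib.request.Request(
    url,
    data=json.dumps(body).encode(),
    headers={"Authorization": f"Basic {token}",
             "Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(request) as response:
    print("Invitation response status:", response.status)
```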


What is Azure DevOps?

Azure DevOps (formerly Visual Studio Team Services) is a hosted suite of services providing development and collaboration tools for anyone who wants an enterprise-grade DevOps tool chain. Azure DevOps can help your team release code in a more efficient, cooperative, and stable manner. Azure DevOps has a lot of built-in functionality that allows teams to get up and running with managing their projects and automating their workflows to increase productivity, with a very short initial learning curve.

You can quickly get up and running with the many tools available:

  • Git repositories for source control
  • Build and Release pipelines for CI/CD automation
  • Agile tools covering Kanban/Scrum project methodologies
  • Many pre-built deployment tasks/steps to cover the most common use cases, and the ability to extend these with your own tasks
  • Hosted build/release agents, with the ability to run your own as well
  • Custom dashboards to report on build/release and agile metrics
  • Built-in wiki

Azure DevOps is available in two different forms:

  • Azure DevOps Server, collaboration software for software development formerly known as Team Foundation Server (TFS) and Visual Studio Team System (VSTS)
  • Azure DevOps Services, cloud service for software development formerly known as Visual Studio Team Services and Visual Studio Online

History: The first version of Team Foundation Server was released on March 17, 2006.

Product name                       Form          Release year
Visual Studio 2005 Team System     On-premises   2006
Visual Studio Team System 2008     On-premises   2008
Team Foundation Server 2010        On-premises   2010
Team Foundation Service Preview    Cloud         2012
Team Foundation Server 2012        On-premises   2012
Visual Studio Online               Cloud         2013
Team Foundation Server 2013        On-premises   2013
Team Foundation Server 2015        On-premises   2015
Visual Studio Team Services        Cloud         2015
Team Foundation Server 2017        On-premises   2017
Team Foundation Server 2018        On-premises   2017
Azure DevOps Services              Cloud         2018
Azure DevOps Server 2019           On-premises   2019

Traditional Software Development Life Cycle

The developers create applications and the operations teams deploy them to an infrastructure they manage.

Responsibilities of Developers

1. Develop software applications.
2. Implement new features.
3. Collaborate with other developers on the team.
4. Maintain source repositories and deal with versions.
5. Pass the code on to the operations team.

Responsibilities of IT Operations

1. Determine how the software and hardware are managed.
2. Plan and provide the required IT infrastructure for testing and production of applications.
3. Deploy the application and database.
4. Validate and monitor performance.

Waterfall Model:

The waterfall model divides the project into sequential phases, each completed before the next begins: requirements, design, implementation, verification, and maintenance.

How traditional systems worked:

Tasks would be divided among different groups based on specialization:

1. A group to write specifications.
2. A group to develop the application.
3. A group to test the application.
4. A group to configure and manage VMs.
5. A group that hands the VM over to another group to install the database.
6. and so on…

A system or process is created for each action, and each group operates in isolation from the others. Groups communicate with each other in a very formal way, such as through a ticketing system.

Drawbacks:

  • This requires handoffs from one group to another, which can introduce significant delays, inconsistencies, and inaccuracies.
  • The lack of a common approach among the groups contributes to long build times and errors.
  • And the blame game begins.

What is the Agile Methodology?

Agile is a process by which a team can manage a project by breaking it up into several stages and involving constant collaboration with stakeholders and continuous improvement and iteration at every stage. There are no surprises. Continuous collaboration is key, both among team members and with project stakeholders, to make fully-informed decisions.

Scrum is a framework for project management that emphasizes teamwork, accountability, and iterative progress toward a well-defined goal.

The three pillars of Scrum are transparency, inspection, and adaptation. The framework, which is often part of Agile software development, is named for a rugby formation. Scrum is one implementation of the agile methodology, in which incremental builds are delivered to the customer every two to three weeks.


Azure DevOps Development and Operation


See the link below for DevOps lessons from Formula 1:
https://www.devopsgroup.com/blog/devops-lessons-formula-1-part-2/

DevOps Practices

  • Agile planning. Together, we’ll create a backlog of work that everyone on the team and in management can see. We’ll prioritize the items so we know what we need to work on first. The backlog can include user stories, bugs, and any other information that helps us.
  • Continuous integration (CI). We’ll automate how we build and test our code. We’ll run that every time a team member commits changes to version control.
  • Continuous delivery (CD). CD is how we test, configure, and deploy from a build to a QA or production environment.
  • Monitoring. We’ll use telemetry to get information about an application’s performance and usage patterns. We can use that information to improve as we iterate.

How Does DevOps Work?

Under a DevOps model, development and operations teams are no longer “siloed.” Sometimes these two teams are merged into a single team in which the engineers work across the entire application lifecycle. From design and development to test automation, and from continuous integration to continuous delivery, the team works together to achieve the desired goal. People with both development and operations skill sets work together and use various tools for CI/CD and monitoring to respond quickly to customers’ needs and to fix issues and bugs.

Benefits of DevOps over Traditional IT

Speed + Rapid Delivery + Reliability + Scale + Improved Collaboration + Security

With DevOps, teams:

1. Deploy more frequently
In fact, some teams deploy up to dozens of times per day. Practices such as monitoring, continuous testing, database change management, and integrating security earlier in the software development process help elite performers deploy more frequently, and with greater predictability and security.
2. Reduce lead time from commit to deploy
Lead time is the time it takes for a feature to make it to the customer. By working in smaller batches, automating manual processes, and deploying more frequently, elite performers can achieve in hours or days what once took weeks or even months.
3. Reduce change failure rate
A new feature that fails in production or that causes other features to break can create a lost opportunity between you and your users. As high-performing teams mature, they reduce their change failure rate over time.
4. Recover from incidents more quickly
When incidents do occur, elite performers are able to recover more quickly. Acting on metrics helps elite performers recover more quickly while also deploying more frequently.

How you implement cloud infrastructure also matters. The cloud improves software delivery performance, and teams that adopt essential cloud characteristics are more likely to become elite performers.

The information here is based on DevOps research reports and surveys conducted with technical professionals worldwide.

Organizations that have implemented DevOps saw these benefits:

1. Improved quality of software deployments: 65%
2. More frequent software releases: 63%
3. Improved visibility into IT processes and requirements: 61%
4. Cultural change (collaboration and cooperation): 55%
5. More responsiveness to business needs: 55%
6. More agile development: 51%
7. More agile change management processes: 45%
8. Improved quality of code: 38%


AWS Lambda Introduction

AWS Lambda is a compute service that lets us run code without provisioning or managing servers. The AWS Lambda service executes our code only when needed and scales automatically, from a few requests per day to thousands per second. We pay only for the compute time we consume; there is no charge when our code is not running. With AWS Lambda, we can run code for virtually any type of application or backend service, with zero administration. AWS Lambda runs our code on a high-availability compute infrastructure and performs all of the administration of the compute resources, including server and operating system maintenance, capacity provisioning and automatic scaling, code monitoring, and logging. All we need to do is supply our code in one of the languages that AWS Lambda supports (as of this writing: Node.js, Java, C#, Go, and Python).
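For orientation, here is the smallest possible Lambda function in Python; Lambda calls the handler with the triggering event (a dict) and a runtime context object:

```python
# A minimal AWS Lambda handler in Python. Lambda invokes this function with
# the triggering event and a context object describing the invocation.
def lambda_handler(event, context):
    print("Received event:", event)  # print() output goes to CloudWatch Logs
    return {"statusCode": 200, "body": "Hello from Lambda"}
```

We upload this code (zipped or inline), point an event source at it, and Lambda takes care of everything else.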

When Should we Use AWS Lambda?

AWS Lambda is an ideal compute platform for many application scenarios, provided that we can write our application code in the languages AWS Lambda supports and run it within the AWS Lambda standard runtime environment and the resources provided by Lambda.

When using AWS Lambda, we are responsible only for our code. AWS Lambda manages the compute fleet that offers a balance of memory, CPU, network, and other resources. This comes at the cost of flexibility: we cannot log in to compute instances or customize the operating system or language runtime. These constraints enable AWS Lambda to perform operational and administrative activities on our behalf, including provisioning capacity, monitoring fleet health, applying security patches, deploying our code, and monitoring and logging our Lambda functions.

If we need to manage our own compute resources, Amazon Web Services also offers other compute services to meet our needs.

The Amazon Elastic Compute Cloud (Amazon EC2) service offers flexibility and a wide range of EC2 instance types to choose from. It gives us the option to customize operating systems, network and security settings, and the entire software stack, but we are responsible for provisioning capacity, monitoring fleet health and performance, and using Availability Zones for fault tolerance.

Elastic Beanstalk offers an easy-to-use service for deploying and scaling applications onto Amazon EC2 in which we retain ownership and full control over the underlying EC2 instances.

What is AWS Lambda?

Amazon describes AWS Lambda (λ) as a ‘serverless’ compute service, which means developers do not have to worry about which AWS resources to launch or how to manage them; they just put the code on Lambda and it runs. It helps us focus on our core competency, i.e., building the app or the code.

Where will I use AWS Lambda?

AWS Lambda executes our backend code by automatically managing the AWS resources. Here, ‘managing’ includes launching or terminating instances, health checkups, auto scaling, applying updates and patches, etc.

So, how does it work?

The code that we want Lambda to run is known as a Lambda function. Now, as we know, a function runs only when it is called. Here, an Event Source is the entity that triggers a Lambda function, after which the task is executed.

Let us take an example to understand it more clearly.

Suppose we have an app for image uploading. When we upload an image, there are a lot of tasks involved before storing it, such as resizing, applying filters, compression, etc.

So, the task of uploading an image can be defined as an Event Source, or the ‘trigger’, that will call the Lambda function, and all of these tasks can then be executed via the Lambda function. In this example, the developer just has to define the event source and upload the code.

Let’s understand this example with actual AWS resources.

Here, we will be uploading images as objects to an S3 bucket. Uploading an image to the S3 bucket becomes the event source, or the ‘trigger’.

The whole process is divided into five steps. Let us understand each one of them.

  1. The user uploads an image (object) to a source bucket in S3 which has a notification attached to it for Lambda.
  2. The notification is read by S3, which decides where to send it.
  3. S3 sends the notification to Lambda; this notification acts as an invoke call for the Lambda function.
  4. An execution role in Lambda can be defined using IAM (Identity and Access Management) to grant access permissions to the AWS resources, which for this example would be S3.
  5. Finally, the desired Lambda function is invoked, and it works on the object that has been uploaded to the S3 bucket.
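Inside the function, the S3 notification arrives as a structured event. Here is a hedged sketch of step 5, reading the bucket and key out of the standard S3 event payload; the image processing itself is left as a comment:

```python
# Sketch of the Lambda function in step 5: pull the bucket name and object
# key out of the S3 notification event that invoked us.
def lambda_handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"New object s3://{bucket}/{key}")
        # ...resize, apply filters, compress, then store the result...
    return {"processed": len(event["Records"])}
```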

If we were to solve this scenario traditionally, then along with development we would have hired people for managing the following tasks:

  • Sizing, provisioning, and scaling up the group of servers
  • Managing OS updates
  • Applying security patches
  • Monitoring all of this infrastructure for performance and availability

This would have been an expensive, tedious, and tiresome task; therefore, the need for AWS Lambda is justified. AWS Lambda is compatible with Java, C#, Node.js, Go, and Python, so we can upload our file as a zip, define an event source, and we are set.

We now know how Lambda works and what Lambda does.

Now, let us understand-

  1. Where to use Lambda?
  2. What purpose does Lambda serve, that other AWS Compute services don’t?

If we were to architect a solution to a problem, we should be able to identify where to use Lambda.

So, as an architect we have the following options to execute a task:

  • AWS EC2
  • AWS Elastic Beanstalk
  • AWS OpsWorks
  • AWS Lambda

Let’s take the above use case as an example and understand why we chose Lambda to solve it.

AWS OpsWorks and AWS Elastic Beanstalk are used to deploy an app, but our use case is not to create an app; it is to execute back-end code.

Then why not EC2?

If we were to use EC2, we would have to architect everything ourselves, i.e., the load balancer, EBS volumes, software stacks, etc. With Lambda, we don’t have to worry about any of that; we just insert our code and AWS manages the rest.

For example, on EC2 we would be installing the software packages on our virtual machine to support our code, but with Lambda we don’t have to worry about any VM; we just insert plain code and Lambda executes it for us.

But if our code will be running for hours and we expect a continuous stream of requests, we should probably go with EC2, because the architecture of Lambda is designed for sporadic workloads, with some quiet hours as well as spikes in the number of requests.

For example, logging the email activity of a small company would see more activity during the day than at night; there could also be days with fewer emails to process, and days when the whole world starts emailing us. In both cases, Lambda is at our service.

Considering this use case for a big social networking company, where the emails are never ending because it has a huge user base, Lambda may not be the apt choice.

Limitations of AWS Lambda

Some limitations are hardware specific and some are bound by the architecture.

Hardware limitations include the disk size, which is limited to 512 MB, and the memory, which can vary between 128 MB and 1536 MB. There are others as well: the execution timeout can be at most 5 minutes, and the request body payload can be no more than 6 MB. The payload is the data we send with a request, such as with an HTTP “GET” or “PUT”, as distinct from the rest of the request, which carries the request type, the headers, etc.

Actually, these are not so much limitations as design boundaries set in the architecture of Lambda, so if our use case does not fit them, we always have the other AWS compute services at our disposal.

Let us now cover the expense part as well.


Pricing in AWS Lambda

Like most AWS services, AWS Lambda is a pay-per-use service, meaning we only pay for what we use. We are therefore charged on the following parameters:

  1. The number of requests that we make to our lambda function
  2. The duration for which our code executes.

Requests

We are charged for the number of requests that we make across all our lambda functions.

AWS Lambda counts a request each time it starts executing in response to an event source or an invoke call, including test invocations from the console.

Let us look at the prices:

  • The first 1 million requests every month are free.
  • $0.20 per million requests thereafter.

Duration

Duration is calculated from the moment our code starts executing until the moment it returns or terminates, rounded up to the nearest 100 ms.

The price depends on the amount of memory we allocate to our function; we are charged $0.00001667 for every GB-second used.

* Source: AWS official website
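To make the arithmetic concrete, here is a small sketch that estimates a monthly bill from the two rates above. The 400,000 GB-seconds of free compute per month is an additional figure from AWS’s pricing page, and the workload numbers are made up for illustration:

```python
# Rough AWS Lambda monthly cost estimate from the rates quoted above.
REQUEST_PRICE = 0.20 / 1_000_000   # $ per request beyond the free million
GB_SECOND_PRICE = 0.00001667       # $ per GB-second beyond the free tier
FREE_REQUESTS = 1_000_000
FREE_GB_SECONDS = 400_000          # per AWS's pricing page (free tier)

def monthly_cost(requests: int, avg_ms: float, memory_mb: int) -> float:
    # Each invocation's duration is rounded up to the nearest 100 ms.
    rounded_seconds = (int((avg_ms + 99) // 100) * 100) / 1000
    gb_seconds = requests * rounded_seconds * (memory_mb / 1024)
    request_charge = max(0, requests - FREE_REQUESTS) * REQUEST_PRICE
    compute_charge = max(0.0, gb_seconds - FREE_GB_SECONDS) * GB_SECOND_PRICE
    return request_charge + compute_charge

# Made-up workload: 10M requests/month, 200 ms average, 1024 MB memory.
# Compute: 10M * 0.2 s * 1 GB = 2,000,000 GB-s; 1.6M billable => ~$26.67.
# Requests: 9M billable => $1.80. Total ~= $28.47 per month.
print(f"${monthly_cost(10_000_000, 200, 1024):,.2f} per month")
```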

Let us create a Lambda function which will log “An object has been added” once we add an object to a specific bucket in S3.

Step 1: From the AWS Management Console, under the Compute section, select AWS Lambda.

Step 2: On the AWS Lambda console, click “Create a Lambda function”.

Step 3: On the next page, we have to select a blueprint. For our use case, we will select the blank function.

Step 4: On the next page we will (1) set a trigger; since we are going to work with S3, (2) select the S3 trigger and then (3) click Next.

Step 5: On the configuration page, fill in the details. After that, fill in the handler and role, leave the advanced settings as they are, and click Next.

Step 6: On the next page, review all the information and click “Create function”.

Step 7: Now, since we created the function for the S3 bucket, the moment we add a file to that bucket we should get a log entry for it in CloudWatch, AWS’s monitoring service.
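The function body for this walkthrough can stay tiny. Here is a minimal sketch matching the goal stated above; anything printed from a Lambda function ends up in its CloudWatch log group:

```python
# Minimal function for the walkthrough: log a message whenever the S3
# trigger fires. print() output is captured in CloudWatch Logs.
def lambda_handler(event, context):
    print("An object has been added")
    return "done"
```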