PLASMA CASH DAPP USE CASE

The Plasma framework addresses the privacy and scalability issues of the public Ethereum blockchain. In this blog, we will take a closer look at the possible use cases that can be implemented as decentralized applications (DApps) using Plasma Cash, and how they can be implemented.

Plasma Cash Use Cases

Plasma Cash converts every deposited token into a non-fungible token (NFT), wherein each NFT holds a unique, immutable identity. Therefore, Plasma Cash is an ideal fit for applications where assets are not divided and ownership changes frequently. Gaming DApps can use the scalability potential of Plasma Cash, with users exchanging game objects modeled as tokens, for instance a character's card, suit, or power. In addition, non-gaming models can also benefit from the scalability and privacy potential of Plasma Cash, for example an artifact review workflow application (developed as a DApp) in an enterprise, where a given artifact is transferred between users and can hold one of several states, namely Created, Submitted, Rejected, Accepted, and many more.

Document Review Workflow Use Case

In any enterprise, artifacts such as product (service) specifications, quotations, proposals, etc. are created and transferred to multiple departments for evaluation. Each stakeholder can conclusively reject it, accept it, or transfer it ahead, and Plasma Cash is a good fit for an application like this. Here, the artifacts can be modeled as NFT tokens, and state changes map to NFT ownership transfers. The final state is then transferred to the Ethereum parent chain.

 

Figure (1) shows a simplified workflow of an artifact.

Consider a document created by a user; the document owner submits it to another user for review. In this step, the document moves from the 'created' state to the 'submitted' state. After a successful review, the reviewer sends the document to a third user for publication, and the document moves from the 'submitted' state to the 'reviewed' state. After publication, the document state changes to 'published'.

The deposit function of the Plasma contract takes the document hash as input and produces an equivalent NFT token as output. As the document state changes, its equivalent NFT token goes through the corresponding state changes; after the document is published, the NFT token is transferred to the Ethereum parent chain using the Plasma contract's exit function.

DApp Components

Every DApp built using Plasma is based on a similar set of components: an operator process, a user component that ensures child-chain security and prevents double spends, and a client application responsible for interacting with the child and parent chains.

The client application deposits a token to the Plasma smart contract on the parent chain; the smart contract returns an unsigned integer (the token's UID) and emits a DEPOSIT event. At this point the DApp knows only the UID and still needs the deposit block index. To get it, the DApp monitors all upcoming blocks on the child chain and checks whether an upcoming block is a deposit block with the same UID. The block index is of utmost importance, both for security and for transferring the token to another user. The user component maintains all of the user's token UIDs with the corresponding block index details and watches the child chain at least once during the exit period. It fetches all upcoming blocks and smart-contract states to check for incoming tokens, or whether anyone has attempted an invalid exit or a double spend, and then raises the appropriate counter-action.

Conclusion

We are working on Plasma Cash for use cases where we need to change one or more states, including NFT transfers. In case we have got you interested enough, do check out our work on the GitHub repository.

Deploy application in Kubernetes cluster using GoCD with helm

This blog explains how to deploy microservice applications to a Kubernetes (K8s) cluster using GoCD with Helm as the deployment manager.

This blog assumes the reader has some background knowledge of the following technologies:

  1. Kubernetes [https://www.talentica.com/blogs/kubernetes-introduction-architecture-overview/]
  2. Micro-services
  3. CI/CD deployments

GOCD:

GoCD is an open-source tool used in the software development process. It supports the continuous integration and continuous delivery (CI/CD) software lifecycle.

GoCD Architecture

  • GoCD follows the server and agent approach.
  • The GoCD server runs three sub-systems to achieve CI/CD of an application:

– material update sub-system

– scheduling sub-system

– work assignment sub-system.

  • The material update sub-system initiates a build/deployment when there is a new commit, while the scheduling sub-system does so at a fixed time interval.
  • The work assignment sub-system is activated when an agent connects to the server.
  • The agent pulls work from the server rather than the server pushing work to the agent, and it is responsible for sending status back to the server.
  • An agent is allocated a job only if it matches the job's constraints.
  • The server assigns jobs based on agent status: the agent periodically shares its ping and state with the server, and if the server finds an agent idle, it assigns the job to that agent.

Concepts in GoCD

Task: The smallest unit of work, essentially a single command.

Eg. sh -c "mvn install"

Fig. Task

Job: A job consists of multiple tasks that run sequentially. Each task runs independently and is not affected by other task definitions, but the job stops execution if a previous task in the queue fails.

                          Fig. Job

Stage: A stage consists of multiple jobs that execute in parallel. If a single job fails, the other jobs still complete their tasks independently.

Fig. Stage

Pipeline: A pipeline consists of multiple stages. The stages run in sequence.

                                                         Fig. Pipeline

Materials: A material is a source that a pipeline watches. GoCD supports Git, SVN, and Mercurial repositories, as well as artifacts published by another GoCD pipeline.

Trigger: A GoCD pipeline can be triggered manually, on a commit, on a time schedule, or based on the status (failed/passed) of a previous pipeline.

Fig. Materials and trigger

Value Stream Map: A visual map of the pipelines and their connectivity.

Artifacts: Every job can publish its own artifacts so that other jobs or stages can access them during execution.

Agent: GoCD agents are the workers; they pick up tasks from the pipeline and execute them. Agents remain in an idle state until a pipeline is triggered.

Resources: A resource is a tag on an agent. Using these tags, jobs can choose to run their tasks on specific agents. Tags are generally assigned according to the software installed on an agent; for example, a Maven job can be routed to an agent that has the Maven package installed.

Environments: An environment isolates pipelines and agents. A pipeline will only run its jobs on agents within the same environment; if no environment is assigned, the pipeline runs on agents that also have no environment assigned.

Installation of  GoCD

  1. Login to server and install using root permission
  2. Install GoCD server [https://docs.gocd.org/current/installation/installing_go_server.html]
  3. Start the GoCD server.

systemctl start go-server

  4. Open the GoCD console at <PUBLIC_IP>:8153

The practical section later in this blog walks through the lab setup we are using.

Configure kubectl in GoCD server

  1. Login to GoCD server and install using root permission
  2. Install kubectl [https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-using-native-package-management]
  3. Download the kubectl config file for K8s cluster.
  4. Log in to the GoCD server with ‘go’ username.
  5. Copy the config content from k8s config file and paste it in ~/.kube/config.
  6. Check the connectivity with k8s cluster by running

kubectl get pod

Install Docker and enable Docker access for the `go` system user.

  1. Login to GoCD server and install using root permission
  2. Install docker [https://docs.docker.com/install/linux/docker-ce/ubuntu/]
  3. Enable docker access to `go` user [https://docs.docker.com/install/linux/linux-postinstall/]

 

Helm:

Helm helps manage Kubernetes applications: you store your Kubernetes templates as a Helm chart and use Helm to install and upgrade them. Helm lets you create a configuration package so the same K8s templates can be reused across multiple environments.

In our scenario, we are going to use Helm on top of our Kubernetes orchestration scripts.

Configure the helm package

  1. Login to GoCD server and install using root permission
  2. Install helm package [https://helm.sh/docs/using_helm/#installing-the-helm-client]
  3. Initialize Helm. This installs the Helm Tiller, which runs in the K8s cluster.

helm init

 

Create the helm chart.

  1. Helm uses a packaging format called a chart. A chart is a directory structure that contains the Kubernetes resources. [https://helm.sh/docs/developing_charts/#charts] A minimal sketch of creating and reusing a chart follows below.
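
As a rough sketch (the chart and release names below are placeholders, not from the original setup), the same chart can be created once and installed with environment-specific values files. With Helm 2 (Tiller-based, as used here), --name sets the release name:

helm create myapp                                  # scaffolds Chart.yaml, values.yaml and templates/
helm install --name myapp-qa -f values-qa.yaml ./myapp
helm install --name myapp-prod -f values-prod.yaml ./myapp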

 

Install maven on GoCD server

  1. Login to GoCD server and install using root permission
  2. Install Maven [https://maven.apache.org/install.html]
  3. Verify mvn installation

mvn -v

 

A quick recap of what we have set up:

  1. GoCD server set up
  2. Docker installed on the GoCD server
  3. kubectl configured for the go user to connect to the K8s cluster
  4. Helm client and Helm Tiller set up
  5. Kubernetes templates stored in a Helm chart directory structure
  6. Maven package installed on the GoCD server

 

Let’s start with the practical

In the following tutorial, we will create a Maven build, build a Docker image from that build, and deploy the image to the Kubernetes cluster using the Helm package.

Pre-requisite

  1. A running Kubernetes cluster. You need an admin role to deploy pods in the cluster.
  2. A Docker repository (private/public).
  3. A Dockerfile placed in the code repository so the image can be built in the GoCD pipeline.

 

Deploy Kubernetes application using GoCD pipelines

  1. Create a pipeline: select Admin -> Pipelines
  • Create a pipeline group: Pipelines tab -> Add new Pipeline Group
  • Give the pipeline group a name, for example config, and save.
  • Create the pipeline in the pipeline group.

Step 1. In the basic settings, provide the pipeline name and pipeline group name, then click Next.

Step 2. Provide the materials. The material type and URL are mandatory. Check the connection before proceeding. If the connection does not work, follow the steps in Connecting to GitHub with SSH [ https://help.github.com/en/articles/connecting-to-github-with-ssh ].

Once that is done, run the command below, replacing the repository name, and confirm the connection request.
git ls-remote git@github.com:<repo_name>  refs/heads/dev

Once this step is done, run the connection test from the GoCD console again.

Step 3. In the stage/job definition, add a name for the stage, configure the trigger, and fill in the initial job details. Save the settings.

 

  • Open the pipeline and add the remaining tasks for the mvn build, docker build, and docker push steps (a sketch of these commands follows below).

  • Create a local artifact for the Helm package so that we can fetch it in the deployment stage.
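
As a rough sketch, the individual tasks in the build stage might run commands along the following lines (the registry, image name, and Maven goals are placeholders for your own setup; $GO_PIPELINE_LABEL is a GoCD-provided variable used here as the image tag):

mvn clean package -DskipTests
docker build -t <registry>/myapp:$GO_PIPELINE_LABEL .
docker push <registry>/myapp:$GO_PIPELINE_LABEL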

 

  • Create the deployment pipeline in the same pipeline group. From the top left, go to Admin -> Pipelines. You will see the option to add a new pipeline in the config pipeline group.

  • Let’s create the deployment pipeline

Step 1. Add the pipeline name and add the pipeline group name

Step 2. Add the build pipeline as the material, so the deployment is triggered when the build is done. Select the build pipeline name from the dropdown.

Step 3. Add the stage and job for the deployment as below, adding docker pull as the first step.

  • Open the pipeline and add the remaining steps for deployment to the K8s cluster (a sketch of these commands appears after the list below).

  • On the dashboard, you will see the two pipelines set up under the config pipeline group.
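
A rough sketch of the deployment-stage tasks, assuming the image tag is passed on from the build pipeline and the Helm chart is fetched as a pipeline artifact (names, paths, and values files are placeholders):

docker pull <registry>/myapp:<image-tag-from-build>
helm upgrade --install myapp ./helm-chart -f values-qa.yaml --set image.tag=<image-tag-from-build>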

It’s done.

Kubernetes Introduction & Architecture Overview

In this blog, we will cover the following topics and give an introduction to Kubernetes (K8s), an open-source system for automating the deployment, scaling, and management of containerized applications.

  • Why is Container orchestration needed?
  • Why Kubernetes?
  • K8s Architecture
  • Master Node Components
  • Worker Node Components

Why is Container orchestration needed?

  • Manages container lifecycle within the cluster.
  • Provisioning and deployment of containers.
  • Redundancy and availability of containers.
  • Scaling up or removing containers to spread application load evenly across host infrastructure.
  • Movement of containers from one host to another if there is a shortage of resources in a host, or if a host dies.
  • Allocation of resources between containers.
  • Exposing services running in containers to the outside world.
  • Load balancing and service discovery between containers.
  • Health monitoring of containers and hosts.

What is Kubernetes?

Kubernetes is an open-source orchestration system for containers. It handles scheduling onto nodes in a compute cluster and actively manages workloads to ensure that their state matches the users’ declared intentions.

Why Kubernetes?

Service discovery and load balancing

Kubernetes gives each Pod its own IP address, provides a single DNS name for a set of Pods, and can load-balance across them.

Storage orchestration

Automatically mount the storage system of your choice, whether from local storage, a public cloud provider such as GCP or AWS, or a network storage system such as NFS, iSCSI, Gluster, Ceph, Cinder, or Flocker.

Automated rollouts and rollbacks

Kubernetes progressively rolls out changes to your application or its configuration, while monitoring application health to ensure it doesn’t kill all your instances at the same time. If something goes wrong, Kubernetes will roll back the change for you.
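
For example, a rolling update can be started, watched, and reverted with standard kubectl commands (the deployment and image names below are illustrative):

kubectl set image deployment/myapp myapp=myrepo/myapp:v2   # start a progressive rolling update
kubectl rollout status deployment/myapp                    # watch the rollout
kubectl rollout undo deployment/myapp                      # roll back if something goes wrong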

Automatic bin packing

Automatically places containers based on their resource requirements and other constraints, while not sacrificing availability.

Self-healing

Restarts containers that fail, replaces and reschedules containers when nodes die, kills containers that don’t respond to your user-defined health check, and doesn’t advertise them to clients until they are ready to serve.

Secret and configuration management

Deploy and update secrets and application configuration without rebuilding your image and without exposing secrets in your stack configuration.

Batch execution

In addition to services, Kubernetes can manage your batch and CI workloads, replacing containers that fail, if desired.

Horizontal scaling

Kubernetes autoscaling automatically sizes a deployment’s number of Pods based on the usage of specified resources (within defined limits, e.g. CPU % usage).
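
For instance, a Horizontal Pod Autoscaler can be created with a single command (the deployment name and limits below are illustrative):

kubectl autoscale deployment myapp --min=2 --max=10 --cpu-percent=80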

Rolling updates

Updates to a Kubernetes deployment are orchestrated in “rolling fashion,” across the deployment’s Pods. These rolling updates are orchestrated while working with optional predefined limits on the number of Pods that can be unavailable and the number of spare Pods that may exist temporarily.

Canary deployments

 A useful pattern when deploying a new version of a deployment is to first test the new deployment in production, in parallel with the previous version, and scale up the new deployment while simultaneously scaling down the previous deployment.

Visibility

Identify completed, in-process, and failing deployments with status querying capabilities.

Time savings

Pause a deployment at any time and resume it later.

Version control

Update deployed Pods using newer versions of application images and roll back to an earlier deployment if the current version is not stable.

Kubernetes Architecture

 

 

In Kubernetes architecture,

You can have one master with multiple worker nodes, multiple masters with multiple worker nodes, or even everything on a single node for testing purposes.

You can see the different components of the master node in the diagram above.

On the master node,

The API server is the main component; everything in the Kubernetes cluster connects and talks to it.

The other components are the scheduler and the controller manager; together these are called the Kubernetes control plane.

The etcd component stores the state of the Kubernetes cluster; if we lose etcd, we lose all the data of the cluster.

We can also configure it externally, but in the most common scenario it is deployed alongside the Kubernetes control plane.

On the worker node,

The kubelet is the main component; it talks to the API server and reports the status of the node and of the apps running on that node.

We also have kube-proxy, which also communicates with the API server and handles traffic redirection, using iptables by default.

K8s Architecture Components

Master Node Components

Master nodes have the following components as shown

1) API Server

2) Scheduler

3) Controllers

4) ETCD

 

Before we begin,

 

Everything in K8s is an object, much like everything in Linux is a file. An object is a persistent entity in Kubernetes. One such object is the Pod, which is a logical collection of one or more containers that are always scheduled together.

 

So starting with API-server

1) All the administrative tasks are performed via the API server within the master node.
2) A user sends an API request to the API server, which then authenticates, validates, and processes the request; after executing the request, the resulting state of the cluster is stored in the distributed key-value store.
Let's discuss it in detail.

Master Node Components – API Server

 

1) The client for the API Server can be either Kubectl(command-line tool) or a Rest API client.
2) As mentioned in the diagram, there are several plugins that are invoked by the API Server before creating/deleting/updating an object in etcd.

3) When we send an object creation request to the API Server, it needs to authenticate the client. This is performed by one or more authentication plugins. The authentication mechanism can be based on the client's certificate or on Basic authentication using the HTTP "Authorization" header.

4) Once authentication is passed by any of the plugins, the request is passed to the authorization plugins. These validate whether the user has access to perform the requested action on the object. For example, developers are not supposed to manage cluster role bindings or security policies; those are controlled at the cluster level only by the DevOps team. Once authorization passes, the request is sent to the Admission Control Plugins (ACP).

5) Admission Control Plugins are responsible for initializing any missing fields with default values. For example, if we did not specify a required parameter in the object creation request, one of the plugins will add a default value for that parameter to the resource specification.

6) Finally, API Server validates the object by checking the syntax and object definition format and stores it in ETCD.

Master Node Components – Scheduler

1) As the name suggests, the scheduler schedules objects onto the worker nodes.

2) It registers with the API Server to watch for any newly created object/resource.

3) The scheduler has the resource information of each worker node and also knows the constraints the user might have set.

4) Before scheduling, it checks whether the worker node has the required capacity or not.

5) It also takes into account various user parameters, e.g. requirements for specific volumes such as SSDs.

So basically, it watches for newly created Pods that have no node assigned and selects a node for them to run on.
The next component we have is the controller manager.

Master Node Components – Controller

1) It manages different non-terminating control loops which regulate the state of Kubernetes cluster.

2) Each of these control loops knows the desired state of the objects it manages and watches their current state through the API server. If the current state of a managed object does not match the desired state, the control loop takes corrective steps to make the current state the same as the desired state.

So basically, it makes sure that your current state is the same as the desired state, as specified in the resource specification.

3) There are multiple object types in K8s; to manage these objects we have different controllers available under the controller manager.

4) All controllers watch the API Server for changes to resources/objects and perform necessary actions like create/update/delete of the resource.

Master Node Components – ETCD

The last master component is etcd.

1) As mentioned earlier, etcd is a distributed key-value store used to store the cluster state. It can either be part of the Kubernetes master or configured externally in cluster mode.

2) It is written in the Go programming language.

3) Besides storing the cluster state, it is also used to store config details.

So now let’s move on to worker node components

Worker Node

 

A worker node can be a machine, a VM, or any physical server that runs the apps (Pods) and is controlled by the master node.

To access the app from the external world, we have to connect to the worker nodes and not to the master node.
Now that I have briefed you about the worker node, let's discuss its various components.

A worker node has mainly three components: kubelet, kube-proxy, and the container runtime.

Worker Node Components - Container Runtime

 

 

The container runtime is basically used to run and manage the containers' life cycle on the worker node. Examples are Docker, rkt, containerd, LXD, etc.; K8s supports most of them.

Docker is the most widely used runtime in the container world.

Now that you understand what a container runtime is, let's move to the next component.

Worker Node Components - Kubelet

 

1) It is basically an agent which runs on each worker node and communicates with the master node via the API server

So if we have ten worker nodes, then Kubelet runs on every worker node.

2) It receives the Pod definition via various means and runs the containers associated with that Pod by instructing the container runtime.

3) It also performs health checks on the containers and restarts them if needed.

4) It monitors the status of running containers and reports their status, events, and resource consumption to the API server.

5) It connects to the container runtime using the Container Runtime Interface (CRI), which consists of protocol buffers, gRPC APIs, and libraries.

Now let’s move on to the 3rd component

Worker Node Components - Kube-proxy

 

1) It is a network proxy that runs on each worker node and listens to the API server for the creation or deletion of Service endpoints.

So basically, it helps you access your application by mapping Service IP addresses and forwarding the ports on which your application is running.
2) For each Service endpoint, kube-proxy sets up the routes so that it can be reached.

Kubernetes WorkFlow

 

 

DevOps/infra – Wants to deploy an app via Kubectl/API

The workflow will be as follows

1) Kubectl/API will talk to API -server

2) API-server stores the request in ETCD, then it will talk to the scheduler

3) Scheduler will choose one node to schedule the app

4) Then the API server will talk to the kubelet

5) Kubelet will talk to container runtime (e.g. docker) which will deploy the app

6) Kubelet will report it back to API server whether it succeeded or not and API server will store its status to ETCD

7) Kube-proxy will get notified about the new app and, through the kubelet, will come to know the local IP and ports of the app

 

Developer/user – will access the application on a local machine using kubectl port-forward via kube-proxy
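
For example, a developer can forward a local port to the app with kubectl (the resource name and ports below are placeholders):

kubectl port-forward deployment/myapp 8080:80   # the app is now reachable at http://localhost:8080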

Still a lot to cover, right?

K8s is an ocean of topics. We will try to cover the remaining parts in our next blog.

VPC Sharing Using AWS RAM (Resource Access Manager)

Over the years AWS has made managing multi-account AWS environments easier. They have introduced consolidated billing, AWS Organizations, cross-account IAM roles delegation, and various ways to share resources like snapshots, AMIs, etc.

In this blog post, I will discuss cross-account VPC sharing using AWS RAM which is a cool new service launched by AWS in November 2018. AWS RAM enables us to share our resources with an AWS account or through AWS Organizations. If you have multiple AWS accounts, you can create resources centrally and use AWS RAM to share those resources with other accounts.

VPC sharing is a very powerful concept with many benefits:

  • Separation of duties: centrally controlled VPC structure, routing, IP address allocation.
  • Application owners continue to own resources, accounts, and security groups.
  • VPC sharing participants can reference security group IDs of each other.
  • Efficiencies: higher density in subnets, efficient use of VPNs and AWS Direct Connect.
  • Hard limits can be avoided, for example, 50 VIFs per AWS Direct Connect connection through simplified network architecture.
  • Costs can be optimized through reuse of NAT gateways, VPC interface endpoints, and intra-Availability Zone traffic.

To date, AWS RAM lets us share the following resource types:

  • Subnet
  • Transit Gateways
  • Resolver Rules
  • License Configuration

 

When you share a resource with another account, that account is granted access to the resource. Any policies and permissions in that account apply to the shared resource.

I will now share subnets from account A (the owner account) with account B (the participant account).

Setting up AWS organization:

Create an AWS organization in account A and add the participant account B in the Organization.

Invite the account B in the AWS organization by sending a request from the console.

 

 

Create a custom VPC and a few subnets in the owner account; these will be shared with the participant account.

 

 

Next, enable the resource sharing for your organization from the AWS Resource Access Manager settings in account A.

 

Now let's start with resource sharing by creating a resource share in the "Shared by me" tab.

 

After providing a description for the shared resource, select “Subnets” in the resource tab and then go ahead and select the subnets which you wish to share with participant account.

 

 

The principal is the destination account or the AWS Organization with which the subnets will be shared. I will go with the AWS Organization and select account B in the organization. The equivalent AWS CLI calls are sketched below.
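
The same sharing can also be done from the AWS CLI; a minimal sketch (the ARNs and IDs below are placeholders) looks like this:

aws ram enable-sharing-with-aws-organization
aws ram create-resource-share \
    --name shared-subnets \
    --resource-arns arn:aws:ec2:us-east-1:<account-A-id>:subnet/<subnet-id> \
    --principals <organization-arn-or-account-id>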

 

 

After creating the resource share in owner account A, go to the participant account B and check if the resource share is visible in AWS RAM dashboard “shared with me” tab.

 

 

The shared subnets will now appear in the participant account B along with the VPC.

 

 

Let's use this VPC to launch resources in the participant account. Navigate to the EC2 dashboard and, while launching an instance, check in the Configure Instance section that the shared VPC and subnets are available.

 

 

Voila! The magic is done!

Things to know:
  • At this moment VPC sharing is only available within the same AWS Organization.
  • We cannot share default VPCs.
  • Participants accounts can’t launch resources using security groups that are owned by other participants or the owner.
  • Participants can’t launch resources using the default security group for the VPC because it belongs to the owner.
  • Participants pay for their resources and also pay for data transfer charges associated with Inter-Availability Zone data transfer, internet gateway, VPC peering connections, and data transfer through an AWS Direct Connect.
  • VPC owners pay hourly charges (where applicable), data processing and data transfer charges across NAT gateways, virtual private gateways, transit gateways, AWS PrivateLink, and VPC endpoints.

AWS ECS (Amazon Elastic Container Service )

In this blog, I will cover the following topics and explain more about AWS Elastic Container Service, a highly scalable, fast, and high-performance container management service.

  • Why Docker Containers?
  • ECS Cluster Management
  • EC2 Container Registry
  • ECS Services
  • Auto-Scaling in ECS
  • Monitoring, Logging and Notification

Why Docker Containers?

  • Lightweight, Open Source and Secure
  • Portable and efficient in comparison to VM
  • Empower Developer creativity
  • Eliminates Environmental Inconsistencies
  • Ability to scale quickly
  • Reduces time to market of your application

Services evolve to microservices

 

Why Container Cluster Management System is needed?

  • Provides clustering layer for controlling the deployment of your containers onto the underlying hosts
  • Manages container lifecycle within the cluster
  • Scheduling Containers across the cluster
  • Scaling containers

What is AWS ECS (EC2 Container Service)?

  • Amazon EC2 Container Service (ECS) is a highly scalable, fast and high performance container management service.
  • Easily run, stop and manage Docker containers on cluster of Amazon EC2 instances.
  • Schedules the placement of Docker containers across your cluster based on resource needs, availability and requirements.

Components of ECS

  • Cluster – Logical group of container instances
  • Container Instance – EC2 instance on which the ECS agent runs and which is registered to a cluster.
  • Task Definition – Description of the application to be deployed
  • Task – An instantiation of a task definition running on a container instance
  • Service – Runs and maintains predefined tasks simultaneously
  • Container – Docker Container created during task instantiation

ECS Architecture Overview

Key Components of ECS Architecture

Agent Communication Service – Gateway between ECS agents and ECS backend cluster management engine

API – Provides cluster state information

Cluster Management Engine – Provides cluster coordination and  state management

Key/Value Store – It is used to store cluster state information

ECS Agent –

  • It runs on EC2 (container) instances
  • An ECS cluster is a collection of EC2 (container) instances
  • The ECS agent is installed on each EC2 (container) instance
  • The ECS agent registers the instance with the centralized ECS service
  • The ECS agent handles incoming requests for container deployment
  • The ECS agent handles the lifecycle of containers

EC2 Container Registry (Amazon ECR)

  • It is an AWS managed Docker container registry Service.
  • Stores and Manages Docker Images
  • Hosts images in a highly available and scalable architecture
  • It is integrated with ECS.
  • No upfront fee, cheap and pay only for the data stored.

 

 

Creating ECS Cluster

Cluster can be created using

  • AWS Console (Manual method)
  • AWS ECS CLI (Manual method)
  • Cloud Formation Template (IAC and Recommended method)

Cloud Formation Example

aws cloudformation create-stack --stack-name dev-ecs-stack --template-body file://master.yaml --parameters file://parameter_dev.json --capabilities CAPABILITY_IAM

ECS Task Definition

Task Definition is similar to docker-compose.

A task definition can consist of one or more container definitions.

It defines (a minimal CLI sketch follows the list):

  • The Docker images to use
  • Port and volume mappings
  • The CPU and memory to use for the container
  • Whether containers are linked
  • Environment variables that need to be passed to the container
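
A minimal sketch of registering a task definition via the AWS CLI (the image, ports, and values below are placeholders, not the actual application):

cat > taskdef.json <<'EOF'
{
  "family": "myapp",
  "containerDefinitions": [
    {
      "name": "myapp",
      "image": "<registry>/myapp:latest",
      "cpu": 256,
      "memory": 512,
      "portMappings": [{ "containerPort": 8080, "hostPort": 0 }],
      "environment": [{ "name": "ENV", "value": "dev" }]
    }
  ]
}
EOF
aws ecs register-task-definition --cli-input-json file://taskdef.json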

ECS services 

  • Allows you to run and maintain a specified/desired number of tasks.
  • If any task fails or stops for any reason, the ECS service scheduler launches another task from your task definition to maintain the desired task count.

Deploying ECS Cluster

  • Create security groups at the instance and load-balancer level.
  • Create an Application Load Balancer
  • Create a launch configuration with an ECS-optimized AWS AMI
  • Create an Auto Scaling group, which specifies the desired number of instances
  • Create a task definition
  • Create a target group and an ECS service (a sample CLI call follows the list)
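
A sample CLI call for the last step might look like the following (the cluster, service, and ARN values are placeholders):

aws ecs create-service \
    --cluster dev-ecs-cluster \
    --service-name myapp-service \
    --task-definition myapp \
    --desired-count 2 \
    --load-balancers targetGroupArn=<target-group-arn>,containerName=myapp,containerPort=8080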

Sample ECS architecture

ECS Instance Level Auto Scaling

ECS provides cluster-level parameters which give cluster utilization statistics:

  • Memory Reservation – Current % of reserved memory by  cluster
  • Memory Utilization – Current % of utilized memory by cluster
  • CPU Reservation – Current % of reserved CPU by cluster
  • CPU Utilization – Current % of utilized CPU by cluster

CloudWatch alarms on the above parameters enable scaling the ECS cluster up or down; an example alarm is sketched below.
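
For example, a scale-out alarm on cluster memory reservation could be created like this (the names, threshold, and scaling-policy ARN are placeholders):

aws cloudwatch put-metric-alarm \
    --alarm-name dev-ecs-memory-reservation-high \
    --namespace AWS/ECS \
    --metric-name MemoryReservation \
    --dimensions Name=ClusterName,Value=dev-ecs-cluster \
    --statistic Average --period 300 --evaluation-periods 2 \
    --threshold 75 --comparison-operator GreaterThanThreshold \
    --alarm-actions <scale-out-policy-arn>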

ECS Service Level Autoscaling

  • ECS also provides the facility to scale up/down the number of tasks in the service.
  • Tasks can be autoscaled on the following ECS service parameters:
    • CPU Utilization – Current % CPU utilization by the ECS service
    • Memory Utilization – Current % memory utilization by the ECS service

CloudWatch alarms on the above parameters enable scaling the service up or down.

ECS Auto Scaling Overview

Monitoring and Logging

CloudWatch

  • Use CloudWatch Logs to centralize all container service logs
  • Follow the "ecs/stackname/servicename" log group format.
  • Get notifications in a Slack channel about CloudWatch ECS alarms and events via an AWS Lambda function.

 

 

 

Key Advantages of ECS Service

  • Easy Cluster Management – ECS sets up and manages clusters made up of Docker containers. It launches and terminates the containers and maintains complete information about the state of your cluster.
  • Auto Scaling – Instance as well as Service level.
  • Zero-downtime deployment – service updates follow blue-green deployments.
  • Resource Efficiency – A containerized application can make very efficient use of resources. You can choose to run multiple, unrelated containers on the same EC2 instance in order to make good use of all available resources.
  • AWS Integration – Your applications can make use of AWS features such as Elastic IP addresses, resource tags, and Virtual Private Cloud (VPC)
  • Service Discovery – used for internal Service to service communication.
  • Fargate technology – automatically scale, load balance, and manage scheduling of your containers.
  • Secure – Your tasks run on EC2 instances within an Amazon VPC. The tasks can take advantage of IAM roles, security groups, and other AWS security features.

Key Challenges of ECS Service

  • Supported only on AWS.
  • Application level custom monitoring is not available.

 

Using Custom Metrics for CloudWatch Monitoring

AWS can dig a crater in your pocket (if not yours, then your client's). Also, post-downtime meetings with clients can turn sour when the right metrics go unmonitored.

I have been working with AWS for a while now and have learned the hard way that just spinning up the infrastructure is not enough. Setting up monitoring is a cardinal rule. With the proliferation of cloud and microservice-based architecture, you cannot possibly gauge usage, optimize cost, or ascertain when to scale up or scale down without monitoring.

This is not a post on why monitoring is required but rather on why and how to enhance your monitoring using custom metrics for AWS-specific infrastructure on CloudWatch. While CloudWatch provides ready metrics for CPU, network bandwidth (both in and out), disk read, disk write, and a lot more, it does not provide memory and disk-space metrics. And, considering you are reading this post on custom metrics, you already know that monitoring just the CPU without memory and disk is simply not enough.

Why doesn't AWS provide memory and disk metrics by default like it provides the rest?

Well, CPU and network metrics for EC2 can be fetched from outside the instance, while monitoring memory and disk requires access inside the servers. AWS does not have access to your servers by default. You need to be inside the server and export the metrics at regular intervals. This is what I have done as well to capture the metrics.

The following are the custom metrics we should monitor:
• Memory Utilized (in %)
• Buffer Memory (in MB)
• Cached Memory (in MB)
• Used Memory (in MB)
• Free Memory (in MB)
• Available Memory (in MB)
• Disk Usage (in GB)

Why did I create these playbooks? Why use custom metrics to monitor?

• Memory metrics are not provided by AWS CloudWatch by default and require an agent to be installed. I have automated the steps to install the agent and added a few features.
• The base script provided by Amazon didn't export some metrics to CloudWatch, like buffer and cached memory, which meant it didn't give a clear picture of memory usage.
• There were times when free memory would indicate 1–2 GB, but the cache/buffer would be consuming that memory and not releasing it, thereby depriving your applications of memory.
• Installing the agent on each server and adding it to cron was challenging, especially if you frequently create and destroy VMs. Why not just use Ansible to install it on multiple servers in one go?

So how do we set up this monitoring?

It's fairly simple:
1. Install Ansible on your machine / local host
2. Clone the repo https://github.com/alokpatra/aws-cloudwatch.git
3. $ git clone https://github.com/alokpatra/aws-cloudwatch.git
4. Populate the host file with the details of the servers you want to monitor
5. Allow CloudWatch access from EC2 by attaching an IAM role to the target hosts you want to monitor. To attach a role, go to the Instances section of the AWS Console, select the instance, then click Actions > Instance Settings > Attach/Replace IAM Role
6. Run the playbook
7. Create your own custom dashboards on the AWS CloudWatch console.
While I have detailed the ReadMe in the GitHub repo, I'll just discuss a few things in brief here:

What do the scripts do precisely?

Well, the scripts are an automated and improvised version of the steps to deploy the CloudWatch agent. Since I had extensively used Ansible previously, I wrote an Ansible role to simplify deployment of multiple server agents.
The Ansible role does the following:
• Identifies the distribution
• Installs the pre-requisites as per the OS flavor
• Installs the prerequisite packages based on the distribution
• Copies the Perl scripts to the target machine.
• Sets the cron job to fetch and export the metrics at regular intervals (default of 5mins)

Minor changes to the Perl script have been made also to export the cached and buffer memory which I found quite useful.

Supported OS Versions

• Amazon Linux 2
• Ubuntu

Prerequisites:

1. Ansible to be installed on the Host Machine to deploy the scripts on the target machines/servers. I have used Ansible Version 2.7.
•  To install ansible on Ubuntu you can run the following commands or follow this link
$ sudo apt update
$ sudo apt install software-properties-common
$ sudo apt-add-repository ppa:ansible/ansible
$ sudo apt update
$ sudo apt install ansible
•  On Amazon Linux 2 you need to run the following commands, obviously, there is no Digital Ocean Guide to follow
$ sudo yum-config-manager --enable epel
$ yum repolist   (you should see epel)
$ yum install ansible
2. CloudWatch access to Amazon EC2. The EC2 instances need to have access to push metrics to CloudWatch. So you need to create an IAM role ‘EC2AccessToCloudwatch’ and attach the policy to allow ‘write’ access for EC2 to CloudWatch. Now attach this IAM role to the target hosts you want to monitor. In case you already have a role attached to the instance, then add the above policy to that role.
The other alternative is to export the keys to the servers. (Playbooks are not updated for this option yet). I have used the IAM option which avoids the need to export keys to the server which can often be a security concern. It is also difficult to rotate the credentials subsequently.
3. SSH access to the target hosts i.e. the hosts where you want the agent installed since Ansible uses SSH to connect to managed hosts.

What do the playbooks exactly do?

• Identify the Distribution
• Install the pre-requisites as per the OS flavor
• Install the prerequisite packages based on the distribution
• Copy the Perl scripts to the target machine
• Set the cron job to fetch and export the metrics at regular intervals (default of 5 mins); a sample cron entry is sketched below
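
The resulting cron entry looks roughly like this (the install path depends on where the playbook copies the scripts, so treat it as an assumption):

*/5 * * * * /home/ubuntu/aws-scripts-mon/mon-put-instance-data.pl --mem-util --mem-used --mem-avail --disk-space-util --disk-path=/ --from-cron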

How to run the playbooks?

• Populate the inventory/host file with the hosts/IPs. A sample host file is present in the repo
• Run the main playbook with the following command, which in turn calls the CloudWatch-agent role:

$ ansible-playbook -i hosts installer.yaml -vv

This would install the agent. Now you can go ahead and create the Dashboards on CloudWatch

Below is a sample Dashboard I have created. You might want to customize the widgets as per your requirement.

The below dashboard has 2 widgets:
i. It gives a high-level picture of the overall Memory Utilization Percent. This tells you which server is running on high memory and the spikes if any.
ii. Free Memory (in MB). This is read together with the widget above to obtain the free memory on a particular server that shows high utilization.

Before the next Dashboard which is a level deeper, let’s just glance at the Total Memory Pie Chart.

The following Dashboard is to dig deeper into the memory metrics. To understand the exact distribution of memory and where it is consumed.

Hope the post was useful.
Cheers!
Happy Monitoring!

Basics of Ansible and Installation

What is Ansible?:

 

Ansible is an open source software that automates software provisioning, configuration management, and application deployment. Ansible connects via SSH, remote PowerShell or via other remote APIs.

 

How Ansible works?:

 

Ansible works by connecting to your nodes and pushing out small programs, called “Ansible modules” to them. These programs are written to be resource models of the desired state of the system. Ansible then executes these modules (over SSH by default) and removes them when finished

 

Key Features of Ansible: 

  • Models the IT infrastructure around the systems interrelating with each other, thus ensuring faster end results.
  • Module library can reside on any system, without the requirement of any server, daemons or databases.
  • No additional setup required, so once you have the instance ready you can work on it straight away.
  • Easier and faster to deploy as it doesn’t rely on agents or additional custom security infrastructure.
  • Uses a very simple language structure called playbooks. Playbooks read almost like plain English for describing automation jobs.
  • Ansible has the flexibility to allow user-made modules that can be written in any programming language, such as Ruby or Python. It also allows adding new server-side behaviours, extending Ansible's connection types through Python APIs.

 

Terms in Ansible:

 

  • 1) Playbooks

Playbooks express configurations, deployment, and orchestration in Ansible. The Playbook format is YAML. Each Playbook maps a group of hosts to a set of roles. Each role is represented by calls to Ansible tasks.

 

 

  • 2) Ansible Tower

Ansible Tower is a REST API, web service, and web-based console designed to make Ansible more usable for IT teams with members of different technical proficiencies and skill sets. It is a hub for automation tasks. The Tower is a commercial product supported by Red Hat, Inc. Red Hat announced during AnsibleFest 2016 that it would release Tower as open source software

Ansible Architecture:

 

(On AWS EC2 Linux Free Tier Instance, python and ssh both are already installed)

  • Python version – 2.7.13
  • Three servers
  • Ansible control server (install Ansible using the EPEL repository – on AWS you have to enable the EPEL repo file)
  • WebServer
  • DBServer

 

How to connect between these servers?

To ping these servers (webserver and dbserver) from the Ansible control server, you have to add an inbound rule for "All ICMP traffic" in both instances.

  • Ansible Control Server
  • Install Ansible on Redhat

wget http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

rpm -ivh epel-release-latest-7.noarch.rpm

yum repolist

yum --enablerepo=epel install ansible

  • Install Ansible on AWSLinux
vim /etc/yum.repos.d/epel.repo

or

sudo yum-config-manager --enable epel

yum repolist ( you should see epel)

yum install ansible

Create an entry for all servers in the /etc/hosts file as shown below

vim /etc/hosts

Create one user “ansadm” on all the servers as shown below

After adding the user, try to SSH by logging in as the ansadm user. You will get the error below because SSH is not set up yet.

How to Setup SSH

  • Generate ssh key on ansible control server.
  • https://www.youtube.com/watch?v=5KmQMfEqYxc
  • Run ssh-keygen on the Ansible control server after logging in as ansadm (SSH keys are user-specific)
  • This will create the .ssh folder (/home/ansadm/.ssh)
  • Create an authorized_keys file on both servers and copy the public key from the Ansible control server as shown below

 

[ansadm@ip-172-31-21-35 ~]$ ssh-copy-id -i ansadm@172.31.19.214
 /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/ansadm/.ssh/id_rsa.pub"
 /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed

/usr/bin/ssh-copy-id: WARNING: All keys were skipped because they already exist on the remote system.
  (if you think this is a mistake, you may want to use -f option)

[ansadm@ip-172-31-21-35 ~]$ ssh ansadm@172.31.19.214
 Last login: Thu Jan 11 13:34:31 2018

__| __|_ )
  _| ( / Amazon Linux AMI
  ___|\___|___|

https://aws.amazon.com/amazon-linux-ami/2017.09-release-notes/
 [ansadm@ip-172-31-19-214 ~]$ exit

Now all three servers are configured, ansible control server can do ssh on both the servers

Change the ownership of the /etc/ansible folder to ansadm

chown -R ansadm:ansadm /etc/ansible

vim /etc/ansible/hosts

[webserver]
172.31.19.214
[dbserver]
172.31.26.66

ansible.cfg is the Ansible configuration file (the hosts file above is the inventory file)

Ansible commands ( We can run all commands only on the control server and all other servers are managed by it)

To install any package you have to be root, so we are giving the controller's ansadm user passwordless sudo on all machines (except the controller)

vi /etc/sudoers

## ANSIBLE ADMIN USER
ansadm ALL=NOPASSWD: ALL

Now run the same command with the -s (sudo) option; an example is sketched below
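
For example, installing a package on the web servers via an ad-hoc command might look like this (the module arguments are illustrative; newer Ansible versions use -b/--become instead of -s):

ansible webserver -m yum -a "name=httpd state=present" -s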

Ansible Roles

Roles are the next level of abstraction of an Ansible playbook. A role is the list of tasks that Ansible will execute on the target machines in the given order.

The playbook decides which role applies to which target machine.

[ansadm@ip-172-31-21-35 ansible]$ mkdir roles/basic
[ansadm@ip-172-31-21-35 ansible]$ mkdir roles/basic/tasks
[ansadm@ip-172-31-21-35 ansible]$ cd roles/basic/tasks
[ansadm@ip-172-31-21-35 tasks]$ vi main.yml

[ansadm@ip-172-31-21-35 ansible]$ cat /etc/ansible/roles/basic/tasks/main.yml

- name: Install ntp
  yum: name=ntp state=present
  tags: ntp

[ansadm@ip-172-31-21-35 ansible]$ vi playbook.yml
[ansadm@ip-172-31-21-35 ansible]$ ansible-playbook -K playbook.yml

[ansadm@ip-172-31-21-35 ansible]$ cat playbook.yml
- hosts: all
  roles:
    - role: basic

ansible-playbook <playbook> --list-hosts

To check if HTTPd is installed, the easiest way is to ask rpm:

rpm -qa | grep httpd
  • Verify the playbook for syntax errors:

#ansible-playbook file_name.yml --syntax-check

  • To see what hosts would be affected by a playbook

#ansible-playbook file_name.yml --list-hosts

  • Run a playbook

# ansible-playbook file_name.yml

 

Conclusion:

Ansible is easy to learn, and managing resources with it can be extremely efficient. Here we covered basic Ansible concepts, installation steps, and different features.

Spot Fleet Termination and Remove Stale Route53 Entries

When it comes to cluster management, we may need many slave machines to run our tasks/applications. In our project, we have an Apache Mesos cluster that runs around 200 EC2 instances in production alone, and for real-time data processing we use an Apache Storm cluster with around 150 supervisor machines. Rather than running all these machines on On-Demand instances, we run them on Spot Fleet in AWS.

Now the question is: what is Spot Fleet? To understand Spot Fleet, first look at what Spot Instances are.

Spot Instances

AWS must maintain a huge infrastructure with a lot of unused capacity. This unused capacity is basically the available spot instance pool – AWS lets users bid for these unused resources (usually on a significantly lower price than the on-demand price). So we can get AWS ec2 boxes at a much lower price as compared to their on-demand price.

SpotFleet

A Spot Fleet is a collection, or fleet, of Spot Instances, and optionally On-Demand Instances. The Spot Fleet attempts to launch the number of Spot Instances and On-Demand Instances to meet the target capacity that you specified in the Spot Fleet request.

Above is the screenshot of AWS SpotFleet, in which we are launching 20 instances.

Spot instance lifecycle:

  • User submits a bid to run the desired number of EC2 instances of a particular type. The bid includes the price that the user is willing to pay to use the instance.
  • If the bid price exceeds the current spot price (that is determined by AWS based on current supply and demand) the instances are started.
  • If the current spot price rises above the bid price or there is no available capacity, the spot instance is interrupted and reclaimed by AWS. Two minutes before the interruption, the internal metadata endpoint on the instance is updated with the termination info.

Spot instance termination notice

The Termination Notice is accessible to code running on the instance via the instance’s metadata at http://169.254.169.254/latest/meta-data/spot/termination-time. This field becomes available when the instance has been marked for termination and will contain the time when a shutdown signal will be sent to the instance’s operating system.

The most common way discussed to detect that the Two Minute Warning has been issued is by polling the instance metadata every few seconds. This is available on the instance at:

http://169.254.169.254/latest/meta-data/spot/termination-time

This field will normally return a 404 HTTP status code but once the two-minute warning has been issued, it will return the time that shutdown will actually occur.

This can only be accessed from the instance itself, so you have to put this code on every spot instance that you are running. A simple curl to that address will return the value. You might be thinking of setting up a cron job, but do not go down that path. The smallest interval you can run something with cron is once a minute; if you miss it by a second or two, you will not detect it until the next minute and you lose half of the time available to you.

 Sample Snippet:
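
A minimal sketch of such a polling loop (SLACK_WEBHOOK_URL is a placeholder for your incoming-webhook URL; this is an illustrative sketch rather than our production script):

#!/bin/bash
# Poll the metadata endpoint every 5 seconds; when the two-minute warning
# appears (HTTP 200), notify Slack and start draining the instance.
while true; do
  STATUS=$(curl -s -o /dev/null -w "%{http_code}" \
      http://169.254.169.254/latest/meta-data/spot/termination-time)
  if [ "$STATUS" = "200" ]; then
    curl -s -X POST -H 'Content-type: application/json' \
      --data "{\"text\":\"Spot instance $(hostname) marked for termination\"}" \
      "$SLACK_WEBHOOK_URL"
    # deregister from the cluster / stop accepting new work here
    break
  fi
  sleep 5
done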

Below is the alert that we receive in Slack.

Delete Route53 Entries

We create DNS entries for all our Mesos and Storm boxes, but whenever those instances are deleted, their DNS entries remain, leaving lots of useless entries under Route53. So we came up with an idea: why not have a Lambda function that is triggered whenever a Spot Fleet instance gets terminated?

We created a CloudWatch rule:

So, whenever an instance is terminated, this rule runs and triggers a Lambda function, which deletes the Route53 entry of the terminated instance.

Below is the code snippet:

For each Mesos and Storm instance we have tagging, and we use those tags to identify and delete the entries.
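
The core of what the Lambda does is equivalent to the following Route53 call (a sketch; the zone ID, record name, TTL, and IP are placeholders, and a DELETE must match the existing record exactly):

aws route53 change-resource-record-sets \
    --hosted-zone-id <zone-id> \
    --change-batch '{
      "Changes": [{
        "Action": "DELETE",
        "ResourceRecordSet": {
          "Name": "<record-name>",
          "Type": "A",
          "TTL": 300,
          "ResourceRecords": [{ "Value": "<terminated-instance-ip>" }]
        }
      }]
    }'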

Conclusion:

When Spot instances are terminated, we get a two-minute warning that they are about to be reclaimed by AWS. These are precious moments in which we can have the instance deregister from accepting any new work and finish any work in progress. Apart from this, using a Lambda function we can remove stale Route53 entries.

Automation frenzy deployment using OpsWorks, Jenkins and AWS CLI

What is deployment: Deployment is what developers want the DevOps team to carry out.

They (developers) code up some nice stuff, and the above-mentioned people (DevOps) are responsible for handling that code with care and passing it to the production servers so it is hosted in some way. Trust me, it only sounds easy.

It's the era of the cloud, so let's take examples of cloud services, especially AWS (we love AWS here at Talentica). So consider that your infrastructure or servers are on AWS.

Now, when setting up your infra (the platform to host your code), you will chalk out a plan, a process to construct an architecture over it. But what if you don't know much about how infra works?

Well, there is this service, AWS OpsWorks, which is quite simple to understand.
OpsWorks is based on another tool, Chef server. You don't need to know much about Chef to use AWS OpsWorks; still, Chef is a configuration management tool that follows idempotency and is mostly scripted in the Ruby language.

Whatever you want the Chef tool to run is called a "recipe".
A recipe contains definitions (blocks of code) for what you want your server to do.

Here is an example of a recipe to make the talking make some sense:

user "Add a user" do
   home "/home/joe"  
   shell "/bin/bash"
   username "joe"  
 end

Ruby code generally looks like “do … end”

Here, OpsWorks will fetch this block of code from wherever you want, say from a local machine or by pulling it from GitHub/Bitbucket directly onto the server, then it will read the Ruby code and create a user with the username "joe".

Let us complicate things a bit-

Consider your infra /platform which hosts your website or whatever block of code on the INTERNET looks something like this:

 

Here, at the bottom, there is a database server (we use AWS RDS). The second bottommost level is a herd of servers; well, technically we call it a cluster, but that's okay. The catch is that all the servers are duplicated, and they all do the same thing. Why? Because we are "load balancing" between the servers, that's why.

Now, as I said, let's complicate things. Imagine you want the same architecture just for testing: you want to test your code first in one environment (consider the whole image above as one environment). So now we have two environments like the image above, but in QA there are no users and it is not available directly on the internet (don't ask why!).
One fine day we need to add another environment for some other purpose; call it loadtest, to check how much load our architecture can withstand. And we also want another environment for demos, which shall always be available, as we need to show a demo website that is neither like QA nor like production.

This creates a problem statement: "How am I supposed to handle all this alone?"

This is the part of the blog where we will have to dive deep into what’s OpsWorks.

In Opsworks:

STACKS: A stack, in the infra sense, is the set of all components used in your environment. Since we have multiple environments, as in our pictorial example, we shall have four stacks, named prod-stack, QA-stack, demo-stack, and loadtest-stack.

How to create them? It's easy: go to the AWS Console -> search for OpsWorks -> Get started -> Create stacks. I don't want this blog to be full of images, so if you face any problem doing this, you can mail me.

LAYERS: Wondering when Jenkins will come up in this blog? Wait!
Talking about layers, I need to give a good example here.

Layers are the actual components within the stacks, they can also be categorized as applications.

Let’s go back to our stack scenario diagram where we will have a slight difference this time.

Picking up the "QA" environment: here we have two applications hosted under the QA environment, where QA is the "stack" and qa.abc.test.com and qa.test.com are the "layers".

So here qa.abc.test.com is a branch of QA environment which hosts somewhat different code as compared to the main domain (qa.test.com)

Hence for different code, there shall be different hosting space, different deployment space, perhaps different servers.

Here is what the AWS documentation says about OpsWorks layers:
Every stack contains one or more layers, each of which represents a stack component, such as a load balancer or a set of application servers.

So, when we desire to build this kind of architecture, OpsWorks helps us a lot. It is very easy to simply add layers.

Adding a layer is easy: a click here, a dropdown selection there, and you are done. You can integrate many other AWS services, such as ECS, with OpsWorks, and we can add a DB (RDS panel) as a layer too.
OpsWorks nicely segregates the layers but also keeps them together in a single stack.

Deploying on the layers
1. BOOT-TIME Deployment
2. Code-Deployment

BOOT-TIME Deployment

Boot-time deployment is basically the stuff you want installed when a new server (instance) is booted up. Whatever you specify will be installed on the new server at every boot-up, so you don't need to install it again and again after launching new instances.
The installations are nothing but recipes running on the servers.

If you wish to install the Apache server on all the servers in a particular layer, or in the whole stack, write a Ruby recipe (like the one we wrote for adding a user) for installing Apache, then boot up an instance to check that Apache is already installed on it.

There can be many such important recipes you will need to run on a newly launched instance.

Code-Deployment

JENKINS: Here is its entry.
Jenkins is a Continuous Integration Continuous Delivery tool.
It carries out amazing deployments smoothly. Here we are using Jenkins and OpsWorks in such a way that Jenkins needs to know the following points:
1. How many stacks are present
2. How many layers are present in a stack
3. How much instances(servers) are present in the layers.
4. Which layer to deploy
5. Which layer not to deploy
6. Public IP addresses of the instances on which we need to run the code deploy

To know all these points, we have (or require, if you don't have it) the AWS CLI (command line interface), through which particular AWS query commands give us JSON outputs.

Jenkins can run these AWS commands and get JSON outputs. Let's take an example.

"aws opsworks describe-stacks": this command gives a JSON output with the names of our stacks, such as the QA stack. Suppose you want to deploy to the QA stack; from that JSON output we simply have to fetch "QA-stack". We can use "jq" to parse the JSON output, for example:

OPS_STACK_ID=$(aws opsworks describe-stacks | jq '.Stacks [] | select(.Name=="'"$ENV-stack"'")'| jq '.StackId' | cut -d '"' -f2)

(covers up point number 1)

Here we get the stack ID of our $ENV-stack, where $ENV is a variable set to "qa" (parameterized in Jenkins). So basically, Jenkins has got the QA stack's ID.

Jenkins will then describe this QA stack further to get the JSON output for how many layers are present in the stack.

OPS_LAYER_ID=$(aws opsworks --region us-east-1 describe-layers --stack-id $OPS_STACK_ID |jq '.Layers[] | select (.Name=="'"qa.abc.test-layer"'")'| jq '.LayerId')

(covers up point number 2, 4, 5)

Using jq, we can fetch/parse the layer ID we want by passing its name in the command above.
After getting the layer ID, we can describe the layer to find out how many servers are running under it. According to our QA infra diagram there are two layers; the layer named "qa.abc.test.com" has one instance, which hosts the domain.

aws opsworks --region us-east-1 describe-instances --layer-id $OPS_LAYER_ID | jq '.Instances[].PublicIp' | wc -l

(covers number 3)

This command over here will simply return us the number of public IP of servers inside the layer.

aws opsworks --region us-east-1 describe-instances --layer-id $OPS_LAYER_ID | jq '.Instances[].PublicIp'

(covers number 6)

This is the same command but will simply send out the values of the public IPs

We can SSH/log in to that server's public IP and then run the deploy.sh shell script, which contains all the steps required for the code deploy (like git pull or git clone, etc.).

Here is what the Jenkins job code would look like:

------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 #!/bin/bash -x

echo $BRANCH_NAME

###################################################

ENV=qa

keyfile=/var/lib/jenkins/.ssh/jenkins.pem

user=jenkins

subject=" anything you want"

mail -s "$subject" "email ids"

###################################################
if [ -z "$BRANCH_NAME" ]; then

        echo "Exiting..Git branch name not set..!!"

    exit 1

fi

echo "ENV=$ENV" > sshenv.txt

echo "BRANCH_NAME=$BRANCH_NAME" >> sshenv.txt

#this following command is imp anyways as we will need the stack id, so this command shall remain universal for any deployment

OPS_STACK_ID=$(aws opsworks describe-stacks | jq '.Stacks[] | select(.Name=="'"$ENV-stack"'")'| jq '.StackId' | cut -d '"' -f2)

OPS_LAYER_ID=$(aws opsworks --region us-east-1 describe-layers --stack-id $OPS_STACK_ID |jq '.Layers[] | select (.Name=="'"qa.abc.test-layer"'")'| jq '.LayerId' | cut -d '"' -f2)

#now! that we have got the id we shall take a count of instances and its pub dns entries (instances on which we are going to deploy.)

OPS_INSTANCE_COUNT=$(aws opsworks --region us-east-1 describe-instances --layer-id $OPS_LAYER_ID  | jq '.Instances[].PublicIp' | cut -d '"' -f2| wc -l)

if [ "$OPS_INSTANCE_COUNT" -eq "0" ];then

        echo "Exiting as no servers present for deployment"

        exit 1
fi

### Run deploy via ssh in each server

for i in $(aws opsworks --region us-east-1 describe-instances --layer-id $OPS_LAYER_ID  | jq '.Instances[].PublicIp' | cut -d '"' -f2);

do

        echo $i; ##TESTING

    sudo scp -i $keyfile -o StrictHostKeyChecking=no sshenv.txt $user@$i:/tmp/envfile

    sudo ssh -i $keyfile -o StrictHostKeyChecking=no $user@$i 'bash -x /var/scripts/deploy.sh'

done

 

To conclude: with OpsWorks, Jenkins, and the AWS CLI, we have created a high-availability architecture with one-click/automated deployment configured.