AWS Batch Jobs

What is batch computing?

Batch computing means running jobs asynchronously and automatically, across one or more computers.

What is AWS Batch?

AWS Batch enables developers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS. AWS Batch dynamically provisions the optimal quantity and type of compute resources (for example, CPU or memory optimized instances) based on the volume and specific resource requirements of the batch jobs submitted. AWS Batch plans, schedules, and executes your batch computing workloads across the full range of AWS compute services and features, such as Amazon EC2 and Spot Instances.

Why use AWS Batch?

  • Fully managed infrastructure – No software to install or servers to manage. AWS Batch provisions, manages, and scales your infrastructure.
  • Integrated with AWS – Natively integrated with the AWS platform, AWS Batch jobs can easily and securely interact with services such as Amazon S3, DynamoDB, and Rekognition.
  • Cost-optimized Resource Provisioning – AWS Batch automatically provisions compute resources tailored to the needs of your jobs using Amazon EC2 and EC2 Spot.

AWS Batch Concepts

  • Jobs
  • Job Definitions
  • Job Queue
  • Compute Environments

Jobs

Jobs are the unit of work executed by AWS Batch as containerized applications running on Amazon EC2. Containerized jobs can reference a container image, command, and parameters, or users can simply provide a .zip containing their application, and AWS Batch will run it on a default Amazon Linux container.

$ aws batch submit-job --job-name poller --job-definition poller-def --job-queue poller-queue
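
The command above shows only the required arguments. As a hedged sketch (the job, queue, and definition names reuse the ones above, while the parameter keys and override values are purely illustrative assumptions), we can also pass parameters and per-job container overrides at submission time:

[code language="bash"]
# Hedged sketch: submit a job with illustrative parameters and container overrides.
# The parameter key and the override values below are assumptions, not values taken
# from the actual poller service.
aws batch submit-job \
  --job-name poller \
  --job-queue poller-queue \
  --job-definition poller-def \
  --parameters inputPrefix=2017-05-01 \
  --container-overrides '{
    "vcpus": 2,
    "memory": 2048,
    "environment": [{"name": "RUN_MODE", "value": "hourly"}]
  }'
[/code]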

Job Dependencies

Jobs can express a dependency on the successful completion of other jobs or specific elements of an array job.

Use your preferred workflow engine and language to submit jobs. Flow-based systems simply submit jobs serially, while DAG-based systems submit many jobs at once, identifying inter-job dependencies.

Jobs run in approximately the same order in which they are submitted as long as all dependencies on other jobs have been met.

$ aws batch submit-job --depends-on 606b3ad1-aa31-48d8-92ec-f154bfc8215f …
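
As a minimal sketch of a two-step dependency chain (reusing the poller queue and definition from above; the step names and the use of --query to capture the job ID are illustrative):

[code language="bash"]
# Submit the first job and capture its job ID
STEP1_ID=$(aws batch submit-job \
  --job-name poller-step1 \
  --job-queue poller-queue \
  --job-definition poller-def \
  --query jobId --output text)

# Submit a second job that only becomes RUNNABLE after the first one succeeds
aws batch submit-job \
  --job-name poller-step2 \
  --job-queue poller-queue \
  --job-definition poller-def \
  --depends-on jobId=$STEP1_ID
[/code]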

Job Definitions

Similar to ECS Task Definitions, AWS Batch Job Definitions specify how jobs are to be run. While each job must reference a job definition, many parameters can be overridden.

Some of the attributes specified in a job definition are:

  • IAM role associated with the job
  • vCPU and memory requirements
  • Mount points
  • Container properties
  • Environment variables

$ aws batch register-job-definition --job-definition-name gatk --container-properties …
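
The command above elides the container properties. A fuller, hedged sketch is shown below; the image URI, role ARN, resource sizes, mount points, and environment values are placeholder assumptions rather than the actual gatk definition:

[code language="bash"]
# Hedged sketch of registering a container job definition (placeholder values)
aws batch register-job-definition \
  --job-definition-name gatk \
  --type container \
  --container-properties '{
    "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/gatk:latest",
    "vcpus": 4,
    "memory": 8192,
    "command": ["./run-gatk.sh", "Ref::inputFile"],
    "jobRoleArn": "arn:aws:iam::123456789012:role/batch-job-role",
    "environment": [{"name": "STAGE", "value": "dev"}],
    "mountPoints": [{"sourceVolume": "scratch", "containerPath": "/scratch"}],
    "volumes": [{"name": "scratch", "host": {"sourcePath": "/scratch"}}]
  }'
[/code]

Values such as the command, vCPUs, memory, and environment can then be overridden per job at submission time, as shown earlier.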

Job Queues

Jobs are submitted to a Job Queue, where they reside until they are able to be scheduled to a compute resource. Information related to completed jobs persists in the queue for 24 hours.

$ aws batch create-job-queue --job-queue-name genomics --priority 500 --compute-environment-order …
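
Expanding the elided --compute-environment-order, a hedged sketch of a full call could look like the following; the compute environment names are placeholders:

[code language="bash"]
# Hedged sketch: a queue that prefers a Spot compute environment and
# falls back to an On-Demand one (placeholder compute environment names)
aws batch create-job-queue \
  --job-queue-name genomics \
  --state ENABLED \
  --priority 500 \
  --compute-environment-order order=1,computeEnvironment=genomics-spot-ce \
                              order=2,computeEnvironment=genomics-ondemand-ce
[/code]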


Compute Environments

Job queues are mapped to one or more Compute Environments containing the EC2 instances that are used to run containerized batch jobs.

Managed (recommended) compute environments enable you to describe your business requirements (instance types, min/max/desired vCPUs, and an EC2 Spot bid as x% of On-Demand) and AWS launches and scales resources on your behalf.

We can choose specific instance types (e.g. c4.8xlarge), instance families (e.g. C4, M4, R3), or simply choose "optimal", and AWS Batch will launch appropriately sized instances from its more modern instance families.
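
A hedged sketch of creating such a managed compute environment is shown below; the subnet, security group, and role identifiers are placeholder assumptions:

[code language="bash"]
# Hedged sketch: managed compute environment using Spot at 60% of On-Demand,
# letting AWS Batch pick "optimal" instance types (placeholder ARNs and IDs)
aws batch create-compute-environment \
  --compute-environment-name genomics-spot-ce \
  --type MANAGED \
  --state ENABLED \
  --service-role arn:aws:iam::123456789012:role/AWSBatchServiceRole \
  --compute-resources '{
    "type": "SPOT",
    "bidPercentage": 60,
    "minvCpus": 0,
    "maxvCpus": 128,
    "desiredvCpus": 0,
    "instanceTypes": ["optimal"],
    "subnets": ["subnet-aaaa1111", "subnet-bbbb2222"],
    "securityGroupIds": ["sg-cccc3333"],
    "instanceRole": "ecsInstanceRole",
    "spotIamFleetRole": "arn:aws:iam::123456789012:role/AmazonEC2SpotFleetRole"
  }'
[/code]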

Alternatively, we can launch and manage our own resources within an Unmanaged compute environment. Our instances need to include the ECS agent and run supported versions of Linux and Docker.

$ aws batch create-compute-environment --compute-environment-name unmanagedce --type UNMANAGED …

AWS Batch will then create an Amazon ECS cluster which can accept the instances we launch. Jobs can be scheduled to our compute environment as soon as the instances are healthy and register with the ECS agent.
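
To find the ECS cluster that AWS Batch created for the unmanaged compute environment (so that we can register our instances with it), a query along these lines can be used:

[code language="bash"]
# Look up the ECS cluster backing the unmanaged compute environment
aws batch describe-compute-environments \
  --compute-environments unmanagedce \
  --query 'computeEnvironments[0].ecsClusterArn' --output text
[/code]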

Job States

Jobs submitted to a queue can have the following states (a CLI sketch for inspecting them follows the list):

  • SUBMITTED: Accepted into the queue, but not yet evaluated for execution
  • PENDING: The job has dependencies on other jobs which have not yet completed
  • RUNNABLE: The job has been evaluated by the scheduler and is ready to run
  • STARTING: The job is in the process of being scheduled to a compute resource
  • RUNNING: The job is currently running
  • SUCCEEDED: The job has finished with exit code 0
  • FAILED: The job finished with a non-zero exit code or was cancelled or terminated.
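
A minimal CLI sketch for inspecting these states; the queue name and job ID reuse examples from earlier in this post:

[code language="bash"]
# Jobs waiting in the queue for capacity or placement
aws batch list-jobs --job-queue poller-queue --job-status RUNNABLE

# Detailed view of a single job, including its current status and status reason
aws batch describe-jobs --jobs 606b3ad1-aa31-48d8-92ec-f154bfc8215f
[/code]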

AWS Batch Actions

  • Jobs: SubmitJob, ListJobs, DescribeJobs, CancelJob, TerminateJob
  • Job Definitions: RegisterJobDefinition, DescribeJobDefinitions, DeregisterJobDefinition
  • Job Queues: CreateJobQueue, DescribeJobQueues, UpdateJobQueue, DeleteJobQueue
  • Compute Environments: CreateComputeEnvironment, DescribeComputeEnvironments, UpdateComputeEnvironment, DeleteComputeEnvironment

AWS Batch Pricing

There is no additional charge for AWS Batch; we pay only for the underlying AWS resources (such as EC2 instances) we consume.

Use Case

Poller and Processor Service

Purpose

The poller service needs to run every hour, like a cron job, and submit one or more requests to a processor service, which has to launch the required number of EC2 resources, process files in parallel, and terminate them when done.

Solution

We plan to go with a serverless architecture instead of the traditional Beanstalk/EC2 approach, as we don't want to maintain and keep an EC2 server instance running 24/7.

This approach will reduce our AWS bill, as EC2 instances launch when a job is submitted to AWS Batch and terminate when the job execution is completed.

Poller Service Architecture Diagram

Processor Service Architecture Diagram

First time release

For Poller and Processor Service:

  • Create Compute environment
  • Create Job queue
  • Create Job definition

To automate the above resource creation process, we use batchbeagle (for installation and configuration, please refer to the batch-deployment repository).

Command to create/update the Batch Job resources of a stack (creates all Job Definitions, Job Queues, and Compute Environments):

beagle -f stack/stackname/servicename.yml assemble

To start Poller service:

  • Enable a scheduler using an AWS CloudWatch Events rule to trigger the poller service batch job, as sketched below.
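
A hedged sketch of such a schedule using the CLI is shown below; the rule name, role, and ARNs are placeholders, and the role is assumed to allow CloudWatch Events to submit Batch jobs:

[code language="bash"]
# Hedged sketch: trigger the poller Batch job every hour via a CloudWatch Events rule
aws events put-rule \
  --name poller-hourly \
  --schedule-expression "rate(1 hour)"

aws events put-targets \
  --rule poller-hourly \
  --targets '[{
    "Id": "poller",
    "Arn": "arn:aws:batch:us-east-1:123456789012:job-queue/poller-queue",
    "RoleArn": "arn:aws:iam::123456789012:role/events-invoke-batch",
    "BatchParameters": {"JobDefinition": "poller-def", "JobName": "poller"}
  }]'
[/code]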

Incremental release

We must create a new revision of the existing job definition that points to the ECR image tagged with the new release version to be deployed.

Command to deploy a new release version of the Docker image to the Batch job (creates a new revision of an existing Job Definition):


beagle -f stack/stackname/servicename.yml job update job-definition-name

Monitoring

Cloudwatch Events

We will use the AWS Batch event stream for CloudWatch Events to receive near real-time notifications regarding the current state of jobs that have been submitted to our job queues.

AWS Batch sends job status change events to CloudWatch Events. AWS Batch tracks the state of our jobs, and if a previously submitted job's status changes, an event is triggered, for example when a job moves from the RUNNING status to the FAILED status.

We will configure an Amazon SNS topic to serve as the event target. The topic sends the notification to a Lambda function, which then filters the relevant content out of the SNS message (JSON), formats it, and sends it to the respective environment's Slack channel.

CloudWatch Event Rule → SNS Topic → Lambda Function → Slack Channel
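
A hedged sketch of the first two hops of that pipeline (the rule and topic names and the topic ARN are placeholders; the SNS topic policy must also allow CloudWatch Events to publish to it):

[code language="bash"]
# Match AWS Batch job state change events and send them to an SNS topic
aws events put-rule \
  --name batch-job-state-change \
  --event-pattern '{"source": ["aws.batch"], "detail-type": ["Batch Job State Change"]}'

aws events put-targets \
  --rule batch-job-state-change \
  --targets '[{"Id": "sns", "Arn": "arn:aws:sns:us-east-1:123456789012:batch-job-events"}]'
[/code]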

Batch Job Status Notification in Slack

Slack notification provides the following details:

  • Job name
  • Job Status
  • Job ID
  • Job Queue Name
  • Log Stream Name

Go ServerLess with Firebase cloud functions

Firebase Cloud function

With the announcement of the Cloud Functions beta at the Google Cloud Next 2017 event, Google has added one of the most highly requested features to the Firebase suite. This is a major step from Google in making Firebase serverless. In this post, we will look at some of the capabilities, pros and cons, setup, and deployment of Firebase Cloud Functions. Google I/O is just days away, and knowing about Firebase is surely going to help in understanding the upcoming Firebase features.

CLOUD GIANTS RACE: AWS vs AZURE vs GOOGLE CLOUD

The further we travel in time towards the future, the more cloud computing keeps captivating us with its mysterious enticements. The competition is heating up in the public cloud space as vendors regularly drop prices and offer new features. I will shine a light on the competition between the three giants of the cloud: Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft's Azure.

One-Click Deployment with AWS CodeDeploy

AWS CodeDeploy is a deployment system that enables developers to automate the deployment of applications on EC2 instances  and to update the applications as required.

You can deploy a nearly unlimited variety of application content, such as code, web and configuration files, executables, packages, scripts, multimedia files, and so on. AWS CodeDeploy can deploy application content stored in Amazon S3 buckets, GitHub repositories, or Bitbucket repositories. You do not need to make changes to your existing code before you can use AWS CodeDeploy.

Schedule Daily EC2 instance stop using CloudWatch Events

Purpose

Infra/DevOps teams often have instances created for POC/demo/testing purposes that need to be stopped daily (during office off-hours) or over weekends to save cost. This adds overhead, since we have to stop the instances manually before leaving the office every day, and sometimes we forget to stop an instance, which again adds to the cost. So there was a demand to automate this process in order to save cost.

Build a Custom Solr Filter to Handle Unit Conversions

Recently, I came across a use case where it was required to handle units of weight in the index. For instance, 2kg and 2000g, when searched, should return the same set of results.

So, for achieving the above, I wrote a custom Solr filter that will work along with KeywordTokenizer to convert all units of weight in the incoming request to a single unit (g) and hence every measurement will be saved in the form of a number; at the same time, it will also keep units like kg/g/mg intact while returning the docs.

First, we need to write a custom TokenFilter and TokenFilterFactory.

UnitConversionFilter.java

[code language="java"]

package com.solr.custom.filter.test;
import java.io.IOException;

import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

/**
* @author SumeetS
*
*/
public class UnitConversionFilter extends TokenFilter {

    private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);

    /**
     * @param input
     */
    public UnitConversionFilter(TokenStream input) {
        super(input);
    }

    /* (non-Javadoc)
     * @see org.apache.lucene.analysis.TokenStream#incrementToken()
     */
    @Override
    public boolean incrementToken() throws IOException {
        if (input.incrementToken()) {
            String inputWt = termAtt.toString(); // assuming the format to be 1kg / 1g / 1mg
            float valInGrams = convertUnit(inputWt);
            String storeFormat = valInGrams + "";
            termAtt.setEmpty();
            termAtt.copyBuffer(storeFormat.toCharArray(), 0, storeFormat.length());
            return true;
        } else {
            return false;
        }
    }

    // Split the token into its numeric value and unit (kg/mg/g) and normalize the value to grams.
    private float convertUnit(String field) {
        String[] tmp = field.split("(k|m)?g");
        float weight = Integer.parseInt(tmp[0]);
        String[] tmp2 = field.split(tmp[0]);
        String unit = tmp2[1];
        float convWt = 0;
        switch (unit) {
            case "kg":
                convWt = weight * 1000;
                break;
            case "mg":
                convWt = weight / 1000;
                break;
            case "g":
                convWt = weight;
                break;
        }
        return convWt;
    }
}

[/code]

UnitConversionTokenFilterFactory.java

[code language="java"]

package com.solr.custom.filter.test;
import java.util.Map;

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.util.TokenFilterFactory;

/**
* @author SumeetS
*
*/
public class UnitConversionTokenFilterFactory extends TokenFilterFactory {

    /**
     * @param args
     */
    public UnitConversionTokenFilterFactory(Map<String, String> args) {
        super(args);
        if (!args.isEmpty()) {
            throw new IllegalArgumentException("Unknown parameters: " + args);
        }
    }

    /* (non-Javadoc)
     * @see org.apache.lucene.analysis.util.TokenFilterFactory#create(org.apache.lucene.analysis.TokenStream)
     */
    @Override
    public TokenStream create(TokenStream input) {
        return new UnitConversionFilter(input);
    }
}

[/code]

NOTE: When you override the TokenFilter and TokenFilterFactory, make sure to change the protected constructors to public; otherwise Solr will throw a NoSuchMethodException during plugin initialization.

Now, compile and export the above classes into a jar, say customUnitConversionFilterFactory.jar.

Steps to Deploy Your Jar Into Solr

1. Place your jar file under /lib

2. Make an entry in the solrconfig.xml file to help Solr identify your custom jar.

[code language="xml"]

<lib dir="../../../lib/" regex=".*\.jar" />

[/code]

3. Add custom fieldType and field in your schema.xml

[code language="xml"]

<field name="unitConversion" type="unitConversion" indexed="true" stored="true"/>

<fieldType name="unitConversion" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="com.solr.custom.filter.test.UnitConversionTokenFilterFactory"/>
  </analyzer>
</fieldType>
[/code]

4. Now restart Solr and browse to the Documents section of your core in the Solr admin console.

5. Add documents in your index like below:

{"id":"tmp1","unitConversion":"1000g"}
{"id":"tmp2","unitConversion":"2kg"}
{"id":"tmp3","unitConversion":"1kg"}

6. Query your index.

Query1: querying for documents with 1kg

http://localhost:8983/solr/core1/select?q=*%3A*&fq=unitConversion%3A1kg&wt=json&indent=true

Result:

{
 "responseHeader":{
 "status":0,
 "QTime":0,
 "params":{
 "q":"*:*",
 "indent":"true",
 "fq":"unitConversion:1kg",
 "wt":"json"}},
 "response":{"numFound":2,"start":0,"docs":[
 {
 "id":"tmp1",
 "unitConversion":"1000g",
 "_version_":1524411029806645248},
 {
 "id":"tmp3",
 "unitConversion":"1kg",
 "_version_":1524411081738420224}]
 }}

Query2: querying for documents with 2kg

http://localhost:8983/solr/core1/select?q=*%3A*&fq=unitConversion%3A2kg&wt=json&indent=true

Result:

{
 "responseHeader":{
 "status":0,
 "QTime":0,
 "params":{
 "q":"*:*",
 "indent":"true",
 "fq":"unitConversion:2kg",
 "wt":"json"}},
 "response":{"numFound":1,"start":0,"docs":[
 {
 "id":"tmp2",
 "unitConversion":"2kg",
 "_version_":1524411089834475520}]
 }}

Query3: let’s try faceting

http://localhost:8983/solr/core1/select?q=*%3A*&rows=0&wt=json&indent=true&facet=true&facet.field=unitConversion

{
 "responseHeader":{
 "status":0,
 "QTime":1,
 "params":{
 "q":"*:*",
 "facet.field":"unitConversion",
 "indent":"true",
 "rows":"0",
 "wt":"json",
 "facet":"true"}},
 "response":{"numFound":335,"start":0,"docs":[]
 },
 "facet_counts":{
 "facet_queries":{},
 "facet_fields":{
 "unitConversion":[
 "1000.0",2,
 "2000.0",1]},
 "facet_dates":{},
 "facet_ranges":{},
 "facet_intervals":{},
 "facet_heatmaps":{}}}

This is just a basic implementation. One can add additional fields to identify the type of unit and then based on that decide the conversion.

Further improvements include handling of range queries along with the units.


Multi-tenancy in Cloud Application through Meta Data Driven Architecture

A multi-tenant architecture is designed to allow tenant-specific configurations at the UI, business rules, business processes and data model layers. This is enabled without changing the code thereby transforming complex customization into configuration of software. This drives the clear need for “metadata driven everything” including metadata driven database, metadata driven SOA, Metadata driven business layer, Metadata driven AOP and Metadata driven user interfaces.

Metadata Driven Database

To develop a Multi-tenanted database, one of the following architecture approaches applies:

  • Shared Tables among Tenants
  • Flexible Schema, Shared Tables
  • Multi-Schema, Private Tables
  • Single Schema, Private Tables for Tenants
  • Multi-Instance

As the service grows, building a cloud database service to manage a vast, ever-changing set of actual databases would be difficult. Rules pertaining to who, where, how, etc. may become an overhead as the application and the number of clients grow.

The metadata-driven approach involves collecting all these answers in tables so that they can be reused. It involves storing information about all tables, columns, indexes, constraints, partitions, stored procedures, parameters, and functions, along with the rules defined in the business logic and the transaction steps of a stored procedure.

In a truly metadata-driven database, no rule or procedure refers to tables directly; even these rules are abstracted and used through metadata.

Metadata Driven SOA

To be a true service-oriented application, the fractal model must be applicable from the system boundary to the database, with service interfaces defined for each component or sub-system and each service treated as a black box by the caller.

The metadata-driven nature of the application's services leads the solution to a dead end if a purely technical 'code it' approach is taken. In such a metadata-driven application, exposing functions is replaced by exposing metadata.

Exposing the metadata itself is not the true intent of a metadata-driven application. Driving the propagation of services (functions) over the system boundary is a more accurate way of phrasing the approach that needs to be employed.

A metadata-driven application is capable of providing a bridging approach to propagate its services into many technologies via code generation. This is a direct result of all services being regular and that all service descriptions are available in a meta-format at both build-time and runtime.

Metadata Driven Business Layer

In the past, business logic and workflow were written using if-else conditions. If a business model or workflow is being designed in a multi-tenant environment, then the very first step has to be preparing metadata configurations. These should include the data source, extraction steps, transformation routing, loading, and the source from which rules and execution logic are derived. The next step is the choice of tools and languages that can generate code and workflows out of the configurations. The final and most challenging step will be changing the mindset of developers to "not create workflows and business objects, but write code which can generate them".

Metadata Driven AOP

Metadata and the Join Point Model

A join point is an identifiable point in the execution of a system. The model defines which join points in a system are exposed and how they are captured. To implement crosscutting functionality using aspects, you need to capture the required join points using a programming construct called a pointcut.

Pointcuts select join points and collect the context at selected join points. All AOP systems provide a language to define pointcuts. The sophistication of the pointcut language is a differentiating factor among the various AOP systems. The more mature the pointcut language, the easier it is to write robust pointcuts.

Capturing Join Points with Metadata

Signature-based pointcuts cannot capture the join points needed to implement certain crosscutting concerns. For example, how would you capture join points requiring transaction management or authorization? Nothing inherent in an element’s name or signature suggests transactionality or authorization characteristics. The pointcut required in these situations can get unwieldy. The example is in AspectJ but pointcuts in other systems are conceptually identical.

pointcut transactedOps()
    : execution(public void Account.credit(..))
    || execution(public void Account.debit(..));

Situations like these invite the use of metadata to capture the required join points. For example, you could write a pointcut as shown below to capture the execution of all the methods carrying the @Transactional annotation.

pointcut transactedOps() : execution(@Transactional * *.*(..));

AOP systems and their join point models can be augmented by consuming metadata annotations. By piggybacking on code generation support it’s possible to consume metadata even when the core AOP system doesn’t directly support it.

Metadata support in AOP systems

To support metadata-based crosscutting, an AOP system needs to provide a way to consume and supply annotations. An AOP system that supports consuming annotations will let you select join points based on annotations associated with program elements. The current AOP systems that offer such support extend the definition for various signature patterns to allow annotation types and properties to be specified. For example, a pointcut could select all the methods carrying an annotation of type Timing. Further, it could subselect only methods with the value property exceeding, say, 25. To implement advice dependent on both annotation type and properties, the system could include pointcut syntax capturing the annotation instances associated with the join points. Lastly, the system could also allow advice to access annotation instances through reflective APIs.

Metadata Driven User Interfaces

Many business applications require the user interface (UI) to be extensible, as the requirements vary from one customer to another. Client-side business logic for the UI may also need customization based on individual user needs. A screen layout for one user might be different from that for another user. This may include control position, visibility, and UIs for various mobile devices. The business logic customization also includes customizing validation rules, changing control properties, and other modifications. For example, a manager may have different options for deleting and moving files than a subordinate.

There are many techniques for enabling business applications to be extensible or customizable. Most applications solve this problem by storing customizable items such as UI layout and client-side business logic as metadata in a repository. This metadata can then be interpreted by a run-time engine to display the screen to users and to execute the client-side business logic when the user performs an action on the screen.

The advantages of this approach are:

  • Redeployment of components on the presentation layer is not required as the customization is done in a central repository.
  • A very light client installation is required. One only needs to deploy the run-time engine to the client machine.

While designing a Metadata driven UI, the following components are taken into account:

  1. Metadata Service: an ordinary service layer that delivers metadata for the UI
  2. Login/Role Controller
  3. Action Controller
  4. Widget Controller
  5. MetaTree
  6. TreeService

Multi-tenancy in cloud applications can have a huge impact on the application delivery and productivity of an IT company. Yet most people who use the cloud and its services tend to ignore it owing to its "behind the scenes" functionality. Many old applications have been written in a multi-tenant manner, but moving them to SaaS or converting legacy to SOA might become a challenge. Metadata-driven programming is indeed a different paradigm. However, it has the capability to solve numerous challenges associated not only with multi-tenancy but with other cloud issues as well.