Build Automation with Gradle

Gradle is the build automation tool currently in vogue, particularly for Java projects. It aims to keep the flexibility that Ant offers and Maven lacks, and to combine it with the build-by-convention functionality that Maven provides, while remaining far more flexible and configurable. It provides dependency management through Ivy and very good multi-project support, thanks to its incremental build methodology and its support for partial builds.

WHY USE GRADLE?
Gradle builds are written in its own Groovy-based DSL, known as the Gradle Build Language. It is a build programming language with features for organizing build logic in a more readable and maintainable form. For instance, a set of instructions that is used repeatedly can be extracted into a method, which can then be called with appropriate parameters from each task, so the code is never duplicated. Gradle's developers cite Groovy's closeness to Java, in terms of syntax, type system and package structure, as the reason for choosing it for the DSL. The argument is that although Gradle is a general-purpose build tool at its core, its main focus is Java projects, and among dynamic scripting languages such as Python and Ruby, Groovy offers Java developers the greatest transparency and ease of understanding.

GRADLE BUILD FILE
Gradle uses a Gradle build file as the build script for a project. It is Gradle's counterpart to Maven's pom.xml and Ant's build.xml. The file is really a build configuration script rather than a build script; the gradle command looks for this file to start the build.

There are two basic building blocks in Gradle: projects and tasks. A project represents any component of the software that can be built, for instance a library JAR, a web application WAR or a distribution ZIP assembled from the JARs produced by various projects. It could also be something to be done, like deploying the application to various environments. A task represents an atomic piece of work that a build performs, such as deleting old classes, compiling some classes, creating a JAR, generating Javadoc or publishing archives to a repository. Tasks are thus the building blocks of builds in Gradle.

A few things stood out for us while evaluating Gradle. Build scripts are made up of code, so they can leverage the full power and flexibility of Groovy. A build can therefore contain easy-to-read, reusable code blocks like:

task upper << {
    String someString = 'mY_nAmE'
    println "Original: " + someString
    println "Upper case: " + someString.toUpperCase()
}

output:
> gradle -q upper
Original: mY_nAmE
Upper case: MY_NAME
Iteration is just as easy when needed, for example:

task count << {
    4.times { print "$it " }
}

output:
> gradle -q count
0 1 2 3
Methods can be extracted from the build logic and reused, for example:

task checksum << {
    fileList('../tempFileDirectory').each { File file ->
        ant.checksum(file: file, property: "cs_$file.name")
        println "$file.name Checksum: ${ant.properties["cs_$file.name"]}"
    }
}

task loadfile << {
    fileList('../tempFileDirectory').each { File file ->
        ant.loadfile(srcFile: file, property: file.name)
        println "I'm fond of $file.name"
    }
}

File[] fileList(String dir) {
    file(dir).listFiles({ file -> file.isFile() } as FileFilter).sort()
}

Another point to note here is the use of Ant tasks (ant.checksum and ant.loadfile). It demonstrates how Ant tasks are treated as first-class citizens in Gradle. The Java plugin that comes with the Gradle distribution is also a neat addition and reinforces the claim of useful build-by-convention support: it defines conventional build tasks such as clean, compile, test and assemble for Java projects.

In conclusion, Gradle has the potential to replace existing build tools and processes in a big way. However, the move from existing systems to Gradle has understandably been limited across projects. This may be due to the small yet real learning curve that comes with moving to Gradle, the relatively low importance attributed to build systems in a project, or developers preferring to stick with systems they are already comfortable with. Gradle is definitely an option worth considering if you are starting a new project or your current build tools aren't cutting it for you anymore.

Better Unit testing with Mockito

Unit tests are like guidelines that help you test right. They push your design towards loose coupling and well-etched-out responsibilities, and they provide fast, automated regression cover for refactors and small changes to the code.

The best-case scenario for a unit test is an independent, isolated class. Unit testing is harder when the class depends on remote method calls, file system operations, database operations and the like. Unit testing is meant to be automated and to need minimal setup, yet these dependencies require setup of their own and take a long time to execute. They also make it almost impossible to test the class for negative cases, e.g. network failure, an inaccessible file system or database errors, because you would have to change the response of each dependency for every test case. Mocks come to the rescue of unit tests. And no, mocking does not mean making a mockery of any object.

Mockito Framework

Mockito is an open source testing framework for Java. The framework allows the creation of test double objects, called mock objects, in automated unit tests for the purpose of test-driven development or behavior-driven development. Mockito supports mock creation, stubbing and verification.

What is Object Mocking?
Mock objects simulate (fake) real objects and expose the same interface(s) or class as the real object. A mock object allows you to set positive or negative expectations, and it lets you verify whether those expectations were met: it records all the interactions, which can be checked later.
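To make this concrete, here is a minimal sketch of the create/stub/verify cycle using Mockito with JUnit 4. The List example mirrors the style of Mockito's own documentation; the class and test names are illustrative.

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNull;
import static org.mockito.Mockito.*;

import java.util.List;
import org.junit.Test;

public class MockingBasicsTest {

    @Test
    @SuppressWarnings("unchecked")
    public void mockStubAndVerify() {
        // Creation: a mock that implements the List interface
        List<String> mockedList = mock(List.class);

        // Stubbing: define what the mock returns for get(0)
        when(mockedList.get(0)).thenReturn("first");

        assertEquals("first", mockedList.get(0)); // stubbed call returns the canned value
        assertNull(mockedList.get(1));            // unstubbed calls return a default (null)

        // Verification: the mock recorded the interactions, so we can check them afterwards
        verify(mockedList).get(0);
        verify(mockedList, times(1)).get(1);
    }
}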

When to Mock?
Object mocking should be conducted when the real object:

  • Has behavior that is hard to trigger or is non-deterministic (see the sketch after this list).
  • Is slow and difficult to set up.
  • Has (or is) a UI.
  • Does not exist yet. For example, if Team A is working on X and needs Y from Team B while Team B is still building Y, Team A can mock Y and start work on X right away.
  • Needs to be made to misbehave on demand; a mock can simulate both behavior and ill-behavior.
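The first point above is where mocks shine: a stub can force a failure path (a network outage, a DB error) that is hard to produce with the real collaborator. Here is a hedged sketch, with a hypothetical PaymentGateway collaborator and a CheckoutService under test:

import static org.junit.Assert.assertFalse;
import static org.mockito.Mockito.*;

import java.io.IOException;
import org.junit.Test;

public class CheckoutServiceTest {

    // Hypothetical collaborator that would normally make a remote call
    interface PaymentGateway {
        boolean charge(String accountId, long amountInCents) throws IOException;
    }

    // Hypothetical class under test: degrades gracefully when the gateway fails
    static class CheckoutService {
        private final PaymentGateway gateway;
        CheckoutService(PaymentGateway gateway) { this.gateway = gateway; }

        boolean checkout(String accountId, long amountInCents) {
            try {
                return gateway.charge(accountId, amountInCents);
            } catch (IOException e) {
                return false; // treat a network failure as a failed checkout
            }
        }
    }

    @Test
    public void checkoutReturnsFalseWhenTheGatewayIsUnreachable() throws Exception {
        PaymentGateway gateway = mock(PaymentGateway.class);
        // Simulate ill-behavior: a network failure that is hard to produce on demand
        when(gateway.charge(anyString(), anyLong())).thenThrow(new IOException("network down"));

        assertFalse(new CheckoutService(gateway).checkout("acct-42", 999L));
        verify(gateway).charge("acct-42", 999L);
    }
}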

To know more about Mockito, visit: http://docs.mockito.googlecode.com/hg/org/mockito/Mockito.html

Agile Database Migration Tools for Java

Agile methodologies of software development require a major shift in approach towards database management. The reason is that requirements are never really frozen during agile development. Though changes are controlled, the attitude of the process is to enable change as much as possible. This change can be in response to the inherent instability of requirements in many projects, or to better support dynamic business environments.

Thus agile development needs tools that enable and support evolutionary database design, while also solving problems like applying database schema changes to an instance holding critical data.

Ruby on Rails, which is designed for agile development, provides built-in capabilities to cope with this problem in the form of Active Record migrations. However, Java currently doesn't have a solution of that pedigree. There are a few viable alternatives though; the most popular of the current breed is Liquibase. I will present an evaluation here which might help you choose the best tool.

Liquibase: Here are my observations from trying to integrate it with a Spring and Hibernate 3 annotation-based project. Liquibase is described as a substitute for Hibernate's hbm2ddl and can be integrated with an existing project in a few simple steps. Its advantages are Spring integration, so developer environments always have up-to-date databases, and version management: changesets are given IDs and recorded in a table once applied to a database, and changelogs can be written in SQL.

Liquibase offers two ways of writing migrations, XML-based and SQL-based. Its biggest disadvantage is the inability to write Java-based migrations. This led us to search for a solution that provides most of the features present in Liquibase, as well as support for Java-based migrations.

c5-db-migration: It supports migrations written in Groovy, as well as migrations run from within the application, i.e. it provides an API for migrations. However, it doesn't support migrations written in Java, has no multiple-schema support and isn't available in Maven Central.

Migrate4j: This tool supports Java migrations and also provides a migration API. The disadvantage is that it doesn't even support plain SQL migrations and is terribly short on features (no auto-creation of the metadata table, no multiple-schema support and no Maven support).

Flyway: Flyway offers cleaner versioning than Liquibase, migrations can be written in Java as well as SQL, and it auto-discovers migrations in project packages. The one missing feature is rollback support, though that is a deliberate design choice by the Flyway developers.
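For completeness, here is a rough sketch of kicking off Flyway migrations from code. It uses the fluent Flyway.configure() API of recent Flyway releases (older versions used new Flyway() with setters instead), and the JDBC URL, credentials and migration location are placeholders:

import org.flywaydb.core.Flyway;

public class DbMigrator {
    public static void main(String[] args) {
        // Placeholder connection details; in a real project these come from configuration
        Flyway flyway = Flyway.configure()
                .dataSource("jdbc:h2:mem:appdb", "sa", "")
                .locations("classpath:db/migration") // home of V1__init.sql, V2__AddIndex.java, ...
                .load();

        // Applies all pending versioned migrations in order and records each one
        // in Flyway's metadata table
        flyway.migrate();
    }
}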

After careful evaluation of each of these tools we decided to go ahead with Flyway for the project, and it has been great so far.

Building Web Services with JAX-WS: An Introduction

Developing SOAP-based web services seems difficult because the message formats are complex. The JAX-WS API makes it easy by hiding this complexity from the application developer. JAX-WS stands for Java API for XML Web Services, a technology for building web services and clients that communicate using XML.

JAX-WS allows developers to write message-oriented as well as RPC-oriented web services.

A web service operation invocation in JAX-WS is represented by an XML-based protocol like SOAP. The envelope structure, encoding rules and conventions for representing web service invocations and responses are defined by the SOAP specification. These calls and responses are transmitted as SOAP messages over HTTP.

Though SOAP messages appear complex, the JAX-WS API hides this complexity from the application developer. On the server side, you specify the web service operations by defining methods in an interface written in the Java programming language, and you code one or more classes that implement these methods.
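As an illustrative sketch (the names are made up, and this assumes a JAX-WS runtime is available, e.g. the reference implementation bundled with Java 6 to 8), a service can be as small as a single annotated class published on an HTTP endpoint:

import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.xml.ws.Endpoint;

@WebService
public class HelloService {

    @WebMethod
    public String sayHello(String name) {
        return "Hello, " + name;
    }

    public static void main(String[] args) {
        // Publishes the service and its generated WSDL at this URL
        Endpoint.publish("http://localhost:8080/hello", new HelloService());
        System.out.println("Service running at http://localhost:8080/hello?wsdl");
    }
}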

It is equally easy to write client code. A client creates a proxy (a local object representing the service) and then invokes methods on the proxy. With JAX-WS you do not generate or parse SOAP messages yourself; the JAX-WS runtime converts the API calls and responses to and from SOAP messages.
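A client-side sketch, again with illustrative names: the service endpoint interface (here called HelloService) is typically generated from the WSDL with the wsimport tool, and the QName values must match those in that WSDL:

import java.net.URL;
import javax.xml.namespace.QName;
import javax.xml.ws.Service;

public class HelloClient {
    public static void main(String[] args) throws Exception {
        // WSDL exposed by the running service; URL and QName values are placeholders
        URL wsdl = new URL("http://localhost:8080/hello?wsdl");
        QName serviceName = new QName("http://example.com/", "HelloServiceService");

        Service service = Service.create(wsdl, serviceName);

        // getPort returns a dynamic proxy; calling it sends and receives SOAP under the hood.
        // HelloService here is the wsimport-generated service endpoint interface.
        HelloService proxy = service.getPort(HelloService.class);
        System.out.println(proxy.sayHello("world"));
    }
}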

With JAX-WS, web services and clients have a big advantage: the platform independence of the Java programming language. Additionally, JAX-WS is not restrictive: a JAX-WS client can access a web service that is not running on the Java platform, and vice versa. This flexibility is possible because JAX-WS uses technologies defined by the World Wide Web Consortium: HTTP, SOAP and the Web Services Description Language (WSDL). WSDL specifies an XML format to describe a service as a set of endpoints operating on messages.

The three most popular implementations of JAX-WS are:

Metro: It is developed and open sourced by Sun Microsystems. Metro incorporates the reference implementations of the JAXB 2.x data-binding and JAX-WS 2.x web services standards along with other XML-related Java standards. Here is the project link: http://jax-ws.java.net/. You can go through the documentation and download the implementation.

Axis: The original Apache Axis was based on the first Java standard for Web services which was JAX-RPC. This did not turn out to be a great approach because JAX-RPC constrained the internal design of the Axis code and caused performance issues and lack of flexibility. JAX-RPC also made some assumptions about the direction of Web services development, which turned out to be wrong!

By the time Axis2 development started, a replacement for JAX-RPC had already come into the picture, so Axis2 was designed to be flexible enough to support the replacement web services standard on top of the base framework. Recent versions of Axis2 support both the JAXB 2.x Java XML data-binding standard and the JAX-WS 2.x Java web services standard that replaced JAX-RPC. (JAX-RPC is the Java API for XML-based Remote Procedure Calls; since RPC lets clients execute procedures on other systems, it is often used in a distributed client-server model, and in JAX-RPC the RPC is carried by an XML-based protocol such as SOAP.) Here is the project link: http://axis.apache.org/axis2/java/core/

CXF: Another web services stack from Apache. Though both Axis2 and CXF originated at Apache, they take very different approaches to how web services are configured and delivered. CXF is very well documented and offers much more flexibility and additional functionality if you're willing to go beyond the JAX-WS specification. It also supports Spring. Here is the project link: http://cxf.apache.org/

At Talentica we have used all the three implementations mentioned above in various projects.

Why is CXF my choice?
Every framework has its strengths, but CXF is my choice. CXF has great integration with the Spring framework, and I am a huge fan of Spring projects. It's modular and easy to use. It has great community support and you can find lots of tutorials and resources online. It also supports both the JAX-WS and JAX-RS specifications, but that alone should not be the deciding factor when choosing CXF. Performance-wise it is better than Axis and gives almost the same performance as Metro.

So CXF is my choice, what is yours?

Event: Node.js Over the Air!

Over the Air sessions at Talentica are technical workshops where a bunch of developers roll up their sleeves, tinker around with new platforms/technologies to learn together, gather new insights and get a healthy dollop of inspiration. Last week we had an “Over the Air” session on Node.js.

Node.js is a server-side JavaScript runtime that changes the notion of how a server should work. Its goal is to enable a programmer to build highly scalable applications that handle tens of thousands of simultaneous connections on a single server machine. Node.js is one of the most talked-about technologies today, and to find out how it really works we picked it up for this Over the Air session.

Once we gathered, it took a little while for some of the participants to get used to the event-driven programming style. Pretty soon, we were all working together on building a cool chat app. By the end of the day, we had a fully working version of a chat room app in which any user can enter the chat room by simply entering a nickname. Subsequent messages are posted to all logged-in users, and the right-side pane shows everyone who is logged in.

This is a fairly decent basic version. Going forward, we plan to enhance the user interface so that people can play games using the chat app, integrate the UI with the chat engine, and let users challenge each other to a game while chatting.

First Impressions
Node.js is an excellent option for interactive apps. I will not hesitate to use Node.js in products that require interactive functionality like chat, auctions or online multiplayer games. One can also use Node.js for just the part of the product it suits, rather than building the complete product on it.

The fact that we can code server side with Javascript should make Javascript developers jump with joy. Code reuse between client and server side might actually be possible!

On the negative side, I am not sure the event-driven programming model is a good one on the server side; it might lead to spaghetti code with callbacks all over the place. Another concern is that although the community is very active and plug-ins are being developed at a rapid pace, it is still not a tried-and-tested technology at this moment!

Multi-server Applications on the Wireless Web

Here we will discuss how to build web applications that serve wireless clients according to each client's capabilities.

What are the challenges?
Development of mobile applications is often highly dependent on the target platform. When developing any mobile content portal we generally think about its accessibility through mobile browsers (Nokia, Openwave and i-mode browsers, AvantGo on PDAs, etc.), which use markup languages like WML, HDML, cHTML and XHTML. We want to ensure that each browser gets a markup language it understands and can present the portal content in the correct format. In short, hand-tuning a wireless application for every device on the market is not just difficult, it is futile: if you invest a huge amount of resources today, chances are that a new device will be shipped tomorrow and you'll need to tweak your application again.

What is the solution?
Wireless Universal Resource File (WURFL) is an open source project that uses XML to describe the capabilities of wireless devices. It is a database (some call it a "repository") of wireless device capabilities. With WURFL, figuring out which phone works with which technology is a whole lot easier. We can use WURFL to determine device capabilities programmatically and to serve different content to different devices dynamically, depending on the device accessing the content.

Here are some of the things WURFL can help you know about a device:

  • Screen size of the device
  • Supported image, audio, video, ringtone, wallpaper, and screensaver formats
  • Whether the device supports Unicode
  • Is it a wireless device? What markup does it support?
  • What XHTML MP/WML/cHTML features does it support? Does it work with tables? Can it work with standard HTML?
  • Does it have a pointing device? Can it use CSS?
  • Does it have Flash Lite/J2ME support? What features?
  • Can images be used as links on this device? Can it display image and text on the same line?
  • If it is an iMode phone, what region is it from? Japan, US or Europe?
  • Does the device auto-expand a select drop down? Does it have inline input for text fields?
  • What SMS/MMS features are supported?

The WURFL framework also contains tools, utilities and libraries to parse and query the stored device data. The WURFL API is available in many programming languages, including Java, PHP, .NET, Ruby and Python, and various open source tools are built around WURFL, such as HAWHAW (PHP), WALL (Java), HAWHAW.NET (.NET framework) and HawTag (a JSP custom tag library).

How does WURFL work?
When a mobile or non-mobile web browser visits your site, it sends a User-Agent header along with the request for your page. The User-Agent identifies the type of device and browser being used, although this information is quite limited and at times not representative of the actual device. Using the WURFL API, the framework matches the User-Agent against its device repository and extracts the capabilities associated with that device. Based on those capabilities, the framework generates the appropriate dynamic content: WML, HTML, XHTML and so on.
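As a rough, hypothetical sketch of what such a lookup looks like with the open source WURFL Java API (class names follow the net.sourceforge.wurfl.core packages; the servlet-context attribute name and exact capability handling depend on your WURFL version and configuration):

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import net.sourceforge.wurfl.core.Device;
import net.sourceforge.wurfl.core.WURFLManager;

public class AdaptiveContentServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // The WURFL listener configured in web.xml places the manager in the servlet
        // context; "wurflManager" is an assumed attribute name for this sketch.
        WURFLManager wurfl = (WURFLManager) getServletContext().getAttribute("wurflManager");

        // Match the incoming User-Agent against the device repository
        Device device = wurfl.getDeviceForRequest(req);

        // Query capabilities and branch on them
        String markup = device.getCapability("preferred_markup");   // e.g. "wml_1_1", "html_web_4_0"
        String width = device.getCapability("resolution_width");

        if (markup != null && markup.startsWith("wml")) {
            resp.setContentType("text/vnd.wap.wml");
            // ... render WML sized for 'width'
        } else {
            resp.setContentType("text/html");
            // ... render (X)HTML sized for 'width'
        }
    }
}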

Though there is some concern about the extra latency of the User-Agent lookup, the advantages make it well worth using. One of the biggest advantages is that when a new device enters the market we do not need to change our application; we just update the WURFL to keep the application optimized. It is very simple and the architecture is sound. Go for it!!!

Machine Learning for Text Extraction

In a previous post we looked at the use of Natural Language Processing techniques in text extraction. Several steps are involved in the processing as each document passes through a pipeline of chained tasks.

A deep pipeline can take several seconds per document. So if one is dealing with thousands of documents an hour, the processing requirements could make the system nonviable. Care needs to be taken to evaluate the trade-off between the improvement in accuracy that adding pipeline tasks brings and the additional processing power it entails.

One reason for the slow speed of our email processing is that we are parsing the entire email, and every email, regardless of whether it is of importance to us. In our case only 2% of the emails received will be of interest. So we would like to reduce the amount of text we process by ignoring the unwanted stuff. This process of weeding out irrelevant text should itself not take too long, otherwise our purpose is lost!

Machine Learning (ML), which is a key area in AI, offers a solution. GATE comes with various machine learning Processing Resources implementing common ML algorithms like Support Vector Machine (SVM), Bayes classification and K-nearest neighbor (KNN). You “train” the algorithm using training sets of text samples.

Training is done by manually classifying sentences in a binary fashion: is this sentence of interest to me or not? Ideally you need thousands of representative sentences. The algorithm is then trained on this data: internally the various features and annotations are used to reverse engineer patterns based on the manual classification.

In production you first run your input text through the machine learning pipeline task. If it predicts that the text is of interest, you run it through the rest of the pipeline; otherwise you ignore it. The problem is that this prediction is probabilistic. There can be two kinds of mistakes: a false positive, where it wrongly tells you that a dud document is of interest, wasting CPU cycles, and the more troublesome false negative, where a valid document is marked as of no interest.

In our case, for example, the latter is an unacceptable error: we would miss reporting valid events to customers, and they would no longer be able to rely on our service to do so. Unfortunately, ML algorithms are such that these two types of error cannot be reduced independently: if you want to catch all valid documents, you also get a lot of duds eating up your CPU cycles.

In addition ML can give you strange results. Bad data in your training sets can have a significant impact on your results. Debugging such issues is very difficult because of the non-deterministic nature of learning algorithms. A lot of trial and error is involved, mostly tedious work manually annotating documents, running different training sets and validating the results on real data.

However, as with the deterministic NLP process using JAPE, the result is magic. Once your training sets are clean and complete, the ML task can weed out a large share of the unwanted documents. Iteratively adding runtime learning to the system (where you enhance the training sets as you go along) can bring dramatic improvements over time.

After the first experience with email parsing we are now using NLP in another project. We have a product for recruiters where resume parsing is an important piece. It currently parses candidate information using regular expressions and string matches.

The accuracy is around 80% for basic information, which is a problem since one out of five fields is missed or wrong. Using a slightly different pipeline from the one described above and building some heuristics into a custom PR, we have been able to get to over 95% accuracy in the lab. In addition, we are now extracting several other types of information that were considered too difficult to extract using traditional programming.

Our experiences have made us look at other aspects of NLP like collaborative filtering and content-based recommendation engines as well as enhanced search using NL techniques. You might see a post on this soon!

Text Extraction using Natural Language Processing

A few months ago I was asked to look into an email processing problem. We needed to extract event related information from consumer-originated email. As a traditional programmer the first instinct was to think in terms of regular expressions and lookup tables! Experience quickly tempered that thought and I decided to look at Natural Language Processing.

There were several standard methodologies in place for natural language processing tasks and quite a few open source tools were available. The jargon was daunting: corpora, entities, gazetteers, POS tags, transducers and JAPE were just a few of the terms I had to wade through. The thought of the alternative, debugging code with zillions of unreadable regular expressions, kept me going!

I downloaded GATE and was able to quickly build a prototype parsing emails to get to our target data. GATE breaks down the task of processing text into small specialized chunks of work strung together in a “pipeline”. The tasks work by putting XML annotations in the text or enhancing/using the annotations put by a previous task. It is a simple and beautiful architecture living up to its acronym: General Architecture for Text Engineering.

Each task is called a Processing Resource (PR) in GATE. You can choose from a host of preinstalled resources, or find and install PRs from the internet or just go ahead and write your own. Let us look at a simple GATE pipeline for text processing.
The first PR in the pipeline is a tokenizer: it takes the email text and converts it into a series of tokens such as numbers, upper-case strings, spaces and punctuation. The second PR splits the text into sentences based on the space and punctuation tokens.
We then have a Part-Of-Speech (POS) tagger: it understands sentence grammar and tags the words of a sentence as nouns, verbs, adjectives, pronouns and so on.

A gazetteer is another useful Processing Resource which marks the text which matches your lookup tables. Take a list of colleges for example. If one of these colleges appears in the text then it gets annotated as a College.

We are almost there! The last stage is the scary-sounding JAPE transducer. This is nothing but a way of defining regular expressions over the GATE annotations using a rule-based language. But didn't we switch to NLP to avoid regular expressions?
JAPE is a very different beast compared to standard regular expressions.

– It works on the annotations added by the pipeline, which capture grammar and lookups, instead of on raw text strings.
– JAPE rules are applied in a declarative manner. Regular expressions are sequential, and on many occasions the order in which they are applied affects the result.

JAPE is a bit difficult to understand; however, the accuracy, stability and maintainability offered by the GATE pipeline are far better than those of traditional programming approaches.

There are several aspects of NLP that make it an art rather than a science. For each type of processing task there are several different PRs to choose from. For example, we found that people use a lot of abbreviations in email and regularly leave out full stops at the end of sentences. A standard sentence splitter fails in such cases. We turned to the RegEx sentence splitter, which let us enhance the logic by defining our own regular expressions to detect or ignore such cases.

In addition, the order of tasks in the pipeline can make a big difference to the accuracy. Moving the gazetteer up the chain and using its annotations in sentence splitting helps resolve problems where the PR might split the sentence where abbreviations like U.S.A. are used (the full stop at the end of A and the space following it causes a line break in a usage like U.S.A Today).
The Java interface to GATE is simple. Once you are happy with the pipeline, from the IDE you

– Save it as a .gapp file in the GATE IDE.
– Load the .gapp file (in Java) and load the documents to process into a collection (the "corpus").
– Execute the pipeline.

For each document you get an annotated XML file which you parse using a standard XML parser to look for the tags your application is interested in.
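Here is a hedged sketch of those three steps with the GATE Embedded API (file names and the annotation type are placeholders, and GATE home and plugin configuration are assumed to be set up):

import java.io.File;
import gate.Corpus;
import gate.CorpusController;
import gate.Document;
import gate.Factory;
import gate.Gate;
import gate.util.persistence.PersistenceManager;

public class EmailPipelineRunner {
    public static void main(String[] args) throws Exception {
        Gate.init(); // initialise GATE (reads gate.home and plugin configuration)

        // 1. Load the pipeline saved from the IDE as a .gapp file
        CorpusController pipeline = (CorpusController)
                PersistenceManager.loadObjectFromFile(new File("email-pipeline.gapp"));

        // 2. Build a corpus and add the documents to process
        Corpus corpus = Factory.newCorpus("emails");
        Document doc = Factory.newDocument(new File("sample-email.txt").toURI().toURL());
        corpus.add(doc);
        pipeline.setCorpus(corpus);

        // 3. Execute the pipeline, then read back the annotations of interest
        pipeline.execute();
        System.out.println(doc.getAnnotations().get("College")); // e.g. gazetteer hits

        // Or export the annotated document as XML for a standard XML parser
        System.out.println(doc.toXml());
    }
}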
A major complexity that I have avoided discussing until now is performance. Look forward to the next post to know more!!

Common Myth Regarding ViewState in ASP.NET

Through this article I want to dispel a very common misconception about ViewState. Most ASP.NET developers think that ViewState is responsible for holding the values of controls such as TextBoxes, so that they are retained even after a postback.

But that is not the case!

Let’s take an example to understand the above:

Place a web server TextBox control (tbControl) and a web server Label control (lblControl).

Set the “Text” property of label and textbox to “Initial Label Text” and “Initial TextBox Text” respectively and set the “EnableViewState” property of both the controls to false.

Place two button controls and set their text to "Change Label Text" and "Post to Server". The first button changes the label's text in its click event handler; the second button simply posts back to the server.

private void btnChangeLabel_Click(object sender, System.EventArgs e)
{
    lblControl.Text = "Label's Text Changed";
}

On running this application, you can see the initial texts in the controls as you have set.

Now, change the text in TextBox and set it to “Changed TextBox Text”.

Now click the "Post to Server" button. What happens is that the TextBox retains its modified value, in spite of EnableViewState being set to false.

The reason for this behavior is that ViewState is not responsible for storing the modified values of controls such as TextBoxes, dropdowns, CheckBoxLists and so on, that is, controls that implement the IPostBackDataHandler interface.

After Page_Init(), there is a stage known as LoadViewState, in which the Page class loads values from the hidden __VIEWSTATE field for those controls (e.g. the Label) whose ViewState is enabled.

Then the LoadPostBackData stage runs, in which the Page class loads the values of controls that implement the IPostBackDataHandler interface (e.g. the TextBox) from the posted form data.

Now click the "Change Label Text" button, which changes the label text programmatically (via the event handler above), and then click "Post to Server". The page reloads and the programmatic change is lost, i.e. the label text reverts to its initial value, "Initial Label Text".

This is because the Label control does not implement the IPostBackDataHandler interface, so it is ViewState that is responsible for persisting its value across postbacks.

And since ViewState has been disabled for it, the Label loses the value set by the "Change Label Text" button on the next postback.

Now enable ViewState for the Label control and repeat the steps: the modified value ("Label's Text Changed") survives the postback.

So we conclude that controls which implement the IPostBackDataHandler interface retain their values even if ViewState has been disabled, because those values are carried in the posted form data rather than in ViewState.