Event: Node.js Over the Air!

Over the Air sessions at Talentica are technical workshops where a bunch of developers roll up their sleeves, tinker around with new platforms/technologies to learn together, gather new insights and get a healthy dollop of inspiration. Last week we had an “Over the Air” session on Node.js.

Node.js is a server-side JavaScript runtime that changes the notion of how a server should work. Its goal is to enable a programmer to build highly scalable applications that handle tens of thousands of simultaneous connections on a single server machine. Node.js is one of the most talked-about technologies today, so to find out how it really works, we picked it for this Over the Air session.

Once we gathered, it took a little while for some of the participants to get used to the event-driven programming style. Pretty soon, we were all working together on building a cool chat app. By the end of the day, we had a fully working version of a chat room app in which any user can enter the chat room by simply entering a nickname. Messages are then posted to all logged-in users, and a pane on the right shows everyone who is logged in.

This is a fairly decent basic version. Going forward, we plan to enhance the user interface so that people can play games using the chat app, integrate the UI with the chat engine, and enable users to challenge each other to a game while chatting.

First Impressions
Node.js is an excellent option for interactive apps. I will not hesitate to use Node.js in products that require interactive functionality like chat, auctions, or online multiplayer games. One can also use Node.js for just the part of a product it suits rather than building the complete product on it.

The fact that we can code the server side in JavaScript should make JavaScript developers jump with joy. Code reuse between client and server side might actually be possible!

On the negative side, I am not sure the event-driven programming model is a good one on the server side; it might lead to spaghetti code with callbacks all over the place. Another concern is that, though the community is very active and plug-ins are being developed at a rapid pace, it is still not a tried-and-tested technology at this moment!

Multi-server Applications on the Wireless Web

Here we will discuss how we can build Web applications that can serve wireless clients according to client capabilities.

What are the challenges?
Development of mobile applications is often highly dependent on the target platform. When developing any mobile content portal, we generally have to think about how that portal will be accessed through mobile browsers (Nokia, Openwave, and i-mode browsers, AvantGo on PDAs, etc.), which use markup languages like WML, HDML, cHTML, and XHTML. We want to ensure that each browser gets a compatible markup language and can present the portal content in the correct format. In short, hand-crafting a wireless application that works on as many devices as possible is a losing battle: if you invest a huge amount of resources today, chances are that a new device will ship tomorrow and you’ll need to tweak your application again.

What is the solution?
Wireless Universal Resource File (WURFL) is an open source project that uses XML to describe the capabilities of wireless devices. It is a database (some call it a “repository”) of wireless device capabilities. With WURFL, figuring out which phone works with which technology is a whole lot easier. We can use WURFL to look up device capabilities programmatically and serve different content to different devices dynamically, depending on the device accessing the content.

Here are some of the things WURFL can help you know about a device:

  • Screen size of the device
  • Supported image, audio, video, ringtone, wallpaper, and screensaver formats
  • Whether the device supports Unicode
  • Whether it is a wireless device, and which markup languages it supports
  • Which XHTML MP/WML/cHTML features it supports, whether it can handle tables, and whether it can render standard HTML
  • Whether it has a pointing device, and whether it supports CSS
  • Whether it has Flash Lite/J2ME support, and which features
  • Whether images can be used as links on the device, and whether it can display an image and text on the same line
  • If it is an i-mode phone, which region it is from: Japan, the US, or Europe
  • Whether the device auto-expands a select drop-down, and whether it has inline input for text fields
  • Which SMS/MMS features are supported

The WURFL framework also contains tools, utilities, and libraries to parse and query the data stored in WURFL. The WURFL API is available in many programming languages, including Java, PHP, .NET, Ruby, and Python. Various open source tools are built around WURFL, such as HAWHAW (PHP), WALL (Java), HAWHAW.NET (.NET framework), and HawTag (a JSP custom tag library), among others.

How does WURFL work?
When a mobile or non-mobile web browser visits your site, it sends a User-Agent header along with the request for your page. The user agent contains information about the type of device and browser being used. Unfortunately, this information is very limited and at times is not representative of the actual device. Using the WURFL API, the framework matches the user agent against the repository and extracts the capabilities associated with that device. Based on the device capabilities, the framework then generates the dynamic content – WML, HTML, XHTML, etc.
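To make this concrete, here is a minimal servlet sketch of such a lookup using the WURFL Java API. The class and method names (WURFLManager, getDeviceForRequest, getCapability) and the capability names are quoted from memory of the API and repository, and the render helpers are hypothetical, so treat it as an illustration rather than production code.

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import net.sourceforge.wurfl.core.Device;
import net.sourceforge.wurfl.core.WURFLManager;

public class ContentServlet extends HttpServlet {
    private WURFLManager wurfl; // assumed to be initialised elsewhere, e.g. by a context listener

    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws java.io.IOException {
        // Match the incoming User-Agent against the WURFL repository
        Device device = wurfl.getDeviceForRequest(request);

        // Query a couple of the capabilities listed above
        String markup = device.getCapability("preferred_markup");
        int width = Integer.parseInt(device.getCapability("resolution_width"));

        // Serve whichever markup the device prefers
        if (markup != null && markup.startsWith("wml")) {
            response.setContentType("text/vnd.wap.wml");
            response.getWriter().println(renderWml(width));   // hypothetical helper
        } else {
            response.setContentType("text/html");
            response.getWriter().println(renderHtml(width));  // hypothetical helper
        }
    }

    private String renderWml(int width) { return "<wml>...</wml>"; }
    private String renderHtml(int width) { return "<html>...</html>"; }
}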

Though there is some concern about the extra latency of the user-agent lookup, the advantages make it well worth using. One of the biggest advantages is that if and when a new device enters the market, we do not need to change our application; we just update the WURFL data to keep the application optimized. It is very simple and the architecture is sound. Go for it!

Machine Learning for Text Extraction

In a previous post we looked at the use of Natural Language Processing techniques in text extraction. Several steps are involved in the processing as each document passes through a pipeline of chained tasks.

A deep pipeline can take several seconds per document, so if one is dealing with thousands of documents an hour, the processing requirements could make the system nonviable. Care needs to be taken to evaluate the trade-off between the improvement in accuracy gained by adding pipeline tasks and the additional processing power they require.

One reason for the slow speed of our email processing is that we are parsing the entire text of every email, regardless of whether it is of importance to us. In our case only 2% of the emails received will be of interest, so we would like to reduce the amount of text we process by ignoring the unwanted stuff. This process of weeding out irrelevant text should itself not take too long, otherwise our purpose is lost!

Machine Learning (ML), which is a key area in AI, offers a solution. GATE comes with various machine learning Processing Resources implementing common ML algorithms like Support Vector Machine (SVM), Bayes classification and K-nearest neighbor (KNN). You “train” the algorithm using training sets of text samples.

Training is done by manually classifying sentences in a binary fashion: is this sentence of interest to me or not? Ideally you need thousands of representative sentences. The algorithm is then trained on this data: internally the various features and annotations are used to reverse engineer patterns based on the manual classification.

In production you first run your input text through the machine learning pipeline task. If it predicts that the text is of interest, you then run it through the rest of the pipeline; otherwise you ignore it. The problem is that this prediction is probabilistic. There can be two kinds of mistakes: one where it wrongly tells you that a dud document is of interest, causing wasted CPU cycles; a more troublesome mistake is when a valid document is marked as of no interest.
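A minimal sketch of that gating step, in Java, with the classifier and the downstream pipeline represented by hypothetical interfaces (this shows the flow only; it is not the actual GATE API):

// Hypothetical interfaces standing in for the trained ML filter and the full NLP pipeline.
interface InterestClassifier { boolean isOfInterest(String text); }
interface DeepPipeline { void run(String text); }

public final class EmailGate {
    private final InterestClassifier classifier; // trained on the manually labelled sentences
    private final DeepPipeline pipeline;         // tokeniser, splitter, POS tagger, gazetteer, JAPE ...

    public EmailGate(InterestClassifier classifier, DeepPipeline pipeline) {
        this.classifier = classifier;
        this.pipeline = pipeline;
    }

    public void process(String emailText) {
        if (classifier.isOfInterest(emailText)) {
            // Only the small fraction of interesting mails reaches the expensive deep pipeline
            pipeline.run(emailText);
        }
        // else: the mail is dropped; a false negative here means a missed event for the customer
    }
}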

In our case, for example, the second kind of mistake is an unacceptable error: we will miss reporting valid events to customers, and they will no longer be able to rely on our service to do so. Unfortunately, ML algorithms are such that these two types of errors cannot be reduced independently: if you want to catch all valid documents, you also get a lot of duds eating up your CPU cycles.

In addition, ML can give you strange results. Bad data in your training sets can have a significant impact on your results, and debugging such issues is very difficult because of the non-deterministic nature of learning algorithms. A lot of trial and error is involved – mostly tedious work manually annotating documents, running different training sets, and validating the results on real data.

However, as with the deterministic NLP process using JAPE, the result is magic. Once your training sets are clean and complete, the ML task can weed out a significant share of unwanted documents. Iteratively adding runtime learning to the system (where you enhance the training sets as you go along) can bring dramatic improvements over time.

After the first experience with email parsing we are now using NLP in another project. We have a product for recruiters where resume parsing is an important piece. It currently parses candidate information using regular expressions and string matches.

The accuracy is around 80% for basic information, which is a problem since 1 out of 5 fields is missed or wrong. Using a slightly different pipeline from the one described above and building some heuristics into a custom PR, we have been able to get to over 95% accuracy in the lab. In addition, we are now extracting several other types of information that were considered too difficult to extract using traditional programming.

Our experiences have made us look at other aspects of NLP like collaborative filtering and content-based recommendation engines as well as enhanced search using NL techniques. You might see a post on this soon!

Text Extraction using Natural Language Processing

A few months ago I was asked to look into an email processing problem. We needed to extract event-related information from consumer-originated email. As a traditional programmer, my first instinct was to think in terms of regular expressions and lookup tables! Experience quickly tempered that thought, and I decided to look at Natural Language Processing.

There were several standard methodologies in place for natural language processing tasks, and quite a few open source tools were available. The jargon was daunting: corpora, entities, gazetteers, POS tags, transducers, and JAPE were just a few terms that I had to wade through. The thought of the alternative – debugging code with zillions of unreadable regular expressions – kept me going!

I downloaded GATE and was able to quickly build a prototype parsing emails to get to our target data. GATE breaks down the task of processing text into small specialized chunks of work strung together in a “pipeline”. The tasks work by putting XML annotations in the text or enhancing/using the annotations put by a previous task. It is a simple and beautiful architecture living up to its acronym: General Architecture for Text Engineering.

Each task is called a Processing Resource (PR) in GATE. You can choose from a host of preinstalled resources, find and install PRs from the internet, or just go ahead and write your own. Let us look at a simple GATE pipeline for text processing.
The first PR in the pipeline is a tokenizer: it takes the email text and converts it into a series of tokens such as numbers, upper-case strings, spaces, and punctuation. The second PR is a sentence splitter, which splits the text into sentences based on the space and punctuation tokens.
We then have a Part-Of-Speech (POS) tagger: it understands sentence grammar and tags the words in each sentence as nouns, verbs, adjectives, pronouns, etc.
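For the embedded (Java) API, wiring up these first three PRs looks roughly like the sketch below. It assumes GATE and the ANNIE plugin are installed and registered, and the resource class names are quoted from memory, so verify them against your GATE version.

import gate.Factory;
import gate.Gate;
import gate.ProcessingResource;
import gate.creole.SerialAnalyserController;

public class BuildPipeline {
    public static void main(String[] args) throws Exception {
        Gate.init(); // assumes GATE_HOME is set and the ANNIE plugin has been registered

        // An empty pipeline (a "serial analyser controller" in GATE terms)
        SerialAnalyserController pipeline = (SerialAnalyserController)
                Factory.createResource("gate.creole.SerialAnalyserController");

        // 1. Tokeniser: raw text -> Token annotations (numbers, words, spaces, punctuation)
        pipeline.add((ProcessingResource) Factory.createResource("gate.creole.tokeniser.DefaultTokeniser"));
        // 2. Sentence splitter: uses the space/punctuation tokens to add Sentence annotations
        pipeline.add((ProcessingResource) Factory.createResource("gate.creole.splitter.SentenceSplitter"));
        // 3. POS tagger: adds part-of-speech categories (noun, verb, ...) to each Token
        pipeline.add((ProcessingResource) Factory.createResource("gate.creole.POSTagger"));
    }
}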

A gazetteer is another useful Processing Resource: it marks text that matches your lookup tables. Take a list of colleges, for example. If one of these colleges appears in the text, it gets annotated as a College.
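Continuing the sketch above, a gazetteer PR could be appended in much the same way; the DefaultGazetteer class and its listsURL parameter are quoted from memory, and the file names are made up for illustration:

// Inside the same main method as the previous sketch, after the POS tagger has been added.
// lists.def maps each list file to an annotation type; colleges.lst holds one college name per line.
gate.FeatureMap params = Factory.newFeatureMap();
params.put("listsURL", new java.io.File("gazetteer/lists.def").toURI().toURL());
pipeline.add((ProcessingResource)
        Factory.createResource("gate.creole.gazetteer.DefaultGazetteer", params));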

We are almost there! The last stage is the scary-sounding JAPE transducer. This is nothing but a way of defining regular expressions over the GATE annotations using a rule-based language. But didn’t we switch to NLP to avoid regular expressions?
JAPE is a very different beast compared to standard regular expressions:

– It works on the annotations added by the pipeline, which capture grammar and lookups, rather than on raw text strings.
– JAPE rules are applied in a declarative manner, whereas regular expressions are sequential and, on many occasions, the order in which they are applied affects the result.

JAPE is a bit difficult to understand; however, the accuracy, stability, and maintainability offered by the GATE pipeline are far better than those of traditional programming approaches.

There are several aspects of NLP that make it an art rather than a science. For each type of processing task there are several different PRs to choose from. For example, we found that people use a lot of abbreviations in email and regularly leave out full stops at the end of sentences; a standard sentence splitter fails in such cases. We turned to the RegEx sentence splitter, which let us enhance the logic by defining our own regular expressions to detect or ignore such cases.

In addition, the order of tasks in the pipeline can make a big difference to the accuracy. Moving the gazetteer up the chain and using its annotations in sentence splitting helps resolve problems where the splitter might break a sentence at an abbreviation like U.S.A. (the full stop after the final A and the space following it cause a sentence break in a usage like U.S.A Today).
The Java interface to GATE is simple. Once you are happy with the pipeline, you:

– Save it as a .gapp file from the GATE IDE.
– Load the .gapp file (in Java) and load the documents to process into a collection (the “corpus”).
– Execute the pipeline.

For each document you get an annotated XML file which you parse using a standard XML parser to look for the tags your application is interested in.
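Put together, the embedded version of those three steps might look like this minimal sketch (API calls such as PersistenceManager.loadObjectFromFile and Factory.newDocument are from the GATE embedded API as I remember it, and the file name and sample text are made up):

import java.io.File;
import gate.Corpus;
import gate.CorpusController;
import gate.Document;
import gate.Factory;
import gate.Gate;
import gate.util.persistence.PersistenceManager;

public class RunPipeline {
    public static void main(String[] args) throws Exception {
        Gate.init();

        // 1. Load the pipeline that was saved from the GATE IDE as a .gapp file
        CorpusController pipeline = (CorpusController)
                PersistenceManager.loadObjectFromFile(new File("email-pipeline.gapp"));

        // 2. Load the documents to process into a corpus
        Corpus corpus = Factory.newCorpus("emails");
        Document doc = Factory.newDocument("Meet me at the alumni event on Friday at 6 pm.");
        corpus.add(doc);
        pipeline.setCorpus(corpus);

        // 3. Execute the pipeline, then pick out the annotations you are interested in
        pipeline.execute();
        System.out.println(doc.getAnnotations()); // e.g. Token, Sentence, Lookup, custom tags
        System.out.println(doc.toXml());          // annotated XML for downstream parsing
    }
}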
A major complexity that I have avoided discussing until now is performance. Look forward to the next post to know more!!

Common Myth Regarding ViewState in ASP.NET

Through this article I want to dispel a very common misconception about ViewState. Most ASP.NET developers think that ViewState is responsible for holding the values of controls such as TextBoxes so that they are retained even after a postback.

But that is not the case!

Let’s take an example to understand the above:

Place a TextBox web server control (tbControl) and a Label web server control (lblControl) on a page.

Set the “Text” property of the label and the textbox to “Initial Label Text” and “Initial TextBox Text” respectively, and set the “EnableViewState” property of both controls to false.

Place two button controls and set their text to “Change Label Text” and “Post to Server”. The first button changes the label’s text in its Click event handler, and the second button simply causes a postback.

private void btnChangeLabel_Click(object sender, System.EventArgs e)
{
    // Smart quotes replaced with straight quotes so this compiles
    lblControl.Text = "Label's Text Changed";
}

On running this application, you can see the initial text you set in both controls.

Now, change the text in the TextBox to “Changed TextBox Text”.

Now click the Post to Server button. The textbox retains its value, in spite of EnableViewState being set to false.

The reason for this behavior is that ViewState is not responsible for storing the modified values of controls such as TextBoxes, DropDownLists, CheckBoxLists, etc. – that is, controls which implement the IPostBackDataHandler interface.

After Page_Init(), there is a stage known as LoadViewState, in which the Page class loads values from the hidden __VIEWSTATE field for those controls (e.g., the Label) whose ViewState is enabled.

Then the LoadPostBackData stage runs, in which the Page class loads the values of those controls which implement the IPostBackDataHandler interface (e.g., the TextBox) from the posted form data.

Now click the “Change Label Text” button, which changes the label text programmatically (via the event handler shown above), and then click “Post to Server”. The page reloads and the programmatic change is lost, i.e., the label text reverts to its initial value – “Initial Label Text”.

This is because the Label control does not implement the IPostBackDataHandler interface, so ViewState is what is responsible for persisting its value across postbacks.

And since ViewState has been disabled, the Label loses the programmatically set value on the next postback.

Now enable ViewState for the Label control, and you can see the modified value (“Label’s Text Changed”) retained after the postback.

So we conclude that controls which implement the IPostBackDataHandler interface retain their values even if ViewState has been disabled, because those values are restored from the posted form data rather than from ViewState.