Taming BlazeDS for Android with LongCalling

This post is a sequel to our announcement that we taught Adobe AIR to talk to native Android APIs by placing BlazeDS inside Android. Initially, we embedded BlazeDS into an AIR-Android APK (watch this video) to use Google voice recognition for data entry. The plan was to be able to invoke the following Java code from BlazeDS:
Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
intent.putExtra(RecognizerIntent.EXTRA_PROMPT, prompt); // We wanted to use Text-to-Speech as well
startActivityForResult(intent, 12345);

The first challenge was that features like voice recognition, Toast (Android popups), and other dialog functions are supposed to run on the UI thread, while BlazeDS serves requests on a worker thread. In particular, startActivityForResult() is a method of the Activity class. This was an easy problem to solve: we could place this code in our own activity and start that activity via an Intent.

The real challenge was that the voice recognition software does not return anything until the computer-human interaction is complete. In other words, the Java piece to remote is asynchronous: you start the Recognizer activity by invoking startActivityForResult() and, sometime later, get notified via an async callback when the results are ready:

public void startVoiceRecognition(int requestCode, String prompt) {
    Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
    intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
    intent.putExtra(RecognizerIntent.EXTRA_PROMPT, prompt); // Customized prompt
    startActivityForResult(intent, REQUEST_CODE_RECOGNIZE_SPEECH);
}

The low-hanging solution to this asynchronicity was to push the results from BlazeDS to a messaging destination that the AIR application would listen to:

protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    switch (requestCode) {
        case REQUEST_CODE_RECOGNIZE_SPEECH:
            if (resultCode == RESULT_OK) {
                ArrayList<String> recognizedPhrases =
                    data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
                // Now we are ready to pass the results back to the AIR code
                MessageBroker msgBroker = MessageBroker.getMessageBroker(null);
                AsyncMessage msg = new AsyncMessage();
                msg.setDestination("voiceResults"); // destination name is an example
                msg.setMessageId(UUIDUtils.createUUID());
                msg.setBody(recognizedPhrases);
                msgBroker.routeMessageToService(msg, null);
            }
            break;
    }
    super.onActivityResult(requestCode, resultCode, data);
}

This worked, but it made the AIR code ugly: we had a RemoteObject with a non-functional result handler plus a messaging destination and a Consumer, which clearly would have to be replicated for every call of that sort.

LongCalling to the Rescue

And then we realized that this was déjà vu. We had been there when a stored procedure in one financial application would not return for three minutes. We helped that customer without breaking the remoting model. We used "long calling" instead.

Long calling is Farata's term for the customization of a Java adapter and RemoteObject that allows us to quickly receive a dummy return from the original long remote call and reincarnate the mx.rpc.ResultEvent when the real data is ready. By using a customized RemoteObject, the application code becomes agnostic to the fact that the remoting operation has "two legs". The only requirement we add is that the name of such a remoting method has to end with "AndWait", as in recognizeVoiceAndWait(). This signals to the ActionScript side that the invocation is a "long calling", and this knowledge is carried forward to the custom Java adapter.
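As a tiny illustration of this naming convention (the class and method names below are ours, not part of the actual adapter), the check on either side can be as simple as a suffix test:

```java
// Hypothetical helper illustrating the "AndWait" naming convention;
// the real custom adapter and RemoteObject perform an equivalent check.
public class LongCallingConvention {
    static boolean isLongCalling(String operationName) {
        return operationName != null && operationName.endsWith("AndWait");
    }
}
```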

On the Java side, we implemented a PlatformServices object with several methods, most importantly runActivityAndWait() and complete(). Here is how we start our own SpeechRecognitionActivity. Notice that the first parameter of runActivityAndWait() is an intent to start an activity, while the second one is the class of the expected result:

public List<String> recognizeVoiceAndWait(final String prompt) {
    final Intent intent = new Intent(PlatformServices.context(), SpeechRecognitionActivity.class);
    intent.putExtra(RecognizerIntent.EXTRA_PROMPT, prompt);
    return PlatformServices.runActivityAndWait(intent, STRING_LIST);
}

private static final Class<ArrayList> STRING_LIST = ArrayList.class; // class of the expected result

The SpeechRecognitionActivity will start the Recognizer activity exactly as in the earlier code snippets. Meanwhile, the worker thread that started it will be blocked. It will remain blocked until the onActivityResult() callback inside SpeechRecognitionActivity issues complete(), passing exactly the same intent that was used during runActivityAndWait() – this intent is preserved in the onStart() of SpeechRecognitionActivity:
public void onActivityResult(final int requestCode, final int resultCode, final Intent request) {
    if (requestCode == REQUEST_CODE_RECOGNIZE_SPEECH) {
        final List<String> result;
        if (resultCode == RESULT_OK) {
            result = request.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
        } else {
            result = null;
        }
        complete(originalIntent, result); // We preserved originalIntent during onStart() of the activity
    }
    super.onActivityResult(requestCode, resultCode, request);
}

That’s it. The only limitation is that, due to the blocking mechanism of runActivityAndWait(), only one such call can be executed at a time. Accordingly, the AIR application should avoid sending several “AndWait” requests in one AMF batch.
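The blocking mechanism itself can be sketched in plain Java. The class and method names below are ours, not Farata's actual implementation: the real PlatformServices also fires the Android Intent and keys the rendezvous by the preserved intent, while this sketch models only the handshake between the BlazeDS worker thread and the UI-thread callback:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.CountDownLatch;

// Hypothetical sketch of the blocking handshake behind
// runActivityAndWait()/complete().
public class BlockingHandshake {

    private static final class Pending {
        final CountDownLatch done = new CountDownLatch(1);
        volatile Object result; // may legitimately stay null (e.g. cancelled dialog)
    }

    // In-flight requests, keyed by the request object (the Intent in the real code)
    private static final Map<Object, Pending> inFlight = new HashMap<>();

    // Worker thread: register the request, then block until complete() delivers a result
    public static Object runAndWait(Object request) throws InterruptedException {
        Pending pending = new Pending();
        synchronized (inFlight) {
            inFlight.put(request, pending);
        }
        pending.done.await(); // the worker thread parks here
        return pending.result;
    }

    // UI-thread callback: deliver the result for the very same request object
    public static void complete(Object request, Object result) {
        Pending pending;
        synchronized (inFlight) {
            pending = inFlight.remove(request);
        }
        if (pending != null) {
            pending.result = result;
            pending.done.countDown(); // unblock the waiting worker thread
        }
    }
}
```

Note that in the real flow the activity is launched only after the request has been registered, so complete() can never arrive before the worker thread is waiting; this is also why only one "AndWait" call per request object can be in flight at a time.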

LongCalling in Action

And, of course, the AIR code doesn’t care about the runActivityAndWait()/complete() pair at all. As far as it knows, these are regular remoting calls, albeit with names ending in “AndWait”:

<?xml version="1.0" encoding="utf-8"?>
<s:View xmlns:fx="http://ns.adobe.com/mxml/2009"
    xmlns:s="library://ns.adobe.com/flex/spark"
    title="Voice Recognition">

  <c:RemoteObject id="service" destination="AndroidJavaDestination"/>

  <fx:Script>
    <![CDATA[
      import mx.collections.IList;
      import mx.rpc.AsyncToken;
      import mx.rpc.Responder;
      import mx.rpc.events.FaultEvent;
      import mx.rpc.events.ResultEvent;
      import spark.components.TextInput;

      [Bindable] private var recognizedPhrases:IList;

      protected function onTextInputFocusIn(event:FocusEvent):void {
        var target:TextInput = event.currentTarget as TextInput;
        promptAndListen(target.toolTip || target.id, target);
      }

      private function promptAndListen(prompt:String, target:Object):AsyncToken {
        recognizedPhrases = null;
        var token:AsyncToken = service.recognizeVoiceAndWait(prompt);
        token.addResponder(
          new mx.rpc.Responder(onRecognizeVoiceResult, onRecognizeVoiceFault)
        );
        token.target = target;
        return token;
      }

      private function onRecognizeVoiceResult(event:ResultEvent):void {
        recognizedPhrases = event.result as IList;
        var textInput:TextInput = event.token.target as TextInput;
        var bestMatch:String;
        // .... Find the best match from recognizedPhrases
        textInput.text = bestMatch;
      }
      .  .  .
    ]]>
  </fx:Script>

  <s:Form width="100%">
    <s:FormItem label="Name:">
      <s:TextInput focusIn="onTextInputFocusIn(event)"
        toolTip="Employee name"/>
    </s:FormItem>
    <s:FormItem label="Phone:">
      <s:TextInput focusIn="onTextInputFocusIn(event)"
        toolTip="Phone number"/>
    </s:FormItem>
    .  .  .
  </s:Form>
</s:View>

If you are interested in seeing this solution in action, I’ll be demonstrating it in August during our fourth annual symposium on enterprise software.