ClearDS for Android

Clear DataServices (ClearDS) is a free productivity tool that complements Adobe AIR Native Extensions (ANE) for Android devices. It allows you to embed BlazeDS into an AIR-based Android application.

Why ClearDS?

ClearDS has been designed for two purposes. Similar to ANE, it allows you to code part of your application in Java with full access to the Android SDK. In addition, it can turn an Android device into a server, broadening the possibilities for application architectures.

Prior to ANE, simple features like Android Toast and Status Bar notifications, as well as advanced ones like Text-to-Speech or Speech Recognition, were out of reach for ActionScript coders. ClearDS allows you to embed the Android code into a Java class hosted by a servlet container with BlazeDS and remote to that class with the standard and familiar <s:RemoteObject>. There is no dependency on the AIR version, and you can natively debug the Java portion of your application using the Android Eclipse plugin.
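
Purely as an illustration (the class and method names below are made up and are not part of ClearDS), here is the kind of plain Java class you could host inside the APK as a BlazeDS remoting destination and call from ActionScript through <s:RemoteObject>:

import android.os.Build;

// Hypothetical remoting destination hosted inside the APK; DeviceInfoService and its
// method are illustrative names only, not part of the ClearDS distribution.
public class DeviceInfoService {

    // Invoked from ActionScript via <s:RemoteObject destination="..."/>
    public String describeDevice() {
        return Build.MANUFACTURER + " " + Build.MODEL + ", Android API level " + Build.VERSION.SDK_INT;
    }
}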

By the same token, embedding BlazeDS into the APK turns your application into a server. Now any computer on the same LAN or VPN can remote to your mobile device. Imagine a scenario where your mobile phone is used as a signature pad where you sign to complete an insurance application or a financial transaction. Here are some other use cases for your mobile phone:

– bar code reader
– better camera for your notebook
– voice data-entry gadget that renders the alphanumeric keyboard useless, eliminating costly clerical mistakes

How Does It Work?

We fine-tuned BlazeDS to be compatible with the set of core Java classes used by Android. To enable HTTP traffic between the SWF and Java portions of your APK, we employed the Winstone Servlet Container. Since the APK generated by Flash Builder is a single-activity application, we extended this activity class, which allowed us to start the ClearDSService when the activity gets created. The service, in turn, starts the Winstone Servlet Container. This allows your SWF-based ActionScript code to remote to your Java code within the boundaries of the same APK.
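
To make the startup sequence more concrete, here is a minimal sketch of the idea. Apart from ClearDSService, which is mentioned above, the names are illustrative; the actual ClearDS code extends the single activity generated by Flash Builder:

import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;

// Sketch only: the real ClearDS class extends the activity generated by Flash Builder.
public class ClearDSActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Start the service that boots the embedded Winstone servlet container with BlazeDS,
        // so the SWF loaded by this activity can remote to Java code within the same APK.
        startService(new Intent(this, ClearDSService.class));
    }
}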

To make developers' life easy, we created the ClearDS Eclipse plugin. When you create a new ClearDS project, you effectively create two projects: a Flash Mobile project and an Android Java project. These projects are related: an additional builder script in the Flash project contributes the SWF as an asset to the Android project. Then you deploy your Android project the same way as users of the Android Eclipse plugin do. You develop the MXML/ActionScript code in Eclipse's Flash project and, accordingly, the Java code in the Android project.

Is ClearDS the Right Approach for You?

Adobe’s AIR Native Extension is a door opener to the world of native APIs of mobile devices. If you find an ANE with the functionality you need – just use it. Unfortunately, there is no way to debug a mobile ANE. On the other hand, if you choose the ClearDS route, you will be able to debug the Java code using the standard Android ADT Eclipse plugin.

So, as long as you feel comfortable writing Flex and Java code, download the ClearDS plugin and use it right away.

You can also watch this 10-minute screencast of the ClearDS plugin in action here.

We have also started adding ClearDS-related content to the Clear Toolkit wiki page located at this URL. You may also subscribe to our new video channel on YouTube.

Victor

Taming BlazeDS for Android with LongCalling

This post is a sequel to our announcement that we taught Adobe AIR to talk to the native Android API by placing BlazeDS inside Android. Initially, we embedded BlazeDS into an AIR-Android APK (watch this video) to use Google voice recognition for data entry. The plan was to be able to invoke the following Java code from BlazeDS:
Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
intent.putExtra(RecognizerIntent.EXTRA_PROMPT, prompt); // We wanted to use Text-to-Speech as well
startActivityForResult(intent, 12345);

The first challenge was that features like voice recognition, Toast (Android popups), and other dialog functions are supposed to run on the UI thread, as opposed to a worker thread of BlazeDS. In particular, startActivityForResult() is a method of the Activity class. This was an easy problem to solve, because we could place this code in our own activity and start that activity via an Intent.
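
As a simple illustration of the threading issue (this is not ClearDS code; the class name and wiring are assumptions for the example), a destination class invoked by BlazeDS on a worker thread has to hop to the UI thread before touching UI APIs such as Toast:

import android.app.Activity;
import android.widget.Toast;

// Minimal sketch: a remoting destination that posts UI work to the UI thread.
// The Activity reference is assumed to be the application's single activity.
public class ToastService {
    private final Activity activity;

    public ToastService(Activity activity) {
        this.activity = activity;
    }

    // Called by BlazeDS on a worker thread via <s:RemoteObject>
    public void showToast(final String text) {
        activity.runOnUiThread(new Runnable() {
            public void run() {
                Toast.makeText(activity, text, Toast.LENGTH_SHORT).show();
            }
        });
    }
}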

The real challenge was that the voice recognition software does not return anything until the computer-human interaction is complete. In other words, the Java piece being remoted is asynchronous – you start the Recognizer activity by invoking startActivityForResult() and, some time later, get notified via an async callback when the results are ready:

public void startVoiceRecognition(int requestCode, String prompt) {
    Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
    intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
    intent.putExtra(RecognizerIntent.EXTRA_PROMPT, prompt); // Customized prompt
    startActivityForResult(intent, requestCode); // e.g. REQUEST_CODE_RECOGNIZE_SPEECH
}

The low-hanging-fruit solution to this asynchronicity was to push the results from BlazeDS to a messaging destination listened to by the AIR application:

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    switch (requestCode) {
        case REQUEST_CODE_RECOGNIZE_SPEECH:
            if (resultCode == RESULT_OK) {
                ArrayList<String> recognizedPhrases = data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
                // Now we are ready to pass the results back to the AIR code.
                MessageBroker msgBroker = MessageBroker.getMessageBroker(null);
                AsyncMessage msg = new AsyncMessage();
                msg.setDestination("voiceRecognitionResult");
                msg.setClientId(UUIDUtils.createUUID(true));
                msg.setMessageId(UUIDUtils.createUUID(true));
                msg.setTimestamp(System.currentTimeMillis());
                msg.setBody(recognizedPhrases);
                msgBroker.routeMessageToService(msg, null);
            }
            break;
    }
    super.onActivityResult(requestCode, resultCode, data);
}

This worked, but it made the AIR code ugly: we had a RemoteObject with a non-functional result handler, plus a messaging destination and a Consumer, which clearly would have to be replicated for each call of that sort.

LongCalling to the Rescue

And then we realized that this was déjà vu. We had been there before, when a stored procedure in one financial application would not return for three minutes. We helped that customer without breaking the remoting model – we used “long calling” instead.

Long calling is Farata’s term for a customization of the Java adapter and RemoteObject that allows us to quickly receive a dummy return from the original long remote call and reincarnate the mx.rpc.ResultEvent when the real data is ready. By using a customized RemoteObject, the application code becomes agnostic to the fact that the remoting operation has “two legs”. The only requirement we add is that the name of such a remoting method has to end with “AndWait”, as in recognizeVoiceAndWait(). This signals to the ActionScript side that this invocation is a “long calling”, and this knowledge is carried forward to the custom Java adapter.
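
The actual Clear Toolkit adapter is not listed in this post; purely to illustrate where the naming convention comes into play on the server side, a custom BlazeDS adapter could inspect the operation name like this (the class name and the delegating body are assumptions, not Farata’s implementation):

import flex.messaging.messages.Message;
import flex.messaging.messages.RemotingMessage;
import flex.messaging.services.remoting.adapters.JavaAdapter;

// Illustrative sketch only: shows where an "AndWait" call could be detected server-side.
public class LongCallingAdapter extends JavaAdapter {

    @Override
    public Object invoke(Message message) {
        RemotingMessage call = (RemotingMessage) message;
        if (call.getOperation().endsWith("AndWait")) {
            // A long call: the real adapter returns a dummy acknowledgement for the first "leg"
            // here and delivers the actual result to the client when the long call completes.
            // In this sketch we simply fall through and behave like a plain JavaAdapter.
        }
        return super.invoke(message);
    }
}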

On the Java side, we implemented a PlatformServices object with several methods, most importantly runActivityAndWait() and complete(). Here is how we start our own SpeechRecognitionActivity. Notice that the first parameter of runActivityAndWait() is the intent that starts the activity, while the second one is the class of the expected result:

public List recognizeVoiceAndWait(final String prompt) {
    final Intent intent = new Intent(PlatformServices.context(), SpeechRecognitionActivity.class);
    intent.putExtra(RecognizerIntent.EXTRA_PROMPT, prompt);
    return PlatformServices.runActivityAndWait(intent, STRING_LIST);
}
private static final Class<? extends List> STRING_LIST = null; // class token describing the expected result type

The SpeechRecognitionActivity will start the Recognizer activity precisely like we did in the earlier code snippets. Meanwhile, the worker thread that started it will be blocked. It will remain in the blocked state until the onActivityResult() callback inside the SpeechRecognitionActivity issues the complete() call, passing exactly the same intent as was used during runActivityAndWait() – this intent is preserved in the onStart() of the SpeechRecognitionActivity:

@Override
public void onActivityResult(final int requestCode, final int resultCode, final Intent request) {
    if (requestCode == REQUEST_CODE_RECOGNIZE_SPEECH) {
        final List result;
        if (resultCode == RESULT_OK) {
            result = request.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
        } else {
            result = null;
        }
        complete(originalIntent, result); // We preserved originalIntent during onStart() of the activity
        finish();
    }
    super.onActivityResult(requestCode, resultCode, request);
}

That’s it. The only limitation is that, due to the blocking mechanism of runActivityAndWait(), only one such call can be executed at a time. Accordingly, the AIR application should avoid sending several “AndWait” requests in one AMF batch.
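
The ClearDS source of PlatformServices is not listed here, but a blocking bridge of this kind can be sketched in a few lines. All names below are illustrative assumptions (the real complete() receives the original Intent, while this sketch keeps a single pending slot, which also shows why only one “AndWait” call can run at a time):

import android.content.Context;
import android.content.Intent;
import java.util.concurrent.CountDownLatch;

// Sketch only, not the Farata implementation: a worker thread starts an activity and blocks
// until the UI side reports a result through complete().
public final class ActivityBridge {

    private static volatile CountDownLatch pending;
    private static volatile Object result;

    // Called on the BlazeDS worker thread.
    public static synchronized Object runActivityAndWait(Context context, Intent intent)
            throws InterruptedException {
        pending = new CountDownLatch(1);
        intent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK); // needed when starting from a non-Activity context
        context.startActivity(intent);
        pending.await();   // park the worker thread until complete() is called
        return result;
    }

    // Called on the UI thread, e.g. from onActivityResult() of the started activity.
    public static void complete(Object activityResult) {
        result = activityResult;
        CountDownLatch latch = pending;
        if (latch != null) {
            latch.countDown(); // wake up the blocked worker thread
        }
    }
}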

LongCalling in Action

And, of course, the AIR code won’t care about the runActivityAndWait() and complete() pair at all. For all it knows, these are regular remoting calls, albeit with names ending in “AndWait”:

<?xml version="1.0" encoding="utf-8"?>
<s:View xmlns:fx="http://ns.adobe.com/mxml/2009" 
    xmlns:s="library://ns.adobe.com/flex/spark" 
    xmlns:c="library://ns.cleartoolkit.com/flex/clear"
    title="Voice Recognition"
>
  <fx:Declarations>
    <c:RemoteObject id="service" destination="AndroidJavaDestination"/>
  </fx:Declarations>
  <fx:Script><![CDATA[
 
  import mx.collections.IList;
  import mx.rpc.AsyncToken;
  import mx.rpc.Responder;
  import mx.rpc.events.FaultEvent;
  import mx.rpc.events.ResultEvent;
 
  [Bindable] private var recognizedPhrases:IList;
 
  protected function onTextInputFocusIn(event:FocusEvent):void {
    var target:TextInput = event.currentTarget as TextInput;
    promptAndListen(target.toolTip || target.id, target);
  }
 
  private function promptAndListen(prompt:String, target:Object):AsyncToken {
     recognizedPhrases = null;
     var token:AsyncToken = service.recognizeVoiceAndWait(prompt);
     token.addResponder(
       new mx.rpc.Responder(
         onRecognizeVoiceResult, onRecognizeVoiceFault
       )
     );
     token.target = target;
     return token;
  }
 
  private function onRecognizeVoiceResult(event:ResultEvent):void {
     recognizedPhrases = event.result as IList;
     var textInput:TextInput = event.token.target as TextInput;
     var bestMatch:String;

     // ... find the best match from recognizedPhrases

     textInput.text = bestMatch;
  }        
  ]]></fx:Script>
  .  .  .
  <s:Form width="100%">
    <s:FormItem label="Name:">
      <s:TextInput focusIn="onTextInputFocusIn(event)" 
        toolTip="Employee name"/>
    </s:FormItem>
    <s:FormItem label="Phone:">
      <s:TextInput focusIn="onTextInputFocusIn(event)" 
        toolTip="Phone number"/>
    </s:FormItem>
          .  .  .
  </s:Form>
</s:View>

If you are interested in seeing this solution in action, I’ll be showing it in August during our fourth annual symposium on enterprise software.

Victor

We taught Adobe AIR to talk to the native Android API

Adobe AIR is the most productive tool for developing the UI for Android. But as of today, AIR can’t access the native Android API. By the end of this year Adobe plans to offer some integration/bridge to native Android applications, but it’s not clear how it’s going to be implemented.

Traditionally, Farata Systems tries to get into emerging and promising technologies as soon as possible, and the first results are already in. We taught AIR to talk to the native Android API. I mean it. You’ll see a demo where a user talks to an AIR application, which communicates with the native Android voice API, which recognizes his commands and fills out the AIR UI form.

Without going into much detail, we are using an approach different from Adobe’s – we put their BlazeDS server right inside the Android device. This opens endless opportunities, and we are trying to find the best use for this solution, which goes under the working name “Server in your pocket”.

My colleague Victor works full time on integrating AIR and Android. He has recorded a short video that features him talking to an AIR application on the Xoom tablet, which communicates with the native Android voice recognition API and fills out the AIR form. Everything happens inside the Xoom tablet. This addition to our Clear Toolkit has the working name Clear APK. See it for yourself.

We’ll present this demo live in August during our fourth annual symposium on enterprise software.

Yakov Fain

Enterprise applications on tablets. State of the Union.

I’ve recorded a chat with my colleague Anatole Tartakovsky, who leads mobile development at Farata Systems. We were discussing approaches for migrating existing legacy enterprise RIAs to these shiny iPads, Xooms, and the like. You can listen to this podcast on any MP3 player, or simply press the POD icon at my NO BS IT podcast terminal to start playing it on the device you’re using now.

If you are attending the 360|Flex conference in Denver, check out Anatole’s presentation on this subject.