
Chapter 4. Exploring the APIs

Now that you know how to create a new application with various layout options and how to request application permissions, it is time to explore the ways in which your application can interact with the Android operating system. The AIR 2.6 release includes access to many Android features. These include the accelerometer, the GPS unit, the camera, the camera roll, the microphone, the file system, and the multitouch screen.

Note

Up until this point, I have compiled the sample applications to the ADL simulator. However, to demonstrate the API integrations, it is necessary to run the applications on an Android device. The screenshots in this section are from an HTC Nexus One phone. Instructions on how to run an application on an Android device are included in Chapter 1.

Accelerometer

The accelerometer is a sensor that measures the acceleration, or g-forces, applied to the device as it moves across multiple planes. The faster the device is moved through space, the higher the readings will be across the x, y, and z axes.

Let’s review the code below. First, you will notice that there is a private variable named accelerometer declared, of type flash.sensors.Accelerometer. Within applicationComplete of the application, an event handler function is called, which first checks to see if the device has an accelerometer by reading the static Accelerometer.isSupported property. If this property returns true, a new instance of Accelerometer is created and an event listener of type AccelerometerEvent.UPDATE is added to handle updates. Upon update, the accelerometer information is read from the event and written to a TextArea within the handleUpdate function. The results can be seen within Figure 4-1:

<?xml version="1.0" encoding="utf-8"?>
<s:Application xmlns:fx="http://ns.adobe.com/mxml/2009"
               xmlns:s="library://ns.adobe.com/flex/spark"
               applicationComplete="application1_applicationCompleteHandler(event)">
    <fx:Script>
          <![CDATA[
               import flash.events.AccelerometerEvent;
               import flash.sensors.Accelerometer;

               import mx.events.FlexEvent;

               private var accelerometer:Accelerometer;

               protected function application1_applicationCompleteHandler
                   (event:FlexEvent):void {
                     if (Accelerometer.isSupported) {
                          accelerometer = new Accelerometer();
                          accelerometer.addEventListener(AccelerometerEvent.UPDATE,
                              handleUpdate);
                     } else {
                          status.text = "Accelerometer not supported";
                     }

               }

               private function handleUpdate(event:AccelerometerEvent):void {
                     info.text = "Updated: " + new Date().toTimeString() + "\n\n"
                     + "acceleration X: "+ event.accelerationX + "\n"
                     + "acceleration Y: " + event.accelerationY + "\n"
                     + "acceleration Z: " + event.accelerationZ;
               }

          ]]>
    </fx:Script>

    <fx:Declarations>
          <!-- Place non-visual elements (e.g., services, value objects) here -->
    </fx:Declarations>
    <s:Label id="status" text="Shake your phone a bit" top="10" width="100%" 
        textAlign="center"/>
    <s:TextArea id="info" width="100%" height="200" top="40" editable="false"/>
</s:Application>
Figure 4-1. Accelerometer information

GPS

GPS stands for Global Positioning System, a space-based satellite navigation system that provides reliable location information to your handheld device.

If your application requires the use of the device’s GPS, you will need to select the ACCESS_FINE_LOCATION permission when you are creating your project. See Chapter 3 for help with permissions.
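Checking a permission in the project creation wizard simply writes an entry into the Android section of your application descriptor XML. If you prefer to edit the descriptor by hand, the fragment looks roughly like this (a sketch showing only the relevant element):

<android>
    <manifestAdditions><![CDATA[
        <manifest>
            <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION"/>
        </manifest>
    ]]></manifestAdditions>
</android>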

Let’s review the code below. First, you will notice that there is a private variable named geoLocation declared, of type flash.sensors.Geolocation. Within applicationComplete of the application, an event handler function is called, which first checks to see if the device has an available GPS unit by reading the static Geolocation.isSupported property. If this property returns true, a new instance of Geolocation is created; the data refresh interval is set to 500 milliseconds (.5 seconds) within the setRequestedUpdateInterval method; and an event listener of type GeolocationEvent.UPDATE is added to handle updates. Upon update, the GPS information is read from the event and written to a TextArea within the handleLocationRequest function.

Note

There is also some math being done to convert the speed property, which is reported in meters per second, into miles per hour and kilometers per hour.

The results can be seen within Figure 4-2:

<?xml version="1.0" encoding="utf-8"?>
<s:Application xmlns:fx="http://ns.adobe.com/mxml/2009"
               xmlns:s="library://ns.adobe.com/flex/spark"
               applicationComplete="application1_applicationCompleteHandler(event)">
    <fx:Script>
          <![CDATA[
               import flash.events.GeolocationEvent;
               import flash.sensors.Geolocation;

               import mx.events.FlexEvent;

               private var geoLocation:Geolocation;

               protected function application1_applicationCompleteHandler
                   (event:FlexEvent):void {
                     if (Geolocation.isSupported) {
                          geoLocation = new Geolocation();
                          geoLocation.setRequestedUpdateInterval(500);
                          geoLocation.addEventListener(GeolocationEvent.UPDATE, 
                              handleLocationRequest);
                     } else {
                          status.text = "Geolocation feature not supported";
                     }
               }

               private function handleLocationRequest(event:GeolocationEvent):void {
                     var mph:Number = event.speed*2.23693629;
                     var kph:Number = event.speed*3.6;
                     info.text = "Updated: " + new Date().toTimeString() + "\n\n"
                          + "latitude: " + event.latitude.toString() + "\n"
                          + "longitude: " + event.longitude.toString() + "\n"
                          + "altitude: " + event.altitude.toString()  + "\n"
                          + "speed: " + event.speed.toString()  + "\n"
                          + "speed: " + mph.toString()  + " MPH \n"
                          + "speed: " + kph.toString()  + " KPH \n"
                          + "heading: " + event.heading.toString()  + "\n"
                          + "horizontal accuracy: "
                          + event.horizontalAccuracy.toString()  + "\n"
                          + "vertical accuracy: "
                          + event.verticalAccuracy.toString();
               }

          ]]>
    </fx:Script>
    <fx:Declarations>
          <!-- Place non-visual elements (e.g., services, value objects) here -->
    </fx:Declarations>

    <s:Label id="status" text="Geolocation Info" top="10" width="100%" 
        textAlign="center"/>
    <s:TextArea id="info" width="100%" top="40" editable="false"/>
</s:Application>
Figure 4-2. GPS information

Camera UI

A camera is almost ubiquitous on handheld Android devices. In fact, many new Android devices now include both front- and rear-facing cameras.

If your application requires the use of the device’s camera, you will need to select the CAMERA permission when you are creating your project (see Chapter 3 for help with permissions). The Camera UI tools will allow your application to use the native Camera interface within the Android device.

Let’s review the code below. First, you will notice there is a private variable named camera declared, of type flash.media.CameraUI. Within applicationComplete of the application, an event handler function is called, which first checks to see if the device has an available camera by reading the static CameraUI.isSupported property. If this property returns true, a new instance of CameraUI is created and event listeners of type MediaEvent.COMPLETE and ErrorEvent.ERROR are added to handle a successfully captured image as well as any errors that may occur.

A Button with an event listener on the click event is used to allow the application user to launch the CameraUI. When the user clicks the TAKE A PICTURE button, the captureImage method is called, which then opens the camera by calling the launch method and passing in the MediaType.IMAGE static property. At this point, the user is redirected from your application to the native camera. Once the user takes a picture and clicks OK, he is directed back to your application, the MediaEvent.COMPLETE event is triggered, and the onComplete method is called. Within the onComplete method, the event.data property is cast to a flash.media.MediaPromise object. The mediaPromise.file.url property is then used to populate Label and Image components that display the path to the image and the actual image to the user.

Note

Utilizing CameraUI within your application is different from the raw camera access provided by Adobe AIR on the desktop. Raw camera access is also available within AIR on Android and works in the same way as the desktop version.
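For reference, here is a minimal sketch of raw camera access, which attaches the live camera feed to a Video object rather than launching the native camera application (the sizes and variable names here are my own):

import flash.media.Camera;
import flash.media.Video;

import mx.core.UIComponent;

// Sketch: raw camera access renders live frames into a Video object
// instead of launching the native camera application.
var rawCamera:Camera = Camera.getCamera();
if (rawCamera) {
    rawCamera.setMode(640, 480, 30); // requested width, height, fps
    var video:Video = new Video(640, 480);
    video.attachCamera(rawCamera);
    // A Spark Application cannot addChild a Video directly,
    // so wrap it in a UIComponent first.
    var holder:UIComponent = new UIComponent();
    holder.addChild(video);
    addElement(holder); // assumes this runs inside a Spark Application script block
}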

Figure 4-3 shows the application, Figure 4-4 shows the native camera user interface, and Figure 4-5 shows the application after a picture was taken and the user clicked OK to return to the application:

<?xml version="1.0" encoding="utf-8"?>
<s:Application xmlns:fx="http://ns.adobe.com/mxml/2009"
               xmlns:s="library://ns.adobe.com/flex/spark"
               applicationComplete="application1_applicationCompleteHandler(event)">
    <fx:Script>
          <![CDATA[
               import flash.events.ErrorEvent;
               import flash.events.MediaEvent;
               import flash.media.CameraUI;
               import flash.media.MediaPromise;
               import flash.media.MediaType;

               import mx.events.FlexEvent;

               private var camera:CameraUI;

               protected function application1_applicationCompleteHandler
                   (event:FlexEvent):void {
                     if (CameraUI.isSupported){
                          camera = new CameraUI();
                          camera.addEventListener(MediaEvent.COMPLETE, onComplete);
                          camera.addEventListener(ErrorEvent.ERROR, onError);
                          status.text="CameraUI supported";
                     } else {
                          status.text="CameraUI NOT supported";
                     }
               }

               private function captureImage(event:MouseEvent):void {
                     camera.launch(MediaType.IMAGE);
               }

               private function onError(event:ErrorEvent):void {
                     trace("error has occurred");
               }

               private function onComplete(event:MediaEvent):void {
                     var mediaPromise:MediaPromise = event.data;
                     status.text = mediaPromise.file.url;
                     image.source = mediaPromise.file.url;
               }

          ]]>
    </fx:Script>
    <fx:Declarations>
          <!-- Place non-visual elements (e.g., services, value objects) here -->
    </fx:Declarations>

    <s:Label id="status" text="Click Take a Picture button" top="10" width="100%" 
        textAlign="center"/>

    <s:Button width="300" height="60" label="TAKE A PICTURE"
        click="captureImage(event)"
        horizontalCenter="0" enabled="{CameraUI.isSupported}"
        top="80"/>

    <s:Image id="image" width="230" height="350" horizontalCenter="0" top="170"/>
</s:Application>
Figure 4-3. The Camera UI Application
Figure 4-4. The native camera UI
Figure 4-5. Application after taking picture

Camera Roll

The Camera Roll provides access to the camera’s gallery of images.

If your application requires the use of the device’s camera roll, you will need to select the WRITE_EXTERNAL_STORAGE permission when you are creating your project. See Chapter 3 for help with permissions.

Let’s review the code below. First, you will notice that there is a private variable named cameraRoll declared, of type flash.media.CameraRoll. Within applicationComplete of the application, an event handler function is called, which first checks to see if the device supports browsing the image gallery by reading the static CameraRoll.supportsBrowseForImage property. If this property returns true, a new instance of CameraRoll is created and event listeners of type MediaEvent.SELECT and ErrorEvent.ERROR are added to handle a successfully selected image (as well as any errors that may occur).

A Button with an event listener on the click event is used to allow the user to browse the image gallery. When the user clicks the BROWSE GALLERY button, the browseGallery method is called, which then opens the device’s image gallery. At this point, the user is redirected from your application to the native gallery application. Once the user selects an image from the gallery, she is directed back to your application, the MediaEvent.SELECT event is triggered, and the mediaSelected method is called. Within the mediaSelected method, the event.data property is cast to a flash.media.MediaPromise object. The mediaPromise.file.url property is then used to populate Label and Image components that display the path to the image and the actual image to the user. Figure 4-6 shows the application and Figure 4-7 shows the application after a picture was selected from the gallery and the user has returned to the application:

<?xml version="1.0" encoding="utf-8"?>
<s:Application xmlns:fx="http://ns.adobe.com/mxml/2009"
               xmlns:s="library://ns.adobe.com/flex/spark"
               applicationComplete="application1_applicationCompleteHandler(event)">
    <fx:Script>
          <![CDATA[
               import flash.events.ErrorEvent;
               import flash.events.MediaEvent;
               import flash.media.CameraRoll;
               import flash.media.MediaPromise;

               import mx.events.FlexEvent;

               private var cameraRoll:CameraRoll;

               protected function application1_applicationCompleteHandler
                  (event:FlexEvent):void {
                     if(CameraRoll.supportsBrowseForImage){
                         cameraRoll = new CameraRoll();
                         cameraRoll.addEventListener(MediaEvent.SELECT, 
                             mediaSelected);
                         cameraRoll.addEventListener(ErrorEvent.ERROR, onError);
                     } else {
                          status.text="CameraRoll NOT supported";
                     }
               }

               private function browseGallery(event:MouseEvent):void {
                     cameraRoll.browseForImage();
               }

               private function onError(event:ErrorEvent):void {
                     trace("error has occurred");
               }

               private function mediaSelected(event:MediaEvent):void {
                     var mediaPromise:MediaPromise = event.data;
                     status.text = mediaPromise.file.url;
                     image.source = mediaPromise.file.url;
               }
          ]]>
    </fx:Script>
    <fx:Declarations>
          <!-- Place non-visual elements (e.g., services, value objects) here -->
    </fx:Declarations>

    <s:Label id="status" text="Click Browse Gallery to select image" top="10"  
        width="100%" textAlign="center"/>

    <s:Button width="300" height="60" label="BROWSE GALLERY" 
        click="browseGallery(event)"
                 enabled="{CameraRoll.supportsBrowseForImage}"
                 top="80" horizontalCenter="0"/>

    <s:Image id="image" width="230" height="350" top="170" horizontalCenter="0"/>
</s:Application>
Figure 4-6. The Browse Gallery application
Figure 4-7. The Browse Gallery application with a picture selected
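CameraRoll can also work in the opposite direction: the static supportsAddBitmapData property reports whether the device allows saving images, and the addBitmapData method writes an image into the gallery. A minimal sketch (the saveToGallery helper is hypothetical):

import flash.display.BitmapData;
import flash.display.DisplayObject;
import flash.media.CameraRoll;

// Sketch: snapshot any display object and save it to the device gallery.
private function saveToGallery(source:DisplayObject):void {
      if (CameraRoll.supportsAddBitmapData) {
            var snapshot:BitmapData = new BitmapData(source.width, source.height);
            snapshot.draw(source);
            new CameraRoll().addBitmapData(snapshot);
      }
}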

Microphone

If your application requires the use of the device’s microphone, you will need to select the RECORD_AUDIO permission when you are creating your project. See Chapter 3 for help with permissions.

Let’s review the code below. First, you will notice that there is a private variable named microphone declared, of type flash.media.Microphone. Within applicationComplete of the application, an event handler function is called, which first checks to see if the device supports access to the microphone by reading the static Microphone.isSupported property. If this property returns true, an instance of the Microphone is retrieved and set to the microphone variable, the rate is set to 44 (a 44 kHz sample rate), and the setUseEchoSuppression method is used to set the echo suppression to true. There are also variables of type ByteArray and Sound declared within this application (instances of these variables will be created when the application runs).

There are three button components within the application to trigger the record, stop, and playback functionalities, respectively.

Clicking the record button will call the record_clickHandler function, which will create a new instance of the recording variable of type ByteArray. An event listener of type SampleDataEvent.SAMPLE_DATA is added to the microphone, which will call the micDataHandler method when it receives data. Within the micDataHandler method, the data is written to the recording ByteArray.

Clicking the stop button will stop the recording by removing the SampleDataEvent.SAMPLE_DATA event listener.

Clicking the play button will call the play_clickHandler method, which first sets the position of the recording ByteArray to 0 so it is ready for playback. It then creates a new instance of the Sound class and sets it to the sound variable. It also adds an event listener of type SampleDataEvent.SAMPLE_DATA that will call the playSound method when it receives data. Finally the play method is called on the sound variable to start playback.

The playSound method loops through the recording ByteArray in memory and writes those bytes back to the data property of the SampleDataEvent, which then plays through the device’s speaker.

To extend this sample, you would need to convert the recording ByteArray to an .mp3 or .wav file so that it can be saved to disk, either with open source classes or by hand; a sketch of one approach appears after the listing. See the application in Figure 4-8:

<?xml version="1.0" encoding="utf-8"?>
<s:Application xmlns:fx="http://ns.adobe.com/mxml/2009"
               xmlns:s="library://ns.adobe.com/flex/spark"
               applicationComplete="application1_applicationCompleteHandler(event)">
    <fx:Script>
          <![CDATA[
               import flash.events.SampleDataEvent;
               import flash.media.Microphone;
               import flash.media.Sound;
               import flash.utils.ByteArray;

               import mx.events.FlexEvent;

               private var microphone:Microphone;
               private var recording:ByteArray;
               private var sound:Sound;

               protected function application1_applicationCompleteHandler
                   (event:FlexEvent):void
               {
                     if(Microphone.isSupported){
                          microphone = Microphone.getMicrophone();
                          microphone.rate = 44;
                          microphone.setUseEchoSuppression(true);
                     } else {
                          status.text="Microphone NOT supported";
                     }
               }

               private function micDataHandler(event:SampleDataEvent):void {
                     recording.writeBytes(event.data);
               }

               protected function record_clickHandler(event:MouseEvent):void
               {
                     recording = new ByteArray();
                     microphone.addEventListener(SampleDataEvent.SAMPLE_DATA, 
                         micDataHandler);
               }

               protected function stop_clickHandler(event:MouseEvent):void
               {
                     microphone.removeEventListener(SampleDataEvent.SAMPLE_DATA, 
                         micDataHandler);
               }

               protected function play_clickHandler(event:MouseEvent):void
               {
                     recording.position = 0;
                     sound = new Sound();
                     sound.addEventListener(SampleDataEvent.SAMPLE_DATA, playSound);
                     sound.play();
               }

               private function playSound(event:SampleDataEvent):void
               {
                     for (var i:int = 0; i < 8192 && recording.bytesAvailable > 0; 
                         i++){
                          var sample:Number = recording.readFloat();
                          event.data.writeFloat(sample);
                          event.data.writeFloat(sample);
                     }
               }

          ]]>
    </fx:Script>
    <fx:Declarations>
          <!-- Place non-visual elements (e.g., services, value objects) here -->
    </fx:Declarations>

    <s:Label id="status" text="Click Record to grab some audio, 
        then Stop and Play it back"
               top="10" width="100%" textAlign="center"/>
    <s:HGroup top="80" horizontalCenter="0">
          <s:Button id="record" label="Record" click="record_clickHandler(event)" />
          <s:Button id="stop" label="Stop" click="stop_clickHandler(event)" />
          <s:Button id="play" label="Play" click="play_clickHandler(event)" />
    </s:HGroup>
</s:Application>
Figure 4-8. The microphone application
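As promised above, here is a minimal sketch of one way to save the recording, assuming the 44 kHz mono float samples produced by the listing. Rather than pulling in an open source encoder, it converts the samples to 16-bit PCM and writes the WAV header by hand using AIR's File and FileStream classes (the saveAsWav name and the output filename are my own):

import flash.filesystem.File;
import flash.filesystem.FileMode;
import flash.filesystem.FileStream;
import flash.utils.ByteArray;
import flash.utils.Endian;

// Sketch: wrap the raw float samples in a 16-bit PCM WAV container.
private function saveAsWav(recording:ByteArray):void {
      // Convert 32-bit float samples (-1..1) to little-endian 16-bit PCM.
      var pcm:ByteArray = new ByteArray();
      pcm.endian = Endian.LITTLE_ENDIAN;
      recording.position = 0;
      while (recording.bytesAvailable >= 4) {
            var sample:Number = recording.readFloat();
            pcm.writeShort(int(Math.max(-1, Math.min(1, sample)) * 32767));
      }

      // Build the 44-byte RIFF/WAVE header; microphone.rate = 44
      // records at 44,100 Hz mono.
      var wav:ByteArray = new ByteArray();
      wav.endian = Endian.LITTLE_ENDIAN;
      wav.writeUTFBytes("RIFF");
      wav.writeUnsignedInt(36 + pcm.length);
      wav.writeUTFBytes("WAVE");
      wav.writeUTFBytes("fmt ");
      wav.writeUnsignedInt(16);        // fmt chunk size
      wav.writeShort(1);               // audio format: PCM
      wav.writeShort(1);               // channels: mono
      wav.writeUnsignedInt(44100);     // sample rate
      wav.writeUnsignedInt(44100 * 2); // byte rate (rate * block align)
      wav.writeShort(2);               // block align (channels * 2 bytes)
      wav.writeShort(16);              // bits per sample
      wav.writeUTFBytes("data");
      wav.writeUnsignedInt(pcm.length);
      wav.writeBytes(pcm);

      // Write the file to the documents directory (hypothetical filename).
      var file:File = File.documentsDirectory.resolvePath("recording.wav");
      var stream:FileStream = new FileStream();
      stream.open(file, FileMode.WRITE);
      stream.writeBytes(wav);
      stream.close();
}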

Multitouch

One of the navigation methods unique to mobile devices is the ability to interact with an application via gestures on the device’s touchscreen. Multitouch is defined as the ability to simultaneously register two or more touch points on the device. Within Adobe AIR 2.6, there are two event classes used to listen for multitouch events.
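Before choosing between them, you can ask the runtime what the current device actually supports. A minimal sketch of that check:

import flash.ui.Multitouch;
import flash.ui.MultitouchInputMode;

// Sketch: query the device's multitouch capabilities before adding listeners.
Multitouch.inputMode = MultitouchInputMode.GESTURE;
trace("max touch points: " + Multitouch.maxTouchPoints);
trace("gesture events supported: " + Multitouch.supportsGestureEvents);
trace("touch events supported: " + Multitouch.supportsTouchEvents);
if (Multitouch.supportedGestures) {
      for each (var gesture:String in Multitouch.supportedGestures) {
            trace("supported gesture: " + gesture);
      }
}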

GestureEvent

The GestureEvent class is used to listen for a two-finger tap on the device. The event used to listen for this action is GESTURE_TWO_FINGER_TAP. When a two-finger tap occurs, this event reports the x and y coordinates of the tap both relative to the stage and relative to the object that was tapped.

Let’s review the code below. Within applicationComplete of the application, an event handler function is called, which first sets the Multitouch.inputMode to MultitouchInputMode.GESTURE. Next, it checks to see if the device supports gesture events by reading the static Multitouch.supportsGestureEvents property. If this property returns true, an event listener is added to the stage to listen for GestureEvent.GESTURE_TWO_FINGER_TAP events. When this event occurs, the onGestureTwoFingerTap method is called, which will capture the localX and localY coordinates as well as the stageX and stageY coordinates. If you two-finger-tap on an empty portion of the stage, these values will be identical. If you two-finger-tap on an object on the stage, the localX and localY coordinates will be the values within the object, and the stageX and stageY will be relative to the stage itself. See Figure 4-9 for an example of a two-finger tap on the stage and Figure 4-10 for a two-finger tap on the Android image:

<?xml version="1.0" encoding="utf-8"?>
<s:Application xmlns:fx="http://ns.adobe.com/mxml/2009"
               xmlns:s="library://ns.adobe.com/flex/spark"
               applicationComplete="application1_applicationCompleteHandler(event)">
    <fx:Script>
          <![CDATA[
               import flash.events.GestureEvent;
               import flash.ui.Multitouch;
               import flash.ui.MultitouchInputMode;

               import mx.events.FlexEvent;

               protected function application1_applicationCompleteHandler
                   (event:FlexEvent):void {
                     Multitouch.inputMode = MultitouchInputMode.GESTURE;
                     if(Multitouch.supportsGestureEvents){
                          stage.addEventListener(GestureEvent.GESTURE_TWO_FINGER_TAP, 
                              onGestureTwoFingerTap);
                     } else {
                          status.text="gestures not supported";
                     }

               }
               private function onGestureTwoFingerTap(event:GestureEvent):void {
                     info.text = "event = " + event.type + "\n" +
                          "localX = " + event.localX + "\n" +
                          "localX = " + event.localY + "\n" +
                          "stageX = " + event.stageX + "\n" +
                          "stageY = " + event.stageY;
               }

          ]]>
    </fx:Script>
    <fx:Declarations>
          <!-- Place non-visual elements (e.g., services, value objects) here -->
    </fx:Declarations>
    <s:Label id="status" text="Do a 2 finger tap both on and off the object"
                top="10" width="100%" textAlign="center"/>
    <s:TextArea id="info" width="100%" top="40" editable="false"/>
    <s:Image width="384" height="384" bottom="10" horizontalCenter="0"
               source="@Embed('android_icon.png')"/>
</s:Application>
Figure 4-9. A two-finger tap on the stage
Figure 4-10. A two-finger tap on an image object

TransformGesture

There are multiple transform gesture events available within AIR 2.6. Each will capture a unique multitouch event. The example below demonstrates how to listen for GESTURE_PAN, GESTURE_ROTATE, GESTURE_SWIPE, and GESTURE_ZOOM events.

Let’s review the code below. Within applicationComplete of the application, an event handler function is called, which first sets the Multitouch.inputMode to MultitouchInputMode.GESTURE. Next, it checks to see if the device supports gesture events by reading the static Multitouch.supportsGestureEvents property. If this property returns true, event listeners are added to the image to listen for the TransformGestureEvent.GESTURE_PAN, TransformGestureEvent.GESTURE_ROTATE, TransformGestureEvent.GESTURE_SWIPE, and TransformGestureEvent.GESTURE_ZOOM events.

When a user grabs an object with two fingers and drags it, the TransformGestureEvent.GESTURE_PAN event is triggered and the onGesturePan method is called. Within the onGesturePan method, the offsetX and offsetY values of this event are written to the text property of the TextArea component. The event’s offsetX and offsetY values are then added to the object’s x and y properties to move the object across the stage. The results can be seen in Figure 4-11.

Figure 4-11. The GESTURE_PAN event

When a user grabs an object with two fingers and rotates it, the TransformGestureEvent.GESTURE_ROTATE event is triggered and the onGestureRotate method is called. Within the onGestureRotate method, the rotation value of this event is written to the text property of the TextArea component. To allow the object to rotate around its center, the object’s transformAround method is called and the event’s rotation value is added to the object’s rotationZ value. The results can be seen in Figure 4-12.

Figure 4-12. The GESTURE_ROTATE event

When a user swipes across an object with one finger in any direction, the TransformGestureEvent.GESTURE_SWIPE event is triggered and the onGestureSwipe method is called. Within the onGestureSwipe method, the values of the event’s offsetX and offsetY are evaluated to determine the direction in which the user swiped across the object. This direction is then written to the text property of the TextArea component. The results can be seen in Figure 4-13.

Figure 4-13. The GESTURE_SWIPE event

When a user performs a “pinch and zoom” action with two fingers on an object, the TransformGestureEvent.GESTURE_ZOOM event is triggered and the onGestureZoom method is called. Within the onGestureZoom method, the values of the event’s scaleX and scaleY are written to the text property of the TextArea component. The scaleX value is then used as a multiplier on the object’s scaleX and scaleY property to increase or decrease the size of the object as the user pinches or expands two fingers on the object. The results can be seen in Figure 4-14:

<?xml version="1.0" encoding="utf-8"?>
<s:Application xmlns:fx="http://ns.adobe.com/mxml/2009"
               xmlns:s="library://ns.adobe.com/flex/spark"
               applicationComplete="application1_applicationCompleteHandler(event)">
    <fx:Script>
          <![CDATA[
               import flash.events.TransformGestureEvent;
               import flash.geom.Vector3D;
               import flash.ui.Multitouch;
               import flash.ui.MultitouchInputMode;

               import mx.events.FlexEvent;

               protected function application1_applicationCompleteHandler
                   (event:FlexEvent):void {
                     Multitouch.inputMode = MultitouchInputMode.GESTURE;
                     if(Multitouch.supportsGestureEvents){
                          image.addEventListener(TransformGestureEvent.GESTURE_PAN,
                              onGesturePan);
                          image.addEventListener(TransformGestureEvent.GESTURE_ROTATE,
                              onGestureRotate);
                          image.addEventListener(TransformGestureEvent.GESTURE_SWIPE, 
                              onGestureSwipe);
                          image.addEventListener(TransformGestureEvent.GESTURE_ZOOM, 
                              onGestureZoom);
                     } else {
                          status.text="gestures not supported";
                     }
               }

               private function onGesturePan(event:TransformGestureEvent):void {
                     info.text = "event = " + event.type + "\n" +
                     "offsetX = " + event.offsetX + "\n" +
                     "offsetY = " + event.offsetY;
                     image.x += event.offsetX;
                     image.y += event.offsetY;
               }

               private function onGestureRotate(event:TransformGestureEvent):void {
                     info.text = "event = " + event.type + "\n" +
                     "rotation = " + event.rotation;
                     image.transformAround(new Vector3D(image.width/2, 
                                                        image.height/2, 0),
                                                    null,
                                                    new Vector3D(0,0,image.rotationZ 
                                                         + event.rotation));
               }

               private function onGestureSwipe(event:TransformGestureEvent):void {
                     var direction:String = "";
                     if (event.offsetX == 1) direction = "right";
                     if (event.offsetX == -1) direction = "left";
                     if (event.offsetY == 1) direction = "down";
                     if (event.offsetY == -1) direction = "up";
                     info.text = "event = " + event.type + "\n" +
                     "direction = " + direction;
               }

               private function onGestureZoom(event:TransformGestureEvent):void {
                     info.text = "event = " + event.type + "\n" +
                     "scaleX = " + event.scaleX + "\n" +
                     "scaleY = " + event.scaleY;
                     image.scaleX = image.scaleY *= event.scaleX;
               }

               protected function button1_clickHandler(event:MouseEvent):void
               {
                     image.rotation = 0;
                     image.scaleX = 1;
                     image.scaleY = 1;
                     image.x = 40;
                     image.y = 260;
                     info.text = "";
               }

          ]]>
    </fx:Script>
    <fx:Declarations>
          <!-- Place non-visual elements (e.g., services, value objects) here -->
    </fx:Declarations>
    <s:Label id="status" text="Transform Gestures" top="10" width="100%" 
        textAlign="center"/>
    <s:HGroup width="100%" top="40" left="5" right="5">
          <s:TextArea id="info" editable="false" width="100%" height="200"/>
          <s:Button label="Reset" click="button1_clickHandler(event)"/>
    </s:HGroup>
    <s:Image id="image" x="40" y="260" width="400" height="400"
                source="@Embed('android_icon.png')"/>
</s:Application>
Figure 4-14. The GESTURE_ZOOM event

Busy Indicator

A new component has been added to provide busy feedback to the users of your mobile application. Since there is no cursor to show busy status as there is in desktop development, the BusyIndicator component was added specifically for this purpose. Using this component is extremely simple.

Let’s review the code below. There is a CheckBox with the label “Show Busy Indicator”, which, when checked, calls the checkbox1_clickHandler method. There is a BusyIndicator component with an id of indicator and its visible property initially set to false. Within the checkbox1_clickHandler method, the indicator’s visible property is set to the value of the CheckBox. This simply shows or hides the BusyIndicator. Within the BusyIndicator, you can set the height, width, and symbolColor to suit the needs and style of your application. The results can be seen in Figure 4-15:

<?xml version="1.0" encoding="utf-8"?>
<s:Application xmlns:fx="http://ns.adobe.com/mxml/2009"
                  xmlns:s="library://ns.adobe.com/flex/spark">

    <fx:Script>
          <![CDATA[
               protected function checkbox1_clickHandler(event:MouseEvent):void
               {
                     indicator.visible = event.target.selected;
               }
          ]]>
    </fx:Script>

    <fx:Declarations>
          <!-- Place non-visual elements (e.g., services, value objects) here -->
    </fx:Declarations>

    <s:CheckBox label="Show Busy Indicator"
                     horizontalCenter="0"
                     click="checkbox1_clickHandler(event)" top="10"/>
    <s:BusyIndicator id="indicator" height="300" width="300"
                     verticalCenter="0"
                     horizontalCenter="0"
                     visible="false"
                     symbolColor="black"/>

</s:Application>
Figure 4-15. The BusyIndicator component
