At the end of this part you will have an app that displays the frames you grab from the native camera. You might need to find a cat first.
Time
15-20 minutes
Wait, have you done these first?
- Part 1: Create a test app – 15-20 minutes
- Part 2: Set up the Xcode project – 8-10 minutes
- Part 3: Set up the AIR Library – 8-10 minutes
- Part 4: Connect to the camera in Objective-C – 15-20 minutes
- Part 5: Start the camera from ActionScript – 5-6 minutes
- Part 6: Grab frames from iOS camera – 15-20 minutes
- Part 6A: Fake triple buffering – 5-7 minutes, optional
- Got a cat? Yeah, me neither… I had to do my own selfies through the whole tutorial. Hard work.
Step 1: Decide how your AIR Library should receive a frame
Before we plunge into the Objective-C thicket, it would be good to decide what kind of data you will want out of it.
In Flash Builder find your AIR Library project, CameraTutorialAIRLib, and open CameraDriver.as. Then add this public method to it:
```actionscript
public function getVideoFrame() : BitmapData
{
    // Homework for you: for greater efficiency
    // you can cache the ByteArray and just copy pixels into it.
    // You'll only need to create a new one
    // if the frames you get from the camera vary in size.
    var videoFrameBytes : ByteArray = new ByteArray();
    var frameSize : Point = new Point();

    m_extContext.call( "as_copyLastFrame", videoFrameBytes, frameSize );

    // If we haven't got a valid frame, don't bother copying it:
    if ( 0 == videoFrameBytes.length || 0 == frameSize.x || 0 == frameSize.y )
    {
        return null;
    }

    // Homework for you: ditto with BitmapData - you can cache that
    // to make your ANE more efficient
    var bitmap : BitmapData = new BitmapData( frameSize.x, frameSize.y, true );

    // We asked the native camera for kCVPixelFormatType_32BGRA.
    // ActionScript expects RGBA. Setting the ByteArray to be LITTLE_ENDIAN
    // says to ActionScript: this is "RGBA read backwards":
    videoFrameBytes.endian = Endian.LITTLE_ENDIAN;
    videoFrameBytes.position = 0;

    // More homework for you: to avoid copying a frame more than once,
    // count the frame indices you've copied. If the frame you are about to get
    // from the native side doesn't have a newer index than the last frame
    // you've displayed, don't bother copying it.
    bitmap.setPixels( new Rectangle( 0, 0, frameSize.x, frameSize.y ),
                      videoFrameBytes );

    return bitmap;
}
```
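If you fancy doing the caching homework from the comments above, here is a minimal sketch of one way to go about it. The m_cachedFrameBytes and m_cachedBitmap members are hypothetical names, not part of the tutorial code:

```actionscript
// Hypothetical cached members - names made up for this sketch:
private var m_cachedFrameBytes : ByteArray = new ByteArray();
private var m_cachedBitmap : BitmapData = null;

public function getVideoFrameCached() : BitmapData
{
    // Reset the length, so a failed native call is detectable:
    m_cachedFrameBytes.length = 0;

    var frameSize : Point = new Point();
    m_extContext.call( "as_copyLastFrame", m_cachedFrameBytes, frameSize );

    if ( 0 == m_cachedFrameBytes.length || 0 == frameSize.x || 0 == frameSize.y )
    {
        return null;
    }

    // Only allocate a new BitmapData when the frame size changes:
    if ( null == m_cachedBitmap
         || m_cachedBitmap.width != frameSize.x
         || m_cachedBitmap.height != frameSize.y )
    {
        m_cachedBitmap = new BitmapData( frameSize.x, frameSize.y, true );
    }

    m_cachedFrameBytes.endian = Endian.LITTLE_ENDIAN;
    m_cachedFrameBytes.position = 0;
    m_cachedBitmap.setPixels( new Rectangle( 0, 0, frameSize.x, frameSize.y ),
                              m_cachedFrameBytes );
    return m_cachedBitmap;
}
```

Note that returning a cached BitmapData means the caller must not hold on to frames across calls: each call overwrites the previous frame's pixels.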
What does this mean for your native code?
1. You will need to expose a C function that ActionScript can call as “as_copyLastFrame”.
2. “as_copyLastFrame” will take a flash.utils.ByteArray as its first argument, set it to an appropriate size and copy the camera frame’s pixels into it.
3. As its second argument “as_copyLastFrame” will take a flash.geom.Point and set its x and y properties to the width and height of the frame that was copied into the ByteArray.
Now that we know what your native code needs to do, let’s do it.
Step 2: Get a video frame from CameraDelegate
Before passing it to ActionScript, you first need to get hold of a video frame from your CameraDelegate class.
In your Xcode project open CameraDelegate.m and add a method that will do just that:
```objc
- ( const NSData * const ) getLastFrame: ( int * ) frameWidth
                                  height: ( int * ) frameHeight
{
    @synchronized ( self )
    {
        m_consumerReadBuffer = m_middleManBuffer;
        m_middleManBuffer = NULL;

        *frameWidth = m_frameWidth;
        *frameHeight = m_frameHeight;
    }

    return m_consumerReadBuffer;
}
```
Remember the fake triple buffering technique we use for handling the frames that are copied from the camera? This is its last step, where we ‘swap’ the middleman and the consumer read buffers. The mutex around the swap (hence the @synchronized directive) makes sure the middleman buffer isn’t changed by the capture thread in the middle of it.
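In case Part 6A is no longer fresh in your mind, the producer end of that swap looks roughly like this. Treat it as a sketch: the publishFrame method name is made up here and your member names may differ:

```objc
// Sketch of the producer end of the fake triple buffering (Part 6A).
// Called on the capture queue after a new frame has been copied
// into m_producerWriteBuffer. The method name is hypothetical.
- ( void ) publishFrame
{
    @synchronized ( self )
    {
        // Hand the freshly written buffer to the middleman;
        // getLastFrame: will pick it up on the consumer's thread:
        m_middleManBuffer = m_producerWriteBuffer;
    }
    m_producerWriteBuffer = NULL;
}
```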
Put the getLastFrame() signature in CameraDelegate.h to make it callable from outside the CameraDelegate class (make it public). This goes inside @interface CameraDelegate:
```objc
- ( const NSData * const ) getLastFrame: ( int * ) frameWidth
                                  height: ( int * ) frameHeight;
```
Step 3: Copy pixels into the ActionScript ByteArray
In your Xcode project open the source file that defines your native library’s interface to AIR. That would be CameraLibiOS.m.
3.1. Add a function for ActionScript to call, which will take a flash.utils.ByteArray and a flash.geom.Point:
```objc
FREObject ASCopyLastFrame( FREContext ctx,
                           void * funcData,
                           uint32_t argc,
                           FREObject argv[] )
{
    // A bit anal, but saves headache.
    // Look up "C enumerated types" if enum is unfamiliar:
    enum
    {
        ARG_BYTE_ARRAY = 0,
        ARG_FRAME_SIZE,
        ARG_COUNT
    };

    // Make sure we are prepared: there is a CameraDelegate instance
    // and we've got the right number of parameters from ActionScript
    assert( ARG_COUNT == argc );
    assert( NULL != g_cameraDelegate );

    // 1. Get hold of the last frame that CameraDelegate received from the camera
    int frameWidth = 0;
    int frameHeight = 0;
    const NSData * const videoFrameData =
        [ g_cameraDelegate getLastFrame: &frameWidth height: &frameHeight ];

    // 2. Check if we've got a usable frame
    if ( !isVideoFrameValid( videoFrameData, frameWidth, frameHeight ) )
    {
        // TODO: optional - send an error event to ActionScript
        return NULL;
    }

    // 3. Copy pixels into the ByteArray that ActionScript passed here:
    // 3.1. Get hold of the ByteArray wrapper object:
    FREObject byteArrayWrapper = argv[ ARG_BYTE_ARRAY ];
    assert( NULL != byteArrayWrapper );

    // 3.2. Make sure we've got enough bytes in the ByteArray:
    uint32_t numBytesToCopy = ( uint32_t ) videoFrameData.length;
    FREObject byteArrayLengthObj = NULL;
    FRENewObjectFromInt32( numBytesToCopy, &byteArrayLengthObj );
    FRESetObjectProperty( byteArrayWrapper,
                          ( const uint8_t * ) "length",
                          byteArrayLengthObj,
                          NULL );

    // 3.3. Then get hold of the actual ByteArray:
    FREByteArray byteArray;
    FREResult status = FREAcquireByteArray( byteArrayWrapper, &byteArray );
    if ( FRE_OK != status )
    {
        // TODO: optional - send an error event to ActionScript
        return NULL;
    }

    // 3.4. Now do the copying:
    memcpy( byteArray.bytes, videoFrameData.bytes, numBytesToCopy );

    // 3.5. Finally, release the hold on the ByteArray:
    FREReleaseByteArray( byteArrayWrapper );

    // 4. Let ActionScript know the size of the video frame:
    FREObject frameSizeObj = argv[ ARG_FRAME_SIZE ];

    FREObject widthObj = NULL;
    FRENewObjectFromInt32( frameWidth, &widthObj );
    FRESetObjectProperty( frameSizeObj, ( const uint8_t * ) "x", widthObj, NULL );

    FREObject heightObj = NULL;
    FRENewObjectFromInt32( frameHeight, &heightObj );
    FRESetObjectProperty( frameSizeObj, ( const uint8_t * ) "y", heightObj, NULL );

    return NULL;
}
```
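For the two optional TODOs in the function above, FREDispatchStatusEventAsync() is the standard way to queue a StatusEvent for the ActionScript side. A minimal sketch; the “CAMERA_FRAME_ERROR” event code is made up for illustration, so match it to whatever your CameraDriver listens for:

```objc
// Hypothetical helper for the TODOs above.
// The event code string is an assumption - use whatever
// your ActionScript side expects:
static void reportFrameError( FREContext ctx, const char * message )
{
    FREDispatchStatusEventAsync( ctx,
                                 ( const uint8_t * ) "CAMERA_FRAME_ERROR",
                                 ( const uint8_t * ) message );
}
```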
Add a helper function for checking if we’ve got a valid video frame:
```objc
BOOL isVideoFrameValid( const NSData * const videoFrameData,
                        int frameWidth,
                        int frameHeight )
{
    if ( NULL == videoFrameData )
    {
        return false; // The video frame hasn't been captured from the camera yet
    }

    if ( NULL == videoFrameData.bytes )
    {
        return false; // The data pointer hasn't been set
    }

    if ( 0 == videoFrameData.length )
    {
        return false; // Empty frame...
    }

    // Finally, make sure we've got a valid size for the frame:
    return frameWidth > 0 && frameHeight > 0;
}
```
That’s quite a lot to take in. If you have questions and the comments in the code don’t answer them, leave a comment at the bottom of this post and I’ll try to answer as best I can.
3.2. Expose this function to ActionScript. In CameraLibiOS.m find your context initializer, CameraLibContextInitializer(), and add ASCopyLastFrame() to the extensionFunctions array, so it now looks like this:
```objc
static FRENamedFunction extensionFunctions[] =
{
    { ( const uint8_t * ) "as_startCameraPreview", NULL, &ASStartCameraPreview },
    { ( const uint8_t * ) "as_stopCameraPreview",  NULL, &ASStopCameraPreview  },
    { ( const uint8_t * ) "as_copyLastFrame",      NULL, &ASCopyLastFrame      }
};
```
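One thing worth double-checking: if your CameraLibContextInitializer() reports a hard-coded function count back to AIR, it now needs to say three. Deriving the count from the array itself is safer. A sketch, assuming the standard context initializer signature from the earlier parts:

```objc
void CameraLibContextInitializer( void * extData,
                                  const uint8_t * ctxType,
                                  FREContext ctx,
                                  uint32_t * numFunctionsToSet,
                                  const FRENamedFunction ** functionsToSet )
{
    // ... extensionFunctions is defined as above ...

    // Derive the count from the array, so it stays correct
    // as you add more functions:
    *numFunctionsToSet = sizeof( extensionFunctions ) / sizeof( FRENamedFunction );
    *functionsToSet = extensionFunctions;
}
```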
Step 4: Display frames in your test app
Aren’t you glad you already have an app set up, so you can test your code in a jiffy?
4.1. Add a function that will take a video frame from your ANE in the form of flash.display.BitmapData and pass it to the Spark Image you added to your stage. This goes inside the <fx:Script> section in CameraTutorialAppHomeView.mxml:
```actionscript
private function displayVideoFrame() : void
{
    // 1. Obtain the frame data from the ANE
    var bitmap : BitmapData = m_cameraDriver.getVideoFrame();

    // 2. If it's a valid frame,
    // give it to previewImage to display
    if ( null != bitmap )
    {
        previewImage.graphics.clear();
        previewImage.source = bitmap;
    }
}
```
4.2. So when will you call displayVideoFrame()?
You have a couple of choices here. You can either notify your app every time a frame becomes available by sending it an event, or you can have the app ask for frames on a timer, irrespective of when they become ready in the native library. I’ve found the latter to be a lot more flexible, as there are times when you want the camera and the consumer to run at different frame rates. Honest, I have war stories to share. Ask me in the comments below if you are curious.
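For completeness, the event-driven route would look something like this inside CameraDriver (assuming it extends EventDispatcher). It relies on the native side dispatching a status event via FREDispatchStatusEventAsync() whenever a frame lands; the “CAMERA_FRAME_READY” and “frameReady” event names are made up for this sketch:

```actionscript
// Hypothetical event-driven alternative - event names are made up:
private function listenForFrames() : void
{
    m_extContext.addEventListener( StatusEvent.STATUS, onFrameStatus );
}

private function onFrameStatus( _event : StatusEvent ) : void
{
    if ( "CAMERA_FRAME_READY" == _event.code )
    {
        // Tell the app a new frame is ready to be fetched:
        dispatchEvent( new Event( "frameReady" ) );
    }
}
```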
A timer it is, then.
In case you happily skipped the rant above (can’t blame you), this is what you’ll do:
4.2.1. Add a flash.utils.Timer object to CameraTutorialAppHomeView.mxml:
```actionscript
private var m_refreshTimer : Timer = null;
```
4.2.2. Start the timer in your onCameraStarted() event handler:
```actionscript
private function onCameraStarted( _event : Event ) : void
{
    // 1. Update the UI
    btnStart.enabled = false;
    btnStop.enabled = true;

    // 2. Start requesting video frames
    startRefreshTimer();
}
```
And, of course, add a definition for startRefreshTimer():
```actionscript
private function startRefreshTimer() : void
{
    if ( null == m_refreshTimer )
    {
        // Let's update our screen at 15 frames per second:
        var framesPerSecond : Number = 15.0;
        m_refreshTimer = new Timer( 1000.0 / framesPerSecond );
        m_refreshTimer.addEventListener( TimerEvent.TIMER, onRefresh );
    }

    m_refreshTimer.start();
}
```
4.2.3. In your timer handler, onRefresh(), call displayVideoFrame():
```actionscript
private function onRefresh( _event : Event ) : void
{
    displayVideoFrame();
}
```
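One optional tweak while we are here: if the preview looks choppy, TimerEvent.updateAfterEvent() asks the runtime to render as soon as the handler returns, instead of waiting for the next SWF frame tick. Type the parameter as TimerEvent to get at it:

```actionscript
private function onRefresh( _event : TimerEvent ) : void
{
    displayVideoFrame();

    // Optional: render right away, rather than waiting
    // for the next SWF frame tick:
    _event.updateAfterEvent();
}
```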
What’s next?
- If I were you, I would run the app and see if I get a camera preview. Hey, where did this cat come from!?
- There is something we haven’t done yet, however, before we can finish this tutorial. Can you guess what it is? Head on to Part 8: Stop the camera (6-7 minutes).
- Here is the table of contents for the tutorial, in case you want to jump back or ahead.
Check out the DiaDraw Camera Driver ANE.