Camera Tutorial, Part 6: Grab frames from iOS camera

At the end of this part you will have

An ANE that grabs frames from the native camera.

Time

15-20 minutes

Wait, have you done these first?

You should have an Xcode iOS library project that looks roughly like this:

Xcode camera project

and a test app and Flex Library project that look like this:

Camera ANE Flex projects

If you aren’t interested in completing the full tutorial, but want to see frame capturing with AVFoundation, you can stick around – this post should be quite informative.

Step 1: Hook to the video frame queue in the native library

Video frames from the camera arrive on the callback you added in Part 4 of the tutorial to your CameraDelegate class in CameraDelegate.m:
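As a reminder, the callback has roughly this shape. The output check is explained in Step 2 below; m_videoOutput and copyVideoFrame: are assumed names – use whatever you named your AVCaptureVideoDataOutput member and your frame-copying function:

```objc
- ( void ) captureOutput: ( AVCaptureOutput * ) captureOutput
   didOutputSampleBuffer: ( CMSampleBufferRef ) sampleBuffer
          fromConnection: ( AVCaptureConnection * ) connection
{
    // Ignore samples from outputs we didn't ask for, e.g. audio:
    if ( captureOutput != m_videoOutput )
    {
        return;
    }

    // Copy the frame's pixels – implemented in Step 4 below:
    [ self copyVideoFrame: sampleBuffer ];
}
```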

captureOutput:didOutputSampleBuffer:fromConnection: is called every time a new frame is available. Note that this call happens on a special dispatch queue – the same one you set in Part 4.

A dispatch queue is a way of executing code asynchronously. Blocks of code that are sent to a single (serial) dispatch queue are executed one after the other (FIFO), but concurrently with the rest of the code in your app. Although your queue is not guaranteed to always run on the same thread, in this case it may help to think of it as a thread on which video frames arrive.
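This is roughly how such a queue was created and handed to the video output back in Part 4 – the queue label and the m_videoOutput member name are illustrative:

```objc
// A serial queue: blocks submitted to it run one at a time, in FIFO order,
// but asynchronously with respect to the main thread:
dispatch_queue_t frameQueue =
    dispatch_queue_create( "com.mycompany.camera.frames", DISPATCH_QUEUE_SERIAL );

// Video frames will now be delivered to the delegate on frameQueue:
[ m_videoOutput setSampleBufferDelegate: self queue: frameQueue ];
```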

Step 2: Inspect the data that comes from the camera

Have a look at captureOutput:didOutputSampleBuffer:fromConnection: again – it takes three arguments:

  • AVCaptureOutput * captureOutput identifies the output that produced the sample. You can set up multiple outputs, including ones for audio if you want to capture data from the microphone. This is why the first thing we do in the callback is check whether the data sent to it comes from the output we are interested in.
  • AVCaptureConnection * connection gives you access to the device the data is coming from – in this case the camera.
  • CMSampleBufferRef sampleBuffer is what we are most interested in right now: this sample buffer wraps the video frame pixels we want to copy and send to ActionScript. Depending on which output sampleBuffer comes from, it can contain an image or an audio frame. To get hold of an image frame, which is what we’re expecting here, you’ll query sampleBuffer for a reference to an image buffer, and from the image buffer you’ll copy the raw pixels as bytes (done in Step 4 below).

Step 3: Add a way of storing copied frames

As we saw in Step 2, you’ll be copying bytes and storing them for ActionScript to access. There are a couple of things we need to consider:

  • data structure: you’ll need something that you can easily allocate and copy bytes to and from. NSData and NSMutableData seem like good candidates.
  • concurrency: we noted in Step 1 that data will be arriving on a thread of its own. ActionScript, on the other hand, operates only on the main thread. So chances are that new data will arrive from the camera while ActionScript is still reading the previous frame’s data. One way of making sure these two operations don’t step on each other’s toes is to keep copies of old frames for ActionScript to consume, while copying new frames into a fresh bit of memory. On a mobile device you are starved for RAM and the system will shut your app down if it becomes too greedy for memory, so you’d better not keep too many old frames around. In this tutorial we’ll use a technique I refer to as fake triple buffering: it keeps only one old frame in your native library and does minimal synchronization between the main thread and the camera thread, so that neither of them is blocked for any length of time.

3.1. Declare your three ‘fake’ buffers as private members of CameraDelegate (find the @private directive in CameraDelegate.m):
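A sketch of the three members – m_middleManBuffer appears again in Step 4.5; m_backBuffer and m_frontBuffer are assumed names for the other two pointers:

```objc
@private
    NSData * m_backBuffer;       // the frame currently being copied from the camera
    NSData * m_middleManBuffer;  // hand-over point between the camera thread and ActionScript
    NSData * m_frontBuffer;      // the frame ActionScript is currently reading
```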

The reason I call the buffers ‘fake’ is that we’ve got three pointers to blocks of data, but only one actual block of data. You’ll be juggling the three pointers in a way that makes sure the one frame we keep for ActionScript isn’t read from and written to at the same time, without blocking either the reading or the writing for too long. Go on, have a look at the details of fake triple buffering – there be cat selfies.

3.2. Make sure they are initialized. Find CameraDelegate‘s init() function which you added in Step 10 of Part 4 of the tutorial and add the initialization of the three buffers to it:
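Something along these lines – the rest of what your init() does (from Part 4) is elided here:

```objc
- ( id ) init
{
    if ( ( self = [ super init ] ) )
    {
        // ... your existing initialization from Part 4 ...

        // No frames have been captured yet, so start with empty buffers:
        m_backBuffer = nil;
        m_middleManBuffer = nil;
        m_frontBuffer = nil;
    }

    return self;
}
```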

Step 4: Access pixels in the sample buffer

Now that you’ve seen where video frames arrive and in what shape, let’s get the actual frame pixels out of one of them.

Open CameraDelegate.m and find the placeholder function you added to it just for this purpose:
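If yours matched mine, it looked something like this – copyVideoFrame: is an assumed name, so use whatever you called the placeholder:

```objc
- ( void ) copyVideoFrame: ( CMSampleBufferRef ) sampleBuffer
{
    //TODO: copy the frame's pixels
}
```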

Get rid of the //TODO: remark and add the following.

4.1. Get hold of the pixel buffer.

CVPixelBufferRef will give you information about the video frame and access to its data.
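A one-liner does the job – CMSampleBufferGetImageBuffer() returns the image buffer wrapped by the sample:

```objc
// The sample buffer wraps a pixel buffer, which holds the actual frame data:
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer( sampleBuffer );
```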

4.2. Lock the pixel buffer for reading. You’ll have to unlock it when you’re done with it (see step 4.7. below).
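The lock tells CoreVideo we only intend to read, which lets it skip some bookkeeping:

```objc
// Lock the buffer's base address before touching its pixels:
CVPixelBufferLockBaseAddress( pixelBuffer, kCVPixelBufferLock_ReadOnly );
```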

4.3. Prepare to copy bytes from the pixel buffer: check how many there are and where they are in memory.
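Roughly like this – the variable names are mine, so adjust to taste:

```objc
// Where the pixels start in memory and how many bytes they occupy:
uint8_t * baseAddress = ( uint8_t * ) CVPixelBufferGetBaseAddress( pixelBuffer );
size_t    bytesPerRow = CVPixelBufferGetBytesPerRow( pixelBuffer );
size_t    height      = CVPixelBufferGetHeight( pixelBuffer );
size_t    numBytes    = bytesPerRow * height;
```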

4.4. Copy pixels into a new block of memory:
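Assuming the base address and byte count obtained in step 4.3 and a back-buffer member named m_backBuffer (an assumed name for the buffer being written to), the copy is a single call:

```objc
// Allocate a fresh block for each frame, so the previous frame
// can still be read safely while this one is being written:
m_backBuffer = [ NSData dataWithBytes: baseAddress length: numBytes ];
```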

If you are wondering why we allocate a new block of memory each time, have a look at fake triple buffering.

4.5. Move the copied frame along, so it can be accessed by ActionScript:
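A sketch of the hand-over, assuming the freshly copied frame sits in a member named m_backBuffer (the frame size variables are introduced in step 4.6 below):

```objc
@synchronized ( self )
{
    // Hand the freshly copied frame over to the middle-man buffer
    // and note the frame's dimensions for ActionScript:
    m_middleManBuffer = m_backBuffer;
    m_frameWidth      = CVPixelBufferGetWidth( pixelBuffer );
    m_frameHeight     = CVPixelBufferGetHeight( pixelBuffer );
}
```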

Here the @synchronized directive creates a mutex lock that ensures that no other thread will be able to access m_middleManBuffer while we are changing it.

If you are new to multithreaded programming: a mutex is a way of giving threads ‘mutually exclusive’ rights to execute a piece of code. A thread that acquires a mutex lock is the only one that can run that piece of code while it holds the lock. Once it releases the lock (in the case above, by going out of the scope of @synchronized), another thread can acquire it and run the same code. This helps preserve the integrity of your data: for example, we want to make sure that the pixels we’ve copied for ActionScript to access aren’t messed with (changed or destroyed) while ActionScript is half-way through reading them.

4.6. Store the video frame size

Notice the two variables I snuck into the last code block: m_frameWidth and m_frameHeight? These will let you know how big each frame is, so you can tell ActionScript later. We aren’t expecting the size to differ from frame to frame in this tutorial, but you can run into that if you decide to do any processing on the frames – cropping, for example.

So declare these as private members (under the @private directive):
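CVPixelBufferGetWidth() and CVPixelBufferGetHeight() return size_t, so that’s a natural type for them:

```objc
@private
    size_t m_frameWidth;
    size_t m_frameHeight;
```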

And make sure they get initialized when the camera starts. Add these two lines to your startCamera() function, just before you call [ m_captureSession startRunning ]:
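A sketch of where the two lines go – startCamera() and m_captureSession come from Part 4:

```objc
// No frames have arrived yet:
m_frameWidth  = 0;
m_frameHeight = 0;

[ m_captureSession startRunning ];
```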

4.7. Almost forgot!

Let’s unlock that poor pixel buffer:
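This matches the lock taken in step 4.2 – the flags must be the same in both calls:

```objc
// Balance the CVPixelBufferLockBaseAddress call from step 4.2:
CVPixelBufferUnlockBaseAddress( pixelBuffer, kCVPixelBufferLock_ReadOnly );
```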

And that’s it, you have secured a frame from the camera.

What’s next?

  • If you are still wondering what that ‘fake triple buffering’ is all about, check it out before moving on. Like the cat selfie?
  • Next, though, it’s time to see those frames on the screen, what do you think? This is the objective of Part 7: Pass video frames to ActionScript (15-20 minutes).
  • Here is the table of contents for the tutorial, in case you want to jump back or ahead.

Wait, want more features and Android support?

Check out the DiaDraw Camera Driver ANE.
