Tuesday, 21 May 2013

Prototype mobile app for capturing marathon footage & tagging runners.

Written on 24/04/13

We used the Android SDK to build a prototype application that allows spectators to tag runners within video clips. The application launches the device's default camera application and runs a background service that logs the location and orientation of the device during capture. By combining these two pieces of metadata we can generate a second-by-second approximation of the camera's field of view, which we intersect with the marathon course. By adding the tagging metadata we can then infer the location of each runner as a distance around the course. To trigger the metadata logging while the default camera application is in the foreground, we monitor the device's file system using Android's FileObserver class. Once the spectator has finished capturing, the video is immediately played back and a keypad is displayed so they can easily enter runner numbers.
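The field-of-view intersection described above can be sketched roughly as follows. This is a minimal illustration, not the prototype's actual code: the class and method names, the great-circle bearing maths, and the 60° horizontal field of view are all my own assumptions.

```java
// Sketch of inferring which part of the course a frame can see, assuming:
// the course is a polyline of (lat, lon) points with known cumulative
// distances, the logged orientation gives a compass bearing, and the camera
// has a fixed horizontal field of view (~60 degrees). Illustrative only.
public class FovTagger {

    // Initial great-circle bearing in degrees from point 1 to point 2.
    static double bearing(double lat1, double lon1, double lat2, double lon2) {
        double dLon = Math.toRadians(lon2 - lon1);
        double y = Math.sin(dLon) * Math.cos(Math.toRadians(lat2));
        double x = Math.cos(Math.toRadians(lat1)) * Math.sin(Math.toRadians(lat2))
                 - Math.sin(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2)) * Math.cos(dLon);
        return (Math.toDegrees(Math.atan2(y, x)) + 360) % 360;
    }

    // True if a target bearing falls inside the camera's horizontal FOV,
    // handling wrap-around at 0/360 degrees.
    static boolean inView(double cameraBearing, double targetBearing, double fovDegrees) {
        double diff = Math.abs(cameraBearing - targetBearing) % 360;
        if (diff > 180) diff = 360 - diff;
        return diff <= fovDegrees / 2;
    }

    // Course distances (metres) of every course point visible in this frame:
    // this is the "field of view intersected with the course" step.
    static java.util.List<Double> visibleCourseDistances(
            double camLat, double camLon, double camBearing, double fovDegrees,
            double[][] coursePoints, double[] cumulativeMetres) {
        java.util.List<Double> out = new java.util.ArrayList<>();
        for (int i = 0; i < coursePoints.length; i++) {
            double b = bearing(camLat, camLon, coursePoints[i][0], coursePoints[i][1]);
            if (inView(camBearing, b, fovDegrees)) {
                out.add(cumulativeMetres[i]);
            }
        }
        return out;
    }
}
```

Once a clip's visible course distances are known for each second, a tagged runner number can be mapped to that range of distances at the clip's timestamp.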

Screenshot from prototype tagging interface.

One of the issues with the mobile user experience was that it slowed down users who wanted to quickly return to recording runners: the application did not allow a fast enough switch between capturing and reviewing footage, so the tagging step immediately after capture restricted the amount of capture that took place. In addition, runner numbers were not always legible in the footage during playback: spectators had to remember the numbers, and several took to reading them aloud during capture so they could hear them back in the recording.

In comparison to previous findings from my own research (on citizen journalism), the design of the mobile interface resulted in all of the videos being captured in landscape orientation, which is much more pleasant for potential online viewers. The metadata generated at the event appears to be of good quality, and we are now analysing the data to determine the accuracy of the runner location tagging.

For the second iteration of the application we are moving towards using our own camera application with a live tagging system to solve some of the problems seen in the initial trials. This will give us access to lower-level camera functionality, letting us adjust the video encoding to reduce the file size for upload. To accompany this, a newly designed front screen will show the live locations of runners who are requesting footage, alongside thumbnails of footage already submitted by other spectators.
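As a rough illustration of why controlling the encoding matters for upload, clip size scales linearly with video bitrate. The bitrates below are illustrative assumptions, not the settings we will actually use:

```java
// Back-of-envelope estimate of clip size versus video bitrate (audio
// ignored). Bitrates here are illustrative: ~8 Mbit/s stands in for a
// typical default high-quality capture, 2 Mbit/s for a reduced profile.
public class UploadEstimate {

    // Size in megabytes of a clip at a constant bitrate: bits/s * s / 8 / 1e6.
    static double clipSizeMb(int bitrateBitsPerSec, int durationSec) {
        return (double) bitrateBitsPerSec * durationSec / 8 / 1_000_000;
    }

    public static void main(String[] args) {
        System.out.printf("60 s at 8 Mbit/s: %.0f MB%n", clipSizeMb(8_000_000, 60)); // 60 MB
        System.out.printf("60 s at 2 Mbit/s: %.0f MB%n", clipSizeMb(2_000_000, 60)); // 15 MB
    }
}
```

A quarter of the bitrate means a quarter of the upload, which matters when spectators are submitting clips over a mobile connection at the event.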
