Monday, 5 August 2013

Update on development of Marathon Live Mobile App

Since the last event, the Nottingham Race For Life, we have been developing the mobile application to get it ready for testing with a group of Marathon spectators who are not members of the research team. We've made the following changes to the app:
  • Created a new front screen which lists all of the tags which have been made and displays them on a map. This will eventually include requests from runners and their friends and family for footage.
  • Improved the location accuracy of the app using Google's new Google Play Services location provider, which fuses the network and GPS location providers to give the best available fix with minimal effort on our part.
  • Moved the runner number keypad to the right hand side of the screen to allow tagging to be done without obscuring the lens of the camera when held in landscape mode.
  • Removed the record button from the app and switched to auto-record instead. This makes the process of recording runners and switching to the front screen easier and faster.
  • Added YouTube upload to the app (still experimental) using the example Android application for YouTube Direct Lite. This is in the early stages of development and is being trialled (to be technical, in a branch) amongst spectators to get a feel for how fast we can upload content from the events.
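Play Services now handles provider fusion for us, but the underlying idea is worth sketching. The Python below is an illustrative sketch of the classic "is this fix better?" logic, not the app's actual code, and the names and the 30-second staleness threshold are our own assumptions: prefer a fix that is significantly newer; between similarly fresh fixes, prefer the more accurate one.

```python
# Illustrative sketch of location-provider fusion: given fixes from the
# network and GPS providers, keep whichever is "better" -- significantly
# newer, or similarly fresh but more accurate. Google Play Services'
# fused provider does this (and more) for us on the device.
from dataclasses import dataclass

STALE_MS = 30_000  # assumption: a fix older than 30 s is out of date

@dataclass
class Fix:
    provider: str      # "gps" or "network"
    time_ms: int       # timestamp of the fix
    accuracy_m: float  # estimated error radius in metres

def better_fix(current, candidate):
    """Return the fix we should keep out of `current` and `candidate`."""
    if current is None:
        return candidate
    age = candidate.time_ms - current.time_ms
    if age > STALE_MS:      # candidate is much newer: take it
        return candidate
    if age < -STALE_MS:     # candidate is much older: ignore it
        return current
    # Similar age: prefer the more accurate fix.
    return candidate if candidate.accuracy_m < current.accuracy_m else current
```

For example, a fresh GPS fix with 8 m accuracy beats a network fix with 50 m accuracy taken a second earlier, but a much newer network fix beats a stale GPS one.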
We ran a technical trial with the application at the Nottingham Riverside, which hosted a 5K event. This was also an opportunity to test capture incentivisation by feeding event spectators messages from a central control room to encourage them to capture footage of runners.


Saturday, 15 June 2013

Nottingham Race for Life



On Sunday 9th June, James and I spectated at the 'Race for Life' in Nottingham. This was a 5k run around the Victoria Embankment, near Trent Bridge. The event raised around £300,000 for charity, so it was busy and popular, giving us a good opportunity to test our application at a run with a high volume of people. This was also our first opportunity to test the 'live tagging' interface that has been developed, which allows the user to tag runners using an on-screen keypad whilst they film.

This was a valuable experience for us because it highlighted a number of bugs in the current application. For example, on James' phone, a Samsung Galaxy S2, the application recorded videos as expected, whereas on my own phone, a Samsung Galaxy S4, the application appeared to record videos, but on playback only the audio had been captured properly; the video consisted of a series of green and pink fuzzy lines.

With regards to live tagging, this event highlighted how difficult it may be, particularly at a high-volume event. A lot of the participants at this race were walking rather than running, yet live tagging was still a significant challenge. At future events we should test the live-tagging interface further with an improved on-screen keypad. The current keypad was very fiddly to use, and a lot of my own tagging errors came from not being able to accurately type in the runner's number, so I had to keep correcting myself.

For now, the application is at a stage where it will benefit greatly from iterative design, prototyping and testing. Key goals for the next event to trial the application at should be consistently successful video capture and an improved on-screen keypad.

Tuesday, 21 May 2013

Prototype mobile app for capturing marathon footage & tagging runners.

Written on 24/04/13

We used the Android SDK to build a prototype application to allow spectators to tag runners within video clips. The application launches the device's default camera application and runs a background service to log the location and orientation of the device during capture. By combining these two items of metadata we can generate a second-by-second approximation of the camera's field of view and where it intersects the marathon course. By adding the tagging metadata we can infer the location of each runner in terms of distance around the course. In order to interact with the default camera application and trigger the metadata logging, we monitored the file system of the device using Android's FileObserver class. Once the spectator has finished capture, the video is immediately played back and a keypad is displayed for them to easily enter runner numbers.
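The view-cone approximation described above can be illustrated with a small geometric sketch. This is not the app's code: it treats coordinates as a flat plane and all names and numbers are made up for illustration, but it shows how a camera position, compass bearing and field of view pick out the visible stretch of a course.

```python
# Sketch of the view-cone idea: given the camera's position, compass
# bearing and field of view, test which points of a (simplified, flat)
# course fall inside the cone at a given second.
import math

def bearing_to(cam, point):
    """Compass-style bearing (degrees) from the camera to a point,
    treating coordinates as a flat x/y plane for simplicity."""
    dx, dy = point[0] - cam[0], point[1] - cam[1]
    return math.degrees(math.atan2(dx, dy)) % 360

def in_view_cone(cam, heading_deg, fov_deg, point, max_range):
    """True if `point` lies within the camera's field of view."""
    if math.dist(cam, point) > max_range:
        return False
    # Signed angular difference, wrapped into [-180, 180)
    diff = (bearing_to(cam, point) - heading_deg + 180) % 360 - 180
    return abs(diff) <= fov_deg / 2

# Which points of a straight course segment can this camera see?
course = [(x, 100.0) for x in range(0, 200, 10)]  # points every 10 m
visible = [p for p in course
           if in_view_cone((50.0, 0.0), 0.0, 60.0, p, 150.0)]
```

A camera at (50, 0) facing due north with a 60-degree field of view sees the course points whose bearing is within 30 degrees of north, i.e. the stretch directly opposite it; repeating this test once per second against the logged location and orientation gives the kind of field-of-view approximation described above.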

Screenshot from prototype tagging interface.

One of the issues with the mobile user experience was that it slowed down users who wanted to quickly return to recording runners: the application did not allow a fast enough switch between capture and reviewing footage, so the tagging process immediately after capture was restricting the amount of capture that took place. In addition, runner numbers were not always clear enough in the footage to tag during playback, so spectators had to remember the numbers; several took to reading the numbers aloud while filming so they could hear them back in the footage.

In comparison to previous findings from my own research (on citizen journalism), the design of the mobile interface resulted in all of the videos being captured in landscape orientation, which is much more pleasant for potential online viewers. Metadata generated from the event appears to be of decent quality, and we are now going through the process of analysing the data to determine the accuracy of the runner location tagging.

For the second iteration of the application we are moving towards using our own camera application with a live tagging system to solve some of the problems seen in the initial trials. This will give us access to the lower-level camera functionality and allow us to adjust the video capture encoding to reduce the file size for upload. To accompany this, a newly designed front screen will show the live locations of runners who are requesting footage alongside thumbnails of footage that has been submitted by other spectators.
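A back-of-the-envelope calculation shows why the capture encoding matters for upload. The bitrates below are illustrative defaults, not measurements from our app: at a typical 1080p rate, a minute of footage is far too heavy to push over a mobile connection mid-event.

```python
# Rough clip-size arithmetic: file size (bytes) ~= bitrate (bits/s)
# * duration (s) / 8. The bitrates are illustrative, not measured.
def clip_size_mb(bitrate_bps, duration_s):
    return bitrate_bps * duration_s / 8 / 1_000_000

high = clip_size_mb(8_000_000, 60)  # typical 1080p default: ~60 MB/min
low = clip_size_mb(2_000_000, 60)   # reduced, upload-friendly: ~15 MB/min
```

Dropping the encoding bitrate from 8 Mbps to 2 Mbps cuts a one-minute clip from roughly 60 MB to 15 MB, which is the difference between an upload finishing while the event is still running and it not finishing at all.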

First test with real users


In this blog post I’m going to talk a little about our first user trial of the mobile application developed by Tim for the capture and tagging of user videos at the marathon. The functionality of the app allowed users to film video while the system recorded timing, locative and view cone metadata. (Check out Tim’s post on the application to find out about view cones.) Subsequently they could replay the recording and add tags to identify runners by tapping on their image in the video and entering their runner number. The intention would then be to use the metadata and tags to position the runner within the marathon course as part of the presentation to an online user.

At this early stage of development the evaluation we were planning was a formative user trial in a realistic context. We had two main goals for the study. Firstly, to test out the post hoc method of tagging that we had developed: how would users find the task of selecting runners from the video, is the tagging interface intuitive, and what could we do as designers of the system to motivate them to complete it? Secondly, to investigate the practices of capturing video at a running event: do users try to catch the overall atmosphere of the event, or are they more focused on capturing particular runners? In conducting the trial we teamed up with the University of Nottingham Student Union’s athletics club, who have regular training sessions on our campuses. Luckily for us they were willing partners in the trial. However, during our negotiations with the club we talked quite a lot about not interfering with the runners’ progress and focus. Even though we were only looking at recording a training session, the club made it really clear to us that having the runners carry mobiles, or having technology interfere with the pure sport of distance and track running, was off limits. We were able to reassure them that we didn’t think this would be a problem; we were focusing on the use of technology by the spectators.
   
I only have room here to talk briefly about the results, so I will just mention a couple of interesting things that came out of the trial. We found that the post hoc tagging system, whilst an effective method of tagging the runners within the videos, came with a couple of caveats. Firstly, when users were tagging they weren’t capturing, and several users reported that they had missed some of the action whilst trying to tag runners. Secondly, there were some cognitive issues with tagging after the video was taken. We found that not all videos had a clear image of the runners’ numbers, so users had to try and remember a number to enter it for tagging. This is an obvious usability issue for our system; what was interesting, however, was that users attempted to deal with the problem by calling out the number whilst videoing, so that when they watched the video back the number was in the audio stream for them to enter. Maybe an idea for a more technologically complex system would be to use speech-to-text APIs to recognise these numbers automatically as the video is captured.

Thursday, 16 May 2013

First Test of Marathon Live App, at a real event!


On a sunny, if cold, morning in April a few of the team descended into the snowy Derbyshire countryside to test the first incarnation of the Marathon Live App, as well as to get some first-hand experience of being a spectator at a running event. The event itself was the Wirksworth Wiggle, a 10 km, predominantly cross-country race of approximately 100 competitors, raising funds for the local firefighting service.

On arrival, we split up as best we could, with one of the team situated at the start/finish line and two others placed along the main straight, about two-thirds of the way into the race. It became apparent quite early that with such a short event (the top runners would easily finish in under an hour) we would have to settle on one position to watch from, especially as the cross-country nature of the course meant driving around the route was not an option.

During the event the main features of the app worked well, allowing the user to video the event easily when chosen, and also to identify or ‘tag’ the runners in the video retrospectively after recording had finished. That said, issues were discovered regarding the ease, and perhaps more importantly the timing, of tagging, which will need to be addressed in future development. Overall, however, the trial was a success, with a large proportion of the competitors recorded and identified in more than one location. A typical video clip taken can be found here:


Behavioural trends were also noticed with regard to the different agents involved, be they spectators or competitors. It was found that spectators’ recording habits depended on both race progression and geographical position. Towards the middle of the event, longer videos were taken, often of a panoramic ‘sweeping’ nature, perhaps in anticipation of the next group of runners appearing in view. Towards the end of the event, the videos recorded were much shorter and often static. Longer videos were also taken when the spectator was positioned along a straight part of the course, perhaps reflecting the fact that runners were visible from a greater distance.

The competitors’ reaction to being videoed was overwhelmingly positive in nature, with many runners saying hello and giving ‘thumbs-up’ gestures to the spectator and the device.

We hope to run a follow-up trial at an event in early to mid-June, to test version 2.0 of the app and continue our studies of spectator behaviour.



Initial survey of marathon spectators and their behaviour – Sam Howard


At the beginning of this project it was important that we grasped a basic understanding of the behaviour of marathon spectators. We wanted to know details such as why people spectate at marathons, what they do when they’re there, whether they take photos and videos, who they take photos and videos of, etc.
To answer these questions an online survey was created using Qualtrics (http://www.qualtrics.com/). Targeting marathon spectators is clearly more challenging than targeting marathon runners, but it was assumed that the two groups are not mutually exclusive and a link to the survey was posted on both the Runners World forum (http://www.runnersworld.co.uk/forum/) and the Running Bug forum (http://therunningbug.co.uk/rbforums/default.aspx). Interestingly, the Runners World forum is a much larger forum, but a far greater survey response was obtained from members of the much smaller Running Bug forum who were all very willing to help out with the research and seemed keen to be updated on future developments.
The survey had 62 respondents in total. The ages of respondents varied, with twenty-six participants aged 36-45, twenty aged 26-35, ten aged 46-55, five aged 16-25 and one aged 56-65. 41 of the survey respondents were male and the remaining 21 were female.
Below are some of the key findings of the survey…
·      80.6% said they were there to support a runner that they know, as opposed to being there to support a charity or due to living in the local area.
·      69.4% of respondents spectate with 1-4 other people. Only 1.6% spectate with 5+ people, but 14.5% said they vary between 1-4 and 5+.
·      70% of respondents who said they were there to support a runner that they know said they look for ways to pass the time before they see them.
·      88.7% of people spend time talking to the people around them.
·      58.1% of people spend time taking photos.  Of these people, 61.1% take photos on a Digital Camera and 38.9% take photos on a mobile phone. 100% of these people take photos of runners that they know, 41.6% take photos of Fun Runners and 41.6% take photos of Celebrities.
·      17.7% of people spend time taking videos. 72.7% of these people take videos on their mobile phone, 27.3% take them on a digital camera or video camera. 90.9% of people who take videos take them of runners that they know. 27.2% take them of Fun Runners and 63.6% take them of Celebrities.
·      95% of people spend time simply watching the runners.
·      Of the 47 people who spectate to see runners that they know, 34% said they move during the race to see their runner more than once, 59.6% sometimes do and 6.4% said they don’t.
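As a quick sanity check on the figures above (purely illustrative, not part of the survey analysis itself), each reported percentage should map back to a whole number of the 62 respondents:

```python
# With 62 respondents, each reported percentage corresponds to a whole
# number of people; recovering the counts makes the figures easier to
# reason about. Illustrative only -- not part of the survey analysis.
def count_from_pct(pct, n):
    """Number of respondents a percentage of n most likely represents."""
    return round(pct / 100 * n)

n = 62
talkers = count_from_pct(88.7, n)       # talk to the people around them
photo_takers = count_from_pct(58.1, n)  # spend time taking photos
video_takers = count_from_pct(17.7, n)  # spend time taking videos
```

So, for example, the 17.7% who take videos are 11 people, which is why the sub-percentages in that bullet fall on multiples of roughly 9% (one eleventh): 72.7% is 8 of the 11, 27.2% is 3, and 63.6% is 7.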
This survey was very useful as a preliminary study to help identify some of the key characteristics and behaviours of marathon spectators. It must be taken into consideration that the sample used was small and only taken from two sources, but it still shows some interesting patterns. It suggests that marathon spectators tend to be there to watch a runner that they know. It also suggests that a large portion of these people look for ways to pass the time before they see the runner they’re there for, and often move during the race to see that runner more than once. This is all useful information to take into consideration when designing our application.