Axon – a game for science

By Phil Stuart, How we did it

We’ve talked a lot about making games that are about something but with Axon, our recent game for the Wellcome Collection, we hope to have gone some way to making a game that’s not only about science, but for science too!

Before we explain in detail, here’s a little background about the game and its results. The game went live on Wellcome servers on Thursday 19th March. It rolled out on Kongregate, Miniclip and Newgrounds on 22nd March, followed by ArmorGames on 26th March.

The game hit 3 major peaks:

  1. On Saturday 24th March it gained traction (predominantly on Kongregate) with 159,793 game plays.
  2. On Tuesday 27th March the game appeared on ArmorGames pushing total plays to 248,998.
  3. On Thursday 29th March Axon was given badges on Kongregate, resulting in homepage promotion and a total of 361,880 plays.

To date, Friday 18th May, the game has been played 3,643,578 times from 1,322,704 loads (an average of 2.75 plays per load). Since those peaks, traffic has fallen in a predictable ‘long tail’, with the boost from the Kongregate badges lasting about a week.

We were very pleased with how engaged players were with the embedded learning in Axon. There have been a total of 507,642 clicks through to Wikipedia pages – a 14% click-through rate – and this positive ‘learning’ experience was confirmed by the community reaction.

Beyond the game

So far, so normal – we launched a game and have presented the results of how much it’s been played. However, what we hope makes this blog post different is what I referred to earlier – that this is a game not only about science but for science.

During development we were approached by Tom Stafford, a Lecturer in Psychology and Cognitive Science in the Department of Psychology at the University of Sheffield. What Tom wanted from us was access to the stats from one of our games so he could analyse millions of results, as opposed to the hundreds (or fewer) he was used to. After all, the larger the data set, the more interesting and conclusive the results that can be obtained.

Neuroscientists know that learning something changes the wiring of the brain, strengthening connections between areas that are active during a task and even causing physically observable changes in the size of the brain areas which are working hardest. As a psychologist, Tom was interested in tracking these changes by measuring behaviour, to better understand the neurological and biological basis of the acquisition of new skills. As such, Axon was the perfect fit. Not only would the player be making neurological connections within the game, but they’d also be making neurological connections in their own brain with each successive play. We hoped that by providing Tom with the game’s analytics he could analyse the neurological development of the players’ own brains as they got better and better scores through playing the game again and again.

With Wellcome’s permission we created two custom events in Google Analytics that would provide Tom with sets of data he could analyse, to help him gain a greater understanding of the accepted wisdom that “the more you practice a task, the better you get”.

After that introduction it’s probably best that we let Tom explain his initial results himself.

Tom Stafford’s analysis & results

Before I begin, here’s a brief overview of how these initial results were formed – what data I use and what data I don’t.

  • I collated data from the first 50 days on which the Axon game was played, recording anonymously the score each time the game was played.
  • I threw out everyone who played fewer than 10 times, and then sorted the remainder according to their all-time personal best scores.
  • I then plotted the average score on each attempt for two groups: in red, the top scorers (people whose personal best is among the top 10%) and, in blue, everyone else (whose personal best is in the bottom 90%).
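As a rough illustration, the filtering and grouping described above can be sketched in Python. Everything here is hypothetical – the record format, function name and thresholds are assumptions for illustration, not the actual analysis scripts:

```python
from collections import defaultdict

def learning_curves(records, min_plays=10, top_fraction=0.10):
    """Average score per attempt for top scorers vs everyone else.

    `records` is an iterable of (player_id, score) tuples in play order;
    attempt numbers are inferred per player. A sketch, not Tom's scripts.
    """
    history = defaultdict(list)          # player_id -> scores in play order
    for player, score in records:
        history[player].append(score)

    # Keep only players with at least `min_plays` attempts.
    history = {p: s for p, s in history.items() if len(s) >= min_plays}

    # Rank players by personal best and split off the top fraction.
    ranked = sorted(history, key=lambda p: max(history[p]), reverse=True)
    n_top = max(1, int(len(ranked) * top_fraction))
    top = set(ranked[:n_top])

    def avg_per_attempt(players):
        buckets = defaultdict(list)      # attempt index -> scores
        for p in players:
            for i, score in enumerate(history[p]):
                buckets[i].append(score)
        return {i: sum(v) / len(v) for i, v in buckets.items()}

    return avg_per_attempt(top), avg_per_attempt(set(ranked) - top)
```

Plotting the two returned dictionaries gives the red and blue curves described in the bullets.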

Now your personal best may have come on play 10 or play 100, but what you can see is that, on average:

  1. Players get better with each successive play (as expected, but nice to see)
  2. From the get-go – attempt 1 onwards – the top scorers score higher than everyone else
  3. Not only do the top scorers score higher, they get better *quicker*

I call this the “anti-Gladwell” result, because it contradicts the 10,000-hours rule of practice from K. A. Ericsson that Gladwell popularised. By attempt 4 the best players are scoring better, on average, than the rest of us will ever score. Because the blue line flattens out, no amount of practice will turn the (average) ‘everyone else’ player into a top scorer.

Does learning increase proportionately with the number of game plays?

Obviously, the first thing I was interested in was whether I could see learning occur as people played the game more. One simple way to do this is to calculate the average score on people’s first attempt, second attempt, and so on.

The graph shows that, on average, people get better rapidly. Note also how smooth the line drawn from the data points is. Individual performances in the game vary a lot (sometimes you have a lucky streak, sometimes you crash and burn immediately for no good reason), but with so many players these differences average out and you can see a clear underlying pattern.

Next, I wanted to see if learning continued beyond the initial few goes. You can imagine that some things in life you either get or you don’t, and once you’ve got them there isn’t too much progress to make. Could the Axon game be like this? Or do you keep getting better and better?

The blue line is the average score on attempts 10–100. There’s still a discernible improvement, but the pattern isn’t so clear. Fewer and fewer people play the game at the higher attempt numbers – this is shown in the lower plot with the red line – and this drop-off means that the data points get noisier. I should note, though, that the number of players who played more than 100 times is still greater than 1,000 (which is about 200 times more than take part in most published psychology experiments).
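A sketch of how those later-attempt averages and the per-attempt player counts might be computed, assuming a hypothetical `player -> scores` mapping rather than the real Google Analytics export:

```python
def later_learning(histories, lo=10, hi=100):
    """Average score and sample size for attempts `lo`..`hi` (1-indexed).

    `histories` maps each player to their list of scores in play order.
    Illustrative helper, not the published analysis code.
    """
    avg, n_players = {}, {}
    for attempt in range(lo, hi + 1):
        # Only players who reached this attempt number contribute a score.
        scores = [h[attempt - 1] for h in histories.values() if len(h) >= attempt]
        if scores:
            avg[attempt] = sum(scores) / len(scores)
            n_players[attempt] = len(scores)
    return avg, n_players
```

`avg` gives the blue line and `n_players` the red drop-off curve; the shrinking sample sizes are exactly why the later points get noisier.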

We’ve seen how the attempt number predicts score; now let’s look at the data from the other way around. I considered all the people who played the game and coded them by their all-time top score (their ‘personal best’) and how many times they’d played the game. Then I ranked them all, from the people who completely sucked – almost everyone else had a personal best better than theirs – to the aces, who had personal bests better than nearly everyone else.

The top scorers are to the right, the bottom scorers to the left. The y-axis is the average number of attempts for people at each point on this scale. The fact that the line moves up as it moves to the right is unsurprising – people who score higher have more attempts, and everybody knows practice makes perfect.

What’s interesting is the acceleration of the curve as we get to people who are in the top 25% of players. These guys don’t just practice more, they practice a lot more.
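The ranking just described might look something like this in Python – binning players from worst to best personal best and averaging their play counts per bin (the function name, data shape and binning are illustrative assumptions):

```python
def attempts_by_rank(histories, n_bins=10):
    """Bin players by personal-best rank; mean attempt count per bin.

    Returns a list of average play counts, worst scorers first.
    A sketch of the ranking described above, not the published code.
    """
    # Sort players' score histories from worst to best personal best.
    players = sorted(histories.values(), key=max)
    bins = [[] for _ in range(n_bins)]
    for rank, history in enumerate(players):
        b = min(n_bins - 1, rank * n_bins // len(players))
        bins[b].append(len(history))          # attempt count for this player
    return [sum(b) / len(b) if b else 0.0 for b in bins]
```

The rightmost bins correspond to the top scorers; the acceleration shows up as the last few averages climbing much faster than the rest.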

The implications fit our intuitions of other learning domains, such as sports, but it’s nice to see them validated here. If you want to be among the best you don’t need to just work harder, you need to work a lot harder. And if you want to be the best of the best you need to work a lot harder than even the best! The reason for this need for increased levels of practice becomes clear if we go back to look at the learning curves, but from another angle.

The earlier graph showed the average score for each attempt number. A different approach is to look at the scores for each individual, rather than for each attempt, but sort the individuals by the number of attempts they had. Doing this shows that people who have more attempts have, on average, higher personal bests. This is what we would expect from the first graph, but can’t be assumed. The implication is that individuals keep learning as they practice, but look at the rate of learning:

The curve decelerates, rather than accelerates, with increased attempts. It gets harder and harder to improve, as we all know from practicing things in our own lives. I’ve added error bars to the graph to give an indication of how reliable the data are (again, at the higher attempt numbers we have less data to build the points from).
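A sketch of this last analysis, grouping players by total attempts and attaching a standard error of the mean to each point (the function name and grouping are assumptions, not the published code):

```python
from collections import defaultdict
from math import sqrt

def best_by_practice(histories):
    """Mean personal best (with standard error) per total-attempt count.

    Groups players by how many times they played; the error bars shrink
    as the group size grows. Hypothetical helper for illustration.
    """
    groups = defaultdict(list)           # attempt count -> personal bests
    for history in histories.values():
        groups[len(history)].append(max(history))

    out = {}
    for n_attempts, bests in groups.items():
        mean = sum(bests) / len(bests)
        var = sum((b - mean) ** 2 for b in bests) / len(bests)
        sem = sqrt(var / len(bests))     # standard error of the mean
        out[n_attempts] = (mean, sem)
    return out
```

Each `(mean, sem)` pair is one point with its error bar; the wider bars at high attempt counts reflect the smaller groups there.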

My hope is that the patterns that hold here can be used to explore ideas from the study of learning more generally. I’m enormously grateful to everyone who has helped so far (Charles, Phil & Cam at Preloaded, Stu and Alex at the University of Sheffield, and Toby Barnes for introductions). I couldn’t have done this analysis without advice and considerable help from Mike Dewar. The data provided by players of the Axon game are a great opportunity to throw a lens on something we all do every day – learn – but something that is very hard to study in the lab.

There are more surprises yet to uncover but we wanted to share the headline results. One of the drawbacks of a large data set is that it can take days to run the analytic scripts.

What next?

It’s incredibly exciting to think that, as a by-product of making a fun, educational game for the end user, we can also help society learn more about itself, and we hope this is the start of further collaborations between Preloaded and the wider academic community.

Phil is the co-founder and Creative Director of Preloaded. A fanatical gamer and a champion for the power of games which do more than just entertain, he is responsible for the studio’s ‘Games with Purpose’ vision and quality.
