This week our team traded our offices for the Plymouth coastline as we headed out to the beach to test the latest version of the Bioblitz App we are developing in partnership with the Rock Pool Project.
Bioblitz, what is it?
“A bioblitz is an event that focuses on finding and identifying as many species as possible in a specific area over a short period of time, usually 24 hours.” - The Rock Pool Project
In this case, the Rock Pool Project makes this a competition, splitting people into teams that compete to score as many points as possible by making discoveries. A lot of this is currently managed manually, and the Rock Pool Project team asked us to support them in digitising the process to make it more engaging and easier for them to manage.
Why did we need to run our user test in the real world?
Testing in the real world is both the most exciting and the most nerve-wracking part of any delivery process. No matter how much you test during development, nothing quite matches putting the product in a user's hands and getting a fresh pair of eyes, often more familiar with the process you are trying to improve, onto it. In a good way, it brings an element of chaos: unpredicted behaviours and environmental factors come into play that you need to mitigate. Regardless of your efforts, you just don't know what might happen, but finding out means you can iterate and improve.
For the Bioblitz app this was even more important than usual, as the game is very situational. It runs at a specific time for only 90 minutes, in specific outdoor locations where 4G and 5G coverage can be limited; it lets users upload photos of their discoveries, uses AI to identify the species, and awards points based on rarity. Accurately emulating all of these aspects at once during development isn't really practical, so running a test event on location meant everything above could be tested together, with the added factor of real user input and behaviours.
So what were our key findings?
The output from the session has provided really valuable insight into areas where the app can improve. In particular, the session highlighted device variations we need to account for, specifically around image size and geolocation handling.
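To illustrate the kind of geolocation handling we mean, here is a minimal sketch (assuming a browser-based client, not the app's actual code) that wraps the standard Geolocation API with an explicit timeout and a graceful fallback, so a slow or denied GPS fix on one device doesn't block a discovery from being recorded:

```typescript
// Hypothetical helper: resolve to a position, or null if the device cannot
// provide one in time. Timeout and cache values are illustrative only.
function getPosition(timeoutMs = 10_000): Promise<GeolocationPosition | null> {
  return new Promise((resolve) => {
    if (!("geolocation" in navigator)) {
      resolve(null); // device or browser has no geolocation support
      return;
    }
    navigator.geolocation.getCurrentPosition(
      (position) => resolve(position),
      () => resolve(null), // permission denied, unavailable, or timed out
      { enableHighAccuracy: true, timeout: timeoutMs, maximumAge: 30_000 }
    );
  });
}
```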
One of our key findings was a high failure rate (23%) on the API endpoint we use for AI image analysis. Some users struggled to get results when taking pictures through the app, while others had a 100% success rate. We were already aware that the iNaturalist app would sometimes fail uploads due to image size, so we established this as the first possible cause to investigate. Our suspicion proved right, and we have implemented an image resizer as part of the upload process; we will test again to see whether this fixes the issue.
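As a rough sketch of what that resize step can look like (assuming a browser-based client; the 1600px cap and JPEG quality are illustrative values, not our production settings), a photo can be downscaled with a canvas before it is sent to the identification endpoint:

```typescript
// Hypothetical example: downscale a camera photo before upload so large
// images stay within the analysis API's size limits.
async function resizeImage(file: File, maxDim = 1600): Promise<Blob> {
  const bitmap = await createImageBitmap(file);
  const scale = Math.min(1, maxDim / Math.max(bitmap.width, bitmap.height));
  const canvas = document.createElement("canvas");
  canvas.width = Math.round(bitmap.width * scale);
  canvas.height = Math.round(bitmap.height * scale);
  const ctx = canvas.getContext("2d");
  if (!ctx) throw new Error("Canvas 2D context unavailable");
  ctx.drawImage(bitmap, 0, 0, canvas.width, canvas.height);
  return new Promise((resolve, reject) =>
    canvas.toBlob(
      (blob) => (blob ? resolve(blob) : reject(new Error("Resize failed"))),
      "image/jpeg",
      0.8 // illustrative quality setting
    )
  );
}
```

A smaller JPEG also uploads faster, which matters on a beach where 4G and 5G coverage is patchy.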
We also found that some testers had previous experience with the iNaturalist application, which had given them some learnt behaviours. In future iterations we will need to revisit how we show suggested species, to align with those expectations and make sure users have enough information to choose the right species for their discovery.
Overall, the feedback has been very positive, with users rating the app's usability and design highly. We will take our learnings from the live test into the next iteration to make key improvements.
So, why is user testing important?
As evidenced above, user testing is really important: it helps us identify usability issues, flag potential technical issues with more niche devices, and see whether the product we are developing is on the right track. It also gives us the evidence we need to prioritise the next iteration of development and make sure we are designing a platform that is fit for purpose.
How to do user testing?
If you want to learn how to structure your user testing session, read about our 6 easy steps to cracking user testing.