A mini usability test

I did a mini usability test last week at work. Knowing that asking for more time or budget on things can be scary, I decided it would be ok to just do it as part of the QA process for a website we are about to launch. Here’s a disclaimer: I didn’t follow all the rules.

In an ideal world, I would have had actual users and not employees of my company. But like I said in my last post, I’m pushing a boulder up a hill, and any little thing I can squeeze in helps. It's good practice, and once momentum builds I can start asking for things.

How I began

Our typical “QA” process for launching websites includes a few hours of employees’ time spent clicking all the links, calling phone numbers, spell-checking, and copy-proofing. It may seem obvious, but this is not a complete test of whether things are kosher. In an ideal scenario (and, if it’s up to me, on all future projects), we would test earlier in the design/development cycle, as early as the wireframe stage. The idea is to catch things when they’re easy and cheap to fix. Obv! BUT... old dogs, new tricks, eh?

Of course I wanted someone proofing the site, but I also wanted someone using it to complete some tasks. I am very familiar with this client and their background, and the site we are launching is a redesign of an existing one, so the tasks remained very much the same as before, though now they should be easier and faster to complete. I am also way too close to the project to be a good test subject.

I wrote down four scenarios for my test subjects and asked them to complete a task for each. I selected one person for the mobile version of the site and one person for the desktop version. My test subjects were removed enough from this client and project that I felt they were as safe a bet as I could find in the office. They were also in two different age brackets, representing a wider range of the user base.

Next, I sat down with each subject and recorded video with my iPhone of each task I asked them to perform. I was careful to tell them only the scenario, the task, and a reminder that there were no wrong answers.

There were a couple of apparent changes that needed to be made based on navigation habits, but as I watched the videos afterward, I also noticed subtle gestures each subject made, on mobile and desktop alike, that highlighted something I wasn’t looking for in the original questions.

One of them was that when a thumbnail graphic was present, the instinctive behavior was to click or tap on it. When the tapped images didn’t link anywhere, it didn’t seem to slow the subjects down, and I don’t think they even noticed they had done it, but I was excited inside to have found a little easter egg of truth I wasn’t initially looking for. People click on thumbnails. Again, obv, right? Well, seeing the evidence made this a fact.

The other thing that became apparent was that both subjects used immediate scrolling to become acclimated to the site — to get to know it a bit before reading anything. It’s similar to walking around a new neighborhood when you first move in, just to get your bearings. Would this happen with different subjects on a different site? I can’t make that assumption, but it was true for this one.

My final finding was a feature I believed we could up-sell to the client in a future iteration of their site, as it was something both test subjects wanted to do to make a certain task faster to accomplish.

My takeaway for this exercise

1) It was fairly easy to create and moderate a small usability test, and it provided me with useful indications of what to modify on the site before launch. Some of the things found were issues I’d noticed earlier on, but since I was trying to remove my personal bias from the equation as much as I could, having the test validate my findings was satisfying.

2) It was not that time consuming. Recording each subject’s journey took just under 15 minutes (total for all four scenarios), and it took me another 45 minutes to watch the videos and write down my recommendations for what to change on the site. (Making the changes was outside this scope, but I’m almost certain the client would have asked about several of them if we hadn’t caught them.)

How would I test differently in the future?

I would use 2 or 3 more people. An ideal usability test should have about 5 subjects, depending on how involved the test is. I could have easily asked friends not employed by my company. Even if employees are “outsiders” to the project I’m working on, they still share a bias in wanting the company to be successful, as well as being mindful, for budgetary reasons, of time spent on any company-related tasks.

I would also try this earlier, during the wireframe stage. Axure makes it so easy to create interactive, clickable wireframes that it seems silly not to test at that point.