Quantified prototype testing early in the design cycle with Axure, Mouseflow, Piwik and R


#1

Rigorous statistical methods are used to validate the large A/B/N testing programs run by leading online players. But is it possible to obtain reliable statistical insights from small-sample tests run on design prototypes that are created and iterated early in the product cycle?

The answer is a resounding ‘yes!’ …the potential outputs of this low-cost, high-touch approach include (a quick R sketch of a few of these follows the list):

  • Time on task analysis
  • Error rate analysis
  • Psychometric instrumentation (SEQ, SUS, UMUX, SUPR-Q, etc.)
  • Open-ended qualitative surveys
  • Task completion analysis
  • A/B/N feature testing
  • Screen recording
  • Heatmap recording
  • Think-aloud elicitation
  • Funnel analysis
  • Problem matrix
  • Etc.
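
To make the small-sample claim concrete, here’s a minimal R sketch of the kind of numbers you can pull out of a dozen unmoderated sessions: an adjusted-Wald confidence interval for task completion, a log-scale confidence interval for time on task, and SUS scoring. The session data here are made up purely for illustration; only the formulas matter.

```r
# Hypothetical data: 12 unmoderated sessions on one task.
completions <- c(1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0)
n <- length(completions)
x <- sum(completions)

# Task completion rate with an adjusted-Wald 95% CI (works at small n).
z     <- qnorm(0.975)
p_adj <- (x + z^2 / 2) / (n + z^2)
se    <- sqrt(p_adj * (1 - p_adj) / (n + z^2))
cat(sprintf("Completion: %.0f%% (95%% CI %.0f%%-%.0f%%)\n",
            100 * x / n, 100 * (p_adj - z * se), 100 * (p_adj + z * se)))

# Time on task: t-based CI on the log scale, reported as a geometric mean
# (task times are typically right-skewed, so the log transform helps).
secs <- c(41, 63, 52, 88, 39, 120, 57, 49, 73, 66, 95, 44)
lt   <- log(secs)
ci   <- mean(lt) + c(-1, 1) * qt(0.975, n - 1) * sd(lt) / sqrt(n)
cat(sprintf("Geometric mean time: %.0fs (95%% CI %.0f-%.0fs)\n",
            exp(mean(lt)), exp(ci[1]), exp(ci[2])))

# SUS scoring: odd items contribute (x - 1), even items (5 - x), sum * 2.5.
sus_score <- function(r) sum(r[c(1, 3, 5, 7, 9)] - 1,
                             5 - r[c(2, 4, 6, 8, 10)]) * 2.5
sus_score(c(4, 2, 5, 1, 4, 2, 4, 2, 5, 1))  # one hypothetical respondent: 85
```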

Kids, please don’t try this at home unless your prototype tool is Axure! :wink: …Figma, XD, InVision, Sketch, etc. won’t do this as far as I know.

For more UX/usability brainstorms, please see: https://plexgraf.com/sk-portfolio/


#2

In the above video… I was so busy trying to demo Axure pushing events into the analytics repositories and RStudio that I didn’t fully discuss the design philosophy that springs from this unmoderated approach.
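
For the analysis end of that pipeline, here’s a rough sketch of how Axure-pushed events could be pulled out of a Piwik instance into R via Piwik’s Reporting API. The URL, site id, and token are placeholders, and the right method (Events.getAction vs. Events.getCategory / Events.getName) depends on how you coded your events:

```r
library(httr)
library(jsonlite)

# Query the Piwik Reporting API for event actions (placeholder URL/token).
resp <- GET("https://piwik.example.com/index.php", query = list(
  module     = "API",
  method     = "Events.getAction",
  idSite     = 1,
  period     = "day",
  date       = "today",
  format     = "JSON",
  token_auth = "YOUR_TOKEN"
))

# Parse the JSON response into a data frame, ready for analysis in R.
events <- fromJSON(content(resp, as = "text", encoding = "UTF-8"))
head(events)  # one row per event action, with visit/hit counts
```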

My background is in human factors, psych, linguistics, learning science, cogsci… etc.

So I fully buy into the view that:

"If you have to choose between what the user says and what they do… CHOOSE WHAT THEY DO ! "

Corollaries

A) Users can’t tell you why they do what they do
B) You are not the user… you can’t fully plumb their minds
C) It’s extremely easy to accidentally inject bias, constraints and leading ideas into ethnography

I agree that there’s nothing better than talking to users in their native context, but in general the more you stay out of the elicitation process, especially initially… the better.

When I see UX folks doing screen walkthroughs, sitting with a subject in a conversational way… I cringe. Too often this moves away from true user behaviors and needs, not towards them.

Philosophy… when testing a new feature or flow:

After the initial interviews, concept coding, categorization, profiles, scenarios, and functional prototype:

  1. Identify a group of 5-500 users who you have not talked to yet.
  2. Run them through an unmoderated usability test session with Axure, similar to what’s in the video.
  3. At the end of the session, throw in psychometric questions… and possibly an open-ended feedback request.
  4. Get a screen recording of every session.
  5. Review the session data and code form inputs into problems, ideas, ratings, etc.
  6. Review the screen caps and identify anomalies (a small flagging sketch follows this list).
  7. Do interviews and follow-up walkthroughs with the users who had especially good/bad/interesting results in the initial test.
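
As a sketch of step 6, here’s one way the anomaly flagging could look in R: flag failed tasks plus time-on-task outliers on the log scale, and hand that shortlist to step 7. The data frame and column names are hypothetical, not an Axure or Piwik export schema.

```r
# Hypothetical session log: one row per participant on one task.
sessions <- data.frame(
  user     = paste0("P", 1:10),
  complete = c(1, 1, 0, 1, 1, 1, 1, 0, 1, 1),
  secs     = c(52, 61, 240, 48, 55, 19, 70, 310, 63, 58)
)

# z-score the log times; |z| > 1.5 catches both unusually slow and
# unusually fast sessions (fast can be "especially good" results).
sessions$z <- scale(log(sessions$secs))[, 1]

# Shortlist for step 7: failures plus time outliers.
subset(sessions, complete == 0 | abs(z) > 1.5)
```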

This way you are introducing your biases and cognitive influences AFTER the primary research… and you can efficiently test a lot of users cheaply.

There are many ways to do this, of course… but that’s my current thinking: don’t poison the well! Get a really rich body of unmoderated research… and then follow up in person.

And one more wrinkle… the Axure event push test method in the video can be used for moderated sessions too… and it probably should be, because it does such a great job of unobtrusive event capture.

Please share your thoughts and push back if this doesn’t make sense!!

thanks…