Bachelor's degree project / research
Surveys: can we improve them?
UX research

OVERVIEW

Towards the end of my computer science degree, I was confronted by the fact that I needed to write a research paper as a graduation requirement.

Disclaimer: this is a very compact version of my graduate research. If you’d like to review the whole document, with sources and so on, you can get it by clicking here

INCEPTION

My fellow seniors were going to write papers on things they were really passionate about - Artificial Intelligence, Algorithms, and so on. So I decided to do mine on something I’m passionate about: Human-Computer Interaction ❤️.

As I was searching for research ideas, it struck me: most survey engines leave a lot to be desired from a UX perspective.

There’s no need to sugarcoat it: we’ve all been there before, with customer satisfaction surveys, work surveys, or, in my case, academic performance surveys.

To illustrate my frustrations with survey engines, here’s a quick list of symptoms I experienced when answering one of these academic performance surveys:

– Endless scrolling (a.k.a. OMG when is this going to end?)

– That little voice in your head constantly asking: “Is this the last question I have to answer?”

– Am I supposed to write a number here? 🤔

So at that point it was clear: we needed to fight for the users who have to fill out these surveys every semester, and hopefully get more meaningful answers from the students in return.

THE PROJECT

So I set out to research the question: would changes in the core user experience of filling out a survey affect user perceptions? The change I chose to test was navigation: scrolling surveys vs. fancy-pants navigation (one question at a time, in full screen).

In the past few years, some survey engines like typeform and survmetrics have introduced this premise. But there was a little problem: neither offered scientific proof that the change actually worked better for the end user.

GETTING TO WORK

For simplicity, I’ll briefly explain my testing model:

– 66 tests: 33 users answered the survey in scroll mode, 33 in fancy-pants navigation. (Quick note: the questions in each variant were exactly the same.)

– The user interactions and key metrics (such as completion time) were recorded on the server. When users finished, they were asked 3 simple follow-up questions about: ease of completion, perceived speed of completion, and perceived length of the survey. (A rough sketch of this setup follows the list.)
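To make that setup concrete, here’s a minimal sketch of how a test harness like this could split users between the two variants and record completion metrics on the server. Everything in it (the assign_variant and log_completion names, the record fields) is a hypothetical illustration, not the actual code from the study.

    import time

    # Hypothetical sketch of the study setup; not the actual research code.
    VARIANTS = ("long_form", "single_task")  # scroll vs. fancy-pants navigation

    def assign_variant(assignments):
        """Alternate assignments so each variant ends up with 33 of the 66 users."""
        return VARIANTS[len(assignments) % 2]

    def log_completion(log, user_id, variant, started_at, followup):
        """Store the key metrics: completion time plus the three follow-up
        answers (ease, perceived speed, perceived length)."""
        log.append({
            "user": user_id,
            "variant": variant,
            "completion_seconds": time.time() - started_at,
            "followup": followup,
        })

    # Example: one recorded test session.
    assignments, log = {}, []
    assignments["user-01"] = assign_variant(assignments)
    start = time.time()
    # ... the user answers the survey in the assigned variant ...
    log_completion(log, "user-01", assignments["user-01"], start,
                   {"ease": "easy", "speed": "fast", "length": "normal"})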

THE RESULTS

So there I was with a ton of data about survey engine interaction. What was I going to do with all this power in my hands? OK, that’s a bit dramatic. Let’s jump into the graphics:

For the following sections we’ll be referring to the scroll navigation surveys as Long forms and the fancy-pants navigation surveys as Single task forms.


Perceived ease of answering

The idea of this metric was for the users to tell us how they felt after answering the survey. As you can see in the graphic, there’s no big difference in perceptions here, though one person felt that answering one question at a time was tedious 😰


Perceived speed of completion

This metric aimed to get a feel for how users perceived the time investment in each variant of the survey. Here we noticed our first discrepancy in the users’ perceptions: single task form users felt that completion was fast, even though we recorded a higher average completion time for this variant vs. the long form.
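For what it’s worth, checking that discrepancy against the raw data just means averaging the logged completion times per group. Here’s a hypothetical sketch reusing the record format from the earlier snippet (the numbers in the final comment are made up for illustration):

    from statistics import mean

    def average_completion_by_variant(log):
        """Group the logged sessions by variant and average their completion times."""
        grouped = {}
        for record in log:
            grouped.setdefault(record["variant"], []).append(record["completion_seconds"])
        return {variant: mean(times) for variant, times in grouped.items()}

    # Illustrative output (made-up numbers): the single task variant takes
    # longer on average, even though users perceive it as faster.
    # {'long_form': 412.0, 'single_task': 455.0}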


Perceived length

Our final metric was related to perceived length: here we wanted to know whether users felt the survey included a normal amount of questions, few questions, or a ton of questions.

Here we also noticed something interesting: 10 of the 33 users who answered the single task form (roughly 30%) felt that the exact same survey was shorter than it seemed when presented as a long form.

CONCLUSIONS

Based on the data presented here and the more extensive information included in my full document, we concluded that the fancy-pants navigation was perceived more favorably by users, but also made the process of answering slower. Does that mean fancy-pants navigation is bad? No, it serves a purpose; maybe not the let’s-answer-this-fast purpose, but it makes the user feel better than scrolling through a billion questions.

There are a lot of future possibilities here; some that come to mind are:

– Does this navigation affect the quality of answers a survey gets?

– Does fancy-pants navigation reduce user errors?

And the sky’s the limit. Thanks a lot for taking the time to read this; I would love to hear your feedback. This was my first “scientific” research project in the field, so I’d be happy to spark some discussion here.

Want more details? Remember, you can download my full document here.