How do you bias a survey to get just the result you want?
Bias the device
Here’s how SNCF does it. They put some kind of device in the newly renovated Saint-Lazare station. The display invites passers-by to say how much they love the new station (of course it is beautiful: it’s brand new, full of natural light, and though we suffered through years of work, it was worth the pain). You have two choices: either click a pink button with a big heart shape (meaning “I love it”), or take a picture of a QR code “to explain why you don’t love it”.
The surest way to get more than 90% approval.
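To see why the asymmetry alone is enough, here is a minimal simulation with entirely made-up numbers: suppose the true approval is only 60%, but clicking a heart button is easy while scanning a QR code and filling in a form is tedious, so the two groups respond at very different rates.

```python
import random

random.seed(42)

# Hypothetical assumptions (none of these figures come from SNCF):
# true approval is 60%; 10% of happy visitors bother to click the
# button, but only 0.5% of unhappy visitors go through the QR code
# and the form.
TRUE_APPROVAL = 0.60
P_CLICK_IF_HAPPY = 0.10
P_FORM_IF_UNHAPPY = 0.005

likes = dislikes = 0
for _ in range(100_000):  # visitors passing near the device
    happy = random.random() < TRUE_APPROVAL
    if happy and random.random() < P_CLICK_IF_HAPPY:
        likes += 1
    elif not happy and random.random() < P_FORM_IF_UNHAPPY:
        dislikes += 1

observed = likes / (likes + dislikes)
print(f"true approval: {TRUE_APPROVAL:.0%}, observed: {observed:.0%}")
```

With these assumed response rates, the device reports an approval rate in the high nineties even though a true majority of only 60% loves the station: the displayed figure measures friction, not opinion.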
Bias the questionnaire
But it gets better. After clicking the button a few times (OK, maybe they filter out that kind of childish behavior), I scanned the QR code. There, after a long wait, I was redirected to a page saying “3221 people love the new station, and you?” with… a button to say “I love it”, and a smaller one leading to a new page where you can enter a message. All in a positive tone (help us improve!). So cute.
I left a message saying something about statistics and reliability. So far, I have only received the standard thank-you reply.
What is it good for?
From the URL behind the QR code, I found their campaign uses a MyFeelBack solution. So there are customer-feedback professionals designing this kind of survey, and making money from it. Well, I can only hope they take into account the bias their procedure introduces when they analyze the results. How could they do this?
- compare the number of clicks with the number of people coming to the station every day (or, if they have the figure, the number who pass near the device),
- add a new device with a random question and estimate how many people want to click on it,
- count the number of customers from whom you can expect feedback with a button vs. a QR code and a form,
- only take the written feedback into account.
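Combining the last ideas, a crude correction is possible: if you can estimate each channel’s response rate (say, from the control device with a random question), you can reweight the raw counts. A minimal sketch, where the response rates and the number of written complaints are entirely hypothetical:

```python
def debias(likes: int, dislikes: int,
           p_like_response: float, p_dislike_response: float) -> float:
    """Estimate true approval by dividing each raw count by its
    channel's estimated response rate (inverse-probability weighting)."""
    est_likers = likes / p_like_response
    est_dislikers = dislikes / p_dislike_response
    return est_likers / (est_likers + est_dislikers)

# 3221 likes from the easy button (the figure shown on the page), and a
# hypothetical 50 written complaints via the QR code; assumed response
# rates of 10% for the button and 0.5% for the QR-code-plus-form path.
print(f"estimated true approval: {debias(3221, 50, 0.10, 0.005):.0%}")
```

Under these assumptions the headline figure of near-unanimous love shrinks to an estimated approval around three quarters, and the whole estimate hinges on response rates nobody has measured, which is exactly the point.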
Maybe the device is not entirely useless, after all? What do you think?