Are Your Surveys Gambling with Your Museum's Future?

Why do we gamble and how can we stop?


A few weeks ago, I asked if we could stop gambling in 2020.

If you don’t feel like rewinding to that post, I basically wrote that we gamble when our surveys and focus groups consist of questions built on conditionals like “if” and “would.”

For example, a question that begins something like, “What would you do if…”

“What if we offered free parking? Would you buy a membership then?”

I want to dig a little deeper into gambling because I feel like I see it all the time these days.

It’s risky to make decisions based on answers to speculative questions because people may not really know what they would do, or they may idealize their future behavior. Yet many organizations still favor attitudinal research over behavioral research. I find it jarring to see organizations asking these gambling questions because, in UX research and customer development, speculative questions aren’t considered reliable sources of insight.


Of course, you can make money gambling, sometimes. But most of us would not rely on gambling or playing the lottery to earn a living, so it seems like an inadvisable way for an organization to plan for its future.

Why do we gamble?

  • Habit or inertia. When we’re used to asking people, “How was your experience today?” it seems like a reasonable, short step to start asking questions along the lines of, “How might your experience be improved in the future?”

  • It seems like the right thing to do. Asking people simple, direct questions feels honest and straightforward. We may assume people understand our goals, and we may forget that even willing research participants are humans who don’t have a complete view of their own circumstances or behaviors.

  • Behavioral research doesn’t scale as easily as attitudinal research (surveys). People may (mistakenly) believe that findings from a study of eight qualified participants can’t be reliable or actionable, and no one wants to have to defend against accusations that their findings aren’t statistically significant. Running a survey seems like the easiest way to defend against those accusations: “Look at all these responses!” Surveys make for inexpensive armor. And, since we’ve been trained over time by funders, vendors, and marketing tools to believe that quantitative insights are what matter, survey results may feel truthier. It doesn’t matter if the question is hot garbage; we asked it 50,000 times, so it must be significant, right?

  • We also genuinely need shortcuts. We gamble when we’re short on resources — using other people’s data, for example. The problem is that other people’s research findings may not be relevant, or the research may only be a first step toward answering a hard question, like “Why do people fail to renew their membership?”

Finally, when we see others gambling, it may make gambling seem like a reasonable thing to do.

I’ve asked gambling questions in this newsletter before, to my eternal embarrassment. It’s little consolation that even the best research out there — from sources I respect and trust — will sometimes gamble.

Example: Last week, Colleen Dilenschneider shared an article on cultural competitors. It’s an insightful post on an important topic, yet it includes a chart showing an increasing number of people who say they prefer to stay home during a week off from work or school.

So, we can see that an increasing number of people say they prefer to stay home — but are those people actually staying home?

I do find that interesting, but I’m not sure how to apply it. What if respondents say they prefer to stay home because they’re cranky that their commutes are getting longer? What if their behavior hasn’t changed at all? The survey only takes us so far and no further.

Ask me what I’d like to do during my week off, and I’ll tell you that I prefer to spend my time at the gym and volunteering at church — but I’m ten pounds overweight and if you visit the Church of St. Patrick here in Huntington, don’t be surprised if no one knows my name.

I’m joking, but I’m also quite serious about the underlying point.

Asking lots of people what they prefer to do, rather than studying what people in a specific community actually do, is like counting page views on your website instead of tracking behavioral goals (e.g., purchasing a ticket or making a donation). A million people can visit a website intending to buy a ticket, but how much does it matter if none of them actually buy?

We’re dealing in vanity metrics — we’re gambling — when we talk about stated preferences alone.

How do we stop gambling?

We can:

  • Stop asking people what they would or might do and start studying what they have actually done.

  • Keep in mind who benefits when we blindly worship at the altar of statistical significance. (Often, it’s software companies that want you to buy more ads or run more surveys.)

  • Understand the limits of surveys. If we want to understand behavior, we need to introduce more diverse methods, like diary studies, journey mapping, and customer interviews.

  • Develop some sort of heuristic or framework to understand what research we can apply to our work and what isn’t relevant or useful. (This is something I’d like to explore in future letters.)

Thanks for reading,

Kyle

Kyle Bowen

Kyle is the founder of Museums as Progress.
