Giving Research a Bad Name
November 4, 2022
Kevin Schulman, Founder, DonorVoice and DVCanvass
I’m on record saying most research is garbage in, garbage out. I’ve produced some of it over the years, which gives me a seasoned perspective. And, at the time, I thought it was the greatest thing since sliced bread.
The reason most research sucks is threefold:
- Poorly designed survey. Survey design is a science.
- No analysis beyond descriptive. Listing the frequency distribution for Q1, Q2, Q3, … Q20 is not analysis; it’s a 4th-grade book report.
- Fancy analysis leading to nowhere.
I came across a great example of point 3 from Bain Consulting, which just goes to show that a bunch of Ivy League MBAs are no more qualified to do useful survey research and analysis than a team of monkeys, with no insult intended to the monkeys.
Let’s take the critique from the (literal) top:
- Personas? Uuuugghh. This is likely the output of a cluster analysis, which I’ve politely referred to as a Cluster-F$*# here. There are lots of technical reasons this is garbage, but for now, keep this question in mind: “How do I use this to change my marketing?” For the life of me I can’t imagine how I’d use this, and I’ve seen the fundraising-research equivalent produced by Bain wanna-bes many times over.
- Willingness to Take Sustainable Action. The survey had a list of “sustainable” activities one could do and asked about willingness for each. As a general rule, asking people to project forward about some future behavior when there is no consequence for a socially desirable answer produces, at best, not very good data.
- Frequency of Sustainable Purchases. We do a lot of research work in the fundraising sector. We never ask about giving behavior. Why? People give unreliable answers and it’s a waste of the donor’s time since we have good, reliable answers on the CRM. If you aren’t combining survey data with CRM behavior data then your analysis is always sub-par, a lot like one-hand clapping.
- Concern about sustainability. If a survey question is any good, it will have variance – i.e., not everyone answers the same way. Running that data through a ‘clustering’ to report a high/medium/low grouping is hardly insightful.
- All the demos. I was going to do each of these in turn but couldn’t put you or me through it. The demographics aren’t causal, they are merely descriptive, and my guess is the demographic differences between these groups are small, while the demo differences within a given “persona” are much larger than the chart suggests. But, ignore that supposition. These demos are only useful if I can use them to target and reverse engineer a way to find these mythical, ephemeral personas. Does anyone feel like aiming my ads for sustainable fashion at a younger, more affluent, more educated (correlated with affluence) audience is a searing insight worth the price of the study? And there is zero about how I should market differently (message, imagery).
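On the persona point: cluster algorithms manufacture groups whether or not any exist. Here’s a minimal sketch – a bare-bones k-means written in NumPy on completely made-up, structureless “survey” data (not any vendor’s actual method) – showing the algorithm dutifully returning four tidy “personas” from pure noise:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Bare-bones k-means: returns a cluster label for every row of X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each respondent to the nearest center.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each center (keep the old one if a cluster empties out).
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# 500 hypothetical "respondents" answering 8 Likert-style items at random:
rng = np.random.default_rng(42)
noise = rng.uniform(1, 7, size=(500, 8))

labels = kmeans(noise, k=4)
print("personas found:", len(set(labels.tolist())))
```

Ask for four clusters and you get four, data be damned. Nothing in the output tells you the segmentation is meaningless – which is exactly why “how do I use this to change my marketing?” is the question to press on.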
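And the survey-plus-CRM point is, mechanically, just a join on a donor ID. A minimal sketch with pandas and invented column names (your CRM schema will differ):

```python
import pandas as pd

# Hypothetical survey export: one row per respondent, stated attitudes only.
survey = pd.DataFrame({
    "donor_id": [101, 102, 103, 104],
    "stated_willingness": [5, 2, 4, 5],  # 1-7 scale
})

# Hypothetical CRM extract: actual giving behavior, the reliable record.
crm = pd.DataFrame({
    "donor_id": [101, 102, 103, 105],
    "gifts_last_12mo": [3, 0, 1, 2],
    "total_given": [150.0, 0.0, 25.0, 80.0],
})

# Left-join behavior onto answers; respondents with no CRM match
# surface as NaN instead of silently disappearing.
merged = survey.merge(crm, on="donor_id", how="left")
print(merged)
```

With stated willingness and actual gifts on the same row, you can finally test whether attitudes predict behavior – the analysis that never happens when the two datasets are never combined.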
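The variance complaint is also checkable before anyone runs a clustering. A stdlib-only sketch with invented numbers: socially desirable questions pile up at the top of the scale, and a question with no spread can’t support a high/medium/low split.

```python
from statistics import pvariance

# Invented answers to "How concerned are you about sustainability?" (1-7).
# Socially desirable questions tend to cluster at the top of the scale.
answers = [6, 7, 6, 7, 7, 6, 7, 6, 6, 7, 7, 6]

var = pvariance(answers)
print(f"variance: {var:.2f}")

# With essentially no spread, cutting this into high/medium/low
# "segments" manufactures groups that aren't really there.
if var < 0.5:
    print("almost no variance -- segmenting this question is theater")
```

If the variance is near zero, the high/medium/low “grouping” is just slicing noise.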
This research was destined to live and die in PowerPoint before it ever got started. That’s the silver lining: the only mistake bigger than commissioning it would be using it.
If you are considering an outside research firm:
- Press them on how, specifically, the substance of your marketing/fundraising will change after the PowerPoint is shared.
- Ask about the analytical plan.
- If they aren’t combining survey data with CRM data, hit the red pause button.
- If they are doing cluster analysis, end the meeting early and hire the monkeys.