Online survey research has been around almost as long as the modern internet. The concept of a survey goes back even further. For example, the Likert Scale that is still widely used in surveys today was invented back in 1932 by Rensis Likert. Since the 1990s, surveys have been a common format used to help organizations draw conclusions about where they are doing well and where they can improve.
Many of us at Clozd have years of experience working directly with survey feedback, and we recognize where this feedback is valuable. However, we’ve learned that when analyzing wins and losses in a complex B2B sales environment, surveys pose some unique challenges. Survey response rates and issues with data quality can prevent organizations from capturing the full story behind why they are winning and losing valuable business. If you are considering a survey-based approach to win-loss analysis, first consider these challenges:
Challenge I: B2B Survey Response Rates
While the popularity of online survey research is growing, response rates to B2B surveys have been declining for years. Depending on the sector, response rates can be very low. As Adelynne Chao, associate director of Morar-HPI, describes, “In the surveys we run for our clients, some B2B sectors can have response rates under 2%, vs. engaging consumer sectors which can be as high as 10%.” Response rates this low can make it difficult to paint an accurate and complete picture of why you're winning and losing.
Research shows that increasing response rates to surveys, especially in a B2B setting, requires making tradeoffs, sometimes very complex ones. If you strive for more detailed, actionable data, you'll have to probe more and ask more questions in your survey. However, doing so increases the risk of survey fatigue (when respondents become tired of the survey, pay less attention, and/or drop out entirely). When survey fatigue becomes a problem, you’ll almost certainly get fewer completed responses, and the responses you do get will be less truthful.
Some other common factors that influence response rates are:
- Lack of understanding of the target population and how best to reach them. Getting the survey to respondents’ screens requires strategies unique to the population, and the survey should be written using language and phrasing that the target group commonly understands.
- Frequency of survey requests, and length/complexity of the invitations. If potential respondents feel pestered by the number of invitations or overwhelmed by the content of the requests themselves, they’ll be less likely to take your survey.
- Prior completion of other similar surveys. Respondents who have already given feedback on similar topics are less likely to take the time to go through the process again, especially within a short window of time.
- Type/amount of incentive offered (if any). Time is money. Often, survey incentives are small, but if respondents feel it is not worth their time to take your survey, they won’t.
- Reputation of the researcher or sponsor. Respondents who strongly like or dislike the researcher or sponsor might provide either very extreme feedback, or no feedback at all.
- Asking what should already be known. If your survey invitation indicates that you already know certain information (e.g. location, email address, position, etc.) about each respondent but your survey asks for this information anyway, your respondents may be turned off by the redundancy and exit the survey.
There are a lot of ways to get useful information from your surveys, but making sure you are actually getting enough responses to act on is key. As you can probably tell, this is often very difficult to do, and even a high response rate will rarely tell the whole story.
Challenge II: B2B Survey Response Data
To minimize survey fatigue and boost response rates while still allowing for greater detail, researchers will often create a shorter survey made up mostly of open-ended questions. This approach allows for more flexibility and variety in responses, but it tends to attract people with the most extreme opinions, on both the positive and negative ends of the spectrum. This is also problematic, because feedback from across the spectrum is not only important to capture, it also tends to be more accurate.
Other challenges that influence the quality of the data you are able to collect using a survey alone are:
- Static phrasing of questions. If a respondent misinterprets what you are trying to learn from a particular question, you don’t have an opportunity to clarify in a survey.
- Content validation. Some platforms have options that can put guidelines around the content that the survey will accept, but these options can only do so much.
- Recency and primacy effects. Respondents may give feedback based only on their most recent interaction with the organization, or based on their first impression, rather than giving comprehensive feedback on their overall experience.
- Not giving an “out.” Ideally, a seasoned survey expert will test for this prior to launch, but some surveys leave out vital “Not Applicable,” “Other,” or “I don’t know” answer choices, forcing respondents to choose a response, often at random, that doesn’t reflect their experience at all.
If you’re not careful, collecting win-loss feedback through survey research alone risks only confirming what you already know without offering any new insight. While confirmation is important, a far better approach is to combine a relatively short, high-level survey with an interview-based study that can both confirm existing beliefs and surface new insight. Helping our clients succeed in this process from start to finish is exactly what we at Clozd are trained to do.
Value of Qualitative Interviews
Though there are challenges, gathering post-decision feedback via a survey is anything but worthless. In fact, we believe it is a very important step in gaining a full understanding of where you’re succeeding and where there is room to grow. However, far more can be captured and understood when relying just as heavily (if not more so) on a post-decision interview, especially one conducted by a neutral and unbiased third party.
As Clozd co-founder Andrew Peterson describes in greater depth in a related blog post, there are many reasons why allowing a neutral third party to conduct win-loss analysis using a qualitative, interview-focused methodology alongside a survey is the best approach:
- Fewer conflicts of interest. You’ll get much more honest and open feedback through an outside party than you would by approaching the buyer directly for feedback.
- Skilled interviewers. Rather than following a generic script, we employ consultants familiar with your industry, who can ask the right probing questions at the right time to gain deeper insight.
- Nuance. There is greater opportunity in an interview setting to ask clarifying questions and probe deeper, and this provides greater detail and nuance than a survey alone.
- Avoid “best fit” responses. In a guided conversation, interviewees can’t simply click through and select from a limited number of options that are the “best fit” but may not tell the whole story. In an interview, we can get all the detail you need, and there is less opportunity to “fudge” answers.
- Context. Interviews allow the client to consult a dashboard of themes mentioned across many interviews. Using a combined interview + post-interview survey approach, we can capture not just quantitative measurements, but context about the deal that is important and often missed in a survey.
- Depth of understanding. In an interview, there is greater opportunity to examine all points across the buying process (e.g. product requirements, brand strength, sales, pricing, customer service), while survey respondents might only leave feedback on their experience at the point of sale.
- A proactive approach. Our insights let you identify detailed key themes to act on before they pose a greater issue, and can lead to winning back (and keeping) valuable business.
- Bandwidth. Outsourcing this service to a company like Clozd not only gives you the benefits of working with a neutral third party, but allows us to put in the hours that your organization might be short on.
If you’re considering either a survey- or interview-based approach to win-loss analysis, you have probably already learned that relying on your sales force’s self-reported reasoning as to why deals are won and lost is not enough. To uncover truly valuable insight, you need to base your analysis on the candid perspective of actual buyers, not the interpretations and assumptions of your sales reps.
Jim Wilson, a seasoned sales strategist, explains that “All good founders live in their own reality distortion bubble: it’s what makes them have the conviction, enthusiasm and motivation to build the company. Great win-loss analysis, collected from multiple channels, will help balance that enthusiasm with the cold, hard facts and the voice of the customer.”
For more details on why we recommend interviews in your win-loss analysis, read Andrew’s blog post! It’s a great one. And for more information about Clozd's win-loss interviewing services, visit www.clozd.com/services.