How Many Employee Surveys Did You Do Last Year?

Guest blog by Thymometrics' co-founder, Hugh Tonks.

---------------------------------------------------------

If you’re still doing a fairly traditional annual employee survey, you’d probably answer “one”. However, that really isn’t the case, as this article will explain.

Let’s say, for the sake of argument, that your survey consists of 48 carefully chosen questions on all sorts of aspects of the employee experience, all answered on a “Likert scale” basis – that is, a choice from answers like “disagree/slightly disagree/neutral/slightly agree/agree”, or something similar. A lot of effort has been put into choosing the optimal set of questions to ask, each probing a different issue (because it would be a waste to duplicate questions, and besides, this would be labelled by some as a trap to elicit inconsistent answers). How many surveys did you do?

The answer isn’t, as mentioned above, one. The answer is 48. That’s 48 separate one-question surveys, and we know this because there is no way to compare the answers to one question with the answers to another, and get a reliable, meaningful comparison. Traditional surveys just don’t offer this facility. 

Comparisons are important, because on its own, the result of answering a question yields very little useful information beyond the obvious. Even then, what are you assessing the answer against? Suppose you have a result which says that 70% of employees agree with a statement in the survey, with 20% against, and 10% sitting on the fence. Is that good or bad, and how could you tell? There’s not much to go on, so let’s look at ways of improving the usefulness of this data using comparisons. There are three main types, which we’ll call external, time-based and internal.

External comparisons are usually referred to as benchmarks. Of course, they are useless if you haven’t asked the same question as the benchmark question and offered the employee the same choice of answers. Relying on benchmarks means using questions for which benchmarks already exist, so they can’t be used for questions which may apply only to your organisation. It’s also a property of comparisons that they work best when only one thing changes (the thing you’re comparing, which is usually the “score” – the percentage of your employees picking each answer vs the benchmark percentages) – all other things need to remain the same.

But benchmarks change two things: the score, and the people answering the questions. That means a straight comparison between results is largely meaningless, unless you can somehow show that having different sets of people answer the question doesn’t affect the comparison – and that can only be done by using statistics (and here be dragons!). If you can show that your people are statistically no different to other people, then maybe the comparison is meaningful, but which employer doesn’t think their people are special or exceptional? Relying on benchmarks is essentially an admission that they aren’t special.
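To make that “statistics” step concrete, here is a minimal sketch of one common approach – a two-proportion z-test – which asks whether your favourable percentage differs from the benchmark’s by more than sampling noise would allow. The figures, sample sizes and function below are purely illustrative, and even a “significant” result only accounts for sampling variation, not for systematic differences between your workforce and the benchmark population.

```python
# Minimal sketch: a two-proportion z-test comparing your favourable score
# against a benchmark gathered from a different population. The numbers and
# sample sizes are made up; this is one common test, not the only option.
from math import sqrt
from statistics import NormalDist

def two_proportion_z(p1, n1, p2, n2):
    """Return the z statistic and two-sided p-value for H0: p1 == p2."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# 350 of your 500 employees agree (70%) vs a 75% benchmark from 2,000 people.
z, p = two_proportion_z(0.70, 500, 0.75, 2000)
print(f"z = {z:.2f}, p = {p:.3f}")
# Small p-values suggest a real gap; large ones mean it may just be noise.
```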

There is another, darker, side to benchmarks. What will your company do with the information if it turns out your employees’ scores are “better” than the benchmark? The risk is that the company will do nothing, even though the scores may indicate that there is a real issue that needs addressing. In short, benchmarks are the slippery slope to complacency. And if the scores are worse? It’s hardly “actionable information”; it might tell you where there’s a problem (although a worse score doesn’t necessarily indicate there is) but you need more information to work out what to do about it. There are better (and cheaper) ways of doing comparisons than benchmarking. So cultivate your own garden and forget the weeds in your neighbour’s.

Time-based comparisons are more useful, as they will show you how the answers to a question change over time. You can’t really tell that much from a single data point, especially if it’s the sort of result you were expecting. But knowing whether the scores are getting better or worse is extremely valuable, especially if there is a clear trend. Multiple data points allow you to create a graph or bar chart, and this visual display of the data is far easier to comprehend than a list of numbers. If you can split the data by demographics, then you can compare the trends in different demographics and extract all sorts of interesting and useful clues as to where things may not be going well for employees. The speed at which scores rise and (particularly) fall also provides a possible sense of urgency to any intervention the company may wish to make. 

So, yay for time-based comparisons. They’re really useful – as the stock trading maxim goes, “the trend is your friend”. But does your survey support them, and to what degree? To be maximally effective, a survey will be able to compare the scores for each employee, not just the overall aggregated scores. If an aggregated score falls, you don’t know how many of the employees reduced their scores. Some may have increased their scores. Why does this matter? It’s because a decision to resign is based on an individual employee’s views, not an average view. If the company wants to reduce attrition, it can only do so by improving the scores of those unhappy employees who are in danger of quitting, not by raising the average. So an employee-level comparison over time is essential in discovering how many employees are in danger of quitting (and being able to split this by demographic is a fantastic bonus feature).
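To show what an employee-level, time-based comparison might look like in practice, here is a minimal sketch. The data layout (employee ID, question, date, score) and the drop threshold are hypothetical assumptions for illustration; a real survey platform would do this for you.

```python
# Minimal sketch: flag employees whose score on any question has fallen
# between surveys. Field names and the threshold are illustrative only.
from collections import defaultdict

def falling_scores(responses, threshold=-1):
    """responses: dicts with employee_id, question, date, score (1-5).
    Returns employees whose latest score on a question dropped by at least
    |threshold| compared with their previous answer to the same question."""
    history = defaultdict(list)
    for r in sorted(responses, key=lambda r: r["date"]):
        history[(r["employee_id"], r["question"])].append(r["score"])

    at_risk = set()
    for (employee_id, _question), scores in history.items():
        if len(scores) >= 2 and scores[-1] - scores[-2] <= threshold:
            at_risk.add(employee_id)
    return at_risk

# Employee 7's "workload" score fell from 4 to 2, so they are flagged.
responses = [
    {"employee_id": 7, "question": "workload", "date": "2024-01", "score": 4},
    {"employee_id": 7, "question": "workload", "date": "2024-04", "score": 2},
    {"employee_id": 9, "question": "workload", "date": "2024-01", "score": 3},
    {"employee_id": 9, "question": "workload", "date": "2024-04", "score": 4},
]
print(falling_scores(responses))  # {7}
```

An aggregated average over these four responses would barely move, which is exactly why the per-employee view matters.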

And internal comparisons? They’re probably the most useful comparison of all. An internal comparison is one which allows you to meaningfully compare the results of different survey questions with each other. It’s internal comparisons that turn your 48 mini-surveys back into one single, unified survey. However, there’s a problem, stemming from the idea that meaningful comparisons only compare one thing. How can the scores for two different questions be compared? It’s surely an apples vs oranges situation, I hear you say – and you’d be right. How can we resolve this conundrum?

The answer is the result of some original critical thinking, and that is to jettison the 48 questions entirely, and ask the same question about every issue. Then comparisons are entirely meaningful. But what question should we ask? This is related to the fundamental question of why the company is doing a survey at all. Surveys cost money. Surveys ought to pay for themselves, if possible, by increasing efficiency and hence profit, or by reducing expenditure. And an incredibly rich seam to mine, one that’s near the top of most HR professionals’ list, is that of retaining talent. Replacing people is very expensive, and if you can reduce attrition even slightly, it will almost certainly pay for the cost of a survey many times over. You can write the business case on the back of an envelope, it’s so clear. Of course, some attrition you can do nothing about – retirement, moving away for family reasons, an unmissable opportunity elsewhere – but you have a fighting chance of retaining people who are leaving simply because they are unhappy. It makes a lot of sense, then, to ask people how happy, satisfied, content – whatever synonym you like – they are with a given topic.

Is this, now basically a satisfaction survey, enough? No. It’s easy to construct an example which shows we really need more information to isolate the causes of discontent. Suppose we run this sort of survey, and find that 67% of employees are unhappy with their salary, whereas 85% are unhappy with the benefits on offer. Which problem should you address? On the face of it, employees appear to think that benefits are a bigger problem than salary. But we haven’t actually asked them which has the higher priority. So we ask them, and find that 85% of employees regard salary as important, whereas only 34% think benefits are important. They don’t rate the benefits highly, but it’s not something that matters to them that much, certainly not as much as salary does. Now, it’s clear that fixing a salary issue will have a bigger impact on employees than tinkering with the benefits. We must therefore also ask employees about their priorities, as well as their satisfaction levels, to get the true picture.
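As a back-of-the-envelope illustration of why both figures matter, one hypothetical way to combine them is to weight the share of unhappy employees by the share who say the topic is important. The weighting below is illustrative only, using the percentages from the example above.

```python
# Illustrative only: weight the share of unhappy employees by the share who
# say the topic matters to them. The figures come from the example above;
# multiplying them is one hypothetical way to rank issues, not a standard.
topics = {
    # topic: (fraction unhappy, fraction rating it important)
    "salary":   (0.67, 0.85),
    "benefits": (0.85, 0.34),
}

for topic, (unhappy, important) in sorted(
        topics.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True):
    print(f"{topic}: impact score {unhappy * important:.2f}")

# salary:   impact score 0.57  -> address this first
# benefits: impact score 0.29
```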

And now we’re cooking on gas. We can compare the satisfaction levels of different factors affecting working life. More crucially, we can compare employee priorities to understand what’s the most important factor for each demographic, and overall. And we can have incisive analytics which are time-based, to pick up the trends in all these areas. (Just watch “job security” soar in importance when a reorganisation is announced!) Essentially, we’ve killed the traditional survey by showing how bad it is at providing useful, actionable data, and we’ve replaced it with something that is far better.

It only remains to say two things: 

1) Tada! We have succeeded, via some original and critical thinking, in turning dozens of one-question surveys back into a single, unified survey with rich data and more ability to compare results than any traditional survey. That’s the icing on the cake, and the cherry on the icing is to use an always-on survey so that this data gets to you as quickly as possible.

2) If you really like the ideas in this article, you can license right now a survey which does all this and more, at thymometrics.com – please check out the website to read case studies of the unrivalled benefits of this platform, or to arrange a demonstration. 

----------------------------------------------------

Learn about Thymometrics' range of employee feedback solutions. Email us at hello@thymometrics.com or call +44 (0)1223 750 251. 

Photo by Christina at wocintechchat.com on Unsplash.