We talk about the weather a lot.
Particularly since the threat of climate change became apparent. Many of us are interested in weather records – warmest winters, fastest winds, biggest rainfalls – and we have access to all sorts of metrics that help us discuss the weather, and help meteorologists predict it. We (or rather, the weather people) measure temperature highs and lows, rainfall, snow days, frosts, lightning strikes, humidity, hours of sunshine, air pressure, wind speed and direction, sea temperature and currents, and cloud cover – all over the globe, all the time. We want to know when and how the weather is changing, short-term and long-term, because business and pleasure, not to mention lives, depend on it. But the one thing you never hear the experts do is combine all these metrics into a single “weather index”.
Why? Because it’s meaningless.
Reducing weather to a single number – simply because it makes year-on-year comparisons easier – is like taking a marshmallow mallet to a concrete nut, and then missing the nut. Nevertheless, let’s see how such an index would be put together. When we create a one-dimensional entity like a weather index, what we’re actually doing is taking the metrics we do have – numbers from things it is actually possible to measure – stuffing them into a formula, and turning the handle to produce a single number. So all we need now is a formula with which to do it.
The simplest formula is to take all the metrics and just add them up. But would that be right? Is it meaningful to add hours of sunshine, temperature and rainfall together – and what would the result even mean? Let’s stretch credulity for a moment and assume it is meaningful. Even then, it’s unlikely all the metrics would carry equal weight, so now you have to decide on a weighting for each one (and so far we’ve assumed a simple linear formula – why shouldn’t it be more complex still?). The creation of such a formula is fraught with difficulty, because it’s next to impossible to justify that any given set of weightings is, in any sense, “correct”. So any formula is essentially subjective. And even if you do settle on one, what use would a “weather index” be? It doesn’t tell you where to build flood defences, or whether you’ll need your brolly tomorrow. It can’t tell you which day to go to the beach, or whether to cover your bedding plants against an unexpected frost. It may be interesting (to some), but it’s not even slightly useful.
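To make the subjectivity concrete, here is a minimal sketch of the “weighted sum” approach described above. All the metric names, values and weights are hypothetical, invented purely for illustration – the point is that two equally defensible weightings can rank the same two days in opposite orders.

```python
# Hypothetical metrics for two days (sunshine hours, mean temperature in
# Celsius, rainfall in mm). Day A is sunny and cool; day B is dull and warm.
day_a = {"sunshine": 10.0, "temperature": 15.0, "rainfall": 2.0}
day_b = {"sunshine": 2.0, "temperature": 28.0, "rainfall": 0.0}

def weather_index(metrics, weights):
    """The simplest possible index formula: a linear weighted sum."""
    return sum(weights[name] * value for name, value in metrics.items())

# Two plausible-looking weightings; neither is "correct" in any sense.
sunshine_heavy = {"sunshine": 1.0, "temperature": 0.2, "rainfall": -1.0}
temperature_heavy = {"sunshine": 0.1, "temperature": 1.0, "rainfall": -0.5}

for label, weights in [("sunshine-heavy", sunshine_heavy),
                       ("temperature-heavy", temperature_heavy)]:
    a = weather_index(day_a, weights)
    b = weather_index(day_b, weights)
    better = "day_a" if a > b else "day_b"
    # The sunshine-heavy weighting prefers day A; the temperature-heavy
    # weighting prefers day B -- same data, opposite conclusion.
    print(f"{label}: {better} scores higher ({a:.1f} vs {b:.1f})")
```

Swapping one arbitrary set of weights for another flips which day is “better”, which is exactly why no single index can be defended as the right one.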
So, what’s the weather like in your company?
The pattern of employee mood, attitude and engagement is very much like the weather. There are good times and bad times, localised problems, widespread contentment, tensions building. Small issues can build into major storms, and anticipated problems can fade away. Like local meteorologists, managers can sniff the air and predict rain in their area – but mapping out company weather in a more consistent fashion, so that an overview is possible and trends are discernible, requires the collection of more standardised metrics – repeatedly, if not continuously. And where do these metrics come from? Your company’s employee engagement surveys.
At this point, if your company has an “engagement index”, you might want to think about the formula used to create it, whether it’s really fit for purpose, and what (if anything) the index contributes to an actionable plan for real improvements. And if you do eschew this essentially pointless metric in favour of a more useful set of measurements, does your survey really deliver what you need, when you need it? If you’re still sending out a paper questionnaire – or its computerised equivalent – just once a year, the answer is almost certainly a resounding no.