Juggling Balanced Scorecard Metrics©
by Arthur M. Schneiderman
Back in 1988, I was shown a monthly metrics report by the VP of Quality Assurance from a large Midwestern US bank. The report (more than 50 pages long, as I remember) contained nine graphs per page and the pages were beautifully bound into a glossy, full-color publication. She was really proud of this output from her department. I'm sure that everything that could be measured was. It reminded me of the old army directive: "If it moves, measure it; if it doesn't, paint it." Flipping through the many pages, one thing stood out clearly to me: virtually all the graphs were flat. The data scattered randomly around a horizontal line drawn at their precise mean. For each of the graphs, there was a horizontal line located at a target value. The resulting gap between current performance and this target remained essentially constant.
The report left me with three messages: TQM was not being practiced by the people being measured, the goal setters didn't recognize the importance of establishing specific milestone dates for their goals, and the bank's management did not understand the concept of organizational capacity and the consequent importance of focusing on the "vital few."
Several years earlier, I had seen a study of change initiatives at a US automaker. The three or four "programs" created at the top of the organization quickly proliferated, on average, to well over a hundred supporting implementation tasks by the time they reached a foreman who was more than 20 organizational levels below. Interviews of the beleaguered foremen showed that they employed several time-tested survival strategies for dealing with this initiative overload. Absent, of course, was the strategy of doing them all.
With the later popularity of balanced scorecards, this issue has surfaced as a central question: What is a good rule-of-thumb for the maximum number of metrics on an individual scorecard? It is clear from the many balanced scorecard presentations that I've seen that organizations quickly recognize that they have too many to manage. They often start with more than 100 metrics and over a few years winnow the list down to 10 to 20 survivors. But is that the "right number" to avoid metrics overload? I'm unaware of any definitive studies on this specific question. My instincts tell me that that's still too high and that five to seven is the correct answer. Let me try to support those instincts.
Scorecards and Metrics Need Owners
Let's start by making a few important distinctions. First of all, I strongly believe that each scorecard must have an individual owner. That owner makes the personal commitment to do everything possible to assure that the scorecard's goals are achieved and is held accountable for that commitment. In some cases, that may require an hour a month of effort; in other cases, it will be a full-time job. On average, it's probably in the range of 10-20% of their time. Furthermore, every metric on their scorecard also needs to have an owner who is willing and able to make this very same commitment. The metric's owner usually creates their own subordinate scorecard and negotiates ownership for each of its metrics with other individuals. This pattern is replicated throughout the organization. I have called this process "scorecard deployment."
A scorecard without an owner is nothing more than a report. The number of metrics in a report obviously depends on its purpose. Often organizations confuse this distinction and refer to their metrics report as their balanced scorecard. But these reports can be nothing more than a collection of individual scorecards, each of which in turn must have its own owner. One of the jobs of a scorecard owner is to present periodic status reviews to upper management. These reviews address variances from plan including both root causes and corrective actions when they're negative. Positive variances represent breakthroughs and their causes are valuable contributors to organizational and process learning.
Once we acknowledge the correspondence between scorecards and individual owners, our question translates into its equivalent form: What is a good rule-of-thumb for the maximum number of metrics an individual can manage? Why should there be a limit? First of all, there is the minimum amount of time that it takes to accomplish any meaningful part of the task. Then there's a phenomenon called multiplexing. Whenever we switch from one task to another, we lose time in closing the first and opening the second. The sum of these times (opening, executing and closing), divided into the total available time, gives the number of tasks we can address in other than a cursory manner. Since the switching time, usually referred to as "overhead," is non-productive, the more tasks we try to do in any given period, the greater the fraction of time wasted on overhead.
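The multiplexing arithmetic above can be sketched in a few lines of Python. The specific numbers here (an eight-hour day, 50 minutes of real work per task, 10 minutes lost per switch) are illustrative assumptions, not figures from the article:

```python
# A minimal sketch of the multiplexing arithmetic described above.
# All numbers are illustrative assumptions.

def tasks_per_period(total_minutes, execute_minutes, switch_minutes):
    """Return the number of tasks addressable in a period and the
    fraction of each task slot lost to switching ("overhead")."""
    per_task = execute_minutes + switch_minutes  # open/close + execution
    n_tasks = total_minutes // per_task
    overhead_fraction = switch_minutes / per_task
    return n_tasks, overhead_fraction

# An 8-hour day, 50 minutes of real work per task, 10 minutes per switch:
n, overhead = tasks_per_period(480, 50, 10)
print(n, overhead)  # 8 tasks, with about 17% of each slot lost to overhead
```

Note how shrinking the execution time per task (trying to touch more things in the same day) leaves the switching cost fixed, so the overhead fraction grows just as the paragraph describes.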
You might find it informative to track your own time for a few days using these categories. Make sure that you rate yourself on the effectiveness of your effort too, and see if you can make a rough estimate of your average time per effectively executed task. A look at your calendar will also give you a hint: what's the minimum time that you schedule for a one-issue meeting? Let's say that the answer is 30 minutes. If you spend two hours per week managing your scorecard's metric owners, that buys four such meetings a week; meeting each owner every other week, you have enough time to manage no more than eight metrics. Remember, though, that you are probably the owner of a metric on someone else's scorecard, so make sure you net out the time required to execute your commitments there.
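The back-of-the-envelope budget above is simple enough to write down explicitly. This sketch assumes, as stated, 30-minute meeting slots, a two-hour weekly budget, and one meeting per metric owner every other week:

```python
# Back-of-the-envelope scorecard time budget (assumptions as stated above).
minutes_per_meeting = 30                 # minimum one-issue meeting slot
minutes_per_week = 2 * 60                # two hours per week on metric owners
meetings_per_week = minutes_per_week // minutes_per_meeting  # 4 meetings

# If each owner gets a review every other week, capacity doubles:
metrics_manageable = meetings_per_week * 2
print(metrics_manageable)  # 8
```

Swap in your own calendar numbers; the point is that the ceiling falls straight out of the arithmetic, well before any question of management skill arises.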
Without a body of relevant data to guide us, we need to look for analogies to help us answer our question. Two come to mind: juggling and span-of-control.
The ancient art of juggling provides a good benchmark for the number of things that we can do repeatedly when these things require both physical and mental effort. A "juggle" involves throwing an object into the air and catching it more than one time. On the other hand, a "flash" only requires that the object be caught once.
The chart on the right shows the current world records for numbers juggling. The ordinate is the number of times each object has been caught and the abscissa is the number of objects being thrown. The current world record for a two-catch juggle is 10 balls, while seven balls have been kept in the air through nearly 100 cycles. I'm told by one of the record holders that good jugglers can juggle four or five balls "indefinitely."
Believe it or not, there is an underlying scientific theory of juggling based on muscle biophysics (how high a human can throw a ball) and Newton's laws of motion. It suggests that these records are currently limited by mental rather than physical constraints. It is probably fair to say that accomplished jugglers can juggle between five and seven balls for extended periods of time. Keep in mind, though, that it takes lots of practice to get to that level.
Span of Control
Another place we can look for our answer is in the management of people. For any given people management model (control, empowerment, etc.), there is a practical limit to the number of people that can be effectively supervised by their manager. By looking at individual organizations, we can infer their average span of control, a good surrogate for what we're looking for.
Consider, for example, the case where each supervisor has exactly 3 direct reports. Let's start with the person at the top. He or she has 3 reports, so our sub-total is 1+3=4. Each of the 3 reports has 3 reports, or a total of 3x3=9, which brings the total so far to 4+9=13. Each of the nine has 3 reports which adds 9x3=27, so the total is now 13+27=40. So far, we've counted four levels.
Let's look at the pattern:

1 level:  1
2 levels: 1 + 3 = 4
3 levels: 1 + 3 + 9 = 13
4 levels: 1 + 3 + 9 + 27 = 40
We can easily generalize this to

total = 1 + s + s^2 + s^3 + ... + s^(L-1), or

total = (s^L - 1)/(s - 1)
where s is the average span of control (average number of direct reports per supervisor) and L is the total number of levels in the organization. Let's see how this looks graphically:
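The geometric sum is easy to check directly. A minimal sketch, reproducing the span-of-3 example worked through above:

```python
def headcount(s, levels):
    """Total people in a uniform hierarchy with average span of control s
    and L levels: 1 + s + s^2 + ... + s^(L-1)."""
    return sum(s ** k for k in range(levels))

# The example above: a span of 3 over four levels gives 1 + 3 + 9 + 27 = 40.
print(headcount(3, 4))  # 40
```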
The middle curve shows the relationship between number of levels in the organization and the average span of control for an organization of 5000 people. If the typical organizational hierarchy was CEO, COO, Division General Manager, Director, Manager, Supervisor and Operator, then there would be seven levels and from the graph, the average span of control would be about 3.9 people per manager. If we reduced the number of levels to five, then the average span of control would increase to 8.1. Increasing to nine levels reduces the span of control to 2.7.
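Reading the span of control off the chart amounts to inverting the formula for s given an organization's size and number of levels. Since there is no closed form for that inverse, this sketch finds s by bisection (the headcount is strictly increasing in s); the `span_for` helper and its search bounds are my own illustrative choices:

```python
def headcount(s, levels):
    # Closed form of the geometric sum 1 + s + s^2 + ... + s^(L-1)
    return (s ** levels - 1) / (s - 1)

def span_for(size, levels, lo=1.01, hi=50.0):
    """Average span of control s such that headcount(s, levels) == size,
    found by bisection, since headcount is increasing in s."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if headcount(mid, levels) < size:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# A 5000-person organization, as in the text:
print(round(span_for(5000, 7), 1))  # seven levels -> about 3.9
print(round(span_for(5000, 5), 1))  # five levels  -> about 8.1
```

The outputs match the values read from the middle curve, which is a useful sanity check on the graph.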
The other curves show this relationship for different size organizations ranging from 200 to 125000 people. I've used this model in a number of organizations and business units and always found that the average lies somewhere between four and six. Try placing your organization on this chart and let me know what you find. You can usually get the number of levels from your HR department. Sometimes, they have actual data on the average number of reports per supervisor.
As you can see, increasing the average span of control decreases the number of organizational levels for an organization of given size. Decreasing the number of levels reduces organizational complexity. This in turn accelerates the rate of process improvement and decreases the improvement half-life. So, clearly it is desirable to increase the span of control while maintaining effective management. This can only be accomplished by more self-management through effective empowerment. The people at Milliken & Company have a saying: "Empowerment by abandonment is not empowerment." In other words, there's a lot more to empowerment than simply stepping back. Improved communications as well as skills training are essential ingredients for effective empowerment.
The above analysis considers average span of control. Obviously, some individuals can manage more, others fewer. And effective span of control probably decreases with increasing organizational level. The type of organization also plays a role. Command and Control based organizations (like the military) can have larger spans of control than consensus managed organizations. Russ Ackoff uses the terms uni-minded and multi-minded organizations to make a similar distinction. In any case, current practice seems to imply that the average manager can only supervise between four and six other people; seven is probably a good guess for the upper quartile of good managers.
Good jugglers can keep five or six balls in the air; average managers can manage four to six other people; so I think that my instincts are about right: scorecards should contain a maximum of five to seven metrics. Keep in mind, though, that it takes lots of practice to get to these levels. Furthermore, Hoshin Kanri, the older Japanese relative of the Balanced Scorecard, always limits top-down breakthrough initiatives to a total of one or two at any given time. This, even after decades of organizational experience using that approach. A prudent starting point may be one to three metrics per scorecard owner.
Metrics reports can contain more, but they are principally for documentation, root cause analysis, or communications, not for performance management purposes. Lengthy metrics reports have no place in organizational alignment efforts or performance review meetings. The important distinction here is that a scorecard must be limited to the vital few metrics that can really make a difference in the organization's overall achievements. If the vital few still number more than its managers can effectively handle, that is usually a sign that the situation is terminal.