First published in Health Service Journal, 14 June 2001
Just as the Conservatives loved markets, Labour loves targets. So when chancellor Gordon Brown eventually loosened the Treasury’s purse strings, what came out had strings attached.
Through public service agreements, spending departments are tied to the Treasury, with contracts detailing what they have to deliver, how their success or failure will be measured and what sanctions will result. This use of performance targets has spread throughout the public sector.
Labour, with its election manifesto full of numerical, timed pledges, obviously believes that public services need tough targets in lieu of the profit motive, to ensure they deliver what prime minister Tony Blair has promised the voters.
The NHS is not the only sector currently facing a future where organisations that succeed win greater freedom – and the winners get to take over the losers. Education is already some way down this road.
‘The real disadvantage to this approach is that you don’t empower people to make decisions to determine their own services,’ says Lisa Harker, deputy director of the Institute for Public Policy Research. ‘You pay a penalty of morale and satisfaction within the public sector.’
However, the institute sees no reason why information about public services’ performance should not be collected and published.
Tony Travers, director of the Greater London group at the London School of Economics, agrees: ‘What public money is buying, the public should know about,’ he says. ‘The difficulty with performance targets of the kind that have mushroomed is that institutions seek to meet these targets to the exclusion of everything else.’
This is exacerbated by the sheer number of targets. ‘If the upper tiers of government want to carry on in this vein, they certainly need to ensure that there are not so many public service agreements, targets and indicators that they can’t be rationally understood by the public,’ says Mr Travers. At present, ‘even experts find them very complex’.
The second comprehensive spending review, now in force, has reduced the number of targets in operation. ‘It’s better,’ says Ms Harker. ‘But there’s still a complex matrix of targets.’
The National Audit Office, in a recent report*, said this complexity can lead to ‘perverse’ behaviour – whereby organisations meet the letter of a target while going against the grain of its intention.
Examples include councils collecting a quota of waste paper while lacking the ability to recycle it. The NAO urged greater care in the design of such targets, with the involvement of all parties affected.
In the health service, the targets used to be very simple, according to John Appleby, director of health systems at the King’s Fund.
In the early 1990s, health authorities were judged by the purchaser efficiency index, on which an authority’s success was measured by how much approved ‘activity’ it bought for each pound spent.
‘That created a lot of perverse incentives, gaming and straight lying,’ Mr Appleby recalls. Some HAs concentrated on boosting whichever kind of operation or procedure would most enhance their activity/£ ratio (‘gaming’).
Others re-categorised spending to keep it out of the figure used to calculate efficiency. It suffered from the usual drawback of a simplistic target: it was easy to cheat.
When Labour won power, this system was replaced with several dozen targets, known as high-level performance indicators, grouped into five areas. ‘Labour’s response to perverse outcomes was not to aggregate,’ says Mr Appleby.
Apart from anything else, some of Labour’s indicators work against each other: an HA doing well on quality of care might do badly on waiting list times.
But now, the Department of Health looks set to use these targets to reintroduce an aggregate indicator, through the traffic light system. ‘It’s very hard to escape the desire for a single number,’ Mr Appleby says.
The King’s Fund argues that the likely calculation method, of taking the existing indicator results, comparing each to the English NHS average, then adding them all up, would be unfair.
This is partly because some work against each other, partly because some indicators should aim to converge on a target, rather than try to be as low or as high as possible (such as time spent in hospital), and partly because all the indicators look set to have equal weighting.
The aggregated result might well matter more than just in terms of hurt pride. The government is ready to name and shame NHS organisations at the bottom of the table. Schools are already accustomed to this.
Furthermore, and in another parallel with schools, the best HAs could take over the worst.
Mr Appleby says the ideal answer is neither dozens of targets, nor the oversimplification of a single figure. ‘A lot of NHS managers would like up to ten indicators per organisation at most, with targets set on these.’
But then there is the question of how carefully targets are chosen. One example of how they can go wrong is demonstrated by the ‘hello nurse’ in the accident and emergency department, who meets patients within five minutes – thus meeting the relevant target – but then leaves them waiting hours for treatment.
‘People have got around this by achieving the target, rather than the spirit of it, which is that those who need attention receive it as quickly as possible,’ says Gordon Mitchell, deputy chief executive of medical assessment charity Health Quality Service.
His solution is not that the target be dumped, but that it is given a qualitative element: ‘You could say that the assessment should be conducted by an individual with certain qualifications,’ he says.
‘Or you could specify different criteria depending on the level of emergency, with these being published for patients to see.’
Meanwhile the IPPR takes the view that local people should be more involved in setting the goals for the state, in areas like local government, as well as health.
It points to Scotland, which has piloted a jury system. A ‘people’s jury’ negotiates with a ‘stakeholder jury’ through an ‘inter-jury forum’, with the aim of setting a popular, yet practical, set of targets.
Ms Harker says that this approach can lead away from simplistic popular goals (such as reducing waiting lists) to targets which are linked to outcomes, but are still understandable to members of the public.
Another way to involve the public is through setting targets based on how happy they are with a service.
For the first time this year, local authorities have been surveying users of their services, such as waste collection, to a common format.
In future years, indicators could encompass improvements to these satisfaction ratings.
Mr Appleby suggests polling could be used to help produce relative weightings for different indicators, if the DoH insists on combining them into the traffic light rating.
But many targets will continue to be set centrally. What can HAs or local authorities do if negotiation fails and they get landed with an unworkable goal?
‘Collectively, if they think a particular target is going to lead to a perverse incentive, they should absolutely say so,’ says Mr Travers.
There are opportunities: the Department of the Environment, Transport and the Regions and the Home Office plan to continue the widespread consultation previously carried out by the Audit Commission for setting central targets for local authorities, for example.
But Mr Travers adds: ‘I think it’s very difficult for individual institutions to complain.’
One solution might simply be to wait for a few years, until someone works out how much the vast amount of surveying and auditing costs. ‘I think a government will eventually be elected that will say, let’s sweep them all away,’ Mr Travers says.
Don’t hold your breath.
* Measuring the Performance of Government Departments, National Audit Office
Copyright SA Mathieson 2001