
This is a story about evaluation, the most important — and least sexy — subject in public policy.
Evaluation doesn’t set pulses racing. It’s for people who can’t cope with the excitement of auditing. I should know: I worked in an evaluation section as a junior bureaucrat nearly a quarter of a century ago.
But it’s crucial to effective government, and we’re not doing it well. Barely at all, in fact. A lack of evaluation, for example, is a recurring theme in one of the greatest policy failures of Australian governments, Indigenous health. Time and again, academics, sector experts, reviewers and even politicians like Ken Wyatt, the current Indigenous Health minister, have lamented the absence of evaluations that would allow Indigenous health programs to be better targeted.
Labor’s Andrew Leigh has just announced the biggest development in evaluation in decades: Labor will set up an “evaluator general” within Treasury, “to conduct high-quality evaluations, preferably randomised trials, of government programs.” Leigh’s big on randomised trials in public policy — the bibliorrheic economist recently devoted another book, Randomistas, to the subject — and the evaluator general, with a $5 million budget, will be the point person for this new approach in Australia.
The key question, however, is whether Leigh’s colleagues will actually want to know whether their policies work or not.
Evaluation used to be big in the Commonwealth, back in the 1990s. Public service departments had dedicated sections whose role was to select programs and evaluate whether they had achieved their goals. Departments were supposed to have evaluation plans, which detailed their evaluation work, and how programs would be designed from the outset so that they could be assessed for efficiency and effectiveness. But after the Howard government came to power, evaluation was kicked to the curb and partly replaced by the outcomes/outputs budget framework adopted at the turn of the century.
That meant — theoretically — that each area of expenditure would have output indicators identified in the budget, enabling assessment of how effective programs were. In reality, most of the indicators were process measures like “x million spent” and “95% of clients handled within specified timeframe” rather than anything about what happened in the real world. That left the evaluation space empty, at least until the Australian National Audit Office decided last year to expand its focus to program effectiveness.
The abandonment of evaluation was discussed by the National Commission of Audit in 2014, and it must have made for some uncomfortable reading for the Coalition.
A joint evaluation model, involving central and line agencies, operated in Australia between 1987 and 1997. Under this process there was a formal requirement for all programmes to be evaluated every three to five years. Each portfolio was required to prepare an annual portfolio evaluation plan and all new policy proposals needed to include a statement regarding arrangements for future evaluations. The process was also intended to provide formal evidence of programme managers’ oversight and management of resources. The model was considered reasonably successful but had shortcomings. As well as being resource intensive, many agencies regarded it as an external impost rather than a tool to improve policy-making.
Tony Shepherd and co — who elsewhere commented on the lack of evaluation of Indigenous programs — also put their finger on the bigger problem.
Ultimately, the success of an evaluation process depends on the appetite of ministers for rigorous assessments of programme [sic] effectiveness, and, importantly, their willingness to act on results.
For ministers, there are no upsides to rigorous assessments of programs. As Leigh pointed out in his speech announcing the evaluator general, the more rigorous an evaluation is, the more likely it is to find that a program hasn’t worked. What do you do with such an evaluation? Use it to cancel the program? Ministers hate that. It’ll result in a bad headline no matter how useless the program is. And if a program is successful, no one except policy wonks is interested — and certainly not the media.
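For readers unfamiliar with the method, a randomised evaluation of the kind Leigh favours boils down to comparing randomly assigned treatment and control groups. The sketch below (in Python) is purely illustrative: every name, number and effect size is invented rather than drawn from any real program. It simulates a trial, estimates the program’s effect as the difference in mean outcomes, and runs a crude permutation check on how often chance alone would produce a gap that big.

    # Purely illustrative: a toy randomised trial of a hypothetical program.
    # Every name and number here is invented; this is not any real evaluation.
    import random
    import statistics

    random.seed(42)

    # 2,000 eligible people, each with a baseline outcome score.
    baseline = [random.gauss(50, 10) for _ in range(2000)]

    # Random assignment: half get the hypothetical program, half do not.
    random.shuffle(baseline)
    treated_baseline, control_baseline = baseline[:1000], baseline[1000:]

    # Assume (for the simulation only) the program shifts outcomes slightly.
    TRUE_EFFECT = 1.5
    treated = [b + TRUE_EFFECT + random.gauss(0, 5) for b in treated_baseline]
    control = [b + random.gauss(0, 5) for b in control_baseline]

    # The headline evaluation estimate: difference in mean outcomes.
    estimate = statistics.mean(treated) - statistics.mean(control)
    print(f"Estimated program effect: {estimate:.2f}")

    # Crude permutation check: how often does chance alone produce a gap this big?
    combined = treated + control
    exceed = 0
    N_PERMUTATIONS = 1000
    for _ in range(N_PERMUTATIONS):
        random.shuffle(combined)
        gap = statistics.mean(combined[:1000]) - statistics.mean(combined[1000:])
        if abs(gap) >= abs(estimate):
            exceed += 1
    print(f"Share of shuffles with a gap at least as large: {exceed / N_PERMUTATIONS:.3f}")

On this logic, a rigorous evaluation of a genuinely ineffective program reports an estimate near zero, which is precisely the result ministers would rather not receive.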
Shepherd and friends recommended forcing ministers to evaluate programs by requiring them to report back each budget round with the results of program evaluations, so those findings could feed into budget decisions. And if they didn’t comply, Treasury and Finance could be sent in to vet the entire portfolio’s expenditure.
Leigh’s proposal is a welcome step in an important area. But if Labor is elected, he’ll need some means of compelling his colleagues to cooperate.
As a long-term researcher, evaluator, sometime bureaucrat, policy advocate, academic teacher of research methods and more, I have long experience in both evaluating policies and advocating for evaluated policies and changes. I finally told my students that politicians hate data because it ruins the prejudices they usually rely on. For example, the income management programs (the BasicsCard and the cashless debit card) have been extensively evaluated, both well and badly, and there is no valid evidence they work. Yet these results have not reduced governments’ use of the models or their intentions to expand them. The politics of policy-making in most areas of public interest overrides any evidence offered, however well designed!
I have been thinking about your comment for some days, Eva. On the one hand there are hardly grounds for dissent, but I wonder whether the internal politics of the universities (a generalisation, I know) doesn’t contribute to (1) selective appointments and (2) sanitised argument. Political correctness has all but denied particular forms of research in regard to funding: no funding, then why bother, unless one has a fairy godmother.
We seem to have moved from Voltaire’s well-known quip regarding free speech to particular topics and theorists being “out of bounds”; that is, no right for particular research even to be considered in the first place.
I know nothing of your background, and I am not suggesting that you are making such an inference, but it’s a tad disingenuous for research institutions to dribble on about “objectivity” when, to the experienced eye, objectives and agendas are only too clear.
Having made that point, the meticulous research of Sidney and Beatrice Webb didn’t cut a lot of ice either, so don’t feel too bad. I would, however, appreciate a link to the BasicsCard research.
I think this could be a functional part of government, but it’s almost as if the proposed evaluator general would need some legislative power to end programs if they prove to be particularly bad.
Though really, this feels similar to the proposed federal ICAC: absolutely necessary, but something that neither party wants, as they’d hate the scrutiny. However, if Labor is brave enough to put it together, then good on them.
Writing in The Times, Philip Collins made the following observation in connection with the Brexit fiasco: “Government is about blending your principles with those who disagree. It is about negotiation and deals in which not all good things can be had at once.”
An evaluation process for high-level policy proposals is all well and good. However, if the antagonists are unwilling to work together to find a pathway forward, such a process might well end up as an abattoir of good intentions. The Australian Greens in particular should take heed of Collins’ remarks.
“programme [sic]”. Look, this is how bad it’s got (sic). Our pathetic sucking up to the US has led (sic) to us careering (sic) into honorary American-ness. Congratulations (sic).
Well done, rumtytum. I agree, and was going to post a comment to point out that this is actually the correct way to spell this word. Sic that up your bum!
Regardless of the “correct” way, putting ‘sic’ in front of a perfectly acceptable variant (some would say the only acceptable variant) is particularly petty.
Why, Bernard?
“And if a program is successful, no one except policy wonks is interested — and certainly not the media.”
Well, perhaps you could try taking an interest in successful policies yourself and start a trend. It’s got to be better than a piece that is pre-emptively negative about an objectively good idea. (Apparently the fair go for Morrison which you demanded doesn’t apply to ALP policy ideas?)