Performance funding: The burden of proof


When a policy is proposed, the burden of proof lies with the people making the proposal. They need to explain why the new policy is better, and they need to provide evidence to support their claim. This is how sensible policy gets made.

The Government of Ontario is interested in performance funding for universities. That much is clear. In the Premier’s mandate letter to the Minister of Training, Colleges, and Universities, the Ministry was asked to improve the “consistency and availability of institution-level and system-level outcome measures” in order to help “the implementation of a reformed funding model for universities.” With a review of Ontario’s university funding formula on the horizon, it is likely that outcomes and performance funding will figure in the conversation.

So what is performance funding? The idea behind it is that a portion of the public funding received by universities should be determined by the ability of a given institution to meet certain performance targets. If targets are met, then funding is granted. If targets are not met, then a portion of funding will be withheld.

The argument made by its proponents is that tying funding to performance will incentivize institutions to be more productive, more efficient, and more in-tune with the labour market.

There is ample evidence that Ontario’s universities are extremely productive and very good at moving students into the world of work. We educate more students with less money than anywhere else in Canada. So, if there is little efficiency left to be gained, the argument that performance funding improves universities needs to be very strong. We’re left to ask: does performance funding have anything to offer Ontario?

If we review the available research, there is little reason to accept the idea that performance funding will improve higher education in Ontario. There is virtually no evidence that demonstrates that performance funding improves the effectiveness of universities.

This lack of evidence is not for lack of examples. In the USA, 34 states have implemented, or are in the process of implementing, some form of performance funding for higher education. The design of these funding regimes varies wildly. In Indiana, six per cent of state funding is allocated according to performance indicators. In Tennessee – the state often held up as an exemplar of performance funding – between 80 and 85 per cent of state funding is allocated through outcome-based metrics. Some states have had decades of experience with performance funding, while others are just getting started.

Given the widespread use of performance funding in the US, one would expect to see some research suggesting that this method of allocating funds actually improves the quality of higher education. But instead the literature suggests that performance funding has, at least so far, no real benefit at all.

In a recent study published in the journal Educational Evaluation and Policy Analysis, researchers found that performance funding had no discernible effects on retention or degree completions at Washington State’s community colleges, when compared with jurisdictions without performance funding mechanisms. This follows a 2013 policy brief from the Wisconsin Center for the Advancement of Postsecondary Education (WISCAPE) that concluded,

“Results suggest the [performance funding] policy has not been significantly effective for increasing associate or baccalaureate degree completions in performance funding states, and it may even have had negative effects in some states.”

Closer to home, the Higher Education Quality Council of Ontario (HEQCO) recently released an “extensive review of outcomes-based funding models used in postsecondary education and their effectiveness.” It found that, “research on outcomes-based funding of higher education has shown little evidence that these policies are associated with improved student outcomes.” All of these studies are easy to find, and a simple Google search will return many more studies that outline the lack of evidence for – and the evidence against – performance funding.

Curiously, the President and CEO of HEQCO put out a blog post counselling that we not worry too much about the lack of evidence revealed by his own organization’s research. According to what he describes as “human nature and basic laws of behavior,” performance funding just works. Some readers may be surprised to learn that human nature and behavior are so rigid and immutable.

This laissez-faire attitude towards evidence in policy-making is risky. There is considerable evidence that performance funding produces a variety of unintended consequences, some of which could harm higher education in Ontario. For example, in the Washington study, the researchers found that one clear response to performance funding was the increased granting of short-term certificates. These credentials have limited labour market utility, but have the advantage of moving students through an institution quickly, thereby protecting student retention and graduation rates (often key metrics in performance funding regimes). Performance targets are met by rushing students through the institution, but graduates are left with a questionable credential on the other end. That’s not performance; that’s the academic equivalent of cooking the books.

Evidence suggests that another unintended consequence may be the raising of admission requirements, with negative implications for equity and access. More qualified – or at least better prepared – students are more likely to persist to degree completion, again boosting performance stats. But this can hurt prospective students from marginalized backgrounds, and harm institutions that work to serve these individuals (like historically black colleges in the USA). In this instance, performance funding cuts against broader societal access to higher education, while penalizing universities that seek to enroll under-represented students.

In the end, “Trust us!” is a poor argument for policy change. Ontario’s university funding formula is critical to the quality and accessibility of the sector. Changes to the formula cannot be justified on good intentions and fond hopes alone. The proponents of performance funding have a responsibility to show us how their ideas will make Ontario’s universities better. The burden of proof belongs to them. And so far, the proof they need to make their case is in short supply.

One Response to “Performance funding: The burden of proof”

  1. Janok Bhattacharya

    As a full professor in a faculty of science, my annual evaluations are weighted accordingly: 40% Research, 40% Teaching, and 20% Service. The measure of research performance is based mostly on the quality of my publications as well as the funds that I raise. This research, in my case, is accomplished through graduate students, many of whom are funded by either industrial or federal sources (NOT provincial), and some of whom are international students. My service contributions include those to international scientific organizations as well as the University. The current emphasis on performance funding seems to be almost exclusively on undergraduate graduation rates. A laudable cause, but completely out of step with the priorities that our own universities use to evaluate our performance. The research that we complete provides new knowledge; sometimes this can be capitalized on (e.g. new pharmaceuticals), but it may simply be curiosity-driven. Will provincial evaluation metrics include this all-important research component that allows some Ontario universities to be ranked as top global institutions?
