
Ontario's New Performance-Based Funding Model: Will it Make a Real Impact on Institutional Outcomes?

Author: Joyce Bott

1.0 Introduction

Ontario's Differentiation Policy Framework for Post-Secondary Education (Ontario, 2013) was the Ontario Government's response to a large provincial debt (over $280 billion) and a shrinking population of 18-to-20-year-olds across the country. It was essentially a call for more efficient spending and increased accountability for public funds. The overarching goal of the policy is for each publicly assisted post-secondary education (PSE) institution in Ontario to identify a key area of differentiation in which to specialize. The hope is that this will reduce redundancy in programming (and in program funding) and relieve the system of competition for the same shrinking pool of entry-level students.

The Differentiation Framework consists of six key components to guide PSE programming. Worth noting is the first component, which focuses on jobs and economic development. To ensure that PSE institutions follow through on the Differentiation Framework, each institution was required to sign a unique Strategic Mandate Agreement (SMA) with the ministry identifying its key area of differentiation as well as the metrics that will be reported annually to show progress toward the province's objectives.

In addition, Ontario plans to reform the current PSE funding model from an enrollment-based model to a more outcomes-based model (to be announced in 2017). After five months of consultations with universities, colleges, employers and students, Ontario released the University Funding Reform Consultation Report (MTCU, 2015). This paper aims to reflect on the consultation report and to draw lessons from other jurisdictions in Canada and the United States on their experiences with performance-based funding (PBF) models. The question this paper will attempt to answer is, "What is required for Ontario's new performance-based funding model to make a real impact on institutional outcomes?" This question is important because if the ministry does not have a good understanding of what level of incentives will motivate schools to change, then outcomes will remain the same and PBF will really only be about increased accountability. Is it worth changing the funding model if outcomes stay the same?

This paper will first examine a case study of a collapsed PBF model in the Alberta higher education system. The second section will focus on best practices of PBF systems in the United States. The third section will examine the PBF consultation report from the Province of Ontario. The conclusion will offer recommendations on the key components required in the new funding model for the province to see real change at the PSE institution level.

2.0 Performance-Based Funding in Alberta

In the early 1990s, the Government of Alberta implemented a PBF policy for post-secondary institutions as a way to demonstrate increased accountability and drive institutional progress toward provincial goals. This was largely in response to the province's roughly $2.12 billion debt. Kerr (2011) highlights the four-stage development process used for this initiative: 1) initial discussions that resulted in a set of measurement goals; 2) evaluation of Key Performance Indicators (KPIs) for each of the measurement goals; 3) normalization of data across institutions; and 4) sharing of the KPI information to design a KPI report for fund allocation.

Some of the problems faced by the Alberta government during the development process included: 1) inconsistent data definitions across the system, which led to calculation problems; 2) institutions struggling to cope with data requests; and 3) a performance award barely large enough to cover the cost of producing the data. Corbett-Lorenco (2001) notes that "College presidents increasingly observed that the amount of the performance award was relatively minor and barely sufficient to cover the cost associated with data collection and submission" (as cited in Kerr, 2011, p. 53). There was also considerable debate over indicators that might lead to unintended coercive behaviour by administrators reacting to the KPIs. For example, in pursuing the goal of improved retention, "the administration could respond by applying covert pressure on deans and faculty to stop issuing failing grades. In this scenario, the administration would achieve successful accountability by displacing the long-term goal" (p. 49). Because the key players could not reach consensus on what to measure, the KPI project ended in 2003 with a return to historic resource-allocation methods. As the Council of Ontario Universities (2013) states, "It is difficult to establish appropriate performance measures that are well-balanced and that do not result in the creation of perverse incentives" (p. 9).

Probably the biggest lesson learned here is that for a PBF initiative to be effective, the incentive must be large enough to justify the effort and resources required to produce the reports. Ball (2003) reinforces this notion in discussing similar challenges in measuring teaching performativity: "As a number of commentators have pointed out, acquiring the performative information necessary for perfect control, consumes so much energy that it drastically reduces the energy available for making improvement inputs" (p. 221). In the case of the Alberta PSE system, the province set aside a mere $25 million in performance-based funding, which was hardly worth the high opportunity costs faced by the institutions. Lang (2013) acknowledges the difficulty of matching performance funding to the cost of the performance being measured. He points out that performance funding is linear: a given rise in a performance indicator always generates the same funding, yet not all performance increases are equal in cost (as cited in COU, 2015). It is clear that the performance funding in Alberta did not adequately compensate the cost of the performance being measured. As a result, the policy did not drive any real change in institutional outcomes and the initiative was eventually abandoned.
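To make Lang's point concrete, the short sketch below uses purely hypothetical dollar figures (not drawn from the Alberta program, Lang, or COU, 2015) to show how a flat per-point award can overpay an easy gain and underpay a hard one:

```python
# A minimal sketch, with hypothetical numbers, of Lang's observation that
# linear performance funding pays the same for every one-point gain in an
# indicator even though the cost of producing that gain is not constant.

AWARD_PER_POINT = 100_000  # hypothetical dollars earned per one-point gain

# Hypothetical marginal cost of raising a retention rate by one point,
# starting from different baselines (improvement gets harder near the top).
marginal_cost = {80: 60_000, 85: 110_000, 90: 250_000}

for baseline, cost in marginal_cost.items():
    net = AWARD_PER_POINT - cost
    print(f"From {baseline}%: award ${AWARD_PER_POINT:,}, cost ${cost:,}, net ${net:,}")

# The same one-point gain is profitable for an institution starting at 80%
# but a net loss for one already at 90% - the mismatch Lang describes.
```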

Source: Kerr, Stephen G. "Performance Funding of Public Universities: A Case Study." Journal of Academic Administration in Higher Education 7, no. 2 (Fall 2011): 47–59.

3.0 Performance-Based Funding in the United States

Performance-based funding in the United States began to emerge in 1979. By 2009, 26 states had implemented or experimented with PBF initiatives (Miao, 2012). Over the years, PBF has moved away from budgetary bonuses and become more integrated into base funding (COU, 2013). Miao (2012) profiles the following six states:

1. Ohio - Ohio started its PBF initiative by allocating 5% of total public funding to it in 2012. This allocation increased to 70% by 2015 (a high level of base funding relative to other states). Ohio developed three distinct funding formulas for universities, regional universities, and community colleges. System-wide indicators include: course completion, degree completion, special community-college indicators such as completion of developmental education courses, and transfer rates into a four-year college or university. Extra system-wide rewards are granted for "at-risk" student achievements, defined by demographic data collected by the state. Concerns about dramatic changes in funding were addressed with a cap of 1% on the amount of money an institution could lose in the first year. The first year was preceded by a learning year in which institutions received a detailed report on what the financial impact would have been under the new policy. Interestingly, Tandberg and Nicholas' (2013) study suggests the effect of PBF on four-year degree completions in Ohio was actually negative. However, they fail to recognize that degree completions are not the only measure of performance output for universities. Countering Tandberg and Nicholas, HCM Strategists (2015) claim that Ohio reported faster time-to-degree, and greater persistence and completion among at-risk students, as a result of PBF.

2. Pennsylvania - Pennsylvania started its PBF initiative in 2000 by allocating 8% of total public funding to it (a relatively low level of base funding). System-wide indicators include: degrees awarded, graduation rates, reduction in achievement gaps, diversity of faculty, and private donations. Since 2000, the state has reported a 10% increase in overall graduation rates and a 15% increase in retention rates. In contrast, Tandberg and Nicholas' (2013) study suggests the effect of PBF on four-year degree completions in Pennsylvania was actually negative. However, Pennsylvania's reported increase in graduation rates likely accounts for all degree levels, not just four-year degrees.

3. Indiana - Indiana allocates 5% of total public funding to PBF (a relatively low level of base funding). The same benchmarks are used for all institutions. Indicators include: degrees conferred, degree completion by low-income students, and the number of community-college transfers. In addition, $5,000 is awarded for each bachelor's degree produced and $3,500 for each associate's degree produced (see the sketch following this list). Of special interest is that enrollment is measured at the end of the year rather than at the beginning. Indiana is one of the few states where Tandberg and Nicholas' (2013) study suggests the effect of PBF on four-year degree completions was actually positive.

4. Tennessee - Tennessee allocates 80% of total public funding to PBF (a relatively high level of base funding). Indicators include: student retention, degree attainment, and completion of remedial courses. The funding formula is adjusted for each institution to account for differences in institutional focus; no targets are set, and the formula is based purely on output. While Tandberg and Nicholas (2013) state that their methodology could not measure the effect of PBF on degree completions in Tennessee, HCM Strategists (2015) claim that positive learning gains were reported at all institutions in the state.

5. Washington - Washington implemented its second attempt at PBF in 2009. Funds are allocated as supplemental funds rather than base funds (a relatively low level of bonus funding) and are calculated based on achievement points attained. Points are granted for: improvement in test scores, progress in remedial courses, earning 15 or 30 credits, receiving a degree or certificate, completing an apprenticeship program, and attainment of pre-college skills by at-risk students. Tandberg and Nicholas' (2013) study suggests the effect of PBF on two-year degree completions was positive. HCM Strategists (2015) claim that Washington reported a 12 percent boost in the number of momentum points achieved.

6. Louisiana - Louisiana allocates 5% of total public funding to PBF (a relatively low level of base funding) and, in 2010, implemented the GRAD Act, which allows schools to increase tuition by 10% each year if performance goals are met; this will eventually make up 25% of a school's budget. Indicators relate to: student performance, articulation, transfer, and workforce development. Tandberg and Nicholas' (2013) study suggests the effect of PBF on four-year degree completions in Louisiana was not significant.
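As a rough illustration of Indiana's per-degree component described in item 3 above, the sketch below applies the $5,000 and $3,500 awards reported by Miao (2012) to hypothetical degree counts; the function name and example figures are illustrative only:

```python
# A minimal sketch of Indiana's per-degree award as described by Miao (2012):
# $5,000 per bachelor's degree and $3,500 per associate's degree produced.
# The degree counts in the example are hypothetical.

BACHELOR_AWARD = 5_000   # dollars per bachelor's degree conferred
ASSOCIATE_AWARD = 3_500  # dollars per associate's degree conferred

def degree_award(bachelors: int, associates: int) -> int:
    """Return the per-degree portion of an institution's performance funding."""
    return bachelors * BACHELOR_AWARD + associates * ASSOCIATE_AWARD

# Hypothetical institution producing 1,200 bachelor's and 300 associate's degrees:
print(f"${degree_award(1_200, 300):,}")  # -> $7,050,000
```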

There are two major takeaways from the various funding models in the United States. One is that although it is important for all institutions to contribute to meeting the state's post-secondary goals, it is also important for the state to understand how each institution's unique mission and student population affect performance differently. "Metrics should allow for differences in institutional mission, student population and other characteristics. Some states have chosen to apply a few metrics across institutions, while adopting other unique metrics and weighting them differently across types of institutions" (HCM Strategists, 2015, p. 22). This is reminiscent of Osborne and Gaebler's early-1990s concept of "reinventing government," which combined policy direction with decentralization: "Reinventing government combined decentralization with direction, by being tight on setting goals and evaluating performance but loose in allowing managers to choose the means for achieving the desired results" (as cited in Burke, 2005, p. 8). Of the six states featured, Ohio and Tennessee seem to have the models that best account for institutional differences.

The other major takeaway is that, to counter the unintended consequence of rewarding institutions that mostly serve better-prepared students, there must be incentives in place to reward institutions that serve underrepresented populations. Stone (2012) reminds us that "manipulation is the Achilles' heel of pay-for-performance systems" (p. 200). It would not be surprising to see institutions struggling to meet output goals simply make admission standards more restrictive in order to boost completion rates, which may close doors to the populations that need the service the most. Of the six states featured, all but Washington have incentives that prioritize underrepresented students. For example, Tennessee awards a premium when low-income or adult students complete any of the metrics (COU, 2013). In the simplest terms, the funding formula needs both flexibility (in how things get measured) and balance (in what things get rewarded) in order to limit unintended consequences as much as possible.

Source: Miao, Kysie. "Performance-Based Funding of Higher Education: A Detailed Look at Best Practices in 6 States." Center for American Progress, August 1, 2012. http://www.americanprogress.org

4.0 Performance-Based Funding Consultation Report from the Ontario Government

The consultation report, "Focus on Outcomes, Centre on Students: Perspectives on Evolving Ontario's University Funding Model" (MTCU, 2015), begins with a brief mention of Ontario's Differentiation Policy Framework released in 2013. Key objectives for the PSE sector are highlighted, including: 1) support student success and access to a high-quality Ontario post-secondary education; 2) improve the global competitiveness of Ontario's post-secondary education system; 3) build on and help focus the well-established strengths of Ontario colleges and universities while avoiding unnecessary duplication; and 4) maintain an efficient and financially sustainable post-secondary education system. This is followed by a brief description of Ontario's current funding model (75% basic operating grant, 4% performance funding, and the remainder special purpose grants).

Feedback collected over five months of consultation with universities, colleges, students and employers is organized into alignment items and debate items (p. 32). The following list highlights the alignment items (points on which consultation participants generally agreed):

  • There is a need for improved data and reporting.
  • Student success is more than academics.
  • Experiential and entrepreneurial learning opportunities, research opportunities, and social opportunities, need to be offered to students to improve the student experience and future job prospects.
  • The new funding formula must recognize the integral role of sessional and non-academic staff in supporting quality teaching.
  • Differentiation and specialization efforts need to be enhanced.
  • Strategic Mandate Agreements should be a vehicle for differentiation.
  • Funding should accommodate various university circumstances.
  • Teaching and research excellence should be given equal weight.
  • There must be fairness in funding.
  • The funding model must be simple and transparent.
  • Funding should be predictable and stable.
  • Funding flexibility across programs and activities is important.
  • Addressing costs at universities is crucial.

Even more interesting are the debate items (points on which consultation participants differed), which are listed below and addressed further in the final section of this paper (p. 34):

  • The extent of tying outcomes to the funding formula.
  • What learning outcomes need to be measured and how?
  • What should be included in quality assurance (i.e., teaching and breadth of programming, student services, maintenance of facilities, etc.).
  • Whether universities should be more accountable for public dollars or whether this undermines professional autonomy.
  • The extent of tying enrollment to the new funding formula.

Some of the key recommendations of the consultation report include the following (p. 44):

  • The ministry should apply an outcomes lens to all its investments.
  • The success of PBF depends on valid and reliable data, easily accessible to the public.
  • The outcomes-based component of funding should be phased in and grow over time.
  • Full implementation should occur over the next two Strategic Mandate Agreement cycles (in time for the 2017 negotiations).
  • There should be ongoing engagement with the sector during the funding reform process.
  • For universities facing declining enrollment, there should be a support mechanism in place as a way of stabilizing finances.
  • After focusing on student success, the outcomes lens should be extended to include research excellence.


5.0 Conclusion and Recommendations

In the context of UK higher education, Hoecht (2006) describes how institutions have moved from a high level of trust and professional autonomy in the 1990s to a highly prescribed process today, which may be detrimental to innovative teaching and learning. Quality is important, but Hoecht argues that the type of quality management currently used does not achieve real improvements in teaching and learning. Similar concerns are apparent in the Ontario PSE context. Administrators and department chairs may feel overwhelmed by the bureaucracy of reporting outputs, and there is also a great risk of unintended consequences (e.g., students pushed through to degree completion just for the sake of meeting the institution's output goals).

In the Alberta case study, the PBF initiative ultimately failed because decision-makers could not agree on what outcomes to measure and the government did not offer enough of an incentive to justify the effort involved in collecting the data. Looking at the debate items listed in the MTCU consultation report alongside the best practices identified in the U.S. PBF models, the following suggestions are highlighted:

  • On the extent of tying outcomes to the funding formula - "Put enough funding at stake to create an incentive for institutions to improve results" (COU, 2013, p. 28).
  • On determining what learning outcomes need to be measured and how - the selection of metrics should depend on the different missions of the institutions and the unique populations they serve. "For example, research universities could be rewarded for research and development performance, while community colleges could be rewarded for workforce training results" (COU, 2013, p. 28). The goal of improved degree completion should be maintained but balanced with goals of progress and success (e.g. improved retention rates). The unintended consequence of admitting only students who are most likely to succeed can be balanced with incentives for serving under-represented or at-risk populations.
  • On what should be included in quality assurance (i.e., teaching and breadth of programming, student services, maintenance of facilities, etc.) - quality assurance should not be about academics alone. The MTCU consultation report makes it clear that students want improvements in the overall educational experience, including more experiential and entrepreneurial learning opportunities as well as social opportunities that will better prepare them for the workforce.
  • On whether universities should be more accountable for public dollars or whether this undermines professional autonomy - performance-based funding is a popular approach when a government is faced with heavy debt. Governments need to satisfy taxpayers and account for every dollar spent. For this reason, it is unlikely that performance indicators are going to go away. To reduce the administrative burden on teachers and administrators so that the reporting process is palatable, a simplified and transparent reporting structure needs to be established across the sector.
  • On the extent of tying enrollment to the new funding formula - enrollment still needs to be measured, but given the declining population of 18-to-20-year-old students across the country, it needs to have less of an impact. Stop-loss provisions need to be put in place to prevent institutions from losing more than a certain level of funding each year (Miao, 2012); a simple sketch follows this list.
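To illustrate the stop-loss idea in the last item above, the sketch below caps how much funding an institution can lose from one year to the next; the 1% cap borrows Ohio's first-year figure from section 3.0, while the grant amounts and function name are hypothetical:

```python
# A minimal sketch of a stop-loss provision: funding never drops more than
# a fixed percentage below the previous year's grant, even if the
# performance formula would pay less. All dollar amounts are hypothetical.

def apply_stop_loss(previous_grant: float, formula_grant: float, max_loss: float = 0.01) -> float:
    """Return the funded amount, floored at (1 - max_loss) of last year's grant."""
    floor = previous_grant * (1 - max_loss)
    return max(formula_grant, floor)

# Hypothetical institution: last year's grant was $50M, the formula yields $46M.
print(f"${apply_stop_loss(50_000_000, 46_000_000):,.0f}")  # -> $49,500,000
```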

The ultimate goal of performance-based funding is to get universities to improve outcomes aligned with the province's goals (i.e. economic growth and experiential learning). To achieve this, policy makers need to carefully select the right targets and structure incentives in a way that actually motivates institutions to make real change. Policy makers also need to build flexibility into the funding formula to account for the unique needs of each institution, so as to minimize obvious winners and losers. Finally, the reporting process must be kept as simple as possible to reduce the administrative burden on schools and teachers, letting them focus on their profession: teaching and engaging students.

Source: Hoecht, A. (2006). Quality assurance in UK higher education: Issues of trust, control, professional autonomy and accountability. Higher Education, (4), 541-563.