| Excellence in Research for Australia | |
|---|---|
| Name | Excellence in Research for Australia |
| Established | 2008 |
| Country | Australia |
| Administered by | Australian Research Council |
| Predecessor | Australian Research Council ERA Pilot |
Excellence in Research for Australia is a national research assessment initiative that evaluated research quality within Australian higher education institutions. The exercise produced comparative measures of research performance across disciplines and institutions, influencing allocations by funding bodies and informing policy debates involving ministers, agencies, and universities. It interacted with international assessments and institutional strategies in the late 2000s and 2010s.
The initiative linked university submissions to discipline classifications used by the Australian Research Council, aligning outcomes with policy frameworks involving the Commonwealth of Australia, the Department of Education, Employment and Workplace Relations, and the Department of Industry, Innovation and Science. Institutions such as the University of Melbourne, University of Sydney, Australian National University, University of Queensland, and Monash University participated alongside non-metropolitan providers like the University of New England and vocationally oriented providers such as the Hunter Institute of TAFE in coordinated rounds. The process drew comparisons with international exercises including the Research Excellence Framework, the United Kingdom Research Assessment Exercise, the Leiden Ranking, the Times Higher Education World University Rankings, and the ShanghaiRanking.
Development began after reviews involving the Bradley Review of Higher Education 2008 and consultations with the Commonwealth Scientific and Industrial Research Organisation and the Group of Eight. Early pilots referenced methods from the Research Assessment Exercise 2001 and the Higher Education Funding Council for England. Stakeholders included vice-chancellors such as Ian Young, research offices at institutions such as the University of Western Australia and the University of Tasmania, and peak bodies such as Universities Australia and the Australian Vice-Chancellors' Committee. Subsequent rounds were shaped by policy instruments influenced by ministers from the Rudd Ministry and the Gillard Ministry.
The framework combined peer review processes with bibliometric indicators drawn from databases such as Scopus, Web of Science, and citation analyses used by the Centre for Science and Technology Studies. Panels convened experts from institutions including the Walter and Eliza Hall Institute of Medical Research, the CSIRO, and the Garvan Institute of Medical Research. Disciplines were mapped to classifications comparable to the Field of Research codes used by the Australian Bureau of Statistics in collaboration with statistical producers like the Australian Bureau of Agricultural and Resource Economics. Metrics included citation impact, output volume, and esteem indicators referencing awards such as the Prime Minister's Prize for Science, the Australia Prize, and fellowships from the Australian Academy of Science and the Australian Academy of the Humanities.
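The discipline-normalised citation indicators described above can be illustrated with a minimal sketch. This is a hypothetical example, not the ARC's actual methodology: the function, field codes, and benchmark values are all invented for illustration, showing only the general idea of comparing a unit's average citations per output against a world benchmark for the same Field of Research code.

```python
# Hypothetical sketch of a relative citation impact (RCI) indicator,
# the general kind of discipline-normalised metric ERA combined with
# peer review. Codes and benchmark values below are invented.
from collections import defaultdict

def relative_citation_impact(outputs, world_benchmarks):
    """outputs: iterable of (for_code, citation_count) pairs.
    world_benchmarks: dict mapping for_code -> assumed world average
    citations per output. Returns for_code -> RCI (1.0 = world average)."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for for_code, citations in outputs:
        totals[for_code] += citations
        counts[for_code] += 1
    return {
        code: (totals[code] / counts[code]) / world_benchmarks[code]
        for code in counts
    }

# Illustrative data: FoR-like codes with invented citation counts.
outputs = [("0101", 12), ("0101", 8), ("0801", 3), ("0801", 5)]
benchmarks = {"0101": 10.0, "0801": 5.0}
print(relative_citation_impact(outputs, benchmarks))
# {'0101': 1.0, '0801': 0.8}
```

An RCI of 1.0 means the unit's outputs are cited at the world average for that field; normalising per field code is what makes outputs in low-citation disciplines comparable with those in high-citation ones.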
Outcomes influenced grant distributions from bodies including the Australian Research Council, the National Health and Medical Research Council, and state agencies such as the Victorian Department of Jobs, Precincts and Regions. Institutional rankings affected strategic decisions at the University of Adelaide, Griffith University, La Trobe University, and Deakin University, and informed capital investments by entities like the New South Wales Treasury and philanthropic donors such as the Ian Potter Foundation. The assessment informed higher education policy debates in parliaments, involving committees such as the Senate of Australia standing committees and inquiries initiated by the Productivity Commission.
Critiques emerged from academics associated with the National Tertiary Education Union, scholars from the Centre for the Study of Higher Education at the University of Melbourne, and commentators in forums such as Australian Broadcasting Corporation debates and submissions to the Australian Senate Education and Employment References Committee. Concerns included methodological bias noted by researchers at the University of Technology Sydney, the potential narrowing of research agendas echoed by faculty at the University of Wollongong, and disputes over the treatment of creative outputs raised by members of the Australia Council for the Arts. Legal and administrative challenges referenced transparency standards used in public sector reviews overseen by state Ombudsmen and audits by the Australian National Audit Office.
Institutions responded by creating internal quality assurance units modelled after groups at the University of New South Wales, establishing research strategy offices like those at the Curtin University of Technology, and launching recruitment drives attracting scholars from international institutions such as Harvard University, Oxford University, and the University of California, Berkeley. Some universities revised promotion criteria to emphasise metrics similar to those used in the assessment, drawing on examples from Imperial College London and the Massachusetts Institute of Technology. Collaborative networks formed between the Australian Research Council and partners including the European Research Council and the National Science Foundation to align benchmarking practices. The initiative contributed to institutional planning, affecting library collections at entities like the State Library of Victoria and research infrastructure investments involving the Australian Synchrotron and the National Computational Infrastructure.
Category:Australian higher education
Category:Research assessment