Research impact is assessed for a number of reasons: to enable research organizations, including HEIs, to monitor and manage their performance and to understand and disseminate the contribution that they are making to local, national, and international communities, and to inform funding decisions. Scriven (2007: 2) synthesised the definition of evaluation that appears in most dictionaries and in the professional literature, defining evaluation as "the process of determining merit, worth, or significance; an evaluation is a product of that process." As Whitehead (1929) put it, the justification for a university is that it preserves the connection between knowledge and the zest of life, by uniting the young and the old in the imaginative consideration of learning.

Media coverage is a useful means of disseminating our research and ideas and may be considered, alongside other evidence, as contributing to or an indicator of impact. A variety of types of indicators can be captured within systems; however, it is important that these are universally understood. Such indicators risk being monetized or converted into a lowest common denominator in an attempt to compare, for example, the cost of a new theatre against that of a hospital. The Social Return on Investment (SROI) guide (The SROI Network 2012) suggests that while the language varies (impact, returns, benefits, value), the questions around what sort of difference, and how much of a difference, we are making are the same. The Goldsmith report (Cooke and Nadim 2011) recommended making indicators value free, enabling the value or quality to be established in an impact descriptor that could be assessed by expert panels. One way in which change of opinion and user perceptions can be evidenced is by gathering stakeholder and user testimonies or undertaking surveys. In the UK, evidence of research impact will be assessed for the REF within research disciplines.

A discussion of the benefits and drawbacks of a range of evaluation tools (bibliometrics, economic rate of return, peer review, case study, logic modelling, and benchmarking) can be found in Grant (2006). The Payback Framework has been adopted internationally, largely within the health sector, by organizations such as the Canadian Institutes of Health Research, the Dutch Public Health Authority, the Australian National Health and Medical Research Council, and the Welfare Bureau in Hong Kong (Bernstein et al.). For more extensive reviews of the Payback Framework, see Davies et al. and Wooding et al. The SIAMPI framework, described below, is intended to be used as a learning tool to develop a better understanding of how research interactions lead to social impact, rather than as an assessment tool for judging, showcasing, or even linking impact to a specific piece of research.
The Oxford English Dictionary defines impact as a 'marked effect or influence', which is clearly a very broad definition. One might consider that by funding excellent research, impacts (including those that are unforeseen) will follow; traditionally, the assessment of university research has focused on academic quality and productivity. Donovan (2011) asserts that there should be no disincentive for conducting basic research.

The first attempt globally to comprehensively capture the socio-economic impact of research across all disciplines was undertaken for the Australian Research Quality Framework (RQF), using a case study approach. The criteria for assessment were also supported by a model developed by Brunel for the measurement of impact that used similar measures, defined as depth and spread. Table 1 summarizes some of the advantages and disadvantages of the case study approach. In developing the RQF, The Allen Consulting Group (2005) highlighted that defining a time lag between research and impact was difficult.

Studies into the economic gains from biomedical and health sciences (Buxton, Hanney and Jones 2004) determined that different methodologies provide different ways of considering economic benefits. Johnston (1995) notes that by developing relationships between researchers and industry, new research strategies can be developed.

The transfer of information electronically can be traced and reviewed to provide data on where and to whom research findings are going, and such techniques have the potential to provide a transformation in data capture and impact assessment (Jones and Grant 2013). The quality and reliability of impact indicators will vary according to the impact we are trying to describe and link to research. In designing systems and tools for collating data related to impact, it is important to consider who will populate the database and to ensure that the time and capability required to capture the information are taken into account.
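To make the data-capture point concrete, the sketch below shows the kind of minimal evidence record such a collation system might hold, tracking who entered each item and how long it took so that the burden of populating the database can be monitored. It is a Python illustration only; the class and field names are hypothetical and do not describe the schema of any existing system.

# A minimal sketch of an impact-evidence record for a collation system.
# All class and field names are hypothetical illustrations, not the schema
# of any existing system.
from dataclasses import dataclass
from datetime import date
from typing import Dict, List


@dataclass
class EvidenceRecord:
    """One piece of evidence linking research outputs to a claimed impact."""
    description: str           # what happened, e.g. a policy citation or media item
    linked_outputs: List[str]  # identifiers of the underlying research outputs
    recorded_by: str           # "researcher", "administrator", or "automatic"
    recorded_on: date
    minutes_to_record: int     # captured so the burden of data entry can be monitored


def recording_burden(records: List[EvidenceRecord]) -> Dict[str, int]:
    """Total data-entry time in minutes, split by who populated the database."""
    burden: Dict[str, int] = {}
    for rec in records:
        burden[rec.recorded_by] = burden.get(rec.recorded_by, 0) + rec.minutes_to_record
    return burden


if __name__ == "__main__":
    sample = [
        EvidenceRecord("Cited in a clinical guideline", ["output-42"],
                       "administrator", date(2013, 5, 1), 25),
        EvidenceRecord("Dataset downloaded by an NGO", ["output-7"],
                       "automatic", date(2013, 5, 3), 0),
    ]
    print(recording_burden(sample))  # {'administrator': 25, 'automatic': 0}

A summary of this kind would let an institution see at a glance how much of the recording effort falls on researchers, on administrators, or on automated capture.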
Throughout history, the activities of a university have been to provide both education and research, but the fundamental purpose of a university was perhaps best described in the writings of the mathematician and philosopher Alfred North Whitehead (1929). In the UK, more sophisticated assessments of impact incorporating wider socio-economic benefits were first investigated within the fields of biomedical and health sciences (Grant 2006), an area of research that wanted to be able to justify the significant investment it received. What, then, are the challenges associated with understanding and evaluating research impact? Here we outline a few of the most notable models that demonstrate the contrast in the approaches available.

The RQF pioneered the case study approach to assessing research impact; however, with a change in government in 2007, this framework was never implemented in Australia, although it has since been taken up and adapted for the UK REF. A key concern here is that we could find that universities which can afford to employ either consultants or impact administrators will generate the best case studies. A very different approach, known as Social Impact Assessment Methods for research and funding instruments through the study of Productive Interactions (SIAMPI), was developed from the Dutch project Evaluating Research in Context and has a central theme of capturing the productive interactions between researchers and stakeholders by analysing the networks that evolve during research programmes (Spaapen and Drooge 2011; Spaapen et al.). Such a framework should be not linear but recursive, including elements from contextual environments that influence and/or interact with various aspects of the system.

If knowledge exchange events could be captured, for example electronically as they occur, or automatically if flagged from an electronic calendar or a diary, then far more of these events could be recorded with relative ease; a simple sketch of such capture follows.
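As an illustration of the calendar-flagging idea above, the Python sketch below harvests knowledge-exchange events from exported diary entries. The '#KE' tag and all field names are invented conventions for the purpose of the example, not features of any particular calendar system.

# Sketch: harvesting knowledge-exchange (KE) events from exported calendar entries.
# The '#KE' flag and all field names are invented conventions for illustration only.
from datetime import datetime
from typing import Dict, List

KE_FLAG = "#KE"


def harvest_ke_events(calendar_items: List[Dict]) -> List[Dict]:
    """Return a KE event record for every calendar entry the researcher has flagged."""
    events = []
    for item in calendar_items:
        if KE_FLAG in item.get("title", ""):
            events.append({
                "when": item["start"],
                "partners": item.get("attendees", []),
                "summary": item["title"].replace(KE_FLAG, "").strip(),
            })
    return events


if __name__ == "__main__":
    diary = [
        {"title": "#KE Briefing for city council on flood model",
         "start": datetime(2013, 6, 9, 14, 0),
         "attendees": ["City council planning team"]},
        {"title": "Lab meeting", "start": datetime(2013, 6, 10, 9, 0)},
    ]
    for event in harvest_ke_events(diary):
        print(event["when"], "-", event["summary"])

The point of the sketch is only that a lightweight flag applied at the time of the interaction is far less burdensome than reconstructing such events retrospectively.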
It is therefore in an institution's interest to have a process by which all the necessary information is captured, so that a story can be developed even in the absence of a researcher who may have left the employment of the institution. To allow comparisons between institutions, identifying a comprehensive taxonomy of impact, and of the evidence for it, that can be used universally is seen as very valuable (a small illustrative sketch of such a shared vocabulary is given below). Here we address the types of evidence that need to be captured to enable an overview of impact to be developed.

Collecting and recording such evidence also carries a cost in researcher time. A survey of researchers at top US research institutions, carried out during 2005 and published in 2007, found that, among the more than 6,000 researchers who responded, on average more than 40% of their time was spent on administrative tasks. A petition organized by the University and College Union opposing the proposed use of impact in the REF was signed by 17,570 academics (52,409 academics were returned to the 2008 Research Assessment Exercise), including Nobel laureates and Fellows of the Royal Society (University and College Union 2011).
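By way of illustration only, the sketch below shows how a shared, universally understood vocabulary of impact evidence might be encoded so that records from different institutions can be exchanged and compared. The category names are invented examples drawn loosely from the kinds of evidence discussed in this article, not an agreed or recommended taxonomy.

# Sketch: a shared vocabulary of impact-evidence categories, so that records from
# different institutions can be compared. The category names are invented examples,
# not an agreed or recommended taxonomy.
from enum import Enum


class EvidenceCategory(Enum):
    MEDIA_COVERAGE = "media coverage"
    STAKEHOLDER_TESTIMONY = "stakeholder testimony"
    KNOWLEDGE_EXCHANGE_EVENT = "knowledge exchange event"
    ECONOMIC_BENEFIT = "economic benefit"


def to_category(label: str) -> EvidenceCategory:
    """Map a free-text label onto the shared vocabulary, or raise if unrecognised."""
    normalised = label.lower().strip()
    for category in EvidenceCategory:
        if category.value == normalised:
            return category
    raise ValueError(f"Unrecognised evidence category: {label!r}")


if __name__ == "__main__":
    print(to_category("Stakeholder testimony"))  # EvidenceCategory.STAKEHOLDER_TESTIMONY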
Capturing knowledge exchange events would greatly assist the linking of research with impact. Timing also matters: there is clearly the possibility that a potential new drug will fail at any one of the development phases, but each phase can be classed as an interim impact of the original discovery work en route to the delivery of health benefits, and the time at which an impact assessment takes place will therefore influence the degree of impact observed. It is also acknowledged that one of the outcomes of developing new knowledge through research can be 'knowledge creep', where new data or information becomes accepted and gets absorbed over time.
The term 'assessment' may be defined in multiple ways by different individuals or institutions, perhaps with different goals. While the case study is a useful way of showcasing impact, its limitations must be understood if we are to use it for evaluation purposes. If basic research is to be assessed alongside more applied research, it is important that we are able, at the very least, to determine the contribution that basic research makes.

It is possible to incorporate both metrics and narratives within systems, for example within the Research Outcomes System and Researchfish, currently used by several of the UK research councils to record impacts. Although recording narratives has the advantage of allowing some context to be documented, it may make the evidence less flexible for use by the different stakeholder groups (government, funding bodies, research assessment agencies, research providers, and user communities) for whom the purpose of analysis may vary (Davies et al. 2005; Wooding et al.).
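To illustrate the metrics-plus-narrative point above, the Python sketch below shows one way a single impact record could hold both kinds of evidence. The field names are hypothetical and are not drawn from the Research Outcomes System, Researchfish, or any other live system.

# Sketch: a single impact record holding both quantitative metrics and a narrative.
# Field names are hypothetical and are not drawn from the Research Outcomes System,
# Researchfish, or any other live system.
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class ImpactRecord:
    title: str
    narrative: str  # context: who was affected, how, and why it matters
    metrics: Dict[str, float] = field(default_factory=dict)  # e.g. {"jobs_created": 12}

    def metrics_view(self) -> Dict[str, float]:
        """A numbers-only view for stakeholders whose analysis ignores the narrative."""
        return dict(self.metrics)


if __name__ == "__main__":
    record = ImpactRecord(
        title="Flood model adopted by regional planners",
        narrative="Planning decisions in two districts now draw on the model's projections.",
        metrics={"districts_adopting": 2, "briefings_delivered": 5},
    )
    print(record.metrics_view())

Keeping the narrative alongside the metrics preserves the context for expert review while still allowing a purely quantitative view to be extracted for stakeholders who want one.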
Collecting evidence of this kind is time-consuming and, again, it can be difficult to gather the required evidence retrospectively when, for example, the appropriate user group has dispersed.