Research evaluation in context 2: One joint protocol, three criteria and four aspects

This is the second in a series of blog posts on research evaluation in the Netherlands. This post is dedicated to the Strategy Evaluation Protocol 2021-2027: the evaluation goals and the criteria and aspects that need to be addressed in an evaluation.

A joint protocol…

Ever since 2003, the Association of Universities in the Netherlands (VSNU), the Netherlands Organisation for Scientific Research (NWO) and the Royal Netherlands Academy of Arts and Sciences (KNAW) have shared the responsibility for the evaluation of all academic research units. They ensure that an evaluation protocol is in place and regularly updated, and that all academic research is evaluated once every six years, the duration of a protocol.

VSNU represents 14 large, research-intensive universities. Four smaller publicly funded universities use the SEP as well. NWO and KNAW each govern a number of academic research institutes. The universities of applied sciences have a different protocol, though their system is broadly similar.

The current Strategy Evaluation Protocol 2021-2027 (SEP 2021-2027) is the fourth since the introduction of the joint protocols in 2003. The text of this protocol is a major source for this post.

… for the formative evaluation…

An evaluation addresses past developments, strategies and achievements as well as future plans. The latter aspect is crucial: the idea is that the evaluation contributes to maintaining and, where necessary, improving the quality and relevance of the research.

The current protocol presents the evaluation as an integral part of an ongoing quality assurance cycle. The evaluation should facilitate a continuous dialogue between the unit and its board. The evaluation committee is asked to reflect on strengths and, constructively, on weaknesses, and to suggest where and how improvements could be made. These recommendations serve as input for the quality assurance cycle.

… of academic research units….

The evaluation takes place at the level of a research unit. The protocol characterizes a unit as an entity that is known in its own right, both within and outside the institution, and that is sufficiently large: the SEP indicates at least ten research FTEs among its permanent academic staff.

An evaluation shouldn’t concern a random collection of researchers who happen to work on the same floor, but a clearly identifiable entity. Moreover, the evaluation does not concern a collection of research outputs, but the strategy of the unit.

… in light of their own context, aims and strategy.

One major change compared to the previous protocol is its name. Until 2021, the protocol was called the Standard Evaluation Protocol; since 2021, it is the Strategy Evaluation Protocol. The protocol stresses, even more than before, that the goal is to evaluate a research unit in light of its own aims and strategy. An evaluation is not so much focused on the research itself as on the unit’s strategy with regard to research.

The context in which a unit operates should be taken into account as well. The influence of the organization, with its policies and strategies, and the discipline, with its practices and quality standards, shouldn’t be ignored.

In order to take these specific aspects into account, the protocol leaves room for plurality with respect to the application and interpretation of the different elements.

The evaluation criteria are research quality,

Research quality is the first of the three evaluation criteria. Central are the contribution to the body of academic knowledge, the quality and scientific relevance of the research, and the academic reputation and leadership within the field. Research quality is not specified further. The protocol invites the unit to propose indicators for research quality that fit its own context and strategy, and to explain what those indicators actually indicate. The protocol provides no benchmarks, nor does it prescribe their use; it doesn’t provide lists of indicators either. In other words: how research quality is operationalized is partly up to the unit itself.

… societal relevance,

Societal relevance is the second of the three evaluation criteria. The protocol suggests how societal can be understood: in economic, social, cultural, educational or any other terms that may be relevant. It also suggests an interpretation of relevance: impact, public engagement and uptake. Again, the protocol invites units to choose indicators, including case studies, that suit the nature, context and strategy of the unit.

… and viability…

The final criterion is the viability of the unit. Here the focus shifts from a retrospective to a forward-looking view. The unit is asked to provide information on its future goals, plans and strategy. Viability relates to the extent to which the unit’s future goals are relevant and to whether its strategy fits those goals.

… plus, four additional aspects.

Over the years, specific and diverse elements have been introduced to the protocol. They are now characterized as aspects that need to be addressed during the evaluation. They are 1) Open Science, 2) PhD Policy and Training, 3) Academic Culture and 4) Human Resources Policy.

For Open Science, the protocol explains that this covers the involvement of stakeholders in research, FAIR data practices, Open Access publishing, etc. It also refers to the Dutch National Programme on Open Science, especially for the definition of Open Science and Open Science practices. This was done because the definition of Open Science is still developing: the protocol was written in 2019 and early 2020 and will be used until 2027, and by then Open Science will almost certainly have a different connotation than in the late 2010s.

PhD policy and training covers the supervision and instruction of PhD candidates. Here the Dutch context is important: in the vast majority of cases, PhD candidates are not registered as students. Usually, they are employed by the university as (temporary) scientific staff, tasked with doing research. There is also a substantial number of external PhD candidates, who are employed elsewhere and do their PhD research under the supervision of the unit’s scientific staff. The implication is that PhD policy and training is not assessed in a regular teaching assessment.

Academic culture is defined as openness, (social) safety and inclusivity of the research environment. It includes multiplicity of perspectives and identities. Academic culture also covers research integrity and ethics.

The final aspect, which partly relates to the previous one, is Human Resources Policy. This includes diversity of staff in terms of gender, age, ethnic and cultural background, and disciplinary background. It also covers talent management. There is a strong link with current developments in Dutch academia regarding the recognition and rewards of academic staff. More on that in a future blog post.

But first: the evaluation procedure and the roles and responsibilities of the board, the unit and the committee. That is the subject of the next blog post.

