FCT R&D Units Evaluation by ESF

ESF mobilised more than 650 international experts from 46 countries to evaluate FCT’s research units.

Over more than three decades, ESF has developed a strong and deep knowledge of peer review and evaluation processes. In 2011, ESF issued, together with its Member Organisations, the European Peer Review Guide – Integrating Policies and Practices into Coherent Procedures. The guide resulted from an in-depth analysis of peer review processes implemented by the leading European public research funding organisations. This benchmarking and synthesising exercise allowed ESF to refine and improve its approach and internal processes in order to integrate some of the best practices identified during that analysis.


Similarly, ESF organised a detailed review of evaluation practices across European research organisations that allowed it to develop a strong understanding of the complex issues raised by scientific evaluation. Following on from this review, in 2012, ESF issued a report entitled Evaluation in Research and Research Funding Organisations: European Practices.


For its own programmes as well as for national and European institutions, ESF implements scientific peer review and evaluation across all domains of science at the levels of funding proposals, projects, programmes and national research systems.

ESF was commissioned by FCT to set up review panels and implement the first stage of the 2013 Research Units evaluation. The independent evaluation delivered by ESF, its panel members and expert peer reviewers was performed in line with the highest international standards.


This important evaluation exercise did not look at the impact of individual researchers or research project proposals. The objective of the evaluation was to assess merit at the level of research units and their plans in a holistic manner. The assessment was primarily based on the applications submitted by the research units and was performed by experts in the field, who also drew on bibliometric information compiled by Elsevier. The evaluation exercise considered the historical and current performance of individual research units as well as their six-year future research strategy, in terms of its content and implementation strengths. The following evaluation criteria were at the core of the assessment:

  • The performance of the research units

    • Productivity and contribution to the National Scientific and Technological System (criterion A),
    • Scientific and technological merit of the research team (criterion B);

  • The research units’ strategic programme

    • Scientific merit and innovative nature of the strategic programme (criterion C),
    • Feasibility of work plan and reasonability of the requested budget (criterion D).

The assessment process was conducted by 659 international experts from 46 countries, selected specifically for their expertise and independence (absence of conflict of interest). These experts acted either as remote reviewers or as review panel members. The overall assessment process combined a targeted subject-specific review (performed by external referees) with a domain-specific review (performed by review panels).


To conduct the assessment, ESF independently set up six disciplinary review panels:

  • Exact Sciences
  • Engineering Sciences
  • Health and Life Sciences
  • Natural and Environmental Sciences
  • Social Sciences
  • Humanities


In keeping with the specific nature of the evaluation, these panels typically involved heads of laboratories, institutes or departments, and members of international committees and panels. Besides their duties as research managers, review panel members also had a high academic profile.


Collegiality and consensus were at the core of the first stage of the evaluation. Each application was assigned two review panel members, one lead and one secondary rapporteur, who acted as privileged readers. Each panel’s assessment process ensured that a consensual evaluation was reached for all research units falling under its disciplinary remit. In addition to the six disciplinary review panels, ESF also set up a seventh panel, composed of members from the other six panels. This panel assessed applications submitted under a multi-disciplinary heading. It focussed on the specifics of the multi-disciplinary applications while maintaining continuity and consistency across the process.


To feed into and complement the domain-specific assessment performed by the review panels, ESF independently identified and appointed two remote reviewers for each application submitted, in addition to the review panel rapporteurs. These experts, who generally assessed a single application, were identified and selected for their in-depth knowledge of the research topics put forward in the applications. These more detailed subject-specific assessments were non-binding inputs that added a further quality dimension to the assessment process, providing the review panels with additional insight and perspectives on the performance and strategic plans of the research units evaluated.

A rebuttal step was also incorporated into the process, in line with good practice and transparency. The rebuttal opportunity gave research units access to the two external reviewers’ reports as well as the preliminary assessment of one review panel member, so that they could comment on these reports and clarify factual points or misunderstandings before the review panels met. This direct feedback was considered a critical element of the panel work by its members.

The review panels met between 26 and 29 May 2014. They reviewed the applications submitted by the research units in the light of the available bibliometric information, the remote reviewers’ reports, the panel members’ preliminary assessments and the research units’ rebuttal texts. Each application was discussed in detail and, once a consensus and a coherent ranking of units had been reached, marks (substantiated by comments) were attributed for each criterion. Review panels also identified key questions to be addressed during site visits by units proceeding to stage two.

As the final result of this first stage of the evaluation, announced by FCT on 2 October 2014, 20% of the research units were graded ‘fair’ or ‘poor’ and 25% were rated as ‘good’. The remaining 55%, rated ‘very good’ or above, move on to the second stage of the evaluation and will be visited by groups of review panel members between July and November 2014. The review panels will re-convene in November 2014 to finalise the evaluation.

The European Science Foundation wishes to warmly thank the review panel members and remote reviewers for their dedication and commitment to a fair and transparent process.

Link to FCT’s 2013 evaluation webpage:

http://www.fct.pt/apoios/unidades/avaliacoes/2013/index.phtml.en

FCT Statement on the 2013 Evaluation of R&D Units:

http://www.fct.pt/noticias/index.phtml.en?id=89&/2014/7/Statement_on_2013_Evaluation_of_R&D_Units