Performance indicators for environmental inspection systems
2008 - 2012
Project description and aims
The aim of the project was to develop performance indicators for environmental inspectorates. On the scope of the project, it was agreed that it should cover indicators related to the RMCEI (the EU Recommendation providing for Minimum Criteria for Environmental Inspections). The indicators should include input, output and outcome indicators.
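To make the three categories concrete, the following is a minimal illustrative sketch in Python. The example indicators (staff numbers, inspections carried out, compliance rates) are hypothetical and are not taken from the project's own indicator list.

```python
from dataclasses import dataclass
from enum import Enum

class IndicatorType(Enum):
    INPUT = "input"      # resources available to the inspectorate
    OUTPUT = "output"    # activities the inspectorate carries out
    OUTCOME = "outcome"  # effects on compliance and the environment

@dataclass
class Indicator:
    name: str
    type: IndicatorType
    unit: str

# Hypothetical examples of each category; the project's actual
# indicators were developed in the phases described below.
INDICATORS = [
    Indicator("inspection staff", IndicatorType.INPUT, "full-time equivalents"),
    Indicator("site inspections carried out", IndicatorType.OUTPUT, "inspections/year"),
    Indicator("installations in compliance", IndicatorType.OUTCOME, "% of installations"),
]
```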
Brainstorm phase
The first phase took place in 2008. The group agreed on a shortlist of indicators to be analysed further in a future IMPEL project. Under that project, volunteer countries would provide the necessary data for the selected indicators, which would then be analysed and discussed. As gathering data on all installations covered by the RMCEI was deemed too burdensome, it was agreed that the project should focus on IPPC installations. A further limitation to individual sectors under the IPPC Directive could be considered if this would lead to more comparable and representative data. The scope was to be evaluated at the end of the project.
Defining the indicators
The aim of this exercise was to define the 10 performance indicators proposed by the 2008 IMPEL project “Brainstorming on an IMPEL Project to develop performance indicators for environmental inspectorates”, to assess their strengths and weaknesses, and to run a pilot test among a shortlist of IMPEL members. On this basis, a revised and as precisely defined as possible list of indicators was proposed, together with a qualitative assessment of each of the indicators.
Throughout the project, owing to the many political and operational differences between Member States, defining EU-wide comparable indicators proved extremely difficult. The pilot demonstrated that comparability is often low, the availability of data variable and the range of answers wide. It was also agreed that the proposed list of indicators does not characterise the effectiveness of the inspectorates; it provides only a partial assessment of their capacity.
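The kind of spread the pilot revealed can be illustrated with a small sketch. The figures below are invented for illustration and do not come from the pilot data; a high coefficient of variation across countries is one simple signal that an indicator may not be comparable.

```python
from statistics import mean, stdev

# Invented pilot-style returns for one indicator (e.g. inspections
# per IPPC installation per year) from a handful of countries.
returns = {
    "A": 1.2,
    "B": 0.4,
    "C": 3.5,
    "D": 0.9,
}

values = list(returns.values())
cv = stdev(values) / mean(values)  # coefficient of variation

# A large spread relative to the mean suggests the indicator is
# being measured or defined differently across countries.
print(f"mean={mean(values):.2f}, cv={cv:.2f}")
if cv > 0.5:
    print("wide range of answers: low comparability likely")
```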
Some recommendations were made on how to use them: in particular, it is better to use several indicators than one, and indicators need to be combined with quality-oriented instruments. It was concluded that it would be helpful to organise an in-depth discussion between IMPEL, the Commission and other relevant parties such as the OECD, to explore further which qualitative and quantitative assessment tools, such as audits, peer reviews (IRI), concrete sector- or directive-specific output and outcome indicators, and combinations of these, could be used for EU-wide monitoring of the performance of inspectorates.
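One way to read the “several indicators, not one” recommendation is to present indicators side by side as a profile rather than collapsing them into a single score. The sketch below is an illustrative approach with invented, already-normalised values, not the project's own method.

```python
# Illustrative profile view: several normalised indicators are shown
# together instead of being reduced to one ranking figure.
profile = {
    "inspection staff per installation": 0.80,  # input
    "planned inspections completed": 0.95,      # output
    "compliance rate": 0.70,                    # outcome
}

for name, score in profile.items():
    bar = "#" * round(score * 20)
    print(f"{name:38s} {bar} {score:.2f}")
```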
Exploring qualitative and quantitative assessment tools to evaluate the performance of environmental inspectorates across the EU
This project set out to examine the current and potential use of assessment tools to evaluate environmental inspectorates across the EU.
Various indicators and assessment tools were examined and their use evaluated for three types of assessment:
- use by an individual inspectorate to measure its performance and identify areas for improvement;
- external verification that an inspectorate has the necessary ‘building blocks’ in place to operate effectively; and
- where possible, comparison of inspectorates within a Member State and across the EU.
This project has not been able to identify a single set of numerical indicators that can be incorporated into assessment tools and used in a fair and meaningful way to rank inspectorates’ performance across the European Union, because the circumstances in which each inspectorate operates can differ significantly. However, the project has identified principles that may allow for limited comparison based on outcome indicators. The primary purpose of such comparisons should be to allow inspectorates to understand which actions have made the greatest contribution to outcomes (i.e. causality), to better understand alternative approaches that are effective in certain circumstances, and to encourage the sharing of best practice. Using outcome indicators to produce league tables or other types of ranking is not recommended: different approaches to data verification, the lack of common definitions and differences in local context are unlikely to result in fair or meaningful comparison, and risk the assessment becoming a source of dispute rather than a tool for improvement.
Number: 2008/03 - 2009/03 - 2011/08 – Status: Completed – Period: 2008 - 2012 – Topic: Cross-cutting tools and approaches