Measuring Quality in Libraries

Didar Bayır & Bill Simpson

Didar Bayır, Koç University Suna Kıraç Library, Rumelifeneri Yolu Sariyer, Istanbul 34450, Turkey, dbayir@ku.edu.tr

Bill Simpson, John Rylands University Library, University of Manchester, Oxford Road, Manchester M13 9PP, UK, Bill.Simpson@manchester.ac.uk

Introduction

This paper is based on the proceedings of a Seminar held at the Bibliothèque Nationale in Paris on 23 March 2007. The Seminar involved 26 invited participants from eleven countries and included nine presentations followed by detailed discussion and the exchange of ideas. The broad themes were: Tools for Quality Measurement, Standards and Performance Indicators, Benchmarking and Auditing. The Seminar’s agreed purpose was to identify the available tools, such as LibQUAL+ and the ISO standards; to explore the results of the assessments undertaken; to agree actions to take forward; to look at the issues from a European perspective; and to establish a basis for comparisons across Europe.

A variety of methodologies

Martha Kyrillidou’s presentation on LibQUAL+ covered the widespread use of the methodology, which has already been translated into a number of European languages. She emphasised that it is user-centred and measures users’ impressions of the library under a number of headings, without taking account of finance or cost-effectiveness: a powerful tool for measuring user satisfaction, but less useful from a purely management perspective.

Stephen Town’s presentation pointed to a wider range of options from a management perspective, including quality assessment, peer review, performance indicators, satisfaction surveys and total quality management, some of which can be used in combination. He pointed out that, for a university, two ‘bottom lines’ are crucial: the financial and the academic (the value added by the library to research, teaching and learning), and that the library must have a positive impact on both. It is also crucial that librarians have the information needed for upward advocacy of the library’s cause within the parent institution, and that the library is capable of moving from a ‘business as usual’ model to the development and management of new initiatives (the Capability Maturity Model). Demonstrating the library’s impact and value (how much value does each €1 invested in the library deliver for research and teaching, and what value do library staff add to the institution?) will be a vital part of this process of advocacy and positive change.
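The value-per-€1 figure Town alluded to can be made concrete with a simple ratio. The sketch below is a minimal illustration of such a calculation, not a method proposed at the Seminar; the figures, and the attribution of institutional value to the library, are entirely hypothetical.

```python
# Illustrative value-for-money ratio: value attributed to the library
# per euro of library spend. All figures are hypothetical.

def value_per_euro(attributed_value: float, library_spend: float) -> float:
    """Return the value (in euros) generated per euro of library spend."""
    if library_spend <= 0:
        raise ValueError("library spend must be positive")
    return attributed_value / library_spend

# e.g. research and teaching income judged (by survey or expert estimate)
# to depend on library support, set against the library's annual budget
research_value = 1_200_000   # hypothetical
teaching_value = 800_000     # hypothetical
library_budget = 1_000_000   # hypothetical

total_value = research_value + teaching_value
print(f"Value per €1 invested: €{value_per_euro(total_value, library_budget):.2f}")
```

The hard part in practice is of course the attribution step, not the arithmetic: deciding how much of an institution’s research and teaching value can credibly be ascribed to the library.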

Pierre-Yves Renaud covered ISO 2789 (international library statistics) and ISO 11620 (library performance indicators), emphasising that, while neither standard is new, both are well established and internationally recognised and can express, with a common set of concepts and parameters, whatever can be clearly and effectively defined. He drew attention to the need for a common standard able to synthesise a set of data into a single figure that can be used for comparative purposes.
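The ‘single figure’ Renaud called for is essentially a composite indicator. The sketch below shows one common way to build such a figure: min-max normalising each indicator against agreed reference values and combining the results with agreed weights. The indicator names, reference ranges and weights here are assumptions for illustration only, and are not specified by ISO 2789 or ISO 11620.

```python
# Illustrative composite score: min-max normalise each indicator against
# agreed reference values, then take a weighted sum. Indicator names,
# reference ranges and weights are hypothetical, not taken from ISO 11620.

INDICATORS = {
    # name: (observed value, worst reference, best reference, weight)
    "loans_per_capita":     (14.0,  0.0, 40.0, 0.3),
    "seat_occupancy_rate":  (0.62,  0.0,  1.0, 0.2),
    "cost_per_download":    (1.8,  10.0,  0.2, 0.3),  # lower is better
    "staff_per_1000_users": (2.4,   0.0,  6.0, 0.2),
}

def composite_score(indicators: dict) -> float:
    """Combine normalised indicators into a single figure out of 100."""
    total = 0.0
    for value, worst, best, weight in indicators.values():
        normalised = (value - worst) / (best - worst)  # 1.0 = best reference
        normalised = min(max(normalised, 0.0), 1.0)    # clamp to [0, 1]
        total += weight * normalised
    return 100.0 * total

print(f"Composite quality score: {composite_score(INDICATORS):.1f}/100")
```

Because ‘lower is better’ indicators can be handled simply by reversing the reference range, a scheme of this kind lets heterogeneous measures be expressed on one scale, which is what makes the single comparative figure possible.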

Willy Vanderpijpen focused on the specific needs of national libraries, with reference to the Working Group on Quality Measures for National Libraries. He described several initiatives, stressed the need for co-ordination and collaboration, and proposed a range of indicators covering areas from quality of service to efficiency of management. The Seminar agreed that the approach adopted and the indicators identified were as applicable to university as to national libraries.

Benchmarking in Britain, Ireland and Switzerland

Paul Ayris, Paul Sheehan, Bill Simpson and Stephen Town described their widely differing benchmarking experiences in Britain and Ireland. Paul Sheehan’s experience in Ireland under the Irish Universities Act 1997, which made a standard quality review process mandatory across all academic and support activities of each university, was particularly instructive. The process, which includes self-assessment, peer review, a quality improvement plan and presentations to senior officers of each university with a view to implementing the changes necessary for improvement, is useful though very labour-intensive. The drawback is that the work involved can be wasted if, as is often the case, budgetary constraints make it impossible to implement the recommendations.

Paul Ayris spoke of the University College London (UCL) Library Strategy 2005-10, which is underpinned by performance and quality measures, including benchmarks, key performance indicators (KPIs) and user satisfaction and impact surveys. He added that the process at UCL is being managed by an Operational Planning Team.

Bill Simpson spoke about the Manchester process, which predates but has some affinities with that adopted by UCL. It includes formal Operational Performance Reviews (OPRs), which cover performance against KPIs, LibQUAL+ outcomes and benchmarking against other members of the international benchmarking group set up by Manchester in 2006. It also judges performance against the university’s Manchester 2015 Agenda, the library’s own Strategic and Operational Plans and the findings of internal student satisfaction surveys. Staff performance, in terms of returns for the UK Research Assessment Exercise, membership of prestigious external bodies and learned societies, publications and conference papers, is also assessed as part of a very comprehensive process.

Stephen Town reminded the Seminar that, though a powerful tool for analysis and improvement, benchmarking requires time, effort and commitment, and that existing measures are not always helpful. He raised the questions of whether LIBER could facilitate international consortia and whether it would be possible to set up a national or international (e.g. LIBER) clearing-house for benchmarking, and he considered the possibilities of e-benchmarking.

Ulrich Niederer spoke of the Swiss Benchmarking Project, which began in 2001 with agreement on performance indicators. It was initially unsuccessful because of the amount of work required of library staff, but took on new life as national statistics became available in more usable form. “Circles of Comparison” now include public as well as academic libraries, and attempts have been made to achieve international comparisons by including German libraries in the process.

Key conclusions

The Seminar agreed on a number of key points, which are reflected in the framework set out below.

Framework for a LIBER initiative

It was clear from the level of participation in the Seminar, from the enthusiasm and commitment of the participants and from the quality of discussion that there is a strong desire for an initiative from LIBER on measuring quality at a European level. Participants were in no doubt that this is possible as well as desirable, but also had no illusions as to the amount of work that would be needed to produce a common, co-ordinated framework for realistic comparisons across national boundaries, and recognised that a number of barriers would have to be overcome.

Asking the right questions in this context would not be easy, and there would always be a danger of making misleading comparisons.

Because of these concerns, and given the potential workload, it is important that LIBER should set out at first to do only what is realistically achievable: as the early Swiss experience showed, if already overloaded institutions are asked to produce significantly more information for comparison than they can currently provide, nothing will happen. It will be essential to recognise that it is people, not committees, who make things happen; that it is better for the process to begin simply and evolve into something more sophisticated over time than to wait for it to be perfect from the outset; and that LIBER should facilitate links and provide a framework rather than try to organise detailed arrangements until it has greater resources.

A good starting point will be a Working Group (now set up under the aegis of the Library Management and Administration Division) to agree common standards for comparison, some or all of which might be adopted by different benchmarking groups within LIBER. The model agreed by the Manchester-led international group might provide a useful initial template. LibQUAL+ comparisons can be made for user satisfaction, but it will be essential to ensure statistically valid sample sizes, as the sketch below illustrates.
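On the sample-size point, a rough check of the kind survey administrators commonly apply is sketched below: the number of respondents needed to estimate a mean score on LibQUAL+’s 1–9 scale within a chosen margin of error, using the standard normal-approximation formula with a finite-population correction. The confidence level, assumed standard deviation and population size are illustrative assumptions, not LibQUAL+ figures.

```python
import math

def required_sample_size(population: int, sd: float = 2.0,
                         margin: float = 0.2, z: float = 1.96) -> int:
    """Respondents needed to estimate a mean within +/- margin at the
    confidence level implied by z (1.96 ~ 95%), using the normal
    approximation and a finite-population correction. sd is an assumed
    standard deviation of scores on the 1-9 scale, not a LibQUAL+ figure."""
    n0 = (z * sd / margin) ** 2               # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)      # finite-population correction
    return math.ceil(n)

# e.g. a university with 25,000 potential respondents
print(required_sample_size(25_000))   # ~379 under these assumptions
```

A check of this kind matters most for cross-library comparison: if two libraries’ samples are too small, the difference between their mean scores can easily fall within sampling error and the comparison becomes misleading.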

Once established, a regular cycle of review, either across LIBER as a whole or within separate groups of LIBER members with sufficient commonality to allow valid comparisons, will provide both synchronic comparisons with the performance of other libraries and diachronic comparisons of an individual library’s performance over time. However simple or complex the process, though, it will only succeed in securing the continuing participation of library users, and in achieving the improvements that we seek for their benefit, if we act quickly and decisively to remedy what we are judged to do badly and to strengthen what we do well.
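The synchronic/diachronic distinction can be made concrete with a small example: given one indicator recorded per library per year, a synchronic comparison reads across libraries within a year, while a diachronic comparison reads along a single library’s series over time. The libraries and figures below are invented purely for illustration.

```python
# Hypothetical indicator values (e.g. mean satisfaction on a 1-9 scale)
# recorded per library per year; all figures are invented.
scores = {
    ("Library A", 2006): 6.8, ("Library A", 2007): 7.1,
    ("Library B", 2006): 7.4, ("Library B", 2007): 7.3,
    ("Library C", 2006): 6.5, ("Library C", 2007): 6.9,
}

def synchronic(year: int) -> list:
    """Rank libraries against one another within a single year."""
    results = [(lib, v) for (lib, y), v in scores.items() if y == year]
    return sorted(results, key=lambda item: item[1], reverse=True)

def diachronic(library: str) -> list:
    """Trace one library's performance over time."""
    results = [(y, v) for (lib, y), v in scores.items() if lib == library]
    return sorted(results)

print(synchronic(2007))          # cross-library comparison for one year
print(diachronic("Library A"))   # one library's trend over time
```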

Websites referred to in the text

ISO, http://www.iso.org/iso/home.htm

LibQUAL+, http://www.libqual.org/

OPR - Operational Performance Review. http://www.campus.manchester.ac.uk/planningsupportoffice/PSO/PlanningPerformanceReview/OPRS/

Swiss Benchmarking Project. http://www.libqual.com/documents/admin/niederer.pdf

UCL Library Services - Library Strategy 2005-10. http://www.ucl.ac.uk/Library/libstrat.shtml