Appl Clin Inform 2013; 04(03): 331-358
DOI: 10.4338/ACI-2013-04-RA-0024
Research Article
Schattauer GmbH

STARE-HI – Statement on Reporting of Evaluation Studies in Health Informatics

Explanation and Elaboration
J. Brender
1  Department of Health Science and Technology, Aalborg University, and V-CHI, Aalborg, Denmark
,
J. Talmon
2  School of Public Health and Primary Care – CAPHRI, Maastricht University, Maastricht, The Netherlands
,
N. de Keizer
3  Department of Medical Informatics, Academic Medical Center, Amsterdam, The Netherlands
,
P. Nykänen
4  School of Information Sciences, University of Tampere, Tampere, Finland
,
M. Rigby
5  School of Public Policy and Professional Practice, Keele University, Keele, United Kingdom
,
E. Ammenwerth
6  Institute of Medical Informatics, UMIT – University for Health Sciences, Medical Informatics and Technology, Hall in Tyrol, Austria

Publication History

received: 20 April 2013

accepted: 29 June 2013

Publication Date:
16 December 2017 (online)


Summary

Background: Improving the quality of reporting of evaluation studies in health informatics is an important requirement towards the vision of evidence-based health informatics. The STARE-HI – Statement on Reporting of Evaluation Studies in Health Informatics, published in 2009, provides guidelines on the elements to be contained in an evaluation study report.

Objectives: To elaborate on and provide a rationale for the principles of STARE-HI and to guide authors and readers of evaluation studies in health informatics by providing explanatory examples of reporting.

Methods: A group of methodologists, researchers and editors prepared the present elaboration of the STARE-HI statement and selected examples from the literature.

Results: The 35 STARE-HI items to be addressed in evaluation papers describing health informatics interventions are discussed one by one, and each is extended with examples and elaborations.

Conclusion: The STARE-HI statement and this elaboration document should be helpful resources for improving the reporting of both quantitative and qualitative evaluation studies. Evaluation manuscripts adhering to these principles will enable readers to place the studies in their proper context, judge their validity and generalizability, and thus make better use of the evidence they contain.

Limitations: This paper is based on the experiences of a group of editors, reviewers, authors of systematic reviews and readers of the scientific literature. The detailed applicability of these principles will have to evolve through their use in practice.