What big size you have! Using effect sizes to determine the impact of public health nursing interventions
Received: 21 July 2013
Accepted in revised form: 14 September 2013
Published online: 16 December 2017
Background: The Omaha System is a standardized interface terminology that is used extensively by public health nurses in community settings to document interventions and client outcomes. Researchers using Omaha System data to analyze the effectiveness of interventions have typically calculated p-values to determine whether significant client changes occurred between admission and discharge. However, p-values are highly dependent on sample size, making it difficult to distinguish statistically significant changes from clinically meaningful changes. Effect sizes can help identify practical differences but have not yet been applied to Omaha System data.
Methods: We compared p-values and effect sizes (Cohen's d) for mean differences between admission and discharge for 13 client problems documented in the electronic health records of 1,016 young low-income parents. Documentation frequency ranged from 6 instances (Health Care Supervision) to 906 instances (Caretaking/parenting) per problem.
Results: On a scale from 1 to 5, the mean change needed to yield a large effect size (Cohen's d = 0.80) was approximately 0.60 (range: 0.50–1.03), regardless of p-value or sample size (i.e., the number of times a client problem was documented in the electronic health record).
Conclusions: Researchers using the Omaha System should report effect sizes to help readers determine which differences are practical and meaningful. Reporting effect sizes alongside p-values will make effective interventions easier to recognize.
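The effect size discussed above can be computed directly from admission and discharge ratings. The following is a minimal illustrative sketch, not the authors' actual analysis code: it implements the standard pooled-standard-deviation formulation of Cohen's d, and the admission/discharge ratings on the 1–5 Omaha System scale are hypothetical data invented for the example.

```python
from statistics import mean, stdev

def cohens_d(pre, post):
    """Cohen's d for the mean difference between two samples,
    using the pooled sample standard deviation."""
    n1, n2 = len(pre), len(post)
    s1, s2 = stdev(pre), stdev(post)
    # Pooled standard deviation across the two samples
    pooled = (((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)) ** 0.5
    return (mean(post) - mean(pre)) / pooled

# Hypothetical Knowledge ratings on the 1-5 Omaha System scale
admission = [2, 3, 2, 3, 2, 3, 2, 3]
discharge = [3, 4, 3, 4, 3, 4, 3, 3]

d = cohens_d(admission, discharge)
print(f"Cohen's d = {d:.2f}")  # d >= 0.80 indicates a large effect
```

By Cohen's conventional benchmarks (0.20 small, 0.50 medium, 0.80 large), a d at or above 0.80 would mark the admission-to-discharge change as a large, practically meaningful effect independent of whether the accompanying p-value is small.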