8.3 Outcome measurement by NSPP projects

All projects are required to report output and financial data as part of their funding agreement and to submit regular progress reports. These progress reports were largely based on quantitative output and financial data, with narrative self-report used to describe the effects of activities. To date, outcome measurement involving validated tools has been rare among NSPP-funded activities, and this dearth of validated and standardised tools has limited the extent of comparison that can be made between projects engaged in similar activities across the program.

Projects that were required to undergo independent external evaluation under their funding agreements (see Table 4-1) tended to generate objectives-based evaluations [82] that addressed achievements relative to input and output objectives rather than outcomes. These evaluations relied largely on consultations with key stakeholders (service users, community members, etc.) as their primary source of data.

Table 8-1 provides an overview of the evaluations conducted for the 47 projects that were both in scope of the evaluation and operating at June 2013.

As indicated in Table 8-1, only three validated tools were used, namely:

  • Clinical Global Impressions Scale
  • Harter Social Acceptance Work Readiness Questionnaire
  • Kessler Psychological Distress Scale (K10).

The fact that two of the three tools cited are clinical instruments highlights that:
  • Individual-level interventions are generally more easily assessed using validated tools than group or community activities
  • Suicide is often perceived predominantly in mental health terms, a perception that fails to acknowledge the complex array of personal, social and community factors that need to be considered in suicide prevention and overlooks the extensive range of suicide prevention activity that aims to address these factors.

The focus on validated tools in this section should not be interpreted as diminishing the 'merits in multiple methods, marrying quantitative and qualitative data' [83]. The use of both quantitative (validated and/or standardised tools, as appropriate) and qualitative data sources is strongly advocated for future NSPP evaluations (see Chapter 12).

However, it should be recognised that the diverse range of methods used by projects in their evaluations limits the comparisons that can be made across projects. Bespoke tools (such as customer satisfaction and general surveys) restrict inter-project comparison because they are not standardised across projects; without common domains or measures, the relative achievement of different approaches cannot be ascertained. Notably, bespoke tools were used in some situations where validated tools currently exist.

This dearth of comparative outcome data has restricted not only the extent to which the effectiveness of the NSPP could be evaluated in the current report, but also the range of economic analysis that could be conducted.

Strategies to improve outcome measurement are identified in Chapter 12.

Key findings

  • Outcome measurement using validated tools is rare among NSPP-funded activities. A range of quantitative and qualitative information was collected; however, the dearth of validated and standardised tools limited the extent of comparison that could be made between projects engaged in similar activities across the program.
  • The absence of quantifiable outcome data restricted not only the extent to which the effectiveness of the NSPP could be evaluated in the current report, but also the range of economic analysis that could be conducted.

Table 8-1: External evaluation profile of projects

Note: Table 8-1 is presented as a table in the original PDF document, but it is in substance a nested list, reproduced here as follows:
  • 29 of the 47 projects (62%) had an external evaluation completed
  • These evaluations generated 31 independent external evaluation reports; two projects each had two reports relating to different aspects
  • In addition, one project commissioned an economic analysis independently of its NSPP funding requirements
  • These 31 evaluation reports were completed by:
    • Private consultants (15, 48%)
    • Universities (14, 45%)
    • A not-for-profit organisation (1, 3%)
    • A private consultant jointly with project personnel (1, 3%)
  • Three evaluations cited use of validated outcome measurement instruments:
    • Clinical Global Impressions Scale
    • Harter Social Acceptance Work Readiness Questionnaire
    • Kessler Psychological Distress Scale (K10)
  • Various qualitative and quantitative data collection methods were used in these evaluations, including:
    • Customer satisfaction surveys
    • Social network analysis
    • Surveys
    • Participant observation
    • Document review
    • Case notes
    • Community visits
    • Focus groups
    • Semi-structured interviews
    • Key informant interviews
    • Review of output data
    • Unvalidated quality-of-life measures.

82 JM Owen, Program Evaluation: Forms and Approaches, 3rd edn, Allen & Unwin, NSW, 2006, p. 48.
83 R Pawson & N Tilley, Realistic Evaluation, Sage, London, 1997.