Design Evaluation Plan

TIPS FOR WRITING THE EVALUATION SECTION OF YOUR PROPOSAL

Logic Model

Logic models are tools used to assist in program planning, implementation, management, and evaluation. Your logic model provides a visual representation of the program's Theory of Change, that is, why and how program activities lead to the achievement of program goals and objectives. The basic logic model consists of Inputs, Activities, and Outcomes:

  • Inputs are the various resources available to support the program (e.g., staff, materials, curricula, funding, equipment)
  • Activities are the action components of the program (e.g., develop or select a curriculum, write a plan, implement a curriculum, train educators, pull together a coalition). These are sometimes referred to as process objectives.
  • Outcomes are the intended accomplishments of the program. They include short-term, intermediate, and long-term or distal outcomes.

See sample logic models

Process Evaluation

Before tackling your outcome evaluation, think through your process evaluation questions. Generally, there are two types of questions addressed by process evaluations:

  1. Is the program reaching the target population?
     • Consumer characteristics (demographics, presenting issues)
  2. Is the program being delivered in the way it was intended?
     • Provider characteristics (staff demographics, training and educational backgrounds)
     • Type of service provided (home-based, outpatient)
     • Amount of service provided (dosage and duration of service as well as service intensity)
     • Adherence to program protocol or model fidelity

Outcome Evaluation

Some general principles of outcome evaluation to consider:

  • The more rigorous the design, the more plausible the resulting estimate of program effects.
  • The more reliable and valid the measures used, the more plausible the results.
  • The larger the sample, the greater your statistical power and the more likely the program can show significant effects.
  • When deciding what to measure, focus only on things related to your program objectives/outcomes or your program theory and logic model.
  • Consider what is most important to the various program stakeholders (funders will sometimes dictate certain performance measures to grantees).
  • Measure only what is possible and practical. Remember, not everything needs to be measured and not everything is measurable.
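To make the sample-size point concrete, a quick power calculation can show roughly how many participants you would need to detect an effect of a given size. The sketch below is illustrative only; it assumes a two-group comparison, a medium effect size (Cohen's d = 0.5), and the statsmodels library.

```python
# Illustrative power analysis for a two-group outcome comparison.
# Assumes a medium effect size (Cohen's d = 0.5); adjust to match
# the effect your program realistically expects to produce.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Sample size per group needed to detect d = 0.5 with 80% power
# at the conventional .05 significance level.
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"Participants needed per group: {n_per_group:.0f}")  # roughly 64

# Conversely, the power you would have with only 30 participants per group.
power_at_30 = analysis.solve_power(effect_size=0.5, nobs1=30, alpha=0.05)
print(f"Power with 30 per group: {power_at_30:.2f}")  # well below the usual 0.80 target
```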

Outcome measurement can tell us several things. Some typical outcome measurement questions are (see the analysis sketch after this list):

  1. Did a change occur?
  2. What was the direction of change?
  3. How large was the change and was it significant? (clinically or statistically)
  4. How fast did the change occur?
  5. How long did the change last?
  6. What factors influenced the change (such as consumer characteristics, service intensity)?
  7. Is the theory of change articulated in your logic model upheld?
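A minimal pre/post analysis can speak to the first three questions above: whether a change occurred, its direction, and whether it is statistically significant. The sketch below is only illustrative; it assumes paired intake and closure scores on a single standardized instrument and uses SciPy's paired t-test (clinical significance would still need to be judged against the instrument's norms or cut-points).

```python
# Illustrative pre/post comparison on hypothetical intake and closure scores
# from one standardized instrument (higher score = greater need).
import numpy as np
from scipy import stats

intake_scores = np.array([22, 30, 27, 35, 18, 25, 29, 31, 24, 28])
closure_scores = np.array([18, 24, 25, 28, 17, 20, 27, 26, 21, 22])

change = closure_scores - intake_scores          # direction and size of change
t_stat, p_value = stats.ttest_rel(intake_scores, closure_scores)

# Cohen's d for paired data: mean change divided by the SD of the change scores.
cohens_d = change.mean() / change.std(ddof=1)

print(f"Mean change: {change.mean():.1f} points")   # Did a change occur? Which direction?
print(f"Paired t-test p-value: {p_value:.3f}")      # Was it statistically significant?
print(f"Effect size (Cohen's d): {cohens_d:.2f}")   # How large was the change?
```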

Data Sources and Measures

Once you have identified the process and outcome evaluation questions you want to ask, think about the variables or data elements you will need to answer them. The next step is to determine where you will get these variables or data elements; for example, consumer characteristics may come from intake assessments, and change in functioning from standardized clinical assessment instruments. Then identify who will collect these data and when they will be collected, typically at intake and case closure. Most standardized instruments have set administration schedules (e.g., quarterly or every six months). Some additional measurement tips (a sample measurement plan follows the tips below):

  • Use existing measures when possible
  • Use measures with norms and clinical cut-points when available
  • Use "gold standard" measures when available, that is instruments with proven validity and reliability
  • Use multiple measures and multiple informants to give context to the data collected from clinical instruments and to allow you to triangulate your data sources/analysis
  • Choose the best measures available
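One way to keep sources, collectors, and schedules straight is to write the plan down as a simple table or structure. The sketch below is purely illustrative; every data element, source, role, and schedule in it is hypothetical and should be replaced with your program's own.

```python
# Illustrative measurement plan mapping each data element to its source,
# collector, and collection schedule. All entries below are hypothetical.
measurement_plan = [
    {
        "data_element": "Consumer characteristics (demographics, presenting issues)",
        "source": "Intake assessment form",
        "collected_by": "Intake worker",
        "schedule": "Intake only",
    },
    {
        "data_element": "Change in functioning",
        "source": "Standardized clinical assessment instrument",
        "collected_by": "Assigned clinician",
        "schedule": "Intake, every six months, case closure",
    },
    {
        "data_element": "Amount of service provided (dosage, duration, intensity)",
        "source": "Agency service records",
        "collected_by": "Program data manager",
        "schedule": "Ongoing, summarized quarterly",
    },
]

for item in measurement_plan:
    print(f"{item['data_element']}: {item['source']} | "
          f"{item['collected_by']} | {item['schedule']}")
```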

Evaluation Design and Data Analysis Plan

The last component of your evaluation section is the evaluation design and data analysis plan. If your organization does not have expertise in this area, you may want to consult an experienced evaluator for assistance with this section. Feel free to contact Crystal Mills, Director of Inter-Professional Research for Eastern Michigan University's College of Health and Human Services, or the RAC for technical assistance (see also the technical assistance request form on this website). However, simple evaluation designs can be developed by persons with little or no evaluation experience. The gold standard is an experimental design using random assignment (a minimal random-assignment sketch follows the list below). The types of questions experimental designs can answer include:

  • Is the program effective?
  • Is the new program more effective than an existing program?
  • Are the component parts of a program as effective as the entire package?
  • Is a larger dose of a program more effective than a smaller dose?
  • Do program effects last over time?
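If your design does use random assignment, the mechanics can be very simple: each eligible consumer is randomly placed in the new program or a comparison condition before services begin, and outcomes are compared across groups afterward. The sketch below is a minimal illustration, assuming a flat list of hypothetical consumer IDs and Python's standard library; a real study would also handle consent, stratification, and blinding as appropriate.

```python
# Minimal random-assignment sketch: split eligible consumers evenly
# between the new program and a comparison condition.
import random

consumer_ids = [f"C{i:03d}" for i in range(1, 41)]  # hypothetical IDs for 40 consumers

random.seed(2024)            # fixed seed so the assignment can be reproduced and audited
random.shuffle(consumer_ids)

midpoint = len(consumer_ids) // 2
treatment_group = consumer_ids[:midpoint]   # receive the new program
comparison_group = consumer_ids[midpoint:]  # receive services as usual

print(f"Treatment group (n={len(treatment_group)}): {treatment_group[:5]} ...")
print(f"Comparison group (n={len(comparison_group)}): {comparison_group[:5]} ...")
```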