
Evaluation

What It Is

Effective program evaluation is a systematic way to improve and account for public health actions by involving procedures that are useful, feasible, ethical, and accurate. Evaluation activities should be:

  • useful (i.e., responsive to stakeholder information needs)
  • feasible given time, resources, and available expertise
  • accurate enough to inform the kinds of decisions to be made
  • proper/ethical

How It Is Done

Identify program elements to monitor

  • Monitoring and evaluation are mutually supportive ways of asking if your program is working. Program monitoring is essential for management and accountability. It is an ongoing process that tracks:
    • the resources invested in the program
    • the number and quality of activities the program offers
    • adherence to timelines and budgets
  • Monitoring is often called process evaluation. You will always need to track process variables such as:
    • funding received
    • products and services delivered
    • payments made
    • other resources contributed to and expended by the program
    • program activities
    • adherence to timelines
  • You will also want to know:
    • whether the program is being implemented as planned (fidelity)
    • how well the program is reaching your target audience (reach)
    • whether staff and a representative sample of participants perceive problems with the program
  • To decide which components of the program to monitor, ask yourself who will use the information and how, what resources are available, and whether the data can be collected in a technically sound and ethical manner.
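
A monitoring system does not have to be elaborate. The sketch below (in Python) is one hypothetical way to log the process variables listed above and flag drift from the plan; the field names, the 80% reach threshold, and the sample figures are illustrative assumptions, not prescribed categories.

    # Hypothetical monitoring log for process variables; all names, thresholds,
    # and figures are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class MonitoringRecord:
        month: str
        funds_received: float        # resources invested in the program
        funds_spent: float           # payments made and resources expended
        activities_planned: int      # e.g., sessions scheduled this month
        activities_delivered: int    # products and services actually delivered
        participants_reached: int    # reach into the target audience
        participants_targeted: int

        def flags(self):
            """Return simple warnings when implementation drifts from the plan."""
            warnings = []
            if self.funds_spent > self.funds_received:
                warnings.append("spending exceeds funds received")
            if self.activities_delivered < self.activities_planned:
                warnings.append("fewer activities delivered than planned (fidelity)")
            if self.participants_reached < 0.8 * self.participants_targeted:
                warnings.append("reach below 80% of target")
            return warnings

    # Example month: 3 of 4 sessions held, 60 of 100 intended participants reached
    record = MonitoringRecord("2010-03", 5000, 5200, 4, 3, 60, 100)
    print(record.month, record.flags())

Because these checks run on routine records, they can be produced as part of daily record-keeping rather than as a separate study.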

Select the key evaluation questions

  • Basic evaluation questions include:
    • Was fidelity to the intervention plan maintained?
    • Were exposure levels adequate to make a measurable difference?
    • Were behavioral determinants affected by (or associated with) intervention exposures as predicted?
    • Did the determinants, in turn, affect behavior as predicted (i.e., was the internal logic of the intervention valid)?
    • Can any other event or influence explain the observed effects attributed to the intervention?
    • Were there any unintended effects?
  • Adapt each of these basic questions to your program content
  • Engage stakeholders in the planning process. Trim your list of potential questions by asking who will use the information and what they care most about. Stakeholders want various kinds of input into evaluation plans, depending on their levels of investment in the program and their interest and experience in program evaluation. Find out from stakeholders:
    • what they want to know
    • how they will use the information
    • when the data must be available in order to be useful

Determine how the information will be gathered

  • Try to find measures that can detect deviations from program plans quickly. Whether you are tracking the number of brochures distributed, the due dates of bills to pay, or the number of program participants who report being satisfied, monitoring data collection should be a routine function. It should be built into daily record-keeping and integrated into program management.
  • Each outcome included in the evaluation needs at least one strong measure to indicate whether the program is succeeding in that regard. It's wise to have multiple measures of major outcomes; they provide cross-validation when their findings agree.
  • If you are seeking permanent behavior change (e.g., smoking cessation), define end-points for your intervention and final outcome evaluation that are far enough out to assess permanent change as your field defines it.
  • Sources of evaluation data may include all the sources listed earlier for monitoring and also:
    • extensive participant interviews or surveys
    • archival documents
    • direct observations
  • Multiple sources will provide different perspectives about the program and thus enhance the credibility of your evaluation. Mixing internal and external perspectives provides a more comprehensive view of the program.
  • Choose the data collection method best suited to answering each evaluation question. Bear in mind that good data collection plans often integrate qualitative methods (those that produce descriptive information) with quantitative methods (those that generate numerical data such as frequencies, percentages or rates).
  • Qualitative methods add depth, detail and meaning to your research. However, quantitative evidence is usually needed to show that a program increased or decreased the frequency of some health behavior. Commonly used qualitative methods include:
    • participant observation
    • unstructured and semi-structured interviews
    • focus groups
    • document theme coding
  • Quantitative data provide useful background information to help interpret qualitative data. The integration of qualitative and quantitative information can increase the chances that the evidence base will be balanced, helping to meet the needs and expectations of diverse users. Examples of quantitative methods are:
    • surveys (via telephone, internet, laptop computer, face-to-face, etc.)
    • numeric coding of clinic records
    • structured observations
  • To make a credible argument that your program activities led to specific outcomes, you will need to use an appropriate research design. Some typical choices include:
    • Experimental designs, which use random assignment to create intervention and control groups (considered equivalent before the intervention), give the intervention to only one group, and then compare the groups on some measure of interest to see if the intervention had an effect.
    • Quasi-experimental designs, which compare existing and possibly nonequivalent groups (e.g., program participants versus those on a waiting list), or use multiple waves of data to set up a comparison (e.g., interrupted time series) to see if the intervention had an effect.
    • Correlational designs, which examine relationships among characteristics of a single group (e.g., cross-sectional surveys)
    • Case study designs (single or multiple cases), in which an individual or situation is investigated in depth and treated as substantially unique. (A worked analysis sketch for the quasi-experimental case follows this list.)
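
As a concrete illustration of how quantitative data feed a quasi-experimental comparison, the hypothetical Python sketch below contrasts the share of participants reporting a desired behavior in an intervention group and a waiting-list comparison group; the counts and the two-proportion z-test are illustrative choices, not a required analysis.

    # Hypothetical quasi-experimental comparison: outcome rates in an
    # intervention group vs. a waiting-list comparison group.
    # The counts below are made up for illustration.
    import math

    def two_proportion_z(successes_a, n_a, successes_b, n_b):
        """Two-proportion z-test; returns the z statistic and two-sided p-value."""
        p_a, p_b = successes_a / n_a, successes_b / n_b
        pooled = (successes_a + successes_b) / (n_a + n_b)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_a - p_b) / se
        p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
        return z, p_value

    # e.g., 42 of 120 intervention participants vs. 25 of 115 comparison
    # participants report the behavior at follow-up
    z, p = two_proportion_z(42, 120, 25, 115)
    print(f"intervention {42/120:.1%} vs. comparison {25/115:.1%}: z = {z:.2f}, p = {p:.3f}")

Because the groups were not randomly assigned, any observed difference still needs to be weighed against the alternative explanations raised in the evaluation questions above.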

Develop a data analysis and reporting plan

  • Your plan should outline how the data for each monitoring and evaluation question will be coded, summarized and analyzed. The plan should address:
    • how conclusions will be justified (e.g., how the data relate to standards, if there are any)
    • how stakeholders both inside and outside the agency will be kept informed about the monitoring and evaluation activities and findings and supported in using the information that is generated
    • when the monitoring and evaluation activities will be implemented and how they are timed in relation to program implementation
    • the costs of monitoring and evaluation, presented in the format preferred or required by your agency or funding agency.
  • Describe how the monitoring and evaluation data will be reported. The reports should include several key parts:
    • A brief description of the program components and activities that will be assessed by monitoring and evaluation
    • The monitoring and evaluation questions and methods that will be used and where they came from
    • How results will be interpreted, and when and how they will be available
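
One lightweight way to make such a plan concrete is to record, for each monitoring or evaluation question, its data source, analysis approach, reporting date, and audience. The Python sketch below lays this out as a simple table; the question wording, sources, dates, and audiences are placeholders, not recommendations.

    # Hypothetical analysis-and-reporting plan, one entry per evaluation question.
    # Question text, sources, methods, dates, and audiences are placeholders.
    analysis_plan = [
        {
            "question": "Was fidelity to the intervention plan maintained?",
            "data_source": "activity logs; staff interviews",
            "analysis": "compare delivered vs. planned activities; code interview themes",
            "report_due": "2010-12",
            "audience": "program staff, funding agency",
        },
        {
            "question": "Were behavioral determinants affected as predicted?",
            "data_source": "baseline and follow-up participant surveys",
            "analysis": "change in outcome rates, intervention vs. comparison group",
            "report_due": "2011-06",
            "audience": "stakeholder advisory group",
        },
    ]

    for entry in analysis_plan:
        print(entry["question"])
        for key in ("data_source", "analysis", "report_due", "audience"):
            print(f"  {key}: {entry[key]}")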

Develop a timetable and budget

  • Your timeline should cover:
    • Evaluation activities (e.g., obtaining resources, hiring personnel, securing a vendor, obtaining IRB clearance if needed, recruiting participants, collecting data, analyzing data, reporting findings)
    • Program activities
  • Your budget should:
    • Tell the same story as your monitoring/evaluation narrative
    • Include detailed descriptions or justifications if needed
    • Estimate monitoring/evaluation costs to be incurred during the program's duration
    • Set aside funds for miscellaneous or contingency expenses
    • Include all items required by the funding source
    • Include all items paid for by other sources
    • Include volunteer and in-kind services to be provided
    • Detail fringe benefits separate from salaries, if required
    • Include all fees for consultants or contractors
    • Delineate details of all non-personnel costs
    • Include indirect costs when appropriate
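
To show how these line items combine, the hypothetical sketch below totals an evaluation budget with fringe benefits listed separately from salaries, a contingency set-aside, and an indirect-cost rate applied at the end; every amount and rate (including the 10% contingency and 25% indirect rate) is an assumption for illustration, not a recommended figure.

    # Hypothetical evaluation budget; all amounts and rates are illustrative.
    personnel_salaries = 40000                   # evaluator and data-collector time
    fringe_benefits = 0.30 * personnel_salaries  # detailed separately from salaries
    consultant_fees = 8000                       # e.g., statistical consultant
    non_personnel = 6500                         # travel, printing, participant incentives

    direct_costs = personnel_salaries + fringe_benefits + consultant_fees + non_personnel
    contingency = 0.10 * direct_costs            # set-aside for miscellaneous expenses
    indirect_costs = 0.25 * (direct_costs + contingency)  # when allowed by the funder
    total_request = direct_costs + contingency + indirect_costs

    in_kind_services = 3000                      # volunteer/in-kind support, shown but not requested

    print(f"direct: {direct_costs:,.0f}   contingency: {contingency:,.0f}   "
          f"indirect: {indirect_costs:,.0f}   total request: {total_request:,.0f}")
    print(f"in-kind (not requested): {in_kind_services:,.0f}")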
