Glossary and Appendices

SUGGESTED CITATION: Salabarría-Peña, Y., Apt, B.S., Walsh, C.M. Practical Use of Program Evaluation among Sexually Transmitted Disease (STD) Programs. Atlanta (GA): Centers for Disease Control and Prevention; 2007.

Should you need help or more information regarding this manual or any specific tool, please contact CDC/DSTDP program evaluation staff at (404) 639-8276 or eval@cdc.gov.

Glossary of Key Terms

• Activities: Actual events that take place as part of your program (e.g., developing pamphlets, testing patients).
• Audience: The individuals (such as your stakeholders and other evaluation users) with whom you want to communicate the results of an evaluation.
• Case study: A type of evaluation design used to learn about a program as a whole and in its context.
• Cluster evaluation: A type of evaluation design that looks at how well a set of related projects implemented at different sites achieves its goals and objectives.
• Code book: A document detailing instructions on how the data for a specific evaluation are coded. It describes each code so that codes are applied to the data in a standardized way.
• Coding: In quantitative analysis, the process of arranging the data so that the computer can "read" the code and perform an analysis (e.g., if one of the variables is "sex," you might code this as 1 for "female" and 2 for "male"). In qualitative analysis, coding is used to reduce the data by organizing the text (data) into categories/themes. The codes are applied to text segments that match the theme(s) associated with the code. (A brief illustration appears at the end of this glossary.)
• Context: The setting and environmental influences in which your program operates (e.g., laws, regulations, political climate).
• Data cleaning: The process of reviewing the data and preparing them for analysis by correcting erroneous data entry.
• Data collection: The process of administering instruments and gathering responses.
• Data interpretation: The process of determining the meaning or significance of evaluation findings to your program.
• Data management: The control of data handling operations, such as acquisition, analysis, translation, coding, storage, retrieval, and distribution of data.
• Decision makers: Stakeholders in a position to do or decide something about your program.
• Dissemination: The process of communicating the procedures, results, and lessons learned from an evaluation.
• Effectiveness: Relates to outcome evaluation; refers to the contribution a program makes to produce changes in the target population/organization.
• Evaluation plan: A document that includes what an evaluation consists of (i.e., purpose/uses/users of the evaluation, program goals and objectives related to the evaluation, logic model, evaluation questions and design, data collection sources and methods, and dissemination plan) and the procedures that will help guide the implementation of evaluation activities to be undertaken by your program.
• Executive summary: A 1-2 page summary of the full evaluation report. It provides a concise description of the evaluation activities, procedures, results, conclusions, and recommendations. Since this information can be extracted from sections of the full report, the summary is written last but presented at the beginning of the report.
• Experimental design: An evaluation design in which individuals, groups, programs, or facilities (e.g., clinics) are randomly assigned to an intervention (program) group or a control (non-program) group. Because of random assignment, you reduce the chances of underlying differences between members of the control and intervention groups, which allows you to attribute change in outcomes to your program's activities.
• Fidelity: When your STD program or intervention is implemented as intended.
• Focus group: A qualitative method used to collect data from a group of people (about 6-11) who meet for 1-2 hours to discuss their insights, ideas, and observations about a particular topic with a trained moderator. Participants are selected because they share certain characteristics (e.g., individuals who have been tested for syphilis, women in detention facilities) relevant to the evaluation.
• Global logic model: A type of logic model that illustrates pictorially how an entire program is supposed to work.
• Goal: A broad statement related to the purpose of your program that states what the program will accomplish (the desired result).
• Implementers: Stakeholders directly involved in undertaking program activities.
• Incidence: New cases of a disease in a specific population within a defined time period.
• Indicator: A specific, observable, and measurable accomplishment or change that shows whether progress has been made toward achieving a specific program output or outcome.
• Individual interview: A data collection method that involves dialogue with individuals who are carefully selected for their personal experience and knowledge of the issues at hand. Since these interviews are conducted individually, they are useful when anonymity is an issue or when asking about sensitive topics, so participants can feel free to express their ideas.
• Inputs: Program resources (e.g., money, staff, materials).
• Intermediate outcomes: Intended effects of your program in the target population/organization that take longer than short-term outcomes to occur (e.g., changes in STD-related policy or in behavior of the target population).
• Logic model: A picture of how a program/component/activity is supposed to work.
• Long-term outcomes: Intended effects of your program in the target population/organization that may take several years to achieve, such as reduced disease transmission and incidence.
• Mixed-method design: A methodological approach in which you collect data from more than one source and/or through different methods. The advantages of using mixed methods include increasing the cross-checks on the evaluation findings, examining different facets of the same phenomenon, and increasing stakeholders' confidence in the overall evaluation results. An example of mixed methods is using both a focus group and a survey to understand a target population's reluctance to use condoms.
• Morbidity: Sickness or illness.
• Nested logic model: A type of logic model that depicts a component of a global logic model and describes the component in detail.
• Non-experimental design: An evaluation design in which participant information is gathered either before and after the program intervention or only afterwards. A control or comparison group is not used. Therefore, this design does not allow you to determine whether the program or other factors are responsible for producing a given change.
• Objectives: Measurable statements that describe the manner in which your program goals will be achieved.
• Observation: A data collection method in which you take field notes on the behavior and activities of individuals or describe the environment while observing these in the field. For example, you might take notes on the behavior of gay men in bathhouses as part of your data collection procedures, or take notes on how patients are treated by clinic staff, and use such information to further develop or improve your program.
• Outcome evaluation: A type of evaluation that determines the effects of your program activities in the target population (e.g., changes in knowledge, attitudes, beliefs, or skills) or organization. The outcome components of a logic model (the right side) are used to plan an outcome evaluation.
• Outcome evaluation questions: Evaluation questions concerned with your program outcomes. Such questions can address whether the desired outcomes of your program were achieved, and whether your program produced changes in the target population(s)/organization.
• Outcome indicators: Indicators that measure whether progress was made toward achieving your short-term, intermediate, or long-term outcomes.
• Outcome objectives: Measurable statements specifying the intended effect of your program in the target population(s)/organization.
• Outcomes: Intended effects or changes in the target population(s)/organization that result from your program.
• Outputs: The direct products of your program activities or services delivered (e.g., pamphlets developed, patients tested).
• Participants: Stakeholders being served or affected by your program.
• Participatory evaluation: An approach that involves stakeholders in all aspects of the evaluation process (i.e., design, question development, data collection, analysis, reporting, and use of results for decision making).
• Partners: Stakeholders who actively support and/or have invested in your program.
• Performance measures: A set of indicators developed by CDC's Division of STD Prevention with input from members of NCSD, state representatives of NCSD member grantees, and seven project areas where the measures were pilot-tested. Each project area receiving CDC funds is required to report on the measures (indicators) that apply to it.
• Population at risk: Groups that have a high probability of developing an STD or a related condition.
• Pre/post design: A non-experimental design in which measures (data collection) are taken from the target population(s) before and after the activity/intervention.
• Post-only design: A non-experimental design in which measures (data collection) are taken from the target population(s) after the activity/intervention. Since this is a non-experimental design, it does not involve comparison/control groups.
• Prevalence: Number of cases of a disease in a population at a given point in time.
• Primary data: Data directly obtained by your program (e.g., surveillance data, number of sex partners of syphilis cases collected through DIS interviews).
• Process evaluation: Also referred to as implementation evaluation; a type of evaluation that determines whether your program and its activities are implemented as intended, and why or why not. The information gathered is used for refining or modifying these activities and related procedures. The inputs, activities, and outputs of a logic model (the left side) are used to plan a process evaluation.
• Process evaluation questions: Evaluation questions concerned with the implementation of your program or a specific program component/activity. You develop process evaluation questions to examine the development and delivery of your program's services and activities, as well as its operations and administrative functions.
• Process indicators: Indicators that measure whether progress is made toward achieving implementation fidelity by your program. These indicators measure whether your program is functioning as planned, and relate to the outputs in your program logic model.
• Process objectives: Measurable statements describing your program activities and the actions involved in their implementation.
• Purpose of evaluation: General intent of the evaluation (e.g., to fine-tune program operations).
• Qualitative data: Detailed/narrative information that allows an in-depth understanding of a topic/issue/population. An example of qualitative data is the answers representatives of a CBO would provide when asked for their thoughts on how to reach high-risk adolescents.
• Qualitative design: Evaluation designs used to capture the target population's perceptions, opinions, and experiences regarding your program activities, and/or to better understand a programmatic aspect in more depth by telling how and what happened, and when and to whom.
• Qualitative methods: Data collection methods used to gather narrative data to better understand the experiences of the target population and how a program activity works.
• Quantitative data: Numerical information. An example is data that identify the number of times (e.g., 1, 2, 3, 10) each client has visited your clinic within the last year.
• Quantitative design: A type of evaluation design that relies on examining quantitative data obtained from such instruments as closed-ended surveys. This design option observes and measures information numerically and employs statistical procedures.
• Quantitative methods: Data collection methods that are used to collect numerical data. An example is the use of a survey that queries respondents about their sexual history using closed-ended questions in which numbers can be assigned to responses (e.g., number of sexual partners, frequency of condom use).
• Quasi-experimental design: A type of evaluation design that makes comparisons between groups (intervention and control) but does not involve random assignment to intervention and control groups. It may be possible to attribute changes to the program if you can document with baseline information that the two groups were similar prior to receiving the program.
• Reliability: The consistency of a measure or question in obtaining very similar or identical results when used repeatedly.
• Risk factor: A factor that increases a person's chances of getting a disease or condition (e.g., having multiple sexual partners, lack of access to health care).
• Secondary data: Information your program can use that has been collected by someone else (e.g., national data). This may include epidemiological data, socio-demographics, health risk behaviors, and health policies.
• Short-term outcomes: Immediate effects of your program in the target population/organization (e.g., changes in knowledge, attitudes, skills, awareness, or beliefs).
• SMART: An acronym describing the criteria used to write objectives that are Specific, Measurable, Achievable, Relevant, and Time-bound.
• Stage of development: The level of maturity of your program, which influences the type of evaluation you conduct (e.g., planning, implementation, and maintenance stages).
• Stakeholders: Individuals or organizations directly or indirectly affected by your program and/or the evaluation results (e.g., STD program staff, family planning staff, representatives of target populations).
• STD-related risk factors: Specific behaviors, attitudes, and/or limited knowledge that put individuals at risk of STDs.
• Surveillance data: Data collected in an ongoing, systematic way regarding an agent/hazard, risk factor, exposure, or health event. Surveillance data are essential for the planning, implementation, and evaluation of public health practice.
• Survey: A method of collecting information that can be self-administered, administered over a telephone, administered using a computer, or administered face-to-face. Surveys generally include closed-ended questions that are asked of individuals in a specific order and provide multiple-choice or discrete responses (e.g., "Have you been tested for syphilis in the last 6 months?").
• Users of an evaluation: The specific persons/organizations that will employ the evaluation findings in some way (e.g., STD director, CBO, funder).
• Uses of an evaluation: The specific ways that program staff and other stakeholders will apply what is learned from the evaluation (e.g., change STD clinical practice, inform STD prevention policy).
• Validity: The extent to which a question actually measures what it is supposed to measure. For example, a question that asks how often an individual uses a condom is valid if it accurately measures the actual level of condom use; it is not valid if instead it measures the extent to which an individual realizes that s/he should wear a condom.
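To make the quantitative side of the Coding and Data cleaning entries concrete, here is a minimal sketch in Python with the pandas library. All data, column names, and the out-of-range value are hypothetical; the 1 = "female" / 2 = "male" scheme follows the Coding entry above.

```python
import pandas as pd

# Hypothetical survey extract: "sex" was coded 1 = "female", 2 = "male",
# following the scheme described in the Coding entry.
raw = pd.DataFrame({
    "client_id": [101, 102, 103, 104],
    "sex":       [1, 2, 2, 9],      # 9 is an out-of-range (erroneous) entry
    "visits":    [3, 1, 2, 4],
})

# Data cleaning: flag values outside the code book's valid range (1 or 2)
# so they can be checked against the original forms and corrected.
valid = raw["sex"].isin([1, 2])
print(raw.loc[~valid])              # rows needing review

# Apply the code book labels for analysis and reporting.
raw["sex_label"] = raw["sex"].map({1: "female", 2: "male"})
print(raw["sex_label"].value_counts())
```

The same idea applies with any statistical package; the essential points are that codes come from the code book, and that out-of-range values are identified and corrected before analysis.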
List of Appendices
Appendix A: Evaluation Designs
Appendix B: Syphilis Case Illustrating the Application of the Manual
Appendix C: Sample Logic Models of STD Programs
• California DHS/STD Control Branch and California STD/HIV Prevention Training Center
• Idaho Department of Health and Welfare, STD/AIDS Program
• Forsyth County's Syphilis Elimination Project (North Carolina)
• Michigan Department of Community Health, STD Program
Appendix D: Sample Evaluation Plans of STD Programs
• California DHS/STD Control Branch and California STD/HIV Prevention Training Center
• Idaho Department of Health and Welfare, STD/AIDS Program
• Forsyth County's Syphilis Elimination Project (North Carolina)
• Michigan Department of Community Health, STD Program

APPENDIX A: Evaluation Designs

INTRODUCTION
The design of your evaluation influences the types of conclusions you can draw from your findings. You select the evaluation design(s) once you have the final list of evaluation questions, have classified them as either process or outcome, and have determined the resources available for your evaluation activities.

WHAT IS AN EVALUATION DESIGN?
An evaluation design is the selection of procedures used to demonstrate that a program is worthwhile, effective, and efficient. It also allows you to draw, with varying degrees of certainty, specific conclusions as accurately as possible, and to determine the limitations of the evaluation.

WHAT TYPES OF DESIGN CAN YOU USE FOR AN EVALUATION?
The evaluation design should be based on the evaluation questions, stakeholders' needs, and available resources, including time and expertise. Generally, evaluation questions that monitor outcomes or measure change in the target population as a result of a program/intervention tend to apply non-experimental/observational or quasi-experimental/experimental designs, respectively. However, if the evaluation question looks at how the program is being implemented (barriers and facilitators of program operations/implementation) or whether your program is reaching the appropriate target population, qualitative methods, alone or in combination with quantitative methods, will tend to be applied (see Tool 4.2).

Non-experimental/observational designs quantify progress toward achieving your outcomes or change in the target population without using comparison groups (e.g., cross-sectional, longitudinal). You can collect information from program participants before and after (pre/post) an activity or after the activity only (post-only). With this design you cannot determine with certainty whether your program or other factors are responsible for producing change, but non-experimental designs can give you an idea of whether your activities are accomplishing what you intend. They also require less time and money to implement than experimental or quasi-experimental designs. For instance, suppose you designed a health education activity about chlamydia (Ct) for females at a family planning clinic, and you are interested in monitoring progress toward achieving one of your program outcomes (i.e., increased knowledge of the target population regarding Ct symptoms, prevention, and where to go for screening/testing). You can consider a non-experimental design in which you administer a questionnaire before and after the education activity. The results can give you an idea of the progress made toward achieving your outcome, and may give you sufficient information to improve your program and persuade stakeholders that the program is making a contribution. However, you will not be able to determine whether the results were due to your activity or to other factors (e.g., participants attended another Ct educational activity elsewhere).
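As an illustration of how the pre/post questionnaire data from this chlamydia example might be summarized, here is a minimal sketch (Python/pandas). The participants, scores, and 0-10 knowledge scale are hypothetical, not from the source.

```python
import pandas as pd

# Hypothetical paired scores: each row is one participant's knowledge
# score (0-10) before and after the Ct education session.
scores = pd.DataFrame({
    "participant": [1, 2, 3, 4, 5],
    "pre":         [4, 6, 5, 3, 7],
    "post":        [7, 8, 6, 6, 9],
})

scores["change"] = scores["post"] - scores["pre"]
print("Mean pre score: ", scores["pre"].mean())
print("Mean post score:", scores["post"].mean())
print("Mean change:    ", scores["change"].mean())
# Note: without a comparison group, a positive mean change suggests
# progress toward the outcome but cannot be attributed to the activity alone.
```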
If you want to determine whether the outcomes achieved can be attributed to your program (called "causal attribution"), you can use experimental or quasi-experimental designs. However, given the cost, time, and controls inherent in these designs, they are not always feasible for evaluating STD program activities. Below you will find information on these types of designs.

Experimental designs can produce the strongest evidence of program effectiveness, primarily because groups or individuals (e.g., clinics, groups of patients, individual clients or patients) are randomly assigned to either an intervention (program) group or a control (non-program) group. All groups/individuals have an equal probability of being assigned to the program or non-program group. Randomization increases the likelihood that any changes in your target population(s) can be attributed to the program. Nevertheless, experimental designs are rarely practical for STD and public health programs due to budget, staffing, and time constraints. This design may also raise ethical issues, because clients in the control group do not receive the designed intervention, which often includes more beneficial or enhanced services.

Quasi-experimental designs can be used when you choose to evaluate program outcomes but randomization is not possible or is difficult, or you do not have the resources for an experimental design. As with experimental designs, one group of individuals is chosen to receive the program/activity, and another group serves as the comparison group and is not engaged in the program activities. However, you are unable to randomly assign individuals to groups, and the groups you select may not be equivalent in key characteristics, such as demographics. It is therefore important that you document how the groups are similar and how they differ on any key factors relevant to your program. Because the groups are not identical, you cannot attribute any changes in the intervention group solely to your activity. However, the more similar the two groups are, the more confident you can be that your intervention activities are leading to the desired effect. Consider, for instance, a new health education intervention you have developed. You want to evaluate the effectiveness of this intervention in changing attitudes and behaviors about STD prevention and transmission in the target population. Since you do not have enough resources to conduct a full experimental evaluation, you design a quasi-experimental evaluation using two STD clinics. In one you would provide the usual health education activities, and in the other you would conduct the new education and counseling activities. You would collect information from the two clinics before and after the educational activities to determine whether there was a change in patients' attitudes and behaviors pertaining to STDs, and whether these changes differed between clinics. In this case you will have more certainty that the results were due to your program than with a non-experimental design.
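A sketch of how this two-clinic comparison might be summarized, assuming the same attitude scale was administered at both clinics before and after the activities; the clinic names and scores below are hypothetical, and the calculation simply contrasts each clinic's pre-to-post change.

```python
import pandas as pd

# Hypothetical mean attitude scores (higher = more favorable toward
# STD prevention) at the intervention clinic and the comparison clinic.
means = pd.DataFrame(
    {"pre": [5.1, 5.0], "post": [6.8, 5.4]},
    index=["new_intervention_clinic", "usual_education_clinic"],
)

means["change"] = means["post"] - means["pre"]
print(means)

# The difference between the two clinics' changes is the quantity of
# interest; documenting baseline similarity (the pre scores) strengthens
# the case that the new intervention, rather than pre-existing differences,
# produced the larger change.
diff_in_change = (means.loc["new_intervention_clinic", "change"]
                  - means.loc["usual_education_clinic", "change"])
print("Difference in change between clinics:", round(diff_in_change, 2))
```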
Qualitative methods help examine evaluation issues/questions in depth and rely on open-ended questions to elicit detailed information from a limited number of individuals. Some examples of qualitative methods include in-depth interviews, focus groups, and observation (see Tool 4.2). You can use these methods if you are interested in learning how your program operates and why; if you want to capture participants' stories, perceptions, and experiences with your program or specific program activities; or if you want to determine whether a program is reaching the appropriate target population or whether your program activities are being implemented as planned. Qualitative designs/methods can help answer the "how" and the "why."

KEY TERMS
Experimental design: An evaluation design in which individuals, groups, programs, or facilities (e.g., clinics) are randomly assigned to an intervention (program) group or a control (non-program) group. Because of random assignment, you reduce the chances of underlying differences between members of the control and intervention groups, which allows you to attribute change in outcomes to your program's activities.
Mixed-method design: A methodological approach in which you collect data from more than one source and/or through different methods. The advantages of using mixed methods include increasing the cross-checks on the evaluation findings, allowing you to examine different facets of the same phenomenon, and increasing stakeholders' confidence in the overall evaluation results. An example of mixed methods is using both a focus group and a survey to understand a target population's reluctance to use condoms.
Non-experimental/observational design: An outcome evaluation design in which participant information is gathered either before and after the program or only afterwards. A control or comparison group is not used. Therefore, the design does not allow you to determine whether the program or other factors may be responsible for producing a given change.
Qualitative design: A type of evaluation design used to capture the target population's perceptions, opinions, and experiences regarding your program activities, and/or to better understand a programmatic aspect in more depth by telling how and what happened, and when and to whom.
Quasi-experimental design: A type of evaluation design that approximates an experimental design by making comparisons between groups (intervention and control), but does not involve random assignment to the intervention or control groups. It may be possible to attribute changes to the program if you can document with baseline information that the two groups were similar prior to receiving the program.

REFERENCES
California Department of Health Services' Tobacco Control Section. Using case studies to do evaluation. Retrieved January 1, 2005, from www.dhs.cahwnet.gov/ps/cdic/ccb/TCS/documents/ProgramEvaluation/pdf
Centers for Disease Control and Prevention. (1999). Framework for program evaluation in public health. MMWR Recommendations and Reports, 48(RR-11). Retrieved January 5, 2005, from http://www.cdc.gov/mmwr/preview/mmwrhtml/rr4811a1.htm
Centers for Disease Control and Prevention, National Center for Chronic Disease Prevention and Health Promotion, Division of Adolescent and School Health. (2004). Evaluation steps tools. Unpublished manuscript.
MacDonald, G., Starr, G., Schooley, M., Yee, S. L., Klimowski, K., & Turner, K. (2001, November). Introduction to program evaluation for comprehensive tobacco control programs. Atlanta, GA: Centers for Disease Control and Prevention. Retrieved October 17, 2004, from http://www.cdc.gov/tobacco/evaluation_manual/Evaluation.pdf
Patton, M. Q. (2002). Qualitative research and evaluation methods (3rd ed.). Thousand Oaks, CA: Sage.
Preskill, H., & Russ-Eft, D. (2005). Building evaluation capacity: 72 activities for teaching and training. Thousand Oaks, CA: Sage.
Rossi, P. H., Freeman, H. E., & Lipsey, M. W. (1999). Evaluation: A systematic approach (6th ed.). Thousand Oaks, CA: Sage.
Sarvela, P. D., & McDermott, R. J. (1993). Health education evaluation and measurement. Brown & Benchmark Publishers.
Stecher, B. M., & Davis, W. A. (1987). How to focus an evaluation. Newbury Park, CA: Sage.
Thompson, N. J., & McClintock, H. O. (1998). Demonstrating your program's worth: A primer on evaluation for programs to prevent unintentional injury. Atlanta, GA: Centers for Disease Control and Prevention, National Center for Injury Prevention and Control.
W.K. Kellogg Foundation. (2001, December). Logic model development guide. Retrieved January 8, 2005, from http://www.wkkf.org/Programming/ResourceOverview.aspx?CID=281&ID=3669
W.K. Kellogg Foundation. (1998, January). W.K. Kellogg Foundation evaluation handbook. Retrieved October 17, 2004, from http://www.wkkf.org/Programming/ResourceOverview.aspx?CID=281&ID=770

APPENDIX B: Syphilis Case Illustrating the Application of the Manual

THE SITUATION
After analyzing syphilis morbidity reports and interview records, STD officials in the city of Chancri-La noticed an increase in the number of syphilis cases among men who reported having sex with men (MSM). From 1999 to 2002, both the number and the percentage of MSM cases had gone up. In 1999, there was only 1 MSM case, which represented 0.9% of the syphilis cases in males. By 2002, the number of MSM cases had increased to 14, representing 29.2% of male cases.
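As a quick arithmetic check, the reported counts and percentages together imply the approximate totals of male syphilis cases in each year; the implied totals below are derived from the reported figures, not stated in the source.

```python
# 1 MSM case was 0.9% of male cases in 1999; 14 cases were 29.2% in 2002.
for year, msm_cases, share in [(1999, 1, 0.009), (2002, 14, 0.292)]:
    implied_total_male_cases = msm_cases / share
    print(year, "implied total male cases ~", round(implied_total_male_cases))
# Roughly 111 male cases in 1999 vs. 48 in 2002: MSM cases rose both in
# absolute number and as a share of a smaller overall male caseload.
```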
Further analysis revealed that, based on the males' residences, the cases were not concentrated in one geographic part of the city. However, through interviews conducted by the Disease Intervention Specialists (DIS), the STD officials learned that most of the males socialized in the same area.

ACTIONS TAKEN
A DIS was already screening sporadically at a gay bar. To address this emerging problem, STD officials initiated meetings with six community-based organizations (CBOs) that work with the MSM community. Together, they designed a plan of action to implement jointly. One of the activities implemented was syphilis screening in different venues (i.e., bathhouses, gay bars, CBOs, a mobile unit, and a gay pride parade). The STD director and program staff were interested in determining which of these screening approaches was most successful in reaching the target population. The following illustrates the steps involved in designing and implementing this evaluation.

Step 1: Engaging Stakeholders in the Evaluation
1.) Who were the stakeholders in this scenario?
• Implementers: STD staff, CBO staff
• Decision makers: HD management and STD director
• Participants: Representatives of the target population (MSM)
• Partners: Businesses (i.e., bathhouses, gay bars), parade organizers
• State Laboratory
2.) How was stakeholders' input obtained for the evaluation?
The STD program staff organized a meeting to brief other stakeholders on the screening activities implemented and the importance of conducting an evaluation of the initiative, and also to:
• obtain stakeholder input,
• determine stakeholder needs, interests, and concerns about the evaluation,
• plan stakeholder involvement in the evaluation and what they hope to learn, and
• plan methods of keeping stakeholders informed during the evaluation.
3.) How was stakeholders' involvement retained throughout the evaluation?
Stakeholders' roles and responsibilities were discussed and agreed upon. All stakeholders reviewed documents pertaining to the evaluation (e.g., evaluation plan, instruments, analysis, and report), and decisions were made by consensus. The STD staff agreed to send all stakeholders a monthly email summarizing the progress of the evaluation as well as STD-related information affecting MSM. In addition, stakeholders participated in a monthly meeting through the end of the evaluation.

Step 2: Describing the Program
Considering the importance of mutual understanding, and because the evaluation involved individuals who might not be familiar with the STD program and the screening activity, the STD program staff shared some background information with the stakeholders at one of the monthly meetings. The project area had been involved in needs assessment activities when developing the Comprehensive STD Prevention System (CSPS) grant application, and staff shared information on the syphilis outbreak among MSM as well as behavioral data. They also reiterated that the purpose of the entire STD program was to address the STD needs of the project area, with emphasis on MSM. The STD director also provided the following information about the screening activity to be evaluated.
1.) What was the goal of the screening activity?
• Reduce syphilis among at-risk MSM living in Chancri-La.
2.) What were the screening-related objectives?
• By December 2002, the STD program staff will implement syphilis screening in 4 venues frequented by MSM.
• By December 2003, the number of at-risk MSM screened for syphilis will increase from X to Y.
3.) What were the resources?
• HD staff
• CBO staff
• HD $
• CBO $
• Access to four venues
• Screening equipment and supplies
• Mobile van
• Laboratory services
• Condoms
4.) What activities were being conducted with those resources?
• Training of CBO staff to provide information on syphilis screening
• Monthly screenings in 3 venues (bathhouses, gay bars, and through the mobile van)
• Screening at the Gay Pride parade
• Distribution of condoms
• Requests for assistance from local businesses frequented by MSM (e.g., permission to park the mobile van in their parking lots)
Stakeholders wanted to better understand how these screening components were going to fit with other STD program activities and lead to the results they were expecting. They decided to develop a logic model of the screening activity (see the logic model below).

Step 3: Focusing the Evaluation Design
Once stakeholders understood, via the logic model, the connections between the screening activity components and corresponding outputs and outcomes, they focused the evaluation by determining the uses and users of evaluation results, the questions they wanted the evaluation to answer, and the evaluation design(s) to be applied. The following shows the decision-making process.
1.) What is the purpose of the evaluation?
Stakeholders agreed that they wanted to gain insight into the screening activities implemented and determine which venue(s) was most successful in reaching the target population (i.e., at-risk MSM).
2.) Who will use the evaluation results?
All of the aforementioned stakeholders, most particularly the STD program staff.

Logic Model: Syphilis Outbreak in Chancri-La
Situation: Outbreak of syphilis in the MSM community. Screening initiated in 4 venues.
INPUTS: Staff; funding; screening supplies; condoms; laboratory; appropriate medication; mobile van; educational materials.
ACTIVITIES: Train CBO staff on syphilis screening; meet with bathhouse and gay bar management; conduct screenings in a bathhouse, 2 gay bars, the mobile van, and the Gay Pride parade; treat syphilis cases; distribute condoms; distribute educational materials.
OUTPUTS: Meetings conducted with 2 CBOs and gay bar management; training delivered to CBO staff on syphilis screening and treatment; syphilis screening conducted on a monthly basis in bathhouses, 2 gay bars, and the mobile van (various locations); syphilis screening at the annual Gay Pride parade; MSM screened and infected individuals treated; condoms and educational materials distributed.
SHORT-TERM OUTCOMES: Increased use of syphilis screening among at-risk MSM; increased awareness of syphilis among the MSM community and CBO staff.
INTERMEDIATE OUTCOMES: Increased access of MSM to syphilis and STD prevention and control services; increased condom use among at-risk MSM.
LONG-TERM OUTCOMES: Reduced risk behaviors; reduced incidence of syphilis and other STDs.
3.) How will the evaluation results be used?
It was determined that the results of the evaluation would be used to reduce or expand the screening activity locations.
4.) What questions do you want the evaluation to answer?
Stakeholders submitted all possible questions they wanted the evaluation to answer, which included:
• Was the screening activity implemented as planned in the different venues?
– What are the barriers and facilitators in carrying out syphilis screening in the different venues?
• Which venue(s) is (are) more effective in reaching and screening at-risk MSM?
– Which venue is more acceptable for syphilis screening among at-risk MSM?
– How many MSM were screened, by venue? What was the number of new positives found, by venue? If not as expected, why?
– Where should screenings be conducted, and when?
• Where should condoms be distributed?
– Were the condoms distributed to the establishments where cases are found? (The right number, and to the right places.)
– Were these the appropriate places to distribute condoms?
• Was the number of cases reduced to the degree planned?
• Did awareness about the syphilis outbreak increase among at-risk MSM and CBO staff?
• Did awareness of prevention measures among at-risk MSM and CBO staff increase?
5.) Since the results of the evaluation needed to be submitted within 3 months, and the STD program and CBOs did not have the resources to answer all these evaluation questions, stakeholders decided to focus on the following group of questions, selected by their level of importance. The questions were also classified as either process or outcome.
• Was the screening activity implemented as planned in the different venues? (process)
– What are the barriers and facilitators in carrying out syphilis screening in the different venues? (process)
• Which venue is more effective in reaching and screening at-risk MSM?
– Which venue is more acceptable for syphilis screening among at-risk MSM? (process)
– How many MSM were screened? What was the number of new positives found? If not as expected, why? (process)
– Where should screenings be conducted, and when? (process)
6.) Which evaluation design is most appropriate to guide data collection for the evaluation questions, given the available resources (budget, time, staffing)?
Since the purpose of the evaluation was to make programmatic decisions about the screening venues, as opposed to (1) determining the effects of the screening activity in the target population or (2) determining whether those effects were due to the screening activity, quasi-experimental and experimental evaluation designs were out of the question. So stakeholders, along with a professional evaluator from the health department (HD), selected non-experimental and qualitative designs to guide the data collection process for the evaluation questions.

EVALUATION QUESTIONS, DESIGNS, AND RATIONALE
• Question: Was the screening activity implemented as planned in the different venues?
Design: Qualitative design.
Rationale: Used to record (observe) screening activities as they occurred in the four venues and to determine whether they were implemented with fidelity.
• Question: What are the barriers and facilitators in carrying out syphilis screening in the different venues?
Design: Qualitative design.
Rationale: Used to obtain an in-depth understanding, among implementers and business owners, of perceived factors that either hindered or facilitated the implementation of syphilis screening in the different venues.
• Question: Which venue(s) is (are) more effective in reaching and screening at-risk MSM? (This question depends on the next three questions being answered.)
– Which venue is more acceptable for syphilis screening among at-risk MSM?
– How many MSM were screened? What was the number of new positives found? If not as expected, why?
Design: Qualitative design.
Rationale: Used to obtain the opinions of a sample of individuals at the screening venues regarding factors that motivated them to accept screening in that venue, their thoughts on the other venues, and other venues that still need to be reached. Also used to count how many MSM were actually screened per venue and how many of these were active cases of syphilis.
• Question: Where should screening be conducted, and when?
Design: Non-experimental post-only design.
Rationale: Used after the screening takes place to determine where, and at which times/days of the week, the highest numbers of at-risk MSM were screened.
Step 4: Gathering Credible Evidence
Since all the evaluation questions measured the process of the screening activity, stakeholders reviewed the logic model to identify corresponding outputs. They then selected the indicators to measure progress of the syphilis screening activity in the different venues, where and from whom data would be obtained for each indicator, and the corresponding data collection method(s). The following reflects the decisions made. To help maintain the confidentiality of respondents, it was agreed that (1) data collectors would strip all identifiers from the data gathered (observation logs, interviews, focus groups), and (2) the data would be secured in the HD evaluator's office. Stakeholders organized all the decisions made up to that point and developed an evaluation plan consisting of a narrative component (stakeholders, rationale, purpose, goal/objectives to be addressed in the evaluation, logic model, users/uses of the evaluation, dissemination approach, timeline, and budget) and a matrix (evaluation question, design, indicators, data sources/methods, person responsible, and schedule). Then the evaluators (HD/CBO) and STD staff drafted all evaluation instruments and protocols, gave them to the other stakeholders for input, and incorporated changes. Instruments were also pilot-tested. (A brief sketch of the de-identification step follows.)
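The de-identification step agreed to above might look like the following in practice. This is a minimal sketch (Python/pandas) with hypothetical column names, not the program's actual procedure.

```python
import pandas as pd

# Hypothetical interview log containing direct identifiers.
log = pd.DataFrame({
    "name":     ["A. Doe", "B. Roe"],
    "phone":    ["555-0101", "555-0102"],
    "venue":    ["bathhouse", "mobile van"],
    "screened": [True, True],
})

# Strip direct identifiers and substitute a sequential study ID before
# the file leaves the data collector's hands for secure storage.
deidentified = log.drop(columns=["name", "phone"]).reset_index(drop=True)
deidentified.insert(0, "study_id", range(1, len(deidentified) + 1))
print(deidentified)
```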
EVALUATION PLAN MATRIX
Q1. Was the screening activity implemented as planned in the different venues?
• Process indicator: Number of implementers who followed the screening procedures with 100% consistency in the four venues. Data source: Observations (implementers' performance during screening). Data collection method: Observation (log). Person responsible: Evaluator from HD. Schedule: Collected for each implementer on three occasions during the evaluation; final by dd/mm/yy. Data analysis (see Step 5): Quantitative (descriptive).
• Process indicator: Type of changes made to the screening activity in the four venues from the time it started. Data source: Individuals (implementers and STD director). Data collection method: Interview (individual/open-ended). Person responsible: Evaluator from HD. Schedule: Collected by dd/mm/yy. Data analysis: Review interviews, identify common themes, and group them by data source.
Q2. What are the barriers and facilitators in carrying out syphilis screening in the different venues?
• Process indicators: Barriers and facilitators identified by implementers, business owners, and decision makers regarding the implementation of the screening activity; type of challenges regarding the implementation of syphilis screening at the different venues reported by implementers. Data source: Individuals (implementers, business owners, decision makers). Data collection method: Interview (focus groups). Person responsible: Evaluator from one of the CBOs. Schedule: Collected by dd/mm/yy. Data analysis: Review transcriptions, identify common themes, group them by data source, and identify any patterns across and within sources.
Q3. Which venue is more acceptable for syphilis screening among at-risk MSM?
• Process indicators: Factors that motivated MSM to accept screening in a venue and their opinion of the other three venues; type of recommendations provided by MSM about other venues that still need to be reached. Data source: Individuals (a sample of MSM as they are screened at the different venues). Data collection method: Interview (individual/open-ended). Person responsible: Evaluator from one of the CBOs. Data analysis: Review transcriptions, identify common themes, and identify any patterns across respondents.
Q4. How many MSM were screened? What was the number of new positives found? If not as expected, why?
• Process indicators: Number of monthly syphilis screenings among MSM at bathhouses, gay bars, and the mobile van; number of syphilis screenings conducted among MSM at the Gay Pride parade; number of screening tests that turn positive. Data source: Documents (implementers' logs, lab records). Data collection method: Document review. Persons responsible: DIS; evaluator from HD. Schedule: Collected by dd/mm/yy of each month of the evaluation; collected within 4 days of the Gay Pride parade; collected within 7 days of the parade and monthly for the other venues. Data analysis: Number of screenings will be compared across venues and with expected numbers set at the beginning of the activity.
Q5. Where should screening be conducted, and when?
• Process indicator: Venue(s) yielding the highest number of tests and new positives. Data source: Documents (implementers' logs, lab records). Data collection method: Document review. Schedule: Collected by dd/mm/yy. Data analysis: Will use findings from Q2, Q3, and Q4.

Step 5: Justifying Conclusions
While data collection was taking place, stakeholders met and determined how the data from the indicators were to be analyzed. The evaluation plan was revised to include the data analysis process (as presented in the matrix above), along with the schedule and the person responsible for conducting the analyses. The following illustrates the main findings of the evaluation, organized by evaluation question and corresponding indicators.

Evaluation question: Was the screening activity implemented as planned in the different venues?
Indicator: Number of implementers who followed the screening procedures with 100% consistency in the four venues.
Summary of findings:
• Observations of all 7 staff screening individuals revealed that most of them (i.e., 5) followed the screening procedure all the time in the four venues. It was also found that the 2 staff not following the procedures were relatively new, not only to STD work but to the screening activity and protocols. Due to time constraints of the STD field supervisors, the training they received had not included practice sessions.
Indicator: Type of changes made to the screening activity in the four venues from the time it started.
Summary of findings:
• Interviews with implementers and the STD director indicated that in the past year, all the monthly screenings were held at the bathhouse, but screenings were held only for the first 6 months at one of the bars (because it closed), and only three times at the second bar. Monthly screenings were held every month in the mobile van, but not in the locations they had hoped for. Screening was held all day at the Gay Pride parade.
• Mobile van locations had to change twice because of complaints from neighborhood residents. Two locations that had been chosen originally had no parking available for the van and were removed from the list.

Evaluation question: What are the barriers and facilitators in carrying out syphilis screening in the different venues?
Indicator: Barriers and facilitators identified by implementers, business owners, and decision makers regarding the implementation of the screening activity.
Summary of findings:
• Screenings needed to be held at night, and it was hard to get staff to work those hours.
• STD program staff needed commercial driving licenses to drive the mobile van; only one staff person had that license.
• When interviewed, the bar managers expressed a fear of revenue loss when patrons were away from their barstools or tables to get tested. They also feared poor bar attendance if the screening events were advertised, since this might keep some patrons away. One bar closed halfway through the year. So, even though they had agreed to participate, "something" always seemed to come up on the night the screening was scheduled, so it had to be cancelled.
• In general, MSM said they were more interested in being tested for HIV than for syphilis, because HIV status was more important to them and they did not believe syphilis was present in their community.
• There was insufficient time to create attractive materials for the Gay Pride parade to encourage MSM to be tested for syphilis.
• Facilitators included: (1) each facility had a private room that could be used for screening, (2) a contact from one of the CBOs worked with the organizers of the Gay Pride parade to allow advertising and testing for syphilis, and (3) the bathhouse manager encouraged participation in the screening and advertised when the screenings would be held.
Indicator: Type of challenges regarding the implementation of syphilis screening at the different venues reported by implementers.
Summary of findings:
• Gay bar owners feared that their clients would identify their locales with infections or consider them "dirty places," and that they would lose clients as a result.
• The mission of the gay bars was socialization; introducing screening for a sexually transmitted disease was not compatible with that mission.
• Half of the screening staff lacked knowledge of, and experience with, the MSM community.
• Getting permission to draw blood at a public gathering (Gay Pride) was a challenge.
• Neighborhood complaints about the noise the van produced resulted in much staff time being spent responding to complaints and relocating the van.

Evaluation question: Which venue is more acceptable for syphilis screening among at-risk MSM?
Indicator: Factors that motivated MSM to accept screening in a venue and their opinion of the other three venues.
Summary of findings:
• Results of interviews with MSM indicated more willingness to be screened for syphilis at the bathhouse than at the gay bars. Since more sexual activity goes on in the bathhouse than in the bars, they said they felt at greater risk there for syphilis and other STDs.
• Previous syphilis infection, or knowing someone who had syphilis, was another motivator.
• Ease of access and quickness of both the screening test and the test results were motivators.
• Gay Pride testing was good for visibility; however, most MSM surveyed there declined testing if it involved waiting 30 minutes or more.
• It was important to have a consistent schedule for the mobile van so that clients could locate the van easily to obtain results.
Indicator: Type of recommendations provided by MSM about other venues that still need to be reached.
Summary of findings:
• Interviewees suggested holding screening activities at "circuit parties," or arranging them with those who host such parties.
• Another suggestion was to place an ad in the local gay newspaper and on gay websites about the syphilis outbreak and where to be screened/treated.

Evaluation question: How many MSM were screened? What was the number of new positives found? If not as expected, why?
Indicator: Number of monthly syphilis screenings among MSM at bathhouses, gay bars, and the mobile van.
Summary of findings:
• Bathhouses: 250 men approached; 150 screened
• Gay bars: 500 men approached; 150 screened
• Mobile van: 1,000 men approached; 300 screened
Indicator: Number of syphilis screenings conducted among MSM at the Gay Pride parade.
• Gay Pride: 200 men approached; 30 screened
Indicator: Number of screening tests that turned positive.
• Bathhouses: 5 positive
• Gay bars: 2 positive
• Mobile van: 1 positive
• Gay Pride: 0 positive

Evaluation question: Where should screenings be conducted, and when?
Indicator: Venue(s) yielding the highest number of tests and new positives.
• Highest number of at-risk MSM tested: mobile van
• Highest percentage of active syphilis cases: bathhouses
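The volume-versus-yield comparison behind the Q5 answer can be tallied directly from the counts reported above. The sketch below (Python/pandas) uses only the reported numbers; the percentage columns are derived.

```python
import pandas as pd

# Counts reported in the findings above.
venues = pd.DataFrame({
    "approached": [250, 500, 1000, 200],
    "screened":   [150, 150, 300, 30],
    "positive":   [5, 2, 1, 0],
}, index=["bathhouses", "gay_bars", "mobile_van", "gay_pride"])

venues["pct_screened"] = (100 * venues["screened"] / venues["approached"]).round(1)
venues["pct_positive"] = (100 * venues["positive"] / venues["screened"]).round(1)
print(venues)
# The mobile van screened the most men (300), while the bathhouses had the
# highest positivity (5/150 = 3.3%), matching the Q5 findings: volume and
# yield point to different "best" venues.
```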
INTERPRETATION OF FINDINGS
Stakeholders received these findings and met to interpret them. It was concluded that the implementation of the screening activity was facilitated by:
• Having most of the screening staff follow the screening protocols.
• Having private rooms available to conduct the screening at each venue.
• Partnering with a proactive bathhouse manager (who agreed to advertise screening).
• High self-perceived risk for syphilis among bathhouse clients.
• Previous experience with syphilis among MSM.
• Ease of access and quickness of both the screening test and test results.
• Being visible at the Gay Pride parade.
• Increasing access of at-risk MSM to syphilis screening via a mobile van.
There were also factors that hindered screening implementation:
• Pre-planning issues
– The need for more training on the implementation of screening protocols for new staff.
– Van locations with no parking available.
– A limited number of staff with commercial driving licenses to drive the van.
– Competing demands among screening staff, making it difficult to work after hours.
– Neighborhood complaints about the noise produced by the van.
– Lack of attractive advertising materials.
• Business limitations
– A gay bar closing.
– Fear among gay bar managers of having their business perceived as "dirty" if STD testing was offered.
– Conflict between the aim of the bars (socialization) and a distracting public health activity.
– Gay bar managers having no time to advertise screening.
• The target population's low perceived risk for syphilis and lack of awareness about the outbreak among gay bar clients, and having to wait more than 30 minutes to be screened at the parade.

RECOMMENDATIONS
Based on the findings, the following were recommended:
• Conduct booster sessions on screening protocols with all screening staff, and coaching by field supervisors for new staff.
• Continue using the mobile van for syphilis screening to reach at-risk MSM, with the following refinements:
– Before using the van in residential areas, obtain permits in advance to locate the van. It is important to meet with neighborhood leaders to make them aware of the magnitude of the outbreak and the importance of conducting screening; build a relationship with them to gain access to, and acceptability in, the community; and request their input on where and when to place the van.
– Increase the number of screening staff with commercial driver's licenses by giving those interested time to obtain the training and license, and incentives for doing so (e.g., acknowledgement at a staff meeting).
– Keep a consistent schedule for the mobile van so that clients can locate the van easily to obtain results.
– Make sure that the waiting time for screening is less than 30 minutes.
• Keep strengthening the relationship with the bathhouse manager so screening activities can continue.
• Since gay bars do not seem to be the most successful places for syphilis screening, keep providing them with prevention materials and explore other venues such as "circuit parties."
• Develop monthly schedules in advance, including the exact times at which screening activities will be held, so that screening staff can make arrangements to work after hours if needed.
• Consult with the communication or health education specialists within the health department and CBOs to develop attractive material advertising screening times and places in the gay media, in establishments, and in other places that MSM tend to visit.

Step 6: Sharing Lessons Learned and Ensuring Use of Findings
The evaluation findings were shared with the pertinent audiences, and some of the evaluation recommendations have been implemented by the STD program and other stakeholders. The following shows who received the evaluation results and in which format, how the STD program ensured that the evaluation results would be used for decision making, and which decisions have been implemented.
1.) Who received information on the evaluation results, and in which format?
• HD and STD director (executive summary and full evaluation report)
• STD program staff (executive summary and oral presentation)
• CBOs (executive summary and oral presentation)
• MSM leaders represented in the stakeholder group (oral presentation and fact sheet)
• MSM community (fact sheet)
• CDC (oral presentation at the National STD Conference)
• NCSD (executive summary, fact sheet)
• Businesses (i.e., bathhouses, gay bars) and parade organizers (oral presentation, executive summary, and fact sheet)
2.) How were stakeholders kept informed about the evaluation?
• Regular monthly meetings
• Email
• Final report
3.) What steps were taken to ensure use of the evaluation findings?
• Stakeholders helped draft recommendations based on the evaluation findings.
• The STD director proposed the recommended changes to HD management, MSM leaders, and CBOs.
• Follow-up meetings were conducted with those who could make decisions regarding the implementation of syphilis screening in different venues.
4.) How were evaluation findings used?
• Days and times of mobile van screening were adjusted to meet increased demand at peak times for two venues.
• One venue (i.e., gay bars) was discontinued as a result of the analysis of the volume of positive test results.
• As a result of discovering that the mobile van driver needed a commercial license, the STD program identified several staff willing to drive the van and arranged commercial driver training for them. Four staff subsequently received their commercial driver's licenses.
• The STD program revised the plan to incorporate meetings to advise local law enforcement about the mobile van activities.

APPENDIX C: Sample Logic Models of STD Programs

CALIFORNIA DHS/STD CONTROL BRANCH AND CALIFORNIA STD/HIV PREVENTION TRAINING CENTER
Goal: To reduce the prevalence of STDs among HIV+ MSM in California.

IDAHO DEPARTMENT OF HEALTH AND WELFARE, STD/AIDS PROGRAM: LOGIC MODEL FOR SYPHILIS REPORTING SYSTEM
Goal: Improve the quality of syphilis interviewing and reporting.

FORSYTH COUNTY'S SYPHILIS ELIMINATION PROJECT (NORTH CAROLINA)
Goal: To reduce the incidence of syphilis among high-risk African American males and females, 18-45 years of age, in Forsyth County.

MICHIGAN DEPARTMENT OF COMMUNITY HEALTH, STD PROGRAM
Goal: Increase chlamydia screening in females 15-24 years old at an emergency department (ED) in Detroit.
APPENDIX D: Sample Evaluation Plans of STD Programs

CALIFORNIA DHS/STD CONTROL BRANCH AND CALIFORNIA STD/HIV PREVENTION TRAINING CENTER
Susan Watson, MPH, and Michael McElroy, MPH
Evaluation Plan for California STD Toolkit

EVALUATION PLAN NARRATIVE
List the STD program activity to be evaluated.
• Program activity to be evaluated: A toolkit for clinical providers of HIV+ MSM aimed at facilitating an increase in routine STD screening and awareness of health needs.
List stakeholders (agency) involved in the evaluation.
• Stakeholders involved in the evaluation: Program manager, STD program staff, CDC/AED, medical advisory board, MSM health service providers and clinic staff, DIS staff, consultants (Better World Advertising), STD director, Office of AIDS Prevention chief, DIS chief, clients/patients, and community members.
List the rationale for the STD program activity to be evaluated.
• Rationale for selecting the program activity: STDs increase the risk of acquisition and transmission of HIV, but adherence to the STD screening guidelines for MSM is inconsistent among clinical providers. By creating and distributing a toolkit with relevant reference materials (e.g., risk assessment guidelines, screening guidelines) to clinical providers of HIV+ MSM, there should be an increase in awareness of the need for, and ultimately the practice of, increased routine STD screening.
List the purpose of the evaluation.
• Purpose of the evaluation: To evaluate the implementation and effectiveness of the MSM Toolkit.
List the program goal(s) and objectives to be addressed through the evaluation. (Note: objectives marked with "*" will be addressed in this evaluation.)
GOAL: To reduce the prevalence of STDs among HIV+ MSM in California.
Process objectives*:
• By June 2006, project staff will have developed provider reference materials on STD screening recommendations and the sexual health of HIV+ MSM.
• By September 2006, project staff will distribute the MSM Toolkit to a sample of clinical providers caring for HIV+ MSM in California to pilot the intervention (approximately 50-60 across 4 local health jurisdictions).
• By December 2006, project staff will revise the MSM Toolkit based on feedback from the providers who participated in the pilot.
Short-term outcome objectives*:
• By November 2006, the clinical providers given a Toolkit will report, in a post-toolkit assessment questionnaire, increased awareness of the need for STD screening among HIV+ MSM, from Y% to Z%.
• By November 2006, the clinical providers given a Toolkit will report, in a post-toolkit assessment questionnaire, increased awareness of the health needs of MSM, from Y% to Z%.
Intermediate outcome objectives:
• By (month/year), routine screening for STDs in HIV+ MSM among clinical providers given the pilot version of the Toolkit will increase from Y% to Z%.
Long-term outcome objectives:
• By (month/year), the prevalence of STDs among HIV+ MSM will decrease from Y% to Z%.
Attach logic model. See Attachment A.
List individuals and roles on the evaluation team.
• Program evaluator: Oversees and leads all evaluation activities.
List individuals and roles on the evaluation team.
• Program evaluator: Oversees and leads all evaluation activities.
• Project manager: Oversees all project activities; develops the timeline; determines clinic sites and jurisdictions; supervises data collection and handling; reviews all components of the evaluation and the final report; disseminates findings.
• Project assistant: Develops toolkit instruments in consultation with the project manager and other program staff; assists the program evaluator with evaluation activities; maintains files of completed evaluation tools; enters and analyzes evaluation data.
• STD program staff: Assist with project and evaluation activities as needed.
• CDC/AED: Provides guidance and assistance throughout the evaluation process.

List the users and uses of the evaluation findings.
• Implementers (program manager, STD project staff, MSM health service providers and clinic staff, DIS, CDC/AED): Determine the effectiveness of the Toolkit in changing the awareness and screening practices of clinical providers. Use evaluation findings to improve the Toolkit and how it is distributed, plan future activities, allocate resources, and increase the capacity of the advisory board to promote the Toolkit. CDC/AED will use the evaluation to measure the effectiveness of the evaluation tools, refine them, and publish results.
• Decision makers (STD chief and program manager, Office of AIDS Prevention chief): Ensure that HIV+ MSM are receiving appropriate STD services and that clinical providers are adhering to recommended STD screening guidelines. Use evaluation findings to plan future activities, allocate future funding, and inform program and policy changes.
• Partners (medical advisory board): Improve clinical practice and decisions.

List the approach to disseminating the evaluation findings to appropriate users.
• Funders: Presentation and/or written report (including an executive summary).
• Other STD staff and programs: Presentation and/or report.
• MSM health service providers and clinical staff: Report and/or presentation.
• Advocacy group: Report and/or presentation.
• Scientific community/CDC: Manuscript publication.

Attach the timeline for completing the evaluation.
• See Attachment B.

Attach the evaluation budget.
• N/A

ATTACHMENT A: LOGIC MODEL, STD TOOLKIT
Goal: To reduce the prevalence of STDs among HIV+ MSM in California.

ATTACHMENT B: TIMELINE FOR STD TOOLKIT

IDAHO DEPARTMENT OF HEALTH AND WELFARE, STD/AIDS PROGRAM
Annabeth Elliott, RN
Evaluation Plan for Syphilis Reporting

EVALUATION PLAN NARRATIVE

The activity to be evaluated is the syphilis reporting system and the interviews conducted to elicit partners from reported syphilis cases.
Stakeholders involved in the evaluation include the Idaho Dept. of Health and Welfare STD/AIDS Program, the Idaho Dept. of Health and Welfare Office of Epidemiology and Food Protection (OEFP), and CDC.
Rationale: By determining the barriers and facilitators to effective interviewing and reporting of syphilis, stakeholders will make appropriate decisions to improve the system.
The purpose of the evaluation is to evaluate the existing system of reporting and interviewing for fidelity and timeliness, and to determine the barriers to and facilitators of prompt, complete, and accurate submittal of syphilis reports to CDC.
Program goals and objectives:
Goal: Improve the quality of syphilis interviewing and reporting.
• Objective #1. By 12/31/2005, at least 90% of syphilis cases will be confidentially interviewed by a district Epidemiologist to thoroughly elicit partners within 30 days. (CDC PM: proportion of P & S syphilis cases interviewed within 7, 14, and 30 calendar days from date of specimen collection; number of associates and suspects tested, per case of P & S syphilis;
number of associates and suspects treated for newly diagnosed syphilis, per case of P & S syphilis. See the sketch following this plan.)
• Objective #2. By 12/31/2005, at least 90% of syphilis cases will be brought to treatment by a district Epidemiologist within 30 days. (CDC PM: number of contacts prophylactically treated, or newly diagnosed and treated, within 7, 14, and 30 calendar days from day of interview of the index case, per case of P & S syphilis.)
• Objective #3. By 12/31/2005, on at least 90% of charts, district epidemiologists will document all communication, education, treatment, and case management of high-priority STDs according to the OEFP contract guidelines.
• Objective #4. By 12/31/2005, every month district Epidemiologists will access current data on local and statewide syphilis epidemiology provided by the OEFP.
• Objective #5. By 12/31/2005, at least 90% of syphilis cases will receive confidential risk reduction counseling within 30 days.
• Objective #6. By 12/31/2005, increase to 60% the proportion of reported cases of P & S syphilis, EL syphilis, and congenital syphilis sent to CDC via NETSS that have data for age, race, sex, county, and date of specimen collection.
Logic Model: See Attachment C.
Individuals and roles on the evaluation team:
• Annabeth Elliott, STD Program Specialist: conducts the evaluation at the Idaho STD Program.
• CDC Evaluation Team:
- Yamir Salabarría-Peña: Evaluation Project Team lead; oversight of evaluation activities; TA; site visit to assist with data collection.
- Richard Sawyer: Provides TA, some of which included drafting evaluation questions, indicators, data sources, and data collection methods; will assist with data collection during the site visit.
- Stacey Little: Coordinates TA; will assist with data collection during the site visit.
• District Epi staff: Answer evaluation questions and provide feedback.
• OEFP staff: Answer evaluation questions and provide feedback.
Users of the evaluation findings:
• District Health Dept. directors (decision makers): written report with verbal follow-up.
• Other STD programs (partners): panel at the National STD Conference; brief discussion at the IPP Conference and in the Thursday report.
• Epi staff and managers (implementers): written report and possibly a presentation at the Epi conference. Will use evaluation findings to improve syphilis interviewing and reporting.
• CDC (funders): written report.
Timeline: See Attachment D.
Budget: See Attachment E.

ATTACHMENT C: LOGIC MODEL FOR SYPHILIS REPORTING SYSTEM (IDAHO DEPARTMENT OF HEALTH AND WELFARE, STD/AIDS PROGRAM)
Goal: Improve the quality of syphilis interviewing and reporting.

ATTACHMENT D: TIMELINE

ATTACHMENT E: BUDGET
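The CDC performance measure cited in Objective #1, the proportion of P & S syphilis cases interviewed within 7, 14, and 30 calendar days of specimen collection, reduces to simple date arithmetic over the case records. The following minimal sketch uses hypothetical records; none of the dates come from the Idaho program.

    from datetime import date

    # Hypothetical case records: date of specimen collection and date the
    # case was interviewed by the district Epidemiologist.
    cases = [
        {"collected": date(2005, 3, 1), "interviewed": date(2005, 3, 6)},
        {"collected": date(2005, 4, 12), "interviewed": date(2005, 4, 29)},
        {"collected": date(2005, 5, 3), "interviewed": date(2005, 6, 10)},
        {"collected": date(2005, 6, 20), "interviewed": date(2005, 6, 28)},
    ]

    for window in (7, 14, 30):
        # Count cases interviewed within the window, then report the proportion.
        met = sum(1 for c in cases
                  if (c["interviewed"] - c["collected"]).days <= window)
        print(f"Interviewed within {window} days: {met}/{len(cases)} "
              f"({100 * met / len(cases):.0f}%)")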
FORSYTH COUNTY'S SYPHILIS ELIMINATION PROJECT (NORTH CAROLINA)
Monica Brown, MPH; Kawanna Glenn, BS; Monica Melvin, BS; Chantha Prak, BS; Lumbe Davis, MPH
Evaluation Plan for Outreach Component of Forsyth County's Syphilis Elimination Project

EVALUATION PLAN NARRATIVE

STD program component to be evaluated: Community outreach efforts of the Forsyth County Health Department's Syphilis Elimination Project (SEP).
Stakeholders involved: SEP and NTS program staff, DIS, STD director, STD coordinator, POSSE Task Force, STD clinical staff, Step One Substance Abuse Services (CBO), health commissioners, CDC evaluation team, AED technical assistance team, and service providers.
Rationale for selecting the program activity: Syphilis outreach efforts are instrumental in building rapport with the community in order to educate, increase awareness, and facilitate behavior change. Consistent and effective outreach should enhance risk reduction behaviors, increase risk perception, and lead to more screenings within the community.
Purpose of the evaluation: To evaluate the effectiveness of current outreach in high-risk areas of Forsyth County.
Goals and objectives to be addressed:
Goal: To reduce the incidence of syphilis among high-risk male and female racial/ethnic minorities, 18-45 years of age, in Forsyth County, NC.
Process objectives (see the sketch following this plan):
1. Between October 2005 and June 2006, program staff will provide an average of 35 health education contacts/communications per month to males and females of the target population.
2. Between October 2005 and June 2006, program staff will distribute an average of 35 safe sex kits (containing brochures, condoms, and testing information) per month to males and females of the target population.
3. Between October 2005 and June 2006, program staff will implement an average of 20 community outreach events per month in target zip codes 27101, 27105, and 27107.
Logic Model: See Attachment F.
Individuals and roles on the evaluation team:
• Health Promotions director: Oversees all health promotion activities.
• STD/HIV director: Supervises SEP and NTS program staff and oversees all NTS and SEP activities. Collects data, maintains records, analyzes monthly reports, and assists with outreach efforts upon request.
• NTS and SEP program staff: Provide education, counseling, and screenings. Conduct community outreach, collect data, develop the evaluation tool, and determine outreach venues based on statistics and DIS reports.
• AED and CDC: Provide assistance in developing evaluation plans, goals, and objectives, and will help with data collection and analysis.
• DIS: Provide assistance in determining outreach locations. Provide follow-up interviews for syphilis-positive cases. Possibly collect outreach data.
Users and uses of evaluation findings:
• Implementers (STD/HIV director, SEP and NTS program staff): Determine the effectiveness of community outreach efforts in the target zip codes and population. Based on the findings, changes will be implemented to improve outreach efforts and to provide information to stakeholders on appropriate methods to access target populations.
• Decision makers (NC SEP Program, Health Promotions director, STD/HIV director, SEP and NTS program staff, funders, health commissioners): Provide support for changes made to outreach efforts; change outreach to ensure its effectiveness.
• Partners (POSSE Task Force, Step One Substance Abuse Services, service providers, STD clinical staff, and DIS): Provide ideas for modifications to outreach efforts.
Approach to disseminating the evaluation findings:
• Written report: To be used by funders, the Forsyth County Health Department, CDC, and the NC SEP Program.
• Presentations at conferences and local meetings: POSSE Task Force, CBOs, NC SEP programs, county health departments, faith-based organizations, service providers, community members, and Forsyth County Health Department staff.

ATTACHMENT F: LOGIC MODEL, OUTREACH COMPONENT OF FORSYTH COUNTY'S SYPHILIS ELIMINATION PROJECT
Goal: To reduce the incidence of syphilis among high-risk African American males and females, 18-45 years of age, in Forsyth County.
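Because the Forsyth County process objectives are stated as monthly averages, monitoring them amounts to comparing each average tally against its target. The following minimal sketch uses hypothetical monthly tallies for October 2005 through June 2006; none of the numbers are from the actual project.

    # Hypothetical monthly tallies, October 2005 through June 2006 (9 months).
    monthly_contacts = [28, 41, 33, 39, 36, 30, 44, 31, 38]  # education contacts
    monthly_kits = [35, 29, 40, 37, 33, 36, 42, 30, 34]      # safe sex kits
    monthly_events = [18, 22, 19, 21, 20, 17, 23, 20, 21]    # outreach events

    objectives = [
        ("health education contacts", monthly_contacts, 35),
        ("safe sex kits distributed", monthly_kits, 35),
        ("community outreach events", monthly_events, 20),
    ]

    for label, counts, target in objectives:
        # Average per month over the reporting period, compared to the target.
        average = sum(counts) / len(counts)
        status = "met" if average >= target else "not met"
        print(f"{label}: average {average:.1f}/month (target {target}: {status})")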
MICHIGAN DEPARTMENT OF COMMUNITY HEALTH, STD PROGRAM
Kristine Judd, BSPH and Bruce Nowak, BS
Evaluation Plan for Ct Activity in an Emergency Department

EVALUATION PLAN NARRATIVE

Program activity: Determine whether there is an existing protocol for Chlamydia screening in the Emergency Department of St. John Hospital, and evaluate adherence to, or barriers to, the protocol. Implement universal screening of 15-24 year old females to determine the level of infection that was previously missed.
Stakeholders:
• Mark A. Miller, STD Director
• Kristine Judd, STD Administrative Program Manager
• Bruce Nowak, STD Surveillance Supervisor
• Yamir Salabarría-Peña, Dr.P.H., MPH, Health Scientist/Evaluation Specialist, CDC
• Richard Sawyer, Ph.D., Senior Program and Evaluation Manager, AED
• Susan Rogers, Ph.D., Senior Research and Evaluation Advisor, AED
• Karen Lighheart and Alana Thomas, STD DIS, Surveillance
• Detroit Health Department STD DIS
• James Rudrik, Ph.D., Microbiology Section Manager, MDCH Bureau of Laboratories
• Dr. Southall, Director, Emergency Department, St. John Hospital
• Dr. Charlene Irvin, Research Director, St. John Hospital
• Medical students/residents, St. John Hospital
Rationale for selecting the program activity: In Michigan, Chlamydia prevalence is highest among those ages 15-19 and 20-24, with 2004 rates of 1,906 and 2,406 per 100,000 population, respectively. Additionally, screening conducted at adolescent venues (school-based clinics, juvenile detention facilities, and teen health centers) shows high positivity, up to 24% in females and 21% in males. Among the school-based clinics studied, 49% of the students who tested positive for Chlamydia accessed services for reasons other than an STD check.
Purpose of the evaluation: This evaluation will examine the implementation of a revised Ct screening protocol at the facility and, for a 6-month period, offer universal screening to females ages 15-24 accessing services in the St. John Hospital Emergency Department. This facility was chosen because it is located in SE Michigan, a high-morbidity area. Results will be analyzed to determine how many cases of Chlamydia would have gone undetected had the traditional screening protocol been followed (see the sketch at the end of this plan).
Program goal and objectives to be addressed through the evaluation:
Goal: By November 1, 2006, the St. John ED will fully adopt a protocol to universally screen all 15-24 year old females for Chlamydia.
Process objectives:
• By January 10, 2006, Michigan STD (MSTD) will identify an emergency department for a pilot evaluation of the Chlamydia screening protocol.
• By January 13, 2006, MSTD will establish a partnership with the emergency department at St. John's Hospital in Detroit.
• By January 13, 2006, MSTD will meet with CDC/AED to establish the evaluation timeline.
• By January 13, 2006, MSTD and CDC/AED will delineate roles and responsibilities for the evaluation.
• By February 15, 2006, MSTD will meet with the site to discuss the evaluation process, delineate roles and responsibilities, and gather existing policies/procedures on Ct screening.
• By May 15, 2006, MSTD will develop the data collection instrument to be used by the emergency department (ED).
• By June 1, 2006, MSTD will provide the ED with all (laboratory) materials and training on the procedure for collecting and submitting specimens to the MDCH laboratory.
• By July 1, 2006, MSTD will conduct a site visit and chart review to assess adherence to the revised protocol.
• By November 1, 2006, MSTD will finalize data collection and forward the data to CDC and AED.
Short-term outcome objectives:
• By May 1, 2006, the St. John's ED will accept the revised protocol to universally screen all 15-24 year old females for Chlamydia.
• By June 1, 2006, 60% of ED staff at St. John's will have increased awareness of Chlamydia prevalence among the target population, as measured by a post-training evaluation.
Intermediate outcome objectives:
• By May 15, 2006, the St. John's ED, as part of the protocol, will submit specimens to the MDCH regional laboratory in Detroit.
• By September 20, 2006, St. John's will achieve 80% adherence to the revised screening protocol.
Long-term outcome objectives:
• By November 1, 2006, the St. John's ED will fully adopt the protocol to universally screen females ages 15-24 for Chlamydia.
Logic Model: See Attachment G.
Users and uses of the evaluation findings:
• The Michigan STD Program will use the evaluation findings to inform resource allocation for future Chlamydia screening, as well as to advocate for increased screening in other venues.
• The St. John ED will use the results of this evaluation to determine whether to make a permanent adjustment to its Chlamydia screening criteria.
• CDC/AED will measure the effectiveness of the evaluation tools, refine them accordingly, and publish results related to the pilot-testing process and the actual evaluation.
Approach to disseminating the evaluation findings to appropriate users:
• CDC/AED: written report, publications
• St. John: written report and oral presentation
• MDCH/STD: written report
• Michigan IPP: written report and oral presentation
Timeline for completing the evaluation:
• Build partnerships with the ED: ongoing
• Define/delineate roles and responsibilities for the evaluation: March 2006
• Provide/deliver resources: May to October 2006
• Provide TA to the ED on the purpose of the evaluation: ongoing
• Develop the data collection instrument for use by the ED: May 2006
• Conduct regular meetings with the ED: monthly
• Evaluate the current ED protocol: February 2006

ATTACHMENT G: LOGIC MODEL, MICHIGAN DEPARTMENT OF COMMUNITY HEALTH, STD PROGRAM
Goal: Increase Chlamydia screening in females 15-24 years old at an emergency department (ED) in Detroit.
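Once the 6-month universal screening period ends, the undetected-case analysis described in the purpose statement reduces to simple arithmetic: of the positives found under universal screening, count those who would not have met the traditional (risk-based) screening criteria. The following minimal sketch uses hypothetical counts; none of the figures are results from this evaluation.

    # Hypothetical six-month results under the universal screening protocol.
    total_screened = 900         # females 15-24 screened in the ED
    total_positive = 72          # Ct positives found under universal screening
    flagged_by_traditional = 30  # positives who also met the traditional
                                 # (risk-based) screening criteria

    # Cases the traditional protocol would have missed.
    missed = total_positive - flagged_by_traditional
    print(f"Positives missed by traditional screening: {missed}")
    print(f"Share of all positives: {100 * missed / total_positive:.0f}%")
    print(f"Overall positivity under universal screening: "
          f"{100 * total_positive / total_screened:.1f}%")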