
Project Report and User Guide

Chapter Seven. Potential Effects and Future Refinements

The potential effects of this study fall into two categories. The first is methodology development for analyzing the costs and effects of interventions to reduce motor vehicle–related injuries: in this project, we have provided transparent calculations starting from the input assumptions and ending with the estimated cost and effectiveness outcomes. The second is assistance to state decisionmakers in determining the best mix of interventions to reduce motor vehicle crashes within a given implementation budget. We discuss both categories in turn before proceeding to possible future refinements of the tool.

Potential Effects on Methodology Development

This project departs from traditional methodologies and offers a new approach to analyzing the costs and effectiveness of interventions, in several respects.

First, methods for assessing and ranking the attractiveness of interventions are typically based on cost-effectiveness ratios; the idea is that more effectiveness for each dollar expended is better. However, this ratio approach ignores the interdependencies among interventions and overestimates the total effectiveness when interdependent interventions are implemented together. For this project, we calculated results both with and without consideration of such interdependencies. For the 12 interventions assessed in this study, ignoring interdependencies can overestimate benefits by as much as 39 percent. When this tool is applied to other cases in the future, the errors may be smaller or larger, depending on the number of interdependent interventions and the degrees of their interdependencies.
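
To make the overestimation concrete, consider a minimal sketch with two hypothetical interventions that act on the same crash population; the figures are illustrative, not values from this study, and the multiplicative combination shown is just one common way to model such overlap:

```python
# Illustrative only: hypothetical effectiveness values, not figures from
# the tool. Two interventions acting on the same crashes cannot simply have
# their percentage reductions added; the second acts on an already reduced pool.
reduction_a = 0.10  # hypothetical: intervention A cuts deaths by 10 percent
reduction_b = 0.15  # hypothetical: intervention B cuts deaths by 15 percent

naive_total = reduction_a + reduction_b                  # 0.250
combined = 1 - (1 - reduction_a) * (1 - reduction_b)     # 0.235

overestimate = (naive_total - combined) / combined
print(f"naive {naive_total:.1%}, combined {combined:.1%}, "
      f"overestimated by {overestimate:.1%}")            # about 6.4 percent
```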

Second, to choose the best package of interventions to implement for a given budget, the traditional cost-effectiveness approach would first arrange all the candidate interventions into a ranked list, with the highest calculated cost-effectiveness ratio at the top and the lowest at the bottom. One would then simply select interventions from the top down until the implementation budget is fully committed. However, significant interdependencies can alter the order in which interventions should be selected.
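
For reference, the traditional ranked-list selection can be sketched in a few lines of Python; the intervention names, costs, and effectiveness figures below are hypothetical placeholders, not estimates from the tool:

```python
# A minimal sketch of the traditional approach: rank candidate interventions
# by effectiveness per dollar, then fund from the top down until the budget
# is exhausted. All names and numbers are hypothetical.
interventions = [
    # (name, implementation cost in dollars, deaths averted)
    ("seat_belt_enforcement", 400_000, 12.0),
    ("red_light_cameras",     900_000, 18.0),
    ("sobriety_checkpoints",  600_000, 10.0),
]
budget = 1_200_000

selected, remaining = [], budget
for name, cost, effect in sorted(interventions,
                                 key=lambda x: x[2] / x[1], reverse=True):
    if cost <= remaining:  # binary choice: fund fully or not at all
        selected.append(name)
        remaining -= cost

print(selected)  # ignores interdependencies and partial funding
```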

More importantly, as discussed below under “Future Refinements of the Tool,” planners could be interested in not only the binary (i.e., full or no) implementation options for an intervention but also partial implementations at various funding levels for corresponding lesser levels of effectiveness. Partial funding of one or more interventions can result in a larger total effectiveness than full funding of a smaller set of interventions for the same implementation budget. The simple cost–effectiveness ratio method is ill-suited to determine the optimal combination (portfolio) of interventions that are interdependent or can be partially funded for lesser effectiveness. On the other hand, the methodology used in this study employs a mixed-integer linear programming model specifically designed to address these more-complicated and realistic cases.
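
The report describes the optimization only at a high level, so the following is a minimal sketch of how such a mixed-integer program might be set up, here using the open-source PuLP library; the interventions, costs, effects, and the pairwise overlap penalty are hypothetical stand-ins rather than the study's actual formulation:

```python
# Hypothetical mixed-integer sketch: maximize total effectiveness under a
# budget, subtracting double-counted benefit when two overlapping
# interventions are both selected. Data are illustrative placeholders.
import pulp

costs   = {"belt": 400_000, "camera": 900_000, "checkpoint": 600_000}
effects = {"belt": 12.0, "camera": 18.0, "checkpoint": 10.0}
overlap = {("belt", "checkpoint"): 3.0}  # hypothetical shared benefit
budget  = 1_200_000

x = {i: pulp.LpVariable(f"x_{i}", cat="Binary") for i in costs}
# y[i, j] is forced to 1 only when both i and j are selected
y = {p: pulp.LpVariable(f"y_{p[0]}_{p[1]}", cat="Binary") for p in overlap}

model = pulp.LpProblem("portfolio", pulp.LpMaximize)
model += (pulp.lpSum(effects[i] * x[i] for i in costs)
          - pulp.lpSum(overlap[p] * y[p] for p in overlap))
model += pulp.lpSum(costs[i] * x[i] for i in costs) <= budget
for i, j in overlap:
    model += y[(i, j)] >= x[i] + x[j] - 1  # linearizes the product x[i]*x[j]

model.solve(pulp.PULP_CBC_CMD(msg=False))
print([i for i in costs if x[i].value() > 0.5], pulp.value(model.objective))
```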

Third, evidence about the effectiveness of interventions comes predominantly from empirical studies of only a few states or localities. Studies such as Preusser et al., 2008, have developed methods to express these empirically determined effects on injuries and deaths generically so that they can be combined with state-specific characteristics to estimate the effects for a given state. This is an important step because, when decisionmakers consider implementing a new intervention, they typically have no empirical data on its efficacy in their state and therefore must rely on data from other states. On the cost side, however, there has been no comparable systematic method for extrapolating empirical data to draw conclusions for other states. For this study, we first designed a structure consisting of ten cost components and 62 cost subcomponents. We then broke these costs into unit costs, such as the cost of advertising per 1,000 viewers or the annual lease fee for a red-light camera. Because only some cost subcomponents are relevant to any given intervention, we developed a table showing which subcomponents pertain to each. Further, we adjusted these costs for every state by accounting for its specific characteristics, such as demographics and crash deaths, which determine (for example) how many viewers need to see a publicity campaign and how many red-light cameras are needed. These costs in component and subcomponent form are useful not only for informing the decision of which interventions to select but also for planning, for each intervention, which costs will be incurred during implementation.
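
As an illustration of how such unit costs might be scaled to a particular state, consider the following sketch; the component names, unit costs, and state figures are hypothetical placeholders, not values from the tool:

```python
# Hypothetical sketch of scaling unit costs by state characteristics.
unit_costs = {
    "advertising_per_1000_viewers": 25.0,       # dollars, illustrative
    "red_light_camera_annual_lease": 60_000.0,  # dollars, illustrative
}
state = {
    "target_viewers": 2_500_000,  # people the publicity campaign must reach
    "cameras_needed": 40,         # driven by intersections and crash history
}

cost = (unit_costs["advertising_per_1000_viewers"]
        * state["target_viewers"] / 1000
        + unit_costs["red_light_camera_annual_lease"]
        * state["cameras_needed"])
print(f"${cost:,.0f}")  # state-specific implementation cost
```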

Finally, this web tool provides an end-to-end method to track and display the effectiveness and costs of interventions. It serves as a library of relevant information, such as where interventions have been implemented and what the experiences have been. The relevant data are stored as input parameters to the model, so any update to the inputs is immediately reflected in the estimated effectiveness and costs of implementing the various interventions. As noted in the “Future Refinements of the Tool” section, the tool owner can update the tool periodically, and the tool user can update it in real time. Consequently, when using this tool to rank and select new interventions, a state planner can draw on the most-recent data available and see the expected effectiveness and cost of the interventions under consideration in near-real time.

Potential Effects of the Tool in Its Current Version

The tool was developed to help states understand the trade-offs and prioritize the most cost-effective interventions to reduce motor vehicle–related injuries and deaths. States can use the tool to do the following:

  • Determine and compare the costs and effects of individual interventions, without considering their interdependencies, as in a conventional cost-effectiveness analysis.
  • Determine the optimal portfolio (i.e., the combination) of interventions that would generate the largest total effectiveness for a given budget, accounting for interdependencies.
  • Determine the total effectiveness in terms of reductions in injuries and deaths for the optimal portfolio of interventions selected.
  • Determine the total effectiveness and costs both with and without the collection of fines and fees.
  • Determine the cost structure of the optimal portfolio.
  • Perform sensitivity analysis by changing the input values, such as the costs of various components, the value of a life saved or an injury prevented, and the estimated reductions in injuries and deaths (a sketch of such a sweep follows this list).
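
A one-at-a-time sweep is the simplest form such a sensitivity analysis can take; in this sketch, the evaluate() function and every figure are hypothetical stand-ins for the tool's internal calculation:

```python
# Hypothetical one-at-a-time sensitivity sweep: vary a single input (here,
# the value of a life saved) and observe how the net benefit responds.
def evaluate(deaths_averted: float, value_per_life: float,
             implementation_cost: float) -> float:
    """Net monetized benefit under one set of assumptions."""
    return deaths_averted * value_per_life - implementation_cost

for value_per_life in (6e6, 9e6, 12e6):  # hypothetical range, dollars
    net = evaluate(deaths_averted=12.0, value_per_life=value_per_life,
                   implementation_cost=400_000)
    print(f"value per life ${value_per_life:,.0f} -> net benefit ${net:,.0f}")
```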

This tool contributes to the national effort to reduce motor vehicle–related injuries and deaths. It provides state decisionmakers with the information needed to prioritize and select the most cost-effective interventions for their state.

Limitations

Building state-specific cost-effectiveness estimates for 12 interventions was an ambitious undertaking, and developing the cost and effectiveness estimates posed many challenges. In particular, many assumptions are needed to generate these estimates. For example, the effectiveness estimates from the literature are typically associated with a particular jurisdiction and reflect the effect of the intervention as implemented there. We have tried to reflect that implementation in our calculations of costs, but the literature does not always provide sufficient detail to do so, so we made many assumptions to build each estimate. Moreover, the cost-effectiveness estimates based on these assumptions reflect the level and characteristics (e.g., whether there was a publicity campaign) of the successful implementation. If an intervention is not implemented at the same level as assumed in the tool (e.g., with less publicity for a seat belt enforcement campaign), the costs and effects reported by the tool will not match actual experience well.

As another example, the existing studies do not always report the intervention’s effect on the outcomes of most interest. In fact, we have an estimate of the effect on injuries for only one intervention, so we assume that the reductions in injuries for other interventions are proportional to the reductions in deaths.

There are also limitations associated with the data that are used in the analysis. For example, we could not identify a data set that provides comprehensive information on motor vehicle–related injuries. The available data sources that provide information on injuries describe only a sample of crashes. We therefore had to make a set of assumptions to translate the available data into the information needed for the tool, which included an assumption that the proportion of injuries reduced was equivalent to the proportion of deaths reduced, in the absence of injury-specific information.
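
The proportionality assumption can be stated in a few lines; the baseline counts below are hypothetical, and the 17-percent figure echoes the red-light camera example cited later in this chapter:

```python
# Sketch of the proportionality assumption: absent injury-specific evidence,
# apply the estimated percentage reduction in deaths to injuries as well.
death_reduction   = 0.17    # estimated from studies of deaths
baseline_deaths   = 100     # hypothetical state baseline
baseline_injuries = 8_000   # hypothetical state baseline

deaths_averted   = baseline_deaths * death_reduction    # 17.0
injuries_averted = baseline_injuries * death_reduction  # 1,360 (assumed)
print(deaths_averted, injuries_averted)
```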

In many cases, the literature does not provide as much information as would be ideal, and there is certainly room for reasonable disagreement about the assumptions we have made. We have tried to mitigate this problem in several ways. First, we have worked to find the best available data and evidence on which to build the assumptions. Second, we have been transparent, describing our assumptions and calculations in detail so that readers can assess the assumptions themselves. Finally, those who disagree with the assumptions can conduct sensitivity analyses with the tool by adjusting many of the model parameters and use those analyses to inform their selection of the most cost-effective interventions.

The estimates provided by the tool are approximations, meant to give decisionmakers a sense of the relative costs and effects of the interventions under consideration. There may be other costs and benefits that the tool does not capture but that should be considered (e.g., improved employment or quality of life among people who are deterred from driving while drunk, effects on civil liberties), as well as political issues that make some interventions more feasible than others. In essence, the estimates are designed to be one category of information in a decisionmaking process about which interventions to implement.

Despite the necessary reliance on assumptions to build the model, we believe that the tool will be of great use to state decisionmakers. Although information about which interventions are effective has been generally available, this is the first effort to estimate the implementation costs across a broad array of interventions and to translate these costs to the state level according to a specific state’s demographics and traffic crash profile. States need information on both the potential costs and effects of interventions to make informed resource allocation decisions.

Future Refinements of the Tool

This tool could be refined in a variety of ways. First, the tool currently reports top-level results based on the default assumptions, and the sensitivity analysis allows changes to be made to only a limited number of model parameters. In the future, one might want to expand the sensitivity analysis options to include the ten implementation cost components and potentially even the 38 cost subcomponents so that a user can see how changes at a more granular level would affect costs. Similarly, we could expand the sensitivity analysis to allow changes to the default assumptions about an intervention. For example, for in-person license renewal, we could allow the user to change the age threshold (e.g., from 70 to 75), the required frequency (e.g., from four to six years), or both.
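
One hypothetical way to expose such assumptions to the user is a small settings object per intervention; the field names, defaults, and driver counts below are illustrative, not the tool's actual schema:

```python
# Hypothetical user-adjustable assumptions for in-person license renewal.
from dataclasses import dataclass

@dataclass
class LicenseRenewalPolicy:
    age_threshold: int = 70          # in-person renewal required from this age
    renewal_interval_years: int = 4

    def annual_renewals(self, drivers_by_age: dict[int, int]) -> float:
        """Annual in-person renewals implied by these settings."""
        affected = sum(n for age, n in drivers_by_age.items()
                       if age >= self.age_threshold)
        return affected / self.renewal_interval_years

ages = {68: 50_000, 72: 40_000, 77: 30_000}              # hypothetical counts
print(LicenseRenewalPolicy().annual_renewals(ages))      # 17,500.0
print(LicenseRenewalPolicy(75, 6).annual_renewals(ages)) # 5,000.0
```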

A second potential refinement is to incorporate new estimates of reductions in injuries and deaths. Currently, with one exception, all of the estimated reductions in injuries are assumed to equal the reductions in deaths because we were unable to locate studies that directly measured the reduction in injuries attributable to a particular intervention. If, for example, we learned of a study that estimated injury reductions of 25 percent at red-light camera intersections, we would incorporate that figure into the calculations rather than the current 17 percent, which is based on reductions in deaths. In this way, we could refine the tool with more-detailed estimates of effectiveness. Tool users would benefit from having such updates programmed into the tool rather than waiting for publications or searching out such information themselves.

Finally, it would also be useful if the expected cost and effectiveness of an intervention could be estimated for multiple levels of partial implementation in addition to the full implementation that is currently included. Because a cost-effectiveness analysis may find that less-than-full implementation of some interventions yields more benefit per dollar, a model that allows scaled-down implementation of interventions (e.g., fewer speed cameras, fewer saturation patrols) may prove more useful because it would better reflect the implementation choices available to a state. Our methodology facilitates estimating effectiveness at different funding levels because our cost estimates are built from unit costs, such as annual lease payments for red-light cameras; different funding levels simply imply different numbers of leased cameras. A city could install cameras only at the intersections with the most red-light–running incidents or at the majority of intersections; these two courses of action would require different levels of funding, and the tool could estimate the corresponding level of effectiveness for each.
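
One hypothetical way to model partial implementation is to replace the binary selection variable with a continuous funding fraction, as in this sketch; the data are placeholders, and effectiveness is assumed to scale linearly with funding, where a real refinement might instead use piecewise diminishing returns:

```python
# Hypothetical partial-funding sketch: each intervention may be funded at any
# fraction between 0 and 1, with effectiveness scaling linearly.
import pulp

costs   = {"belt": 400_000, "camera": 900_000, "checkpoint": 600_000}
effects = {"belt": 12.0, "camera": 18.0, "checkpoint": 10.0}
budget  = 1_200_000

f = {i: pulp.LpVariable(f"f_{i}", lowBound=0, upBound=1) for i in costs}

model = pulp.LpProblem("partial_portfolio", pulp.LpMaximize)
model += pulp.lpSum(effects[i] * f[i] for i in costs)
model += pulp.lpSum(costs[i] * f[i] for i in costs) <= budget

model.solve(pulp.PULP_CBC_CMD(msg=False))
for i in costs:
    print(i, f"{f[i].value():.0%} funded")  # e.g., lease fewer cameras
```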
