
Interpreting the Incidence Data

Change to the 2000 U.S. Standard Population

The U.S. Department of Health and Human Services’ policy for reporting death and disease rates was motivated by a need to standardize age-adjustment procedures across government agencies.1,2 The change to the 2000 U.S. standard population updated the calculation of age-adjusted rates to reflect more closely the current age distribution of the U.S. population. Because of the aging of the U.S. population, the 2000 U.S. standard population gives more weight to older age categories than the 1940 and 1970 standard populations did.2
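
To make the weighting concrete, the sketch below applies direct age adjustment: each age group’s rate per 100,000 is multiplied by that group’s share of a standard population, and the products are summed. The three broad age groups, case counts, populations, and weights are hypothetical values chosen only to show the arithmetic; they are not the actual 19-group 2000 U.S. standard million.

```python
# Minimal sketch of direct age adjustment with hypothetical numbers.
# Age-adjusted rate = sum over age groups of
#   (age-specific rate per 100,000) * (standard-population weight).

# Hypothetical data: (cases, population at risk) per broad age group.
observed = {
    "0-39":  (200,   2_000_000),
    "40-64": (900,   1_500_000),
    "65+":   (1_400,   500_000),
}

# Hypothetical standard-population weights (each set sums to 1).
# A newer standard, like the 2000 U.S. standard, puts more weight on
# older age groups than the 1940 or 1970 standards did.
weights_2000_like = {"0-39": 0.55, "40-64": 0.30, "65+": 0.15}
weights_1940_like = {"0-39": 0.70, "40-64": 0.23, "65+": 0.07}

def age_adjusted_rate(observed, weights):
    """Weighted sum of age-specific rates per 100,000 population."""
    return sum(
        (cases / pop) * 100_000 * weights[group]
        for group, (cases, pop) in observed.items()
    )

print(f"1940-like standard: {age_adjusted_rate(observed, weights_1940_like):.1f} per 100,000")
print(f"2000-like standard: {age_adjusted_rate(observed, weights_2000_like):.1f} per 100,000")
# Because cancer incidence rises with age, the 2000-like weights yield
# the higher adjusted rate (65.5 vs. 40.4) from the same underlying data.
```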

Because the incidence of cancer increases with age, the change to the 2000 U.S. standard population resulted in higher incidence rates for most cancers. The data on this Web site should not be compared with cancer incidence rates adjusted to different standard populations.

Incidence rates also are influenced by the choice of population denominators used in calculating these rates. Because some state health departments use customized projections of the state’s population when calculating incidence rates, the rates on this Web site may differ slightly from those published by individual states.
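
A minimal illustration of this denominator sensitivity, using invented numbers: the same case count divided by two slightly different population estimates yields slightly different rates.

```python
# Hypothetical illustration: same case count, two population
# estimates for the same state.
cases = 4_200
census_estimate = 1_000_000    # e.g., a federal population estimate
state_projection = 985_000     # e.g., a state's customized projection

rate_census = cases / census_estimate * 100_000   # 420.0 per 100,000
rate_state  = cases / state_projection * 100_000  # ~426.4 per 100,000
print(f"{rate_census:.1f} vs {rate_state:.1f} per 100,000")
```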

Statistical Bias

Statistical bias can arise if, within a region, division, or the country as a whole, the sub-area for which data are available has rates that differ substantially from the rates in the sub-area for which data are not available. Because of this bias, rates for a U.S. Census region, U.S. Census division, or the country may not meet statistical criteria for inclusion. Some statistical bias is possible even when the percentage of coverage is high and large numbers of cases are recorded. Where coverage is less than 100%, merely increasing the percentage of the population covered may not reduce statistical bias unless the covered and uncovered populations have similar cancer rates.
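
A worked example may help. In the hypothetical numbers below, the covered sub-area has a lower cancer rate than the uncovered sub-area, so the observed rate understates the true regional rate even though coverage is 90% and tens of thousands of cases are recorded.

```python
# Hypothetical sketch: coverage can be high and still biased if the
# covered and uncovered populations have different cancer rates.
covered_pop,   covered_cases   = 9_000_000, 36_000   # 400 per 100,000
uncovered_pop, uncovered_cases = 1_000_000,  6_000   # 600 per 100,000

observed_rate = covered_cases / covered_pop * 100_000
true_rate = (covered_cases + uncovered_cases) / (covered_pop + uncovered_pop) * 100_000

coverage = covered_pop / (covered_pop + uncovered_pop)
print(f"coverage: {coverage:.0%}")                   # 90% of the population
print(f"observed: {observed_rate:.0f} per 100,000")  # 400
print(f"true:     {true_rate:.0f} per 100,000")      # 420
# Despite 90% coverage and 36,000 recorded cases, the observed rate is
# biased low because the uncovered sub-area has a higher rate.
```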

Registries’ Data Quality

Data quality is evaluated routinely by CDC’s National Program of Cancer Registries (NPCR) and the National Cancer Institute’s Surveillance, Epidemiology, and End Results (SEER) Program.3–5 Some evaluation activities are conducted intermittently to find missing cases or to identify errors in the data. Although the cancer registries meet data quality criteria for all invasive sites combined, the completeness and quality of site-specific data may vary. The observed rates may have been influenced by differences in the timeliness, completeness, and accuracy of the data from one registry to another, from one reporting period to another, or from one primary cancer site to another.

Reporting Time Intervals

Completeness and accuracy of the site-specific data also may be affected by the time interval allowed for reporting data to the two federal programs. The NPCR and SEER reporting intervals differed by about 30 days: NPCR allowed 23 months after the close of the diagnosis year (data submission by November 30, 2013), and SEER allowed 22 months after the close of the diagnosis year (data submission by November 1, 2013).

Reporting Delays

Delays in reporting cancer cases can affect the timely and accurate calculation of cancer incidence rates.6 Cases are reported continuously to state and metropolitan-area cancer registries in accordance with statutory and contractual requirements. After the initial submission of the most recent year’s data to the federal funding agency, cancer registries revise and update their data on the basis of new information received. Therefore, some cancer cases likely will have been reported to state and metropolitan-area cancer registries after the registries submitted their data to CDC or NCI. For this reason, incidence rates and case counts reported directly by state or metropolitan-area cancer registries may differ from those that appear on this Web site. Reporting delays appear to be more common for cancers that usually are diagnosed and treated in non-hospital settings, such as physicians’ offices (for example, early-stage prostate and breast cancer, melanoma of the skin). Methods to adjust incidence rates for reporting delay were not applied to the data in this report.6
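
Although no delay adjustment was applied to the data in this report, the general idea behind delay-adjustment methods such as those described in reference 6 can be sketched: estimate, from historical submissions, what proportion of a diagnosis year’s cases has typically been reported after a given delay, then divide the observed count by that proportion. The completeness proportions below are assumed values for illustration only.

```python
# Hypothetical sketch of delay adjustment (not the method used in this report).
# Suppose historical data suggest these proportions of a diagnosis year's
# cases have been reported after 1, 2, and 3 annual submissions.
estimated_completeness = {1: 0.94, 2: 0.98, 3: 0.995}  # assumed values

def delay_adjusted_count(observed_cases: int, submissions_elapsed: int) -> float:
    """Inflate the observed count by the estimated proportion reported so far."""
    return observed_cases / estimated_completeness[submissions_elapsed]

# First submission for a diagnosis year: 10,000 cases observed.
print(f"{delay_adjusted_count(10_000, 1):,.0f} cases expected once reporting is complete")
# ~10,638 -- later submissions would revise the observed count upward.
```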

Continual Data Updates

Each year, state cancer registries submit data for a new diagnosis year to CDC or NCI, along with an updated version of previous years’ data. Federal agencies in turn update their cancer incidence statistics with each data submission and document the states’ date of data submission whenever the data are published. These continual updates by state and federal agencies illustrate the dynamic nature of cancer surveillance and the attention to detail that is characteristic of cancer registries. Accordingly, each annual release of United States Cancer Statistics includes updates to previous years’ data. Users of cancer incidence data published by federal agencies should be mindful of the data submission dates for all data used in their analyses.

Geographic Variation

Geographic variation in cancer incidence rates may result from regional differences in the exposure of the population to known or unknown risk factors.7–10 Such variation may reflect differences in the sociodemographic characteristics of the population (age, race and ethnicity, geographic region, urban or rural residence), screening use, health-related behaviors (for example, tobacco use, diet, physical activity), exposure to cancer-causing agents, or factors associated with the registries’ operations (completeness, timeliness, specificity in coding cancer sites). Cancer researchers are investigating variability associated with known factors that affect cancer rates and risks by using model-based statistical techniques and other approaches for surveillance research. Differences in registry operations are being evaluated to ensure consistency and quality in reporting data.

References

  1. Anderson RN, Rosenberg HM. Report of the Second Workshop on Age Adjustment. Vital and Health Statistics, Series 4. 1998;(30):I–VI, 1–37.
  2. Anderson RN, Rosenberg HM. Age standardization of death rates: implementation of the year 2000 standard. National Vital Statistics Reports 1998;47(3):1–16, 20.
  3. Fritz A. The SEER Program’s commitment to data quality. Journal of Registry Management 2001;28(1):35–40.
  4. Hutton MD, Simpson LD, Miller DS, Weir HK, McDavid K, Hall HI. Progress toward nationwide cancer surveillance: an evaluation of the National Program of Cancer Registries, 1994–1999. Journal of Registry Management 2001;28(3):113–120.
  5. Thoburn KK, German RR, Lewis M, Nichols PJ, Ahmed F, Jackson-Thompson J. Case completeness and data accuracy in the Centers for Disease Control and Prevention’s National Program of Cancer Registries. Cancer 2007;109(8):1607–1616.
  6. Clegg LX, Feuer EJ, Midthune DN, Fay MP, Hankey BF. Impact of reporting delay and reporting error on cancer incidence rates and trends. Journal of the National Cancer Institute 2002;94(20):1537–1545.
  7. Centers for Disease Control and Prevention. Behavioral Risk Factor Surveillance System Operational and User’s Guide. Version 3.0. Atlanta (GA): Centers for Disease Control and Prevention; 2005.
  8. Devesa SS, Grauman DJ, Blot WJ, Pennello GA, Hoover RN. Atlas of Cancer Mortality in the United States, 1950–1994. Bethesda (MD): National Cancer Institute; 1999.
  9. Howe HL, Keller JE, Lehnherr M. Relation between population density and cancer incidence, Illinois, 1986–1990. American Journal of Epidemiology 1993;138(1):29–36.
  10. Wingo PA, Jamison PM, Hiatt RA, Weir HK, Gargiullo PM, Hutton M, Lee NC, Hall HI. Building the infrastructure for nationwide cancer surveillance and control—a comparison between the National Program of Cancer Registries (NPCR) and the Surveillance, Epidemiology, and End Results (SEER) Program (United States). Cancer Causes and Control 2003;14(2):175–193.