NAMCS and NHAMCS Estimation Procedures
Visit the National Ambulatory Medical Care Survey website at https://www.cdc.gov/nchs/namcs/ and the National Hospital Ambulatory Medical Care Survey at https://www.cdc.gov/nchs/nhamcs/ for more information about these surveys.
NAMCS Estimation Procedures
Statistics were derived by a multistage estimation procedure. The procedure produces essentially unbiased national estimates and has four basic components: 1) inflation by reciprocals of the probabilities of selection, 2) adjustment for nonresponse, 3) a ratio adjustment to fixed totals, and 4) weight smoothing. Each of these components is described briefly below.
1. Inflation by Reciprocals of Sampling Probabilities
Since the survey utilized a three-stage sample design, there were three probabilities:
- the probability of selecting the PSU;
- the probability of selecting a physician within the PSU; and
- the probability of selecting a patient visit within the physician’s practice.
The last probability was defined to be the number of Patient Record forms completed divided by the exact number of office visits during the physician’s specified reporting week. All weekly estimates were inflated by a factor of 52 to derive annual estimates.
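To make the arithmetic concrete, the Python sketch below computes a basic inflation weight for one sampled visit under hypothetical selection probabilities; the function name and the numbers are illustrative only, not part of the NAMCS design.

```python
# A minimal sketch, assuming hypothetical selection probabilities for one sampled
# visit; the function name and numbers are illustrative, not NCHS specifications.

def basic_visit_weight(p_psu, p_physician, p_visit, weeks_per_year=52):
    """Inverse of the overall selection probability, annualized by a factor of 52.

    p_psu       -- probability of selecting the PSU
    p_physician -- probability of selecting the physician within the PSU
    p_visit     -- probability of selecting the visit within the practice
                   (Patient Record forms completed / visits in the reporting week)
    """
    overall_probability = p_psu * p_physician * p_visit
    return weeks_per_year / overall_probability

# Example: PSU sampled with probability 0.05, physician with 0.10,
# and 10 of the week's 200 office visits sampled (p_visit = 0.05).
print(round(basic_visit_weight(0.05, 0.10, 10 / 200), 1))  # 208000.0
```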
2. Adjustment for Nonresponse
Estimates from NAMCS data were adjusted to account for in-scope physicians who did not provide Patient Record forms (non-PRF physicians), either because they saw no patients during their sample week or because they failed to provide PRFs for visits by patients they did see during that week.
Beginning with 2004 data, changes were made to the nonresponse adjustment factor to account for the seasonality of the reporting period. Extra weights for nonresponding physicians were shifted to responding physicians in reporting periods within the same quarter of the year. The shift in nonresponse adjustment did not significantly affect any of the overall annual estimates.
Beginning with 2003 data, the adjustment for non-PRF physicians differs from the adjustment used in prior years. Previously, the adjustment accounted for nonresponse by physician specialty, geographic region, and metropolitan statistical area status. The revised nonresponse adjustment also accounts for nonresponse by practice size, as measured by the number of weekly visits, and for variability in the number of weeks that participating physicians saw patients during the year.
Previously, these characteristics were assumed to be the same for physicians providing patient encounter information and those not providing it. However, research done for the first time with 2003 data showed that these assumptions do not always hold. In general, the weekly visit volume for non-PRF physicians was larger than that for PRF physicians. Also, physicians who saw no patients during their sample week tended to see patients during fewer weeks of the year than did physicians who saw patients during their sample week. To minimize understatement (and in some cases, overstatement) of visits, the nonresponse adjustment factor was revised to include information on the number of weeks physicians actually practiced during a typical year and the number of visits physicians reported during a week. Both data items have been collected for responding and nonresponding physicians during the induction interview since the 2001 survey.
The 2003 weight with the revised nonresponse adjustment increased the overall visit estimate by 12 percent over the same estimate obtained using the original weight. For this reason, 2003 (and 2004) visit estimates are not comparable to visit estimates computed using the previous weighting strategy.
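The precise form of the revised factor is not reproduced in this documentation; the Python sketch below is only a hedged illustration of the idea, using hypothetical induction-interview data to inflate respondent weights within one adjustment cell (for example, physicians of the same specialty, region, MSA status, and practice size).

```python
# A hedged sketch only: the actual NCHS adjustment formula is not reproduced here.
# Each physician record carries the weekly visit count and weeks practiced collected
# at the induction interview, available for respondents and nonrespondents alike.

def nonresponse_factor(all_physicians, responding_physicians):
    """Ratio of expected annual visits for all in-scope physicians in the cell
    to expected annual visits for the physicians who provided PRFs."""
    expected = lambda p: p["weekly_visits"] * p["weeks_practiced"]
    return sum(expected(p) for p in all_physicians) / sum(
        expected(p) for p in responding_physicians
    )

cell = [
    {"weekly_visits": 80, "weeks_practiced": 48},   # PRF physician
    {"weekly_visits": 120, "weeks_practiced": 50},  # PRF physician
    {"weekly_visits": 150, "weeks_practiced": 46},  # non-PRF physician
]
print(round(nonresponse_factor(cell, cell[:2]), 3))  # 1.701; inflates respondent visit weights
```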
3. Ratio Adjustment
A ratio adjustment to fixed totals was made within each of the 15 physician specialty groups. The ratio adjustment is a multiplication factor that has as its numerator the number of physicians in the universe in each physician specialty group and as its denominator the estimated number of physicians in that specialty group. The numerator was based on figures obtained from the American Medical Association (AMA) and American Osteopathic Association (AOA) master files, and the denominator was based on data from the sample.
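For concreteness, the short Python fragment below shows the structure of this adjustment for a single specialty group; the counts and the starting weight are made up, not NCHS figures.

```python
# Illustrative counts only; the real numerator comes from the AMA/AOA master files
# and the real denominator is the sample-based estimate for the same specialty group.
universe_physicians = 30_000    # physicians in the specialty group (frame total)
estimated_physicians = 28_500   # sample-based estimate for the same group

ratio_adjustment = universe_physicians / estimated_physicians
adjusted_weight = 1_250.0 * ratio_adjustment   # applied to each visit weight in the group
print(round(ratio_adjustment, 3), round(adjusted_weight, 1))  # 1.053 1315.8
```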
4. Weight Smoothing
Each year there are a few sample physicians whose final visit weights are large relative to those of the rest of the sample. The concern is that these few physicians may adversely affect the ability of the resulting statistics to reflect the universe, especially if the sampled patient visits to some of them are unusual relative to the universe. Extreme final weights also increase the resulting variances. Extreme weights can be truncated, but truncation alone understates the total visit count. The technique of weight smoothing is used instead because it preserves the total estimated visit count within each specialty by shifting the “excess” from visits with the largest weights to visits with smaller weights.
Excessively large visit weights were truncated, and a ratio adjustment was performed. The ratio adjustment is a multiplication factor that uses as its numerator the total visit count in each physician specialty group before the largest weights are truncated, and, as its denominator, the total visit count in the same specialty group after the largest weights are truncated. The ratio adjustment was made within each of the 15 physician specialty groups and yields the same estimated total visit count as the unsmoothed weights.
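The Python sketch below illustrates the smoothing idea for one specialty group, assuming a simple fixed truncation cap; the cap NCHS actually applies is not specified in this documentation.

```python
# A sketch, not the NCHS implementation: truncate extreme weights, then rescale
# so the specialty group's total estimated visit count is unchanged.

def smooth_weights(weights, cap):
    total_before = sum(weights)
    truncated = [min(w, cap) for w in weights]
    factor = total_before / sum(truncated)   # ratio adjustment restores the total
    return [w * factor for w in truncated]

weights = [500.0, 800.0, 900.0, 12_000.0]    # one weight is extreme
smoothed = smooth_weights(weights, cap=5_000.0)
print(sum(weights), round(sum(smoothed), 1))  # totals match: 14200.0 14200.0
```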
NHAMCS Estimation Procedures
Statistics from the NHAMCS are derived by a multistage estimation procedure that produces essentially unbiased estimates. The estimation procedure has three basic components: 1) inflation by reciprocals of the sampling selection probabilities; 2) adjustment for nonresponse; and 3) a population weighting ratio adjustment. Beginning with 1997 data, the population weighting ratio adjustment for OPD estimates was replaced by an adjustment that controls for effects of rotating hospital sample panels into and out of the sample each year. (The full NHAMCS hospital sample is partitioned into 16 panels, which are rotated into the sample over 16 periods of 4 weeks each, so that only 13 panels are used in each year.)
1. Inflation by reciprocals of selection probabilities
There is one probability for each sampling stage: a) the probability of selecting the PSU; b) the probability of selecting the hospital; c) the probability of selecting the emergency service area (ESA) or OPD clinic from within the hospital; and d) the probability of selecting the visit within the ESA or clinic. The last probability is calculated to be the sample size from the ESA or clinic divided by the total number of visits occurring in that unit during that unit’s data collection period. The overall probability of selection is the product of the probabilities at each stage. The inverse of the overall selection probability is the basic inflation weight. Beginning with the 1997 data, the overall selection probabilities of some OPDs were permanently trimmed to prevent individual OPDs from contributing too much of their region’s total to OPD visit estimates.
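As a worked illustration, the Python fragment below forms a basic inflation weight from four hypothetical stage probabilities; the values are invented, and the OPD weight trimming mentioned above is omitted.

```python
# A minimal sketch with made-up probabilities for the four sampling stages.

def nhamcs_basic_weight(p_psu, p_hospital, p_unit, visits_sampled, visits_total):
    """Inverse of the product of the stage selection probabilities.

    The within-unit probability is the sample size from the ESA or OPD clinic
    divided by all visits in that unit during its data collection period.
    """
    p_visit = visits_sampled / visits_total
    return 1.0 / (p_psu * p_hospital * p_unit * p_visit)

print(round(nhamcs_basic_weight(0.08, 0.25, 0.5, 100, 2_000), 1))  # 2000.0
```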
2. Adjustment for nonresponse
NHAMCS data are adjusted to account for two types of nonresponse. The first type occurred when a sample hospital refused to provide information about its ED(s) and/or OPD(s), which were publicly known to exist. In this case, the weights of visits to hospitals similar to the nonrespondent hospitals were inflated to account for the visits represented by the nonrespondent hospitals. Hospitals were judged to be similar if they were in the same region, were in the same ownership control group (government, non-Federal; voluntary nonprofit; or proprietary), and had the same metropolitan statistical area (MSA) status (that is, whether or not they were located in an MSA). This adjustment was made separately by department type.
The second type of nonresponse occurred when a sample ESA or OPD clinic within a “respondent” hospital failed to provide completed Patient Record forms for a sample of its patient visits. In this case, the weights of visits to ESAs/OPD clinics similar to the nonrespondent ESAs/OPD clinics were inflated to account for the visits represented by the nonrespondent units. ESAs/OPD clinics were judged to be similar if they were in the same region, ownership control group, MSA status group, and ESA/OPD clinic group. For this purpose, there were six OPD clinic groups: general medicine, pediatrics, surgery, OB/GYN, alcohol and/or substance abuse, and “other.”
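The Python sketch below illustrates the grouping logic with hypothetical hospitals; it computes a simple count-based factor per cell, whereas the actual adjustment inflates visit weights, so the code is a simplified stand-in rather than the NHAMCS procedure itself.

```python
# Hypothetical data; cells are defined by region, ownership control, and MSA status.
from collections import defaultdict

def cell_key(hospital):
    return (hospital["region"], hospital["ownership"], hospital["msa"])

def nonresponse_factors(hospitals):
    """Count-based factor per cell: in-scope hospitals / responding hospitals."""
    totals, respondents = defaultdict(int), defaultdict(int)
    for h in hospitals:
        totals[cell_key(h)] += 1
        if h["responded"]:
            respondents[cell_key(h)] += 1
    return {cell: totals[cell] / respondents[cell] for cell in respondents}

sample = [
    {"region": "South", "ownership": "nonprofit", "msa": True, "responded": True},
    {"region": "South", "ownership": "nonprofit", "msa": True, "responded": True},
    {"region": "South", "ownership": "nonprofit", "msa": True, "responded": False},
]
print(nonresponse_factors(sample))  # {('South', 'nonprofit', True): 1.5}
```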
Beginning with 2004 data, changes were made to the nonresponse adjustment factor to account for the seasonality of the reporting period. Extra weights for nonresponding hospital outpatient departments and emergency departments were shifted to responding outpatient and emergency departments in reporting periods within the same quarter of the year. The shift in nonresponse adjustment did not significantly affect any of the overall annual estimates.
3. Ratio adjustments
Adjustments were made within hospital strata defined by region. Within the South and West, the adjustment strata for EDs were further defined by hospital ownership control group. These adjustments were made separately for emergency and outpatient departments. For EDs, the adjustment was a multiplicative factor that had as its numerator the sum of annual ED visit volumes reported for sampling frame hospitals in the stratum and as its denominator the estimated number of those visits for that stratum. Through the 1996 NHAMCS, the adjustment for visits to OPDs was a multiplicative factor that had as its numerator the number of OPDs reported in sampling frame hospitals in the stratum and as its denominator the estimated number of those OPDs for that stratum. The data for the numerator and denominator of both adjustments were based on figures recorded for the data year in the SMG Hospital Market Data Base.
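For a single stratum, the ED adjustment has the form sketched below; the visit volumes are invented, and in practice the frame total would come from the SMG Hospital Market Data Base.

```python
# Made-up stratum totals; the factor multiplies every ED visit weight in the stratum.
frame_ed_visits = 4_200_000       # annual ED visit volume for frame hospitals in the stratum
estimated_ed_visits = 3_900_000   # weighted ED visit estimate for the same stratum

ed_ratio_adjustment = frame_ed_visits / estimated_ed_visits
print(round(ed_ratio_adjustment, 3))  # 1.077
```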
Beginning with the 1997 NHAMCS, the adjustment for OPD estimates was replaced by a ratio that had as its numerator the weighted OPD visit volumes of hospitals in the full NHAMCS sample (16 hospital panels) and as its denominator the weighted OPD visit volumes of hospitals in the 13 hospital panels included in that year’s sample. This adjustment used visit volumes that were based on the most recent survey data collected from hospitals that had participated in NHAMCS for at least one year. For hospitals that had never participated, visit volumes were obtained by phone, from SMG data, or by using the average of visit volumes for refusal hospitals which had converted to respondent status in the 1998 survey.
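The panel-rotation ratio for OPD estimates can be sketched the same way; the weighted volumes below are hypothetical.

```python
# Hypothetical weighted OPD visit volumes for one adjustment stratum.
full_sample_volume = 9_600_000    # all 16 hospital panels
in_year_volume = 7_800_000        # the 13 panels fielded in the survey year

panel_adjustment = full_sample_volume / in_year_volume
print(round(panel_adjustment, 3))  # 1.231
```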