SESSION V. POTENTIAL FOR POPULATION-BASED SCREENING
Speakers discussed the potential role of population-based newborn screening in enhancing early identification, using severe combined immunodeficiency (SCID) as an example.
Human Severe Combined Immunodeficiency and Potential for Newborn Screening
Dr. Rebecca Buckley, Duke University
Dr. Buckley proposed a simple screening test for SCID and examined it in terms of the evaluative criteria presented by Dr. Haddow.
Is the disorder well-defined? What is the disorder?
The disorder is well-defined. SCID is a fatal syndrome of diverse genetic origin, characterized by the absence of T- and B-cell (and sometimes natural killer cell) functions.
Is it sufficiently serious in terms of morbidity and mortality to warrant testing?
Yes; it is uniformly fatal in infancy unless corrected by immune reconstitution. SCID is a pediatric emergency that could be routinely diagnosed at birth.
Is the prevalence known in the population tested?
The prevalence is not known. It has to be low, however, because the disorder is uniformly fatal in infancy. Incidence is estimated at 1:100,000 to 1:300,000 live-born infants but is likely higher, because some affected infants die of infection before the disorder is recognized.
What causes the disorder?
It has been known for nearly 3 decades that SCID can be caused by adenosine deaminase deficiency. In the past 8 years, several other molecular defects have been identified, and it has become evident that the genetic origins of this condition are quite diverse. X-linked SCID accounts for about 45 percent of all cases and is caused by defects in the common gamma chain of the interleukin 2 (IL-2) receptor. Mutations in the genes encoding adenosine deaminase (ADA), Janus kinase 3 (Jak3), the alpha chain of the IL-7 receptor, recombinase-activating genes 1 or 2, or the Artemis gene also result in SCID and are inherited in an autosomal recessive pattern.
Despite different genetic causes, infants with SCID exhibit consistent laboratory, pathologic, and clinical features. Characteristic laboratory features of SCID include lymphopenia, hypogammaglobulinemia, and absent lymphocyte proliferation to mitogens, antigens, or allogeneic blood mononuclear cells. Infants with SCID have no tonsils, rarely have detectable peripheral lymph nodes, and have small, poorly differentiated thymus glands. The mean age of diagnosis is 6.5 months, but most patients experience recurrent infections before then as
transplacentally-acquired maternal antibody levels decline. Initial infections may be those frequently encountered in infancy, but infants with SCID typically have illnesses that are unresponsive to conventional therapies. Without recognition and treatment, the syndrome is uniformly fatal, usually in the first year of life.
What is the test being proposed?
The proposed test is an absolute lymphocyte count (ALC). Because lymphopenia is defined differently for infants and children than for adults, the test should be supplemented with enhanced provider education on normal lymphocyte counts in infants.
Is it a screening or diagnostic test? If it is a screening test, what diagnostic tests will follow?
The proposed test will be used for screening. Follow-up tests include a repeat white cell count and manual differential by peripheral heel stick to detect the lymphopenia that is almost invariably present in infants with this disorder. If lymphopenia is confirmed (proposed cutoff = 2,500 lymphocytes/mm³), flow cytometry should be ordered immediately to count T, B, and natural killer (NK) cells.
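The follow-up sequence described above amounts to a simple decision rule. The Python sketch below is only an illustration of that logic, not a clinical protocol; the function name and message wording are hypothetical, and the 2,500 cells/mm³ cutoff is the value proposed in the text.

```python
# Illustrative sketch (not a clinical protocol) of the follow-up logic
# described above. The 2,500 cells/mm^3 cutoff is the value proposed in
# the text; the function and message wording are hypothetical.

LYMPHOPENIA_CUTOFF = 2500  # proposed absolute lymphocyte count cutoff, cells/mm^3

def next_step(absolute_lymphocyte_count: int, lymphopenia_confirmed: bool = False) -> str:
    """Suggest the next step given a newborn's absolute lymphocyte count (ALC)."""
    if absolute_lymphocyte_count >= LYMPHOPENIA_CUTOFF:
        return "normal screen; no further action"
    if not lymphopenia_confirmed:
        return "repeat white cell count and manual differential by peripheral heel stick"
    return "order flow cytometry immediately to count T, B, and NK cells"

if __name__ == "__main__":
    print(next_step(5500))                              # typical newborn count
    print(next_step(2100))                              # low count on initial screen
    print(next_step(2100, lymphopenia_confirmed=True))  # lymphopenia confirmed
```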
For positive diagnostic tests, what therapeutic intervention will follow? What is its efficacy?
The therapy is nonchemoablative, related-donor bone marrow transplantation or gene therapy. Since 1968, physicians have known that all types of SCID can be treated successfully by bone marrow transplantation without pretransplant chemotherapy. Until 2 decades ago, this approach required strict human leukocyte antigen (HLA) identity between donor and recipient to avoid lethal graft-versus-host disease (GVHD). Techniques developed in the early 1980s to deplete donor marrow of T cells made it possible to avoid GVHD without an HLA-identical donor and to omit the prophylactic immunosuppressive drugs that would otherwise suppress immune reconstitution.
If stem cells can be transplanted before development of infections, the intervention’s efficacy is excellent. Dr. Buckley reported on 122 infants with SCID who received hematopoietic stem-cell transplants at Duke University Medical Center from May 19, 1982, through November 8, 2001. Of the 122 infants, 96 were still alive 2 weeks to 19.5 years post transplantation, including all 13 who received HLA-identical marrow and 83 of 109 who were given half-matched T-cell-depleted marrow. Most deaths occurred as a result of infections; no patients died as a result of GVHD.
To demonstrate increased success with transplantation that occurs before infections can develop, Dr. Buckley reported on 21 infants who were given bone marrow transplants in the first 28 days of life and compared them with 70 infants who received transplants after that time. Twenty (95 percent) of the 21 neonates survived. In addition to improved survival, neonatal transplant patients with SCID demonstrated increased lymphocyte proliferation to mitogens, higher numbers of CD3+ and CD45RA+ T cells, and higher thymic output post-transplantation than did those transplanted after 28 days of life. Expanding the analysis to infants transplanted in the first 3.5 months of life still yielded only one death.
These findings suggest that the outcomes of postnatal stem cell transplantation for SCID could be further improved if routine testing for this syndrome were included in newborn screening. An automated white blood cell count and manual differential demonstrating lymphopenia, performed on cord blood or within the first 3 months of life, could detect nearly all infants born with SCID. Pediatricians and other primary care providers should also perform blood counts and manual differentials on all infants presenting with persistent infections.
What is the detection rate? the false-positive rate? What are the odds of being affected, given a positive result?
The detection rate should approach 100 percent. Rare SCID patients with large numbers of transplacentally transferred T cells may have a normal ALC. The false-positive rate will be low. Other cellular immunodeficiencies that could present with lymphopenia at birth include HIV infection, purine nucleoside phosphorylase (PNP) deficiency, 70-kDa Syk-family protein tyrosine kinase (ZAP70) deficiency, major histocompatibility complex (MHC) class I or II deficiency, and DiGeorge syndrome. The odds of being affected given a positive result are >95 percent.
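The relation among detection rate, false-positive rate, and the odds of being affected given a positive result can be made explicit with Bayes' rule. The sketch below is illustrative only: the 1-in-100,000 incidence is the lower bound quoted earlier in this session, while the sensitivity and the candidate false-positive rates are assumptions chosen for illustration, not program data.

```python
# Bayes'-rule sketch relating the detection rate, the false-positive rate, and
# the probability of being affected given a positive result (the positive
# predictive value, PPV). The 1-in-100,000 incidence comes from the text; the
# sensitivity and candidate false-positive rates are illustrative assumptions.

def ppv(sensitivity: float, false_positive_rate: float, prevalence: float) -> float:
    """Positive predictive value via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = false_positive_rate * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

PREVALENCE = 1 / 100_000   # lower bound of the incidence estimate cited earlier
SENSITIVITY = 1.0          # detection rate assumed to approach 100 percent

for fpr in (1e-3, 1e-5, 5e-7):
    print(f"false-positive rate {fpr:.0e}: PPV = {ppv(SENSITIVITY, fpr, PREVALENCE):.1%}")

# A PPV above 95 percent for such a rare disorder requires a very low overall
# false-positive rate, which is why confirmatory testing (repeat differential
# and flow cytometry) follows the initial lymphocyte count.
```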
What are the practical problems in implementation? Are special facilities required?
The test cannot be done on a Guthrie spot; it has to be done on anticoagulated blood within 24 hours. Automated blood counts and manual differentials are relatively expensive (~$41). No special facilities are required; hematology laboratories are widely available.
Population-Based Newborn Screening: Current and Emerging Screening Criteria
Dr. Nancy Green, March of Dimes
Dr. Green addressed considerations for integrating screening for PI diseases into existing newborn screening programs. Newborn screening programs began in the 1960s with the work of Dr. Robert Guthrie, who introduced the collection of blood as a dried spot on filter paper for testing newborns for phenylketonuria (PKU). The success of this approach led to nationwide population-based screening of newborns using blood drops collected by heel stick and absorbed onto filter paper.
As a population-based public health activity, newborn screening programs are housed in state public health agencies and operate under policies determined at the state level. Without a national newborn screening policy, state-based programs have developed and expanded sporadically. The result is wide variation in policies, capacity, and techniques. The array of screening tests performed by each state varies and changes periodically, and the disorders screened for vary from state to state.
Guidelines for newborn screening systems are linked to ethical, legal, and social considerations and are based on the premise that screening should be conducted only when science and technology can serve both the individual and public good. Landmark reports from the National Academy of Sciences (1975), the Institute of Medicine (1994), and the Task Force on Genetic Testing (1997) identified criteria that should be used to justify population-based newborn screening programs. In general, these state that the condition should be detectable within the newborn period; an accessible, rapid, and accurate test should be available to screen for (and diagnose) the condition; safe and effective treatments should be available to and accepted by the medical community; and the interventions should have a significant probability of improving the course of the disorder. These criteria assume benefits only to the child being screened.
Newborn screening criteria, none of which are quantitative, continue to be re-examined. Criteria for newborn screening traditionally follow those suggested by Wilson and Jungner (1968) for population-based screening, which state that (1) the condition sought should be an important health problem; (2) an accepted treatment should exist, and facilities for diagnosis and treatment should be available; (3) a latent or presymptomatic phase for the condition should be recognizable and its natural history should be adequately understood; (4) a suitable screening test should exist; (5) screening costs should be balanced in relation to overall health expenditures; and (6) case-finding should be a continuing process. These criteria have not, however, been rigidly observed in decision-making about newborn screening.
The context in which newborn screening programs were first established has changed in recent years. Tandem mass spectrometry offers the possibility of using one test to detect a large group of genetic conditions. Advances resulting from the Human Genome Project, the growing impact of consumer advocacy, and continued and emerging ethical, legal, and social questions raise new issues about newborn screening. A re-examination of newborn screening criteria in the context of PI diseases must address the frequency of the disorders, clinical validity and utility, cost-effectiveness, beneficiaries, and equity in diagnosis and treatment. Other factors that affect decision-making are availability of medical and scientific data, public health issues (e.g., cost-effectiveness, resource allocation), political and advocacy issues, and entrepreneurial issues (e.g., profit interests for testing). Indirect benefits of newborn screening include timely diagnoses and reductions in anxiety, opportunities for genetic counseling and clinical research, and access to new treatments. Administration of extra tests also has costs, such as increased numbers of false-positive results, increased parental anxiety, and delayed diagnoses for infants with false-negative results.
Dr. Green concluded by noting that decisions about adding or expanding newborn screening tests can fall at different levels: recommended, recommended if resources exist, recommended for pilot screening, or not currently recommended.
Screening for PI Diseases: Potential Laboratory Strategies
Dr. Harvey Levy, Children’s Hospital of Boston
[Dr. Levy could not be present at the meeting. His abstract is provided below.]
Aside from a brief spurt of interest and activity, newborn screening for PI diseases has not been even an afterthought in the newborn screening menu. The limiting questions have been: What difference would newborn identification make? Is there therapy that requires presymptomatic detection? If not, how could this screening fit into newborn screening criteria? With changing criteria for screening, the emergence of new therapies, and increased recognition of the benefits of early identification independent of medical treatment, however, the time may be approaching when screening for PI disease does indeed fit the criteria.
Any screening strategy for PI disease must use the dried blood specimen, which accommodates efficient and safe collection, ready transport to a central laboratory, and simple processing at the laboratory. The paradigm is the Guthrie specimen, a simple, filter-paper, dried-blood specimen that is collected from neonates but that can include umbilical cord blood or blood collected at any age. Using this specimen, the potential strategies for screening are:
- Enzyme assay–This was used in screening for ADA deficiency in New York (Moore and Meuwissen. J Pediatr 1974).
- Metabolite measurement–This is the classic approach to newborn screening, exemplified by screening for PKU. Recent advances in technology (e.g., tandem mass spectrometry) could make this possible (CDC. MMWR 2001).
- White blood cell surface markers–Mummified leukocytes can be isolated from dried blood specimens (Yourno. Screening 1993).
Overview of CDC’s Newborn Screening Quality Assurance Programs
Dr. Harry Hannon, NCEH, CDC
[Dr. Hannon could not be present at the meeting. Dr. Robert Vogt presented.]
CDC has a long history of involvement in quality assurance activities in support of newborn screening laboratories. In 1975, the Committee for the Study of Inborn Errors of Metabolism, National Academy of Sciences, stated that greater quality control of PKU screening was essential and recommended that one laboratory within CDC be responsible for maintaining the proficiency of the regional laboratories testing newborns for PKU. Since 1980, CDC, the Health Resources and Services Administration (HRSA), and the Association of Public Health Laboratories have conducted research on materials development and assisted laboratories with quality assurance for dried blood spot screening tests. The heart of these efforts is the Newborn Screening Quality Assurance Program (NSQAP), operated by the Newborn Screening Branch, Division of Laboratory Science, NCEH. NSQAP is a voluntary, non-regulatory program designed to help laboratories evaluate and improve the quality of their testing and to foster the standardization of newborn screening services. NSQAP encompasses two primary monitoring components: bench-level quality control and periodic proficiency testing.
Quality assurance is the dynamic and ongoing process of monitoring a system for reproducibility and reliability that permits corrective action when established criteria are not met. Since 1978, NSQAP, in response to requests from state public health laboratories, has produced, certified, and distributed dried blood spot materials for external quality control and performance surveillance and has maintained related projects to serve newborn screening laboratories. In 1979, the first dried blood spot materials were distributed for congenital hypothyroidism screening; by 1988, the expanded program included PKU and galactosemia screening and monitoring of filter-paper production lots. A quality assurance program was initiated in 1988 for HIV-1 seroprevalence surveys in childbearing women. In 1990-1991, quality assurance programs for congenital adrenal hyperplasia screening and hemoglobinopathies were added.
Currently, 250 national and international screening laboratories from 45 countries participate in the quality assurance program. Most recently, NSQAP has added quality assurance materials for tandem mass spectrometric analysis of amino acids and metabolites related to mitochondrial diseases. A pilot program for quality assurance of genetic and serologic markers for type 1 diabetes is under development. Technical evaluation of methods to measure lymphocyte counts from dried blood spots is anticipated in 2002.
Research Projects Using Dried Blood Spots
Dr. Robert Vogt, NCEH, CDC
Dr. Vogt gave an overview of technical aspects of the dried blood spot as a sample matrix. Dried blood spots have been widely used in large population research studies and neonatal screening programs because they provide a stable, quantitative matrix that can be easily collected in the field by investigators or study participants. Newly emerging technologies such as microarrays and chip-based PCR will dramatically increase the analytical power and throughput of assays using dried blood spots, lowering their cost and rendering large-scale testing economically feasible.
When properly collected, dried blood spots provide a volumetric blood sample: each 1/8-inch punch contains the equivalent of 1.54 ± 0.17 microliters of serum. Most proteins and nucleic acids and many lower-molecular-weight components are stable on dried blood spots if the spots are stored at low humidity and kept dry. Storage is easy and compact, handling is safe, and identifying information can be recorded on the card. The ease of transporting dried blood spots makes them ideal specimens for large-volume testing by centralized laboratories. Matrix effects often become apparent when results from dried blood spots are compared with those from serum, so assays applied to dried blood spots may require refinement using dried blood spot standards and adaptation of other reagents. If care is taken to optimize the dose-response relation, comparable results between dried blood spots and serum can be obtained for most serologies, blood chemistries, and even some cellular markers such as CD4. Dried blood spots are also an ideal way to store samples for DNA and mRNA analysis: both are well preserved, and PCR can be performed in situ quite effectively, so no extraction is necessary.
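As a small illustration of the volumetric property described above, the sketch below converts a number of punches into a serum-equivalent volume. The per-punch figure (1.54 ± 0.17 µL per 1/8-inch punch) is taken from the text; the three-punch example and the assumption of independent punch-to-punch variation are mine.

```python
# Minimal arithmetic sketch of the volumetric property described above.
# The per-punch serum equivalent (1.54 +/- 0.17 microliters per 1/8-inch
# punch) is taken from the text; the three-punch example and the assumption
# of independent punch-to-punch variation are illustrative only.

PUNCH_SERUM_MEAN_UL = 1.54   # mean serum-equivalent volume per punch, microliters
PUNCH_SERUM_SD_UL = 0.17     # reported variation per punch, microliters

def serum_equivalent(n_punches: int) -> tuple[float, float]:
    """Return (mean, standard deviation) of the serum-equivalent volume in microliters."""
    mean = n_punches * PUNCH_SERUM_MEAN_UL
    sd = PUNCH_SERUM_SD_UL * n_punches ** 0.5  # SD of a sum of independent punches
    return mean, sd

mean, sd = serum_equivalent(3)
print(f"3 punches ~ {mean:.2f} +/- {sd:.2f} microliters serum equivalent")
```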
Dried blood spots are being used successfully in a variety of public health and population-based research activities. Each year, CDC’s NSQAP provides reference materials and proficiency testing services to the state and territorial public health laboratories that test an estimated 6 million newborn screening samples from the 4 million babies born in the United States annually. The program has also supported several large-scale epidemiologic research studies that used dried blood spot specimens. These include the provision of quality assurance services for HIV seroprevalence surveys in childbearing women, which involved the collection and analysis of >1 million blood spots, the measurement of AZT in blood spots to assess compliance with AZT therapy among HIV-infected pregnant women, and the testing of 23,000 newborn blood spots to determine perinatal exposure to cocaine in a population-based study in Georgia.
Three studies in particular demonstrate the unique features and advantages of prevention research using dried blood spots collected for newborn screening. The first is a retrospective case-control study of cerebral palsy (Nelson, et al. Ann Neurol 1998) in which recycling immunoaffinity chromatography was used to simultaneously analyze 53 analytes from a 5-mm dried blood spot punch. At least five biomarkers that discriminated cerebral palsy cases with 100 percent accuracy were easily detectable from the neonatal dried blood spot. Even in cerebral palsy cases that were not diagnosed before hospital release, neonatal blood spots showed marked increases of interleukins (IL) 1, 8, 9, and 13; tumor necrosis factor (TNF)-alpha; and RANTES.
The second project, Diabetes Evaluation in Washington State, is a prospective study to stratify genetic risk for type 1 diabetes. The study is funded by CDC and is being conducted by the Pacific Northwest Research Institute. In the first phase, parents were asked for permission to test their children’s stored newborn dried blood spot for genetic markers associated with risk for type 1 diabetes. Blood spots were subjected to a high-throughput microarray analysis to look for ~25 alleles that are used to assess diabetes risk; testing cost <$2 per sample. Children with combinations of markers that place them in the 20 percent at highest risk for developing diabetes will be offered the opportunity to participate in a follow-up study that will measure autoantibodies associated with progression to diabetes at 3-year intervals. Children whose sequential autoantibody test results suggest a high risk of developing diabetes will be offered enrollment in clinical trials of FDA-approved therapies designed to interrupt disease progression. Although this is a research study and not a public health intervention, it does provide some individual benefits: (1) reduced risk for severe diabetic ketoacidosis and death, (2) earlier opportunities for prevention and intervention, and (3) opportunities for intervention for celiac disease and autoimmune Addison’s disease.
The final study tested the use of dried blood spots to measure CD4 lymphocytes in blood (Parsons, et al. Abstract. Presented at the VII International Conference on AIDS, 1991). CD4 cells were eluted from 6-mm dried blood spot punches and assayed as solubilized protein. A calibration curve was obtained from soluble CD4 standards that were converted into CD4 T-cell count equivalents. The assay performed well: sample recovery, 98 percent; inter- and intra-assay variability, <8 percent; correlation with flow cytometry, 93 percent. After this initial study, however, the test was dropped, and no further development was done. CDC plans to conduct a technical study using fluorescence-based technologies. If it succeeds, CDC will design a small pilot project to evaluate the test.
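A calibration curve of the kind described might be constructed as sketched below: assay signal is regressed against soluble CD4 standards, and a sample signal is then converted into a CD4 count equivalent. This is only an illustration; the standard concentrations, signals, and cells-per-unit conversion factor are hypothetical values, not data from the Parsons study.

```python
# Hedged sketch of how a calibration curve like the one described might be
# built: regress assay signal on soluble CD4 standards, then convert a sample
# signal into a CD4 T-cell count equivalent. All numeric values (standards,
# signals, and the cells-per-unit conversion) are hypothetical.

import numpy as np

# Hypothetical soluble CD4 standards (arbitrary concentration units) and the
# assay signals they produced.
standard_conc = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
standard_signal = np.array([0.02, 0.21, 0.40, 0.79, 1.58])

# Least-squares linear calibration: signal = slope * conc + intercept.
slope, intercept = np.polyfit(standard_conc, standard_signal, deg=1)

CELLS_PER_CONC_UNIT = 125.0  # hypothetical conversion to CD4 cells per microliter

def cd4_count_equivalent(sample_signal: float) -> float:
    """Convert an eluted-punch assay signal into a CD4 count equivalent."""
    conc = (sample_signal - intercept) / slope
    return max(conc, 0.0) * CELLS_PER_CONC_UNIT

print(f"signal 0.95 -> ~{cd4_count_equivalent(0.95):.0f} CD4 cells/microliter equivalent")
```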
Discussion–Participants had these additional comments:
- The data from Duke University on the success of transplantation in SCID patients in the first 28 days of life are a credit to Dr. Buckley and the medical center but probably cannot be generalized to other sites. The discussion must therefore be expanded to consider treatment options for babies with SCID and long-term follow-up of patient outcomes. A key gap is lack of follow-up data on patients with SCID and other immunodeficiencies.
- The recommendations for use of manual differentials raised concerns about follow-up for false-positive results and the need to address logistical issues (e.g., backup laboratories, cost of testing, infrastructure/workforce for additional testing).
- The use of dried blood spots to measure CD4 lymphocytes in blood is a promising approach that merits further examination and discussion.