Commentary: Personalized medicine--Bridging the gaps
July 2014
Shortly after the launch of the “largest human sequencing operation in the world,” an exploratory study was published in which 12 adults underwent whole-genome sequencing to detect clinically meaningful genetic variations. The results were sobering: incomplete coverage of inherited-disease genes, low reproducibility in detecting clinically relevant genes and disagreement among experts about which findings mattered most. In short, most of the findings were not actionable.
Therein lies the biggest conundrum facing the personalized medicine arena today: current and emerging technologies capable of generating person-specific data far outpace our ability to mine those data effectively, draw clinical conclusions and develop clinically relevant products. The pressure to do so is growing, however. Consumer “wearables” that measure, record and report vital signs are spurring patients to take control of their own health and of the patient-doctor relationship. The advent of direct-to-consumer genetic testing, although somewhat curtailed by the U.S. Food and Drug Administration’s admonition to the genetic testing company 23andMe, is further challenging physicians to treat their patients, quite literally, as individuals. High-profile stories of people such as Apple founder Steve Jobs, for whom an individualized treatment approach seems to have been life-extending, imbue the concept of personalized medicine with emotional fervor.
The challenge, then, for the life-sciences sector is to bridge the gaps between technology innovation and clinical outcomes with appropriately targeted, cost-effective R&D. Personalized medicine remains very much a work in progress because every individual, like every disease and treatment, is “complicated.” Specific tools are needed to meet these specific challenges:
- Disease heterogeneity: Most diseases are complex, and genetic variation can explain only about half of the variability among individuals. For biopharmaceutical products to deliver reliably, researchers need a better understanding not only of genomics, but also of molecular pathways, proteomics and the impact of epigenetics (changes in gene expression driven by environmental factors rather than by DNA sequence variants) on disease susceptibility, development and progression.
- Data quality: Next-generation sequencing is a major driver of personalized medicine R&D. However, factors such as sequencing artifacts (e.g., low-quality reads, contaminating reads) can compromise data quality and analysis. Similarly, differences among the major sequencing platforms, each of which has a distinct error profile and suits resequencing or de novo sequencing to different degrees, can hinder quality-control efforts (a minimal read-filtering sketch appears after this list).
- Data interpretation: Data from the Genetic Epidemiology Research on Adult Health and Aging cohort, among the largest and most diverse genomic projects in the United States, were recently added to the U.S. National Institutes of Health’s online database of genotypes and phenotypes. The transfer has the potential to yield new insights into a plethora of diseases, thereby informing drug discovery R&D; however, the data are meaningless without robust tools to interpret them accurately and to identify subpopulations of potential responders to new entities, as well as factors that contribute to resistance.
- Data actionability: Largely because of disease heterogeneity, not all data are “actionable”: you may know you carry mutations in a particular gene that confer greater risk of disease, but what can you do? DNA co-discoverer James Watson had his full genome sequenced in 2007. He famously said he would make the entire genome freely available except for his APOE status; he did not want to know it because an allele of that gene predisposes to Alzheimer’s disease, from which his grandmother died. That potentially frightening aspect of personalized medicine, the ability to identify risk before researchers understand how to treat or prevent a disorder, remains today. For certain types of breast and other cancers, where a number of relatively effective treatments are available, genetic testing is enabling better targeting of treatments to individuals who are likely to respond. That won’t begin to happen for many other complex diseases until the root cause(s), or at least the affected gene products, are identified. And identification means more than simply pinpointing genetic mutations; many mutations in diseased cells may be harmless passengers rather than drivers of disease. For researchers, knowing whether the mutations they have found have already been unearthed and evaluated by others is a formidable task that involves reviewing and assessing the scientific literature and, from there, trying to determine whether existing treatments, or combinations thereof, might help. Tools that can simplify the process, such as sophisticated data-mining programs that extract relevant information from the literature rapidly and accurately, are not yet in widespread use (a toy co-mention sketch appears after this list). For the most part, identification of faulty proteins that are disease harbingers can, at best, lead to early detection. But many people will choose not to take advantage of this aspect of personalized medicine until we have treatments that act effectively against different disease subtypes and at different stages of disease.
- Lack of standardization: Consistent standards will be key as personalized medicine R&D moves
from data generation to data analysis. Findings are open to misinterpretation when organizations use multiple vendors, platforms and applications, especially
if laboratory staff are not adequately trained on all technologies. At the discovery level, researchers need universally accepted standards for sample
management, for example, since improper sample collection, storage and/or processing can make those samples useless for their intended purposes. At the
publication level, including full methodologies in submitted papers can help readers better assess findings until the scientific community agrees upon
universal standards. Once such standards are in place, clinicians are likely to have more confidence in findings that could influence practice.
- Disruptive approaches: Novel methods for speeding the discovery of new drug candidates and for identifying potentially responsive subgroups are promising, but they bring challenges of their own. For example, using computational modeling and simulation in the early R&D stages can save time and money, but only if the algorithms used in such efforts are reliable, available and reproducible, i.e., published with error bounds so others can try to replicate findings and validate predictions, which often is not the case (a simple sketch of reporting error bounds appears after this list).
- Collaborations: Information sharing within companies and among industry, academia and government is vital to realizing the full potential of personalized medicine. Some companies are attempting to integrate (rather than outsource) steps such as chemical synthesis into their on-site discovery work to shave time and money off the process; this approach requires unprecedented collaboration among disciplines such as chemistry and biology, and concomitant cultural changes within organizations. A recent article points to “linguistic barriers” across the sciences and to ways of crossing them. Industry-academia collaborations, in particular, could enhance the development of point-of-care diagnostics, another sine qua non of personalized medicine. As George Whitesides recently observed, the challenges here, too, revolve around cultural differences, research expectations and communication issues. Coordinated efforts such as the National Consortium for Data Science, which brings together universities, companies and government agencies, could help smooth the way by encouraging interaction among data experts from all sectors of the scientific community to tackle key challenges such as data sharing and privacy.
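To make the data-quality point concrete, here is a minimal sketch of one common quality-control step: dropping low-quality reads from a FASTQ file. The file name and quality threshold are illustrative assumptions, not standards; production pipelines use dedicated trimming and QC tools tuned to each platform’s error profile.

```python
# Minimal sketch of one NGS quality-control step: filtering out
# low-quality reads from a FASTQ file. Threshold and file name are
# illustrative assumptions, not community standards.

def phred_scores(quality_line, offset=33):
    """Convert a FASTQ quality string (Phred+33) to numeric scores."""
    return [ord(ch) - offset for ch in quality_line]

def filter_fastq(path, min_mean_quality=20.0):
    """Yield (header, sequence) for reads whose mean Phred score passes."""
    with open(path) as handle:
        while True:
            header = handle.readline().rstrip()
            if not header:
                break  # end of file
            seq = handle.readline().rstrip()
            handle.readline()                 # '+' separator line
            qual = handle.readline().rstrip()
            scores = phred_scores(qual)
            if scores and sum(scores) / len(scores) >= min_mean_quality:
                yield header, seq

# Example usage (hypothetical file name):
# for header, seq in filter_fastq("reads.fastq"):
#     print(header, len(seq))
```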
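The literature-mining tools mentioned under data actionability can be pictured, in grossly simplified form, as co-occurrence counting. The toy sketch below scans a few invented abstract snippets for co-mentions of a gene and candidate drugs; real systems rely on curated ontologies, entity recognition and relationship extraction rather than keyword matching.

```python
# Toy sketch of literature mining as gene-drug co-occurrence counting.
# Abstracts and term lists are invented for illustration only.

abstracts = [
    "BRAF V600E mutations respond to vemurafenib in melanoma.",
    "Trastuzumab benefits HER2-positive breast cancer patients.",
    "BRAF inhibition with dabrafenib shows durable responses.",
]
gene = "BRAF"
drugs = ["vemurafenib", "dabrafenib", "trastuzumab"]

# Count abstracts in which the gene co-occurs with each drug.
co_mentions = {drug: 0 for drug in drugs}
for text in abstracts:
    lowered = text.lower()
    if gene.lower() in lowered:
        for drug in drugs:
            if drug in lowered:
                co_mentions[drug] += 1

# Rank drugs by evidence of co-mention with the gene of interest.
for drug, count in sorted(co_mentions.items(), key=lambda kv: -kv[1]):
    print(f"{gene} + {drug}: {count} co-mentioning abstract(s)")
```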
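Finally, the reproducibility concern raised under disruptive approaches, i.e., publishing predictions without error bounds, can be addressed with standard statistical techniques. The sketch below uses a percentile bootstrap to attach a confidence interval to a simulated response rate; the observations are synthetic and the random seed is fixed purely so a reader can reproduce the run.

```python
# Sketch of reporting error bounds alongside a prediction so others can
# attempt replication. Data are synthetic; the percentile bootstrap
# itself is a standard, widely reproducible technique.

import random

random.seed(42)  # fixed seed so the run is reproducible

# Synthetic "observed response rates" for a candidate in some subgroup.
observations = [0.42, 0.55, 0.47, 0.61, 0.39, 0.52, 0.58, 0.44]

def bootstrap_ci(data, n_resamples=10_000, alpha=0.05):
    """Percentile bootstrap confidence interval for the mean."""
    means = []
    for _ in range(n_resamples):
        sample = [random.choice(data) for _ in data]  # resample w/ replacement
        means.append(sum(sample) / len(sample))
    means.sort()
    lo = means[int((alpha / 2) * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples)]
    return sum(data) / len(data), lo, hi

mean, lo, hi = bootstrap_ci(observations)
print(f"predicted response rate: {mean:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```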
Despite the challenges, personalized medicine is here to stay and will inevitably become simply the way medicine is practiced. Bottom-line concerns of big pharma, such as fears that personalized medicine will segment markets and therefore lower profits, are dissipating. The U.S. government’s investment in human genome sequencing projects generated, directly and indirectly, $65 billion in U.S. economic output and $31 billion toward U.S. GDP in 2012 alone, according to a recent report. Physician reluctance to embrace new technologies remains a challenge even in oncology, an early adopter of genetic risk profiling, but it is likely to soften. Regulatory concerns exist but are being addressed. Most importantly, pressure from patients on one side and from industry, academia and government on the other will eventually win out.
Dr. Jaqui Hodgkinson is vice president of product development at Elsevier. After completing a BSc in molecular biology at the University of Durham in England in 1992, Hodgkinson was awarded a UK Medical Research Council Ph.D. grant and moved to the Biochemistry Department at the University of Oxford. In August 1996, she joined Glaxo Wellcome, then relocated to the Netherlands in late 1997, initially working at Solvay Pharmaceuticals. She moved to Elsevier in August 1999 and went on to build a large team of publication specialists in the cardiovascular area. In 2007, she moved into Elsevier’s operations wing, and in 2013 she changed roles to head up the biology product teams.
(Opinions of the columnist do not necessarily reflect those of the publishers, editors or staff of DDNews.)