Special Report on Cell Biology: Life moves on

Physics, chemistry and informatics combine to push life into the 4th dimension

Randall C Willis
A colleague returns to work from vacation and shows you a photo of his trip. The photo shows a campsite, but the tent is collapsed and the camping gear is strewn around, mixed with tree limbs and rocks.
 
Is this your colleague’s campsite before he set things up? Or might his site have been the victim of a bear attack or rock slide? Or did your colleague just snap this shot on his way to an all-inclusive resort? With this one photo, it is really hard to tell.
 
It would be so much easier to see what his vacation was like if he had a series of images or even a video of the vacation as it took place.
 
Life, like vacations, is dynamic, and to truly understand the physiology of a disease or the mechanism of action for a new therapeutic, it can be as important, if not more important, to understand how a cell or tissue reaches its endpoint than what that endpoint looks like.
 
That’s where advances in microscopy and live-cell imaging come to the fore.
 
Change is inevitable
 
“In high-content imaging in the past, you took a 96-well plate seeded with cells, you'd put the drugs on it, kill the cells, stain them with different labels and then just do static imaging screens to see what happened to the cells,” says Lynne Chang, senior application specialist at Nikon Instruments. “Now, you can combine live-cell imaging with these high-content screening systems, and you can not only look at 3D cultures in each well of a 96-well plate, but also do 3D imaging over time for each well.”
 
Peter Banks, scientific director at BioTek Instruments, echoes this sentiment.
 
“Over the last few years, there has been a lot of work in the development of live-cell probes that allow researchers to really better understand cellular processes by looking at them in real time,” he says, giving the example of calcium flux in signal transduction.
 
“Calcium is a really important second messenger in signal transduction in cells, but it doesn’t last a long time,” he explains. “Typically a ligand binds to its receptor, the calcium flux peaks at around 30 seconds and, within a couple minutes, is gone.”
 
Calcium-binding fluorescent dyes and bioluminescent proteins like aequorin have been integral to monitoring the waves of calcium release as they pulse through an activated cell, but Banks is most excited by a newer class of calcium flux markers, genetically encoded calcium indicators, that can be transfected into cells.
 
“Consider it a fusion protein consisting of a GFP variant, a calcium-binding protein such as calmodulin and another binding protein that binds calmodulin itself,” he says. “These three proteins together, in the presence of calcium, will not only bind the calcium, but the binding event changes the conformation of the GFP variant to either increase or decrease its fluorescence.”
 
But calcium is just one such signal messenger, and Banks quickly points to the efforts of Montana Molecular to expand the variety of markers they can target.
 
“They have sensors for a whole series of secondary messengers, including diacylglycerol, PIP2 and cAMP,” he enthuses. “These sensors can be used for the simultaneous detection of all these secondary messengers. That is something that is really unique and really powerful, specifically for drug discovery efforts for GPCRs.”
 
In a similar analysis, researchers with the Munich Cluster for Systems Neurology announced in April their efforts to study real-time changes in redox signaling within individual mitochondria, suggesting that oxidative damage to these organelles might contribute to axonal damage in diseases such as multiple sclerosis.
 
“We were able to establish an approach that permits us to simultaneously monitor redox signals together with mitochondrial calcium currents, as well as changes in the electrical potential and the proton gradient across the mitochondrial membrane,” explained study co-leader Thomas Misgeld, adding that these changes appear to progress along the damaged axons.
 
They also noted, for the first time, that these redox changes are associated with a physical contraction of the mitochondria.
 
“This appears to be a failsafe system that is activated in response to stress and temporarily attenuates mitochondrial activity,” Misgeld said. “Under pathological conditions, the contractions are more prolonged and may become irreversible, and this can ultimately result in irreparable damage to the nerve process.”
 
For Jacob Tesdorpf, director of high-content instruments and applications at PerkinElmer, a lot of the excitement around live-cell imaging is in being able to ask much more sophisticated questions.
 
“We’ve been in high-content screening for more than a decade, and the requirements for live-cell work and the complexity of the experiments that people do are clearly increasing,” he says. “For example, if you’re looking at inhibiting metastatic behavior, you’re actually looking at the movement of cells and specific phenotypes that show the specific pattern that might be more or less indicative of their potential to create metastasis.”
 
Furthermore, he says, people can use live-cell experiments to identify the most appropriate endpoint for a fixed-cell assay and thus optimize assay conditions before cranking up the throughput.
 
Of course, one of the challenges of working with live-cell imaging is that you have to keep the cells alive, which means enclosing what was a bench-top microscope within an incubator.
 
“Nikon has the BioStation, which is essentially a giant incubator with a fully automated microscope inside with a robotic arm that moves your cell culture dishes in and out of the microscope imaging path,” says Chang. “So you can really start to image cells for a very, very long period of time in an environment that’s very friendly for cells to grow and divide.”
 
This can be particularly important, she says, for experiments with finicky cells such as stem cells.
 
“They’re very sensitive to any mechanical perturbation, as well as to changes in pH in the cell culture medium if you take it out of the incubator,” she explains. “So these types of incubator cell imaging systems are ideal for long-term recording and time-lapse imaging of stem cells.”
 
Part of the push to live-cell imaging is the movement within the industry away from in-vivo animal testing and the desire to develop more realistic cellular models of disease, according to Evan Cromwell, research director at Molecular Devices.
 
“We’ve been doing a lot of work using stem-cell-derived models, which are human cells and which recapitulate the in-vivo environment much better than immortalized cell lines, like CHO or HeLa cells,” he says, suggesting that these models are being used extensively in drug safety and toxicology studies.
 
“The other area where live-cell imaging is really taking off is in doing kinetics, so tracking cells, looking at cell cycle, cell mobility and migration,” he adds, echoing Tesdorpf. “There, you’re looking over a 24- to 48-hour period and taking time-lapse images of the cells.”
 
Researchers can, of course, get much more granular in their analysis of cell behavior, and Banks points to markers like EMD Millipore’s SMARTflare technology, which he describes as live-cell RNA expression detection.
 
“Most people will use qPCR, for example, for doing gene expression studies, but that relies on lysing the cells,” he says. “With this technology, you can actually monitor, in real time, RNA expression.”
 
One of the challenges of live-cell microscopy has been the impact of the light source on the living cells through processes known as photobleaching or phototoxicity.
 
To address this issue, companies have created solutions at both the technological and methodological levels.
 
In the latter case, Banks points to the fact that most systems now rely on time-lapse imaging where, rather than constantly exposing a sample to light, cellular dynamics are reconstructed from a series of individual images taken at intervals. Thus, “you can still generate a temporal profile, but you’re not whacking the cells all the time,” he says.
 
Chang similarly points to advances in camera sensitivity.
 
“The cameras have improved significantly so that you have much more sensitive detectors without compromising spatial resolution or field of view,” she says. “You can now image cells for a long period of time with low levels of light exposure and still get beautiful images.”
 
And even how or how much you expose the cells can have a significant impact on cell viability, she adds.
 
“Up until now, if you focus your camera on anything, there’s only really one focal plane that comes into focus,” she explains. “Anything beyond that is sort of blurry.”
 
“In a light sheet system, you’re actually only illuminating the in-focus plane by using thin sheets of light,” she continues. “So, it is a very cell-friendly imaging method, and by moving the light sheet up and down very rapidly and your focus up and down rapidly, you can develop very nice 3D images and do that over time.”
 
Keeping it real
 
As Chang suggests, another advantage of the new microscopic technologies is the ability to provide more information in a third dimension, and there has been growing interest in 3D cell culture experiments.
 
“People take cells out of a tumor, culture them on a glass cover slip as a single monolayer and study those, but how relevant is that?” asks Chang. “The cells inside our body exist within a tissue or a 3D structure, so more and more people are now studying cultured cells in gel matrices or other substrates that allow the cells to grow into more physiologically relevant structures.”
 
And the differences between 2D- and 3D-cultured cells can be significant.
 
“A number of publications have shown that if you have cells in a 2D layer, you’ll get different responses, for example, different toxicity responses or different biology, than if you have them in a 3D environment that is more like an organ or a tissue environment,” explains Cromwell.
 
For example, Tesdorpf points to the work of Vicky Avery of Griffith University, who compared a number of different prostate cell lines in 2D and 3D and clearly showed that the drug responsiveness and behavior of the 3D cultures were much closer to the in-vivo situation than those of the 2D cultures.
 
Part of the difference, according to Banks, is simply one of cell viability.
 
“The problem with doing long-term toxicity studies with conventional 2D plating of cells in microplates, for example, is that cells don’t like being in a plate for that long; they’ll die off before the end of your experiment,” he explains. “But if you put them into a 3D culture, it more closely resembles a tissue or organ, and we have been able to demonstrate that cells that clump into a spheroid will be viable and have good function for weeks.”
 
“We’ve been collaborating quite closely with a company called InSphero out of Switzerland that specializes in producing assay-ready 3D microtissues,” adds Tesdorpf. “It is a very powerful tool because you can see behavior that you wouldn’t be able to see in 2D culture.”
 
“For example, we were looking at cancer microtissues with a dye that was originally developed for in-vivo imaging called HypoxiSense, designed to detect hypoxic conditions in tumors,” he explains. “We could show with this dye and our Operetta imager that you can actually detect a hypoxic core inside these tumor microtissues.
 
“This is a behavior that you wouldn’t be able to detect at all in 2D culture, and there is a lot of research going on into how these hypoxia-inducible factors play a major role in metastasis.”
 
Staying on the cancer theme, Chang highlights the breast cancer work of Joan Brugge, chair of cell biology at Harvard University.
 
“They grow [breast cancer cells] in a gel which allows them to form round spheroid hollow structures similar to these spherical gland-like breast epithelial structures,” Chang says. “By studying how the hollowness is maintained in these cultures, they can start to understand why they would fill in, in breast cancer, and start to develop drug targets.”
 
To study these 3D structures, however, you need to image them in 3D. And as soon as your sample starts to get thick and tissue-like, Chang suggests, there is a lot of light scattering, so your image quality deteriorates. There are different ways to get around this light scattering.
 
“One is using the light sheet,” she reminds. “There’s another method called two-photon or multiphoton imaging, which is another technology for just illuminating the in-focus plane rather than illuminating the whole depth of your cell and just trying to collect data from your focus. You would then move that illumination along the Z-axis and change your focus as you’re doing that to collect 3D images.”
 
Tesdorpf similarly highlights the recent launch of PerkinElmer’s Opera Phenix.
 
“Basically, it is a high-throughput spinning-disc confocal system, where we spend a lot of our effort making sure that it is particularly suited for 3D spheroid or microbodies, by looking specifically at the disc design, by enabling crosstalk-free imaging of two to four channels at the same time, but also by using water-immersion objectives, which are particularly well suited for thick samples because you don’t get this refractive index problem that you get with air objectives.”
 
“I’ve been working in this industry now 12 to 15 years, and 3D cell models have always been a promise,” Cromwell says, “but I think it’s been a slow evolution and they’re finally starting to really get traction.”
 
And the move to more physiologically relevant models has resulted in the renaissance of phenotype as an analytical endpoint in drug development.
 
How do I look?
 
“When you look at drug discovery over the last 15 to 20 years, it has really been all about what is the drug target,” says Banks. “Essentially, you have information about new drug targets from the get-go, and all the work you do is looking at that specific drug target’s functional response.”
 
Part of the challenge with this approach to drug discovery, however, has been the sheer redundancy of biochemical and physiological pathways within biological systems, which can either make a target irrelevant shortly after it gets hit or offer up unexpected side effects due to secondary pathways in which the target is involved.
 
Thus, Banks sees a slow migration away from target-based assays toward phenotypic assays, where researchers monitor the impact of new therapeutics on the basic phenotype of the cell.
 
“This could be as simple as cell viability, monitoring neurite outgrowth, angiogenesis, the production of reactive oxygen species or hypoxia,” he explains. “The best way of looking at these phenotypes is through microscopy.”
 
BioTek’s Cytation3 system, he explains, allows companies to keep a foot in both camps, not only enabling phenotypic assays, but also allowing drug discovery teams to do target-based assays as well.
 
“There’s a chart that I’ve seen maybe about 20 times which shows that first-in-class drugs come predominantly from phenotypic screens, while follow-on drugs come predominantly from target-based screens,” says Cromwell (see the figure at the end of this article).
 
He also points to the Tox21 Initiative, “which really wanted to move the industry toward in-vitro models, and the best way to do that was with imaging-based phenotypic screens, because toxicology, in some senses, is pure phenotype.”
 
“We felt very strongly about phenotypic screening, and what we tried to do is to provide the best possible images so that you get a very detailed image of your cells, many different colored markers that you can use, but at the same time being very gentle to the cell so we keep it as unperturbed by the act of imaging as possible,” adds Tesdorpf. “Again, this is why we’re using spinning-disc confocality: because it allows us to minimize the photon dose that the cells are submitted to.”
 
To label or not to label
 
But even as researchers develop more physiologically relevant culture conditions to examine more physiologically relevant endpoints, they also have to worry about the perturbations they introduce through the use of target indicators such as fluorescent dyes or biomolecules.
 
“In the past, to track cell movement, one of the easy things to do was to throw in a fluorescent dye that binds to DNA, which lights up the nucleus very easily so you have a nice bright round structure in the cell against a dark background,” says Chang. “But these dyes are very harmful to cells because they actually intercalate into the DNA bases, which is very detrimental to cell division and gene expression in general.”
 
Thus, although such markers remain a hallmark of cell microscopy, there has been an increasing move toward label-free techniques.
 
“It’s a fine balance that you always have to strike in your experiment to ensure that which you are seeing is actually the biology,” says Tesdorpf. “The challenge is to be as gentle as possible and as sensitive as necessary. Label-free technology, whether it’s imaging-based such as with the digital phase contrast or non-imaging-based such as on our plate-reader platforms, is a very powerful, extremely gentle tool to interrogate cellular behavior.”
 
“There’ve been significant improvements in image processing and analysis algorithms that allow you now to be able to track cells based on label-free imaging like phase contrast,” Chang continues.
 
“Phase contrast was great, because all you’re doing is taking cells that are unadulterated and putting white light on them, and with a little bit of contrasting technique that’s inherent to the microscope, you can see these nice outlines of the cells,” she explains. “But it was a complicated image. It was hard to teach the software to detect the whole cell, because there are lots of other structures inside the cells that show up in phase contrast.
 
“But they’ve made significant improvements in image analysis processes so that the software can very easily pick out the whole cell, even if the cells are right next to each other, and it can actually track the movement of the cells to see where it is moving and how the cell shape is changing.”
 
Cromwell concurs, using Molecular Devices’ label-free (or stain-free, in their parlance) imaging cytometer, the SpectraMax MiniMax, as an example.
 
“In the software, we have a machine-learning algorithm that allows the customers to identify cells in the transmitted light image and also identify things that are not cells or are background or artefacts,” he enthuses. “Then, through a number of iterations, you can teach the system to pull out and segment individual cells again using this transmitted light.
 
“So that’s a very powerful tool that you can use to do live-cell imaging, to track cells, do cell counting or to segment your cells without introducing a stain and look for proteins being either up- or down-regulated in your cells.”
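For readers who want a feel for how such label-free, machine-learning segmentation works in practice, here is a minimal sketch in Python. It uses generic scikit-image and scikit-learn tools rather than the Molecular Devices software, and the per-pixel features, label scheme and parameter values are illustrative assumptions, not the vendor’s actual algorithm.

```python
# A minimal sketch of iterative, label-free cell segmentation from a
# transmitted-light image. Generic scikit-image/scikit-learn tools only;
# the per-pixel features and label scheme are illustrative assumptions.
import numpy as np
from skimage import filters, measure
from sklearn.ensemble import RandomForestClassifier

def pixel_features(img):
    """Simple per-pixel features: raw intensity, edge strength, local smoothing."""
    return np.dstack([
        img,
        filters.sobel(img),              # edges
        filters.gaussian(img, sigma=2),  # local context
    ]).reshape(-1, 3)

def train_segmenter(img, annotations):
    """annotations: same shape as img; 1 = cell, 2 = background/artefact, 0 = unlabeled.
    Each round of user corrections simply adds labeled pixels and retrains."""
    X, y = pixel_features(img), annotations.ravel()
    clf = RandomForestClassifier(n_estimators=50, n_jobs=-1)
    clf.fit(X[y > 0], y[y > 0])
    return clf

def segment(img, clf):
    """Predict a cell mask and split it into individually labeled cells."""
    mask = clf.predict(pixel_features(img)).reshape(img.shape) == 1
    return measure.label(mask)  # connected components become cell objects
```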
 
Thus, as much as anything, it would appear that improvements on the software side are really pushing live-cell imaging into new areas.
 
It’s the data, stupid
 
“The hardware has really matured,” offers Cromwell. “The cameras that you can get now, the light engines, the optics, they’re all very robust, high-performance systems. And to be honest, between the different vendors, we all have access to sort of the same hardware.”
 
“But it’s on the software side that you can differentiate yourself by being able to analyze the images better or faster,” he continues. “In oncology research, you’re looking at tumor material and you have a very heterogeneous population, or even the stem cells where you have multiple types of cells there. You want to be able to identify those cells, do population analysis, and that all requires really good software to pull those out.”
 
The challenge, however, is how to turn biologists and biochemists into information technologists. Simplicity is key.
 
“For biologists, it is pretty easy to tell that this cell is different from my control cells, and it’s different in a good way or a bad way,” says Tesdorpf. “But how do you describe different in numerical terms?”
 
“We invest a lot in image analysis software and coming up with parameters that enable a robust description of these phenotypes,” he explains. “The software that we use, Harmony or Columbus, offers a whole range of texture parameters that describe the intensity patterns rather than the intensity itself of certain markers.”
 
“We can now really collect a very broad range of different parameters describing a cell phenotype, and then we can use machine learning tools and software to actually select the most discriminating feature combinations to tell, say, the positive control from the negative control, and therefore support phenotypic screening.”
 
Case in point: this summer, PerkinElmer expects to launch a high-content product based on Spotfire, called the High-Content Profiler.
 
“It takes the data generated by Opera Phenix, or whatever instrument the customer might be using, loads them into Spotfire and enables the customer to normalize the data and walk through a series of very simple steps to then apply machine learning,” Tesdorpf explains. “In a very unbiased way, you tell the software: this is a set of positive controls, this is a set of negative controls, now please go and find me those features that are best capable of differentiating the positive from the negative controls, and then accordingly find the hits inside my screen.”
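As a rough illustration of the control-driven workflow Tesdorpf describes, the sketch below normalizes per-well features, selects the features that best separate positive from negative control wells, and then scores the remaining wells. It is a hypothetical stand-in built on pandas and scikit-learn, not the High-Content Profiler or Spotfire interface; the column names and hit cutoff are assumptions.

```python
# A hypothetical sketch of control-driven feature selection and hit calling.
# Column names ('role', feature columns) and the hit cutoff are assumptions.
import numpy as np
import pandas as pd
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression

def find_hits(wells: pd.DataFrame, feature_cols, n_features=10, hit_cutoff=0.9):
    # wells: one row per well, a 'role' column ('pos', 'neg' or 'sample'),
    # and many phenotype-describing feature columns (intensities, textures, ...).
    X = wells[feature_cols].to_numpy(dtype=float)
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-9)   # simple normalization

    is_ctrl = wells['role'].isin(['pos', 'neg']).to_numpy()
    y = (wells.loc[is_ctrl, 'role'] == 'pos').to_numpy()

    # Pick the features that best separate positive from negative controls.
    selector = SelectKBest(f_classif, k=min(n_features, len(feature_cols)))
    Xc = selector.fit_transform(X[is_ctrl], y)
    clf = LogisticRegression().fit(Xc, y)

    # Score every well by how 'positive-control-like' its phenotype looks.
    scores = clf.predict_proba(selector.transform(X))[:, 1]
    return wells[(wells['role'] == 'sample') & (scores > hit_cutoff)]
```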
 
But more than just setting positive and negative boundaries, Chang suggests, the machine learning mechanism can also enable on-the-fly data interpretation, which can significantly reduce data acquisition by eliminating unproductive streams sooner.
 
“These high-content imaging software packages are very sophisticated in the sense that they will analyze the images as they are collected and then, based on the results, actually go back and reanalyze some of the wells,” she says. “You can say, image my 96 wells, and in the wells that have more than 100 nuclei, revisit those wells and do more careful imaging, maybe higher magnification or different colors. So there’s a feedback loop between the data results and the acquisition portion of the experiment.”
 
“If you’re imaging this plate and it looks like the results are not very exciting, just stop the experiment,” she adds. “In the past, I would say 90 percent of the data you were collecting was useless. So now, if you’re analyzing the data as you’re acquiring it, you can actually prevent unnecessary data collection and streamline your data collection.”
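The acquire-analyze-reacquire loop that Chang describes can be pictured as a short control script. The sketch below is purely hypothetical: the scope object and its acquire and count_nuclei calls stand in for whatever API a given imaging platform exposes, and the 100-nucleus revisit threshold comes from her example.

```python
# A hypothetical sketch of an adaptive acquisition feedback loop.
# 'scope.acquire' and 'scope.count_nuclei' are placeholders for a real
# instrument-control API; the thresholds are illustrative.
def adaptive_plate_scan(scope, wells, nuclei_threshold=100, min_interesting=5):
    interesting = []
    for well in wells:                                      # e.g., 96 wells
        overview = scope.acquire(well, magnification=10)    # quick, gentle pass
        if scope.count_nuclei(overview) > nuclei_threshold:
            interesting.append(well)

    # If the plate looks unpromising, stop early rather than collect
    # data that would never be analyzed.
    if len(interesting) < min_interesting:
        return []

    # Revisit only the promising wells with a more careful acquisition:
    # higher magnification, extra channels, perhaps a z-stack.
    return [scope.acquire(w, magnification=40, channels=("DAPI", "GFP"))
            for w in interesting]
```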
 
Banks concurs.
 
“We can set a threshold intensity, for example, that will trigger imaging of only the hit wells. If you think about a typical high-content screen hit rate of between 1 and 5 percent, you could save up to an order of magnitude in time and also reduce the amount of data storage required.”
 
But of course, all of this data is for naught if the researchers don’t understand the output, which is why data visualization has been such an important area for Molecular Devices.
 
“You can imagine you now have 20 different parameters in a heterogeneous cell population where you have five different cell types and thousands and thousands of cells you’re trying to analyze,” Cromwell explains. “The challenge now becomes, how do you keep track of that?”
 
“You have the bioinformaticians, the statisticians, who can take that data, crunch it down and create complicated heat maps, but your general biologist in a lab is not going to be able to do that. So we’re really looking at how you can create software packages that allow the typical biologist to be able to visualize that and be able to make sense of this really complicated data set.”
 
And as developing technologies continue to allow researchers to ask ever-more-complicated questions, the ability to understand the answers to those questions will become more and more challenging. But then, that’s life.

Figure: The rise of phenotypic screens. A) In the decade from 1999 to 2008, more first-in-class drugs arose from phenotypic screens, while target-based screens accounted for the majority of follow-on drugs (adapted from Swinney and Anthony, Nat Rev Drug Discov, 2011). B) Ultimately, the difference between phenotypic and target-based screens is where the drug target comes into play (adapted from Terstappen et al., Nat Rev Drug Discov, 2007).
