Several researchers share tales about the development of their widely used methods.
Every paper is a win. Each one comes together when experiment and analysis, usually doused in sweat and setbacks, culminate in work that researchers are proud to show others beyond their collaborators. Sometimes a described method surprises its developers because it finds many users or shapes their careers, or a subdiscipline, in unexpected ways. For Nature Methods’ 20th anniversary, some methods developers share vignettes of such moments.
Fiji: building community in imaging
As imaging technology advanced and brought high-throughput experiments and large files, computational image analysis became a necessity. Wayne Rasband, who was then at the US National Institutes of Health, developed the widely adopted software tool ImageJ for image processing and analysis. He deserves credit for his vision of a community-based image analysis tool, which inspired the open-source image processing package Fiji1, says Kevin Eliceiri, a Fiji co-developer and researcher at the Morgridge Institute for Research who also directs the center for quantitative cell imaging at the University of Wisconsin–Madison’s Laboratory for Optical and Computational Instrumentation (LOCI). Fiji’s architecture is built on ImageJ. What’s best about the Fiji ‘package’ is the community, Eliceiri says. He highlights the role of the paper’s co-authors and others and names LOCI colleague Curtis Rueden the “lead Fiji maintainer.”
Fiji is supported by numerous laboratories, including one led by Pavel Tomancak at the Max Planck Institute of Molecular Cell Biology and Genetics. He also directs the Central European Institute of Technology consortium in Brno, Czech Republic. To him, Fiji has always been about providing a platform through which to share and maintain image-analysis solutions, including those from his group. What started as a side project has had success that he and his colleagues could never have predicted, he says. Fiji “definitely opened many doors for me,” he says. He was invited to give talks, and eventually his Fiji activity brought in grants, and it gave him an entrance to the bioimage analysis community. He balanced keeping a software tool up and running with tending to his lab projects. Fiji’s success, he says, is thanks to the selfless work of many in the bioimage analysis community. “Some stuck to it despite having more lucrative options in the industry, and to them, on behalf of the biology community, I express a heartfelt gratitude,” he says.
It’s hard when “the talented students leave, pursue their own interests,” says Tomancak, but community has kept Fiji strong. The software’s biggest asset, he says, is how it handles large datasets on relatively modest computer hardware. That’s due to design decisions Fiji’s core developers made early on. “Now, we are harvesting the fruits of those early decisions.”
Eliceiri also enjoys how Fiji’s open-source community involves researchers sharing expertise and developing, sharing and adapting image-analysis code. One person’s plug-in or macro can turn out to help others whom the developer never knew. As a collaborative tool with a community spirit, Fiji has spurred better science. In the rapidly evolving software world, it’s great how Fiji has been sustained and stayed relevant, he says.
Fiji’s success has helped him to “be open source in all my research endeavors,” says Eliceiri. He also works on open hardware projects and sees expertise-sharing in resources such as the image.sc forum and within the organization BioImaging North America. Fiji has shaped research in his group and that of close colleagues and collaborators. Fiji’s deep-learning functionality, says Eliceiri, is also a community effort. Special recognition, he says, goes to Human Technopole researcher Florian Jug for his many contributions in this area; his work also inspired others, including Eliceiri and his group, to add deep-learning functionality.
To give Fiji users better access to deep-learning Python libraries and code, bridges have been built between Fiji and Python: the PyImageJ library and the napari-imagej plug-in for napari, a viewer for multi-dimensional images.
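A minimal sketch of what such a bridge looks like in practice is shown below, assuming the PyImageJ package and a Java runtime are installed; the endpoint string and the tiny macro are illustrative examples, not a recipe from the Fiji team.

```python
# Illustrative sketch: driving Fiji from Python via PyImageJ.
# Assumes the 'pyimagej' package and a Java runtime are installed; the
# endpoint string and the small macro below are examples only, and helper
# names may vary across PyImageJ versions.
import imagej

ij = imagej.init("sc.fiji:fiji")            # launch a Fiji-backed gateway
print("ImageJ version:", ij.getVersion())

# Run a small ImageJ macro, just as one would inside Fiji's macro editor.
macro = """
newImage("blank", "8-bit black", 256, 256, 1);
run("Add Noise");
"""
ij.py.run_macro(macro)
```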
Cryo-EM: good-bye blob
“Blobology” is what some crystallographers used to call cryo-electron microscopy, says David Agard, a researcher in the Department of Biochemistry and Biophysics at the University of California, San Francisco (UCSF), and the founding director of the Chan Zuckerberg Imaging Institute. “You would have your protein of interest or your complex or your ribosome, and it would look like a blob.” One could see no atomic detail—nothing that would give chemical insights, as crystallography results could. Thus, he says, “it was found wanting by the structural biology community.”
In cryo-electron microscopy (cryo-EM), a sample is plunge-frozen and then structure is reconstructed from an electron beam barrage. But camera noise was an issue, the sample-holding stages weren’t terribly stable and beam-induced sample movement during exposure was common.
Direct electron-counting had been seen as “the absolute necessary next step to go,” says Howard Hughes Medical Institute investigator Yifan Cheng, but its value “was doubted by many in the field.” Cheng is Agard’s departmental colleague at UCSF and co-author of the paper2 that presents direct electron counting and an algorithm for motion correction in cryo-EM. When the scientists first saw the structural data for proteins that they generated using direct electron counting with cryo-EM, “it was extraordinary,” says Agard.
To design and develop a camera capable of single-electron counting, the team worked with camera manufacturer Gatan Inc. and drew on other advances such as detectors developed at the Lawrence Berkeley National Laboratory (LBL) for high-energy physics applications. Gatan was hesitant about this project because it had been developing an advanced scintillator camera, says Agard. The scintillator, which converts high-energy radiation from the electron beam into visible light, seemed to be part of cryo-EM’s problem. When electrons enter the scintillator, he says, they bounce back from the fiber optic coupling to the detector, adding halos to the low-resolution images.
There had been efforts to detect electrons directly with silicon sensors on a chip, but these failed because the beam caused harsh radiation damage. Cryo-EM cameras had to be radiation hardened, and they needed improved chip lithography and better sensor designs. The approach, developed by Peter Denes at LBL, of using synchrotron detectors for ultra-high-resolution electron microscopy in materials science provided a way forward for cryo-EM. When Agard landed a seed grant, he and his team, along with LBL scientists and Gatan engineers, set out to design a scintillator-less camera. They also explored ways for the camera to read out the images fast enough to allow detection of individual electrons. Then they worked on an algorithm to correct beam-induced motion.
The image blur was first thought to be due to sample drift, but research from the lab of Nikolaus Grigorieff, then an HHMI investigator at Brandeis University and now at the University of Massachusetts Chan Medical School, showed that it came from beam-induced motion. First, says Agard, the sample buckles as electrons hit it and stresses from plunge-freezing the sample are relieved. Gases released by the radiation also lead to image distortion. Because of all of this, he says, “the sample is moving all across the field.”
The electron-counting detectors counted essentially every electron, he says, were practically noiseless and provided fast readout, so that the typical long exposure could be broken into a set of movie frames in which motion could be followed. Most importantly, he says, the beam-induced motion could be tracked and corrected. The far better motion-corrected images made it possible to do computational particle sorting to find the good data that could produce atomic-resolution images.
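The core idea, stripped of the paper’s refinements, can be sketched in a few lines: estimate each movie frame’s shift against a reference by cross-correlation, then undo the shifts and average. The sketch below is a toy illustration of that concept, not the published algorithm, which handles subframe and subpixel motion far more carefully.

```python
# Toy sketch of movie-frame motion correction: estimate each frame's
# whole-frame shift by FFT cross-correlation against a reference, undo
# the shift and average. The published algorithm is far more refined;
# this only illustrates the concept.
import numpy as np

def estimate_shift(ref, frame):
    """Return the integer (dy, dx) shift that best aligns frame to ref."""
    xcorr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))).real
    dy, dx = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    if dy > ref.shape[0] // 2:   # wrap shifts past half the image size
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

def motion_corrected_average(frames):
    """frames: array (n_frames, H, W); returns the aligned average image."""
    ref = frames.mean(axis=0)    # initial reference: plain average
    aligned = [np.roll(f, estimate_shift(ref, f), axis=(0, 1)) for f in frames]
    return np.mean(aligned, axis=0)
```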
The team’s work led to what became the Gatan K2 camera. Much computation takes place on the camera itself. Plenty of computational progress has resulted from the use of graphical processing unit (GPU) computing and, says Agard, “game-changing” software from the lab of Sjors Scheres at the MRC Laboratory of Molecular Biology.
The structure revealed in the team’s Nature Methods paper—a 3.3-Å-resolution structure of the Thermoplasma acidophilum 20S proteasome, which is around 700 kilodaltons in size—was impressive, says Agard. “The technology we established in this Nature Methods paper enabled atomic structural determination, and triggered the ‘resolution revolution’,” says Cheng. Shortly after that paper, Cheng’s team and colleagues in the UCSF lab of David Julius applied this method to determine the structure of TRPV1, a mammalian transient receptor potential channel, both in a closed state at 3.4-Å resolution and in activated states3,4.
This work “really woke up the field,” says Agard. Membrane proteins had been hard to study with crystallography. Previous work using film “was able to generate good reconstructions of very big things such as very large viruses.” But the huge win with this new technology, he says, was the ability to tackle much smaller things, which made the work on the TRPV1 channel structure “the real stunner.”
The project began in the late 2000s and came to fruition, with a working device, around 2012. Testing and manufacture of the silicon chips took time, as did many other steps. The algorithm that the group presented for blur correction has since been improved and is now used across cryo-EM and cryo-electron tomography (cryo-ET). The Gatan camera currently in use is the K3, which is faster and more sensitive than the K2. The technique, he says, piqued the interest of the pharma industry because of its potential to deliver structures of important receptors. Agard says that cryo-EM has become a dominant technology for structure determination.
jGCaMP7: broadly used neural sensors
Genetically encoded calcium indicators (GECIs) let neuroscientists image neural activity. Neuroscientist Hod Dana is first author on the paper5 presenting jGCaMP7 sensors. At the time, he was wrapping up a postdoctoral fellowship at HHMI’s Janelia Research Campus and just establishing his own lab. He is now a researcher at the Lerner Research Institute of the Cleveland Clinic Foundation and at the Cleveland Clinic Lerner College of Medicine at Case Western Reserve University.
As a graduate student and postdoc, Dana closely followed Nature Methods and used or adopted ideas from journal papers for his own work. “I was very excited to have our paper5 accepted and see my name added to the author list of the journal,” he says. The paper’s impact was “very positive for me,” he says. The GCaMP family of sensors was well known, which boosted interest in checking out the new generation of the sensor. This project “was definitely my ‘baby’,” he says, but he adds that it was a collaborative effort of Janelia’s Genetically Encoded Neuronal Indicators and Effectors (GENIE) project team, who also deserve credit for the sensor’s success. Dana continues to use these sensors to study the progression of neurodegenerative and neurological conditions in animal models.
“The years I spent developing calcium sensors helped me to understand their capabilities and limitations,” he says. The experience helped him use them in studying disease conditions and other scientific questions.
To develop jGCaMP7, Dana and his colleagues optimized green fluorescent protein–based GECIs. The new sensors could track larger populations of neurons than was previously possible with GECIs. It became easier to discern individual neural spikes and to image neuronal structures such as neurites and the neuropil, the network of neural and glial fibers. Dana is proud that the jGCaMP7 sensor is used in many labs, across different brain regions, species and experimental conditions.
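Activity in such recordings is usually read out as a relative fluorescence change, ΔF/F. The snippet below is a generic, minimal illustration of that bookkeeping on a raw trace, not the analysis pipeline from the jGCaMP7 paper; the percentile baseline and the event threshold are assumptions chosen for the example.

```python
# Minimal, illustrative ΔF/F computation for a calcium-indicator trace.
# Generic calcium-imaging bookkeeping, not the jGCaMP7 paper's pipeline;
# the percentile baseline and threshold below are arbitrary assumptions.
import numpy as np

def delta_f_over_f(trace, baseline_percentile=10):
    """Convert raw fluorescence to ΔF/F using a percentile baseline F0."""
    f0 = np.percentile(trace, baseline_percentile)
    return (trace - f0) / f0

def candidate_events(dff, threshold=0.5):
    """Frame indices where ΔF/F first rises above a simple threshold."""
    return np.flatnonzero((dff[1:] >= threshold) & (dff[:-1] < threshold)) + 1

# usage on a synthetic trace with one fake transient
rng = np.random.default_rng(0)
trace = 100 + rng.normal(0, 2, size=1000)
trace[300:330] += 80
print(candidate_events(delta_f_over_f(trace)))   # -> [300]
```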
During the project, they screened candidate sensors to hunt for the ones that gave the most predictive results for a wide range of experiments. They released four different types of jGCaMP7 sensors. Says Dana: “I was relieved for the first time when I had the sensors working in our hands and keep getting this feeling every time I see or hear that they work well for someone else.”
Behavior tracked: LEAP and SLEAP
Many organisms spend much of their life moving. Plants change shape; animals flee from a threat or approach one another in courtship. One can track such behavior through observation, but “you can’t really scale an ethologist sitting in a bush with a notepad and watching whether a duck will keep rolling its egg,” says Salk Institute for Biological Studies researcher Talmo Pereira. Manually annotating photos and video of moving organisms gets tedious and time-consuming. To capture such data for quantitative analysis of behavior, scientists can choose computational tools, such as DeepLabCut6, LEAP7 and SLEAP8.
When the DeepLabCut preprint was published in April 2018, Pereira was finishing his PhD at Princeton University. One month later, he and colleagues published their LEAP preprint. The teams had been working independently of one another. “The best tool, hands down, always is the tool that lets you get your science done the fastest,” he says. It should also be the easiest to set up and use. His bias lies with his team’s tools. “One of the most rewarding things I’ve been seeing is the positive feedback from the community,” he says.
These computational tools apply deep learning. Pereira’s tools need no mountains of training data. Users load videos into the system and click in a few video frames to identify body parts such as the head, torso and tail of an animal. With this input, the system annotates all the frames in one or many videos, although users should spot-check correctness, says Pereira. They can then assess how a plant’s shape changes or tease out the first hints of neurodegenerative disease in animal models.
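Under the hood, pose estimators in the LEAP family predict a confidence map per body part and read each keypoint off the map’s peak. The snippet below illustrates only that output representation on synthetic maps; it is not the SLEAP API, and the function names and Gaussian bumps are invented for the example.

```python
# Toy illustration of confidence-map-based keypoint readout, the output
# representation used by LEAP-style pose estimators. Not the SLEAP API;
# the function names and synthetic Gaussian maps are assumptions.
import numpy as np

def keypoints_from_confidence_maps(maps):
    """maps: array (n_parts, H, W). Returns (n_parts, 2) peak (row, col) locations."""
    n_parts = maps.shape[0]
    flat_peaks = maps.reshape(n_parts, -1).argmax(axis=1)
    return np.stack(np.unravel_index(flat_peaks, maps.shape[1:]), axis=1)

# usage: two fake body parts with Gaussian bumps at known positions
H, W = 64, 64
yy, xx = np.mgrid[0:H, 0:W]
def bump(cy, cx, sigma=2.0):
    return np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))

maps = np.stack([bump(10, 20), bump(40, 55)])
print(keypoints_from_confidence_maps(maps))   # -> [[10 20] [40 55]]
```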
Pereira’s Salk Institute lab, which he started in late 2021, is entirely computational, but he has considerable wet-lab assay experience. “I’ve done enough to know what I’m good at and where my strengths lie and they are not on the bench,” he says, laughing. He grew up poor in a crime-ridden area in Campinas, Brazil. In middle school, he taught himself how to build a computer and to program in Visual Basic and C++. He hacked video games with his own scripts. He was raised by relatives and arrived in the United States as a teenager. A decade earlier, his mother had immigrated to the United States and had supported him from afar.
An internship at the US National Institutes of Health introduced him to neuroscience and behavior analysis, and he landed an undergraduate scholarship at the University of Maryland, Baltimore County. Other internships followed. A summer of failed co-immunoprecipitation experiments at the Broad Institute of Harvard and MIT taught him about cell signaling pathways and exposed him to computational biology. Next, he interned in Sebastian Seung’s Massachusetts Institute of Technology lab, where he encountered “a level of thinking about the brain that was only really tractable with computational methods,” says Pereira. After he finished an ethology-focused internship with David Anderson at the California Institute of Technology, the BRAIN Initiative—Brain Research Through Advancing Innovative Neurotechnologies—was getting underway. He was fascinated by the project’s vision about where neuroscience needed to go. He began a PhD in neuroscience at Princeton University, where he set out to work on software for tracking animal movements to gain insight into neural circuits. He was keen to find a way to circumvent the need for big data mountains of images for training. After all, he says, no scientist would build a training set by manually annotating 100,000 video frames of courting fruit flies.
In 2017, Pereira began working in earnest on pose estimation software and collaborated with then-undergraduate Diego Aldarondo, later a co-first author on the LEAP paper. The work of Kristin Branson and her group at the Janelia Research Campus heavily inspired him, he says.
Pereira and his colleagues then engineered SLEAP, software designed to work across multiple animal species. A research stint at Google AI had shown him how essential software engineering is. When working on platforms such as Google Photos or YouTube, “you really can’t have a bug,” he says. SLEAP can be used to track multiple animals, which is useful for tracking social behaviors. “We got it working, reproducibly across multiple datasets, we extracted the behaviors.” When the DeepLabCut preprint was published, it was challenging for his young career, says Pereira. He had been scooped. But he has continued working with the LEAP and SLEAP systems. He has been happy to see that “you can extract real biology from this” and also avoid image-processing steps that call for heaps of annotated data. He and the team have built the systems, following industrial software engineering principles, to be stable, accessible and easy to set up and use. “We put so much emphasis on usability,” he says.
Proteomics: detangle with target–decoy
It takes a hunt through databases to find matches for one’s experimental peptide spectra. Such spectra are generated in large-scale proteomic studies that involve liquid chromatography–tandem mass spectrometry experiments. It’s rare that a spectrum finds no match, but what happens all too frequently is that a peptide is identified inaccurately.
As Harvard Medical School researcher Steven Gygi and then-PhD-student Joshua Elias point out9, among the methods to control for such errors, they think most highly of the target–decoy search strategy because of its throughput and accuracy. Elias is now on the Stanford University faculty and mass spectrometry platform leader at the Chan Zuckerberg Biohub San Francisco.
The target–decoy method involves a ‘target’ protein sequence database and a ‘decoy’ database of false peptides, such as one with the reverse sequences of real ones. Using these resources, one can assess false-positive rates in large groups of peptide–spectrum matches. The lab had previously developed the approach, says Gygi, to check for error in a paper10 led by former postdoctoral fellow Junmin Peng and Elias. At the time, says Gygi, “we could find no other way to accurately assess the number of false positives present in the dataset.” They then realized how universally the method could be used, with any search engine and basically any dataset, and they set out to validate its underlying assumptions.
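In practice, the estimate is simple: search spectra against a combined target-plus-decoy database and, at any score cutoff, infer the false positives among accepted matches from the number of decoy hits. The sketch below assumes a concatenated search and the common doubling convention; it is an illustration of the strategy, not the authors’ code.

```python
# Illustrative target–decoy error estimation for peptide–spectrum matches.
# Assumes a concatenated target+decoy search and the common convention
# FDR ~ 2 * decoys / (targets + decoys); estimator details vary in practice.
import numpy as np

def score_cutoff_for_fdr(scores, is_decoy, fdr_target=0.01):
    """scores: PSM scores (higher is better); is_decoy: boolean array.
    Returns the lowest score cutoff whose estimated FDR is <= fdr_target."""
    order = np.argsort(scores)[::-1]              # best matches first
    decoys = np.cumsum(is_decoy[order])
    targets = np.cumsum(~is_decoy[order])
    est_fdr = 2 * decoys / np.maximum(decoys + targets, 1)
    ok = np.flatnonzero(est_fdr <= fdr_target)
    return scores[order][ok[-1]] if ok.size else None

# usage on toy data: decoy hits concentrated at low scores
rng = np.random.default_rng(1)
scores = np.concatenate([rng.normal(8, 1, 900), rng.normal(4, 1, 100)])
is_decoy = np.concatenate([np.zeros(900, bool), np.ones(100, bool)])
print(score_cutoff_for_fdr(scores, is_decoy))
```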
The target–decoy method has grown to underpin datasets in almost every proteomics paper, he says, and is used to validate many kinds of data: those from data-dependent acquisition approaches such as shotgun proteomics as well as from data-independent workflows, in which a mass range is selected for analysis in an untargeted manner.
In his view, the target–decoy method “is underappreciated and many people don’t know that my lab pioneered its use.” He and his team learned a great deal when they used it to look behind the curtain at the types of false positives that were ending up in finalized datasets. The issue of false positives was largely unrecognized before target–decoy analysis made it possible to measure such error rates in proteomics datasets. Says Gygi, “It was a lot of fun to see something so remarkably simple become a powerful and accurate way to control error even in the largest of proteomics experiments.”
Speedy epigenomic analysis: ATAC-seq
Chromatin is the eukaryotic cell’s way of scrunching the genome into the nucleus: DNA is wrapped tightly around histone proteins. One analogy he likes for chromatin, says Stanford University School of Medicine researcher William Greenleaf, is taking a phone cable that stretches from New York to Los Angeles and packing it into a two-bedroom house.
Profiling genomic and epigenomic regulation requires finding accessible chromatin, which is where transcription factors bind. That analysis can be performed with the assay11 Greenleaf and colleagues co-developed, called the assay for transposase-accessible chromatin using sequencing (ATAC-seq).
In ATAC-seq, adaptors are attached to the prokaryotic Tn5 transposase, which can cut and tag a genome. With this ‘tagmentation’ followed by sequencing, one can identify open chromatin. The method took fewer cells and was faster than other approaches, and it’s straightforward to use, says Greenleaf. Cells are permeabilized, you transpose into the chromatin “and then you’re done.” Library prep and the assay itself are the same step.
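Downstream, the sequenced fragments are typically reduced to Tn5 insertion sites before peak calling. The sketch below illustrates that bookkeeping on a simplified read representation; the +4/−5 offsets follow the commonly used correction for the Tn5 dimer footprint, and everything else (the tuple format, the toy reads) is an assumption for the example rather than part of the published protocol.

```python
# Illustrative conversion of aligned ATAC-seq reads into Tn5 insertion-site
# counts. The +4/-5 offsets are the commonly used footprint correction; the
# simplified (chrom, start, end, strand) read tuples are an assumption here,
# not a BAM parser or the published analysis.
from collections import Counter

def insertion_sites(reads):
    """reads: iterable of (chrom, start, end, strand) 0-based alignments.
    Returns a Counter mapping (chrom, position) -> insertion count."""
    counts = Counter()
    for chrom, start, end, strand in reads:
        if strand == "+":
            counts[(chrom, start + 4)] += 1   # shift plus-strand cut site by +4
        else:
            counts[(chrom, end - 5)] += 1     # shift minus-strand cut site by -5
    return counts

# usage on a toy read list
reads = [("chr1", 100, 180, "+"), ("chr1", 150, 230, "-")]
print(insertion_sites(reads))   # -> Counter({('chr1', 104): 1, ('chr1', 225): 1})
```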
Greenleaf recalls arriving at Stanford having worked mainly in single-molecule biophysics, but during his postdoctoral fellowship, he had developed sequencing methods. Once at Stanford, he advanced more of his genomics-related ideas, one of which was to find more sensitive ways to capture epigenetic events in individual cells.
A focus on chromatin modifications and changes to chromatin architecture could be used to probe fundamental regulatory processes in the cell. At the time, researchers were typically getting an averaged readout from analyzing ground-up tissues. “That’s very confusing, if you really believe that the average has something to do with the reality,” he says. Single-cell (sc) RNA-seq, he says, was in its earliest stages of development.
Single-cell applications of RNA-seq and ATAC-seq are similar conceptually, but they are distinct in the information they provide, says Greenleaf. In some ways scRNA-seq data define the ‘what’ of molecular phenotype: they show which genes are being expressed in a given cell. And to some degree, the scATAC-seq data provide an aspect of the ‘why’: which regulatory elements are accessible and likely to be driving these gene expression programs. “I think these data types are quite synergistic with one another—I always used to say they are like peanut butter and jelly until I realized only Americans eat peanut butter and jelly,” he says. Thus, depending on where he is giving a talk, he will offer among other examples: “like beer and tapas,” “like gin and tonic” or “like dumplings and vinegar.”
Howard Chang, a colleague of Greenleaf’s at the Stanford University School of Medicine, co-developed ATAC-seq. He was eager to bring the study of gene regulation to the analysis of human disease states. At the time, around 2012, such assays included DNase hypersensitivity, which “needed tens of millions of cells and many laborious steps,” says Chang. And because scientists cultured cells for analysis, one couldn’t probe the cells’ original state or ask questions of rare biological samples. Jason Buenrostro, who is first author on the ATAC-seq paper, was then a first-year graduate student in the Chang and Greenleaf labs and is now a principal investigator at the Harvard Department of Stem Cell and Regenerative Biology.
After they tried out using transposase on naked DNA for a different application, they applied it directly to chromatin, says Chang. A week later, Buenrostro showed promising results that, with further optimization, became ATAC-seq.
Says Chang, “I am most gratified by the fact that scientists around the world have been able to use ATAC-seq to advance their research on a very wide range of topics.” They explore questions in cancer biology, immunotherapy, neurodegeneration and development, and it’s been used in areas he had never imagined it would be applied to, such as evolutionary biology, agricultural science and animal husbandry.
Greenleaf, too, is happy to see the method’s wide applications. For a method developer, he says, “that’s the coin of the realm.” He remembers presenting the method at conferences. Not long thereafter, he attended a talk where the speaker gave no introduction and just showed the ATAC-seq track. Says Greenleaf, “It’s sort of entered the Zeitgeist.” ATAC-seq was developed in close collaboration with Chang and Buenrostro and “wouldn’t be what it is today without those guys,” he says. He continues to work with them, especially his Stanford colleague Chang.
Direct to RNA with nanopores
DNA is quite hardy, so if it’s left out on a bench, it can still be analyzed. RNA, on the other hand, is best kept in a –80 °C freezer, says Libby Snell, an RNA biologist at Oxford Nanopore Technologies (ONT). RNases that degrade RNA are everywhere, and thus, when working with RNA, “you have to be very meticulous,” says Snell, who co-developed direct nanopore-based RNA sequencing (RNA-seq)12. A peek at ONT’s RNA lab reveals how fastidious Snell’s team is about clean surfaces and pipettes. RNA has long held her interest, given how it informs on events in a cell or organism. ONT application scientist Daniel Garalde, the paper’s first author, calls the work “one of the highlights of my career.”
RNA sequencing long involved a proxy: analysis of cDNA, which requires an extra step of reverse transcribing RNAs into complementary DNA sequences. This changed with the advent of direct RNA sequencing using nanopore sequencer arrays, developed at ONT. Snell enjoys seeing the many ways the method is being used. Its potential extends to RNA modification analysis and perhaps mRNA vaccine quality control.
Daniel Turner, the paper’s last author, used to be at ONT and is now chief scientific officer at Cambridge, UK–based Enhanc3D Genomics. The idea of direct RNA sequencing was, he says, ambitious and challenging. The direct RNA-sequencing project helped to shape his career. Moreover, he says, “it’s not easy to figure out the potential value of something that just doesn’t exist,” and he appreciates how the company had “faith in the vision.” Turner says of the paper12, “I’m really proud of it and I have a framed copy of the Nature Methods cover on my wall.”
Garalde, who co-developed direct nanopore RNA sequencing, now works in ONT’s business division in California with a focus on emerging techniques. He had joined ONT with a background in computer engineering from the University of California, Santa Cruz. He considered a postdoctoral fellowship with Hagan Bayley at the University of Oxford, one of ONT’s founders, but decided to join the company instead.
In nanopore sequencing, a polynucleotide moves through a nanopore that spans a membrane in a volume of salt buffer. The readout is an electrical signal that provides information about what just traveled through the nanopore. Software converts the signal to a base sequence. Working on RNA meant re-engineering much of the technology used to sequence DNA, says Garalde.
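A toy illustration of that signal-to-sequence step is sketched below: segment a current trace into events and assign each event the k-mer whose expected current level lies closest. Real base callers are neural networks and the level table here is invented, so this is only a cartoon of the concept.

```python
# Cartoon of the signal-to-sequence idea in nanopore sequencing: segment a
# current trace into events and assign each the nearest expected k-mer level.
# The level table and segmentation rule are invented for illustration; real
# base callers are neural networks trained on reference data.
import numpy as np

KMER_LEVELS = {"AAA": 80.0, "AAC": 95.0, "ACA": 110.0, "CAA": 70.0}  # fake pA values

def segment(signal, min_jump=8.0):
    """Split the trace wherever the current jumps by more than min_jump pA."""
    breaks = np.flatnonzero(np.abs(np.diff(signal)) > min_jump) + 1
    return np.split(signal, breaks)

def call_kmers(signal):
    """Assign each segmented event the k-mer with the closest expected level."""
    calls = []
    for event in segment(signal):
        level = event.mean()
        calls.append(min(KMER_LEVELS, key=lambda k: abs(KMER_LEVELS[k] - level)))
    return calls

# usage on a noise-free synthetic trace with three current levels
trace = np.concatenate([np.full(50, 80.0), np.full(50, 95.0), np.full(50, 110.0)])
print(call_kmers(trace))   # -> ['AAA', 'AAC', 'ACA']
```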
The voltage applied across the nanopore accelerates the passage of DNA through the pore. Garalde and his colleagues worked on ways to slow this down, but the helicases that work with DNA didn’t work with RNA. They also needed to get an RNA signal readout in the sensors. After Garalde left for California, Snell led the next versions of the direct RNA sequencing chemistry, up to the latest release. “It’s just tremendously better,” Garalde says.
The team developed the method’s aspects in parallel, says Turner. For library prep, they worked on affixing adapters to RNA so that the nanopores would process it; they needed to ensure that RNA didn’t degrade in the flow cell; the right motor protein had to control the speed with which RNA passes through the nanopore; and they sought a signal readout and software for base-calling. The sequencers were not as accurate and fast as they are today, he says, but the sequencing hardware itself—the MinION sequencer—existed.
In Snell’s academic training at the Universities of Oxford and Reading, she used cDNA sequencing to study ancient cell lineages and the relatedness of organisms in the eukaryotic tree of life. She was intrigued by the idea of working in industry when she came across ONT, then a startup. Its long-read DNA sequencing promised to avoid needing to put together genome assemblies piecemeal from short DNA sequences. She also knew that it would be good to avoid the use of reverse transcriptases and polymerases to make cDNA, which can bias sequencing. Among the challenges with RNA, says Snell, was finding ways to keep the molecules intact.
The work on direct RNA sequencing began around 2013 and the paper was published in 2018. At the time, it was not yet clear how scientists might apply the method, but, says Snell, “I was always super excited by it.” Projects like her academic ones would benefit greatly from such a capability.
The team hoped the same motor protein that threads the DNA molecule through the nanopore would work for RNA. But a DNA-specific motor protein disengages from RNA, she says. After screening a variety of motor proteins, they decided on one called M1. The one they now use, M2, is RNA specific, says Snell. It does not work on DNA, just as the DNA one does not work for RNA. They explored pore modifications, too. They would set up flow cells and then wait. “Sometimes it would work and sometimes it wouldn’t,” she says. They decided not only to work with RNA-specific motor proteins but also to thread RNA from the 3′ end to the 5′ end, rather than from 5′ to 3′ as with DNA. “That just sort of cracked it all open at that point,” she says. The motor still needed an adaptor, but this shift simplified sample prep and made it easier to get signals to train a base caller that would read out the sequence.
Turner enjoys the fact that no other technology can sequence RNA directly and that a cDNA step is no longer needed with nanopore sequencing. One “gets so much closer to the action,” he says: actual RNA strands from the organism touch the nanopores during sequencing.
Currently, says Snell, the nanopore sequencers read RNA at 120 nucleotides per second with a pore optimized for RNA and the RNA-specific motor protein. “We’re at about 98.8% accuracy.” When training researchers, she relishes how excited they are to sequence RNA directly. It’s normal to her, but “it’s still pretty amazing, right?”
References
1. Schindelin, J. et al. Nat. Methods 9, 676–682 (2012).
2. Li, X. et al. Nat. Methods 10, 584–590 (2013).
3. Cao, E., Liao, M., Cheng, Y. & Julius, D. Nature 504, 113–118 (2013).
4. Liao, M., Cao, E., Julius, D. & Cheng, Y. Nature 504, 107–112 (2013).
5. Dana, H. et al. Nat. Methods 16, 649–657 (2019).
6. Mathis, A. et al. Nat. Neurosci. 21, 1281–1289 (2018).
7. Pereira, T. D. et al. Nat. Methods 16, 117–125 (2019).
8. Pereira, T. D. et al. Nat. Methods 19, 486–495 (2022).
9. Elias, J. E. & Gygi, S. P. Nat. Methods 4, 207–214 (2007).
10. Peng, J., Elias, J. E., Thoreen, C. C., Licklider, L. J. & Gygi, S. P. J. Proteome Res. 2, 43–50 (2003).
11. Buenrostro, J. D., Giresi, P. G., Zaba, L. C., Chang, H. Y. & Greenleaf, W. J. Nat. Methods 10, 1213–1218 (2013).
12. Garalde, D. R. et al. Nat. Methods 15, 201–206 (2018).