Concerns about the future of the clinician–scientist have been expressed for many decades. In 1984, Gordon Gill wrote of ‘the end of the physician scientist’ as he observed academically oriented US doctors move into basic science labs to learn the new techniques of molecular biology1. In the following decades, fewer medical doctors remained active in biomedical science. In the 1980s, physicians represented 4.5% of the US biomedical research workforce; that figure has dropped to 1.5% today2. Earlier this year, the UK House of Lords Science and Technology Committee reported an alarming decline in the number of clinical academics in the UK and highlighted the attendant risks to the long-term health and wealth of the country.

If this species is going to be saved, its characteristics must be defined. It seems reasonable to define academic clinician–scientists as healthcare professionals, not primarily employed in industry, who spend part of their time delivering healthcare to patients (or populations) and dedicate a substantial amount of their time to research.

At the more applied end of research, being active in clinical practice can assist a scientist. A practising surgeon or interventional radiologist, for example, is much more likely to come up with ideas about how a procedure might be enhanced and to lead a trial that tests those ideas. In the case of non-interventional specialties, a doctor’s clinical practice can provide a window on the world of natural human variation. Homo sapiens is a remarkably outbred and mobile species and one in which phenotypic outliers tend to present themselves to doctors. If a doctor retains the eye of a natural historian and maintains their intellectual curiosity, much can be learned from the deeper exploration of small numbers of patients (or even individuals) with what appear to be unusual manifestations of disease. The father of this approach was the incomparable medical explorer of human biology, Archibald Garrod (1857–1936), but more recently Brown and Goldstein exemplified this approach through their study of rare patients with extreme hypercholesterolemia (winning them the Nobel Prize in Physiology or Medicine in 1985).

When I have visited world-class biomedical research institutes, I have frequently heard scientists complain that it is difficult to find local clinicians to interact with. Instead of trying to reach across a cultural and intellectual divide, often needing to identify translators who can help doctors and scientists understand one another, I believe that it is more efficient to develop a cadre of people who are truly bilingual in research and clinical practice. This is challenging, as it takes time and money to train someone to be both a specialist doctor and a scientist. The UK was once in a unique position to generate such people. It had a flexible postgraduate clinical training system and a wealthy charity, the Wellcome Trust (now Wellcome), that provided both long-term funding and targeted support to allow trainee clinician–scientists to establish their careers and flourish. A remarkable proportion of Wellcome-funded clinician–scientists went on to become international research leaders (including Adrian Hill, who leads the Jenner Institute that developed vaccines against COVID-19 and malaria; and the 2019 Nobel Prize winner Peter Ratcliffe).

Unfortunately, postgraduate clinical training in the UK has become more rigid and prolonged, which deters clinicians from simultaneously developing a research career. In addition, Wellcome, although remaining open to supporting clinician–scientists, has shifted its focus away from this group of trainees. The UK does have a National Institute for Health and Care Research that funds trainees; however, it has a strong focus on health outcomes in the short or medium term and does not tend to support more exploratory biomedical research. Other European countries now seem to be taking the lead; for example, Germany has recently established Clinician Scientist Programs in many of its states.

In the USA, industry is proving increasingly attractive for clinical academics. Such jobs provide enhanced remuneration, more resources and a team approach to problem solving. In recent years, the opportunities have become even more exciting as companies have seen the value of clinician–scientists working on earlier phases of drug development, such as target identification and experimental medicine. However, those contemplating a move to industry should be aware that business imperatives will dictate which areas of scientific endeavour are pursued and which are dropped. If you choose to remain in academia, although you are likely to be less well paid and have fewer resources to work with, you will have the precious gift of autonomy. So long as you can raise funds, you are largely free to pursue whatever question you think is interesting, using whatever methods you have at your disposal. As the old movie title goes, ‘It’s a Wonderful Life’.

For this species to survive and thrive, funders and professional regulatory bodies should nurture clinician–scientists rather than erect barriers. Individually, the current ranks of clinician–scientists need to inspire and support trainees as they negotiate the demanding path towards independence. If we can get this right, this group will continue to make invaluable contributions to human knowledge and health.