Artificial intelligence (AI) has revolutionized a vast range of domains in the past few years. Notably, large language models (LLMs) have led to the development of general-purpose, human-like conversational agents, such as ChatGPT, as well as expert, domain-specific tools, including, to name a few, NYUTron (ref. 1), an all-purpose clinical prediction engine trained on clinical records, and ChemCrow (ref. 2), an agent designed to accomplish tasks across organic synthesis, drug discovery, and materials design. However, the training and deployment of these large-scale models face substantial challenges on state-of-the-art digital processors, particularly regarding computing speed and energy cost. This limitation arises from the design of conventional digital processors, which separates memory from processing units and forces data to shuttle between them; this constraint is known as the von Neumann bottleneck. To scale up AI systems, it is therefore crucial to investigate alternative computing architectures.


Credit: Jorg Greuel / Getty Images

Inspired by the neuronal systems of the human brain, neuromorphic computing has the potential to address this bottleneck of traditional digital computing. Neuromorphic computers perform computations by mimicking the structure and function of neurons and synapses in the brain. Ultimately, this means that information processing and memory are co-located and integrated within the artificial neural system, naturally avoiding the energy-costly data movement inherent in the von Neumann architecture. Nevertheless, while neuromorphic processors undoubtedly offer tremendous potential for efficient and scalable computing, a myriad of challenges and gaps remain to be addressed by the research community.

In an effort to foster a collaborative environment for discussing advancements and challenges in this burgeoning field, a group of Nature Portfolio journals, in collaboration with Tsinghua University, organized the 2nd Nature Conference on Neuromorphic Computing in October 2024, focusing on the transformative power of neuromorphic computing in advancing AI.

The conference covered a wide range of physical realizations of neuromorphic systems, including memristors, spintronic devices, and event sensors. For instance, memristors can mimic the human brain's energy-efficient synapses and neurons to realize in-memory computing (IMC), in which local memory devices directly perform computations, thereby avoiding the energy-intensive step of moving data around. IMC not only enables the deployment of AI tasks on local devices (that is, AI on the edge), which is essential for applications such as autonomous driving and clinical diagnostics, but it can also leverage intrinsic properties of the hardware for computing tasks, which is critical for realizing the full potential of memristors, as highlighted by Damien Querlioz in a News & Views. Along this line of research, Bin Gao, Huaqiang Wu, and colleagues present, in an Article in this issue of Nature Computational Science, an implementation of deep Bayesian active learning within the IMC framework. The authors used memristor arrays to eliminate extensive data movement during vector–matrix multiplication (VMM), a common step in the training of machine learning models, and exploited the intrinsic randomness of memristors to efficiently generate random numbers for weight updates during the training of probabilistic AI algorithms, altogether substantially reducing latency and power consumption.
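To illustrate the principle at play (this is a simplified sketch, not the authors' implementation), the following snippet emulates how a memristor crossbar computes a VMM in place: weights are stored as conductances, and applying an input voltage vector yields output currents equal to the matrix–vector product, with device variability modeled as read noise. All names and parameter values here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical conductance matrix: each memristor cell stores one weight
# as a conductance value (arbitrary units), so the array itself is the memory.
weights = rng.uniform(0.0, 1.0, size=(4, 8))  # 4 outputs, 8 inputs

def crossbar_vmm(G, v, read_noise=0.05):
    """Emulate an analog vector-matrix multiplication on a memristor crossbar.

    Applying voltages v to the columns produces output currents
    I = G @ v (Ohm's law plus Kirchhoff's current law), so the
    multiply-accumulate happens where the weights are stored.
    Device-to-device and cycle-to-cycle variability is modeled as
    multiplicative read noise: the same intrinsic randomness that can
    be harvested as a random-number source for probabilistic training.
    """
    noisy_G = G * (1.0 + read_noise * rng.standard_normal(G.shape))
    return noisy_G @ v

v = rng.uniform(0.0, 1.0, size=8)   # input voltage vector
i_out = crossbar_vmm(weights, v)    # analog VMM, no weight movement
print(i_out.shape)  # (4,)
```

The key point is that no weight ever leaves the array: the computation happens in the memory itself, which is what removes the von Neumann data-movement cost.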

The importance of IMC is further exemplified by an Article by Julian Büchel, Abu Sebastian, and colleagues, also published in this issue of Nature Computational Science. The authors propose a three-dimensional (3D) arrangement of non-volatile memory (NVM) devices that simultaneously satisfies memory-capacity requirements and alleviates the parameter-fetching bottleneck in LLMs at reduced energy cost. They used a conditional computing model designed to reduce inference cost and training resources. Implementing conditional computing on digital processors is, however, notoriously impractical for large-scale models, as it usually requires an order of magnitude more parameters for better performance. To address this issue, the authors demonstrated that mapping the conditional computing mechanism onto a 3D IMC architecture is a promising approach for scaling up large models. It is worth noting that the computing efficiency of VMM also benefits from the analog operations in NVM, as highlighted by Anand Subramoney in a News & Views.
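The appeal of conditional computing can be sketched with a minimal mixture-of-experts-style routine (a generic illustration of the idea, not the architecture in the Article; all names and sizes are hypothetical). Only the top-k experts are evaluated per input, so compute cost scales with k rather than with the total parameter count; on a 3D IMC chip, the unselected expert weights would simply stay resident in their memory tiers instead of being fetched.

```python
import numpy as np

rng = np.random.default_rng(1)

d_model, n_experts, top_k = 16, 8, 2

# Hypothetical expert weights. In a 3D IMC design these would stay resident
# in stacked non-volatile memory, so choosing an expert activates a tier
# rather than fetching its parameters from off-chip memory.
experts = rng.standard_normal((n_experts, d_model, d_model)) / np.sqrt(d_model)
router = rng.standard_normal((d_model, n_experts)) / np.sqrt(d_model)

def conditional_forward(x):
    """Route the input through the top-k experts only.

    The per-input work involves k expert matrices, not all n_experts,
    which is why conditional computation lets parameter counts grow
    without a matching growth in per-token compute.
    """
    scores = x @ router                   # routing logits, one per expert
    top = np.argsort(scores)[-top_k:]     # indices of the chosen experts
    gates = np.exp(scores[top])
    gates /= gates.sum()                  # normalized gate weights
    return sum(g * (experts[i] @ x) for g, i in zip(gates, top))

x = rng.standard_normal(d_model)
y = conditional_forward(x)
print(y.shape)  # (16,)
```

On a conventional processor, the sparsely used expert weights would still have to be streamed from memory; keeping them stationary in NVM is precisely what makes the IMC mapping attractive.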

With the increasing availability of neuromorphic processors, finding practical applications has become more important than ever. The conference highlighted this trend by discussing many potential applications, such as healthcare diagnostics, visual adaptation, and signal processing. In addition, the concept of hardware–software co-design (or algorithm-guided hardware design) was emphasized in various talks, as it is regarded as essential to fully realizing the benefits of neuromorphic processors in several applications. This issue of Nature Computational Science includes an Article in this research direction, in which Zhongrui Wang, Dashan Shang, and colleagues present a hardware–software co-design approach that enables the learning of cross-modal, event-driven signals for efficient real-time knowledge generalization. Among other results, the authors demonstrate that their framework can achieve energy efficiency and zero-shot cross-modal intelligence comparable to those of the human brain.

Although the field of neuromorphic computing has progressed rapidly in recent years, several challenges remain. For instance, an issue that was broadly discussed during the conference is the lack of community-acknowledged benchmark datasets: without standardized benchmarks, it is difficult to accurately measure new technological advancements. Efforts such as the NeuroBench framework (ref. 3) serve as a good starting point in this regard. Another relevant challenge is the absence of best practices for code sharing, as neuromorphic source code is often highly dependent on the underlying hardware. Establishing standard practices and infrastructures for sharing code and data, while accounting for hardware differences, is thus an important step toward facilitating the dissemination of new approaches and technologies. Addressing these challenges will be essential for the continued success of neuromorphic computing.