Main

A digital twin has been defined as “a set of virtual information constructs that mimics the structure, context, and behavior of an individual or unique physical asset, is dynamically updated with data from its physical twin throughout its life cycle, and ultimately informs decisions that realize value”1. This definition highlights the ways in which a digital twin goes beyond traditional modeling and simulation to bring value, through the bidirectional feedback flows between the physical asset and its virtual representation2. The flow of data from physical to virtual enables the virtual representation to be dynamically updated, thus tailoring it to the specific behavior of its physical counterpart, while the flow from virtual to physical induces changes in the physical asset, through control, sensor steering or other manipulations of the physical system. The virtual representation typically comprises a disparate set of models representing different disciplines, subsystems and components of the physical system. These models may be grounded in physical principles (for instance, analytical and empirical models based on physical governing laws), purely data driven (for instance, statistical models fitted to data and machine learning models), or hybrid combinations of physics-based and data-driven approaches. These digital twin elements and the centrality of the bidirectional flows are depicted in Fig. 1. The figure also depicts the role of a human in the loop and the importance of holistic and continual validation, verification and uncertainty quantification.

Fig. 1: The elements of a digital twin.

A digital twin goes beyond traditional modeling and simulation to include dynamic bidirectional feedback flows between the virtual representation and the physical system. Holistic and continual validation, verification and uncertainty quantification are essential to a digital twin. A human may also play an essential role in the digital twin decision-making loop. Photo credits: top left (clockwise from top), Buena Vista Images/DigitalVision/Getty, Ignatiev/E+/Getty, peepo/E+/Getty; bottom (left to right), YOSHIKAZU TSUNO/Staff/AFP/Getty, Sutthichai Supapornpasupad/Moment/Getty.


Decades of advances in modeling and simulation are powerful precursors to digital twins. A digital twin builds on these advances but goes beyond them, as a capability to provide a physical asset with a unique digital identity. Underlying this unique identity are the requirements to digitally capture relevant asset aspects, such as actual geometry, realized performance and other physical state observations. Also required is the ability to present and manipulate the digital twin view to humans for complex decision-making. These aspects may be achieved by a data-centric approach that measures and stores asset-specific information, or by a model-centric approach that combines analysis, models and data from other indirect measurements. These analyses can leverage well-known simulation technologies, such as behavioral simulation, computational fluid dynamics, finite-element analysis and computer-aided design, combined with additional numerical methods to twin the model with the physical asset.

Digital twin use cases

Notably, across aerospace and mechanical engineering, the use cases for digital twins are vast and the potential benefits large. Digital twins have the potential to speed up development, reduce risk, predict issues and drive reduced sustainment costs. Digital twins enable new ways to collaborate across the life cycle and the supply chain. In the design phase, digital twins provide an opportunity to unlock the advantages of digital engineering, including reduced prototype cycle times, lowered technical risks and reduced experimental test costs. In manufacturing, digital twins could enable improved first-time yield, products optimized with design-for-manufacturing considerations, and optimized factory operations to improve cycle time and cost. In operations, digital twins can lead to improved system capability, increased operational availability, reduced maintenance costs and reduced root-cause-corrective-action turnaround times.

Multiple use cases are currently under investigation or development in each of these life-cycle phases.

Design and engineering

In the engineering phase, we differentiate the models of a product or a critical component (models largely developed in recent decades) from digital twins, which arise when current or historical mission, manufacturing or test data are available to tailor predictions to a specific physical asset or an aggregation of assets. The combination of data with models drastically enhances the accuracy of the predicted information, directly supporting decisions related to the modeled asset(s).

The main use cases in this phase are: (1) exchanging digital twins across the supply chain to accelerate the design cycle time; (2) performing mission analysis-based design, leveraging mission data to run tailored simulations that improve performance; (3) performing design optimization to improve the quality, performance or cost of the design; (4) conducting virtual verification and validation, that is, system and component testing in the digital space before any physical activities take place, to reduce cost; and (5) pursuing virtual certification, leveraging digital twin validation to receive certification credit and reduce certification cost.

Manufacturing

In the manufacturing phase, there are two different groups of digital twins: those related to the product and those related to factory optimization (such as manufacturing processes, supply chain and logistics). The latter are covered by ref. 3.

Prototype digital twins for products can be used to convey the design of the system or part from engineering to manufacturing. They represent the parts in terms of geometry, tolerances and material information (called model-based definition) and enable the automation of manufacturing and inspection processes, with the objective of improving quality and reducing the time needed to ramp up production. Once a part is inspected and assumes a unique identity (for instance, through serialization), the inspection data tailor the prototype digital twin to its precise physical counterpart (for instance, tolerances are replaced by the measured dimensions). These digital twins also serve other use cases, such as product acceptance (do the products satisfy tolerances?) or additional analyses of overall part suitability to improve production yield. For example, a virtual assembly can be performed using digital twins of the parts to verify that the tolerances stack up in a compliant and consistent way, or to enable the operator to perform the activities in an augmented or virtual reality environment, with the objective of reducing cost and improving assembly quality.
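As a schematic illustration of the virtual-assembly use case, the sketch below (not drawn from any specific product) performs a Monte Carlo tolerance stack-up check on a hypothetical three-part assembly; the dimensions, tolerances and gap requirement are invented, and tolerances are assumed to correspond to ±3σ normal scatter.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n = 100_000

# Hypothetical one-dimensional assembly stack: nominal dimension (mm) and
# symmetric tolerance (mm) for each part. In a product digital twin, the
# nominal values would be replaced by the as-inspected dimensions of the
# serialized parts, which removes the tolerance bands.
housing = (50.00, 0.05)
spacer = (10.00, 0.02)
shaft = (39.90, 0.04)
gap_limits = (0.00, 0.25)  # required assembly clearance (mm), assumed

def sample(nominal, tol, size):
    # Interpret the tolerance as a +/-3-sigma band of a normal distribution.
    return rng.normal(nominal, tol / 3.0, size)

# Clearance = housing length minus the stack of spacer and shaft.
gap = sample(*housing, n) - (sample(*spacer, n) + sample(*shaft, n))
yield_fraction = np.mean((gap >= gap_limits[0]) & (gap <= gap_limits[1]))

print(f"predicted assembly yield: {yield_fraction:.4f}")
print(f"gap mean = {gap.mean():.3f} mm, std = {gap.std():.4f} mm")
```

When as-inspected dimensions become available for serialized parts, the sampled distributions collapse to the measured values, turning this statistical check into a deterministic one for that particular assembly.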

Sustainment and operation

One use of digital twins is to perform mission and maintenance training; this capability improves mission readiness and reduces human errors, with clear benefits for operators. In addition, many valuable decisions in this area can be improved with the support of digital twins. An example is the decision of when to remove an in-service engine from the wing. A digital twin that estimates component life (lifing) from historical missions enables the operator to make this decision at the most appropriate time, based on the actual condition of the engine rather than on a fixed schedule. These condition-based maintenance and prognostic use cases are improving the availability of critical products in the field.
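As a toy illustration of the lifing estimate underpinning such a condition-based decision, the following sketch tallies linear damage accumulation (a Miner's-rule-style calculation) from recorded mission cycles; the cycle counts and allowable-life values are purely illustrative and not taken from any real engine program.

```python
# Hypothetical per-mission usage record for one serialized engine component:
# counts of low- and high-severity thermomechanical cycles logged in service.
mission_log = [
    {"low_cycles": 120, "high_cycles": 3},
    {"low_cycles": 95, "high_cycles": 7},
    {"low_cycles": 140, "high_cycles": 1},
]

# Assumed allowable cycles to failure for each severity class (values are
# illustrative stand-ins for a physics-based lifing model or test data).
cycles_to_failure = {"low_cycles": 2.0e5, "high_cycles": 4.0e3}

# Linear damage accumulation (Miner's rule): sum n_i / N_i over all missions.
damage = sum(
    mission[severity] / cycles_to_failure[severity]
    for mission in mission_log
    for severity in cycles_to_failure
)

damage_per_mission = damage / len(mission_log)
remaining_missions = (1.0 - damage) / damage_per_mission

print(f"accumulated damage fraction: {damage:.4f}")
print(f"estimated missions remaining at current usage: {remaining_missions:.0f}")
```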

Another relevant use case in this area is to employ digital twins to improve mission performance. If the digital twin provides real-time estimates of key quantities in a fraction of the decision time, it can be evaluated multiple times to perform a ‘what if’ analysis during operation and support more robust decisions, or it can be used to analyze whether more performance can be provided for a limited amount of time. As a concrete example, drawn from ref. 4, the latest update of the Pratt & Whitney F119 turbofan includes digital twins of critical components, which enable the estimation of remaining life based on the historical usage of the engine. The information from the digital twins shows that parts of some engines, because their usage patterns differ from the design assumptions, have a longer life than expected. This enables substantial maintenance savings (about US$800 million across the aircraft’s life cycle) if maintenance is deferred, or allows the additional margin to be used to improve performance over the remaining scheduled life.

The digital twin as an asset in its own right

Investment in digital twin technologies is required to realize the potential benefits across all these life-cycle phases and achieve the full promise of digital twins. These benefits can be realized at scale only if the digital twin comes to be thought of as an asset in its own right, an asset that—just like the physical asset—must be conceptualized, architected, designed, built, deployed and sustained (Fig. 2). Each one of these phases of the digital twin life cycle requires trade-offs that weigh digital twin development and operational costs with the value added.

Fig. 2: The digital twin life cycle.

The digital twin life cycle includes conception, development, validation, operation and disposal, in addition to dynamic updating and continual validation. The digital twin must be thought of as an asset in its own right, with its own life cycle that requires intentional design, planning, investment and trade-offs.


In the following, we summarize some of the key considerations needed in the different phases of the digital twin life cycle. Our experience is that current practice is to build the digital twin after the physical asset has been realized, cobbling together whatever models and data flows are available. While this may provide sufficient value in some settings, for complex engineering systems—especially those with long service lives—we emphasize the importance of a structured and intentional approach to the life cycle of the digital twin, with investments paralleling the investments that are made over the life cycle of a physical asset. This systematic life-cycle approach to a digital twin remains largely absent in current practices.

Digital twin conception

Just as the conceptual design phase is critical in the design of a physical asset, realizing the full value of a digital twin requires an intentional conceptual design phase for the digital twin itself. This conception phase should define requirements and architecture for the digital twin, including identifying its operational environment(s), potential use cases and the associated fidelity requirements. The digital twin’s conceptual design phase should identify existing models, existing data sources and gaps in digital twin capabilities (including gaps in data, models, integration and workflows) that will require development to fill. In conceiving the digital twin and defining its requirements, it is also essential to architect a plan for investment and sustainment of the digital twin over the full life cycle of the physical counterpart. Just as manufacturability and maintainability are considerations in the design of a physical asset, they must be considerations in the design of a digital twin.

Digital twin development

Development of the digital twin requires model development, software engineering, integration and workflow establishment. In some cases, the virtual representation underlying the digital twin may use models employed for the design of the physical asset, while in other cases, the digital twin may need its own models to be developed. Surrogate modeling may play a particularly important role in developing the digital twin, due to the need to satisfy stringent computational constraints in operation, particularly for digital twins that support real-time updating and decision support. Surrogate modeling provides a way to balance model fidelity with computational costs, where the surrogates may be derived via physics-based reduced-order modeling, empirical data fits, machine learning or hybrid combinations thereof.
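As a minimal sketch of the surrogate-modeling idea, the example below replaces a hypothetical expensive physics-based solver (here a simple analytical placeholder named high_fidelity_model) with a cheap polynomial response surface fitted to a few precomputed runs; in practice, the surrogate would more likely be a reduced-order model, a Gaussian process or a neural network.

```python
import numpy as np

def high_fidelity_model(load):
    # Stand-in for an expensive physics-based simulation (for example, a
    # finite-element run); here it is just an analytical placeholder.
    return 0.3 * load**2 + 1.5 * load + 2.0

# Offline: sample the expensive model at a few design points.
train_loads = np.linspace(0.0, 10.0, 8)
train_response = np.array([high_fidelity_model(x) for x in train_loads])

# Fit a cheap polynomial surrogate (degree chosen for illustration only).
coeffs = np.polyfit(train_loads, train_response, deg=2)
surrogate = np.poly1d(coeffs)

# Online: the digital twin evaluates the surrogate at operational rates.
query = 7.3
print(f"surrogate prediction : {surrogate(query):.3f}")
print(f"high-fidelity result : {high_fidelity_model(query):.3f}")
```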

Digital twin validation

Validation is “the process of determining the degree to which a model is an accurate representation of the real world from the perspective of the intended uses of the model”5,6. The latter part of this definition, emphasizing the ‘intended uses of the model’, is highly relevant for digital twin validation. As described in the National Academies of Sciences, Engineering, and Medicine (NASEM) consensus study report2, a digital twin must be fit for purpose, meaning that it must balance the fidelity of predictions, as defined in the digital twin’s conceptual design phase, with the computational demands (such as time, power and complexity). The notion of being fit for purpose must also factor in acceptable levels of uncertainty to drive decisions and risk preferences of decision-makers. Figure 2 highlights two distinct roles of validation: the substantial validation effort that must accompany digital twin development and the continual validation that must accompany digital twin updates. Methodologies to achieve continual in-the-loop validation largely remain an open challenge, yet represent a key enabler to realizing the potential of digital twins for high-consequence and safety-critical applications.
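A schematic example of a fit-for-purpose acceptance check is sketched below: digital twin predictions are compared against held-out measurements and the error is tested against an accuracy requirement of the kind fixed in the conception phase. The data and the threshold are placeholders chosen only to illustrate the mechanics.

```python
import numpy as np

# Hypothetical held-out measurements from the physical asset and the
# corresponding digital twin predictions (same units).
measured = np.array([101.2, 98.7, 105.4, 99.9, 102.8])
predicted = np.array([100.5, 99.6, 104.1, 101.2, 103.3])

# Fit-for-purpose accuracy requirement fixed during conceptual design
# (illustrative value): root-mean-square error must stay below 2.0 units.
rmse_requirement = 2.0

rmse = np.sqrt(np.mean((predicted - measured) ** 2))
fit_for_purpose = rmse <= rmse_requirement

print(f"RMSE = {rmse:.3f} (requirement <= {rmse_requirement})")
print("validation outcome:", "accept" if fit_for_purpose else "flag for review")
```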

Digital twin operation

In the operation phase, the digital twin is evaluated to support the decision-making process. This phase involves two major aspects: (1) a digital framework to execute, evaluate and visualize the digital twin; and (2) business processes that integrate the digital twin information into decision-making.

The digital framework collects data from the different authoritative sources, such as sensor data and other context information or product configurations, and provides the storage and computational services to execute the digital twin. Digital twin execution entails running the needed simulations and, given the user queries or requests, providing visualization of the information. Digital twin execution may also entail feeding a more complex set of services for information composition or autonomous decision-making.
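The sketch below gives a stylized view of this execution loop, with hypothetical placeholders standing in for the authoritative data sources, the digital twin evaluation and the downstream visualization or decision services.

```python
from dataclasses import dataclass

@dataclass
class AssetState:
    asset_id: str
    sensor_temperature: float  # latest field measurement (placeholder)

def fetch_authoritative_data(asset_id: str) -> AssetState:
    # Placeholder for a query against the authoritative data sources
    # (sensor streams, product configuration, context information).
    return AssetState(asset_id=asset_id, sensor_temperature=642.0)

def evaluate_digital_twin(state: AssetState) -> dict:
    # Placeholder for running the needed simulations or surrogates.
    margin = 700.0 - state.sensor_temperature
    return {"asset_id": state.asset_id, "temperature_margin": margin}

def publish(result: dict) -> None:
    # Placeholder for visualization, information composition or feeding
    # an autonomous decision-making service.
    print(f"[{result['asset_id']}] temperature margin: {result['temperature_margin']:.1f}")

if __name__ == "__main__":
    state = fetch_authoritative_data("engine-serial-0001")
    publish(evaluate_digital_twin(state))
```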

The business processes are heavily dependent on the particular use case; their description is outside the scope of this Perspective.

Digital twin updating

Updating is a critical element of a digital twin and a key distinguishing factor that enables a digital twin to bring added value. Digital twin updating may take place with varying extents and over different timescales. Whatever the extent and timescale of the updates, the notion of continual validation is critical. Updates that take place in real time or near real time (for instance, via data assimilation or parameter-estimation algorithms) must be accompanied by a form of certification that the components of the digital twin virtual representation remain within acceptable bounds. For example, one could imagine a scenario where the physical counterpart undergoes a change in state, so that the acquired data lead to a parameter update that puts the digital twin’s underlying models beyond their validation conditions. Such a scenario requires action to recognize that the digital twin’s predictions are no longer trusted, triggering a more extensive update that entails additional validation and possibly additional digital twin development (such as the development of a new or refined model). This is similarly important for digital twin updates that take place on slower timescales. For example, if the physical counterpart is updated (such as through hardware or software upgrades, part replacement and so forth), one needs to ensure that the linkage between the physical counterpart and the digital twin is maintained. In current practices, these kinds of updates are largely human driven and manual.

A further aspect of digital twin updating is the updating of the digital twin virtual representation itself, not because the physical counterpart has changed but because there is a pull to update the digital twin’s capabilities. This pull could be because the operational environment and its associated fidelity requirements have changed, because the availability and/or quality of data have changed, or because there has been an advance in modeling or algorithmic capability that would bring value to the digital twin (for instance, through reduced computational time or reduced power consumption for onboard processing). Ensuring that the digital twin’s virtual representation receives its own maintenance, sustainment and upgrades is critical to realizing the full value, as discussed in the NASEM report (Ch. 7 in ref. 2).
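As a minimal sketch of the guarded updating described above, the example below performs a simple one-dimensional Bayesian (Kalman-style) update of a model parameter from streaming field estimates and checks, after each update, whether the parameter remains inside the range over which the digital twin was validated; the parameter, noise levels and validated range are all illustrative assumptions.

```python
# Scalar degradation-rate parameter of the digital twin model, with a
# validated range established during digital twin validation (illustrative).
theta_mean, theta_var = 1.0e-3, 2.5e-7
validated_range = (5.0e-4, 1.5e-3)
obs_noise_var = 4.0e-8

def update(mean, var, observed_rate):
    # One-dimensional Bayesian (Kalman-style) update, assuming the field
    # estimate effectively observes the parameter plus Gaussian noise.
    gain = var / (var + obs_noise_var)
    new_mean = mean + gain * (observed_rate - mean)
    new_var = (1.0 - gain) * var
    return new_mean, new_var

for observed_rate in [1.1e-3, 1.4e-3, 2.6e-3]:  # streaming field estimates
    theta_mean, theta_var = update(theta_mean, theta_var, observed_rate)
    inside = validated_range[0] <= theta_mean <= validated_range[1]
    status = "ok" if inside else "OUTSIDE validated envelope: trigger revalidation"
    print(f"theta = {theta_mean:.2e} ({status})")
```

In this toy run, the final update pushes the parameter beyond its validated range, which is exactly the condition that should flag the digital twin's predictions as no longer trusted and trigger additional validation or model development.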

Digital twin disposal

The disposal of a digital twin must follow the end of life of the physical asset. There are two major scenarios:

  • The physical asset reaches the end of its life and is appropriately disposed of. In this case, depending on regulations and policies, the digital twin will have to be archived for future analysis or as historical data valuable for new generations of similar products.

  • The physical asset, or a part of it, has a second life (as in a circular economy). In this case, the digital twin becomes a relevant source of information for the reuse of the asset or its parts. For example, in the second life of a battery, the digital twin remains highly relevant for forecasting the battery’s storage capacity.

An identical twin or a fit-for-purpose capability?

It is tempting to imagine a digital twin as a true ‘identical twin’ of a physical asset, representing the system at an exquisite level of detail and providing a visual representation that looks and acts just like the physical asset. Indeed, some definitions of digital twins promote this vision. While this vision may be realizable in limited cases for simple systems with narrow requirement sets, for complex mechanical and aerospace systems it is our view that it is neither beneficial nor tractable to envision a digital twin in this way. Instead, a digital twin must be envisioned to be fit for purpose, where the determination of fitness depends on the capability needs and the cost–benefit trade-offs. For complex mechanical and aerospace systems, there is value in conceiving a digital twin to have a capability progression. For example, for the same physical asset, one could imagine a digital twin at different levels of capability such as design and prototyping, monitoring and diagnostics, forecasting and predictive analytics, control, autonomous operations, and system-of-systems optimization. Each one of these capability levels places different demands on the digital twin. While the trade-offs will necessarily be case specific, we discuss in the following some of the key trade-off elements that should be considered.

Requirements and trade-offs in fit-for-purpose digital twins

In the conception and development of a digital twin, the digital twin solution must meet multiple business and technical requirements spanning the entire life cycle. For example, the cost of operation, including the continual alignment of the digital twin with its physical counterpart, depends heavily on the amount of data and computation needed to carry out a trustable forecast of a specific aspect of the physical asset. In some cases, these data and computational requirements may not be directly aligned with business objectives. In such cases, the indirect benefit offered by the digital twin must be factored into resource allocation decisions.

The typical business requirements can be summarized as follows: (1) the return on investment (meaning the value delivered during operation minus the cost of development, operation and disposal) must be acceptable to the business; (2) the digital twin forecast must be trustable enough to improve business decisions; (3) the user experience must meet user expectations; and (4) the digital twin must be integrated within the business processes (for instance, operator workflows) of the physical asset.

Digital twins for mechanical and aerospace engineering systems must be consistent with the same first principles as their physical twins. This implies that, in exploring the solution space, one of the most important trade-offs to analyze is between the availability and complexity of physics-based models on the one hand, and the amount of data needed for training and continual updating on the other. At one extreme, we may have detailed models that capture (or abstract) the entire physics of the asset and require a minimal set of data, over time, to become tuned to the real world as an accurate digital replica. At the other extreme, we may have data-driven models (such as machine learning models) without any a priori physics or behavioral knowledge, which are trained on collected data and learn the input–output relations from data alone. In the middle ground, we may have a set of hybrid techniques, such as physics-informed machine learning7, data-driven reduced-order modeling8,9, sparse identification of nonlinear dynamical systems10 and scientific machine learning11, which use a mix of physics-based information and field data to build a digital twin that better fits the purpose and matches the business and technical requirements.
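A toy sketch of this middle ground is given below: a deliberately incomplete physics-based model is augmented with a data-driven discrepancy term fitted to synthetic ‘field data’. The physics model, the form of the correction and the data are all invented for illustration and stand in for the more sophisticated hybrid techniques cited above.

```python
import numpy as np

rng = np.random.default_rng(1)

def physics_model(speed):
    # Simplified physics-based prediction (for example, an analytical load
    # model); deliberately missing a higher-order effect.
    return 2.0 * speed

# Synthetic 'field data' from a truth process that the physics model does
# not fully capture, plus measurement noise.
speed = np.linspace(0.0, 10.0, 40)
field_data = 2.0 * speed + 0.15 * speed**2 + rng.normal(0.0, 0.3, speed.size)

# Data-driven discrepancy: fit the residual between data and physics with a
# low-order polynomial (a stand-in for a machine-learning correction).
residual = field_data - physics_model(speed)
discrepancy = np.poly1d(np.polyfit(speed, residual, deg=2))

def hybrid_model(s):
    return physics_model(s) + discrepancy(s)

s_test = 8.0
print(f"physics only : {physics_model(s_test):.2f}")
print(f"hybrid       : {hybrid_model(s_test):.2f}")
print(f"'truth'      : {2.0 * s_test + 0.15 * s_test**2:.2f}")
```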

Figure 3 shows, in the vertical direction, the potential trade-offs between physics-based models and data, and the impact of key requirements. For example, physics-based models are capable of forecasting even with limited historical information, while data-driven models are limited when extrapolating outside of the supplied data. The chosen trade-off will dictate the satisfaction of technical requirements, such as: (1) frequency and latency of evaluation of the digital twin; (2) accuracy over time and update frequency; (3) visualization and query capabilities; and (4) data collection and computational requirements during development, validation and operation.

Fig. 3: Digital twin trade-offs.

In the design of digital twins, the type of modeling approach, whether more data driven or physics based, determines the trade-offs among multiple factors, such as cost (of development and operation), trust, evaluation speed and cadence of model update.


A fundamental requirement to be satisfied by the digital twin solution is the capability to maintain, across the entire life cycle, the association of the digital identity with the physical asset and its configuration, such that the digital twin always represents the correct asset. Techniques for establishing a unique digital signature of a mechanical asset can be based on unique physical characteristics or can rely on manual processes. We note that manual processes are prone to errors and not always compatible with business processes during operation.

When moving into implementation, the digital twin must also satisfy architectural and deployment requirements. The related operational framework is a key aspect that, if not already present, needs to be conceived and designed. In some autonomous or real-time aerospace systems, access to cloud infrastructures is infeasible and the entire operational process (from data acquisition to computation, forecast and action) must be executed on edge or embedded devices. In contrast, if a cloud infrastructure is adopted, the communication bandwidth and latency requirements, from the field to the cloud, must be met to enable the flow of data with the required frequency. These deployment requirements drastically limit the space of possible digital twin solutions. An ever-increasing concern that deserves a deeper analysis is the cybersecurity of the entire solution: altering the outputs or the data inputs of the digital twin may have severe business and safety consequences; as such, security requirements must be part of the digital twin design from its initial conception.

The critical role of uncertainty quantification in a digital twin

As mentioned in the previous section, business trust in the digital twin forecast is one of the most critical requirements to be demonstrated. Currently, this is tackled with a pragmatic approach of extensive testing and system validation. While this may be acceptable and feasible for many use cases, for others a more formal approach is needed, in which the uncertainty of the forecast is quantified on a rigorous basis. Uncertainty quantification via formalized methods, such as the probabilistic graphical modeling approach12, implies considerable modeling, simulation and testing effort; in current practices, formal uncertainty quantification is typically not conducted until the end of the design phase. Sensitivity analysis is a powerful yet underutilized aspect of uncertainty quantification. Instead, preference is given to collecting more and more data, even though doing so does not necessarily improve the accuracy and trust of the digital twin. The incorporation of more formal uncertainty quantification approaches, including sensitivity analysis, is an important direction for future digital twin development. There are important standardization efforts in this area13, aligning terminologies across stakeholders and providing common frameworks for uncertainty quantification. This is an important area of research in which future progress is needed to increase trust and enable broader deployment of digital twin solutions, especially for safety-critical systems.
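As a minimal sketch of forward uncertainty propagation with a crude sensitivity indicator, the example below pushes two uncertain inputs through a hypothetical performance model and apportions the output variance between them; formal approaches (for example, Sobol indices or the probabilistic graphical models of ref. 12) would be used in practice.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

def performance_model(thrust_coeff, drag_coeff):
    # Hypothetical scalar quantity of interest predicted by the digital twin.
    return 100.0 * thrust_coeff - 40.0 * drag_coeff**2

# Input uncertainties (illustrative distributions).
thrust = rng.normal(1.00, 0.02, n)
drag = rng.normal(0.50, 0.05, n)

q = performance_model(thrust, drag)
print(f"quantity of interest: mean = {q.mean():.2f}, std = {q.std():.2f}")

# Crude one-at-a-time sensitivity: output variance when only one input is
# uncertain and the other is held at its nominal value.
var_thrust_only = performance_model(thrust, 0.50).var()
var_drag_only = performance_model(1.00, drag).var()
total = var_thrust_only + var_drag_only
print(f"share of variance from thrust_coeff: {var_thrust_only / total:.2f}")
print(f"share of variance from drag_coeff  : {var_drag_only / total:.2f}")
```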

Final remarks

Digital twins have already been demonstrated to bring value in mechanical and aerospace systems, speeding up development, reducing risk, predicting issues and driving reduced sustainment costs. While early digital twin deployments have been successful, realizing the full potential benefits at scale requires a more structured and intentional approach to digital twin conception, design, development, operation and sustainment. We advocate for the value of considering the digital twin as an asset in its own right, and approaching its development with the same principles of cost–benefit–risk trade-offs that are employed in the design and development of physical assets. While we have focused on digital twins in mechanical and aerospace engineering, this Perspective is also highly relevant to digital twins in other domains, where a similarly intentional approach to digital twin development would prove valuable. Lastly, we note that a discussion of workforce gaps and opportunities is beyond the scope of this Perspective, but remains a critically important consideration in achieving the full benefits of digital twins.