Although widely used statistical and simulation models are excellent methods for data analysis, they have limitations in analyzing the complex real world. Statistical analysis can build high-fidelity models through calibration with big data, but it is difficult to implement a wide range of models (Richardson, 2015): as a model grows more complex, programming becomes harder and analytical calculations consume excessive time. Simulation can model and analyze a broad world based on real data and theory. Although simulation models have advantages in verification, validation, and optimization, they depend on distributional assumptions and are not easy to adjust during run-time (Liu et al., 2021).
Consider the analysis of a logistics supply chain. The models implement a large-scale logistics system that not only reflects the various processes of supply chain management (SCM) but also captures precise demand patterns and raw-material supply conditions. Simulation-based analysis makes it possible to build, verify, and optimize a model of the entire SCM, but it is difficult to include detailed parts such as failures of logistics equipment (conveyors, palletizers, and forklifts). These failures can become bottlenecks that affect the entire SCM, yet simulation models typically reflect such incidents as random events based on statistical theory; for example, conveyor failures may be assumed to occur randomly according to a Normal(5, 10) distribution. A statistical model, in contrast, can analyze the failures of logistics equipment with high realism using big data. However, as the SCM is extended in scope, the complexity becomes so large that building such a model is very difficult.
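To make the contrast concrete, the following minimal sketch (ours, not taken from the cited studies) shows how such a failure assumption is typically coded in a simulation. We read Normal(5, 10) here as mean 5 and standard deviation 10 in unspecified time units, an assumption on our part, and reject non-positive draws.

```python
import random

def conveyor_failure_epochs(n_failures, mean=5.0, sd=10.0, seed=42):
    """Sample cumulative failure times for a conveyor, assuming
    inter-failure gaps ~ Normal(mean, sd); non-positive gaps are
    redrawn, since the gap between failures must be positive."""
    rng = random.Random(seed)
    epochs, clock = [], 0.0
    for _ in range(n_failures):
        gap = rng.gauss(mean, sd)
        while gap <= 0:
            gap = rng.gauss(mean, sd)
        clock += gap
        epochs.append(round(clock, 2))
    return epochs

print(conveyor_failure_epochs(5))  # e.g., five scheduled failure events
```

Such an assumption is convenient but detached from the equipment's actual failure history, which is precisely the gap that the statistical, data-driven approach fills.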
A digital twin (DT) can simulate a broad world with high fidelity. DT does not refer to a specific technology; it is a concept realized by fusing many existing cutting-edge technologies. A DT becomes a replica of reality by implementing the real world as closely as possible and calibrating the model with data. It was originally proposed as a concept to support decision-making in the design phase of a product (Grieves and Vickers, 2017), but it is now used as an analysis tool across the total life cycle (Tao et al., 2019). Using a DT has the advantage of making anomalous events or unknown phenomena understandable (Tao et al., 2018), but two major issues remain unresolved in academia.
First, it is difficult to integrate multiple scales, multiple physics, and their interfaces. DT studies should integrate models across lifecycle stages, taking into account various levels of detail and all relevant disciplines (Boschert and Rosen, 2016). The total life of a product comprises four phases: design, manufacturing, service, and retirement (Liu et al., 2021). The units of the various influence factors, such as people, equipment, and systems, all differ, which makes it difficult to build an integrated model that accounts for their interfaces and protocols across the total lifecycle.
Second, owing to these integration difficulties, a widely used standardized process for DT modeling has not yet been established. Existing DT studies adopt whichever tools are easiest to implement for each phase of the total lifecycle, and many tools, such as Predix, ANSYS, Bluemix, and MindSphere, are in use. Most past DT studies have connected these multiple models only through simple input/output links (Liu et al., 2021). A standardized process is therefore needed that can fundamentally resolve the mutual influence of different disciplines, different times and spaces, and different formats and protocols.
To solve these two problems, this study proposes constructing a digital twin using system dynamics (SD). SD can implement a complex world as feedback loops composed of root causes (Sterman, 2010). Data of multiple scales from heterogeneous physics residing on various platforms can be integrated, and multiple time horizons can be controlled, which makes it easy to build an integrated model. The latest SD tools have dramatically increased realism by overcoming the limitations of traditional simulation models, such as distributional assumptions, and they also provide methods for analyzing the system model analytically. In particular, WinBUGS-based MCMC can be used within a system dynamics model, and interworking with the programming languages R and Python is supported. Application during run-time has become possible and data calibration has become easy, resulting in very high fidelity (Richardson, 2015). In other words, a DT model built with SD can support data of different formats and protocols as well as multi-disciplinary interfaces. Since SD models for each phase of the total life have already been studied extensively, a DT model can be built by integrating them.
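As a minimal illustration of these two ingredients, a feedback-loop model and MCMC calibration, the hypothetical Python sketch below builds a one-stock SD model whose balancing loop drives the stock toward a target, then calibrates the loop's gain parameter against noisy observations with a plain Metropolis sampler. All names, parameter values, and the sampler itself are our illustrative assumptions, not the tooling (e.g., WinBUGS) referenced above.

```python
import math
import random

def simulate_stock(gain, target=100.0, stock0=20.0, dt=0.25, steps=40):
    """One-stock SD model: a balancing feedback loop in which the flow
    closes the gap between the stock and its target (Euler integration)."""
    stock, path = stock0, []
    for _ in range(steps):
        flow = gain * (target - stock)  # root cause: the gap drives the flow
        stock += flow * dt
        path.append(stock)
    return path

def log_likelihood(gain, data, sigma=2.0):
    """Gaussian log-likelihood of an observed trajectory for a candidate gain."""
    sim = simulate_stock(gain)
    return -sum((o - s) ** 2 for o, s in zip(data, sim)) / (2 * sigma**2)

def metropolis(data, n_iter=2000, step=0.02, seed=1):
    """Plain Metropolis sampler over the single parameter `gain`."""
    rng = random.Random(seed)
    gain, samples = 0.5, []
    ll = log_likelihood(gain, data)
    for _ in range(n_iter):
        cand = gain + rng.gauss(0.0, step)
        if cand > 0:  # keep the gain physically meaningful
            ll_cand = log_likelihood(cand, data)
            if rng.random() < math.exp(min(0.0, ll_cand - ll)):
                gain, ll = cand, ll_cand
        samples.append(gain)
    return samples

# Synthetic "sensor" data from a true gain of 0.3 plus observation noise.
rng = random.Random(0)
observed = [s + rng.gauss(0.0, 2.0) for s in simulate_stock(0.3)]
posterior = metropolis(observed)
print("posterior mean gain ~", sum(posterior[500:]) / len(posterior[500:]))
```

In a DT setting, such a posterior could be refreshed as new operational data arrive, which is the sense in which run-time calibration keeps the twin's fidelity high.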
Therefore, in this study, we propose constructing a DT model using SD. The proposed method is applied to the operation and maintenance system of Republic of Korea (ROK) Navy ships, which corresponds to the service phase of the total life. The service phase is generally known to be the most difficult to implement, because the service targets are decentralized and it is hard to consider every form of utilization in various environments (reliability, convenience, real-time operation status, maintenance strategy, etc.). Through the model-building process proposed in this study, we confirm the process of integrating the multiple scales and multiple disciplines of multi-physics, and we identify anomalous events and unknown latent effects.
The remainder of this paper consists of six sections, and the title of each section (Fig. 1) corresponds to a step of the proposed SD-based DT construction process. After selecting the target system, we explore and analyze the root causes; various methods, such as statistical modeling (B-splines, Bayesian estimation, phase-type distribution fitting, etc.) and simulation, are applied in this analysis (Section 2). The dynamic variables related to the analyzed root causes are implemented as system dynamics models (Section 3), and each model built in Section 3 is applied as one module of the integrated model. In Section 4, the model is integrated and validated according to the causal relationships between the modules. The integrated DT model is then simulated and analyzed to identify potential problems and latent effects (Section 5). Section 6 summarizes the results and limitations of the study and suggests directions for future research.