Our paper, DEVS Model Construction as a Reinforcement Learning Problem, co-authored with Eugene Syriani, has been accepted at the 2022 Annual Modeling and Simulation Conference (ANNSIM) in San Diego, California. A pre-print is available.
Simulators are crucial components in many software-intensive systems, such as cyber-physical systems and digital twins. The inherent complexity of such systems renders the manual construction of simulators an error-prone and costly endeavor, and automation techniques are much sought after. However, current automation techniques are typically tailored to a particular system and cannot be easily transposed to other settings. In this paper, we propose an approach for the automated construction of simulators that overcomes this limitation, based on the inference of Discrete Event System Specification (DEVS) models by reinforcement learning. Reinforcement learning allows inferring knowledge about the construction process of the simulator, rather than inferring the simulator itself. This, in turn, fosters reuse across different systems. DEVS further improves the reusability of this knowledge, as the vast majority of simulation formalisms can be efficiently translated to DEVS. We demonstrate the performance and generalizability of our approach on an illustrative example implemented in Python and Tensorforce.
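To give a flavor of how model construction can be framed as a reinforcement learning problem, here is a minimal sketch using the Tensorforce 0.6-style API mentioned in the abstract. Everything in it is illustrative rather than the paper's actual formulation: the fixed-size encoding of the model under construction, the set of edit actions, and the reward signal are all placeholder assumptions.

```python
import numpy as np
from tensorforce import Agent, Environment


class DEVSConstructionEnv(Environment):
    """Toy environment: the agent incrementally edits an (abstractly encoded)
    DEVS model. The encoding, actions, and reward below are illustrative only."""

    NUM_FEATURES = 16   # assumed size of the model-state encoding
    NUM_EDIT_OPS = 6    # assumed number of construction actions (e.g. add state, add transition)

    def states(self):
        return dict(type='float', shape=(self.NUM_FEATURES,))

    def actions(self):
        return dict(type='int', num_values=self.NUM_EDIT_OPS)

    def reset(self):
        # Start from an "empty" model encoding.
        self._model = np.zeros(self.NUM_FEATURES, dtype=np.float32)
        return self._model

    def execute(self, actions):
        # Placeholder dynamics: a real implementation would apply the chosen
        # edit operation to the DEVS model and simulate the result.
        self._model[int(actions) % self.NUM_FEATURES] += 1.0
        # Dummy reward: negative distance to an arbitrary target behavior.
        reward = -float(np.abs(self._model.sum() - 8.0))
        terminal = reward > -0.5
        return self._model, bool(terminal), reward


environment = Environment.create(
    environment=DEVSConstructionEnv, max_episode_timesteps=50)
agent = Agent.create(agent='ppo', environment=environment, batch_size=10)

# Standard Tensorforce act/observe training loop.
for _ in range(100):
    states = environment.reset()
    terminal = False
    while not terminal:
        actions = agent.act(states=states)
        states, terminal, reward = environment.execute(actions=actions)
        agent.observe(terminal=terminal, reward=reward)
```

The key design point carried over from the abstract is that the agent learns a construction policy (which edit to apply next), not the simulator itself, so the learned knowledge can, in principle, be reused across different target systems.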