Kyanna Dagenais and I are honored to receive the ACM SIGSOFT Distinguished Paper Award at the ACM/IEEE 28th International Conference on Model Driven Engineering Languages and Systems (MODELS) for our work “Complex Model Transformations by Reinforcement Learning with Uncertain Human Guidance” [preprint].
In this work, we present a method and technical framework for guiding reinforcement learning (RL) agents through uncertain human advice to infer complex model transformation chains. Our approach exploits early-emerging expert intuitions through a trade-off between the certainty and timeliness of human advice. This, in turn, improves efficiency and scalability in key model-driven engineering (MDE) problems, such as model synchronization, model consistency management, and design-space exploration (DSE).
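As a rough illustration of the core idea (not the algorithm from the paper), uncertain advice can be folded into an RL agent's action selection by weighting the expert's suggestion by their stated certainty. The minimal Python sketch below is entirely my own framing; the function name, interface, and the simple probabilistic blending are assumptions for illustration only.

```python
import random

def select_action(q_values, advice_action=None, certainty=0.0, epsilon=0.1):
    """Pick an action, optionally biased by uncertain human advice.

    q_values: dict mapping action -> learned Q-value
    advice_action: action suggested by a human expert (hypothetical input)
    certainty: the expert's self-reported confidence, in [0, 1]
    epsilon: exploration rate for the agent's own epsilon-greedy policy
    """
    # With probability `certainty`, follow the human's suggestion;
    # low-certainty advice therefore only nudges the agent.
    if advice_action is not None and random.random() < certainty:
        return advice_action
    # Otherwise fall back to standard epsilon-greedy selection.
    if random.random() < epsilon:
        return random.choice(list(q_values))  # explore
    return max(q_values, key=q_values.get)    # exploit learned values
```

Under this toy scheme, advice given early with low certainty still shifts exploration toward promising regions, while high-certainty advice dominates the choice, which is one simple way to read the certainty/timeliness trade-off.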
Many (if not most) of us in this field believe that to meet the need for rigorous yet efficient engineering methods, we must blend traditional engineering paradigms (such as MDE) with machine learning and AI. This work is an important step towards such engineering methods, with a very clear focus on the efficient externalization of valuable tacit knowledge.
This recognition is especially meaningful because the first author is my fantastic student, Kyanna, a promising young researcher who authored this paper earlier in 2025, while still an undergraduate. (She has since joined our Master’s program.) Kyanna spent countless late evenings in the lab working on the technical implementation and the experiments, eventually producing a very nice piece of work. The paper builds directly on her previous work, which earned her First Prize at the ACM Student Research Competition (SRC) at MODELS 2024.
Thank you for your perseverance and drive. You motivate us all.
We’d like to thank the three anonymous reviewers who recognized the value of this work and provided very constructive feedback, and the conference chairs for nominating our work for the award!

Preprint available here: https://arxiv.org/abs/2506.20883.
Abstract. Model-driven engineering problems often require complex model transformations (MTs), i.e., MTs that are chained in extensive sequences. Pertinent examples of such problems include model synchronization, automated model repair, and design space exploration. Manually developing complex MTs is an error-prone and often infeasible process. Reinforcement learning (RL) is an apt way to alleviate these issues. In RL, an autonomous agent explores the state space through trial and error to identify beneficial sequences of actions, such as MTs. However, RL methods exhibit performance issues in complex problems. In these situations, human guidance can be of high utility. In this paper, we present an approach and technical framework for developing complex MT sequences through RL, guided by potentially uncertain human advice. Our framework allows user-defined MTs to be mapped onto RL primitives, and executes them as RL programs to find optimal MT sequences. Our evaluation shows that human guidance, even if uncertain, substantially improves RL performance, and results in more efficient development of complex MTs. Through a trade-off between the certainty and timeliness of human advice, our method takes a step towards RL-driven human-in-the-loop engineering methods.
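To make the “MTs mapped onto RL primitives” idea from the abstract a bit more tangible, here is a minimal sketch of how user-defined transformations could be exposed as an RL action space. This is an illustration under my own assumptions, not the framework’s actual API; every class, method, and parameter name here is hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelTransformation:
    """A user-defined MT viewed as an RL action (illustrative only)."""
    name: str
    precondition: Callable[[object], bool]  # is the MT applicable to this model?
    apply: Callable[[object], object]       # rewrite the model

class MTEnvironment:
    """Treats a set of user-defined MTs as the action space of an RL problem."""

    def __init__(self, initial_model, transformations, score):
        self.model = initial_model
        self.actions = transformations
        self.score = score  # objective on models, e.g., a consistency metric

    def step(self, action_index):
        """Apply the chosen MT and return (model, reward, done)."""
        mt = self.actions[action_index]
        if not mt.precondition(self.model):
            return self.model, -1.0, False  # penalize inapplicable MTs
        self.model = mt.apply(self.model)
        reward = self.score(self.model)
        done = reward >= 1.0                # e.g., model fully consistent
        return self.model, reward, done
```

In a setup like this, an MT chain is simply the sequence of actions the agent learns to take, and the reward function encodes the engineering goal (consistency, repair, or a DSE objective).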