Our paper Complex Model Transformations by Reinforcement Learning with Uncertain Human Guidance, co-authored with Kyanna Dagenais (former undergraduate and incoming graduate student in our lab), has been accepted at the ACM/IEEE 28th International Conference on Model Driven Engineering Languages and Systems (MODELS).
MODELS is a CORE A-ranked conference and the flagship event of the model-driven engineering community. It is always an honor and a special kind of joy to have a manuscript accepted here. This time, I’m especially happy because Kyanna achieved this success as an undergraduate student. Previously, Kyanna developed a new theory and method for guiding reinforcement learning agents through uncertain human advice, which she has now integrated into a state-of-the-art model transformation engine to infer complex model transformation chains. Her approach will foster greater efficiency and scalability in key model-driven engineering areas, such as model synchronization, model consistency management, and design-space exploration.
Kyanna spent countless late evenings in the lab working on the technical implementation and the experiments, eventually producing a very nice piece of work. Congratulations, Kyanna; your success is well-deserved!

Preprint available here: https://arxiv.org/abs/2506.20883.
Abstract. Model-driven engineering problems often require complex model transformations (MTs), i.e., MTs that are chained in extensive sequences. Pertinent examples of such problems include model synchronization, automated model repair, and design space exploration. Manually developing complex MTs is an error-prone and often infeasible process. Reinforcement learning (RL) is an apt way to alleviate these issues. In RL, an autonomous agent explores the state space through trial and error to identify beneficial sequences of actions, such as MTs. However, RL methods exhibit performance issues in complex problems. In these situations, human guidance can be of high utility. In this paper, we present an approach and technical framework for developing complex MT sequences through RL, guided by potentially uncertain human advice. Our framework allows user-defined MTs to be mapped onto RL primitives, and executes them as RL programs to find optimal MT sequences. Our evaluation shows that human guidance, even if uncertain, substantially improves RL performance, and results in more efficient development of complex MTs. Through a trade-off between the certainty and timeliness of human advice, our method takes a step towards RL-driven human-in-the-loop engineering methods.
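For readers curious what “mapping MTs onto RL primitives” might look like in practice, here is a minimal, hypothetical sketch in Python. It is not the paper’s actual framework or API: the transformation names, the advice function, and the environment interface are all illustrative assumptions. The idea is to treat each user-defined MT as an RL action and let uncertain human advice skew exploration toward moves the human believes are useful, in proportion to their stated confidence.

```python
import random
from collections import defaultdict

# Hypothetical sketch only: these toy MT names and the advice weights
# are assumptions for illustration, not the paper's implementation.
ACTIONS = ["fold_attribute", "merge_classes", "pull_up_method"]

def advice_bias(state, action):
    """Uncertain human advice as a weight in [0, 1]:
    1.0 = human is sure this MT helps here, 0.5 = no opinion."""
    # A real system would look up recorded advice; we hardcode one hint.
    if action == "merge_classes":
        return 0.8  # human is fairly, but not fully, confident this helps
    return 0.5

def choose_action(q, state, epsilon=0.2):
    """Epsilon-greedy action selection with exploration skewed by advice."""
    if random.random() < epsilon:
        weights = [advice_bias(state, a) for a in ACTIONS]
        return random.choices(ACTIONS, weights=weights)[0]
    return max(ACTIONS, key=lambda a: q[(state, a)])

def train(env, episodes=500, alpha=0.1, gamma=0.9):
    """Tabular Q-learning over MT sequences. `env` is assumed to expose
    reset() -> state and step(action) -> (next_state, reward, done)."""
    q = defaultdict(float)
    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            action = choose_action(q, state)
            nxt, reward, done = env.step(action)
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            # Standard Q-learning update toward the advised trajectory
            q[(state, action)] += alpha * (reward + gamma * best_next
                                           - q[(state, action)])
            state = nxt
    return q
```

In a sketch like this, fully confident advice (weights near 1.0 or 0.0) effectively prunes the action space, while intermediate weights merely nudge exploration, which loosely mirrors the trade-off between the certainty and timeliness of human advice that the abstract alludes to.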