Kyanna Dagenais, a research assistant in our lab, just had her paper “Towards Model Repair by Human Opinion-Guided Reinforcement Learning” accepted at the ACM Student Research Competition (SRC), co-located with the ACM/IEEE 27th International Conference on Model Driven Engineering Languages and Systems (MODELS).
The acceptance comes with an invitation to present a poster at the conference.
Congratulations to Kyanna on the fantastic work.
Abstract. Model repair often entails long sequences of model transformations. Finding the correct model repair sequence is challenging, and its complexity increases with the number of model transformations involved in the repair sequence. In realistic, longitudinally extensive modelling settings, the same model repair scenarios might be encountered repeatedly, providing an excellent opportunity to learn the most appropriate repair actions through reinforcement learning (RL). While this idea has been explored in MDE before, the efficiency of RL-based methods in long repair sequences is still an open challenge. In this paper, we assess the ability of RL-based methods to support particularly long model repair sequences and improve performance through human guidance in the form of opinions: cognitive constructs that are subject to uncertainty but also emerge earlier than hard evidence can be produced. Our findings indicate that opinion-guided RL significantly improves the performance of model repair, even with moderately uncertain human opinions.
Preprint. Check back for the preprint at the end of July.