Autonomous systems are increasingly utilised for a wide range of tasks in our society, but their safe and effective operation is not guaranteed: they must account for uncertainty in their environment, and the sensors they rely on are not always precise. A car's parking assistant, for example, may falsely report obstacles in the rain. If these sensors are not flawless, how do we ensure that autonomous systems remain safe and effective? That question is at the heart of the FuRoRe project.
Methods exist to derive runtime monitors and corresponding algorithms from precise models, but no automatic method exists for less precise models. FuRoRe addresses this gap. The project goes back to the drawing board: Can the mathematical problem that underlies runtime monitors for so-called (precise) Markov models be formulated for uncertain Markov models? Where do current algorithms fall short? In addition, the project aims to prevent runtime monitors from becoming complex systems themselves: Can we learn a smaller algorithm that is nearly as effective as the complex one?
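As a rough illustration of what such a monitor computes, consider forward filtering over a hidden Markov model: the monitor maintains a belief about the true state of the world and updates it with every noisy sensor reading. The sketch below is a minimal, hypothetical example and not the FuRoRe method; all states, probabilities, and the observation model are invented for illustration.

```python
# A minimal sketch (not the FuRoRe algorithm): forward filtering as the core
# computation of a runtime monitor over a precise Markov model.
# All states, probabilities, and observation likelihoods are hypothetical.
import numpy as np

# Hidden states: 0 = "path clear", 1 = "obstacle ahead".
T = np.array([[0.9, 0.1],    # precise transition probabilities P(next | current)
              [0.2, 0.8]])
# Observation likelihoods P(sensor reading | state); the noisy sensor sometimes
# reports an obstacle even when the path is clear (rain on the parking sensor).
O = np.array([[0.8, 0.2],    # state "clear":    P(reads clear), P(reads obstacle)
              [0.3, 0.7]])   # state "obstacle": P(reads clear), P(reads obstacle)

def monitor_step(belief, obs):
    """One filtering step: predict with T, then correct with the observation."""
    predicted = belief @ T
    updated = predicted * O[:, obs]
    return updated / updated.sum()

belief = np.array([0.95, 0.05])        # initial belief: almost surely clear
for obs in [1, 1, 0, 1]:               # a stream of noisy sensor readings
    belief = monitor_step(belief, obs)
    print(f"P(obstacle ahead) = {belief[1]:.3f}")
```

With an uncertain Markov model, the transition probabilities in `T` are only known up to bounds, so the monitor can no longer report a single number but must track a range of possible beliefs, and propagating such bounds naively quickly becomes too coarse. This is one reason the algorithms for precise models do not carry over directly, and it is exactly the kind of question FuRoRe investigates.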