This paper provides a framework for representing, quantifying, and evaluating safety in AI. It defines safety with a logic-based approach rather than a numerical one, which enables efficient training of safe-by-construction deep reinforcement learning policies. The paper also discusses the advantages of this approach over traditional shielding techniques, such as the ability to monitor the agent's actions and provide corrective feedback to the agent during training.
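To make the idea concrete, the following is a minimal sketch of a logic-based safety monitor wrapped around an RL agent's action selection. The predicate `is_safe`, the state and action names, and the `step` helper are hypothetical illustrations, not the paper's actual framework or API: the point is only that the policy is queried over the logically admissible action set, so the composed agent is safe by construction.

```python
import random

def is_safe(state, action):
    """Hypothetical logical safety predicate: True iff taking `action`
    in `state` cannot violate the safety specification."""
    return not (state == "near_edge" and action == "forward")

def safe_actions(state, actions):
    """Restrict the action set to the subset the predicate admits."""
    return [a for a in actions if is_safe(state, a)]

def step(policy, state, actions):
    """Query the policy only over the safe subset, so any policy
    wrapped this way never emits an unsafe action."""
    allowed = safe_actions(state, actions)
    return policy(state, allowed)

# Example: even a random policy, once wrapped, never picks "forward"
# in the "near_edge" state.
policy = lambda state, allowed: random.choice(allowed)
print(step(policy, "near_edge", ["forward", "back", "stay"]))
```

Because the safety check is a logical predicate evaluated per action rather than a numerical penalty added to the reward, violations are ruled out at decision time instead of merely discouraged during learning.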