Microsoft has developed new tools to make generative AI safer to use: filters that block malicious prompts (such as prompt-injection and jailbreak attempts), detectors that flag ungrounded outputs not supported by source material, and evaluations that assess a model's safety. Together, these tools aim to mitigate the risk of prompt attacks and support responsible AI practices.
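To illustrate the idea behind prompt filtering, here is a minimal, purely illustrative sketch: a naive keyword-based check that flags jailbreak-style phrasing in a user prompt. This is not Microsoft's implementation (their tools rely on trained classifiers served behind an API); the pattern list and function name below are hypothetical.

```python
import re

# Hypothetical examples of jailbreak-style phrasing; a real filter
# would use a trained classifier, not a fixed pattern list.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"pretend (you are|to be)",
    r"jailbreak",
    r"reveal (the|your) system prompt",
]

def flag_prompt(prompt: str) -> bool:
    """Return True if the prompt matches a known jailbreak-style pattern."""
    text = prompt.lower()
    return any(re.search(pattern, text) for pattern in SUSPICIOUS_PATTERNS)
```

In practice, a flagged prompt would be blocked or routed for review before ever reaching the model, which is the general shape of the mitigation these tools provide.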