Explainability is a central concept in the regulation of artificial intelligence because it enables transparency and oversight of complex models. It is widely regarded as supporting democratic accountability and the rule of law, and it appears as a key requirement in AI regulations around the world.