A recent study found that AI-generated legal outputs often contain fabricated or inaccurate statements, a failure mode known as “hallucination.” Some experts argue that the problem stems from our expectations of AI systems and the ways we use them rather than from any single technical flaw. To mitigate the risk, fact-checking systems should be deployed alongside AI tools so that generated claims and citations are verified before anyone relies on them.
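To make the idea of pairing an AI tool with a fact-checking step concrete, here is a minimal sketch in Python. It assumes a hypothetical trusted citation index (`TRUSTED_CITATIONS`) and a deliberately naive citation pattern; a real system would query an authoritative legal database and use a proper citation parser. This is an illustration of the general approach, not the method described in the study.

```python
import re

# Hypothetical trusted index of known case citations. In practice this would
# be a lookup against an authoritative legal database, not a hard-coded set.
TRUSTED_CITATIONS = {
    "Brown v. Board of Education, 347 U.S. 483 (1954)",
    "Miranda v. Arizona, 384 U.S. 436 (1966)",
}

# Naive pattern for "Name v. Name, volume U.S. page (year)" citations;
# real citation parsing is considerably more involved.
CITATION_PATTERN = re.compile(r"[A-Z][\w.'-]+ v\. [A-Z][\w.'-]+, \d+ U\.S\. \d+ \(\d{4}\)")


def flag_unverified_citations(ai_output: str) -> list[str]:
    """Return citations found in the AI output that are not in the trusted index."""
    found = CITATION_PATTERN.findall(ai_output)
    return [citation for citation in found if citation not in TRUSTED_CITATIONS]


if __name__ == "__main__":
    draft = (
        "The court relied on Miranda v. Arizona, 384 U.S. 436 (1966) and "
        "Smith v. Jones, 123 U.S. 456 (1999)."  # the second citation is invented
    )
    for citation in flag_unverified_citations(draft):
        print(f"UNVERIFIED: {citation}")
```

Flagged citations would then go to a human reviewer rather than being accepted automatically, which keeps the AI tool in a drafting role while the verification step catches hallucinated authorities.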