DeepMind’s recent report takes up the question of why large language models (LLMs) so often lapse into inaccuracies even when prompted to check their own work. The paper argues that LLMs are not yet capable of self-correcting their reasoning. Self-correction in a broad sense has long been part of the machine learning discipline; the latest development is to use prompts to get a model to go back over the answers it has produced and check whether they are accurate.
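A minimal sketch of that prompting loop, in Python, might look like the following. The model first answers, then is prompted to review and, if needed, revise its own answer. The `call_model` function here is a placeholder for whatever LLM API is being used; it and the prompt wording are assumptions for illustration, not details taken from the paper.

```python
def call_model(prompt: str) -> str:
    # Placeholder: swap in a real LLM client call here.
    raise NotImplementedError("Plug in your LLM client")

def answer_with_self_correction(question: str, rounds: int = 1) -> str:
    # Step 1: get an initial answer from the model.
    answer = call_model(f"Question: {question}\nAnswer:")
    # Step 2: prompt the model to re-read and check its own answer.
    for _ in range(rounds):
        critique_prompt = (
            f"Question: {question}\n"
            f"Your previous answer: {answer}\n"
            "Review your answer for mistakes. If it is wrong, give a corrected "
            "answer; if it is already correct, repeat it unchanged."
        )
        answer = call_model(critique_prompt)
    return answer
```

The point of the report is that, without an external source of ground truth, this second pass often fails to improve the answer; the sketch simply shows the mechanism being evaluated.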
