A new research paper warns that future generations of AI models may end up basing their answers largely on data created by earlier models, a feedback loop that could ultimately spiral into incomprehensible output. The paper, which is yet to be peer-reviewed, calls this phenomenon "model collapse". Its authors, a group of British and Canadian scientists, caution that as an increasing amount of AI-generated content is published online, future AIs trained on this material may produce garbled, low-quality results. Generative AI services such as ChatGPT and Bard are powered by Large Language Models (LLMs), while tools like DALL-E rely on related large generative models; all of them require vast amounts of training data.
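The feedback loop described above can be sketched with a toy simulation. This is not the paper's methodology, just an illustrative analogy: the "model" here is simply a Gaussian fit (a mean and a standard deviation), and each "generation" is fitted only to samples drawn from the previous generation. The sample size and generation count are arbitrary assumptions chosen to make the effect visible.

```python
import random
import statistics

# Toy illustration of the "model collapse" feedback loop: each
# generation's "model" is fitted only to data generated by the
# previous generation's model. All parameters below are illustrative
# assumptions, not values taken from the research paper.
random.seed(42)                 # fixed seed for reproducibility

mean, stdev = 0.0, 1.0          # generation 0: the "real" data distribution
SAMPLES_PER_GEN = 5             # small samples make the drift visible quickly
GENERATIONS = 200

for _ in range(GENERATIONS):
    # Generate synthetic data from the current model...
    data = [random.gauss(mean, stdev) for _ in range(SAMPLES_PER_GEN)]
    # ...then fit the next generation's model to that synthetic data alone.
    mean = statistics.fmean(data)
    stdev = statistics.stdev(data)

# The fitted spread shrinks toward zero: later generations "forget"
# the diversity of the original distribution.
print(f"final stdev after {GENERATIONS} generations: {stdev:.2e}")
```

Because each generation sees only a finite sample of the previous one's output, estimation noise compounds and the tails of the original distribution are progressively lost, which is the intuition behind the paper's warning about training on AI-generated text.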
