Researchers from the Technical University of Darmstadt and the University of Bath conducted experiments to test claims that large language models (LLMs) can teach themselves new and potentially dangerous tricks. They found that LLMs only appear to be picking up new skills; what looks like an emergent ability is in fact the product of in-context learning, model memory, and general linguistic knowledge. Users should keep these limitations in mind when interacting with chatbots powered by LLMs.