This article discusses the concept of bullshit as it applies to large language models such as ChatGPT. It argues that these models are not designed to represent the world accurately, but rather to produce text that gives the impression of doing so. The article distinguishes between “hard” and “soft” bullshit and suggests that ChatGPT may produce both kinds.
