A study from MIT neuroscientists has found that deep neural networks, computational models trained to recognize objects or words, often respond the same way to images or words that bear no resemblance to the target. When the models were prompted to generate an image or a word that they would place in the same category as a given input, such as a picture of a bear, most of what they produced was unrecognizable to human observers. The findings offer researchers a new way to evaluate how well these models mimic the organization of human sensory perception.
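The core idea, synthesizing a new input that a model classifies the same way as a reference input, can be sketched in miniature. The toy below is an illustrative assumption, not the authors' actual networks or procedure: it uses a random linear classifier over 64-dimensional "images" and gradient ascent on the input to drive it into the reference's category, showing that the result can match the model's category while bearing no resemblance to the reference.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained classifier: 3 classes over 64-pixel inputs.
# (Hypothetical; the study used deep networks for vision and audition.)
W = rng.normal(size=(3, 64))

def predict(x):
    """Return the class the toy model assigns to input x."""
    return int(np.argmax(W @ x))

# A reference input, standing in for e.g. a picture of a bear.
reference = rng.normal(size=64)
target = predict(reference)

# Synthesize a new input from near-noise by gradient ascent on the margin
# between the target logit and the strongest competing logit -- the analogue
# of asking the model for something it would place in the same category.
x = rng.normal(size=64) * 0.01
for _ in range(200):
    logits = W @ x
    competitor = int(np.argmax(np.delete(logits, target)))
    competitor = competitor if competitor < target else competitor + 1
    # Gradient of (logits[target] - logits[competitor]) w.r.t. x:
    x += 0.1 * (W[target] - W[competitor])

# The model now puts x in the reference's category...
same_category = predict(x) == target
# ...yet x is essentially uncorrelated with the reference input.
resemblance = float(np.corrcoef(x, reference)[0, 1])
```

In this toy setting, `same_category` comes out true while `resemblance` stays near zero, mirroring the paper's finding that model-generated category members can be unrecognizable to humans.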
