Meta Platforms, Inc. has developed a new deep learning application called Image Decoder that translates brain activity into images of what a subject is looking at or thinking of in near real time. The technology is built on Meta's open source foundation model DINOv2 and is being developed by researchers at the Facebook Artificial Intelligence Research lab (FAIR) and PSL University in Paris. The Image Decoder system combines two fields: magnetoencephalography (MEG), which measures and records brain activity, and deep learning, which translates that activity into images.

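At a high level, this kind of decoder is typically trained by aligning MEG recordings with the image embeddings produced by a pretrained vision model such as DINOv2, after which a separate generative model turns the predicted embedding into a picture. The PyTorch sketch below illustrates only that alignment step with a CLIP-style contrastive loss; the sensor count, window length, network architecture, and embedding size are illustrative assumptions, not Meta's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative shapes (assumptions, not Meta's configuration): a MEG window with
# 270 sensor channels and 180 time samples, aligned to a 384-dim image embedding.
N_SENSORS, N_TIMES, EMBED_DIM = 270, 180, 384


class MEGEncoder(nn.Module):
    """Maps a window of MEG sensor data into the image-embedding space."""

    def __init__(self, n_sensors=N_SENSORS, embed_dim=EMBED_DIM):
        super().__init__()
        self.temporal = nn.Sequential(
            nn.Conv1d(n_sensors, 256, kernel_size=5, padding=2),
            nn.GELU(),
            nn.Conv1d(256, 256, kernel_size=5, padding=2),
            nn.GELU(),
            nn.AdaptiveAvgPool1d(1),  # pool over the time axis
        )
        self.head = nn.Linear(256, embed_dim)

    def forward(self, meg):  # meg: (batch, n_sensors, n_times)
        features = self.temporal(meg).squeeze(-1)  # (batch, 256)
        return F.normalize(self.head(features), dim=-1)


def contrastive_loss(brain_emb, image_emb, temperature=0.07):
    """CLIP-style loss: each MEG window should match its own image embedding."""
    logits = brain_emb @ image_emb.t() / temperature
    targets = torch.arange(len(logits))
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2


if __name__ == "__main__":
    encoder = MEGEncoder()
    optimizer = torch.optim.AdamW(encoder.parameters(), lr=1e-4)

    # Stand-ins for real data: a batch of MEG windows and the precomputed, frozen
    # embeddings of the images the subject was viewing at those moments.
    meg_batch = torch.randn(8, N_SENSORS, N_TIMES)
    image_embeddings = F.normalize(torch.randn(8, EMBED_DIM), dim=-1)

    brain_embeddings = encoder(meg_batch)
    loss = contrastive_loss(brain_embeddings, image_embeddings)
    loss.backward()
    optimizer.step()
    print(f"contrastive loss: {loss.item():.3f}")
```

Once trained, the brain encoder's output can be treated as a proxy for the image embedding and handed to an image generator conditioned on that embedding space, which is how the system reconstructs a picture from brain activity alone.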