This thesis explores the potential of over-the-air computation (AirComp) as a new wireless communication method for distributed machine learning (ML) services. Compared to traditional communication schemes, AirComp promises substantial gains in energy, latency, and spectrum efficiency. However, AirComp introduces errors into the aggregation process, which degrade the convergence rate and the region of optimality of ML algorithms. This thesis presents methods to reduce these errors and analyzes their effects on ML performance.
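To make the trade-off concrete, the following is a minimal Python sketch (not from the thesis) of AirComp-style aggregation in a distributed ML setting: devices transmit their local gradients simultaneously, the wireless channel superimposes (sums) the analog signals, and receiver noise perturbs the aggregate. The device count, gradient dimension, and noise variance below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: K devices, each holding a local gradient of dimension d.
K, d = 10, 5
local_grads = rng.normal(size=(K, d))

# Ideal (error-free) aggregation: the server wants the average gradient.
ideal_avg = local_grads.mean(axis=0)

# AirComp-style aggregation: simultaneous analog transmissions are summed
# by the channel, and the receiver observes that sum plus additive noise.
noise_power = 0.1  # assumed receiver noise variance
received = local_grads.sum(axis=0) + rng.normal(scale=np.sqrt(noise_power), size=d)
aircomp_avg = received / K  # rescale the noisy sum back to an average

# The residual aggregation error is the perturbation that can slow
# convergence and shift the region of optimality of the ML algorithm.
error = np.linalg.norm(aircomp_avg - ideal_avg)
print("ideal average:   ", ideal_avg)
print("AirComp average: ", aircomp_avg)
print("aggregation error:", error)
```

The point of the sketch is that AirComp collapses many uplink transmissions into one channel use (the efficiency gain), while the channel noise shows up directly as error in the aggregated model update (the cost the thesis seeks to mitigate).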