Researchers from MIT, the MIT-IBM Watson AI Lab, and elsewhere have developed a technique called PockEngine that enables deep-learning models to efficiently and continuously learn from new user data directly on an edge device like a smartphone. The technique determines which parts of a huge machine-learning model need to be updated to improve accuracy, and then stores and computes only those specific pieces. Compared with other methods, PockEngine sped up on-device training significantly, running up to 15 times faster on some hardware platforms, without any dip in accuracy.
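To make the "update only selected pieces" idea concrete, here is a minimal PyTorch sketch of partial fine-tuning on new data. The model, the choice of which layers to unfreeze, and the placeholder data are hypothetical stand-ins for illustration; this is not PockEngine's actual algorithm or API.

```python
import torch
import torch.nn as nn

# A small stand-in model (hypothetical); on a device this would be a pretrained network.
model = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 10),
)

# Freeze everything, then re-enable gradients only for the pieces deemed worth updating
# (here, the final layer plus all biases, chosen purely for illustration).
for p in model.parameters():
    p.requires_grad = False
for name, p in model.named_parameters():
    if name.startswith("4.") or name.endswith("bias"):
        p.requires_grad = True

# The optimizer tracks only the trainable subset, so optimizer state is stored
# (and gradients are computed) just for those tensors.
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# One training step on a batch of "new user data" (random tensors as placeholders).
x = torch.randn(8, 32)
y = torch.randint(0, 10, (8,))
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()   # gradients flow only to the unfrozen parameters
optimizer.step()
```

Because most parameters stay frozen, the backward pass and optimizer state touch only a small fraction of the model, which is what makes this style of on-device training cheap in memory and compute.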
