Researchers from MIT, the MIT-IBM Watson AI Lab, and elsewhere have developed a technique called PockEngine that enables deep-learning models to efficiently adapt to new sensor data directly on an edge device. This on-device training method determines which parts of a huge machine-learning model need to be updated to improve accuracy, and only stores and computes with those specific pieces. Compared to other methods, PockEngine significantly sped up on-device training, running up to 15 times faster on some hardware platforms, without causing any dip in model accuracy. The technique can enable better privacy, lower costs, customization, and lifelong learning.
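To illustrate the general idea of updating only selected pieces of a model, the sketch below shows partial fine-tuning in PyTorch: most of a network is frozen and only a small subset of weights is trained, so gradients and optimizer state exist only for that subset. This is a minimal, hypothetical example of the broader concept, not PockEngine's actual selection logic or API; the model and the choice of which layer to update are placeholders.

```python
import torch
import torch.nn as nn

# Hypothetical small pretrained model standing in for a much larger one.
model = nn.Sequential(
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 10),
)

# Freeze everything, then re-enable gradients only for the parts chosen
# for updating (here, simply the final layer as a placeholder choice).
for param in model.parameters():
    param.requires_grad_(False)
for param in model[-1].parameters():
    param.requires_grad_(True)

# The optimizer tracks only the trainable subset, so optimizer state and
# gradient storage are needed only for those weights.
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# One adaptation step on a batch of new sensor data (dummy tensors here).
x = torch.randn(32, 128)
y = torch.randint(0, 10, (32,))
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()   # gradients flow only to the unfrozen final layer
optimizer.step()
```

The point of the sketch is the effect described in the article: because only the selected pieces are stored and computed with during training, both memory use and computation per update shrink, which is what makes adaptation feasible on an edge device.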
