Method to Compress Machine Learning Models Earns International Conference on Communications Best Paper Award

Cong Shen, assistant professor of electrical and computer engineering at the University of Virginia School of Engineering, earned the best paper award in signal processing for communications at the International Conference on Communications—the annual flagship event of the Institute of Electrical and Electronics Engineers Communications Society.

Shen demonstrated a way to shrink machine learning models as they are stored on devices and exchanged over a wireless network. The method can compress a machine learning model to one-sixteenth its original size while preserving learning performance.

With this method, wireless devices like smartphones can train machine learning models while keeping the training data on the device, which protects data privacy, reduces vulnerability to attack and lowers the overhead of moving large data files from devices to a central server.
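The article doesn't detail the team's exact compression scheme, but a one-sixteenth size reduction is what you get by quantizing standard 32-bit floating-point model weights down to 2 bits each. The sketch below illustrates that arithmetic with simple uniform quantization; the function names and the specific quantizer are assumptions for illustration, not the authors' method.

```python
import numpy as np

def quantize_2bit(weights):
    """Uniformly quantize float32 weights to 2-bit codes (4 levels).

    Returns integer codes plus the (min, step) pair needed to dequantize.
    Sending 2-bit codes instead of 32-bit floats is a 16x reduction,
    ignoring the negligible overhead of the two scalars.
    """
    w_min, w_max = weights.min(), weights.max()
    levels = 4  # 2 bits -> 2**2 quantization levels
    step = (w_max - w_min) / (levels - 1)
    codes = np.round((weights - w_min) / step).astype(np.uint8)
    return codes, w_min, step

def dequantize(codes, w_min, step):
    """Reconstruct approximate weights from the 2-bit codes."""
    return w_min + codes.astype(np.float32) * step

# Toy example: quantize a small weight vector and reconstruct it
w = np.array([-0.5, -0.1, 0.0, 0.3, 0.5], dtype=np.float32)
codes, w_min, step = quantize_2bit(w)
w_hat = dequantize(codes, w_min, step)
```

Each reconstructed weight is within half a quantization step of the original, which is the kind of bounded error that makes it possible to compress aggressively while preserving learning performance.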

Shen collaborated with Xiang Chen, associate professor in the School of Electronics and Information Technology at Sun Yat-sen University, Guangzhou, China. Chen’s Ph.D. student Sihui Zheng first-authored the team’s paper, “Design and Analysis of Uplink and Downlink Communications for Federated Learning,” published in full in the IEEE Journal on Selected Areas in Communications.

“We are among the first to both theoretically and experimentally prove that you can achieve the same federated learning performance with a significant compression of the machine learning models between your cell phone and a central server,” Shen said.

Shen seeks to balance learning performance and communication efficiency by tailoring signal processing and communication techniques to a popular machine learning paradigm called federated learning. Federated learning leverages the massive network of devices and the rich data they possess to train algorithms to find useful patterns and anomalies while satisfying privacy and security constraints. Rather than uploading data to a server, federated learning sends the model to devices at the network’s edge, which train it locally and return only model updates over a series of communication rounds.
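One round of this paradigm can be sketched with federated averaging, the standard federated learning algorithm: each device takes a local training step on its private data, and the server averages the returned weights. This is a generic toy illustration (a least-squares objective on synthetic data), not the specific system described in the paper.

```python
import numpy as np

def local_update(global_weights, local_data, lr=0.1):
    """One local gradient step on a device's private least-squares data.

    Stands in for real on-device training; the raw data (X, y) never
    leaves the device.
    """
    X, y = local_data
    grad = X.T @ (X @ global_weights - y) / len(y)
    return global_weights - lr * grad

def federated_round(global_weights, devices):
    """One round of federated averaging: each device trains locally,
    and only the updated weights are sent back and averaged."""
    updates = [local_update(global_weights, data) for data in devices]
    return np.mean(updates, axis=0)

# Toy setup: three devices, each holding private data for y = 2x
rng = np.random.default_rng(0)
devices = []
for _ in range(3):
    X = rng.normal(size=(20, 1))
    devices.append((X, 2.0 * X[:, 0]))

w = np.zeros(1)
for _ in range(200):
    w = federated_round(w, devices)
# w converges toward the true coefficient 2.0
```

In a wireless deployment, the weight vectors exchanged in each round are exactly what compression methods like Shen's target, since the uplink and downlink transmissions dominate the communication cost.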

Transferring artificial intelligence applications and services to the billions of devices connected to the internet—often called edge intelligence or edge computing—is a focal point for UVA Engineering faculty in machine learning, signal and image processing, and communications, a research strength of the Charles L. Brown Department of Electrical and Computer Engineering.

“Professor Shen is a rising star at the confluence of wireless communications and machine learning research. We are proud of this and his many other achievements, which exemplify our commitment to game-changing innovations in AI for engineering and engineering for AI,” Nikolaos Sidiropoulos, Louis T. Rader Professor and department chair, said.

The research Shen presented is an early, tangible outcome of his partnership with Jie Xu, assistant professor at the University of Miami. A grant from the National Science Foundation’s Communications, Circuits, and Sensing-Systems program supports their design of a novel communication system that incorporates the unique characteristics of federated learning.

Shen’s Ph.D. student Xizixiang Wei, who also works on federated learning, designed a novel machine learning algorithm that mitigates the effect of communication noise. Wei’s work was also accepted and presented at the International Conference on Communications.