Research Improves Predictive Performance of Deep Learning Algorithms that Use Graphs as their Inputs

At some point in your travels, you have likely relied on a smartphone app to plan a road trip or get around traffic jams. And you would be in good company. Almost 80% of smartphone owners say they regularly use navigation aids such as Google Maps or Waze.

These applications rely on machine learning to analyze typical traffic patterns and real-time data to predict what traffic will look like in the near future. But first, traffic conditions need to be represented in a mathematical model that machine learning algorithms can understand: a data structure called a graph, or network, consisting of edges (i.e., roads) and nodes (i.e., intersections).

Jundong Li, assistant professor of electrical and computer engineering at the University of Virginia, develops more effective and efficient deep learning algorithms to learn actionable patterns from graph data. Li holds joint appointments in UVA Engineering’s Department of Computer Science and the University’s School of Data Science.

“Graph data is everywhere around us,” Li said. “Transportation is just one example. Other common uses include social networks, knowledge graphs and scientific collaboration networks.”

The beauty of graph data is that it provides a general language for describing data and their interactions. In traffic prediction, for example, a node could denote a neighborhood or an entire city, and the edges represent the traffic flow between different areas.
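As a rough illustration of the idea (the area names and flow values below are hypothetical, not from the research), a road network can be stored as a graph whose nodes are areas and whose weighted edges carry traffic flow between them:

```python
# A minimal sketch of graph data for traffic prediction:
# nodes are areas, and weighted edges represent traffic flow
# between them. All names and values here are illustrative.
traffic_graph = {
    "downtown": {"airport": 120, "suburb_a": 85},
    "airport":  {"downtown": 110},
    "suburb_a": {"downtown": 90, "suburb_b": 40},
    "suburb_b": {"suburb_a": 35},
}

def neighbors(graph, node):
    """Return the areas directly connected to `node`, with flow volumes."""
    return graph.get(node, {})

print(neighbors(traffic_graph, "downtown"))
# → {'airport': 120, 'suburb_a': 85}
```

The same adjacency structure generalizes to social networks (users and friendships) or collaboration networks (authors and co-authored papers); only the meaning of nodes and edges changes.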

Li and Shuiwang Ji, associate professor of computer science and engineering at Texas A&M, have earned a National Science Foundation grant to improve the predictive performance of deep learning algorithms that use graphs as their inputs. In particular, they are investigating a popular class of deep learning algorithms designed for graphs, called graph convolutional networks. Their goal is to realize the full potential of these deep learning algorithms to advance many graph-related applications.

“People nowadays use different machine learning techniques for different problems. For example, convolutional neural networks are widely used for face recognition, and recurrent neural networks are used for speech recognition. We focus on a different family of deep learning models—graph convolutional networks—that can be used to address many high-impact problems such as traffic prediction, product recommendation, and molecular property prediction,” Li said.

Li and Ji will delve into the building blocks of these deep learning algorithms, analyze the limitations of existing solutions, and design new deep learning architectures to make graph-related prediction tasks more accurate.

“We want to develop more powerful operations that can well characterize the properties of real-world graphs and develop more tailored deep learning architectures in handling graph applications from different domains,” Li said.

Their NSF-funded research will innovate on two basic operations of graph neural networks: convolution and pooling. Both operations take advantage of the interdependencies and connectedness of graph data, but each with a different lens.

The convolution operation is important for node-level learning tasks. Returning to the traffic management problem, convolution operations can be used to predict what will happen at a given intersection by combining information from neighboring roads and intersections. Another useful application is detecting a bot or malicious actor in a social network. In this case, the convolution operation helps distinguish between normal and anomalous network activity, classifying each individual in the network as a normal user or a spammer.
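The neighborhood-aggregation idea behind graph convolution can be sketched in a few lines. This toy version simply averages a node's feature with its neighbors' features; real graph convolutional layers additionally apply learned weights and nonlinearities, which are omitted here:

```python
# A toy sketch of one graph-convolution step: each node's new feature
# is the average of its own feature and its neighbors' features.
# Real GCN layers also use learned weight matrices; this shows only
# the neighborhood-aggregation idea. Values are illustrative.
adjacency = {
    "A": ["B", "C"],
    "B": ["A"],
    "C": ["A"],
}
features = {"A": 1.0, "B": 3.0, "C": 5.0}  # e.g., traffic volume per node

def conv_step(adj, feats):
    """One round of aggregating information from each node's neighborhood."""
    new_feats = {}
    for node, nbrs in adj.items():
        vals = [feats[node]] + [feats[n] for n in nbrs]
        new_feats[node] = sum(vals) / len(vals)
    return new_feats

print(conv_step(adjacency, features))
# Node "A" aggregates from B and C: (1 + 3 + 5) / 3 = 3.0
```

Stacking several such steps lets information propagate across multi-hop neighborhoods, which is what allows a prediction at one intersection to reflect conditions on surrounding roads.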

Pooling operations enable prediction for a whole graph. Predicting the toxicity of a chemical compound is one potential benefit of this type of operation, with significant implications for metabolic engineering and drug discovery. In this example, each node represents an atom and the graph represents the compound's structure. The pooling operation aggregates information about each node (or atom); the whole is qualitatively different from the sum of its parts.
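The aggregation step can be sketched as a simple mean pooling over node embeddings: many per-node vectors collapse into one vector that represents the entire graph. This is a bare illustration with made-up numbers; pooling layers in practice are often learned and can be considerably more sophisticated:

```python
# A toy sketch of graph-level pooling: per-node embeddings are
# aggregated (here by averaging) into a single vector representing
# the whole graph. Node names and values are illustrative.
node_embeddings = {
    "atom_1": [0.2, 0.8],
    "atom_2": [0.6, 0.4],
    "atom_3": [1.0, 0.0],
}

def mean_pool(embeddings):
    """Average all node vectors into one graph-level vector."""
    n = len(embeddings)
    dim = len(next(iter(embeddings.values())))
    return [sum(vec[i] for vec in embeddings.values()) / n for i in range(dim)]

graph_repr = mean_pool(node_embeddings)
print(graph_repr)  # one vector for the whole compound, e.g. fed to a toxicity classifier
```

The resulting graph-level vector is what a downstream classifier would consume to make a prediction about the compound as a whole, such as whether it is likely to be toxic.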

This project grew out of Li and Ji's shared interest in graph mining and deep learning. With their track record in these areas, they are confident that the project will lay a solid foundation for research in deep learning for graphs and accelerate the computational understanding of real-world graphs and related applications.