Medical researchers say their ability to revolutionize cancer treatments is tantalizingly close, if they could just find more efficient and cost-effective ways to sequence the human genome, essentially the process of reading and analyzing the unique alphabet soup of each person’s DNA.

With more than 23.6 million new cases of cancer expected worldwide by 2030, pathology samples available for medical research are growing. When pathology samples are gene-sequenced and matched with genome-specific data on successful cancer treatments, researchers can generate suggestions for the most effective treatments for individual patients. Oncologists can then offer targeted therapies with a higher likelihood of success and lower risk of side effects.

The roadblock to such medical marvels is computing. Right now, there are frustrating limits on the speeds for moving and processing data. It takes 20 hours to sequence a single human pathology sample and analyze it against other DNA samples, and a course of cancer treatment entails many samples. To sequence and analyze the pathology samples from the worldwide cases expected by 2030, researchers would need storage equivalent to three million of today’s highest-capacity computer drives, together with proportional computational horsepower.
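The scale of that storage claim can be sanity-checked with a back-of-envelope calculation. The per-sample data volume and drive capacity below are illustrative assumptions chosen only to show the order of magnitude, not figures from the article or the center:

```python
# Back-of-envelope check of the storage scale. The per-sample data volume
# and drive capacity are illustrative assumptions, not reported figures.
CASES_BY_2030 = 23.6e6     # projected worldwide cancer cases (from the article)
TB_PER_SAMPLE = 2.5        # assumed raw sequencing data per pathology sample
TB_PER_DRIVE = 20.0        # assumed capacity of one high-end drive

total_tb = CASES_BY_2030 * TB_PER_SAMPLE      # total storage needed, in TB
drives_needed = total_tb / TB_PER_DRIVE       # number of drives at that capacity

print(f"{total_tb / 1e6:.0f} exabytes, roughly {drives_needed / 1e6:.2f} million drives")
```

Under these assumed numbers the total lands in the tens of exabytes, on the order of the three million drives the article describes.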

Enter the University of Virginia-led, 11-university Center for Research in Intelligent Storage and Processing in Memory, or CRISP.

Center investigators are designing a new architecture that pairs processing and memory in a single unit. By housing data at or near the processor, the architecture can dramatically increase both the volume of data handled and the speed at which it is processed. The resulting acceleration is aimed at fueling breakthroughs, like targeted cancer treatments, across many global challenges.

UVA Engineering was tapped in 2018 to spearhead the $29.7 million center and remove the “memory wall,” a term coined by UVA computer science professor emeritus William Wulf and his then-graduate student Sally McKee in 1994. The memory wall results from two issues: an outdated computing architecture that physically separates processors from memory, and the fact that processors can compute far faster than data can be delivered to them.
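One common way to reason about the memory wall is the roofline model: a computation is "memory-bound" when it performs too few operations per byte moved to keep the processor busy. The sketch below illustrates the idea; the hardware numbers are purely illustrative assumptions, not measurements of any real system:

```python
# Roofline-style illustration of the memory wall. A kernel is memory-bound
# when its arithmetic intensity (operations per byte moved) falls below the
# machine balance (peak compute rate divided by memory bandwidth).
# Hardware numbers here are illustrative assumptions only.
PEAK_FLOPS = 1e12        # assumed peak compute: 1 TFLOP/s
MEM_BANDWIDTH = 100e9    # assumed memory bandwidth: 100 GB/s

machine_balance = PEAK_FLOPS / MEM_BANDWIDTH   # FLOPs per byte needed to stay busy

def attainable_flops(intensity):
    """Attainable performance for a kernel with the given FLOPs/byte."""
    return min(PEAK_FLOPS, intensity * MEM_BANDWIDTH)

# A streaming kernel like y[i] = a*x[i] + y[i] does 2 FLOPs per 24 bytes
# moved (read x, read y, write y, with 8-byte doubles): intensity ~0.083,
# so the processor sits mostly idle, waiting on memory.
saxpy_intensity = 2 / 24
print(attainable_flops(saxpy_intensity) / PEAK_FLOPS)  # fraction of peak achieved
```

Moving computation into or next to memory attacks exactly this gap: it raises the effective bandwidth so that low-intensity kernels no longer starve the processor.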

Researchers are in the second year of the five-year grant cycle and are focusing their efforts across three simultaneous tracks: hardware design, system software support and application case studies. The case studies help drive the work and accelerate progress toward research milestones.

UVA Engineering’s Department of Computer Science and Charles L. Brown Department of Electrical and Computer Engineering are exploring several potential solutions, along with the supporting software. UVA is collaborating with Cornell University, Pennsylvania State University, University of California at Los Angeles, University of California at San Diego, University of California at Santa Barbara, Stanford University, University of Illinois at Urbana-Champaign, University of Washington, University of Wisconsin and University of Pennsylvania.

“Working with experts from other universities, as well as industry and government, is moving us toward solutions to this problem faster than otherwise possible,” said Kevin Skadron, Harry Douglas Forsyth Professor and chair of computer science and leader of the center. “We are two years into important work that is the foundation for the way we will process big data in the future.”

UVA Engineering hosted 113 attendees at the second-annual meeting of the lead investigators and graduate student researchers at UVA in November. The gathering created a forum for lead investigators to share updates and for graduate student researchers to present posters.

Industry sponsors and government partners also participated, including representatives from the Defense Advanced Research Projects Agency. The agency is part of the Department of Defense and is focused on emerging technologies, recognizing the urgency for computational processing power that extends past the current architecture.

The Semiconductor Research Corporation is the North Carolina-based consortium of scientists and engineers from technology companies, universities and government agencies that funds and oversees the five-year Joint University Microelectronics Program. The Center for Research in Intelligent Storage and Processing in Memory is one of six major initiatives under the program.

Todd Younkin, executive director of the Joint University Microelectronics Program at the Semiconductor Research Corporation, stressed the importance of the research, which he said will ensure that computer systems can keep up with the rapid growth in data and continue the pace of innovation that benefits society through improvements in health care, science, technology and many other applications.

“Center-scale collaboration and industry/academic partnerships dramatically accelerate the pace of discovery and technology transfer,” Younkin said. “Regular meetings, student exchanges among universities and internships help further accelerate the collaborations.”

Center researchers will continue to meet each year for the next three years. Skadron emphasized that the vision of the center going forward is to identify new techniques, focus on solving selected challenges, and build future processing prototypes along with the supporting software.

Skadron said the effort to speed up the 20-hour process of sequencing a human genome and matching it to successful cancer treatments is a powerful example of how the center’s work will benefit medical research.

“Our recent results show that we can speed up sequence alignment from 20 hours to less than a second, and we believe that further speed-ups of 100 times or more are possible with the new architectures center researchers are developing,” said Tajana Rosing, professor in the Department of Computer Science and Engineering at the University of California at San Diego. 
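At its core, sequence alignment is a dynamic-programming computation over long strings, exactly the kind of data-hungry kernel that processing-in-memory targets. As a rough illustration only (production aligners use scoring schemes such as Smith-Waterman or Needleman-Wunsch, not this simplified form), a minimal edit-distance kernel looks like:

```python
def edit_distance(a: str, b: str) -> int:
    """Minimum number of single-character edits turning a into b.
    A toy stand-in for real alignment kernels, which score matches,
    mismatches and gaps rather than counting plain edits."""
    prev = list(range(len(b) + 1))              # distances from "" to prefixes of b
    for i, ca in enumerate(a, start=1):
        curr = [i]                              # distance from a[:i] to ""
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,        # delete a character from a
                            curr[j - 1] + 1,    # insert a character into a
                            prev[j - 1] + cost))  # substitute (or match)
        prev = curr
    return prev[-1]

print(edit_distance("GATTACA", "GATTACA"))  # identical reads
print(edit_distance("ACGT", "ACGA"))        # a single substitution
```

Every cell of the dynamic-programming table depends only on its neighbors, so the kernel moves far more data than it computes on, which is why architectures that bring computation to the data can accelerate it so dramatically.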

This advancement would enable treatment decisions to be made in a single office visit rather than multiple appointments. It would also put the computing infrastructure for such state-of-the-art treatments within reach of smaller hospitals.

The savings in treatment costs are hard to estimate in future dollars, but could be significant. At the beginning of the last decade, the National Institutes of Health projected that medical expenditures for cancer in the year 2020 would reach at least $158 billion, based on 2010 dollars.

“This one example highlights the importance of our collaboration with other universities, across multiple disciplines, to remove the memory wall,” Skadron said. “Industry and government are working with us to realize the incredible breakthroughs that can occur with large data sets. All sectors of our economy and society will benefit.”