Review and Rebuttal: Zero-shot In-context Adversarial Learning for Improving Research Ideation


Abstract:

Recent studies highlight that advances in Large Language Models (LLMs) have opened up exciting possibilities for scientific discovery, where LLMs can assist researchers in generating novel hypotheses and ideas. In this work, we draw inspiration from Generative Adversarial Networks (GANs) and make the first effort to formalize the concept of zero-shot in-context adversarial learning, implementing it through multi-LLM-agent interactions to improve the research ideation process. Our approach combines the best of both worlds: (1) by making in-context learning adversarial, we better exploit an LLM's vast parametric knowledge; and (2) by keeping adversarial learning in context, we eliminate the need for bi-level optimization through additional model training. To evaluate the quality of the open-ended generation produced by LLMs, we develop a relative quality ranking metric designed to serve as a proxy for human evaluation when human assessment is impractical or costly. Our findings demonstrate that zero-shot in-context adversarial learning significantly enhances idea generation along two dimensions: with GPT-4o, the novelty of generated ideas improved by 21%, and their feasibility increased by 322%. These results underscore the transformative potential of zero-shot in-context adversarial learning for driving innovation and creativity within the research process.
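
To make the two components described above concrete, here is a minimal Python sketch of how a zero-shot in-context adversarial ideation loop and a pairwise relative-ranking proxy might look. The helper `call_llm`, the agent prompts, and the function names (`adversarial_ideation`, `relative_ranking`) are illustrative assumptions, not the exact implementation from the work.

```python
# Minimal sketch of zero-shot in-context adversarial ideation.
# All names and prompts below are hypothetical stand-ins.

def call_llm(prompt: str) -> str:
    """Hypothetical helper that sends `prompt` to an LLM (e.g., GPT-4o)
    and returns its text response; replace with a real API client."""
    raise NotImplementedError

def adversarial_ideation(topic: str, rounds: int = 3) -> str:
    """A generator agent proposes an idea, a critic agent attacks its
    novelty and feasibility, and the generator revises in context.
    No model weights are updated: the 'learning' lives entirely in the
    growing prompt transcript, which is what makes it zero-shot."""
    idea = call_llm(f"Propose a novel, feasible research idea on: {topic}")
    for _ in range(rounds):
        critique = call_llm(
            "You are an adversarial reviewer. Identify the weakest aspects "
            f"of this idea's novelty and feasibility:\n{idea}"
        )
        idea = call_llm(
            "Revise the research idea to address the critique while staying "
            f"on topic.\nIdea: {idea}\nCritique: {critique}"
        )
    return idea

def relative_ranking(ideas: list[str], judge=call_llm) -> list[str]:
    """Hypothetical proxy for human evaluation: a judge LLM votes on every
    pair of ideas, and ideas are ranked by their number of pairwise wins."""
    wins = {i: 0 for i in range(len(ideas))}
    for i in range(len(ideas)):
        for j in range(i + 1, len(ideas)):
            verdict = judge(
                "Which research idea is better overall (novelty and "
                f"feasibility)? Answer 'A' or 'B'.\nA: {ideas[i]}\nB: {ideas[j]}"
            )
            winner = i if verdict.strip().upper().startswith("A") else j
            wins[winner] += 1
    return [ideas[i] for i in sorted(wins, key=wins.get, reverse=True)]
```

Because no gradients flow between the two agents, the generator/critic interplay replaces the bi-level optimization of a conventional GAN with purely prompt-level interaction, consistent with the in-context framing in the abstract.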

Committee:

  • Yu Meng, Committee Chair (CS/SEAS/UVA)
  • Aidong Zhang, Advisor (CS, BME/SEAS, SDS/UVA)
  • Yangfeng Ji (CS/SEAS/UVA)
  • Yen-Ling Kuo (CS/SEAS/UVA)