Measuring Research Convergence in Interdisciplinary Teams Using Large Language Models and Graph Analytics
arXiv cs.AI / 2026/3/24
Key Points
- The paper proposes a multi-layer AI framework that combines large language models, graph analytics, and human-in-the-loop evaluation to measure how interdisciplinary research teams converge on shared knowledge over time.
- It uses LLMs to extract structured research viewpoints mapped to the NABC (Needs-Approach-Benefits-Competition) framework and to infer potential “viewpoint flows” between presenters to create a common semantic foundation.
- The framework supports three complementary analyses: similarity-based qualitative grouping of viewpoints (popular vs. unique), quantitative cross-domain influence using network centrality metrics, and temporal analysis of convergence dynamics.
- To mitigate uncertainty from LLM-based inference, it adds expert validation via structured surveys and cross-layer consistency checks to verify alignment across components.
- A case study on water insecurity research within the Arizona Water Innovation Initiatives shows increasing viewpoint convergence and reveals domain-specific influence patterns, illustrating the framework’s practical value.
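The first layer above uses an LLM to turn free-form presentations into structured NABC viewpoints. A minimal sketch of that extraction step, assuming the LLM is prompted to return JSON (the field names and schema here are illustrative assumptions, not the authors' actual format):

```python
import json
from dataclasses import dataclass

# Hypothetical structured record for one extracted viewpoint, following the
# NABC (Needs-Approach-Benefits-Competition) framing described in the paper.
@dataclass
class Viewpoint:
    presenter: str
    needs: str
    approach: str
    benefits: str
    competition: str

def parse_llm_output(raw: str) -> Viewpoint:
    """Parse an LLM's JSON response into a validated Viewpoint record."""
    data = json.loads(raw)
    required = ("presenter", "needs", "approach", "benefits", "competition")
    missing = [k for k in required if k not in data]
    if missing:
        raise ValueError(f"LLM output missing NABC fields: {missing}")
    return Viewpoint(**{k: data[k] for k in required})

# Example: a mock LLM response for one presentation (invented content).
raw = json.dumps({
    "presenter": "hydrology_team",
    "needs": "reliable groundwater forecasts",
    "approach": "coupled physical-statistical modeling",
    "benefits": "earlier drought warnings",
    "competition": "purely statistical baselines",
})
vp = parse_llm_output(raw)
print(vp.needs)  # reliable groundwater forecasts
```

Validating required fields before constructing the record is one simple way to surface the LLM-inference uncertainty the framework's expert-validation layer is designed to catch.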
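The quantitative and temporal analyses can be illustrated with a toy sketch: in-degree centrality on an inferred viewpoint-flow graph as a proxy for cross-domain influence, and mean pairwise cosine similarity of viewpoint embeddings per time window as a convergence signal. All node names, edges, and vectors below are invented for illustration; the paper's actual centrality metrics and embeddings may differ.

```python
from collections import defaultdict
from itertools import combinations
import math

# Toy "viewpoint flow" graph: an edge (a, b) means a viewpoint from domain a
# appears to influence a later presentation from domain b (LLM-inferred).
flows = [
    ("hydrology", "policy"), ("hydrology", "economics"),
    ("economics", "policy"), ("policy", "engineering"),
]

# In-degree centrality: fraction of other domains flowing *into* each domain,
# a simple proxy for how much cross-domain influence a domain absorbs.
nodes = {n for edge in flows for n in edge}
in_deg = defaultdict(int)
for _, dst in flows:
    in_deg[dst] += 1
centrality = {n: in_deg[n] / (len(nodes) - 1) for n in nodes}
print(max(centrality, key=centrality.get))  # policy

def cosine(u, v):
    """Cosine similarity between two 2-D viewpoint vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def mean_pairwise_similarity(vectors):
    """Average similarity over all viewpoint pairs in one time window."""
    pairs = list(combinations(vectors, 2))
    return sum(cosine(u, v) for u, v in pairs) / len(pairs)

early = [(1.0, 0.0), (0.0, 1.0), (0.7, 0.7)]  # scattered viewpoints
late = [(0.9, 0.4), (0.8, 0.5), (0.7, 0.6)]   # more aligned viewpoints
print(mean_pairwise_similarity(late) > mean_pairwise_similarity(early))  # True
```

Tracking the per-window similarity across successive meetings is one way the "increasing viewpoint convergence" reported in the case study could be operationalized.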

