Revisiting Semantic Role Labeling: Efficient Structured Inference with Dependency-Informed Analysis

arXiv cs.CL / 5/5/2026


Key Points

  • The paper revisits Semantic Role Labeling (SRL) with a focus on structured, explicit predicate–argument representations rather than relying on the more implicit semantics typical of many LLM-based approaches.
  • It proposes a modern encoder-based SRL framework that preserves explicit structure while achieving inference up to 10× faster than typical prior implementations.
  • Using BERT-base, the method reaches comparable predictive quality, and swapping in RoBERTa or DeBERTa further improves F1 scores within the same structured framework.
  • The authors introduce a dependency-informed diagnostic and representation-level analysis to show that dependency cues mainly enhance structural stability.
  • They also demonstrate a downstream use case where the explicit predicate–argument structure can support multilingual SRL projection.

Abstract

Semantic Role Labeling (SRL) provides an explicit representation of predicate-argument structure, capturing linguistically grounded relations such as who did what to whom. While recent NLP progress has been dominated by large language models (LLMs), these systems often rely on implicit semantic representations, lacking explicit structural constraints and systematic explanatory mechanisms. Traditionally, SRL systems have often relied on AllenNLP; however, that framework entered maintenance mode in December 2022, limiting compatibility with evolving encoder architectures and modern inference requirements. We revisit structured SRL modeling, introducing a modernized encoder-based framework that preserves explicit predicate-argument structure while enabling inference 10 times faster. Using BERT-base, the model attains comparable predictive performance, and RoBERTa and DeBERTa further improve F1 performance within the same framework. We adopt a dependency-informed diagnostic methodology to characterize span-level inconsistencies and conduct a representation-level analysis of LLM behavior under dependency-informed structural signals. Results indicate that dependency cues primarily improve structural stability. Finally, we illustrate how the framework's explicit predicate-argument structure can support multilingual SRL projection as a downstream application.
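The explicit predicate-argument structure the abstract describes is conventionally represented as BIO-tagged spans over tokens (e.g., ARG0 for the agent, V for the predicate, ARG1 for the patient). The paper's own implementation details are not given here, but a minimal illustrative sketch of how BIO tags decode into labeled argument spans, using hypothetical tag sequences in PropBank-style notation, might look like this:

```python
def bio_to_spans(tags):
    """Decode a BIO tag sequence into (label, start, end) argument spans.

    `end` is exclusive. Inconsistent I- tags (no matching open span)
    simply close any open span, a common lenient decoding choice.
    """
    spans = []
    start, label = None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-"):
            if start is not None:          # close the previous span
                spans.append((label, start, i))
            start, label = i, tag[2:]      # open a new span
        elif tag.startswith("I-") and label == tag[2:]:
            continue                       # extend the current span
        else:                              # "O" or a mismatched I- tag
            if start is not None:
                spans.append((label, start, i))
            start, label = None, None
    if start is not None:                  # span running to sentence end
        spans.append((label, start, len(tags)))
    return spans

# "The cat chased the mouse", predicate "chased" (illustrative tags)
tags = ["B-ARG0", "I-ARG0", "B-V", "B-ARG1", "I-ARG1"]
print(bio_to_spans(tags))  # [('ARG0', 0, 2), ('V', 2, 3), ('ARG1', 3, 5)]
```

Span-level inconsistencies of the kind the paper's dependency-informed diagnostic targets (e.g., an I- tag whose label disagrees with the open span) surface in exactly this decoding step.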
