Is It Novel and Why? Fine-Grained Patent Novelty Prediction Based on Passage Retrieval

arXiv cs.CL / 5/5/2026


Key Points

  • The paper argues that novelty prediction for patents should move beyond claim-level binary classification because it can rely on spurious correlations and lacks the feature-level granularity needed for real examination.
  • It introduces FiNE-Patents, a dataset of 3,658 first patent claims with fine-grained, feature-level prior-art passage references extracted from European Search Opinion (ESOP) documents.
  • The proposed task reframes novelty assessment as a joint retrieval and abstract-reasoning problem: models must find passages that disclose specific claim features and determine which features make the claim novel.
  • The authors implement LLM-based workflows that decompose claims into features, check each feature against prior art, and then aggregate results into a claim-level novelty prediction.
  • Experiments show the workflows outperform embedding-based baselines on both passage retrieval and novel-feature identification, and that LLMs are more robust than trained classifiers to the spurious correlations in claim-level classification; the dataset and code are released to support further research.

Abstract

Novelty assessment is a critical yet complex task in the examination process for patent acceptance, requiring examiners to determine whether an invention is disclosed in a prior art document. The process involves intricate matching between specific features of a patent claim and passages in the prior art. While prior work has approached novelty prediction primarily as a binary classification task at the claim level, we argue that this formulation is susceptible to spurious correlations and lacks the granularity required for practical application. In this work, we introduce FiNE-Patents (Fine-grained Novelty Examination of Patents), a novel dataset comprising 3,658 first patent claims annotated with fine-grained, feature-level prior art references extracted from European Search Opinion (ESOP) documents. We propose shifting the evaluation paradigm from simple binary classification to a joint retrieval and abstract reasoning task at the feature level, requiring models to identify specific passages from a prior art document that disclose individual claim features, and to identify which features of a claim make it novel. We implement and evaluate LLM-based workflows that decompose claims into features, analyze each feature against prior art, and finally derive a claim-level novelty prediction. Our experiments demonstrate that these workflows outperform embedding-based baselines on passage retrieval and novel feature identification. Furthermore, we show that unlike trained classifiers, LLMs are robust against spurious correlations present in the claim-level novelty classification task. We release the dataset and code to foster further research into transparent and granular patent analysis.