From Recency Bias to Stable Convergence: Block Kaczmarz Methods for Online Preference Learning in Matchmaking Applications
arXiv cs.LG / 4/14/2026
Key Points
- The paper introduces Kaczmarz-based online preference learning algorithms designed for real-time personalized matchmaking in reciprocal recommender systems.
- It shows that post-step L2 normalization used in Kaczmarz-inspired online learners creates an exponential recency bias, making older interactions effectively vanish after only a small number of swipes.
- To address this, the authors replace the normalization step with a Tikhonov-regularized projection denominator that bounds step size analytically while preserving interaction history.
- The work further proposes an adaptive variant for cases where candidate tag vectors are not pre-normalized, producing per-candidate step sizes via the ||a||^2 + alpha denominator (both update rules are sketched after this list).
- In a large-scale simulation (6,400 swipes), BlockNK is reported to achieve the best preference alignment and direction stability under label noise, and candidate filtering improves asymptotic alignment while introducing a potential feedback-loop risk.
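
The summary does not spell out the exact BlockNK update, but the contrast it describes between a post-step-normalized Kaczmarz learner and a Tikhonov-regularized one can be sketched in a few lines of NumPy. Everything below is an illustrative assumption rather than the paper's implementation: the function names, the ±1 swipe labels, the step size `eta`, the regularization weight `alpha`, and the toy swipe stream are all chosen only to make the two update rules concrete.

```python
import numpy as np

def normalized_step(w, a, y, eta=1.0):
    """Kaczmarz-style residual step followed by post-step L2 normalization.

    This is the baseline the paper criticizes: re-normalizing w after every
    swipe rescales the accumulated history, which the summary describes as
    an exponential recency bias.
    """
    w = w + eta * (y - a @ w) * a          # project toward a @ w = y (||a|| = 1 assumed)
    return w / np.linalg.norm(w)

def regularized_step(w, a, y, alpha=0.1):
    """Tikhonov-regularized projection.

    The (||a||^2 + alpha) denominator bounds the step size analytically
    without renormalizing w, so earlier interactions are preserved. With
    raw (non-normalized) tag vectors the same formula gives the
    per-candidate adaptive step size described above.
    """
    return w + (y - a @ w) / (a @ a + alpha) * a

# Toy swipe stream: like/dislike labels are signs of a hidden preference direction.
rng = np.random.default_rng(0)
d = 16                                     # assumed tag-vector dimension
true_pref = rng.normal(size=d)
true_pref /= np.linalg.norm(true_pref)

w_norm = np.zeros(d)
w_reg = np.zeros(d)
for _ in range(2000):
    a = rng.normal(size=d)
    a /= np.linalg.norm(a)                 # pre-normalized candidate tags
    y = np.sign(a @ true_pref)             # noiseless swipe label
    w_norm = normalized_step(w_norm, a, y)
    w_reg = regularized_step(w_reg, a, y)

cos = lambda u, v: (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
print("alignment with hidden preference, normalized :", cos(w_norm, true_pref))
print("alignment with hidden preference, regularized:", cos(w_reg, true_pref))
```

The final prints compare each learner's cosine alignment with the hidden preference direction. The paper's actual experiments use 6,400 swipes, block updates, label noise, and candidate filtering, none of which this sketch reproduces; the point here is only the structural difference between rescaling w after every swipe and bounding each step through the ||a||^2 + alpha denominator.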