A Comparative Study of Demonstration Selection for Practical Large Language Models-based Next POI Prediction
arXiv cs.CL / 4/9/2026
Key Points
- The paper compares multiple demonstration selection strategies for next point-of-interest (POI) prediction using large language models (LLMs) and in-context learning (ICL) over historical check-in data.
- It shows that the choice of demonstrations strongly affects ICL effectiveness, motivating a systematic comparison of selection methods rather than relying on arbitrary or single-purpose approaches.
- Across three real-world datasets, simple heuristics such as geographical proximity, temporal ordering, and sequential patterns outperform more complex embedding-based demonstration selection methods in accuracy while incurring far lower computational cost.
- In some cases, LLMs prompted with demonstrations chosen via these heuristics outperform existing fine-tuned models without additional training, suggesting practical deployment advantages.
- The authors release the associated codebase, enabling replication and further experimentation for real-world POI/trajectory prediction systems.
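To make the idea concrete, the geographical-proximity heuristic above can be sketched as ranking a user's historical check-ins by distance to the query location and using the nearest ones as in-context demonstrations. This is a minimal illustration, not the paper's released code; the `history` data, field names, and `select_demonstrations` helper are all hypothetical.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (lat, lon) points in kilometres.
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def select_demonstrations(history, query_loc, k=3):
    """Pick the k historical check-ins geographically closest to the query location."""
    ranked = sorted(
        history,
        key=lambda c: haversine_km(c["lat"], c["lon"], query_loc[0], query_loc[1]),
    )
    return ranked[:k]

# Hypothetical check-in history (POI name plus coordinates).
history = [
    {"poi": "Cafe A", "lat": 40.7128, "lon": -74.0060},
    {"poi": "Museum B", "lat": 40.7794, "lon": -73.9632},
    {"poi": "Park C", "lat": 40.7829, "lon": -73.9654},
    {"poi": "Airport D", "lat": 40.6413, "lon": -73.7781},
]

# Select the two check-ins nearest the user's current position and
# render them as demonstration lines for an LLM prompt.
demos = select_demonstrations(history, query_loc=(40.7812, -73.9665), k=2)
prompt = "\n".join(f"User visited {d['poi']}." for d in demos)
```

Temporal-ordering or sequential-pattern heuristics would only change the sort key (e.g. recency of the check-in instead of distance), which is part of why such selectors are so cheap compared with embedding-based retrieval.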