A Note on How to Remove the $\ln\ln T$ Term from the Squint Bound

arXiv cs.LG / 4/30/2026


Key Points

  • The note explains that the “shifted KT potentials” approach from Orabona and Pál (2016) can be interpreted as modifying the prior used in the Krichevsky–Trofimov (KT) algorithm.
  • It shows that this prior change is equivalent to the shifted-potentials construction, which eliminates the unwanted \(\ln\ln T\) factor from the parameter-free learning-with-experts bound.
  • The author then applies the same underlying idea to the Squint algorithm, showing how to remove the \(\ln\ln T\) factor from its data-independent bound.
  • Overall, the contribution is a technical clarification and transfer of a proof technique across different online learning settings (KT and Squint).

Abstract

In Orabona and Pál [2016], we introduced the shifted KT potentials to remove the \(\ln \ln T\) factor in the parameter-free learning-with-experts bound. In this short technical note, I show that this is equivalent to changing the prior in the Krichevsky--Trofimov algorithm. Then, I show how to use the same idea to remove the \(\ln \ln T\) factor in the data-independent bound for the Squint algorithm.
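To make the "changing the prior" interpretation concrete: the standard KT predictor is the Bayesian mixture over Bernoulli parameters under the Jeffreys prior Beta(1/2, 1/2), so swapping in a different Beta prior changes the predictive probabilities (and hence the resulting potential). The sketch below illustrates this well-known fact; the function names and the choice to parameterize by a general Beta(a, b) are illustrative, not taken from the note.

```python
# Sketch: the KT estimator as a Bayesian mixture with a Beta(a, b) prior.
# The classical Krichevsky--Trofimov predictor is the special case
# a = b = 1/2 (Jeffreys prior); "changing the prior" means choosing
# different a, b, which changes the mixture probabilities below.

def kt_predict(n_ones: int, n_total: int, a: float = 0.5, b: float = 0.5) -> float:
    """Predictive probability that the next bit is 1, given counts so far."""
    return (n_ones + a) / (n_total + a + b)

def sequence_probability(bits, a: float = 0.5, b: float = 0.5) -> float:
    """Probability the Beta(a, b) mixture assigns to a whole binary sequence."""
    prob, n_ones = 1.0, 0
    for t, x in enumerate(bits):
        p1 = kt_predict(n_ones, t, a, b)
        prob *= p1 if x == 1 else (1.0 - p1)
        n_ones += x
    return prob
```

For example, under the Jeffreys prior the sequence `[1, 1, 1]` receives probability (1/2)(3/4)(5/6) = 5/16; a prior with more mass near the endpoints would assign such constant sequences higher probability, which is the lever the shifted-potentials trick exploits.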