AI Navigate

Comparison of Outlier Detection Algorithms on String Data

arXiv cs.LG / 3/13/2026


Key Points

  • A new arXiv thesis compares two outlier detection algorithms for string data: a variant of the local outlier factor (LOF) algorithm that uses a weighted Levenshtein distance, and a hierarchical left regular expression learner.
  • The first method adapts LOF to strings by calculating data density with a Levenshtein-based metric that incorporates hierarchical character classes.
  • The second method introduces a hierarchical left regular expression learner that infers a regex representing the expected data to identify anomalies.
  • Experiments across several datasets show that both algorithms can, in principle, detect outliers in string data: the regex-based approach excels when the expected data has a structure clearly distinct from that of the outliers, while the LOF variants perform best when edit distances cleanly separate outliers from the expected data.
  • The work addresses a gap in string data outlier detection and suggests applications in data cleaning and anomaly detection for system log files.
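To make the distance-based idea concrete, here is a minimal sketch of a weighted Levenshtein distance whose substitution cost is cheaper within a character class, plus a simplified k-nearest-neighbour density score standing in for the full LOF computation. The character classes, cost values, and scoring rule are illustrative assumptions, not the hierarchy or weights used in the thesis:

```python
# Sketch: Levenshtein distance with character-class-aware substitution
# costs, in the spirit of the thesis's weighted measure. The two-level
# class hierarchy and the costs below are illustrative, not the paper's.

def char_class(c):
    """Map a character to a coarse class (one level of a hypothetical hierarchy)."""
    if c.isdigit():
        return "digit"
    if c.isalpha():
        return "letter"
    return "other"

def weighted_levenshtein(a, b, same_class_cost=0.5, diff_class_cost=1.0):
    """Edit distance where substituting within a character class is cheaper."""
    m, n = len(a), len(b)
    # dp[i][j] = distance between a[:i] and b[:j]
    dp = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = float(i)
    for j in range(1, n + 1):
        dp[0][j] = float(j)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                sub = 0.0
            elif char_class(a[i - 1]) == char_class(b[j - 1]):
                sub = same_class_cost
            else:
                sub = diff_class_cost
            dp[i][j] = min(dp[i - 1][j] + 1.0,      # deletion
                           dp[i][j - 1] + 1.0,      # insertion
                           dp[i - 1][j - 1] + sub)  # substitution
    return dp[m][n]

def knn_outlier_scores(strings, k=2):
    """Mean weighted distance to the k nearest neighbours -- a crude
    density score, a simplification of the actual LOF computation."""
    scores = []
    for i, s in enumerate(strings):
        d = sorted(weighted_levenshtein(s, t)
                   for j, t in enumerate(strings) if j != i)
        scores.append(sum(d[:k]) / k)
    return scores

print(weighted_levenshtein("2024", "2025"))  # digit-for-digit swap: 0.5
print(weighted_levenshtein("2024", "202x"))  # digit-for-letter swap: 1.0
print(knn_outlier_scores(["2021", "2022", "2023", "banana"]))
```

The string far from its neighbours under the weighted metric ("banana" here) gets a much larger score than the tightly clustered year strings, which is the density intuition LOF formalizes.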

Abstract

Outlier detection is a well-researched and crucial problem in machine learning. However, there is little research on outlier detection for string data, as most of the literature focuses on numerical data. A robust string data outlier detection algorithm could assist with data cleaning or anomaly detection in system log files. In this thesis, we compare two string outlier detection algorithms. First, we introduce a variant of the well-known local outlier factor algorithm, tailored to detect outliers on string data by using the Levenshtein measure to calculate the density of the dataset. We present a weighted Levenshtein measure that considers hierarchical character classes and can be used to tune the algorithm to a specific string dataset. Second, we introduce a new kind of outlier detection algorithm based on the hierarchical left regular expression learner, which infers a regular expression for the expected data. Using various datasets and parameters, we experimentally show that both algorithms can conceptually find outliers in string data. We show that the regular expression-based algorithm is especially good at finding outliers if the expected values have a distinct structure that is sufficiently different from the structure of the outliers. In contrast, the local outlier factor algorithms are best at finding outliers if their edit distance to the expected data is sufficiently distinct from the edit distances among the expected data points.
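As a rough illustration of the regex-based idea, the sketch below generalizes a sample of expected strings into character-class patterns and flags any string that matches none of them. This naive generalization is only a stand-in for the hierarchical left regular expression learner; the thesis's actual inference procedure is not reproduced here:

```python
# Sketch of regex-based outlier detection: learn a pattern from expected
# strings, then flag strings that do not match it. The class-based
# generalization below is a hypothetical simplification of the thesis's
# hierarchical left regular expression learner.
import re

def generalize(s):
    """Turn a string into a regex by collapsing runs of characters into classes."""
    pattern = []
    for c in s:
        if c.isdigit():
            tok = r"\d"
        elif c.isalpha():
            tok = "[A-Za-z]"
        else:
            tok = re.escape(c)
        if not pattern or pattern[-1] != tok:
            pattern.append(tok)
    # allow each class token to repeat, so "404" and "42" share a pattern
    return "^" + "".join(t + "+" if t in (r"\d", "[A-Za-z]") else t
                         for t in pattern) + "$"

def learn_pattern(samples):
    """Union of the generalized patterns over the expected sample."""
    return re.compile("|".join("(?:%s)" % generalize(s) for s in set(samples)))

expected = ["ERROR 404", "ERROR 500", "WARN 301"]   # toy log lines
learned = learn_pattern(expected)

logs = ["ERROR 403", "connection reset by peer", "$$x$$"]
outliers = [s for s in logs if not learned.fullmatch(s)]
print(outliers)  # ['connection reset by peer', '$$x$$']
```

Note the trade-off the key points describe: "ERROR 403" is accepted because it shares the learned structure, while strings with a different structure are flagged regardless of their edit distance to the sample.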