AI Navigate

Unpacking Interpretability: Human-Centered Criteria for Optimal Combinatorial Solutions

arXiv cs.AI / 3/11/2026

Ideas & Deep Analysis

Key Points

  • The study investigates what makes one solution to a combinatorial optimization problem more interpretable to humans when multiple solutions are equally optimal.
  • An experimental setup with human participants showed that interpretability preferences correspond to alignment with a greedy heuristic, simple item composition within bins, and ordered visual representation.
  • The strongest factors influencing interpretability were ordered representation and heuristic alignment, with compositional simplicity also playing a significant role.
  • Reaction time data indicated faster choices mainly when heuristic differences were pronounced, while gaze data did not reliably correlate with complexity.
  • These insights provide actionable criteria to design and present machine-generated optimal solutions in a more interpretable manner, facilitating better human-algorithm collaboration and balancing optimality with interpretability in real-world tasks.
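The study finds that solutions aligned with a greedy heuristic read as more interpretable. The summary does not name the specific heuristic used, but a classic greedy strategy for bin packing is first-fit decreasing; a minimal sketch, assuming that family of heuristic, looks like this:

```python
def first_fit_decreasing(items, capacity):
    """Greedy bin packing (first-fit decreasing): place each item,
    largest first, into the first bin with enough remaining capacity,
    opening a new bin when none fits.
    NOTE: illustrative only -- the paper's exact heuristic is not
    specified in this summary."""
    bins = []  # each bin is a list of item sizes
    for item in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:
            bins.append([item])
    return bins


# Example: six items, bin capacity 10
print(first_fit_decreasing([7, 5, 4, 4, 2, 2], 10))
# → [[7, 2], [5, 4], [4, 2]]
```

A solution produced this way is, per the study's framing, a natural candidate baseline: the more a presented optimal solution resembles such a greedy packing, the easier participants found it to understand.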


arXiv:2603.08856 (cs)
[Submitted on 9 Mar 2026]

Title: Unpacking Interpretability: Human-Centered Criteria for Optimal Combinatorial Solutions

Abstract: Algorithmic support systems often return optimal solutions that are hard to understand. Effective human-algorithm collaboration, however, requires interpretability. When machine solutions are equally optimal, humans must select one, but a precise account of what makes one solution more interpretable than another remains missing. To identify structural properties of interpretable machine solutions, we present an experimental paradigm in which participants chose which of two equally optimal solutions for packing items into bins was easier to understand. We show that preferences reliably track three quantifiable properties of solution structure: alignment with a greedy heuristic, simple within-bin composition, and ordered visual representation. The strongest associations were observed for ordered representations and heuristic alignment, with compositional simplicity also showing a consistent association. Reaction-time evidence was mixed, with faster responses observed primarily when heuristic differences were larger, and aggregate webcam-based gaze did not show reliable effects of complexity. These results provide a concrete, feature-based account of interpretability in optimal packing solutions, linking solution structure to human preference. By identifying actionable properties (simple compositions, ordered representation, and heuristic alignment), our findings enable interpretability-aware optimization and presentation of machine solutions, and outline a path to quantify trade-offs between optimality and interpretability in real-world allocation and design tasks.
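The abstract describes the three properties as quantifiable but does not give the metrics. As a hedged illustration, two of them could be operationalized as follows; the function names and definitions here are hypothetical stand-ins, not the paper's actual measures:

```python
from statistics import mean

def within_bin_simplicity(bins):
    """Hypothetical simplicity score: mean number of distinct item
    sizes per bin (fewer distinct sizes = simpler composition).
    The paper's exact metric is not given in the abstract."""
    return mean(len(set(b)) for b in bins)

def is_ordered_representation(bins):
    """Hypothetical ordering check: bins displayed in non-increasing
    order of total load, items within each bin in non-increasing size."""
    loads = [sum(b) for b in bins]
    return loads == sorted(loads, reverse=True) and all(
        list(b) == sorted(b, reverse=True) for b in bins
    )


solution = [[4, 4, 2], [3, 3]]
print(within_bin_simplicity(solution))      # → 1.5
print(is_ordered_representation(solution))  # → True
```

Scores like these would let an optimizer break ties among equally optimal packings in favor of the more interpretable one, which is the "interpretability-aware optimization" direction the abstract outlines.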
Subjects: Human-Computer Interaction (cs.HC); Artificial Intelligence (cs.AI)
Cite as: arXiv:2603.08856 [cs.HC]
  (or arXiv:2603.08856v1 [cs.HC] for this version)
  https://doi.org/10.48550/arXiv.2603.08856

Submission history

From: Dominik Pegler
[v1] Mon, 9 Mar 2026 19:18:52 UTC (2,248 KB)