Using Analogical Reasoning to Prompt LLMs for their Intuitions of Abstract Spatial Schemas

1LMU Munich 2CISUC, University of Coimbra 3Munich Center for Machine Learning (MCML)

Abstract

Abstract notions are often comprehended through analogies, in which they correspond to, or partially resemble, more concrete concepts. A fundamental aspect of human cognition involves synthesising embodied experiences into spatial schemas, which profoundly influence conceptualisation and underlie language acquisition. Recent studies have demonstrated that Large Language Models (LLMs) exhibit certain spatial intuitions akin to human language. For instance, both humans and LLMs tend to associate ↑ with 'hope' more readily than with 'warn'. However, the nuanced partial similarities between concrete (e.g., ↑) and abstract (e.g., hope) concepts remain insufficiently explored. Therefore, we propose a novel methodology utilising analogical reasoning to elucidate these associations and examine whether LLMs adjust their associations in response to analogical prompts. We find that analogy prompting slightly increases agreement with human choices, and that the models' answers include valid explanations supported by analogies, even when they disagree with the human results.

Example Inference

Results


BibTeX

@inproceedings{wicke2024using,
  author    = {Wicke, Philipp and Hirlimann, Lea and Cunha, Joao Miguel},
  title     = {Using Analogical Reasoning to Prompt LLMs for their Intuitions of Abstract Spatial Schemas},
  booktitle = {Workshop Proceedings of the First Workshop on Analogical Abstraction in Cognition, Perception, and Language (Analogy-ANGLE), co-located with IJCAI},
  year      = {2024},
}