
We Can't Understand AI Using our Existing Vocabulary

John Hewitt, Robert Geirhos, Been Kim

2025-02-17


Summary

This paper argues that humans need to coin new words, called neologisms, to communicate with AI more precisely and to understand how it works. These new words can help bridge the gap between human and machine concepts.

What's the problem?

AI models and humans think in very different ways, and our existing vocabulary isn't precise enough to fully explain or control what a model is doing. This makes it hard to interpret a model's decisions or to tell it exactly what we want.

What's the solution?

The researchers propose coining neologisms: new words that name specific concepts shared between humans and machines. For example, they introduce a 'length neologism' to control how long AI responses are and a 'diversity neologism' to make the model give more varied answers. A good neologism strikes a useful level of abstraction: detailed enough to convey precise information, but not so specific that it only works in one situation.
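To make the idea concrete, here is a minimal toy sketch (an illustration, not the paper's actual training code) of the core mechanism a neologism implies: a brand-new token gets its own trainable embedding, learned against some objective, while every pre-existing word embedding stays frozen. The vocabulary, dimensions, and objective below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8
vocab = {"short": 0, "long": 1}           # pretend pre-trained, frozen vocabulary
emb = rng.normal(size=(len(vocab), dim))  # pretend pre-trained embeddings

# Add the hypothetical neologism "<len:short>" as a fresh, trainable row.
vocab["<len:short>"] = len(vocab)
emb = np.vstack([emb, np.zeros(dim)])
frozen = emb[:-1].copy()                  # snapshot to verify nothing else moves

# Toy objective: pull the neologism toward the "short" direction, standing in
# for gradients from a real loss (e.g. rewarding shorter model responses).
target = emb[vocab["short"]]
lr = 0.5
for _ in range(100):
    grad = emb[-1] - target               # gradient of 0.5 * ||x - target||^2
    emb[-1] -= lr * grad                  # update ONLY the new embedding row

print(np.allclose(emb[-1], target, atol=1e-3))  # neologism learned its meaning
print(np.allclose(emb[:-1], frozen))            # old vocabulary untouched
```

Freezing the original embeddings is what keeps the rest of the model's behavior intact: the new word gains a meaning without redefining any existing one.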

Why it matters?

This matters because it could make AI easier to control and understand. By developing a shared vocabulary between humans and machines, we can improve how we interact with AI, making it more reliable and effective in tasks like communication, decision-making, and problem-solving.

Abstract

This position paper argues that, in order to understand AI, we cannot rely on our existing vocabulary of human words. Instead, we should strive to develop neologisms: new words that represent precise human concepts that we want to teach machines, or machine concepts that we need to learn. We start from the premise that humans and machines have differing concepts. This means interpretability can be framed as a communication problem: humans must be able to reference and control machine concepts, and communicate human concepts to machines. Creating a shared human-machine language through developing neologisms, we believe, could solve this communication problem. Successful neologisms achieve a useful amount of abstraction: not too detailed, so they're reusable in many contexts, and not too high-level, so they convey precise information. As a proof of concept, we demonstrate how a "length neologism" enables controlling LLM response length, while a "diversity neologism" allows sampling more variable responses. Taken together, we argue that we cannot understand AI using our existing vocabulary, and expanding it through neologisms creates opportunities for both controlling and understanding machines better.