
A Thought on Understanding


Do large language models, like ChatGPT, understand what we ask and what they respond? Geoffrey Hinton claims it “seems very hard” to deny, given their performance. But is this understanding comparable to our human capacity to understand?

A colleague remarked that this prompt-response interaction resembles the way we test our students’ understanding. And he is right. I added the caveat that we consider such tests reliable only because humans are the ones answering them.

Large language models can pass some of these narrowly focused, watered-down Turing tests. May we, in turn, conclude from the fact that students manage to pass some of these exams that humans understand the way large language models do?

Extrapolation is a tricky thing…