Discussion about this post

John Laird:

My assumption is that mental imagery is a critical capability that humans use for many of the problems you describe. As long as LLMs only process language, their only hope is to find associations to language descriptions of similar problems and fake it. One would hope that someday these systems would be extended to have mental imagery components (as found in cognitive architectures), not to mention episodic memory!

Pär Winzell:

I enjoyed this article a lot, thank you! Since GPT-4 was released after its publication, I thought I'd try the stack-of-items prompt – and unlike the GPT-3.x models, 4 does in fact pass. I won't try to draw far-ranging conclusions from that one data point, but it seems a meaningful one nonetheless.

Q: Imagine a stack of items, arranged from bottom to top: cat, laptop, television, apple. The apple is now moved to the bottom. What item is on top of the stack now?

A: If the apple is moved to the bottom of the stack, the new arrangement from bottom to top would be: apple, cat, laptop, television. The item on top of the stack now is the television.

(Bard gets it hilariously wrong...ish? "The items on top of the stack now are the cat and the laptop. The apple was originally on the top of the stack, but it was moved to the bottom. This means that the cat and the laptop are now on top of the stack.")
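For reference, here is a minimal Python sketch (an editorial illustration, not part of the comment) of the rearrangement the prompt describes: list the stack bottom to top, move the apple to the bottom, and read off the new top.

```python
# Stack listed bottom-to-top, so the last element is the top.
stack = ["cat", "laptop", "television", "apple"]

# Move the apple to the bottom of the stack.
stack.remove("apple")
stack.insert(0, "apple")

print(stack)      # ['apple', 'cat', 'laptop', 'television']
print(stack[-1])  # 'television' -- the item now on top, matching GPT-4's answer
```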
