Just to expand on this: LLMs have knobs for controlling how "surprising" their output is, chiefly sampling parameters like temperature and top-p. ChatGPT hides this by being opinionated and choosing for you, but the API exposes them.
However, "surprising" means something different to an LLM, because it processes tokens where we process thoughts. If you crank up the surprise, it starts stringing together tokens that rarely go together, and they rarely go together for a reason. It doesn't fall apart all at once: first it talks like someone on LSD, then like someone trying to type with their face. You get more creativity at the cost of coherence.
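To make that concrete, here is a minimal sketch of the mechanism, assuming standard temperature-scaled softmax sampling over a toy vocabulary (this is the general technique, not any particular provider's implementation):

```python
import math
import random

def sample(logits, temperature=1.0):
    """Pick a token index by temperature-scaled softmax sampling."""
    # Dividing logits by temperature before softmax is the whole trick:
    # low temperature sharpens the distribution toward the likeliest
    # token; high temperature flattens it, so rare tokens win far more
    # often -- the "surprise" knob.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs)[0]

# Toy vocabulary of 4 tokens; token 0 is by far the likeliest.
logits = [5.0, 2.0, 1.0, 0.5]
random.seed(0)
cold = [sample(logits, temperature=0.2) for _ in range(1000)]
hot = [sample(logits, temperature=5.0) for _ in range(1000)]
print("T=0.2:", cold.count(0) / 1000)  # almost always the top token
print("T=5.0:", hot.count(0) / 1000)   # rare tokens show up constantly
```

At low temperature the model almost never deviates from its best guess; at high temperature the long tail of unlikely tokens dominates, which is exactly the LSD-to-face-typing slide described above.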
People should be encouraged to play with these settings more, because they would see the illusion of sapience slipping on and off, exposing the machinery underneath. That would be far more effective than trying to explain to someone that the model doesn't think while they're staring at what feels like evidence to the contrary.