The Barnum effect of chat bots

Here’s a post that summarizes generative “AI” quite well. Hat tip to Brian Krebs, who posted it on Mastodon. I can’t embed the post here; maybe he has embedding turned off.

The original that Krebs cites, by David Gerard, can be found here.

As with the social media websites concocted by graduates of Prof Fogg’s class at Stanford, addiction is at the core.

I don’t want to quote Gerard’s post too extensively, but I will borrow two paragraphs, both of which Krebs also quoted:

Large language models work the same way as a carnival psychic. Chatbots look smart by the Barnum Effect—which is where you read what’s actually a generic statement about people and you take it as being personally about you. The only intelligence there is yours.

With ChatGPT, Sam Altman hit upon a way to use the Hook Model with a text generator. The unreliability and hallucinations themselves are the hook—the intermittent reward, to keep the user running prompts and hoping they’ll get a win this time.

I have seen a few scientific situations where generative AI was quite useful at summarizing large amounts of data, but I have indeed fallen for the trap when it came to something more generic.
 
Speaking of Mastodon, someone posted the image below a year or more ago, but I could never find it again on any search engine, even when feeding in fairly precise keywords. It reappeared today, so, rather than risk losing it again:
 
Sign reading 'Pack it in' and 'Pack it out'. The first half of the sign shows two figures walking to the right; the second half shows one figure loading something into a car boot. At the top of the photo is a social media comment: 'Um … what happened to the second person?'
 

The lesson: check your designs with a large group whenever you can, especially signage.

