Posted over on Knuth's webpage is an email he recently wrote to Stephen Wolfram in which he describes some prodding of ChatGPT. He posed 20 questions and gives the verbatim answers as well as commentary on the answers.
Preface: Since one of today's popular recreations is to play with
chatGPT, I decided on 07 April 2023 to try my own little experiment,
as part of a correspondence with Stephen Wolfram. The results were sufficiently interesting that I passed them on
to a few friends the next day, and I've also been mentioning them in
conversation when the topic comes up. So I was asked to post the story online, and here it is (lightly edited)!
-- Don Knuth
PS: I did not edit my questions or the computer's answers, only my
own commentary at the end.
It is an interesting read that I highly recommend. And he ends his letter:
Well this has been interesting indeed. Studying the task of
how to fake it certainly leads to insightful subproblems galore.
As well as fun conversations during meals. On the other hand, Gary Marcus's column in the April CACM
brilliantly describes the terrifying consequences of these
developments. I find it fascinating that novelists galore have written for decades
about scenarios that might occur after a "singularity" in which
superintelligent machines exist. But as far as I know, not a single
novelist has realized that such a singularity would almost surely
be preceded by a world in which machines are 0.01% intelligent
(say), and in which millions of real people would be able to interact
with them freely at essentially no cost.
The Gary Marcus CACM piece he refers to is Hoping for the Best as AI Evolves.
(Score: 2, Insightful) by pTamok on Monday May 15, @10:35AM (2 children)
...it was an interesting read.
Knuth makes some good points, which is not entirely unexpected.
My experience of LLM-generated text is that it does come across as highly confident and 'believable', so if you are not a domain expert yourself, you can easily be fooled if you are not naturally critical. The chatbots tend to fall apart when you cross-examine them and dig into their 'thinking', or force them to show 'proof' (or at least references) with what can turn out to be entirely 'hallucinated' evidence. The old saw about people's experience with news media applies: you tend to believe them when you know nothing about the events being reported other than the report itself; but when you have personal experience of the event, you realise just how poor news reporting is. It's the same with LLM output. It's sometimes brilliant, but can be complete fiction. The trick is knowing the difference. Teaching AIs to know the difference and to seek out truth could be interesting.
(Score: 0) by Anonymous Coward on Tuesday May 16, @01:38AM (1 child)
So some users here are just bots?
(Score: 1, Funny) by Anonymous Coward on Tuesday May 16, @05:06AM
All of them, anymore. Khallow being replaced was only the least noticeable.