
When ChatGPT is asked to show the references it used to compose its answer, it sometimes makes them up. I think they call this hallucinating. I expect that when confronted with fraud accusations, these authors will claim hallucinations made them do it.

And with this level of fraud in top journals, is it any wonder ChatGPT has learned the same behavior?
