When ChatGPT is asked to show the references it used to compose an answer, it sometimes makes the references up. I believe this is called hallucinating. I expect that when these authors are confronted with fraud accusations, they will claim hallucinations made them do it.
And with this level of fraud in top journals, is it any wonder ChatGPT has learned the same behavior?