OpenAI Says Models Programmed To Make Stuff Up Instead of Admitting Ignorance
AI models often produce false outputs, known as "hallucinations." Now OpenAI has admitted these may stem from fundamental mistakes made when training its models. The Register: The admission came in a paper [PDF] published in early September, titled "Why Language Models Hallucinate," and penned by thr...
17 Sep 01:28 · slashdot.org