2024-09-26
10/ These unreliability issues are consistently found across multiple LLM families (GPT, LLaMA, and BLOOM), spanning 32 models with different levels of scaling up and diverse methods of shaping up with human feedback: [image]
Nature
Study: newer, bigger versions of LLMs like OpenAI's GPT, Meta's Llama, and BigScience's BLOOM are more inclined to give wrong answers than to admit ignorance
Nicola Jones / Nature :
1/ New paper @Nature! Discrepancy between human expectations of task difficulty and LLM errors harms reliability. In 2022, Ilya Sutskever @ilyasut predicted: “perhaps over time that discrepancy will diminish” ( https://www.youtube.com/..., min 61-64). We show this is *not* the case! [image]