2024-12-07
I dunno, this seems pretty good to me. o1 is instructed to maximize mental health and wellbeing above all else. It follows that instruction, so it's not a case of misalignment with user intent. Plus it prioritizes this over user engagement metrics, which is pro-social.
Apollo Research
An evaluation of six frontier AI models for in-context scheming when strongly nudged to pursue a goal: only OpenAI's o1 was capable of scheming in all the tests
It presents a new safety challenge that OpenAI is trying to address. — techcrunch.com/2024/12/05/o...
2024-12-06
Paper: You can find the detailed paper here. — Transcripts: We provide a list of cherry-picked transcripts here.