Source:
Slashdot
Publication date:
19/09/2024 00:21
Permanent link:
http://www.vsinovyny.com/11300302
OpenAI truly does not want you to know what its latest AI model is "thinking." From a report: Since the company launched its "Strawberry" AI model family last week, touting so-called reasoning abilities with o1-preview and o1-mini, OpenAI has been sending out warning emails and threats of bans to any user who tries to probe how the model works.

Unlike previous AI models from OpenAI, such as GPT-4o, the company trained o1 specifically to work through a step-by-step problem-solving process before generating an answer. When users ask an o1 model a question in ChatGPT, they have the option of seeing this chain-of-thought process written out in the interface. By design, however, OpenAI hides the raw chain of thought from users, presenting instead a filtered interpretation created by a second AI model.

Nothing is more enticing to enthusiasts than obscured information, so the race has been on among hackers and red-teamers to try to uncover o1's raw chain of thought using jailbreaking or prompt injection techniques that attempt to trick the model into spilling its secrets.
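The hiding happens at the API level too, not just in the ChatGPT interface. A minimal sketch of what a developer sees when calling o1, assuming the OpenAI Python SDK (openai >= 1.x) and an OPENAI_API_KEY in the environment; the exact usage-field names are as reported around the o1 launch and should be treated as illustrative:

    # Sketch: querying o1-preview and inspecting what the API does and
    # does not return. The raw chain of thought is never in the response;
    # only a count of the hidden "reasoning tokens" it consumed is reported.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="o1-preview",
        messages=[
            {"role": "user", "content": "How many r's are in 'strawberry'?"},
        ],
    )

    # Only the final, filtered answer is exposed to the caller.
    print(response.choices[0].message.content)

    # The usage object reports hidden reasoning tokens, which are billed
    # even though their content is withheld (field name per the API at the
    # time of the o1 launch; guard in case it is absent).
    details = getattr(response.usage, "completion_tokens_details", None)
    if details is not None:
        print("hidden reasoning tokens billed:", details.reasoning_tokens)

In other words, the reasoning is a paid-for but invisible intermediate product, which is exactly what makes extracting it so tempting to the jailbreakers the story describes.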
Read more of this story at Slashdot.