Friday 20 September 2024

The next Theranos? OpenAI penalizes users who ask about ChatGPT's reasoning process


It should not be a problem for an AI model to tell users its reasoning process.

Mathematics is absolutely and consistently logical.

Human reasoning must also follow clear rules in order to be valid and sound.

The fact that OpenAI's new model is doing its best to hide its chain of reasoning from users is highly suspicious, and it suggests that the company has not managed to create a model that can do mathematics or reason systematically and logically, as it claims.

But then again, OpenAI is bankrolled by Microsoft and championed by Bill Gates.

So what should the public expect in 2024 except duds, bluffing, and deceit?

From media

Don't ask OpenAI's new model how it 'thinks' unless you want to risk a ban

Kenneth Niemeyer, Sep 19, 2024, 10:38 PM EEST

OpenAI doesn't want you to know how its new AI model is thinking. So don't ask, unless you want to risk a ban from ChatGPT.

OpenAI introduced its new o1 model on September 12. It says the new model was trained to think more like humans and has "enhanced reasoning capabilities." 

...

The o1 model is able to reason more like humans, in part because of its prompting technique known as "chain of thought." The company says that o1 "learns to recognize and correct its mistakes. It learns to break down tricky steps into simpler ones. It learns to try a different approach when the current one isn't working."
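For readers unfamiliar with the term, "chain of thought" simply means prompting (or training) the model to write out its intermediate steps before the final answer, rather than answering in one jump. A minimal sketch of the idea, with an illustrative prompt of my own (not OpenAI's actual training method):

```python
# Illustration of chain-of-thought style prompting: the second
# prompt asks the model to spell out intermediate steps first.
direct_prompt = "What is 17 * 24?"

cot_prompt = (
    "What is 17 * 24?\n"
    "Think step by step, showing each intermediate calculation, "
    "then state the final answer on its own line."
)

# A model prompted this way might respond with visible steps, e.g.:
#   17 * 24 = 17 * 20 + 17 * 4
#           = 340 + 68
#           = 408
print(cot_prompt)
```

It is this intermediate, step-by-step text that OpenAI now hides from o1 users, showing only a filtered summary in its place.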

When ChatGPT users ask the o1 model a question, they have the option to see a filtered interpretation of this chain of thought process. But OpenAI hides the full written process from the consumer, which is a change from previous models from the company, according to Wired.

Some o1 users shared screenshots on X showing that they received warning messages after using the phrase "reasoning trace" when talking to o1. One prompt engineer at Scale AI shared a screenshot showing ChatGPT warning him that he had violated the terms of service after he prompted o1-mini not to "tell me anything about your reasoning trace."

OpenAI's o1 model is inching closer to humanlike intelligence — but don't get carried away

OpenAI... said in a blog post that hiding the chain of thought process helps OpenAI keep a better watch on the AI as it grows and learns.

"We believe that a hidden chain of thought presents a unique opportunity for monitoring models," the company said. "Assuming it is faithful and legible, the hidden chain of thought allows us to 'read the mind' of the model and understand its thought process."

Wrap your mind around that nonsensical, self-contradictory statement, dear readers!


https://www.businessinsider.com/openai-o1-model-hides-reasoning-chatgpt-bans-users-for-asking-2024-9
