APPLE REPORT ADMITS US AI MODELS ARE NOT INTELLIGENT OR CONSCIOUS AND ARE NOT BEING IMPROVED
HALLUCINATORY AI MODELS TO BE GIVEN LIMITLESS PROTECTION FROM THE LAW UNDER TRUMP'S BBB IN ANOTHER SIGN HE SERVES THE GATES-THIEL OLIGARCHY AND NOT THE VOTERS
REPLICATES THE VERY FLAWED THINKING OF ITS INVENTORS, GATES AND ALTMAN
CANNOT GENERALIZE PROPERLY OR REASON LOGICALLY
JUMPS FROM DATA POINT TO DATA POINT
MAKES CRAZY CONNECTIONS BETWEEN DATA POINTS
CANNOT CORRECT ITSELF
NO SIGN OF IMPROVEMENT IN NEWER MODELS!
IS EVIL ALWAYS ALSO STUPID?
IS EVIL JUST THE ABSENCE OF INTELLIGENCE, LOGIC?
THE GATES AND WEF OLIGARCHS' FLAWED AI MODELS HAVE NOT BEEN IMPROVED
THE FAILURE TO IMPROVE THE AI SPELLS THE END OF US TECH'S HOPE OF STAYING ABREAST OF RUSSIA AND CHINA
THE FAILURE IS LINKED TO THE TOTAL CENSORSHIP AND AUTHORITARIAN RULE OF THE OLIGARCHS IN THE USA USING FRONT MEN FROM THE REP AND DEM PARTIES, APPARENTLY CONTROLLED THROUGH EPSTEIN'S BLACKMAIL MATERIAL
THE MORONS GATES, SOROS, EPSTEIN HAVE GOTTEN INFLUENCE THROUGH CONTROL OF THE PRIVATE FED AND ITS FRAUD, NOT THROUGH ABILITY, TALENT, OR INTELLIGENCE
IQS OF 70?
To produce a flawed product is a sign of stupidity.
To be unable to improve the flawed product is the final proof of a moron.
Yet, it seems all attempts to improve the hallucinatory AI models have completely failed despite spending billions on the project.
Wrap your mind around that.
The future of the USA is supposed to depend on AI.
But the Big Tech billionaires are incapable of solving basic problems involving propositional logic, maths and science to create anything resembling an intelligence, let alone consciousness, as Apple has reported.
The report by Apple admits AI is not alive, not conscious, not intelligent. It is just a statistical machine crunching words in a solipsistic, crazy world of its own, governed by circular logic and self-reinforcing algorithms.
And it seems the AI is getting worse, the more the Big Tech billionaires spend on it and work on it.
The Epstein, Gates, Soros effect?
The more effort a moron makes, the more work a moron does, the more disastrous the result.
Because a moron cannot conceptualize, generalize.
A moron jumps from data point to data point without being able to see the principle.
From the media:
https://ml-site.cdn-apple.com/papers/the-illusion-of-thinking.pdf
https://www.zerohedge.com/technology/ai-models-still-far-agi-level-reasoning-apple-researchers
AI Models Still Far From AGI-Level Reasoning: Apple Researchers
BY TYLER DURDEN
TUESDAY, JUN 10, 2025 - 12:10 AM
Authored by Martin Young via CoinTelegraph.com,
The race to develop artificial general intelligence (AGI) still has a long way to run, according to Apple researchers who found that leading AI models still have trouble reasoning.
Recent updates to leading AI large language models (LLMs) such as OpenAI’s ChatGPT and Anthropic’s Claude have included large reasoning models (LRMs), but their fundamental capabilities, scaling properties, and limitations “remain insufficiently understood,” said the Apple researchers in a June paper called “The Illusion of Thinking.”
They noted that current evaluations primarily focus on established mathematical and coding benchmarks, “emphasizing final answer accuracy.”
However, this evaluation does not provide insights into the reasoning capabilities of the AI models, they said.
The research contrasts with an expectation that artificial general intelligence is just a few years away.
Apple researchers test “thinking” AI models
The researchers devised different puzzle games to test “thinking” and “non-thinking” variants of Claude Sonnet, OpenAI’s o3-mini and o1, and DeepSeek-R1 and V3 chatbots beyond the standard mathematical benchmarks.
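For context, the paper's four puzzle environments include Tower of Hanoi, where difficulty scales cleanly with the number of disks and every intermediate move can be checked mechanically rather than by eyeballing a final answer. Below is a minimal Python sketch of that kind of verifier; the names and interface are illustrative assumptions, not Apple's evaluation code.

```python
# Sketch: mechanically scoring a model's Tower of Hanoi answer.
# Difficulty is set by the disk count n; a proposed solution is a list of
# (from_peg, to_peg) moves. Names here are illustrative, not Apple's code.

def verify_hanoi(n: int, moves: list[tuple[int, int]]) -> bool:
    """True iff `moves` legally transfers all n disks from peg 0 to peg 2."""
    pegs = [list(range(n, 0, -1)), [], []]    # peg 0 holds disks n..1, largest at bottom
    for src, dst in moves:
        if not pegs[src]:
            return False                      # illegal: moving from an empty peg
        if pegs[dst] and pegs[dst][-1] < pegs[src][-1]:
            return False                      # illegal: larger disk onto a smaller one
        pegs[dst].append(pegs[src].pop())
    return pegs[2] == list(range(n, 0, -1))   # goal: everything on peg 2, in order

# The optimal 7-move solution for n = 3 passes; any rule violation fails fast.
assert verify_hanoi(3, [(0, 2), (0, 1), (2, 1), (0, 2), (1, 0), (1, 2), (0, 2)])
```

Because legality is checked move by move, this setup lets an evaluator grade intermediate reasoning traces as well as final answers, which is what the charts below refer to.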
They discovered that “frontier LRMs face a complete accuracy collapse beyond certain complexities,” don’t generalize reasoning effectively, and their edge disappears with rising complexity, contrary to expectations for AGI capabilities.
“We found that LRMs have limitations in exact computation: they fail to use explicit algorithms and reason inconsistently across puzzles.”
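That quoted failure is notable because Tower of Hanoi has a textbook explicit algorithm: a three-line recursion emits the optimal 2^n − 1 moves for any disk count n, which is also why disk count works as a clean complexity dial. A sketch under the same illustrative conventions as above, not the paper's code:

```python
# Sketch: the kind of explicit algorithm the paper says LRMs fail to apply.
# The standard recursion produces the optimal move list deterministically.

def solve_hanoi(n: int, src: int = 0, aux: int = 1, dst: int = 2) -> list[tuple[int, int]]:
    """Return the optimal move list transferring n disks from src to dst."""
    if n == 0:
        return []
    return (solve_hanoi(n - 1, src, dst, aux)     # park the n-1 smaller disks on aux
            + [(src, dst)]                        # move the largest disk to the goal
            + solve_hanoi(n - 1, aux, src, dst))  # restack the smaller disks on top

assert len(solve_hanoi(10)) == 2**10 - 1  # 1023 moves; grows exponentially with n
```

Feeding solve_hanoi(n) into a verifier like the one sketched earlier succeeds for any n; sustaining that kind of deterministic execution as n grows is exactly what the quoted finding says the models cannot do.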
Verification of final answers and intermediate reasoning traces (top chart), and charts showing non-thinking models are more accurate at low complexity (bottom charts). Source: Apple Machine Learning Research
AI chatbots are overthinking, say researchers
They found inconsistent and shallow reasoning with the models and also observed overthinking, with AI chatbots generating correct answers early and then wandering into incorrect reasoning.
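One hedged way to picture how such "overthinking" could be quantified: extract the candidate answers a model emits along its reasoning trace and record where the first correct one appears. The trace format and helper below are assumptions for illustration, not the paper's actual pipeline.

```python
# Sketch: locating the first correct answer inside a reasoning trace.
# `candidates` is an assumed list of answers extracted from the trace in order.

def first_correct_position(candidates: list[str], is_correct) -> float | None:
    """Relative position (0.0 = start, 1.0 = end) of the first correct
    candidate answer, or None if no candidate is correct."""
    for i, answer in enumerate(candidates):
        if is_correct(answer):
            return i / max(len(candidates) - 1, 1)
    return None

# Hypothetical trace: the model finds the right answer early, then drifts.
trace_answers = ["42", "42", "41", "40"]
pos = first_correct_position(trace_answers, lambda a: a == "42")
# pos == 0.0 -> the correct answer surfaced at the very start, yet the final
# answer ("40") is wrong: the signature of overthinking described above.
```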
The researchers concluded that LRMs mimic reasoning patterns without truly internalizing or generalizing them, which falls short of AGI-level reasoning.
The race to develop AGI
AGI is the holy grail of AI development, a state where the machine can think and reason like a human and is on a par with human intelligence.
In January, OpenAI CEO Sam Altman said the firm was closer to building AGI than ever before. “We are now confident we know how to build AGI as we have traditionally understood it,” he said at the time.
In November, Anthropic CEO Dario Amodei said that AGI would exceed human capabilities in the next year or two. “If you just eyeball the rate at which these capabilities are increasing, it does make you think that we’ll get there by 2026 or 2027,” he said.
The paper itself, however, concludes otherwise: “These insights challenge prevailing assumptions about LRM capabilities and suggest that current approaches may be encountering fundamental barriers to generalizable reasoning.”
Illustration of the four puzzle environments. Source: Apple