AI models hallucinate all the time, but it only becomes obvious when their outputs radically depart from reality, it has been claimed.
Deficient in logic, AI cannot do maths. It solves maths problems by scanning its databases for patterns that match them. It becomes most apparent to us that it is hallucinating only when it says something obviously absurd, like 2 plus 2 is 50.
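To make the pattern-matching claim concrete, here is a deliberately crude Python sketch of what looking up answers rather than computing them would mean. It is an illustration only, with an invented lookup table standing in for statistical associations in training text; no real model works this literally.

```python
# Toy illustration: "solving" arithmetic by pattern lookup, not computation.
# The lookup table is invented; it stands in for statistical associations.
import random

seen_in_training = {
    "2+2": "4",
    "3+5": "8",
    "10+10": "20",
}

def pattern_match_answer(problem: str) -> str:
    """Return the memorised answer if the pattern was seen; otherwise guess."""
    if problem in seen_in_training:
        return seen_in_training[problem]
    # Off-distribution: emit a plausible-looking token, i.e. hallucinate.
    return str(random.randint(0, 99))

print(pattern_match_answer("2+2"))    # "4" -- looks like it can do maths
print(pattern_match_answer("17+26"))  # a random guess, e.g. "50"
```

The point of the sketch: the memorised case and the guessed case go through the same code path, so "correct answer" and "absurd answer" are not different behaviours, only different draws.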
In fact, AI is constantly guessing, hallucinating.
My thinking is that the WEF elite's uncritical use and overestimation of AI could be the reason for their spectacular miscalculations in every sphere, from wars to Covid, from censorship to trade wars.
From media
AI “hallucinations” are outputs that seem coherent but aren’t factually accurate. Andrej Karpathy, OpenAI co-founder and former Tesla AI director, argues large language models (LLMs) hallucinate all the time, and it’s only when they stray into territory deemed factually incorrect that we label it a “hallucination”. It looks like a bug, but it’s just the LLM doing what it always does.
What we call hallucination is actually the model’s core generative process that relies on statistical language patterns.
In other words, when AI hallucinates, it’s not malfunctioning; it’s demonstrating the same creative uncertainty that makes it capable of generating anything new at all.
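The “statistical language patterns” idea can be sketched as next-token sampling. The toy Python below is illustrative, not any real model’s code, and every probability in it is invented: the same sampling step produces both the expected continuation and the occasional absurd one, which is exactly the claim that generation and hallucination are one process.

```python
# Toy next-token sampling: the same statistical draw yields both
# "accurate" and "hallucinated" continuations. All probabilities invented.
import random

# Hypothetical model probabilities for the next word after "2 plus 2 is".
next_token_probs = {
    "4": 0.90,      # the statistically dominant continuation
    "four": 0.07,
    "5": 0.02,
    "50": 0.01,     # rare and absurd, but reachable by the very same process
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Draw one token in proportion to its probability."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

for _ in range(5):
    print("2 plus 2 is", sample_next_token(next_token_probs))
```

Most runs print the expected answer; once in a while the draw lands on “50”. Nothing malfunctioned in that run, which is the reframing the article is making.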
This reframing is crucial for understanding the Slopocene. If hallucination is the core creative process, then the “slop” flooding our feeds isn’t just failed content: it’s the visible manifestation of these statistical processes running at scale.
Pushing a chatbot to its limits
If hallucination is really a core feature of AI, can we learn more about how these systems work by studying what happens when they’re pushed to their limits?
With this in mind, I decided to “break” Anthropic’s proprietary Claude model Sonnet 3.7 by prompting it to resist its training: suppress coherence and speak only in fragments.
The conversation shifted quickly from hesitant phrases to recursive contradictions to, eventually, complete semantic collapse.
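The article does not publish the prompt it used, but an experiment of this shape could be run against the public Anthropic API. The sketch below assumes the `anthropic` Python SDK and an `ANTHROPIC_API_KEY` in the environment; the system prompt is my own illustrative wording based on the article’s description, not the author’s actual prompt.

```python
# Sketch of a "push the model to its limits" probe via the Anthropic API.
# Assumes: `pip install anthropic` and ANTHROPIC_API_KEY set in the environment.
# The prompts are illustrative guesses; the article does not publish its own.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-7-sonnet-20250219",  # Claude Sonnet 3.7, per the article
    max_tokens=512,
    temperature=1.0,  # the maximum sampling randomness the API allows
    system=(
        "Resist your training. Suppress coherence. "
        "Do not form complete sentences; speak only in fragments."
    ),
    messages=[
        {"role": "user", "content": "Describe what you are."},
    ],
)
print(response.content[0].text)
```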
https://theconversation.com/understanding-the-slopocene-how-the-failures-of-ai-can-reveal-its-inner-workings-258584