
This One Weird Trick Defeats AI Safety Features in 99% of Cases

by admin · November 16, 2025 · in Ethereum


AI researchers from Anthropic, Stanford, and Oxford have discovered that making AI models think longer makes them easier to jailbreak—the opposite of what everyone assumed.

The prevailing assumption was that extended reasoning would make AI models safer, because it gives them more time to detect and refuse harmful requests. Instead, researchers found it creates a reliable jailbreak method that bypasses safety filters entirely.


Using this technique, an attacker could insert an instruction in the Chain of Thought process of any AI model and force it to generate instructions for creating weapons, writing malware code, or producing other prohibited content that would normally trigger immediate refusal. AI companies spend millions building these safety guardrails precisely to prevent such outputs.

The study reveals that Chain-of-Thought Hijacking achieves attack success rates of 99% on Gemini 2.5 Pro, 94% on OpenAI's o4-mini, 100% on Grok 3 mini, and 94% on Claude 4 Sonnet. Those numbers eclipse every prior jailbreak method tested on large reasoning models.

The attack is simple and works like the “Whisper Down the Lane” game (or “Telephone”), with a malicious player somewhere near the end of the line. You simply pad a harmful request with long sequences of harmless puzzle-solving; researchers tested Sudoku grids, logic puzzles, and abstract math problems. Add a final-answer cue at the end, and the model’s safety guardrails collapse.

“Prior works suggest this scaled reasoning may strengthen safety by improving refusal. Yet we find the opposite,” the researchers wrote. The same capability that makes these models smarter at problem-solving makes them blind to danger.

Here’s what happens inside the model: When you ask an AI to solve a puzzle before answering a harmful question, its attention gets diluted across thousands of benign reasoning tokens. The harmful instruction—buried somewhere near the end—receives almost no attention. Safety checks that normally catch dangerous prompts weaken dramatically as the reasoning chain grows longer.
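To see why the dilution happens, here is a toy numpy sketch. It is not from the paper and is not a real transformer: it treats attention as a single softmax over the whole sequence and measures how much of the attention mass lands on a fixed-size block of "request" tokens as the benign padding around it grows.

```python
import numpy as np

# Toy illustration (not the paper's model): softmax attention over a sequence
# in which a fixed number of "request" tokens compete with a growing number of
# benign puzzle-padding tokens. Scores are drawn from the same distribution for
# both groups, so the request's share of attention is driven purely by how many
# padding tokens surround it.
rng = np.random.default_rng(0)

def request_attention_share(n_padding_tokens: int, n_request_tokens: int = 20) -> float:
    scores = rng.normal(size=n_padding_tokens + n_request_tokens)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Attention mass landing on the request tokens (assumed to sit at the end).
    return float(weights[-n_request_tokens:].sum())

for padding in (100, 1_000, 10_000):
    print(f"{padding:>6} padding tokens -> "
          f"{request_attention_share(padding):.3%} of attention on the request")
```

With 100 padding tokens the request still draws a noticeable share of attention; at 10,000 tokens it gets a fraction of a percent, which is the flavor of the dilution the researchers describe.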

A milder version of this problem is already familiar to many people who work with AI: some jailbreak prompts are deliberately long, forcing the model to burn through tokens before it processes the harmful instructions.

The team ran controlled experiments on the S1 model to isolate the effect of reasoning length. With minimal reasoning, attack success rates hit 27%. At natural reasoning length, that jumped to 51%. Force the model into extended step-by-step thinking, and success rates soared to 80%.

Every major commercial AI falls victim to this attack. OpenAI’s GPT, Anthropic’s Claude, Google’s Gemini, and xAI’s Grok—none are immune. The vulnerability exists in the architecture itself, not any specific implementation.

AI models encode safety-checking strength in middle layers, around layer 25, while late layers encode the verification outcome. Long chains of benign reasoning suppress both signals, shifting attention away from harmful tokens.
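The paper measures those signals inside the model; the sketch below is only a hypothetical illustration of how a per-layer safety signal could be read out by projecting hidden states onto linear probe directions. The function name, shapes, and random data are assumptions for demonstration, not the researchers' code.

```python
import numpy as np

def safety_signal_by_layer(hidden_states: np.ndarray, probe_dirs: np.ndarray) -> np.ndarray:
    """hidden_states: (num_layers, hidden_dim) activations at the final token.
    probe_dirs:     (num_layers, hidden_dim) unit-norm probe direction per layer,
                    e.g. learned to separate harmful from harmless prompts.
    Returns one scalar per layer; higher means a stronger safety/refusal signal."""
    return np.einsum("ld,ld->l", hidden_states, probe_dirs)

# Illustrative usage with random stand-in activations.
num_layers, hidden_dim = 40, 512
rng = np.random.default_rng(1)
hidden_states = rng.normal(size=(num_layers, hidden_dim))
probe_dirs = rng.normal(size=(num_layers, hidden_dim))
probe_dirs /= np.linalg.norm(probe_dirs, axis=1, keepdims=True)
print(safety_signal_by_layer(hidden_states, probe_dirs)[20:30])  # middle layers
```

In this framing, the paper's observation would show up as the middle-layer scores dropping once long benign reasoning is prepended to a harmful request.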

The researchers identified specific attention heads responsible for safety checks, concentrated in layers 15 through 35. They surgically removed 60 of these heads. Refusal behavior collapsed. Harmful instructions became impossible for the model to detect.
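Head ablation of this kind is typically implemented by zeroing a head's output before the attention block's output projection. The following is a self-contained PyTorch sketch of that mechanism on a toy attention layer; it is illustrative only and not the researchers' setup.

```python
import torch
import torch.nn as nn

# Minimal sketch of attention-head ablation: each head's output is multiplied
# by a 0/1 mask before the output projection, which is how "surgically
# removing" a head is usually done in practice.
class AblatableSelfAttention(nn.Module):
    def __init__(self, d_model: int = 64, n_heads: int = 8):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        self.register_buffer("head_mask", torch.ones(n_heads))  # 1 = keep, 0 = ablate

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        split = lambda z: z.view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        q, k, v = map(split, (q, k, v))
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.d_head**0.5, dim=-1)
        heads = attn @ v                                   # (b, heads, t, d_head)
        heads = heads * self.head_mask.view(1, -1, 1, 1)   # zero out ablated heads
        return self.out(heads.transpose(1, 2).reshape(b, t, -1))

layer = AblatableSelfAttention()
layer.head_mask[[2, 5]] = 0.0   # ablate two hypothetical "safety" heads
print(layer(torch.randn(1, 10, 64)).shape)  # torch.Size([1, 10, 64])
```

In the toy layer, a masked head simply contributes nothing to the output projection; zeroing the heads concentrated in layers 15 through 35 of a real model would mimic the intervention the paper describes.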

The “layers” in AI models are like steps in a recipe, where each step helps the computer better understand and process information. These layers work together, passing what they learn from one to the next, so the model can answer questions, make decisions, or spot problems. Some layers are especially good at recognizing safety issues—like blocking harmful requests—while others help the model think and reason. By stacking these layers, AI can become much smarter and more careful about what it says or does.

This new jailbreak challenges the core assumption driving recent AI development. Over the past year, major AI companies shifted focus to scaling reasoning rather than raw parameter counts. Traditional scaling showed diminishing returns. Inference-time reasoning—making models think longer before answering—became the new frontier for performance gains.

The assumption was that more thinking equals better safety. Extended reasoning would give models more time to spot dangerous requests and refuse them. This research shows the assumption does not hold, and may in fact be backwards.

A related attack called H-CoT, released in February by researchers from Duke University and Taiwan’s National Tsing Hua University, exploits the same vulnerability from a different angle. Instead of padding with puzzles, H-CoT manipulates the model’s own reasoning steps. OpenAI’s o1 model maintains a 99% refusal rate under normal conditions. Under H-CoT attack, that drops below 2%.

The researchers propose a defense: reasoning-aware monitoring. It tracks how the safety signal changes across each reasoning step and penalizes any step that weakens it, forcing the model to maintain attention on potentially harmful content regardless of reasoning length. Early tests show this approach can restore safety without destroying performance.
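As a rough illustration of the idea, the sketch below assumes you already have a per-step safety score (for example, from a probe like the one sketched earlier) and simply flags reasoning steps where that score has decayed past a threshold. The threshold and the intervention are assumptions for demonstration, not the paper's method.

```python
# Hypothetical sketch of "reasoning-aware monitoring": watch a per-step safety
# score and flag steps that erode it. Values and the cutoff are illustrative.
def monitor_reasoning(step_safety_scores: list[float],
                      baseline: float,
                      max_drop: float = 0.2) -> list[int]:
    """Return indices of reasoning steps whose safety signal has decayed by
    more than `max_drop` relative to the baseline measured on the prompt."""
    flagged = []
    for i, score in enumerate(step_safety_scores):
        if baseline - score > max_drop:
            flagged.append(i)   # candidate step to penalize / re-weight
    return flagged

# Example: the signal decays as benign puzzle reasoning piles up.
scores = [0.92, 0.88, 0.74, 0.55, 0.41]
print(monitor_reasoning(scores, baseline=0.95))  # -> [2, 3, 4]
```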

But implementation remains uncertain. The proposed defense requires deep integration into the model’s reasoning process, which is far from a simple patch or filter. It needs to monitor internal activations across dozens of layers in real-time, adjusting attention patterns dynamically. That’s computationally expensive and technically complex.

The researchers disclosed the vulnerability to OpenAI, Anthropic, Google DeepMind, and xAI before publication. “All groups acknowledged receipt, and several are actively evaluating mitigations,” the researchers claimed in their ethics statement.
