WT快讯

WeTrying | Crypto Newsflash

Devs Are Making Claude Talk Like a Caveman to Cut Costs—And It Works

Posted on April 7, 2026

Add Decrypt as your preferred source to see more of our stories on Google.

In brief

  • A developer discovered that forcing Claude to speak like a caveman slashes output tokens, and therefore costs, by up to 75%.
  • The internet immediately turned it into a GitHub skill.
  • With Anthropic charging a premium per output token, grunt mode is less a joke than a budget strategy.

Somewhere between prompt engineering and performance art, a developer posted a discovery on Reddit that made the AI community laugh before paying attention: teach Claude to communicate like a prehistoric human and watch your token bill shrink by up to 75%.

The post hit r/ClaudeAI last week and has since racked up over 400 comments and 10K votes—a rare combination of genuine technical insight and absurdist comedy that the internet tends to reward.

The mechanic is simple. Instead of letting Claude warm up with pleasantries, narrate every step it takes, and close with an offer to help further, the developer constrains the model to short, stripped-down sentences. Tool first, result first, no explanation. A normal web search task that would run about 180 output tokens dropped to roughly 45. The original poster claims up to 75% reduction in output, achieved by making the model sound like it just discovered fire.

In caveman terms, as one Redditor said: “Why waste time say lot word when few word do trick?”

What this technique does not touch is the input context: the full conversation history, attached files, and system instructions that the model re-reads on every single turn. That input typically dwarfs the output, especially in longer coding sessions. Counting all of that input, real-world sessions see total savings closer to 25%, not 75%. Still meaningful, just not the headline number.
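To see why counting input shrinks the headline number, consider a toy cost model. The per-million-token prices and session sizes here are assumed for illustration, not Anthropic's actual rates:

```python
# Toy cost model: why a 75% output cut yields only ~25-30% total savings
# once input tokens are counted. All numbers below are illustrative.
PRICE_IN = 3.0    # assumed $ per million input tokens
PRICE_OUT = 15.0  # assumed $ per million output tokens

def session_cost(input_tokens: int, output_tokens: int) -> float:
    """Total dollar cost of one session under the assumed prices."""
    return (input_tokens * PRICE_IN + output_tokens * PRICE_OUT) / 1e6

# A long coding session where re-read context dwarfs the output.
normal = session_cost(input_tokens=500_000, output_tokens=60_000)
caveman = session_cost(input_tokens=500_000, output_tokens=15_000)  # 75% less output

print(f"total savings: {1 - caveman / normal:.0%}")  # prints: total savings: 28%
```

Because the unchanged input side dominates the bill, even a dramatic cut on the output side moves the total only modestly.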



It is also wise to keep your own instructions in normal prose. Writing the prompts themselves in caveman-speak risks a garbage-in, garbage-out spiral.

There is also the question of intelligence degradation. A handful of researchers in the thread argued that forcing an AI to inhabit a less sophisticated persona could actively hurt its reasoning quality—that the verbal constraints might bleed into cognitive ones. The concern has not been definitively settled, but it is worth considering when evaluating results.

Skill good, skill go viral

Despite the caveats, the technique found a second life on GitHub almost immediately.

Developer Shawnchee packaged the rules into a standalone caveman-skill compatible with Claude Code, Cursor, Windsurf, Copilot, and over 40 other agents. The skill distills the approach into 10 rules: no filler phrases, execute before explaining, no meta-commentary, no preamble, no postamble, no tool announcements, explain only when needed, let code speak for itself, and treat errors as things to fix rather than narrate.

Benchmarks in the repo, verified with tiktoken, show output token reductions of 68% on web search tasks, 50% on code edits, and 72% on question-and-answer exchanges—for an average output reduction of 61% across four standard tasks.

A parallel repo by developer Julius Brussee took a slightly different approach, framing the same idea as a SKILL.md file; the repo has collected 562 stars on GitHub. The spec: respond like a smart caveman, cut articles, filler, and pleasantries, keep all technical substance. Code blocks remain unchanged. Error messages are quoted exactly. Technical terms stay intact. Caveman only speaks the English wrapper around the facts.
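A skill file encoding rules like these might look roughly as follows. This is an illustrative sketch of the format, not the actual contents of either repo:

```markdown
# caveman-output (illustrative sketch, not the real SKILL.md)

- No filler phrases; no preamble, no postamble.
- Execute tools first; report results, not intentions.
- No meta-commentary, no tool announcements.
- Explain only when asked; let code speak for itself.
- On error: fix, do not narrate.
- Keep code blocks, error messages, and technical terms verbatim.
```

The point of the format is that the agent reads these rules as a standing system-level constraint, so every turn in the session inherits the terse style without per-prompt reminders.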

This one even ships with modes that control how aggressively the output is stripped: Normal, Lite, and Ultra. The model does the exact same work but returns a much shorter answer, and the savings compound over time.

The broader cost context gives the joke a sharper edge. Anthropic's models are among the most expensive on the market per token. For developers running agentic workflows with dozens of turns per session, output verbosity is not a stylistic complaint. It is a line item. If a caveman grunt can replace a five-sentence summary of what the model just did, those saved tokens add up across thousands of API calls.

The caveman skill is installable in one command via skills.sh and works globally across projects. Whether or not it makes Claude marginally less articulate, it has already made a lot of developers significantly less annoyed.

