[{"data":1,"prerenderedAt":1083},["ShallowReactive",2],{"reviews:\u002Freviews\u002Fcaveman":3},{"id":4,"title":5,"accent":6,"body":7,"description":1034,"estReadTime":1035,"extension":1036,"eyebrow":1037,"icon":1038,"intro":1039,"lastUpdated":1040,"meta":1041,"navigation":1042,"next":1043,"path":1046,"prev":1047,"review":1050,"seo":1060,"stem":1072,"tocItems":1073,"__hash__":1082},"docs\u002Freviews\u002Fcaveman.md","Caveman plugin","reviews",{"type":8,"value":9,"toc":1029},"minimark",[10,47,149,318,463,536,654,970,1025],[11,12,15,27,39],"docs-section",{"id":13,"title":14},"verdict","Verdict",[16,17,18,22,23,26],"p",{},[19,20,21],"strong",{},"Caveman is a useful output-side compression trick that pays for itself on prose-heavy work and quietly underdelivers on code."," In our three-prompt benchmark on Opus (high effort), output tokens dropped 31–37% on the code and mixed prompts but went ",[19,24,25],{},"up"," ~30% on the explanation prompt — Caveman didn't shrink the answer; it rewrote it tersely and used the saved budget to add more sub-points. End-to-end the session ran ~34% cheaper.",[16,28,29,30,33,34,38],{},"The interesting finding isn't the savings number — it's the ",[19,31,32],{},"workflow regression",": the same \"write me a utility + tests\" prompt that produced two files on disk in baseline produced an inline code dump (no ",[35,36,37],"code",{},"Write"," tool call) under Caveman. Same code quality, different behavior. That's the kind of trade-off the README doesn't surface.",[40,41,44],"docs-callout",{"title":42,"variant":43},"Recommend it for","info",[16,45,46],{},"Solo, explanation-heavy sessions where you want denser output — debugging walkthroughs, architecture write-ups, \"explain this\" prompts. 
Skip it (or toggle off) for hands-on coding sessions where you want Claude to actually create files, and for any pairing or onboarding context where readability matters more than cost.",[11,48,51,89,105],{"id":49,"title":50},"what-it-is","What it is",[16,52,53,60,61,64,65,68,69,68,72,75,76,68,79,68,82,68,85,88],{},[54,55,59],"a",{"href":56,"rel":57},"https:\u002F\u002Fgithub.com\u002FJuliusBrussee\u002Fcaveman",[58],"nofollow","Caveman"," is a Claude Code plugin (and skill bundle) by Julius Brussee. It registers a ",[35,62,63],{},"SessionStart"," hook that injects a terse \"caveman\" output style — short fragments, dropped articles, no preamble — and ships intensity levels (",[35,66,67],{},"lite",", ",[35,70,71],{},"full",[35,73,74],{},"ultra",", plus a wenyan (Classical Chinese) variant), a statusline badge, and four sub-skills: ",[35,77,78],{},"caveman-commit",[35,80,81],{},"caveman-review",[35,83,84],{},"caveman-compress",[35,86,87],{},"caveman-help",".",[16,90,91,92,95,96,99,100,104],{},"It's a ",[19,93,94],{},"style-only"," plugin — it doesn't reduce input tokens, only the model's prose output. Your ",[35,97,98],{},"CLAUDE.md",", file reads, and tool outputs still cost what they cost. 
That makes it a complement to the input-side hygiene we cover in ",[54,101,103],{"href":102},"\u002Ftokens","Token Mastery",", not a replacement.",[106,107,112],"pre",{"className":108,"code":109,"language":110,"meta":111,"style":111},"language-bash shiki shiki-themes github-light","claude plugin marketplace add JuliusBrussee\u002Fcaveman\nclaude plugin install caveman@caveman\n","bash","",[35,113,114,136],{"__ignoreMap":111},[115,116,119,123,127,130,133],"span",{"class":117,"line":118},"line",1,[115,120,122],{"class":121},"s7eDp","claude",[115,124,126],{"class":125},"sYBdl"," plugin",[115,128,129],{"class":125}," marketplace",[115,131,132],{"class":125}," add",[115,134,135],{"class":125}," JuliusBrussee\u002Fcaveman\n",[115,137,139,141,143,146],{"class":117,"line":138},2,[115,140,122],{"class":121},[115,142,126],{"class":125},[115,144,145],{"class":125}," install",[115,147,148],{"class":125}," caveman@caveman\n",[11,150,153,159,198,209,212,278,289,311],{"id":151,"title":152},"how-we-tested","How we tested",[16,154,155,156,158],{},"The bench lives in a scratch folder, not Claudeverse — testing inside this repo would let our ",[35,157,98],{}," and file reads dominate the input side and wash out output deltas.",[106,160,162],{"className":108,"code":161,"language":110,"meta":111,"style":111},"mkdir ~\u002Fcaveman-bench && cd ~\u002Fcaveman-bench\necho \"Scratch folder for benchmarking Claude Code output styles.\" > CLAUDE.md\n",[35,163,164,183],{"__ignoreMap":111},[115,165,166,169,172,176,180],{"class":117,"line":118},[115,167,168],{"class":121},"mkdir",[115,170,171],{"class":125}," ~\u002Fcaveman-bench",[115,173,175],{"class":174},"sgsFI"," && ",[115,177,179],{"class":178},"sYu0t","cd",[115,181,182],{"class":125}," ~\u002Fcaveman-bench\n",[115,184,185,188,191,195],{"class":117,"line":138},[115,186,187],{"class":178},"echo",[115,189,190],{"class":125}," \"Scratch folder for benchmarking Claude Code output styles.\"",[115,192,194],{"class":193},"sD7c4"," 
>",[115,196,197],{"class":125}," CLAUDE.md\n",[16,199,200,201,204,205,208],{},"Two sessions, same model (Opus), same effort level (high), three prompts each, fresh ",[35,202,203],{},"\u002Fclear"," between. Baseline first, then ",[35,206,207],{},"claude plugin install caveman@caveman"," and re-run.",[16,210,211],{},"The three prompts span the variance:",[213,214,215,231],"table",{},[216,217,218],"thead",{},[219,220,221,225,228],"tr",{},[222,223,224],"th",{},"Prompt",[222,226,227],{},"Shape",[222,229,230],{},"Why",[232,233,234,248,265],"tbody",{},[219,235,236,242,245],{},[237,238,239],"td",{},[19,240,241],{},"A — debug",[237,243,244],{},"React 15 child re-render walkthrough",[237,246,247],{},"Prose-heavy, no code generation",[219,249,250,255,262],{},[237,251,252],{},[19,253,254],{},"B — code",[237,256,257,258,261],{},"TS ",[35,259,260],{},"chunk\u003CT>"," utility + 6 Vitest cases",[237,263,264],{},"Code-heavy, control case",[219,266,267,272,275],{},[237,268,269],{},[19,270,271],{},"C — mixed",[237,273,274],{},"Cmd\u002FCtrl+K dialog in Nuxt 4 (files + a11y + manual tests)",[237,276,277],{},"Realistic mixed workload",[16,279,280,281,284,285,288],{},"For each, we captured: the full response verbatim, ",[35,282,283],{},"\u002Fcost"," output, and one statusline screenshot to confirm the ",[35,286,287],{},"[CAVEMAN]"," badge was active.",[40,290,293],{"title":291,"variant":292},"The statusline merge","tip",[16,294,295,296,299,300,302,303,306,307,310],{},"If you already use ",[35,297,298],{},"ccusage"," (or any custom statusline), Caveman's ",[35,301,63],{}," hook ",[19,304,305],{},"does not overwrite it"," — it just writes a flag file at ",[35,308,309],{},"~\u002F.claude\u002F.caveman-active",". Wrap both in a script that reads the flag and prepends the badge to your existing statusline output. 
Otherwise the plugin runs invisibly and you can't tell which mode is active.",[16,312,313],{},[314,315],"img",{"alt":316,"src":317},"Statusline showing the CAVEMAN badge to the left of the ccusage usage strip","\u002Fimages\u002Freviews\u002Fcaveman\u002Fstatusline-badge.png",[11,319,322,327,393,400,407,414,419,425,431,435,441,447,451,457],{"id":320,"title":321},"results","What we found",[323,324,326],"h3",{"id":325},"token-deltas-per-prompt-opus-output","Token deltas per prompt (Opus output)",[213,328,329,345],{},[216,330,331],{},[219,332,333,335,339,342],{},[222,334,224],{},[222,336,338],{"align":337},"right","Baseline output",[222,340,341],{"align":337},"Caveman output",[222,343,344],{"align":337},"Delta",[232,346,347,363,378],{},[219,348,349,352,355,358],{},[237,350,351],{},"A — explanation",[237,353,354],{"align":337},"~1.0k tokens",[237,356,357],{"align":337},"~1.3k tokens",[237,359,360],{"align":337},[19,361,362],{},"+30%",[219,364,365,367,370,373],{},[237,366,254],{},[237,368,369],{"align":337},"~2.6k tokens",[237,371,372],{"align":337},"~1.8k tokens",[237,374,375],{"align":337},[19,376,377],{},"−31%",[219,379,380,382,385,388],{},[237,381,271],{},[237,383,384],{"align":337},"~6.3k tokens",[237,386,387],{"align":337},"~4.0k tokens",[237,389,390],{"align":337},[19,391,392],{},"−37%",[16,394,395,396,399],{},"Session totals: ",[19,397,398],{},"$0.34 baseline → $0.22 Caveman"," (~34% cheaper end-to-end).",[16,401,402,403,406],{},"The Prompt A reversal was the surprise. We expected the biggest savings on the most prose-heavy task. What actually happened: Caveman covered more causes (7 vs. 6) with denser sentences, then used the saved budget to add more confirm\u002Ffix sub-points. Caveman didn't shrink the answer — it rewrote it tersely and ",[19,404,405],{},"expanded the scope"," with the saved tokens. 
That's a useful finding on its own: \"denser per token\" doesn't always mean \"fewer tokens.\"",[323,408,410,411,413],{"id":409},"side-by-side-cost-output","Side-by-side ",[35,412,283],{}," output",[415,416,418],"h4",{"id":417},"prompt-a-explanation","Prompt A — explanation",[16,420,421],{},[314,422],{"alt":423,"src":424},"Baseline \u002Fcost output for Prompt A — explanation walkthrough","\u002Fimages\u002Freviews\u002Fcaveman\u002Fbaseline-A-cost.png",[16,426,427],{},[314,428],{"alt":429,"src":430},"Caveman \u002Fcost output for Prompt A — denser per token but ~30% larger","\u002Fimages\u002Freviews\u002Fcaveman\u002Fcaveman-A-cost.png",[415,432,434],{"id":433},"prompt-b-code","Prompt B — code",[16,436,437],{},[314,438],{"alt":439,"src":440},"Baseline \u002Fcost output for Prompt B — utility + tests, files written to disk","\u002Fimages\u002Freviews\u002Fcaveman\u002Fbaseline-B-cost.png",[16,442,443],{},[314,444],{"alt":445,"src":446},"Caveman \u002Fcost output for Prompt B — same code, dumped inline, no Write tool call","\u002Fimages\u002Freviews\u002Fcaveman\u002Fcaveman-B-cost.png",[415,448,450],{"id":449},"prompt-c-mixed","Prompt C — mixed",[16,452,453],{},[314,454],{"alt":455,"src":456},"Baseline \u002Fcost output for Prompt C — full Nuxt search-dialog walkthrough","\u002Fimages\u002Freviews\u002Fcaveman\u002Fbaseline-C-cost.png",[16,458,459],{},[314,460],{"alt":461,"src":462},"Caveman \u002Fcost output for Prompt C — same coverage, ~37% fewer output tokens","\u002Fimages\u002Freviews\u002Fcaveman\u002Fcaveman-C-cost.png",[11,464,467,470,487,493,506,533],{"id":465,"title":466},"regression","The file-write regression",[16,468,469],{},"Prompt B is where the headline finding lives, and it isn't a token-count problem.",[16,471,472,475,476,478,479,482,483,486],{},[19,473,474],{},"Baseline behavior."," Claude used the ",[35,477,37],{}," tool, created ",[35,480,481],{},"chunk.ts"," and ",[35,484,485],{},"chunk.test.ts"," on disk, then confirmed in 
chat:",[488,489,490],"blockquote",{},[16,491,492],{},"Created chunk.ts and chunk.test.ts. The implementation uses slice in a stride loop (O(n), no mutation of input), and the test for the throw case covers both 0 and a negative value since \"size \u003C 1\" includes both.",[16,494,495,498,499,505],{},[19,496,497],{},"Caveman behavior."," Same prompt, same model, same folder. Claude printed the full code in chat and ",[19,500,501,502,504],{},"did not call the ",[35,503,37],{}," tool"," — no files appeared on disk. The code itself was equivalent (the throw test even covered more cases), but the workflow shifted: a \"write me a utility\" prompt produced a code dump in the terminal instead of files in the repo.",[40,507,510],{"title":508,"variant":509},"Why this happens (best guess)","warning",[16,511,512,513,517,518,521,522,525,526,529,530,532],{},"Caveman's style instruction nudges Claude toward \"answer first, terse, no preamble.\" That style cue appears to compete with the model's tool-use bias — when the answer is \"here is the code,\" terse Claude leans toward ",[514,515,516],"em",{},"printing"," rather than ",[514,519,520],{},"doing",". 
We did not test whether ",[35,523,524],{},"\u002Fcaveman lite"," or ",[35,527,528],{},"\u002Fcaveman ultra"," changes this; if you adopt Caveman for coding work, verify your file-creation prompts still trigger ",[35,531,37],{}," calls before you trust it.",[16,534,535],{},"This is the kind of caveat the README doesn't surface, and it's the headline reason Caveman shouldn't be a default in coding sessions.",[11,537,540,651],{"id":538,"title":539},"when","When it helps \u002F when it hurts",[213,541,542,554],{},[216,543,544],{},[219,545,546,549,552],{},[222,547,548],{},"Scenario",[222,550,14],{"align":551},"center",[222,553,230],{},[232,555,556,569,581,593,606,619,638],{},[219,557,558,561,566],{},[237,559,560],{},"Solo debugging walkthroughs",[237,562,563],{"align":551},[19,564,565],{},"Helps",[237,567,568],{},"Denser per token, easy to skim",[219,570,571,574,578],{},[237,572,573],{},"Architecture \u002F \"explain X\" prompts",[237,575,576],{"align":551},[19,577,565],{},[237,579,580],{},"Same coverage, less verbose",[219,582,583,586,590],{},[237,584,585],{},"Mixed plan + code prompts",[237,587,588],{"align":551},[19,589,565],{},[237,591,592],{},"Saw the largest absolute savings (37%)",[219,594,595,598,603],{},[237,596,597],{},"Pure code generation",[237,599,600],{"align":551},[19,601,602],{},"Mixed",[237,604,605],{},"Real token savings, but file-write regression",[219,607,608,611,616],{},[237,609,610],{},"Pairing \u002F teaching \u002F onboarding",[237,612,613],{"align":551},[19,614,615],{},"Hurts",[237,617,618],{},"Caveman prose reads great solo and terribly when someone else has to act on it",[219,620,621,624,628],{},[237,622,623],{},"Any session relying on tool calls",[237,625,626],{"align":551},[19,627,615],{},[237,629,630,631,633,634,637],{},"Style cue can suppress ",[35,632,37],{},"\u002F",[35,635,636],{},"Edit"," usage",[219,639,640,643,648],{},[237,641,642],{},"Junior devs learning Claude 
Code",[237,644,645],{"align":551},[19,646,647],{},"Skip",[237,649,650],{},"Readability matters more than cost at this stage",[16,652,653],{},"Re-readability is the gut check. Run a Caveman response past a teammate who didn't write the prompt: if they can act on it, great. If they have to ask \"wait, what does this mean?\", you're trading clarity for cost — and clarity usually wins long-term.",[11,655,658,684,690,709,712,924,952],{"id":656,"title":657},"install","Try it yourself",[106,659,660],{"className":108,"code":109,"language":110,"meta":111,"style":111},[35,661,662,674],{"__ignoreMap":111},[115,663,664,666,668,670,672],{"class":117,"line":118},[115,665,122],{"class":121},[115,667,126],{"class":125},[115,669,129],{"class":125},[115,671,132],{"class":125},[115,673,135],{"class":125},[115,675,676,678,680,682],{"class":117,"line":138},[115,677,122],{"class":121},[115,679,126],{"class":125},[115,681,145],{"class":125},[115,683,148],{"class":125},[16,685,686,687,689],{},"Confirm the plugin is active by checking the flag file the ",[35,688,63],{}," hook writes:",[106,691,693],{"className":108,"code":692,"language":110,"meta":111,"style":111},"cat ~\u002F.claude\u002F.caveman-active\n# expected: full   (or whatever mode is active)\n",[35,694,695,703],{"__ignoreMap":111},[115,696,697,700],{"class":117,"line":118},[115,698,699],{"class":121},"cat",[115,701,702],{"class":125}," ~\u002F.claude\u002F.caveman-active\n",[115,704,705],{"class":117,"line":138},[115,706,708],{"class":707},"sAwPA","# expected: full   (or whatever mode is active)\n",[16,710,711],{},"If you already have a custom statusline, Caveman won't overwrite it — wrap your existing script and prepend the badge yourself:",[106,713,716],{"className":108,"code":714,"filename":715,"language":110,"meta":111,"style":111},"#!\u002Fusr\u002Fbin\u002Fenv bash\n# Read the active caveman mode (if any) and prepend a colored badge\n# to whatever your existing statusline command 
emits.\nmode_file=\"$HOME\u002F.claude\u002F.caveman-active\"\nbadge=\"\"\nif [ -f \"$mode_file\" ]; then\n  mode=$(tr '[:lower:]' '[:upper:]' \u003C \"$mode_file\" | tr -d '[:space:]')\n  if [ -n \"$mode\" ]; then\n    badge=$'\\033[38;5;208m['\"CAVEMAN${mode:+:$mode}\"$']\\033[0m '\n  fi\nfi\nprintf \"%s%s\" \"$badge\" \"$(npx ccusage statusline 2>\u002Fdev\u002Fnull)\"\n","~\u002F.claude\u002Fstatusline.sh",[35,717,718,723,728,734,752,763,789,832,854,881,887,893],{"__ignoreMap":111},[115,719,720],{"class":117,"line":118},[115,721,722],{"class":707},"#!\u002Fusr\u002Fbin\u002Fenv bash\n",[115,724,725],{"class":117,"line":138},[115,726,727],{"class":707},"# Read the active caveman mode (if any) and prepend a colored badge\n",[115,729,731],{"class":117,"line":730},3,[115,732,733],{"class":707},"# to whatever your existing statusline command emits.\n",[115,735,737,740,743,746,749],{"class":117,"line":736},4,[115,738,739],{"class":174},"mode_file",[115,741,742],{"class":193},"=",[115,744,745],{"class":125},"\"",[115,747,748],{"class":174},"$HOME",[115,750,751],{"class":125},"\u002F.claude\u002F.caveman-active\"\n",[115,753,755,758,760],{"class":117,"line":754},5,[115,756,757],{"class":174},"badge",[115,759,742],{"class":193},[115,761,762],{"class":125},"\"\"\n",[115,764,766,769,772,775,778,781,783,786],{"class":117,"line":765},6,[115,767,768],{"class":193},"if",[115,770,771],{"class":174}," [ ",[115,773,774],{"class":193},"-f",[115,776,777],{"class":125}," \"",[115,779,780],{"class":174},"$mode_file",[115,782,745],{"class":125},[115,784,785],{"class":174}," ]; ",[115,787,788],{"class":193},"then\n",[115,790,792,795,797,800,802,805,808,811,813,815,817,820,823,826,829],{"class":117,"line":791},7,[115,793,794],{"class":174},"  mode",[115,796,742],{"class":193},[115,798,799],{"class":174},"$(",[115,801,219],{"class":121},[115,803,804],{"class":125}," '[:lower:]'",[115,806,807],{"class":125}," '[:upper:]'",[115,809,810],{"class":193}," 
\u003C",[115,812,777],{"class":125},[115,814,780],{"class":174},[115,816,745],{"class":125},[115,818,819],{"class":193}," |",[115,821,822],{"class":121}," tr",[115,824,825],{"class":178}," -d",[115,827,828],{"class":125}," '[:space:]'",[115,830,831],{"class":174},")\n",[115,833,835,838,840,843,845,848,850,852],{"class":117,"line":834},8,[115,836,837],{"class":193},"  if",[115,839,771],{"class":174},[115,841,842],{"class":193},"-n",[115,844,777],{"class":125},[115,846,847],{"class":174},"$mode",[115,849,745],{"class":125},[115,851,785],{"class":174},[115,853,788],{"class":193},[115,855,857,860,862,865,868,871,874,876,878],{"class":117,"line":856},9,[115,858,859],{"class":174},"    badge",[115,861,742],{"class":193},[115,863,864],{"class":125},"$'\\033[38;5;208m['\"CAVEMAN${",[115,866,867],{"class":174},"mode",[115,869,870],{"class":193},":",[115,872,873],{"class":125},"+",[115,875,870],{"class":193},[115,877,847],{"class":174},[115,879,880],{"class":125},"}\"$']\\033[0m '\n",[115,882,884],{"class":117,"line":883},10,[115,885,886],{"class":193},"  fi\n",[115,888,890],{"class":117,"line":889},11,[115,891,892],{"class":193},"fi\n",[115,894,896,899,902,904,907,909,912,915,918,921],{"class":117,"line":895},12,[115,897,898],{"class":178},"printf",[115,900,901],{"class":125}," \"%s%s\"",[115,903,777],{"class":125},[115,905,906],{"class":174},"$badge",[115,908,745],{"class":125},[115,910,911],{"class":125}," \"$(",[115,913,914],{"class":121},"npx",[115,916,917],{"class":125}," ccusage statusline ",[115,919,920],{"class":193},"2>",[115,922,923],{"class":125},"\u002Fdev\u002Fnull)\"\n",[106,925,930],{"className":926,"code":927,"filename":928,"language":929,"meta":111,"style":111},"language-jsonc shiki shiki-themes github-light","\"statusLine\": {\n  \"type\": \"command\",\n  \"command\": \"bash 
~\u002F.claude\u002Fstatusline.sh\"\n}\n","~\u002F.claude\u002Fsettings.json","jsonc",[35,931,932,937,942,947],{"__ignoreMap":111},[115,933,934],{"class":117,"line":118},[115,935,936],{},"\"statusLine\": {\n",[115,938,939],{"class":117,"line":138},[115,940,941],{},"  \"type\": \"command\",\n",[115,943,944],{"class":117,"line":730},[115,945,946],{},"  \"command\": \"bash ~\u002F.claude\u002Fstatusline.sh\"\n",[115,948,949],{"class":117,"line":736},[115,950,951],{},"}\n",[16,953,954,955,958,959,961,962,965,966,969],{},"Switch intensity with ",[35,956,957],{},"\u002Fcaveman ultra"," (or ",[35,960,67],{},") and the badge updates accordingly. If the ",[35,963,964],{},".caveman-active"," file isn't there after a fresh session, run ",[35,967,968],{},"\u002Fcaveman"," once manually to force-write it.",[11,971,974],{"id":972,"title":973},"caveats","Caveats",[975,976,977,996,1008,1019],"ul",{},[978,979,980,983,984,987,988,991,992,995],"li",{},[19,981,982],{},"The 31–37% savings are a floor, not a ceiling."," Our bench ran with ",[35,985,986],{},"effortLevel: high",", which generates lots of extended-thinking tokens that Caveman doesn't compress. At ",[35,989,990],{},"medium"," or default effort — where most users live — prose makes up a larger share of total output, so deltas should be ",[514,993,994],{},"larger"," than what we measured here.",[978,997,998,1001,1002,1004,1005,1007],{},[19,999,1000],{},"Style hooks aren't a tokens-mastery substitute."," Caveman touches output only. Your ",[35,1003,98],{}," diet, file-read hygiene, and model routing (covered in ",[54,1006,103],{"href":102},") are still the bigger levers.",[978,1009,1010,1013,1014,1018],{},[19,1011,1012],{},"Plugin freshness matters."," Community projects come and go. Verify the ",[54,1015,1017],{"href":56,"rel":1016},[58],"GitHub repo"," is still maintained before adopting on a team.",[978,1020,1021,1024],{},[19,1022,1023],{},"One-data-point caveat."," This bench is one user, one model, three prompts. 
Your workload — especially if it skews toward generation, search, or long tool-call chains — may differ. The methodology section is here so you can re-run it on your own setup.",[1026,1027,1028],"style",{},"html pre.shiki code .s7eDp, html code.shiki .s7eDp{--shiki-default:#6F42C1}html pre.shiki code .sYBdl, html code.shiki .sYBdl{--shiki-default:#032F62}html .default .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}html .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}html pre.shiki code .sgsFI, html code.shiki .sgsFI{--shiki-default:#24292E}html pre.shiki code .sYu0t, html code.shiki .sYu0t{--shiki-default:#005CC5}html pre.shiki code .sD7c4, html code.shiki .sD7c4{--shiki-default:#D73A49}html pre.shiki code .sAwPA, html code.shiki .sAwPA{--shiki-default:#6A737D}",{"title":111,"searchDepth":138,"depth":138,"links":1030},[1031,1032],{"id":325,"depth":730,"text":326},{"id":409,"depth":730,"text":1033},"Side-by-side \u002Fcost output","A Claude Code plugin that rewrites Claude's output style into terse, \"caveman\" prose. We benchmarked it across explanation-heavy, code-heavy, and mixed prompts — here's where it actually pays off and where it quietly breaks your workflow.","8 min","md","Field review","LucideMicroscope","Caveman compresses Claude Code's output style. The README claims 65–75% token savings. 
Our three-prompt benchmark on Opus shows real savings on code and mixed work, a ~30% increase on pure prose, and one workflow regression worth knowing about.","2026-04-25",{},true,{"title":1044,"path":1045},"Workshops","\u002Fworkshops","\u002Freviews\u002Fcaveman",{"title":1048,"path":1049},"Resources","\u002Fresources",{"subject":59,"subjectLink":56,"category":1051,"version":1052,"lastTested":1040,"verdictTone":1053,"verdict":1054,"tags":1055},"Claude Code plugin","caveman@caveman (Apr 2026)","mixed","Cuts output tokens 31–37% on code and mixed prompts, but suppressed `Write`-tool calls on a \"build me a utility\" prompt — solo explanatory sessions only, not hands-on coding workflows.",[1056,1057,1058,1059],"plugin","tokens","output-style","statusline",{"title":1061,"description":1062,"keywords":1063,"proficiencyLevel":1070,"timeRequired":1071},"Caveman plugin review — real benchmarks for Claude Code's terse output mode","We tested Caveman across explanation, code, and mixed prompts on Claude Opus. Output-token deltas, a workflow regression, and an honest \"when to use it\" verdict.",[1064,1065,1066,1067,1068,1069],"claude code plugin","caveman plugin","claude code output style","token savings","julius brussee caveman","claude code plugin review","Intermediate","PT8M","reviews\u002Fcaveman",[1074,1075,1076,1077,1078,1079,1080,1081],{"id":13,"title":14,"level":138},{"id":49,"title":50,"level":138},{"id":151,"title":152,"level":138},{"id":320,"title":321,"level":138},{"id":465,"title":466,"level":138},{"id":538,"title":539,"level":138},{"id":656,"title":657,"level":138},{"id":972,"title":973,"level":138},"uZfq2dosLsq5x4-ZWOkrxvUkNAL1z5fgeSYEiyw3a4o",1777109530012]