1. Plug‑and‑play alternative LLMs
"Yes, from Claude Code themselves: https://code.claude.com/docs/en/llm-gateway" – theanonymousone
"ANTHROPIC_BASE_URL=\"https://api.deepseek.com/anthropic\" ANTHROPIC_AUTH_TOKEN=\"$OPENROUTER_API_KEY\" ANTHROPIC_DEFAULT_SONNET_MODEL=\"deepseek/deepseek-v4-flash\" CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC=1 claude" – jubilanti
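The quoted one-liner illustrates the env-var pattern for pointing Claude Code at an Anthropic-compatible gateway. A sketch of the same setup, spread over multiple lines for readability; note the original quote mixes a DeepSeek base URL with `$OPENROUTER_API_KEY`, so treat the exact endpoint, key, and model name as placeholders, not a verified working combination:

```shell
# Point Claude Code at an alternative Anthropic-compatible endpoint.
# All values below are placeholders taken from the quote above, not verified settings.
export ANTHROPIC_BASE_URL="https://api.deepseek.com/anthropic"       # any Anthropic-compatible gateway
export ANTHROPIC_AUTH_TOKEN="$PROVIDER_API_KEY"                      # API key for that provider (hypothetical variable name)
export ANTHROPIC_DEFAULT_SONNET_MODEL="deepseek/deepseek-v4-flash"   # model served by the gateway
export CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC=1                    # per the quote, suppress non-essential requests
claude
```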
2. Security & prompt‑logging worries
"It's not safe. Every first prompt you send is routed through their servers, logged, and they can use your data however they want" – esafak
"I could not find any evidence of prompt logging. The code is open; can you point me to it?" – esafak
3. Cost‑effectiveness of cheaper models
"I burned only 0.06 USD (I reckon how the same task would have cost me had I used e.g., amp)" – adonese
"DeepSeek v4 pro scores 96.4% on LiveCodeBench and costs $0.87/M output tokens (discount until 2026‑05‑31)" – deadbabe
4. Skepticism toward “vibe‑coded” slop and AI hype
"It's going to be real hard to find headlines that weren't vibe coded from here on out unfortunately." – 2ndorderthought
"Unless I actually know the author I assume everything here is vibeslop and full of mistakes." – SchemaLoad