Project ideas from Hacker News discussions.

Big GPUs don't need big PCs

πŸ“ Discussion Summary (Click to expand)

1. Low-Power Devices Suffice for Daily Tasks; Remote to Servers for Heavy Work

Mini PCs, Pis, or VMs handle browsing, coding, and light tasks, with desktops as remote servers for compute-intensive jobs.
"I should be running one of those $300 mini PCs at <20W... Just remote into my beefy workstation" - 3eb7988a1663
"optimal setup is to use a mini-PC as your personal computer and a full-size desktop as a server" - adrian_b
"uses proxmox VM with eGPU... It's more than enough" - ekropotin

2. Key Benefits: Low Power, Noise, and Cost Savings

Users praise <20W idle draw, silence, and efficiency for remote/solar setups.
"power draw is a huge win... 6W at idle... saving watts when using solar batteries" - themafia
"low noise. Many consider fan noise under load to be the most important property" - ivanjermakov
"$200 NUC has been good enough for like 15 years" - jasonwatkinspdx

3. eGPU/LLM Inference Viable on Pi; PCIe Bandwidth Not a Bottleneck, Multi-GPU Limited

A Pi plus eGPU works well for single-user LLM inference (little data crosses PCIe once weights are loaded), but layer-split multi-GPU setups leave all but one GPU idle at a time unless tensor parallelism or concurrent requests keep them busy.
"PCIe bandwidth really doesn't bottleneck LLM inference for single-user workloads" - yoan9224
"multi GPU setups are completely stalled unless... parallel [users]" - numpad0
"inter-layer transfer sizes are in kilobyte ranges and PCIe x1 is plenty" - numpad0
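The kilobyte claim is easy to sanity-check: in a pipeline (layer-split) deployment, each generated token only needs one hidden-state vector shipped across a GPU boundary. A back-of-envelope sketch, assuming a 7B-class model with a 4096-wide fp16 hidden state (illustrative numbers, not measurements):

```python
# Rough check of why PCIe x1 suffices for pipeline-split LLM inference.
# Figures below are illustrative assumptions (a 7B-class model), not benchmarks.

hidden_dim = 4096          # hidden-state width of the model
bytes_per_value = 2        # fp16 activations

# Data crossing a pipeline boundary per generated token: one hidden-state vector.
transfer_per_token_kb = hidden_dim * bytes_per_value / 1024
print(f"{transfer_per_token_kb:.0f} KB per token per boundary")  # 8 KB

# PCIe 3.0 x1 usable bandwidth, conservatively ~0.8 GB/s.
pcie_x1_gbps = 0.8
tokens_per_sec_link_limit = pcie_x1_gbps * 1e9 / (transfer_per_token_kb * 1024)
print(f"link alone supports ~{tokens_per_sec_link_limit:,.0f} tokens/s")
```

The link limit is orders of magnitude above any single-user generation rate, which matches the thread's conclusion; the bandwidth that does matter is the one-time weight load into VRAM.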


🚀 Project Ideas

LLM Hardware Optimizer

Summary

  • Aggregates and verifies LLM inference benchmarks (in the style of inferbench.com), adding real-time used-market prices from eBay/Kijiji, power-draw metrics, and multi-GPU configs.
  • Core value: Helps users find the cheapest tok/s-per-dollar setups for local LLMs, solving the "spec out a cheap rig" frustration (Eisenstein) and used-pricing gaps (kilpikaarna).

Details

  • Target Audience: Local LLM hobbyists and remote/off-grid users
  • Core Feature: Crowdsourced benchmarks with ML-verified submissions, eBay API price scraping, ROI calculator
  • Tech Stack: Next.js, PostgreSQL, Ollama for verification, Puppeteer for scraping
  • Difficulty: Medium
  • Monetization: Revenue-ready: freemium (basic free, pro with alerts)
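The ROI calculator could rank listings by tokens-per-second per dollar of ownership cost, folding in power draw. A minimal sketch, where the sample entries, prices, and throughput numbers are made up for illustration:

```python
# Minimal tok/s-per-dollar ranking sketch. All numbers below are illustrative,
# not real benchmarks or market prices.

def cost_of_ownership(price_usd, watts, kwh_price=0.15, hours=24 * 365):
    """Purchase price plus one year of electricity at the given rate."""
    return price_usd + watts / 1000 * hours * kwh_price

def rank_by_tok_per_dollar(listings):
    """Sort hardware listings by tokens/s per dollar of one-year cost, best first."""
    return sorted(
        listings,
        key=lambda l: l["tok_per_s"] / cost_of_ownership(l["price_usd"], l["watts"]),
        reverse=True,
    )

listings = [  # hypothetical used-market entries
    {"name": "RTX 3060 12GB", "price_usd": 220, "tok_per_s": 45, "watts": 170},
    {"name": "P40 24GB",      "price_usd": 180, "tok_per_s": 30, "watts": 250},
    {"name": "Mac mini M2",   "price_usd": 500, "tok_per_s": 20, "watts": 20},
]

for l in rank_by_tok_per_dollar(listings):
    score = l["tok_per_s"] / cost_of_ownership(l["price_usd"], l["watts"])
    print(l["name"], round(score, 4))
```

Folding electricity into the denominator is what surfaces the solar/off-grid angle from the thread: a cheap high-wattage card can lose to a pricier efficient one over a year of runtime.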

Notes

  • HN loves benchmarks: "Cool site" (nodja), "Nice! Though for older hardware... second hand market" (kilpikaarna).
  • High discussion potential on new GPUs/models; practical for solar users (themafia).

eGPU Passthrough Manager

Summary

  • Open-source tool for Proxmox/VM setups that auto-configures eGPU passthrough on Pi 5s and mini-PCs, fixes common BAR/PCIe issues, and adds one-click Wake-on-LAN for the backing server.
  • Core value: Enables "Pi 5 with high-end GPU via enclosure" (yoan9224) and "remote into beefy workstation" (3eb7988a1663) without tinkering.

Details

  • Target Audience: eGPU experimenters on low-power hosts (Pis, mini PCs)
  • Core Feature: VFIO/IOMMU automation scripts, PCIe bandwidth tester, RDP/SSH fallback for IDEs
  • Tech Stack: Bash/Python, Proxmox API, pciutils, llama.cpp integration
  • Difficulty: Medium
  • Monetization: Hobby
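At its core, such a tool scripts the standard sysfs driver_override sequence for rebinding a GPU to vfio-pci. A dry-run sketch that only generates the writes rather than performing them (the PCI address and current driver in the example are hypothetical):

```python
# Sketch of the sysfs writes a passthrough tool would automate to rebind a GPU
# to vfio-pci using the driver_override mechanism. This is a dry-run plan
# generator; the PCI address and driver name used below are hypothetical.

def vfio_bind_plan(pci_addr, current_driver="nvidia"):
    """Return the ordered (sysfs path, value) writes to move a device to vfio-pci."""
    return [
        # 1. Pin the device to vfio-pci for the next driver probe.
        (f"/sys/bus/pci/devices/{pci_addr}/driver_override", "vfio-pci"),
        # 2. Detach the device from its current driver.
        (f"/sys/bus/pci/drivers/{current_driver}/unbind", pci_addr),
        # 3. Re-probe so the override takes effect and vfio-pci claims it.
        ("/sys/bus/pci/drivers_probe", pci_addr),
    ]

for path, value in vfio_bind_plan("0000:01:00.0"):
    print(f"echo {value} > {path}")
```

A real manager would first verify IOMMU groups and kernel modules, and this is where Pi 5 quirks (e.g. BAR sizing) would need device-specific workarounds; the sequence above is only the happy path.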

Notes

  • Addresses "eg. BAR problems" (yjftsjthsd-h), "proxmox VM with eGPU" (ekropotin); HN would share configs.
  • Utility for tidy desks/soundproof closets (reactordev, loeg).

Multi-GPU Agent Scheduler

Summary

  • Software that splits LLM tasks into parallel agents (a manager plus delegates) across cheap multi-GPU setups, sidestepping sequential layer stalls without requiring tensor parallelism.
  • Core value: Keeps otherwise-idle GPUs busy in pipeline setups ("GPUs sitting idle" - yoan9224) by serving concurrent users/tasks on crypto-mining-style boards.

Details

  • Target Audience: Multi-GPU home labbers on x1 PCIe lanes
  • Core Feature: Prompt decomposition to agents, round-robin dispatch, KV cache sharing via PCIe DMA
  • Tech Stack: Rust + llama.cpp/vLLM, Tokio for async, EXL2 quantization
  • Difficulty: High
  • Monetization: Revenue-ready: SaaS for teams ($10/mo/host)
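The manager/delegates idea from the thread can be sketched as a round-robin dispatcher where each GPU hosts its own full-model worker and the manager fans independent subtasks out to them. In this sketch, decompose() and run_on_gpu() are stand-ins for real model calls, not actual inference:

```python
# Sketch of manager/delegate dispatch: a "manager" decomposes a prompt into
# independent subtasks and round-robins them across per-GPU workers, so each
# GPU runs its own full-model instance instead of idling in a pipeline.
# decompose() and run_on_gpu() are hypothetical stand-ins for real model calls.

from itertools import cycle
from concurrent.futures import ThreadPoolExecutor

def decompose(prompt):
    """Hypothetical manager step: split a task into independent subtasks."""
    return [f"{prompt} [part {i}]" for i in range(4)]

def run_on_gpu(gpu_id, subtask):
    """Stand-in for a llama.cpp/vLLM inference call pinned to one GPU."""
    return f"gpu{gpu_id}: {subtask} -> done"

def dispatch(prompt, gpu_ids):
    """Round-robin subtasks across GPUs and gather results concurrently."""
    assignments = list(zip(cycle(gpu_ids), decompose(prompt)))
    with ThreadPoolExecutor(max_workers=len(gpu_ids)) as pool:
        futures = [pool.submit(run_on_gpu, g, t) for g, t in assignments]
        return [f.result() for f in futures]

for line in dispatch("refactor module", gpu_ids=[0, 1]):
    print(line)
```

This trades inter-GPU bandwidth for VRAM (every GPU holds full weights, so quantization matters), which is exactly the regime where x1 mining boards are viable: subtasks exchange only prompts and results, never layer activations.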

Notes

  • Quotes: "break up tasks... multiple tasks concurrently... 'manager' and 'delegated engineers'" (numpad0), "agents" (zozbot234).
  • Sparks HN debates on clustering (syntaxing, sgt); practical for production inference.
