Project ideas from Hacker News discussions.

Tiger Style: Coding philosophy (2024)

πŸ“ Discussion Summary (Click to expand)

The discussion revolves around a proposed set of strict coding guidelines, provoking disagreement on the nature and application of such rules.

Here are the three most prevalent themes:

1. Function Length Limits are Arbitrary/Context-Dependent Guidelines, Not Absolute Rules

Many users argue that the suggested 70-line limit for functions is an arbitrary metric that ignores why code is structured the way it is. While many agree that overly long functions are often a code smell, they contend that rigid adherence to the limit causes more harm than good, especially for sequential logic.

"The length of functions in terms of line count has absolutely nothing to do with 'a more modular and maintainable codebase', as explained in the manifesto." β€” keyle

"Maturity is knowing when the guideline doesn’t apply. But if you find yourself constantly writing a lot of long functions, it’s a good signal something is off." β€” stouset

2. Conflict Between System Clarity (Macro View) and Part Clarity (Micro View)

A major debate centers on whether conceptual locality or sequential flow provides better understanding. Critics argue that splitting sequential, well-ordered logic into many named small functions ("splitting for splitting's sake") increases cognitive load by forcing the reader to jump between locations to follow the process. Proponents argue that named abstractions, regardless of location, are far superior to reading long, complex blocks of code.

"Now you have 12 functions, scattered over multiple packages, and the order of things is all confused, you have to debug through to see where it goes. You've just increased the cognitive load of dealing with your product by a factor of 12." β€” keyle

"With properly factored code, you only jump away when you need to... The cognitive load is decreased significantly. Because now the functions have names; the steps in the process are labelled." β€” zahlman

3. Practicality and Context (Hobby vs. Commercial/Critical Systems)

Several users question the feasibility and necessity of extreme rigor (such as aiming for "zero technical debt") in time-sensitive commercial environments, contrasting it with the requirements of safety-critical software (such as financial databases or aerospace systems). The consensus is that rigid adherence to such demanding standards is often a "privilege" of projects where the cost of failure is exceptionally high.

"I would not hire a monk of TigerStyle. We'd get nothing done! This amount of coding perfection is best for hobby projects without deadlines." β€” baalimago

"This is the kind of development that one needs for safety critical applications. E.g., nuclear power plants or airplane control software. I don't think it is economically feasible for less critical software." β€” cjfd


πŸš€ Project Ideas

Code Structure Auditor (CS-Auditor)

Summary

  • A static analysis tool designed to go beyond simple line counting when judging function length.
  • It assesses the long-term maintainability of code structure by analyzing the cohesion and context-switching costs of moderately long functions (e.g., 70-150 lines).
  • Core Value Proposition: Provide nuanced feedback on code structure beyond arbitrary line limits, helping developers decide when to factor code out for testability/readability vs. when to keep sequential logic together for flow comprehension.

Details

  • Target Audience: Developers who push back against dogmatic line limits (e.g., 70 lines) but agree that very long functions are problematic, especially functions handling sequential, stateful pipelines such as transaction processing.
  • Core Feature: A linter plugin that analyzes functions between 70 and 150 lines and scores them on two metrics: 1. Variable Span Complexity: how many variables are defined early and only used or mutated much later in the function. 2. Call Ratio: the ratio of direct function calls (delegation) to local assignments and control flow within the block; a low ratio suggests a "scripting block" that might benefit from abstraction into named sub-functions (even private/local ones). A sketch of both metrics follows this table.
  • Tech Stack: Language-agnostic core using AST parsing (e.g., Tree-sitter grammars for popular languages such as Python, TypeScript, Rust, and Go). Integrates as a GitHub Action or an IDE linting rule.
  • Difficulty: Medium (requires building analyzers for multiple languages, but leverages existing parsing infrastructure).
  • Monetization: Hobby
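
To make the two metrics concrete, here is a minimal sketch in Python using only the standard-library ast module (a production version would use Tree-sitter for language-agnostic parsing). The 70-150 line window comes from the idea above; the 30-line "long span" cutoff, the scoring output, and all names are illustrative assumptions.

```python
# Minimal sketch of CS-Auditor's two metrics using Python's stdlib ast module.
# Thresholds and the report format below are illustrative assumptions.
import ast
import sys

MIN_LINES, MAX_LINES = 70, 150  # only audit "moderately long" functions

def audit_function(fn: ast.FunctionDef) -> dict:
    length = (fn.end_lineno or fn.lineno) - fn.lineno + 1

    # Variable span complexity: distance (in lines) between where a name is
    # first assigned and where it is last read inside the function body.
    first_def, last_use = {}, {}
    for node in ast.walk(fn):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                first_def.setdefault(node.id, node.lineno)
            elif isinstance(node.ctx, ast.Load):
                last_use[node.id] = max(last_use.get(node.id, 0), node.lineno)
    spans = [last_use[n] - line for n, line in first_def.items()
             if n in last_use and last_use[n] > line]
    long_spans = sum(1 for s in spans if s > 30)  # assumed "long span" cutoff

    # Call ratio: delegation (calls) vs. local work (assignments + branches).
    calls = sum(isinstance(n, ast.Call) for n in ast.walk(fn))
    local = sum(isinstance(n, (ast.Assign, ast.AugAssign, ast.If, ast.For,
                               ast.While)) for n in ast.walk(fn))
    call_ratio = calls / max(local, 1)

    return {"name": fn.name, "lines": length,
            "long_variable_spans": long_spans, "call_ratio": round(call_ratio, 2)}

def audit_file(path: str) -> None:
    tree = ast.parse(open(path, encoding="utf-8").read())
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            report = audit_function(node)
            if MIN_LINES <= report["lines"] <= MAX_LINES:
                print(report)

if __name__ == "__main__":
    audit_file(sys.argv[1])
```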

Notes

  • Why HN commenters would love it: It directly addresses the nuanced debate about function length, supporting both sides by focusing on the cognitive load imposed by sprawl rather than on raw line count. It validates the experience-backed view that inherently sequential logic (like the transaction processing described by jackfranklyn) sometimes warrants long functions, provided the internal organization (naming, variable use) is sound.
  • Potential for discussion or practical utility: This tool shifts the discussion from "Is 70 lines too short?" to "Is the complexity inside this 95-line function justified by its sequential nature, or is it spaghetti that needs naming?" This could generate productive debate on code smell detection methodology.

Contextual Testability Analyzer (CTA)

Summary

  • A service designed to help developers weigh the trade-off between extracting functions for unit-test isolation and maintaining tight contextual coupling in complex stateful workflows.
  • It helps architecturally separate pure logic from orchestration by suggesting extraction points where logic is reusable or easily isolated.
  • Core Value Proposition: Quantify the ease of isolating logic into testable units, addressing the pain point raised by jackfranklyn regarding complex financial pipelines where testing requires tracing state across many outputs.

Details

  • Target Audience: Developers working on complex business logic, especially state machines, transaction processors, or data pipelines where conditional flow is abundant.
  • Core Feature: Analyzes large functions or sequential blocks, flagging sections that meet criteria for testability isolation (e.g., a high ratio of input parameters/context accesses yielding a single derived output) versus sections that manage critical-path dependencies and conditional branching (orchestration). It suggests alternative structures such as the Typestate pattern (mentioned by jrpelkonen) for enforcing sequencing at compile time, providing concrete refactoring blueprints; see the sketch after this table.
  • Tech Stack: Backend analysis service (e.g., Rust/Go for performance) offering an API endpoint or CLI. Frontend visualization of the suggested separation (e.g., "Logic Block A" vs. "Orchestrator Flow B").
  • Difficulty: High (requires deep semantic analysis of control flow and state tracking across assignments and parameters).
  • Monetization: Hobby
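
To illustrate the kind of refactoring blueprint the CTA might emit, here is a minimal, hypothetical Python sketch that separates pure steps from orchestration and enforces sequencing typestate-style: each pipeline stage is a distinct immutable type, so a static checker such as mypy rejects out-of-order calls. All class and function names are invented for illustration.

```python
# Hypothetical blueprint for a payment pipeline: pure steps return new
# immutable states, each stage is a distinct type, and a static checker
# (e.g. mypy) flags out-of-order calls. Names are illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class DraftTransaction:
    amount_cents: int
    currency: str

@dataclass(frozen=True)
class ValidatedTransaction:
    amount_cents: int
    currency: str

@dataclass(frozen=True)
class SettledTransaction:
    amount_cents: int
    currency: str
    ledger_id: str

# --- Pure logic: easy to unit-test in isolation ----------------------------
def validate(tx: DraftTransaction) -> ValidatedTransaction:
    if tx.amount_cents <= 0:
        raise ValueError("amount must be positive")
    return ValidatedTransaction(tx.amount_cents, tx.currency)

def settle(tx: ValidatedTransaction, ledger_id: str) -> SettledTransaction:
    # Only accepts a ValidatedTransaction: settling a draft is a type error.
    return SettledTransaction(tx.amount_cents, tx.currency, ledger_id)

# --- Orchestration: the only place that touches I/O or mutable state -------
def process(draft: DraftTransaction) -> SettledTransaction:
    validated = validate(draft)
    return settle(validated, ledger_id="ledger-001")

# settle(DraftTransaction(100, "USD"), "ledger-001")  # rejected by mypy
```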

Notes

  • Why HN commenters would love it: It specifically targets the difficulty of debugging intertwined state mutation, as opposed to composing immutable/pure functions, in complex scenarios. It offers a concrete utility for enforcing separation of concerns ("Focus on separating pure code from stateful code," as multiple users put it) by showing where the boundary should be drawn for maximum testability benefit without sacrificing sequential clarity.
  • Potential for discussion or practical utility: This could form the basis for "Testable Unit Proposals," where a developer submits a code block and the CTA suggests the ideal boundary for testing, potentially sparking discussions on applying type-system features such as typestate (as in Rust) or dependent types in mainstream languages.

Dogmatic Linter Rule Suppression Explainer

Summary

  • A lightweight tool that hooks into existing CI/CD linting pipelines (enforcing rules like LOC limits) to mandate contextual justification for suppressing warnings/errors.
  • Instead of just allowing a comment like // lgtm_suppress, it requires a brief, standardized markdown justification that weighs the design principle being overridden against the immediate contextual constraint.
  • Core Value Proposition: Combat the "dogmatic rule enforcement" criticized by users (jval43, Kwpolska) by requiring developers to articulate why an exception (like a long function or high complexity score) is made, shifting the culture from blind compliance to reasoned exception.

Details

  • Target Audience: Development teams that use aggressive, hard-enforced style linters but find themselves fighting the tools when specific business domains require exceptions.
  • Core Feature: Intercepts linter suppression directives. If a rule (e.g., max-lines) is suppressed, the system requires the suppressing comment to conform to a simple template: // EXCEPTION: [Principle Violated] - Justification: [Concise context explaining why the exception is needed, e.g., "Sequential Pipeline Clarity"]. The tool could even auto-generate a suggested justification from the file/function context. A minimal hook sketch follows this table.
  • Tech Stack: Node.js/Python script (simple CLI/pre-commit hook) that processes git diffs or file content against existing lint output and configuration.
  • Difficulty: Low (primarily text processing and integration with existing tooling such as ESLint/Prettier hooks).
  • Monetization: Hobby
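
Here is a minimal pre-commit-style sketch of the enforcement hook in Python. It assumes ESLint-style // eslint-disable suppression directives and the EXCEPTION template above; the exact comment grammar, file filtering, and git invocation are illustrative choices, not a finished design.

```python
#!/usr/bin/env python3
# Minimal sketch of the suppression explainer as a pre-commit hook: scan
# staged files for ESLint-style suppressions and fail unless the same or
# preceding line carries the EXCEPTION justification template.
import re
import subprocess
import sys

SUPPRESSION = re.compile(r"//\s*eslint-disable(?:-next-line)?\b")
JUSTIFICATION = re.compile(
    r"//\s*EXCEPTION:\s*\[.+?\]\s*-\s*Justification:\s*\[.+?\]"
)

def staged_files() -> list[str]:
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    )
    return [f for f in out.stdout.splitlines() if f.endswith((".js", ".ts"))]

def check(path: str) -> list[str]:
    errors = []
    with open(path, encoding="utf-8") as handle:
        lines = handle.readlines()
    for i, line in enumerate(lines):
        if SUPPRESSION.search(line):
            prev = lines[i - 1] if i > 0 else ""
            if not (JUSTIFICATION.search(prev) or JUSTIFICATION.search(line)):
                errors.append(f"{path}:{i + 1}: suppression lacks EXCEPTION justification")
    return errors

if __name__ == "__main__":
    problems = [e for f in staged_files() for e in check(f)]
    for problem in problems:
        print(problem, file=sys.stderr)
    sys.exit(1 if problems else 0)
```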

Notes

  • Why HN commenters would love it: This directly addresses the frustration with juniors applying rules "overzealously" (jval43) and the need for tooling that allows for exceptions (Hackbraten). It codifies the maturity (stouset) of knowing when a guideline doesn't apply, recording that justification in the repository permanently.
  • Potential for discussion or practical utility: This could become a popular DevOps utility. Developers would appreciate having a mechanism to document why they chose to deviate from style guides in specific engineering trade-off situations, preventing future debates about "why did someone write this messy function?"