Here is a summary of the three most prevalent themes from the Hacker News discussion on NanoLang:
1. Feasibility of LLMs on Novel Languages
The core debate centers on whether an LLM can effectively learn and generate code for a new language with minimal training data. Proponents argue that LLMs understand grammar structures and can succeed given proper documentation and iterative feedback, while skeptics contend that, without a massive corpus of existing code, the models lack the statistical grounding to generate correct output.
- LLMs can learn via grammar and feedback: "LLMs understand grammars really really well. If you have a grammar for your language the LLM can one-shot perfect code." – nl
- Without training data, LLMs are limited: "Where are the millions of lines of code needed to train an LLM in this Nanolang? LLMs are like parrots. If you don't give them data to extract the statistical probability of the next word, you will not get any useful output." – Surac
- Empirical evidence suggests success with context and tooling: "The thing that really unlocked it was Claude being able to run a file listing... and then start picking through the examples that were most relevant to figuring out the syntax." – simonw
2. The Value and Practicality of Required Testing
NanoLang's requirement that every function have tests at compile time sparked discussion on whether this is a beneficial constraint or unnecessary noise. Some view it as a way to enforce quality and catch errors early, while others worry about "code pollution" and the sheer volume of boilerplate test code required for every single function. A rough sketch of the idea follows the quotes below.
- Concerns about overhead and noise: "I think that a real world file of source code will be either completely polluted by tests (they are way longer than the actual code they test) or become... [boilerplate] to please the compiler." – pmontra
- The requirement is seen as a novel feature: "One novel part here is every function is required to have tests that run at compile time." – spicybright
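
To make the trade-off concrete, here is a minimal Python analogue of the idea (not NanoLang syntax, which this summary does not show): the test lives right next to the function it covers, and a build step fails when the test fails, roughly mirroring a compile-time check.

```python
# Rough analogue only: the per-function test is embedded next to the code
# it exercises, and running this file acts as the "compile-time" gate by
# exiting non-zero when an embedded test fails.
import doctest


def add(a: int, b: int) -> int:
    """Return the sum of a and b.

    >>> add(2, 3)
    5
    """
    return a + b


if __name__ == "__main__":
    failures, _ = doctest.testmod()
    raise SystemExit(1 if failures else 0)
```

Even a one-line function carries a test of comparable size in this shape, which is exactly where the "pollution" versus "discipline" disagreement comes from.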
3. Human Readability and Language Design
Beyond LLMs, users evaluated NanoLang purely as a human programming language, focusing on its syntax (prefix notation/S-expressions) and structure; a short prefix-notation example follows the quotes below. While some found the syntax "jarring" or "unholy," others appreciated its clarity and simplicity, highlighting the trade-offs between terseness, expressiveness, and readability.
- Syntax preferences divide users: "I find Polish or Reverse Polish notation jarring after a lifetime of thinking in terms of operator precedence." – noduerme
- Some found the design surprisingly clean: "Really clean language where the design decisions have led to fewer traps (cond is a good choice)." – sheepscreek
- Others saw it as a mix of existing paradigms: "It's peculiar to see s-expressions mixed together with imperative style." – sheepscreek
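
For readers unfamiliar with the notation being debated, here is a small Python sketch of prefix (Polish) notation; it only illustrates why evaluation order comes from nesting rather than operator precedence, and does not reflect NanoLang's actual syntax.

```python
# Prefix form of (1 + 2) * (10 - 4): the operator comes first, and nesting,
# not precedence rules, determines evaluation order.
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}


def eval_prefix(expr):
    # Numbers evaluate to themselves; lists have the shape [operator, left, right].
    if not isinstance(expr, list):
        return expr
    op, left, right = expr
    return OPS[op](eval_prefix(left), eval_prefix(right))


print(eval_prefix(["*", ["+", 1, 2], ["-", 10, 4]]))  # prints 18
```

This is the property commenters are reacting to: there is no precedence table to internalize, but every expression must be read inside-out, which some find clearer and others find jarring.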