Three prevailing themes in the discussion
| Theme | Key points | Representative quotes |
|---|---|---|
| LLMs tend to output mediocre, outdated Go code | Models often reproduce old idioms, ignore newer language features, and can produce unsafe concurrency patterns (see the first sketch after the table). | "In December 2024… such tools tended, unsurprisingly, to produce Go code in a style similar to the mass of Go code used during training… even when there were newer, better ways to express the same idea." – homarp. "The code that LLMs output is pretty average… it's what I would expect a mid-level engineer to produce." – cedws. "They're particularly bad about concurrent go code… it routinely slips past review because it seems simple and simple is correct, right?" – Groxx |
| Go's stability and tooling make it attractive for LLM-assisted development | Backward compatibility, a robust standard library, and integrated tooling (testing, linting, compilation) give developers confidence to use Go even when LLMs are imperfect (see the test sketch at the end of the section). | "Even though all the code compiles because go is backwards compatible it all looks so much different." – robviren. "Go's tooling is so good that I still end up writing it in Go." – jjice. "The Go team has built such trust with backwards compatibility… other ecosystems… everything seems to be @Deprecated or @Experimental." – iamcalledrob |
| Difficulty correcting LLMs' knowledge base and the risk of perpetuating bad advice | Once a model learns outdated or incorrect patterns, they persist across generations; fixing them requires community effort and careful curation of training data. | "Once the bad advice is in the model it's never going away… it's unlikely that model trainers will submit their RC models to various communities to make sure it isn't lying about those specific topics." – munk-a. "I definitely see that with C++ code… Not so easy to 'fix', though." – BiraIgnacio |
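
To ground the first theme, here is a minimal Go sketch, not taken from the thread and with all function names illustrative, that contrasts the dated idioms LLMs often reproduce with their current equivalents, next to the kind of concurrency bug that, as Groxx notes, "seems simple" yet races at runtime.

```go
package main

import (
	"fmt"
	"os"
	"sync"
)

// Pre-Go 1.16/1.18 style that models trained on older code still emit:
//
//	func readConfigOld(path string) (interface{}, error) {
//	    return ioutil.ReadFile(path) // io/ioutil has been deprecated since Go 1.16
//	}

// readConfig uses the current idioms: os.ReadFile (Go 1.16+) and the `any`
// alias for interface{} (Go 1.18+).
func readConfig(path string) (any, error) {
	return os.ReadFile(path)
}

// countWordsRacy is the "simple-looking" concurrent pattern that slips past
// review: it compiles and often works on small inputs, but the unsynchronized
// map writes are a data race and can panic under load. The race detector
// (`go test -race` or `go run -race`) flags it immediately.
func countWordsRacy(words []string) map[string]int {
	counts := make(map[string]int)
	var wg sync.WaitGroup
	for _, w := range words {
		wg.Add(1)
		go func(w string) {
			defer wg.Done()
			counts[w]++ // RACE: concurrent map write without synchronization
		}(w)
	}
	wg.Wait()
	return counts
}

// countWords guards the shared map with a mutex. Since Go 1.22 the loop
// variable is scoped per iteration, so passing w as a parameter is only
// needed to stay correct on older toolchains.
func countWords(words []string) map[string]int {
	counts := make(map[string]int)
	var (
		mu sync.Mutex
		wg sync.WaitGroup
	)
	for _, w := range words {
		wg.Add(1)
		go func(w string) {
			defer wg.Done()
			mu.Lock()
			counts[w]++
			mu.Unlock()
		}(w)
	}
	wg.Wait()
	return counts
}

func main() {
	fmt.Println(countWords([]string{"go", "llm", "go"})) // map[go:2 llm:1]
}
```
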
These themes capture the main concerns and observations about LLM-generated Go code, the language's strengths, and the challenges of keeping AI models up to date.
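
The tooling point in the second theme is easy to make concrete. The sketch below is an assumed companion test for the `countWords` function above (the file name and package layout are illustrative); combined with `go build ./...`, `go vet ./...`, and `go test -race ./...`, it is the kind of fast, deterministic feedback loop the commenters describe relying on when reviewing LLM output.

```go
// wordcount_test.go (illustrative name): a table-driven test for the
// countWords sketch above, assumed to live in the same package.
// A typical review loop for LLM-generated Go:
//
//	go build ./...      // does it even compile?
//	go vet ./...        // common mistakes and suspicious constructs
//	go test -race ./... // the race detector catches countWordsRacy-style bugs
package main

import (
	"reflect"
	"testing"
)

func TestCountWords(t *testing.T) {
	tests := []struct {
		name  string
		words []string
		want  map[string]int
	}{
		{name: "empty", words: nil, want: map[string]int{}},
		{name: "repeats", words: []string{"go", "llm", "go"}, want: map[string]int{"go": 2, "llm": 1}},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got := countWords(tt.words)
			if !reflect.DeepEqual(got, tt.want) {
				t.Errorf("countWords(%v) = %v, want %v", tt.words, got, tt.want)
			}
		})
	}
}
```
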