LLM-generated content policy #4010
Replies: 2 comments
-
I agree with your reasoning and choices. I love LLMs when they are useful, the same way I love search engines when they get me to what I was looking for. Thank you for not calling this AI, as that term really annoys the hell out of me. I've adopted Richard Stallman's term from a recent talk of his, in which he refers to Stochastic Parrots / LLMs as bullshit generators. I hope more people will use it too, as it describes these tools very well. However, if you feel the word bullshit is too harsh, referring to them as Stochastic Parrots works just as well.
-
Latest terrible LLM interaction: the LLM bot @jacks-sam1010 from latta.ai tried to generate a PR in response to an issue in #4157 (comment). The generated code does not even compile, and it is wrong on every other level as well. I copied the change into #4158 so people can see just how bad latta.ai is.
-
Sadly, I'm starting to observe users posting unreviewed LLM-generated content in my repos (not just chezmoi). Specifically, I have seen:
In all cases, the LLM-generated content is superficially plausible but fundamentally incorrect.
It takes more effort to understand and fix the LLM-generated content than it takes to write correct content in the first place. Posting unreviewed LLM-generated content is extremely disrespectful of my time and that of the other maintainers.
Note that I don't care if people use an LLM to help them generate content, but I do expect them to review it for correctness before posting it here.
Going forward, I propose the following policy:
I plan to update the developer guide and issue templates with clear warnings, but would like to hear feedback from others first.