The Problem with Slop

The two places I spend most of my time, Twitter and code, are increasingly filled with AI-generated rather than human-written content. My instinctive reaction is that this is bad, but a common rebuttal is, "Who cares where it came from, judge the work on its own quality. Besides, AI is smarter than a lot of humans."

The Compiler Analogy

Imagine a fictional yet realistic story. Some of the smartest people in the world invent a new Rust compiler. It runs build pipelines 100x faster, turning hours into seconds. And sometimes its generated bytecode delivers runtime speedups too! But there's a catch: it will never throw an error. If your high-level source code is invalid, it won't notify you; it'll just compile the incorrect behavior. The bytecode will always compile, always deploy, and always run! But its runtime behavior will sometimes be wrong.

Should you use this compiler? Why or why not?

Advocates claim the efficiency gains are simply too large to ignore. Just take a look at the compiled bytecode and fix any bugs that pop up. Now, instead of slogging through pesky errors, anyone can be a programmer.

Opponents say it doesn't help to move faster in the wrong direction. Bytecode is being generated faster than ever, but it takes intense effort to peer review. They're concerned that people are no longer publishing source code for review at all, and are just committing the compiled bytecode instead.

AI Writing

The fast, buggy compiler story is fake, but a parallel story is playing out across all media: books, articles, and tweets.

Ideas = Source Code
Media = Compiled Bytecode

There's a transformation process that an idea undergoes to make it ready for mass consumption. You might create a pithy catchphrase, or a visual metaphor, or a lengthy essay to spread your idea. This is hard work! To paraphrase Pascal, "I am sorry to write you such a long letter; I did not have time to write a short one."

AI writing speeds up this work 100x or more. You can input a half-baked raw idea, typos and all, and get an aesthetically compelling writeup defending that position from an LLM in seconds. Ask it to do the same for a contradictory idea and it will instantly comply. This is, in essence, a 100x faster Rust compiler that will never throw an error. Your idea, or source code, can be anything. It will "compile" this into a superficially compelling argument regardless of the validity of the idea.

The Judgment DDoS

Pre-AI, there was a strong correlation between the fundamental correctness of a work and its superficial aesthetics. It is hard to produce code that compiles and passes tests without understanding the underlying structure. It is likewise difficult to write well about a topic without understanding it. Even the extreme case of an intentionally deceptive author must spend serious effort on good aesthetics.

Modern LLMs almost always produce aesthetic output. First-order heuristics no longer filter out bad writing or bad code. Everything is superficially plausible.

Why is this bad? Because instead of eliminating errors, we obfuscate them. An incorrect human-written article could be dismissed in five seconds; an AI-written one takes five minutes. Bad human-written code would be caught in a PR; bad AI-written code slips through and brings down prod. Turning every invalid idea into an aesthetic presentation is a DDoS attack on taste and judgment. Human errors are far more detectable than AI errors. There must be a correlation between the aesthetics and the fundamentals.
Just as GitHub PRs are reviewed in source code rather than bytecode, written ideas should be reviewed and discussed in their human-generated form. That's the only way to preserve taste and judgment.