The Anti-Slop Pattern
3 min read
I found a skill repo this week that does something I haven’t seen anyone else do well. Instead of telling the AI to “be creative” or “avoid generic output,” it lists the model’s exact training-data defaults and bans them by name.
Twenty-two fonts. Banned. Not because they’re bad fonts — because they’re the fonts every LLM reaches for first. Inter, DM Sans, Playfair Display, Space Grotesk. The skill calls them “reflex fonts” and forces the model past them before it can pick anything.
Then it goes further. Two CSS patterns are banned outright, specified at the exact token level: the side-stripe accent border that every AI-generated dashboard has, and the gradient-text effect that screams “a model made this.” Not vague guidance like “avoid gradient text,” but the exact property combination to match and refuse. If the model is about to write it, it must stop and rewrite the entire element differently.
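To make the token-level idea concrete, here is a minimal sketch of what such a gate could look like as a checker. The regex and the helper are my own illustration of the technique, not the actual rules shipped in pbakaus/impeccable:

```python
import re

# Hypothetical gate: the property combination behind the "gradient text"
# reflex, matched at the token level rather than described in prose.
# (Illustrative only; not the skill's real rule set.)
GRADIENT_TEXT = re.compile(
    r"background(-image)?:\s*linear-gradient[^;]*;"
    r"[^}]*-webkit-background-clip:\s*text",
    re.IGNORECASE,
)

def violates_ban(css: str) -> bool:
    """Return True if the generated CSS contains the banned pattern."""
    return bool(GRADIENT_TEXT.search(css))

slop = """
h1 {
  background: linear-gradient(90deg, #f0f, #0ff);
  -webkit-background-clip: text;
  color: transparent;
}
"""
print(violates_ban(slop))  # True: reject and redesign the element
```

The point is the precision: a prose instruction can be argued with, a regex match cannot.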
This is pbakaus/impeccable, and the technique it demonstrates is more important than the frontend skills it ships.
LLMs fail predictably. They have a top five for everything. Top five fonts. Top five color palettes. Top five opening sentences. Top five variable names. The failure mode is not randomness. It is convergence. Every project gets the same “good” defaults, which means every project looks the same. The person prompting the model says “be more creative” and the model picks its sixth-favorite font instead of its first. The output is still slop. It is just slightly less obvious slop.
The anti-slop pattern works differently. Name the top-N reflexes explicitly. Reject all of them. Then force a genuine search. It operates at the pattern level, not the intention level. “Be more creative” is an aspiration. “Never use these 22 fonts” is a gate. Aspirations decay across a conversation. Gates do not.
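The gate-versus-aspiration distinction can be sketched in a few lines. The font names come from the post; the `generate_font` stub and the candidate pool are assumptions standing in for a real model call:

```python
import random

# The named reflexes, banned as a fixed denylist (a gate, not an aspiration).
REFLEX_FONTS = {"Inter", "DM Sans", "Playfair Display", "Space Grotesk"}

# Hypothetical wider pool the model could draw from if forced past its defaults.
CANDIDATES = REFLEX_FONTS | {"Fraunces", "Spectral", "Newsreader"}

def generate_font() -> str:
    # Stand-in for a model call whose top picks are the reflex fonts.
    return random.choice(sorted(CANDIDATES))

def gated_pick(max_tries: int = 20) -> str:
    """Reject every named reflex and force a genuine search."""
    for _ in range(max_tries):
        font = generate_font()
        if font not in REFLEX_FONTS:  # the gate: ban by name
            return font
    raise RuntimeError("model never escaped its reflexes")

print(gated_pick())  # never a reflex font
```

An aspiration would tweak the prompt and hope; the gate checks every output against the same list, every time, which is why it does not decay across a conversation.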
I immediately stole this for my coding agent. The equivalent reflexes in code are easy to spot once you start looking. The comment that restates the function name. The bare exception handler that catches everything and logs nothing useful. The import of Optional from typing when the language has supported union syntax natively for three versions. The os.path.join call in a codebase that already uses pathlib everywhere else. Each of these is the model reaching for its training-data default instead of the actually-good solution. List the exact token sequence, ban it, and the model is forced past the convergence trap.
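The same denylist shape works for code review. The sequences below are the reflexes named above; the exact strings and rewrite hints are my own formulation, not rules from the skill:

```python
# Each key is an exact token sequence the model defaults to; each value
# is the rewrite hint the gate emits. (My own list, for illustration.)
CODE_REFLEXES = {
    "from typing import Optional": "use `X | None` (native since Python 3.10)",
    "os.path.join": "this codebase uses pathlib; build paths with `Path / ...`",
    "except Exception:\n    pass": "never swallow errors silently",
}

def find_reflexes(source: str) -> list[str]:
    """Return a rewrite hint for every banned sequence found in source."""
    return [hint for seq, hint in CODE_REFLEXES.items() if seq in source]

snippet = "import os\npath = os.path.join(root, 'data')\n"
for hint in find_reflexes(snippet):
    print(hint)  # flags the os.path.join reflex
```

Substring matching is crude, and a real implementation would want AST-level checks, but even this blunt version turns “write idiomatic code” into something the agent cannot quietly ignore.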
The technique works for prose too. LLMs always open with “In today’s rapidly evolving landscape.” They always suggest “holistic approach” and “leverage synergies.” These are not bad phrases in isolation. They are bad because they are what every model produces for every client in every context. Name the reflexes, ban them, and the model has to say something that actually means something.
The best AI skill I have found is not the one with the most sophisticated prompt engineering. It is the one that studied its own failure modes and built specific gates against them. Most people writing AI prompts are trying to make the model smarter. The anti-slop pattern makes it less predictable — which, for creative work, is the same thing.
Specificity beats aspiration. Gates compound. Aspirations decay.