Sloppiness

Yes, Mr. Sherman. Everything is slop.
I've mostly stopped writing code by hand now, but my code bases aren't becoming slop. Well, admittedly, this one kinda is entirely. And, uh, what the hell was this? But still: this is reasonably functional slop, and it's mine, dammit.
So there's a learning curve. And maybe the excitement carries us a little too far too fast now and then.
But how to think about creating slop? Managing slop? Processing slop? Must we make slop grinders to produce ground slop for repurposing into slop-burgers or slop-tacos?
Lessons on LLM usage
A useful piece of advice I came across a while ago (from where?): LLMs do very well when your goal is to move from a large amount of input to a small amount of output, and much less well when your goal is to move from a small amount of input to a large amount of output.
A large-input, small-output query might look like this:
In:
[14 source code files - ui, db components, whatever]
[500 lines of logging]
when we add a tag in the TagManager it doesn't end up in the database, and no user-facing error message appears. trace the logs and the executing code to find the cause. Propose a fix if you're able. Address both the error and its silence for the user.
Out:
I see the issue. In TagManager.queueItemUpdates() on line ...
Here we have, say, 10-20k input tokens and probably something like 1k or less as output.
A small-input, large-output query could be something like:
In:
Write me a novel.
Where everyone still expects the outputs to be pretty bad. One might even call it slop.
slop: boolean → sloppiness: float32
The slop boolean is too limiting man. Free your mind.
A Minimum Viable Slop Metric is something like:
sloppiness = output_tokens / (input_tokens + output_tokens)
Here, the sloppiness of the novel prompt is ~1. Pure slop. The debugging prompt comes in somewhere around 5 to 10% - not so sloppy.
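As a sketch, with the ballpark token counts from above (nothing here is measured; the numbers are illustrative):

```python
def sloppiness(input_tokens: int, output_tokens: int) -> float:
    """Minimum Viable Slop Metric: the fraction of the exchange that was generated."""
    return output_tokens / (input_tokens + output_tokens)

# Debugging prompt: ~15k tokens of source files and logs in, ~1k of diagnosis out.
print(sloppiness(input_tokens=15_000, output_tokens=1_000))  # 0.0625

# Novel prompt: a handful of tokens in, a book-sized pile of output out.
print(sloppiness(input_tokens=5, output_tokens=120_000))     # ~0.99996
```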
That ratio is the simplest model with the desired semantics: trending to 1 for pure slop and toward 0 for input-dominated generation. Details subject to refinement. Sigmoid-style curves are probably more useful. Your mathematical model of slop probably also wants to be parameterized on the specific LLM - Hemingway does better than me with the novel prompt, so his outputs should be scaled differently.
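A hypothetical refinement along those lines (the sigmoid steepness and the per-model skill factor are invented parameters, not anything principled):

```python
import math

def refined_sloppiness(input_tokens: int, output_tokens: int,
                       model_skill: float = 1.0) -> float:
    """Squash the raw output ratio through a sigmoid, then discount by a
    per-model skill factor. model_skill=1.0 is a baseline author; Hemingway
    gets something higher, so the same novel prompt scores less sloppy for him.
    All constants are made up for illustration."""
    raw = output_tokens / (input_tokens + output_tokens)
    # Sigmoid centered at a 50/50 input/output split; k controls steepness.
    k = 10.0
    squashed = 1 / (1 + math.exp(-k * (raw - 0.5)))
    return min(1.0, squashed / model_skill)
```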
[This post has sloppiness 0.0].