AI and the Early Adoption Skill Premium
A comment I've seen a few times from the LLM-reluctant software people goes like this:
If the models are getting easier to use with each generation, then there's no real penalty for being a late adopter.
The premise is true: the tools are getting easier to use, for more tasks and by a wider user base. But the conclusion is false.
If the tools get easier to use for many tasks, the market value of completing those tasks will plummet. Nobody is going to be paid well for naively operating a mass-market tool.
At the same time, the tools will become more and more powerful abstraction abstractors (not a typo, ugh), which reward more and more skilled usage.
Steve Yegge is getting pretty far out there, but it's also clear that he's grinding out a very different set of experiences, successes, and failures than even the average vibe-coder. He's probably asking too much of the current models, but because he's doing it, he'll have a better nose than me for what the next generation is really capable of.
Zoom
This is complicated by the velocity. No one on earth achieved Sonnet 3.5 mastery, in the Ericsson ten-thousand-hours sense, because the model wasn't SOTA long enough! The same has been true for every model (before or) since, and is likely to continue to be true at least in the medium term. If you're waiting for a mature toolchain to exist, it could be a long wait.
What are the skills of effective vibe-coding? WHO KNOWS, but intuition around this stuff is going to come from experience.