I think the term "vibe coding" has no universally accepted definition, but if you mean "coding without any prior coding experience" then the answer is no. Any leadership that knows even the slightest bit about software knows that AI tooling is on a spectrum: vibe coding sits at one end, and tools like Claude Code and Cursor sit at the other.
Finally, you can ask your leadership to give you time to pay down technical debt, and AI adoption is the reason why.
I have been seeing a pattern where leadership buys Copilot/Cursor licenses and expects immediate 10x gains, but the engineering team struggles to adopt them.
The thesis of this article is that AI acts as a throughput multiplier. If your codebase is clean (SOLID, DRY, explicit interfaces), AI accelerates you. If your codebase is spaghetti or relies on "tribal knowledge" (implicit context), AI just generates bugs faster than you can fix them.
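To make "explicit interfaces vs. tribal knowledge" concrete, here is a minimal hypothetical sketch (the `Invoice`/`PaymentGateway` names are invented for illustration): the type hints and comments encode the conventions a human would otherwise have to be told, so an AI agent can satisfy the contract from the code alone.

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Invoice:
    amount_cents: int  # always integer cents, never floats of dollars
    currency: str      # ISO 4217 code, e.g. "USD"

class PaymentGateway(Protocol):
    """Structural interface: any class with a matching charge() conforms."""
    def charge(self, invoice: Invoice) -> bool: ...

class FakeGateway:
    def charge(self, invoice: Invoice) -> bool:
        # Stub implementation: accept any non-negative charge.
        return invoice.amount_cents >= 0

def process(gateway: PaymentGateway, invoice: Invoice) -> bool:
    # The contract is fully stated in the types; no tribal knowledge needed.
    return gateway.charge(invoice)

print(process(FakeGateway(), Invoice(amount_cents=1999, currency="USD")))  # True
```

The "cents, not dollars" convention is exactly the kind of implicit context that, left undocumented, an AI agent will get wrong on every fresh session.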
I argue that "clean code" is no longer an aesthetic preference but a hard requirement for AI enablement, because AI agents effectively have no long-term memory of your project's history.
Curious if others are seeing this friction between "AI expectations" and "Legacy Code reality"?
Language models can generate a Python function that does the math perfectly.
I bet you would get better results if you tweaked the prompt to say "Generate a Python program that solves X math problem" and then just ran the resulting Python script.
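As a sketch of that approach, here is what a model-generated script for a toy problem ("sum all primes below 100") might look like; the point is that running the code yields an exact answer instead of trusting the model's in-context arithmetic.

```python
def is_prime(n: int) -> bool:
    """Trial division primality check, sufficient for small n."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

# Sum all primes below 100.
total = sum(n for n in range(100) if is_prime(n))
print(total)  # 1060
```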
That could only generate constructivist [0] proofs, and much of modern maths is not constructivist. Maybe a better approach would be to use the Curry-Howard [1] correspondence to get proofs directly from generated programs.
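For readers unfamiliar with Curry-Howard, the idea is that a well-typed program literally is a proof of the proposition its type expresses. A one-line sketch in Lean 4:

```lean
-- Under Curry-Howard, this function is a proof of modus ponens:
-- given evidence of A and a function turning A-evidence into B-evidence,
-- it produces evidence of B.
example (A B : Prop) (a : A) (f : A → B) : B := f a
```

So if a model can generate programs of a given type, it is in effect generating proofs of the corresponding proposition, which a type checker can then verify mechanically.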
Exactly, we need computer-equipped neural nets. Models need to use traditional UIs (including programming languages) and then we can talk about how to stop them. :)