Hacker News | new | past | comments | ask | show | jobs | submit | yshrestha's comments | login

I think the term "vibe coding" has no universally accepted definition, but if you mean "coding without any prior coding experience," then the answer is no. Any leadership that knows even the slightest bit about software knows that AI tooling is on a spectrum. Vibe coding is at one end, and tools like Claude Code and Cursor are at the other.


    > …if you mean "coding without any prior coding experience"…
Nobody would read these thread titles [1] as meaning the vibe coder they refer to is "coding without any prior coding experience".

The two fanboys in the video I linked above don't give the impression that they're "coding without any prior coding experience".

    > …AI tooling is on a spectrum. Vibe coding is at one end…
I'm pretty sure those three vibe coders mention their usage of some relatively sophisticated (to me) AI tooling in their adoption of vibe coding.

    > …then the answer is no…
Then what is the answer given the above clarification?

The question again: Does leadership consider AI adoption as being synonymous with vibe coding?

[1] https://g2ww.short.gy/gitWiTheProgramr


Author here.

Finally, you can ask your leadership to give you time to pay back technical debt. And AI adoption is the reason why.

I have been seeing a pattern where leadership buys Copilot/Cursor licenses and expects immediate 10x gains, but the engineering team struggles to adopt them.

The thesis of this article is that AI acts as a throughput multiplier. If your codebase is clean (SOLID, DRY, explicit interfaces), AI accelerates you. If your codebase is spaghetti or relies on "tribal knowledge" (implicit context), AI just generates bugs faster than you can fix them.

I argue that "clean code" is no longer an aesthetic preference but a hard requirement for AI enablement, because AI agents effectively have no long-term memory of your project's history.

Curious if others are seeing this friction between "AI expectations" and "Legacy Code reality"?


As a human, I can tell you that I suck at predicting exponential growth.


Location: USA

Remote: Yes

Willing to relocate: No

Technologies: PyTorch, MONAI, DICOM, HL7, medical device regulations

Resume/CV: https://www.linkedin.com/in/yujanshrestha/

Email: yshrestha@innolitics.com

I am an expert in AI/ML for medical devices. I can help with engineering and regulatory concerns.


I thought this was an interesting analysis of the potential legal frameworks around generative AI.


- Start a personal blog about software engineering principles

- Have some good code on GitHub

- Work for a startup, they are more likely to take chances on you

- Prefer working on-site if you can. Working remote is great once you have learned how to learn

Good luck with your search.


I have been using Notion AI for a couple of weeks and have found a couple of patterns / antipatterns that have worked well for my workflow.


To learn how to stop saying "yes" to everything.


This is actually a pretty good one. Possibly life changing and yet not intimidating.


A very simple way to improve work life balance



Excellent YouTube channel about a group of friends that say "yes" to things: https://www.youtube.com/@YesTheory

Very inspiring!


Language models can generate a Python function that does the math perfectly.

I bet you would get better results if you tweaked the prompt to say "Generate a Python program that solves X math problem" and then just ran the resulting Python script.

It does not need to be AGI to be useful.
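The "generate a Python program, then run it" pattern the comment describes can be sketched as follows. The model call is stubbed out here; `fake_model` and `solve_via_generated_code` are illustrative names, and `generated_code` stands in for whatever source a real LLM would return to that prompt.

```python
def fake_model(prompt: str) -> str:
    # Placeholder for a real LLM API call; returns Python source text
    # answering a prompt like "Generate a Python program that solves X".
    return (
        "def solve():\n"
        "    # e.g. the sum of the first 100 positive integers\n"
        "    return sum(range(1, 101))\n"
    )

def solve_via_generated_code(prompt: str):
    generated_code = fake_model(prompt)
    namespace = {}
    # Caution: exec-ing model output is unsafe outside a security sandbox.
    exec(generated_code, namespace)
    return namespace["solve"]()

result = solve_via_generated_code(
    "Generate a Python program that solves: 1 + 2 + ... + 100"
)
print(result)  # -> 5050
```

The model only has to get the *program* right; the arithmetic itself is done deterministically by the interpreter.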


You can also tell the model that it doesn't know how to do math, and it respects that:

https://twitter.com/goodside/status/1568448128495534081


This is pretty cool, although the "don't use outside the security sandbox" made me laugh: https://twitter.com/goodside/status/1568704302813700096/phot...


You mean "generate a Python function that calls a library that does the math perfectly," right?
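As a minimal illustration of "a library that does the math perfectly," Python's standard-library `Fraction` type gives exact rational arithmetic with no floating-point error:

```python
from fractions import Fraction

# 1 + 1/2 + 1/3 + 1/4, computed exactly as a rational number.
exact = sum(Fraction(1, n) for n in range(1, 5))
print(exact)         # -> 25/12 (exact)
print(float(exact))  # approximate decimal value
```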


In the limit, it's going to design an AI to write some Python to call a library that does the math perfectly.


Unlike 99.99% of human programmers, who can and often do implement everything in sympy/numpy from scratch ;-)


Exactly! Hey it gets the job done :)

Software is just a tall wedding cake of abstractions built on top of abstractions.


That could only generate constructivist [0] proofs, and much of modern mathematics is not constructivist. Maybe a better approach would be to use the Curry-Howard [1] correspondence to directly get proofs from generated programs.

[0] https://en.wikipedia.org/wiki/Constructivism_(philosophy_of_...

[1] https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspon...
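The Curry-Howard idea in miniature, sketched in Lean (a standard textbook example, not from the comment): a well-typed program of type `P → P` is itself a proof that any proposition implies itself.

```lean
-- The identity function *is* a proof of P → P:
-- the program's type is the theorem, its body is the proof.
theorem p_implies_p (P : Prop) : P → P :=
  fun hp => hp
```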



That is also a very valid and interesting thing to do.

But it's also quite interesting to see how the model would do "by itself". All kinds of interesting lessons to be learned!


Yeah! It is interesting to try and figure out "what" the model is actually learning. It is a valid thread of scientific inquiry.


Exactly, we need computer-equipped neural nets. Models need to use traditional UIs (including programming languages) and then we can talk about how to stop them. :)


This is an interesting account of an airliner incident caused by a confusing UI and over-reliance on automation.

