
In my experiments at Pythagora[0], we've found that the sweet spot is a technical person who doesn't want to know, doesn't know, or doesn't care about the details, but is still technical enough to guide the AI. Also, it's not either/or: for best effect, combine human and AI brainpower, because what's trivial versus tedious differs between humans and AI, so we can actually complement each other.

Also, the current crop of LLMs is not there yet for large or largish projects. GPT-4 is too slow and expensive, while Groq is superfast but the open source models it runs are not quite there yet. Claude is somewhere in the middle. I expect that somewhere in the next 12 months there will be a tipping point where they become capable, fast, and reliable enough for wide use in this style of coding[1].

[0] I have an AI horse in the game with http://pythagora.ai, so yeah, I'm biased.

[1] It already works well for snippet-level cases (e.g. GitHub Copilot or Cursor.sh) where you, the human, still have creative control. It's exponentially harder to have the AI be (mostly) in control.





I would clarify that "there" in my "not there yet" doesn't assume a superhuman AGI developer that will automagically solve all software development projects. That's a deep philosophical issue best addressed in a pub somewhere ;-)

But roughly on par with what could be expected of today's junior software developer (unaided by AI)? Definitely.



